AI Engineering was recently named the #1 fastest growing job by LinkedIn, with data science cited as one of the top roles transitioned from. Meanwhile, every data scientist is in the middle of at best an identity crisis and at worst an existential crisis.
I recently switched from data science to AI engineering. That transition taught me something important: the way we define and staff “AI engineering” roles has huge influence on whether AI projects succeed. In this article I’ll share what I’ve learned about the differences between scientific and engineering work in AI, and propose a team design that lets both thrive.
My Path from Data Science to AI Engineering
I’ve been a practicing data scientist for almost 10 years. A year and a half ago, my work as a data scientist hit an inflection point where almost every project my team was working on involved AI agents instead of machine learning. We no longer needed to collect training data to train a custom model for each task. LLMs and agents could solve the tasks we used to solve (like document classification, named entity recognition, etc.) with just a prompt and some tools. We began looking beyond these simple tasks towards more complex ones that only agents can solve, like automating entire workflows or answering arbitrary questions.
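To make the "a prompt replaces a trained model" point concrete, here is a minimal sketch of prompt-based document classification. Everything here is illustrative: `call_llm` is a hypothetical stand-in for whatever chat-completion API you use, stubbed so the example is self-contained, and the label set is made up.

```python
# Sketch: classifying a document with a single prompt instead of a
# task-specific trained model. `call_llm` is a hypothetical stand-in
# for a real chat-completion API, stubbed here for self-containment.

LABELS = ["invoice", "contract", "resume", "other"]

def call_llm(prompt: str) -> str:
    # Stubbed model response; a real implementation would call an
    # LLM provider with this prompt.
    return "invoice"

def classify(document: str) -> str:
    prompt = (
        "Classify the document into exactly one of these labels: "
        f"{', '.join(LABELS)}.\n"
        "Reply with the label only.\n\n"
        f"Document:\n{document}"
    )
    answer = call_llm(prompt).strip().lower()
    # Guard against the model answering outside the label set.
    return answer if answer in LABELS else "other"

print(classify("Invoice #1234: amount due $500, net 30."))  # invoice
```

The same pattern, with a different instruction, covers named entity recognition and similar tasks that previously each required their own labeled dataset and model.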
I predicted in that moment that within a year the whole team would become AI engineers, whether de facto or formally by title. I completed an AI Engineering bootcamp, then joined a new company as an AI Engineering manager. One of my DS teammates joined me, and my prediction appeared to be true. So far so good.
But my new team was composed of 3 AI engineers who were all former data scientists. We were asked to operate primarily as engineers, following scrum by the book and shipping features on rigid timelines. Over time it became clear that we were trying to satisfy both research and engineering expectations within a single role and process, and that tension made it harder than it needed to be to do great work. As we clarified which work was genuinely research, which was engineering, and how to staff and sequence each, our effectiveness improved substantially. The rest of this article shares the frameworks and mental models that emerged from that experience, which I believe are broadly useful for organizing AI work.
Composition of ML Teams
Let’s turn back the clock a bit to set some context on the current moment in AI.
Earlier in my career, I joined Toast when they had approximately 100 software engineers and were just beginning to invest in data science. Over 2.5 years, the organization learned how to structure work around machine learning. I adapted and influenced where I could along the way. When I left we had a well-designed team of six that was both researching and shipping advanced ML models.
That team was split between data scientists (research) and ML engineers (implementation). They worked together to own the full ML lifecycle from lab to production. This pattern has emerged as an industry best practice for ML teams.
The next two companies I worked for had large research teams of 10+ data scientists. But neither had dedicated ML engineer roles, which meant that data scientists had to either figure out their own deployments or leave good research on the shelf. Because of that gap, as a data scientist I acquired far more engineering experience than most. That experience was invaluable, but it highlighted how hard it is to ask one role to excel at both research and production engineering.
For a team, this is a very demanding design. The reality is that very few people are equally strong at both of these tasks. Scientists optimize for learning what works and what doesn’t via experimentation and prototyping, operating in uncertainty. Engineers optimize for shipping what is already proven to work and focus on building robust, scalable, and secure systems. When those responsibilities are clearly separated but tightly partnered, ML teams perform at their peak.
I should note that Toast is a wildly successful company, while the other two companies I joined sort of… floundered. I won’t go so far as to attribute those company-level successes and failures to these team designs, but the correlation is noteworthy.
Earlier in my career I also experienced a data science team run strictly as a scrum team. We had a PM decide what experiments to run and had to report results every two weeks, ready or not. That cadence and structure caused us to often run the wrong experiments and share results before they were fully validated. It was a useful lesson for me: research and delivery processes don’t map 1:1, and being intentional about that distinction matters.
Composition of AI Teams
To apply these learnings to the new AI world, let’s first be clear about what’s different today compared to before the advent of LLMs.
- ML is only relevant for a shrinking subset of AI problems, though a modern AI team still needs ML muscle.
- Agent design, semantic search, and evals are the core new skill sets that all AI team members must master.
- AI coding agents make prototyping easier than ever.
But something important isn’t different: both AI and ML are fundamentally non-deterministic, and that fact necessitates a scientific process.
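A tiny sketch of why non-determinism forces measurement: the same input can yield different outputs across runs, so behavior has to be sampled and quantified rather than assumed. The model here is a random stub with a made-up 10% error rate, not any real system.

```python
# Sketch: the same prompt does not always produce the same answer,
# so you measure the behavior statistically. `flaky_model` is a
# hypothetical stand-in that is wrong ~10% of the time.
import random

def flaky_model(prompt: str, rng: random.Random) -> str:
    # Stand-in for a sampled LLM: correct 90% of the time by construction.
    return "correct" if rng.random() < 0.9 else "wrong"

rng = random.Random(0)  # fixed seed so the sketch is reproducible
outputs = [flaky_model("same prompt", rng) for _ in range(1000)]
print(outputs.count("wrong"))  # roughly 100 of 1000 identical runs differ
```

Deterministic software would give 0 or 1000 here, never "roughly 100"; that gap is what makes a scientific process necessary.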
So while my prediction about my team of data scientists becoming AI engineers turned out to be directionally right, the experience convinced me that collapsing science and engineering into a single role is rarely the best long-term move. The distinction between scientist and engineer has nothing to do with whether we’re using LLMs or classical models, and everything to do with the kind of work being done. And science is more important than ever for agents.
So how do we adapt? Just like the optimal ML team design is composed of data scientists and ML engineers, the optimal AI team design must be composed of both AI Scientists and AI Engineers.
Introducing the AI Scientist
Let’s define what an AI Scientist is.
AI Scientist: possesses all the skills of a data scientist and adds proficiency in agent design and full-stack app prototyping.
And let’s compare that to the definition of an AI Engineer.
AI Engineer: possesses all the skills of a software engineer and adds proficiency in agent design.
Note that agent design is the core new skill that both of these new roles add on top of traditional roles. (Also note that, in my opinion, an AI Engineer is not an evolution of an ML engineer but rather of a software engineer. An ML engineer could become an AI Engineer by acquiring skills in agent design, but an AI Engineer need not possess skills in ML infrastructure and deployment.)
Comparing Data Science to Software Engineering
From the definitions above for AI Scientist and AI Engineer it’s clear that what is shared is agent design and that what is different is which skill set is the foundation: data science and software engineering, respectively. So to truly understand the difference between an AI Scientist and an AI Engineer, we must understand the difference between a data scientist and a software engineer.
A data scientist operates in a research setting. Their goal is to explore, experiment, and prototype. Their department is an innovation center that operates multiple quarters ahead of the product roadmap. They are an input to the product roadmap, not an output. They operate on quarterly goal cycles with kanban boards to track work. They get more value from weekly hour-long problem solving sessions than daily 15 minute standups. They explore the solution space of open-ended problems, and would never think about shipping their work before evaluating it.
A software engineer operates in an implementation setting. Their goal is to ship features defined by the product team. They execute the roadmap and are measured by velocity and quality. They operate on 2-week sprints and observe scrum ceremonies. They solve well-defined problems that are easy to plan and easy to validate (and reject or reframe problems that aren’t).
To build a successful AI team, AI scientists must operate as data scientists always have, and AI engineers must operate as software engineers always have.
Risks of AI without Science
Nearly every company has an AI strategy in 2026, including many who did not previously have a research function. In that context, it’s tempting to staff AI Engineers first and rely on them to do everything from exploration to productionization.
The risk is that AI is fundamentally non-deterministic and cannot be designed and deployed with the same processes and techniques that are used for traditional, deterministic software. Specifically, AI needs evals. AI will be wrong or unacceptable X% of the time, and you need to know what X is before putting it in front of customers so you can minimize it and set realistic expectations. If you wait to learn what X is until it’s in customer hands, you may discover too late that you need to rework significant parts of the solution.
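To make "know X before shipping" concrete, here is a minimal eval harness sketch: run the system over a small labeled set and compute the failure rate. The `agent` function, the eval set, and the numbers are all made up for illustration; a real eval would call the actual system and use a far larger, representative dataset.

```python
# Sketch of a minimal eval: measure the failure rate X on a labeled
# set before customers ever see the system. `agent` is a hypothetical
# stub standing in for the real AI system.

def agent(question: str) -> str:
    # Stubbed system under test.
    answers = {"2 + 2": "4", "capital of France": "Paris"}
    return answers.get(question, "I don't know")

def failure_rate(eval_set: list[tuple[str, str]]) -> float:
    failures = sum(1 for q, expected in eval_set if agent(q) != expected)
    return failures / len(eval_set)

eval_set = [
    ("2 + 2", "4"),
    ("capital of France", "Paris"),
    ("capital of Italy", "Rome"),  # the stub fails this one
]

x = failure_rate(eval_set)
print(f"failure rate X = {x:.0%}")  # failure rate X = 33%
```

Knowing X is 33% before launch lets you decide whether to ship with guardrails, rework the solution, or reset expectations, rather than discovering the number from customer complaints.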
A dedicated research function (whether you call it data science, applied research, or AI science) helps teams fail early, long before expensive engineering resources are committed and long before you risk eroding customer trust.
Conclusions
- The industry needs a new role titled AI Scientist that is an evolution of data scientist, adding agent design as a core skill. Data scientists can naturally grow into this role, and the role should operate within a research function as data scientists have always done.
- Data scientists should be thoughtful about AI engineering roles, since some are scoped more for implementation than scientific exploration.
- For companies, the key is not to choose one or the other, but to be intentional about pairing AI Engineers with AI Scientists. Organizations that invest in both are best positioned to execute on their AI strategy without wasting resources or compromising customer trust.

4 replies on “AI Engineering Needs AI Scientists”
Very insightful, so thank you. Have followed a similar path and have a similar sentiment. Also taking the AIM bootcamp at present.
Glad to hear this resonated. I still feel that the AIM AI Engineering Bootcamp was helpful. Perhaps they need two tracks though, one for AI Science and one for AI Engineering. I think it’s hard to optimize for both at the same time.
What a great read about the current situation in our sector! Thank you for sharing. I’m in Italy, and considering we were already behind schedule on ML and Data, the rush to implement “AI” is completely ridiculous…
Thanks! I just published another piece on how the pressure to ship AI features is creating chaos within AI teams: https://skillenai.com/2026/01/25/the-ai-lifecycle/