A Solution to the Challenges AI Teams Are Facing
Across industries, many AI teams are feeling a similar strain: high expectations, compressed timelines, and uncertainty about how to execute well. I have seen this pattern firsthand, heard it from other AI leaders, and watched it emerge repeatedly in public discussions.
In this article, I outline a common sequence that creates these challenges, where leaders can intervene, and a practical AI Lifecycle (AILC) framework that can improve execution quality and outcomes.
The Path to Misalignment
A common sequence looks like this:
- Growth slows, and leadership looks to AI as a major lever for accelerating performance.
- Boards and executives set ambitious AI goals, often on timelines that assume AI work behaves like traditional software delivery.
- Pressure then lands on product and engineering teams that may be understaffed for AI-specific discovery work.
- Projects are run as standard feature delivery: requirements first, implementation second. But AI systems are probabilistic and require evaluation-driven iteration, so initial plans often prove incomplete.
- As a result, teams frequently see one of three outcomes:
  - On time and on budget, but below target quality or customer value
  - High quality, but delayed and over budget
  - Paused or cancelled after substantial effort
- In each case, teams absorb avoidable cost, and customers may lose confidence if experiences are inconsistent.
Data science and ML teams often have the right instincts for this uncertainty, but those capabilities are not always integrated into delivery early enough.
How to Avoid This Pattern
The good news is that this is solvable with better lifecycle design and clearer ownership.
For Board Members and the C-Suite
- Treat market claims about AI progress with healthy skepticism. Many offerings are promising, but most capabilities are still maturing in real-world production settings.
- Prioritize durable execution over speed theater. Shipping a stable, trusted solution later can create more long-term value than shipping a fragile one sooner.
- Invest in capability, not just ambition. Most organizations are still learning how to run AI projects effectively, so talent development and role clarity matter as much as roadmap pressure.
- Recognize that the AI lifecycle is not the same as SDLC. AI requires research and evaluation before full implementation. If your organization does not yet have that function, start with a lightweight research capability and grow it intentionally.
For Product, Engineering, and Research Teams
- AI is evolving quickly, so experience gaps exist at every seniority level. This is normal. Teams do best when they normalize learning and make assumptions explicit.
- As everyone upskills, role boundaries can blur and team dynamics can become strained. That is another reason to define ownership clearly across discovery, implementation, and evaluation.
- For team leads and new managers, this period is especially demanding. Credibility now comes less from having all the answers and more from creating systems where learning happens fast and safely.
- Most importantly, decompose AI projects into lifecycle phases. Teams that explicitly separate discovery from delivery are better positioned to identify risk early and make better investment decisions.
AI Lifecycle (AILC)
The framework below reflects lessons from AI/ML work across multiple companies and operating models.
High-level AILC phases:
- Problem identification
- Exploration of solution space (research, early exit possible)
- Prototyping and proof of concept with evals (research, early exit possible)
- Decision gate: expected customer value vs. technical complexity (early exit possible)
- SDLC implementation (product requirements and engineering execution)
- Post-launch evals and iteration (research + engineering shared ownership)
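The gate structure above can be sketched as data. A minimal Python illustration follows; the phase names and the gate-lookup mechanism are my own modeling choices, not part of the framework itself:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Phase(Enum):
    # Enum members iterate in definition order, matching the AILC sequence.
    PROBLEM_IDENTIFICATION = auto()
    SOLUTION_EXPLORATION = auto()
    PROTOTYPE_AND_EVALS = auto()
    DECISION_GATE = auto()
    SDLC_IMPLEMENTATION = auto()
    POST_LAUNCH_EVALS = auto()

# Phases where the framework allows an early exit.
EARLY_EXIT_PHASES = {
    Phase.SOLUTION_EXPLORATION,
    Phase.PROTOTYPE_AND_EVALS,
    Phase.DECISION_GATE,
}

@dataclass
class PhaseResult:
    phase: Phase
    proceed: bool  # did this phase pass its gate?

def run_lifecycle(gate_results: dict[Phase, bool]) -> list[PhaseResult]:
    """Walk the phases in order, stopping at the first failed early-exit gate."""
    history = []
    for phase in Phase:
        passed = gate_results.get(phase, True)
        history.append(PhaseResult(phase, passed))
        if not passed and phase in EARLY_EXIT_PHASES:
            break  # stop before committing full engineering effort
    return history

# Example: prototype evals fail, so SDLC implementation never starts.
trace = run_lifecycle({Phase.PROTOTYPE_AND_EVALS: False})
```

The point of the sketch is the shape, not the code: early exits are first-class outcomes of the lifecycle, not failures of it.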

Final Thought
The rest of this article is an AI-generated report, so I will sign off here with one final thought:
The tension between business urgency and AI implementation uncertainty is real. The goal is not to reduce ambition; it is to match ambition with a lifecycle that reflects how AI systems actually work.
If this resonates with your experience, I would welcome your perspective. The more clearly the industry names these patterns, the faster we can improve them.
AI-Generated Report: Comparing the AI Lifecycle to Other Frameworks
Below I will do three things, in order:
1. Show that versions of this lifecycle already exist (but are fragmented, under-named, or misapplied)
2. Extract the common structure across those prior frameworks
3. Propose a clean, opinionated AI Lifecycle (AILC) that fits your article and directly explains why SDLC-first thinking fails
⸻
1. Have others proposed an AI lifecycle before?
Yes, but never as a first-class alternative to SDLC, which is the critical insight in your article.
A. Google / Production ML literature (implicit lifecycle)
Google’s production ML papers (Sculley et al., Breck et al.) repeatedly emphasize that:
• You cannot specify correct behavior up front
• You must validate models empirically
• Many projects should be killed early
They describe:
• Data validation
• Offline evaluation
• Online experimentation
• Monitoring and retraining loops
But they frame this as “production readiness”, not as a gating lifecycle before SDLC.
This is the root problem: organizations read these papers and still push AI work directly into engineering pipelines.
Citations: Google’s “ML Test Score” and “Hidden Technical Debt in ML Systems”
⸻
B. Thoughtworks / Continuous Delivery for ML (CD4ML)
Thoughtworks and Martin Fowler propose CD4ML, which clearly separates:
• Experimental model development
• Evaluation and reproducibility
• Productionization
However:
• CD4ML is written for practitioners
• It assumes the organization already accepts ML as different
• It does not explicitly warn boards and product leaders that most ideas should die before engineering starts
Your article does something CD4ML does not: it explains why forcing ML into SDLC creates systemic failure.
⸻
C. Academic research on ML-enabled product development
Multiple empirical studies (Nahar et al., Shankar et al.) explicitly state:
Product requirements cannot be established without early model prototyping and experimentation.
This is crucial: requirements emerge from research, not the other way around.
But again:
• This insight lives in academic papers
• It has not crossed into executive or product leadership doctrine
⸻
D. MLOps frameworks (Databricks, Google, etc.)
Most MLOps diagrams look like:
• Train → Validate → Deploy → Monitor → Retrain
These are post-decision frameworks.
They assume:
• The problem is valid
• The approach is feasible
• The value proposition is known
Your article is about what happens before that — where most failures actually occur.
⸻
2. The missing insight across all prior art
Every mature framework agrees on one thing — but fails to say it loudly:
AI work is a discovery process before it is a delivery process.
Your article’s contribution is naming the consequences when organizations ignore that truth:
• Invalid roadmaps
• Fake commitments
• Quality vs. speed false tradeoffs
• Burned teams and churned customers
What is missing in existing frameworks is:
• A clear early-exit philosophy
• A hard boundary between research and SDLC
• Ownership clarity for who decides whether an AI idea deserves engineering investment
That is exactly what your AILC should formalize.
⸻
3. A clear, defensible AI Lifecycle (AILC)
Below is a refined version of your lifecycle, tuned to be:
• Intellectually defensible
• Easy to visualize
• Explicitly contrasted with SDLC
You can present this as “The AI Lifecycle (AILC)” and position it as a prerequisite to SDLC, not a variant of it.
⸻
The AI Lifecycle (AILC)
Phase 1: Problem Identification (Product-led, Research-informed)
Goal: Identify a customer or business problem that might be amenable to AI.
Key characteristics:
• Framed in terms of outcomes, not features
• No commitment to AI as the solution yet
• Explicit acknowledgment of uncertainty
Failure mode this avoids:
“We promised an AI feature before we knew if AI could solve the problem.”
Early exit criteria:
• Problem is not meaningfully improved by probabilistic systems
• Deterministic or rules-based solutions are sufficient
⸻
Phase 2: Solution Space Exploration (Research)
Goal: Explore whether AI can solve the problem, and how.
Activities:
• Model / prompt class exploration
• Data availability and quality assessment
• Latency / cost / accuracy tradeoff discovery
• Failure mode enumeration
Outputs:
• Feasible approaches
• Known limitations
• Initial performance ceilings
Ownership: Research / Data Science
Early exit criteria:
• Performance ceiling too low
• Costs too high
• Risks unacceptable
This phase is where most AI projects should die.
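The exit criteria above can be made mechanical. A small sketch, assuming illustrative thresholds (the 0.85 ceiling and $0.05 cost cap are placeholders a team would set for its own problem):

```python
from dataclasses import dataclass, field

@dataclass
class ExplorationFindings:
    performance_ceiling: float        # best achievable metric, e.g. accuracy in [0, 1]
    cost_per_request_usd: float       # estimated serving cost
    unacceptable_risks: list = field(default_factory=list)

def early_exit_reasons(findings, min_ceiling=0.85, max_cost=0.05):
    """Return the reasons to stop before prototyping; an empty list means proceed."""
    reasons = []
    if findings.performance_ceiling < min_ceiling:
        reasons.append("performance ceiling too low")
    if findings.cost_per_request_usd > max_cost:
        reasons.append("cost too high")
    if findings.unacceptable_risks:
        reasons.append("unacceptable risks: " + ", ".join(findings.unacceptable_risks))
    return reasons
```

Writing the criteria down as code (or a checklist) keeps the exit decision about evidence rather than enthusiasm.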
⸻
Phase 3: Prototyping & Proof of Concept (Research)
Goal: Prove the best candidate solution works under controlled conditions.
Activities:
• Prototype implementation
• Offline evals
• Human-in-the-loop validation
• Error analysis
Critical rule:
No roadmap commitments without completed evals.
Outputs:
• Quantified performance metrics
• Known failure modes
• Confidence intervals, not guarantees
Ownership: Research
Early exit criteria:
• Marginal improvement over baseline
• Inconsistent behavior
• Unacceptable UX risk
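"Confidence intervals, not guarantees" can be made concrete with a percentile bootstrap over per-example eval scores. A stdlib-only sketch (the synthetic pass/fail data and resample count are illustrative):

```python
import random

def bootstrap_ci(scores, n_resamples=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a mean eval score."""
    rng = random.Random(seed)
    means = sorted(
        sum(rng.choices(scores, k=len(scores))) / len(scores)
        for _ in range(n_resamples)
    )
    lo = means[int(alpha / 2 * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

# Per-example pass/fail results from an offline eval run (synthetic here).
scores = [1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1]
lo, hi = bootstrap_ci(scores)
```

Reporting the interval rather than the point estimate makes small eval sets honest: a 75% pass rate on twenty examples is compatible with a wide range of true quality.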
⸻
Phase 4: Investment Decision Gate (Product + Engineering + Research)
Goal: Decide whether the solution deserves full SDLC investment.
This is the most important step and the one most companies skip.
Inputs:
• Expected customer value
• Technical complexity
• Operational risk
• Maintenance burden
Explicit decision:
• Build
• Delay
• Kill
This is where AI stops being a science experiment and becomes a product commitment.
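The gate's inputs can be weighed explicitly. A deliberately simple sketch; the 0-to-10 scales, equal weighting, and build margin are assumptions a real gate would tune:

```python
from dataclasses import dataclass

@dataclass
class GateInputs:
    expected_value: float      # 0-10: expected customer value
    complexity: float          # 0-10: technical complexity
    operational_risk: float    # 0-10: operational risk
    maintenance_burden: float  # 0-10: ongoing maintenance burden

def gate_decision(g: GateInputs, build_margin: float = 2.0) -> str:
    """Compare expected value against average cost; margins are illustrative."""
    cost = (g.complexity + g.operational_risk + g.maintenance_burden) / 3
    if g.expected_value >= cost + build_margin:
        return "build"
    if g.expected_value >= cost:
        return "delay"  # promising, but revisit when costs or risks drop
    return "kill"
```

The numbers matter less than the discipline: forcing the four inputs onto one page, with a named owner, is what makes "delay" and "kill" legitimate outcomes.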
⸻
Phase 5: SDLC (Engineering-led, Research-supported)
Only now does traditional SDLC begin.
Activities:
• Hardening
• Scalability
• Security
• Integration
• Reliability engineering
Key difference from normal software:
• Behavior is not fully specifiable
• “Correctness” is statistical
Ownership: Engineering
⸻
Phase 6: Evaluation, Monitoring, and Iteration (Shared Ownership)
Split ownership (critical):
• Research owns:
• Evaluation methodology
• Drift detection
• Model / prompt iteration
• Product + Engineering own:
• Observability
• UX adjustments
• System reliability
This acknowledges reality:
• AI systems do not “finish”
• They degrade, shift, and require scientific intervention post-launch
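One common drift signal for the research side of this split is the Population Stability Index over model scores. A stdlib-only sketch, assuming scores normalized to [0, 1]; the binning scheme and any alerting threshold are choices a team would set itself:

```python
import math

def psi(baseline, live, n_bins=10):
    """Population Stability Index between two score samples in [0, 1].
    Near 0 means the live distribution matches the baseline; larger
    values indicate drift worth a scientific look."""
    def bin_fracs(sample):
        counts = [0] * n_bins
        for x in sample:
            counts[min(int(x * n_bins), n_bins - 1)] += 1
        # tiny floor avoids log(0) for empty bins
        return [max(c / len(sample), 1e-6) for c in counts]
    base, cur = bin_fracs(baseline), bin_fracs(live)
    return sum((c - b) * math.log(c / b) for b, c in zip(base, cur))
```

In the split above, research would own what this metric means and when it warrants model or prompt iteration; engineering would own wiring it into observability and alerting.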
⸻
Why this framework strengthens your article
This lifecycle:
• Makes early exits explicit and virtuous
• Explains why roadmaps fail, instead of blaming people
• Gives boards a concrete alternative to “just ship faster”
• Validates the role of research without turning everything into a science project
Most importantly, it gives leaders permission to slow down early so they don’t fail late — which is the central tension your article captures.
⸻
“AI projects fail not because teams move too slowly, but because they move too fast through the phases that require science instead of engineering.”
