TLDR: AI is breaking the traditional SDLC not because the tools are weak, but because senior engineers are using them with outdated mental models. The fix is shifting from edge-level coding to deep context building and mentoring AI systems to enable faster, safer system-wide transformation.
A critical challenge software CTOs face today is the fundamental disruption of the traditional Software Development Life Cycle (SDLC) by AI: specifically, the failure of highly experienced senior engineers to adapt their mental models to these new tools.
According to Conor Sibley, CTO of Higher Logic, senior systems architects often flail because they use AI “at the edge”—asking it to implement small, isolated tasks within a rigid mental model they have already constructed. When the tool produces code that doesn’t compile or fails to match their unstated intent, these experts often dismiss the technology as ineffective. This disconnect keeps organizations stuck at “vibe coding” prototypes instead of reaching the dramatically faster operational speeds required to remain competitive.
The Solution: Retraining for Context and Model Mentorship
To address this, Sibley advocates for a complete re-engineering of the organizational architecture and a fundamental retraining of how personnel approach software problems. His solution centers on three core shifts:
- Context Engineering over “Edge” Coding: Engineers must be retrained to move away from immediate code generation. Instead, they should spend 20 to 30 minutes in deep conversations with models to build massive amounts of context regarding the problem space. This includes feeding the model historical documentation, architectural patterns, and specific use cases before any code or tests are generated.
- The Transition to Model Mentorship: The senior engineer’s role must evolve from mentoring junior developers to mentoring the AI models. In this new structure, senior talent provides the routing, steering, and feedback necessary to create the high-fidelity context that allows the AI to produce better solutions in a fraction of the time.
- “Single-Shotting” New Systems: Organizations should use LLMs to extract every business use case and architectural component from existing legacy codebases. Once the new environment is established, these structured requirements allow the team to “single-shot” entirely new infrastructure and systems—effectively migrating legacy code to new programming languages in a way that was previously considered too dangerous to attempt.
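The context-first workflow in the first shift can be sketched concretely. The code below is an illustrative sketch, not an API Sibley describes: the `ContextBundle` class, its method names, and the chat-style message format are all assumptions. The idea it demonstrates is the ordering the article prescribes — accumulate documentation, architectural patterns, and use cases first, and only then append the actual coding or testing request.

```python
from dataclasses import dataclass, field


@dataclass
class ContextBundle:
    """Accumulates problem-space context before any code generation.

    Hypothetical helper for illustration only; the class name, methods,
    and message schema are assumptions, not a real library API.
    """

    docs: list[str] = field(default_factory=list)       # historical documentation
    patterns: list[str] = field(default_factory=list)   # architectural patterns
    use_cases: list[str] = field(default_factory=list)  # specific use cases

    def add_docs(self, *texts: str) -> "ContextBundle":
        self.docs.extend(texts)
        return self

    def add_patterns(self, *texts: str) -> "ContextBundle":
        self.patterns.extend(texts)
        return self

    def add_use_cases(self, *texts: str) -> "ContextBundle":
        self.use_cases.extend(texts)
        return self

    def to_messages(self, task: str) -> list[dict]:
        """Render accumulated context as a chat-style message list,
        placing all background ahead of the actual request."""
        background = "\n\n".join(
            f"## {title}\n" + "\n".join(items)
            for title, items in [
                ("Historical documentation", self.docs),
                ("Architectural patterns", self.patterns),
                ("Use cases", self.use_cases),
            ]
            if items
        )
        return [
            {"role": "system",
             "content": "Discuss the problem space; do not write code until asked."},
            {"role": "user", "content": background},
            {"role": "user", "content": task},
        ]
```

A usage sketch (the document strings are invented placeholders): build the bundle over the 20-to-30-minute conversation, then hand `bundle.to_messages("Now generate the tests.")` to whatever model client the team uses. The design choice is simply that the coding request is always the last message, after the context has been fully assembled.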
By shifting the workforce toward a model where subject-matter experts and prompt engineers manage automated systems, Sibley believes CTOs can move their companies toward a world of continuous delivery in which entire systems are rebuilt anew rather than merely iterated upon.
Analogy
Conor Sibley compares this transition to learning to ski: you cannot gain credibility or expertise by simply reading a book about it. You have to get on the mountain, strap on the skis, and be willing to fall on your face until you understand how to navigate the “moguls” of this new AI landscape.