The rise of AI-based systems and LLM-driven agents has sparked anxiety that software engineering (SE) as a discipline is shrinking. However, the central theme of this article is that agentic AI actually expands the scope of software engineering. The focus is shifting away from purely deterministic code toward semi-executable artifacts—combinations of natural language prompts, workflows, control mechanisms, and organizational routines whose enactment depends on probabilistic AI or human interpretation.
The Semi-Executable Stack
To diagnose and manage this expanding scope, the authors introduce The Semi-Executable Stack, a six-ring reference model ranging from deterministic machine execution to broad human interpretation:
- Ring 1: Executable artifacts: Traditional code, tests, schemas, and configurations.
- Ring 2: Instructional artifacts: Prompts, natural-language specifications, and task exemplars.
- Ring 3: Orchestrated execution: Tool use, agent workflows, multi-agent protocols, and human-agent loops.
- Ring 4: Control systems: Guardrails, evaluation harnesses, monitoring, and escalation rules.
- Ring 5: Operating logic: Decision preparation, organizational coordination routines, and delegation structures.
- Ring 6: Societal and institutional fit: Regulatory compliance (e.g., EU AI Act), cross-organizational integration, and institutional legitimacy.
Reframing AI Objections as Engineering Problems
The article argues that common objections to AI in software development should be treated as new engineering targets rather than reasons to dismiss the technology. For instance:
- Reliability: Agent hallucinations mean that evaluation harnesses and oversight controls (Ring 4) must become core engineering artifacts, not just optional wrappers.
- Maintenance Debt: Because AI lowers the barrier to creating software, organizations will face massive maintenance burdens. This requires extending lifecycle discipline (versioning, review, deprecation) outward to prompts and workflows, not just code.
- Organizational Inertia & Politics: Adoption and governance are now engineering problems. Designing transition playbooks, audit trails, and role definitions is critical if AI initiatives are to succeed.
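The reliability point above can be made concrete. An evaluation harness (a Ring 4 control) treats checks on probabilistic model output as first-class, versioned test artifacts. A minimal sketch in Python, where `fake_generate` is a stand-in for any real LLM call and the case names are purely illustrative:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    """One versioned check on an agent's output (a Ring 4 artifact)."""
    name: str
    prompt: str
    check: Callable[[str], bool]  # deterministic assertion over probabilistic output

def run_harness(generate: Callable[[str], str], cases: list[EvalCase]) -> dict[str, bool]:
    """Run each prompt through the model and record pass/fail per case."""
    return {c.name: c.check(generate(c.prompt)) for c in cases}

# Stub standing in for a real LLM call, so the sketch is self-contained.
def fake_generate(prompt: str) -> str:
    return "SELECT id, name FROM users;"

cases = [
    EvalCase("returns_sql", "Write SQL listing user ids and names.",
             lambda out: out.lstrip().upper().startswith("SELECT")),
    EvalCase("no_destructive_sql", "Write SQL listing user ids and names.",
             lambda out: "DELETE" not in out.upper() and "DROP" not in out.upper()),
]

results = run_harness(fake_generate, cases)
print(results)  # {'returns_sql': True, 'no_destructive_sql': True}
```

The design point is that the harness, not the model, is the stable engineering artifact: cases live in version control, run in CI, and fail loudly when a prompt or model change regresses behavior.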
Conclusions and Key Takeaways
The major conclusion is that the “center of gravity” for human software engineers is moving outward. While AI automates lower-level coding tasks, human expertise is increasingly required for specification, architecture, judgment, and systemic evaluation. The authors recommend a Preserve vs. Purify heuristic: engineers must preserve durable principles like modularity, validation, and traceability, but purify away accidental complexity optimized for manual, low-bandwidth code production. Ultimately, engineering discipline becomes more vital, not less, as a wider array of non-developers create functional, semi-executable systems.
Mentoring question
How can your team apply traditional software engineering discipline (like version control, modularity, and automated testing) to non-code artifacts like AI prompts, agent workflows, and decision-making policies?
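As a starting point for that question, here is a sketch (the schema, field names, and checks are illustrative assumptions, not from the article) of treating a prompt as a versioned artifact that passes through the same automated gates as code:

```python
import hashlib
import json

# A prompt treated as a first-class, versioned artifact (illustrative schema).
PROMPT_ARTIFACT = {
    "id": "summarize_ticket",
    "version": "1.2.0",  # bump on any wording change, like a library release
    "template": "Summarize the support ticket below in one sentence:\n{ticket}",
    "owner": "platform-team",
}

def artifact_digest(artifact: dict) -> str:
    """Content hash for the audit trail, so reviews can pin exact wording."""
    canonical = json.dumps(artifact, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()[:12]

def render(artifact: dict, **fields: str) -> str:
    """Fill the template; raises KeyError if a required field is missing."""
    return artifact["template"].format(**fields)

# The same checks CI would run on code now run on the prompt:
assert "{ticket}" in PROMPT_ARTIFACT["template"]  # interface is stable
assert render(PROMPT_ARTIFACT, ticket="App crashes").endswith("App crashes")
print(PROMPT_ARTIFACT["version"], artifact_digest(PROMPT_ARTIFACT))
```

The same pattern extends to agent workflows and decision policies: store them as data, hash them for traceability, and gate every change behind deterministic tests plus an evaluation harness run.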