Suitability of the UK MoD CADMID Model for AI and Autonomous Systems
Purpose
This paper outlines why current UK Government bureaucracy and the Ministry of Defence’s CADMID delivery model are unsuitable for developing and deploying AI and autonomous weapon systems. It highlights the risks, provides examples, and suggests mitigations and alternative approaches.
Context
The MoD currently applies the CADMID cycle (Concept, Assessment, Demonstration, Manufacture, In-Service, Disposal) as its standard approach to capability acquisition. CADMID was designed for traditional hardware platforms such as ships, aircraft, and missiles: long-lead, capital-intensive systems with service lives measured in decades.
AI and autonomous systems are fundamentally different. They rely on data, software, and algorithms that evolve rapidly and require continuous improvement. Applying CADMID’s sequential and documentation-heavy processes to these technologies creates significant misalignment.
Challenges of CADMID for AI and Autonomy
Slow and Sequential Processes
AI technologies evolve in weeks or months, whereas CADMID approval cycles often take years.
By the time an AI-enabled system reaches the “In-Service” stage, its algorithms may already be outdated.
Change Control and Flexibility
AI models require regular retraining to remain effective against adaptive adversaries.
CADMID’s rigid change-control mechanisms hinder such retraining, delaying updates and eroding operational relevance.
Bureaucratic Overhead
Extensive documentation, multi-layered approvals, and risk-averse governance create delays disproportionate to the scale of change needed in AI systems.
International Benchmarking
Allies such as the United States have already adapted: the Department of Defense has introduced the Software Acquisition Pathway and organisations such as AFWERX and the Defense Innovation Unit (DIU) to accelerate software and AI delivery.
The UK risks lagging behind if it maintains the current model.
Risks
Operational Obsolescence: AI systems may underperform or fail if updates cannot be deployed at pace.
Strategic Vulnerability: Adversaries with faster innovation cycles could gain decisive advantage.
Economic Inefficiency: Significant investment may be wasted on outdated solutions delivered too late to meet operational needs.
Mitigations and Alternative Approaches
Adopt Agile and Iterative Acquisition Models
Introduce delivery frameworks that support continuous integration, testing, and deployment.
Dual-Speed Procurement Pathway
Retain CADMID for traditional hardware platforms.
Create a parallel lightweight pathway specifically for software, AI, and autonomy.
Empowered Experimentation Units
Establish Centres of Excellence and sandboxes within the MoD with authority to procure, test, and deploy AI at pace.
Framework-Based Approvals
Approve architectures, guardrails, and governance frameworks rather than individual products, enabling iterative updates without restarting the approval process.
Integrated Governance and Safety
Build ethical, legal, and safety assurance into iterative development cycles to maintain public trust and compliance without delaying delivery.
Conclusion
The CADMID model has served the MoD well for traditional, hardware-based capabilities but is ill-suited for AI and autonomous systems. Without reform, the UK risks falling behind international peers and adversaries, fielding outdated and less effective systems.
A dual-speed procurement approach, supported by agile acquisition models, dedicated AI sandboxes, and framework-based approvals, would provide the flexibility and responsiveness required to safely and effectively deliver AI-enabled defence capabilities.