AI Implementation: What Google’s DORA Findings Mean for Boards

Artificial Intelligence is rapidly moving from experimentation to enterprise expectation. Across industries, boards are being asked the same questions: where should we invest, how quickly should we move, what risks must we manage, and how do we ensure AI delivers real value rather than an expensive distraction.

Google’s 2025 DORA (DevOps Research and Assessment) report provides one of the clearest evidence-based views of what successful AI implementation actually looks like inside organisations. Its findings challenge one of the most common executive assumptions: that AI transformation is primarily a technology problem. The report demonstrates that it is not. AI is fundamentally an organisational design, leadership, and operating model challenge.

The central finding is simple but significant: AI acts as an amplifier, not a cure. It strengthens what already exists. In organisations with clear priorities, strong delivery systems, trusted data, effective governance, and healthy team dynamics, AI accelerates performance and improves outcomes. In organisations with fragmented processes, poor decision-making, technical debt, unclear ownership, and weak controls, AI increases speed but also increases risk, waste, and complexity.

This distinction is critical for boards. Many organisations remain focused on acquiring AI tools rather than preparing the conditions required for those tools to succeed. The DORA findings show that productivity gains do not come primarily from the sophistication of the technology itself, but from the maturity of the surrounding system. Strong internal platforms, disciplined version control, effective testing practices, rapid feedback loops, and high-quality operational data were found to be far stronger predictors of AI success than the choice of AI vendor.

This should reshape how boards approach investment decisions. The question is not “Which AI platform should we buy?” but rather “Is our organisation capable of absorbing AI safely and effectively?” Without this lens, technology spend risks becoming a substitute for transformation rather than an enabler of it.

The report found that over 90 percent of technology professionals are now using AI at work, and more than 80 percent report productivity improvements. This confirms that adoption is no longer optional. AI is already embedded in the working environment, often ahead of formal governance. However, productivity should not be confused with value creation. Faster output is not the same as better outcomes.

A particularly important finding is the issue of trust. Nearly one-third of respondents reported little or no trust in AI-generated outputs, particularly in areas such as code quality, decision support, and recommendations. This creates what can be described as the “trust paradox”: organisations are increasing dependency on systems they do not yet fully trust. This is a board-level concern because trust directly affects risk, control, compliance, and accountability.

Boards should recognise that AI governance cannot be delegated solely to technology teams. Questions of assurance, transparency, ethical use, regulatory compliance, customer impact, and operational resilience require direct oversight. AI introduces not only execution risk, but also reputational and fiduciary risk. Governance frameworks must therefore evolve at the same pace as adoption.

Google’s DORA research identifies seven core organisational capabilities required for successful AI implementation. These are not technical features but enterprise capabilities: a clear organisational stance on AI; healthy data ecosystems; accessible and reliable internal data; strong version control; user-centred design; effective internal platforms; and the discipline of working in small batches rather than through large transformation programmes.

These capabilities strongly align with what boards should already expect from good governance and sound transformation leadership. AI does not create a new management discipline; it exposes weaknesses in existing ones. Organisations that struggle with prioritisation, fragmented accountability, siloed decision-making, or poor delivery discipline will find that AI magnifies those problems rather than solving them.

This is particularly relevant when considering return on investment. Many organisations are currently measuring AI success through activity metrics such as usage rates, pilot launches, or tool deployment numbers. These are weak indicators. Boards should instead focus on outcome measures: improved customer experience, reduced operational friction, faster decision-making, stronger controls, lower delivery risk, and measurable value creation.

AI should therefore be governed in the same way as any other strategic investment, with clarity of purpose, defined accountability, measurable outcomes, and disciplined portfolio oversight. It should not sit outside existing transformation governance; it should be integrated into it.

Leadership behaviour is equally important. AI adoption often exposes cultural issues around ownership, experimentation, transparency, and decision-making. Leaders who seek certainty before movement often slow value creation. Leaders who pursue speed without controls increase operational risk. The board’s role is to ensure balance: enabling responsible progress rather than either uncontrolled acceleration or institutional paralysis.

For executive teams, this means the most valuable early investments may not be in AI products at all. They may be in simplifying operating models, strengthening data quality, improving delivery systems, clarifying decision rights, and reducing organisational friction. These are less visible investments, but they are the true foundations of scalable AI value.

The strategic implication for boards is clear. AI should not be treated as a standalone innovation agenda. It should be treated as a test of organisational maturity. The organisations that will gain the greatest advantage from AI will not necessarily be those that move fastest to buy new tools, but those that are most disciplined in how they design, govern, and lead their operating model.

The strongest lesson from Google’s DORA findings is therefore not about artificial intelligence itself. It is about leadership. AI implementation succeeds where strategy, governance, delivery, and culture are already aligned. Where they are not, AI simply accelerates dysfunction.

Board-Ready Actions

Boards should treat AI as a strategic transformation issue, not simply a technology investment. Immediate action should focus on five areas.

Define the organisation’s AI position.
Establish where AI will create value, where it should not be used, and the principles that will guide investment, governance, and risk decisions.

Assess organisational readiness.
Before major investment, review data quality, delivery maturity, governance strength, and leadership capability to ensure the business can absorb AI effectively.

Strengthen governance and accountability.
Ensure clear board-level oversight across risk, compliance, legal, cyber, customer impact, and ethics, rather than delegating these questions solely to technology teams.

Measure outcomes, not activity.
Shift reporting from pilot numbers and tool usage to business outcomes such as customer value, operational efficiency, risk reduction, and financial return.

Align leadership incentives.
Reward responsible delivery and sustainable value creation, not speed of deployment alone.

Conclusion

AI is not a future consideration. It is already shaping how organisations operate, compete, and create value. The board’s responsibility is not to decide whether AI should exist in the enterprise; it already does. The responsibility is to ensure it is implemented with discipline, purpose, and control.

Google’s DORA findings provide a clear warning against a common mistake: treating AI as a procurement decision rather than a leadership challenge. Organisations do not fail with AI because the tools are inadequate. They fail because governance is weak, operating models are fragmented, and leadership assumes technology can compensate for organisational immaturity.

Boards that focus only on innovation risk missing the deeper issue. The real differentiator is not access to AI, but the ability to absorb it effectively. That requires clarity of strategy, strength of governance, maturity of delivery systems, and leadership capable of balancing ambition with accountability.

The most important board question is therefore not “How fast are we adopting AI?” but “Are we structurally prepared for AI to succeed?”

AI will amplify whatever already exists. If the organisation is well led, it will accelerate performance. If it is not, it will accelerate failure.

The board’s role is to ensure it is the former.
