AI Centre of Excellence Guide
Every IT leader has the same problem with AI right now. Teams are experimenting in isolation, duplicating efforts across departments, and nobody has a clear picture of what is working. The answer is not more tools or bigger budgets. It is structure.
An AI Centre of Excellence (CoE) provides that structure. It is a dedicated function that coordinates AI strategy, shares expertise across teams, enforces governance, and makes sure every AI initiative ties back to measurable business outcomes. Without one, you get fragmented projects, wasted resources, and shadow AI running unchecked through the organisation.
This guide walks you through building an AI CoE from the ground up, covering the operating model, team composition, governance framework, and the practical steps to get from concept to operational in 90 days.
Why You Need an AI Centre of Excellence
The case for an AI CoE is straightforward. According to industry research, most organisations struggle to scale AI beyond pilot projects. The failure rate for AI initiatives hovers around 80%, and the primary cause is rarely the technology itself. It is organisational: misalignment between technical teams and business units, duplicated work, inconsistent data practices, and a lack of accountability for outcomes.
An AI CoE solves these problems by centralising three critical functions.
Strategic alignment. The CoE evaluates and prioritises AI projects based on business impact, not technical novelty. Every initiative gets vetted against organisational goals before it receives resources.
Knowledge sharing. When one team discovers that a particular approach to customer churn prediction works brilliantly, that learning gets documented and made available to every other team. Without a CoE, those insights stay trapped in silos.
Governance and risk management. AI introduces real risks around bias, privacy, regulatory compliance, and security. A CoE provides centralised oversight to manage these risks consistently rather than leaving each team to figure it out independently.
The alternative is what most organisations have today: scattered AI experiments with no central coordination, no shared learnings, and no governance framework. That is expensive, risky, and unsustainable.
Choosing Your Operating Model
Before hiring anyone or buying anything, you need to decide how your AI CoE will operate. There are three common models, and the right choice depends on your organisation's size, maturity, and culture.
The Hub and Spoke Model
This is the most popular approach for mid-to-large organisations. A central CoE team sets standards, provides tools and frameworks, and offers consultancy. Business units maintain their own AI practitioners who follow the CoE's guidelines but report into their respective departments.
Best for: Organisations with 500 or more employees that already have some AI capability distributed across teams.
Advantages: Balances central governance with departmental autonomy. Business units keep ownership of their AI projects while benefiting from shared resources.
Risk: The central hub can become a bottleneck if it tries to approve every project rather than enabling teams to move independently within guardrails.
The Centralised Model
All AI resources, talent, and decision-making sit within the CoE. Business units submit requests and the CoE delivers solutions.
Best for: Organisations in early AI maturity stages or those with fewer than 500 employees where concentrating scarce AI talent makes more sense than distributing it thinly.
Advantages: Maximum control, consistent quality, no duplication.
Risk: Can create an ivory tower effect where the CoE loses touch with real business problems. Projects may also queue up, frustrating business stakeholders.
The Federated Model
AI capability is fully distributed across business units with light-touch central coordination focused purely on standards and governance. There is no central execution team.
Best for: Mature organisations where multiple departments already have strong AI teams and the primary need is coordination rather than capability building.
Advantages: Maximum speed and autonomy for business units.
Risk: Standards can drift, duplication creeps back in, and governance becomes inconsistent without strong central oversight.
For most organisations reading this guide, the hub and spoke model is the right starting point. It provides enough central control to establish standards and governance while giving business units the autonomy to move quickly on their own priorities.
Building Your CoE Team
The team composition of your AI CoE matters more than its size. You need a blend of technical depth, business acumen, and governance expertise. Here is the core team you should aim for in the first phase.
Essential Roles
AI CoE Lead. This person owns the strategy, reports to the CIO or CTO, and is accountable for the CoE's business impact. They need to be equally comfortable in a board presentation and a technical architecture review. Do not make this a purely technical role. The biggest failure mode for AI CoEs is hiring a brilliant data scientist who cannot translate technical capabilities into business language.
Data Engineers (2-3). Your AI is only as good as your data. These engineers build and maintain the data pipelines, feature stores, and data quality frameworks that every AI project depends on. Without them, every team builds its own data infrastructure and you end up with inconsistent, unreliable foundations.
ML Engineers (1-2). Responsible for model development, training, deployment, and monitoring. They build the MLOps platform that enables repeatable, reliable AI delivery.
AI Ethics and Governance Lead. This role is not optional. Someone needs to own bias testing, fairness auditing, regulatory compliance, and the ethical review process for AI projects. In regulated industries, this person is your first line of defence against costly compliance failures.
Business Analysts (1-2). The bridge between technical teams and business stakeholders. They translate business problems into technical requirements and ensure AI solutions actually address the original need rather than becoming solutions looking for problems.
Hiring Realities
Finding people with AI expertise remains difficult. The IT skills crisis is real, and AI talent is among the most competitive to recruit. Consider these practical approaches.
Upskill existing staff. Your current developers and data analysts already understand your business. Investing in their AI training is often faster and more effective than hiring externally.
Start with contractors. Bring in experienced AI practitioners on 6-12 month contracts to build your initial frameworks and mentor permanent staff. This gets you moving quickly while you recruit for permanent roles.
Partner with universities. Many universities run applied AI programmes where students work on real business problems. This creates a pipeline of future hires who already know your organisation.
Do not try to hire the perfect team before launching. Start with three to four people and grow based on demand.
Establishing Your Governance Framework
Governance is where AI CoEs earn their keep. Without it, AI initiatives create risks that can damage your organisation's reputation, violate regulations, and erode customer trust. Your governance framework should cover four areas.
Project Intake and Prioritisation
Create a standardised process for evaluating AI project proposals. Every proposal should answer these questions:
- What business problem does this solve, and what is the expected value?
- What data is required, and do we have access to it?
- What are the risks around bias, privacy, and regulatory compliance?
- What is the estimated cost and timeline?
- How will success be measured?
Score proposals using a simple framework that weighs business impact against technical feasibility and risk. Publish the scoring criteria so teams understand what gets funded and what does not. Transparency here prevents the CoE from being seen as an arbitrary gatekeeper.
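As a sketch of what that scoring framework could look like in practice, the snippet below weighs business impact against feasibility and risk. The criteria names, weights, and 1-5 rating scale are illustrative assumptions, not a standard; tune them to your own priorities and publish them alongside the intake process.

```python
# Illustrative project-intake scoring sketch. Criteria and weights are
# assumptions for demonstration; adjust to your organisation's priorities.

CRITERIA_WEIGHTS = {
    "business_impact": 0.4,  # expected value to the organisation
    "feasibility": 0.3,      # data availability, technical difficulty
    "risk": 0.3,             # bias/privacy/compliance exposure (higher = lower risk)
}

def score_proposal(ratings: dict) -> float:
    """Weighted score for a proposal, each criterion rated 1-5."""
    for name, rating in ratings.items():
        if name not in CRITERIA_WEIGHTS:
            raise ValueError(f"Unknown criterion: {name}")
        if not 1 <= rating <= 5:
            raise ValueError(f"Rating for {name} must be between 1 and 5")
    return round(sum(CRITERIA_WEIGHTS[c] * ratings[c] for c in CRITERIA_WEIGHTS), 2)

# Example: strong business case, moderate feasibility, moderate risk profile
print(score_proposal({"business_impact": 5, "feasibility": 3, "risk": 3}))  # 3.8
```

Publishing the weights themselves, not just the final scores, is what makes the process feel transparent rather than arbitrary.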
Ethical AI Standards
Define clear standards for fairness, transparency, and accountability. At minimum, your standards should cover:
Bias testing. Every model must be tested for bias across protected characteristics before deployment. Define what "acceptable" looks like and document the testing methodology.
Explainability. For any model that influences decisions affecting people, such as hiring, lending, or service eligibility, you need to be able to explain how it reaches its conclusions. Black box models are not acceptable in these contexts.
Data privacy. AI projects must comply with GDPR, the UK Data Protection Act, and any sector-specific regulations. Define clear rules about what data can be used for training, how long it is retained, and how consent is managed.
Human oversight. Define which decisions require human review before action is taken. Not everything needs a human in the loop, but high-stakes decisions absolutely do.
Model Lifecycle Management
AI models are not static. They degrade over time as the real world changes and training data becomes stale. Your governance framework needs to address the full model lifecycle.
Development standards. Version control for models, reproducible training pipelines, and mandatory documentation of training data, hyperparameters, and performance metrics.
Deployment gates. Before any model reaches production, it must pass defined quality, bias, and security checks. Automate these where possible to avoid creating bottlenecks.
Monitoring and drift detection. Once deployed, models need continuous monitoring for performance degradation, data drift, and concept drift. Define thresholds that trigger retraining or rollback.
Retirement criteria. Know when to switch off a model. If performance drops below acceptable thresholds and retraining does not help, retire it rather than letting a degraded model continue making poor decisions.
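Drift detection in particular lends itself to automation. Below is a minimal sketch using the Population Stability Index (PSI), one common data-drift metric; the ten-bin layout and the 0.1/0.25 thresholds are a widely used rule of thumb, not a standard, and real monitoring would run this per feature on a schedule.

```python
# Minimal data-drift check using the Population Stability Index (PSI).
# Rule of thumb: PSI < 0.1 stable, 0.1-0.25 investigate, > 0.25 retrain.
import math

def psi(expected: list, actual: list, bins: int = 10) -> float:
    """PSI between a baseline (training) distribution and current data."""
    lo, hi = min(expected), max(expected)
    span = hi - lo

    def proportions(values):
        counts = [0] * bins
        for v in values:
            # Clip out-of-range values into the edge bins
            idx = 0 if span == 0 else min(max(int((v - lo) / span * bins), 0), bins - 1)
            counts[idx] += 1
        # Smooth zero counts so the log term stays defined
        return [(c + 1e-6) / len(values) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [float(i % 10) for i in range(1000)]
print(psi(baseline, baseline) < 0.1)               # identical data: negligible drift
print(psi(baseline, [v + 5 for v in baseline]) > 0.25)  # shifted data: retrain-level drift
```

Wiring a check like this into scheduled monitoring, with the thresholds triggering alerts or retraining jobs, is what turns the lifecycle policy from a document into guardrails.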
Measuring Success
Your CoE needs to demonstrate value. Define KPIs that connect AI activity to business outcomes.
Adoption metrics: Number of AI projects in production, number of business units using AI, employee AI literacy rates.
Business impact metrics: Revenue generated or costs saved through AI initiatives, process time reductions, customer satisfaction improvements.
Governance metrics: Number of models passing bias testing, compliance audit results, time from project proposal to production deployment.
Efficiency metrics: Reduction in duplicated AI efforts, reuse rate of shared models and datasets, time saved through standardised tooling.
Report these metrics quarterly to leadership. If the CoE cannot demonstrate measurable impact, it will lose its funding and organisational support.
Your 90-Day Launch Plan
Getting an AI CoE operational does not require years of planning. Here is a practical 90-day roadmap.
Days 1 to 30: Foundation
Week 1-2: Secure executive sponsorship and define the CoE's mandate. Document which operating model you are adopting and why. Get budget approval for the initial team.
Week 3-4: Hire or assign the CoE Lead and begin recruiting the core team. Audit existing AI initiatives across the organisation to understand what is already happening, who is doing it, and what tools they are using.
Days 31 to 60: Build
Week 5-6: Establish the governance framework: project intake process, ethical standards, and model lifecycle policies. Do not aim for perfection. Start with a pragmatic set of standards and iterate based on experience.
Week 7-8: Set up shared infrastructure: cloud environment for AI development, MLOps tooling, a model registry, and a feature store. Choose tools that lower the barrier to entry rather than the most technically sophisticated options.
Days 61 to 90: Deliver
Week 9-10: Launch two to three pilot projects selected through the intake process. These should be high visibility, achievable within weeks, and deliver measurable business value. Quick wins build credibility.
Week 11-12: Publish the CoE's first quarterly report. Share the results of pilot projects, the governance framework, and the roadmap for the next quarter. Hold an organisation-wide session to introduce the CoE and explain how teams can engage with it.
Common Pitfalls to Avoid
Having observed AI CoEs succeed and fail, I see the same patterns emerge consistently.
Making it too academic. The CoE exists to deliver business value, not to explore interesting technology. Every activity should connect to a business outcome. If a project is technically fascinating but has no clear business case, it does not get resources.
Underinvesting in data. Organisations spend heavily on AI tools and talent but neglect data quality, data engineering, and data governance. Your data governance strategy is the foundation everything else sits on. Without clean, accessible, well-governed data, even the best AI team will struggle.
Ignoring change management. AI changes how people work. If you deploy an AI solution without investing in training, communication, and support, adoption will be low and the project will be deemed a failure regardless of its technical quality.
Creating a bottleneck. The CoE should enable teams to move faster, not slower. If every AI decision requires CoE approval, you have built a bureaucracy, not an accelerator. Set guardrails and let teams operate within them.
Measuring activity instead of outcomes. The number of models deployed means nothing if they are not generating value. Focus your metrics on business impact, not technical output.
Scaling Beyond the First Year
Once your CoE is established and delivering results, the focus shifts to scaling.
Expand the hub and spoke. Train and embed AI champions in each business unit who can identify opportunities, develop simpler solutions independently, and escalate complex projects to the central team.
Build an AI literacy programme. Not everyone needs to build models, but everyone benefits from understanding what AI can and cannot do. A structured literacy programme reduces fear, increases adoption, and generates better project proposals from business units.
Establish an AI product catalogue. Document every AI solution in production with its purpose, owner, performance metrics, and reuse potential. This prevents duplication and helps teams discover existing solutions before building new ones.
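A catalogue does not need to start as a product; even a structured record per solution goes a long way. The sketch below shows one possible entry shape; the field names are illustrative assumptions, not a standard schema.

```python
# Illustrative AI product catalogue entry. Field names are assumptions;
# adapt them to whatever metadata your organisation needs to track.
from dataclasses import dataclass, field

@dataclass
class CatalogueEntry:
    name: str
    purpose: str
    owner: str                # accountable team or individual
    status: str               # e.g. "production", "pilot", "retired"
    performance_metrics: dict = field(default_factory=dict)
    reusable_assets: list = field(default_factory=list)  # models, datasets, pipelines

catalogue = [
    CatalogueEntry(
        name="churn-predictor",
        purpose="Flag customers at risk of churning for retention outreach",
        owner="Customer Insights",
        status="production",
        performance_metrics={"auc": 0.82},
        reusable_assets=["customer-feature-store"],
    ),
]

# Discovery before building: does anything already cover this need?
matches = [e.name for e in catalogue if "churn" in e.purpose.lower()]
print(matches)  # ['churn-predictor']
```

The discovery query at the end is the point: teams search the catalogue before proposing new work, which is how duplication actually gets prevented.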
Invest in measuring AI ROI rigorously. As the portfolio of AI projects grows, your ability to demonstrate aggregate ROI becomes critical for continued investment.
Getting Started
Building an AI Centre of Excellence is not a technology project. It is an organisational change initiative that happens to involve technology. The organisations that succeed treat it that way, starting with clear business alignment, investing in people and governance alongside tools, and measuring success in business outcomes rather than technical sophistication.
Start small. Hire three to four good people. Pick the hub and spoke model. Launch a governance framework that is good enough rather than perfect. Deliver two quick wins in 90 days. Then iterate.
The organisations that wait for perfect conditions before starting never start. The ones that launch pragmatically and improve continuously are the ones building real competitive advantage with AI.
Daniel J Glover
IT Leader with experience spanning IT management, compliance, development, automation, AI, and project management. I write about technology, leadership, and building better systems.