AI Training: Closing the 39% Gap
This is Part 4 of a 7-part series on Business AI Enablement for IT Leaders. The series covers why enablement matters, shadow AI risks, building an enablement framework, tool selection, governance controls, and concludes with a 90-day implementation roadmap.
Most AI users in your organisation are teaching themselves. They are experimenting through trial and error, learning from YouTube videos, and developing habits that may or may not be effective or safe.
Only 39% of employees who use AI have received any training from their employer. This means 61% are essentially unsupervised learners, developing practices without guidance on effectiveness, security, or appropriate use.
The training gap is one of the most addressable problems in AI enablement. Training programmes can be developed quickly, delivered at scale, and adapted as AI evolves. Yet most organisations have barely started.
This article provides a framework for AI training that builds genuine competence across your workforce.
The Training Deficit
The 39% training figure comes from Microsoft and LinkedIn research. But the depth of training matters as much as coverage. Among those who received training, many got only superficial introductions rather than practical skill development.
Why the deficit exists:
Speed of adoption. AI tools reached widespread use faster than training programmes could be developed. Employees started using ChatGPT months before HR had time to create formal curricula.
Unclear ownership. AI training falls between traditional boundaries. IT understands the technology. HR owns training. Business units understand use cases. Without clear ownership, development stalls.
Perceived self-sufficiency. AI tools appear intuitive. Leaders assume employees will figure them out. This underestimates the difference between basic use and effective use.
Uncertainty about content. What should AI training cover? The answer changes as tools evolve. This uncertainty paralyses curriculum development.
Competing priorities. Training budgets and time compete with other demands. AI training feels optional when it should be essential.
Consequences of the deficit:
- Productivity potential unrealised as employees use AI inefficiently
- Security risks as employees handle data inappropriately
- Quality problems as unverified AI outputs enter business processes
- Frustration as employees struggle without support
- Shadow AI as employees seek help from unvetted sources
The training gap is a significant contributor to the 95% AI ROI failure rate. Organisations invest in tools but not in the skills to use them.
What AI Literacy Actually Means
AI literacy is more than knowing how to access ChatGPT. It encompasses understanding that transforms casual use into effective, safe productivity.
Core literacy components:
Understanding AI Capabilities and Limits
Employees need a mental model of what AI can and cannot do. This includes:
- Generative, not omniscient. AI generates plausible text based on patterns. It does not necessarily have access to current information, internal knowledge, or ground truth.
- Confident but fallible. AI presents incorrect information with the same confidence as correct information. Without verification, employees cannot tell the difference.
- Context-dependent. AI performance varies dramatically based on how questions are framed. The same underlying question can yield excellent or poor results depending on formulation.
- Bias inheritance. AI reflects patterns in its training data, including biases. Outputs may perpetuate or amplify problematic patterns.
This conceptual understanding prevents the most damaging AI mistakes: trusting outputs that should not be trusted.
Prompt Engineering Fundamentals
The difference between mediocre and excellent AI results often lies in how questions are asked. Basic prompt engineering includes:
- Clarity and specificity. Vague prompts yield vague results. Specific prompts with context yield useful outputs.
- Role and context setting. Telling AI to assume a role or providing background information improves relevance.
- Output formatting. Specifying the desired format (bullet points, tables, specific length) increases usability.
- Iterative refinement. Initial prompts rarely produce perfect results. Skilled users refine through follow-up questions.
- Example provision. Showing AI examples of desired output guides generation more effectively than abstract descriptions.
Even basic prompt engineering can double or triple the value employees extract from AI.
Data Handling Awareness
Every prompt is a potential data disclosure. Employees need to understand:
- What data enters the AI system. Anything in a prompt may be processed, stored, or used for training, depending on the tool.
- Data classification alignment. Which data types can be used with which tools, based on organisational classification.
- Sensitive information identification. Recognising when a task involves data that requires special handling.
- Alternative approaches. How to accomplish tasks without exposing sensitive data when necessary.
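The classification alignment point above can be made concrete as a simple lookup: a matrix mapping each data classification tier to the tools approved for it. This is an illustrative Python sketch; the tier names and tool names are assumptions, not a standard, and your organisation's own classification scheme would replace them.

```python
# Illustrative classification matrix -- tier and tool names are assumptions,
# not a standard. Substitute your organisation's real scheme.
ALLOWED_TOOLS = {
    "public":       {"ChatGPT Enterprise", "Copilot", "internal-llm"},
    "internal":     {"ChatGPT Enterprise", "Copilot", "internal-llm"},
    "confidential": {"internal-llm"},
    "restricted":   set(),  # no AI tool approved for restricted data
}

def tool_allowed(classification: str, tool: str) -> bool:
    """Check whether data of a given classification may be used with a tool.

    Unknown classifications default to disallowed (fail closed).
    """
    return tool in ALLOWED_TOOLS.get(classification, set())
```

Encoding the matrix as data rather than prose makes it easy to surface in training materials and, later, to enforce in tooling.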
Output Verification
AI outputs require checking before use. Employees should know:
- What to verify. Facts, citations, code functionality, logical consistency.
- How to verify. Appropriate methods for different content types.
- When verification is mandatory. Understanding which use cases require formal review.
- Common AI errors. Recognising patterns in AI mistakes that signal the need for extra scrutiny.
Role-Specific Training Pathways
Beyond foundational literacy, different roles need different skills. A one-size-fits-all curriculum wastes time on irrelevant content and misses critical role-specific needs.
| Role Category | Primary Use Cases | Specific Training Needs |
|---|---|---|
| Marketing and Communications | Content creation, editing, campaign development | Brand voice consistency, copyright awareness, fact-checking, A/B testing with AI |
| Analytics and Data Science | Data exploration, insight generation, code assistance | Data privacy, output validation, statistical rigour, avoiding spurious correlations |
| Software Development | Code generation, debugging, documentation | Security review, testing requirements, licensing implications, code quality assessment |
| Customer Service | Response drafting, case summarisation, knowledge lookup | Customer data handling, tone appropriateness, escalation recognition, empathy preservation |
| Finance and Procurement | Analysis support, document review, reporting | Accuracy requirements, audit trail needs, regulatory compliance, numerical verification |
| Human Resources | Policy drafting, job descriptions, communications | Bias awareness, legal implications, consistency requirements, sensitivity handling |
Marketing Training Focus
Marketers need AI to produce content that works. Their training should emphasise:
- Brand voice maintenance. Techniques for ensuring AI output matches established brand guidelines. This includes providing examples, specifying constraints, and systematic review.
- Factual accuracy for public content. Every claim in external content requires verification. Training should cover efficient fact-checking approaches.
- Copyright and attribution. Understanding when AI output may incorporate copyrighted material and how to address it.
- Performance testing. Using AI to generate variants for A/B testing rather than relying on single outputs.
Analytics Training Focus
Analysts work with sensitive data and influence decisions. Their training should cover:
- Data anonymisation techniques. How to work with AI without exposing personal or commercially sensitive information.
- Output validation methods. Statistical and logical checks appropriate for analytical claims.
- Correlation versus causation. AI can surface spurious patterns. Analysts need discipline to validate relationships.
- Reproducibility. Documenting prompts and approaches so analysis can be verified and repeated.
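The anonymisation technique above can be sketched as simple pattern substitution applied before any text reaches an AI tool. This is a minimal illustrative sketch, not a complete anonymisation solution: the patterns and placeholder tokens are invented examples, and real coverage would need far more (names, addresses, internal identifiers).

```python
import re

# Invented example patterns -- real anonymisation needs much broader coverage.
PATTERNS = {
    "EMAIL":  re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "UK_NI":  re.compile(r"\b[A-Z]{2}\d{6}[A-Z]\b"),   # UK National Insurance number
    "SALARY": re.compile(r"£\d[\d,]*"),
}

def pseudonymise(text: str) -> str:
    """Replace sensitive values with placeholder tokens before prompting."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarise: jane.doe@example.com (NI AB123456C) earns £52,000."
safe = pseudonymise(prompt)
# safe no longer contains the email address, NI number, or salary figure
```

Teaching analysts the habit, substitute first, prompt second, matters more than any particular pattern set.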
Developer Training Focus
As I explored in my analysis of vibe coding security, AI-generated code requires particular attention. Developer training should address:
- Security review requirements. All AI-generated code needs security scrutiny before deployment, including checks for common vulnerability patterns in generated code.
- Testing discipline. AI code must be tested as rigorously as human-written code. Prompt completion is not verification.
- Licence compliance. Understanding the intellectual property implications of AI-generated code.
- Dependency assessment. AI often suggests libraries or patterns that may not be appropriate. Developers need to evaluate suggestions critically.
Customer Service Training Focus
Frontline staff interact with customers using AI. Training should cover:
- Data boundaries. What customer information can and cannot be shared with AI tools.
- Response review. Ensuring AI-drafted responses are appropriate in tone and content before sending.
- Escalation recognition. Knowing when a situation exceeds AI capability and requires human judgment.
- Empathy preservation. AI can make responses feel impersonal. Training should address maintaining human connection.
Prompt Engineering for Everyone
Prompt engineering is the single highest-leverage skill for AI effectiveness. An organisation-wide foundation in prompt engineering significantly increases AI ROI.
Core techniques:
The RACE Framework
A simple structure for effective prompts:
- R - Role: Define who the AI should be ("You are an experienced marketing copywriter")
- A - Action: Specify what to do ("Write a product description")
- C - Context: Provide relevant background ("for our new eco-friendly water bottle, targeting environmentally conscious millennials")
- E - Examples: Show desired format or style ("similar to the attached example but shorter")
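The RACE structure above can be captured in a small helper that assembles a prompt from its four parts. A minimal sketch under the assumption that a plain concatenated sentence order (role, action, context, example) works for your tools; the function name and field layout are illustrative, not a standard API.

```python
def race_prompt(role: str, action: str, context: str, example: str = "") -> str:
    """Assemble a prompt using the RACE structure: Role, Action, Context, Examples."""
    parts = [
        f"You are {role}.",      # R - Role
        f"{action}.",            # A - Action
        f"Context: {context}.",  # C - Context
    ]
    if example:
        parts.append(f"Example of the desired style: {example}")  # E - Examples
    return " ".join(parts)

prompt = race_prompt(
    role="an experienced marketing copywriter",
    action="Write a product description",
    context=("for our new eco-friendly water bottle, targeting "
             "environmentally conscious millennials"),
)
```

Even a trivial template like this gives trainees a repeatable checklist: if one of the four slots is empty, the prompt is probably underspecified.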
Iterative Refinement
First outputs rarely meet needs. Teach employees to:
- Start with an initial prompt
- Evaluate the output critically
- Identify specific improvements needed
- Refine with targeted follow-up
- Repeat until satisfactory
This process dramatically improves outcomes compared to accepting initial outputs.
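The refinement loop above can be expressed as a simple structure wrapped around whatever chat function your approved tool exposes. This is a sketch only: `ask` and `needs_work` are placeholder callables standing in for a real AI call and a human (or automated) critique step, not real APIs.

```python
def refine(ask, first_prompt: str, needs_work, max_rounds: int = 3) -> str:
    """Iteratively refine AI output.

    ask(prompt) -> str        placeholder for the approved tool's chat call
    needs_work(output) -> str returns a specific critique, or "" when satisfied
    """
    output = ask(first_prompt)
    for _ in range(max_rounds):
        critique = needs_work(output)
        if not critique:
            break  # output is satisfactory
        # Targeted follow-up on the previous answer, not a restart from scratch
        output = ask(f"Improve the previous answer: {critique}\n\n{output}")
    return output
```

The key habit it encodes is the one described above: evaluate, name a specific improvement, and feed that critique back, rather than accepting or re-rolling the first output.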
Common Mistakes to Avoid
Training should highlight frequent errors:
- Being too vague. "Help me with my presentation" yields less than "Create an outline for a 15-minute presentation to executive stakeholders about Q3 sales performance, emphasising positive trends and action items."
- Accepting first outputs. Initial results are starting points, not endpoints.
- Ignoring context. Most AI tools have no memory between sessions. Each conversation needs relevant background.
- Overloading single prompts. Complex tasks work better as sequences of simpler prompts.
- Neglecting format specification. Telling AI the desired structure (bullet points, table, paragraph length) improves usability.
Security Awareness for AI Users
Security training must be integrated into AI training, not treated as a separate topic. Every AI user needs security awareness proportionate to their role.
Universal security content:
- Data classification review. Reinforcing what data types exist and their handling requirements.
- Tool-specific boundaries. Clear guidance on what can be done with which approved tools.
- Incident recognition. Identifying situations that may constitute security incidents.
- Reporting procedures. How to report concerns without fear of consequence.
Scenario-based training:
Abstract security principles become concrete through scenarios:
- "A colleague shares a prompt that includes customer names and account numbers. What should you do?"
- "You need to analyse a dataset containing employee salaries. How do you proceed?"
- "AI generates code that connects to an external API you don't recognise. What's your next step?"
Scenarios build judgment that policies alone cannot provide.
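The first scenario, a prompt containing customer names and account numbers, is exactly the situation a lightweight pre-send check can surface as a warning. A hedged sketch: the account-number and card-number formats are invented examples, and any real deployment would use your organisation's own data patterns.

```python
import re

# Invented example patterns -- substitute your organisation's real formats.
RISK_PATTERNS = {
    "account number": re.compile(r"\b\d{8,12}\b"),
    "email address":  re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card number":    re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def flag_prompt(prompt: str) -> list[str]:
    """Return the risk categories detected in a prompt, to warn before sending."""
    return [name for name, pat in RISK_PATTERNS.items() if pat.search(prompt)]

warnings = flag_prompt("Refund account 12345678 for customer j.smith@example.com")
# warnings lists the detected categories; an empty list means nothing matched
```

A filter like this does not replace the judgment the scenarios build; it gives employees a prompt (literally) to apply it.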
Building a Sustainable Learning Culture
One-time training is insufficient. AI capabilities change. Employee needs evolve. Sustainable learning requires ongoing mechanisms.
Training delivery options:
- Self-paced online modules. Scalable foundation that employees complete at their convenience.
- Live workshops. Deeper engagement with hands-on practice and Q&A.
- Champion-led sessions. Local training tailored to specific team needs.
- Microlearning. Short, focused content addressing specific topics or updates.
- Community learning. Peer-to-peer knowledge sharing through communities of practice.
Effective programmes typically combine multiple approaches, with foundations delivered online and depth developed through interactive sessions.
Measuring training effectiveness:
- Completion rates. Are employees taking available training?
- Knowledge assessment. Do employees retain what they learned? Pre- and post-tests reveal this.
- Behaviour change. Are trained employees using AI differently from untrained colleagues? Usage analytics can indicate this.
- Outcome improvement. Do trained employees achieve better results? This requires baseline comparison.
- Satisfaction and confidence. Do employees feel prepared? Surveys reveal gaps.
Maintaining currency:
AI training must evolve with AI capabilities. Establish processes to:
- Review and update content quarterly
- Add modules when new tools are approved
- Address emerging risks as they become understood
- Incorporate learnings from incident reviews
Stale training is worse than no training because it creates false confidence.
Quick Reference: AI Training Curriculum Template
Use this template as a starting point for your organisation's AI training programme:
Module 1: AI Fundamentals (All Employees)
- [ ] What AI is and how it works (conceptual)
- [ ] Capabilities and limitations
- [ ] Common AI tools and their purposes
- [ ] Basic prompt engineering techniques
- [ ] Data handling principles
- [ ] Output verification basics
- [ ] Security awareness essentials
- [ ] Organisational policies overview
Module 2: Role-Specific Skills (By Function)
- [ ] Use cases specific to the role
- [ ] Best practices for common tasks
- [ ] Quality standards for outputs
- [ ] Function-specific data considerations
- [ ] Integration with existing workflows
Module 3: Advanced Prompt Engineering (Power Users)
- [ ] Complex prompt structures
- [ ] Multi-turn conversation strategies
- [ ] System prompts and customisation
- [ ] Tool-specific optimisation
- [ ] Automation and integration basics
Module 4: AI Champions (Selected Employees)
- [ ] Deep dive on all approved tools
- [ ] Training facilitation skills
- [ ] Use case identification methods
- [ ] Policy interpretation guidance
- [ ] Feedback collection and escalation
Ongoing Learning (All Employees)
- [ ] Tool update briefings
- [ ] New capability introductions
- [ ] Emerging risk awareness
- [ ] Best practice sharing sessions
- [ ] Incident lessons learned
Implementation Considerations
Building training programmes requires thoughtful planning. Consider these factors:
Start with highest-impact roles. Not all roles benefit equally from early training. Prioritise roles with:
- High AI adoption potential
- Significant risk exposure
- Strong business case for improvement
Leverage existing infrastructure. If your organisation has learning management systems, compliance training processes, or established training cultures, build on them rather than creating parallel structures.
Balance quality and speed. Perfect training delayed is worse than good training now. Start with essential content and refine based on feedback.
Make it mandatory but practical. Optional training sees low completion. Mandatory training without allocated time creates resentment. Ensure employees have time to complete requirements.
Involve business units. HR or IT developing training in isolation often misses practical needs. Business unit involvement ensures relevance.
As I noted in my analysis of managing remote IT teams, developing team capabilities requires deliberate investment. AI training is no different.
Developing AI Competence Across Your Organisation
Building an effective AI training programme requires understanding both the technology and your organisation's specific needs. My technical consulting services help organisations design and deliver training that builds genuine AI competence.
Get in touch to discuss how to close the AI training gap in your organisation.
Daniel J Glover
IT Leader with experience spanning IT management, compliance, development, automation, AI, and project management. I write about technology, leadership, and building better systems.