Selecting AI Tools for Business Units
This is Part 5 of a 7-part series on Business AI Enablement for IT Leaders. The series covers why enablement matters, shadow AI risks, building an enablement framework, employee training, governance controls, and concludes with a 90-day implementation roadmap.
Gartner predicts that by 2026, 80% of people using low-code and no-code tools will work outside IT departments. The traditional model where IT selects and deploys technology before business users touch it is breaking down.
AI accelerates this shift. Business users can access powerful AI tools directly through browsers and mobile apps. They do not need IT to provision anything. This creates both opportunity and risk.
The opportunity is faster adoption and innovation closer to business problems. The risk is fragmentation, security exposure, and the shadow AI proliferation explored in Part 2.
IT leaders need a tool selection strategy that captures the opportunity while managing the risk. This article provides that strategy.
The Low-Code AI Revolution
The barrier between technology and business has never been lower. AI tools available today require no technical expertise to use:
- General-purpose assistants like ChatGPT and Claude handle natural language requests
- Specialised tools automate specific tasks without coding
- Embedded AI appears in everyday productivity software
- Low-code platforms enable business users to build automated workflows
This democratisation has profound implications for IT. Technology selection is no longer exclusively an IT function. Business users make technology choices daily, whether IT recognises it or not.
The shift in decision-making:
| Aspect | Traditional Model | AI Era Model |
|---|---|---|
| Discovery | IT identifies needs through formal processes | Business users discover tools through personal use |
| Evaluation | IT conducts technical assessment | Business users test usability; IT validates security |
| Decision | IT recommends, leadership approves | Joint decision with business and IT input |
| Deployment | IT provisions and configures | Self-service with IT-defined guardrails |
| Support | IT help desk handles issues | Blended model with champions and central support |
This shift does not diminish IT's role. It changes it. IT becomes the enabler of safe, productive adoption rather than the gatekeeper of technology access.
Evaluation Criteria for Business AI Tools
A structured evaluation framework ensures consistent decision-making across tool requests. These criteria should guide both IT-initiated evaluations and assessments of tools business users want to adopt.
Capability Assessment
Does the tool actually do what users need?
- Use case coverage. Which specific tasks does the tool address? Does it handle the primary use cases employees have identified?
- Quality of output. How good are the results? Evaluation should include hands-on testing with realistic scenarios.
- Ease of use. Will employees actually use it? Tools that require extensive training or have cumbersome interfaces see low adoption.
- Integration capability. Can it connect with existing systems? Standalone tools that require copy-paste between applications reduce productivity gains.
- Customisation options. Can the tool be tailored to organisational needs? Custom prompts, templates, or configurations can significantly improve fit.
Security Evaluation
What are the security implications of adoption?
- Data handling. Where is data processed and stored? What retention policies apply? Is data used for model training?
- Authentication. Does the tool support enterprise SSO? Can it integrate with identity management systems?
- Encryption. Is data encrypted in transit and at rest? What encryption standards apply?
- Access controls. Can administrators manage who has access? Are there role-based permission options?
- Audit capabilities. What logging exists? Can usage be monitored for compliance and security purposes?
- Incident response. What are the vendor's breach notification procedures? What liability exists?
Consumer-grade AI tools typically fail security evaluations. Enterprise versions of the same underlying technology often address these gaps.
Compliance Considerations
Does adoption create regulatory risk?
- Data residency. Where is data processed geographically? This matters for GDPR, sector-specific regulations, and national security considerations.
- Regulatory alignment. Does the vendor have relevant certifications (SOC 2, ISO 27001)? Do their practices meet regulatory expectations?
- AI-specific requirements. How does the tool align with emerging AI regulations like the EU AI Act?
- Industry requirements. Sector-specific regulations (financial services, healthcare, government) create additional constraints.
As I discussed in my analysis of SOC 2 and the secure controls framework, compliance is increasingly a factor in technology decisions.
Integration Assessment
How does the tool fit with existing systems?
- Technical integration. APIs, connectors, or native integrations with current tools.
- Workflow integration. How naturally does it fit into existing work patterns?
- Data integration. Can it access the information employees need? Can outputs feed other systems?
- Identity integration. SSO and user provisioning compatibility.
Cost Analysis
What is the true cost of adoption?
- Direct costs. Licensing fees per user, seat, or consumption.
- Implementation costs. Configuration, integration, training development.
- Governance costs. Ongoing monitoring, policy maintenance, audit.
- Support costs. Help desk, champion time, vendor support requirements.
- Opportunity costs. Does adopting this tool foreclose other options?
Total cost of ownership often exceeds initial licensing by a significant margin. Factor governance overhead into decisions.
Enterprise vs Point Solutions
A fundamental choice in AI tool strategy is between enterprise platforms and point solutions.
Enterprise platforms provide broad AI capability across many use cases through a single vendor relationship. Examples include Microsoft Copilot across the Office 365 suite or Google Gemini integrated with Workspace.
Advantages:
- Simplified vendor management and contracting
- Consistent user experience across tools
- Native integration with existing infrastructure
- Single governance framework
- Potential volume discounts
Disadvantages:
- May not excel at any specific use case
- Vendor lock-in and dependency
- Less innovation than specialised tools
- Feature roadmap driven by vendor priorities
Point solutions address specific use cases with dedicated tools. A marketing team might use Jasper for content, a development team might use GitHub Copilot, and an analytics team might use a specialised data assistant.
Advantages:
- Best-of-breed capability for specific tasks
- Faster innovation in specialised areas
- Flexibility to change tools for specific use cases
- Access to unique capabilities not available in platforms
Disadvantages:
- Complexity of managing multiple vendors
- Inconsistent user experience
- Integration challenges
- Governance overhead for each tool
The pragmatic approach:
Most organisations benefit from a hybrid strategy:
- Adopt an enterprise platform for broad productivity use cases
- Add point solutions for specific high-value needs not well served by the platform
- Establish clear criteria for when point solutions are justified
This balances capability with manageability.
The Build vs Buy vs Enable Decision
For some use cases, organisations face a three-way choice:
Build: Create custom AI applications using foundation models through APIs.
- Appropriate when: Unique requirements not met by existing tools; need for deep integration with proprietary systems; competitive advantage from differentiation
- Requires: Technical capability, ongoing maintenance resources, longer development timeline
Buy: Procure commercial AI products designed for the use case.
- Appropriate when: Standard use cases with quality commercial options; need for fast deployment; preference to outsource capability development
- Requires: Vendor evaluation, contract negotiation, integration work
Enable: Provide general-purpose AI tools and let employees apply them flexibly.
- Appropriate when: Use cases are varied and evolving; employees can adapt tools to needs with training; experimentation is valuable
- Requires: Training investment, governance frameworks, ongoing support
The right answer varies by use case:
| Use Case Type | Recommended Approach | Rationale |
|---|---|---|
| Standard productivity | Enable | General tools work well; flexibility valuable |
| Domain-specific workflow | Buy | Commercial products address specific needs |
| Unique competitive process | Build | Differentiation justifies custom development |
| High-volume automation | Buy or Build | Specialised capability needed at scale |
| Experimental exploration | Enable | Low commitment allows discovery |
Integration and Data Architecture Considerations
AI tools do not exist in isolation. Their value depends on integration with existing systems and data.
Data Access Patterns
AI tools need access to information to provide value. Consider:
- What data does the tool need? Customer records, internal documents, code repositories, market data?
- How will data flow to the tool? Direct integration, manual upload, API connection?
- What are the security implications? Data leaving controlled environments creates exposure.
- How current does data need to be? Real-time integration is more complex than periodic sync.
Output Integration
AI outputs need to feed into business processes:
- Where do outputs go? Documents, emails, code repositories, business systems?
- What format is required? Does the tool produce usable formats directly?
- Is review required before use? How does the review workflow function?
- Can outputs be automated? Or does every output require manual handling?
Architecture Implications
Large-scale AI adoption creates architectural considerations:
- API management. Multiple tools connecting to AI services need coordinated management.
- Data layer. Consistent data access for AI tools may require new infrastructure.
- Identity integration. Unified identity across AI tools simplifies governance.
- Monitoring and observability. Understanding AI usage across tools requires consolidated visibility.
Early architectural planning prevents fragmentation that becomes expensive to address later.
Vendor Security and Compliance Assessment
Every AI tool vendor requires security assessment. The depth of assessment should match the risk level of the tool.
Tier 1: Basic Assessment (Low-Risk Tools)
For tools handling only public information with no integration:
- Confirm enterprise data protection agreement
- Verify no data used for training
- Check basic security practices (encryption, access controls)
- Review privacy policy terms
Tier 2: Standard Assessment (Medium-Risk Tools)
For tools handling internal information or integrating with business systems:
All Tier 1 items, plus:
- Request security certifications (SOC 2 Type II, ISO 27001)
- Review data processing locations
- Assess incident response procedures
- Evaluate subprocessor risks
- Confirm audit log capabilities
Tier 3: Comprehensive Assessment (High-Risk Tools)
For tools handling sensitive data or making automated decisions:
All Tier 1 and 2 items, plus:
- Conduct vendor security questionnaire
- Request penetration test results
- Review data encryption practices in detail
- Assess AI-specific risks (bias, reliability)
- Evaluate business continuity provisions
- Consider on-site assessment or third-party audit
Red Flags in Vendor Assessment:
- Reluctance to provide security documentation
- No enterprise tier available
- Data used for model training by default
- Processing in jurisdictions with weak data protection
- No audit or logging capabilities
- Unclear data retention policies
- Limited incident response commitments
These signals suggest consumer-grade operations unsuitable for enterprise use.
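The tiering rules above can be expressed as a simple intake check. Here is a minimal sketch in Python; the `ToolProfile` attributes are illustrative names for the risk characteristics described in each tier, not fields from any real framework:

```python
from dataclasses import dataclass

@dataclass
class ToolProfile:
    """Illustrative risk attributes captured at tool intake (names are hypothetical)."""
    handles_sensitive_data: bool     # e.g. PII, financial or health records
    makes_automated_decisions: bool  # produces decisions without human review
    handles_internal_data: bool      # non-public business information
    integrates_with_systems: bool    # API or connector access to business systems

def assessment_tier(tool: ToolProfile) -> int:
    """Map a tool's risk profile to an assessment tier (1 = basic, 3 = comprehensive)."""
    if tool.handles_sensitive_data or tool.makes_automated_decisions:
        return 3  # comprehensive: questionnaires, pen-test results, continuity review
    if tool.handles_internal_data or tool.integrates_with_systems:
        return 2  # standard: certifications, data locations, audit logs
    return 1      # basic: data protection agreement, no-training confirmation
```

Note how risk is cumulative: a writing assistant used only on public material lands in Tier 1, but the same assistant connected to an internal document store moves to Tier 2.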
Quick Reference: AI Tool Evaluation Scorecard
Use this scorecard for consistent evaluation across tool requests:
Capability (Weight: 30%)
- [ ] Addresses identified use cases effectively
- [ ] Quality of outputs verified through testing
- [ ] User experience acceptable for target users
- [ ] Customisation options meet requirements
- [ ] Roadmap aligns with anticipated needs
Security (Weight: 25%)
- [ ] Data handling meets organisational standards
- [ ] Authentication integrates with enterprise identity
- [ ] Encryption meets requirements
- [ ] Access controls are adequate
- [ ] Audit capabilities support governance needs
Compliance (Weight: 20%)
- [ ] Data residency requirements satisfied
- [ ] Relevant certifications obtained
- [ ] AI regulation alignment demonstrated
- [ ] Industry-specific requirements met
- [ ] Contract terms acceptable to legal
Integration (Weight: 15%)
- [ ] Technical integration feasible
- [ ] Workflow integration natural
- [ ] Data access achievable
- [ ] Identity integration supported
- [ ] Scalability adequate
Total Cost (Weight: 10%)
- [ ] Direct costs within budget
- [ ] Implementation costs acceptable
- [ ] Ongoing costs sustainable
- [ ] Value justifies investment
- [ ] Comparison with alternatives favourable
Score each item 1-5 and weight by category for a composite score. Tools scoring below an agreed threshold should not proceed to approval.
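The weighting can be computed mechanically. A minimal sketch, using the category weights from the scorecard above; the function name and input shape are illustrative:

```python
# Category weights from the scorecard; they sum to 1.0
WEIGHTS = {
    "capability": 0.30,
    "security": 0.25,
    "compliance": 0.20,
    "integration": 0.15,
    "cost": 0.10,
}

def composite_score(item_scores: dict[str, list[int]]) -> float:
    """Average each category's 1-5 item scores, then combine using the weights."""
    total = 0.0
    for category, weight in WEIGHTS.items():
        items = item_scores[category]
        total += (sum(items) / len(items)) * weight
    return round(total, 2)
```

A tool scoring 5 on every item yields a composite of 5.0; where the approval threshold sits (3.5 is a plausible starting point) is a governance decision, not a technical one.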
Deployment Considerations
Approved tools require thoughtful deployment to achieve value.
Phased rollout. Start with pilot groups to identify issues before broad deployment. Champions and early adopters make good pilots.
Training integration. Deploy training alongside tools, not after. Employees need skills to use tools effectively from day one.
Support preparation. Ensure help resources exist before users encounter problems. Champions should be briefed and ready.
Monitoring activation. Enable usage monitoring and audit logging from initial deployment. Retrofitting monitoring is difficult.
Feedback channels. Establish mechanisms for users to report issues, request features, and suggest improvements.
Success metrics. Define what success looks like and how it will be measured. Without metrics, you cannot demonstrate value.
As explored in the supply chain resilience discussion in the CISO series, vendor relationships require ongoing attention, not just initial assessment.
Managing the Tool Portfolio
As AI adoption matures, organisations accumulate multiple tools. Portfolio management prevents fragmentation.
Regular review. Assess the tool portfolio periodically:
- Which tools are actively used?
- Where is there overlap or redundancy?
- Are tools still meeting needs?
- Have better alternatives emerged?
- Are all tools still compliant?
Consolidation. When multiple tools serve similar purposes, consolidation reduces:
- Vendor management overhead
- Training complexity
- Integration maintenance
- Governance burden
Sunsetting. Tools that no longer provide value should be retired. This requires:
- Transition planning for affected users
- Data migration or archival
- Contract termination
- Access deprovisioning
Addition governance. New tool requests should be evaluated against existing portfolio:
- Does an approved tool already address this need?
- Would adoption create problematic overlap?
- Is the incremental value worth the incremental complexity?
Portfolio discipline prevents the tool sprawl that undermines governance and increases cost.
Developing Your AI Tool Strategy
Selecting and managing AI tools for business units requires balancing capability needs with governance requirements. My technical consulting services help organisations develop tool strategies that enable productive, safe AI adoption.
Get in touch to discuss how to select and deploy AI tools that deliver value while managing risk.
Daniel J Glover
IT Leader with experience spanning IT management, compliance, development, automation, AI, and project management. I write about technology, leadership, and building better systems.