AI Investments: Key Questions for the Board
What Questions Should the Board Ask About AI Investment?
AI investments are no longer just on the technology department’s agenda; they have become a strategic topic at the board level. However, many board members feel they lack the competence to ask the right questions about AI. You do not need to know the technical details, but asking the right questions is a fundamental responsibility of the board.
In this article, we cover 10 critical questions the board should ask about AI investments, present an investment evaluation framework, and outline a risk management approach. Our AI consulting services include providing independent perspectives to boards on these matters.
The Board’s Role in AI
The board’s role regarding AI is not to make technical decisions, but to provide strategic direction and oversight of risks. Just as with cybersecurity or financial risk, AI should be a standing agenda item for the board.
The Board’s Three Fundamental Responsibilities
Strategic Direction: Ensuring that AI investments align with the company’s overall strategy. Questioning how AI projects will contribute to the company’s competitive advantage. Approving resource allocation and setting priorities.
Risk Oversight: Assessing the risks created by AI usage—ethical, legal, reputational, and operational. Defining risk appetite and questioning the adequacy of risk management mechanisms.
Performance Tracking: Monitoring whether AI investments are producing the expected value. Evaluating progress reports and deciding on strategic pivots when necessary.
Board AI Literacy
Board members are not expected to know deep learning algorithms or data science techniques. However, they are expected to understand the following concepts at a foundational level:
- What AI can and cannot do
- The role of data quality in AI projects
- The differences between AI projects and traditional IT projects
- The general outlines of the ethical and regulatory framework
- Sectoral AI trends and the competitive landscape
10 Critical Questions to Ask
Question 1: What Concrete Business Problem Does This Investment Solve?
This question ensures the AI project remains tied to business value. Generic statements like “we will innovate with AI” or “we will accelerate our digital transformation” are not enough. The board must clearly understand which business process the project will improve, to what extent, and what the financial impact will be.
Example of a satisfactory answer: “We aim to reduce our excess inventory costs by 2 million TL annually by increasing our demand forecasting accuracy from 70% to 85%.”
Example of an unsatisfactory answer: “We will optimize our supply chain with AI.”
Question 2: What is the Expected Return and How Will It Be Measured?
Every investment should carry an expectation of return. In AI investments, the return is evaluated in three layers:
- Layer 1 — Direct Cost Savings: Labor reduction, error cost reduction, time savings. This is the easiest layer to measure.
- Layer 2 — Revenue Impact: Better customer experience, new product opportunities, market expansion. Measurement is more complex and takes time.
- Layer 3 — Strategic Value: Competitive advantage, organizational learning, risk reduction. This is long-term and the most difficult to measure.
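As an illustration, the three layers above can be rolled into a simple return estimate. The sketch below is a minimal example; every figure in it is an illustrative assumption, not a real estimate, and layer 3 in particular usually resists this kind of quantification.

```python
# Minimal sketch: annual expected value by return layer vs. total investment.
# All figures are illustrative assumptions, not real estimates.
annual_value = {
    "direct_cost_savings": 2_000_000,  # layer 1: easiest to measure
    "revenue_impact": 1_500_000,       # layer 2: slower, more complex to measure
    "strategic_value": 500_000,        # layer 3: long-term, hardest to quantify
}
total_investment = 3_000_000

# Annual value created per unit invested.
roi = sum(annual_value.values()) / total_investment
print(round(roi, 2))
```

The point of separating the layers is that the board can ask how much of the claimed return sits in the hard-to-measure layers and discount accordingly.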
The board should ask which layer the project aims to create value in and what the measurement methodology is.
Question 3: Is Our Data Infrastructure Sufficient to Support This Project?
AI projects are built on data. If there is no data or the quality is low, even the best algorithm will not produce business value. The board must understand the state of the data infrastructure and investment requirements.
Sub-questions:
- Is the data required for the project available and in a digital format?
- Is the data quality (consistency, completeness, accuracy) sufficient?
- Is additional investment required for the data infrastructure, and if so, how much?
- Is data governance (access control, privacy, retention) sufficient?
Question 4: Do We Have the Organizational Competence, or How Will It Be Acquired?
AI projects require different competencies such as data scientists, data engineers, business analysts, and project managers. Will these competencies be provided in-house or externally? If provided externally, how will knowledge transfer be handled?
The board should also question the long-term competence strategy: Will we remain dependent on external parties for every project, or will we develop our own capacity?
Question 5: What Are the Risks and How Will They Be Managed?
AI projects carry four main risk categories:
- Technical Risk: The model failing to provide expected performance, data quality issues, integration difficulties
- Operational Risk: Disruptions in business process integration, user resistance, maintenance difficulties
- Ethical and Legal Risk: Biased model outputs, data privacy breaches, regulatory non-compliance
- Reputational Risk: Public perception of erroneous or unethical AI decisions
Mitigation strategies and contingency plans must be defined for each risk category.
Question 6: What Is Our AI Ethical Framework and Governance Model?
The use of AI carries ethical and social responsibility dimensions. Are model outputs fair? Are users aware of AI decisions? How will the model be audited? The answers to these questions must be structured within an ethical framework and governance model.
Topics the governance model should cover:
- Model development standards and processes
- Model approval and deployment procedures
- Continuous monitoring and performance evaluation
- Definitions of responsibility and accountability
- Ethical principles and red lines
Question 7: How Does It Affect Our Competitive Position?
AI investments cannot be evaluated in isolation. What are our competitors doing? What is the speed of AI adoption in our sector? What is the opportunity cost of not making this investment? The board must understand the competitive context and recognize that the decision not to invest also carries risk.
However, this question should not turn into a reflex of “everyone is doing it, so we must too.” Competitive evaluation must be strategic and measured.
Question 8: What Is the Total Investment Amount and How Is It Phased?
The cost structure of AI projects differs from traditional IT projects. Alongside the initial investment, there are ongoing cost items: data infrastructure, model maintenance, retraining, monitoring tools, and cloud computing resources.
The board must understand the total cost of ownership (TCO) and the distribution of the investment over time. A phasing strategy (start small, prove success, scale) should be preferred for risk management purposes.
Question 9: What Is Our Exit Strategy in Case of Failure?
Every investment can fail. The board must evaluate in advance under what conditions the project will be stopped, what the contingency plan is, and the risk of sunk costs. The “this project will continue no matter what” approach is the most dangerous one.
Project gates must be defined, and a structure must be in place where a go/no-go decision is made at each gate.
Question 10: How Does Our AI Strategy Integrate with Our Overall Business Strategy?
The final and perhaps most important question asks about the alignment of AI initiatives with the company’s overall strategic direction. Does the AI project support the growth strategy? Does it serve the cost leadership goal? Does it align with the customer experience strategy?
Rather than an independent “AI strategy,” the use of AI integrated into the business strategy produces much more sustainable results.
Investment Evaluation Framework
Placing the answers to the questions above into a structured framework facilitates the board’s decision-making process. The framework below can be used to evaluate AI investment proposals.
Evaluation Dimensions
| Dimension | Evaluation Question | Weight |
|---|---|---|
| Strategic Alignment | How well does it align with business strategy? | 25% |
| Business Value | What is the expected financial and operational impact? | 25% |
| Feasibility | Is it technically and organizationally viable? | 20% |
| Risk Profile | Are risks identified and manageable? | 15% |
| Resource Requirement | Can budget, competence, and time requirements be met? | 15% |
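The weighted dimensions in the table can be combined into a single proposal score. The sketch below is a minimal illustration; the 1-5 rating scale and the example ratings are assumptions, not part of any standard.

```python
# Minimal sketch: weighted scoring of an AI investment proposal.
# Weights follow the evaluation table above; the 1-5 rating scale
# and the example ratings are illustrative assumptions.

WEIGHTS = {
    "strategic_alignment": 0.25,
    "business_value": 0.25,
    "feasibility": 0.20,
    "risk_profile": 0.15,
    "resource_requirement": 0.15,
}

def weighted_score(ratings: dict) -> float:
    """Combine per-dimension ratings (1-5) into one weighted score."""
    if set(ratings) != set(WEIGHTS):
        raise ValueError("rate every dimension exactly once")
    return sum(WEIGHTS[d] * r for d, r in ratings.items())

proposal = {
    "strategic_alignment": 4,
    "business_value": 5,
    "feasibility": 3,
    "risk_profile": 3,
    "resource_requirement": 4,
}
print(round(weighted_score(proposal), 2))
```

A score like this should support discussion, not replace it: two proposals with the same total may have very different risk profiles.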
Gate Model
Manage AI investment not as a single large decision, but as a process passing through phased gates:
- Gate 1 — Feasibility Approval: Is the business problem clear, is data preparation sufficient, are resources available? Projects that do not pass this gate are not initiated.
- Gate 2 — Pilot Evaluation: Do PoC/pilot results meet expectations? Is production feasibility proven? Projects that do not pass this gate are stopped or redesigned.
- Gate 3 — Production Approval: Is the production environment ready, is the change management plan complete, is the monitoring mechanism established? Projects that pass this gate are deployed into production.
- Gate 4 — Scaling Decision: Are production results satisfactory, is there scaling feasibility? Projects that pass this gate are expanded.
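The gate sequence above can be sketched as a simple ordered check: a project advances only while each gate's go/no-go decision is positive. The gate names follow the article; the example results are illustrative assumptions.

```python
# Minimal sketch of the phased gate model: a project advances only
# while each gate passes, in order. Gate names follow the article;
# the example go/no-go results are illustrative assumptions.

GATES = ["feasibility", "pilot", "production", "scaling"]

def last_gate_passed(results: dict):
    """Return the furthest gate passed in order, or None if gate 1 fails."""
    passed = None
    for gate in GATES:
        if not results.get(gate, False):
            break  # project stops (or is redesigned) at this gate
        passed = gate
    return passed

results = {"feasibility": True, "pilot": True, "production": False}
print(last_gate_passed(results))
```

The value of the model is that stopping at a gate is a normal, expected outcome, not a failure of governance.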
Risk Management
In AI investments, risk management should be handled at two levels: project-based and portfolio-based.
Project-Based Risk Management
Each AI project should have its own risk assessment. A risk register should be created, and probability, impact, and mitigation strategies should be defined for each risk.
Typical project risks and mitigation strategies:
- Data quality risk: Perform a data quality assessment at the start of the project, define a minimum quality threshold
- Model performance risk: Set minimum acceptance criteria, compare with a baseline model
- Integration risk: Test compatibility with existing systems early, start with shadow mode
- User acceptance risk: Involve end-users early, prepare a training plan
- Budget overrun risk: Apply a phasing strategy, perform budget control at each phase
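A risk register like the one described can be kept as a simple table in which each risk carries a probability, an impact, and a mitigation, and exposure is probability times impact. The sketch below uses the risks listed above; the 1-5 scores are illustrative assumptions.

```python
# Minimal sketch of a project risk register. Each entry carries a
# probability and impact (both on a 1-5 scale) plus a mitigation;
# exposure = probability * impact. Risk names follow the article;
# the scores are illustrative assumptions.

risks = [
    {"name": "data quality", "probability": 4, "impact": 4,
     "mitigation": "assess quality at project start; set a minimum threshold"},
    {"name": "model performance", "probability": 3, "impact": 5,
     "mitigation": "set acceptance criteria; compare with a baseline model"},
    {"name": "integration", "probability": 3, "impact": 3,
     "mitigation": "test compatibility early; start in shadow mode"},
    {"name": "user acceptance", "probability": 2, "impact": 4,
     "mitigation": "involve end users early; prepare a training plan"},
    {"name": "budget overrun", "probability": 2, "impact": 3,
     "mitigation": "phase the work; control budget at each phase"},
]

def by_exposure(register):
    """Rank risks by exposure (probability * impact), highest first."""
    return sorted(register,
                  key=lambda r: r["probability"] * r["impact"],
                  reverse=True)

print(by_exposure(risks)[0]["name"])  # the highest-exposure risk
```

Ranking by exposure gives the project team and the board a shared, explicit view of which mitigations matter most.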
Portfolio-Based Risk Management
In organizations running multiple AI projects, the risk distribution of the project portfolio must also be managed. If every project has the same profile (for example, all high-risk bets, or all safe but low-value work), the portfolio is imbalanced.
A balanced AI project portfolio:
- 50-60% low-risk, proven use cases (quick wins)
- 30-40% medium-risk, medium-to-high value projects (growth projects)
- 10-20% high-risk, high-potential projects (discovery projects)
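The target bands above can be checked mechanically against the actual portfolio. The sketch below is a minimal illustration; the band limits come from the article, while the example project counts are assumptions.

```python
# Minimal sketch: check a project portfolio against the target risk
# bands above. Band limits follow the article; the example project
# counts are illustrative assumptions.

BANDS = {  # category -> (minimum share, maximum share)
    "quick_win": (0.50, 0.60),   # low-risk, proven use cases
    "growth": (0.30, 0.40),      # medium-risk, medium-to-high value
    "discovery": (0.10, 0.20),   # high-risk, high-potential
}

def out_of_band(portfolio: dict) -> list:
    """Return the categories whose share falls outside its target band."""
    total = sum(portfolio.values())
    issues = []
    for category, (low, high) in BANDS.items():
        share = portfolio.get(category, 0) / total
        if not low <= share <= high:
            issues.append(category)
    return issues

portfolio = {"quick_win": 8, "growth": 1, "discovery": 1}
print(out_of_band(portfolio))  # categories outside their target band
```

Here the portfolio is overweight in quick wins and underweight in growth projects, which is exactly the imbalance the check is meant to surface.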
Regulatory and Compliance Risks
AI regulations are developing rapidly. The board should follow regulatory developments in the markets where the company operates and determine its compliance strategy. Proactive compliance is much less costly than trying to comply after the fact.
Providing independent assessments to boards on these matters is within the scope of our AI project management services.
Conclusion
The board’s biggest mistake regarding AI is leaving this topic entirely to the technical team. AI is no longer a technology decision; it is a strategic business decision. Asking the right questions, making the right investments, and managing risks are the board’s responsibility.
The 10 critical questions we shared in this article provide boards with a structured inquiry framework. If these questions cannot be answered satisfactorily, it is a sign that the project requires more preparation.
It should be remembered that the decision not to invest in AI can also be a conscious strategy. What matters is that this decision is made based on information, with risk awareness, and within a strategic framework.
“The board’s role regarding AI is not to understand technical details, but to ask the right questions and provide strategic direction.”
Get Support for Your AI Project
Does your board need an independent perspective while evaluating AI investments? We provide support from strategic evaluation to risk analysis, from roadmaps to governance models.