Most board-level AI conversations circle the same two questions: Are we adopting AI? and What’s the ROI? While these are the right starting points, they don’t surface where the real risk lives.
Ironically, the organizations furthest along in AI maturity are the most concerned. Among the most advanced organizations, AI innovation risk is the top obstacle, with talent and infrastructure constraints close behind, according to Protiviti’s 2026 Global Board Governance Survey.
Organizations in the early stages face even larger blind spots, and those gaps often stay hidden because their boards are still too early in the work to see them clearly.
Here are four AI risks most boards miss, along with the questions that close them.
Blind Spot 1: Cybersecurity Risk Has Been Amplified, Not Solved
Many boards believe they already solved the cybersecurity problem. They added a cybersecurity expert to the board. They shored up the defenses. They checked the AI governance box.
AI makes this thinking dangerous, because it introduces new threats and expands the entire attack surface. Data exfiltration, model manipulation, and deepfake-enabled fraud are now active risks.
According to the World Economic Forum’s Global Cybersecurity Outlook 2026, deepfakes have become the second most common cybersecurity incident globally, trailing only malware.
“AI has changed cybersecurity. It has expanded the risks, whether it’s data exfiltration or manipulating models,” says Mark Rogers, Managing Director & Head of Board Practice at Robert Half. “There’s a tendency for corporations to say, ‘We’ve got that, we shored that up a long time ago.’ But AI presents a whole new risk.”
The exposure isn’t always from external attackers. During a recent OnBoard ATLAS webinar, Frank Kurre, Managing Director and Global Board Governance Program Leader at Protiviti, pointed to a pattern boards rarely account for: employees using public tools with confidential information.
“There have been instances where company employees have logged on to ChatGPT with confidential source code, not realizing they’ve breached confidentiality and potentially exposed that information,” Kurre notes. “If people are using the public version, you could run into real problems.”
The governance data reinforces the urgency. Only one-third of webinar participants said their organization already has a documented, actively enforced AI governance policy, suggesting many boards are still closing foundational oversight gaps.
Boards that rely on AI tools for prep and decision-making should also consider whether those tools operate inside a secure, closed-loop AI system or out in the open. Using public AI platforms for board work creates its own exposure.
The question for boards: How has AI expanded our cybersecurity risk profile, and have we updated our defenses to match?
Blind Spot 2: The True Cost of AI at Scale
Boards often fail to consider the infrastructure costs, energy demands, change management, and the third-party platform shifts that come with scaling AI.
“There’s a blind spot in terms of the costs associated with AI implementation at scale,” Rogers says. “U.S. utilities are going to spend more than $1.4 trillion over the next five years to support AI and data infrastructure, and they’re not going to sit on it. They’re going to pass it along to businesses and customers.”
On top of infrastructure costs, the path to ROI requires a significant upfront investment that most boards haven’t fully priced in.
“In order to get to the point where you’re going to have efficiencies and cost savings, you’ve got to make a very large upfront investment. You’re buying the AI tools, training your people up, managing significant change,” Kurre says. “So, even though lots of companies were thinking ‘I’m going to save money right away,’ that’s not yet happening.”
Boards that approved AI spending based on optimistic near-term returns may be surprised by the timeline. The evidence suggests AI ROI will come, but boards need financial models that account for the gap between the investment and the return.
The question for boards: Do we have a realistic, total-cost model for AI implementation, including infrastructure, change management, and indirect costs we haven’t yet priced in?
Blind Spot 3: False Confidence in Readiness
Confidence and readiness are not the same thing, and boards increasingly risk confusing one for the other.
Protiviti’s survey showed that 95% of high-ROI organizations express confidence in their AI integration. Among low-ROI organizations, that figure drops to 33%. That gap raises the risk that some organizations may be more confident than they are prepared. Further, Protiviti’s data found around 15% of respondents believe their organizations are in Stage 4 to 5 of AI adoption (Optimization and Transformation stages).
“My only concern with boards or management teams where they think they’re at the optimization level is: don’t be self-satisfied,” Kurre says. “Things are changing so quickly. Even if you’re in that transformation stage, if you don’t continue to pay attention, you could quickly drop down.”
Grant Thornton’s 2026 AI Impact Survey puts the readiness side of that gap in stark terms: 78% of executives say they lack confidence they could pass an independent AI governance audit within 90 days.
Regulatory exposure amplifies the risk. According to the Corporate Board Member and Diligent Institute’s 2026 What Directors Think Survey, 41% of directors say AI-related regulation is the most underestimated compliance risk they face this year. Organizations that assume their current AI governance posture is audit-ready may find themselves on the wrong side of a rapidly evolving compliance landscape.
The WilmerHale and EqualAI 2026 Governance Playbook for Boards frames it directly: AI governance is now both a legal and strategic imperative, and only a minority of boards have formal frameworks or clear metrics in place. Confidence without those structures becomes a liability.
The question for boards: If we were audited on AI governance tomorrow, would we pass?
Blind Spot 4: Unmapped Changes to Processes and Controls
As organizations integrate AI into operations, processes and controls change, but often without documentation or audit mapping.
“Lots of companies are incorporating AI into their processes and controls,” Kurre says. “But if you ask the internal or external auditors whether the company has mapped those changes, how AI has been incorporated, in many cases, you’re going to get a blank stare.”
The problem extends to every vendor and platform an organization relies on. Companies like Oracle, SAP, Salesforce, and AWS are all incorporating AI into their systems. Boards need to ask how those changes affect the controls their organizations depend on.
Third-party providers are changing too, and so are the fourth-party providers those third parties rely on. SOC (System and Organization Controls) reports written before a vendor integrated AI may not reflect their current risk posture.
The WilmerHale and EqualAI playbook outlines a four-step path boards can apply for AI governance monitoring: assess current AI use and risk, establish oversight structures, implement risk protocols, and empower teams to execute.
The question for boards: Have we mapped every AI-driven change to our processes and controls? Have our auditors done the same? And do we understand how our vendors have incorporated AI into the systems we rely on?
The Questions That Close These Blind Spots
Protiviti’s survey includes a subtitle worth noting: “Success is defined not by having all the right answers, but by asking the right questions.” That framing applies directly here.
The next time AI comes up in the boardroom, start with these questions:
- How has AI expanded our cybersecurity risk profile, and have we updated our defenses?
- What is the full cost model for AI implementation, including infrastructure and change management?
- If we were audited on AI governance tomorrow, would we pass?
- Have we mapped every AI-driven change to our processes, controls, and third-party dependencies?
- Are we using AI tools that keep board materials inside a secure, governed, closed-loop system?
- Do we have directors with genuine AI fluency, or are we relying on one person to carry that weight?
The organizations generating the most value from AI aren’t the ones without blind spots. They’re the ones that went looking for them.
When AI introduces new risks, the board’s ability to track those discussions, decisions, and follow-ups in a secure, governed environment becomes a governance imperative. OnBoard drives board meeting optimization and keeps that record intact so the conversation builds instead of restarting.
For a deeper look at how boards are navigating AI governance risk right now, watch the full session on demand: The Board’s AI Moment: Navigating AI Acceleration, Integration, and ROI.
Frequently Asked Questions
What are the biggest AI blind spots for boards?
Common AI blind spots boards miss include underestimating how AI amplifies cybersecurity risk, not accounting for the true cost of AI implementation at scale, overestimating organizational AI readiness, and failing to map AI-driven changes to processes and controls for audit purposes.
What AI risks should boards be asking about?
Boards should ask how AI has changed their cybersecurity risk profile, whether AI governance could withstand an independent audit, what the total cost model for AI includes beyond technology spend, and whether AI-related changes to processes and controls have been documented.
How can boards reduce AI governance blind spots?
Make AI a standing agenda item, invest in board AI fluency through education and targeted recruitment, require management to map all AI-driven changes to controls, and ensure board materials and AI tools operate within a secure, closed-loop governance platform.
Why is AI cybersecurity a growing board concern?
AI expands the attack surface through data exfiltration, model manipulation, and deepfake-enabled fraud. Employees using public AI tools for work can inadvertently expose confidential information. Boards that treated cybersecurity as a solved problem need to reassess in light of AI-specific threats.
Sources
Protiviti: 2026 Global Board Governance Survey. March 2026.
World Economic Forum: Global Cybersecurity Outlook 2026. January 2026.
McKinsey & Company: The State of AI in 2025: Agents, Innovation, and Transformation. November 2025.
OnBoard ATLAS Webinar: The Board’s AI Moment: Navigating AI Acceleration, Integration, and ROI. April 2026.
Grant Thornton: 2026 AI Impact Survey. April 2026.
Corporate Board Member & Diligent Institute: 2026 What Directors Think Survey. April 2026.
WilmerHale & EqualAI: 2026 AI Governance Playbook for Boards. January 2026.
About The Author
Ben Blanc is the Brand Narrative Manager at OnBoard, where he shapes the company's public voice across social media, live programming, and external communications. With 18+ years of experience spanning media, operations, and marketing, he brings a blend of storytelling instinct and editorial discipline to B2B SaaS. Ben has spent his career turning complex ideas into clear, accessible, and actionable narratives. At OnBoard, his focus is on thought leadership grounded in real customer proof, credible perspective, and content worth paying attention to.