U.S. Artificial Intelligence Laws:
Board of Directors’ Guide

Artificial intelligence (AI) has fascinated people for decades. Libraries are full of science fiction stories featuring intelligent machines, and movies often portray AI systems that spiral out of control. These cautionary tales warn of technology that surpasses human decision-making.

Now that AI has shifted from science fiction to front-page news, local and federal governments are drafting laws to govern its use and increase oversight. Because the technology continues to evolve faster than regulations, executive leaders often struggle to keep up. You may hear about New York’s artificial intelligence laws, for example, but there’s still no comprehensive national framework.

As a board member, you must understand current AI laws and anticipate where regulations are heading. Policymakers at the federal, state, and local levels will continue introducing new rules in the coming years, and organizations need to stay prepared.

Editor’s Note: The information here reflects AI laws and regulations as of October 2025. Because this is an evolving legal landscape, readers should consult official government sources or legal counsel for the most up-to-date requirements.

Artificial intelligence is reshaping industries across the U.S. – from banking and healthcare to manufacturing and education. Yet with innovation comes scrutiny. Federal, state, and local governments are moving quickly to regulate how AI is developed and deployed. For boards, increased adoption of AI presents new categories of risk, oversight, and accountability.

What Is Artificial Intelligence Oversight?

Artificial intelligence oversight involves monitoring and supervising AI systems to ensure safe, ethical use and compliance with applicable regulations. Human-led oversight helps catch errors and bias in training data and ensures organizations follow the legal frameworks that apply to them.

Boards of directors hold responsibility for how AI is implemented in the workplace. They must ensure the organization complies with current local and federal AI regulations and adopt internal policies that guide responsible use. When clear laws don’t exist, boards can look to emerging regulations in other jurisdictions to anticipate requirements and address potential risks.

Ignoring AI oversight exposes organizations and their boards to legal, regulatory, and ethical vulnerabilities. Even if few laws exist today, boards must stay informed and adapt as new AI rules and compliance standards emerge across industries.

AI Regulatory Frameworks in the U.S.

The U.S. does not yet have a comprehensive federal AI law, but regulation is advancing quickly. President Biden’s Executive Order on Safe, Secure, and Trustworthy AI (2023) directs agencies to develop standards for AI safety, data protection, and civil rights compliance. Additionally, the National Institute of Standards and Technology (NIST) has released the Artificial Intelligence Risk Management Framework to guide responsible AI use.

Meanwhile, the Federal Trade Commission (FTC) enforces existing laws against deceptive or discriminatory AI practices, and several states — including California and Colorado — are drafting their own legislation.

California Artificial Intelligence Laws

California’s artificial intelligence laws are designed to minimize the misuse of personal data, discrimination, and misinformation. The legal landscape surrounding AI in California continues to evolve: the state has passed one of the laws below, with several more pending. These laws include:

  • California Consumer Privacy Act (CCPA): This law grants consumers certain rights over their personal information and how it’s used. AI systems that use consumer data must be evaluated for privacy compliance and offer opt-out mechanisms.
  • Automated Decision Systems (ADS) Accountability Act (Pending 2025): If the law passes, AI systems would need regular formal audits, documented impact assessments, and ongoing monitoring for bias.
  • California Artificial Intelligence Safety Act (Proposed 2025): The California Attorney General would receive notification of any AI safety incidents, and the law would protect whistleblowers. It targets large AI developers but could also reach smaller companies using advanced large language models (LLMs).
  • California Workplace Technology Accountability Act (Pending 2025): This law would limit the types of data corporations can collect from employees and restrict the use of ADS for employment-related decisions.

Colorado Artificial Intelligence Laws

Colorado’s artificial intelligence laws aim to provide transparency, prevent bias, and protect user data. The state’s first law affecting AI took effect in July 2023, and a second was enacted in May 2024. Here’s a closer look at these laws:

  • Colorado Privacy Act (CPA) (Effective July 2023): This law gives Colorado residents control over their personal information and limits how companies can gather and use it. Businesses must be transparent about the personal information they collect, how they obtain it, and how they plan to use it, and they must offer customers a way to opt out of that collection.
  • Colorado AI Act (SB 205 – Enacted May 2024): Businesses must disclose that they use AI and how they use it, and they must maintain a risk management policy for each of their AI programs. Developers must disclose the information their AI programs use, and each company is responsible for regular impact assessments of every AI system it deploys.

New York Artificial Intelligence Laws

New York’s artificial intelligence laws focus on companies and state agencies that use automated decision-making systems (ADMs) for hiring and other purposes. Some AI systems can exhibit bias or discrimination depending on their programming and training data. The New York AI laws include:

  • NYC Local Law 144 – Bias Audits for Employment Tools: Before an employer can use an Automated Employment Decision Tool (AEDT), the tool must have undergone a bias audit within the past year. Employees and job candidates must be notified and can receive the audit results upon request.
  • New York State “LOADinG” Act: The Legislative Oversight of Automated Decision-Making in Government (LOADinG) Act also concerns automated decision-making systems and AI, but it focuses on AI use at New York state agencies.
  • New York A3008 – Personalized Algorithmic Pricing and AI Companions: This bill targets companies that use personal data and algorithms to tailor prices for products and services; those companies must disclose to customers that prices are set this way.
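Bias audits of the kind Local Law 144 requires typically center on an impact ratio: each group’s selection rate divided by the selection rate of the most-selected group. The sketch below illustrates only the arithmetic behind that idea, using hypothetical data and function names; it is not the official audit methodology, which boards should take from the city’s published rules and legal counsel.

```python
# Illustrative impact-ratio calculation of the kind used in bias audits
# of automated hiring tools. Data and names are hypothetical.

def impact_ratios(selected, total):
    """Selection rate per group, divided by the highest group's rate."""
    rates = {g: selected[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical screening outcomes from an automated employment tool.
selected = {"group_a": 40, "group_b": 25}   # candidates advanced
total = {"group_a": 100, "group_b": 100}    # candidates screened

print(impact_ratios(selected, total))  # group_a: 1.0, group_b: 0.625
```

A ratio well below 1.0 for any group (here, 0.625) is the kind of result that would prompt closer review of the tool before continued use.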

Virginia Artificial Intelligence Laws

Virginia’s artificial intelligence laws build on privacy protections already in place, with the goal of protecting residents and their personal information. The state has pursued three measures that directly affect the use of AI, with varying success. These include:

  • High-Risk AI Bill (HB 2094): This bill would have created operating, disclosure, and documentation standards for developers, deployers, integrators, and distributors of AI, with strict penalties for violators. It didn’t pass, but similar legislation may succeed in the future.
  • Executive Order No. 30 (2024): This executive order regulates companies that do business with the state and ensures they meet VITA’s AI standards, covering procurement, deployment safeguards, and compliance oversight.
  • Synthetic Digital Content Act: If AI-generated audio, text, video, or images realistically portray a real person, the creator can face defamation, libel, or slander claims.

Canada's Artificial Intelligence and Data Act (AIDA)

Although Canada’s Artificial Intelligence and Data Act (AIDA) did not pass, it reflects how Canada is approaching the challenges businesses face with AI. The proposed law aimed to establish a risk-based framework for designing, developing, and deploying AI systems, especially those handling individuals’ personal data.

It required businesses to perform risk assessments, implement mitigation strategies, ensure transparency, and maintain records of AI systems from development through daily use.

Boards operating in Canada should monitor AIDA’s progress and ensure their organizations align with the Office of the Privacy Commissioner of Canada (OPC) guidance on data handling and algorithmic fairness.

Brazil's AI Bill of Law (PL 2338/2023)

Still making its way through the legislative process, Brazil’s Bill of Law (PL) 2338/2023 was approved by the Senate in December 2024 and is currently under review by the Chamber of Deputies. The bill is modeled on the European Union’s AI legislation. If passed, it will create a risk-based approach to regulating AI in Brazil.

The bill’s goals are to protect fundamental human rights, promote responsible innovation, and ensure transparency and accountability when working with or creating new AI programs. AI systems classified as posing excessive risk could not be used by businesses in the country.

High-risk AI programs would have to follow strict guidelines. High-risk AI can include autonomous vehicles, medical diagnostic tools, credit scoring, and more. The bill would also establish users’ rights, such as the rights to privacy and information, and individuals would have the right to contest determinations made by AI.

Brazil's General Data Protection Law

Brazil’s General Data Protection Law (LGPD) regulates the collection, processing, storage, and sharing of citizens’ personal data. The law applies to any company that uses the information of Brazilian citizens, even if the business operates outside the country. Some of the key points in the law include:

  • Scope
  • Personal and sensitive data
  • Data subject rights
  • Principles of processing
  • Enforcement
  • Penalties for non-compliance

Each organization must meet certain requirements, including:

  • Data Protection Officer (DPO)
  • Record keeping
  • Transparency
  • Security measures
  • Data breach notification

Mexico's National Artificial Intelligence Policy

While Mexico hasn’t passed any national laws or policies around AI, it has used legislative proposals, government initiatives, and a new digital agency to build toward a comprehensive framework. These proposals are making their way through the legislative process.

Here are some things to know:

  • Key legislative developments:
    • Proposed federal law on regulating AI
    • Criminal Code Amendments
    • Constitutional Amendments
  • Government agencies and initiatives:
    • Agency for Digital Transformation and Telecommunications (ATT)
    • National Artificial Intelligence Agenda for Mexico 2024–2030

Chile's National AI Policy

Chile’s AI policy stands on three pillars: enabling factors, AI development and adoption, and ethics and regulation. A risk-based AI bill has also been introduced in the legislature and would regulate various AI systems to protect the health, safety, and fundamental rights of citizens.

The bill would ban certain types of AI systems that pose a danger to the general public, sort AI programs into categories based on their risk, and require transparency and human oversight.


Common Themes in AI Regulation in the U.S.

Despite differences in scope and maturity, AI regulations across the Americas share several common themes. Most emphasize risk-based governance – requiring stricter oversight for higher-impact AI systems. Transparency, human accountability, and fairness appear consistently across national frameworks.

For boards, these themes translate into concrete governance actions: maintaining documentation of AI decision processes, conducting bias and safety audits, and ensuring that human review remains central to critical business decisions.

How Boards Should Prepare for AI

Ready or not, AI is rapidly becoming a powerful tool to help boards streamline operations and improve efficiency. You must be prepared to use it responsibly. That starts with developing a clear policy that outlines how your organization will apply AI in everyday tasks.

To help your board successfully navigate new and evolving AI regulations, we’ve created a simple checklist that you can reference:

Governance & Oversight
  • Key questions to ask: Has the board defined who oversees AI (risk, audit, or tech committee)? Are AI risks integrated into the organization’s risk register?
  • Action steps: Assign AI oversight to a committee. Update board and committee charters to include AI governance responsibilities.

Strategy & Purpose
  • Key questions to ask: How does AI align with our mission and business goals? Are we clear on where and why AI is being deployed?
  • Action steps: Request an AI strategy briefing from management. Review alignment between AI initiatives and long-term strategy.

Legal & Regulatory Compliance
  • Key questions to ask: Which AI laws apply to each jurisdiction we operate in? Are we compliant with privacy and data laws?
  • Action steps: Conduct a legal gap assessment. Assign responsibility for ongoing compliance monitoring.

Risk Management
  • Key questions to ask: Have we identified potential risks (bias, cybersecurity, data misuse, liability)? How are AI risks tracked and reported to the board?
  • Action steps: Integrate AI risk into enterprise risk management (ERM). Require periodic AI risk reviews and mitigation plans.

Data Governance
  • Key questions to ask: Are we managing data responsibly across borders? Do we have protocols for data quality, consent, and retention?
  • Action steps: Review data management frameworks and vendor contracts. Ensure alignment with privacy and localization laws.

Human Capital & Skills
  • Key questions to ask: Do management and staff understand AI risks and opportunities? Is there adequate training for responsible AI use?
  • Action steps: Sponsor AI literacy and ethics training. Encourage cross-functional AI governance teams.

Reporting & Disclosure
  • Key questions to ask: How will AI governance be reported to stakeholders, investors, or regulators? Are we transparent about AI’s role in decision-making?
  • Action steps: Develop an AI governance disclosure framework. Include AI oversight in annual reports or ESG disclosures.

Update Governance Frameworks

Before you invest in meeting management or other board software, update your governance framework to include your board’s policies on using AI.

Include clear guidelines for how your organization handles personal data—whether it belongs to employees, customers, or the public. Define how you will remain transparent about the data you collect and how your AI systems use it. You should also establish strong risk-governance practices and apply heightened safeguards to any high-impact AI tools.

Develop AI Transparency and Reporting Standards

When your board relies on AI for administrative tasks, you must remain transparent with the public. Your AI policy should clearly outline how you will disclose AI use and what information will be shared. It should also require regular reporting on how AI supports board administration and broader municipal or corporate operations. Define how often reports are delivered, what they must include, and who is responsible for preparing them.

Strengthen Director Education

Some board members may not fully understand the risks and responsibilities associated with AI. Because AI technology evolves rapidly, ongoing education is essential. Train directors on what AI is, how your organization uses it, and the potential dangers or limitations. As AI adoption expands, continue offering updates and additional training. Informed board members can make stronger decisions about AI policies, oversight, and performance safeguards.

Add AI to the Boardroom With OnBoard

Artificial intelligence is here to stay, and your board will soon rely on it as the foundation of many new software tools. While the U.S. has not yet established federal AI regulations, several states have already identified key concerns, especially around how AI uses personal data. Your board needs a clear AI policy framework in place now, so you can fully leverage AI’s potential while staying prepared for future compliance requirements.

At OnBoard, we use AI to streamline board activities and improve efficiency. Our tools automate some of your most time-consuming tasks, including building meeting agendas and drafting meeting minutes. Because our platform is powered by AI, we continually enhance your experience as the technology evolves.

Key OnBoard features include:

  • Agenda Builder
  • Secure Document Sharing
  • Minutes AI
  • Voting & Approvals
  • Meeting Analytics

OnBoard AI brings the power of automation to the boardroom, freeing your team from tedious administrative work. It instantly turns discussions into polished minutes, builds agendas with a click, and ensures every board member has the information they need when they need it. With smarter tools that learn and adapt, OnBoard AI helps your board make decisions with confidence and speed.

See how much smoother meetings can be. Request your free trial today.