AI and Board Governance



One of the most important roles for a company’s board and executives is to set AI strategy: to ensure processes are in place for the company to make complete, informed decisions about whether and how to use AI safely and effectively, and to oversee AI-based solutions by establishing mechanisms that confirm those solutions are acting as intended and not causing unintended harm. AI presents opportunities for competitive advantage and innovation, but it also carries significant risks. Boards and corporate leaders need to assess its impact on corporate strategy and risk.

The Right Questions Matter

Setting an AI strategy requires an understanding of how the company achieves its objectives and how AI can further them. A highly technical background is not required to understand the benefits and risks of AI in the specific context in which the company operates. Perhaps the most important thing board members can do when evaluating AI solutions is ask the right questions. It would be difficult to find a single leader with both the technical acumen to understand an AI deployment strategy and the business savvy to lead such a deployment. But skillful leaders can insist on receiving comprehensive, well-framed information about the benefits and risks of an AI solution. Some important questions to consider are:

  • What regulatory structures are already in place in the area where AI is intended to be deployed?
  • What new risks is the company introducing by its intended deployment?
  • What reasonable steps can the company take to mitigate those risks?
  • What mechanisms are in place to ensure that the deployed AI solution is acting within the parameters set for it?

You may need to support the up-skilling of your leadership team and fellow board members in order to have the right knowledge available to you. While experience with off-the-shelf AI products is a good starting point, enterprise-level AI implementations are far more complex and robust.

Forming an AI Steering Committee with Diversity and Comprehensive Perspectives

Obtaining sufficient information on these and other critical questions may require creating an AI Steering Committee at the board or executive level. Boards should ensure that the committee includes a diversity of thought and is empowered to drive strategy. Rather than appointing one tech-minded individual to be responsible for AI, the committee should draw representatives from the technology, operations, legal, and compliance departments to benefit from diverse perspectives. The committee should be chartered within the company and operate on clearly defined parameters with a transparent approval process for the use of AI. A critical success factor in deploying AI is a well-designed process through which a proposed AI use case can be fully vetted: its intended purpose, the data it is built upon or will utilize, the risks it may entail, how those risks will be mitigated, and how the solution will be built so that metrics on its performance are regularly and accurately produced.

Acting with Confidence in a Fluid Regulatory Environment

The regulation of AI through national or local legislation and new rulemaking is nascent and unsettled. There is no comprehensive federal AI legislation in the U.S., though more than two dozen bills have been introduced. In its absence, federal regulatory bodies, most notably the Federal Trade Commission and the Equal Employment Opportunity Commission, are expanding their enforcement efforts to cover AI within their respective jurisdictions. Several states, such as Colorado, Utah, and California, have passed AI-related laws. This trend is likely to continue, creating a patchwork of regulations much like the way privacy regulation has developed in the U.S. AI laws are also being passed in foreign jurisdictions, such as the EU AI Act.

These regulatory developments vary in some respects, but the overarching governance frameworks are similar. Companies should have a governance process in place to oversee and monitor the development and deployment of AI systems, identify and mitigate the risks those systems introduce, and monitor and test their performance. The higher the risk a system presents, the greater the degree of control required. A company should have data and performance metrics on hand to demonstrate to regulators, courts, or the public that its AI systems are staying within the bounds set by law or policy. By doing so, the company can react to developing regulatory requirements and pivot more deftly to add controls or metrics in response.

Understanding AI Applications

While generative AI (think ChatGPT or Gemini) has dominated the current discussion of AI, there is an important role for AI technology to play in automated and predictive decision making. For years, companies have used “classical AI” systems that run on algorithms or non-generative machine learning to make many kinds of decisions: underwriting decisions for insurance or credit, employment decisions, social media recommendation systems, medical diagnostic tools and other devices, and many more. These classical AI systems can be made even more powerful by integrating generative AI into their workflows. Such hybrid solutions are among the most transformative applications of AI, leading to the discovery of new pharmaceuticals and materials, the detection of patterns in employee or customer behavior, the optimization of supply chains, and more. The limiting factor is not what the technology can do, but what a company’s leaders can envision and then effectuate safely and effectively.

Iron Man, not the Terminator

An excellent metaphor for the capabilities and best use of AI tools is, “Think Iron Man, not Terminator.” AI is not a fully autonomous robot you can station somewhere to do a certain thing. It is much more like Iron Man: Tony Stark is an inventor whose impressive human abilities are augmented and extended by the enabling technology that surrounds him. Accordingly, first plot where your company is before deciding where to go next. Then determine where AI can help and where a human in the loop is critical.

Getting the Balance Right

AI is the most transformative and disruptive technology since electricity. The impact it will have on every aspect of our society is immense. It can bring almost unimaginable benefits, but it also carries significant risks. These risks, however, are predictable. Companies act in a specific context, governed by laws and regulations. Incorporating AI into a company’s workflows does not change its risk profile entirely; it adds a new dimension to known risks and may introduce some novel ones. With the right governance in place, leaders can identify and mitigate those risks and deploy AI solutions that powerfully propel their company into this new AI age.

Meet the Authors
 

Lisa Hooker

Bennett Borden