
Regulatory Roundup

March 22, 2024

Even as companies rush to invest in AI strategies, they’re also thinking about the implications of regulatory changes that might be coming down the pike, or at least they should be. While the AI landscape has a bit of a Wild West feel to it at the moment, eventually the sheriff is coming to town, and firms should be thinking now about how to stave off the gunbattles before they wind up in the hoosegow.

Now that we’ve beaten that horse of a metaphor to death, let’s take a look at what kinds of regulations are or may be in the pipeline. We can look to Europe for an idea of what might be in store for the U.S., as the EU is several steps ahead of us in this respect. The EU’s primary piece of legislation is the AI Act, which is expected to be adopted in 2024, though most provisions won’t come into effect until 2025 or later. This regulatory framework includes the following:

  • Categorization of AI systems into risk tiers, with more stringent obligations for high-risk systems

  • Transparency requirements covering AI development, testing, energy consumption, and training data

  • Safeguards for human rights

In addition to ensuring safety and compliance in the field of AI, the AI Act establishes measures to support innovation, including real-world testing environments to be made accessible at the national level. And while the U.S. doesn’t yet have comprehensive federal legislation specifically governing AI, state and local governments have been enacting their own regulations, many of which limit the use of AI in police investigations and hiring. In 2023 alone, state lawmakers introduced 440% more AI-related bills than in the previous year.

The U.S. is also spearheading the first United Nations resolution on artificial intelligence, aimed at ensuring AI is "safe, secure, and trustworthy" and that all countries, especially those in the developing world, have equal access. This resolution emphasizes human rights and fundamental freedoms throughout the AI lifecycle.

Areas of particular interest to the business community include legislative proposals that address concerns such as AI deepfakes and intellectual property (IP) issues. To tackle these problems, Congress has debated whether to create a new federal agency to regulate AI or simply to apply existing laws. The legal complexities of AI usage have been particularly evident when it comes to IP: AI development and deployment have drawn criticism over copyright infringement, patent issues, and the use of training data. In fact, a number of class-action lawsuits and other cases have been filed against AI firms, most often focused on the use of copyrighted materials to train AI models.

What does all of this mean for your business? Well, conceivably a business that (for example) uses AI to generate all of its content could find itself accused of infringement, or could lose IP rights because its data was processed and regenerated by AI tools. Conversely, companies that use tools like ChatGPT or Copilot may find that their own data is no longer secure, or that employees are inadvertently disclosing sensitive or proprietary information.

Given the risks, cautious companies are choosing a wait-and-see approach, though that stance introduces risks of its own: they’re likely to watch their competition seize an AI-fueled, first-mover advantage. The better tack is to manage these risks from an informed viewpoint, either by doing the research needed to support a forward-thinking but measured strategy, or by bringing in experts with sound business experience who can help navigate these complexities.

We, of course, recommend the latter approach for the following reasons:

  • It’s already happening whether you like it or not. And we’re not just talking about other companies; the AI is coming from within your own firm (cue menacing music). Really, though, the chance that your employees are NOT using AI tools right now is essentially zero. They’re finding that those tools make their jobs easier, and they’ll continue using them, security be damned.

  • You can go too fast and waste a lot of money in doing so. See: Bloomberg and the $10M+ it spent in 2023 training a GPT-3.5-class model on its own financial data, evidently hoping that the proprietary data would produce better results. On the contrary, it found that GPT-4, the version available to everyone, was better at almost all finance-related tasks. Acting today must not mean ready, fire, aim.

  • It’s all too easy to look at the barrage of AI tools on the market and think that surely some of them are applicable to your business. And they likely are, but making AI decisions based on a solution in search of a problem isn’t the way to go. The more judicious route is to identify key business problems and build a strategy around using the right AI tools to address them.

The bottom line is that no one wants to be first with new technologies, but with AI, there are real risks to lagging behind. Our recommendation is to de-risk your AI transformation by taking a strategic approach: identify opportunities that move the needle (e.g., margin improvement and/or revenue growth), evaluate viable options against your unique requirements, beta test recommendations for efficacy, implement solutions, and measure results.
