Insight

Do You Know Where Your AI Is?

June 24, 2024

Tasha Huebner

Going to see a doctor for a health concern is never a pleasant experience, but patients at least have the expectation of treatments that will lead to improvement. Yet in 2023, a software developer at a healthcare company integrated an unauthorized AI diagnostic tool into the company's electronic health record system, leading to inaccurate patient diagnoses and treatment recommendations. This incident highlights the risks of using unvetted AI tools in sensitive industries like healthcare, but the risks span all industries. And while many companies think they're being cautious about the use of AI among their workforce, either forbidding its use or taking their time to launch a comprehensive AI strategy, the reality is that, all too often, their employees are using tools like ChatGPT, Perplexity, and others on the sly. By using these tools without authorization, employees are exposing their organizations to legal risks as well as security threats and system vulnerabilities.

What does this mean for the executives in charge? Essentially, that the AI you know is better than the AI you don't. Put another way: by simply trying to constrain the use of these tools, companies drive that use underground and leave themselves open to problems that a more judicious approach could avoid. They would be much better served by proactively establishing clear policies and guidelines to ensure compliance with employment laws, protect sensitive information, and stave off system meltdowns and data breaches.

Employee Autonomy and AI: The BYOAI Trend

As the following chart suggests, the rapid rise of the "Bring Your Own AI" (BYOAI) trend is reflected across all age demographics, not just the young. 

The prevalence of AI usage reflects the desire of many workers to experiment with AI to automate low-reward, mundane tasks and make time-consuming processes more efficient. For example, an outbound sales rep might typically spend a great deal of time developing an individualized email campaign that then goes through so many rounds of review that the initial work is unrecognizable. Using AI to generate that campaign instead might lead to the same number of reviews, but it allows the rep to reclaim time for more immediate revenue-generating tasks. However, while employees are adopting AI tools to enhance their productivity, the lack of formal AI policies and strategies in many organizations has fueled the BYOAI trend, exposing those very organizations to the risks of unsanctioned technology use.

Identifying Unauthorized AI Tools and Mitigating the Risks

In order to tackle the thorny issue of unauthorized AI usage, companies first need to know who’s using what AI tools. This can be challenging for the simple reason that most employees are unlikely to admit to using technology that hasn’t been approved. Conducting regular AI audits allows companies to inventory the AI applications in use and ensure compliance with guidelines; those audits should at least start by asking employees about the specific tools they’re using. This type of survey is most effective in companies where there’s a high level of trust between managers and employees. 

In addition, analyzing network traffic patterns can help uncover unique digital footprints left by unsanctioned AI tools, such as frequent data transfers or connections to specific servers. Some guidelines recommend encouraging employees to report unauthorized AI use through confidential whistleblower mechanisms, but this is a surefire way to wind up with a demoralized and suspicious workforce that will be on the lookout for better job opportunities. Ultimately, a combination of active monitoring, audits, and fostering a culture of openness and accountability is the best approach for identifying and addressing the use of unauthorized AI tools.
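
For organizations that want to make the monitoring piece concrete, a first pass can be surprisingly simple. The sketch below (in Python, standard library only) scans an exported web-proxy log for connections to a handful of well-known AI services. The column names, file name, and domain watchlist are illustrative assumptions rather than a reference to any particular gateway product; a real deployment would use your own gateway's export format and a vetted domain list.

# Minimal sketch, not a production monitoring tool: flag outbound requests to
# well-known generative AI endpoints in an exported web-proxy log. The CSV layout
# (columns "user" and "destination_host"), the file name, and the domain watchlist
# are illustrative assumptions; adapt them to your own gateway's export format.
import csv
from collections import Counter, defaultdict

AI_DOMAINS = {  # hypothetical watchlist of domains associated with popular AI tools
    "chat.openai.com", "api.openai.com",
    "gemini.google.com", "www.perplexity.ai",
    "claude.ai", "copilot.github.com",
}

def summarize_ai_traffic(log_path):
    """Return (requests per user, requests per domain) for traffic to watched domains."""
    per_user = defaultdict(Counter)
    per_domain = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["destination_host"].strip().lower()
            if host in AI_DOMAINS:
                per_user[row["user"]][host] += 1
                per_domain[host] += 1
    return per_user, per_domain

if __name__ == "__main__":
    by_user, by_domain = summarize_ai_traffic("proxy_log.csv")  # assumed export file
    for user, hits in sorted(by_user.items()):
        print(user, dict(hits))
    print("Totals by domain:", dict(by_domain))

Even a crude report like this gives an audit a starting point: a list of teams and tools to follow up on through the survey-and-conversation approach described above, rather than a basis for disciplinary action.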

And once AI usage is known, then what? Unauthorized AI usage in the workplace poses significant security risks, including data breaches, intellectual property theft, and compliance violations. To mitigate these risks, organizations should implement access controls and educate employees about the potential dangers. Proactive IT strategies like data encryption, security software updates, and thorough risk assessments are essential. Companies must ensure that any AI systems, whether authorized or not, comply with relevant laws, regulations, and ethical standards in order to avoid legal consequences and reputational damage. In one case, an HR professional at a tech startup used an AI resume screening tool to filter job applications, not realizing that the tool's algorithms had inherent biases based on the training data. As a result, the company faced accusations of discriminatory hiring practices and potential legal action. By taking a comprehensive approach to AI security, organizations can harness the benefits of these powerful technologies while safeguarding sensitive data and maintaining trust among stakeholders.
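
As a concrete illustration of what "access controls" can mean in practice, the sketch below shows a simple policy gate that asks two questions before data is handed to an AI service: is the tool on the approved list, and is the data classified as something allowed to leave the organization? The tool names, data classes, and rules are hypothetical placeholders for an organization's own policy, not a standard.

# Illustrative sketch of a policy gate; the tool names, data classes, and rules are
# assumptions standing in for an organization's own AI use policy.
from dataclasses import dataclass

APPROVED_TOOLS = {"internal-llm", "vendor-x-summarizer"}     # hypothetical sanctioned tools
RESTRICTED_DATA = {"PHI", "PII", "source_code"}              # classes barred from AI services

@dataclass
class AIRequest:
    tool: str         # which AI tool the workflow wants to call
    data_class: str   # e.g. "public", "PII", "PHI"

def is_permitted(req):
    """Return (allowed, reason) under this simplified policy."""
    if req.tool not in APPROVED_TOOLS:
        return False, f"{req.tool} is not an approved AI tool"
    if req.data_class in RESTRICTED_DATA:
        return False, f"{req.data_class} data may not be sent to an AI service"
    return True, "permitted"

print(is_permitted(AIRequest("chatgpt-free-tier", "public")))   # blocked: unapproved tool
print(is_permitted(AIRequest("internal-llm", "PHI")))           # blocked: restricted data
print(is_permitted(AIRequest("internal-llm", "public")))        # permitted

The point is not the code itself but the practice: writing the policy down in a form that can be checked automatically makes "authorized use" something systems can enforce rather than something employees must remember.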

As for training employees on AI compliance, this is crucial for ensuring the safe and responsible use of artificial intelligence in the workplace. A well-structured AI compliance training program should cover not only the technical aspects of using AI tools but also the ethical considerations, privacy laws, and potential impacts on brand reputation and customer satisfaction. Consider the marketing manager at a global electronics firm who used ChatGPT to generate social media posts and email campaigns, only to find that the AI-generated content contained inaccurate information and made false claims about the company's products, leading to customer complaints and reputational damage. Leveraging real-life scenarios like this, along with simulations, can provide hands-on experience with the practical challenges AI presents. AI compliance training should be tailored to the specific needs of the organization and continuously updated to keep pace with the rapid advancements in AI technology. To be truly effective, this training should foster a culture of ethical awareness while also empowering employees to use AI in a secure manner that mitigates risks such as data breaches and misuse.

Risks of Undisclosed AI

Of course, the greatest problem with employees covertly using AI tools like ChatGPT or Gemini at work, without their managers' knowledge, is that it can lead to a range of issues, from inconsistent work quality to ethical concerns and data breaches. Additional real-world examples of the consequences of this behavior include:

  • In 2022, an employee at a major financial institution used an unauthorized AI-powered trading bot to execute high-frequency trades, resulting in significant losses for the company when the market unexpectedly shifted. 

  • A marketing manager at a major CPG firm used an AI content generation tool to create product descriptions and social media posts, unaware that the tool had been trained on copyrighted material. This led to legal issues for the company when the original content creators filed lawsuits for intellectual property infringement.

  • A software developer secretly used GitHub Copilot to write code, but the AI tool introduced security vulnerabilities and bugs that went undetected until the software was deployed, causing system failures and data breaches.

  • A customer service representative used an AI chatbot to handle customer inquiries without proper testing or oversight, resulting in the chatbot providing incorrect information, making inappropriate responses, and frustrating customers, ultimately damaging the company's brand image.

These examples highlight the wide-ranging consequences of employees using AI tools without proper authorization, training, or oversight. From financial losses and legal issues to reputational damage and patient safety concerns, the risks underscore the need for robust AI governance frameworks and employee education to ensure the responsible use of AI in the workplace. 

The answer is not simply a knee-jerk ban on generative AI tools, though that is what many companies turn to: a BlackBerry survey indicated that 75% of IT decision-makers were planning long-term or permanent restrictions. Not only does this deprive those companies of the powerful benefits of AI, but such bans may backfire if employees become frustrated by the lack of access to tools they've grown dependent on, potentially leading to attrition … or to even more secretive use of the tools.

The reality is that employees will use whatever tools are at their disposal to make their jobs easier or more interesting, or to get ahead in their careers. Companies that recognize this and plan appropriately by developing a comprehensive AI strategy are the ones less likely to wind up in the news for spectacular data breaches or brand reputation mishaps. Investing in comprehensive AI compliance training up front - and embracing the controlled use of AI tools and platforms - can reap huge rewards in both the short and long term, as organizations lean into the benefits of AI while ensuring compliance with legal standards and maintaining the trust of customers and stakeholders.

Developing an AI strategy can be a daunting task. Our clients often reach out to NextAccess after trying to go it alone or hiring a so-called "AI expert developer." While experimentation is necessary, we believe in compartmentalizing the risk to your business model by running controlled AI pilots with well-defined, measurable goals. Include the employees who are inclined to seek out new tools to complete tasks faster and generate more value, rather than holding them back with restrictive, if well-intentioned, AI use policies.

Contact Us