Risks and Best Practices for GenAI in the Workplace


March 26, 2024

By: Brianna Long

Labor and employment attorney Brianna Long took a close look at employer use of GenAI. Here are highlights from her client exclusive February 2024 Law & the Workplace webinar.

 

Generative artificial intelligence, or GenAI, is moving into all facets of life—including the workplace. Employees are using GenAI now. To mitigate GenAI legal risks, employers need to set up effective GenAI employment policies. 

 

What Is GenAI?


 

Artificial intelligence, or AI, tools are already incorporated into daily life. They include voice assistants such as Siri and Alexa. Smart searches, such as those found on Netflix or other streaming services, are AI tools. Spam filters use AI algorithms.

 

GenAI tools can create something new, drawing on training from large datasets. GenAI tools can create all kinds of content, including:

  • Text
  • Images
  • Voices
  • Video
  • Music

 

How GenAI Might Affect Business

More and more companies are introducing tools that advertise faster, better, and more effective ways to handle work tasks and procedures.

 

ChatGPT, Google Bard, and Claude may be leaders, but many competitors are emerging. As a result, employers must face these realities:

  • GenAI is everywhere. The tools are easily accessible, user-friendly, and readily available to everyone.
  • GenAI promises big rewards. The cutting-edge tools can help create efficiencies, spark innovation, and inspire creativity.
  • GenAI has risks. The potential pitfalls of GenAI should focus employers’ attention on creating workplace policies that balance risk with business advantages.

 

GenAI Risks for Employers

With GenAI so easily available to employees, employers need to develop GenAI use policies to address and mitigate these four main risks.

  • Hallucinations
  • Algorithmic bias
  • Release of confidential information
  • Intellectual property concerns

 

Hallucination has become the common term for a GenAI tool producing a wrong answer. The tool may state an answer convincingly, yet the answer can be completely fictitious. GenAI hallucinations include:

  • Factual inaccuracies, such as incorrect summaries of financial data.
  • Misinformation, such as citing a real source in support of a claim it does not actually make.
  • Fabricated information, such as fake citations for legal cases and statutes.

 

GenAI companies are working to fix the problem. By some estimates, the tools still offer false output at least 20 percent of the time.

 

Algorithmic bias is especially problematic in human resources functions, such as reviewing résumés or other employment-related decisions. GenAI tools with an algorithmic bias produce unequal outcomes based on biased, inaccurate, or discriminatory datasets.

 

Discrimination laws still apply. Companies using GenAI don’t set out to create biased databases, but it can occur. In 2023, the Equal Employment Opportunity Commission (EEOC) settled cases and began actions against employers who made decisions based on biased GenAI results. For example:

  • The EEOC settled with iTutorGroup for alleged age and sex discrimination through use of an algorithm that automatically rejected female applicants over the age of 55 and male applicants over the age of 60. Because of the age and sex discrimination issues, the company had to change its practices and pay out to some applicants.
  • DHI Group, a job search website for tech professionals, reached an EEOC conciliation agreement over national origin discrimination charges. Customer job ads allegedly discouraged American workers from applying. The GenAI dataset focused on visas and out-of-country terminology. When American applicants' materials lacked those elements or terms related to being from another country, the tool ranked their applications lower. The company was required to rewrite the GenAI algorithm.

 

Confidential information can become public. To produce an output, information has to go into the system. Information that’s part of the training database becomes part of the gigantic information pool that makes the GenAI tool function. If information is in the database, it’s probably not confidential anymore.

 

Samsung banned GenAI after an employee leaked sensitive information. The company had originally set a policy that allowed and encouraged employees to use GenAI tools. Shortly after the policy rolled out, an employee typed in potential trade secret information. As a result, the information likely lost trade secret status. Samsung reversed course and banned GenAI until it could come up with safeguards.

 

Inputting attorney advice or other privileged information in GenAI might compromise attorney-client privilege.

 

Incorporating names, addresses, phone numbers, or other consumer information could violate data privacy laws. State by state, data privacy laws are changing. A new Iowa data privacy law goes into effect January 1, 2025.

 

Intellectual property concerns are similar to confidential information risks.

  • Outputs that include copyright or trademarked materials might lead to infringement claims.
  • Ownership of input and output is not clear. What's protected, who owns data in the AI datasets, and whether the output is protectable are all questions working their way through the courts.
  • Courts, so far, have declared that only a human can be identified as an inventor. GenAI products cannot be listed as an inventor in patent applications.
  • Violating GenAI terms and conditions can create problems for the company using the tools. GenAI tools are products. The terms and conditions of use must be followed.

 

The Essentials of GenAI Policies

A GenAI employee use policy puts guardrails in place. The policy should clearly state what employees can and can’t do using GenAI. Employers need a GenAI policy, even if it simply says GenAI tools are banned. 

 

The purpose and scope of GenAI use should be clear.

  • Who does the GenAI policy apply to?
  • What can employees do and not do using GenAI?
  • When is GenAI permitted?
  • What GenAI tools may be used?
  • Where can it be used?
  • Why can employees use or not use GenAI?

 

The policy and related training must explain the benefits and risks of using AI tools:

  • Outline whether the company provides enterprise or other GenAI tools, and which uses are permitted.
  • Educate employees on the policy and responsible use.
  • Train employees and discuss the policy so they understand why it matters.

 

The policy also should note who will monitor for violations and watch for AI developments. Employers need to be vigilant about follow-through.

 

GenAI Best Practices for Employers

Best practices help employers mitigate risks. They help employees understand the expectations related to GenAI tools.

 

Set data privacy and security limits.

  • List information that should never be input, such as employee or consumer names and addresses.
  • Set up an approval process for gray areas, such as some sales or consumer information. Have human oversight to determine whether to allow input of information that is not clearly within established guidelines.
  • Be alert to state consumer privacy laws that may apply. Check and verify compliance.

 

Define confidential information.

  • Protect trade secrets and proprietary information. Employees need to understand that submitting information to a GenAI product essentially puts it out in the world.
  • Explain how AI could impact protectability of intellectual property.

 

Flag copyright issues. Check outputs for closely quoted works; using such material could open up the company to potential copyright infringement claims.

 

Do not allow GenAI to substitute for human judgment. This is especially important when using AI tools in recruiting, hiring, promotion, or any employment-based decision.

 

Audit for bias—beyond obvious biases.

 

Incorporate related existing policies, such as equal employment opportunity, code of conduct, and privacy.

 

Apply a National Labor Relations Act disclaimer. Note that the policy is not intended to interfere with employee labor law rights.

 

Make clear what may be a policy violation. Explain consequences such as disciplinary action, up to and including termination and possible legal action. Set a reporting procedure.

 

The Legislative and Regulatory Landscape

Clear AI policies will become even more important as legislative and regulatory changes focus on developing technologies.

 

The EEOC is reviewing uses of technology that are barriers to recruitment and hiring. The EEOC has flagged AI as a strategic enforcement priority and is focusing on employment decisions, practices, or policies in which covered entities’ use of technology contributes to discrimination based on a protected characteristic. 

 

The European Union and an increasing number of states and cities are enacting restrictions that apply to employers using AI. The patchwork of state laws related to AI will continue unless and until there’s federal action.

 

Employers need to remain agile. Nyemaster labor and employment attorneys can assist employers with developing effective policies that leverage the benefits of GenAI while mitigating the risks.