Generative AI in the Workplace – What Employers Need to Know
ChatGPT and other generative artificial intelligence tools are having a significant impact in the workplace – whether on administrative functions (like resume screening, applicant selection, and employee evaluations) or on production (like research, writing, and other content creation). These generative AI tools are already transforming the way that many employees – including higher-level white-collar workers – do their work, or even reducing the need for such workers. But the use of these tools carries risks that employers need to recognize and address.
Generative AI Risks and Concerns. Media stories abound about ChatGPT gone wrong – whether trying to break up a journalist’s marriage or creating fake case citations and opinions. But the concerns go far beyond that, including the following:
- Discrimination in employment decisions. The data used by the generative AI can be biased in several ways. This bias may arise from the individuals collecting the original data and training the tool. The tool itself may have a learning bias. Or there may be bias in the way the data is deployed.
- False information. As media reports have highlighted, these generative AI tools readily provide false answers and, when pressed, create fake sources – what has famously been termed “hallucinations.”
- Limitations on knowledge base. The data used by the generative AI tool may not be entirely up to date. For example, ChatGPT was trained on a dataset with a 2021 cutoff, so it has no knowledge of events after that point.
- Client/Company Confidentiality. Employees may upload confidential and proprietary information into generative AI tools, without realizing that such information may then enter into the tool’s public database.
- Employee Confidentiality. Personal data that is entered into an AI tool may result in the disclosure of protected information (e.g., health, financial, etc.).
- Transparency. It is not always clear when and how generative AI is being used – both internally and externally. Such use may occur not only by employees, but also by applicants and third parties.
- Copyright Infringement. Without knowing where generative AI tools are gathering data to create content, it is possible that the tools are improperly using copyrighted material.
- Intellectual Property Rights. If content is created by generative AI, it cannot be copyrighted – or the copyright may lie with the toolmaker. If employees are using AI to assist in content creation, they may be the copyright owners for such content, unless there are specific written provisions that vest ownership in the employer.
- Other Compliance Issues. There may be other regulatory requirements that intersect with the use of AI, including in the areas of consumer protection, financial services, and protected health information under HIPAA, among other things.
- Impact on Staffing. With AI, certain job functions – or entire jobs – may be eliminated. This may result in organizational restructuring and reductions in force. It may also result in the creation of new duties and job positions, and require training on new skills for existing employees.
- Environmental Impact. For companies that are focused on environmental issues, including ESG investing concerns, it may be important to know that generative AI tools can use tremendous amounts of energy.
Governmental Regulation of AI. Governmental entities at all levels in the US, as well as in other countries, have developed AI regulations/guidance or are in the process of doing so. Thus, it is important for employers to monitor legislative or regulatory developments in the jurisdictions in which they operate. Some of the more major initiatives include the following:
- The White House’s Blueprint for an AI Bill of Rights, which sets out five principles that should guide the design, use, and deployment of AI: (1) safe and effective systems; (2) algorithmic discrimination protections; (3) data privacy; (4) notice and explanation; and (5) human alternatives, consideration and fallback.
- A joint statement from the Equal Employment Opportunity Commission (EEOC), the Federal Trade Commission, the Consumer Financial Protection Bureau, and the Justice Department’s Civil Rights Division (DOJ) on their enforcement efforts against discrimination and bias in the use of automated systems or artificial intelligence (AI) in the workplace. As further discussed in our April 2023 E-Update, the statement identifies the roles each agency plays, as well as the specific concerns raised by the workplace use of AI.
- EEOC guidance on Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964, as discussed in our May 26, 2023 blog post.
- EEOC technical assistance document on The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees, which we discussed in our May 2022 E-Update.
- DOJ guidance on Algorithms, Artificial Intelligence, and Disability Discrimination in Hiring, also discussed in our May 2022 E-Update.
- A memo from the General Counsel of the National Labor Relations Board, targeting employers’ use of electronic monitoring and algorithmic management technologies, which we discussed in our November 2022 E-Update.
It is important for employers to monitor developments at the state and local level as well. For example, New York City has just implemented regulations on the use of AI in employment screening and hiring. In addition, multi-national employers should be aware that other countries are taking measures to regulate the use of AI, like the European Union’s draft AI Act.
Possible Steps for Employers. Among the things that employers can do to address the risks and concerns associated with the use of generative AI in the workplace are the following:
- Develop an AI policy. Among the issues the policy could address are the following: explanation of how AI is being used by the Company; permitted/forbidden use of AI by employees; procedures for receiving approval for the use of AI; limitations on what data may be input into general AI tools – including a clear prohibition on the use of confidential or proprietary information; independent verification of information or output from the AI tool; ensuring that the use of the AI tool does not result in discrimination, harassment, or defamation; compliance with regulatory requirements; and clarification of intellectual property rights.
- Review existing policies that may be impacted by AI, including confidentiality and trade secret policies and codes of conduct, as well as policies on computer systems use and intellectual property.
- Consider whether a Chief AI Officer position would be warranted.
- Carefully review HR software vendor contracts to ensure that appropriate validation studies have been done to comply with the EEOC’s Uniform Guidelines on Employee Selection Procedures and to avoid bias in other situations.
- Conduct routine audits of the use of AI to ensure nondiscrimination and appropriate use.
- Train employees on the use of AI.
- If the workforce is unionized, bargain over the use of AI, as may be required.
- Ensure that there is appropriate disclosure to third parties, applicants and employees about the use of AI, as well as the availability of reasonable accommodations with regard to its use.
This is an area of explosive growth and development. Employers must ensure that they are staying abreast of their legal obligations and requirements in this fast-changing environment.