
Six Ethical Artificial Intelligence Principles for Your Code of Conduct

Ethical AI does no harm. But for it to live up to its considerable potential and avoid its much-discussed pitfalls, it needs human oversight.

In a recent Pew Research Center article, Rainie et al. wrote that artificial intelligence (AI) applications “speak” to people and answer questions. They run the chatbots that handle customer-service issues. They help diagnose cancer and other medical conditions. They scour the use of credit cards for signs of fraud and determine who could be a credit risk. They are the operating system of driverless vehicles. They sift applications to make recommendations about job candidates. They determine material that is offered up in people’s newsfeeds and video choices. They recognize people’s faces, translate languages and suggest how to complete people’s sentences or search queries. They can “read” people’s emotions. They beat them at sophisticated games. They write news stories, paint in the style of Vincent van Gogh and create music.1

Artificial intelligence systems are spurring headlines with new breakthroughs, all the while fostering worries about job skills, workforce replacement, burgeoning use without appropriate or regulated oversight, and potential bad actors. Even world leaders are working to reap the benefits of AI while strategizing how to keep its risks at bay. Last October, it was announced that the G7 (a group of seven industrialized countries: Canada, France, Germany, Italy, Japan, Britain, and the United States), along with the European Union, will develop an 11-point code of conduct for AI worldwide, one “meant to help seize the benefits and address the risks and challenges brought by these technologies.”2

With so many AI uses at hand, a primary focus now is ensuring the use of ethical AI. What, exactly, is ethical AI? “Ethical AI is artificial intelligence that adheres to well-defined ethical guidelines regarding fundamental values, including such areas as individual rights, privacy, non-discrimination, and non-manipulation. Ethical AI places fundamental importance on ethical considerations in determining legitimate and illegitimate uses of AI.”3

Here is where the Code of Conduct steps forward as a key educational platform for applying ethical AI. This article discusses six key principles for addressing ethical AI within a Code of Conduct, beginning with an overarching principle regarding organizational alignment, oversight of the adoption of AI, and the establishment of a roadmap to manage the functional risks identified in the use of AI. The remaining principles focus on the user of AI: the individual sharing data with an AI system in the expectation of an improved product, a problem-solving solution, or the efficient completion of a task, working smarter, not harder. Ethical AI principles within your Code of Conduct strive to mitigate risk at the first stop in the journey, the user, and can serve as a living set of principles that evolve alongside the development of AI.

Here are the six key principles addressing ethical AI within a Code of Conduct.

1. Establish an AI Governance Council

Why is this needed? Provides organizational alignment and oversight of the adoption of AI, and addresses the mitigation of risk.

To ensure enterprise-wide alignment on the adoption and safe use of AI, establish an AI Governance Council. Obtain appropriate executive sponsorship from the Chief Compliance Officer and/or the Chief Information Officer to support its initiatives. Outline the Council's primary responsibilities and accountabilities, and set forth a roadmap for adopting ethical AI in a controlled, responsible manner that mitigates risk for the workforce and the company as a whole.

2. Protect Company Data

Why is this needed? Aims to protect intellectual property by preventing inappropriate sharing with AI.

As Markel et al. wrote, not all AI systems are alike. “Open” AI systems (those that do not limit how the prompts input to the system are used by the AI tool), such as ChatGPT, Bard, and other AI chatbots, are free and available to all users inside and outside the workplace. Information entered into an “open” AI system might be shared with another, unintended user and retained in the AI’s neural network, potentially in perpetuity, to be used for further training of the system.4 This data is now untethered and likely unretrievable; it becomes part of the AI lexicon available to all other users, leaving the employer without control over how the data is used or with whom it might be shared.

“Unlike ‘open’ AI systems, ‘closed’ AI systems are typically proprietary and may limit or prevent circumstances under which user prompts would be shared with outside users.”4 However, these systems still require an understanding of how and when information entered into the system could be shared outside of the intended recipients.

Assurance activities, such as training the workforce on all relevant company policies, standard operating procedures, and information classification and records management protocols, should be in place at the outset to prevent the inadvertent sharing of sensitive or confidential information with AI applications. Managerial review and approval of proprietary data should occur in advance of any plan to use AI, so that issues are detected before they become public news. Provide clearly defined procedures for the appropriate, compliant use of AI, and ensure support systems are in place to assist users with AI technology. Protecting the company's intellectual property, reputation, and trustworthiness is a top priority.
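To make the control concrete, here is a minimal sketch of what such a pre-submission gate might look like. It is illustrative only: the classification labels, the approved-tool registry, and the function name are hypothetical, and a real implementation would hook into the company's own data-classification and approval systems.

```python
# Illustrative sketch only; labels, tool names, and logic are hypothetical.

# Classifications that must never be entered into an "open" AI system.
RESTRICTED_CLASSIFICATIONS = {"confidential", "restricted", "trade-secret"}

# Hypothetical registry of company-approved tools, "open" vs. "closed".
APPROVED_AI_TOOLS = {
    "internal-copilot": "closed",
    "public-chatbot": "open",
}

def may_submit(classification: str, tool: str) -> bool:
    """Gate a prompt before it leaves the company boundary.

    Blocks unapproved tools outright, and blocks restricted data from
    any "open" system where prompts may be retained and reused.
    """
    tool_type = APPROVED_AI_TOOLS.get(tool)
    if tool_type is None:
        return False  # tool is not on the company-approved list
    if classification.lower() in RESTRICTED_CLASSIFICATIONS:
        return tool_type == "closed"  # restricted data: closed systems only
    return True

# Public data may go to a public chatbot; trade secrets may not.
assert may_submit("public", "public-chatbot")
assert not may_submit("trade-secret", "public-chatbot")
assert not may_submit("public", "unvetted-tool")
```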

3. Safeguard Individual Privacy

Why is this needed? Aims to prevent violations of privacy law and associated civil or monetary penalties.

Artificial intelligence systems collect vast amounts of information that may include personal data such as names, addresses, biometrics, preferences, and financial and medical records. Cybersecurity hacks and data breaches create damaging news headlines, expose organizations to legal and regulatory risk, and alarm individuals about potential identity theft, financial harm, medical data exposure, and other malicious uses of personal data.

In the United States, privacy laws exist at both the federal and state levels. Federal laws such as the Gramm-Leach-Bliley Act (GLBA), which protects financial privacy, and the Health Insurance Portability and Accountability Act (HIPAA), which protects patient health information, are specific to their particular sectors. AI systems, however, touch a multitude of industries, and it is the Federal Trade Commission (FTC) that occupies a strategic position, with the tools and authority (derived from the FTC Act) to protect consumers from deceptive or unfair practices associated with AI, including infringements on privacy. State and local governments may also have current or proposed AI frameworks addressing individual privacy.

Taking steps to safeguard personal data from unauthorized access and wrongful use is critical. Respect personal privacy by adhering to applicable federal and state privacy laws. Create and implement a privacy impact assessment to be completed before any data is entered into an AI tool. Train the workforce on required disclosures and on the requirements for obtaining consent from individuals before collecting sensitive personal information such as financial or health data. Provide training on relevant policies and procedures, appropriate security measures, and how to report a privacy incident.
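As one illustration of the "assess before input" step, the sketch below screens text for a few obvious markers of personal data before it is sent to an AI tool. The patterns shown (email addresses, US Social Security numbers, phone numbers) are deliberately simplistic placeholders; a production privacy screen would be far broader and based on counsel-approved criteria.

```python
import re

# Deliberately simplistic placeholder patterns; a real privacy screen
# would cover many more categories of personal data.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def pii_red_flags(text: str) -> list[str]:
    """Return the kinds of personal data detected in the text, if any."""
    return [kind for kind, pattern in PII_PATTERNS.items() if pattern.search(text)]

prompt = "Summarize this note from jane.doe@example.com, SSN 123-45-6789."
flags = pii_red_flags(prompt)
if flags:
    print(f"Do not submit: possible personal data found ({', '.join(flags)}).")
```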

4. Promote Appropriate and Respectful Use of AI

Why is this needed? Promotes ethical AI use through effective training and identifies resources for voicing concerns or reporting behavior related to unethical use of AI.

Training should emphasize the use of AI in a respectful and professional manner at all times. Only company-approved AI tools should be used. Avoid profanity and any form of indecent or discriminatory language, as well as any communication that may be perceived as offensive. Review the established avenues for voicing concerns or reporting behavior related to the unethical use of AI.

5. Prevent Incorporation of Bias, Discrimination, Inaccuracy, and Misuse

Why is this needed? Supports requisite fairness when evaluating AI input and output.

“For a machine to ‘learn’, it needs data to learn from, or train on. Examples of training data are text, images, videos, numbers, and computer code,” notes the California Institute of Technology’s Science Exchange. “In most cases, the larger the data set, the better the AI will perform. But no data set is perfectly objective; each comes with baked-in biases, or assumptions and preferences.”5

A 2019 Harvard Business Review article goes even further: “Bias can creep into algorithms in several ways. AI systems learn to make decisions based on training data, which can include biased human decisions or reflect historical or social inequities, even if sensitive variables such as gender, race, or sexual orientation are removed. Amazon stopped using a hiring algorithm after finding it favored applicants based on words like ‘executed’ or ‘captured’ that were more commonly found on men’s resumes, for example.”6
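One widely used, if coarse, fairness check a reviewer could apply to an AI-assisted selection process like the one in that hiring example is the "four-fifths rule" from US employment-selection analysis: if one group's selection rate falls below 80% of the most-favored group's rate, the outcome warrants scrutiny. The sketch below computes that ratio from invented counts; the group labels and numbers are hypothetical.

```python
# Hypothetical selection counts from an AI-assisted screening step.
selected = {"group_a": 45, "group_b": 28}
applied = {"group_a": 100, "group_b": 100}

# Selection rate per group.
rates = {g: selected[g] / applied[g] for g in selected}
best = max(rates.values())

# Four-fifths rule: flag any group whose rate is below 80% of the best rate.
for group, rate in rates.items():
    ratio = rate / best
    status = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.0%}, impact ratio={ratio:.2f} -> {status}")
```

A check like this does not prove or disprove bias; it simply surfaces disparities that a qualified human should investigate.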

Generative AI can produce inaccurate or false information, referred to as “hallucinations,” and present it as if it were fact. These nonsensical or inaccurate results can arise from limitations or biases within algorithms, insufficient or low-quality data sets, or a lack of appropriate context. As Glover wrote, AI hallucinations are a direct result of large language models (LLMs), the technology that allows generative AI tools (like ChatGPT and Bard) to process language in a human-like way. Although LLMs are designed to produce fluent and coherent text, they have no understanding of the underlying reality they describe; all they do is predict the next word based on probability, not accuracy.7 If you needed a reason to double-check the output of AI, this is it. Failure to verify the accuracy of AI output risks providing inaccurate, fabricated, or even dangerous information.
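To see why fluent output is not the same as verified fact, consider a toy version of next-word prediction. Real LLMs are vastly more sophisticated, and the probabilities below are invented, but the mechanism is the same in spirit: the model picks whatever continuation is statistically likely, with no step that checks whether the resulting sentence is true.

```python
# Invented toy probabilities: given the current word, how likely is each
# next word? (A real LLM conditions on far more context than one word.)
NEXT_WORD_PROBS = {
    "The": {"drug": 0.6, "study": 0.4},
    "drug": {"was": 0.9, "is": 0.1},
    "was": {"approved": 0.7, "rejected": 0.3},
}

def continue_sentence(word: str, steps: int) -> list[str]:
    """Greedily pick the most probable next word at each step."""
    words = [word]
    for _ in range(steps):
        options = NEXT_WORD_PROBS.get(words[-1])
        if not options:
            break
        words.append(max(options, key=options.get))
    return words

# Prints "The drug was approved" purely because those transitions are the
# most probable ones -- truth never enters the calculation.
print(" ".join(continue_sentence("The", 3)))
```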

Misuse is another area of caution. “Organizations can improperly use licensed content through generative AI by unknowingly engaging in activities such as plagiarism, unauthorized adaptations, commercial use without licensing, and misusing open-source content, exposing themselves to potential legal consequences.”8

Establish processes to recognize and address such issues. Do not take AI output at face value: question it, evaluate it, look for transparency in how the algorithm produced it, have an appropriately qualified human double-check it, and use an assessment form to identify red flags for further investigation. Provide ongoing training and development to the workforce to reinforce the responsible use of AI tools.
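An assessment form for AI output can be as simple as a structured checklist that forces a human reviewer to record what was actually verified. The fields below are hypothetical examples of what such a form might capture, not a prescribed standard.

```python
from dataclasses import dataclass, field

@dataclass
class AIOutputReview:
    """Hypothetical red-flag checklist completed by a human reviewer."""
    sources_verified: bool      # were cited facts traced to real sources?
    bias_screened: bool         # was the output screened for biased language?
    license_cleared: bool       # is any reproduced content properly licensed?
    reviewer: str = "unassigned"
    notes: list[str] = field(default_factory=list)

    def red_flags(self) -> list[str]:
        """Return the checks that failed and need escalation."""
        flags = []
        if not self.sources_verified:
            flags.append("unverified facts")
        if not self.bias_screened:
            flags.append("no bias screen")
        if not self.license_cleared:
            flags.append("licensing unresolved")
        return flags

review = AIOutputReview(sources_verified=False, bias_screened=True,
                        license_cleared=True, reviewer="J. Smith")
print(review.red_flags())  # ['unverified facts'] -> escalate before use
```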

6. Ensure Accountability, Responsibility, and Transparency

Why is this needed? Emphasizes responsibility and promotes an auditable and traceable process.

Anyone choosing to apply AI to a process or data set must have sufficient knowledge of the subject. The user is responsible for identifying beforehand whether data is sensitive, proprietary, confidential, or restricted, and should consult with management on the decision to apply AI to the process. The end-to-end process for using AI must be transparent. Ideally, the user should advise the recipient that AI was used to generate the data, identify the AI system employed, explain how the data was processed, and communicate any limitations that apply.

Review all data generated by AI for accuracy prior to its use and/or distribution. Appropriate oversight of AI-generated materials should include an assessment for potential bias, discrimination, inaccuracy, or misuse. The data produced should be auditable and traceable throughout its lifecycle.
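Auditability and traceability ultimately mean that each piece of AI-generated material carries a record of where it came from and who approved it. A minimal sketch of such a provenance record, with hypothetical field names, might look like this:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
import hashlib

@dataclass(frozen=True)
class AIProvenanceRecord:
    """Hypothetical audit-trail entry for one piece of AI-generated output."""
    tool_name: str       # which approved AI system produced the output
    prompt_sha256: str   # hash of the prompt, so inputs are traceable
    output_sha256: str   # hash of the output actually distributed
    reviewed_by: str     # human accountable for the accuracy review
    created_at: str      # UTC timestamp for the audit trail

def record_ai_output(tool: str, prompt: str, output: str,
                     reviewer: str) -> AIProvenanceRecord:
    """Build an immutable provenance record for one AI interaction."""
    return AIProvenanceRecord(
        tool_name=tool,
        prompt_sha256=hashlib.sha256(prompt.encode()).hexdigest(),
        output_sha256=hashlib.sha256(output.encode()).hexdigest(),
        reviewed_by=reviewer,
        created_at=datetime.now(timezone.utc).isoformat(),
    )

rec = record_ai_output("internal-copilot", "Draft a summary of Q3 results.",
                       "Q3 revenue grew...", "J. Smith")
print(rec.tool_name, rec.reviewed_by, rec.created_at)
```

Hashing the prompt and output (rather than storing them verbatim) is one way to keep the trail verifiable without copying sensitive content into yet another system.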

The application of ethical AI needs human oversight. Ethical AI does no harm. It aims to protect intellectual property, safeguard privacy, promote appropriate and respectful use, prevent the incorporation of bias, discrimination, and inaccuracy, and ensure accountability, responsibility, and transparency. These are all praiseworthy attributes that fit squarely into a Code of Conduct.

ENDNOTES

  1. Rainie, Lee; Anderson, Janna; Vogels, Emily A. "Experts Doubt Ethical AI Design Will Be Broadly Adopted as the Norm Within the Next Decade." Pew Research Center. Retrieved November 22, 2023, from https://www.pewresearch.org/internet/2021/06/16/experts-doubt-ethical-ai-design-will-be-broadly-adopted-as-the-norm-within-the-next-decade/.
  2. Chee, Foo Yun. "Exclusive: G7 to agree to AI code of conduct for companies." Reuters. Retrieved November 22, 2023, from https://www.reuters.com/technology/g7-agree-ai-code-conduct-companies-g7-document-2023-10-29/.
  3. "Ethical AI." C3 AI Glossary. Retrieved November 27, 2023, from https://c3.ai/glossary/artificial-intelligence/ethical-ai/.
  4. Markel, Keith A.; Mildner, Alana R.; Lipson, Jessica L. "AI and employee privacy: important considerations for employers." Reuters. Retrieved November 29, 2023, from https://www.reuters.com/legal/legalindustry/ai-employee-privacy-important-considerations-employers-2023-09-29/.
  5. California Institute of Technology Faculty. "Can We Trust Artificial Intelligence?" California Institute of Technology Science Exchange. Retrieved November 29, 2023, from https://scienceexchange.caltech.edu/topics/artificial-intelligence-research/trustworthy-ai.
  6. Manyika, James; Silberg, Jake; Presten, Brittany. "What Do We Do About the Biases in AI?" Harvard Business Review. Retrieved November 27, 2023, from https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai.
  7. Glover, Ellen. "What Is an AI Hallucination?" Built In. Retrieved November 30, 2023, from https://builtin.com/artificial-intelligence/ai-hallucination.
  8. Spisak, Brian; Rosenberg, Louis B.; Beilby, Max. "13 Principles for Using AI Responsibly." Harvard Business Review. Retrieved November 27, 2023, from https://hbr.org/2023/06/13-principles-for-using-ai-responsibly.

ABOUT THE AUTHOR

Susan Jones is a Senior Manager at Amgen Inc. in the Worldwide Compliance & Business Ethics function. With more than 25 years of experience, Susan has worked in highly cross-matrixed environments, building and leading teams, developing training resources, and supporting compliant and ethical business initiatives. Although she is affiliated with Amgen, the opinions expressed here are her own and do not represent Amgen's position.
