Applying AI Responsibly: Establishing Ethical Values for a Transformative Technology

The rapid rise and spread of artificial intelligence (AI) demands that we don’t just use it with good intentions, but that we craft Responsible AI guidelines and then live by them.

You can’t pick up a newspaper or magazine without seeing the initials AI. Artificial intelligence is dominating headlines and corporate boardrooms alike, and its proliferation is undeniable. Expectations are that this groundbreaking technology, the roots of which date back to the mid-1950s with the “Logic Theorist” program funded by the RAND Corporation and presented at the “Dartmouth Summer Research Project on Artificial Intelligence,” will continue to grow and permeate almost all aspects of society.

MarketsandMarkets has projected that the AI market will reach $407 billion by 2027, and Statista has reported that AI will deliver an estimated 21% net increase in the United States’ gross domestic product (GDP) by 2030. For AI vendors, that’s all good news. AI holds great promise to deliver many benefits: it is already driving automation and increased productivity, enhancing customer service, supporting medical research, enabling autonomous vehicles, generating advertising content, providing greater data protection and cybersecurity, and much more.

There is, however, broad concern about its misuse and abuse. In the wrong hands, AI can fuel a host of cybercrimes that cripple critical infrastructure, drive financial fraud, invade privacy, and even trigger new medical crises. While we can’t completely stop the bad actors, we can take measures that promote the responsible and ethical use of AI. At the top of that list is understanding today’s AI landscape, how best to leverage AI, and why responsible AI guidelines are critical.

THE AI LANDSCAPE
AI applications are growing rapidly, but we’ve only scratched the surface of their potential. Back in 2017, The Boston Consulting Group found that 85% of executives believed AI would give their companies a competitive edge, yet at that time just one in five companies had incorporated AI into their offerings. An April 2023 Forbes Advisor survey found that 73% of businesses use or plan to use AI, and those numbers surely have not remained static since.

Among the trends driving AI adoption is the current labor shortage. The IBM Global AI Adoption Index 2022 noted that 25% of companies are looking to AI to address their workforce challenges; on the flip side, many workers cite concern over AI replacing their jobs. The concern is not without merit: the McKinsey Global Institute reports that as many as 400 million workers could be displaced by AI. Skeptics of that outlook point to what humans have that AI doesn’t: common-sense reasoning, the ability to collaborate with other humans, natural language understanding, and empathy. It is also likely that AI will create new job opportunities, as many as 97 million new jobs according to the World Economic Forum.

Currently, AI is delivering many benefits by streamlining manufacturing, supply chains, medical research, advertising, and hospitality. It is speeding up production lines, supporting supply chain digitalization and resilience, responding to consumers’ financial inquiries, developing new therapies and improving patient outcomes, creating social media posts, and planning travel itineraries.

AI CHALLENGES
Its abuse in the hands of criminals notwithstanding, AI faces other common challenges. As previously noted, there is a shortage of AI talent: many businesses lack staff with expertise in AI technologies and in how best to integrate and apply them. They are also unsure how their customers will react to and interact with AI.

Another challenge lies in data privacy and security, including the related risks of data breaches and exposure on the dark web. Similarly, there are challenges in restricting the flow of data to prevent unethical use, which can taint the accuracy of AI-generated results. Data capture and storage raise a further issue: AI systems rely on sensor data that can be massive yet is essential for validating AI findings. When these massive data sets become difficult to store and assess, they can hinder AI algorithms and produce poor results.

Additionally, system-related issues pose a challenge. AI, machine learning, and deep learning require considerable computing power for their algorithms to perform, in some cases approaching that of a supercomputer, which comes with a high price tag. Cloud computing and parallel processing are a partial remedy, but they are not always sufficient to support the large volumes of data and complex algorithms AI uses.

While these challenges are significant, the ethical and legal challenges AI poses require the most thought and strategy on the part of AI adopters and providers.

ETHICAL AI
Despite its challenges and the many alarmist headlines, AI has the potential to serve the greater good across many sectors of our lives. The key to harnessing that power is ethical application grounded in responsible guidelines, best practices, and an ethical foundation. At the core of these requirements are pivotal principles that align with and drive trustworthy AI. Following are those six principles:

Beneficial AI. Ensuring AI systems enrich both users and society while mitigating negative impacts on society and businesses, such as bias amplification, misinformation, and societal divides.

Human-centric. Promoting AI’s supportive role to humans: assisting them in their work, enhancing decision-making processes, and upholding human responsibility. This requires that AI algorithm outputs be reviewed before results are put into practice. In cases of real-time decision-making, it is important to allow for human monitoring and auditing, thereby keeping accountability with humans rather than with an autonomous agent.
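
To make the review requirement concrete, here is a minimal, hypothetical Python sketch of such a gate, in which an AI recommendation is queued and applied only after a named reviewer signs off. The Recommendation class, queue, and reviewer name are illustrative assumptions, not part of any specific product.

    from dataclasses import dataclass

    @dataclass
    class Recommendation:
        action: str
        confidence: float
        approved_by: str | None = None  # accountability stays with a person

    # AI outputs are proposed, never applied directly.
    review_queue: list[Recommendation] = []

    def propose(rec: Recommendation) -> None:
        review_queue.append(rec)

    def approve_and_apply(rec: Recommendation, reviewer: str) -> None:
        # A human reviewer takes responsibility before the result is used.
        rec.approved_by = reviewer
        print(f"Applying '{rec.action}' (signed off by {rec.approved_by})")

    propose(Recommendation(action="reorder 500 units", confidence=0.92))
    for rec in review_queue:
        approve_and_apply(rec, reviewer="j.doe")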

Aligned AI. Ensuring AI is in sync with human and business values, with clear and understandable AI as the foundation. Reflecting human and business objectives should be an integral part of continuous AI algorithm engineering. This makes it possible to control the judgements that determine what a “good solution” represents, for example, in a machine learning model’s objective function or in the analysis of training data for bias.
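
As one concrete illustration of analyzing training data for bias, the following hedged Python sketch compares positive-outcome rates across groups before any model is trained; the column names and threshold are invented for the example.

    from collections import defaultdict

    def outcome_rates(rows, group_key, outcome_key):
        # Share of positive outcomes per group in the raw training data.
        totals, positives = defaultdict(int), defaultdict(int)
        for row in rows:
            totals[row[group_key]] += 1
            positives[row[group_key]] += row[outcome_key]
        return {g: positives[g] / totals[g] for g in totals}

    training_data = [  # toy records; real data would be far larger
        {"region": "north", "approved": 1},
        {"region": "north", "approved": 1},
        {"region": "south", "approved": 0},
        {"region": "south", "approved": 1},
    ]

    rates = outcome_rates(training_data, "region", "approved")
    if max(rates.values()) - min(rates.values()) > 0.2:  # illustrative threshold
        print("Potential bias in training data, review before use:", rates)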

Privacy-preserving AI. Upholding the European Union’s GDPR standards and meeting top-tier security standards such as those certified under ISO 27001. The goal is to adhere to all relevant legislation while being mindful of data protection and the ethical use of AI in use cases that significantly affect people.
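
By way of illustration only, and not a substitute for legal compliance, one common data-protection measure is to pseudonymize personal identifiers before records enter an AI pipeline. A minimal sketch, with a hypothetical record layout:

    import hashlib

    def pseudonymize(value: str, salt: str = "rotate-this-salt") -> str:
        # One-way hash so the AI pipeline never sees the raw identifier.
        return hashlib.sha256((salt + value).encode()).hexdigest()[:16]

    record = {"customer_email": "jane@example.com", "basket_total": 42.50}
    record["customer_email"] = pseudonymize(record["customer_email"])
    print(record)  # the e-mail address is not recoverable downstream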

Reliable AI. Prioritizing quality, consistency, and transparency in AI applications, especially in vital sectors. This requires sound software engineering practices for the design, development, and testing of algorithms. Where machine learning algorithms are concerned, it is especially important that training data be thoroughly analyzed for bias, with testing focused on unreasonable or otherwise unwanted results. In operation, audit trails and other software capabilities provide further monitoring to ensure reliability under changing conditions.
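
As a sketch of what such an operational audit trail might look like, this hypothetical Python example logs every model decision with its inputs, output, model version, and timestamp. The predict() function is a stand-in for a real model call, and a production system would write to durable, tamper-evident storage rather than a local file.

    import json
    import logging
    from datetime import datetime, timezone

    logging.basicConfig(filename="ai_audit.log", level=logging.INFO)
    MODEL_VERSION = "demand-forecast-1.4"  # hypothetical model identifier

    def predict(features: dict) -> float:
        return 0.87  # placeholder for the real algorithm

    def audited_predict(features: dict) -> float:
        result = predict(features)
        logging.info(json.dumps({  # one reviewable record per decision
            "ts": datetime.now(timezone.utc).isoformat(),
            "model": MODEL_VERSION,
            "inputs": features,
            "output": result,
        }))
        return result

    audited_predict({"sku": "A-1001", "week": 32})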

Safe AI. Crafting AI algorithms that ensure safety and ward off potential threats. This requires that their impact be clearly confined to the business domain in which the algorithms operate, with clearly defined interfaces surrounding that domain. Such confinement is routinely provided for search and optimization algorithms, as well as for focused AI use cases. In situations involving large language models (LLMs), where safety issues can arise, best practices call for impact containment. If containment to the business domain is not evident, the AI system should be subjected to an internal review to identify potential impacts (e.g., malicious API calls, code injection, jailbreaking, and other malicious practices).
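
One simple form of such impact containment for an LLM-based system is to restrict what model output can trigger to an explicit allowlist of tools, so a manipulated or jailbroken response cannot reach arbitrary APIs. A minimal sketch; the tool names are invented for illustration:

    # Only these functions are reachable from model output; anything else
    # is refused and surfaced for review.
    ALLOWED_TOOLS = {
        "lookup_order": lambda order_id: f"status of {order_id}: shipped",
        "business_hours": lambda: "Mon-Fri, 9am-5pm",
    }

    def execute_tool_call(tool_name, *args):
        if tool_name not in ALLOWED_TOOLS:
            raise PermissionError(f"Blocked tool call: {tool_name!r}")
        return ALLOWED_TOOLS[tool_name](*args)

    print(execute_tool_call("lookup_order", "A-1001"))  # inside the domain
    try:
        execute_tool_call("delete_database")  # outside the defined interface
    except PermissionError as err:
        print(err)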

The intent of these principles is to maximize AI’s potential while minimizing its risks. Trustworthy AI can only be achieved when society’s needs and individual rights are prioritized.

Putting aside the hype and the naysayers, AI is a powerful, transformative tool that will continue to have an enormous impact on our personal and professional lives. It has already demonstrated tremendous value when channeled through best practices and responsible use. Building trust in AI’s broader application requires a commitment to sound principles that mitigate potential risks and foster maximum benefits. Responsible AI conduct backed by ethical values is an essential prerequisite.

ABOUT THE AUTHOR
Justin Newell is Chief Executive Officer of INFORM North America, a leading provider of AI-based optimization software that facilitates improved decision-making, processes, and resource management, and a member of the INFORM Group, a global organization headquartered in Aachen, Germany. INFORM has published its Responsible AI Guidelines, the pillars of which are the six guiding principles noted in this article.
