Addressing Ethical Dilemmas in AI

From Netflix suggesting a movie you might like to Google recommending a trip to the country you want to visit next—artificial intelligence (AI) has become a part of our lives today.

Likewise, AI has spread its roots far and wide in the world of business. According to the 2022 Data and AI Leadership Executive Survey, 91% of companies today want to tap into the power of AI. And while the thought of AI may paint a picture of an intense sci-fi movie scene in our minds, AI is ultimately just a tool.

Like any technology, AI can be used for good or for ill. As AI systems take on more complex forms, it is more important than ever for organizations to take an ethical approach to their use.

Giovanni Gallo | Ethico

What Ethical Dilemmas in AI Exist? Alongside expanding the scope of a business’s success, AI comes with several ethical dilemmas that must be addressed immediately. These include:

Bias and Fairness. Although AI has the potential to perform seemingly complex tasks with ease, let’s not forget it is designed by humans. The most common source of AI bias is the data we feed into it. Georgia Tech researchers recently studied object detection in self-driving cars. The results revealed that the object-detection models tested were about 5% less accurate at detecting pedestrians with dark skin than those with light skin. Why? Because the data used to train the models included about 3.5 times more examples of light-skinned people, teaching the AI to recognize that skin tone more reliably.

Biases in AI will exist as long as humans do. They not only create unfairness in the way businesses operate but also carry the legal and financial implications of noncompliance. Businesses must invest time and effort to ensure the data fed into AI systems remains as free of bias as possible.
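To make that kind of data audit concrete, here is a minimal sketch in Python. The `representation_ratio` helper, the group labels, and the 1.5 threshold are all hypothetical illustrations for this article, not a standard method:

```python
from collections import Counter

def representation_ratio(labels):
    """Return the ratio of the most- to least-represented group; 1.0 is balanced."""
    counts = Counter(labels)
    if not counts:
        raise ValueError("empty dataset")
    return max(counts.values()) / min(counts.values())

# Hypothetical training set with 3.5x more "light" than "dark" examples,
# mirroring the imbalance described in the Georgia Tech study.
skin_tone_labels = ["light"] * 3500 + ["dark"] * 1000

ratio = representation_ratio(skin_tone_labels)
print(f"imbalance ratio: {ratio:.1f}")  # prints "imbalance ratio: 3.5"
if ratio > 1.5:  # threshold chosen arbitrarily for this sketch
    print("warning: dataset is imbalanced; consider collecting or resampling data")
```

Even a check this simple, run before training, would have surfaced the imbalance described above.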

Transparency. AI transparency is largely concerned with the data used to train AI; the set of practices and tools deployed to understand the AI model; the type and number of errors and biases in the training system; and the methods of communicating these issues with users and developers.

As AI models become more evolved and powerful, their inner workings grow more opaque. The mechanisms of these harder-to-understand models get buried in what is called a “black box.” Without transparency, businesses can find it challenging to detect biases or privacy concerns in their AI models. This is why AI transparency has emerged as a field of its own, and why highly regulated industries that use AI must develop their own AI transparency skills, policies, and procedures.
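One concrete transparency practice is a “model card”: a structured record of a model’s training data, known limitations, and error rates that can be shared with users, auditors, and regulators. Below is a minimal illustrative sketch; the model name, field names, and figures are all hypothetical:

```python
# A minimal, illustrative "model card" record for AI transparency.
# Every value below is hypothetical, for demonstration only.
model_card = {
    "model": "resume-screening-v2",
    "training_data": "2013-2023 internal applications",
    "known_limitations": [
        "underrepresents applicants from non-traditional backgrounds",
        "English-language resumes only",
    ],
    "error_rates_by_group": {"group_a": 0.08, "group_b": 0.14},
    "human_oversight": "all rejections reviewed by a recruiter",
}

def flag_transparency_gaps(card):
    """Return the documentation fields an auditor would expect but which are missing."""
    required = ["training_data", "known_limitations", "error_rates_by_group"]
    return [field for field in required if not card.get(field)]

print(flag_transparency_gaps(model_card))  # prints "[]" -- nothing missing
```

The value is less in the code than in the discipline: if a team cannot fill in these fields, the model is a black box by definition.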

Accountability. Once developed, AI models are capable of making decisions on their own. This, in turn, raises questions of accountability. Case in point: when an AI system generates an erroneous outcome or causes harm, it becomes paramount to determine who is responsible. This dilemma grows even thornier when AI systems fail to offer full transparency into the way they work.

Businesses investing in AI models must first ensure that their decision-making processes and AI algorithms come with the highest degree of transparency. Transparency can open up access to valuable human oversight. This, in turn, can cultivate trust in the system and ensure maximum accountability.

Privacy and Data Security. One of the most pressing issues surrounding AI lies in privacy and data security. Privacy is a fundamental human right and AI models (unfortunately) come with the potential to threaten it.

From surveillance cameras to smartphones to the internet, technology has made it endlessly easier to accumulate personal data. The goal of collecting this data is to help brands create personalized experiences for their customers, as consumers are more likely to buy from brands that offer personalized experiences. But what happens when companies gather this data and fail to disclose how it’s collected and stored?

When companies (whether knowingly or unknowingly) monitor users without their explicit consent, they run the risk of entering unethical AI territory. A lack of solid data sanitization protocols can also raise the potential of having the data processed and sold to third parties who can then use it for unintended purposes. Those same third parties are just as vulnerable to data breaches and cyber-attacks as the organization from whom they procured the personal data in question—if not more so. The risk of this personal data falling further into the wrong hands cannot be overstated.

Before deploying any AI model, businesses must put adequate ethical controls in place and comply with ethical privacy regulations surrounding AI.

Hiring and Onboarding. Deploying AI during hiring and onboarding opens many doors for bias and unfairness. Take Amazon’s flawed AI screening framework, for instance. The AI software Amazon created to screen job candidates ended up favoring male candidates because its training data reflected the company’s hiring patterns from the previous decade, when the tech industry was largely dominated by men.

Due to a lack of human touch, AI may also end up disregarding the worthiest candidates. Or, it may shortlist candidates who it “thinks” meet the company’s criteria even though they’re not suited for the position. Biased and discriminatory AI systems can negatively impact a business’s compliance efforts and result in heavy losses. Before deploying an AI hiring and onboarding system, businesses must ensure it runs fairly, accurately, and ethically.
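One simple check a business can run on a hiring model’s outcomes is the “four-fifths rule,” a heuristic from U.S. EEOC guidance that flags a selection process when any group’s selection rate falls below 80% of the highest group’s rate. A minimal sketch, using hypothetical screening decisions:

```python
def selection_rates(outcomes):
    """outcomes: dict mapping group name -> list of 0/1 screening decisions."""
    return {group: sum(decisions) / len(decisions) for group, decisions in outcomes.items()}

def passes_four_fifths_rule(outcomes):
    """EEOC 'four-fifths' heuristic: the lowest group's selection rate
    should be at least 80% of the highest group's rate."""
    rates = selection_rates(outcomes)
    return min(rates.values()) >= 0.8 * max(rates.values())

# Hypothetical screening outcomes for two applicant groups
outcomes = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1, 1, 1],  # 80% selected
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0, 0, 0],  # 30% selected
}
print(passes_four_fifths_rule(outcomes))  # prints "False" -> audit the model
```

Failing this heuristic does not prove discrimination, but it is exactly the kind of red flag that should trigger the human review described above before the system stays in production.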

Regulatory Compliance. When not analyzed and tackled with care, the biases, data breaches, and privacy concerns surrounding AI can result in serious instances of noncompliance, causing businesses to suffer heavy legal, financial, and reputational losses. As AI digs its roots deeper into the world of business, new laws and regulations will continue to emerge. Businesses will need to stay fluent in this evolving compliance landscape and adhere to complex, still-developing requirements.

Given the mountain of challenges AI brings, business leaders may be left wondering, “What’s the best way to mitigate risks each time my organization implements a new AI solution?” Here are a few best practices leaders can implement for ethical AI use in business:

Control AI Biases. To glean the ethical benefits of AI and mitigate biases, leaders must take a human-first approach. Countless biases control the results AI churns out. For instance, the software programmer demographic in the U.S. is approximately 62% white and 64% male. Businesses that don’t prioritize inclusivity and diversity often lose out on valuable ideas that can help pave a faster and more solid route to success.

Start by ensuring the data fed to your AI systems is not biased. Next, make the data more inclusive. Understand that the people who create AI algorithms have the power to shape the way society works, for better or worse. By making your hiring processes more inclusive, you can ensure your team prioritizes diversity over bias and ethics over easy profits.

Prioritize Transparency and Security in AI Use. Although your customers may prefer a personalized experience, they do not want to be kept in the dark about what you do with their data. Educate your users about the way you store and use their data and how it can benefit them. By being transparent, you not only commit to ethics in business but also build trust with your customers.

Being transparent about your AI use can head off restrictive regulation and foster positive customer sentiment. Remember, transparency breeds trust. It creates loyal customers and equally loyal employees. If you use AI frameworks to hire, onboard, and train your employees, let them know how the process works so you can build a long-term, trust-based relationship with them.

Deploy AI Training Programs. Your business’s AI training shouldn’t be limited to technical skills alone. Make sure you take into account the legal, ethical, and societal impact your AI framework may have. Help your software developers understand that they aren’t just acting on their individual values. Instead, they have a role to play in impacting the broader society positively.

Today, AI has moved beyond powering basic product lines with minimal social implications. It now comes with the power to shape the way we think. As a leader, it’s your responsibility to ensure this power is used for the greater good.

Craft Policies and Procedures Surrounding AI. To ensure your business deploys AI in an ethical manner, establish a solid foundation with policies, procedures, and a code of conduct for ethical AI. Aside from developing comprehensive policies, offer the right training to ensure your workforce always puts ethics first. To boost sensitivity toward the entire spectrum of ethical issues surrounding AI, make sure you build diverse teams. Finally, check in regularly to ensure procedures are followed and objectives are achieved.

Build Trust in AI Systems. Your organization’s HR, communications, marketing, and customer service departments must learn to educate users about the ethical use of AI systems. This can help build their trust in your AI framework and empower them to have more control over it. To strengthen this trust further, leaders must also encourage proactive communication surrounding AI issues—both internally and externally.

Over to You! AI is deeply embedded in our everyday lives. It is a valuable part of almost all business operations. And while AI does come with the potential to cause harm, a growing number of businesses have ethical mechanisms in place to prevent its malicious use.

As long as businesses follow the best practices for ethical AI use, this technology can bring oceans of benefits to businesses of all shapes and sizes. Leaders must pay special attention to who creates AI models for their organizations to ensure an ethics-first approach. They must also constantly question how AI could impact their employees, their customers, and the world at large.

 


 

ABOUT THE AUTHOR

Giovanni Gallo is the Co-CEO of Ethico, where his team strives to make the world a better workplace with compliance hotline services, sanction and license monitoring, and workforce eLearning software and services. Growing up as the son of a Cuban refugee in an entrepreneurial family taught Gio how servanthood and deep care for employees can make a thriving business a platform for positive change in the world. He built on that through experience with startups and multinational organizations so Ethico’s solutions can empower caring leaders to build strong cultures for the betterment of every employee and their community. When he’s not working, Gio’s wrangling his four young kids, riding his motorcycle, and supporting education, families, and the homeless in the Charlotte community.

