Delivering the Promise of Responsible Artificial Intelligence

Artificial intelligence is a transformative technology that is widely recognized but poorly understood. As a result, AI suffers from an image and reputation problem, especially among the public and regulators, who are understandably concerned by a powerful technology they do not fully understand. At Northrop Grumman, an American aerospace, defense, and security company, efforts are underway to better explain AI to the world and, more importantly, to outline how AI might be used responsibly.

AI is a pivotal technology. It is already ubiquitous in our everyday lives, from streaming services to navigation apps, and from robotic vacuums to secure banking.

But AI is also playing an ever-larger role in national defense, in applications such as way-finding for unmanned vehicles, enhanced target recognition, and many others that benefit from speed, scale, and efficiency. Some of these functions are simply not possible using traditional computation or manual processes; AI provides the cognitive and computing power needed to make them a reality.

The real genius of AI is its ability to learn and adapt to changing situations. The battlefield is a dynamic environment, and the side that adapts fastest typically gains the advantage. But like any system, AI is vulnerable to attack and failure.

What is Responsible AI?
AI has an image and reputation problem. The media frequently produce stories of AI gone rogue, bias in algorithms, or the dystopian specter of unaccountable killer robots. The lack of understanding about what AI can, cannot, and should not do has only increased the confusion among the public and policymakers. Unless urgent action is taken to build confidence and trust in AI solutions, this confusion will impede the innovation and progress needed to fully capture AI’s potentially transformative benefits.

Fortunately, such action is well underway. Governments, think tanks, industry associations, and many leading technology and other companies have publicly announced their commitment to the development and implementation of responsible, trustworthy artificial intelligence. The US government, particularly the Department of Defense (DoD), has been at the forefront of these efforts; in February 2020, the DoD formally adopted five principles requiring AI to be (1) Responsible; (2) Equitable; (3) Traceable; (4) Reliable; and (5) Governable [A]. The US Intelligence Community released similar principles in July 2020, further emphasizing the importance of not only respecting human dignity, rights, and freedoms, but also protecting privacy, civil rights, and civil liberties [B].

In June 2022, the DoD issued its Responsible Artificial Intelligence Strategy and Implementation Pathway which is required reading for companies in the Defense sector because it points the way for embedding Responsible AI into the all-important acquisition and delivery of technology and solutions [C]. As stated in the Pathway document, “[i]t is imperative that the DoD adopts responsible behavior, processes, and objectives and implements them in a manner that reflects the Department’s commitment to its AI Ethical Principles. Failure to adopt AI responsibly puts our warfighters, the public, and our partnerships at risk.”

Leading industry organizations such as the Business Roundtable have announced their own AI ethics principles along with their priorities for AI policy. Nor is the United States alone: the European Union (EU) AI Act is moving through the legislative process [D], and other governments have staked out positions of their own, such as the UK’s plan to deliver “ambitious, safe, responsible” AI in support of defense. The Atlantic Council has done important work in this area as well, recently publishing its Principles to Practice: Using Ethical Spectrums to Guide Decision-Making [E].

Turning Principles and Adjectives into Action
The common theme running through these (and many other) sets of principles and frameworks is that developers of AI need to exercise discipline in their development process so they can document and explain what they have done, how they have done it, and the intent behind the solution design. This includes how data is used, the sources of that data, the limitations of or error rates associated with the data, and how data evolution and drift will be monitored and tested. From greater transparency will flow increased understanding and acceptance of the solution, and with that, heightened trust among users, policymakers, and ultimately the public.
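
To make that concrete, here is a minimal sketch of the kind of drift monitoring such documentation implies. It is purely illustrative, not Northrop Grumman tooling: a two-sample Kolmogorov-Smirnov test that compares a feature's distribution in newly arriving data against the data the model was trained on.

    # Minimal, hypothetical data-drift check: compare a feature's distribution
    # in live data against the training data with a two-sample K-S test.
    import numpy as np
    from scipy.stats import ks_2samp

    def has_drifted(train_feature: np.ndarray, live_feature: np.ndarray,
                    alpha: float = 0.01) -> bool:
        """Return True if the live data appears to have drifted."""
        statistic, p_value = ks_2samp(train_feature, live_feature)
        # A small p-value means the two samples are unlikely to share a
        # distribution, so the model's assumptions should be re-validated.
        return p_value < alpha

    # Example: simulated training data versus a mean-shifted live stream.
    rng = np.random.default_rng(seed=0)
    train = rng.normal(loc=0.0, scale=1.0, size=10_000)
    live = rng.normal(loc=0.4, scale=1.0, size=2_000)
    print("Drift detected:", has_drifted(train, live))

In a production setting, a check like this would run continuously, with results logged as part of the system’s documentation trail.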

Northrop Grumman is taking a systems engineering approach to AI development and acts as a conduit for pulling in cutting-edge university research, commercial best practices, and government expertise and oversight. We have partnered with Credo AI, a leading Responsible AI governance platform, to help Northrop Grumman create AI in accordance with the highest ethical standards. With Credo AI’s governance tools, we are using comprehensive and contextual AI policies to guide Responsible AI development, deployment, and use. We are also working with top universities to develop new, secure, and ethical AI and data governance best practices, and with technology companies to leverage commercial best practices.

The company is also extending its DevSecOps process to automate and document best practices in the development, testing, deployment, and monitoring of AI software systems. These practices enable effective and agile governance as well as real-time management of AI-related risks and opportunities. Critical to success is Northrop Grumman’s AI workforce, because knowing how to develop AI technology is just one piece of the complex mosaic. Our AI engineers must also understand the mission implications of the technology they develop to ensure the operational effectiveness of AI systems in their intended mission spaces. That is why we are investing in a mission-focused AI workforce through formal training, mentoring, and apprenticeship programs.
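
As a hypothetical illustration of what automating such practices can look like (the model name, metric names, and thresholds below are invented for the example), a DevSecOps pipeline for AI might include a release gate that refuses to promote a model unless documented performance checks pass, and that logs every decision for later audit.

    # Illustrative release gate: promote a model only if it meets documented
    # thresholds, and record the decision so it is traceable afterward.
    import json
    import time

    REQUIRED_CHECKS = {
        "holdout_accuracy": 0.95,    # minimum accuracy on a held-out set
        "perturbed_accuracy": 0.90,  # minimum accuracy under input noise
    }

    def meets_thresholds(metrics: dict) -> bool:
        """Compare measured metrics against the documented floors."""
        return all(metrics.get(name, 0.0) >= floor
                   for name, floor in REQUIRED_CHECKS.items())

    def release_gate(model_id: str, metrics: dict) -> None:
        passed = meets_thresholds(metrics)
        # Append an audit record so the decision is reviewable later.
        record = {"model": model_id, "metrics": metrics,
                  "passed": passed, "timestamp": time.time()}
        with open("release_audit.jsonl", "a") as log:
            log.write(json.dumps(record) + "\n")
        if not passed:
            raise SystemExit(f"Model {model_id} failed the release gate.")

    release_gate("example-model-v2", {"holdout_accuracy": 0.97,
                                      "perturbed_accuracy": 0.93})

The point of the audit log is traceability: months later, a reviewer can reconstruct exactly which evidence supported each deployment decision.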

Our use of Responsible AI principles and processes is not limited to customer-facing endeavors. Northrop Grumman is also leveraging the power of AI for internal operations. Applications include AI chatbots for employee IT services, predictive modeling for software code review, natural language understanding for compliance risk, and numerous others. By embedding Responsible AI into our internal information infrastructure, we make business operations more timely and effective and develop capabilities that can be further leveraged for our customers’ benefit.

Tackling the Data Set Challenge
A key component of any AI-enabled system is the data used to train and operate it. Critical to the success of responsible AI-enabled systems is limiting data bias. Datasets are a representation of the real world, and like any representation, an individual dataset cannot capture the world exactly as it is. So every dataset is susceptible to bias. High-profile instances have been demonstrated in commercial settings, ranging from a chatbot making inflammatory and offensive tweets to more serious cases such as prejudice in models built for criminal sentencing. If ignored, data bias can have serious implications in the national security space. Understanding the nature of a bias and the risk associated with it is key to providing equitable technology solutions. By working to recognize potential sources of bias, and by testing for it, we are actively working to mitigate bias in our datasets and AI systems.
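
One simple, illustrative example of what “testing for bias” can mean in practice (the groups and numbers are invented): measuring how much the rate of positive predictions varies across subgroups, often called the demographic parity gap.

    # Illustrative bias check: the largest difference in positive-prediction
    # rate between any two subgroups (demographic parity gap).
    from collections import defaultdict

    def demographic_parity_gap(groups: list, predictions: list) -> float:
        totals = defaultdict(int)     # examples seen per group
        positives = defaultdict(int)  # positive predictions per group
        for group, pred in zip(groups, predictions):
            totals[group] += 1
            positives[group] += pred
        rates = [positives[g] / totals[g] for g in totals]
        return max(rates) - min(rates)

    groups      = ["a", "a", "a", "b", "b", "b"]
    predictions = [ 1,   1,   0,   1,   0,   0 ]
    print(f"Parity gap: {demographic_parity_gap(groups, predictions):.2f}")

A large gap does not by itself prove unfairness, but it flags where a developer should investigate the data and the model before fielding a system.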

As an additional complication, the events of interest in a dynamic battlefield environment are likely to be rare, as adversaries purposefully work to obscure their actions and surprise the United States and its allies. So it may be necessary to complement collected data with augmented, simulated, and synthetic data to provide sufficient coverage of a domain. Adversaries may also seek to fill datasets with misinformation to spoof or subvert AI capabilities. To develop AI responsibly in the face of these challenges, it is critical to maintain records of data provenance, data lineage, and the impact of changing datasets on AI model performance.
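
A minimal sketch of what such record-keeping might look like (the schema and names are illustrative, not any DoD or Northrop Grumman standard): each dataset version gets a content fingerprint, and derived or synthetic datasets record the fingerprints of their parents, so a change in model performance can be traced back to the exact data that produced it.

    # Illustrative provenance record: fingerprint each dataset version and
    # link derived datasets to their parents to preserve lineage.
    import hashlib
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class DatasetRecord:
        name: str
        source: str          # e.g., "sensor collection", "synthetic generator"
        content_hash: str    # fingerprint of the data itself
        parents: tuple = ()  # hashes of the datasets this was derived from

    def fingerprint(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    raw = DatasetRecord("tracks-raw", "sensor collection",
                        fingerprint(b"raw sensor bytes"))
    augmented = DatasetRecord("tracks-augmented", "synthetic generator",
                              fingerprint(b"augmented bytes"),
                              parents=(raw.content_hash,))
    print(augmented)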

Northrop Grumman established a Chief Data Office (CDO) to unify its customer-facing data management efforts and to address these challenges for its internal operations. The CDO sets and executes an enterprise data strategy and maintains a corporate data architecture to enable data-driven decision-making. Key tenets of the data strategy include securing and protecting the data, ensuring its usefulness and quality, and providing accountable data access for information systems and stakeholders. This deliberate and comprehensive focus on data quality and access is a key enabling function for the responsible development of AI systems, both for internal operations and for customer-focused development.

Conclusion
AI enables revolutionary changes in the way national security operations are conducted. Given the incredible power this technology provides, it is incumbent upon its developers and operators to be responsible and transparent in its design and use. Northrop Grumman and its industry partners are committed to the responsible development and use of AI, and to continuing to contribute to research and development regarding public policy and ethical-use guidelines for AI in national security applications. Transparency, equitability, reliability, and governability are, and should continue to be, requirements for the responsible use of AI-enabled systems.

 

References

[A] DOD Adopts Ethical Principles for Artificial Intelligence, U.S. Department of Defense, 24 February 2020. www.defense.gov/Newsroom/Releases/Release/Article/2091996/dod-adopts-ethical-principles-for-artificial-intelligence/

[B] Intelligence Community Releases Artificial Intelligence Principles and Framework, Office of the Director of National Intelligence, July 2020. www.dni.gov/index.php/newsroom/press-releases/item/2134-intelligence-community-releases-artificial-intelligence-principles-and-framework

[C] Responsible Artificial Intelligence Strategy and Implementation Pathway, U.S. Department of Defense, 22 June 2022. https://media.defense.gov

[D] The Artificial Intelligence Act, European Union, 21 April 2021. www.artificialintelligenceact.eu

[E] Principles to Practice: Using Ethical Spectrums to Guide Decision-Making, The Atlantic Council, 28 July 2022. www.atlanticcouncil.org

 

About the Authors

Carl Hahn is VP & Chief Compliance Officer for Northrop Grumman, a global aerospace, defense, and security company headquartered in Falls Church, VA. The majority of Northrop Grumman’s business is with the U.S. government, principally the Department of Defense and the intelligence community. In addition, the company delivers solutions to global and commercial customers.

Dr. Amanda Muller is Chief, Responsible Technology for Northrop Grumman.

Dr. Jordan Gosselin is AI Campaign Chief Engineer & Technical Fellow for Northrop Grumman.

Jonathan Dyer is Principal AI Engineer Systems Architect for Northrop Grumman.

