Deutsche Telekom: Compliance’s Role in AI Ethics and Digital Innovation

Germany-based Deutsche Telekom (DT) consistently ranks among the ten largest telecommunications companies in the world and is Europe’s leader in the space. The company and its many subsidiaries provide the foundation on which the digital economy runs for many consumers. As an operator of critical infrastructure, the company knows it has an extra obligation to be responsible with its networks. Enter Manuela Mackert, Chief Compliance Officer for the entire group since 2010, who for the past several years has been thinking seriously about digital responsibility and AI ethics, making DT a leader in that space.

Manuela Mackert, Chief Compliance Officer, Deutsche Telekom

Megatrends Shaping Compliance

Given her background—Mackert worked in human resources for many years before moving to compliance—she is keenly interested in the changing nature of work in the digital age and in how that shift affects the needs of the compliance function. With that in mind, she has her eye on two “megatrends” shaping the economy: the shift to digital work, and the pressures pushing companies toward becoming “agile organizations.”

These twin megatrends drove Deutsche Telekom’s compliance organization to focus on a few priorities, including fostering values-based compliance. Agile organizations require empowered employees, versed in their company’s values and priorities, to make rapid decisions. Among other things, the need for agility has pushed compliance from a rules-based order to one more concerned with teaching values to help employees structure their decisions.

Part of enabling those decision-makers is preparing them better for gray areas. “We conducted workshops with employees, and they wanted more guidance on an ‘inner compass,’ helping them to be able to act on their own in the values-based interest of the company,” Mackert says. “We had to design a digital decision framework to give them a tool, and make dilemmas more tangible.”

Perhaps the most important innovation the compliance team rolled out was “an AI-based check box” designed to dynamically walk employees through the ethical and legal requirements of various tasks. The tool has since become more sophisticated, with keyword analysis that can direct an employee to a human partner at any point in the process. Over time, her team has added pattern recognition over these text entries to analyze trends and adapt the tool accordingly.
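A minimal sketch of how such keyword-based escalation and trend analysis might look is shown below. The keyword list, function names, and in-memory log are assumptions made for illustration; none of them describe DT’s actual tool.

```python
from collections import Counter

# Hypothetical keywords that would route an employee to a human compliance partner.
RISK_KEYWORDS = {"personal data", "minor", "health", "facial recognition", "third country"}

escalation_log = []  # in-memory stand-in for the tool's stored text entries


def check_entry(entry_id: str, text: str) -> str:
    """Return 'escalate' if the free-text entry contains a risk keyword, else 'proceed'."""
    lowered = text.lower()
    hits = [kw for kw in RISK_KEYWORDS if kw in lowered]
    if hits:
        escalation_log.append({"id": entry_id, "keywords": hits})
        return "escalate"   # hand the case to a human compliance partner
    return "proceed"        # employee continues through the guided questionnaire


def trending_keywords(top_n: int = 3):
    """Simple trend analysis: which risk keywords appear most often in escalations."""
    counts = Counter(kw for item in escalation_log for kw in item["keywords"])
    return counts.most_common(top_n)


if __name__ == "__main__":
    print(check_entry("req-001", "We want to analyze personal data from a health app."))
    print(check_entry("req-002", "Internal network latency dashboard, no customer data."))
    print(trending_keywords())
```

In a real deployment the pattern recognition would likely be statistical rather than a simple counter, but the shape of the workflow—guide, escalate on risk signals, learn from the aggregate—is the same.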

Starting an AI Ethics Journey

When Mackert first began thinking about issues of ethical AI application several years ago, relevant expertise was still heavily concentrated at the big American tech companies such as Google, Microsoft, and Facebook. On a trip to several of their “hyperscale” data facilities, she connected with a number of engineers and practitioners on the issue.

Mackert began to consider the role that intermediaries like Deutsche Telekom, as “enablers for AI and AI-related services,” could play in the rollout of these technologies to consumers. She considered the necessary “success factors” for DT: trust, responsibility, transparency, data sovereignty, and excellence. “It was clear to me that I had to transfer values from the analog world into the digital world.”

She began an intensive self-education into AI and machine learning technologies and the work that various units within DT were already doing with them, without central coordination. Then, she began to convene a group within DT to devise “self-binding rules” for AI. To ensure plenty of stakeholders were represented, she cast a wide net—partners, suppliers, peers, regulators—all geared towards the same question: “What kind of guard rails do we need?”

The work benefited greatly from the shared knowledge of the tech companies that had gone before. She asked them many questions: “What kind of hurdles have they faced? What failures? What risks? How did they try to adapt?” Those connections allowed DT to build on lessons others had already learned. DT has since published and implemented its nine Digital Ethics Guidelines on AI.

Building Ethics into Existing Structures

Of course, implementing these principles required new processes. Mackert knew that the approach the American companies had taken—adding new “AI ethics committees” to product review processes—might add friction given her organization’s already-robust bureaucracy. Instead, she pushed to add AI ethics to existing checkpoints.

“I implemented everything in existing procedures,” she says. “We already had a privacy and security assessment, and a product and innovation board, for all IT- and AI-related products. We could make life easier for them if they added these procedures so they could identify risks and help developers check the implications of digital ethics in their environment.” Eventually, adding these steps to existing structures resulted in an internal seal of approval for AI projects, one that many new product teams came to pursue as a goal.

One product whose development Mackert says was deeply influenced by the company’s AI guidelines was its Magenta voice assistant, which powers a line of smart speakers. The decision to change the color of a speaker’s lights whenever a user is engaging with Magenta, for example, was driven by privacy concerns. Mackert’s engagement with her American tech counterparts also allowed the Magenta team to learn from those companies’ mistakes around bias, addressing an issue before release in which Magenta responded better to male voices than to those of women or children. In the end, the focus on ethics improved both product quality and the user experience.
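A hedged illustration of the kind of pre-release check such a finding implies: compare recognition accuracy across speaker groups and flag gaps above a tolerance. The group labels, sample data, and threshold below are invented for illustration and are not DT’s actual test suite.

```python
# Illustrative pre-release bias check for a voice assistant: compare recognition
# accuracy across speaker groups and flag gaps above a tolerance.

from statistics import mean

# Each entry: (speaker_group, recognized_correctly) from a hypothetical evaluation set.
evaluation_results = [
    ("male", True), ("male", True), ("male", True), ("male", False),
    ("female", True), ("female", False), ("female", False), ("female", True),
    ("child", False), ("child", True), ("child", False), ("child", False),
]

MAX_ACCURACY_GAP = 0.10  # tolerated difference between best- and worst-served group


def accuracy_by_group(results):
    groups = {}
    for group, correct in results:
        groups.setdefault(group, []).append(1.0 if correct else 0.0)
    return {group: mean(vals) for group, vals in groups.items()}


def bias_report(results):
    scores = accuracy_by_group(results)
    gap = max(scores.values()) - min(scores.values())
    return {"per_group": scores, "gap": gap, "flagged": gap > MAX_ACCURACY_GAP}


if __name__ == "__main__":
    print(bias_report(evaluation_results))  # a flagged gap would send the model back for rework
```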

Making the Case for AI Ethics as Business Differentiator

“Consumers in Europe, they really like to know what is going on with their services and products in terms of data,” she says. For Mackert, this fact presented an opportunity. Developing a robust framework for digital and AI ethics, and being able to credibly tell consumers about DT’s leadership on this topic, might in fact be a crucial differentiator in the market.

As to why the compliance function should steer the conversation, and not the actual engineers or product teams developing new technologies, Mackert’s answer was simple: “We have to protect employees and consumers. Business folks think of reducing costs or technical aspects, but they cannot calculate values,” she says.

“However,” she adds, “you can calculate avoided reputational damage. You can calculate liability issues.” From her perspective, compliance acts as a translator, helping to communicate the needs of different stakeholders throughout the process. The company’s commitment to ethics, in AI and the rest of its work, has contributed significantly to its brand’s value, with the latest “Brand Finance Global 500” report finding that Deutsche Telekom is one of the world’s 25 most valuable brands, third among telecom companies.

Educating and Supporting Stakeholders

Mackert believes that a push for more attention on digital ethics will, eventually, find its way into the conversations companies have with investors and regulators around ESG metrics and topics. “It has become more and more clear to me that ESG will have to develop, for the future, into ESGT—environmental, social, governance, and technology issues.”

She sees a role for companies leading the charge on responsible AI to educate investors and ratings agencies, given their clear interest in risk mitigation. When she reached out to several of them last year to ask how they were handling the issue, she was surprised to learn that credit ratings did not yet factor in a company’s implementation of a digital ethics strategy.

Given the role that a robust digital ethics framework could play in mitigating risk, reducing liability, and limiting reputational damage, Mackert thinks it is inevitable that investors and raters will soon formally incorporate digital ethics and AI into their formulas. She has begun a dialogue with several of them to further their understanding of the space and the sort of criteria they might employ.

Deutsche Telekom’s leadership on AI ethics has positioned it well to meet the demands of an increasingly privacy-conscious public, and to help inform investors, regulators, and policymakers. Starting last year, it extended its AI Guidelines to its own network of suppliers.

As Mackert puts it, “Compliance is only successful if you can remain entrepreneurial.” To support efforts to build compliant AI tools, both internally and across its supplier network, DT ended up building what she calls a “robust AI testing tool.” It is designed to screen a wide variety of potential AI applications for compliance with DT’s principles and for various possible kinds of bias. The tool was rolled out first internally and then extended to help suppliers bring their own products up to par, and Mackert envisions a future in which the company can package its AI testing tools as a service in their own right. Helping other companies keep their AI tools operating according to ethical standards and free from bias certainly seems like the ultimate form of doing well by doing good.
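To make the idea of screening an AI application for bias concrete, here is a minimal, hypothetical fairness check for a binary classifier: it compares positive-outcome rates across a protected attribute and fails the screen if the gap is too large. The function names, tolerance, and data format are assumptions for illustration, not a description of DT’s actual testing tool.

```python
# Hypothetical fairness screen for a binary classifier: compares positive-outcome
# rates across groups (a demographic parity gap). Threshold and field names are
# illustrative only.

from collections import defaultdict

DEMOGRAPHIC_PARITY_TOLERANCE = 0.10


def positive_rate_by_group(records):
    """records: iterable of dicts with keys 'group' and 'prediction' (0 or 1)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        positives[r["group"]] += r["prediction"]
    return {g: positives[g] / totals[g] for g in totals}


def screen_model(records):
    rates = positive_rate_by_group(records)
    gap = max(rates.values()) - min(rates.values())
    return {"positive_rates": rates, "parity_gap": gap, "passed": gap <= DEMOGRAPHIC_PARITY_TOLERANCE}


if __name__ == "__main__":
    sample = [
        {"group": "A", "prediction": 1}, {"group": "A", "prediction": 1},
        {"group": "A", "prediction": 0}, {"group": "B", "prediction": 0},
        {"group": "B", "prediction": 0}, {"group": "B", "prediction": 1},
    ]
    print(screen_model(sample))  # a failed screen would send the model back for rework
```

A production-grade screen would use multiple fairness metrics and real evaluation data, but the pass/fail gate against a stated principle is the essence of a “seal of approval” style check.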


About the Expert:

Manuela Mackert has been Chief Compliance Officer and Head of Group Compliance Management at Deutsche Telekom AG since July 2010. She is also active on committees in Germany and internationally with the aim of cultivating standards for good corporate governance and establishing these in business practice. One of her latest initiatives is to promote digital ethics within Europe to be prepared for the emerging challenges of digitization. Accordingly, she was significantly involved in developing Deutsche Telekom’s guidelines for dealing with artificial intelligence.
