Generative AI and Compliance: The Two Can Co-Exist

The potential regulatory and civil risk around the business use of artificial intelligence (AI) is at a fever pitch, but that does not mean it must fall to Compliance to forbid its use within your organization. In fact, Compliance can create the framework that unlocks AI’s potential for your organization while also managing its potential downsides.

As generative AI has grown in popularity across industries, many companies are facing some challenging questions: How can we adopt the new technology safely? Can we use it at all? How do we keep data protected and promote responsible use? How do we mitigate the risk of biased or discriminatory outcomes?

Kelly Lange | Blue Cross Blue Shield of Michigan, Blue Care Network and its Medicare Advantage joint ventures

Compliance officers may be asked to help their companies answer such questions and, along with some of their oversight partners, may be viewed as the “no” department(s). If this scenario sounds all too familiar to you, this article is here to help. It aims to provide some guidance on getting to a reasonable “yes,” all while sustaining an effective compliance program. Innovation can be complemented by compliance, and the two can co-exist. Arguably, those that get this right will have a competitive advantage.

There is no doubt there is value in the responsible use of generative AI. Generative AI can help drive scale, accelerate the pace of work, reduce costs, improve customer experience, and elevate business models. In the healthcare industry in particular, providers have a great opportunity to leverage AI in managing patient care delivery. Payers can take advantage of automation in processes like preauthorization, for example. Compliance and audit teams can automate logic in more routine or transactional claims testing and focus on more complex reviews. The potential is quite astounding!

Just as with any innovation, new uses or tools should be coupled with process to succeed. Those who get this balanced recipe right will reap the rewards and be able to turn their attention to more complex business processes and customer needs.

Below are some tips to consider as you develop your recipe or pathway to get to mutually beneficial, innovative yet compliant outcomes.

Know the rules. Partner with your data, legal, and information security teams, and have a proactive process that tracks industry inputs and rules, including any public cases on misuse and government guidance.

For healthcare, the rules continue to emerge but include, and are not limited to, the U.S. Department of Health and Human Services Section 1557 non-discrimination rule, HIPAA and related federal and state safeguards, and anti-kickback laws. The Federal Trade Commission also continues to comment on AI use. It has publicly warned companies that AI tools can be “inaccurate, biased, and discriminatory by design and incentivize relying on increasingly invasive forms of commercial surveillance.”

As the FTC and other entities sustain consumer protections, be sure to heed their guidance and findings. Have a process that proactively ingests regulatory information and confirms your company remains aligned.

Set guardrails. Look outside to regulators and other corporate innovators across industries and apply the learnings to your organization through guardrails or AI standards. Set internal guidance early, educate on the expectations, and confirm the workforce understands how the Code of Conduct and data privacy and security policies form the foundation of your AI framework. Calling out AI usage within existing policies or standards enables your workforce to make the connections. There may be a standalone AI policy as well, but it is important that the workforce sees the linkage to the other existing company commitments and controls in place and how they all relate.

Educate your workforce and your vendors. Integrate AI examples within training mechanisms as appropriate so the workforce can make the connection to existing safeguards. Highlight any nuances in AI expectations and emphasize reaching out for help from oversight areas such as Information Security, Data Governance, and Compliance. Keep the training simple yet effective, and ask your audience for feedback on both objectives.

There is no better way to educate and confirm understanding than a knowledge-check question or poll, gauging the results for improvement opportunities. Consider targeted education for your development teams that goes a layer deeper, and partner with your IT leadership on that. Be sure your vendor contract administrators are also adequately trained to have substantive conversations with vendors who may be using AI. The goal for contract administrators is to understand the usage and align on risk appetite expectations. If possible, consider using vendor attestations that usage aligns with security, data protection, and fair and responsible use expectations. Integrate AI questions into due diligence and security reviews.

Although a slightly separate topic, incorporate awareness of AI cyber threats and show how your workforce can be the eyes of the company in identifying misuse of AI. This is important because your workforce is your first line of defense against control failures and cyber threats, and for ensuring ongoing appropriate data usage.

Present the generative AI approach to your governing bodies, including the appropriate boards, to demonstrate the discipline and control being applied.

Adopt risk assessment protocols. The AI risk assessment can leverage existing scales of high, medium, and low but should integrate evaluation of discrimination, bias, and the appropriate and accurate use of data and conclusions. Apply the rubric consistently to achieve comparable results and to drive mitigation.

The National Institute of Standards and Technology (NIST) is paving the way with frameworks and a playbook to guide organizations through risk and control. Be sure to tap their resources and adopt some common industry risk levers and methods.
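To make the rubric concrete, below is a minimal sketch in Python of how a use case might be scored. The dimension names, the 1-to-3 scale, and the worst-score mapping are illustrative assumptions, not a method prescribed by NIST; tune them to your organization’s risk appetite.

```python
from dataclasses import dataclass

# Hypothetical rubric dimensions; each is scored 1 (low) to 3 (high).
DIMENSIONS = ("data_sensitivity", "bias_potential", "decision_impact", "accuracy_risk")

@dataclass
class UseCaseAssessment:
    name: str
    scores: dict  # dimension name -> score from 1 to 3

    def risk_level(self) -> str:
        # Take the worst dimension so a single severe concern (e.g., bias)
        # cannot be averaged away by strong scores elsewhere.
        worst = max(self.scores[d] for d in DIMENSIONS)
        return {1: "low", 2: "medium", 3: "high"}[worst]

case = UseCaseAssessment(
    name="Prior-authorization triage assistant",
    scores={"data_sensitivity": 3, "bias_potential": 2,
            "decision_impact": 3, "accuracy_risk": 2},
)
print(case.name, "->", case.risk_level())  # -> high
```

Taking the worst dimension rather than an average is one deliberate design choice here: a single severe concern should escalate the whole use case for mitigation.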

Consider contractual language. To further enhance your understanding of vendor or partner use, embed language that requires contractual transparency and aligns usage with your company’s expectations on fairness, accuracy, and appropriate data usage.

Live and apply the learnings. As with all innovation, be sure to have a feedback loop on use cases and build from those. Set the expectation that usage needs to deliver successes and learnings that can be shared and embedded into future practices. Use the examples in training as appropriate.

Establish a common and centralized way to inventory AI usage through business cases and process. Knowing what is in your generative AI usage inventory, and establishing controls to keep it accurate, is essential. If you don’t know what you have, how can you manage and monitor it?

An inventory will drive organizational transparency, appropriate controls, and audit readiness. Setting an inventory process early drives long-term benefits such as ongoing monitoring. If one thing is certain, it is change, and what was once compliant or functioning a certain way can evolve over time. An inventory helps keep a pulse on this.
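As one illustration, here is a minimal sketch of what a centralized inventory record might capture. The field names and the single example entry are hypothetical; adapt them to your own intake and approval workflow.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIInventoryEntry:
    use_case: str          # business description of the AI usage
    owner: str             # accountable business owner
    model_or_vendor: str   # internal model or third-party tool
    data_categories: list  # e.g., ["PHI", "claims"]
    risk_level: str        # output of the risk rubric: low/medium/high
    approved: bool = False
    last_reviewed: date = field(default_factory=date.today)

inventory = [
    AIInventoryEntry(
        use_case="Claims-testing automation",
        owner="Internal Audit",
        model_or_vendor="Vendor summarization API",
        data_categories=["claims"],
        risk_level="medium",
    ),
]

# Ongoing monitoring: flag entries still awaiting approval.
pending = [e.use_case for e in inventory if not e.approved]
print("Pending approval:", pending)
```

Even a simple record like this supports ongoing monitoring: queries over the inventory can surface use cases that are unapproved, stale, or touching sensitive data categories.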

Set governance at the working level as well as the executive level to drive consistent process. Setting and communicating process generates enterprise understanding and know-how. Consider a simple process for workforce submission, vetting, and approval of use cases, and communicate it well. Avoid process surprises.

Test, test, test. Any new and innovative technology should be well tested, and this includes AI. Be sure to integrate data accuracy, data sharing and use, and bias into the testing, and require these as gates before production use. Above and beyond typical testing, the test plan should consider the questions below (a sketch of one such check follows the list):

  • Is machine learning generating the right conclusions through experience?
  • Are decisions made through the logic accurate? Is data output accurate?
  • Is it appropriate to use the data in this way or release certain information to the user?
  • Is there bias in the data based upon assumptions or other sub-detail that could be driving an unfair conclusion about an individual or population?
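Below is a minimal sketch of one such pre-production check: a simple demographic-parity screen over hypothetical decision data. The group labels, sample decisions, and 10% gap threshold are illustrative assumptions, and this is a screen to flag further review, not a complete fairness audit.

```python
from collections import defaultdict

def favorable_rates(decisions):
    """decisions: iterable of (group, favorable: bool) pairs."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        favorable[group] += ok  # True counts as 1, False as 0
    return {g: favorable[g] / totals[g] for g in totals}

# Hypothetical test decisions labeled by demographic group.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]

rates = favorable_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates, f"gap={gap:.2f}")
if gap > 0.10:  # threshold is an illustrative assumption; set per policy
    print("Flag for bias review before production use.")
```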

Develop an organizational toolkit. Lead your workforce where you want them to go. Include standard definitions, a governance model, and the process and quality gates prior to production, just as you would with an IT change management model. This will drive consistency and set due care for how the company is adopting AI technology.

In conclusion, compliance can co-exist with innovation. It takes a cultural commitment to an effective compliance program, one that invests in process and continuously improves and learns, all while upholding the stakeholder confidence and trust critical to your brand. The companies that perfect the balanced recipe will be positioned to reap the benefits of responsible generative AI long-term and will stand out in the industry.

ABOUT THE AUTHOR

Kelly Lange is a vice president within Compliance and the Privacy and Medicare Compliance Official for Blue Cross Blue Shield of Michigan, Blue Care Network, and its Medicare Advantage joint ventures.
