The Journey to Achieving Regulation-Ready Artificial Intelligence

By Lee Barrett, Managing Director and Accenture Northeast Applied Intelligence Lead

The journey to achieving Responsible AI is evolving as the maturity of the data companies collect continues to expand. Through the sophisticated devices that people wear and interact with, consumers are giving companies an inside look into their routines, habits and personal preferences.

To ensure that companies build AI on an ethical and secure foundation, industry organizations and government bodies are coming together to set new regulations that protect personal data.

Is your AI regulation-ready?

We saw Europe pioneer these efforts with the implementation of the General Data Protection Regulation—or GDPR—a legal framework governing the collection and use of personal information from people who reside in the European Union. Starting January 2023, New York City will restrict employers from using AI technology in their hiring practices unless their systems pass a “bias audit” conducted by an independent auditor. This new legislation is a significant step in the right direction that will likely help prevent unlawful discrimination during the hiring process (Maurer).

Many organizations are prioritizing Responsible AI in their strategy because they recognize its long-term benefits, especially as new laws and regulations develop around the use of AI. A recent study conducted by Accenture found that 80% of companies plan to increase investment in Responsible AI, and 77% of these companies see regulation of AI as a priority (Eitel-Porter).

In fact, many of these organizations view regulatory compliance as a way to gain competitive advantage and differentiate themselves from their peers. The pioneers who deliver responsible, trustworthy, regulation-ready AI systems will likely have a significant near-term advantage, enabling them to attract new customers, retain existing ones and build investor confidence.

Getting your company regulation-ready

The strategic journey to prepare a company for regulation should not be oversimplified, but there are immediate steps companies can take to start:

• Establish a Data & AI Ethics Framework: Create a framework that guides the creation and governance of policy, shapes capability development, and aligns AI capabilities with the organization’s core values. By conducting comparison reports and gap analyses, companies can identify where to align their priorities.

• Testing for Bias Detection: Run pilots of Responsible AI tools within business units to understand ethical trade-offs, and create blueprints for how to execute accordingly (see the sketch after this list).

• Accountability and Execution at Scale: Support the operating model with additional roles and responsibilities to continuously monitor systems, detect bias and sustain ethical compliance.
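
As a concrete illustration of the kind of check a bias-detection pilot might run, the sketch below compares selection rates across groups and flags any group whose rate falls below four-fifths of the highest group’s rate, a threshold borrowed from the widely used four-fifths guideline. The column names, sample data and threshold are illustrative assumptions, not requirements of any specific regulation or audit.

```python
# A minimal sketch of one common bias check: comparing selection rates
# across groups and flagging any group whose rate falls below a chosen
# fraction (here 0.8) of the most-favored group's rate.
# Column names, data and threshold are hypothetical.
import pandas as pd

def selection_rate_report(df: pd.DataFrame,
                          group_col: str = "gender",
                          outcome_col: str = "selected",
                          threshold: float = 0.8) -> pd.DataFrame:
    """Return per-group selection rates and impact ratios vs. the top group."""
    rates = df.groupby(group_col)[outcome_col].mean().rename("selection_rate")
    report = rates.to_frame()
    report["impact_ratio"] = report["selection_rate"] / report["selection_rate"].max()
    report["flagged"] = report["impact_ratio"] < threshold
    return report

if __name__ == "__main__":
    # Hypothetical pilot data: 1 = candidate advanced by the AI screening tool
    data = pd.DataFrame({
        "gender":   ["F", "F", "F", "F", "M", "M", "M", "M"],
        "selected": [1,    0,   0,   1,   1,   1,   1,   0],
    })
    print(selection_rate_report(data))
```

In practice, a pilot could extend this to intersectional groups, multiple outcomes and significance testing, and record the results so they feed the governance framework and monitoring roles described above.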

This is only the beginning…

Most companies have started implementing Responsible AI practices, but only a very small fraction have operationalized their capabilities to be responsible by design. As devices grow more sophisticated and expand the customer experience, the maturity of the data they capture can increase with them.

Look at the metaverse, for example: how does the metaverse track what captures a user’s attention? Which immersive experiences does a user focus on over others? And how will commerce change with Web 3.0? The rollout of new technologies like Web 3.0 and distributed ledger technology (DLT) will start to unlock new business models and services. While these opportunities create room for growth, and many companies are ready to jump in on the action, not every company is ready to protect the data and privacy of its consumers. It all comes back to ethical and Responsible AI, and the companies that outperform their competition will likely build these standards into their AI strategy.

The opinions, statements, and assessments in this report are solely those of the individual author(s) and do not constitute legal advice, nor do they necessarily reflect the views of Accenture, its subsidiaries, or affiliates.

Accenture provides the information on an “as-is” basis without representation or warranty and accepts no liability for any action or failure to act taken in response to the information contained or referenced in this publication.

Copyright © 2022 Accenture. All rights reserved. Accenture and its logo are trademarks of Accenture.