Artificial Intelligence v. General Data Protection Regulation: Complex Risks in Changing Times
Artificial Intelligence (“AI”) is a complicated technology developed with data. That data dependence saddles AI with a host of potential privacy regulations, including Europe’s data protection law, the General Data Protection Regulation (“GDPR”). Understanding the liabilities that can arise at the intersection of technology and privacy is a first step to avoiding bad faith missteps. This article highlights three compliance tensions between AI and the GDPR, which can carry steep regulatory penalties, and how carriers are responding to these risks.
November 15, 2019
Artificial Intelligence (“AI”) swallows vast troves of data so that, as its definition suggests, it can achieve “the capability of a machine to imitate intelligent human behavior.”1 Much like humans learn over time through exposure to different experiences and new information, AI systems can be fed enough data that they eventually draw conclusions and make inferences.
Given AI’s data diet, it is saddled with a host of privacy regulations, which vary depending on the nature of the data and its uses. This article highlights three compliance tensions between AI and the European privacy regime, the General Data Protection Regulation (“GDPR”), which sets out various privacy-related principles for how personal data must be processed and provides certain data subject rights. With GDPR fines reaching as high as 4 percent of annual global turnover or 20 million euros, whichever is higher, carriers insuring companies that use AI should endeavor to understand these complex risks. Understanding any insured risk, including a new risk like AI, is also the first step a carrier can take to avoid a bad faith claim. This article also addresses how carriers are responding to these risks.
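As a rough illustration of the scale of that exposure, the Python sketch below computes the upper-tier fine ceiling described above. The turnover figures are hypothetical, and the calculation ignores the many factors regulators actually weigh when setting a fine.

def max_gdpr_fine_eur(annual_global_turnover_eur: float) -> float:
    # Upper-tier ceiling under GDPR Article 83(5): the greater of 4 percent of
    # annual global turnover or EUR 20 million.
    return max(0.04 * annual_global_turnover_eur, 20_000_000)

# Hypothetical companies, for illustration only
print(max_gdpr_fine_eur(100_000_000))    # EUR 20,000,000 (the flat ceiling applies)
print(max_gdpr_fine_eur(2_000_000_000))  # EUR 80,000,000 (4 percent of turnover applies)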
Top Three Compliance Tensions Between AI and GDPR
The Norwegian Data Protection Authority (“NDPA”) provided one European regulator’s perspective on GDPR compliance issues for AI. Highlighted below are three of these compliance tensions:2
1. Discrimination Rankles Fairness Requirements
AI’s susceptibility to discriminatory processing could conflict with the GDPR requirement that data be processed in a fair manner. For example, the nonprofit investigative journalism group ProPublica claimed that an AI program used for setting bail erroneously flagged black individuals as “high re-offending risks” twice as often as white individuals.3 Companies employing AI, and the carriers insuring their AI liabilities, should therefore be on the lookout for this potential exposure.
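As a hypothetical illustration of that kind of disparity, and not ProPublica’s actual methodology or data, the sketch below compares how often people who did not re-offend were nonetheless flagged as high risk in two groups; all of the counts are invented.

def false_positive_rate(flagged_high_risk: int, did_not_reoffend: int) -> float:
    # Share of people who did not re-offend but were still flagged as high risk
    return flagged_high_risk / did_not_reoffend

# Invented counts for two groups of defendants who did not re-offend
rate_group_a = false_positive_rate(flagged_high_risk=450, did_not_reoffend=1000)  # 0.45
rate_group_b = false_positive_rate(flagged_high_risk=230, did_not_reoffend=1000)  # 0.23

print(f"Disparity ratio: {rate_group_a / rate_group_b:.1f}x")  # roughly 2x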
2. Data Minimization for Data-Hungry AI
The NDPA highlights the tension between data-hungry AI and the GDPR’s data minimization principle that requires processed personal data to be limited to data necessary and relevant to the purpose of the processing. The NDPA acknowledges an extra layer of confusion when operating in the world of AI because:
[I]t may be difficult to define the purpose of processing because it is not possible to predict what the algorithm will learn. The purpose may also be changed as the machine learns and develops. This challenges the data minimi[z]ation principle as it will be difficult to define which data is necessary.4
Although the regulator recognizes the “difficulty” of defining the relevant purpose and the necessary data, it appears that navigating this risk rests with the user, and the risk may ultimately show up as a claim before the user’s insurer.
3. Transparency in a World of Complex Algorithms
GDPR requires transparent data processing and imposes an extended duty to inform when data is used for automated decision making. The regulation puts limitations on certain types of automated decision making. The NDPA recognizes that the advanced nature of the technology processing data with AI “can be difficult to explain[,]” and sometimes it can even be “practically impossible to explain how information is correlated and weighed in a specific process[,]” because exactly what happens during AI processing is not always understood.5 This provides another example of the tensions between the application of AI technology and compliance with the GDPR.
Mitigating AI Exposures Under the GDPR with Insurance
GDPR creates risks for companies using AI, and those businesses will likely respond with various strategies, including insurance, to mitigate those risks. While cyber insurance markets continue to grow, AI presents carriers with new challenges and opportunities. At this intersection of cutting-edge technology and new regulation, there are specific insurance issues to keep in mind.
Insurability Issues of AI/GDPR Liabilities
Whether GDPR fines are insurable may depend on the jurisdiction. Even where fines are not insurable, many other potentially significant exposures likely are, including the expense of providing consumer notice and of retaining specialists such as lawyers, public relations experts, and computer forensics experts. There may also be insurable liabilities arising from consumer actions.
Sufficiently Covering AI/GDPR Exposures
Jonelle Horta, Vice President at Allied World, a global insurance and reinsurance provider, is on the company’s Cyber Underwriting team and is its FrameWRX lead.6 She explains some of the struggles predicting AI exposures under the GDPR:
As with all privacy-related regulations, consistent application of GDPR among its members is still somewhat of an unknown. To date there have been few fines imposed, making it difficult to predict future impact. Of the fines imposed there is a wide disparity in industry type and size, providing some preliminary insight; a broader set of breach activity will provide better clarity. GDPR is likely to impact future iterations of U.S. privacy regulations in a “me-too” fashion, which may create additional scrutiny for organizations with both U.S. and GDPR exposure. Carriers will continue to monitor the regulatory impact and evaluate, as this could mean opposing guidelines for compliance with privacy regulations based on jurisdiction.
The GDPR greatly expanded what constitutes personal data and what counts as a violation of privacy law that could give rise to regulatory or consumer exposure. As opportunities for violations have increased, businesses and carriers should consider the impact of this expansion on the types of coverage and policy limits offered, including where AI is in use.
Horta explains what the process might look like for companies seeking coverage specifically for AI and GDPR exposures:
Traditional insurance applications, combined with other sources for evaluating risk, typically identify operations outside of the United States. If an organization is requesting coverage for technology or professional services related to artificial intelligence, additional information is requested to identify areas of exposure (i.e., what services are being performed, the value of top contracts, etc.). Coverage could extend to the organization’s use of AI by outlining the services performed via endorsement.
Making the Most of Pre-Breach Services
Many carriers attempt to get ahead of potential cyber exposures by offering pre-breach services that reduce exposure before an incident occurs. Horta explains how Allied World has incorporated this strategy into its offerings:
The Allied World // FrameWRXSM risk management platform provides our cyber insureds with access to a set of pre-breach services as well as a dedicated FrameWRX specialist to facilitate and support engagement at no additional cost. Our insureds have access to GDPR-specific materials and guidance, GDPR-specific training accompanied by a concierge resource to support user engagement, content regarding applicability of the regulation, and high-level preparation and criteria around fines. Each insured also has access to an advice center where they can reach out to our vendor and be connected with the appropriate legal or technical resource based on their needs. Additionally, there are a set of consultative services for an insured to choose from that may align with GDPR-related cybersecurity objectives.
Armed with knowledge of potential AI and GDPR exposures, companies can pursue methods to mitigate this risk, and carriers can respond to the need with new products, offerings, and guidance. Counsel may also be invaluable in identifying, navigating, and offsetting these complex risks in these changing times.
2. Datatilsynet, The Norwegian Data Protection Authority, Artificial Intelligence and Privacy Report, January 2018.
3. Id.
4. Id.
5. Id.
6. Jonelle was instrumental in building the award-winning Allied World // FrameWRXSM risk management platform, a value-added resource that helps organizations take control of their cyber exposure in a rapidly evolving, inherently complex environment. Jonelle built the team and continually works with leading vendors to make sure that Allied World’s cyber policyholders receive the most up-to-date and effective support to help mitigate their cyber exposures.
In her 10-year tenure at Allied World, Jonelle was a member of the global and product solution department and has worked closely with Allied World’s cyber practice leads on the development of cutting-edge insurance policies that are proactive and responsive in a rapidly evolving cyber landscape. She was also a member of Allied World’s High Potential professional development program.