Google’s AI chatbot, Bard, has been launched in the European Union (EU) after implementing changes to improve transparency and user controls. However, EU privacy regulators are closely monitoring the situation, and decisions regarding the enforcement of data protection laws on generative AI are yet to be made. The Irish Data Protection Commission (DPC), Google’s lead regulator in the region, will continue engaging with the tech giant following Bard’s launch. Google has also agreed to conduct a review and provide a report to the DPC within three months of Bard becoming operational in the EU.
The European Data Protection Board (EDPB) has established a taskforce to assess compliance with the General Data Protection Regulation (GDPR) by AI chatbots. This taskforce initially focused on OpenAI’s ChatGPT but will now incorporate Bard into its work to harmonize enforcement actions among different data protection authorities (DPAs).
Last month, the EU launch of Google’s Bard was delayed after the company failed to provide necessary information to the Irish regulator. One particular concern was that Google had not shared a data protection impact assessment (DPIA), a crucial document for identifying potential risks to fundamental rights and the measures to mitigate them. The DPC has now received a DPIA for Bard, which will be included in its three-month review alongside other relevant documentation.
In an official blog post, Google did not disclose specific steps taken to reduce regulatory risk in the EU but mentioned proactive engagement with experts, policymakers, and privacy regulators before expanding their services. When launching Bard in the EU, Google made changes that limit access to users aged 18+ who have a Google Account. It also introduced a new Bard Privacy Hub that allows users easy access to privacy control explanations.
Google cites performance of a contract and legitimate interests as its legal bases for processing Bard data, relying heavily on legitimate interests for most of the associated processing. As for data deletion options, users can delete their own Bard usage activity via a web form. There is no apparent way for users to request removal of personal data used to train the chatbot, but a separate web form allows users to report problems or legal issues, including requesting corrections or objecting to data processing.
A Google spokeswoman highlighted additional user controls, such as setting data retention periods and deleting Bard activity from a Google Account. By default, Google stores Bard activity for up to 18 months, but users can change this period or disable data storage entirely.
Google’s approach to transparency and user control with Bard appears similar to changes made by OpenAI following regulatory scrutiny in Italy. The Italian Data Protection Authority (DPA) suspended ChatGPT initially due to several data protection concerns. Following a series of adjustments, ChatGPT resumed service in Italy with added privacy disclosures, opt-out options for data processing, deletion requests, and age-gating features.
While OpenAI remains under investigation by the Italian DPA, other EU DPAs are also investigating ChatGPT. Unlike Google, OpenAI lacks a lead DPA because it has no main establishment in any Member State, leaving it exposed to greater regulatory uncertainty and potential risk.
To address these challenges and achieve consistency in enforcement actions, the EDPB taskforce aims to establish common positions on AI chatbots among EU DPAs. However, differences in approaches are expected since some authorities have already outlined specific strategies concerning generative AI technologies, such as protecting publicly available web data against scraping.
Regulatory attention on AI chatbots will continue to grow in the coming months as European regulators scrutinize compliance with data protection laws and work towards harmonization.