March 6, 2025: Google has informed Australian authorities that it received more than 250 complaints globally over nearly a year regarding the use of its artificial intelligence (AI) software to create deepfake terrorism material. The Alphabet-owned tech giant also reported dozens of user warnings about its AI program Gemini being used to generate child abuse content, according to the Australian eSafety Commissioner.
Under Australian law, tech companies must periodically provide the eSafety Commissioner with information about harm minimization efforts or face fines. The reporting period in question covered April 2023 to February 2024. Since the public emergence of OpenAI’s ChatGPT in late 2022, global regulators have called for better safeguards to prevent AI from enabling terrorism, fraud, deepfake pornography, and other abuses.
The Australian eSafety Commissioner described Google’s disclosure as a “world-first insight” into the exploitation of AI technology for harmful and illegal content production. “This underscores how critical it is for companies developing AI products to build in and test the efficacy of safeguards to prevent this type of material from being generated,” stated eSafety Commissioner Julie Inman Grant.
In its report, Google revealed it received 258 user reports about suspected AI-generated deepfake terrorist or violent extremist content created using Gemini, along with 86 user reports alleging AI-generated child exploitation or abuse material. The company did not specify how many complaints it verified.
A Google spokesperson reiterated the company’s policy against generating or distributing content related to violent extremism, terrorism, child exploitation, or other illegal activities. “We are committed to expanding on our efforts to help keep Australians safe online,” the spokesperson said via email.
Google employs hash-matching, a system that automatically compares newly uploaded images against a database of known abusive images, to identify and remove child abuse material created with Gemini. However, the regulator noted that the same system is not used to remove terrorist or violent extremist material generated with Gemini.
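To illustrate the general idea behind hash-matching, the sketch below checks an uploaded image's digest against a set of known fingerprints. It is a simplified assumption-laden example: production systems such as Google's rely on proprietary perceptual hashing that tolerates resizing and re-encoding, and the database, function names, and exact-match logic here are purely illustrative.

```python
import hashlib
from pathlib import Path

# Hypothetical database of fingerprints of known abusive images.
# In real deployments these are supplied by industry clearinghouses;
# here it is simply an in-memory set for illustration.
KNOWN_IMAGE_HASHES: set[str] = set()


def image_fingerprint(image_bytes: bytes) -> str:
    """Return a hex digest used as the image's fingerprint.

    A cryptographic SHA-256 digest is used only to keep the sketch
    self-contained; it matches byte-identical copies only, whereas
    real systems use perceptual hashes robust to minor edits.
    """
    return hashlib.sha256(image_bytes).hexdigest()


def is_known_abusive(image_path: str) -> bool:
    """Check a newly uploaded image against the known-hash database."""
    digest = image_fingerprint(Path(image_path).read_bytes())
    return digest in KNOWN_IMAGE_HASHES
```

Because an exact cryptographic digest changes if even one byte of the file differs, deployed systems instead use perceptual hashing so that re-encoded or lightly edited copies of known material still match.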
The regulator has previously fined Telegram and Twitter, now known as X, for deficiencies in their reports. X has lost one appeal against its A$610,500 (S$515,200) fine but plans to appeal again. Telegram also intends to challenge its fine.