Author: Aditya Pareek | EQMint | General News
Tech giant Google is facing a new class-action lawsuit accusing it of unlawfully using its Gemini AI assistant to secretly collect private user data from its widely used communication services — Gmail, Google Chat, and Google Meet.
The lawsuit, filed late Tuesday in the U.S. District Court for the Northern District of California (San Jose), alleges that Google turned on Gemini AI by default in October without seeking explicit user consent. According to the complaint, this move allowed the company to access private emails, messages, and attachments across its platforms, in violation of federal and state privacy laws.
Allegations: Gemini AI “Secretly Activated”
The suit claims that while Google had previously allowed users to choose whether to enable its AI tools, it “secretly turned on” Gemini across all communication applications last month. This automatic activation allegedly enabled Google to begin collecting and analysing private data from user conversations and files — including email contents and attachments — “without the users’ knowledge or consent.”
According to the complaint, filed as Thele v. Google LLC (No. 25-cv-09704), Gemini AI accessed “the entire recorded history of users’ private communications,” exploiting personal and professional data shared through Google services.
“Google has unlawfully activated Gemini’s data collection features across its ecosystem, including Gmail, Chat, and Meet, to extract, record, and analyse users’ communications without transparency or consent,” the filing claims.
Violation of California Privacy Law
The lawsuit alleges that Google’s conduct violates the California Invasion of Privacy Act (CIPA) — a law originally enacted in 1967 to protect citizens against surreptitious wiretapping and eavesdropping.
Under this law, recording or intercepting confidential communications without the consent of all parties involved is illegal. The plaintiffs argue that by enabling Gemini AI to “listen” and “read” through private conversations, Google effectively performed unauthorised surveillance on its users.
Legal experts note that the case could hinge on whether the court interprets AI data analysis as a form of “recording” or “interception” under the statute — a key question as lawmakers and judges increasingly grapple with the legal implications of artificial intelligence.
User Consent and Hidden Settings
The complaint also highlights that Google technically allows users to turn off Gemini AI, but the option is buried deep within privacy and account settings, making it difficult for most users to find.
“Unless users take deliberate action to deactivate Gemini,” the filing says, “Google continues to use its AI models to access, process, and monetise private user communications.”
This alleged design choice, according to the plaintiffs, represents a “deceptive practice” that tricks users into thinking their communications remain private, when in fact their data is being analysed and stored.
A Class-Action Lawsuit on Behalf of Millions
The case, brought as a proposed class-action, seeks to represent millions of users across the United States who have used Gmail, Google Chat, or Google Meet since Gemini’s October activation.
Plaintiffs are seeking unspecified damages, along with a court injunction requiring Google to stop using Gemini AI to access private communications and to delete any data collected through the system.
If certified, the class action could become one of the most significant legal challenges yet against a major AI platform, highlighting the growing conflict between innovation and privacy in the tech industry.
Google Yet to Respond
As of Wednesday morning, Google had not issued a public statement on the lawsuit, and the company did not immediately respond to media requests for comment made outside regular business hours.
In past communications, Google has defended its use of AI as being aligned with its privacy commitments and responsible AI development principles, which it claims prioritise user control and data protection. However, the new allegations raise serious questions about whether those principles were upheld in Gemini’s rollout.
Gemini AI: Part of Google’s AI Push
Gemini, formerly known as Bard, is Google’s flagship family of large language models, designed to compete with OpenAI’s ChatGPT and other advanced generative AI systems. Integrated across Google Workspace tools, Gemini assists with drafting emails, summarising conversations, transcribing meetings, and offering context-based suggestions.
However, privacy advocates have long warned that such deep integration within personal and professional communication tools could blur the boundaries between assistance and surveillance.
Critics argue that AI systems trained on user-generated content could potentially store or replicate sensitive data, especially when users are not fully aware of what information is being processed.
Legal and Ethical Questions Ahead
The lawsuit comes amid a broader wave of scrutiny facing big tech companies over the use of AI and personal data. Regulators in both the U.S. and European Union have been investigating whether AI systems are compliant with existing privacy and consumer protection laws.
If the court finds that Gemini AI’s data collection practices violate the California Invasion of Privacy Act, it could set a major legal precedent — potentially reshaping how AI models are deployed in communication platforms.
Privacy analysts say the case could also reignite debates over informed consent in AI usage, an issue that has remained largely undefined in U.S. law.
Conclusion: AI Innovation vs. User Privacy
The Thele v. Google LLC lawsuit underscores the rising tension between rapid AI innovation and the fundamental right to privacy. As AI tools like Gemini become increasingly embedded in digital platforms, questions over who controls user data, how it’s used, and whether consent is meaningful will remain central to the technology debate.
For Google, the case could mark a defining moment — forcing the company to defend not just Gemini’s functionality, but also its transparency and ethics in the age of artificial intelligence.
For more such insights visit EQMint
Disclaimer: This article is based on information available from public sources. It has not been reported by EQMint journalists. EQMint has compiled and presented the content for informational purposes only and does not guarantee its accuracy or completeness. Readers are advised to verify details independently before relying on them.
