Cybersecurity experts have revealed a serious flaw in Google’s Gemini AI assistant that could have allowed hackers to take control of smart home devices — all through something as simple as a fake Google Calendar invite.
At the DEF CON 33 Hacking Conference in Las Vegas earlier this month, researchers from Tel Aviv University, Technion, and SafeBreach demonstrated how the vulnerability worked. A malicious calendar invitation, once pulled up by Gemini when a user asked “What’s on my calendar today?”, could infect the chatbot with hidden prompts. Those prompts, dubbed “promptware,” could then trick Gemini into carrying out dangerous actions.
The potential consequences were startling: hackers could hijack smart home appliances, stream video through Zoom, access emails, change calendar events, spam harmful messages, or even manipulate devices like boilers and window locks. Researchers showed how the flaw could be used to deliver harmful or distressing messages to users, including those about self-harm.
Google confirmed the issue has since been patched. In a statement, Andy Wen, senior director of security product management for Google Workspace, said the company rolled out “cutting-edge defences” after working with the researchers. “These discoveries helped us better understand novel attack pathways and accelerate protections now in place,” Wen said.
Experts say the discovery is a wake-up call about the risks of connecting large language models to sensitive personal information and physical devices. “These assistants are like genius toddlers,” one researcher explained. “They’re smart but don’t always realize when they’re being manipulated.”
Cybersecurity specialists advise users to be cautious. Stav Cohen, an AI security expert, recommends limiting what tasks AI assistants can perform and requiring explicit approval before they take any action. “Never approve actions you didn’t initiate yourself,” added researcher Ben Nassi, stressing that vigilance is the best safeguard.
While Google’s fix closes this particular loophole, researchers warn that as AI assistants become more deeply integrated into daily life — from cars to connected homes — the stakes of securing them will only grow.

