You (hopefully) know by now that you can’t take everything AI tells you at face value. Large language models (LLMs) sometimes provide incorrect information, and threat actors are now using paid search ads on Google to spread shared ChatGPT and Grok conversations that appear to provide tech support instructions but actually direct macOS users to install infostealing malware on their devices.
The campaign is a variation on the ClickFix attack, which often uses CAPTCHA prompts or fake error messages to trick targets into executing malicious commands. But in this case, the instructions are disguised as helpful troubleshooting guides on legitimate AI platforms.
How attackers are using ChatGPT
Kaspersky details a campaign targeting users trying to install ChatGPT Atlas for macOS. If a user searches “chatgpt atlas” to find a guide, the first sponsored result is a link to chatgpt.com with the page title “ChatGPT™ Atlas for macOS – Download ChatGPT Atlas for Mac.” If you click through, you’ll land on the official ChatGPT site and find a series of instructions for (supposedly) installing Atlas.
However, the page is a copy of a conversation between an anonymous user and the AI (such chats can be shared publicly) that is actually a malware installation guide. The chat directs you to copy, paste, and execute a command in your Mac’s Terminal and grant it all requested permissions, steps that actually install the AMOS (Atomic macOS Stealer) infostealer.
A further investigation from Huntress showed similarly poisoned results via both ChatGPT and Grok using more general troubleshooting queries like “how to delete system data on Mac” and “clear disk space on macOS.”
AMOS targets macOS, gaining root-level privileges and allowing attackers to execute commands, log keystrokes, and deliver additional payloads. BleepingComputer notes that the infostealer also targets cryptocurrency wallets, browser data (including cookies, saved passwords, and autofill data), macOS Keychain data, and local files.
Don’t trust every command AI generates
If you’re troubleshooting a tech issue, carefully vet any instructions you find online. Threat actors often use sponsored search results as well as social media platforms to spread instructions that are actually ClickFix attacks. Never follow any guidance that you don’t understand, and know that if a guide asks you to execute commands on your device using PowerShell or Terminal to “fix” a problem, there’s a high likelihood that it’s malicious—even if it comes from a search engine or LLM you’ve used and trusted in the past.
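One practical way to "vet before you run": ClickFix-style commands are often obfuscated (base64 encoding is a common trick) precisely so victims can't read what they're about to execute. You can safely decode such a string without running it. This is a generic illustration with a harmless made-up payload, not the actual command from this campaign:

```shell
# Hypothetical obfuscated string of the kind a ClickFix guide might
# tell you to run. NEVER pipe a decoded string into bash; decode it
# to your screen first and read what it would do.
payload="ZWNobyBoZWxsbw=="

# Safe: this only prints the hidden command, it does not execute it.
echo "$payload" | base64 -d
# prints: echo hello
```

If the decoded output downloads and executes something (for example, a `curl ... | bash` pattern), treat the entire guide as hostile.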
Of course, you can potentially turn the attack around by asking ChatGPT (in a new conversation) if the instructions are safe to follow. According to Kaspersky, the AI will tell you that they aren’t.
