ChatGPT Atlas is an AI-powered web browser that can book travel, order groceries or do research, all on your behalf. OpenAI says it’s like having a personal agent built into your web browser. That’s what has security experts concerned.
As remarkable as AI systems are, they’re also imperfect. From hallucinations to sycophancy, AI can get things wrong, often. Handing the keys of a web browser to AI introduces a host of other potential issues, including prompt injection attacks, clipboard attacks and the simple inability to understand that some sites are spam.
“Atlas shows the same early-stage issues we have seen across other agent-style browsers,” said Rob T. Lee, chief of research and chief AI officer at SANS Institute, a cooperative cybersecurity training and education organization. “There have been successful prompt injection and redirection tests. To their credit, OpenAI has moved quickly to address reports.”
Don’t miss any of our unbiased tech content and lab-based reviews. Add CNET as a preferred Google source.
The release of AI Atlas is an early salvo in an emerging browser war. Other entrants in this space include Perplexity’s Comet, Google’s inclusion of Gemini in Chrome and Copilot Mode in Microsoft Edge. For major players in Big Tech, gaining any sort of upper hand in the web browser space gives them critical user data, which they can use to either better optimize their products or sell targeted advertising against. That’s especially important for OpenAI, which has committed billions of dollars to AI infrastructure development while showing limited ability to make revenue, much less a profit. The company is looking towards all avenues, including advertising, to push revenues up, along with allowing the generation of adult textual content.
(Disclosure: Ziff Davis, CNET’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
In the case of OpenAI, having an AI-powered web browser gain popularity means pulling people away from Chrome, currently the world’s most popular web browser with 73% market share, according to GlobalStats. ChatGPT Atlas could further expand OpenAI’s ecosystem. While ChatGPT has become the catch-all term for AI chatbots, for Atlas to achieve mass adoption in both the consumer and enterprise space, OpenAI will need to ensure its browser is as secure and trustworthy as Chrome.
Prompt injections, clipboard attacks and more
Prompt injection attacks are the vulnerability most associated with AI-powered web browsers. It’s a type of exploit in which bad actors deliberately place malicious instructions on a website for an AI agent. The text is invisible, hidden from the user. But since the AI can analyze all content on the site, it sucks up the instructions and ignores safety guidelines. The bad instructions could lead to the AI leaking sensitive information, changing system settings or taking other harmful actions.
“There’s also just this wider consumer concern here, as it pertains to just this sort of omnipresent computer vision component associated with every aspect of your web browsing,” said Simon Poulton, executive vice president of innovation and growth at Tinuiti, a marketing agency. Poulton worries that consumers won’t understand how their information is being stored and how persistent that information is within the AI.
This leads to another concern that Poulton has: agentic deference. As users become more accustomed to AI systems, they start ceding skepticism and giving AI more control. He equates it to riding in a Waymo self-driving car for the first time. At first, a customer might watch closely, making sure the car is behaving normally. But after ten minutes, they’ll switch to browsing on their phones.
The problem is that AI systems aren’t perfect. When testing Perplexity’s Comet, Poulton saw that the browser began entering his password into the email address field when logging into a site. He was able to catch it, but it shows how AI systems can mishandle sensitive information.
A lesser-known vulnerability is the copy-to-clipboard attack. This is when a bad actor will instruct the AI to copy a malicious link onto a person’s clipboard. If the person isn’t paying attention, they might accidentally paste the link into their web browser and direct themselves to a bad website. It’s these instances of inattentiveness that can lead to major vulnerabilities.
“One of the biggest risks of using LLMs as interfaces to the internet is how people may not understand their limitations and thus use them inappropriately,” said Serena Booth, a professor of computer science at Brown University.
Booth cites the preponderance of using LLMs as therapists, even though these systems aren’t tuned for this kind of help. “I am sure this browser will also hallucinate, which may harm people who do not manage this effectively. OpenAI should feel a weighty responsibility to educate users about how to use their software appropriately,” Booth said.
When asked for comment, OpenAI referred to a recently published blog post regarding prompt injection attacks.
“Defending against prompt injection is a challenge across the AI industry and a core focus at OpenAI, according to the blog post. “While we expect adversaries to continue developing such attacks, we’re building defenses designed to carry out the user’s intended task even when someone is actively trying to mislead them.”
OpenAI says it is training AI models to call upon an instruction hierarchy that aims to distinguish between trusted and untrusted instructions. It has also developed multiple AI-powered “monitors” that can identify and block prompt injection attacks. Atlas turns control over to the user when on sensitive sites, such as online shopping services. OpenAI said it’s also using red-teaming (when security teams simulate real-world attacks, pitting hackers against defenders) with internal and external teams and is offering a bounty for people who find bugs. The average payout is $784.
Be careful with AI browsers at work
Despite the risk, there’s pressure on employees to adopt AI systems. With the release of ChatGPT Atlas, 27.7% of enterprises have had at least one person download the AI-powered web browser, according to data security company Cyberhaven. Some of that is likely IT professionals downloading the browser to test it, but the risk of employees using agentic browsers at work is still significant.
“Agentic browsers can simplify and automate the worst possible attacks to steal extremely sensitive data on customers, individuals, patients, sensitive product designs, and highly regulated data with national security implications,” said Cyberhaven CEO Nishant Doshi.
Doshi said that this risk isn’t limited to ChatGPT Atlas and that since AI browsers can act on behalf of the employee, using their credentials to navigate corporate tools, there need to be guardrails.
Current AI and IT security tools are often unable to tell clearly whether data is sensitive or not or where it came from. “Without that important context, they can’t accurately say whether a given piece of data is sensitive or not. Combine that major weakness with the major strength of agentic browsers to automate work, and you have an incident waiting to happen,” said Doshi.
Should I use ChatGPT Atlas or not?
For individuals, it should be okay to use ChatGPT Atlas, as long as you’re aware of its limitations, according to Lee of the SANS Institute. He recommends to avoid syncing Atlas with or directly sharing “financial, medical, or sensitive information with these systems” and to disable permissions that are unneeded.
At work, however, it’s best to proceed with caution. Experts said ChatGPT Atlas should be used in testing environments with limited network reach. It’s also important to track all activity and to incorporate it into a company’s AI governance framework early, said Lee.
The bigger question is whether you need ChatGPT Atlas. While the capabilities are cool, if you must constantly monitor it to ensure it’s doing things correctly, is it really worth the hassle? Likely, you’re familiar enough with the internet to do things yourself, even if it requires you to use a few extra synapses in your brain.
“It is very hard to make a case for why anyone would use this right now,” said Poulton, who believes he can click through sites faster. “It’s a novelty factor. But where does the actual consumer ease of experience come from? It doesn’t change. It doesn’t create any value for me.”
TL;DR
Consumers can use ChatGPT Atlas, just proceed with caution. Don’t use it on work computers without the approval of IT as there could be some vulnerabilities. When using it, keep an eye on how it’s using sensitive information, such as passwords, to navigate across sites and accomplish tasks. To be safe, maybe avoid banking or other sensitive sites.
Read the full article here
