ChatGPT Agents: When Every Agent Gets Its Own Computer
ChatGPT Agents is the clearest sign yet of how OpenAI thinks agents will show up in our work life. It's also clearly not the destination - it's a stepping stone toward something much bigger. After spending some time with it, here's what I'm taking away, and where I think this is heading.
Codex quietly became the harness for everything
OpenAI has been investing heavily in Codex for a while, and that investment is starting to pay off in places that aren't obviously about coding. Codex is no longer just a code agent - it's the general harness OpenAI is using for any complex task an agent has to execute. ChatGPT Agents are built on top of it, running in a cloud environment.
You can feel the difference immediately when you compare the same task across surfaces. I recently asked plain ChatGPT (the web interface) to create a presentation and upload it to a specific folder in SharePoint. The result was decent - but not at the quality level I needed. Doing the same task through a ChatGPT Agent, with access to my SharePoint and the underlying Codex infrastructure, produced a noticeably better result.
That gap isn't about the model. It's about the skills OpenAI has been building into Codex over the last year to turn it into a multipurpose work-agent harness - file handling, document wrangling, code execution, the whole stack a knowledge worker actually uses.
Every agent gets its own computer
The other thing that surprised me: each ChatGPT Agent gets its own cloud environment. A real one. With:
- A file system the agent can use to store its memory as markdown files.
- The ability to produce files and code, and keep them around.
- The ability to wrangle office documents of all sorts.
- An upload step during agent creation, so you can hand it your own files up front.
In effect, every ChatGPT Agent has its own computer in the cloud. (You can tell, by the way, because the environment auto-shuts down when idle - when you come back to an agent, it tells you the environment is starting up.)
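OpenAI hasn't documented how that markdown-based memory is laid out, but the idea is simple to picture. Here's a minimal sketch, with a hypothetical `AgentMemory` class and file layout of my own invention, not OpenAI's actual implementation:

```python
from pathlib import Path
from datetime import date

class AgentMemory:
    """Hypothetical sketch: persist agent memory as markdown notes on disk."""

    def __init__(self, root: str):
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def remember(self, topic: str, note: str) -> Path:
        # One markdown file per topic; new notes are appended as list items.
        path = self.root / f"{topic}.md"
        header = f"# {topic}\n\n" if not path.exists() else ""
        with path.open("a", encoding="utf-8") as f:
            f.write(f"{header}- {date.today().isoformat()}: {note}\n")
        return path

    def recall(self, topic: str) -> str:
        path = self.root / f"{topic}.md"
        return path.read_text(encoding="utf-8") if path.exists() else ""
```

The appeal of this pattern is that the memory is plain files: the agent can grep it, edit it, and you can read it yourself when you open the environment.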
The whole industry is converging on "agents need a computer"
Once you see it from this angle, you start seeing it everywhere:
- Anthropic is building Claude's desktop app into an agent that uses your computer.
- Perplexity literally calls its new product a "personal computer" - it runs on a Mac Mini.
- OpenAI built the Codex desktop app for Mac and Windows, and is now putting the same capabilities into a cloud computer exposed through ChatGPT Agents.
The reason is the same in every case: to do the kinds of tasks a human does in a day, an agent needs the same broad tooling a human has on a computer. There's no shortcut. Either the agent works on your machine, or it gets its own.
Where this is going
ChatGPT Agents is clearly a stepping stone toward a larger vision. A few things I expect to land soon:
- Remote-controlling Codex on your local machine. Imagine an always-on Mac Mini with access to all your files and tools, and the ChatGPT app (or a dedicated Codex app) on your phone as the remote. Send it a task - update this presentation, draft this email - and let it work on your behalf.
- Agents on personal plans. Right now, ChatGPT Agents is business-only. But OpenAI is positioning it as the successor to GPTs - and GPTs were one of the most popular ways for individuals to share workflows and AI-encoded expertise. Agents do the same job, only much more powerfully, because they bundle files, skills, and connectors. It would be strange if this stayed locked away from personal users for long.
- Consumption-based pricing for agent runs. Notion, Perplexity, and Anthropic are all moving toward consumption-based pricing layered on top of subscription allowances. I'd be surprised if ChatGPT Agents didn't follow.
A small but underrated win: plugin discovery
One detail worth calling out. OpenAI's strategy for Codex plugins is quietly paying off - and the place they've pulled ahead of Claude is in discovery.
A plugin in this world is essentially a combination of an integration (usually via an MCP server) and dedicated skills. When you enable an integration in ChatGPT Agents, the associated skills come with it automatically - there's no separate skill management to think about.
That sounds small, but it makes the out-of-the-box experience much more intuitive, especially for non-experts. They don't have to know what a "skill" is in order to benefit from one.
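The bundling is easy to express in code. A sketch of the idea — every name here (`Plugin`, `mcp_server`, the placeholder URL) is my own invention to illustrate the shape, not OpenAI's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Plugin:
    """Hypothetical plugin: one integration plus the skills that ride along."""
    name: str
    mcp_server: str                                   # the integration endpoint
    skills: list[str] = field(default_factory=list)   # bundled skills

class AgentConfig:
    def __init__(self):
        self.integrations: list[str] = []
        self.skills: list[str] = []

    def enable(self, plugin: Plugin):
        # Enabling the integration pulls its skills in automatically --
        # the user never manages skills as a separate concept.
        self.integrations.append(plugin.mcp_server)
        self.skills.extend(plugin.skills)

sharepoint = Plugin(
    name="sharepoint",
    mcp_server="https://example.com/mcp/sharepoint",  # placeholder URL
    skills=["find_document", "update_presentation"],
)
cfg = AgentConfig()
cfg.enable(sharepoint)
```

One toggle, two things configured — that's the whole discovery win in miniature.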
Your information architecture is now strategic
With the race to build custom agents for both work and private life heating up, how and where you organize your files, notes, tasks, and the rest of your digital life matters more than it ever did before.
There are really two challenges:
- Access. Giving the agent access to the places where your information actually lives.
- Effective access. Giving it access in a way the agent can actually exploit.
Those aren't the same problem. Most of my work files live in Microsoft OneDrive, and there's a dedicated SharePoint connector for ChatGPT that agents can use. So in principle, access is solved. In practice, the connector isn't great - finding files through it is noticeably less powerful than what an agent could do on my local hard drive. That's not a ChatGPT problem; it's a function of how Microsoft has structured the underlying MCP server and how it handles semantic search.
The implication for the rest of us is clear: it's no longer enough to ask "can my agent reach this?" You also have to ask "can my agent use it well?" The choice of where to keep your knowledge is starting to look a lot like the choice of where to host your code.
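A toy contrast makes the distinction concrete. Both functions below "have access" to the same files; only one can actually exploit them. (All filenames and contents are invented for illustration.)

```python
# Toy illustration of access vs. effective access: both searchers can
# reach the same files, but one surfaces far fewer of them per query.
docs = {
    "q3_review.md": "Quarterly revenue review for the Berlin office",
    "notes.md": "Revenue notes: Q3 numbers look strong",
    "todo.md": "Book flights for the offsite",
}

def title_search(query: str) -> list[str]:
    # What a weak connector might offer: matching on filenames only.
    return [name for name in docs if query.lower() in name.lower()]

def full_text_search(query: str) -> list[str]:
    # What an agent with real file access can do: look inside the files.
    return [n for n, body in docs.items() if query.lower() in body.lower()]
```

Here `title_search("revenue")` finds nothing while `full_text_search("revenue")` finds two relevant documents. Same files, very different usefulness — and that gap is exactly what a weak connector imposes on an agent.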
The agent platform is the opportunity of 2026
Stepping back: building the agent platform is one of the most complex - and most promising - opportunities of 2026. The winner will be the one who can combine a user's full digital context with the right access to the right tools, and turn both into a productivity stack that holds up day after day.
Right now, Anthropic, OpenAI, and Notion are leading this race, each from a different angle. Perplexity has made an interesting contribution with Perplexity Personal Computer. There are also a number of smaller players doing interesting work but without the distribution to match.
The most interesting unknown is Apple - and the interesting part is precisely that no one really knows how they'll improve this experience. Right now, every agent on your computer is working through the same interface a human would. Codex is a good example: it has quite powerful computer-use capabilities, driving the machine visually to reach UI-first apps the same way you would, click by click. It works, but it's a brittle way to build serious workflows. If Apple shipped a dedicated protocol at the OS level - something that made UI-first apps natively accessible to AI agents instead of forcing them to pretend to be a person with a mouse - it would change the game dramatically. And it's pretty clear Apple is paying attention to how badly people want this kind of experience. The signal is hard to miss: Mac Minis are selling out because users are turning them into always-on agent computers. WWDC and the September releases will tell us how far Apple is willing to go.
ChatGPT Agents, in that sense, is less an end product and more a public preview of OpenAI's hand. The interesting question isn't whether it's good today - it's already useful. The interesting question is what they (and everyone else) build on top of it next.