
When the creator of the world’s most advanced coding agent speaks, people in Silicon Valley don’t just listen; they take notes.
Over the past week, the engineering community has been dissecting a thread on X by Boris Cherny, the creator and head of Anthropic’s Claude Code. What started as a casual share of his personal setup has blossomed into a viral manifesto for the future of software development, one that industry insiders are calling a turning point.
"If you haven’t read the Claude Code best practices straight from the author of Claude Code, you’re behind as a programmer," wrote Jeff Tang, a prominent voice in the developer community. Another industry observer, Kyle McNeese, went further, calling Cherny’s updates "game-changing" and suggesting the industry may be facing another "ChatGPT moment."
The excitement stems from a paradox: Cherny’s workflow is surprisingly simple, yet it lets one person produce the output of a small engineering department. As one user put it on X after adopting Cherny’s setup, the experience "looks like StarCraft": less traditional coding than a shift from typing syntax to commanding autonomous units.
Here’s an analysis of the workflows that are reimagining how software is built, straight from the architects themselves.
How running five AI agents simultaneously turns coding into a real-time strategy game
The most striking of Cherny’s revelations is that he doesn’t code linearly. In the traditional development "inner loop," a programmer writes a function, tests it, and moves on to the next one. Cherny, by contrast, operates like a fleet commander.
"I run 5 Claudes in parallel in the terminal," Cherny wrote. "I number the tabs 1 through 5 and use system notifications to know when a Claude needs input."
Using iTerm2’s system notifications, Cherny effectively manages five concurrent work streams: while one agent runs the test suite, another refactors a legacy module and a third drafts documentation. He also runs "5-10 Claudes on claude.ai" in the browser, using a "teleport" command to hand sessions back and forth between the web and his local machine.
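The fleet-commander pattern can be sketched in a few lines of Python. The commands below are placeholders (simple `echo` calls), not the real Claude Code CLI, and the notification step is a plain print; this is an illustration of the orchestration idea, not Cherny’s actual tooling.

```python
# Sketch of the "fleet commander" pattern: several agent processes run in
# parallel, and the operator is pinged the moment any one of them finishes
# or needs attention. The echo commands stand in for real agent invocations.
import subprocess
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_agent(tab, cmd):
    """Run one 'agent' command; return its tab number and exit code."""
    return tab, subprocess.run(cmd, capture_output=True).returncode

def run_fleet(tasks):
    """Run all tasks concurrently, reporting each one as it completes."""
    results = {}
    with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
        futures = [pool.submit(run_agent, tab, cmd) for tab, cmd in tasks.items()]
        for future in as_completed(futures):
            tab, code = future.result()
            # In Cherny's setup, this is where an iTerm2 notification fires.
            print(f"tab {tab}: {'done' if code == 0 else 'needs attention'}")
            results[tab] = code
    return results

# Five numbered "tabs", each with its own placeholder task.
tasks = {i: ["echo", f"agent {i}"] for i in range(1, 6)}
results = run_fleet(tasks)
```

The point of the structure is that the human never blocks on any single agent: attention goes wherever the next completion (or stall) happens, exactly like managing unit groups in a strategy game.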
This embodies the "do more with less" strategy articulated by Anthropic president Daniela Amodei earlier this week. While competitors such as OpenAI pursue multi-trillion-dollar infrastructure buildouts, Anthropic is proving that better orchestration of existing models can yield dramatic productivity gains.
The counterintuitive case of choosing the slowest, smartest model
In a surprising move for an industry obsessed with latency, Cherny revealed that he uses only Anthropic’s heaviest and slowest model, Opus 4.5.
"I use Opus 4.5 for everything," Cherny explained. "It’s the best coding model I’ve ever used, and even though it’s bigger and slower than Sonnet, it ends up being faster most of the time than smaller models because it needs less steering and uses tools better."
For enterprise technology leaders, this is a key insight: the bottleneck in modern AI development is not token-generation speed but the time humans spend fixing AI mistakes. Cherny’s workflow pays a "compute tax" up front, using the smarter model, to avoid a "revision tax" later.
One shared file turns every AI mistake into a lasting lesson
Cherny also detailed how his team is solving the problem of AI amnesia: a standard large language model must relearn company-specific coding styles and architectural decisions in every session.
To address this, Cherny’s team maintains a single file, CLAUDE.md, checked into the git repository. "When Claude gets something wrong, we add it to CLAUDE.md. That way Claude knows not to do it next time," he wrote.
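The thread doesn’t show the file’s contents, but a CLAUDE.md typically reads like a running list of house rules the agent must follow. The entries below are invented illustrations, not excerpts from Cherny’s repository:

```markdown
# CLAUDE.md — project conventions (example entries, not from Cherny's repo)

- Use the existing `logger` module; never call `console.log` directly.
- Every new API route needs an integration test before merge.
- Prefer composition over inheritance in the services layer.
- Never edit generated files under `src/gen/`; change the schema instead.
```

Because the file lives in version control, every correction is reviewed like code and shared by every agent session from then on.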
This practice turns the codebase into a self-improving system. When a human developer reviews a pull request and finds an error, they don’t just fix the code; they tag the AI to update its own instructions. "Every mistake becomes a rule," says Aakash Gupta, a product leader who analyzed the thread. The longer a team works with the agents, the smarter the agents become.
Slash commands and subagents automate the most tedious parts of development
The "vanilla" simplicity one observer praised is achieved through rigorous automation of repetitive tasks. Cherny uses slash commands, custom shortcuts checked into the project’s repository, to trigger complex operations with a single keystroke.
He highlighted one command, /commit-push-pr, which he calls dozens of times a day. Instead of manually typing git commands, writing commit messages, and opening pull requests, the agent handles the bureaucracy of version control autonomously.
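Claude Code reads custom slash commands from markdown files checked into the repository under `.claude/commands/`, where the file name becomes the command and the file body becomes the prompt. Cherny didn’t publish his version, so the following is a hypothetical sketch of what such a command could contain:

```markdown
<!-- .claude/commands/commit-push-pr.md — illustrative, not Cherny's actual file -->
Stage all current changes, write a concise commit message summarizing them,
push the branch, and open a pull request with a short description of what
changed and why. Do not amend or rewrite existing commits.
```

Because the command is just a file in the repo, the whole team inherits the same shortcut the moment they pull.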
Cherny also uses subagents, specialized AI personas, to handle specific phases of the development lifecycle: a code-simplifier agent to clean up the architecture after major work lands, and a verify-app agent to run end-to-end tests before anything ships.
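Subagents in Claude Code are likewise defined as markdown files, under `.claude/agents/`, with YAML frontmatter describing the persona. Cherny didn’t share his definitions, so this is only a sketch of what a verify-app agent might look like:

```markdown
---
name: verify-app
description: Runs end-to-end checks on the app before anything ships.
---
You are a release verifier. Build the app, run the full test suite,
exercise the main user flows, and report any failure with a minimal
reproduction. Never modify source files; only verify and report.
```

Splitting roles this way keeps each agent’s instructions short and its behavior predictable, rather than stuffing every rule into one giant prompt.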
Why validation loops unlock the real value of AI-generated code
If there’s one reason Claude Code reportedly reached $1 billion in annual recurring revenue so quickly, it’s probably the validation loop: the AI is not just a text generator, it’s its own tester.
"Claude uses the Claude Chrome extension to test every change made to claude.ai/code," Cherny wrote. "It opens a browser, tests the UI, and iterates until the code works and the UX is good."
He argues that giving the AI a way to validate its own work, whether by driving a browser, running bash commands, or executing test suites, improves the quality of the final result "2-3x." The agents don’t just write code; they prove it works.
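Stripped of the browser machinery, the loop is simply generate, check, revise until the check passes. A minimal sketch, with toy stand-ins for the agent and the test suite (nothing here is Anthropic’s actual implementation):

```python
# Minimal sketch of a validation loop: revise until an automated check
# passes. `check` and `revise` are toy stand-ins for a test suite and an
# agent edit step.
def validation_loop(candidate, revise, check, max_iters=5):
    """Iterate revise -> check until check passes; return result and attempt count."""
    for attempt in range(1, max_iters + 1):
        ok, feedback = check(candidate)
        if ok:
            return candidate, attempt
        candidate = revise(candidate, feedback)
    raise RuntimeError("validation never passed")

# Toy stand-ins: the "test suite" demands the word "tested"; the "agent"
# applies the feedback by appending the missing word.
check = lambda code: ("tested" in code, "tested")
revise = lambda code, missing: code + " " + missing

result, attempts = validation_loop("draft code", revise, check)
print(result, attempts)  # prints: draft code tested 2
```

The capped iteration count matters in practice: an agent that can’t converge should surface a failure for a human rather than loop forever.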
Cherny’s workflow shows the future of software engineering
The response to Cherny’s thread suggests a pivotal shift in how developers think about their tools. For years, "AI coding" meant autocomplete in a text editor, a faster way to type. Cherny demonstrated that it can function as an operating system for labor itself.
"If you are already an engineer and need more power, read this," Jeff Tang summarized on X.
The tools to quintuple human output already exist. All it takes is a willingness to stop thinking of AI as an assistant and start treating it as a workforce. Programmers who make that mental leap aren’t just more productive; they end up playing a completely different game while everyone else is still typing.
