The Changing Landscape of Software Development and the Rise of Agents
Robert Brennan draws on over a decade of experience building open-source development tools to discuss coding agents and how to use them effectively. He introduces OpenHands (formerly OpenDevon), an open-source software development agent built by him and his team, emphasizing that the software development environment is changing rapidly.
"Software development in 2025 is changing. Our work is different from two years ago, and it will be different again two years from now."
He says that coding itself will gradually disappear. However, software engineering is not going away -- rather, the ability to think critically about problems and understand users and business needs will become even more important.
"We're not paid to bang on keyboards -- we're paid to think critically about the problems in front of us."
While AI excels at repetitive code writing and execution, seeing the big picture, empathizing with users, and considering business goals remain firmly the domain of humans.
What Is a Coding Agent?
The word "agent" gets thrown around a lot these days, but its essence lies in the ability to take actions in the real world (agency). The core tools of a software engineer -- code editor, terminal, and web browser -- are the same key tools given to agents.
"These are the core tools of a software engineer, and we give these same tools to agents to perform the full development loop."
Coding agents evolved from basic code autocomplete (e.g., GitHub Copilot) toward performing increasingly asynchronous and autonomous tasks. Now you can describe what you want in a sentence or two, and the agent works on it independently for 5-15 minutes, then delivers the result.
"You can send out multiple agents simultaneously, and meanwhile chat with colleagues or take a short break. It's a completely different but far more powerful way of working."
How Agents Work: Internal Architecture
The core of an agent is a loop of interaction between a large language model (LLM) and the outside world.
- The LLM decides the next action (e.g., read a file, modify code, run a command, browse a webpage).
- The action is executed, and the result (output) is obtained.
- The result is fed back to the LLM to decide the next action.
This process repeats, getting progressively closer to the goal.
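The loop described above can be sketched in a few lines of Python. This is a minimal illustration of the pattern, not OpenHands' actual implementation; `llm_decide` and the tool dispatch are hypothetical stand-ins for a real model call and real tools:

```python
# Minimal agent loop sketch: the LLM picks an action, the environment
# executes it, and the observation is fed back in for the next decision.

def llm_decide(history):
    # A real agent would send `history` to an LLM and parse its reply.
    # This stub reads one file, then declares the task finished.
    if not history:
        return {"tool": "read_file", "path": "README.md"}
    return {"tool": "finish"}

def execute(action):
    # Dispatch the chosen action to the matching tool.
    if action["tool"] == "read_file":
        try:
            with open(action["path"]) as f:
                return f.read()
        except OSError as e:
            return f"error: {e}"
    return ""

def run_agent(max_steps=10):
    history = []
    for _ in range(max_steps):            # cap steps so the loop always ends
        action = llm_decide(history)
        if action["tool"] == "finish":
            break
        observation = execute(action)
        history.append((action, observation))  # result goes back to the LLM
    return history
```

The `max_steps` cap matters in practice: without it, a confused model can loop forever.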
Key Tools That Agents Use
- Code Editor: Rather than replacing entire files, agents use find & replace or diff-based editors to efficiently modify only the necessary parts.
"When you need to change one line in a thousand-line file, outputting the entire file is inefficient."
- Terminal: The agent runs shell commands and reads their output, but handling long-running commands and parallel execution makes this more complex than it seems.
- Web Browser: Instead of simply passing raw HTML, agents use accessibility trees or markdown conversion to deliver only the necessary information to the LLM.
"Recently, we've been experimenting with labeling nodes on page screenshots so the agent can specify what to click. This field is evolving really fast."
- Sandboxing: Agents run in isolated Docker containers to prevent dangerous actions.
"We need to make sure the agent never runs 'rm -rf' on my home directory!"
He also emphasizes that external API access (e.g., GitHub tokens, AWS accounts) must strictly follow the principle of least privilege.
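A diff-style edit like the one the code-editor tool performs can be as simple as a guarded string replacement: the agent supplies the exact old snippet and its replacement, and the tool rejects ambiguous matches. This is a sketch of the idea, not OpenHands' actual editor:

```python
def apply_edit(path, old, new):
    """Replace exactly one occurrence of `old` with `new` in the file.

    Refusing zero or multiple matches keeps the edit unambiguous, which
    is why agents send a unique snippet instead of rewriting the file.
    """
    with open(path) as f:
        text = f.read()
    count = text.count(old)
    if count != 1:
        raise ValueError(f"expected exactly one match, found {count}")
    with open(path, "w") as f:
        f.write(text.replace(old, new))
```

For a one-line change in a thousand-line file, the agent only has to emit the snippet and its replacement rather than the whole file.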
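The least-privilege idea can be made concrete in how the sandbox container is launched. This sketch only builds a `docker run` argument list (the flags are real Docker CLI options; the image name and limits are placeholder assumptions):

```python
def sandbox_command(image="agent-sandbox:latest", workdir="/workspace"):
    """Build a `docker run` command that isolates an agent.

    Each flag removes a capability the agent does not need: no network,
    no extra privileges, a read-only root filesystem, and hard limits.
    """
    return [
        "docker", "run", "--rm",
        "--network", "none",                    # no network access by default
        "--cap-drop", "ALL",                    # drop all Linux capabilities
        "--security-opt", "no-new-privileges",  # block privilege escalation
        "--read-only",                          # root filesystem is read-only
        "--tmpfs", "/tmp",                      # writable scratch space only
        "--memory", "2g",                       # memory ceiling
        "--cpus", "2",                          # CPU ceiling
        "-v", f"{workdir}:{workdir}",           # mount only the project
        "-w", workdir,
        image,
    ]
```

Credentials like GitHub tokens would then be injected per-task with the narrowest scope that still lets the agent do its job.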
Best Practices for Using Agents
1. Start Small
- Tasks that can be completed quickly and have clear completion criteria are ideal.
- For example, passing tests, fixing lint errors, resolving merge conflicts -- repetitive, tedious tasks are highly effective for agents.
"Those little annoying tasks that developers hate doing? AI handles them really well."
- As you gain experience, you can gradually delegate larger tasks.
- In fact, Robert says "90% of my code is now written through agents."
2. Give Clear Instructions
- You need to specify not just the desired outcome, but also how you want the work done.
- For example, specifying the framework to use, test-driven development approach, and exact file/function names to modify yields faster and more accurate results.
"If you tell the agent exactly which files to modify, you can significantly reduce the time and cost spent searching the codebase."
3. Code Is Cheap!
- In the AI era, you can easily throw away code, experiment, and build prototypes.
- It's important to develop the habit of boldly discarding failed results and starting fresh.
"If an idea hits me while walking, I voice-instruct OpenHands, and when I get to the office, a PR is waiting. I discard half and merge half."
4. Code Review Is Mandatory!
- You must never blindly merge AI-generated code.
- Auto-merging without code review leads to duplicate code and technical debt, and the codebase quickly becomes a mess.
"If you merge AI-generated code without code review, things spiral out of control fast."
- A human must personally review the code and actually run it to verify there are no issues.
"Trust, but always verify. In the end, a human must check at least once."
- From OpenHands' experience, when PR ownership isn't clear, nobody takes responsibility and problems arise.
"Now whoever opens the PR is directly responsible for merging it. When issues come up, it's clear who to ask."
Representative Use Cases for Agents
Agents are general-purpose, but they show particular strengths in the following tasks:
- Resolving merge conflicts: "OpenHands, resolve the merge conflicts in this PR." A repetitive, well-defined task that agents handle with nearly 99% accuracy.
- Implementing PR feedback: "OpenHands, fix it the way that person suggested." When a reviewer has already left clear change requests, the agent implements them directly.
- Fixing small bugs: "OpenHands, fix this issue we just discussed." You can give instructions directly from Slack and get quick fixes without opening your IDE.
- Infrastructure changes: Agents handle syntax-heavy tools like Terraform well, searching documentation and applying changes.
- Database migrations: They write migration code that follows best practices for indexes, foreign keys, and more.
- Fixing test failures and expanding coverage: "Test coverage is low here -- please increase it." Useful for safely adding tests or fixing broken ones.
- Building apps from scratch: For internal apps, you can quickly build prototypes and use them even without code review.
Conclusion and Community Invitation
Robert closes by inviting everyone to join the OpenHands community and help build it together.
"Come build with us on GitHub, Slack, and Discord!"
Key Concepts Summary
- Coding agents: Strong at repetitive, well-defined tasks; evolving toward increasingly autonomous and asynchronous work
- Large language models (LLMs): The brain of the agent, interacting repeatedly with the outside world
- Core tools: Code editor, terminal, web browser, sandboxing
- Best practices: Start small, give clear instructions, always do code review, code is disposable
- Representative use cases: Merge conflicts, PR feedback, bug fixes, infrastructure changes, DB migrations, testing, app development
This was a talk that clearly and practically explained the current state and future of AI-based software development agents, along with real-world advice and experience on how to use them effectively.
