In this talk, Jacob Lauritzen, CTO of Legora, highlights the limitations of chat interfaces for interacting with complex AI agents and explains how to collaborate with them more effectively. He argues that AI agents need to go beyond simply "doing" tasks — they need to support human control and trust during the planning and review stages — and emphasizes the need for high-bandwidth interfaces tailored to each industry's unique characteristics. This, he shows, enables humans and AI agents to collaborate more efficiently on complex work.
1. The Problem with How We Currently Interact with Complex AI Agents
The talk opens by illustrating the problems that arise when AI agents tackle complex tasks today. Jacob asks the audience to imagine the following scenario:
"You're asked to research something, draft a contract, and told not to make mistakes. The agent starts thinking, starts reading, spins up multiple sub-agents, does web searches, writes files, spins up more sub-agents, reads more, writes more files, keeps going, takes forever. Thirty minutes later it hands you a contract. You look at it — clause three seems wrong. You ask: 'Did you make a mistake here? Can you look at this other document?'"
The agent then starts over, and along the way a phenomenon called compaction kicks in: the context window fills up and earlier content gets summarized away, leading to context rot, where the agent forgets what it learned before. As a result, the new contract it produces often fails to incorporate the previous feedback, leaving the user deeply frustrated. 😔
2. Introducing Legora and the New Economics of AI Agent Work
Jacob introduces himself as CTO of Legora, a legal tech startup, and describes Legora as a vertical AI company providing a collaborative AI workspace for law firms. Legora has grown rapidly to more than 1,000 customers across 50+ markets, and is hiring engineers in London.
He explains that the goal of vertical AI companies is to have agents complete increasingly complex tasks end to end. In the past six to twelve months, how that goal is achieved has changed dramatically, driven by a shift in the economics of production:
"Before, completing an end-to-end task was all about doing the actual work. Today it's different. Now planning and reviewing have become the new bottlenecks."
Performing the actual work has become cheap and easy. What now consumes the most time is planning, i.e., pinning down specifications and non-functional requirements, and reviewing the outputs. Jacob introduces the verifier's rule as a framework for completing complex tasks across the planning, execution, and review phases:
"The verifier's rule — a term coined by Jason — says: 'If a task is solvable and easy to verify, AI will solve it.'"
This applies to agents just as it does to foundation models: make a task verifiable, and an agent can iterate until it reaches the goal.
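To make the iterate-until-verified idea concrete, here is a minimal Python sketch (my illustration, not code from the talk); `generate_draft` and `verify` are hypothetical stand-ins for an agent call and a task-specific verifier:

```python
from typing import Callable, Optional

def solve_with_verifier(
    generate_draft: Callable[[str, Optional[str]], str],  # hypothetical agent call
    verify: Callable[[str], tuple[bool, str]],            # hypothetical task-specific verifier
    task: str,
    max_iterations: int = 5,
) -> Optional[str]:
    """Generate-verify loop: if the task is easy to verify, the agent
    can iterate until the check passes (the verifier's rule)."""
    feedback: Optional[str] = None
    for _ in range(max_iterations):
        draft = generate_draft(task, feedback)  # produce or revise a draft
        ok, feedback = verify(draft)            # cheap automated check with feedback
        if ok:
            return draft                        # verified; stop iterating
    return None                                 # never verified; escalate to a human
```

The harder the `verify` step is to automate, the less this loop buys you, which is exactly the spectrum the next examples walk through.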
However, not every task in every industry sits at the same point on the verifiability spectrum. In law, for example:
- Checking definitions in a contract: Very easy to verify and complete.
- Drafting a contract: Easy to solve, but genuinely difficult to verify.
  - As Jacob puts it, "The only time you can really find out whether the language in a contract works is when you go to court and a judge essentially verifies it and tells you good or bad." 🤯
- Litigation strategy: Virtually impossible to verify.
  - Ask five lawyers for the optimal strategy on the same case and you'll get five different answers — there is no objective ground truth, making it very hard for AI to solve.
The same is true in coding: some things are easy, but "building a successful consumer app" is very hard to verify.
3. Two Core Elements of Effective Human–AI Collaboration: Trust and Control
Jacob stresses that AI agents should handle the work while humans stay involved at the points that matter. He describes two elements critical to human–agent collaboration:
- Trust: How much a human can rely on the agent's work and minimize the need for review.
- Control: How effectively a human can inject their own knowledge and steer the agent during the work.
3.1. How to Build Trust
Several strategies help increase trust:
- Make tasks more verifiable:
  - Coding example: Granting browser access and using test-driven development (TDD) turns implementation into a verifiable task, improving agent performance.
  - Legal contract example: When direct verification is difficult, use proxies. For instance, test whether a draft is similar to past "golden contracts" to help the agent produce better results (see the sketch after this list). 📜
- Decompose tasks: Break a single complex task into many smaller ones. Humans handle the higher-stakes parts; the agent handles the easily verifiable pieces (e.g., applying formatting, checking definitions).
- Add guardrails: Restricting what an agent is allowed to do increases trust.
  "By limiting what it can do, you essentially gain more trust, because you know the agent won't do something weird."
  For example, with Claude Code, low trust means asking permission for every action — making it useless. High trust means "YOLO mode" — and you just have to hope it doesn't drop your production database. 😱
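As a concrete reading of the golden-contract proxy above, here is a minimal sketch (one plausible implementation of my own, not Legora's actual system); `embed` is a hypothetical function mapping text to an embedding vector:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def passes_golden_check(
    draft_clause: str,
    golden_clauses: list[str],
    embed,                    # hypothetical: maps text to an embedding vector
    threshold: float = 0.85,  # would need tuning per clause type
) -> bool:
    """Proxy verifier: accept a draft clause if it is close enough to at
    least one clause from a trusted 'golden contract'. This does not prove
    the clause is legally sound; it only makes drafting verifiable enough
    for the agent to iterate against."""
    draft_vec = embed(draft_clause)
    return any(
        cosine_similarity(draft_vec, embed(g)) >= threshold
        for g in golden_clauses
    )
```

The specific threshold and similarity measure matter less than the principle: a cheap automated check gives the agent something to iterate against, which is exactly what the verifier's rule asks for.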
3.2. How to Increase Control
Increasing control is equally important. Jacob explains that a complex agent task resembles a tree of work, or more precisely a DAG (Directed Acyclic Graph) of subtasks, since branches can share dependencies.
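As a rough sketch of the work-DAG idea (my own construction; all node names are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class TaskNode:
    name: str
    depends_on: list["TaskNode"] = field(default_factory=list)

# Hypothetical contract-task DAG: research feeds both the review and the
# draft, and the final report depends on everything upstream.
research = TaskNode("company research")
review = TaskNode("contract review", depends_on=[research])
draft = TaskNode("draft contract", depends_on=[research, review])
report = TaskNode("write report", depends_on=[draft])

# With low control, a human only sees the root ("write report") after the
# agent has finished; the techniques below add control at inner nodes.
```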
- Low control (early agent model): The agent completes all subtasks — company research, contract review, report writing — and only presents its output at the very end. The human can only make a judgment at the root level.
  "Basically I can only make a judgment at the root level. After the agent does all this work and comes back to me, I might try to have a conversation again. This is the same example I opened with."
- Planning: Steering the agent upfront and reaching agreement on the approach increases control.
  "Planning essentially lets you steer the agent in advance and align on the approach."
  The downside is that it requires the human to do a lot of work upfront to surface everything the agent needs to know — like briefing a colleague on a plan, agreeing on it, then hearing nothing until the final deliverable arrives. 🤷‍♀️
- Skills: A highly effective way to encode human judgment into task nodes.
  "Skills are really, really, really good. The reason skills are so good is that you can encode human judgment into a task node."
  For example, you can define a skill that dictates how a confidentiality clause should be reviewed. This allows you to handle contingencies in advance and supports progressive disclosure.
- Elicitation: When skills are absent, have the agent ask the user directly.
  "Elicitation means asking the user — asking the human. You may have skills, but instead of providing all the information, the agent will come to you and say: 'I don't know how to handle this. What do you want?'"
  The key is not to let the agent get stuck. The agent should make a decision and keep going even when uncertain, but log that decision in a decision log so a human can review it later and reverse it if needed (a minimal sketch follows this list). 📝
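A minimal sketch of how skills, elicitation, and a decision log might fit together (my illustration with hypothetical names, not Legora's implementation):

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class Decision:
    node: str        # which task node the decision belongs to
    question: str    # what was ambiguous
    choice: str      # what the agent decided
    reversible: bool = True

@dataclass
class DecisionLog:
    entries: list[Decision] = field(default_factory=list)

    def record(self, decision: Decision) -> None:
        self.entries.append(decision)

    def for_review(self) -> list[Decision]:
        # Everything a human may want to inspect and possibly reverse later.
        return [d for d in self.entries if d.reversible]

def resolve(node: str, question: str, skills: dict[str, str],
            log: DecisionLog,
            ask_human: Optional[Callable[[str], str]] = None) -> str:
    """Prefer an encoded skill, then elicitation; otherwise pick a default
    and record it in the decision log instead of getting stuck."""
    if question in skills:
        return skills[question]        # human judgment encoded in advance
    if ask_human is not None:
        return ask_human(question)     # elicitation: ask the human directly
    choice = "default"                 # keep going rather than stall
    log.record(Decision(node, question, choice))
    return choice
```

The design point is the fallback order: encoded judgment first, a question to the human second, and a logged-but-reversible default last, so the agent never blocks and the human never loses oversight.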
4. The Need for High-Bandwidth Interfaces Beyond Chat
Jacob argues that as complex agent work trees scale to 10× or 100× their current size, today's chat interface cannot handle that complexity:
"You don't want this in chat. You don't want to open chat and have an endlessly long conversation where you need to answer 50 questions. You won't know what to answer, and you won't be able to do it well because you don't have the right context. So chat is not it. Chat is one-dimensional. It's a very low-bandwidth interface, and it tries to collapse this tree of work into one linear thing."
He argues that humans and agents need to collaborate through high-bandwidth artifacts. What those look like will vary by industry and vertical:
- Legora's example: the Document 📄
  - Collaborate with the agent through a document, just as you would with a colleague.
  - Change only specific clauses, add comments, tag the agent or a teammate, and hand off specific sections of the document to specialized agents.
- Legora's example: Tabular Review 📊
  - When reviewing a contract, the agent runs a "tabular review" and delivers results in a format users are already familiar with.
  - It flags items worth attention, letting users quickly spot problems and inject their own judgment (see the sketch after this list).
  "I can go in and very quickly see where the issues are. So the control is high. It's very effective at injecting my judgment. And I get a very fast idea of what the agent actually did."
These artifact-based interfaces stand in contrast to the chat-style UIs most agent products converge on today, which present the work post hoc as a single linear stream.
Jacob emphasizes that using a chat box as an input method is great, but it should not be the primary mode of collaborating with complex agents. Language is a universal interface, but agents are not humans and should not be constrained to human language alone:
"But agents are not humans. I am constrained by language, but agents are not humans. So we should not limit agents to only human language. Thank you." 🙏
Conclusion
Jacob Lauritzen makes the case that complex AI agents need to move beyond simple chat-based interaction and toward high-bandwidth interfaces that meaningfully increase human trust and control. This is the foundation for a future in which agents can tackle ever more complex and specialized work in genuine synergy with humans. 🚀
