Jacob Lauritzen, CTO of Legora, explains why chat is too narrow for complex AI-agent work. As agents take on larger tasks, the bottleneck is no longer only execution. It is planning, review, trust, and control.
1. Problems With Today's Interaction Model for Complex AI Agents
The talk begins with a familiar failure mode. An agent researches, drafts, launches subagents, reads files, writes files, and eventually returns a contract. The user spots an issue, asks for a correction, and the agent begins again.
As the conversation grows, context degrades. The agent may forget earlier reasoning or feedback, and the second result can be worse than the first. The problem is not only model capability. It is the interaction structure.
2. Legora and the New Economics of AI-Agent Work
Lauritzen describes Legora as a vertical AI company building a collaborative AI workspace for legal teams. In vertical AI, the goal is to let agents complete increasingly complex end-to-end tasks inside a specific professional domain.
The economics have shifted. Execution has become cheap and increasingly capable, while planning and reviewing have become the new bottlenecks. If a task is solvable and easy to verify, agents can usually make progress. But many valuable tasks are hard to verify.
Legal work makes the spectrum clear. Checking a definition in a contract is easy to verify. Drafting a contract is harder because quality may only be tested much later. Litigation strategy is even harder because there may be no single objective answer.
3. Two Requirements for Better Human-Agent Collaboration: Trust and Control
Lauritzen frames collaboration around two concepts: trust and control. Trust means the human can rely on the agent enough to reduce review burden. Control means the human can inject judgment and steer the work at the right moments.
3.1. How to Increase Trust
Trust improves when work becomes more verifiable. In software, tests and browser access can make success easier to check. In legal work, proxy tests such as comparison with strong past contracts can help.
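As a concrete illustration of a proxy test, an agent's draft clause could be scored by similarity to clauses from vetted past contracts and flagged for human review when it diverges too far. A minimal sketch in Python; the reference clauses and the threshold below are invented for illustration, not from the talk:

```python
from difflib import SequenceMatcher

# Hypothetical reference clauses drawn from strong past contracts.
REFERENCE_CLAUSES = [
    "Either party may terminate this agreement with thirty days written notice.",
    "This agreement may be terminated by either party upon thirty days notice in writing.",
]

def proxy_score(draft_clause: str) -> float:
    """Return the best similarity between a draft and any reference clause."""
    return max(
        SequenceMatcher(None, draft_clause.lower(), ref.lower()).ratio()
        for ref in REFERENCE_CLAUSES
    )

def needs_human_review(draft_clause: str, threshold: float = 0.6) -> bool:
    """Flag drafts that diverge too far from vetted language."""
    return proxy_score(draft_clause) < threshold
```

The point is not that string similarity measures legal quality; it is that even a crude, cheap check turns an unverifiable draft into something partially verifiable, which is where trust starts.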
Another strategy is decomposition. Break a large task into smaller parts so humans handle the ambiguous parts while agents perform the easier, more verifiable work. Guardrails also help: limit which files an agent can edit, which directories it can read, or which sites it can search.
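A guardrail of this kind can be as simple as a path allowlist checked before every file operation. A minimal sketch, with invented directory names (`drafts`, `precedents`); a real system would enforce this at the tool layer, not trust the agent to call it:

```python
from pathlib import Path

# Hypothetical allowlist: the agent may only touch files under these roots.
WRITABLE_ROOTS = [Path("drafts")]
READABLE_ROOTS = [Path("drafts"), Path("precedents")]

def _is_under(path: Path, roots: list[Path]) -> bool:
    """True if the resolved path sits under any allowed root."""
    resolved = path.resolve()
    return any(resolved.is_relative_to(root.resolve()) for root in roots)

def check_edit(path: str) -> None:
    """Refuse edits outside the writable roots."""
    if not _is_under(Path(path), WRITABLE_ROOTS):
        raise PermissionError(f"agent may not edit {path}")

def check_read(path: str) -> None:
    """Refuse reads outside the readable roots."""
    if not _is_under(Path(path), READABLE_ROOTS):
        raise PermissionError(f"agent may not read {path}")
```

Resolving paths before the check matters: it keeps `drafts/../precedents/x` from slipping past a naive prefix comparison.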
The point is not to make agents weak. It is to make their operating area legible enough that people can trust what they are doing.
3.2. How to Increase Control
Complex agent work resembles a tree or DAG of subtasks. If the human only sees the final answer, they can only intervene at the root level after the work is done.
Planning helps by aligning on an approach before execution, but it can demand too much up-front labor from the user. Skills are more powerful because they encode human judgment into specific nodes of the work. When no skill exists, elicitation lets the agent ask questions, but agents should not freeze whenever uncertainty appears.
Lauritzen suggests decision logs: agents can make a provisional choice, keep working, and leave a record for later review and correction.
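A decision log can be little more than an append-only record the agent writes as it works, keyed to nodes in the task tree. A minimal sketch; the names here are illustrative, not Legora's API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Decision:
    node: str        # which subtask in the work tree the choice belongs to
    question: str    # the ambiguity the agent hit
    choice: str      # the provisional answer it went with
    rationale: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    reviewed: bool = False

class DecisionLog:
    """Append-only log: the agent keeps working; humans review later."""

    def __init__(self) -> None:
        self._entries: list[Decision] = []

    def record(self, node: str, question: str, choice: str, rationale: str) -> None:
        self._entries.append(Decision(node, question, choice, rationale))

    def pending_review(self) -> list[Decision]:
        return [d for d in self._entries if not d.reviewed]
```

The design choice is the trade-off Lauritzen names: the agent does not freeze on uncertainty, but every provisional judgment stays visible and correctable instead of being buried in a transcript.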
4. The Need for High-Bandwidth Interfaces Beyond Chat
As task trees become much larger, chat becomes a poor interface. It is linear and low-bandwidth, and it forces a complex graph of work into a single long thread.
Lauritzen argues that agents and humans should collaborate through high-bandwidth artifacts. In Legora, one artifact is the document itself: users can edit clauses, leave comments, tag agents, and assign specific parts of the document to specialized agents.
Another example is tabular contract review. A table can show issues, flags, and areas needing attention in a form lawyers already understand. This gives the user faster insight into what the agent did and where judgment is needed.
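Such a review artifact is essentially a list of structured findings rendered as rows rather than as conversation. A minimal sketch, with invented field names, to show the shape of the data behind the table:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    clause: str
    flag: str      # e.g. "missing", "non-standard", "ok"
    note: str

def review_table(findings: list[Finding]) -> str:
    """Render findings as a fixed-width table a reviewer can scan."""
    header = f"{'Clause':<22}{'Flag':<14}Note"
    rows = [f"{f.clause:<22}{f.flag:<14}{f.note}" for f in findings]
    return "\n".join([header, *rows])
```

One table row per issue gives the reviewer random access to the agent's work, where a chat thread only offers sequential replay.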
Chat remains useful as an input box, but it should not be the main workspace for sophisticated agent collaboration. Agents are not humans, so their interfaces do not need to be limited to human conversation.
Conclusion
The talk's central claim is simple: complex AI agents need interfaces that preserve human trust and control. Better planning, skills, decomposition, decision logs, documents, and structured review tools can make agents more useful than a long chat transcript ever could.
