Formalizing the Future of Agentic Systems
A focused hackathon exploring how Lean can become the foundation for building provably reliable, multi-agent systems.
April 17–18, 2026 | IISc Bangalore + Online
Agentic systems powered by LLMs are moving from prototypes to production. Coding agents are writing, testing, and deploying software. But one boundary remains fragile: Human intent → Machine execution. When intent is written in natural language, ambiguity creeps in. At scale, that drift becomes dangerous. The next evolution of AI systems requires Formal Specification as the interface to autonomy.
Verified autonomy will define the next decade of AI. India can lead this transformation—our mathematical rigor and engineering talent are exactly what trustworthy AI demands. That transformation starts here, with builders like you.
Use Lean as the grounding layer for AI agents—proving that verification can be embedded at every level of autonomous decision-making.
Formalize system invariants before code generation, proving that transformations preserve semantics even as systems evolve.
Attach machine-checkable guarantees to autonomous workflows and agent-generated code—moving from hope to proof.
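To make "machine-checkable guarantees" concrete, here is a minimal, purely illustrative Lean sketch of the pattern: a toy system state, an invariant, and a theorem certifying that an agent-proposed transformation preserves it. All names (`SysState`, `Invariant`, `step`) are hypothetical, not from any existing library.

```lean
/-- A toy system state: a counter that must stay nonnegative. -/
structure SysState where
  counter : Int

/-- The invariant every transformation must preserve. -/
def Invariant (s : SysState) : Prop := 0 ≤ s.counter

/-- An agent-proposed transformation: increment the counter. -/
def step (s : SysState) : SysState := { s with counter := s.counter + 1 }

/-- The machine-checked guarantee: `step` preserves the invariant. -/
theorem step_preserves_invariant (s : SysState) (h : Invariant s) :
    Invariant (step s) := by
  simp only [Invariant, step] at *
  omega
```

The point is the shape, not the content: the guarantee travels with the code as a theorem that Lean's kernel checks, rather than as a comment or a test suite.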
This hackathon isn't about theoretical exercises. You'll build working systems that demonstrate how verification makes autonomy trustworthy.
You'll work hands-on in teams, guided by experts, tackling real-world challenges at the intersection of formal verification and autonomous AI. The theme for this hackathon is:
Multi-agent systems, such as those built on OpenClaw, have become extremely popular because of their power across many domains. However, they also introduce significant risks, and these risks must be addressed without sacrificing the capabilities that make such systems valuable. Guardrails backed by formal proofs are a natural way to ensure safety without being excessively restrictive. This hackathon is focused on building such verification systems in the Lean Prover.
The hackathon will begin with a one-day workshop on Lean, focused mainly on programming and the basic metaprogramming needed to build these systems. This will be followed by one day of in-person guided project work, and then 10 days of online interaction to complete the projects.
Participants will develop a model of a class of multi-agent systems like OpenClaw in Lean, together with the basic theory needed to reason about them. They will define types, terms, and functions that capture the semantics of these systems in Lean, and prove the core theorems on which proof automation can build. To interface with the Lean code, they may also extend Lean's syntax and build connections to proof automation tools. Participants may choose the style of agentic system they want to formalize, including specialized systems such as those for mathematics.
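As a rough sketch of what such a model might look like, the following Lean fragment defines agent actions, a guardrail policy, and a core lemma of the kind proof automation could build on. Everything here is a hypothetical illustration of the style, not a prescribed design; `Action`, `Allowed`, and `SafeTrace` are invented names.

```lean
/-- Actions an agent in the model may take. -/
inductive Action where
  | sendMessage (to : Nat) (content : String)
  | callTool (name : String)
  | halt

/-- A guardrail policy marking which actions are permitted
    (here, as an example, raw shell access is forbidden). -/
def Allowed : Action → Prop
  | Action.callTool name => name ≠ "shell"
  | _ => True

/-- A trace is safe when every action in it is permitted. -/
def SafeTrace (tr : List Action) : Prop := ∀ a ∈ tr, Allowed a

/-- Core lemma for automation to build on: extending a safe trace
    with an allowed action keeps it safe. -/
theorem safe_cons {a : Action} {tr : List Action}
    (ha : Allowed a) (htr : SafeTrace tr) : SafeTrace (a :: tr) := by
  intro b hb
  cases hb with
  | head => exact ha
  | tail _ h => exact htr b h
```

A real project would replace the toy policy with a richer semantics of the target agent framework, but the workflow is the same: define the types, state the safety property, and prove the lemmas that let automation discharge the rest.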
A successful project will be able to verify correct behaviour and flag risks within a powerful agentic system, without being so defensive that it weakens the system's usefulness. All code will be open-sourced (with acknowledgement of the hackathon sponsor) and should be well documented. After the hackathon, we aim to aggregate the various contributions into a common system.
This hackathon is for builders who believe "move fast and break things" is no longer enough.
Sponsored by Emergence AI
For exceptional formal specification and practical application
For strong verification work and innovation
For outstanding effort and promising verification work
If you are serious about building AI systems that scale safely, preserve semantics, respect invariants, and earn trust — this is for you.