This is one of the things that GitHub Spec Kit solves for me. The specify.plan step launches code-exploration agents and builds itself an up-to-date picture of the data model, migrations, etc. It really reduces the need to document things when the agent discovers what the codebase needs on its own.
Give Claude a sqlite/supabase MCP, the GitHub CLI, the Linear CLI, Chrome, or a launch.json, and it can solve this pretty autonomously.
Neat idea. The biggest problem I've had with code knowledge graphs is they go stale immediately -- someone renames a package and nobody updates the graph. Having it as Markdown in the repo is clever because it just goes through normal PR review like everything else, and you get git blame for free. My concern is scale though. Once you have thousands of nodes the Markdown files themselves become a mess to navigate, and at that point you're basically recreating a database with extra steps. Would love to see how this compares to just pointing an agent at LSP output.
> My concern is scale though. Once you have thousands of nodes the Markdown files themselves become a mess to navigate
The agent will update the graph.
If you have thousands of nodes in md, it means you have a highly non-trivial, large codebase, and this is where lat will start saving you time: agents will navigate the code much faster, and you'll be reviewing semantic changes in lat in every diff, potentially suggesting that the agents alter the code or add more context to lat.
You still have to be engaged in maintaining your codebase, just at a higher level.
I would say that when you treat your Markdown as the authoritative source, you of course give up the automation, but that's my choice. It takes knowledge of the domain, but deep, specific knowledge is worth so much more than automated updates. I use AI to produce the initial MD, but then I edit it myself. Sure, it doesn't get auto-updated, but I would never trust advice that was updated on the fly from AI output on the internet.
Creator of lat.md here. There are two videos with me talking about lat in more detail [1] and less detail [2]. But I'm also working on a blog post exploring lat and its potential, stay tuned.
Cool. I've been thinking about tools to help coding agents lately, and I've always wondered whether I could do a better job than Claude Code / Codex themselves. They can run `rg`, kick off multiple agents, use skills, etc., and I couldn't find a way to actually prove there's ROI in using something like Agent Lattice. Curious to see the results!
The staleness problem mentioned here is real. For agentic systems, a markdown-based DAG of your codebase is more practical than a traditional graph because agents work within context windows. You can selectively load relevant parts without needing a complex query engine. The key is making updates low-friction -- maybe a pre-commit hook or CI job that refreshes stale nodes.
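To make the low-friction update idea concrete, here is a minimal sketch of what such a staleness check could look like. It assumes a made-up convention where each node declares the files it documents in an HTML comment (`<!-- sources: ... -->`); in CI you would fill the timestamp map from `git log -1 --format=%ct` per path. None of this is lat's actual format.

```python
# Sketch of a staleness check for Markdown "nodes" that describe source files.
# Assumed (hypothetical) convention: each node starts with a line like
#   <!-- sources: src/db/models.py, src/db/migrations.py -->
# A node is stale when any listed source was modified after the node itself.

import re
from typing import Dict, List

SOURCES_RE = re.compile(r"<!--\s*sources:\s*(.*?)\s*-->")

def parse_sources(node_text: str) -> List[str]:
    """Extract the source paths a node claims to document."""
    m = SOURCES_RE.search(node_text)
    return [p.strip() for p in m.group(1).split(",")] if m else []

def stale_nodes(nodes: Dict[str, str], mtimes: Dict[str, float]) -> List[str]:
    """Return node paths whose sources changed after the node did.

    `mtimes` maps every path (nodes and sources) to a last-modified
    timestamp -- in CI you would fill this from `git log -1 --format=%ct`.
    """
    out = []
    for node_path, text in nodes.items():
        node_time = mtimes.get(node_path, 0.0)
        if any(mtimes.get(src, 0.0) > node_time for src in parse_sources(text)):
            out.append(node_path)
    return sorted(out)
```

A pre-commit hook or CI job could run this and either fail the build or open a "refresh these nodes" task for the agent.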
managing agents.md is important, especially at scale. however, I wonder how much of a measurable difference something like this makes? in theory it's cool, but can you show that it actually performs better than a large agents.md, nested agents.md files, or skills?
more general point being that we need to be methodical about the way we manage agent context. if lat.md shows a 10% broad improvement in agent perf in my repo, then I would certainly push for adoption. until then, vibes aren't enough
I definitely agree with the need for this. There's just too much to put into the agents file to keep from killing your context window right off the bat. Knowledge compression is going to be key.
I saw this a couple of days ago and I've been working on figuring out what the right workflows will be with it.
It's a useful idea: the agents.md torrent of info gets replaced with a thinner shim that tells the agent how to get more data about the system, as well as how to update that.
I suspect there's ways to shrink that context even more.
We've been doing this with simple mkdocs for ages. My experience is that rendering the markdown to feel like public docs is important for getting humans to review and take it seriously. Otherwise it goes stale as soon as one dev on the project doesn't care.
good catch. Makes me wonder if we could feed the agent a repository of known vulnerabilities and security best practices to check against and get rid of most deps. Just ask _out loud_, so to speak.
So the graph is human-maintained, and agents consume it and `lat check` is supposed to catch broken links and code-spec drift. How do you manage this in a multi-agent setup? Is it still a manual merge+fix conflicts situation? That's where I keep seeing the biggest issues with multi-agent setups
I found that having smaller structured markdowns in each folder, explaining the space and the classes within, keeps Claude and Codex grounded even in a 10M+ LOC C/C++ codebase.
I was thinking the same. Especially now that Obsidian has a CLI to work with the vault.
The one thing I saw in the README is that lat has a format for source files to link back to the lat.md markdown, but I don't see why you couldn't just define an "// obs:wikilink" sort of format in your AGENTS.md
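As an illustration of that DIY approach, here is a sketch of a checker for a hypothetical back-link comment format in source files. The `// lat:[[node-name]]` syntax and all the names below are invented for the example, not lat's actual convention:

```python
# Sketch: collect wikilink-style back-links from source comments, e.g.
#   // lat:[[auth/session-store]]
# and report any that don't resolve to a known Markdown node.
# The comment format here is made up, not lat's real syntax.

import re
from typing import Dict, List, Set

LINK_RE = re.compile(r"//\s*lat:\[\[([^\]]+)\]\]")

def collect_links(source_text: str) -> List[str]:
    """All node names a source file links to."""
    return LINK_RE.findall(source_text)

def broken_links(sources: Dict[str, str],
                 known_nodes: Set[str]) -> Dict[str, List[str]]:
    """Map each source file to the links that point at no existing node."""
    report = {}
    for path, text in sources.items():
        missing = [l for l in collect_links(text) if l not in known_nodes]
        if missing:
            report[path] = missing
    return report
```

The point of defining this in AGENTS.md rather than in a tool is exactly the one made above: the agent can be told the convention and asked to maintain it, and a twenty-line check keeps it honest.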
I have a vitepress package in most of my repos. It is a knowledge graph that also just happens to produce neat-looking docs for humans when served over http. Agents are very happy to read the raw .md.
The problem is that for any non-trivial question, agents have to grep A LOT to understand the high-level logic expressed in the code. In big projects grepping can sometimes take minutes. Lat short-circuits grepping significantly with `lat search`, which agents happily use.
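This isn't lat's actual implementation, but the reason a prebuilt index short-circuits grep is easy to sketch: tokenize each node once up front, and each query term becomes a dictionary lookup instead of a scan of the whole tree.

```python
# Toy inverted index over Markdown nodes -- a sketch of why indexed search
# beats repeated grepping, not lat's real `lat search` implementation.

from collections import defaultdict
from typing import Dict, List, Set

def build_index(nodes: Dict[str, str]) -> Dict[str, Set[str]]:
    """One pass over the tree: token -> set of node paths containing it."""
    index: Dict[str, Set[str]] = defaultdict(set)
    for path, text in nodes.items():
        for token in text.lower().split():
            index[token.strip(".,()`")].add(path)
    return index

def search(index: Dict[str, Set[str]], terms: List[str]) -> Set[str]:
    """Nodes containing every query term: O(1) lookups, no rescanning."""
    sets = [index.get(t.lower(), set()) for t in terms]
    return set.intersection(*sets) if sets else set()
```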
One of the things that I've been chewing on lately is the sync problem. Having a CI job that identifies places where the docs have drifted from the implementation seems pretty valuable.
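One cheap version of such a drift check, sketched here with an invented heuristic: treat backtick-quoted strings with a file extension in the docs as paths, and compare them against the repo's actual file list (which CI could get from `git ls-files`).

```python
# Sketch of a CI drift check: flag docs that mention file paths which no
# longer exist in the repo (e.g. after a rename). The backtick heuristic
# is an assumption for this example, not lat's format.

import re
from typing import Dict, List, Set

PATH_RE = re.compile(r"`([\w./-]+\.\w+)`")

def referenced_paths(doc_text: str) -> List[str]:
    """Backtick-quoted, extension-bearing strings, treated as file paths."""
    return PATH_RE.findall(doc_text)

def drifted(docs: Dict[str, str], repo_files: Set[str]) -> Dict[str, List[str]]:
    """Map each doc to the paths it mentions that are gone from the repo."""
    out = {}
    for doc, text in docs.items():
        gone = sorted(set(referenced_paths(text)) - repo_files)
        if gone:
            out[doc] = gone
    return out
```

It only catches one kind of drift (dead paths, not stale prose), but it's the kind that breaks agents hardest, and it's trivial to run on every PR.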
https://news.ycombinator.com/item?id=47543324
what's the point of markdown? there's nothing useful you can do with it other than handing it to an llm and getting back some probabilistic response
So a better question is why there isn't a bootstrap step to get your LLM to scaffold it out and assist in detailing it.
Other than that, the goal of lat is to make the agent use it and update it, and it has tools to enforce that.
AMA :)
[1] https://x.com/mitsuhiko/status/2037649308086902989?s=20
[2] https://www.youtube.com/watch?v=gIOtYnI-8_c
security.md is missing, apparently.
https://github.com/1st1/lat.md/commit/da819ddc9bf8f1a44f67f0...
So give your agent a whole Obsidian vault?
I'm skeptical that this helps. Agents can't just grep within one big file if reading the entire file is the problem.