> It’s one of those things that crackpots keep trying to do, no matter how much you tell them it could never work. If the spec defines precisely what a program will do, with enough detail that it can be used to generate the program itself, this just begs the question: how do you write the spec? Such a complete spec is just as hard to write as the underlying computer program, because just as many details have to be answered by spec writer as the programmer.
Program generation from a spec meant something vastly different in 2007 than it does now. People can and do generate programs from underspecified prompts. Trying to be systematic about how prompts work is a worthwhile area to explore.
Sure, but Joel isn't saying that's impossible or that people who do that are crackpots. In fact, he was an advocate of writing specs ahead of time [1] - for people.
At the time "generating a program from a spec" was an idea floating around that you could come up with a "spec language" that was easier than regular programming languages but somehow still had the same power and could be compiled directly into a program. That's the crackpot idea that Joel is referencing - but that's not what a spec language used with an LLM is doing.
This is an excellent observation and puts into words something I have barely scratched the surface of. Along with specifications, formal verification is another domain that received the "just automate it" treatment in the before times.
And because formal verification with LLMs is an active area of open research, I have some hope that the old idea of automated formal verification is starting to take shape. There is a lot to talk about here, but I'll leave a link to the 1968 NATO Software Engineering Conference [1] for those who are interested in where these thoughts originated. It goes deeply into the subject of "specification languages" and other related concepts. My understanding is that the historical split between computing science and software engineering has its roots in this 1968 conference.
Might look like it, might also just be survivorship bias. A lot of crackpot ideas hit the wall instead of being a success. We only notice the successes and might think of them as the default, not the exception.
It actually makes sense that code is becoming amorphous and we will no longer scale in terms of building out new features (which has become cheap), but by defining stricter and stricter behavior constraints and structural invariants.
I'm actually starting to see at work how people are writing skills in a very procedural manner. Something like:
First, collect the following information from user: ….
Second, send http request to the following endpoint with the certain payload….
If server returned error - report back to user.
It cracks me up every time I see that kind of stuff. Why on Earth wouldn't you just write a script for that purpose? 10x faster, zero tokens burned, 100% deterministic.
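The three procedural steps above are a handful of lines of ordinary code. A minimal sketch (the endpoint and payload shape are hypothetical, purely for illustration):

```python
import json
import urllib.error
import urllib.request

ENDPOINT = "https://api.example.com/submit"  # hypothetical endpoint

def build_request(user_input: dict) -> urllib.request.Request:
    # Step 1: the information collected from the user arrives as a dict
    return urllib.request.Request(
        ENDPOINT,
        data=json.dumps(user_input).encode(),
        headers={"Content-Type": "application/json"},
    )

def run(user_input: dict) -> str:
    # Step 2: send the HTTP request with the payload
    try:
        with urllib.request.urlopen(build_request(user_input)) as resp:
            return resp.read().decode()
    # Step 3: if the server returned an error, report back to the user
    except urllib.error.HTTPError as e:
        return f"Server returned error: {e.code}"
```

Same behavior every run, no model in the loop.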
import Mathlib
def Goldbach := ∀ x : ℕ, Even x → x > 2 → ∃ (y z: ℕ), Nat.Prime y ∧ Nat.Prime z ∧ x = y + z
A short specification of the Goldbach conjecture in Lean; the proof is much harder to produce. Implementation details are always hidden by the interface, which makes a spec easier to write than its implementation. Under the Curry-Howard correspondence, Joel's position amounts to saying that any question is as hard to ask as to answer, and any statement as hard to formulate as to prove, which is really just saying that all describable statements are true.
This argument is based on the notion of proof irrelevance – if a theorem is true, any proof is as good as any other. This is not the case for computer programs – two programs that implement the same specification may be very different in terms of performance, size, UI/UX, code maintainability, etc.
Performance and size can easily be added to any specification; maintainability is not a problem if you never have to maintain it; UI/UX are design issues, not code issues. If you specify a UI, it will have the UX you want. We can already do UI creation with visual editors.
What he misses is that it's much easier to change the spec than the code. And if the cost of regenerating the code is low enough, then the code is not worth talking about.
Is it? If the spec is as detailed as the code would be? If you make a change to one part of the spec do you now have inconsistencies that the LLM is going to have to resolve in some way? Are we going to have a compiler, or type checker type tools for the spec to catch these errors sooner?
It IS a compiler. You might as well ask if the machine-language output of a C compiler is as detailed as the C code was.
To anticipate your objection: you can get over determinism now, or you can get over it later. You will get over it, though, if you intend to stay in this business.
What are you talking about? If an LLM is a compiler, then I'm a compiler. Are we going to redefine the meaning of words in order not to upset the LLM makers?
Over time, when digital computers became commonplace, the computing moved from the person to the machine. At this time, arguably the humans doing the programming of the machine were doing the work we now ask of a "compiler".
So yes, an LLM can be a compiler in some sense (from a high level abstract language into a programming language), and you too can be a compiler! But currently it's probably a good use of the LLM's time and probably not a good use of yours.
I don't know, having done a lot of completely pointless time-wasting staring at hex dumps and assembly language in my youth was a pretty darned good lesson. I say it's a worthwhile hobby to be a compiler.
But your point stands. There is a period beyond which doing more than learning the fundamentals just becomes toil.
You can only specify software into existence if your idea of what you want it to look like is as vague as your specification. Sometimes this is the case, sometimes not.
As far as I can tell it's not a new language, but rather an alternative workflow for LLM-based development along with a tool that implements it.
The idea, IIUC, seems to be that instead of directly telling an LLM agent how to change the code, you keep markdown "spec" files describing what the code does and then the "codespeak" tool runs a diff on the spec files and tells the agent to make those changes; then you check the code and commit both updated specs and code.
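The diff-driven idea can be sketched in a few lines (this is the general shape as I understand it, not CodeSpeak's actual implementation; file names are made up):

```python
import difflib

def spec_delta(old_spec: str, new_spec: str) -> str:
    """Unified diff between two versions of a markdown spec file."""
    return "".join(difflib.unified_diff(
        old_spec.splitlines(keepends=True),
        new_spec.splitlines(keepends=True),
        fromfile="specs/app.md", tofile="specs/app.md",
    ))

old = "# Users\n- login with email\n"
new = "# Users\n- login with email\n- reset password via email link\n"

# The delta, not the whole spec, is what gets handed to the agent:
prompt = "Apply these specification changes to the code:\n" + spec_delta(old, new)
```

The agent then edits the code to match, and both the updated spec and the resulting code are committed together.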
It has the advantage that the prompts are all saved along with the source rather than lost, and in a format that lets you also look at the whole current specification.
The limitation seems to be that you can't modify the code yourself if you want the spec to reflect it (and you also can't do LLM-driven changes that refer to the actual code). In general it's also not guaranteed that the spec reflects everything important about the program, so the code still potentially contains "source" information (for example, maybe you want the background of a GUI to be white, and it is white because the LLM happened to choose that, but it's not written in the spec).
The latter can maybe be mitigated by doing multiple generations and checking them all, but that multiplies LLM and verification costs.
Also it seems that the tool severely limits the configurability of the agentic generation process, although that's just a limitation of the specific tool.
> The limitation seems to be that you can't modify the code yourself if you want the spec to reflect it
Eventually, we'll end up in a world where humans don't need to touch code, but we are not there yet. We are looking into ways to "catch up" the specs with whatever changes happen in the code not through CodeSpeak (agents or manual changes or whatever). It's an interesting exercise. In the case of agents, it's very helpful to look at the prompts users gave them (we are experimenting with inspecting the sessions from ~/.claude).
More generally, `codespeak takeover` [1] is a tool to convert code into specs, and we are teaching it to take prompts from agent sessions into account. Seems very helpful, actually.
I think it's a valid use case to start something in vibe coding mode and then switch to CodeSpeak if you want long-term maintainability. From "sprint mode" to "marathon mode", so to speak.
1. You are right that we can redefine what is code. If code is the central artefact that humans are dealing with to tell machines and other humans how the system works, then CodeSpeak specs will become code, and CodeSpeak will be a compiler. This is why I often refer to CodeSpeak as a next-level programming language.
2. I don't think being deterministic per se is what matters. Being predictable certainly does. Human engineers are not deterministic yet people pay them a lot of money and use their work all the time.
>Human engineers are not deterministic yet people pay them
Human carpenters are not deterministic yet they won't use a machine saw that goes off line even 1% of the time. The whole history of tools, including software, is one of trying to make the thing do more precisely what is intended, whether the intent is right or not.
Can you imagine some machine tool maker making something faulty and then saying, "Well hey, humans aren't deterministic."
A compiler is not 100% deterministic either. Its output can change when you upgrade its version, or when you change optimization options. Using profile-guided optimization can also make the output change between runs.
If you change inputs then obviously you will get a different output. Crucially using the same inputs, however, produces the same output. So compilers are actually deterministic.
This is irrelevant over the long run because the environment changes even if nothing else does. A compiler from the 1980's still produces identical output given the original source code if you can run it. Some form of virtualization might be in order, but the environment is still changing while the deterministic subset shrinks.
Having faith that determinism will last forever is foolish. You have to upgrade at some point, and you will run into problems. New bugs, incompatibilities, workflow changes, whatever the case will make the determinism property moot.
Many compilers aren't deterministic. That's why the effort to make Linux distros have reproducible builds took so long and so much effort.
The reason is, it's often more work to be deterministic than not deterministic, so compilers don't do it. For example, they may compile functions in parallel and append them to the output in the order they complete.
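A toy illustration of that failure mode (a hypothetical compiler pipeline, not any real compiler's code): same inputs, but the emission order depends on which "compile" finishes first.

```python
import concurrent.futures
import random
import time

def compile_function(name: str) -> str:
    # Simulate a compilation step whose duration varies from run to run
    time.sleep(random.random() / 50)
    return f"<object code for {name}>"

def build(functions: list[str]) -> list[str]:
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = [pool.submit(compile_function, f) for f in functions]
        # Appending in *completion* order: identical inputs can yield a
        # differently ordered (hence not byte-identical) output each run
        return [f.result() for f in concurrent.futures.as_completed(futures)]

output = build(["parse", "typecheck", "emit", "link"])
```

Sorting results back into source order before emitting is exactly the kind of extra work reproducible-builds efforts have to add.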
Also they seem to want to run this as a business, which seems absurd to me since I don't see how they can possibly charge money, and anyway the idea is so simple that it can be reimplemented in less than a week (less than a day for a basic version) and those alternative implementations may turn out to be better.
It also seems to be closed-source, which means that unless they open the source very soon it will very likely be immediately replaced in popularity by an open source version if it turns out to gain traction.
I think these limitations could be addressed by allowing trivial manual adjustments to the generated code before committing. And/or allowing for trivial code changes without a spec change. The judgement of "trivial" being that it still follows the spec and does not add functionality mandating a spec change. I haven't checked if they support any of this but I would be frustrated not being allowed to make such a small code change, say to fix an off-by-one error that I recently got from LLM output. The code change would be smaller than the spec change.
Cool idea overall: an incremental pseudocode compiler. Interesting to see how well it scales.
I can also see a hybrid solution with non-specced code files for things where the size of code and spec would be the same, like for enums or mapping tables.
Also a bit formal. Maybe something like this will be the output of the prompt to let me know what the AI is going to generate in the binary, but I doubt I will be writing code like this in 5 years, English will probably be fine at my level.
> Also it seems that the tool severely limits the configurability of the agentic generation process, although that's just a limitation of the specific tool.
Working on that as well. We need to be a lot more flexible and configurable
* This isn't a language, it's some tooling to map specs to code and re-generate
* Models aren't deterministic - every time you would try to re-apply you'd likely get different output (without feeding the current code into the re-apply and let it just recommend changes)
* Models are evolving rapidly; this month's flavour of Codex/Sonnet/etc. would very likely generate different code from last month's
* Text specifications are always under-specified, lossy and tend to gloss over a huge amount of details that the code has to make concrete - this is fine in a small example, but in a larger code base?
* Every non-trivial codebase would be made up of hundreds of specs that interact and influence each other - very hard (and context-heavy) to read all the specs that impact a piece of functionality and keep it coherent
I do think there are opportunities in this space, but what I'd like to see is:
* write text specifications
* model transforms text into a *formal* specification
* then the formal spec is translated into code which can be verified against the spec
Steps 2 and 3 could be merged into one if there were practical/popular languages that also support verification, in the vein of Ada/SPARK.
But you can also get there by generating tests from the formal specification that validate the implementation.
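For example, a spec clause like "the output is ordered and is a permutation of the input" (a hypothetical clause for a sort routine) translates directly into generated checks. A minimal sketch using plain random testing:

```python
import random

def my_sort(xs):
    # Stand-in for the generated implementation under test
    return sorted(xs)

def satisfies_spec(xs) -> bool:
    ys = my_sort(xs)
    ordered = all(a <= b for a, b in zip(ys, ys[1:]))      # spec clause 1
    permutation = sorted(xs) == sorted(ys)                 # spec clause 2
    return ordered and permutation

random.seed(0)
cases = [[random.randint(-100, 100) for _ in range(random.randint(0, 20))]
         for _ in range(500)]
assert all(satisfies_spec(xs) for xs in cases)
```

Property-based tools like Hypothesis add input shrinking and smarter case generation on top of this basic pattern.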
Models aren't deterministic - every time you would try to re-apply you'd likely get different output (without feeding the current code into the re-apply and let it just recommend changes)
If the result is always provably correct, it doesn't matter whether or not it's different at the code level. People interested in systems like this believe that the outcome of what the code does is infinitely more important than the code itself.
That if at the beginning of your sentence is doing a whole lot of work. Indeed, if we could formally and provably (another extremely loaded word) generate good code that'd be one thing, but proving correctness is one of those basically impossible tasks.
> but proving correctness is one of those basically impossible tasks.
To aim for a meeting of the minds... Would you help me out and unpack what you mean so there is less ambiguity? This might be minor terminological confusion. It is possible we have different takes, though -- that's what I'm trying to figure out.
There are at least two senses of 'correctness' that people sometimes mean: (a) correctness relative to a formal spec: this is expensive but doable*; (b) confidence that a spec matches human intent: IMO, usually a messy decision involving governance, organizational priorities, and resource constraints.
Sometimes people refer to software correctness problems in a very general sense, but I find it hard to parse those. I'm familiar with particular theoretical results such as Rice's theorem and the halting problem that pertain to arbitrary programs.
* With tools like {Lean, Dafny, Verus, Coq} and in projects like {CompCert, seL4}.
You got it completely backwards. The claim is that if the code does exactly what the spec says (which generated tests are supposed to "prove") then the actual code does not matter, even if it's different each time.
The point they are making is the tests are neither necessary nor sufficient alone to prove the code does exactly what the spec says. Looking at the tests isn't enough to prove anything; as an extreme example, if no one involved looks at the code, then the tests can just be static always passing and you wouldn't know either way whether or not the code matches the spec or not.
If anyone cared enough they could look at the code and see the problem immediately and with little effort, but we're encouraging a world where no one cares enough to put even that baseline effort because *gestures at* the tests are passing. Who cares how wrong the code is and in what ways if all the lights are green?
> If the result is always provably correct, it doesn't matter whether or not it's different at the code level. People interested in systems like this believe that the outcome of what the code does is infinitely more important than the code itself.
If the spec is so complete that it covers everything, you might as well write the code.
The benefit of writing a spec and having the LLM code it, is that the LLM will fill in a lot of blanks. And it is this filling in of blanks that is non-deterministic.
Except one shoe is made by children in a fire-trap sweatshop with no breaks, and the other was made by a well paid adult in good working conditions.
The ends don’t justify the means. The process of making impacts the output in ways that are subtle and important, but even holding the output as a fixed thing - the process of making still matters, at least to the people making it.
If you are a “programmer” you are going to be the kids in the sweatshop. On the enterprise dev side where most developers work, it’s been headed in that direction for at least a decade where it was easy enough to become a “good enough” generic full stack/mobile/web etc dev.
Even on the BigTech side being able to reverse a btree on the whiteboard and having on your resume that you were a mid level developer isn’t enough either anymore
If you look at the comp on that side, it's also stagnated for a decade. AI has just accelerated that trend.
While my job has been at various percentages to produce code for 30 years, it’s been well over a decade since I had to sell myself on “I codez real gud”. I sell myself as a “software engineer” who can go from ambiguous business and technical requirements, deal with politics, XYProblems, etc
Exactly. I work in a consulting company as a customer facing staff consultant - highest level - specializing in cloud + app dev. We don’t hire anyone less than staff in the US. Anything lower is hired out of the country.
That’s exactly my point. “Programming” was clearly becoming commoditized a decade ago.
I worked with developers from 6 other countries (the "America first" slogan of the ruling party is missing fine print that should read "Americans last"), and not only are they not in sweatshop conditions, most of them live like kings on the salaries they are making and are more "white collar" in their country than most SWEs here.
Out-of-bounds behavior is sometimes a known unknown, but in the era of generated code it is exclusively unknown unknowns.
Good luck speccing out all the unanticipated side effects and undefined behaviors. Perhaps you can prompt the agent in a loop a number of times, but it's hard to believe that the brute-force throw-more-tokens-at-it approach has the same level of return as a more attentive audit by human eyeballs.
Are you as a developer 100% able to trust that you didn’t miss anything? Your team if you are a team lead who delegates tasks to other developers? If you outsource non business things like Salesforce integrations etc do you know all of the code they wrote? Your library dependencies? Your infrastructure providers?
I don’t know. I’m making a point that the only people whose sole responsibility is code that they personally write are mid level ticket takers.
I don’t review every line of code by everyone whose output I’m responsible for, I ask them to explain how they did things and care about their testing, the functional and non functional requirements and hotspots like concurrency, data access patterns, architectural issues etc.
For instance, I haven’t done web development since 2002 except for a little copy and paste work. I completely vibe coded three internal web admin sites for separate projects and used Amazon Cognito for authentication. I didn’t look at a line of code that AI generated any more than I would have looked at a line of code for a website I delegated to the web developer. I cared about functionality and UX.
The difference is that you have theory of mind of your human counterparts -- you can trust that their reasoned explanations are consistent with what you know about them.
I have not encountered an agent yet that I can trust in the same way.
Sure. People go for the cheapest option that fits their requirements, mostly.
But we’re the shoemakers, not the consumers. It’s actually our job to preserve our own and our peers quality of life.
Cheapest good option possible doesn’t have to be the sweatshop - tho the shareholders of nike or zara would have you believe that - the labor movements of the 19th century proved that’s not the case.
It is our job to keep our job, or leave if we don't agree with management, assuming to be lucky when there is an option to walk out and start anew right on the other side of the street.
This is what is sometimes called a “crabs in a bucket” mentality. It’s how you go from a middle class weaver, to an impoverished sweatshop worker in a generation.
If it's wrong then it's not provably correct (for any value of 'proof').
How you define your proof is up to you. It might be a simple test, or an exhaustive suite of tests, or a formal proof. It doesn't matter. If the output of the code is correct by your definition, then it doesn't matter what the underlying code actually is.
If what you're after is determinism, then your solution doesn't offer it. Both the formal specification and the code generated from it would be different each time. Formal specifications are useful when they're succinct, which is possible when they specify at a higher level of abstraction than code, which admits many different implementations.
The point would presumably be to formalise it, then verify that the formal version matches what you actually meant. At which point you can't/shouldn't regenerate it, but you can request changes (which you'd need to verify and approve).
But the code produced from the formal spec would still be nondeterministic. And I believe CodeSpeak doesn't wish to regenerate the entire program with each spec change, but apply code changes based on the changes to the spec. Maybe there could be other benefits to formalisation in this case, but determinism isn't one of them.
First, it's not a question of decidability but of tractability. Verifying programs in a language that has nothing but boolean variables, no subroutines, and loops at depth of at most 2 - far, far, from Turing-completeness - is already intractable (reduction from TQBF).
Second, it's very easy to have some specs decided tractably, at least in many practical instances, but they are far too weak to specify most correctness properties programs need. You mentioned the Rust type system, and it cannot specify properties with interleaved quantifiers, which most interesting properties require.
And as for HoTT - or any of the many equivalent rich formalisms - checking their proofs is tractable, but not finding them. The intractability of verification of even very limited languages (again TQBF) holds regardless of how the verification is done.
I think it's best to take it step by step, and CodeSpeak's approach is pragmatic.
I think there is a bit of the map territory relation here.
> First, it's not a question of decidability but of tractability
The question of decidability is framed in terms of many-to-one reductions; in fact, RE-completeness is defined by many-to-one reductions.
In the computational-complexity sense, tractability is a far stronger notion. Basically, an algorithm is efficient if its time complexity is at most polynomial for any size-n input, and a problem is "tractable" if there is an efficient algorithm that solves it.
You are correct if you limit your expressiveness to PTIME, where, because P == co-P, PEM/tight apartness/omniscience principles hold.
But the problem is that the Church-Rosser property[0] (proofs ~= programs) and the Brouwer-Heyting-Kolmogorov interpretation[1] (propositions as types) are NOT binary SAT, and you have concepts like mere propositions[3] that are very different from plain BSAT.
But CodeSpeak doesn't have formal specifications, so this is irrelevant. Their example output produced code with path-traversal/resource-exhaustion risks and correctness issues, which is a case in point.
My personal opinion is that we will need to work within the limitations of the systems, and while it is trivial to come up with your own canary, I would recommend playing with [3] before the models directly target it.
Generating new code from a changed spec will be less difficult, specifically when the mess of real-world specs comes into play. You can play with the example on CodeSpeak's front page: try to close the various holes the software has with malformed/malicious input while giving the LLM the existing code base, and you will see that "brown M&M"[3] problem arise quickly. At least for me, prompting it to look at the changed natural-language spec and generate new code was more successful.
But for some models, like Qwen3 Coder Next, this style resulted in far fewer of the happy-path protections that the model seems to have been trained to deliver by default in some cases.
Validating programs against a formal spec is very, very hard for foundational computational complexity reasons. There's a reason why the largest programs whose code was fully verified against a formal spec, and at an enormous cost, were ~10KLOC. If you want to do it using proofs, then lines of proof outnumber lines of code 10-1000 to 1, and the work is far harder than for proofs in mathematics (that are typically much shorter). There are less absolute ways of checking spec conformance at some useful level of confidence, and they can be worthwhile, but they require expertise and care (I'm very much in favour of using them, but the thought that AI can "just" prove conformance to a formal spec ignores the computational complexity results in that field).
For most cases we don't need nearly that comprehensive verification. This is expecting more of AI-written code than we ever bother to subject most human-written code to. There's a vast chasm there we only need to even slightly start to bridge to get to far higher confidence levels than the typical human dev team achieves.
> For most cases we don't need nearly that comprehensive verification. This is expecting more of AI-written code than we ever bother to subject most human-written code to.
True.
> There's a vast chasm there we only need to even slightly start to bridge to get to far higher confidence levels than the typical human dev team achieves.
The word "slightly" is doing a lot of work here to the point of making it impossible to estimate. For example, the complexity classes P and NP are only slightly apart, and yet that's where a very practical barrier between feasibility and infeasibility lies. I don't doubt that one day AI may be able to write programs as well as humans, although nobody can estimate how soon that day will come, but nobody knows how wide the gap between that and "far higher confidence" is. Maybe there are fundamental computational complexity barriers in that gap that no amount of intelligence can cross, and maybe there aren't. Nobody knows yet.
What we do know is that anything humans do is possible - after all, we're doing it - and that many things we need and humans can't do (including predicting nonlinear systems like the behaviour of the economy) no machine can do drastically better, because of complexity limitations.
My process has organically evolved towards something similar but less strictly defined:
- I bootstrap AGENTS.md with my basic way of working and occasionally one or two project specific pieces
- I then write a DESIGN.md. How detailed or well specified it is varies from project to project: the other day I wrote a very complete DESIGN.md for a time tracking, invoice management and accounting system I wanted for my freelance biz. Because it was quite complete, the agent almost one-shot the whole thing
- I often also write a TECHNICAL-SPEC.md of some kind. Again how detailed varies.
- Finally I link to those two from the AGENTS. I also usually put in AGENTS that the agent should maintain the docs and keep them in sync with newer decisions I make along the way.
This system works well for me, but it's still very ad hoc and definitely doesn't follow any kind of formally defined spec standard. And I don't think it should, really? IMO, technically strict specs should be in your automated tests not your design docs.
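For concreteness, the AGENTS.md glue in this setup can be as small as this (a hypothetical sketch of my own file, not any standard):

```markdown
# AGENTS.md
- Read DESIGN.md and TECHNICAL-SPEC.md before making any change.
- Keep both documents in sync with decisions made during implementation.
- Technically strict requirements live in the automated tests, not here.
```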
I think many have adopted "spec driven development" in the way you describe.
I found it works very well in once-off scenarios, but the specs often drift from the implementation.
Even if you let the model update the spec at the end, the next few work items will make parts of it obsolete.
Maybe that's exactly the goal that "codespeak" is trying to solve, but I'm skeptical this will work well without more formal specifications in the mix.
> specs often drift from the implementation
> Maybe that's exactly the goal that "codespeak" is trying to solve
Yes and yes. I think it's an important direction in software engineering. It's something that people were trying to do a couple decades ago but agentic implementation of the spec makes it much more practical.
I have the same basic workflow as you outlined; then I feed the docs into blackbird, which generates a structured plan with tasks and subtasks. Then you can have it execute tasks in dependency order, with options to pause for review after each task, or an automated review when all child tasks for a given parent are complete.
It’s definitely still got some rough edges but it has been working pretty well for me.
There should be a setting to include specific files in every prompt/context. I’m using zed and when you fire up an agent / chat it explicitly states that the file(s) are included.
Are you sure? If so then your harness is doing something wrong. AGENTS.md doesn't need to be read deliberately by the model, it forms part of the starting prompt.
Is that really true? I haven’t tried to do my own inference since the first Llama models came out years ago, but I am pretty sure it was deterministic: if you fixed the seed and the input was the same, the output of the inference was always exactly the same.
1.) There is typically a temperature setting (though most major providers have stopped exposing it, especially in the TUIs).
2.) Then, even with the temperature set to 0, it will be almost deterministic but you'll still observe small variations due to the limited precision of float numbers.
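A minimal sketch of what the temperature knob does during decoding (assumed softmax sampling, not any particular provider's implementation): at temperature 0 it degenerates to a deterministic argmax.

```python
import math
import random

def sample_token(logits, temperature, rng):
    if temperature == 0:
        # Greedy decoding: always pick the highest-logit token
        return max(range(len(logits)), key=lambda i: logits[i])
    # Otherwise sample from the temperature-softened softmax distribution
    weights = [math.exp(l / temperature) for l in logits]
    r = rng.random() * sum(weights)
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(logits) - 1  # guard against float rounding at the tail
```

With temperature 0 the same logits always produce the same token; with temperature > 0 the output depends on the sampler's random state.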
> but you'll still observe small variations due to the limited precision of float numbers
No. Floating number arithmetic is deterministic. You don't get different answers for the same operations on the same machine just because of limited precision. There are reasons why it can be difficult to make sure that floating point operations agree across machines, but that is more of a (very annoying and difficult to make consistent) configuration thing than determinism.
(In general it is mildly frustrating to me to see software developers treat floating point as some sort of magic and ascribe all sorts of non-deterministic qualities to it. Yes floating point configuration for consistent results across machines can be absurdly annoying and nigh-impossible if you use transcendental functions and different binaries. No this does not mean if your program is giving different results for the same input on the same machine that this is a floating point issue).
In theory parallel execution combined with non-associativity can cause LLM inference to be non-deterministic. In practice that is not the case. LLM forward passes rarely use non-deterministic kernels (and these are usually explicitly marked as such e.g. in PyTorch).
You may be thinking of non-determinism caused by batching where different batch sizes can cause variations in output. This is not strictly speaking non-determinism from the perspective of the LLM, but is effectively non-determinism from the perspective of the end user, because generally the end user has no control over how a request is slotted into a batch.
> No. Floating-point arithmetic is deterministic. You don't get different answers for the same operations on the same machine just because of limited precision. There are reasons why it can be difficult to make sure that floating-point operations agree across machines, but that is more of a (very annoying and difficult-to-make-consistent) configuration issue than non-determinism.
Float addition is not associative, so the result of x1 + x2 + x3 + x4 depends on which order you add them in. This matters when the sum is parallelized, as the structure of the individual add operations will depend on how many cores are available at any given time.
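The non-associativity is easy to demonstrate with plain Python floats (IEEE 754 doubles here; the same effect applies, and is larger, at the lower precisions typically used on GPUs):

```python
# Floating-point addition is deterministic but not associative:
# the same three values summed in a different order give different bits.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # 0.6000000000000001
right = a + (b + c)  # 0.6

print(left == right)  # False
```

This is why a parallel reduction, whose grouping depends on scheduling, can produce run-to-run differences even though every individual operation is deterministic.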
Limited precision of floats is deterministic on its own. But then there's parallelism and how things are wired together; your generation may end up on different hardware, etc.
And the models I work with (Claude, Gemini, etc.) do have the temperature parameter when you are using the API.
I use Kiro IDE (≠ Kiro CLI) primarily as a spec generator.
In my experience, it's high-quality for creating and iterating on specs. Tools like Cursor are optimized for human-driven vibing -- they have great autocomplete, etc. Kiro, by contrast, is optimized around spec, which ironically has been the most effective approach I've found for driving agents.
I'd argue that Cursor, Antigravity, and similar tools are optimized for human steering, which explains their popularity, while Kiro is optimized for agent harnesses. That's also why it’s underused: it's quite opinionated, but very effective. Vibe-coding culture isn't sold on spec driven development (they think it's waterfall and summarily dismiss it -- even Yegge has this bias), so people tend to underrate it.
Kiro writes specs using structured formats like EARS and INCOSE (the spec format used in places like Boeing for engineering requirements). It performs automated reasoning to check for consistency, then generates a design document and task list from the spec -- similar to what Beads does. I usually spend a significant amount of time pressure-testing the spec before implementing (often hours to days), and it pays off. Writing a good, consistent spec is essentially the computer equivalent of "writing as a tool of thought" in practice.
Once the spec is tight, implementation tends to follow it closely. Kiro also generates property-based tests (PBTs) using Hypothesis in Python, inspired by Haskell's QuickCheck. These tests sweep the input domain and, when combined with traditional scenario-based unit tests, tend to produce code that adheres closely to the spec. I also add a small instruction "do red/green TDD" (I learned this from Simon Willison) and that one line alone improved the quality of all my tests.
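For anyone unfamiliar with the property-based testing idea mentioned above: instead of asserting on hand-picked examples, you assert a property that must hold across a swept input domain. A hand-rolled stdlib sketch of the pattern (Hypothesis replaces the random loop with its `@given` decorator and adds input shrinking; the run-length codec here is just a toy function under test):

```python
import random

def run_length_encode(s):
    # Toy function under test: "aaab" -> [("a", 3), ("b", 1)]
    out = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1] = (ch, out[-1][1] + 1)
        else:
            out.append((ch, 1))
    return out

def run_length_decode(pairs):
    return "".join(ch * n for ch, n in pairs)

# Property: decoding an encoding is the identity, for any input.
rng = random.Random(0)  # fixed seed for reproducibility
for _ in range(1000):
    s = "".join(rng.choice("ab") for _ in range(rng.randrange(20)))
    assert run_length_decode(run_length_encode(s)) == s
print("round-trip property held on 1000 random inputs")
```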
Kiro can technically implement the task list itself, but this is where agents come in. With the spec in hand, I use multiple headless CLI agents in tmux (e.g., Kiro CLI, Claude Code) for implementation. The results have been very good. With a solid Kiro spec and task list, agents usually implement everything end-to-end without stopping -- I haven't found a need for Ralph loops. (Agents sometimes tend to stop midway on Claude plans, but I've never had that happen with Kiro; not sure why, maybe it's the checklist, which includes PBT tests as gates.)
It didn't have the strongest start, but the Kiro IDE is one of the best spec generators I've used, and it integrates extremely well with agent-driven workflows.
Each stage produces its own output artifact (analysis, implementation plan, implementation summary, etc) and takes the previous phases' outputs as input. The artifact is locked after the stage is done, so there is no drift.
> * model transforms text into a formal specification
formal specification is no different from code: it will have bugs :)
There's no free lunch here: the informal-to-formal transition (be it words-to-code or words-to-formal-spec) comes through the non-deterministic models, period.
If we want to use the immense power of LLMs, we need to figure out a way to make this transition good enough.
In reality you give the same programmer an update to the existing spec, and they change the code to implement the difference. Which is exactly what the thing in OP is doing, and exactly what should be done. There's simply no reason to regenerate the result.
The entire thing about determinism is a red herring, because 1) it's not determinism but prompt instability, and 2) prompt instability doesn't matter because of the above. Intelligence (both human and machine) is not a formal domain, your inputs lack formal syntax, and that's fine. For some reason this basic concept creates endless confusion everywhere.
I think your objections miss the point. My informal specs to a program are user-focused. I want to dictate what benefits the program will give to the person who is using it, which may include requirements for a transport layer, a philosophy of user interaction, or any number of things. When I know what I want out of a program, I go through the agony of translating that into a spec with database schemas, menu options, specific encryption schemes, etc., then finally I turn that into a formal spec within which whether I use an underscore or a dash somewhere becomes a thing that has to be consistent throughout the document.
You're telling me that I should be doing the agonizing parts in order for the LLM to do the routine part (transforming a description of a program into a formal description of a program.) Your list of things that "make no sense" are exactly the things that I want the LLMs to do. I want to be able to run the same spec again and see the LLM add a feature that I never expected (and wasn't in the last version run from the same spec) or modify tactics to accomplish user goals based on changes in technology or availability of new standards/vendors.
I want to see specs that move away from describing the specific functionality of programs altogether, and more into describing a usefulness or the convenience of a program that doesn't exist. I want to be able to feed the LLM requirements of what I want a program to be able to accomplish, and let the LLM research and implement the how. I only want to have to describe constraints i.e. it must enable me to be able to do A, B, and C, it must prevent X,Y, and Z; I want it to feel free to solve those constraints in the way it sees fit; and when I find myself unsatisfied with the output, I'll deliver it more constraints and ask it to regenerate.
> I want to be able to run the same spec again and see the LLM add a feature that I never expected (and wasn't in the last version run from the same spec) or modify tactics to accomplish user goals based on changes in technology or availability of new standards/vendors.
Be careful what you wish for. This sounds great in theory but in practice it will probably mean a migration path for the users (UX changes, small details changed, cost dynamics and a large etc.)
I tried this recently with what I thought was a simple layout, but probably uncommon for CSS. It took an extremely long back and forth to nail it down. It seemingly had no understanding how to achieve what I wanted. A couple sentences would have been clear to a person. Sometimes LLMs are fantastic and sometimes they are brain dead.
The problem with formal prompting languages is they assume the bottleneck is ambiguity in the prompt. In my experience building agents, the bottleneck is actually the model's context understanding. Same precise prompt, wildly different results depending on what else is in the context window. Formalizing the prompt doesn't help if the model builds the wrong internal representation of your codebase. That said curious to see where this goes.
Two pieces of advice I keep seeing over & over in these discussions-- 1) start with a fresh/baseline context regularly, and 2) give agents unix-like tools and files which can be interacted with via simple pseudo-English commands such as bash, where they can invoke e.g. "--help" to learn how to use them.
I'm not sure adding a more formal language interface makes sense, as these models are optimized for conversational fluency. It makes more sense to me for them to be given instructions for using more formal interfaces as needed.
This concept assumes a formalized language would somehow make things easier for an LLM. That's making some big assumptions about the neuroanatomy of LLMs. This [1] from the other day suggests surprising things about how LLMs are internally structured; specifically, that encoding and decoding are distinct phases with other stuff in between, suggesting that language, once trained, isn't that important.
We are not trying to make things easier for LLMs. LLMs will be fine. CodeSpeak is built for humans, because we benefit from some structure, knowing how to express what we want, etc.
I'm writing the tool as proof of the spec. Still very much a pre-alpha phase, but I do have a working POC in that I can specify a series of prompts in my YAML language and execute the chain of commands in a local agent.
One of the "key steps" that I plan on designing is specifically an invocation interceptor. My underlying theory is that we would take whatever random series of prose that our human minds come up with and pass it through a prompt refinement engine:
> Clean up the following prompt in order to convert the user's intent
> into a structured prompt optimized for working with an LLM
> Be sure to follow appropriate modern standards based on current
> prompt engineering research. For example, limit the use of persona
> assignment in order to reduce hallucinations.
> If the user is asking for multiple actions, break the prompt
> into appropriate steps (etc.)
That interceptor would then forward the well-structured, intent-parsed prompt to the LLM. I could really see a step where we say "take the crap I just said and turn it into CodeSpeak".
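A minimal sketch of what such an interceptor could look like; everything here is illustrative (the instruction text, the message shape, and the idea that the result gets forwarded to a hypothetical coding agent are all assumptions, not any real API):

```python
# Hypothetical refinement instruction, condensed from the idea above.
REFINEMENT_INSTRUCTION = (
    "Clean up the following prompt to convert the user's intent into a "
    "structured prompt optimized for working with an LLM. If the user is "
    "asking for multiple actions, break the prompt into appropriate steps."
)

def refine(raw_prompt: str) -> list:
    # Build the message list for the refinement pass; the refined result
    # would then be forwarded to the actual coding agent/LLM.
    return [
        {"role": "system", "content": REFINEMENT_INSTRUCTION},
        {"role": "user", "content": raw_prompt},
    ]

messages = refine("uhh make the thing save stuff and also email me")
print(messages[0]["role"], "->", messages[1]["content"])
```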
What a fantastic tool. I'll definitely do a deep dive into this.
Under "Prerequisites"[0] I see: "Get an Anthropic API key".
I presume this is temporary since the project is still in alpha, but I'm curious why this requires use of an API at all and what's special about it that it can't leverage injecting the prompt into a Claude Code or other LLM coding tool session.
You can basically condense this entire "language" into a set of markdown rules and use it as a skill in your planning pipeline.
And whatever codespeak offers is like a weird VCS wrapper around this. I can already version and diff my skills and plans properly, and following that, my LLM-generated features should be scoped properly and worked on in their own branches. This imo will just give rise to a reason for people to make huge 8k-10k line changes in a commit.
> And whatever codespeak offers is like a weird VCS wrapper around this.
I'm still getting used to the idea that modern programs are 30 lines of Markdown that get the magic LLM incantation loop just right. Seems like you're in the same boat.
We tend to obsess over abstractions, frameworks, and standards, which is a good thing. But we already have BDD and TDD, and now, with English as the new high-level programming language, it is easier than ever to build. Focusing on other critical problem spaces like context/memory is more useful at this point. If the whole purpose of this is token compression, I don't see myself using it.
I've done something similar for queries. Comments:
* Yes, this is a language; no, it's not a programming language you are used to, but a restricted/embellished natural language that (might) make things easier to express to an LLM, and provides a framework for humans who want to write specifications to get the AI to write code.
* Models aren't deterministic, but they are persistent (never gonna give up!). If you generate tests from your specification as well as code, you can use differential testing to get some (although imperfect) measure of correctness. Never delete previously generated code; if you change the spec, have your model fix the existing code rather than generate new code.
* Specifications can actually be analyzed by models to determine whether or not they are fully grounded. An ungrounded specification is not going to be a good experience, so ask the model if it thinks your specification is grounded.
* Use something like a build system if you have many specs in your code repository and you need to keep them in sync. Spec changes -> update the tests and code (for example).
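The differential-testing point above can be made concrete: keep the previously generated implementation around and compare it against the regenerated one over a shared input sweep. A stdlib sketch, where `sort_v1` and `sort_v2` are stand-ins for two generations of model output:

```python
import random

def sort_v1(xs):
    # Stand-in for the previously generated implementation.
    return sorted(xs)

def sort_v2(xs):
    # Stand-in for the regenerated implementation after a spec change
    # (or model upgrade); here it happens to agree with v1.
    out = list(xs)
    out.sort()
    return out

# Sweep both implementations over the same random inputs and flag
# any input where they disagree.
rng = random.Random(42)
for _ in range(500):
    xs = [rng.randrange(-100, 100) for _ in range(rng.randrange(30))]
    assert sort_v1(xs) == sort_v2(xs), f"implementations diverge on {xs}"
print("no divergence found across 500 inputs")
```

A disagreement doesn't tell you which version is wrong, only that the spec underdetermines the behavior at that input; that's exactly the signal you want before trusting a regeneration.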
This doesn't seem particularly formal. I still remain unconvinced reducing is really going to be valuable. Code obviously is as formal as it gets but as you trend away from that you quickly introduce problems that arise from lack of formality. I could see a world in which we're all just writing tests in the form of something like Gherkin though.
People seem weirdly eager to talk to LLMs in proto-code instead of fixing the base problem that LLMs are just unreliable interpreters. If your tool needs a new human-friendly DSL to avoid the ambiguity of plain English, maybe what you really want is to be writing actual code or specs with a type system and feedback loop. Any halfway formalism gives a false sense of precision, and you still get blindsided by the same model quirks, just dressed up differently.
> I could see a world in which we're all just writing tests in the form of something like Gherkin though.
Yes, and the implementation... no one actually cares about that. This would be a good outcome in my view. What I see is people letting LLMs "fill in the tests", whereas I'd rather tests be the only thing humans write.
While I'm also a bit skeptical, I think some formalism could really simplify everything. The programming world has lots of words that mean close to the same thing (subroutine, method, function, etc. ). Why not choose one and stick to it for interactions with the LLM? It should save plenty of complexity.
From what I was able to understand during the interview there, it's not actually a language, more like an orchestrator + pinning of individual generated chunks.
The demo I've briefly seen was very very far from being impressive.
Got rejected, perhaps for some excessive scepticism/overly sharp questions.
My scepticism remains - so far it looks like an orchestrator to me and does not add enough formalism to actually call it a language.
I think that the idea of more formal approach to assisted coding is viable (think: you define data structures and interfaces but don't write function bodies, they are generated, pinned and covered by tests automatically, LLMs can even write TLA+/formal proofs), but I'm kinda sceptical about this particular thing. I think it can be made viable but I have a strong feeling that it won't be hard to reproduce that - I was able to bake something similar in a day with Claude.
Definitely won't use it for prod ofc but may try it out for a side-project.
It seems that this is more or less:
- instead of modules, write specs for your modules
- on the first go it generates the code (which you review)
- later, diffs in the spec are translated into diffs in the code (the code is *not* fully regenerated)
this actually sounds pretty usable, esp. if someone likes writing. And wherever you want to dive deep, you can delve down into the code and do "microoptimizations" by rolling something on your own (with what seems to be called here "mixed projects").
That said, I'm not sure if I need a separate tool for this, tbh, instead of just having markdown files and telling Claude to look at the md diff and adjust the code accordingly.
Instead of imperatively letting the agents hammer your codebase into shape through a series of prompts, you declare your intent, observe the outcome and refine the spec.
The agents then serve as a control plane, carrying out the intent.
I tried looking through some of the spec samples, and it was not clear what the "language" was or that there was any syntax. It just looks like a terse spec.
In my building and research of Simplex, specs designed for LLM consumption don't need a formalized syntax as much as they just need an enforced structure, ideally paired with a linter. An effective spec for LLMs will bridge the gap between natural language and a formal language. It's about reducing ambiguity of intent because of the weaknesses and inconsistencies of natural language and the human operator.
The other piece that has always struck me as a huge inefficiency with current usage of LLMs is the hoops they have to jump through to make sense of existing file formats - especially making sense of (or writing) complicated semi-proprietary formats like PDF, DOC(X), PPT(X), etc.
Long-term prediction: for text, we'll move away from these formats and towards alternatives that are designed to be optimal for LLMs to interact with. (This could look like variants of markdown or JSON, but could also be Base64 [0] or something we've not even imagined yet.)
If LLMs can't deal with those legacy file formats, I don't trust them to be able to deal with anything. The idea that LLMs are so sophisticated that we have a need to dumb down inputs in order to interact with them is self-contradictory.
While I agree, the parent also talks about efficiency. If a different format increases efficiency, that could be reason enough to switch to it, even if understanding doesn’t improve and already was good before.
Thank you, yes, efficiency was entirely my point. :)
Humans are far more efficient when they interact with information that's in a format that suits their abilities or preferences; it seems pretty obvious that in some ways the same would likely be true for LLMs.
From an inclusivity perspective, can more people than "programmers" be enlisted to write specs?
We are putting people out of work. Why not employ MORE people to do LESS, by sharing the responsibility? A group activity, perhaps?
Eg make room in this spec > program development workflow for, say, ... Tech Writers. Add them to the development team to ensure the language is right for the LLM ahead of time!
I think I want to know exactly what SQL ends up hitting the DB, and I want to fine tune it precisely.
This is the same issue I've had with ORMs - I get that they make it easier to generate functionality at speed, but ultimately I want control over the biggest performance lever I have available to me.
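The contrast is easy to see with stdlib `sqlite3`, where the SQL that hits the DB is exactly the string you wrote, down to the index usage you can check with `EXPLAIN QUERY PLAN` (as opposed to whatever an ORM decides to emit):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)", [("ada",), ("grace",)])

# Hand-tuned query: you know precisely what hits the DB and can
# fine-tune it as the biggest performance lever you have.
rows = conn.execute(
    "SELECT id, name FROM users WHERE name = ? LIMIT 1", ("ada",)
).fetchall()
print(rows)  # [(1, 'ada')]
```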
I am trying a similar spec-driven development idea in a project I am working on. One big difference is that my specifications are not formalized that much. They are in plain language and are read directly by the LLM to convert to code. That seems like the kind of thing the LLM is good at. One other feature of this is that it allows me to nudge the implementation a little with text in the spec outside of the formal requirements. I view it two ways: as spec-to-code, but also as a saved prompt. I haven't spent enough time with it to say how successful it is, yet.
Do you save these "prompts" so you can improve them, and in turn improve the code? To me, Spec Driven Development is more than a spec to generate code, structured or not.
The spec contains formal, numbered items which are requirements and also serve to make tests (these are spec tests, additional implementation tests are also allowed by the implementer). When I said "they are not formalized as much", I mean I am not as strict on the spec format as CodeSpeak is, where their spec can be parsed with a tool. For me it is up to the LLM to use the spec itself. I have additional text beyond the requirement items which also influences how the LLM implements the code. I did this because it is too tough, for me at least, to prompt the LLM just based on strict requirements. This is perhaps cheating according to what you might call SDD. I'm just trying to be practical. The idea in the end is that this spec implies the code and maintaining the spec is the same as maintaining the code. Strictly speaking this won't be true, but I am hoping it still works anyway.
The title writer might be doing the project a disservice by using the term "formal" to describe it, given that the project talks a lot about "specs". I mistook it to imply something about formal specification.
My quick understanding is that it isn't really trying to utilize any formal specification but is instead trying to more clearly map the relationship between, say, an individual human-language requirement you have of your application and the code which implements that requirement.
i’ve been doing this for a while, you create an extra file for every code file, sketch the code as you currently understand it (mostly function signatures and comments to fill in details), ask the LLM to help identify discrepancies. i call it “overcoding”.
i guess you can build a cli toolchain for it, but as a technique it’s a bit early to crystallize into a product imo, i fully expect overcoding to be a standard technique in a few years, it’s the only way i’ve been able to keep up with AI-coded files longer than 1500 lines
A few days ago I released https://github.com/b4rtaz/incrmd, which is similar to Codespeak. The main difference is that the specification is defined at the *project* level. I'm not sure if having the specification at the *file* level is a good choice, because the file structure does not necessarily align with the class structure, etc.
The pattern we keep converging on is to treat model calls like a budgeted distributed system, not like a magical API. The expensive failures usually come from retries, fan-out, and verbose context growth rather than from a single bad prompt. Once we started logging token use per task step and putting hard ceilings on planner depth, costs became much more predictable.
Is it open source? This is a cool idea, but I'm pretty sure it's probably just a thin wrapper around claude. I also couldn't install it on my headless dev box because it relies on a localhost callback. Well, I'm looking forward to the first open source version in about 10 minutes.
This seems like a step backwards. Programming Languages for LLMs need a lot of built in guarantees and restrictions. Code should be dense. I don't really know what to make of this project. This looks like it would make everything way worse.
I've had good success getting LLMs to write complicated stuff in haskell, because at the end of the day I am less worried about a few errant LLM lines of code passing both the type checking and the test suite and causing damage.
It is both amazing and I guess also not surprising that most vibe coding is focused on python and javascript, where my experience has been that the models need so much oversight and handholding that it makes them a simple liability.
The ideal programming language is one where a program is nothing but a set of concise, extremely precise, yet composable specifications that the _compiler_ turns into efficient machine code. I don't think English is that programming language.
I'm gonna be honest here, I opened this website excited thinking this was a sort of new paradigm or programming language, and I ended up extremely confused at what this actually is and I still don't understand.
Is it a code generator tool from specs? Ugh. Why not push for the development of the protocol itself then?
When you translate spec to tests (if those are traditional unit tests or any automated tests that call the rest of the code), that fixes the API of the code, i.e. the code gets designed implicitly in the test generation step. Is this working well in your experience?
So, instead of making LLMs smarter let’s make everything abstract again? Because everyone wants to learn another tool? Or is this supposed to be something I tell Claude, “Hey make some code to make some code!” I’m struggling to see the benefit of this vs. just telling Claude to save its plan for re-use.
One requirement for a programming language to be “good” is that doing this, with sufficient specificity to get all the behavior you want, will be more verbose than the code itself.
Getting so close to the idea. We will only have Englishscripts and won't need code anymore. No compiling. No vibe coding. No coding. https://jperla.com/blog/claude-electron-not-claudevm
When we understand that AI allows the spec to be in English (or any natural language), we might stop attempting to build "structured english" for spec.
Somewhat related, but I always wondered: if I asked an LLM to create a new language with a full focus on LLM coding efficiency, ignoring the need for humans to read it, what would it come up with? Binary?
...and I obviously asked Gemini about it and it replied:
"A language optimized exclusively for Large Language Model (LLM) efficiency would prioritize Token Density, Context Window Management, and Architectural Alignment. It would not be binary, as standard LLM architectures (Transformers) process discrete tokens from a predefined vocabulary, not raw bits."
The issue with these LLM-targeting DSLs is that you have to waste a bunch of your context window explaining the grammar and semantics to the LLM, whereas they already speak existing programming languages because they've seen so much existing code. This usually negates the benefits of the DSL.
Yep, you're right, I read this too fast - it's also breaking long lines into many and I read this in reverse. I just imagined how much I could reduce my own LOC by adjusting the print width on my prettier settings..
It's not a new question whether as-is programming languages are optimal for LLMs: a language for LLM use would have to be strongly typed. But that's about it for obvious requirements.
I would just like to point out the fun fact that instead of the brave new MD speak, there is still a `codespeak.json` to configure the build system itself...
...which seems to suggest that the authors themselves don't dogfood their own software. Please tell me that Codespeak was written entirely with Codespeak!
Instead of that json, which is so last year, why not use an agent to create an MD file to setup another agent, that will compile another MD file and feed it to the third agent, that... It is turtles, I mean agents, all the way down!
We created programming languages to direct programs. Then we created LLMs to use English to direct programs. Now we've created programming languages to direct LLMs. What is old is new again!
Another great way to shrink your codebase 10x? Rewrite it in APL. If less code means less information, what are we gonna do when missing information was important?
The tweet I saw a few weeks ago about LLMs enabling building stupid ideas that would have never been built otherwise particularly resonates with this one.
As someone who hates writing (and thus coding), this might be a good tool, but how is it different from doing the same in Claude? And I only see Python; what about other languages, are they also production grade?
The intent of the idea is there, and I agree that there should be more precise syntax instead of colloquial English. However, it's difficult to take CodeSpeak seriously as it looks AI generated and misses key background knowledge.
I'm hoping for a framework that expands upon Behavior Driven Development (BDD) or a similar project-management concept. Here's a promising example that is ripe for an Agentic AI implementation, https://behave.readthedocs.io/en/stable/philosophy/#the-gher...
`codespeak build` — takes the spec and turns it into code via LLM, like a non-deterministic compiler.
`codespeak takeover` — reads a file and creates a spec from it.
You can progressively opt in ("mixed mode") so it only touches files you allow it to (and makes new ones if needed).
Pros:
- Formalised version of the "agentic engineering" many are already doing, but might actually get people to store their specs and decisions in a concise way that seems more sane than committing your entire meandering chat session.
- Encouraging people to review spec and code side-by-side at a file level seems reasonable. Could even build an IDE/plugin around that concept to auto-load/navigate the spec and code side-by-side like their examples: https://codespeak.dev/shrink-factor/markitdown-eml. If tokens per second for popular models continues to improve, could even update the spec by hand and see the code regenerate live on the fly, perhaps via `codespeak watch`.
- Reduces the code you have to write by 5-10x. Largely by convincing you not to write it any more. Our graphics cards write the code for us in this timeline and many people are even happy about it.
- As models improve, could optionally re-run `build` against the same original spec. (Why do that if the output already produces the intended result and the test suite still passes? Presumably for simpler code. Or faster output. Or lower memory use. Or simply _different_ bugs.)
- Moves programming back toward structured thinking backed by a committed artifact and a solid two-word command you can run, instead of actively having conversations with far away GPUs like that's normal now.
- Could theoretically swap out the build target language if you grow to trust the build process to be your babelfish/specfish. Kind of Haxe with Markdown.
Cons:
- Seems to be gated by their login, can't bring your own model?
- Suspect the labs can all clone this concept very easily. `claude build` and `claude spec`?
The idea of a non-deterministic 'build' command had me cringing at first. But formalising a process many are using anyway that currently feels pretty sloppy perhaps isn't so terrible.
If nothing else, writing `build` is a lot quicker and maintains a whisker of self-respect. At least compared to typing, "please take this spec and adapt the Python accordingly" followed 2 minutes later by, "I updated the spec to deal with the edge-case you missed, try again but don't miss anything this time".
This is pretty lame. I WANT to write code, something that has a formal definition, and express my ideas in THAT, not some ad-hoc pseudo-English that an LLM then puts the cowboy hat on and turns into whatever the hotness of the week is.
Programming is, in the end, math: the model is defined and, when done correctly, follows common laws.
I cannot read light on black. I don't know, maybe it's a condition, or simply just part of getting old. But my eyes physically hurt, and when I look up from reading a light-on-black screen, even when I looked at only for a short moment, my eyes need seconds to adjust again.
I know dark mode is really popular with the youngens but I regularly have to reach for reader mode for dark web pages, or else I simply cannot stand reading the contents.
Unfortunately, this site does not have an obvious way of reading it black-on-white, short of looking at the HTML source (CTRL+U), which - in fact - I sometimes do.
Same for me, has been my whole life. I complain about it all the time. It's well documented that people can read black on light far better and with less eye strain than light on black; yet there seems to be a whole generation of developers determined to force us all to try and read it. Even the media sites like Netflix, Prime, etc. force it. At least Tubi's is somewhat more readable.
Sometimes a site will include a button or other UI element to choose a light theme but I find it odd that so many sites which are presumed to be designed by technically competent people, completely ignore accessibility concerns.
The most common mistake I see (on this website at least) is the assumption that one's programming competence is equal to their competence in other things.
Do you sit in a bright room? Right now, during the night, I see your comment like this: https://i.imgur.com/c7fmBns.png, but during the day when the room is bright, I also see everything with light themes/background colors, otherwise it is indeed hard to see properly.
When it’s dark (I can’t stand bright rooms at night), I lower the brightness of my screens instead of going for dark mode. I have astigmatism and any tiny bright spot is hard to focus on. It’s easier when the bright part is large and the dark parts are small (black on white is best).
I find dark mode much easier to read and far less eye strain. I guess it just shows that users should be the ones to set the preference. There are studies on monkeys showing light mode leading to myopia. Although lately I have come to learn there are lots of poorly done studies.
The HN title seems very misleading to me. How is this, in any sense of the word, "formal?" I don't see that particular word used to describe this tool on the web page itself.
The site does describe it as a "programming language," which feels like a novel use of the term to me. The borders around a term like "programming language" are inherently fuzzy, but something like "code generation tool" better describes CodeSpeak IMHO.
Isn't that the point though? In the development loop, you'd diagnose why it's not building what you expect, so you flush out those previous implicit or even subconscious edge cases, undocumented behaviors, and tribal knowledge and codify them into the spec.
It would actually end up being a lot easier to maintain than a bunch of undocumented spaghetti.
I think the magic sauce in this project is the fact that they convert diffs in spec to diffs in code, which is likely more stable than just regenerating the whole thing.
The thing is, such exploration can be done on a whiteboard or a moodboard. Once we've settled on a process, we code it and let the computer take over.
I really believe the struggle is knowledge and communication of ideas, not the coding part (which is fairly easy IMO).
We built LLMs so that you can express your ideas in English and no longer need to code.
Also, English is really too verbose and imprecise for coding, so we developed a programming language you can use instead.
Now, this gives me a business idea: are you tired of using CodeSpeak? Just explain your idea to our product in English and we'll generate CodeSpeak for you.
Yeah. It's hard to express and understand nested structures in a natural language yet they are easy in high-level programming languages. E.g. "the dog of first son of my neighbour" vs "me.neighbour.sons[0].dog", "sunny and hot, or rainy but not cold" vs "(sunny && hot) || (rainy && !cold)".
In the past, math was expressed in natural language; mathematical notation exists because natural language isn't precise enough.
That seems like it could lead to imprecise outcomes, so I've started a business that defines a spec to output the correct English to input to your product.
My gut says Kotlin is great for individual developer experience. But I never heard or saw credible reports on the Total Cost of Ownership, e.g., Kotlin engineers hiring, swapping out on a team.
"In order to make machines significantly easier to use, it has been proposed (to try) to design machines that we could instruct in our native tongues. This would, admittedly, make the machines much more complicated, but, it was argued, by letting the machine carry a larger share of the burden, life would become easier for us. It sounds sensible provided you blame the obligation to use a formal symbolism as the source of your difficulties. But is the argument valid? I doubt."
Joel Spolsky, stackoverflow.com founder, Talk at Yale: Part 1 of 3 https://www.joelonsoftware.com/2007/12/03/talk-at-yale-part-...
That's still the best way to turn a spec into a program and comes with all the downsides it entails.
At the time "generating a program from a spec" was an idea floating around that you could come up with a "spec language" that was easier than regular programming languages but somehow still had the same power and could be compiled directly into a program. That's the crackpot idea that Joel is referencing - but that's not what a spec language used with an LLM is doing.
[1]: https://www.joelonsoftware.com/2000/10/02/painless-functiona...
And because formal verification with LLMs is an active area of open research, I have some hope that the old idea of automated formal verification is starting to take shape. There is a lot to talk about here, but I'll leave a link to the 1968 NATO Software Engineering Conference [1] for those who are interested in where these thoughts originated. It goes deeply into the subject of "specification languages" and other related concepts. My understanding is that the historical split between computing science and software engineering has its roots in this 1968 conference.
[1]: http://homepages.cs.ncl.ac.uk/brian.randell/NATO/nato1968.PD...
First, collect the following information from user: …. Second, send http request to the following endpoint with the certain payload…. If server returned error - report back to user.
It cracks me up every time I see that kind of stuff. Why on Earth wouldn't you just write a script for that purpose? 10x faster, zero tokens burned, 100% deterministic.
- Because your bash-fu may not be good enough
- Because parts of the process may not be amenable to scripting, especially if they require LLMs
- Because the inputs to some steps are fuzzy enough that only an LLM can handle them
- etc...
That being said, yes, anything amenable to being turned into scripts should be.
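For the workflow quoted above (collect info, POST it, report errors), the deterministic script really is only a few lines. A minimal sketch — the endpoint, field names, and validation rule are placeholders, not anything from the thread:

```python
import json
import urllib.request
from urllib.error import HTTPError

ENDPOINT = "https://api.example.com/submit"  # hypothetical endpoint

def build_payload(name: str, email: str) -> bytes:
    # Step 1: the "collected" user info, validated deterministically.
    if "@" not in email:
        raise ValueError("invalid email")
    return json.dumps({"name": name, "email": email}).encode()

def submit(name: str, email: str) -> str:
    # Step 2: send the payload. Step 3: report errors back to the user.
    req = urllib.request.Request(
        ENDPOINT,
        data=build_payload(name, email),
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.read().decode()
    except HTTPError as e:
        return f"server returned error: {e.code}"
```

Same behavior on every run, no tokens, and the error path is explicit rather than hoped for.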
To anticipate your objection: you can get over determinism now, or you can get over it later. You will get over it, though, if you intend to stay in this business.
What are you talking about? If an LLM is a compiler, then I'm a compiler. Are we going to redefine the meaning of words in order not to upset the LLM makers?
Over time, when digital computers became commonplace, the computing moved from the person to the machine. At this time, arguably the humans doing the programming of the machine were doing the work we now ask of a "compiler".
So yes, an LLM can be a compiler in some sense (from a high level abstract language into a programming language), and you too can be a compiler! But currently it's probably a good use of the LLM's time and probably not a good use of yours.
But your point stands. There is a period beyond which doing more than learning the fundamentals just becomes toil.
Advice given to Henry Ford’s lawyer, Horace Rackham, by an unnamed president of Michigan Savings Bank in 1903.
The idea, IIUC, seems to be that instead of directly telling an LLM agent how to change the code, you keep markdown "spec" files describing what the code does and then the "codespeak" tool runs a diff on the spec files and tells the agent to make those changes; then you check the code and commit both updated specs and code.
It has the advantage that the prompts are all saved along with the source rather than lost, and in a format that lets you also look at the whole current specification.
The limitation seems to be that you can't modify the code yourself if you want the spec to keep reflecting it (and you can't do LLM-driven changes that refer to the actual code). Also, in general it's not guaranteed that the spec captures everything important about the program, so the code itself still carries "source" information. For example, maybe you want the background of a GUI to be white, and it is white because the LLM happened to choose that, but it's not written in the spec.
The latter can maybe be mitigated by doing multiple generations and checking them all, but that multiplies LLM and verification costs.
Also it seems that the tool severely limits the configurability of the agentic generation process, although that's just a limitation of the specific tool.
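The diff-driven flow described above can be approximated with stdlib tooling. A minimal sketch — the prompt wording and single-file layout are my assumptions, not CodeSpeak's actual behavior:

```python
import difflib

def spec_diff_prompt(old_spec: str, new_spec: str) -> str:
    """Turn a change in a markdown spec into an instruction for a coding agent."""
    diff = "\n".join(difflib.unified_diff(
        old_spec.splitlines(), new_spec.splitlines(),
        fromfile="spec.md (old)", tofile="spec.md (new)", lineterm="",
    ))
    # Rather than regenerating the whole program, hand the agent only the delta.
    return f"Apply the following spec change to the existing code:\n{diff}"
```

The point is that the agent sees a small, targeted change, which is likely more stable than re-running generation from the full spec each time.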
Eventually, we'll end up in a world where humans don't need to touch code, but we are not there yet. We are looking into ways to "catch up" the specs with whatever changes happen in the code not through CodeSpeak (agents or manual changes or whatever). It's an interesting exercise. In the case of agents, it's very helpful to look at the prompts users gave them (we are experimenting with inspecting the sessions from ~/.claude).
More generally, `codespeak takeover` [1] is a tool to convert code into specs, and we are teaching it to take prompts from agent sessions into account. Seems very helpful, actually.
I think it's a valid use case to start something in vibe coding mode and then switch to CodeSpeak if you want long-term maintainability. From "sprint mode" to "marathon mode", so to speak.
[1] https://codespeak.dev/blog/codespeak-takeover-20260223
Will we though? Wouldn't AI need to reach a stage where it is a tool, like a compiler, which is 100% deterministic?
1. You are right that we can redefine what is code. If code is the central artefact that humans are dealing with to tell machines and other humans how the system works, then CodeSpeak specs will become code, and CodeSpeak will be a compiler. This is why I often refer to CodeSpeak as a next-level programming language.
2. I don't think being deterministic per se is what matters. Being predictable certainly does. Human engineers are not deterministic yet people pay them a lot of money and use their work all the time.
Human carpenters are not deterministic yet they won't use a machine saw that goes off line even 1% of the time. The whole history of tools, including software, is one of trying to make the thing do more precisely what is intended, whether the intent is right or not.
Can you imagine some machine tool maker making something faulty and then saying, "Well hey, humans aren't deterministic."
* regression tests – can be generated
* conformance tests – often can be generated
* acceptance tests – are another form of specification and should come from humans.
Human intent can be expressed as
* documents (specs, etc)
* review comments, etc
* tests with clear yes/no feedback (data for automated tests, or just manual testing)
And this is basically all that matters, see more here: https://www.linkedin.com/posts/abreslav_so-what-would-you-sa...
Having faith that determinism will last forever is foolish. You have to upgrade at some point, and you will run into problems. New bugs, incompatibilities, workflow changes, whatever the case will make the determinism property moot.
The reason is, it's often more work to be deterministic than not deterministic, so compilers don't do it. For example, they may compile functions in parallel and append them to the output in the order they complete.
It also seems to be closed-source, which means that unless they open the source very soon it will very likely be immediately replaced in popularity by an open source version if it turns out to gain traction.
Cool idea overall, an incremental pseudocode compiler. Interesting to see how well it scales.
I can also see a hybrid solution with non-specced code files for things where the size of code and spec would be the same, like for enums or mapping tables.
Working on that as well. We need to be a lot more flexible and configurable.
* This isn't a language, it's some tooling to map specs to code and re-generate
* Models aren't deterministic - every time you would try to re-apply you'd likely get different output (without feeding the current code into the re-apply and let it just recommend changes)
* Models are evolving rapidly, this months flavour of Codex/Sonnet/etc would very likely generate different code from last months
* Text specifications are always under-specified, lossy and tend to gloss over a huge amount of details that the code has to make concrete - this is fine in a small example, but in a larger code base?
* Every non-trivial codebase would be made up of hundreds of specs that interact and influence each other - very hard (and context-heavy) to read all the specs that impact a piece of functionality and keep it coherent
I do think there are opportunities in this space, but what I'd like to see is:
* write text specifications
* model transforms text into a *formal* specification
* then the formal spec is translated into code which can be verified against the spec
2 and 3 could be merged into one if there were practical/popular languages that also supported verification, in the vein of Ada/SPARK.
But you can also get there by generating tests from the formal specification that validate the implementation.
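As a toy illustration of that last step: once the spec states checkable properties, any generated implementation can be validated against them mechanically. Here the "formal spec" of a sort is just two properties — output is ordered and is a permutation of the input — swept over all small inputs (a stand-in for real spec-derived test generation):

```python
from itertools import permutations

def spec_holds(sort_fn, xs) -> bool:
    """The two properties a sorting spec would state."""
    out = sort_fn(xs)
    ordered = all(a <= b for a, b in zip(out, out[1:]))
    # Using builtin sorted() only as a test oracle for "same multiset".
    permutation = sorted(out) == sorted(xs)
    return ordered and permutation

def check_against_spec(sort_fn, max_len=5) -> bool:
    # Exhaustively sweep all small inputs -- a poor man's property-based test.
    for n in range(max_len):
        for xs in permutations(range(n)):
            if not spec_holds(sort_fn, list(xs)):
                return False
    return True
```

A correct implementation passes; an implementation that merely returns its input fails on any unsorted permutation. The code can be regenerated freely as long as this check stays green.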
If the result is always provably correct, it doesn't matter whether or not it's different at the code level. People interested in systems like this believe that what the code does is infinitely more important than the code itself.
To aim for a meeting of the minds... Would you help me out and unpack what you mean so there is less ambiguity? This might be minor terminological confusion. It is possible we have different takes, though -- that's what I'm trying to figure out.
There are at least two senses of 'correctness' that people sometimes mean: (a) correctness relative to a formal spec: this is expensive but doable*; (b) confidence that a spec matches human intent: IMO, usually a messy decision involving governance, organizational priorities, and resource constraints.
Sometimes people refer to software correctness problems in a very general sense, but I find it hard to parse those. I'm familiar with particular theoretical results such as Rice's theorem and the halting problem that pertain to arbitrary programs.
* With tools like {Lean, Dafny, Verus, Coq} and in projects like {CompCert, seL4}.
Since nobody involved actually cares whether the code works or not, it doesn't matter whether it's a different wrong thing each time.
If anyone cared enough they could look at the code and see the problem immediately and with little effort, but we're encouraging a world where no one cares enough to put even that baseline effort because *gestures at* the tests are passing. Who cares how wrong the code is and in what ways if all the lights are green?
If the spec is so complete that it covers everything, you might as well write the code.
The benefit of writing a spec and having the LLM code it, is that the LLM will fill in a lot of blanks. And it is this filling in of blanks that is non-deterministic.
Welcome to the usual offshoring experience.
Except one shoe is made by children in a fire-trap sweatshop with no breaks, and the other was made by a well paid adult in good working conditions.
The ends don’t justify the means. The process of making impacts the output in ways that are subtle and important, but even holding the output as a fixed thing - the process of making still matters, at least to the people making it.
And guess how much shoe companies make who manufacture shoes in sweatshop conditions versus the ones who make artisanal handcrafted shoes?
Btw in my metaphor, we - the programmers - are the kids in the sweatshop.
Even on the BigTech side being able to reverse a btree on the whiteboard and having on your resume that you were a mid level developer isn’t enough either anymore
If you look at the comp on that side, it’s also stagnated for a decade. AI has just accelerated that trend.
While my job has been at various percentages to produce code for 30 years, it’s been well over a decade since I had to sell myself on “I codez real gud”. I sell myself as a “software engineer” who can go from ambiguous business and technical requirements, deal with politics, XYProblems, etc
That’s exactly my point. “Programming” was clearly becoming commoditized a decade ago.
But while you are clutching your pearls, where do you think your computer, clothes etc are being made?
Out of bounds behavior is sometimes a known unknown, but in the era of generated code is exclusively unknown unknowns.
Good luck speccing out all the unanticipated side effects and undefined behaviors. Perhaps you can prompt the agent in a loop a number of times, but it's hard to believe that the brute-force throw-more-tokens-at-it approach has the same level of return as a more attentive audit by human eyeballs.
I don’t review every line of code by everyone whose output I’m responsible for, I ask them to explain how they did things and care about their testing, the functional and non functional requirements and hotspots like concurrency, data access patterns, architectural issues etc.
For instance, I haven’t done web development since 2002 except for a little copy and paste work. I completely vibe coded three internal web admin sites for separate projects and used Amazon Cognito for authentication. I didn’t look at a line of code that AI generated any more than I would have looked at a line of code for a website I delegated to the web developer. I cared about functionality and UX.
I have not encountered an agent yet that I can trust in the same way.
Be it shoes, offshoring, WebWidgets, or AI-generated code.
But we’re the shoemakers, not the consumers. It’s actually our job to preserve our own and our peers’ quality of life.
Cheapest good option possible doesn’t have to be the sweatshop - tho the shareholders of nike or zara would have you believe that - the labor movements of the 19th century proved that’s not the case.
How you define your proof is up to you. It might be a simple test, or an exhaustive suite of tests, or a formal proof. It doesn't matter. If the output of the code is correct by your definition, then it doesn't matter what the underlying code actually is.
The Church–Rosser property (confluence) in term rewriting and lambda calculus is one possible lens.
To have a formally verified spec, one has to use some decidable fragment of FO.
If you try to replace code generation with rewriting things can get complicated fast.[2]
Rust, as an example, uses affine types, and people try to add Petri nets [0], but in general Petri-net reachability is Ackermann-complete [1].
It is just the trade-off of using a context-free-like system such as an LLM with natural language.
HoTT, and how dependent types tend to break "isomorphic ≃ equal", is another possible lens.
[0] https://arxiv.org/abs/2212.02754v3
[1] https://arxiv.org/abs/2212.02754v3
[2] https://arxiv.org/abs/2407.20822
Second, it's very easy to have some specs decided tractably, at least in many practical instances, but they are far too weak to specify most correctness properties programs need. You mentioned the Rust type system, and it cannot specify properties with interleaved quantifiers, which most interesting properties require.
And as for HoTT - or any of the many equivalent rich formalisms - checking their proofs is tractable, but not finding them. The intractability of verification of even very limited languages (again TQBF) holds regardless of how the verification is done.
I think it's best to take it step by step, and CodeSpeak's approach is pragmatic.
> First, it's not a question of decidability but of tractability
The question of decidability is a matter of many-one reduction; in fact, RE-completeness is defined via many-one reductions.
In a computational complexity sense, tractability is a far stronger notion. Basically, an algorithm is efficient if its time complexity is at most polynomial for any size-n input. A problem is "tractable" if there is an efficient algorithm that solves it.
You are correct if you limit your expressiveness to PTIME, where, because P == co-P, PEM/tight apartness/omniscience principles hold.
But the problem is that the Church–Rosser property [0] (proofs ~= programs) and the Brouwer–Heyting–Kolmogorov interpretation [1] (propositions as types) are NOT binary SAT, and you have concepts like mere propositions [3] that are very different from plain Boolean SAT.
But CodeSpeak doesn't have formal specifications, so this is irrelevant. Their example output produced code with path-traversal/resource-exhaustion risks and correctness issues, and is a case in point.
My personal opinion is that we will need to work within the limitations of the systems, and while it is trivial to come up with your own canary, I would recommend playing with [3] before the models directly target it.
Generating new code from a changed spec will be less difficult, specifically when the mess of real-world specs comes into play. You can play with the example on CodeSpeak's front page: try to close the various holes the software has with malformed/malicious input while giving the LLM the existing code base, and you will see that "brown M&M" [3] problem arise quickly. At least for me, prompting it to look at the changed natural-language spec and generate new code was more successful.
But for some models like the qwen3 coder next, the style resulted in far less happy path protections, which that model seems to have been trained on to deliver by default in some cases.
[0] https://calhoun.nps.edu/entities/publication/015f1bab-6642-4...
[1] https://www.cs.cornell.edu/courses/cs6110/2017sp/lectures/le...
[2] https://www.cambridge.org/core/journals/journal-of-functiona...
[3] https://codemanship.wordpress.com/2025/10/03/llms-context-wi...
I have no idea about codespeak - I was responding to the comments above, not about codespeak.
True.
> There's a vast chasm there we only need to even slightly start to bridge to get to far higher confidence levels than the typical human dev team achieves.
The word "slightly" is doing a lot of work here to the point of making it impossible to estimate. For example, the complexity classes P and NP are only slightly apart, and yet that's where a very practical barrier between feasibility and infeasibility lies. I don't doubt that one day AI may be able to write programs as well as humans, although nobody can estimate how soon that day will come, but nobody knows how wide the gap between that and "far higher confidence" is. Maybe there are fundamental computational complexity barriers in that gap that no amount of intelligence can cross, and maybe there aren't. Nobody knows yet.
What we do know is that anything humans do is possible - after all, we're doing it - and many things we need and humans can't do (including predicting nonlinear systems like the behaviour of the economy) no machine can do drastically better because of complexity limitations.
- I bootstrap AGENTS.md with my basic way of working and occasionally one or two project specific pieces
- I then write a DESIGN.md. How detailed or well specified it is varies from project to project: the other day I wrote a very complete DESIGN.md for a time tracking, invoice management and accounting system I wanted for my freelance biz. Because it was quite complete, the agent almost one-shot the whole thing
- I often also write a TECHNICAL-SPEC.md of some kind. Again how detailed varies.
- Finally I link to those two from the AGENTS. I also usually put in AGENTS that the agent should maintain the docs and keep them in sync with newer decisions I make along the way.
This system works well for me, but it's still very ad hoc and definitely doesn't follow any kind of formally defined spec standard. And I don't think it should, really? IMO, technically strict specs should be in your automated tests not your design docs.
I found it works very well in once-off scenarios, but the specs often drift from the implementation. Even if you let the model update the spec at the end, the next few work items will make parts of it obsolete.
Maybe that's exactly the goal that "codespeak" is trying to solve, but I'm skeptical this will work well without more formal specifications in the mix.
Yes and yes. I think it's an important direction in software engineering. It's something that people were trying to do a couple decades ago but agentic implementation of the spec makes it much more practical.
https://github.com/doubleuuser/rlm-workflow
I have the same basic workflow as you outlined, then I feed the docs into blackbird, which generates a structured plan with tasks and subtasks. Then you can have it execute tasks in dependency order, with options to pause for review after each task, or an automated review when all child tasks for a given parent are complete.
It’s definitely still got some rough edges but it has been working pretty well for me.
Is that really true? I haven’t tried to do my own inference since the first Llama models came out years ago, but I am pretty sure it was deterministic: if you fixed the seed and the input was the same, the output of the inference was always exactly the same.
1.) There is typically a temperature setting (even when not exposed, most major providers have stopped exposing it [esp in the TUIs]).
2.) Then, even with the temperature set to 0, it will be almost deterministic but you'll still observe small variations due to the limited precision of float numbers.
Edit: thanks for the corrections
No. Floating number arithmetic is deterministic. You don't get different answers for the same operations on the same machine just because of limited precision. There are reasons why it can be difficult to make sure that floating point operations agree across machines, but that is more of a (very annoying and difficult to make consistent) configuration thing than determinism.
(In general it is mildly frustrating to me to see software developers treat floating point as some sort of magic and ascribe all sorts of non-deterministic qualities to it. Yes floating point configuration for consistent results across machines can be absurdly annoying and nigh-impossible if you use transcendental functions and different binaries. No this does not mean if your program is giving different results for the same input on the same machine that this is a floating point issue).
In theory parallel execution combined with non-associativity can cause LLM inference to be non-deterministic. In practice that is not the case. LLM forward passes rarely use non-deterministic kernels (and these are usually explicitly marked as such e.g. in PyTorch).
You may be thinking of non-determinism caused by batching where different batch sizes can cause variations in output. This is not strictly speaking non-determinism from the perspective of the LLM, but is effectively non-determinism from the perspective of the end user, because generally the end user has no control over how a request is slotted into a batch.
Float addition is not associative, so the result of x1 + x2 + x3 + x4 depends on which order you add them in. This matters when the sum is parallelized, as the structure of the individual add operations will depend on how many cores are available at any given time.
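The non-associativity is easy to demonstrate: the same three values summed in two different groupings give different results.

```python
# Float addition is not associative: grouping changes the result.
a = (0.1 + 0.2) + 0.3   # 0.6000000000000001
b = 0.1 + (0.2 + 0.3)   # 0.6
print(a == b)           # False

# Each individual add is perfectly deterministic; a parallel reduction
# that combines partial sums in completion order is not, because the
# grouping can differ from run to run.
```

So "floating point is non-deterministic" is sloppy shorthand for "the summation order was non-deterministic".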
And models I work with (claude,gemini etc) have the temperature parameter when you are using API.
It is absolutely workable, current inference engines are just lazy and dumb.
(I use a Zobrist hash to track and prune loops.)
I use Kiro IDE (≠ Kiro CLI) primarily as a spec generator. In my experience, it's high-quality for creating and iterating on specs. Tools like Cursor are optimized for human-driven vibing -- they have great autocomplete, etc. Kiro, by contrast, is optimized around spec, which ironically has been the most effective approach I've found for driving agents.
I'd argue that Cursor, Antigravity, and similar tools are optimized for human steering, which explains their popularity, while Kiro is optimized for agent harnesses. That's also why it’s underused: it's quite opinionated, but very effective. Vibe-coding culture isn't sold on spec driven development (they think it's waterfall and summarily dismiss it -- even Yegge has this bias), so people tend to underrate it.
Kiro writes specs using structured formats like EARS and INCOSE (which is the spec format used in places like Boeing for engineering reqs). It performs automated reasoning to check for consistency, then generates a design document and task list from the spec -- similar to what Beads does. I usually spend a significant amount of time pressure-testing the spec before implementing (often hours to days), and it pays off. Writing a good, consistent spec is essentially the computer equivalent of "writing as a tool of thought" in practice.
Once the spec is tight, implementation tends to follow it closely. Kiro also generates property-based tests (PBTs) using Hypothesis in Python, inspired by Haskell's QuickCheck. These tests sweep the input domain and, when combined with traditional scenario-based unit tests, tend to produce code that adheres closely to the spec. I also add a small instruction "do red/green TDD" (I learned this from Simon Willison) and that one line alone improved the quality of all my tests.

Kiro can technically implement the task list itself, but this is where agents come in. With the spec in hand, I use multiple headless CLI agents in tmux (e.g., Kiro CLI, Claude Code) for implementation. The results have been very good. With a solid Kiro spec and task list, agents usually implement everything end-to-end without stopping -- I haven't found a need for Ralph loops. (Agents sometimes tend to stop midway on Claude plans, but I've never had that happen with Kiro; not sure why, maybe it's the checklist, which includes PBT tests as gates.)
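For readers unfamiliar with the style: a property-based test states an invariant the spec demands and sweeps generated inputs against it, instead of enumerating scenarios. A stdlib-only sketch (the run-length codec is a made-up function under test, and real tools like Hypothesis generate inputs far more cleverly than this seeded loop):

```python
import random

def encode(s: str) -> str:
    # Hypothetical function under test: run-length encode, "aaab" -> "a3b1".
    out, i = [], 0
    while i < len(s):
        j = i
        while j < len(s) and s[j] == s[i]:
            j += 1
        out.append(f"{s[i]}{j - i}")
        i = j
    return "".join(out)

def decode(s: str) -> str:
    # Inverse: expand (char, count) pairs back into the original string.
    return "".join(ch * int(n) for ch, n in zip(s[::2], s[1::2]))

def property_roundtrip(trials: int = 200) -> bool:
    # The property the spec states: decode(encode(x)) == x for all inputs.
    rng = random.Random(0)  # seeded, so the sweep itself is reproducible
    for _ in range(trials):
        s = "".join(rng.choice("ab") for _ in range(rng.randint(0, 8)))
        if decode(encode(s)) != s:
            return False
    return True
```

One property line covers a whole family of scenario tests, which is why PBTs work well as gates in a task checklist.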
It didn't have the strongest start, but the Kiro IDE is one of the best spec generators I've used, and it integrates extremely well with agent-driven workflows.
>* write text specifications
>* model transforms text into a formal specification
>* then the formal spec is translated into code which can be verified against the spec
This skill does just that: https://github.com/doubleuuser/rlm-workflow
Each stage produces its own output artifact (analysis, implementation plan, implementation summary, etc) and takes the previous phases' outputs as input. The artifact is locked after the stage is done, so there is no drift.
formal specification is no different from code: it will have bugs :)
There's no free lunch here: the informal-to-formal transition (be it words-to-code or words-to-formal-spec) comes through the non-deterministic models, period.
If we want to use the immense power of LLMs, we need to figure out a way to make this transition good enough
Slightly sarcastic but not sure this couldn't become a thing.
So like when you give the same spec to 2 different programmers.
The entire thing about determinism is a red herring, because 1) it's not determinism but prompt instability, and 2) prompt instability doesn't matter because of the above. Intelligence (both human and machine) is not a formal domain, your inputs lack formal syntax, and that's fine. For some reason this basic concept creates endless confusion everywhere.
It’s not fine. I program using formal syntax precisely because I want the computer to do exactly what I tell it to.
You're telling me that I should be doing the agonizing parts in order for the LLM to do the routine part (transforming a description of a program into a formal description of a program.) Your list of things that "make no sense" are exactly the things that I want the LLMs to do. I want to be able to run the same spec again and see the LLM add a feature that I never expected (and wasn't in the last version run from the same spec) or modify tactics to accomplish user goals based on changes in technology or availability of new standards/vendors.
I want to see specs that move away from describing the specific functionality of programs altogether, and more into describing a usefulness or the convenience of a program that doesn't exist. I want to be able to feed the LLM requirements of what I want a program to be able to accomplish, and let the LLM research and implement the how. I only want to have to describe constraints i.e. it must enable me to be able to do A, B, and C, it must prevent X,Y, and Z; I want it to feel free to solve those constraints in the way it sees fit; and when I find myself unsatisfied with the output, I'll deliver it more constraints and ask it to regenerate.
Be careful what you wish for. This sounds great in theory but in practice it will probably mean a migration path for the users (UX changes, small details changed, cost dynamics and a large etc.)
https://codespeak.dev/blog/greenfield-project-tutorial-20260...
It is a formal "way", akin to using JSON or XML, like tons of people are already doing.
I'm not sure adding a more formal language interface makes sense, as these models are optimized for conversational fluency. It makes more sense to me for them to be given instructions for using more formal interfaces as needed.
[1] https://news.ycombinator.com/item?id=47322887
I'm writing a language spec for an LLM runner that has the ability to chain prompts and hooks into workflows.
https://github.com/AlexChesser/ail
I'm writing the tool as proof of the spec. Still very much a pre-alpha phase, but I do have a working POC in that I can specify a series of prompts in my YAML language and execute the chain of commands in a local agent.
One of the "key steps" that I plan on designing is specifically an invocation interceptor. My underlying theory is that we would take whatever random series of prose that our human minds come up with and pass it through a prompt refinement engine:
> Clean up the following prompt in order to convert the user's intent
> into a structured prompt optimized for working with an LLM.
> Be sure to follow appropriate modern standards based on current
> prompt engineering research. For example, limit the use of persona
> assignment in order to reduce hallucinations.
> If the user is asking for multiple actions, break the prompt
> into appropriate steps (etc...)
That interceptor would then forward the well structured intent-parsed prompt to the LLM. I could really see a step where we say "take the crap I just said and turn it into CodeSpeak"
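The interceptor idea can be sketched in a few lines. Everything here is hypothetical, not part of any real tool: `intercept` makes one model call to refine the raw prompt, then forwards the refined prompt for the actual work, and `stub_model` stands in for a real LLM API so the flow is visible.

```python
# Hypothetical prompt-refinement interceptor: raw prompt -> refiner
# call -> structured prompt -> worker call. call_model is any function
# that takes a prompt string and returns a completion string.

REFINE_INSTRUCTION = (
    "Clean up the following prompt to capture the user's intent as a "
    "structured prompt. If the user asks for multiple actions, break "
    "the prompt into numbered steps.\n\n"
)

def intercept(raw_prompt: str, call_model) -> str:
    """Refine raw_prompt via one model call, then forward the result."""
    refined = call_model(REFINE_INSTRUCTION + raw_prompt)
    return call_model(refined)

# Stub in place of a real LLM API, just to show the two hops.
def stub_model(prompt: str) -> str:
    if prompt.startswith(REFINE_INSTRUCTION):
        return "STRUCTURED: " + prompt[len(REFINE_INSTRUCTION):]
    return "ANSWER to " + prompt

print(intercept("make me a sandwich", stub_model))
```

The same shape works with any chained runner: the refinement step is just one more prompt in the chain.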
What a fantastic tool. I'll definitely do a deep dive into this.
I presume this is temporary since the project is still in alpha, but I'm curious why this requires use of an API at all and what's special about it that it can't leverage injecting the prompt into a Claude Code or other LLM coding tool session.
[0]: https://codespeak.dev/blog/greenfield-project-tutorial-20260...
And whatever codespeak offers is like a weird VCS wrapper around this. I can already version and diff my skills and plans properly, and, following that, my LLM-generated features should be scoped properly and worked on in their own branches. This, IMO, will just give rise to a reason for people to make huge 8k-10k-line changes in a single commit.
I'm still getting used to the idea that modern programs are 30 lines of Markdown that get the magic LLM incantation loop just right. Seems like you're in the same boat.
* Yes, this is a language; no, it's not a programming language you are used to, but a restricted/embellished natural language that (might) make things easier to express to an LLM, and provides a framework for humans who want to write specifications to get the AI to write code.
* Models aren't deterministic, but they are persistent (never gonna give up!). If you generate tests from your specification as well as code, you can use differential testing to get some (though imperfect) measure of correctness. Never delete the code that was generated before; if you change the spec, have your model fix the existing code rather than generate new code.
* Specifications can actually be analyzed by models to determine whether they are fully grounded. An ungrounded specification is not going to be a good experience, so ask the model if it thinks your specification is grounded.
* Use something like a build system if you have many specs in your code repository and you need to keep them in sync. Spec changes -> update the tests and code (for example).
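The differential-testing point above can be made concrete. In this sketch, `slugify_v1` and `slugify_v2` are hypothetical stand-ins for two implementations generated from the same spec; `differ` reports the inputs where they disagree, which is exactly where the spec is under-constrained.

```python
# Differential testing sketch: run two generated implementations of
# the same spec over a shared corpus and collect disagreements.

def slugify_v1(title: str) -> str:
    # One plausible generation: split on whitespace, join with dashes.
    return "-".join(title.lower().split())

def slugify_v2(title: str) -> str:
    # Another generation: replace every non-alphanumeric char.
    return "".join(ch if ch.isalnum() else "-" for ch in title.lower())

def differ(f, g, corpus):
    """Return the inputs on which the two implementations disagree."""
    return [x for x in corpus if f(x) != g(x)]

corpus = ["Hello World", "plain", "two  spaces"]
print(differ(slugify_v1, slugify_v2, corpus))
```

Here consecutive spaces expose the ambiguity: the spec never said whether they collapse, and the two generations chose differently.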
That works great in practice, Gherkin even has a markdown dialect [1].
If you combine it with a tool like aico [2] you can have a really effective development workflow.
[1] https://github.com/cucumber/gherkin/blob/main/MARKDOWN_WITH_...
[2] https://github.com/jurriaan/aico
Yes, and the implementation... no one actually cares about that. This would be a good outcome in my view. What I see is people letting LLMs "fill in the tests", whereas I'd rather tests be the only thing humans write.
There has been a profession in place for many decades that specifically addresses that...Software Engineering.
The demo I've briefly seen was very very far from being impressive.
Got rejected, perhaps for some excessive scepticism/overly sharp questions.
My scepticism remains - so far it looks like an orchestrator to me and does not add enough formalism to actually call it a language.
I think that the idea of more formal approach to assisted coding is viable (think: you define data structures and interfaces but don't write function bodies, they are generated, pinned and covered by tests automatically, LLMs can even write TLA+/formal proofs), but I'm kinda sceptical about this particular thing. I think it can be made viable but I have a strong feeling that it won't be hard to reproduce that - I was able to bake something similar in a day with Claude.
Definitely won't use it for prod ofc but may try it out for a side-project.
It seems that this is more or less:
this actually sounds pretty usable, esp. if someone likes writing. And wherever you want to dive deep, you can delve into the code and do "micro-optimizations" by rolling something on your own (with what seems to be called here "mixed projects"). That said, not sure if I need a separate tool for this, tbh, instead of just having markdown files and telling Claude to see the md diff and adjust the code accordingly.
Instead of imperatively letting the agents hammer your codebase into shape through a series of prompts, you declare your intent, observe the outcome and refine the spec.
The agents then serve as a control plane, carrying out the intent.
The other piece that has always struck me as a huge inefficiency with current usage of LLMs is the hoops they have to jump through to make sense of existing file formats - especially making sense of (or writing) complicated semi-proprietary formats like PDF, DOC(X), PPT(X), etc.
Long-term prediction: for text, we'll move away from these formats and towards alternatives that are designed to be optimal for LLMs to interact with. (This could look like variants of markdown or JSON, but could also be Base64 [0] or something we've not even imagined yet.)
[0] https://dnhkng.github.io/posts/rys/
Humans are far more efficient when they interact with information that's in a format that suits their abilities or preferences; it seems pretty obvious that in some ways the same would likely be true for LLMs.
https://www.zmescience.com/science/news-science/polish-effec...
We are putting people out of work. Why not employ MORE people to do LESS, by sharing the responsibility? A group activity, perhaps?
Eg make room in this spec > program development workflow for, say, ... Tech Writers. Add them to the development team to ensure the language is right for the LLM ahead of time!
This is the same issue I've had with ORMs - I get that they make it easier to generate functionality at speed, but ultimately I want control over the biggest performance lever I have available to me.
My quick understanding is that it isn't really trying to utilize any formal specification, but is instead trying to more clearly map the relationship between, say, an individual human-language requirement you have of your application and the code which implements that requirement.
i guess you can build a cli toolchain for it, but as a technique it's a bit early to crystallize into a product imo. i fully expect overcoding to be a standard technique in a few years; it's the only way i've been able to keep up with AI-coded files longer than 1500 lines
I've had good success getting LLMs to write complicated stuff in haskell, because at the end of the day I am less worried about a few errant LLM lines of code passing both the type checking and the test suite and causing damage.
It is both amazing and I guess also not surprising that most vibe coding is focused on python and javascript, where my experience has been that the models need so much oversight and handholding that it makes them a simple liability.
The ideal programming language is one where a program is nothing but a set of concise, extremely precise, yet composable specifications that the _compiler_ turns into efficient machine code. I don't think English is that programming language.
Is it a code generator tool from specs? Ugh. Why not push for the development of the protocol itself then?
LLMs work on both translation steps. But you end up with a healthy amount of tests.
I tagged each test with the id of the spec, so I get spec-to-test coverage as well.
Besides the standard code coverage given by the tests.
For now, it's only about test coverage of the code, but spec coverage is coming too.
Good enough that I don't review it.
Granted, it is a personal project that I care about only to the point that I want it to work. There is no money on the line. Nothing professional.
I believe that part of the secret is that I force CC to run the whole test suite after it changes ANY file. Using hooks.
It makes iteration slower because it kinda forces it to go from green to green. Or better, from red to less red (since we start in red).
But overall I am definitely happy with the results.
Again, personal projects. Not really professional code.
I force the code to be almost 100% dependency-injectable.
That simplifies writing tests and getting the coverage a lot. And I see the LLM handling it very, very well.
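For reference, a "run the test suite after any file change" hook like the one described can be expressed in Claude Code's settings file. The hook shape below follows Claude Code's documented `PostToolUse` format; the `pytest -q` command is an assumption about the project's test runner.

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "pytest -q" }
        ]
      }
    ]
  }
}
```

A non-zero exit from the command surfaces back to the agent, which is what forces the green-to-green (or red-to-less-red) iteration loop.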
There you have it: Code laundering as a service. I guess we have to avoid Kotlin, too.
You write a markdown spec.
The script takes it and feeds it to an LLM API.
The API generates code.
Okay? Where is this "next-generation programming language" they talk about?
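Under that reading, the whole pipeline fits in a handful of lines. This is a deliberately minimal sketch, not the tool's actual implementation; `llm` is a stand-in for whatever chat-completion call you use.

```python
from pathlib import Path

def build(spec_path: str, out_path: str, llm) -> str:
    """Read a markdown spec, ask the model for code, write it out.

    llm is any callable taking a prompt string and returning a string;
    in practice it would wrap an LLM API client.
    """
    spec = Path(spec_path).read_text()
    code = llm("Generate a program implementing this spec:\n\n" + spec)
    Path(out_path).write_text(code)
    return code
```

Everything beyond this (caching, diffing against previous generations, test runs) is workflow around the same single call.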
https://www.loglan.org/
Or Lojban?
https://mw.lojban.org/
Of course an expert would throw it out and design/write it properly so they know it works.
However, no case is made for more complicated, multi-file changes or architecture work.
Also, the examples feel forced: if you use external libraries, you don't have to write your own "Decode RFC 2047" logic.
This feels wrong, as the spec doesn't consistently generate the same output.
But upon reflection, "source of truth" already refers to knowledge and intent, not machine code.
Actually, computers, being machines, do equate machine code and source of truth.
> - Encoding auto-detection and normalization for beautifulsoup4
I was kinda expecting to see the name "chardet" pop up here. :-)
So, for example, if you refactor a program, the LLM may change anything, but it must keep the logic of the program intact.
...and I obviously asked Gemini about it and it replied:
"A language optimized exclusively for Large Language Model (LLM) efficiency would prioritize Token Density, Context Window Management, and Architectural Alignment. It would not be binary, as standard LLM architectures (Transformers) process discrete tokens from a predefined vocabulary, not raw bits."
Example of it:
"[1] When computing LOC, we strip blank lines and break long lines into many"
I imagine this is before and after- not just after.
As in, they aren't just making lines long and removing whitespace (something models love to do when you ask them to remove lines of code).
What's old is new.
...which seems to suggest that the authors themselves don't dogfood their own software. Please tell me that Codespeak was written entirely with Codespeak!
Instead of that json, which is so last year, why not use an agent to create an MD file to setup another agent, that will compile another MD file and feed it to the third agent, that... It is turtles, I mean agents, all the way down!
Instant tab close!
https://codespeak.dev/blog/greenfield-project-tutorial-20260...
https://news.ycombinator.com/item?id=47284030
I'm hoping for a framework that expands upon Behavior Driven Development (BDD) or a similar project-management concept. Here's a promising example that is ripe for an Agentic AI implementation, https://behave.readthedocs.io/en/stable/philosophy/#the-gher...
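For those unfamiliar, a Gherkin feature file is already very close to the kind of spec being discussed here: structured natural language with a fixed Given/When/Then skeleton that tools like behave map onto executable steps. A minimal, made-up example:

```gherkin
Feature: Password reset
  Scenario: User requests a reset link
    Given a registered user with email "a@example.com"
    When they request a password reset
    Then a reset link is emailed to "a@example.com"
```

An agentic twist on BDD would have the LLM generate both the step definitions and the implementation from scenarios like this, with the scenarios themselves staying human-owned.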
Does this make it a 6th generation language?
`codespeak build` — takes the spec and turns it into code via LLM, like a non-deterministic compiler.
`codespeak takeover` — reads a file and creates a spec from it.
You can progressively opt in ("mixed mode") so it only touches files you allow it to (and makes new ones if needed).
Pros:
- Formalised version of the "agentic engineering" many are already doing, but might actually get people to store their specs and decisions in a concise way that seems more sane than committing your entire meandering chat session.
- Encouraging people to review spec and code side-by-side at a file level seems reasonable. Could even build an IDE/plugin around that concept to auto-load/navigate the spec and code side-by-side like their examples: https://codespeak.dev/shrink-factor/markitdown-eml. If tokens per second for popular models continues to improve, could even update the spec by hand and see the code regenerate live on the fly, perhaps via `codespeak watch`.
- Reduces the code you have to write by 5-10x. Largely by convincing you not to write it any more. Our graphics cards write the code for us in this timeline and many people are even happy about it.
- As models improve, could optionally re-run `build` against the same original spec. (Why do that if the output already produces the intended result and the test suite still passes? Presumably for simpler code. Or faster output. Or lower memory use. Or simply _different_ bugs.)
- Moves programming back toward structured thinking backed by a committed artifact and a solid two-word command you can run, instead of actively having conversations with far away GPUs like that's normal now.
- Could theoretically swap out the build target language if you grow to trust the build process to be your babelfish/specfish. Kind of like Haxe with Markdown.
Cons:
- Seems to be gated by their login, can't bring your own model?
- Suspect the labs can all clone this concept very easily. `claude build` and `claude spec`?
The idea of a non-deterministic 'build' command had me cringing at first. But formalising a process many are using anyway that currently feels pretty sloppy perhaps isn't so terrible.
If nothing else, writing `build` is a lot quicker and maintains a whisker of self-respect. At least compared to typing, "please take this spec and adapt the Python accordingly" followed 2 minutes later by, "I updated the spec to deal with the edge-case you missed, try again but don't miss anything this time".
Programming is, in the end, math: the model is defined and, when done correctly, follows common laws.
I know dark mode is really popular with the youngens but I regularly have to reach for reader mode for dark web pages, or else I simply cannot stand reading the contents.
Unfortunately, this site does not have an obvious way of reading it black-on-white, short of looking at the HTML source (CTRL+U), which - in fact - I sometimes do.
Sometimes a site will include a button or other UI element to choose a light theme but I find it odd that so many sites which are presumed to be designed by technically competent people, completely ignore accessibility concerns.
Definitely in the minority on this one as dark mode is really popular these days.
Really hard to describe how it is literally physically painful for my eyes. Very strange.
The site does describe it as a "programming language," which feels like a novel use of the term to me. The borders around a term like "programming language" are inherently fuzzy, but something like "code generation tool" better describes CodeSpeak IMHO.
It would actually end up being a lot easier to maintain than a bunch of undocumented spaghetti.
I really believe the struggle is knowledge and communication of ideas, not the coding part (which is fairly easy IMO).
Also, English is really too verbose and imprecise for coding, so we developed a programming language you can use instead.
Now, this gives me a business idea: are you tired of using CodeSpeak? Just explain your idea to our product in English and we'll generate CodeSpeak for you.
In the past, maths was expressed using natural language; mathematical notation exists because natural language isn't clear enough.
My gut says Kotlin is great for individual developer experience. But I've never heard or seen credible reports on the Total Cost of Ownership, e.g., hiring Kotlin engineers or swapping them out on a team.
"In order to make machines significantly easier to use, it has been proposed (to try) to design machines that we could instruct in our native tongues. This would, admittedly, make the machines much more complicated, but, it was argued, by letting the machine carry a larger share of the burden, life would become easier for us. It sounds sensible provided you blame the obligation to use a formal symbolism as the source of your difficulties. But is the argument valid? I doubt."
"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."
https://news.ycombinator.com/newsguidelines.html