This is all way too much. If you see a duplicate idempotency key, skip the replay and always return 409. This becomes a client problem. Clients already need to help enforce idempotent contracts; "check for conflict response" is not an onerous imposition.
I've built multiple ecommerce APIs with this approach and they work great. No heroic measures required. You can often satisfy this contract with a unique constraint; if not, a simple presence check in redis. No hashing or worrying about PII.
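To illustrate the unique-constraint approach described above, here's a minimal sketch (table, column, and function names are illustrative, not from any real system):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE orders (
        idempotency_key TEXT PRIMARY KEY,  -- duplicate keys fail atomically
        payload TEXT NOT NULL
    )
""")

def create_order(key: str, payload: str) -> int:
    """Return an HTTP-style status: 201 on first insert, 409 on a duplicate key."""
    try:
        with conn:  # commits on success, rolls back on exception
            conn.execute(
                "INSERT INTO orders (idempotency_key, payload) VALUES (?, ?)",
                (key, payload),
            )
        return 201
    except sqlite3.IntegrityError:
        return 409

print(create_order("abc-123", '{"item": "widget"}'))  # 201
print(create_order("abc-123", '{"item": "widget"}'))  # 409
```

The database's constraint does all the work: there's no separate lookup, no hashing, and no race between "check" and "insert".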
I really like that article, thank you for sharing it again. I wish I had read it a decade ago, or even the first time you submitted it - I had to learn some of these things the hard way.
I agree with you - I don't think Stripe has made the right choices here, and it's unfortunate that it has inspired so many other people to make the same poor choices. I don't agree that their system is as sound as always returning 409s. I think having a short window where you return response bodies is fine, but after that they should still be sending 409s. If no one will ever actually resend a request after 24 hours, how is it not fine to send 409s when they do? They've chosen to implement the more expensive choice and then not back it with the cheapest one.
But that's not idempotent? If I'm a client and I don't know if the original request went through, getting a 409 on any subsequent requests tells me nothing about whether the original request was successful or not.
Retries will only receive 409 if the original request was successful. If the original request failed, the server performs the operation as normal on the second request. It doesn't replay failures.
The whole point of the idempotence mechanism is so you can make a reliable distributed system. If the first try fails, the client doesn't know if it succeeded or not, so the client should try again later ("at-least-once"). The idempotence mechanism just ensures that we don't get duplicates in the case that the first try actually succeeded.
If you replayed failures there wouldn't be any point to the idempotency key.
What if the original request is still being processed when the retry comes in? That doesn't fall into either of your categories: the request isn't successful, but it hasn't failed either.
That’s because everyone here is thinking about payments completely incorrectly. They’re not atomic and your server shouldn’t pretend they are.
You need to store the payment state at each relevant step and process it asynchronously. If requests time out, you check the status of it using the key you store (with the processor) to see if it was even received.
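The "store the state at each step" idea can be sketched as a small state machine; these particular states and transitions are assumptions for illustration, not any specific processor's model:

```python
from enum import Enum

class PaymentState(Enum):
    CREATED = "created"       # row written, nothing sent yet
    SUBMITTED = "submitted"   # request sent to processor, awaiting outcome
    SUCCEEDED = "succeeded"
    FAILED = "failed"
    UNKNOWN = "unknown"       # timeout: must query processor using our stored key

VALID_TRANSITIONS = {
    PaymentState.CREATED: {PaymentState.SUBMITTED},
    PaymentState.SUBMITTED: {PaymentState.SUCCEEDED, PaymentState.FAILED,
                             PaymentState.UNKNOWN},
    PaymentState.UNKNOWN: {PaymentState.SUCCEEDED, PaymentState.FAILED},
}

def transition(current: PaymentState, new: PaymentState) -> PaymentState:
    """Advance the persisted state, refusing illegal jumps."""
    if new not in VALID_TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition {current} -> {new}")
    return new
```

The UNKNOWN state is the important one: a timeout doesn't mean failure, it means you have to go ask the processor what happened.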
It’s not perfect, some processors will 500 while processing the payment (Braintree), so you still need reconciliation on the backend.
Being charitable, I'd say the poster above is saying that in the web architecture you can (should?) shift more of the burden for idempotence to the client.
But, rather than 409, I'd say that you should be using optimistic concurrency control if you adopt this perspective. There should be a resource context for the request, so the client can obtain an ETag and send an If-Match header, and get a 412 response if things are out of sync. That allows them to retry a failed/lost request and safely prevent a double action.
Under a 412, they have to step back and retry a larger loop where they GET some new state and prepare a new action. Just like in DB transaction programming, where your failed commit means you roll back, clean the slate, and start a whole new interrogation of transaction-protected state leading up to your new mutation request.
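A minimal sketch of that precondition check, with a version number standing in for the ETag (all names illustrative):

```python
# In-memory stand-in for a server-side resource with a version/ETag.
resource = {"version": 1, "balance": 100}

def update(if_match_version: int, new_balance: int) -> int:
    """Return 200 on success, 412 if the client's precondition is stale."""
    if if_match_version != resource["version"]:
        return 412  # client must GET fresh state and retry the larger loop
    resource["balance"] = new_balance
    resource["version"] += 1  # every successful write invalidates old ETags
    return 200
```

A retried request carries the same old version, so at most one copy of it can ever apply; the rest get 412 and fall back to the outer read-prepare-write loop.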
The client is already participating in a transaction in a distributed system. There is no way to change that reality. Suggestions about masking this only make the composite system unsound and will not improve net service reliability.
That doesn't mean that idempotency keys have to be used. You can certainly hash message content if that is documented behavior. That probably only makes sense when there is already some logical session or transaction identifier that makes dedupe semantics clear.
The system you propose might be sound and might be necessary in some systems, but I can't think of what they might be that wouldn't be better served by the simpler solution that is already widely used for this purpose.
Your database should not allow both commits to happen - one should get rolled back.
If it processed 99% of the request and the final bookkeeping failed because of a duplicate, that's still a failed request.
Arguably this should be the primary way you check for idempotent requests - you shouldn't have a separate check for existence, you should have the insert/update fail atomically.
This is the same thing you see on filesystems for TOCTOU security holes - the right way is to atomically access and modify once, and you only know the request was already processed because that fails.
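The "atomically access and modify once" point can be shown with a single check-and-set under a lock, in contrast to a separate existence check followed by an insert (the TOCTOU pattern); this in-memory version stands in for a database unique constraint:

```python
import threading

_seen: set[str] = set()
_lock = threading.Lock()

def claim(key: str) -> bool:
    """Return True if this caller won the key, False if it was already processed."""
    with _lock:
        if key in _seen:
            return False   # duplicate: the atomic claim fails
        _seen.add(key)     # check and modify happen under one lock
        return True
```

The only way a caller learns the request was already processed is that the claim itself fails, which is exactly the property that closes the race.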
Not in the payments world. If you’re 99% done but only the bookkeeping failed, then it’s likely that money is changing hands and you need to deal with that fact. Payments are not an atomic infrastructure and you cannot magic that into reality.
Payments are multistep but each of the steps needs to be atomic. The "create payment" operation must be transactional and the communication channel between you and the processor must be idempotent so you don't inadvertently create multiple payments.
The fact that payments have a settlement process is not relevant to this discussion.
That's usually solved with traditional database transactions.
Even if you have a complex long-running multistep orchestration problem, you can break it down into simpler transactions. Eg you could start with a "lock the resources" txn.
But 99% of these conversations around idempotence are simple POST operations like "create order" that regular old database concurrency management handles just fine.
This is just normal concurrent programming? If two requests come in for the same idempotency key/customer reference id, only one will succeed. Use standard database transaction isolation.
So one will complete with 200, one will complete with 409. It doesn't matter which.
That said, there's something odd about the way you phrased this question. If the original request hasn't gotten a response yet, why is it sending a retry? What you're asking is more general: What happens when two conflicting requests come in? This is something we've been solving with RDBMSes since the 1970s.
Not necessarily - there are different transaction isolation and conflict resolution methods provided by every database built for this purpose. You just have to ensure that only one request actually commits to the database, and that one sends a success response while the other sends a 409. The database or another lock provider can either help enforce serialization up-front - or the app can use optimistic locks based on data in the request that will only block if there is actually a conflict, and this won't delay the first transaction at all.
Solving these kinds of issues are exactly the purposes of idempotency keys and database transactions and using them in the intended way is really the only sound way to build a distributed system. Making things more complicated to "improve DevX" is just going to make them unsound. That is what Stripe chose to do. Their 24-hour replay idea is fine but why not send 409s after that rather than accept those transactions? If "that will never happen" then the 409s will never happen. It would have cost approximately nothing (if designed that way upfront) and inconvenienced their clients not at all.
By definition we have to serialize something somewhere so we can decide which request is the success and which request is the duplicate. There's nothing special about the case of retries, this is standard concurrent programming 101. Two conflicting requests come in, which one wins?
You absolutely must wait for one request to finish before any other request can return a 409. 409 is a signal to the client that they can stop retrying, the job is done. If some request returns 409 early and the "original" request fails, you will not get further retries and the message will be lost.
Stripe's approach requires serialization as well. Only one request can succeed. If you send multiple conflicting requests in simultaneously, some of those have to block.
The good news is that we have been solving this problem for decades and we have incredibly well refined tools - database transactions and isolation levels - for solving this problem.
In my opinion, the idea of idempotency is to accept both requests, but only one is actioned (and the requester is none the wiser about which). Otherwise, you're just recreating database transactions - something that doesn't need to be named idempotency.
And you haven't considered multiple servers in your scenario - what if two requests meant to be idempotent with each other arrived at different servers?
How would the requester get notified if it doesn’t know which request succeeded? Is it listening for events?
And at the risk of repeating the above commenter, you solve the multiple-server case by serializing somewhere, because you ultimately need a lock on something. You can also perform the operation in both places and then reconcile the state later, but that’s a lot more complex.
The requester doesn't want to know which request succeeded, because they are duplicates and one is a retry!
When you are using TCP, and you send the same data twice because of a delayed ack, you likewise don't care if the ACK is for the first time or the second time you sent the data. You just know the other side got the data, and that's all you care about.
> That said, there's something odd about the way you phrased this question. If the original request hasn't gotten a response yet, why is it sending a retry?
Because it hasn't gotten a response yet. That's got to be far and away the most common reason any request gets retried in any context.
> If the original request hasn't gotten a response yet, why is it sending a retry?
Um, because connections over the Internet aren't 100% always on? Because packets can get lost? Because computers sometimes have to reboot?
You're assuming that the client will always receive whatever response your server finally sends, and that the client will wait indefinitely to receive a response. Neither of those things are true. So the client can be in a state where it sends a retry because it got no response and doesn't know why. And that means a retry request could come in while the first one is still being resolved--because the client had a timeout or it rebooted or something else happened that made it lose the connection state it previously had. That's the case I'm asking about.
I'm not assuming anything. Let me try to reframe this for you.
The case of "client sends a retry with the same idempotency key" generalizes to "multiple requests come in for the same idempotency key". These can come in spread out over time (like a traditional loop), or they could come in at once. The solution is the same either way.
The problem of "how do we deal with multiple conflicting requests coming in at once" is something we have been dealing with for decades. We have databases with transactions and isolation levels. If I said in an interview "make an endpoint that inserts a value in a database and returns an error if the value is a duplicate", any competent backend web developer should be able write it without Claude's help. Concurrency is part of our life.
Whether you want to return 409 or replay the success is irrelevant to this question. You must serialize the idempotent operation on the server, because you can have multiple requests coming in simultaneously. If you put the operation in a database transaction with an appropriate isolation level, you are most of the way there.
"Retries will only receive 409 if the original request was successful. If the original request failed, the server performs the operation as normal on the second request. It doesn't replay failures."
I understand all that just fine; you don't need to keep trying to "reframe" it. But what you said that I just quoted above assumes, implicitly, that if you get a second request with the same idempotency key, the original request has either failed or succeeded--because you don't even address the case where neither of those things are true. I'm asking you to address that case.
If your answer is "that will never happen", I disagree, and I explained why in response to your question about why the client would send a retry if it hasn't received a response to the original request. You could answer, I guess, that you still think that would never happen--and I would still disagree. But at least that would be an answer. So far all you've done is "reframe" something that I already understand and wasn't asking about.
The other cases are that the original request is still in flight or never occurred. The former case was explained by the prior comment: one request is processed, the other is returned a 409. The system cares little for which is which, and neither should the caller. The latter case is handled by clients retrying until a request is received, at which point one of the other states takes over.
Whether or not a prior request exists in the system in processed or unprocessed state should not matter in a properly implemented idempotent system, the whole point is that one and only one is processed, and all replicas indicate that they are such.
What you do inside of your boundary to implement that idempotent contract need not be part of the contract and the decision of what primitives to use (locking, content-based addressing etc) are mainly just a question of implementation constraints.
> The other cases are the original request is still in flight or never occurred.
I'm not sure what you mean by "in flight". The case I'm asking about is where the original request was received by the server and is being processed--and then a second request comes in with the same idempotency key. The original request has not succeeded, and has not failed--it's still in process. What response does the second request get? I do not see an answer to that question anywhere in this thread.
The answer is "the same thing as every other concurrency conflict between two requests". In modern backend development this is most commonly handled by the database, and the practical result is that (from the client's perspective) the requests will block, and only one will "actually succeed".
Here's a typical example, assuming serializable isolation in a database that uses optimistic concurrency.
* Two simultaneous requests come in to create a payment.
* The requests provide an idempotency key that is expected to be unique (possibly scoped to a tenant).
* The first request starts a transaction and starts processing, everything looks good - no dups.
* The second request starts a transaction and starts processing, everything looks good - no dups.
* The first one commits and returns success.
* The second tries to commit, but a conflict is detected (the first txn committed first). Typically this causes the second transaction to retry.
* On retry, the second transaction detects the duplicate.
The only remaining question is what happens when the second transaction fails. The Stripe model is "look up the original response and hand that back to the client". An equally valid and much easier to implement solution is "return a response that tells the client that there was a conflict".
Both solutions offer "create payment" as an idempotent operation.
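As a toy simulation of the walkthrough above, with a lock standing in for the database's serialization machinery and all names illustrative, exactly one of N concurrent requests wins:

```python
import threading

_lock = threading.Lock()
_committed: set[str] = set()
results: list[int] = []

def create_payment(key: str) -> int:
    """First committer for a key gets 200; every later request gets 409."""
    with _lock:                       # the database would serialize commits
        if key in _committed:
            return 409                # conflict detected, mapped to HTTP 409
        _committed.add(key)           # this "transaction" wins the commit
        return 200

threads = [
    threading.Thread(target=lambda: results.append(create_payment("pay-1")))
    for _ in range(10)
]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert results.count(200) == 1 and results.count(409) == 9
```

Ten simultaneous requests, one success, nine conflicts, regardless of scheduling order - which is the whole idempotency contract.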
You're asking questions that broadly summarize as: "what if we had an idempotency key [which must work concurrently to be useful], but it didn't work concurrently?"
Idempotency keys are themselves the solution you're looking for. If they don't work concurrently, they aren't idempotency keys. Your response in races or duplicates doesn't inherently matter in that sense, pick whatever semantics make sense for your system.
> You're asking questions that broadly summarize as
No, I'm asking one question, which doesn't seem to be summarized by your summary.
The situation is that your server has received two requests with the same idempotency key. For the first request, one of three things could be true: it could have succeeded, it could have failed, or it could still be in process.
The original post I responded to said what response the second request gets if the first request succeeded and if it failed. But it didn't say what response the second request gets if the first request is still in process on the server--so it hasn't succeeded and it hasn't failed. I do not see an answer to that anywhere in this thread.
If the first is still in progress and a second arrives, you can wait for it to finish (and then return whatever would be useful, e.g. you could replay the first request's response), or fail the second (409, 423, 429, there are a number of codes that could apply) to tell the caller to retry later.
So yeah, you can do basically anything that isn't inconsistent. Success, fail, delay, don't respond until timeout, all are valid as long as you don't double-apply. Most concurrent systems are like this in some way, because all successes can become errors, and all responses might never arrive. It has nothing really to do with idempotency.
The lock would normally make the second request wait, aka not return a response, until the first one is done. Then it sees it's a duplicate and returns that. Or it times out and returns an error. Then the client hopefully have some exponential back off strategy, so the third attempt doesn't suffer the same fate.
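A sketch of that per-key lock behavior (in-memory; the names, the 503 status for "still busy", and the timeout are illustrative choices): the second caller blocks until the first finishes, then sees the duplicate; if its wait times out, it gets an error and should back off:

```python
import threading

_key_locks: dict[str, threading.Lock] = {}
_done: set[str] = set()
_registry_lock = threading.Lock()

def handle(key: str, timeout: float = 5.0) -> int:
    # One lock per idempotency key; the registry itself needs its own lock.
    with _registry_lock:
        lock = _key_locks.setdefault(key, threading.Lock())
    if not lock.acquire(timeout=timeout):
        return 503            # still processing: tell the client to back off
    try:
        if key in _done:
            return 409        # first request finished; this is a duplicate
        # ... perform the actual operation here ...
        _done.add(key)
        return 200
    finally:
        lock.release()
```

A retry that arrives mid-processing simply waits on the lock, so it never observes the "neither succeeded nor failed" state from the outside.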
To be honest, I liked your original response about returning a 409 - it's not something I'd done before and I like how it keeps things simpler.
But your follow-up responses here are making me rethink. Now you have to have all these special cases where the original request is still in process. I think your assertion of "99% are simple POST operations" is bullshit. For the times where idempotency is hard and really matters, oftentimes you're calling a third-party API, like a payment processing API.
I would think a better approach would be to always return a 409 on a subsequent request, regardless of whether it passed or failed, and then have a separate standard API that lets you get the result of any request by its idempotency key.
Idempotence already requires some client thinking if you want to conform to HTTP specs.
I.e. idempotent DELETE with proper protocol behavior requires that one request see the 200 OK or 204 No Content and the other sees 404 Not Found, because the delete has already happened. It would be misleading to say 200 OK to both, because that answer means the resource was there when the request arrived.
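That DELETE behavior is easy to sketch; the resource id is illustrative. The first delete returns 204, a repeat returns 404, yet the server-side state is identical either way, which is what "idempotent" means at the HTTP level:

```python
_resources = {"invoice-7"}  # in-memory stand-in for the resource store

def delete(resource_id: str) -> int:
    if resource_id in _resources:
        _resources.remove(resource_id)
        return 204   # resource existed and is now gone
    return 404       # already gone; deleting again changes nothing
```

The status codes differ between calls, but the effect on state does not, and that is the contract the spec cares about.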
Honestly, the whole HTTP resource model has a different conceptual backing for state management than the independently developed "idempotence" concepts in distributed systems. Those non-HTTP concepts came from more message-based rather than resource-based architectural assumptions.
The cleanest mapping in the spirit of HTTP would be that you do multiple round trips. A POST creates a new idempotence context, a bit like "start a transaction". The new URI is the key for coordinating state change and allowing restart/recovery.
As I remember it, the idea of idempotence keys in headers really came from the SOAP RPC mindset. It's kind of funny to see it persisting in some hybrid SOAP + REST mental model.
> The cleanest mapping in the spirit of HTTP would be that you do multiple round trips. A POST creates a new idempotence context, a bit like "start a transaction". The new URI is the key for coordinating state change and allowing restart/recovery.
I think that gave me "Enterprise Java Beans PTSD". I.e. an over-engineered solution that adds complexity for both the client and server in the name of some sort of "protocol purity".
People bolted on idempotent semantics onto HTTP because it wasn't provided natively by the protocol, so I don't think it makes sense to go through some hoop-jumping gymnastics for the sake of conforming to a spec that doesn't describe the necessary semantics in the first place.
> special cases where the original request is still in process
This isn't a special case, and it's the same problem if you want to replay the original response on conflict. If the original request isn't complete, what are you going to replay?
> If the original request isn't complete, what are you going to replay?
Who says you have to replay? If you get a second request with the same idempotency key, and the original request is still in process, why not just send the client a response that says so?
That is my point. When you are doing "normal" idempotency where you do the appropriate locking and keep around a table with ongoing request status and the result that you can return on a subsequent duplicate request, you handle all these cases. But in your "409" version of it, you haven't really saved much complexity on the server, because you still need to keep around all that info unless you're willing to just return a 409 when a second request arrives while the first is in progress.
Your comment contributes nothing to the discourse. It's FUD.
The pattern I describe was the dominant design pattern for financial transaction processing systems before Stripe. Stripe's API makes life for the clients slightly easier at the expense of making life for servers more complicated, but the two approaches are equivalent in function.
The topic is about idempotency though, and what you are describing is not idempotency. You are describing something different, and arguing “but it accomplishes a similar thing.” It appears you have come to the same conclusion as the article that building idempotency isn’t trivial.
It absolutely is idempotent behavior of the system. The goal is "make an idempotent payment", and this approach ("return an error for duplicates") was standard for financial APIs before Stripe. It still dominates for ecommerce APIs.
It isn't complicated, though I can see how if your entire experience with financial APIs is Stripe, you might not be aware of how simple it is. Because Stripe's approach, while mildly more convenient for clients, is a PITA to implement properly.
The client is part of a distributed transaction. It can't be oblivious to this. Clear semantics and accurate adherence to them is the only answer that doesn't make the overall system unsound. Client bugs are expected and so the simplest semantics that ensure data integrity and accurate responses are the best way to help them identify and fix their bugs.
This is because there are countless tutorials and slop on the internet that says, no, instructs, you to handle idempotency on the client instead of where it belongs. Server-side.
I really like this approach, and am going to keep it in my back pocket.
However, it's good for e-commerce where there are a subset of "important" operations, but I'd argue the idempotency key is better for a financial API like Stripe where the majority of your operations need these semantics.
You also run into a problem if PUT/PATCH needs to be exactly-once instead of at least once. Not as common, but again, might be something you run into with a financial API.
There's nothing special about any particular HTTP method; it is the inherent nature of distributed systems that you can not have exactly-once semantics with a single HTTP request. To create exactly-once semantics, you need a client which retries until success and a server which prevents duplicates.
The only difference between the approach I'm describing and Stripe's approach is the detail of how the client knows that it's done. "My" mechanism (notify the client that a request is a duplicate) was dominant in financial transaction processing systems before Stripe; I used probably a half dozen of them back in the day.
Stripe came along and from the beginning decided "we want to compete based on making a friendly API". They succeeded, and if you ever look at Paypal's original API, it's easy to see why. It's true that repeating successful API responses makes life slightly easier for clients; they never need to check for a 409 (or whatever) response. It comes at a cost of making life quite a bit more complex on servers. Personally I don't think the tradeoff is worth it, but YMMV. If your single competitive advantage is "easy API" maybe it makes sense. If you're normal B2B, almost certainly not.
And a good way to convey this contract is to force the clients to hash-chain the calls, so it's clearer what they should do (and the client can even detect on its own side that a send will fail).
If idempotent key was seen then send back response.
Clients intention is outside the scope. If contract says "idempotency on key" the idempotent response on key. If contract says "idempotent on body hash" then response on body hash (which might or might not include extra data).
APIs are contracts. Not the pinky promise of "I'll do my best guess"
IMHO it's more: fix problems, or at least mitigate them, regardless whose problem it is.
I've been in this situation, a clientside bug meant that different requests arrived with the same idempotency key.
In my case, updating the client would have taken weeks, in the best case scenario. Updating the backend to check for a matching request body would have taken minutes, maybe hours.
It took me a surprising amount of arguing to convince people that, even if it was a clientside bug, we couldn't let users suffer for weeks in name of "correctness".
Well.. it was ~6 years and ~10 billion payments ago, the clients have been fixed but the "hack" is still there, it has caused no harm as far as I can tell. Worst case scenario it's useless, best case scenario it prevents regressions.
The issue with things that clients must not do is that they might still do them, and users don't care whose fault it is. It's important to have auxiliary mechanisms to mitigate these.
> In my case, updating the client would have taken weeks, in the best case scenario. Updating the backend to check for a matching request body would have taken minutes, maybe hours.
Then at least admit you’re just hacking in quick fixes, creating technical debt, and not fixing the actual problem.
I agree with your point that business interest is most important, I disagree that it’s the technically most appropriate solution.
The whole article is proclaiming that this is a technical problem about idempotency being hard, while it’s not. The premise that client-side bugs must be resolved on the backend as the correct solution is incorrect.
Eh, idk, I wouldn't classify these fixes as hacks nor as technical debt. It's labels that only work from a partial perspective. IMHO a solution that expects perfect compliance is not really complete, it's not good enough to put all the burden on the client, idempotency keys are part of the solution, but not the solution. So, in this sense I would say it is a technical problem.
That just leads to bigger fools. I don't just mean that as clever wordplay, but as a serious point. No matter how sloppy you make your API someone else will use it even more sloppily. Now you've got an enormous sloppy surface you can't properly contain or maintain, and people still transgress its boundaries even so.
The robustness principle has its times and places, but the general consensus that it should be applied everywhere to everything was a big mistake. The default should be that you are very rigid and precise and only apply the robustness principle in those times and places it applies, and I'm perfectly comfortable waiting to deploy something precise and find out that this was one of them. The vast majority of APIs is not the time and place for the robustness principle. It's the time and place for careful precision on exactly what is provided, and detailed and descriptive error messages, logging, and metrics for when the boundaries are transgressed.
Sometimes "best guess" is the contract. Obviously many applications can't tolerate that, but surprisingly many can.
The user just needs to know what the trade-off is. And "best guess" can be hard to characterize, so you need to be extremely careful. But sometimes it's a big win for a low price.
APIs always have unspecified edge cases (for any sufficiently complex API). In those cases, the API usually does the author's best guess of the proper behavior.
> APIs are contracts. Not the pinky promise of "I'll do my best guess"
You have never had to work with PHP backends, have you?
JSON in PHP is a flustercluck. Undefined, null, "" or "null", that is always the question.
If you use a typed Go/Rust client and schemas, you usually end up with "look ahead schemas" that try to detect the actual types behind the scenes, either with custom marshallers or with some v1/v2/v3 etc schema structs.
It's so painful to deal with duck-typed languages ... that's something I wouldn't wish on anyone.
This is an excellent article, I’ve seen almost all of the issues it calls out in production for various APIs. I’ll be saving this to share with my team.
I’ve seen two separate engineers implement a “generic idempotent operation” library which used separate transactions to store the idempotency details without realizing the issues it had. That was in an organization of less than 100 engineers less than 5 years apart.
I once wrote about inherent, irreducible complexity and how we try to deal with it. The draft has sections on how complexity can be hidden, spread out, localized, passed off, or recreated from scratch. Unfortunately, people are now using LLMs to pile complexity on the simplest of tasks, and my essay isn't really worth finishing.
> people are now using LLMs to pile complexity on the simplest of tasks, and my essay isn't really worth finishing.
Isn't the opposite true? The more people are messing with complexity, the more they could benefit from a model of complexity. And if they generate complexity with external tools, then maybe a theoretical take on it will be the only way for them to learn? I mean, we learn these things through struggle and pain, but if all of that becomes an LLM problem, then you just stop learning? But at some point complexity will strike back; at some point there will be so much of it that LLMs will be no help.
OTOH, if LLMs still win, and the skills of managing complexity are lost in future generations, if we are at the peak of our skill at dealing with complexity, then shouldn't we try our best to imprint our hard-won lessons into history? Maybe for some later generation the tide will turn and they will write textbooks on complexity, and with your article you'll get your portrait in a textbook, and each bored pupil will decorate it with mustaches? You have a chance to immortalize yourself. xD
Or maybe you can become someone like Ramanujan for math? Someone who honed obsolete math skills to an unimaginable level? Maybe a time will come when students pore over Ramanujan's works, because his skills became useful again, and try to find out how Ramanujan thought?
...
Sorry, I just couldn't resist. Seriously, it is hard to predict with LLMs, maybe we will not need intellect or any intellectual skills at all after AGI.
How would you even know the second request is different? Hash every request? That's a waste of resources. The only sensible policy is to trust the key.
No, the sensible policy is to have the code operate idempotently for every request with an idempotent method. This is a design decision, not something you slap on top afterward with a special key.
I skimmed the article earlier, but going back and looking, it doesn't appear that the article mentions the spec at all, or links to it. In fact, the first paragraph is literally "People talk about..."
Some help for others to understand the history of this (which apparently Stripe, Paypal, Dwolla, and others use): https://github.com/mdn/content/issues/41497 There are links to the RFC and prior art.
That aside, my first impulse is to say that the server should specify that the key includes a hash of the important parts of the request, checked on receipt, so that only the key itself need be stored. However, FF's implementation apparently(?) adds the header automatically to POST and PATCH if it's not already present, which means that it's not able to comply with such a decision, and the RFC (currently expired) recommends using a UUID, so.
I'm guessing the original motivation of this is "Browser JS might not be able to send a PUT, or proxies may not handle a PUT correctly".
A couple of years ago, we experienced a silent data corruption incident in our checkout process due to this specific edge case.
A user would generate the idempotency key on loading the front-end application, add item(s) to their cart, and submit their order, which would time out. The user would then navigate back to the front-end application, add another item, and submit the order again. Since the user was submitting an identical idempotency key for the same transaction, our payment gateway would look up the request/transaction by idempotency key and see in its cache that there had been a successful (200 OK) response to the previous request. The user now believed they had purchased three items; however, our system only charged for and shipped two of them.
Consequently, the lesson we take away from the aforementioned incident is that idempotency keys are really composite keys (Client_Provided_Key + Hash(Request_Payload)).
If a system receives an identical idempotency key (but with a different request payload) the idempotency key should be rejected with a 409 Conflict response with a message similar to "Idempotency key already used with different request payload". Alternatively, some teams argue it should be returned with a 400 Bad Request response. Systems should never return a failed cache response or replace old entries of data.
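A minimal sketch of that composite-key policy, assuming an in-memory dict as the store (a real system would use a database or Redis) and hypothetical handler names:

```python
import hashlib
import json

# In-memory idempotency store: key -> (payload_hash, cached_response)
_store = {}

def _payload_hash(payload: dict) -> str:
    # Canonicalize the payload so dict key order doesn't change the hash
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def handle(idempotency_key: str, payload: dict):
    h = _payload_hash(payload)
    if idempotency_key in _store:
        stored_hash, cached = _store[idempotency_key]
        if stored_hash != h:
            # Same key, different payload: reject rather than replay
            return 409, {"error": "Idempotency key already used with different request payload"}
        return 200, cached  # true retry: replay the cached response
    response = {"order_id": idempotency_key, "items": payload["items"]}  # do the real work
    _store[idempotency_key] = (h, response)
    return 201, response
```

Whether the mismatch is a 409 or a 400 is the debate above; the essential part is that the payload hash is stored alongside the key.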
This article explains how to unblock that flow. The record for the idempotency key is not finalized when the first request completes; rather, it must already exist, in a pending state, while the request is in progress.
To accomplish this safely, follow these steps:
1. Acquire a distributed lock on the idempotent key.
2. Check for the existence of a key in your persistent store.
3. If an existing key is found, verify the hash of the payload against the hash for the payload type. If the hashes do not match, return a 409 error.
4. If the hashes match, look up the status of the payload. If the status shows COMPLETED in the persistent store, return the cached response. If the status shows PENDING in the persistent store, return a 429 Too Many Requests to the user or hold the connection open until the request reaches a COMPLETED state.
5. After processing the request, save the response to the persistent store before releasing the lock.
While this may look simple on paper, creating a distributed locking state machine for a single API endpoint is typically how developers have their first aha moments with idempotency. Becoming idempotent is often an enormous architectural shift and not just a middleware header check.
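The five steps above can be sketched roughly as follows, with a process-local `threading.Lock` standing in for the distributed lock and a dict standing in for the persistent store; names and status values are illustrative:

```python
import hashlib
import threading

store = {}                 # persistent store stand-in: key -> {"status", "hash", "response"}
lock = threading.Lock()    # stand-in for a per-key distributed lock

def handle(key: str, payload: bytes, process):
    h = hashlib.sha256(payload).hexdigest()
    with lock:                                   # step 1: acquire the lock
        entry = store.get(key)                   # step 2: look up the key
        if entry is not None:
            if entry["hash"] != h:               # step 3: payload mismatch -> 409
                return 409, "conflict"
            if entry["status"] == "COMPLETED":   # step 4: replay the cached response
                return 200, entry["response"]
            return 429, "still processing"       # step 4: first attempt still PENDING
        store[key] = {"status": "PENDING", "hash": h, "response": None}
    result = process(payload)                    # do the real work outside the lock
    with lock:                                   # step 5: record the result before releasing
        store[key] = {"status": "COMPLETED", "hash": h, "response": result}
    return 201, result
```

A real implementation would scope the lock per key and make the PENDING write and the lock acquisition atomic in the distributed store; the single process-wide lock here only illustrates the state machine.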
This is incredibly poor advice. The bug was clearly in the client code, which did not understand the purpose or usage of the idempotency key. The API itself probably had a design flaw as well - it sounds like it needed a session or transaction id to serve the purpose you mentioned. That is not what an idempotency key is for.
An API should follow its documented behavior. This is both a specification and a contract. If the docs for the API say that a duplicate idempotency key will receive a 409, and do not mention message hashes, then the API needs to follow that spec, because the client may specifically depend on it. For example, if the order was processed and the cart is resent with the same key but an additional item, the client does not want a second order duplicating the items in the first one. They want an error.
If the docs do not accurately describe the behavior of the idempotency key, the client should find another provider.
> While this may look simple on paper, creating a distributed locking state machine for a single API endpoint is typically how developers have their first aha moments with idempotency. Becoming idempotent is often an enormous architectural shift and not just a middleware header check.
Yes, when you expand the scope of your API implementation beyond its contract you take on a virtually unbounded amount of edge cases that not only must you solve, but that your customers must guess at how you are solving.
I'm guessing that your API required the idempotency key. I think that could be risky, because it means developers will simply provide a value for it without understanding its purpose or thinking through the implications. You really only want them using it if they understand the problem it is solving.
Hashing message content could be an alternative behavior that it makes sense to support by default for apps that don't supply an idempotency key. As long as you document it.
I wholeheartedly agree. Luckily it is a lot easier to reliably run a distributed KV store that only needs to lock the idempotency key over relatively short times rather than a whole database with millions of records or make arbitrary systems idempotent.
> 5. After processing the request, save the response to the persistent store before releasing the lock.
Save only if the operation succeeds. It's meaningless to cache a failure; subsequent retries would just get the failure replayed from the cache.
Frankly, you guys are overengineering the whole thing. We use the concept only for network outages, i.e., it is only on timeout that we want to guard against fulfilling a duplicate request for the same operation.
Sounds like an interesting case of incorrectly trusting user input.
The idempotency key should have been viewed as the untrustworthy hint it really is. Then you can decide whether an untrustworthy hint is what you really need. At that point I'd hope someone on the team says "This is ordering - I think we need something trustworthy"
> Consequently, the lesson we take away from the aforementioned incident is idempotency keys are really composite keys (Client_Provided_Key + Hash(Request_Payload)).
Did the postmortem result in any other (wider) changes/actions, out of curiosity?
No idea if this was anything like what happened your case, and probably going off on a tangent, but I've seen so many cases where teams are split into backend and frontend, and they stop thinking about the product as a single distributed system (or, it exacerbates that lack of that thinking from before). Frontend often suggest "Oh we can just create an idempotency key" and any concerns from backend are dismissed. If they implement it incorrectly, backend are on the wrong 'team' to provide input.
Congrats on destroying the purpose of Idempotency Keys.
Ask yourself, why not just `Hash(Request_Payload)`? That'll give you half of what you need to know about why the Idempotency Key header is useful in the first place.
The other half you already know: you just described your bug. It's a bug in your front-end; it has nothing to do with idempotency. If anything, the system performed as expected.
If your requests do something different, they should have different Idempotency Keys. <- this brings down TFA and most of the comments here. I guess those are the perils of vibecoding.
The real problem is that sticking an idempotency key onto an operation doesn’t make it idempotent.
It may improve efficiency where a protocol doesn’t assure exactly-once delivery of messages, but it cannot help you with problems other than deduplication of identical messages.
Creating a payment is not an idempotent operation. If the economics of the operation can differ when the “idempotency” key remains the same then you’ve just created a foot-gun in your API.
You can document that you’re going to ignore “duplicate” requests that share an idempotency key but that’s just user-hostile. The system as a whole is broken as designed.
I do not disagree with their definition of idempotency, but they silently assume resending the same result is the default. They discuss this later in the article, but they never seem to question why that might not be a good idea in the first place.
Edit: Perhaps it is my mental model that is different. I think it makes most sense to see the idempotency key as a transaction identifier, and each request as a modification of that transaction. From this perspective it is clearer that the API calls are only implying the expected state that you need to handle conflicts and make PUTs idempotent. Making it explicit clarifies things.
The article actually ends up creating the required table to make this explicit, but the API calls do not clarify their intent. As long as the transaction remains pending you're free to say "just set the details to X" and just let the last call win, but making the state final requires knowing the state and if you are wrong it should return an error.
If you split this in two calls there's no way to avoid an error if you set it from pending to final twice. So a call that does both at once should also crash on conflicts because one of the two calls incorrectly assumed the transaction was still pending.
Right. An operation is idempotent only if doing it twice has the same result as doing it once. If you have to worry about whether an operation has already been done, it's not idempotent. If you have to worry about order of operations, it's not idempotent.
What's being asked for here is eventual consistency. If you make the same request twice, the system must settle into the same state as if it had been made only once.
That's the realm of conflict-free replicated data types, which the article is trying to re-invent.
x = 1
is idempotent.
x = x + 1
over a link with delay and errors is a problem that requires the heavy machinery of CRDTs.
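For that second case, a grow-only counter (G-Counter) is the textbook CRDT: each replica increments only its own slot, and merge is an element-wise max, which is idempotent and commutative, so replayed or reordered state exchanges cannot double-count. A minimal sketch:

```python
def increment(counter: dict, replica: str) -> dict:
    # Each replica only ever bumps its own slot
    c = dict(counter)
    c[replica] = c.get(replica, 0) + 1
    return c

def merge(a: dict, b: dict) -> dict:
    # Element-wise max: merging the same state twice changes nothing
    return {r: max(a.get(r, 0), b.get(r, 0)) for r in a.keys() | b.keys()}

def value(counter: dict) -> int:
    return sum(counter.values())
```

Resending a replica's state over a lossy, duplicating link is harmless, because `merge(m, a) == m` once `a` has been absorbed.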
> Send the same payment twice and one of them should respond "payment already exists".
You are hiding the relevant complexity in the term "same". What is "the same" here? I mean, if I accidentally buy 1 item of a product instead of 2, and then afterwards buy 1 more, is that the same payment or not?
How, and based on what, is the idempotency key calculated that the client sends with its request? In my double-purchase example above: when would the second purchase be sent with the same key, and when not?
For idempotency you literally just want f(state) = f(f(state)). Whether you achieve this by just doing the same thing twice (no external effects) or doing the thing exactly once (if you do have side effects) is not important.
But if you have side effects and need something to happen exactly once it seems a lot more useful to communicate this, rather than pretending you did the thing.
> But if you have side effects and need something to happen exactly once it seems a lot more useful to communicate this, rather than pretending you did the thing.
I think it depends on whether the sender needs to know whether the thing was done during the request, or just needs to know that the thing was done at all. If the API is to make a purchase then maybe all the caller really needs to know is "the purchase has been done", no matter whether it was done this time or a previous time.
And in terms of a caller implementing retry logic, it's easier for the caller to just retry and accept the success response the second time (no matter if it was done the second time, or actually done the first time but the response got lost triggering the retry).
Not that it's a bad TFA but in addition to what you said, many/most of the edge cases mentioned in TFA are just as problematic whether idempotency is desired or not.
I mean:
> Maybe the first request created a local payment but crashed before publishing an event ...
I mean, yeah, sure. That's a problem. I can come up with another one:
"Maybe the ZFS disk array for the DB caught fire and died a horrible death and you now need to restore from backups".
I don't disagree with the problem, but this sort of Idempotency-Key header is kind of outsourcing the de-duplication to the client. If the client sends a different request with the same Idempotency-Key header, it's the user's (client's) fault. It also circumvents the fact that it's the effect that should apply to give the same state; you could design the API itself to be idempotent with respect to some other property, such as a transaction id. The designs I have seen using an explicit Idempotency-Key header have usually added it on after launch.
I think this article (and the author's previous articles on their blog) is quite clearly AI written. It has such a frustratingly punctuated cadence and really does not serve the reader anything valuable.
What really does not serve the reader any value is this comment now appearing on nearly every single HN thread. (And neither does my comment, sorry about that.)
If you like the article, upvote. If you don’t, don’t.
> Maybe it arrives while the first request is still running. Now your idempotency layer is part of your concurrency control.
I recently designed a system where this had to be taken into consideration, and I find my solution quite elegant: when the request arrives, I put the pending request into a map, keyed by the idempotenceId, in a single step. The event loop may then process other requests; if one of them has the same key, it awaits the same response object from the store. Then, once I have the response, I resolve both promises with it.
You keep the hash of the request so that you can reject a subsequent request with a different body. This has helped me surface bugs and data issues in other systems.
A queue with a pre-flight check can assist with this quite well. Requests are queued and executed ASAP and you use a checker function to verify whether future requests/jobs can run. Just check that idempotency key and if it's already in the queue/db, skip it (and if need be, log the double attempt for future forensics).
I will add: if you have an operation that adds a record to a database (like a payment in the OP example), don’t forget to have some field that the client specifies that can be used to find and query the status of that record later. This field can be the idempotency key itself or another field.
Or you can completely forget this feature and make it really awkward for the client to reconcile their view of the world with yours and/or to check in the request later. cough Mercury cough.
It is, just barely, acceptable to generate the identifier server side and return it to the client.
Why does a second request need to replay the response? It seems to solve a ton of issues to just send back "request key already used" regardless of whether the request is the same or whether the previous request succeeded or failed. Is there a common situation where the replay is needed? I think of idempotency being important to deal with caching and the like, not something you should build your client around.
Here x is interpreted as state and f an action acting on the state.
State is in practice always subject to side effects and concurrency. That's why, if x is state, f can never be purely idempotent, and the term has to be interpreted in a hand-wavy fashion, which leads to confusion in attempts to handle that mismatch, which in turn leads to rather meandering, confusing, and way too long blog posts like the one we are seeing here.
*: I wonder how you can write such a lengthy text and not once mention this. If you want to understand idempotency in a meaningful way, you have to reduce the scenario to a mathematical function. If you don't, you are left with a fuzzy concept, and there isn't much point in philosophizing beyond just accepting how something is practically implemented, like this idempotency key.
Idempotence is a semantically overloaded term in computer science: in functional programming it refers to the same concept as mathematical idempotence, while in systems programming it refers to any function that leaves the system in the same state after multiple calls as after the first.
And yes, in real machines we can never have truly identical states between multiple calls, since system time, heat, and other effects will differ, but we define the state over the abstracted system model of whatever we are modelling, and we define idempotency as reaching the same state over multiple calls in that model.
Not just heat and system time: the context here is state handled by databases, and database content can never be assumed to be identical between two identical operations involving it.
"delete record with id 123" is only idempotent if there is no chance that an operation like "create record with id 123" happened in between.
Half of the mentioned issues are issues of atomicity, not idempotency. If I make a request, and the server crashes midway and doesn't send some crucial events, that's an issue whether or not I send a second request.
From a cursory read, only the part up to "what if the second request comes while the first is running" is an idempotency problem, in which case all subsequent responses need to wait until the first one is generated.
Everything else is an atomicity issue, which is fine, let's just call it what it is.
If the atomic action is idempotent, you don't need a layer for repeating yourself. You hit the nail on the head. So much idempotency efforts are made because they never made the actions idempotent in the first place.
It's just the horrible misapplication of the term 'stateless' to a wrapper around something very-much stateful. It's here to stay.
(Though I do disagree with the original premise too. Putting on a 'stateless' boxing glove won't mean there's no difference between punching a guy once or twice)
The point of idempotency is safe retries. Systems are completely fallible, all the way down to the network cables.
The user wants something + the system might fail = the user must be able to try again.
If the system does not try again, but instead parrots the text of the previous failure, why bother? You didn't build reliability into the system, you built a deliberately stale cache.
That's why you need to separate work from actual input.
It's not about trying again but about making sure you get consistent state.
Imagine a payment request. You made one and it timed out. Why did it time out? Your network, or a payment service error?
You don't know, so you can't decide between retry and not retry.
Thus the practice is: make the request; the server acks it with a status request id (idempotently: the same request gives the same status id); status checks might or might not be idempotent, but they usually are; and each request needs a unique id so you can validate whether the caller even tried to check (idempotency requires state registration).
If you want to try again you give new key and that's it.
There might of course be a bug in the implementation (naive example: the idempotency key is a uint8), but a proper implementation should scope keys so they don't clash (example implementation: idempotency keys become reusable after 48h).
If same calls result in different responses (doesn't matter if you saw it or not) then API isn't idempotent.
> You don't know, so you can't decide between retry and not retry
I'm well aware that the first order went through, even though the dumb system fumbled the translation of the success message and gave me a 500 back.
I do retry because I wanted the outcome. I'm not giving it a new key (firstly because I'm a user clicking a form, not choosing UUIDs for my shopping cart) but more importantly, if I did supply a second key, it's now my fault for ordering two copies.
"Idempotency" feels like "encapsulation" all over again.
Take a good principle like 'modules should keep their inner workings secret so the caller can't use it wrong', run it through the best-practise-machine, and end up with 'I hand-write getters and setters on all my classes because encapsulation'.
This article seems to be missing an example use-case for this functionality, it is very unclear to me that this is a good idea. Your job is hopefully to build an API with the simplest and most reliable contract to meet the needs of the client. Sometimes that involves saying, "human technology does not have a reliable way to support this expectation, but here's what else could work."
> Maybe the first request called a payment provider, the provider accepted it, and your process died before recording the result. Now your database cannot infer whether money moved.
This entire example is bad design. It's bad, bad design. I'm sorry, but if this is your example, you are doing it wrong in every way. There are ways to handle these sorts of things, well-known and well-established patterns. You are using none of these here.
I get it, it's an example, but it's a poor example. You should change it before someone assumes what you are talking about is sensible or reasonable in a production environment. Or at least put a warning.
Yes, I always thought it was an easy thing, but I changed my mind recently when I had to deal with it.
A lot of little things you need to think of. For example:
Client sends a request. The database is temporarily down. The server catches the exception and records the key status as FAILED. The client retries the request (as they should for a 500 error). The server sees the key exists with status FAILED and returns the error again, forever. We've effectively "burned" the key on a transient error.
others like:
- you may have namespace collisions for users... (data leaks)
- when not using transactions, only redis locking, you have a different set of problems
- the client needs to be implemented correctly, e.g. the client sees a timeout and generates a new key, and exactly-once processing is broken
- you may have race conditions with resource deletes
- using UUIDs vs. keys built from object attributes (a different set of issues)
I mean, the list of little details can get very long.
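One of those little details, the "burned key" from the transient-error example above, can be sketched as follows; the fix shown (clearing the pending record instead of persisting FAILED) is one common approach, with an in-memory dict standing in for the store:

```python
store = {}  # key -> {"status": "PENDING" | "COMPLETED", "response": ...}

class TransientError(Exception):
    pass

def handle(key, do_work):
    entry = store.get(key)
    if entry and entry["status"] == "COMPLETED":
        return 200, entry["response"]
    store[key] = {"status": "PENDING", "response": None}
    try:
        result = do_work()
    except TransientError:
        # Do NOT record FAILED: clear the pending record so the client's
        # retry re-executes instead of replaying a stale error forever
        del store[key]
        return 500, "temporary failure, retry with the same key"
    store[key] = {"status": "COMPLETED", "response": result}
    return 201, result
```

Only terminal, deterministic failures (e.g. validation errors) are worth caching against the key; transient infrastructure errors are not.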
None of those are really unsolvable problems. The issue everyone in this thread seems to be having, though, is that you can't wrap a non-idempotent function to make it idempotent; no matter how hard you try, you have to design your system around it.
Idempotency is just the next generation realizing that goto and global variables are still antipatterns.
And then some lazy birdbrain will come up with some new way to either jump to a random place in the code without guardrails on program state, or referencing data that other code or threads could have touched, and they'll call it a time saving feature.
And then we will all learn the hard way that those annoying restrictions were in place for a reason.
This is the great circle of life and death and rebirth
I really hate the POST verb for RESTish APIs because it cannot be idempotent without implementing an idempotency layer. Other verbs are naturally idempotent. Has anyone tried foregoing POST routes entirely? Theoretically you can let the client generate an ID and have it request a PUT route to create new entities. This would give you a tiny amount of extra complexity on the client, but make the server simpler as a trade-off.
GET is not supposed to make changes on the server. The usual idempotent verbs for making changes are PUT and DELETE.
One thing that's confusing, here, is that idempotency only applies for the same request, but the article implies that idempotency is about whether the request contains a specific "idempotency key".
How can you tell from the server if that's a retry (think e.g. some reverse proxy crashed and the first request timed out, but the payment already went through to the user's CC)... or if the user just trying to purchase another item 123 because they forgot they needed 2?
There is simply no way to make the requests idempotent without an idempotency key. The only way to tell both situations apart is to key the requests by some UID. The HTTP verb is irrelevant.
In the case in the article, the request is being rebuilt again by the client, and may be slightly different. Typically, the server doesn't have to care about any of that if it's just "did we get something for this ID?" and either it did and errors (could be a 4xx or a 5xx depending on what it now has), or it didn't, and processes the request.
So what you propose is first you create the request payload and POST it, which generates a request-id-bound URL (but it does nothing stateful yet) and then you actually request to perform it? Because otherwise I don't see any difference.
If you must use POST with a idempotency-key, then my suggestion would be to use it like you'd use a PUT: you generate a guaranteed unique URI on the client, according to a spec agreed upon with the server, along with some idempotency-key value (UUID or whatever per the recommendation of the RFC), and then POST the request to the generated URI. If you get back 200 or 201, great! If you get an error that indicates it might not have worked or you get nothing because of a partition, send the request again. If the server had already processed the first request, the second one should 400 or 409 or something, regardless of any differences between the first and second request. If there was some sort of partial processing, then some other permanent error for that particular ID or URI should convey that.
My original point, though, was that these semantics are well-understood for PUT, so just use PUT, or use POST (with the idempotency-key header) exactly as you would PUT.
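Those PUT-style semantics might look roughly like this server-side, assuming the client generates and reuses a UUID across retries (names are illustrative, and returning 200 for a byte-identical retry is one reasonable policy among those discussed here):

```python
import uuid

orders = {}  # order_id -> payload

def put_order(order_id: str, payload: dict):
    if order_id in orders:
        if orders[order_id] == payload:
            return 200, orders[order_id]  # retry of the same request: safe no-op
        return 409, "order already exists with different contents"
    orders[order_id] = payload  # first time this ID is seen: create
    return 201, payload

# Client side: generate the ID once, then reuse it on every retry
order_id = str(uuid.uuid4())
```

The crucial client-side rule is that the ID is minted once per logical operation, not once per HTTP attempt.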
> If you’re still in school, here’s a fact: you will learn as much or more every year of your professional life than you learned during an entire university degree—assuming you have a real engineering job.
This rubs me the wrong way. It's stated as fact without any trace of evidence, it is probably false, and it seems to serve no purpose but to make struggling students feel worse (and make the author feel superior).
"assuming you have a real engineering job" does a lot of work there. You could also do a lot of work the other way by stating "assuming you are getting a real education". I studied physics when I was young and that field is a lot deeper than my current work in programming. Computer science can also be quite deep if one considers things like the halting problem, type theory and proof assistants.
What you learn at uni is not really about learning a trade; sure, it gives you a taste of the basics in many areas, but you will never be a superb developer (or any other profession) when you get out, having only attended classes. What uni teaches you is how to learn: how to think critically, how important sources are, what to look for to get the most knowledge out of what you read. Or at least that is what it has always been about for me, the process of learning effective learning.
That's what university is supposed to be about, but it doesn't achieve that for many people. Which is a shame, because those skills are extremely important.
I think it's that the things learned in school are academic (red-black trees, dynamic programming, writing toy OS and programming languages, etc.)
In the real world you're faced with building five nines active-active systems that interface across various stakeholders, behaviour has to be eventually consistent, you've got a long list of requirements and deadlines, etc. It's practical, hands on, and people are there to build the thing with you at a scale that far exceeds the university undergraduate setting.
It's not a bad thing, it's just different.
Students shouldn't be afraid of it. Your job and coworkers, if it's a good workplace, are there to help you succeed as you succeed together. You learn and grow a lot.
You also learn how to deal with people, politics, changing requirements, etc., which I would imagine is difficult or impossible to teach without just throwing yourself into the fire.
Sure, it's different from gaining professional experience. It's more theoretical, more foundational, broader. But that doesn't mean you're learning less, let alone four times less on a yearly basis.
I've been a CS teacher and I found that it's terribly easy to underestimate how much there is to learn and how much effort that learning takes, when you've internalized a skill yourself a long time ago.
My rant about this: https://github.com/stickfigure/blog/wiki/How-to-%28and-how-n...
You need to store the payment state at each relevant step and process it asynchronously. If requests time out, you check the status of it using the key you store (with the processor) to see if it was even received.
It’s not perfect, some processors will 500 while processing the payment (Braintree), so you still need reconciliation on the backend.
But, rather than 409, I'd say that you should be using optimistic concurrency control if you adopt this perspective. There should be a resource context for the request, so the client can obtain an ETag and send If-Match headers, and get a 412 response if things are out of sync. That allows them to retry a failed/lost request and safely prevent a double action.
Under a 412, they have to step back and retry a larger loop where they GET some new state and prepare a new action. Just like in DB transaction programming, where your failed commit means you roll back, clean the slate, and start a whole new interrogation of transaction-protected state leading up to your new mutation request.
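That retry-the-larger-loop flow can be sketched with a version counter standing in for the ETag (an in-memory resource, illustrative names):

```python
resource = {"etag": "v1", "state": {"balance": 100}}

def get():
    # Client obtains current state plus its ETag
    return resource["etag"], dict(resource["state"])

def update(if_match: str, new_state: dict):
    if if_match != resource["etag"]:
        # Precondition failed: the client must re-GET and rebuild its action
        return 412, None
    resource["state"] = new_state
    resource["etag"] = "v" + str(int(resource["etag"][1:]) + 1)
    return 200, resource["etag"]
```

On a 412, the client discards its planned mutation, GETs fresh state and a fresh ETag, and prepares a new action, exactly like rolling back and rerunning a failed database transaction.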
That doesn't mean that idempotency keys have to be used. You can certainly hash message content if that is documented behavior. That probably only makes sense when there is already some logical session or transaction identifier that makes dedupe semantics clear.
The system you propose might be sound and might be necessary in some systems, but I can't think of what they might be that wouldn't be better served by the simpler solution that is already widely used for this purpose.
If it processed 99% of the request and the final bookkeeping failed because of a duplicate, that's still a failed request.
Arguably this should be the primary way you check for idempotent requests - you shouldn't have a separate check for existence, you should have the insert/update fail atomically.
This is the same thing you see on filesystems for TOCTOU security holes - the right way is to atomically access and modify once, and you only know the request was already processed because that fails.
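A unique constraint gives you exactly that atomic check-and-insert; a sketch with sqlite3, where the second insert for a key fails atomically instead of being preceded by a separate existence check:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE payments (idempotency_key TEXT PRIMARY KEY, amount INTEGER)")

def create_payment(key: str, amount: int):
    try:
        with db:  # atomic transaction: the insert succeeds exactly once per key
            db.execute("INSERT INTO payments VALUES (?, ?)", (key, amount))
        return 201
    except sqlite3.IntegrityError:
        # Duplicate key: we only learn the request was already processed
        # because the atomic insert itself failed, avoiding any TOCTOU gap
        return 409
```

There is no window between "check" and "insert" for a concurrent duplicate to slip through, which is the point of the comment above.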
The fact that payments have a settlement process is not relevant to this discussion.
Even if you have a complex long-running multistep orchestration problem, you can break it down into simpler transactions. Eg you could start with a "lock the resources" txn.
But 99% of these conversations around idempotence are simple POST operations like "create order" that regular old database concurrency management handles just fine.
That doesn't answer my question. What response do you return to the client in the case I described?
So one will complete with 200, one will complete with 409. It doesn't matter which.
That said, there's something odd about the way you phrased this question. If the original request hasn't gotten a response yet, why is it sending a retry? What you're asking is more general: What happens when two conflicting requests come in? This is something we've been solving with RDBMSes since the 1970s.
> why is it sending a retry?
Maybe two clients try to do it? Or there's a bug in how the client does it?
Isn't the point of idempotency to enable clients to retry, without fear that a 2nd request somehow breaks things?
Not necessarily - there are different transaction isolation and conflict resolution methods provided by every database built for this purpose. You just have to ensure that only one request actually commits to the database, and that one sends a success response while the other sends a 409. The database or another lock provider can either help enforce serialization up-front - or the app can use optimistic locks based on data in the request that will only block if there is actually a conflict, and this won't delay the first transaction at all.
Solving these kinds of issues is exactly the purpose of idempotency keys and database transactions, and using them in the intended way is really the only sound way to build a distributed system. Making things more complicated to "improve DevX" is just going to make them unsound. That is what Stripe chose to do. Their 24-hour replay idea is fine, but why not send 409s after that rather than accept those transactions? If "that will never happen" then the 409s will never happen. It would have cost approximately nothing (if designed that way upfront) and inconvenienced their clients not at all.
You absolutely must wait for one request to finish before any other request can return a 409. 409 is a signal to the client that they can stop retrying, the job is done. If some request returns 409 early and the "original" request fails, you will not get further retries and the message will be lost.
Stripe's approach requires serialization as well. Only one request can succeed. If you send multiple conflicting requests in simultaneously, some of those have to block.
The good news is that we have been solving this problem for decades and we have incredibly well refined tools - database transactions and isolation levels - for solving this problem.
And you haven't considered multiple servers in your scenario - what if two requests meant to be idempotent with each other arrived at different servers?
And at the risk of repeating the above commenter, you solve the multiple-server case by serializing somewhere, because you ultimately need a lock on something. You can also perform the operation in both places and then reconcile the state later, but that's a lot more complex.
When you are using TCP, and you send the same data twice because of a delayed ack, you likewise don't care if the ACK is for the first time or the second time you sent the data. You just know the other side got the data, and that's all you care about.
By sending a third request and getting a response that reveals the state of the system.
Because it hasn't gotten a response yet. That's got to be far and away the most common reason any request gets retried in any context.
Um, because connections over the Internet aren't 100% always on? Because packets can get lost? Because computers sometimes have to reboot?
You're assuming that the client will always receive whatever response your server finally sends, and that the client will wait indefinitely to receive a response. Neither of those things are true. So the client can be in a state where it sends a retry because it got no response and doesn't know why. And that means a retry request could come in while the first one is still being resolved--because the client had a timeout or it rebooted or something else happened that made it lose the connection state it previously had. That's the case I'm asking about.
The case of "client sends a retry with the same idempotency key" generalizes to "multiple requests come in for the same idempotency key". These can come in spread out over time (like a traditional loop), or they could come in at once. The solution is the same either way.
The problem of "how do we deal with multiple conflicting requests coming in at once" is something we have been dealing with for decades. We have databases with transactions and isolation levels. If I said in an interview "make an endpoint that inserts a value in a database and returns an error if the value is a duplicate", any competent backend web developer should be able write it without Claude's help. Concurrency is part of our life.
Whether you want to return 409 or replay the success is irrelevant to this question. You must serialize the idempotent operation on the server, because you can have multiple requests coming in simultaneously. If you put the operation in a database transaction with an appropriate isolation level, you are most of the way there.
Sure you are. You said:
"Retries will only receive 409 if the original request was successful. If the original request failed, the server performs the operation as normal on the second request. It doesn't replay failures."
I understand all that just fine; you don't need to keep trying to "reframe" it. But what you said that I just quoted above assumes, implicitly, that if you get a second request with the same idempotency key, the original request has either failed or succeeded--because you don't even address the case where neither of those things are true. I'm asking you to address that case.
If your answer is "that will never happen", I disagree, and I explained why in response to your question about why the client would send a retry if it hasn't received a response to the original request. You could answer, I guess, that you still think that would never happen--and I would still disagree. But at least that would be an answer. So far all you've done is "reframe" something that I already understand and wasn't asking about.
Whether or not a prior request exists in the system in a processed or unprocessed state should not matter in a properly implemented idempotent system; the whole point is that one and only one is processed, and all replicas indicate that they are such.
What you do inside of your boundary to implement that idempotent contract need not be part of the contract and the decision of what primitives to use (locking, content-based addressing etc) are mainly just a question of implementation constraints.
I'm not sure what you mean by "in flight". The case I'm asking about is where the original request was received by the server and is being processed--and then a second request comes in with the same idempotency key. The original request has not succeeded, and has not failed--it's still in process. What response does the second request get? I do not see an answer to that question anywhere in this thread.
Here's a typical example, assuming serializable isolation in a database that uses optimistic concurrency.
* Two simultaneous requests come in to create a payment.
* The requests provide an idempotency key that is expected to be unique (possibly scoped to a tenant).
* The first request starts a transaction and starts processing, everything looks good - no dups.
* The second request starts a transaction and starts processing, everything looks good - no dups.
* The first one commits and returns success.
* The second tries to commit, but a conflict is detected (the first txn committed first). Typically this causes the second transaction to retry.
* On retry, the second transaction detects the duplicate.
The only question here is what happens when the second transaction fails? The Stripe model is "look up the original response and hand that back to the client". An equally valid and much easier to implement solution is "return a response that tells the client that there was a conflict".
Both solutions offer "create payment" as an idempotent operation.
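The commit/conflict flow above can be sketched in a few lines. This is a toy: an in-process dict under a `threading.Lock` stands in for the database's serializable commit, and the class and method names are illustrative, not a real API.

```python
import threading

class PaymentStore:
    """Toy sketch of the flow above: exactly one concurrent writer wins
    the commit, and the loser detects the duplicate when it re-checks."""

    def __init__(self):
        self._lock = threading.Lock()
        self._payments = {}  # idempotency key -> amount

    def create_payment(self, key: str, amount: int) -> int:
        # Optimistic phase: start processing; no dups visible yet.
        if key in self._payments:
            return 409
        # Commit phase: re-check under the lock, like a transaction retry
        # after a serialization conflict.
        with self._lock:
            if key in self._payments:  # the other transaction committed first
                return 409
            self._payments[key] = amount
            return 200
```

One request completes with 200, the other with 409, regardless of interleaving, which is the contract both solutions above provide.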
Idempotency keys are themselves the solution you're looking for. If they don't work concurrently, they aren't idempotency keys. Your response in races or duplicates doesn't inherently matter in that sense, pick whatever semantics make sense for your system.
No, I'm asking one question, which doesn't seem to be summarized by your summary.
The situation is that your server has received two requests with the same idempotency key. For the first request, one of three things could be true: it could have succeeded, it could have failed, or it could still be in process.
The original post I responded to said what response the second request gets if the first request succeeded and if it failed. But it didn't say what response the second request gets if the first request is still in process on the server--so it hasn't succeeded and it hasn't failed. I do not see an answer to that anywhere in this thread.
So yeah, you can do basically anything that isn't inconsistent. Success, fail, delay, don't respond until timeout, all are valid as long as you don't double-apply. Most concurrent systems are like this in some way, because all successes can become errors, and all responses might never arrive. It has nothing really to do with idempotency.
But your follow-up responses here are making me rethink. Now you have to have all these special cases where the original request is still in process. I think my assertion that "99% are simple POST operations" is bullshit. For the times where idempotency is hard and really matters, often you're calling a third-party API, like a payment processing API.
I would think a better approach would be to always return a 409 on a subsequent request, regardless of whether it passed or failed, and then have a separate standard API that lets you get the result of any request by its idempotency key.
I.e. idempotent DELETE with proper protocol behavior requires that one request see the 200 OK or 204 No Content and the other sees 404 Not Found, because the delete has already happened. It would be misleading to say 200 OK to both, because that answer means the resource was there when the request arrived.
Honestly, the whole HTTP resource model has a different conceptual backing for state management than the independently developed "idempotence" concepts in distributed systems. Those non-HTTP concepts came from more message-based rather than resource-based architectural assumptions.
The cleanest mapping in the spirit of HTTP would be that you do multiple round trips. A POST creates a new idempotence context, a bit like "start a transaction". The new URI is the key for coordinating state change and allowing restart/recovery.
As I remember it, the idea of idempotence keys in headers really came from the SOAP RPC mindset. It's kind of funny to see it persisting in some hybrid SOAP + REST mental model.
I think that gave me "Enterprise Java Beans PTSD". I.e. an over-engineered solution that adds complexity for both the client and server in the name of some sort of "protocol purity".
People bolted on idempotent semantics onto HTTP because it wasn't provided natively by the protocol, so I don't think it makes sense to go through some hoop-jumping gymnastics for the sake of conforming to a spec that doesn't describe the necessary semantics in the first place.
This isn't a special case, and it's the same problem if you want to replay the original response on conflict. If the original request isn't complete, what are you going to replay?
Who says you have to replay? If you get a second request with the same idempotency key, and the original request is still in process, why not just send the client a response that says so?
The pattern I describe was the dominant design pattern for financial transaction processing systems before Stripe. Stripe's API makes life for the clients slightly easier at the expense of making life for servers more complicated, but the two approaches are equivalent in function.
It isn't complicated, though I can see how if your entire experience with financial APIs is Stripe, you might not be aware of how simple it is. Because Stripe's approach, while mildly more convenient for clients, is a PITA to implement properly.
However, it's good for e-commerce where there are a subset of "important" operations, but I'd argue the idempotency key is better for a financial API like Stripe where the majority of your operations need these semantics.
You also run into a problem if PUT/PATCH needs to be exactly-once instead of at least once. Not as common, but again, might be something you run into with a financial API.
The only difference between the approach I'm describing and Stripe's approach is the detail of how the client knows that it's done. "My" mechanism (notify the client that a request is a duplicate) was dominant in financial transaction processing systems before Stripe; I used probably a half dozen of them back in the day.
Stripe came along and from the beginning decided "we want to compete based on making a friendly API". They succeeded, and if you ever look at Paypal's original API, it's easy to see why. It's true that repeating successful API responses makes life slightly easier for clients; they never need to check for a 409 (or whatever) response. It comes at a cost of making life quite a bit more complex on servers. Personally I don't think the tradeoff is worth it, but YMMV. If your single competitive advantage is "easy API" maybe it makes sense. If you're normal B2B, almost certainly not.
If the idempotency key was seen, then send back the response.
The client's intention is outside the scope. If the contract says "idempotent on key", then respond idempotently on the key. If the contract says "idempotent on body hash", then respond on the body hash (which might or might not include extra data).
APIs are contracts. Not the pinky promise of "I'll do my best guess"
I've been in this situation, a clientside bug meant that different requests arrived with the same idempotency key.
In my case, updating the client would have taken weeks, in the best case scenario. Updating the backend to check for a matching request body would have taken minutes, maybe hours.
It took me a surprising amount of arguing to convince people that, even if it was a clientside bug, we couldn't let users suffer for weeks in name of "correctness".
Ideally you already send client version in requests (or have an API version prefix). Add the workaround only for legacy clients.
Next client version must distinguish itself from predecessor and must not require the bodge to work.
The issue with things that the client must not do is that it might still do them, and users don't care whose fault it is. It's important to have auxiliary mechanisms to mitigate these.
Then at least admit you're just hacking in quick fixes, creating technical debt, and not fixing the actual problem.
I agree with your point that business interest is most important, I disagree that it’s the technically most appropriate solution.
The whole article proclaims that this is a technical problem about idempotency being hard, while it's not. The whole premise, that resolving client-side bugs on the backend is the correct solution, is incorrect.
The robustness principle has its times and places, but the general consensus that it should be applied everywhere to everything was a big mistake. The default should be that you are very rigid and precise, and only apply the robustness principle in those times and places where it applies; I'm perfectly comfortable deploying something precise and finding out later that this was one of them. The vast majority of APIs are not the time and place for the robustness principle. They are the time and place for careful precision on exactly what is provided, and detailed and descriptive error messages, logging, and metrics for when the boundaries are transgressed.
The user just needs to know what the trade-off is. And "best guess" can be hard to characterize, so you need to be extremely careful. But sometimes it's a big win for a low price.
"Best guess" can be bad if it is not well-defined, but you can still make error detection obvious rather than hidden.
You have never had to work with PHP backends, have you?
JSON in PHP is a flustercluck. Undefined, null, "" or "null", that is always the question.
If you use a typed Go/Rust client and schemas, you usually end up with "look ahead schemas" that try to detect the actual types behind the scenes, either with custom marshallers or with some v1/v2/v3 etc schema structs.
It's so painful to deal with ducktyped languages ... that's something I wouldn't wish on anyone.
I’ve seen two separate engineers implement a “generic idempotent operation” library which used separate transactions to store the idempotency details without realizing the issues it had. That was in an organization of less than 100 engineers less than 5 years apart.
One other thing I would augment this with is Antithesis’ Definite vs Indefinite error definition (https://antithesis.com/docs/resources/reliability_glossary/#...). It helps to classify your failures in this way when considering replay behavior.
[1] http://johnsalvatier.org/blog/2017/reality-has-a-surprising-...
I once wrote about inherent, irreducible complexity and how we try to deal with it. The draft has sections on how complexity can be hidden, spread out, localized, passed off, or recreated from scratch. Unfortunately, people are now using LLMs to pile complexity on the simplest of tasks, and my essay isn't really worth finishing.
Isn't the opposite true? The more people are messing with complexity, the more they could benefit from a model of complexity. And if they generate complexity with external tools, then maybe a theoretical take on it will be the only way for them to learn? I mean, we learn these things through struggle and pain, but if all of that becomes an LLM problem, then you just stop learning? At some point complexity will strike back; at some point there will be so much of it that LLMs will be no help.
OTOH, if LLMs still win, and the skills of managing complexity are lost to future generations, if we are at the peak of our skill in dealing with complexity, then shouldn't we try our best to imprint our hard-won lessons into history? Maybe for some later generation the tide will turn, and they will write textbooks on complexity, and with your article you'll get your portrait in a textbook, and each bored pupil will decorate it with mustaches. You have a chance to immortalize yourself. xD
Or maybe you can become someone like Ramanujan for math? Someone who honed obsolete math skills to an unimaginable level? Maybe a time will come when students pore over Ramanujan's works because his skills became useful again, and they try to find out how Ramanujan thought?
...
Sorry, I just couldn't resist. Seriously, it is hard to predict with LLMs, maybe we will not need intellect or any intellectual skills at all after AGI.
Some help for others to understand the history of this (which apparently Stripe, Paypal, Dwolla, and others use): https://github.com/mdn/content/issues/41497 There are links to the RFC and prior art.
That aside, my first impulse is to say that the server should specify that the key includes a hash of the important parts of the request, checked on receipt, so that only the key itself need be stored. However, FF's implementation apparently(?) adds the header automatically to POST and PATCH if it's not already present, which means that it's not able to comply with such a decision, and the RFC (currently expired) recommends using a UUID, so.
I'm guessing the original motivation of this is "Browser JS might not be able to send a PUT, or proxies may not handle a PUT correctly".
A user would generate the idempotency key by loading the front-end application, adding item(s) to their cart, submitting their order but timing out. The user would then navigate back to the front-end application and add another item and submit the order again. Since the user is submitting an identical idempotency key to the same transaction, our payment gateway would look up the request/transaction by idempotency key and see in its cache that there was a successful (200 OK) response to the previous request. The user now believes they purchased three items, however, our system only charged and shipped on two of the orders.
Consequently, the lesson we take away from the aforementioned incident is that idempotency keys are really composite keys (Client_Provided_Key + Hash(Request_Payload)).
If a system receives an identical idempotency key (but with a different request payload) the idempotency key should be rejected with a 409 Conflict response with a message similar to "Idempotency key already used with different request payload". Alternatively, some teams argue it should be returned with a 400 Bad Request response. Systems should never return a failed cache response or replace old entries of data.
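A minimal sketch of that composite-key rule, assuming a hypothetical in-memory `seen` store and SHA-256 over the raw request body:

```python
import hashlib

seen: dict[str, str] = {}  # idempotency key -> payload hash (hypothetical store)

def check(key: str, payload: bytes) -> tuple[int, str]:
    """Treat the effective key as (key, Hash(payload)): replay on a true
    duplicate, reject reuse of a key with a different body."""
    digest = hashlib.sha256(payload).hexdigest()
    prior = seen.get(key)
    if prior is None:
        seen[key] = digest
        return 200, "processed"
    if prior != digest:
        return 409, "Idempotency key already used with different request payload"
    return 200, "replayed cached response"
```

The incident above is exactly the `prior != digest` branch: same key, different cart, and the safe response is an error rather than a stale replay.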
One step the article glosses over is locking the flow: the cached response for an idempotency key will not exist until the first request completes, but the key itself already exists while the request is in progress.
To safely accomplish your goal, you have to follow the following steps:
1. Acquire a distributed lock on the idempotent key.
2. Check for the existence of a key in your persistent store.
3. If an existing key is found, verify the hash of the payload against the stored hash of the original payload. If the hashes do not match, return a 409 error.
4. If the hashes match, look up the status of the request. If the status shows COMPLETED in the persistent store, return the cached response. If the status shows PENDING in the persistent store, return a 429 Too Many Requests to the user, or hold the connection open until the request reaches the COMPLETED state.
5. After processing the request, save the response to the persistent store before releasing the lock.
While this may look simple on paper, creating a distributed locking state machine for a single API endpoint is typically how developers have their first aha moments with idempotency. Becoming idempotent is often an enormous architectural shift and not just a middleware header check.
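The five steps can be condensed into one handler. This is a single-process sketch: a `threading.Lock` and a dict stand in for the distributed lock and the persistent store, and `handle`/`process` are hypothetical names.

```python
import hashlib
import threading

PENDING, COMPLETED = "PENDING", "COMPLETED"
store = {}                     # key -> {"hash": ..., "status": ..., "response": ...}
store_lock = threading.Lock()  # in-process stand-in for the distributed lock

def handle(key: str, payload: bytes, process):
    digest = hashlib.sha256(payload).hexdigest()
    with store_lock:                           # step 1: acquire the lock
        record = store.get(key)                # step 2: existence check
        if record is not None:
            if record["hash"] != digest:       # step 3: same key, different payload
                return 409
            if record["status"] == COMPLETED:  # step 4: replay the cached response
                return record["response"]
            return 429                         # step 4: still PENDING elsewhere
        store[key] = {"hash": digest, "status": PENDING, "response": None}
    response = process(payload)                # do the real work outside the lock
    with store_lock:                           # step 5: save before releasing
        store[key]["status"] = COMPLETED
        store[key]["response"] = response
    return response
```

Even in this toy form, the state machine (absent → PENDING → COMPLETED, plus the hash check) is visible, which is the "aha" the paragraph above describes.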
An API should follow its documented behavior. This is both a specification and a contract. If the docs for the API say that a duplicate idempotency key will receive a 409, and do not mention message hashes, then they need to follow that spec because the client may specifically depend on it. For example if the order was processed and the cart is resent with the same key but an additional item, client does not want another order with the duplicate items in the first one. They want an error.
If the docs do not accurately describe the behavior of the idempotency key, the client should find another provider.
> While this may look simple on paper, creating a distributed locking state machine for a single API endpoint is typically how developers have their first aha moments with idempotency. Becoming idempotent is often an enormous architectural shift and not just a middleware header check.
Yes, when you expand the scope of your API implementation beyond its contract you take on a virtually unbounded amount of edge cases that not only must you solve, but that your customers must guess at how you are solving.
I'm guessing that your API required the idempotency key. I think that could be risky, because it means developers will simply provide a value for it without understanding the purpose or thinking through the implications. You really only want them using it if they understand the problem it is solving.
Hashing message content could be an alternative behavior that it makes sense to support by default for apps that don't supply an idempotency key. As long as you document it.
Save only if the operation succeeds. It's meaningless to cache a failure, subsequent retries will result in failure from the cache.
Frankly, you guys are overengineering the whole thing. We use the concept only for network outages, i.e. it is only on timeout that we want to guard against fulfilling a duplicate request for the same operation.
The idempotency key should have been viewed as the untrustworthy hint it really is. Then you can decide whether an untrustworthy hint is what you really need. At that point I'd hope someone on the team says "This is ordering - I think we need something trustworthy"
> Consequently, the lesson we take away from the aforementioned incident is idempotency keys are really composite keys (Client_Provided_Key + Hash(Request_Payload)).
Did the postmortem result in any other (wider) changes/actions, out of curiosity?
No idea if this was anything like what happened your case, and probably going off on a tangent, but I've seen so many cases where teams are split into backend and frontend, and they stop thinking about the product as a single distributed system (or, it exacerbates that lack of that thinking from before). Frontend often suggest "Oh we can just create an idempotency key" and any concerns from backend are dismissed. If they implement it incorrectly, backend are on the wrong 'team' to provide input.
Congrats on destroying the purpose of Idempotency Keys.
Ask yourself, why not just `Hash(Request_Payload)`? That'll give you half of what you need to know about why the Idempotency Key header is useful in the first place.
The other half you already know? You just described your bug, it's a bug, on your front-end, this has nothing to do with idempotency; if anything, the system is performing as expected.
If your requests do something different, they should have different Idempotency Keys. <- this brings down TFA and most of the comments here. I guess those are the perils of vibecoding.
It may improve efficiency where a protocol doesn’t assure exactly-once delivery of messages, but it cannot help you with problems other than deduplication of identical messages.
Creating a payment is not an idempotent operation. If the economics of the operation can differ when the “idempotency” key remains the same then you’ve just created a foot-gun in your API.
You can document that you’re going to ignore “duplicate” requests that share an idempotency key but that’s just user-hostile. The system as a whole is broken as designed.
Idempotency is about state, not communication. Send the same payment twice and one of them should respond "payment already exists".
> Idempotency is about the effect. An operation is idempotent if applying it once or many times has the same intended effect.
Edit: Perhaps it is my mental model that is different. I think it makes most sense to see the idempotency key as a transaction identifier, and each request as a modification of that transaction. From this perspective it is clearer that the API calls are only implying the expected state that you need to handle conflicts and make PUTs idempotent. Making it explicit clarifies things.
The article actually ends up creating the required table to make this explicit, but the API calls do not clarify their intent. As long as the transaction remains pending you're free to say "just set the details to X" and just let the last call win, but making the state final requires knowing the state and if you are wrong it should return an error.
If you split this in two calls there's no way to avoid an error if you set it from pending to final twice. So a call that does both at once should also crash on conflicts because one of the two calls incorrectly assumed the transaction was still pending.
What's being asked for here is eventual consistency. If you make the same request twice, the system must settle into the same state as if it was done only once. That's the realm of conflict-free replicated data types, which the article is trying to re-invent.
So making "the same" request twice over a link with delay and errors is a problem that requires the heavy machinery of CRDTs? You are hiding the relevant complexity in the term "same". What is "the same" here? I mean, if I accidentally buy only 1 instead of 2 items of a product and then buy 1 item again afterwards, how is that the same payment or not?
The idempotency key of the request
If the client sends the same key but a different payload that’s a 400 or 409 in my eyes.
2) Client's choice
I can choose to purchase a 2nd item, or I can choose to retry purchasing the 1st item. The server making that choice for me is not idempotency.
Idempotency is the server supporting my ability to retry purchasing the 1st item, safe in the knowledge that they won't send me a 2nd one.
For idempotency you literally just want f(state) = f(f(state)). Whether you achieve this by just doing the same thing twice (no external effects) or doing the thing exactly once (if you do have side effects) is not important.
But if you have side effects and need something to happen exactly once it seems a lot more useful to communicate this, rather than pretending you did the thing.
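The f(state) = f(f(state)) definition above can be made concrete. A small sketch, with hypothetical functions (the `set_to_seven` example is a classic idempotent write; `increment` is the classic counterexample):

```python
def set_to_seven(state: dict) -> dict:
    """Idempotent: f(f(state)) == f(state), no matter how often it runs."""
    return {**state, "variable": 7}

def increment(state: dict) -> dict:
    """Not idempotent: each application changes the state again."""
    return {**state, "counter": state.get("counter", 0) + 1}

s = {"counter": 0}
assert set_to_seven(set_to_seven(s)) == set_to_seven(s)
assert increment(increment(s)) != increment(s)
```

"Create payment with key K" is idempotent in exactly the `set_to_seven` sense: after any number of applications, the system holds one payment for K.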
I think it depends on whether the sender needs to know whether the thing was done during the request, or just needs to know that the thing was done at all. If the API is to make a purchase then maybe all the caller really needs to know is "the purchase has been done", no matter whether it was done this time or a previous time.
And in terms of a caller implementing retry logic, it's easier for the caller to just retry and accept the success response the second time (no matter if it was done the second time, or actually done the first time but the response got lost triggering the retry).
I mean:
> Maybe the first request created a local payment but crashed before publishing an event ...
I mean, yeah, sure. That's a problem. I can come up with another one:
"Maybe the ZFS disk array for the DB caught fire and died a horrible death and you now need to restore from backups".
But that's going to be a problem anyway.
So I'll break it down for you.
Idempotency literally means "same power" (idem + potens).
If you actually understood that, which you don't, you would know that sending a POST with an ID is a 400 error.
If you like the article, upvote. If you don’t, don’t.
Not well organized, but not zero value.
Is this the new normal? Assert something that is clearly broken as though it were correct, then write a blog post fixing their own broken logic?
You don’t replay it on retry. You signal success on the first try, and subsequent requests with the same key return 409.
Anything else and you are doing it wrong.
I recently designed a system where this had to be taken into consideration. I find my solution very elegant: when a request arrives, I put the pending request into a map, keyed by the idempotenceId. This whole operation is executed in one step. Now the event loop may process other requests. If one of them has the same key, it will await the same response object from the store. And then, once I have the response, I resolve both promises with it.
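That promise-coalescing pattern translates to Python's asyncio almost directly (a sketch under the same single-event-loop assumption; `handle` and `work` are hypothetical names):

```python
import asyncio

pending: dict[str, asyncio.Future] = {}  # idempotency key -> shared in-flight future

async def handle(key: str, work):
    """First request for a key runs the work; concurrent duplicates await
    the same future and receive the identical response."""
    if key in pending:
        return await pending[key]  # duplicate: piggyback on the in-flight work
    fut = asyncio.get_running_loop().create_future()
    pending[key] = fut  # registered before any await, so this step is atomic
    try:
        result = await work()
        fut.set_result(result)     # resolve every waiter at once
        return result
    except Exception as exc:
        fut.set_exception(exc)     # waiters must also see failures
        raise
    finally:
        del pending[key]           # entry only lives while the request is in flight

async def demo():
    calls = 0
    async def work():
        nonlocal calls
        calls += 1
        await asyncio.sleep(0.01)
        return "response"
    a, b = await asyncio.gather(handle("k1", work), handle("k1", work))
    return a, b, calls

print(asyncio.run(demo()))  # → ('response', 'response', 1)
```

Note this only dedupes requests that overlap in time within one process; a crash or a second server still needs the persistent-store approaches discussed elsewhere in the thread.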
Or you can completely forget this feature and make it really awkward for the client to reconcile their view of the world with yours and/or to check in the request later. cough Mercury cough.
It is, just barely, acceptable to generate the identifier server side and return it to the client.
Yes.
Here x is interpreted as state and f an action acting on the state.
State is in practice always subjected to side effects and concurrency. That's why if x is state then f can never be purely idempotent and the term has to be interpreted in a hand-wavy fashion which leads to confusions regarding attempts to handle that mismatch which again leads to rather meandering and confusing and way too long blog posts as the one we are seeing here.
*: I wonder how you can write such a lengthy text and not once even mention this. If you want to understand idempotency in a meaningful way then you have to reduce the scenario to a mathematical function. If you don't then you are left with a fuzzy concept and there isn't much point about philosophizing over just accepting how something is practically implemented; like this idempotency-key.
That is simply not true. f could be, for example, “set x.variable to 7”, which is definitely idempotent.
> State is in practice always subjected to side effects and concurrency.
There was never any claim or assumption regarding f. Maybe the way you interpreted it is what they meant, but it is not what was stated.
And yes, in real machines we can never have truly identical states between calls, since system time, heat, and other effects will differ. But we define the state over the abstracted model of whatever system we are modelling, and we define idempotency as reaching the same state across multiple calls in that model.
"delete record with id 123" is only idempotent if there is no chance that an operation like "create record with id 123" happened in between.
I wondered about this too. Also, why was it framed in the context of JSON based RPC over HTTP ?
In that mathematical notation there are typically no side effects; those are meant to be pure functions.
From a cursory read, only the part up to "what if the second request comes while the first is running" is an idempotency problem, in which case all subsequent responses need to wait until the first one is generated.
Everything else is an atomicity issue, which is fine, let's just call it what it is.
Auth, logging, and atomicity are all isolated concerns that should not affect the domain specific user contract with your API.
How you handle unique keys is going to vary by domain and tolerance, and it's probably not going to be the same in every table.
It's important to design a database schema that can work independently of your middleware layer.
(Though I do disagree with the original premise too. Putting on a 'stateless' boxing glove won't mean there's no difference between punching a guy once or twice)
Idempotency or not, many points in the article are about atomic transactions.
The user wants something + the system might fail = the user must be able to try again.
If the system does not try again, but instead parrots the text of the previous failure, why bother? You didn't build reliability into the system, you built a deliberately stale cache.
It's not about trying again but about making sure you get consistent state.
Imagine a request for payment. You made one and it timed out. Why did it time out? Your network, or a payment-service error?
You don't know, so you can't decide between retry and not retry.
Thus the practice is: make the request; the server acks it with a status/request id (idempotent: the same request gives the same status id); status checks may or may not be idempotent, but they usually are; and each request needs a unique id so the server can tell whether the caller even tried to check (idempotency requires state registration).
If you want to try again you give new key and that's it.
There might of course be a bug in the implementation (naive example: the idempotency key is a uint8), but a proper implementation should scope keys so they don't clash (example: idempotency keys become reusable after 48h).
If same calls result in different responses (doesn't matter if you saw it or not) then API isn't idempotent.
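The client side of that contract can be sketched in a few lines (all names here are hypothetical, and `send` stands in for whatever transport you use): retry an ambiguous failure with the same key, and only mint a new key for a genuinely new intent.

```python
import uuid

def submit_payment(send, amount: int) -> str:
    # One key per *intent*, reused across retries. Minting a fresh key per
    # attempt would silently break exactly-once semantics.
    key = str(uuid.uuid4())
    for _ in range(3):
        try:
            status = send(key, amount)   # send() is an assumed transport hook
        except TimeoutError:
            continue                     # ambiguous outcome: same key, retry
        if status in (201, 409):         # 201: first success; 409: the
            return key                   # original attempt already succeeded
    raise RuntimeError("outcome unknown; reconcile via a status check")
```

A 409 here is not an error from the client's perspective; it is positive confirmation that the earlier, timed-out attempt went through.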
I'm well aware that the first order went through, even though the dumb system fumbled the translation of the success message and gave me a 500 back.
I do retry because I wanted the outcome. I'm not giving it a new key (firstly because I'm a user clicking a form, not choosing UUIDs for my shopping cart) but more importantly, if I did supply a second key, it's now my fault for ordering two copies.
Take a good principle like 'modules should keep their inner workings secret so the caller can't use it wrong', run it through the best-practise-machine, and end up with 'I hand-write getters and setters on all my classes because encapsulation'.
No -- Idempotent means _no_ side-effects.
This entire example is bad design. It's bad, bad design. I'm sorry, but if this is your example, you are doing it wrong in every way. There are ways to handle these sorts of things, well-known and well-established patterns. You are using none of these here.
I get it, it's an example, but it's a poor example. You should change it before someone assumes what you are talking about is sensible or reasonable in a production environment. Or at least put a warning.
A lot of little things you need to think of. For example:
Client sends a request. The database is temporarily down. The server catches the exception and records the key status as FAILED. The client retries the request (as they should for a 500 error). The server sees the key exists with status FAILED and returns the error again, forever. You've effectively "burned" the key on a transient error.
others like:
- you may have namespace collisions between users (data leaks)
- when not using transactions, only Redis locking, you have a different set of problems
- the client needs to be implemented correctly; if the client sees a timeout and generates a new key, exactly-once processing is broken
- you may have race conditions with resource deletes
- UUIDs vs. keys built from object attributes come with different sets of issues
I mean, the list of little details can get very long.
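One way to avoid burning a key on a transient error is to record the key only for definitive outcomes; transient failures return 500 without touching the key, so the client's retry re-executes the work. A sketch with assumed names and an in-memory store:

```python
store: dict[str, tuple[str, object]] = {}   # key -> (status, payload)

class TransientError(Exception):
    """Infrastructure hiccup (DB down, network blip): safe to retry."""

def handle(key: str, work):
    if key in store:
        status, payload = store[key]
        if status == "SUCCEEDED":
            return 409, payload   # prior success: signal conflict, no replay
        return 422, payload       # a *permanent* failure is safe to repeat
    try:
        result = work()
    except TransientError:
        return 500, None          # do NOT record the key: a retry may succeed
    except Exception as exc:
        store[key] = ("FAILED", str(exc))   # deterministic failure: record it
        return 422, str(exc)
    store[key] = ("SUCCEEDED", result)
    return 201, result
```

The distinction that matters is transient vs. deterministic: only the latter is a property of the request itself and deserves to be pinned to the key.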
This is the bug regardless of idempotency, right? It should be recording something like RESOURCE_UNAVAILABLE.
And then some lazy birdbrain will come up with some new way to either jump to a random place in the code without guardrails on program state, or referencing data that other code or threads could have touched, and they'll call it a time saving feature.
And then we will all learn the hard way that those annoying restrictions were in place for a reason.
This is the great circle of life and death and rebirth
The GET/POST split is the defence (even if it's only advisory).
GET-only means every time you hit the back button during an order flow, you might double-order.
One thing that's confusing, here, is that idempotency only applies for the same request, but the article implies that idempotency is about whether the request contains a specific "idempotency key".
Don't do that, and this problem evaporates.
Don't do that, and you solved nothing.
Either I'm missing what you mean, or half the comments here are missing the point of idempotency.
Let's say your server received this request twice within one minute:
How can you tell from the server whether that's a retry (think e.g. some reverse proxy crashed and the first request timed out, but the payment already went through to the user's CC)... or the user just trying to purchase another item 123 because they forgot they needed 2? There is simply no way to make the requests idempotent without an idempotency key. The only way to tell the two situations apart is to key the requests by some UID. The HTTP verb is irrelevant.
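The ambiguity can be made concrete with a toy request shape (entirely hypothetical): two byte-identical bodies are indistinguishable, so only a client-chosen key separates a retry from a second order.

```python
# A retry carries the SAME key as the original; a new intent carries a new
# one. The server's dedup logic then needs nothing but a key lookup.
seen: set[str] = set()

def is_duplicate(req: dict) -> bool:
    key = req["idempotency_key"]
    dup = key in seen
    seen.add(key)
    return dup

retry        = {"item_id": 123, "idempotency_key": "a1b2"}
second_order = {"item_id": 123, "idempotency_key": "c3d4"}

print(is_duplicate(retry))         # False: first delivery, process it
print(is_duplicate(retry))         # True: retry, suppress the duplicate
print(is_duplicate(second_order))  # False: legitimate second purchase
```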
Did I misunderstand what you meant?
I mean: you still have the problem regardless of following HTTP verb semantics or not.
My original point, though, was that these semantics are well-understood for PUT, so just use PUT, or use POST (with the idempotency-key header) exactly as you would PUT.
You want a rebuildable environment after testing blows it up? Idempotent build scripts.
You want to sell crap from a web interface? That's a transaction. If you 'repeat a sale', that's a new transaction, with new goods and a newer date.
Forcing 1 paradigm on a different one always results in gnashing of teeth and sadness. But I guess it gets the blog hits for that dopamine rush.
Like, I thought the entire definition had to do with "the exact same thing twice."
This rubs me the wrong way. It's stated as fact without any trace of evidence, it is probably false, and it seems to serve no purpose but to make struggling students feel worse (and make the author feel superior).
In the real world you're faced with building five nines active-active systems that interface across various stakeholders, behaviour has to be eventually consistent, you've got a long list of requirements and deadlines, etc. It's practical, hands on, and people are there to build the thing with you at a scale that far exceeds the university undergraduate setting.
It's not a bad thing, it's just different.
Students shouldn't be afraid of it. Your job and coworkers, if it's a good workplace, are there to help you succeed as you succeed together. You learn and grow a lot.
You also learn how to deal with people, politics, changing requirements, etc., which I would imagine is difficult or impossible to teach without just throwing yourself into the fire.
I've been a CS teacher and I found that it's terribly easy to underestimate how much there is to learn and how much effort that learning takes, when you've internalized a skill yourself a long time ago.