This is the wrong way to do it. As software architects, you need to learn the appropriate uses of algorithms versus AI. Using AI to build everything is not just a waste of tokens; it's also an exercise in futility.
Here is how I solved this problem:
1. There is already a knowledge base of almost all APIs (the ones that are useful to the average Joe, anyway) in either Swagger.json or Postman.json format. Which format you prefer is entirely up to you.
2. Write a generator (I use Elixir) to infer which of the formats from step 1 you're dealing with and generate your API modules using a code generator. There are plenty, or you can even write your own with a simple File.write!
3. In the rare occurrence that you come across a shitty API with only scattered documentation spread across outdated static pages online, only then use an LLM + browser to automate writing it into one of the formats listed in step 1 (Swagger.json or Postman.json).
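Steps 1 and 2 can be sketched roughly as follows. This is a minimal illustration in Python rather than Elixir, and the function names, the format-detection heuristics, and the stub layout are all my own assumptions, not a reference implementation:

```python
import json

def detect_spec_format(path):
    """Guess whether a JSON file is an OpenAPI/Swagger spec or a Postman collection."""
    with open(path) as f:
        spec = json.load(f)
    if "openapi" in spec or "swagger" in spec:
        return "openapi"
    if "info" in spec and "item" in spec:
        # Postman collections keep their requests under a top-level "item" array
        return "postman"
    raise ValueError(f"unrecognized spec format: {path}")

def generate_client_stub(spec):
    """Render a tiny client module (one function per operation) from an OpenAPI dict."""
    lines = ["import requests", ""]
    base = spec.get("servers", [{"url": ""}])[0]["url"]
    for route, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            name = op.get("operationId", f"{method}_{route.strip('/').replace('/', '_')}")
            lines.append(f"def {name}(**params):")
            lines.append(f'    return requests.{method}("{base}{route}", params=params)')
            lines.append("")
    return "\n".join(lines)
```

Writing the rendered module out with `open(path, "w").write(...)` is the analogue of File.write! in Elixir; a real generator would also handle path parameters, request bodies, and auth.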
Throwing an LLM at everything is just inefficient lazy work.
I don't know, maybe I'm misunderstanding too, but they basically just asked an agent to interface with an API. It seems the agent will create new code each time.
I think the question is why integrating with, say, Google Calendar is different for each customer? How much is custom versus potentially reusable code?
The TL;DR does not seem to match the rest of the article.
They claim the agents reliably generated a week’s worth of dev work for $20 in tokens, then go on to list all the failure modes and debugging they had to do to get it to work, and conclude with “Agents are not ready to autonomously ship every integration end-to-end.”
Generally a good write up that matches my experience (experts can make systems that can guide agents to do useful work, with review), but the first section is pretty misleading.
A lot of these smell like a skill issue with the model. Many are complete non-issues if you're using Claude Opus 4.5+.
The idea of assigning a code-owner agent per directory is really interesting. A2A (read: message passing and self-updating AGENTS.md files) might really shine there in some way.
If you're using Elixir (or don't mind running a separate Elixir service), we've built what is effectively a clone of the OAuth part of Nango (formerly Pizzly). Drop it into any Elixir project and get full OAuth management out of the box, and it's compatible with all of the Nango provider strategies:
The post provides a lot of good food for thought based on experience, which is exactly what the title conveys.
What is the value add of having the AI rebuild code over and over, individually for each project using it?
I hope this isn't their business model.
The news here is the AI reading the API docs, assembling requests, and iterating on them until it works as expected.
This sounds simple, but it is time-consuming and error-prone for humans to do.
It takes lots of reading and testing before integrating it into your project.
https://nango.dev/docs/guides/platform/free-self-hosting/con...
Of course that may well be my misreading, but it seems important in the context of the claim and the analysis using OpenCode.
Perhaps they could clarify and/or revisit the docs.
https://github.com/agoodway/tango