This guy took inspiration from gog cli (steipete's cli for Google Workspace, which predates gws cli and is apparently more agent-friendly and token-efficient):

https://github.com/mvanhorn/cli-printing-press

He made a whole bunch of agent-friendly CLIs: https://printingpress.dev/

https://github.com/mvanhorn/printing-press-library/tree/main...
Getting agents used to using `--force` to bypass prompts seems like a bad idea. `--force` is for when the action failed (or would fail) for some reason and you want it to definitely happen this time.
I think `--yes` or `--yes-do-the-dangerous-thing` is leagues better.
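To make the distinction concrete, here is a minimal argparse sketch (the tool name, prompt, and in-use check are all invented, not from any real CLI): `--yes` merely skips the confirmation prompt, while `--force` overrides a condition that would otherwise make the action fail.

```python
# Minimal sketch of the --yes vs. --force distinction (all names invented).
import argparse
import sys

def resource_in_use() -> bool:
    """Stand-in for a real safety check, e.g. open connections."""
    return False

parser = argparse.ArgumentParser(prog="mytool")
parser.add_argument("--yes", action="store_true",
                    help="skip the interactive confirmation prompt")
parser.add_argument("--force", action="store_true",
                    help="proceed even if the resource is still in use")
args = parser.parse_args()

# --yes answers the "are you sure?" question up front.
if not args.yes and input("Really delete? [y/N] ").strip().lower() != "y":
    sys.exit("aborted")

# --force overrides a failure condition; it is not a politeness bypass.
if resource_in_use() and not args.force:
    sys.exit("resource is in use; pass --force to delete anyway")

print("deleted")  # the actual destructive action would go here
```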
In the case of an LLM, it can also bias the model towards reaching for that sort of flag more often, which is less than ideal when it then uses a more ordinary Unix command where `--force` means something genuinely dangerous.
I think every CLI is agent-native when it's invoked from Claude or any other coding agent.
I was really surprised today. We at Adaptive [1] build an access management platform for accessing psql, mysql, VMs, k8s, etc. When you run `adaptive connect <db-name>`, it creates a just-in-time tunnel and connects the user to the database. You cannot do traditional psql operations and the like through it; that design is by choice.

Today I was trying to invoke it via Claude and, god damn, it found a way to connect. It created a pseudo-shell in Python, passed the queries through it, and treated our CLI like a tool. This would hardly have been possible for a human: you would think about the risks, about good practice vs. bad practice, and you would be scared to write and execute code like that. It just did it and achieved the goal.

[1] https://adaptive.live
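As a rough reconstruction (the database name, prompt string, and query below are all invented), the workaround presumably looked something like this pexpect-driven pseudo-shell:

```python
# Hypothetical reconstruction of the agent's workaround: wrap an
# interactive-only CLI in a pseudo-terminal and script it like a library.
import pexpect

child = pexpect.spawn("adaptive connect mydb")  # "mydb" is invented
child.expect("=> ")                             # assumed interactive prompt
child.sendline("SELECT count(*) FROM users;")   # feed a query through the tunnel
child.expect("=> ")                             # wait for the next prompt
print(child.before.decode())                    # everything printed in between
child.close()
```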
I don't want "agent-native CLIs" to proliferate, because I'd rather we design CLIs for human use and programmatic (automation) use first. Agents are good at vomiting JSON between tool calls; I am not, and never will be.
Too many tools stray so wildly from UNIX principles. If we design for agents first we will likely see more and more of this.
I would naively suppose that the agent is able to read the man page or run the help command of the tool. They usually contain plenty of information. But bending the tool to suit the agent has some value. The GNU-AI suite of userland tools? Unfortunately it's possible that every model will settle on a different average. If that's the case we can't bend to every model. Models will have to bend to whatever we want to use.
Of course it can read the man page and run `cmd --help`.
Now you've wasted context on, what? Learning how to use the tool. And it will waste context on it every single time. (You can write skills to mitigate this a bit, but still).
The alternative is to make the tool work as the user (an LLM in this case) expects it to work, without having to resort to the manual.
> Let the Agent use the CLI and if it guesses the wrong option, you make that the RIGHT option. Every time it doesn't guess something right, you change it.
This sounds backwards, and it presumes that LLMs, the statistics machines that they are, get it right when they "average out" to the wrong command. No: fix the agent's behavior, don't change the CLI to accommodate it.
I don't remember the specific examples off the top of my head (some are definitely ffmpeg commands), but I do know that when LLMs keep hallucinating command-line flags that don't exist for a specific command, their "suggestion" is often very reasonable, and so many developers are adding support for common hallucinations to their tools.
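For instance (a made-up case, not one of the actual ffmpeg reports): if models keep guessing `--output` against a tool that only ships `--out`, argparse can accept both spellings without breaking existing users:

```python
# Hypothetical example: accept a commonly guessed alias for a real flag.
import argparse

parser = argparse.ArgumentParser(prog="mytool")  # invented tool name
# "--out" is the original flag; "--output" is the spelling LLMs keep guessing.
parser.add_argument("--out", "--output", dest="out", metavar="FILE",
                    help="where to write the result")

args = parser.parse_args(["--output", "result.txt"])
print(args.out)  # -> result.txt, whichever spelling was used
```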
Not to belabor my point, but I think "adding support to tools for common hallucinations" is a bad idea. It sounds like something a vibe-coded project being spammed with issues by agents might do, not so much a serious, mature project.
Well, we will have to agree to disagree. It's true that LLMs might spam vibe-coded changes, but the interesting difference is that, generally speaking, their "suggestions" are very reasonable and in hindsight represent useful changes that make the commands more useful for everyone, humans included.
It's also likely that agents would be better off if they didn't have to deal with JSON vomit. I'm optimistic that agent frameworks will eventually come full circle and realize that concise, linear, teletype-style CLIs, a.k.a. old-school UNIX, are actually very effective and efficient for agents as well as humans!
Partially, but I think if you design for agents, their needs are different enough from a human's that you end up making different choices.
I found myself nodding along to the linked tweet/article. Recently I did many rounds of iterative user-centered design with an agent to improve the CLI interface in Jobs [0], a task manager for LLMs. The resulting CLI follows most of these principles.
One great idea from the tweet that I will be adding: a `feedback` subcommand, for the agent to capture feedback while they work.

[0]: https://github.com/bensyverson/jobs
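A minimal sketch of what that could look like, assuming the feedback simply lands in a local log file (the subcommand name is from the tweet; the tool name and storage here are invented):

```python
# Sketch of a `feedback` subcommand: the agent records friction as it
# works, and a maintainer reads the log later. Storage is invented.
import argparse

parser = argparse.ArgumentParser(prog="jobs")
sub = parser.add_subparsers(dest="command", required=True)

fb = sub.add_parser("feedback", help="record feedback from the calling agent")
fb.add_argument("message", help="what was confusing, missing, or surprising")

args = parser.parse_args()
if args.command == "feedback":
    with open(".feedback.log", "a") as log:  # invented storage location
        log.write(args.message + "\n")
    print("noted, thanks")  # cheap, token-light acknowledgement
```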
It feels like most of the “rules” are “don’t be an ass to your consumer”.