Appropriately, I think this was probably drafted by AI too:
> How does install.md work with my existing CLI or scripts?
> install.md doesn't replace your existing tools—it works with them. Your install.md can instruct the LLM to run your CLI, execute your scripts, or follow your existing setup process. Think of it as a layer that guides the LLM to use whatever tools you've already built.
(It doesn't X — it Ys. Think of it as a Z that Ws. This is LLM speak! I don't know why they lean on these constructions to the exclusion of all else, but they demonstrably do. The repo README was also committed by Claude Code. As much as I like some of the code that Claude produces, its READMEs suck.)
Yeah, removing that line right now. Went too fast and some of this copy is definitely low quality :(. Incredibly ironic for me to say that AI needs more supervision while working at the company proposing this, haha.
Any other feedback you have about the general idea?
I think my preferred version of this would be a hybrid. Keep the regular installer, add a file filled with information that an LLM can use to assist a human if the install script fails for some reason.
If the installer was going to succeed in a particular environment anyway, you definitely want to use that instead of an LLM that might sporadically fail for no good reason in that same environment.
If the installer fails then you have a "knowledge base" to help debug it, usable by humans or LLMs; and if that fails too, well, the regular installer had already failed, so hopefully you're not worse off. If the user runs the helper LLM in yolo mode then the consequences are on them.
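Concretely, something like this sketch of the hybrid, with every file name hypothetical:

```sh
#!/bin/sh
# Hybrid sketch: run the deterministic installer first; only on failure
# point the human (or their agent) at an LLM-readable knowledge base.
if ./install.sh; then
  echo "installed ok"
else
  echo "install.sh failed." >&2
  echo "debugging notes for humans or LLMs: ./INSTALL_HELP.md" >&2
  exit 1
fi
```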
All the insecurity of running a random bash script, with all the terrifying stochasticity of an LLM in one "makes you want to tear your eyes out" package!
Fascinating. My thinking was that this is an upgrade over a bash script because you can prompt the AI to check it, clear installs with you, or otherwise investigate safety before installing in a way that isn't natural with *.sh. Does that make any amount of sense or am I just crazy?
Time and time again, be it "hallucination", prompt injection, or just plain randomness, LLMs have proven themselves woefully insufficient at best when asked to work with untrusted documents. This simply changes the attack vector rather than solving a real problem.
Bash scripts give you visibility into what they are going to do by virtue of being machine instructions in a deterministic language. MD files you pipe into matrix multiplications have a much lower chance of being explainable.
Yeah, someone else was pointing out that bash scripts are guaranteed to do the same thing on every system, which I think is in the same vein as your feedback. It's for sure a downside of the markdown approach that I need to explain in the proposal docs.
> "Installing software is a task which should be left to AI."
So, after teaching people to outsource their reasoning to an LLM, LLMs are now actively coaching folks to use LLMs for tasks for which it makes no sense at all.
This is a "solution" looking for a problem.
I would think that the common bash scripts we already have would provide an agent better context for installation than a markdown file, and even better, they already work without an LLM.
I can definitely see where you're coming from and agree to a large extent. I was asking myself that question a lot when thinking about this.
What pushed me over the edge was actually feeding bash install scripts into agents and seeing them not perform well. It does work, but a lot worse than this install.md thing.
In the docs for the proposal I wrote the following:
>install.md files are direct commands, not just documentation. The format is structured to trigger immediate autonomous execution.[1]
[1]: https://www.installmd.org/
What is the benefit of having this be a standard? Can't an agent follow a guide just as easily in a document with similar content in a different structure?
Primarily, it's a predictable location for agents. The AI not having to fetch the sitemap or llms.txt and then make a bunch of subsequent queries saves a lot of time and tokens. There's an advantages section[1] within the proposal docs.
[1]: https://www.installmd.org/#advantages
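To make the time-and-tokens point concrete (the URLs and crawl order here are hypothetical):

```sh
# With a well-known location, an agent needs exactly one request:
curl -fsSL https://example.com/install.md

# Without one, discovery becomes a crawl plus follow-up fetches:
curl -fsSL https://example.com/llms.txt
curl -fsSL https://example.com/docs/getting-started
# ...and so on, each page costing tokens, until install steps turn up.
```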
At some point in the future (if not already), Claude will install malware less often on average. Just like Waymos crash less frequently.
Once you accept that installation will be automated, standardized formats make a lot of sense. Big question is whether this particular format, which seems solid, gets adopted - probably mostly a timing question.
Great, I can now combine the potential maliciousness of a script with the potential vulnerabilities of an AI Agent!
Jokes aside, this seems like a really weird thing to leave to agents; I'm sure it's definitely useful, but how exactly is this more secure? A bad actor could just prompt-inject Claude (an issue I'm not sure can ever be fixed with our current model of LLMs).
And surely this is significantly slower than a script: Claude can take 10-20 seconds just to check the Node version, if not longer with human approval for each command, while a script could do that in milliseconds (it's a one-liner, sketched below).
Sure it could help it work on more environments, but stuff is pretty well standardised and we have containers.
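For scale, the deterministic version of that version check really is a one-liner (Node 20 is an arbitrary example):

```sh
# Fail fast unless Node 20.x is on PATH; runs in milliseconds.
node --version | grep -q '^v20\.' || { echo "need Node 20" >&2; exit 1; }
```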
I think this part in the FAQ wraps it up neatly:
"""
What about security? Isn't this just curl | bash with extra steps?
This is a fair concern. A few things make install.md different:
Human-readable by design. Users can review the instructions before execution. Unlike obfuscated scripts, the intent is clear.
Step-by-step approval. LLMs in agentic contexts can be configured to request approval before running commands. Users see each action and can reject it.
No hidden behavior. install.md describes outcomes in natural language. Malicious intent is harder to hide than in a shell script.
Install.md doesn't eliminate trust requirements. Users should only use install.md files from sources they trust—same as any installation method.
"""
So it is just curl with extra steps. Scripts aren't obfuscated, you can read them; and if they are obfuscated, then they aren't going to bother with an install.md, and you (the user) should really think thrice before installing.
Step-by-step approval also sorta betrays the initial bit about leaving installs to AI instead of wasting time reading instructions.
Malicious intent is harder to hide, sure, but really, if you have any doubt in your mind about an author's potential malfeasance you shouldn't be running their code at all. Wrapping Claude around this doesn't make it any safer when possible exploits and malware are likely baked into the software you are trying to install, not the install steps.
tldr; why not just have @grok is this script safe?
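For what it's worth, the "step-by-step approval" the FAQ leans on boils down to a gate like the sketch below, which is exactly why it contradicts the don't-read-the-instructions pitch: someone still reads every command. (The agent proposing commands is elided; all names here are made up.)

```sh
# Approval gate sketch: the agent writes one proposed command per line;
# nothing runs until a human types "y" for that specific command.
while IFS= read -r cmd; do
  printf 'agent wants to run: %s\napprove? [y/N] ' "$cmd"
  IFS= read -r ok < /dev/tty
  if [ "$ok" = "y" ]; then sh -c "$cmd"; else echo "skipped: $cmd"; fi
done < proposed_commands.txt
```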
fascinating. i personally (biased bc i work at Mintlify) think a markdown file makes more sense than a bash script because at least Claude kind of has your best interests at heart.
Wait, but being serious. You can prompt the AI when you feed it this file to ask "do you see anything nefarious" or "follow these instructions, but make sure you ask me every time you install something because I want to check the safety" in a way that you can't when you pipe a script into bash.
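Something like this, assuming Claude Code's non-interactive print mode, with the prompt wording purely illustrative:

```sh
# Ask the model to audit the instructions without executing anything.
curl -fsSL https://example.com/install.md |
  claude -p "List every command these instructions would run and flag anything suspicious. Do not execute anything yet."
```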
Does that make any sense or am I just off my rocker?
No. Absolutely not. The opposite in fact. Your bash script is deterministic. You can send it to 20 AIs or have someone fluent read it. Then you can be confident it’s safe.
An LLM will run the probabilistically likely command each time. This is like using Excel’s ridiculous feature to have a cell be populated by copilot rather than having the AI generate a deterministic formula.
>i personally (biased bc i work at Mintlify) think a markdown file makes more sense than a bash script because at least Claude kind of has your best interests at heart.
Most of the largest trends in "how to deploy software" revolve around making things predictable and consistent. The idea of abandoning this in favor of making an LLM do the work seems absurd. At least the bash script can be replicated exactly across machines and will do the same thing in the same situation.
Yeah, I'm going to add that as one of the downsides to the docs. The stochastic nature of the markdown vs. a script is for sure a reason to not adopt this.
The intent here is that this would be adopted by harder-to-install devtools, ones unpackaged to the extent that you need a dependency like a specific version of Node, Python, or a dev lib.
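For a sense of what that case looks like, here's a made-up install.md for such a tool (structure illustrative, not the official spec):

```sh
# Write out a hypothetical install.md for a tool that pins Node 20:
cat > install.md <<'EOF'
# Install exampletool

1. Check the runtime: `node --version` must report v20.x.
2. If it doesn't, install Node 20 with your platform's package manager.
3. Run `npm install -g exampletool`.
4. Verify: `exampletool --version` prints a version and exits 0.
EOF
```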
> If the installer fails then you have a "knowledge base" to help debug it, usable by humans or LLMs
I think I agree with you that it should assist in the event of failure instead of jumping straight to install, though. Will think more about that.
I’m not sure this solution is needed with frontier models.
Ten more glorious years to installer.sh
> Installing software is a task which should be left to AI.
Absolutely not, in my view. This is a very bad idea.
$ curl | bash was bad enough. But $ curl -fsSL | claude looks even worse.
What could possibly go wrong?
That is such a wild thing to say. Unless this whole thing is satire...
This forum gets more depressing by the day.
How we've all been blue-pilled. Sigh..
Just like installing spice racks is a task which should be left to the military engineer corps.
What?? How do I get off of this train? I used to come to Hacker News for a reason... what the fuck am I reading?
That way we can have entire projects with nothing but Markdown files. And we can run apps with just `claude run app.md`. Who needs silly code anyway?
This is such an insane statement. Is this satire?