Description
llms.txt proposes a standardized approach to provide large language models (LLMs) with machine-readable information derived from complex websites. It aims to address the limitations of LLMs in processing web content due to constrained context windows and the challenges of parsing HTML with navigation, ads, and JavaScript.
The proposal advocates a Markdown-based file, typically located at a website's root, that provides LLMs with relevant, curated information, improving the web's utility for automated systems much as robots.txt facilitates crawler interactions.
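For illustration, a minimal sketch of what such a file might look like follows, based on the proposal's conventions (an H1 title, a blockquote summary, and H2 sections containing Markdown link lists). The project name, URLs, and section contents here are invented for the example, not taken from the proposal.

```markdown
# ExampleProject

> ExampleProject is a small widget-parsing library; this file points LLMs at the most useful docs.

## Docs

- [Quick start](https://example.com/docs/quickstart.md): installation and a first example
- [API reference](https://example.com/docs/api.md): list of public functions

## Optional

- [Changelog](https://example.com/changelog.md): release history, safe to skip when context is tight
```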
The proposal is at an early stage and does not introduce any new format.
Currently, projects like FastHTML and nbdev have adopted llms.txt.
Challenges & Risks
llms-full.txt may exceed LLM context windows for large documentation sets (a rough token-count check is sketched at the end of this section).
The proposed /llms.txt path diverges from the increasingly adopted /.well-known/ prefix for well-known URIs (e.g., /.well-known/security.txt), which may cause conflicts or confusion. (See also AnswerDotAI/llms-txt#44.)
Current use is primarily within AnswerDotAI and fast.ai projects, with limited evidence of broader uptake.
(2025-05-20 update: Context7 also uses llms.txt.)
(2025-05-24 update: A number of tools that use llms.txt are listed at https://github.com/thedaviddias/llms-txt-hub.)
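To make the context-window concern above concrete, here is a rough sketch of how one might check whether a site's llms-full.txt fits a given model's budget. This is not part of the proposal; the URL, the tokenizer choice, and the 128k-token limit are assumptions for illustration.

```python
# Rough check: does a site's llms-full.txt fit in a model's context window?
# Assumes the `requests` and `tiktoken` packages; the URL, encoding, and
# 128k-token limit are illustrative assumptions, not part of the proposal.
import requests
import tiktoken

CONTEXT_LIMIT = 128_000  # tokens; depends on the target model

text = requests.get("https://example.com/llms-full.txt", timeout=10).text
tokens = tiktoken.get_encoding("cl100k_base").encode(text)

print(f"{len(tokens)} tokens; fits in a {CONTEXT_LIMIT}-token window: "
      f"{len(tokens) <= CONTEXT_LIMIT}")
```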