Purpose

Generates a /llms/{product-slug}.txt file for each product and deploys the files to Vercel. These files provide detailed product information optimized for AI consumption.

Architecture

Pipeline Flow

This endpoint is the third step in a 3-step product pipeline:
1. discover-products
   ├── Extract products from content
   └── Return entity_ids

2. generate-product-prompts
   ├── Takes entity_ids from step 1
   └── Generates 10 prompts per product

3. generate-product-llms-txt (this endpoint)
   ├── Takes entity_ids from step 1
   ├── Generates /llms/{slug}.txt files
   └── Deploys to Vercel

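The three steps above can be sketched as a simple orchestration. The function bodies here are hypothetical stand-ins, not the actual endpoint handlers; only the step order and the data handed between steps follow the pipeline described above.

```python
# Illustrative sketch of the 3-step pipeline; these functions are
# hypothetical stand-ins for the real endpoint handlers.

def discover_products(pages: list[str]) -> list[str]:
    # Step 1: extract products from content and return entity_ids.
    return [f"entity-{i}" for i, _ in enumerate(pages)]

def generate_product_prompts(entity_ids: list[str]) -> dict[str, list[str]]:
    # Step 2: generate 10 prompts per product.
    return {eid: [f"prompt {n}" for n in range(10)] for eid in entity_ids}

def generate_product_llms_txt(entity_ids: list[str]) -> list[str]:
    # Step 3: generate /llms/{slug}.txt files (Vercel deploy omitted here).
    return [f"/llms/{eid}.txt" for eid in entity_ids]

entity_ids = discover_products(["/products/teddy", "/products/robot"])
prompts = generate_product_prompts(entity_ids)
files = generate_product_llms_txt(entity_ids)
```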
File Structure

File           Path                Content
Root llms      /llms.txt           Business overview + links
Product llms   /llms/{slug}.txt    Product-specific content
Example:
/llms.txt              → Business overview
/llms/teddy-bear.txt   → Teddy Bear product details
/llms/robot-dog.txt    → Robot Dog product details
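The slug-to-path mapping shown above amounts to a one-line helper; `llms_path` is a hypothetical name used here for illustration:

```python
def llms_path(slug: str) -> str:
    # Hypothetical helper: map a product slug to its llms file path.
    return f"/llms/{slug}.txt"

llms_path("teddy-bear")  # → "/llms/teddy-bear.txt"
```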

Source URL Filtering

The service uses only content from the product's source_urls:
# Product has source_urls: ["/products/teddy", "/toys"]
# Pages available: ["/", "/about", "/products/teddy", "/toys", "/contact"]
# Filtered pages: ["/products/teddy", "/toys"]
This ensures each product's llms file contains only content relevant to that product.
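The filtering step above can be expressed as a membership check that preserves the original page order. This is a sketch under the assumption that pages and source_urls use the same path format, not the service's actual implementation:

```python
def filter_pages(pages: list[str], source_urls: list[str]) -> list[str]:
    # Keep only the pages listed in the product's source_urls,
    # preserving the original page order.
    allowed = set(source_urls)
    return [p for p in pages if p in allowed]

pages = ["/", "/about", "/products/teddy", "/toys", "/contact"]
filter_pages(pages, ["/products/teddy", "/toys"])
# → ["/products/teddy", "/toys"]
```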

Scalability

This architecture scales to thousands of products:
  • Root llms.txt stays small (~3KB) with just links
  • Each product gets its own file (~1-2KB)
  • AI crawlers can fetch specific product files on-demand
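The size argument above follows from the root file holding only links. A minimal sketch, assuming the root file is just the business overview plus one link line per product (the function name and link format are illustrative):

```python
def build_root_llms_txt(overview: str, products: dict[str, str]) -> str:
    # Hypothetical sketch: the root llms.txt holds the business overview
    # plus one short link line per product, so its size grows by only a
    # few dozen bytes per product.
    lines = [overview, ""]
    lines += [f"{name}: /llms/{slug}.txt" for name, slug in products.items()]
    return "\n".join(lines)

root = build_root_llms_txt(
    "Acme Toys, handcrafted toys.",
    {"Teddy Bear": "teddy-bear", "Robot Dog": "robot-dog"},
)
```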

Code Location

src/app/apis/cron/generate_product_llms_txt/routes.py
src/app/shared/products/generate_llms_txt.py