Update an AI website with content changes from the detect-changes API. Uses THREE parallel Gemini calls (same pattern as onboarding):
  1. Update llms.txt
  2. Update homepage + Q&A pages
  3. Update data.json (Schema.org)
This is the second step in the 3-API update flow:
  1. detect-changes - Find what changed
  2. update-ai-site (this endpoint) - Update the AI website
  3. discover-products-from-changes - Find new products
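
A rough sketch of the chained flow, assuming the endpoints are plain HTTP POSTs and that the base URL and paths mirror the endpoint names (all placeholders, not the actual API surface):

```python
import requests

BASE = "https://api.example.dev"  # placeholder base URL

# 1. Find what changed on the source site.
changes = requests.post(f"{BASE}/detect-changes", json={"business_id": "org_2abc123"}).json()

# 2. Update the AI website; detect-changes output is passed through directly.
result = requests.post(f"{BASE}/update-ai-site", json=changes).json()

# 3. Look for new products among the changed pages.
requests.post(f"{BASE}/discover-products-from-changes", json=changes)
```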

How It Works

1. Fetch current AI site files (llms.txt, homepage HTML, data.json)
2. Run 3 PARALLEL Gemini calls (see the sketch after this list):
   - llms.txt: "Here's current + changes, update if needed"
   - Q&A pages: "Here's current + changes, update if needed"
   - data.json: "Here's current + changes, update if needed"
3. Check if any product source URLs changed → regenerate affected product llms
4. Generate files via generate_ai_site (only new + changed replica pages)
5. Deploy to Vercel
6. Store updated site_map + page_hashes
7. Return changed URLs for IndexNow submission
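
Sketched as a handler, the sequence might look roughly like this (every helper name is hypothetical; this illustrates the ordering of the steps, not the actual implementation):

```python
import asyncio

# All helpers called below are hypothetical placeholders for the real steps.
async def handle_update(payload: dict) -> dict:
    # 1. Fetch the current AI site files (llms.txt, homepage HTML, data.json).
    current = await fetch_current_site_files(payload["business_id"])

    # 2. Three parallel "current + changes, update if needed" Gemini calls.
    llms_txt, qa_pages, data_json = await asyncio.gather(
        update_llms_txt(current, payload),
        update_qa_pages(current, payload),
        update_data_json(current, payload),
    )

    # 3. Regenerate product llms files whose source URLs changed.
    product_llms = await regenerate_affected_product_llms(payload)

    # 4-5. Build only the new + changed replica pages, then deploy.
    files = generate_ai_site(llms_txt, qa_pages, data_json, product_llms)
    deployment_url = await deploy_to_vercel(files)

    # 6. Persist the updated site_map and page_hashes.
    await store_site_state(payload)

    # 7. Changed URLs go back to the caller for IndexNow submission.
    return {"deployment_url": deployment_url, "changed_urls": collect_changed_urls(payload)}
```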

Product LLMs Regeneration

When pages change, this endpoint checks if any existing products have product_source_urls that overlap with the changed URLs. If so, those product llms files are regenerated.
Products in DB:
- Sandals: product_source_urls = ["/products/sandals", "/categories/footwear"]
- Boots: product_source_urls = ["/products/boots", "/winter-collection"]

Changed URLs from detect_changes:
- ["/products/sandals", "/about"]

Result:
- Sandals /llms/sandals.txt → REGENERATE (overlap: /products/sandals)
- Boots /llms/boots.txt → SKIP (no overlap)
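
The overlap check is a set intersection over source URLs. A minimal sketch that reproduces the example above (data structures are illustrative, not the actual DB schema):

```python
products = {
    "sandals": ["/products/sandals", "/categories/footwear"],
    "boots": ["/products/boots", "/winter-collection"],
}
changed_urls = {"/products/sandals", "/about"}

to_regenerate = [
    name for name, source_urls in products.items()
    if changed_urls.intersection(source_urls)
]
print(to_regenerate)  # ['sandals'] -> only /llms/sandals.txt is rebuilt
```
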
Efficient updates: Product llms files are only regenerated if their source content changed. This scales well for sites with hundreds of products.
Partial content limitation: When regenerating product llms, we only use content from the changed pages. If a product has 3 source URLs and only 1 changed, the regenerated file will be based on that 1 page's content. For a complete regeneration with all source content, use regenerate-fresh-website.

Request Body

Takes the output from detect-changes directly:
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| business_id | string | Yes | Clerk org ID |
| new_pages | array | Yes | Pages that didn't exist before (have markdown) |
| changed_pages | array | Yes | Pages with content changes (have markdown) |
| removed_urls | array | Yes | URLs that no longer exist |
| unchanged_pages | array | Yes | Unchanged pages (NO markdown) |
| updated_site_map | array | Yes | Current URL list |
| updated_hashes | object | Yes | Current hash map |
| business_info | object | Yes | Business entity and AI site info |
Key difference from the old API: unchanged_pages replaces all_pages. Unchanged pages don't have markdown because they weren't batch scraped.
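
An illustrative payload (all values are placeholders, and the exact shape of each page object beyond url/markdown is not specified here):

```json
{
  "business_id": "org_2abc123",
  "new_pages": [{ "url": "https://example.com/new-page", "markdown": "# New Page\n..." }],
  "changed_pages": [{ "url": "https://example.com/about", "markdown": "# About Us\n..." }],
  "removed_urls": ["https://example.com/retired-page"],
  "unchanged_pages": [{ "url": "https://example.com/contact" }],
  "updated_site_map": ["https://example.com/", "https://example.com/about", "https://example.com/new-page"],
  "updated_hashes": { "https://example.com/about": "a1b2c3d4" },
  "business_info": { "name": "Example Co" }
}
```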

Response

```json
{
  "status": "success",
  "files_updated": 42,
  "product_llms_regenerated": 2,
  "deployment_url": "https://example.searchcompany.dev",
  "project_name": "example-searchcompany-dev",
  "source_url": "https://example.com",
  "changed_urls": [
    "https://example.com/new-page",
    "https://example.com/about"
  ]
}
```
The changed_urls array should be submitted to IndexNow after deployment.
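
For example, the returned changed_urls can be submitted in a single batch call following the public IndexNow protocol (the host and key values below are placeholders):

```python
import requests

response = requests.post(
    "https://api.indexnow.org/indexnow",
    json={
        "host": "example.com",
        "key": "your-indexnow-key",  # placeholder: your site's verification key
        "keyLocation": "https://example.com/your-indexnow-key.txt",
        "urlList": [
            "https://example.com/new-page",
            "https://example.com/about",
        ],
    },
    headers={"Content-Type": "application/json; charset=utf-8"},
)
response.raise_for_status()  # IndexNow returns 200/202 on success
```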

Three Parallel Gemini Calls

Each call receives the current file + changes and decides what to update:

1. Update llms.txt

Current llms.txt: [current content]
Changes: [new/changed/removed pages]
Task: Update if needed, preserve Boosted sections

2. Update Q&A Pages

Current homepage HTML: [current structure]
Changes: [new/changed/removed pages]
Task: Update homepage + Q&A pages if needed, preserve Boosted sections

3. Update data.json

Current Schema.org: [current JSON-LD]
Changes: [new/changed/removed pages]
Task: Update if needed
Boosted sections are preserved - The Gemini prompts explicitly instruct the model NOT to modify any "Boosted" sections, as those are managed separately by the boosted pages cron.
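
A sketch of how the three prompts might be assembled and run concurrently, assuming a hypothetical call_gemini wrapper around the Gemini SDK (prompt wording is paraphrased from the templates above):

```python
import asyncio

BOOSTED_RULE = 'Do NOT modify any "Boosted" sections; they are managed by the boosted pages cron.'

async def run_three_updates(current: dict, changes: str) -> list[str]:
    prompts = [
        f"Current llms.txt:\n{current['llms_txt']}\n\nChanges:\n{changes}\n\n"
        f"Task: Update if needed. {BOOSTED_RULE}",
        f"Current homepage HTML:\n{current['homepage_html']}\n\nChanges:\n{changes}\n\n"
        f"Task: Update homepage + Q&A pages if needed. {BOOSTED_RULE}",
        f"Current Schema.org JSON-LD:\n{current['data_json']}\n\nChanges:\n{changes}\n\n"
        "Task: Update if needed.",
    ]
    # call_gemini is a hypothetical async wrapper; one call per file, all in flight at once.
    return await asyncio.gather(*(call_gemini(p) for p in prompts))
```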

Database Updates

  • ai_sites.page_hashes - Updated hash map
  • ai_sites.site_map - Updated URL list
  • ai_sites.last_content_check_at - Timestamp
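
As a sketch, assuming an asyncpg-style client (the db handle, the business_id key, and the JSON serialization are all assumptions; only the column names come from the list above):

```python
import json

async def store_site_state(db, business_id: str, updated_site_map: list, updated_hashes: dict) -> None:
    # db is assumed to be an asyncpg-style connection; everything besides the
    # three ai_sites columns listed above is an assumption.
    await db.execute(
        """
        UPDATE ai_sites
           SET page_hashes = $1,
               site_map = $2,
               last_content_check_at = NOW()
         WHERE business_id = $3
        """,
        json.dumps(updated_hashes),
        json.dumps(updated_site_map),
        business_id,
    )
```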

External API Calls

  • Gemini 3 Flash Preview (x3 parallel) - Content updates
  • Gemini 3 Flash Preview (per affected product) - Product llms regeneration
  • Vercel API - Deployment