Headless CMS for AI

The headless CMS with built-in search and AI

A content platform for LLM-powered applications. Define a schema — get vector search, full-text search, and auto-generated embeddings out of the box. Native LangChain integration included. No vector databases, no pipelines, no glue code.

The Problem

The traditional headless CMS stack

A headless CMS like Contentful, Sanity, or Strapi stores your content and exposes it via API. But the moment you need search — and you always do — you're integrating external services.

Add AI features and you need a vector database, an embedding pipeline, and sync logic to keep everything consistent. What started as "just a CMS" becomes five services with five points of failure. Structured content for LLM retrieval shouldn't require this much glue.

1. Headless CMS: Contentful, Strapi, Sanity
2. Search service: Algolia, Elasticsearch
3. Vector database: Pinecone, Weaviate
4. Embedding API: OpenAI, Cohere
5. Sync scripts: ETL pipelines, cron jobs

Each service means separate billing, separate uptime, and separate sync logic.

Features

What if your CMS had vector search built in?

A CMS with built-in search where storage, search, and AI are one system — not separate services stitched together.

Schema → API in seconds

Define your content model — get search, list, and retrieve endpoints instantly. No manual routing. No controllers. No ORM.

Save content

import { ManagementClient } from '@foxnose/sdk'

// client: an initialized ManagementClient (credentials omitted)
await client.createResource('products', {
  data: {
    title: { en: 'Winter Parka' },
    price: 189,
    category: 'clothing'
  }
})
// → indexed + embedded automatically

Search instantly

import { FluxClient } from '@foxnose/sdk'

// client: an initialized FluxClient (delivery keys omitted)
await client.search('products', {
  vector_search: { query: 'warm jacket' },
  where: {
    $: {
      all_of: [
        { category__eq: 'clothing' },
        { price__lte: 200 }
      ]
    }
  }
})
// → semantic match: "warm jacket" ≈ "Winter Parka"

Search without Algolia

No external search service. No webhooks to sync content. The hybrid search API lives where your data lives.

Before: CMS → webhook → Algolia, or CMS → ETL → Elasticsearch.
After: Save → searchable.

Full-text + semantic + filters in one query
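As an illustration, a small client-side helper can assemble that combined request body. This is a sketch, not part of the FoxNose SDK — only the filter operators that appear in this page's examples (`all_of`, `__eq`, `__lte`) are assumed:

```typescript
// Sketch of a helper that assembles the hybrid-search request body
// shown above. Not part of the FoxNose SDK; only the filter operators
// from this page's examples are assumed.

type Filter = Record<string, string | number | boolean>;

interface SearchBody {
  vector_search?: { query: string };
  where?: { $: { all_of: Filter[] } };
}

function buildHybridSearch(query: string, filters: Filter[]): SearchBody {
  const body: SearchBody = {};
  if (query) body.vector_search = { query };   // semantic part
  if (filters.length > 0) {
    body.where = { $: { all_of: filters } };   // structured part
  }
  return body;
}

const body = buildHybridSearch('warm jacket', [
  { category__eq: 'clothing' },
  { price__lte: 200 },
]);
console.log(JSON.stringify(body, null, 2));
```

The returned object matches the `_search` payload shape used throughout this page; omitting the query or the filters simply drops that clause.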

AI-ready on every save

No vector DB. No embedding pipeline. Mark a field as vectorizable — embeddings are generated automatically.

Save content → embed automatically → RAG-ready instantly.

Content + schema versioning

Other CMSes version content. FoxNose versions everything — content, schema, and the relationship between them.

Schema v4 (today): Added "seo_description" field [schema]

Content v12 (2 hours ago): Updated description on "Winter Parka" [indexed, re-embedded]

Content v11 (yesterday): Updated pricing on "Winter Parka"

Schema v3 (last week): Added "variants" relationship

Full audit trail — who changed what, when, and why
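The "re-embedded" badge in the timeline above can be made concrete with a sketch: given two revisions of a resource and the schema's vectorizable flags, only changes to vectorizable fields require new embeddings. FoxNose does this server-side; the helper names and field values below are purely illustrative:

```typescript
// Illustrative sketch: decide whether a save needs re-embedding by
// diffing two revisions against the schema's vectorizable fields.
// FoxNose performs this server-side; names here are hypothetical.

type Revision = Record<string, unknown>;

function changedFields(prev: Revision, next: Revision): string[] {
  const keys = new Set([...Object.keys(prev), ...Object.keys(next)]);
  return [...keys].filter(
    (k) => JSON.stringify(prev[k]) !== JSON.stringify(next[k])
  );
}

function needsReembedding(
  prev: Revision,
  next: Revision,
  vectorizable: Set<string>
): boolean {
  return changedFields(prev, next).some((f) => vectorizable.has(f));
}

// v11 → v12: only the description changed, and it is vectorizable.
const v11 = { title: 'Winter Parka', description: 'Warm jacket', price: 199 };
const v12 = { title: 'Winter Parka', description: 'Warm, waterproof jacket', price: 199 };
console.log(needsReembedding(v11, v12, new Set(['description']))); // true
```

A price-only change (like Content v11 above) would leave the vectors untouched under this rule.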

Why FoxNose

What makes this different

Search is not an afterthought

In a traditional headless CMS, you store content in one place and search it through another. You maintain sync scripts, manage Algolia indexes, and handle stale data. FoxNose builds search into the storage layer itself. Every piece of content is instantly searchable — full-text, semantic, and filtered — the moment you save it.

An AI-native CMS, not bolted on

Most headless CMSes weren't designed for AI workloads. Turning content into something an LLM can use means integrating a vector database, building an embedding pipeline, and writing sync logic. FoxNose is a CMS for RAG — it generates embeddings automatically on every save, so your content is ready for retrieval-augmented generation without extra infrastructure.

Fewer services, fewer bills, fewer failures

A typical content stack includes a CMS, a search service, and increasingly a vector DB — each with its own billing, uptime, and data consistency guarantees. FoxNose consolidates these into one platform. One API key, one data source of truth, one bill.

Schema-first with real versioning

Define your content model with typed fields, relationships, and constraints. FoxNose versions schemas independently and non-destructively, so you can evolve your content structure without breaking existing content or APIs. Full content versioning is included — track exactly what changed, when, and by whom.

Search

Query across collections in one call

Traditional headless CMSes force you to fetch each content type separately and stitch results together in code. FoxNose lets you join up to 3 collections in a single search request — with filters on every joined collection.

Inner & left joins

Require a match or return results even when the related record is missing — just like SQL.

Per-collection filters

Filter each joined collection independently — e.g. only published articles by French authors.

Works with vector search

Combine semantic search with joins and filters — all in one request.

Find published articles by French authors, with companies

POST /v1/articles/_search

{
  "join": {
    "as": {
      "authors": "people.authors",
      "companies": "orgs.companies"
    },
    "relations": [
      {
        "target": "authors",
        "source_field": "author",
        "relation_type": "inner"
      },
      {
        "source": "authors",
        "target": "companies",
        "source_field": "company",
        "relation_type": "left"
      }
    ]
  },
  "where": {
    "$": {
      "all_of": [{ "status__eq": "published" }]
    },
    "authors": {
      "all_of": [{ "country__eq": "FR" }]
    }
  }
}
// → Articles + author data + company data
// → One request. No N+1 queries.

Comparison

Headless CMS comparison: Traditional vs. FoxNose

Feature by feature, capability by capability.

Content & API

Feature | Traditional | FoxNose
Content modeling | Visual editor or config | Schema-first + Dashboard
API style | REST and/or GraphQL | Auto-generated REST from schema
Localization | Plugin or paid (e.g. Strapi) | Field-level, built-in
Content versioning | Drafts/published (Sanity, Strapi v5) | Full revision history per resource
Schema versioning | Migrations or code-based (no rollback) | Non-destructive, per-version with rollback
Granular RBAC | Varies (often paid tier) | Built-in, per-folder
Environments | Paid add-on | Included with isolated data
Multiple APIs from one content store | ✗ | Per-client APIs with own keys & field visibility
Hierarchical URLs (strict reference) | ✗ | Auto-generated parent-child URL structure
Population (resolve references) | GraphQL or multiple requests | Built-in ?populate param
SDKs | Varies by vendor | TypeScript + Python

Search

Feature | Traditional | FoxNose
Full-text search | Basic or external (Algolia, ES) | Built-in, typo-tolerant
Semantic / vector search | External vector DB | Built-in
Hybrid search (text + vector + filters) | DIY integration | Single API call
Cross-collection joins in search | Multiple queries or GraphQL resolvers | Built-in, up to 3 joins per query
Search indexing on save | Requires sync to external service | Automatic, no sync scripts

AI & RAG

Feature | Traditional | FoxNose
Auto-embedding generation | ✗ | Per-field control (vectorize flag)
RAG-ready content | Requires pipeline | Out of the box
LangChain integration | ✗ | Python + JS retrievers
No external vector database needed | ✗ | ✓

Operations

Feature | Traditional | FoxNose
One service for CMS + search + AI | Separate vendors & bills | Single platform
GDPR compliance mode | Varies | EU data residency + EU-only processing
Audit trail | Varies | Content + schema + access

Use Cases

Built for modern content workloads

Looking for the best headless CMS for AI? From marketing sites to LLM-powered applications — one platform for all of it.

Marketing websites

Manage pages, blog posts, and landing pages. Deliver content to any frontend — Next.js, Nuxt, Astro, or mobile apps. Built-in search means visitors find what they need.

Pages · Blog · Landing pages · Multi-site

Product catalogs & e-commerce

Structure products with flexible schemas. A headless CMS with hybrid search finds "warm winter jacket" even when the listing says "insulated parka". Filter by price, category, availability — all in one query.

Products · Hybrid search · Faceted filtering

Documentation & knowledge bases

Version your docs alongside schema changes. Readers search by meaning, not just keywords. Power AI assistants that answer questions based on your actual documentation.

Docs · Versioning · AI-ready · RAG

AI-powered applications

A headless CMS for RAG apps — without a separate vector database. Store content, auto-generate embeddings, search semantically. One CMS for LLM retrieval that replaces the entire RAG infrastructure stack.

RAG · Vector search · LLM · Chatbots

API Architecture

One content store, multiple APIs

Most headless CMSes give you one API for all clients. FoxNose lets you create multiple APIs from the same content — each with its own key, permissions, and field visibility.

Public storefront API

Product catalog and help articles. No sensitive fields exposed.

Private AI assistant API

Vector search over knowledge base for RAG. Full access to all content.

Partner integration API

Wholesale pricing and inventory for resellers. Separate key, separate access.

Same content, no duplication. Collections can be shared across APIs — a product catalog powers both the storefront and the AI assistant.

Your Content Store

Products · FAQ · Articles · Knowledge Base · Inventory

Storefront API (public):
GET /products/
GET /articles/
GET /faq/

AI Assistant API (private):
POST /products/_search
POST /kb/_search
GET /faq/

Partners API (private):
GET /products/
GET /inventory/
GET /price-lists/
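The field-visibility idea can be sketched as a simple projection: each API sees only its allowed slice of a shared resource. In FoxNose this is configured per client API rather than written in application code; the API names and field lists below are hypothetical:

```typescript
// Illustrative sketch of per-API field visibility. API names and
// field lists are hypothetical; FoxNose applies this per client API,
// not in application code.

type Resource = Record<string, unknown>;

const visibleFields: Record<string, string[]> = {
  storefront: ['title', 'description', 'price'],
  partners: ['title', 'price', 'wholesale_price', 'stock'],
};

function projectForApi(api: string, resource: Resource): Resource {
  const allowed = visibleFields[api] ?? [];
  return Object.fromEntries(
    Object.entries(resource).filter(([k]) => allowed.includes(k))
  );
}

const product = {
  title: 'Winter Parka',
  description: 'Warm, waterproof jacket',
  price: 189,
  wholesale_price: 120,
  stock: 42,
};

// The public storefront never sees wholesale pricing or stock levels.
console.log(projectForApi('storefront', product));
// → { title: 'Winter Parka', description: 'Warm, waterproof jacket', price: 189 }
```

The same `product` record answers both storefront and partner requests; only the projection differs.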
Relational API

Hierarchical URLs, automatically

Enable strict reference on a folder — and FoxNose generates parent-child URL structure automatically. No routing config. No middleware. Just ownership semantics baked into the schema-first API.

Data integrity

A lesson can't exist without its module. A module can't exist without its course. Enforced at the platform level.

Cascade operations

Delete a course — all its modules and lessons follow. Permissions inherit down the tree.

Choose per folder

Some collections are flat, some are hierarchical. Mix both in the same project.

Flat structure — independent collections

GET /courses/
GET /modules/
GET /lessons/

Strict reference — parent-child ownership (recommended)

GET /courses/
GET /courses/:course/modules/
GET /courses/:course/modules/:module/lessons/

URL paths reflect real data relationships. Auto-generated from folder structure.
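The composition rule is simple: each level appends its folder name (and, for parent levels, a resource key) to the parent's path. A toy path builder makes this concrete — purely illustrative, since FoxNose generates these routes itself from the folder structure:

```typescript
// Toy sketch of how strict-reference URLs compose. FoxNose generates
// these routes automatically; this only illustrates the pattern.

function nestedPath(segments: Array<[folder: string, key?: string]>): string {
  return (
    '/' +
    segments
      .map(([folder, key]) => (key ? `${folder}/${key}` : folder))
      .join('/') +
    '/'
  );
}

// List all lessons of module "m2" in course "c1":
console.log(nestedPath([['courses', 'c1'], ['modules', 'm2'], ['lessons']]));
// → /courses/c1/modules/m2/lessons/
```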

API

Resolve references in one request

Traditional CMSes return reference IDs — then you make another request to fetch the related resource. With FoxNose, add ?populate=author and get the full object inline. No N+1 queries.

Works on all endpoints

Search, list, and get-by-key — populate works everywhere.

No GraphQL needed

Get nested data in REST without the complexity of a GraphQL schema.

Without populate — two requests

// Request 1: get the article
GET /articles/Ed3gw2uNcA4h/

// Response: just a reference ID
{ "author": "jruF38Qu1P20" }

// Request 2: resolve the author
GET /authors/jruF38Qu1P20/

With populate — one request

GET /articles/Ed3gw2uNcA4h/?populate=author

{
  "title": { "en": "Refund Policy" },
  "author": {
    "_sys": { "key": "jruF38Qu1P20" },
    "data": {
      "name": "John Smith",
      "role": "Editor"
    }
  }
}

Code Examples

From schema to searchable REST API

Define your content model. Save content. Search it — with keywords, meaning, or both. That's the entire workflow.

1. Define your content schema
// Create a "products" collection via Management API or Dashboard
// Then define fields on a schema version:
{
  "title":       { "type": "string", "localizable": true },
  "description": { "type": "text", "vectorizable": true },
  "price":       { "type": "number" },
  "category":    { "type": "string", "enum": ["electronics", "clothing", "home"] },
  "published":   { "type": "boolean" }
}
2. Save content — embeddings generated automatically
// Management API
POST /v1/products

{
  "title": { "en": "Insulated Winter Parka" },
  "description": { "en": "Warm, waterproof jacket for extreme cold..." },
  "price": 189.00,
  "category": "clothing",
  "published": true
}
// → Stored + full-text indexed + vector embedded
3. Search by meaning, keywords, or both — plus filters
// Flux API
POST /v1/products/_search

{
  "vector_search": { "query": "warm winter jacket" },
  "where": {
    "$": {
      "all_of": [
        { "category__eq": "clothing" },
        { "price__lte": 200 },
        { "published__eq": true }
      ]
    }
  }
}
// → Finds "Insulated Winter Parka" via semantic match
// → Even though "warm winter jacket" ≠ "insulated parka"

LangChain Integration

From headless CMS to LangChain in minutes

Your content is already indexed and embedded — ready for LLM retrieval. Connect it to any language model with a few lines of code. FoxNose has native LangChain retrievers for Python and JavaScript, turning your headless CMS with embeddings into a RAG-ready knowledge base.

1. Create a retriever
# pip install langchain-foxnose langchain-openai

from langchain_foxnose import FoxNoseRetriever

retriever = FoxNoseRetriever.from_client_params(
    base_url="https://your-env.fxns.io",
    api_prefix="content",
    public_key="YOUR_PUBLIC_KEY",
    secret_key="YOUR_SECRET_KEY",
    folder="products",
    search_mode="hybrid",
    content_field="description",
)
2. Build a RAG chain
from langchain_openai import ChatOpenAI
from langchain.chains import RetrievalQA

qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model="gpt-4o"),
    retriever=retriever,
)

answer = qa.invoke("What winter jackets do you have under $200?")
print(answer["result"])
# → "We have the Insulated Winter Parka at $189..."

No vector database. No embedding pipeline. No chunking logic. Your CMS content goes straight into a RAG chain.

Learn more about building RAG applications with FoxNose →

Ready to simplify your content stack?

FoxNose is in open beta — full access, zero cost. If you're looking for a Sanity alternative or Contentful alternative with built-in vector search and AI, start here.