The headless CMS
with built-in search and AI
A content platform for LLM-powered applications. Define a schema — get vector search, full-text search, and auto-generated embeddings out of the box. Native LangChain integration included. No vector databases, no pipelines, no glue code.
The traditional headless CMS stack
A headless CMS like Contentful, Sanity, or Strapi stores your content and exposes it via API. But the moment you need search — and you always do — you're integrating external services.
Add AI features and you need a vector database, an embedding pipeline, and sync logic to keep everything consistent. What started as "just a CMS" becomes five services with five points of failure. Structured content for LLM retrieval shouldn't require this much glue.
Each service = separate billing, uptime, sync logic
What if your CMS had vector search built in?
A CMS with built-in search where storage, search, and AI are one system — not separate services stitched together.
Schema → API in seconds
Define your content model — get search, list, and retrieve endpoints instantly. No manual routing. No controllers. No ORM.
Save content
Search instantly
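As an illustration, a `products` collection would expose endpoints along these lines (based on the API shapes shown later on this page; exact paths may vary):

```
GET  /v1/products             list resources
GET  /v1/products/{key}       retrieve by key
POST /v1/products             save content
POST /v1/products/_search     full-text + semantic + filtered search
```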
Search without Algolia
No external search service. No webhooks to sync content. The hybrid search API lives where your data lives.
Full-text + semantic + filters in one query
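For example, a single Flux API request can mix a semantic query with structured filters (collection and field names here are illustrative):

```json
POST /v1/articles/_search
{
  "vector_search": { "query": "how do I reset my password" },
  "where": {
    "$": { "all_of": [{ "status__eq": "published" }] }
  }
}
```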
AI-ready on every save
No vector DB. No embedding pipeline. Mark a field as vectorizable — embeddings are generated automatically.
Content + schema versioning
Other CMSes version content. FoxNose versions everything — content, schema, and the relationship between them.
Added "seo_description" field
Updated description on "Winter Parka"
Updated pricing on "Winter Parka"
Added "variants" relationship
Full audit trail — who changed what, when, and why
What makes this different
Search is not an afterthought
In a traditional headless CMS, you store content in one place and search it through another. You maintain sync scripts, manage Algolia indexes, handle stale data. FoxNose is a headless CMS with search as the storage layer. Every piece of content is instantly searchable — full-text, semantic, and filtered — the moment you save it.
An AI-native CMS, not bolted on
Most headless CMSes weren't designed for AI workloads. Turning content into something an LLM can use means integrating a vector database, building an embedding pipeline, and writing sync logic. FoxNose is a CMS for RAG — it generates embeddings automatically on every save, so your content is ready for retrieval-augmented generation without extra infrastructure.
Fewer services, fewer bills, fewer failures
A typical content stack includes a CMS, a search service, and increasingly a vector DB — each with its own billing, uptime, and data consistency guarantees. FoxNose consolidates these into one platform. One API key, one data source of truth, one bill.
Schema-first with real versioning
Define your content model with typed fields, relationships, and constraints. Few headless CMSes version schemas at all; FoxNose versions them independently of content, so you can evolve your content structure without breaking existing content or APIs. Full content versioning is included: track exactly what changed, when, and by whom.
Query across collections in one call
Traditional headless CMSes force you to fetch each content type separately and stitch results together in code. FoxNose lets you join up to 3 collections in a single search request — with filters on every joined collection.
Inner & left joins
Require a match or return results even when the related record is missing — just like SQL.
Per-collection filters
Filter each joined collection independently — e.g. only published articles by French authors.
Works with vector search
Combine semantic search with joins and filters — all in one request.
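The `where` grammar in these requests is built from `all_of` lists of conditions, with operators suffixed onto field names (`category__eq`, `price__lte`). A minimal in-memory sketch of those semantics, purely for illustration (this is not FoxNose's engine):

```python
# Illustrative evaluator for the "where" filter shapes shown on this
# page: "all_of" condition lists with "field__op" keys.
OPS = {
    "eq": lambda a, b: a == b,
    "lte": lambda a, b: a is not None and a <= b,
    "gte": lambda a, b: a is not None and a >= b,
}

def matches(record, clause):
    """True if the record satisfies every condition in an all_of list."""
    for cond in clause.get("all_of", []):
        for key, expected in cond.items():
            field, _, op = key.partition("__")
            if not OPS[op](record.get(field), expected):
                return False
    return True

products = [
    {"category": "clothing", "price": 189.0, "published": True},
    {"category": "clothing", "price": 249.0, "published": True},
]
clause = {"all_of": [{"category__eq": "clothing"}, {"price__lte": 200}]}
hits = [p for p in products if matches(p, clause)]
print(hits)  # → only the 189.0 parka matches both conditions
```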
Find published articles by French authors, with companies
POST /v1/articles/_search
{
"join": {
"as": {
"authors": "people.authors",
"companies": "orgs.companies"
},
"relations": [
{
"target": "authors",
"source_field": "author",
"relation_type": "inner"
},
{
"source": "authors",
"target": "companies",
"source_field": "company",
"relation_type": "left"
}
]
},
"where": {
"$": {
"all_of": [{ "status__eq": "published" }]
},
"authors": {
"all_of": [{ "country__eq": "FR" }]
}
}
}
// → Articles + author data + company data
// → One request. No N+1 queries.

Headless CMS comparison: Traditional vs. FoxNose
Feature by feature, capability by capability.
Content & API
| Capability | Traditional CMS | FoxNose |
|---|---|---|
| Content modeling | Visual editor or config | Schema-first + Dashboard |
| API style | REST and/or GraphQL | Auto-generated REST from schema |
| Localization | Plugin or paid (e.g. Strapi) | Field-level, built-in |
| Content versioning | Drafts/published (Sanity, Strapi v5) | Full revision history per resource |
| Schema versioning | Migrations or code-based (no rollback) | Non-destructive, per-version with rollback |
| Granular RBAC | Varies (often paid tier) | Built-in, per-folder |
| Environments | Paid add-on | Included with isolated data |
| Multiple APIs from one content store | One API for all clients | Per-client APIs with own keys & field visibility |
| Hierarchical URLs (strict reference) | Manual routing config | Auto-generated parent-child URL structure |
| Population (resolve references) | GraphQL or multiple requests | Built-in ?populate param |
| SDKs | Varies by vendor | TypeScript + Python |
Search
| Capability | Traditional CMS | FoxNose |
|---|---|---|
| Full-text search | Basic or external (Algolia, ES) | Built-in, typo-tolerant |
| Semantic / vector search | External vector DB | Built-in |
| Hybrid search (text + vector + filters) | DIY integration | Single API call |
| Cross-collection joins in search | Multiple queries or GraphQL resolvers | Built-in — up to 3 joins per query |
| Search indexing on save | Requires sync to external service | Automatic — no sync scripts |
AI & RAG
| Capability | Traditional CMS | FoxNose |
|---|---|---|
| Auto-embedding generation | External embedding pipeline | Per-field control (vectorizable flag) |
| RAG-ready content | Requires pipeline | Out of the box |
| LangChain integration | DIY integration | Native Python + JS retrievers |
| External vector database | Required for semantic search | Not needed |
Operations
| Capability | Traditional CMS | FoxNose |
|---|---|---|
| One service for CMS + search + AI | Separate vendors & bills | Single platform |
| GDPR compliance mode | Varies | EU data residency + EU-only processing |
| Audit trail | Varies | Content + schema + access |
Built for modern content workloads
Looking for the best headless CMS for AI? From marketing sites to LLM-powered applications — one platform for all of it.
Marketing websites
Manage pages, blog posts, and landing pages. Deliver content to any frontend — Next.js, Nuxt, Astro, or mobile apps. Built-in search means visitors find what they need.
Product catalogs & e-commerce
Structure products with flexible schemas. A headless CMS with hybrid search finds "warm winter jacket" even when the listing says "insulated parka". Filter by price, category, availability — all in one query.
Documentation & knowledge bases
Version your docs alongside schema changes. Readers search by meaning, not just keywords. Power AI assistants that answer questions based on your actual documentation.
AI-powered applications
A headless CMS for RAG apps — without a separate vector database. Store content, auto-generate embeddings, search semantically. One CMS for LLM retrieval that replaces the entire RAG infrastructure stack.
One content store, multiple APIs
Most headless CMSes give you one API for all clients. FoxNose lets you create separate knowledge base APIs from the same content — each with its own key, permissions, and field visibility.
Public storefront API
Product catalog and help articles. No sensitive fields exposed.
Private AI assistant API
Vector search over knowledge base for RAG. Full access to all content.
Partner integration API
Wholesale pricing and inventory for resellers. Separate key, separate access.
Same content, no duplication. Collections can be shared across APIs — a product catalog powers both the storefront and the AI assistant.
Your Content Store
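Conceptually, per-API field visibility is a projection of one record through each API's allowed-field list. A sketch of that idea (field and API names are hypothetical, not FoxNose's actual configuration):

```python
# One content record, exposed differently per API key.
PRODUCT = {
    "title": "Insulated Winter Parka",
    "price": 189.00,
    "wholesale_price": 120.00,
    "stock": 42,
}

# Each API sees only its allowed fields.
API_FIELDS = {
    "public_storefront": {"title", "price"},
    "partner_integration": {"title", "price", "wholesale_price", "stock"},
}

def project(record, api):
    """Return only the fields the given API is allowed to expose."""
    allowed = API_FIELDS[api]
    return {k: v for k, v in record.items() if k in allowed}

print(project(PRODUCT, "public_storefront"))
# → {'title': 'Insulated Winter Parka', 'price': 189.0}
```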
Hierarchical URLs, automatically
Enable strict reference on a folder — and FoxNose generates parent-child URL structure automatically. No routing config. No middleware. Just ownership semantics baked into the schema-first API.
Data integrity
A lesson can't exist without its module. A module can't exist without its course. Enforced at the platform level.
Cascade operations
Delete a course — all its modules and lessons follow. Permissions inherit down the tree.
Choose per folder
Some collections are flat, some are hierarchical. Mix both in the same project.
Flat structure — independent collections
Strict reference — parent-child ownership
Recommended
URL paths reflect real data relationships. Auto-generated from folder structure.
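As an illustration only (slugs and path shape are hypothetical), strict reference on a course → module → lesson hierarchy would yield nested paths like:

```
Course  "rust-basics"
 └─ Module  "ownership"
     └─ Lesson  "borrowing"

→ /rust-basics/
→ /rust-basics/ownership/
→ /rust-basics/ownership/borrowing/
```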
Resolve references in one request
Traditional CMSes return reference IDs — then you make another request to fetch the related resource. With FoxNose, add ?populate=author and get the full object inline. No N+1 queries.
Works on all endpoints
Search, list, and get-by-key — populate works everywhere.
No GraphQL needed
Get nested data in REST without the complexity of a GraphQL schema.
Without populate — two requests
// Request 1: get the article
GET /articles/Ed3gw2uNcA4h/
// Response: just a reference ID
{ "author": "jruF38Qu1P20" }
// Request 2: resolve the author
GET /authors/jruF38Qu1P20/

With populate — one request
GET /articles/Ed3gw2uNcA4h/?populate=author
{
"title": { "en": "Refund Policy" },
"author": {
"_sys": { "key": "jruF38Qu1P20" },
"data": {
"name": "John Smith",
"role": "Editor"
}
}
}

From schema to searchable REST API
Define your content model. Save content. Search it — with keywords, meaning, or both. That's the entire workflow.
// Create a "products" collection via Management API or Dashboard
// Then define fields on a schema version:
{
"title": { "type": "string", "localizable": true },
"description": { "type": "text", "vectorizable": true },
"price": { "type": "number" },
"category": { "type": "string", "enum": ["electronics", "clothing", "home"] },
"published": { "type": "boolean" }
}

// Management API
POST /v1/products
{
"title": { "en": "Insulated Winter Parka" },
"description": { "en": "Warm, waterproof jacket for extreme cold..." },
"price": 189.00,
"category": "clothing",
"published": true
}
// → Stored + full-text indexed + vector embedded

// Flux API
POST /v1/products/_search
{
"vector_search": { "query": "warm winter jacket" },
"where": {
"$": {
"all_of": [
{ "category__eq": "clothing" },
{ "price__lte": 200 },
{ "published__eq": true }
]
}
}
}
// → Finds "Insulated Winter Parka" via semantic match
// → Even though "warm winter jacket" ≠ "insulated parka"

From headless CMS to LangChain in minutes
Your content is already indexed and embedded — ready for LLM retrieval. Connect it to any language model with a few lines of code. FoxNose has native LangChain retrievers for Python and JavaScript, turning your headless CMS with embeddings into a RAG-ready knowledge base.
pip install langchain-foxnose langchain-openai
from langchain_foxnose import FoxNoseRetriever
retriever = FoxNoseRetriever.from_client_params(
base_url="https://your-env.fxns.io",
api_prefix="content",
public_key="YOUR_PUBLIC_KEY",
secret_key="YOUR_SECRET_KEY",
folder="products",
search_mode="hybrid",
content_field="description",
)

from langchain_openai import ChatOpenAI
from langchain.chains import RetrievalQA
qa = RetrievalQA.from_chain_type(
llm=ChatOpenAI(model="gpt-4o"),
retriever=retriever,
)
answer = qa.invoke("What winter jackets do you have under $200?")
print(answer["result"])
# → "We have the Insulated Winter Parka at $189..."

No vector database. No embedding pipeline. No chunking logic. Your CMS content goes straight into a RAG chain.
Ready to simplify your content stack?
FoxNose is in open beta — full access, zero cost. If you're looking for a Sanity alternative or Contentful alternative with built-in vector search and AI, start here.
Explore the platform
Hybrid Search API
Semantic search, full-text, and pre-filter vector search in one query.
Learn more →

Knowledge Base API
Schema-first, auto-generated REST API with built-in search. Instant backend for AI.
Learn more →

Build RAG Apps
AI knowledge retrieval without the infrastructure. Python and JS SDKs included.
Learn more →

LLM Database
AI-native database with auto-embeddings, structured storage, and built-in search.
Learn more →