Give your AI agent
a knowledge base
Persistent memory for AI agents — structured, versioned, and searchable. Agents read, write, and search knowledge through one API. Native LangChain integration, built-in vector search, no external databases.
Why agents need more than vector memory
Most AI agent frameworks treat memory as an afterthought — a vector database you bolt on after the fact. Your agent embeds text, stores it in Pinecone or ChromaDB, and hopes semantic search alone is enough.
Real AI agent memory needs more than a bag of embeddings — it needs structure, versioning, and a write path.
Five services just to give your agent persistent memory
A knowledge base your agent can read and write
One platform for structured content, vector search, and agent persistence. No glue code.
Read + write in one API
Agents don't just retrieve — they learn. The knowledge base API lets agents create and update knowledge entries. The Flux API searches them.
Agent writes knowledge
Agent retrieves knowledge
Auto-generated embeddings
No embedding pipeline. Mark fields as vectorizable — embeddings are generated on every save. Vector search works instantly.
Knowledge evolves safely
Add fields, change types, roll back — without migrations. Schema versioning keeps existing knowledge intact.
Semantic + structured search
Search by meaning and filter by metadata — in one query. The hybrid search API with RRF ranking gives agents precise retrieval.
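The ranking step can be illustrated in a few lines of plain Python. This is a generic sketch of Reciprocal Rank Fusion (RRF), not FoxNose's internal implementation; k=60 is the constant commonly used in the RRF literature, and the document IDs are invented for the example.

```python
def rrf_merge(ranked_lists, k=60):
    """Reciprocal Rank Fusion: a document scores 1/(k + rank) in each
    result list it appears in; scores are summed across lists."""
    scores = {}
    for ranked in ranked_lists:
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# A doc ranked well in BOTH lists beats one that tops only a single list.
semantic = ["refund-policy", "shipping-faq", "returns-eu"]  # vector results
keyword = ["returns-eu", "refund-policy", "warranty"]       # full-text results
print(rrf_merge([semantic, keyword])[0])  # → refund-policy
```

The appeal of RRF is that it needs only ranks, not scores, so semantic similarity and keyword relevance can be fused without normalizing incomparable scales.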
What makes this different from a vector database
Agents can read and write
Most knowledge stores are read-only — your agent retrieves but can't persist what it learns. FoxNose has a full read-write API. Agents create, update, and search knowledge entries through the same platform. An agent memory store where knowledge actually grows.
Structured knowledge, not just vectors
Vector-only memory loses relationships and metadata. FoxNose stores typed, schema-defined content with auto-generated embeddings on top. Your agent gets long-term memory for LLMs with the precision of a database and the flexibility of semantic search.
Schema versioning — knowledge evolves safely
Agent knowledge models change over time. FoxNose versions schemas independently — add fields, rename properties, roll back — without breaking existing data or running migrations.
Knowledge base versioning →
No Pinecone, no sync scripts
Traditional setups chain a vector database, an embedding API, and sync logic. FoxNose replaces all three. Content is embedded and indexed the moment it's saved. One API, one bill — a Pinecone alternative for agents that actually simplifies the stack.
Train a chatbot on your data — without fine-tuning
You don't need to fine-tune a model to build an AI chatbot on your own data. Store your documents, FAQs, and product information in FoxNose. The agent retrieves relevant context at query time and generates grounded answers.
Build a custom GPT with company data that stays current — update content through the dashboard or API, and every agent interaction reflects the change. No retraining, no reindexing, no stale knowledge.
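The retrieve-then-generate loop described above is simple enough to sketch end to end. The snippet below uses an in-memory store and toy keyword matching so the shape of the flow stays visible; in a real setup the retrieval step would be a FoxNose search request and the prompt would go to your model provider.

```python
def retrieve(store, query, top_k=2):
    """Toy keyword overlap, standing in for real hybrid search."""
    q = set(query.lower().split())
    ranked = sorted(
        store,
        key=lambda d: -len(q & set(d["content"].lower().split())),
    )
    return ranked[:top_k]

def build_prompt(query, docs):
    """Ground the model: answers must come from retrieved context."""
    context = "\n".join(f"- {d['content']}" for d in docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

store = [
    {"topic": "refunds", "content": "Refunds are accepted within 30 days"},
    {"topic": "shipping", "content": "Shipping takes 3 to 5 business days"},
]
prompt = build_prompt("How long do refunds take?", retrieve(store, "refunds days"))
# Editing `store` changes the next answer immediately; no retraining.
```

This is the whole trick behind "no fine-tuning": the model's weights never change, only the context assembled at query time.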
No fine-tuning required
RAG retrieval gives your agent company knowledge without touching model weights.
Always up to date
Content updates are reflected instantly — no batch reindexing or pipeline delays.
Editors manage content, agents consume it
Non-technical team members update the chatbot knowledge base through a visual dashboard.
Traditional approach — fine-tuning
RAG with FoxNose — always current
LangChain agent with FoxNose tools
Give your agent a search tool and a write tool in Python or JavaScript. It retrieves knowledge, answers questions, and persists what it learns through the native LangChain integration.
pip install langchain-foxnose langchain-openai foxnose-sdk
from langchain_foxnose import FoxNoseRetriever

retriever = FoxNoseRetriever.from_client_params(
    base_url="https://your-env.fxns.io",
    api_prefix="content",
    public_key="YOUR_PUBLIC_KEY",
    secret_key="YOUR_SECRET_KEY",
    folder="knowledge",
    search_mode="hybrid",
    content_field="content",
)

from langchain_openai import ChatOpenAI
from langchain.chains import RetrievalQA
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model="gpt-4o"),
    retriever=retriever,
)
answer = qa.invoke("What's our refund policy for international orders?")
print(answer["result"])
# → "International orders can be returned within 30 days..."

from foxnose_sdk.management import ManagementClient
from foxnose_sdk.auth import SimpleKeyAuth
mgmt = ManagementClient(
    base_url="https://api.foxnose.net",
    environment_key="your-env-key",
    auth=SimpleKeyAuth("pub_key", "sec_key"),
)
# Agent persists a new finding
mgmt.create_resource("knowledge", body={
    "data": {
        "topic": "international-refunds",
        "content": "Customers in EU get 14 extra days per directive...",
        "source": "legal-review-2025",
        "confidence": 0.92,
    }
})
# → Embedded and searchable immediately

Works with any LLM — OpenAI, Anthropic, local models. FoxNose handles the data layer, you choose the brain.
Agent scenarios
From research agents that build knowledge over time to AI support bots with structured knowledge bases — one platform for every agent architecture.
Research agent
An agent that reads papers, extracts findings, and builds a searchable knowledge base over time. Uses the Management API to write structured entries and the Flux API to search semantically across everything it's collected.
Customer support bot
Train a chatbot on your data — product docs, FAQs, policies. The agent reads from a structured knowledge base to answer questions, and editors update content through the dashboard. No fine-tuning, no retraining.
Coding assistant with evolving docs
An AI chatbot grounded in your own data: internal documentation, API references, runbooks. As docs change, the agent always retrieves the latest version. Schema versioning ensures the knowledge structure can evolve without breaking retrieval.
Personal AI with custom GPT data
Build a custom GPT with company data that actually stays current. Product catalogs, employee directories, process guides — all searchable by meaning. Update content once, every agent interaction reflects the change.
Works with your agent framework
FoxNose is a REST API with Python and TypeScript SDKs. Use it as a knowledge base for CrewAI, AutoGen, LangGraph, or any custom AI agent — the same way you'd use any API. Native LangChain retrievers ship out of the box, replacing LangChain conversation memory with structured, persistent knowledge.
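For frameworks without a native retriever, one common pattern is to wrap the search call as a plain tool: most agent frameworks accept a name, a description, and a callable. The sketch below is framework-agnostic and uses an injected stub in place of a real search function; in practice `search_fn` would issue the FoxNose search request.

```python
from typing import Callable

def make_search_tool(search_fn: Callable[[str], list]):
    """Wrap any search callable as a tool spec (name, description,
    callable) that a typical agent framework can register."""
    def run(query: str) -> str:
        hits = search_fn(query)
        return "\n".join(hits) if hits else "No results."
    return {
        "name": "search_knowledge",
        "description": "Search the agent's knowledge base by meaning.",
        "func": run,
    }

# Stub search_fn for illustration; swap in a real API call.
tool = make_search_tool(lambda q: ["Refunds: 30 days"] if "refund" in q else [])
print(tool["func"]("refund policy"))  # → Refunds: 30 days
```

Because the tool boundary is just a string-in, string-out function, the same wrapper works whether the agent loop lives in CrewAI, AutoGen, LangGraph, or hand-rolled code.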
Build smarter AI agents
FoxNose is in open beta — full access, zero cost. Give your agents persistent, structured memory and see how much better they perform with real knowledge.
Explore the platform
Build RAG Apps
AI knowledge retrieval without the infrastructure. Python and JS SDKs included.
Learn more →
LLM Database
AI-native database for agents and RAG with auto-embeddings and structured storage.
Learn more →
Hybrid Search API
Semantic search, full-text, and pre-filter vector search in one query.
Learn more →
Knowledge Base API
Schema-first, auto-generated REST API with built-in search and auto-embeddings.
Learn more →