AI Agent Tools

Give your AI agent a knowledge base

Persistent memory for AI agents — structured, versioned, and searchable. Agents read, write, and search knowledge through one API. Native LangChain integration, built-in vector search, no external databases.

The Problem

Why agents need more than vector memory

Most AI agent frameworks treat memory as an afterthought — a vector database you bolt on after the fact. Your agent embeds text, stores it in Pinecone or ChromaDB, and hopes semantic search alone is enough.

Real AI agent memory needs more than a bag of embeddings — it needs structure, versioning, and a write path.

Vector database
Store embeddings, lose structure
Embedding pipeline
Chunk, embed, sync on every change
Memory framework
LangChain memory, custom wrappers
Conversation store
Redis, PostgreSQL, or another service
Knowledge sync
Cron jobs to keep everything consistent

Five services just to give your agent persistent memory

Features

A knowledge base your agent can read and write

One platform for structured content, vector search, and agent persistence. No glue code.

Read + write in one API

Agents don't just retrieve — they learn. The knowledge base API lets agents create and update knowledge entries. The Flux API searches them.

Agent writes knowledge

await mgmt.createResource('knowledge', {
  data: {
    topic: 'refund-policy',
    content: 'Returns accepted within 30 days...',
    source: 'support-ticket-4521',
    confidence: 0.95
  }
})
// → stored + embedded instantly

Agent retrieves knowledge

await flux.search('knowledge', {
  vector_search: {
    query: 'how do returns work?'
  },
  where: { $: { all_of: [
    { confidence__gte: 0.8 }
  ]}}
})
// → semantic match + metadata filter
Write → Embed → Search — automatic on every save

Auto-generated embeddings

No embedding pipeline. Mark fields as vectorizable — embeddings are generated on every save. Vector search works instantly.
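To make the "embed on every save" idea concrete, here is a toy, self-contained sketch — not the FoxNose implementation. A real deployment calls an embedding model; a deterministic bag-of-words vector stands in so the example runs anywhere, and the `KnowledgeStore` class and `vectorizable` parameter are hypothetical names for illustration:

```python
# Toy illustration of "embed on every save" — NOT the FoxNose internals.
# A word-count vector stands in for a real embedding model.
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Stand-in embedding: a bag-of-words vector (demo only)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class KnowledgeStore:
    """Every save() embeds the vectorizable field immediately,
    so there is no separate chunk/embed/sync pipeline to run."""
    def __init__(self):
        self.entries = []

    def save(self, data: dict, vectorizable: str = "content"):
        data["_vector"] = embed(data[vectorizable])  # embedded at write time
        self.entries.append(data)

    def search(self, query: str, top_k: int = 3):
        qv = embed(query)
        ranked = sorted(self.entries,
                        key=lambda e: cosine(qv, e["_vector"]),
                        reverse=True)
        return ranked[:top_k]

store = KnowledgeStore()
store.save({"topic": "refunds", "content": "returns accepted within 30 days"})
store.save({"topic": "shipping", "content": "orders ship within two business days"})
print(store.search("how do returns work")[0]["topic"])  # → refunds
```

The point of the sketch is the write path: search works the instant `save()` returns, because embedding happens inside the save rather than in a batch job afterwards.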

Knowledge evolves safely

Add fields, change types, roll back — without migrations. Schema versioning keeps existing knowledge intact.

Semantic + structured search

Search by meaning and filter by metadata — in one query. The hybrid search API with RRF ranking gives agents precise retrieval.
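Reciprocal Rank Fusion (RRF) is a standard way to merge a semantic ranking with a keyword ranking: each document scores the sum of 1/(k + rank) across every list it appears in, with k = 60 as the conventional smoothing constant. A minimal sketch of the idea (the document IDs are made up):

```python
# Reciprocal Rank Fusion: fuse multiple ranked lists into one.
# score(doc) = sum over lists of 1 / (k + rank_in_list), k = 60 by convention.
def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

semantic = ["doc-b", "doc-a", "doc-c"]   # ranked by vector similarity
keyword  = ["doc-a", "doc-c", "doc-b"]   # ranked by keyword match
print(rrf_fuse([semantic, keyword]))     # → ['doc-a', 'doc-b', 'doc-c']
```

Note how `doc-a` wins despite topping neither list: consistently high ranks across both signals beat a single first place, which is why RRF suits hybrid retrieval.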

Why FoxNose

What makes this different from a vector database

01

Agents can read and write

Most knowledge stores are read-only — your agent retrieves but can't persist what it learns. FoxNose has a full read-write API. Agents create, update, and search knowledge entries through the same platform. An agent memory store where knowledge actually grows.

02

Structured knowledge, not just vectors

Vector-only memory loses relationships and metadata. FoxNose stores typed, schema-defined content with auto-generated embeddings on top. Your agent gets long term memory for LLMs with the precision of a database and the flexibility of semantic search.

03

Schema versioning — knowledge evolves safely

Agent knowledge models change over time. FoxNose versions schemas independently — add fields, rename properties, roll back — without breaking existing data or running migrations.

Knowledge base versioning →

04

No Pinecone, no sync scripts

Traditional setups chain a vector database, an embedding API, and sync logic. FoxNose replaces all three. Content is embedded and indexed the moment it's saved. One API, one bill — a Pinecone alternative for agents that actually simplifies the stack.

Use case

Train a chatbot on your data — without fine-tuning

You don't need to fine-tune a model to build an AI chatbot on your own data. Store your documents, FAQs, and product information in FoxNose. The agent retrieves relevant context at query time and generates grounded answers.

Build a custom GPT with company data that stays current — update content through the dashboard or API, and every agent interaction reflects the change. No retraining, no reindexing, no stale knowledge.

No fine-tuning required

RAG retrieval gives your agent company knowledge without touching model weights.

Always up to date

Content updates are reflected instantly — no batch reindexing or pipeline delays.

Editors manage content, agents consume it

Non-technical team members update the chatbot knowledge base through a visual dashboard.

Traditional approach — fine-tuning

Collect & clean training data
Fine-tune model ($$$, hours/days)
Deploy & re-train on every update
Stale knowledge between retrains

RAG with FoxNose — always current

Store content with schema & embeddings
Agent retrieves context at query time
Update anytime — reflected instantly
Zero retraining cost
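The query-time retrieval loop above can be sketched end to end. Retrieval and generation are stubbed so the example is self-contained — a real agent would search FoxNose and call an LLM, and every name here (`KNOWLEDGE`, `retrieve`, `generate`) is hypothetical:

```python
# Minimal RAG loop: retrieve context at query time, ground the answer in it.
# Retrieval uses naive keyword overlap and the LLM call is stubbed,
# so the sketch is runnable as-is.
KNOWLEDGE = [
    {"topic": "refund-policy", "content": "Returns accepted within 30 days."},
    {"topic": "shipping", "content": "Orders ship within two business days."},
]

def retrieve(query: str) -> list[str]:
    """Stand-in retriever: keyword overlap instead of vector search."""
    words = set(query.lower().split())
    hits = [e for e in KNOWLEDGE
            if words & set(e["content"].lower().split())]
    return [e["content"] for e in hits]

def generate(query: str, context: list[str]) -> str:
    """Stub LLM: a real agent would prompt a model with the context."""
    return f"Based on: {' '.join(context)}" if context else "I don't know."

def answer(query: str) -> str:
    return generate(query, retrieve(query))

print(answer("when are returns accepted?"))
```

Because the knowledge lives in the store rather than the model weights, editing an entry in `KNOWLEDGE` changes the next answer immediately — that is the "zero retraining cost" column in the comparison above.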
Code Example

LangChain agent with FoxNose tools

Give your agent a search tool and a write tool, in Python or JavaScript. It retrieves knowledge, answers questions, and persists what it learns through the native LangChain integration.

1. Set up FoxNose as a LangChain retriever
pip install langchain-foxnose langchain-openai foxnose-sdk

from langchain_foxnose import FoxNoseRetriever

retriever = FoxNoseRetriever.from_client_params(
    base_url="https://your-env.fxns.io",
    api_prefix="content",
    public_key="YOUR_PUBLIC_KEY",
    secret_key="YOUR_SECRET_KEY",
    folder="knowledge",
    search_mode="hybrid",
    content_field="content",
)
2. Build a RetrievalQA agent
from langchain_openai import ChatOpenAI
from langchain.chains import RetrievalQA

qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model="gpt-4o"),
    retriever=retriever,
)

answer = qa.invoke("What's our refund policy for international orders?")
print(answer["result"])
# → "International orders can be returned within 30 days..."
3. Agent writes back new knowledge
from foxnose_sdk.management import ManagementClient
from foxnose_sdk.auth import SimpleKeyAuth

mgmt = ManagementClient(
    base_url="https://api.foxnose.net",
    environment_key="your-env-key",
    auth=SimpleKeyAuth("pub_key", "sec_key")
)

# Agent persists a new finding
mgmt.create_resource("knowledge", body={
    "data": {
        "topic": "international-refunds",
        "content": "Customers in EU get 14 extra days per directive...",
        "source": "legal-review-2025",
        "confidence": 0.92
    }
})
# → Embedded and searchable immediately

Works with any LLM — OpenAI, Anthropic, local models. FoxNose handles the data layer, you choose the brain.

Use Cases

Agent scenarios

From research agents that build knowledge over time to AI support bots with structured knowledge bases — one platform for every agent architecture.

Research agent

An agent that reads papers, extracts findings, and builds a searchable knowledge base over time. Uses the Management API to write structured entries and the Flux API to search semantically across everything it's collected.

Research · Knowledge building · Semantic search

Customer support bot

Train a chatbot on your data — product docs, FAQs, policies. The agent reads from a structured knowledge base to answer questions, and editors update content through the dashboard. No fine-tuning, no retraining.

Support · FAQ · Chatbot knowledge base

Coding assistant with evolving docs

An AI chatbot with your own data — internal documentation, API references, runbooks. As docs change, the agent always retrieves the latest version. Schema versioning ensures the knowledge structure can evolve without breaking retrieval.

Documentation · Developer tools · Versioning

Personal AI with custom GPT data

Build a custom GPT with company data that actually stays current. Product catalogs, employee directories, process guides — all searchable by meaning. Update content once, every agent interaction reflects the change.

Custom GPT · Enterprise · Company data
Integrations

Works with your agent framework

FoxNose is a REST API with Python and TypeScript SDKs. Use it as a knowledge base for CrewAI, AutoGen, LangGraph, or any custom AI agent — the same way you'd use any API. Native LangChain retrievers ship out of the box, replacing LangChain conversation memory with structured, persistent knowledge.

LangChain · LangGraph · CrewAI · AutoGen · OpenAI Agents SDK · Custom agents

Build smarter AI agents

FoxNose is in open beta — full access, zero cost. Give your agents persistent, structured memory and see how much better they perform with real knowledge.