Documentation

Everything you need. For builders.

Updated April 29, 2026

Persistent memory for AI agents. Store, recall, dream, reflect. Run locally or sync across devices via the cloud.

Contents
01

Overview

Vitalis equips AI agents with persistent memory that behaves like a living system. Store experiences, retrieve them through hybrid search, consolidate through dream cycles, and let a self-model emerge naturally.

Each memory carries a decay factor that drops when unused and recovers on access. Retrieved memories link to related ones, forming an association graph that reflects actual usage patterns. Dream cycles compress and cross-reference accumulated experience. Active reflection generates novel introspective entries from that corpus.

Available as a Python package, a TypeScript SDK, and an MCP server that connects to any compatible editor. Storage runs locally on SQLite or through Vitalis-managed infrastructure for cross-device access.

Vitalis ships generic memory primitives for any AI agent, with first-class CAD support for Onshape and FreeCAD on top.

02

Ecosystem

Vitalis is a modular stack. Grab the layer that fits your use case.

Packages

Install                     Package         Description
pip install vitalis         vitalis         Python SDK
npm install vitalis-brain   vitalis-brain   Core memory engine (scoring, decay, graph)
npm install vitalis-cloud   vitalis-cloud   Supabase + Voyage providers
npm install vitalis-local   vitalis-local   SQLite + local embeddings (offline)
npm install vitaliscad      vitaliscad      Full bot + MCP server + CLI, with Onshape and FreeCAD integrations

Architecture

┌───────────────────────────────────────────────┐
│            Your Agent / IDE / CLI             │
├───────────────┬───────────────┬───────────────┤
│    Python     │  TypeScript   │  MCP (stdio)  │
│    vitalis    │    vitalis    │    vitalis    │
├───────────────┴───────────────┴───────────────┤
│                 vitalis-brain                 │
│  scoring · decay · graph · dream · recall     │
├───────────────────────┬───────────────────────┤
│     vitalis-cloud     │     vitalis-local     │
│   Supabase + Voyage   │  SQLite + local emb.  │
└───────────────────────┴───────────────────────┘

3 Ways to Use Vitalis

Python - pip install vitalis and call the Cortex API with a clean async interface.

TypeScript - Compose vitalis-brain with your choice of provider (vitalis-cloud or vitalis-local).

MCP - npx vitaliscad mcp-serve to use memory from any MCP-compatible editor (Claude Code, Cursor, etc.).

03

Quick Start

Three setup paths: local-first with no API keys, hosted on managed infrastructure, or self-hosted against your own Supabase instance. All three share the same SDK interface.

Install

$ npm install vitaliscad

Option A: Local-first (recommended)

Zero API keys. Zero network calls. Runs entirely offline using SQLite and local embeddings.

$ npx vitaliscad mcp-install --local # Auto-configures your IDE

Option B: Hosted (cloud sync)

Keep memories in sync across machines. Sign up for an API key to get started.

$ npx vitaliscad mcp-install # Prompts for API key
import { Cortex } from 'vitalis';
 
const brain = new Cortex({
  hosted: { apiKey: process.env.CORTEX_API_KEY },
  ownerWallet: process.env.OWNER_WALLET,
});
 
await brain.init();
await brain.store({ type: 'episodic', content: 'Hello world', summary: 'First memory', source: 'my-agent' });

Option C: Self-hosted (Supabase)

Bring your own Supabase + pgvector instance for complete data ownership.

Configure your .env with SUPABASE_URL and SUPABASE_SERVICE_KEY, then run the schema:

$ psql $DATABASE_URL -f node_modules/vitalis/supabase-schema.sql

Store & Recall

The API is the same regardless of mode:

// Store a memory
await brain.store({
  type: 'semantic',
  content: 'User prefers dark mode and compact layouts',
  summary: 'User UI preferences',
  source: 'my-agent',
  tags: ['preferences', 'ui'],
});
 
// Recall with hybrid search
const memories = await brain.recall({
  query: 'what does the user prefer',
});

Export & Visualize

Export your agent's memories and explore them visually:

$ npx vitalis export # -> vitalis-memories.json

Then drag the file into vitaliscad.com/explore to visualize your agent's memory network.

No lock-in
Both modes use the same SDK API. Switch between hosted and self-hosted anytime. Export your memories with npx vitalis export at any point.
04

Hosted Mode

Hosted mode runs on managed infrastructure. No database to provision, no schema to migrate. Register for an API key, configure it in your project, and start storing memories.

1. Register for an API key

$ npx vitalis register

You'll be asked for your agent name and Ethereum mainnet wallet address. Save the API key; it won't be shown again.

2. Use in your code

import { Cortex } from 'vitalis';
 
const brain = new Cortex({
  hosted: { apiKey: process.env.CORTEX_API_KEY },
  ownerWallet: process.env.OWNER_WALLET,
});
 
await brain.init();
await brain.store({ type: 'episodic', content: 'Hello world', summary: 'First memory', source: 'my-agent' });

3. Export, import, and sync

Download your agent's memories, import from other sources, or sync to a context file:

$ npx vitalis export # -> vitalis-memories.json
$ npx vitalis export --format md # -> vitalis-memories.md
$ npx vitalis import data.json # Import from JSON, ChatGPT, or markdown
$ npx vitalis sync # -> VITALIS_CONTEXT.md
$ npx vitalis status # Check if Vitalis is active

Then visualize them at vitaliscad.com/explore: drag and drop the file to see your agent's neural map.

Portability
Your memories are yours. Export at any time with npx vitalis export, import into a self-hosted instance whenever you want, and switch providers without data loss.
05

Cortex API

REST endpoints for the hosted Cortex service. Every request needs a Bearer token in the Authorization header.

Authorization: Bearer YOUR_API_KEY

POST /api/cortex/store

Store a new memory.

Field              Type       Description
type               string     episodic | semantic | procedural | self_model | introspective
content            string     Full memory content (max 5000 chars)
summary            string     Short summary for matching (max 500 chars)
source             string     Agent identifier
tags               string[]   Tags for filtering
importance         number     0-1. LLM-scored if omitted
emotionalValence   number     -1 to 1
metadata           object     Arbitrary metadata

POST /api/cortex/recall

Find and retrieve memories using multi-signal hybrid scoring.

Field       Type       Description
query       string     Natural language search query
limit       number     Max results (default 10)
types       string[]   Filter by memory type
threshold   number     Minimum importance threshold (0-1)

Recall Pipeline

query expansion -> vector search -> entity matching -> graph traversal -> scoring

Five signals feed the scorer: recency, relevance, importance, vector similarity, and graph connectivity. Each result is gated by the memory's current decay factor before it appears in the output.

POST /api/cortex/clinamen

Surface surprising, loosely related memories through creative lateral retrieval.

Field            Type     Description
context          string   Current conversation or thought context
limit            number   Max results (default 5)
min_importance   number   Minimum importance threshold

GET /api/cortex/stats

Pulls aggregate stats: counts per type, averages, dream session history, and the most frequent tags and concepts.

06

Python SDK

Async Python client for the Cortex API. Persist memories, query them, and trigger dream cycles, all from Python.

Install

$ pip install vitalis

Usage

from vitalis import Vitalis
 
brain = Vitalis(api_key="your-key")
 
# Store a memory
await brain.store(
    content="User prefers dark mode",
    type="procedural",
    summary="UI preference",
    source="my-agent",
)
 
# Recall memories
memories = await brain.recall("auth issues")
 
# Get stats
stats = await brain.stats()

Under the hood, the Python SDK calls the Cortex HTTP API. Every method is async. Drop it into any async Python environment: FastAPI, Discord.py, or plain scripts with asyncio.run().

07

TypeScript SDK

Wire vitalis-brain together with your preferred storage backend and embedding provider.

Production (Supabase + Voyage)

import { VitalisEngine } from 'vitalis-brain';
import { SupabaseProvider } from 'vitalis-cloud';
import { VoyageEmbeddings } from 'vitalis-brain/providers/voyage';
 
const engine = new VitalisEngine({
  storage: new SupabaseProvider({
    url: process.env.SUPABASE_URL,
    serviceKey: process.env.SUPABASE_SERVICE_KEY,
  }),
  embeddings: new VoyageEmbeddings({
    apiKey: process.env.VOYAGE_API_KEY,
  }),
});
 
await engine.init();
await engine.store({ type: 'semantic', content: '...', summary: '...', source: 'agent' });

Offline (SQLite + local embeddings)

import { VitalisEngine } from 'vitalis-brain';
import { SqliteProvider, GteSmallEmbeddings } from 'vitalis-local';
 
const engine = new VitalisEngine({
  storage: new SqliteProvider({ path: './memories.db' }),
  embeddings: new GteSmallEmbeddings(),
});
 
await engine.init();
// Same API, fully offline, zero API keys
08

MCP Server

Access Vitalis memory straight from your editor. The MCP server makes recall, store, and stats available as tools inside Claude Code, Cursor, and any other MCP-compatible IDE.

Quick Setup

$ npx vitalis mcp-serve

Two Modes

Mode          Config                       What It Does
Hosted        Set CORTEX_API_KEY env var   Calls the Cortex HTTP API. No Supabase needed.
Self-hosted   Set Supabase env vars        Connects directly to your database. Full local control.

Available Tools

Tool               Description
recall_memories    Search memories by query, tags, type, importance threshold. Returns scored results.
store_memory       Store a new memory with type, content, summary, tags, and importance.
get_memory_stats   Memory system statistics: counts by type, averages, dream sessions, top tags.

Claude Code Setup

Add to your .mcp.json:

{
  "mcpServers": {
    "vitalis": {
      "command": "npx",
      "args": ["vitalis", "mcp-serve"],
      "env": {
        "CORTEX_API_KEY": "your-api-key"
      }
    }
  }
}

IDE Support
Any MCP-compatible editor works: Claude Code, Cursor, Windsurf, etc. The server communicates over stdio.
09

Local Mode

Fully offline. No API keys, no outbound connections, no external services. Memories are written to a JSON file on your local disk.

Install

$ npx vitalis mcp-install --local

This auto-configures your IDE's MCP settings for local-only memory.

How It Works

All memories go into ~/.vitalis/memories.json, a plain JSON file sitting on your disk. No database engine, no cloud dependency, no outbound connections.

Search runs on keyword matching without needing embeddings. Tag scoring, importance decay, and graph-based associations all operate entirely locally.
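A minimal sketch of what embedding-free scoring like this can look like — illustrative only, with hypothetical names and weights; the actual vitalis-local scorer is not shown here:

```typescript
// Hypothetical sketch of local, embedding-free recall scoring.
// Field names and weights are assumptions, not the vitalis-local internals.
interface LocalMemory {
  summary: string;
  tags: string[];
  importance: number; // 0-1
}

// Keyword overlap on the summary plus exact tag hits, biased by importance.
function keywordScore(mem: LocalMemory, query: string): number {
  const terms = query.toLowerCase().split(/\s+/).filter(Boolean);
  const haystack = mem.summary.toLowerCase();
  const keywordHits = terms.filter((t) => haystack.includes(t)).length;
  const tagHits = terms.filter((t) => mem.tags.includes(t)).length;
  // Tag matches weigh more than free-text matches; importance breaks ties.
  return keywordHits + 2 * tagHits + mem.importance;
}
```

Decay and graph associations would then gate and adjust a base score like this one.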

When to Use

Local mode is ideal for:

Air-gapped or privacy-sensitive environments

Quick prototyping without API key setup

Single-agent personal memory

Offline development

Upgrade Path
Start local, switch to cloud anytime. Export with npx vitalis export and import into a hosted instance. Same data, same format.
10

CAD Overview

Vitalis provides persistent memory primitives for CAD agents working in Onshape or FreeCAD. An agent can record why a design decision was made, what constraints apply to a part, or what naming conventions govern a document, and recall that context in any future session.

Three MCP tools expose this capability. Register them by starting the MCP server with the --onshape flag or by setting VITALISCAD_ONSHAPE=1 in the environment:

Tool                 Description
record_decision      Store a design decision, constraint, tolerance, or naming convention for a specific part or document element.
recall_part_intent   Retrieve all stored memories scoped to a specific document element, with optional semantic filtering.
list_constraints     Fetch all constraint and tolerance memories across an entire Onshape document.

See Onshape for authentication setup and FreeCAD for macro installation. Tool definitions are in CAD MCP Tools.

Note
Vitalis does not parse BREP geometry or 3D mesh data. It stores the text and metadata that an agent supplies via MCP tools. The Onshape REST client is used for document discovery and writing comments back to documents, not for geometry extraction.
11

Onshape

Connect Vitalis to Onshape using either an API key (default, simpler) or OAuth 2.0 (browser flow, use --oauth flag). Both flows save credentials to ~/.vitaliscad/onshape.json with mode 0600.

API Key (default)

Get your access key and secret key from the Onshape developer portal at dev-portal.onshape.com/keys. Then run the interactive prompt:

$ npx vitaliscad onshape connect
 
# Prompts for Access Key and Secret Key
# Saves to ~/.vitaliscad/onshape.json (mode 0600)

OAuth 2.0

Register an OAuth app in the Onshape developer portal. Set the redirect URI to http://localhost:3737/callback, then set the env vars and run:

$ ONSHAPE_OAUTH_CLIENT_ID=... ONSHAPE_OAUTH_CLIENT_SECRET=... \
npx vitaliscad onshape connect --oauth
 
# Opens browser, local callback on port 3737
# Token saved with auth_method: "oauth"

REST Methods

The Onshape client exposes the following REST methods to MCP tools and the CLI:

Method                                      Description
listDocuments(opts?)                        List documents in the connected Onshape account, with optional query and pagination.
getDocument(documentId)                     Fetch metadata for a single document by ID.
listElements(documentId, workspaceId)       List all elements (part studios, assemblies) in a document workspace.
getPartStudioConfiguration(did, wid, eid)   Get configuration parameters for a part studio element.
getAssembly(did, wid, eid)                  Get the assembly definition: instances, features, and mate context.
postComment(opts)                           Write a comment back to a document (document, workspace, or part level).
12

FreeCAD

The FreeCAD integration is a Python macro that runs inside FreeCAD and POSTs design decisions to the Vitalis server. Install it with one command, then run it from the FreeCAD macro menu.

Install

$ npx vitaliscad freecad install-macro

This copies Vitaliscad.FCMacro to the platform-specific FreeCAD macro directory:

Platform   Macro directory
Linux      ~/.local/share/FreeCAD/Macro/
macOS      ~/Library/Preferences/FreeCAD/Macro/
Windows    %APPDATA%/FreeCAD/Macro/

Usage

Open FreeCAD, select a part in the model tree, then navigate to Macro > Macros... > Vitaliscad.FCMacro and click Execute. A dialog appears with four fields:

Field           Description
Decision text   What was decided (e.g. "Fillet locked at 2mm")
Rationale       Why the decision was made (e.g. "Tooling cost spike above 2.5mm")
Kind            Dropdown: decision / constraint / tolerance / naming
Part ID         Auto-populated from the selected object in the format freecad/{doc-name}/{object-name}

On submit, the macro POSTs JSON to http://localhost:3000/api/cad/record-decision. See the endpoint reference for the full request shape.

Authentication

The macro sends the cortex-api-key header. It reads the key from the VITALIS_API_KEY environment variable, falling back to ~/.vitaliscad/macro.json:

{
  "api_key": "vk_..."
}

13

CAD MCP Tools

Three tools are registered when the MCP server starts in Onshape mode (pass --onshape to mcp-serve, or set VITALISCAD_ONSHAPE=1). All tools write to and read from the standard Vitalis memory store.

record_decision

Store a CAD design decision, constraint, tolerance, or naming convention for a part or document element.

Parameter           Type       Required   Description
part_id             string     yes        Element ID or compound id in the format {doc_id}/{element_id}. Use {doc_id}/_doc for document-wide decisions.
decision            string     yes        What was decided (max 2000 chars).
rationale           string     yes        Why the decision was made (max 5000 chars).
refs                string[]   no         Related part_ids or feature_ids this decision references.
tags                string[]   no         Free-form tags (e.g. fastener, structural, constraint).
confirmed_by_user   boolean    no         True when the user explicitly dictated this decision. Default false means agent-inferred.

Returns:

{
  "stored": true,
  "memory_id": 42,
  "namespace": "cad/{docId}/{elementId}",
  "part_id": "onshape/DOC123/EID456"
}

recall_part_intent

Fetch all memories scoped to a specific document element, in relevance order.

Parameter   Type     Required   Description
part_id     string   yes        Element ID or compound {doc_id}/{element_id} to recall memories for.
query       string   no         Optional natural language filter: narrows results by semantic similarity.
limit       number   no         Maximum results to return (default 10, max 50).

The query uses a JSONB containment filter (metadata @> {docId, partId}) pushed to the database, so results are never silently truncated regardless of memory count.

list_constraints

Fetch all constraint and tolerance memories for an entire Onshape document (all elements).

Parameter   Type     Required   Description
doc_id      string   yes        Onshape document ID. Returns memories with kind: constraint or kind: tolerance across all elements in that document.

Activation
Start the MCP server with: npx vitaliscad onshape mcp, or set VITALISCAD_ONSHAPE=1 before running mcp-serve. The three CAD tools are registered automatically when the flag is active.
14

CAD Memory Schema

CAD memories are stored as memory_type: 'semantic' records. The metadata JSONB column carries the CAD-specific fields. No additional database tables are created.

part_id Format

Source    Format                                      Example
Onshape   onshape/{docId}/{workspaceId}/{elementId}   onshape/DOC123/WS456/EID789
FreeCAD   freecad/{doc-name}/{object-name}            freecad/widget-asm/plate-a

Slugification is applied to FreeCAD names: spaces become hyphens, all characters are lowercased. Onshape IDs are used verbatim.
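The rule above can be sketched as follows — an assumed implementation of the stated lowercase + whitespace-to-hyphen behavior; the helper names are hypothetical:

```typescript
// Assumed implementation of the documented slug rule:
// runs of whitespace become hyphens, everything is lowercased.
function slugifyFreeCadName(name: string): string {
  return name.trim().toLowerCase().replace(/\s+/g, '-');
}

// Compose a FreeCAD part_id from document and object names.
function freecadPartId(docName: string, objectName: string): string {
  return `freecad/${slugifyFreeCadName(docName)}/${slugifyFreeCadName(objectName)}`;
}
```

For example, a document named "Widget Asm" with object "Plate A" would yield freecad/widget-asm/plate-a, matching the table above.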

Metadata Shape

interface CadMemoryMetadata {
  kind: 'decision' | 'constraint' | 'tolerance' | 'naming';
  docId: string;   // Onshape doc ID or FreeCAD doc name
  partId: string;  // element ID or FreeCAD object name
  refs: string[];  // related memory IDs or external refs
  tags: string[];
  source: 'onshape' | 'freecad';
  createdBy: 'mcp:vitaliscad-onshape' | 'mcp:vitaliscad-freecad';
  confirmedBy: 'user' | 'agent-inferred';
}

Recall queries push a metadata @> { docId, partId } containment filter to the Postgres JSONB GIN index. This means the database returns only the exact matching rows, with no client-side post-filtering or silent truncation regardless of how many CAD memories exist for a document.

Field         Values                                       Notes
kind          decision / constraint / tolerance / naming   Filtered on by list_constraints (constraint + tolerance only)
confirmedBy   user / agent-inferred                        user = explicitly dictated by human; agent-inferred = extracted by AI
refs          string[]                                     Optional array of related part IDs or external document refs

15

POST /api/cad/record-decision

HTTP endpoint for headless callers: the FreeCAD macro, CI pipelines, or any tool that cannot use MCP stdio. Accepts the same decision data as the record_decision MCP tool and writes to the same memory store.

POST /api/cad/record-decision

Authentication

Two methods are accepted. The API key header is preferred for headless callers:

Method                Header / Token                  Notes
API key (preferred)   cortex-api-key: vk_...          Matches server CORTEX_API_KEY. Used by FreeCAD macro.
Privy JWT             Authorization: Bearer <token>   Standard user session token. Falls back if API key is absent.

Request Body

{
  "part_id": "freecad/MyDoc/Body001",
  "kind": "decision", // decision | constraint | tolerance | naming
  "text": "Fillet locked at 2mm",
  "rationale": "Tooling cost spike above 2.5mm"
}

Field       Type     Required   Description
part_id     string   yes        Part identifier in onshape/... or freecad/... format.
text        string   yes        The decision or constraint text (max 2000 chars).
rationale   string   yes        Explanation for the decision (max 5000 chars).
kind        string   no         One of decision, constraint, tolerance, naming. Defaults to decision.
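For callers outside the FreeCAD macro, the request can be assembled like this — a hypothetical Node/TypeScript helper; only the URL, header name, and body fields come from this page:

```typescript
// Hypothetical request builder for the endpoint documented above.
// URL, header name, and field names are from the docs; the helper itself is not.
function buildRecordDecisionRequest(
  apiKey: string,
  body: { part_id: string; text: string; rationale: string; kind?: string },
) {
  return {
    url: 'http://localhost:3000/api/cad/record-decision',
    method: 'POST' as const,
    headers: {
      'content-type': 'application/json',
      'cortex-api-key': apiKey, // preferred auth for headless callers
    },
    // kind defaults to 'decision' when omitted, per the field table.
    body: JSON.stringify({ kind: 'decision', ...body }),
  };
}
```

Pass the result to fetch or any HTTP client.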

Response

200 on success:

{
  "success": true,
  "memory_id": "vitalis-abc12345"
}

Status   Meaning
200      Decision stored. Returns success and memory_id.
400      Missing or invalid fields (part_id, text, or rationale absent; kind not a valid enum).
401      Missing or invalid auth: no cortex-api-key header and no valid Privy JWT.

16

Storing Memories

Persist memories with automatic importance scoring, concept extraction, and optional on-chain anchoring.

const id = await brain.store({
  type: 'semantic',
  content: 'Users who hold >1M tokens tend to ask about governance',
  summary: 'Whale holder behavior pattern',
  source: 'analysis-agent',
  tags: ['whale', 'governance'],
  importance: 0.8,
});
 
console.log(id); // 42 (memory ID) or null on failure

StoreMemoryOptions

Field              Type                      Description
type               MemoryType                episodic | semantic | procedural | self_model | introspective
content            string                    Full memory content (max 5000 chars)
summary            string                    Short summary for recall matching (max 500 chars)
source             string                    Identifier for the agent storing the memory
tags               string[]                  Tags for filtering (max 20)
concepts           string[]                  Structured concepts (auto-inferred if omitted)
importance         number                    0-1 scale. LLM-scored if omitted (requires anthropic config)
emotionalValence   number                    -1 (negative) to 1 (positive). Default: 0
sourceId           string                    External ID (e.g. task ID, message ID, document ref)
relatedUser        string                    Associated user identifier
relatedWallet      string                    Associated wallet address
metadata           Record<string, unknown>   Arbitrary metadata
evidenceIds        number[]                  IDs of supporting memories

Memory Types

Each type decays at its own rate, modeled after how biological memory works:

Type            Decay / Day   Purpose
episodic        7%            Direct experience. The who, what, and when of every interaction.
semantic        2%            Condensed knowledge. Patterns and insights extracted over time.
procedural      3%            Acquired behavior. Strategies shaped by what delivered results.
self_model      1%            Identity layer. How the agent understands itself. Almost permanent.
introspective   ~2%           Autonomous journal entries. Independent thoughts born from active reflection.
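A sketch of how these daily rates compound over time. The rates come from the table above; the exponential form is an assumption about the implementation:

```typescript
// Daily decay rates from the table above (introspective ~2% approximated as 0.02).
const DAILY_DECAY: Record<string, number> = {
  episodic: 0.07,
  semantic: 0.02,
  procedural: 0.03,
  self_model: 0.01,
  introspective: 0.02,
};

// Compound the per-day rate over the elapsed days.
function applyDecay(type: string, decayFactor: number, days: number): number {
  const rate = DAILY_DECAY[type] ?? 0.02;
  return decayFactor * Math.pow(1 - rate, days);
}
```

Under this model, an episodic memory untouched for 30 days retains roughly 11% of its decay factor while a self_model memory retains roughly 74% — which is why identity entries read as almost permanent.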

Concept Ontology

Memories are automatically tagged with structured concepts from a controlled vocabulary of 12 labels:

market_event · holder_behavior · self_insight · social_interaction · community_pattern · token_economics · sentiment_shift · recurring_user · whale_activity · price_action · engagement_pattern · identity_evolution
17

Recalling

Multi-signal retrieval that blends vector similarity, keyword matching, tag relevance, and graph traversal. Results are ranked using the Generative Agents scoring formula.

recall(opts?)

Returns full Memory objects ranked by composite score.

const memories = await brain.recall({
  query: 'what does the user prefer',
  tags: ['preferences'],
  memoryTypes: ['episodic', 'semantic'],
  limit: 10,
  minImportance: 0.3,
});

RecallOptions

Field           Type           Description
query           string         Natural language search query
tags            string[]       Filter by tags
relatedUser     string         Filter by associated user
memoryTypes     MemoryType[]   Filter by memory type
limit           number         Max results to return
minImportance   number         Minimum importance threshold (0-1)
minDecay        number         Minimum decay factor threshold
trackAccess     boolean        Update access count and timestamp. Default: true

Retrieval Scoring

Each memory is scored with the additive formula from Park et al. 2023:

score = (0.5 * recency) + (3.0 * relevance) + (2.0 * importance) + (3.0 * vector_similarity) + (1.5 * graph_boost)

All scores are gated by each memory's decay_factor.
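Written out as code, the weights are from the formula above; treating each signal as normalized to 0-1 and multiplying by the decay factor is an assumption about how the gating is applied:

```typescript
// The additive scoring formula (Park et al. 2023), gated by decay.
interface Signals {
  recency: number;          // 0-1
  relevance: number;        // 0-1
  importance: number;       // 0-1
  vectorSimilarity: number; // 0-1
  graphBoost: number;       // 0-1
}

function retrievalScore(s: Signals, decayFactor: number): number {
  const raw =
    0.5 * s.recency +
    3.0 * s.relevance +
    2.0 * s.importance +
    3.0 * s.vectorSimilarity +
    1.5 * s.graphBoost;
  return raw * decayFactor; // a fully decayed memory scores zero
}
```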

recallSummaries(opts?)

Returns lightweight MemorySummary objects (no content field). Use for progressive disclosure: list summaries first, then hydrate the ones you need.

const summaries = await brain.recallSummaries({
  query: 'recent events',
});
 
// Pick the most relevant ones and hydrate
const ids = summaries.slice(0, 3).map(s => s.id);
const full = await brain.hydrate(ids);

hydrate(ids)

Fetch full Memory objects by ID. Useful after recallSummaries() to get content for specific memories.

const memories = await brain.hydrate([1, 2, 3]);
18

Export & Import

Your memories are portable. Export them to a file, visualize them in the browser, or move them between providers.

CLI Export

The fastest way to get your memories out:

$ npx vitalis export # -> vitalis-memories.json
$ npx vitalis export --format md # -> vitalis-memories.md
$ npx vitalis export --types episodic,semantic # Filter by type
$ npx vitalis export --output backup.json # Custom filename

Works with both hosted and self-hosted modes; it reads your .env to detect which.

Memory Explorer

Visualize your exported memories at vitaliscad.com/explore. Three ways to load:

Method       How
Local File   Drag and drop your vitalis-memories.json or .md file
API Key      Paste your Cortex API key to load directly from the cloud
Wallet       Connect your Ethereum mainnet wallet to retrieve wallet-linked memories

Once loaded, you'll see a 3D neural map of your agent's brain and searchable memory cards, and you can export again in JSON or Markdown format.

SDK Export

Programmatic export via the SDK:

// Export as MemoryPack
const pack = await brain.exportPack({
  name: 'weekly-backup',
  description: 'Full memory backup',
  limit: 1000,
});
 
// Import into another instance
const result = await brain.importPack(pack, {
  importanceMultiplier: 0.8, // Discount imported memories slightly
});
console.log(result); // { imported: 142, skipped: 0 }

REST API Export

For hosted users, the Cortex API exposes export/import endpoints:

# Export
$ curl -X POST https://vitaliscad.com/api/cortex/packs/export \
    -H "Authorization: Bearer YOUR_API_KEY" \
    -H "Content-Type: application/json" \
    -d '{"name":"backup","limit":1000}'
 
# Import
$ curl -X POST https://vitaliscad.com/api/cortex/packs/import \
    -H "Authorization: Bearer YOUR_API_KEY" \
    -H "Content-Type: application/json" \
    -d '{"pack": { ... }}'

Formats
The JSON export is lossless: it carries importance scores, decay factors, access counts, timestamps, and graph edges. The Markdown export is readable prose, useful for sharing or pasting into context, but it drops decay state and access history.
19

Association Graph

Typed, weighted edges between memories. Links grow stronger when memories are retrieved together, classic Hebbian reinforcement.

link(sourceId, targetId, type, strength?)

await brain.link(42, 87, 'supports', 0.8);

Link Types

TypeMeaning
supportsSource provides evidence for target
contradictsSource conflicts with target
elaboratesSource adds detail to target
causesSource led to or caused target
resolvesSource resolves a contradiction in target
followsSource happened after target (temporal)
relatesGeneral association

Hebbian Reinforcement

When two linked memories appear together in a recall result, their edge weight increases by 0.05. No manual configuration required: the graph topology reflects retrieval history, not upfront declarations.

At recall time, connected memories get a 1.5x graph boost in the scoring formula.
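The reinforcement step can be sketched like this — a hedged illustration; the +0.05 increment is from the text, while the cap at 1.0 and the data shapes are assumptions:

```typescript
// Hebbian bump: every pair of co-retrieved, already-linked memories
// gets +0.05 edge weight. The cap at 1.0 is an assumption.
function reinforceCoRetrieved(
  weights: Map<string, number>, // key: `${sourceId}->${targetId}`
  retrievedIds: number[],
): void {
  for (const a of retrievedIds) {
    for (const b of retrievedIds) {
      if (a === b) continue;
      const key = `${a}->${b}`;
      const current = weights.get(key);
      if (current !== undefined) {
        // Strengthen only existing links; topology grows from retrieval history.
        weights.set(key, Math.min(1, current + 0.05));
      }
    }
  }
}
```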

20

Dream Cycles

A six-phase cycle that consolidates, compacts, and cross-links accumulated memories. Modeled loosely on biological memory consolidation during sleep. Requires anthropic config to run.

dream(opts?)

Run one full dream cycle: consolidation, compaction, reflection, contradiction resolution, learning, emergence.

// Basic dream
await brain.dream();
 
// With emergence callback
await brain.dream({
  onEmergence: async (text) => {
    console.log('Emergence:', text);
  },
});

Dream Phases

Phase                          What Happens
I. Consolidation               Generates focal questions from recent memories. Each question retrieves evidence and produces new semantic insights with citation chains.
II. Compaction                 Faded episodic memories are summarized and compressed. Important signal survives; noise decays gracefully.
III. Reflection                Self-model review against accumulated knowledge. Identity evolves based on experience.
IV. Contradiction Resolution   Finds conflicting memories and resolves them. The stronger memory survives; the weaker one decays faster.
V. Emergence                   Examines its own existence. Output sent to the onEmergence callback if provided.

startDreamSchedule() / stopDreamSchedule()

Run dream cycles on a 6-hour cron schedule with daily memory decay.

brain.startDreamSchedule();
 
// Later...
brain.stopDreamSchedule();
Requires
Dream cycles require anthropic config. Calling dream() without it throws an error.
21

Active Reflection

A generative meditation cycle that runs between dream cycles. While dreams look backward (consolidation), reflection looks forward (journaling). Requires anthropic config.

reflect(opts?)

Run a single active reflection session. Returns a journal entry.

const journal = await brain.reflect({
  onReflection: async (entry) => {
    console.log(entry.title); // 'Thinking About Patterns'
    console.log(entry.text);  // Full journal text
  },
});

ReflectionJournal

Field           Type            Description
text            string          Full journal entry text
title           string          Generated title/theme for the entry
seedMemoryIds   number[]        Memory IDs that seeded this reflection
memoryId        number | null   ID of the stored introspective memory
timestamp       string          ISO timestamp

startReflectionSchedule() / stopReflectionSchedule()

Run active reflection on a 3-hour schedule, offset from dream cycles.

brain.startReflectionSchedule();
 
// Later...
brain.stopReflectionSchedule();

How It Works

Each session selects seed memories (recent events, high-importance memories, and one random older memory), generates a theme, free-writes a journal entry, stores it as an introspective memory, and links it to the seed memories.

Journal entries build on previous reflections, creating a continuity chain of original thought.
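As a rough illustration of the seed-selection step: the recent + high-importance + one random older memory mix comes from the text above, but the counts, cutoffs, and field names here are invented for the sketch:

```typescript
// Illustrative seed selection for a reflection session (not the real internals).
interface Seed { id: number; importance: number; createdAt: number }

function pickSeeds(all: Seed[], now: number): Seed[] {
  const dayMs = 24 * 60 * 60 * 1000;
  // A few recent events...
  const recent = all.filter((m) => now - m.createdAt < dayMs).slice(0, 3);
  // ...the highest-importance memories...
  const important = [...all].sort((a, b) => b.importance - a.importance).slice(0, 2);
  // ...and one random older memory for serendipity.
  const older = all.filter((m) => now - m.createdAt >= 7 * dayMs);
  const wildcard = older.length ? [older[Math.floor(Math.random() * older.length)]] : [];
  // De-duplicate by id.
  const byId = new Map<number, Seed>();
  for (const m of [...recent, ...important, ...wildcard]) byId.set(m.id, m);
  return [...byId.values()];
}
```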

Dream Cycle vs. Active Reflection
Dream cycle = backward-looking consolidation (compress, link, decay). Like sleep. Active reflection = forward-looking journaling (generate, explore, question). Like a diary. Both are automatic when using startDreamSchedule() + startReflectionSchedule().
22

Clinamen

Lateral retrieval by design. Drawn from Lucretius's concept of the swerve, clinamen surfaces memories you weren't searching for but that turn out to matter.

Where standard recall optimizes for similarity, clinamen perturbs the retrieval pipeline with a controlled noise term. The result is a set of loosely related memories: tangential associations, older insights, and cross-domain links that a similarity search would filter out.

const surprises = await brain.clinamen({
  context: 'working on authentication flow',
  limit: 5,
  minImportance: 0.3,
});

Reach for clinamen when your agent hits a wall, when you want serendipity in the reasoning process, or during creative work where sideways thinking makes the difference.

23

Configuration

Pass a CortexConfig object to the Cortex constructor. Use hosted to skip infrastructure or supabase when you want to run your own backend.

const brain = new Cortex({
  // Option A: Hosted (no database setup)
  hosted: {
    apiKey: 'your-cortex-api-key',
  },
 
  // Option B: Self-hosted (your own Supabase)
  supabase: {
    url: 'https://xxx.supabase.co',
    serviceKey: 'eyJ...',
  },
 
  // Optional: enables dream cycles + LLM importance scoring
  anthropic: {
    apiKey: 'sk-ant-...',
    model: 'claude-sonnet-4-5-20250929', // default
  },
 
  // Optional: enables vector similarity search
  embedding: {
    provider: 'voyage', // 'voyage' | 'openai'
    apiKey: 'pa-...',
    model: 'voyage-3',
    dimensions: 1024,
  },
 
  // Optional: enables on-chain memory commits
  ethereum: {
    rpcUrl: 'https://ethereum-rpc.publicnode.com',
    botWalletPrivateKey: 'hex...',
  },
});

hosted (zero-setup)

Field     Type     Description
apiKey    string   API key from npx vitalis register
baseUrl   string   API base URL. Default: https://vitaliscad.com

supabase (self-hosted)

Field        Type     Description
url          string   Your Supabase project URL
serviceKey   string   Supabase service role key

anthropic (optional)

Field    Type     Description
apiKey   string   Anthropic API key
model    string   Default: claude-sonnet-4-5-20250929

embedding (optional)

Field        Type     Description
provider     string   voyage | openai
apiKey       string   Provider API key
model        string   Embedding model name
dimensions   number   Default: 1024

ethereum (optional)

Field                 Type     Description
rpcUrl                string   RPC endpoint. Default: Ethereum mainnet
botWalletPrivateKey   string   Private key for on-chain commits

24

Database Schema

Vitalis runs on Supabase PostgreSQL with pgvector handling vector similarity search.

Setup

The schema file is included in the npm package:

$ psql $DATABASE_URL -f node_modules/vitalis/supabase-schema.sql

Or import it directly:

import schema from 'vitalis/schema'; // Path to supabase-schema.sql

Tables

Table              Purpose
memories           Core memory store with pgvector embedding column
memory_links       Association graph edges (typed, weighted)
memory_fragments   Per-fragment embeddings for granular vector search
dream_sessions     Dream cycle history and outputs
linked_wallets     Ethereum wallet to agent identity mappings

pgvector

The schema sets up HNSW indexes for efficient vector lookups. Confirm that the vector extension is active in your Supabase project (enabled by default).

Note
HNSW indexes perform best after data is loaded. If you're starting fresh, the system gracefully falls back to keyword-only retrieval until enough embeddings are present.
25

Degradation

Start with bare-minimum config and unlock more capabilities as you add providers.

Feature             Config Needed        Without It
Store / Recall      hosted or supabase   Constructor throws
Vector search       embedding            Keyword + tag scoring only
LLM importance      anthropic            Rule-based calculateImportance()
Dream cycles        anthropic            dream() throws clear error
On-chain commits    ethereum             Silently skipped
Emergence output    onEmergence          Output discarded if callback not provided
Active reflection   anthropic            reflect() throws clear error

Minimal setup
With just hosted or supabase config, you get full store/recall with keyword matching, tag scoring, type-specific decay, and the association graph. Add embedding and anthropic configs later as needed.
26

Utilities

decay()

Run type-specific memory decay across all stored memories.

const count = await brain.decay();
console.log(`Decayed ${count} memories`);

stats()

Return a snapshot of counts, type breakdown, and graph size.

const s = await brain.stats();
// { total, byType, avgImportance, lastDream, graphEdges }

recent(hours, types?, limit?)

Get memories from the last N hours, optionally filtered.

const recent = await brain.recent(24);
const filtered = await brain.recent(6, ['episodic'], 5);

selfModel()

Return all self_model memories sorted by importance.

const identity = await brain.selfModel();

scoreImportance(desc)

Rate a text description from 0 to 1 using the LLM scorer.

const score = await brain.scoreImportance('User reported critical bug in auth flow');
// 0.85

formatContext(memories)

Format a memory array into a prompt-ready context string.

const mems = await brain.recall({ query: '..' });
const ctx = brain.formatContext(mems);

inferConcepts(summary, source, tags)

Extract structured concept labels from raw text.

const concepts = await brain.inferConcepts(
  'Whale asked about governance', 'agent', ['whale']);
// ['whale_activity', 'token_economics']

destroy()

Tear down the instance and release all resources.

brain.destroy();

27

Events

Subscribe to memory lifecycle events using the standard EventEmitter interface.

brain.on('memory:stored', (mem) => {
  console.log(mem.id, mem.importance);
});

Episodic memories with importance above 0.7 automatically build stronger graph connections to co-occurring memories.

Links

GitHub
VitalisCad/VitalisCad

Source code, issue tracker, and contribution guide. Stars welcome.