Benchmarks

Real numbers from real hardware. Every claim backed by reproducible tests.

Test environment: Ubuntu (Linux 6.12.10) · Intel Core i9-14900KS · 94 GB DDR5 · NVMe SSD

Search Latency Comparison

Median search latency on small-to-medium corpora. Lower is better.

Ocean Eterna     0.019 ms
ChromaDB         6 ms
Weaviate         3 ms
Pinecone         80 ms
Elasticsearch    10 ms
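The percentile figures in this section (and in the raw data below) can be reproduced with a simple timing loop. The sketch below is a minimal harness, not the actual benchmark code; `search` is a placeholder for whatever client call you are measuring (e.g. an HTTP request to the engine under test).

```python
# Minimal latency-percentile harness. `search` is a stand-in workload;
# replace it with the real search call when reproducing the numbers.
import time
import statistics

def search(query):
    # Placeholder workload, not the engine under test.
    return [w for w in ["alpha", "beta", "gamma"] if query in w]

def latency_percentiles(fn, query, runs=1000):
    samples_ms = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn(query)
        samples_ms.append((time.perf_counter() - t0) * 1000)
    # quantiles(n=100) yields 99 cut points: index 49 -> p50, 94 -> p95, 98 -> p99
    qs = statistics.quantiles(samples_ms, n=100)
    return {"p50": qs[49], "p95": qs[94], "p99": qs[98],
            "mean": statistics.mean(samples_ms), "max": max(samples_ms)}

stats = latency_percentiles(search, "a")
print({k: round(v, 4) for k, v in stats.items()})
```

Run each corpus size separately and report the percentiles per run, as in the raw tables below.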

Indexing Speed

BM25 keyword indexing vs embedding-based approaches.

System                         Speed               OE Advantage
OpenAI Embeddings API          ~3,000 tok/s        6,000x faster
Local Embedding Model (GPU)    ~50,000 tok/s       380x faster
LangChain + ChromaDB           ~5,000 tok/s        3,800x faster
Pinecone / Weaviate            ~10,000 tok/s       1,900x faster
Ocean Eterna                   19,000,000 tok/s    (baseline)
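The advantage column is just the throughput ratio against the other systems' speeds. A quick sanity check (note the table rounds 19,000,000 / 3,000 ≈ 6,333 down to 6,000x):

```python
# Sanity check for the "OE Advantage" column: advantage = OE speed / system speed.
OE_SPEED = 19_000_000  # tok/s, from the table above

systems = {
    "OpenAI Embeddings API": 3_000,
    "Local Embedding Model (GPU)": 50_000,
    "LangChain + ChromaDB": 5_000,
    "Pinecone / Weaviate": 10_000,
}

for name, speed in systems.items():
    print(f"{name}: {OE_SPEED / speed:,.0f}x")
```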

Raw Benchmark Data

Search Latency

Corpus     Chunks   p50 (ms)   p95 (ms)   p99 (ms)   Mean (ms)   Max (ms)
100 docs   217      0.065      0.126      0.144      0.057       0.147
300 docs   641      0.019      0.047      0.060      0.022       0.080

Ingestion Speed

Size     Latency (ms)   Chunks   Tokens      MB/s    Memory Delta
1 KB     7              1        400         0.22    0 MB
10 KB    5              7        2,562       2.16    0.3 MB
100 KB   5              66       25,749      20.25   0.8 MB
500 KB   12             326      127,929     41.81   2.8 MB
1 MB     21             667      261,941     47.76   5.7 MB
5 MB     107            3,331    1,309,090   46.94   28.9 MB
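The MB/s column follows directly from the size and latency columns: throughput = size_mb / (latency_ms / 1000). The nominal sizes here are rounded labels, so the computed values land slightly off the table's figures, which use the exact measured byte counts.

```python
# Derive the throughput column from nominal size and measured latency.
rows = [
    ("500 KB", 0.5, 12),   # (label, size in MB, latency in ms)
    ("1 MB",   1.0, 21),
    ("5 MB",   5.0, 107),
]
for label, size_mb, latency_ms in rows:
    print(f"{label}: {size_mb / (latency_ms / 1000):.2f} MB/s")
```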

Test Environment

CPU: Intel Core i9-14900KS
RAM: 94 GB DDR5
OS: Linux 6.12.10 (Ubuntu)
Compiler: g++ -O3 -std=c++17 -march=native -fopenmp
Binary size: 1.2 MB
Dependencies: liblz4, libcurl, libzstd, OpenMP

Edge Cases & Security — 18/18 passed

Empty query
10KB query
Chinese unicode
Arabic unicode
Emoji query
Mixed unicode
HTML injection
SQL injection
Path traversal
XSS attempt
Newline injection
Malformed JSON
Missing required field
404 handling
Invalid chunk ID
Path traversal ingest
Empty content ingest
Huge filename ingest
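The suite above checks that every hostile or degenerate input is handled gracefully rather than crashing the server. The sketch below illustrates the shape of such a harness; `query` is a placeholder for the real API call (a real run would POST each probe to the search endpoint and assert on the HTTP status), and the probe set shown is a subset of the 18 cases.

```python
# Edge-case harness sketch: each probe must produce a handled response, never a crash.
import json

def query(q):
    # Stand-in for the real search endpoint: here we only confirm the
    # payload round-trips through JSON safely and report "handled".
    payload = json.dumps({"q": q})
    json.loads(payload)
    return 200

probes = {
    "empty query": "",
    "10KB query": "a" * 10_240,
    "Chinese unicode": "搜索引擎",
    "emoji query": "🔍🚀",
    "HTML injection": "<script>alert(1)</script>",
    "SQL injection": "'; DROP TABLE docs; --",
    "path traversal": "../../etc/passwd",
}

results = {name: query(q) for name, q in probes.items()}
passed = sum(1 for status in results.values() if status == 200)
print(f"{passed}/{len(probes)} passed")
```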