Benchmarks
Real numbers from real hardware. Every claim backed by reproducible tests.
Test environment: Ubuntu 22.04 · Intel i5-13500 · 32GB DDR5 · NVMe SSD
Search Latency Comparison
Median search latency for small-medium corpora. Lower is better.
Ocean Eterna
0.019ms
ChromaDB
6ms
Weaviate
3ms
Pinecone
80ms
Elasticsearch
10ms
Indexing Speed
BM25 keyword indexing vs embedding-based approaches.
| System | Speed | OE Advantage |
|---|---|---|
| OpenAI Embeddings API | ~3,000 tok/s | 6,000x faster |
| Local Embedding Model (GPU) | ~50,000 tok/s | 380x faster |
| LangChain + ChromaDB | ~5,000 tok/s | 3,800x faster |
| Pinecone / Weaviate | ~10,000 tok/s | 1,900x faster |
| Ocean Eterna | 19,000,000 tok/s | — |
Raw Benchmark Data
Search Latency
| Corpus | Chunks | p50 (ms) | p95 (ms) | p99 (ms) | Mean (ms) | Max (ms) |
|---|---|---|---|---|---|---|
| 100 docs | 217 | 0.065 | 0.126 | 0.144 | 0.057 | 0.147 |
| 300 docs | 641 | 0.019 | 0.047 | 0.06 | 0.022 | 0.08 |
Ingestion Speed
| Size | Latency (ms) | Chunks | Tokens | MB/s | Memory Delta |
|---|---|---|---|---|---|
| 1 KB | 7 | 1 | 400 | 0.22 | 0 MB |
| 10 KB | 5 | 7 | 2,562 | 2.16 | 0.3 MB |
| 100 KB | 5 | 66 | 25,749 | 20.25 | 0.8 MB |
| 500 KB | 12 | 326 | 127,929 | 41.81 | 2.8 MB |
| 1 MB | 21 | 667 | 261,941 | 47.76 | 5.7 MB |
| 5 MB | 107 | 3331 | 1,309,090 | 46.94 | 28.9 MB |
Test Environment
CPU: Intel Core i9-14900KS
RAM: 94 GB DDR5
OS: Linux 6.12.10 (Ubuntu)
Compiler: g++ -O3 -std=c++17 -march=native -fopenmp
Binary size: 1.2 MB
Dependencies: liblz4, libcurl, libzstd, OpenMPEdge Cases & Security — 18/18 passed
Empty query
10KB query
Chinese unicode
Arabic unicode
Emoji query
Mixed unicode
HTML injection
SQL injection
Path traversal
XSS attempt
Newline injection
Malformed JSON
Missing required field
404 handling
Invalid chunk ID
Path traversal ingest
Empty content ingest
Huge filename ingest