The model layer

Models.
Not one model. A system.

NeuraWrite's multi-model intelligence layer sits behind every workflow. The platform selects the right model for each task, orchestrates models across steps, and continuously improves the ones we tune ourselves. Users never pick a model — the platform does.

Model flexibility without complexity

One call. The right model. Every time.

// User intent
aura.run({
  task: "draft a thought-leadership post on FedRAMP",
  project: "acme-marketing",
})
// Aura selects, calls, fans out
Aura Research   → web sources, claim-checked
Aura Academic   → first draft with brand book
Aura Humanize   → detection-aware rewrite
Aura Brand Voice → final voice pass
// You get
A finished post + a trace of what Aura did.

You don't pick a model. Aura does.

Every Aura Model is a complete capability — not just an LLM call. Each one ships with its own system prompt, scoring loop, and governance. Aura routes each step of a workflow to the profile best suited for it.

  • One API across reasoning, generation, retrieval, and tools
  • Automatic fallback when a provider degrades
  • Swap or upgrade engines with no contract change
  • Same governance, telemetry, and audit on every call
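A minimal sketch of how the deterministic fallback behind these points could work. The provider names and health flags are illustrative, not NeuraWrite's actual routing table:

```typescript
// Hypothetical sketch: each capability keeps an ordered provider chain, and
// the "deterministic fallback" is simply the first healthy entry in it.
type Provider = { name: string; healthy: boolean };

// Illustrative routing table, not NeuraWrite's real providers.
const routes: Record<string, Provider[]> = {
  humanize: [
    { name: "primary-engine", healthy: false }, // degraded provider
    { name: "fallback-engine", healthy: true },
  ],
};

function selectProvider(capability: string): string {
  const chain = routes[capability] ?? [];
  const pick = chain.find((p) => p.healthy);
  if (!pick) throw new Error(`no healthy provider for ${capability}`);
  return pick.name;
}
```

Because the chain is a fixed ordered list, the same outage always produces the same fallback choice, which is what keeps the behavior auditable.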

Capabilities

What Aura Models can do.

Aura doesn't list models — it lists capabilities. Each one is backed by one or more Aura Models, with deterministic routing rules and live retrain pipelines for the ones we own.

Humanize

Pass detection. Keep citations.

Humanize

live

A multi-stage humanization pipeline tuned for academic prose.

Aura Humanize is not a single model — it is an orchestrated pipeline. Each stage targets a different detector fingerprint: structural (Claude), lexical (DeepSeek), opener distribution (Llama), local validation, and a final perplexity pass via our humanization closer. The result is text that matches no single model fingerprint, which is the core reason multi-detector evasion works.
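The staged design above can be sketched as a fold over the draft. The stage bodies are placeholders, not the real models, and which two stages are conditional is an assumption based on the "3 always-on, 2 conditional" split:

```typescript
// Hypothetical sketch: the draft flows through always-on stages, with the
// conditional stages added when validation flags the text.
type Stage = { name: string; alwaysOn: boolean; run: (text: string) => string };

const stages: Stage[] = [
  { name: "structural", alwaysOn: true, run: (t) => t + " [structural]" },
  { name: "lexical", alwaysOn: true, run: (t) => t + " [lexical]" },
  { name: "openers", alwaysOn: true, run: (t) => t + " [openers]" },
  { name: "validation", alwaysOn: false, run: (t) => t + " [validated]" },
  { name: "perplexity", alwaysOn: false, run: (t) => t + " [closer]" },
];

function humanizePipeline(text: string, extraPasses: boolean): string {
  return stages
    .filter((s) => s.alwaysOn || extraPasses)
    .reduce((draft, s) => s.run(draft), text);
}
```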

GPTZero target
≥ 85
Sapling target
≥ 85
Pipeline stages
3 always-on, 2 conditional
Training pairs (humanizer-tune)
199 (v1)

Humanize 2

preview

Next humanization retrain — capturing approved production pairs now.

Successor to Aura Humanize 1, retrained on 500+ approved production pairs from the live capture loop. Triggered automatically once the training-pair threshold is reached.

Pairs collected
live counter on /aura
Threshold to ship
500 approved pairs
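The trigger described above reduces to a counter and a shipping threshold. A minimal sketch, with illustrative names:

```typescript
// Hypothetical sketch: each approved Aura Humanize run bumps the counter;
// crossing the threshold flips the retrain signal.
const RETRAIN_THRESHOLD = 500; // approved production pairs, per the spec above

let approvedPairs = 0;

// Called once per approved production run; returns true when it is time to retrain.
function recordApprovedPair(): boolean {
  approvedPairs += 1;
  return approvedPairs >= RETRAIN_THRESHOLD;
}
```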

Academic

Long-form drafting with discipline.

Academic

live

Citation-aware academic drafting. Tuned for theses, journals, RFPs.

A Sonnet-backed Aura profile with a NeuraWrite academic-voice system prompt and structural-anchor enforcement. Used for long-form drafting where citations, methodology, and discipline-specific tone matter.

Avg paper length
4–8k words
Citation preservation
100%

Research

Web-grounded, claim-checked.

Research

live

Web-grounded multi-source synthesis with claim-checking.

An Opus-class profile that combines Tavily web search, knowledge-base retrieval, and a NeuraWrite claim-check pass. Outputs include source attribution and confidence scoring.

Source attribution
every claim
Claim-check
auto

Brand Voice

Style guide on every word.

Brand Voice

live

Style-guide-aware drafting from your brand book.

A low-temperature Sonnet profile that loads your active brand book (voice rules, banned phrases, style guide) into the system prompt. Used everywhere brand consistency matters: SEO posts, social, customer-facing copy.

Voice adherence
auto-evaluated
Banned-phrase enforcement
pre + post
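Pre + post enforcement can be sketched as one scan run at both ends of drafting: once on the inputs, once on the generated copy. The phrases here are placeholders, not a real brand book:

```typescript
// Hypothetical sketch: the same banned-phrase scan runs pre (on the prompt
// and source material) and post (on the generated draft).
const bannedPhrases = ["synergy", "game-changer"]; // illustrative entries

function findViolations(text: string): string[] {
  const lower = text.toLowerCase();
  return bannedPhrases.filter((phrase) => lower.includes(phrase));
}

function enforceBrandVoice(draft: string): { ok: boolean; flagged: string[] } {
  const flagged = findViolations(draft); // post-pass on the generated copy
  return { ok: flagged.length === 0, flagged };
}
```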

Detect

In-house detection pre-predictor.

Score

roadmap

In-house AI-detection pre-predictor. Cheaper than external scoring.

A small classifier we are training on labeled (text, gptzero_score) pairs. Predicts AI-detection score before calling an external API, so we only spend on borderline content.

Target accuracy vs GPTZero
≥ 90%
Inference cost
~$0.0001 / call
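The cost gate reduces to a band check on the predicted score: trust the local pre-predictor at the extremes and pay for an external check only in the borderline band. The band edges below are illustrative, not tuned production thresholds:

```typescript
// Hypothetical sketch: only borderline predictions trigger the paid
// external detection call.
function needsExternalCheck(predictedScore: number): boolean {
  // Confidently human or confidently AI: the local predictor's answer stands.
  return predictedScore >= 15 && predictedScore <= 85;
}
```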

Performance + safety

Built for enterprise from the first call.

Privacy

Your content is never used to train third-party models. Aura's capture loop only feeds NeuraWrite-owned retrains, scoped to your tenant.

Governance

Every Aura Model call runs inside your project policies, content guardrails, and audit log. Same controls across every provider.

Reliability

Each Aura Model has a deterministic fallback. Provider outages may degrade quality, but they never break workflows.

Latency

Aura routes light steps to fast small models and heavy steps to flagship models — you don't pay 200B-param latency for a 5-line summary.

Capture loop

Every successful Aura Humanize run becomes a training pair. Usage compounds into the next retrain. The longer you use NeuraWrite, the better Aura gets.

Observability

Every Aura Model call writes telemetry: provider, latency, fallback flag, tokens. We can answer "is Aura Humanize 2 actually better?" with data.
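A sketch of what such a telemetry row and a comparison query could look like. Field names are illustrative, not the actual schema:

```typescript
// Hypothetical per-call telemetry row, mirroring the fields named above.
interface AuraCallTelemetry {
  capability: string;   // e.g. "humanize"
  provider: string;     // engine that actually served the call
  latencyMs: number;
  usedFallback: boolean;
  tokens: number;
}

// The kind of aggregation that answers "is model B actually better?".
function meanLatency(rows: AuraCallTelemetry[], provider: string): number {
  const hits = rows.filter((r) => r.provider === provider);
  if (hits.length === 0) return NaN;
  return hits.reduce((sum, r) => sum + r.latencyMs, 0) / hits.length;
}
```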

Use Aura outside NeuraWrite

Call Aura from your stack.

Public REST API

Issue an Aura API key in Settings, then call any Aura capability directly from your services.

# Humanize via Aura
curl https://neurawrite.ai/api/v1/aura/humanize \
-H "Authorization: Bearer nw_live_…" \
-H "Content-Type: application/json" \
-d '{"text": "your AI draft…"}'

MCP server

Drop Aura into Claude Desktop, Cursor, or any MCP-compatible agent. Same Aura key, same governance, same capabilities.

# Claude Desktop config
{
  "mcpServers": {
    "neurawrite": {
      "command": "npx",
      "args": ["@neurawrite/mcp-server"],
      "env": { "NEURAWRITE_API_KEY": "nw_live_…" }
    }
  }
}

One model is a tool.
Aura is a system.

Stop wiring providers together. Let Aura route, retry, evaluate, and improve every workflow you ship.