NeuraWrite's multi-model intelligence layer sits behind every workflow. The platform selects the right model for each task, orchestrates models across steps, and continuously improves the ones we tune ourselves. Users never pick a model; the platform does.
Model flexibility without complexity
aura.run({
  task: "draft a thought-leadership post on FedRAMP",
  project: "acme-marketing",
})

Aura Research → web sources, claim-checked
Aura Academic → first draft with brand book
Aura Humanize → detection-aware rewrite
Aura Brand Voice → final voice pass
A finished post + a trace of what Aura did.
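A trace like the one above might be shaped as follows. This is an illustrative sketch only; the `AuraTrace` interface and every field name in it are assumptions, not NeuraWrite's documented schema.

```typescript
// Hypothetical shape of the trace returned by aura.run().
// Field names are illustrative assumptions, not a documented API.
interface AuraStep {
  capability: string;   // e.g. "research", "draft"
  model: string;        // Aura profile that handled the step
  fallbackUsed: boolean;
  latencyMs: number;
}

interface AuraTrace {
  project: string;
  steps: AuraStep[];
  output: string;       // the finished post
}

const trace: AuraTrace = {
  project: "acme-marketing",
  steps: [
    { capability: "research", model: "Aura Research", fallbackUsed: false, latencyMs: 4200 },
    { capability: "draft", model: "Aura Academic", fallbackUsed: false, latencyMs: 9100 },
    { capability: "humanize", model: "Aura Humanize", fallbackUsed: false, latencyMs: 6300 },
    { capability: "voice", model: "Aura Brand Voice", fallbackUsed: false, latencyMs: 2800 },
  ],
  output: "…final post text…",
};

const pipeline = trace.steps.map((s) => s.model).join(" → ");
```

A trace in this spirit is what lets the platform answer "which profile handled which step, and did a fallback fire?" after the fact.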
Every Aura Model is a complete capability, not just an LLM call. Each one ships with its own system prompt, scoring loop, and governance. Aura routes each step of a workflow to the profile best suited for it.
Capabilities
Aura doesn't list models; it lists capabilities. Each one is backed by one or more Aura Models, with deterministic routing rules and live retrain pipelines for the ones we own.
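Deterministic routing can be pictured as a fixed table from capability to an ordered chain of profiles. A minimal sketch, assuming hypothetical capability names and profile orderings that are not NeuraWrite's actual routing config:

```typescript
// Illustrative routing table: capability → ordered Aura profiles.
// The first entry serves the step; later entries are the fallback order.
const routes: Record<string, string[]> = {
  research: ["Aura Research"],
  "academic-draft": ["Aura Academic"],
  "brand-voice": ["Aura Brand Voice", "Aura Academic"],
};

function resolveProfile(capability: string): string {
  const chain = routes[capability];
  if (!chain) throw new Error(`Unknown capability: ${capability}`);
  return chain[0];
}
```

Because the table is static data rather than runtime scoring, the same workflow always routes the same way, which is what makes audits and regression comparisons meaningful.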
Pass detection. Keep citations.
A multi-stage humanization pipeline tuned for academic prose.
Aura Humanize is not a single model — it is an orchestrated pipeline. Each stage targets a different detector fingerprint: structural (Claude), lexical (DeepSeek), opener distribution (Llama), local validation, and a final perplexity pass via our humanization closer. The result is text that matches no single model fingerprint, which is the core reason multi-detector evasion works.
Next humanization retrain — capturing approved production pairs now.
Successor to Aura Humanize 1, to be retrained on 500+ approved production pairs from the live capture loop. The retrain triggers automatically once the training-pair threshold is reached.
Long-form drafting with discipline.
Citation-aware academic drafting. Tuned for theses, journals, RFPs.
A Sonnet-backed Aura profile with a NeuraWrite academic-voice system prompt and structural-anchor enforcement. Used for long-form drafting where citations, methodology, and discipline-specific tone matter.
Web-grounded, claim-checked.
Web-grounded multi-source synthesis with claim-checking.
An Opus-class profile that combines Tavily web search, knowledge-base retrieval, and a NeuraWrite claim-check pass. Outputs include source attribution and confidence scoring.
Style guide on every word.
Style-guide-aware drafting from your brand book.
A low-temperature Sonnet profile that loads your active brand book (voice rules, banned phrases, style guide) into the system prompt. Used everywhere brand consistency matters: SEO posts, social, customer-facing copy.
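Loading a brand book into the system prompt might look like the following sketch. The `BrandBook` shape and the `buildSystemPrompt` helper are assumptions for illustration, not NeuraWrite internals:

```typescript
// Hypothetical brand-book shape and prompt builder — illustrative only.
interface BrandBook {
  voiceRules: string[];
  bannedPhrases: string[];
  styleGuide: string;
}

function buildSystemPrompt(book: BrandBook): string {
  return [
    "You write in the customer's brand voice.",
    "Voice rules:",
    ...book.voiceRules.map((r) => `- ${r}`),
    `Never use these phrases: ${book.bannedPhrases.join(", ")}.`,
    `Style guide: ${book.styleGuide}`,
  ].join("\n");
}

const prompt = buildSystemPrompt({
  voiceRules: ["Plain, direct sentences", "Second person"],
  bannedPhrases: ["synergy", "world-class"],
  styleGuide: "AP style; sentence-case headings.",
});
```

Compiling the rules into the prompt at call time is what lets one profile serve every tenant: the model stays generic while the brand book carries the voice.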
In-house detection pre-predictor.
In-house AI-detection pre-predictor. Cheaper than external scoring.
A small classifier we are training on labeled (text, gptzero_score) pairs. Predicts AI-detection score before calling an external API, so we only spend on borderline content.
Performance + safety
Your content is never used to train third-party models. Aura's capture loop only feeds NeuraWrite-owned retrains, scoped to your tenant.
Every Aura Model call runs inside your project policies, content guardrails, and audit log. Same controls across every provider.
Each Aura Model has a deterministic fallback. Provider outages degrade quality, never break workflows.
Aura routes light steps to fast small models and heavy steps to flagship models, so you don't pay 200B-param latency for a 5-line summary.
Every successful Aura Humanize run becomes a training pair. Usage compounds into the next retrain. The longer you use NeuraWrite, the better Aura gets.
Every Aura Model call writes telemetry: provider, latency, fallback flag, tokens. We can answer "is Aura Humanize 2 actually better?" with data.
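The per-call fallback and telemetry behavior described above can be sketched as one wrapper. This is a simplified, synchronous stand-in; the type names, fields, and provider chain are assumptions, not the actual implementation:

```typescript
// Sketch only: deterministic fallback plus telemetry on every call.
interface Telemetry {
  provider: string;
  latencyMs: number;
  fallbackUsed: boolean;
  tokens: number;
}

type ProviderCall = (prompt: string) => { text: string; tokens: number };

function callWithFallback(
  prompt: string,
  chain: Array<{ name: string; call: ProviderCall }>,
  log: Telemetry[],
): string {
  for (let i = 0; i < chain.length; i++) {
    const start = Date.now();
    try {
      const res = chain[i].call(prompt);
      // Record telemetry for the call that actually served the step.
      log.push({
        provider: chain[i].name,
        latencyMs: Date.now() - start,
        fallbackUsed: i > 0,
        tokens: res.tokens,
      });
      return res.text;
    } catch {
      // Provider outage: degrade to the next profile in the chain
      // instead of failing the workflow.
    }
  }
  throw new Error("Every provider in the fallback chain failed");
}

// Usage: the primary provider is down, the fallback serves the step.
const log: Telemetry[] = [];
const text = callWithFallback(
  "summarize this",
  [
    { name: "primary", call: () => { throw new Error("outage"); } },
    { name: "fallback", call: (p) => ({ text: `ok: ${p}`, tokens: 5 }) },
  ],
  log,
);
```

The `fallbackUsed` flag is what turns "outages degrade quality, never break workflows" into something measurable: a spike in that flag shows up in telemetry before anyone files a ticket.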
Use Aura outside NeuraWrite
Issue an Aura API key in Settings, then call any Aura capability directly from your services.
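Calling a capability from your own service might look like the sketch below. The endpoint URL, header, and request/response shapes are illustrative guesses, not documented API shapes; consult the actual Aura API reference before wiring anything up:

```typescript
// Hypothetical HTTP call to an Aura capability — endpoint, auth header,
// and body shape are assumptions for illustration only.
const AURA_API_KEY = "<your-aura-api-key>"; // issued in Settings

async function runCapability(capability: string, input: string): Promise<string> {
  const res = await fetch("https://api.neurawrite.example/v1/aura/run", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${AURA_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ capability, input }),
  });
  if (!res.ok) throw new Error(`Aura call failed: ${res.status}`);
  const data = await res.json();
  return data.output;
}
```

The same key and the same governance apply whether the call comes from NeuraWrite's UI or from your backend, so there is no separate "API mode" to police.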
Drop Aura into Claude Desktop, Cursor, or any MCP-compatible agent. Same Aura key, same governance, same capabilities.
Stop wiring providers together. Let Aura route, retry, evaluate, and improve every workflow you ship.