NeuraWrite is the orchestration layer behind every workflow. It decides what to do, calls the right tools, picks the right model, remembers context across runs, and learns from every result. You experience the platform. You don't configure it.
Two layers. One system.
How Aura works
Aura plans the next step. It chooses which Aura Model to call, which connector to use, and what to retrieve from memory.
Aura runs each step: calls the model, invokes the tool, updates the document, captures the result. Every action is observable.
Aura emits structured events. The runtime UI renders them as Activity, Trace, and Document updates, so you see exactly what Aura did and why.
Aura persists context, sources, brand books, and prior decisions. Every workflow inherits everything Aura has learned about your work.
When a provider degrades or a connector fails, Aura falls back automatically and explains the change. Workflows don't break.
Every Aura call runs inside your project policies, guardrails, and audit log. Aura is intelligent and accountable.
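NeuraWrite doesn't publish its event schema, but the observability model above can be sketched as a structured event per step. The field names and values below are hypothetical, chosen only to illustrate the Activity/Trace rendering idea:

```python
from dataclasses import dataclass, field, asdict
from typing import Optional
import json
import time

@dataclass
class StepEvent:
    """Hypothetical structured event emitted after each Aura step."""
    step: str              # what Aura was doing, e.g. "draft_section"
    model_profile: str     # which Aura Model handled the step
    tool: Optional[str]    # connector invoked, if any
    status: str            # "ok", "fallback", or "error"
    detail: str = ""       # human-readable explanation shown in Activity
    ts: float = field(default_factory=time.time)

    def to_json(self) -> str:
        # A UI can render a stream of these as Activity and Trace entries.
        return json.dumps(asdict(self))

event = StepEvent(step="draft_section", model_profile="aura-academic",
                  tool=None, status="ok", detail="3 citations added")
print(event.to_json())
```

Because every step emits one of these, the audit log mentioned above falls out for free: persisting the event stream is the log.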
Aura Models
Aura Models is the multi-model intelligence layer Aura draws on. You never pick a model. Aura routes every step to the profile best suited for it, with deterministic fallbacks.
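"Deterministic fallbacks" here can be pictured as an ordered routing table: for each kind of step there is a fixed preference list, and Aura takes the first healthy profile. A minimal sketch, with invented task kinds and profile names:

```python
# Hypothetical routing table: task kind -> ordered list of model profiles.
ROUTES = {
    "humanize": ["aura-humanize-1", "aura-academic"],
    "draft":    ["aura-academic", "aura-brand"],
    "research": ["aura-research", "aura-academic"],
}

def route(task_kind: str, available: set) -> str:
    """Deterministic fallback: walk the ordered list, take the first healthy profile."""
    for profile in ROUTES.get(task_kind, []):
        if profile in available:
            return profile
    raise RuntimeError(f"no profile available for {task_kind!r}")

# If the primary research profile is degraded, the next one is chosen,
# and the switch can be surfaced to the user as a fallback event.
print(route("research", available={"aura-academic", "aura-brand"}))  # -> aura-academic
```

Because the preference lists are fixed, the same degradation always produces the same fallback, which is what makes the behavior explainable rather than random.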
Pass every detector while keeping every citation.
A multi-stage humanization pipeline tuned for academic prose.
Aura Humanize is not a single model but an orchestrated pipeline. Each stage targets a different detector fingerprint: structural (Claude), lexical (DeepSeek), opener distribution (Llama), local validation, and a final perplexity pass via our humanization closer. The result is text that matches no single model's fingerprint, which is the core reason multi-detector evasion works.
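The pipeline shape, with citation preservation enforced between stages, can be sketched as below. The stage functions are placeholders for the model calls described above, and the bracket-style citation format is an assumption for illustration:

```python
import re

CITATION = re.compile(r"\[\d+\]")  # assumed citation marker format, e.g. [1]

def structural_pass(text): return text   # placeholder: sentence restructuring (Claude)
def lexical_pass(text): return text      # placeholder: word-choice variation (DeepSeek)
def opener_pass(text): return text       # placeholder: opener-distribution rewrite (Llama)
def perplexity_pass(text): return text   # placeholder: final perplexity smoothing

def humanize(text: str) -> str:
    before = CITATION.findall(text)
    for stage in (structural_pass, lexical_pass, opener_pass):
        text = stage(text)
        # local validation between stages: every citation must survive rewriting
        if CITATION.findall(text) != before:
            raise ValueError(f"{stage.__name__} dropped a citation")
    return perplexity_pass(text)

print(humanize("Results improved [1]. Prior work agrees [2]."))
```

Validating between stages, rather than once at the end, pinpoints which rewrite broke a citation so only that stage needs to rerun.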
Next humanization retrain: capturing approved production pairs now.
Successor to Aura Humanize 1, retrained on 500+ approved production pairs from the live capture loop. Triggered automatically once the training-pair threshold is reached.
Rigorous long-form drafting with citation discipline.
Citation-aware academic drafting. Tuned for theses, journals, RFPs.
A Sonnet-backed Aura profile with a NeuraWrite academic-voice system prompt and structural-anchor enforcement. Used for long-form drafting where citations, methodology, and discipline-specific tone matter.
Ground every claim in a live source.
Web-grounded multi-source synthesis with claim-checking.
An Opus-class profile that combines Tavily web search, knowledge-base retrieval, and a NeuraWrite claim-check pass. Outputs include source attribution and confidence scoring.
Style-guide adherence on every word you ship.
Style-guide-aware drafting from your brand book.
A low-temperature Sonnet profile that loads your active brand book (voice rules, banned phrases, style guide) into the system prompt. Used everywhere brand consistency matters: SEO posts, social, customer-facing copy.
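Loading a brand book into the system prompt amounts to serializing its rules into instructions and pinning a low temperature. A sketch under assumed structure (the brand-book fields and profile name are hypothetical):

```python
# Hypothetical brand book: the rules a low-temperature profile must follow.
brand_book = {
    "voice": ["plain, direct sentences", "second person"],
    "banned_phrases": ["synergy", "leverage"],
    "style_guide": "Oxford comma; sentence-case headings.",
}

def build_system_prompt(book: dict) -> str:
    """Flatten brand rules into a system prompt the model must obey."""
    return "\n".join([
        "Follow these brand rules exactly.",
        "Voice: " + "; ".join(book["voice"]),
        "Never use: " + ", ".join(book["banned_phrases"]),
        "Style: " + book["style_guide"],
    ])

request = {
    "model": "aura-brand",                      # hypothetical profile name
    "temperature": 0.2,                         # low temperature for consistency
    "system": build_system_prompt(brand_book),
}
```

Keeping the temperature low matters as much as the prompt: brand consistency is a sameness problem, so sampling variance is the enemy.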
Know your detection score before you pay for it.
In-house AI-detection pre-predictor. Cheaper than external scoring.
A small classifier we are training on labeled (text, gptzero_score) pairs. Predicts AI-detection score before calling an external API, so we only spend on borderline content.
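The gating logic is the interesting part: a cheap local predictor scores first, and only borderline scores trigger the paid external call. The features, weights, and thresholds below are invented stand-ins for the trained classifier:

```python
def features(text: str) -> list:
    """Two toy features; the real model learns from (text, gptzero_score) pairs."""
    words = text.split()
    sentences = [s for s in text.split(".") if s.strip()]
    avg_len = len(words) / max(len(sentences), 1)          # words per sentence
    diversity = len(set(w.lower() for w in words)) / max(len(words), 1)
    return [avg_len, diversity]

# Illustrative hand-set weights, not trained values.
WEIGHTS, BIAS = [0.02, -0.5], 0.5

def predict_score(text: str) -> float:
    """Cheap local estimate of the AI-detection score, clamped to [0, 1]."""
    raw = BIAS + sum(w * f for w, f in zip(WEIGHTS, features(text)))
    return min(1.0, max(0.0, raw))

def needs_external_check(text: str, low: float = 0.2, high: float = 0.8) -> bool:
    # Obvious cases (clearly human or clearly AI) skip the external API;
    # only the borderline band spends money on real scoring.
    return low <= predict_score(text) <= high
```

Even a mediocre local predictor pays for itself here, since it only has to be confident at the extremes, never in the borderline band it hands off.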
A typical Aura run
Aura roadmap
Retrain on production-approved pairs. Higher GPTZero/Sapling targets.
In-house AI-detection classifier. Skip external scoring on obvious cases.
REST and MCP server so customers can call Aura from inside their own agentic stacks.
Aura is the orchestration brain behind NeuraWrite. Talk to us about running it on your enterprise content, with your tools, under your governance.