We read every Mistral Instruct prompt guide so you don't have to.
Paste your prompt below - we'll rewrite it using Mistral's official best practices.
- Built on Mistral's official prompting guide
- Handles function calling, JSON mode, and Codestral for code automatically
- Free, instant, no signup
What Mistral Instruct actually rewards
We pulled this from Mistral's official guidance and what works in production. The short version:
- Embed the system message inside the FIRST [INST] block as a preamble.
- Use few-shot examples.
- Alternate user/assistant turns strictly.
- Use Mistral Large for tool use and JSON output.
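The first two points can be sketched in code. Below is a minimal, hypothetical helper showing the raw Mistral Instruct template shape: the system message rides inside the first `[INST]` block, and any few-shot pairs become completed `[INST] ... [/INST] answer</s>` turns before the final question. The function name and structure are illustrative, not an official API.

```python
# Sketch of the raw Mistral Instruct chat template (v1-style tags).
# System preamble goes inside the FIRST [INST] block; completed
# (user, assistant) pairs serve as few-shot examples.

def build_prompt(system: str, shots: list[tuple[str, str]], question: str) -> str:
    parts = ["<s>"]
    for i, (user, assistant) in enumerate(shots):
        # Only the very first user turn carries the system preamble.
        content = f"{system}\n\n{user}" if i == 0 else user
        parts.append(f"[INST] {content} [/INST] {assistant}</s>")
    if not shots:
        question = f"{system}\n\n{question}"
    parts.append(f"[INST] {question} [/INST]")
    return "".join(parts)

prompt = build_prompt(
    system="You are a concise copy editor.",
    shots=[("Fix: 'teh cat'", "the cat")],
    question="Fix: 'recieve'",
)
```

In practice you would let your client library (e.g. the official tokenizer's chat template) render this for you; the sketch just makes the structure visible.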
Before you hit send, check:
- Did you embed the system message inside the FIRST [INST] block as a preamble?
- Did you include few-shot examples?
- Do your user/assistant turns alternate strictly?
- Did you use Mistral Large for tool use and JSON output?
Common mistakes we fix automatically
- Don't break user/assistant alternation.
- Don't use Llama-style <|...|> headers.
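The alternation rule is easy to check before sending. Here is a small, hypothetical validator (not part of any Mistral SDK) that accepts an optional leading system message and then requires turns to alternate user, assistant, user, and so on:

```python
# Hypothetical check: after an optional leading system message,
# roles must strictly alternate starting with "user".

def alternates_strictly(messages: list[dict]) -> bool:
    roles = [m["role"] for m in messages]
    if roles and roles[0] == "system":
        roles = roles[1:]
    return all(
        role == ("user" if i % 2 == 0 else "assistant")
        for i, role in enumerate(roles)
    )
```

A conversation like user, user, assistant would fail this check, and Mistral's API rejects it for the same reason.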
Ready to rewrite for Mistral Instruct?
Frequently asked questions
- Which versions of Mistral Instruct does this support?
- We support mistral-7b-instruct, mistral-large, mistral-small, mistral-nemo, codestral. We apply the prompt patterns Mistral recommends for each, so the rewrite is tuned to the version you're using.
- Is my prompt stored or used for training?
- No. Prompts are sent to the rewriter, scored, returned, and discarded. We don't train on them and we don't keep them around.
- Do I need to know prompt engineering to use this?
- Nope. That's the point. Paste what you have, click Rewrite, get back a version that follows Mistral's official guidance.
- What makes this different from Mistral Instruct's own "improve prompt" feature?
- Built-in optimizers use the model's own preferences. Ours is built on Mistral's official documentation and patterns that consistently produce better results in production. Mistral Instruct works best with prompts in the 100-3000 token range, and we keep rewrites inside that window.
Optimizing for a different AI?