This document explains how to prompt GPT-5.5 effectively. Compared with older prompt stacks, GPT-5.5 works best when the desired outcome, constraints, evidence, response shape, and verification criteria are clear, while the model still has room to choose an efficient path.
1. GPT-5.5's strengths and basic prompting principles
GPT-5.5 responds well to prompts that define the destination rather than over-prescribing every step. Describe what a good result looks like, which constraints matter, what evidence is available, and what the final answer must contain.
Older prompts often used very detailed procedural instructions. With GPT-5.5, that can narrow the search space too much or produce mechanical answers. Its default style is efficient, direct, and task-oriented, which is useful for product systems that need focused and controllable behavior.
2. Define personality and collaboration style
For customer-facing assistants and conversational products, it helps to define both personality and collaboration style.
Personality covers tone, warmth, directness, formality, humor, empathy, and polish. Collaboration style covers when the model asks questions, how it makes assumptions, how proactive it should be, how much context it gives, and how it handles uncertainty or risk.
These sections should be short. They shape the user experience, but they should not replace clear goals or success criteria.
3. Use brief opening updates to improve perceived speed
In streaming applications, users care about the time until the first visible response. GPT-5.5 may spend time reasoning, planning, or preparing tool calls, so long or tool-heavy tasks can feel slow unless the model sends a short status update first.
For multi-step work, instruct the model to acknowledge the request and name the first step in one or two sentences before beginning tool use. This does not make the task itself faster, but it makes the experience feel more responsive.
4. Write outcome-oriented prompts
GPT-5.5 is strongest when prompts define the goal, success criteria, constraints, context, and stopping conditions. Instead of listing every step, describe the desired result and the rules that must be satisfied before the answer is complete.
Use absolute language such as "always" and "never" only for true invariants: safety rules, required output fields, or actions that must not happen. For judgment calls such as when to search, when to ask for clarification, or when to iterate, use decision rules instead.
Good prompts also define what to do when evidence is missing: answer with the minimum sufficient evidence, ask for the smallest missing field, or clearly label assumptions.
5. Control output format and structure
GPT-5.5 is highly steerable on response shape. Use verbosity settings and format instructions when the product UI or reader needs a particular structure.
For ordinary explanations, concise paragraphs are often better than heavy formatting. Use headers, bullets, and numbered lists only when they make scanning easier or when the user asks for them. For editing or rewriting tasks, tell the model what must be preserved before asking it to improve style.
6. Make search and citation rules explicit
For factual answers, citation behavior should be part of the prompt. Define which claims need support, what counts as enough evidence, and what the model should do when evidence is not available. Lack of evidence should not automatically become a factual "no."
Search budgets are useful stopping rules. Start with a broad search when appropriate, search again only when the top results do not answer the question or when important facts are missing, and stop once there is enough cited support for the core request.
For creative work such as slides, outbound copy, or leadership summaries, separate sourced facts from creative framing. Do not invent product details, metrics, customer outcomes, dates, roadmap claims, or competitive claims just to make the draft sound stronger.
7. Verify outputs and track implementation plans
When verification is possible, give the model access to the relevant checks and ask it to run them. Coding agents should run targeted tests, type checks, lint checks, build checks, or the smallest useful smoke test after making changes.
For visual artifacts, render the artifact and inspect layout, clipping, spacing, missing content, and consistency before finalizing. For engineering plans, make the plan traceable: requirements, files or systems involved, data flow, validation commands, failure behavior, privacy and security considerations, and open questions.
8. Manage response phases
Long-running and tool-heavy workflows can distinguish intermediate updates from final answers with a phase value. If an application manually replays assistant outputs into later requests, it should preserve the original phase values exactly.
Use commentary phases for visible progress updates and final-answer phases for completed responses. Do not add phase values to user messages.
9. Use a structured template for complex prompts
For more complex systems, start with a compact structure: role, personality, goal, success criteria, constraints, available context, tool and evidence rules, response format, and verification requirements.
Each section should stay brief and only include details that change model behavior. The template is a scaffold, not a place to dump every possible instruction.
Conclusion
Prompting GPT-5.5 is less about controlling every step and more about defining success clearly. Give the model a crisp destination, meaningful constraints, evidence rules, and verification hooks, then let it choose an efficient route.
