Prompting AI the right way: from vague prompts to powerful results

Let’s clear something up. AI prompting isn’t “just asking ChatGPT to do stuff”. It’s programming with words. Large language models (LLMs) don’t think the way we do. They are prediction engines, sophisticated autocomplete systems. When you write a prompt, you’re not simply asking a question. You’re starting a pattern. The model completes it based on probabilities.
If your prompt is vague, it guesses.
If your prompt is structured, you hack the probability.
That’s the foundation of prompt engineering.
What is AI prompting (really)?
Prompting is a call to action, but more than that, it’s instruction design.
When you write:
“Write me an email about X”
Who’s writing it?
Nobody.
That’s the issue.
Without structure, perspective, or context, the model defaults to generic output. That’s why basic ChatGPT prompts often sound bland. The model is statistically averaging everything it has seen before.
To improve AI output, you need to reduce ambiguity.
Use personas to control perspective
One of the most effective AI prompting techniques is assigning a persona.
Instead of:
“Write an email about X.”
Try:
“You are a senior performance marketing strategist. Write an email about X.”
Why does this work?
Because you’re narrowing the knowledge source. You’re defining:
- Who is answering
- From what experience
- With what tone
In prompt engineering terms, you’re constraining the probability space.
Better constraints = better output.
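In practice, a persona is just text assembled in front of the task before it reaches the model. As a minimal sketch (the `persona_prompt` helper below is hypothetical, not a library function):

```python
def persona_prompt(persona: str, task: str) -> str:
    """Prepend a persona so the model answers from a defined perspective."""
    return f"You are {persona}.\n\n{task}"

prompt = persona_prompt(
    "a senior performance marketing strategist",
    "Write an email about X.",
)
print(prompt)
```

The same helper works for any perspective: swap in "a PR manager" or "a customer" and the probability space narrows accordingly.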
Add context to reduce hallucinations
Large language models are eager to please. They rarely say “I don’t know.” If information is missing, they fill the gaps. That’s where context comes in.
Compare:
“Give me birthday present ideas under £30.”
vs
“Give me 5 birthday present ideas under £30 for a 29-year-old who loves sport and recently started playing basketball.”
More context = less guesswork.
A powerful addition to any AI workflow is this line:
“If it’s not in the context and you can’t find the answer, say ‘I don’t know.’”
This dramatically reduces hallucinations and increases reliability.
In AI prompting, specificity is the difference between average and accurate.
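Context and the “I don’t know” guardrail can be combined into one reusable template. A rough sketch, assuming a hypothetical `grounded_prompt` helper:

```python
def grounded_prompt(task: str, context: str) -> str:
    """Attach context and a guardrail line that discourages guessing."""
    return (
        f"Context:\n{context}\n\n"
        f"Task: {task}\n\n"
        "If it's not in the context and you can't find the answer, "
        "say 'I don't know.'"
    )

print(grounded_prompt(
    "Give me 5 birthday present ideas under £30.",
    "Recipient is 29, loves sport, recently started playing basketball.",
))
```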
Use tools, but don’t blindly trust them
Modern AI models can browse the web and access external data. That helps solve the “frozen in time” limitation of training cut-off dates.
But there’s a trade-off.
The more tools models have, the more we trust them. And if they retrieve outdated or low-quality sources, you get confidently delivered misinformation.
That’s why structured prompting and validation matter even more in advanced AI workflows.
Control the format of the output
Another key part of prompt engineering is formatting.
Don’t just ask for content. Specify:
- Format: bullet points
- Length: under 200 words
- Tone: professional and direct
- Structure: introduction, three key points, conclusion
Formatting reduces randomness. It tells the model exactly what “good” looks like.
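A format spec like the one above can live alongside the task as a reusable block. As an illustrative sketch (names are made up for this example):

```python
# A reusable output specification: format, length, tone, structure.
FORMAT_SPEC = (
    "Format: bullet points\n"
    "Length: under 200 words\n"
    "Tone: professional and direct\n"
    "Structure: introduction, three key points, conclusion"
)

def formatted_prompt(task: str, spec: str = FORMAT_SPEC) -> str:
    """Append an explicit output spec so the model knows what 'good' looks like."""
    return f"{task}\n\n{spec}"

print(formatted_prompt("Write an email about X."))
```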
This is still zero-shot prompting: one instruction, no examples.
But we can go further.
Few-shot prompting: show the model what good looks like
Few-shot prompting means providing examples of the output you want.
Instead of describing tone, structure, and style, you show it.
This is one of the most powerful techniques in ChatGPT prompting because it drastically reduces interpretation gaps.
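Structurally, few-shot prompting is nothing more than input/output pairs placed before the real task. A minimal sketch, with a hypothetical `few_shot_prompt` helper:

```python
def few_shot_prompt(task: str, examples: list[tuple[str, str]]) -> str:
    """Show the model worked examples, then leave the final output blank."""
    shots = "\n\n".join(
        f"Input: {inp}\nOutput: {out}" for inp, out in examples
    )
    return f"{shots}\n\nInput: {task}\nOutput:"

prompt = few_shot_prompt(
    "Our delivery is delayed by two days.",
    [
        ("Your order has shipped.", "Great news - your order is on its way!"),
        ("Your refund is processed.", "All sorted - your refund is on its way back to you."),
    ],
)
print(prompt)
```

The trailing `Output:` invites the model to complete the pattern in the style of the examples, rather than interpreting a description of that style.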
Chain of thought: improve reasoning
Chain of thought (CoT) prompting tells the model to think step by step before answering.
For example:
Before writing the email, think through:
- What was the root cause?
- How was it fixed?
- What is being done to prevent it?
This increases:
- Accuracy
- Logical coherence
- Transparency
Modern reasoning models often integrate this internally, but explicitly structuring thinking improves repeatable AI systems and complex workflows.
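The reasoning steps above can be expressed as an explicit, numbered scaffold in the prompt. A sketch, assuming a hypothetical `chain_of_thought_prompt` helper:

```python
COT_STEPS = [
    "What was the root cause?",
    "How was it fixed?",
    "What is being done to prevent it?",
]

def chain_of_thought_prompt(task: str, steps: list[str]) -> str:
    """Ask the model to work through numbered steps before producing output."""
    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))
    return (
        f"Before answering, think through these steps:\n{numbered}\n\n"
        f"Then: {task}"
    )

print(chain_of_thought_prompt("write the incident email.", COT_STEPS))
```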
Trees of thought: generate better options
Where chain of thought is linear, trees of thought (ToT) explore multiple reasoning paths.
Instead of asking for one answer, you ask for:
- Three distinct strategic approaches
- Evaluation of each
- A synthesised “golden path”
- Final output
This avoids average answers and encourages diversity of ideas.
For brainstorming, strategy, or creative direction, this is significantly stronger than basic prompting.
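The four-stage structure above can be written straight into the prompt. An illustrative sketch (the helper name is made up):

```python
def tree_of_thoughts_prompt(task: str, n: int = 3) -> str:
    """Ask for several approaches, evaluation, synthesis, then final output."""
    return (
        f"Task: {task}\n\n"
        f"1. Propose {n} distinct strategic approaches.\n"
        "2. Evaluate the strengths and weaknesses of each.\n"
        "3. Synthesise a 'golden path' from the best elements.\n"
        "4. Produce the final output based on that path."
    )

print(tree_of_thoughts_prompt("Plan the launch campaign for product X."))
```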
Adversarial validation: force higher standards
Also known as the “playoff method”, adversarial validation creates internal competition. You can simulate:
- A customer persona
- A PR manager
- A developer
Each produces or critiques drafts. The best elements are synthesised.
Why does this work?
Because AI models are often better at editing and critiquing than producing first drafts.
By engineering friction, you engineer quality.
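The playoff loop above can be sketched as a prompt that runs a draft past each critic persona in turn. The `playoff_prompt` helper below is hypothetical, shown only to make the structure concrete:

```python
def playoff_prompt(task: str, critics: list[str]) -> str:
    """Draft, then have each critic persona review before synthesising."""
    rounds = "\n".join(
        f"- As {c}, critique the draft and suggest improvements."
        for c in critics
    )
    return (
        f"Draft a response to: {task}\n\n"
        "Then run this review loop:\n"
        f"{rounds}\n\n"
        "Finally, synthesise the strongest elements into one improved version."
    )

print(playoff_prompt(
    "a product recall announcement",
    ["a customer", "a PR manager", "a developer"],
))
```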
The operating principle of great prompt engineering
All of these advanced AI prompting techniques - personas, context, few-shot prompting, chain of thought, trees of thought - boil down to one thing: Clarity.
Persona forces you to define perspective.
Context forces you to define facts.
Chain of thought forces you to define logic.
Few-shot prompting forces you to define quality.
If AI output is messy, it’s rarely because AI “failed”.
It’s usually because the prompt wasn’t clear enough.
The real skill isn’t clever tricks. It’s structured thinking.
At Studio 34, we don’t just use AI tools. We design AI systems. Because when you can clearly explain a process, you can scale it.
And that’s how ideas perform.
Complete this form to receive the Prompting Guide.



