
Prompt Engineering Best Practices: Why Vague Prompts Fail

CookedBanana Team · 8 min read

"Tell me something about marketing."

The AI returns 400 words on target audiences, brand awareness, segmentation, and conversion funnels. Technically accurate. Completely useless.

Then you complain that AI is overhyped.

It isn't. The problem is upstream.

The quality of an LLM's output is directly proportional to the quality of the input you give it. If the prompt is vague, the response will be generic — not because the model is stupid, but because it doesn't have enough information to do better.

This is exactly the problem that prompt engineering exists to solve.


What Is Prompt Engineering?

Prompt engineering is the practice of designing, structuring, and iterating on the instructions you give to an AI model so that it returns reliably useful output. It's not about finding magic words — it's about eliminating the ambiguity that causes models to guess wrong.

The core insight: a model can only work with what you give it. OpenAI's own prompt engineering documentation describes it as "the process of structuring an input that can be interpreted and understood by a generative AI model." Every technique — role assignment, output formatting, few-shot examples, constraint definition — is a way of reducing the interpretive gap between what you want and what the model produces.

The rest of this article covers the most common failure mode: the vague prompt.


How an LLM Actually Works (in Three Lines)

A Large Language Model doesn't "understand" your questions the way a person would. It predicts the most probable next token given what you've written. Every ambiguous word in your prompt opens a range of possible interpretations, and the model picks the statistically most common one — rarely the one most suited to your specific situation.

Ambiguity in the input compounds with every token generated. The result is text that could answer a thousand different questions simultaneously — and yours only approximately.
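The prediction dynamic above can be sketched with a toy example. The probabilities below are invented purely for illustration, not drawn from any real model: the point is that an ambiguous prompt spreads probability across many generic continuations, and greedy selection picks the most statistically common one, not the one you meant.

```python
# Toy illustration (invented probabilities, NOT a real model): an ambiguous
# prompt leaves many plausible continuations, and the most probable one is
# usually the most generic.
candidates = {
    "tell me something about marketing": {
        "a textbook definition of marketing": 0.22,   # generic opener
        "a list of marketing channels": 0.20,
        "a history of advertising": 0.15,
        "advice for your specific campaign": 0.02,    # what you wanted
    },
}

def greedy_pick(prompt: str) -> str:
    """Return the highest-probability continuation for the prompt."""
    dist = candidates[prompt]
    return max(dist, key=dist.get)

print(greedy_pick("tell me something about marketing"))
# → a textbook definition of marketing
```

Real decoding is token-by-token rather than whole-answer, but the failure mode is the same: the model's defaults win wherever your prompt is silent.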

"Even instructions as simple as 'summarize this' or 'make it shorter' can produce radically different results depending on how they're phrased. Clarity in the prompt isn't optional — it's the engine of the response." — Lakera Prompt Engineering Guide 2026


5 Signs Your Prompt Is Too Vague

Check these five criteria before sending any request to an LLM.

[Image: Hands typing a prompt into ChatGPT on a MacBook — writing AI prompts with specific, structured instructions]

1. It's fewer than 15 words. Not a hard rule, but a reliable signal. A very short prompt rarely contains all the information the model needs to respond usefully.

2. It doesn't specify the target audience. "Write an article about energy saving" and "Write an article about energy saving for purchasing managers at mid-size manufacturing companies, aged 40–55, professional but accessible tone" will produce radically different outputs. Audience defines vocabulary, depth, examples, and register.

3. It doesn't define the output format. Do you want a list? Continuous prose? A table? An email? A LinkedIn post? If you don't specify, the model picks the statistically most common format for that type of request — which is often not what you need.

4. It doesn't say where the result will be used. A text for a company blog requires a different register than an internal report, a social post, or an investor presentation. Context of use determines voice, tone, and structure.

5. It uses vague words without defining them. "Good," "interesting," "appropriate," "professional": these words mean nothing to a model that doesn't know your industry, your standards, or your expectations. Replace them with concrete descriptors.
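The five checks above can be turned into a rough pre-send lint. This is a minimal sketch: the keyword lists and the 15-word threshold are illustrative assumptions taken from this article, not a standard, and crude substring checks will miss plenty of phrasings.

```python
# Rough pre-send lint for the five vagueness signals. Word lists and the
# 15-word threshold are illustrative assumptions, not a standard.
VAGUE_WORDS = {"good", "interesting", "appropriate", "professional", "nice"}

def vague_prompt_warnings(prompt: str) -> list[str]:
    text = prompt.lower()
    words = text.split()
    warnings = []
    if len(words) < 15:
        warnings.append("fewer than 15 words")
    if not any(marker in words for marker in ("audience", "for", "target")):
        warnings.append("no target audience specified")   # crude proxy
    if not any(fmt in text for fmt in ("list", "table", "email", "post", "prose", "format")):
        warnings.append("no output format specified")
    if not any(use in text for use in ("blog", "report", "linkedin", "presentation", "used")):
        warnings.append("no context of use")
    hits = VAGUE_WORDS & set(words)
    if hits:
        warnings.append(f"undefined vague words: {sorted(hits)}")
    return warnings

print(vague_prompt_warnings("Write a good article about marketing"))
```

Running it on "Write a good article about marketing" trips all five warnings, which is exactly the point: the prompt fails every check before it ever reaches a model.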


Before and After: Three Real Examples

Seeing the difference in practice is worth more than any abstract explanation.

Example 1 — Sales Follow-Up Email

Vague prompt:

Write a follow-up email.

Effective prompt:

You are a B2B account manager in the software industry. Write a follow-up email to a prospect who attended a product demo 3 days ago and hasn't responded. Tone: professional but warm. Length: under 100 words. Goal: schedule a second call. Avoid openers like "I hope this finds you well" and generic filler phrases.


Example 2 — Business Idea Analysis

Vague prompt:

Analyze this business idea: [idea].

Effective prompt:

You are a startup consultant with 15 years of experience in European e-commerce. Analyze this business idea with a critical lens: identify the strengths, the main risks, and the three problems that need solving before investing more resources. Present the analysis as a numbered list, 300 words max.


Example 3 — LinkedIn Post

Vague prompt:

Write a LinkedIn post about remote work.

Effective prompt:

You are a 38-year-old manager who has worked remotely for 4 years at a 50-person tech company. Write a first-person LinkedIn post about one specific lesson you learned managing a distributed team. Tone: honest, slightly contrarian, no motivational clichés. Length: 150–200 words. Close with a question to the audience.


The difference isn't in word count — it's in specificity. The effective prompt leaves the model almost no interpretive room, which translates directly into immediately usable output.


The Base Formula for an Effective Prompt

There's a simple framework you can apply to almost any request:

[AI Role] + [Specific task] + [Context] + [Output format] + [Constraints]

| Component | Practical example |
|---|---|
| Role | "You are an expert B2B marketing copywriter." |
| Task | "Write a tagline for the launch of a new SaaS product." |
| Context | "The product is called Routly. It automates corporate expense management. Target: CFOs at 50–200 person companies." |
| Format | "3 variants, 10 words max each." |
| Constraints | "Avoid 'revolutionary' and 'innovative.' No forced wordplay." |

This structure isn't magic — it's a mental framework that forces you to clarify what you want before writing the prompt. That's the hidden value: building the prompt makes you think more precisely.
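One way to force yourself through all five components is to make each one a required field. Here is a minimal sketch in Python, reusing the example values from the table above; the `Prompt` class and its field names are inventions for illustration, not any library's API.

```python
from dataclasses import dataclass

# Sketch of the [Role] + [Task] + [Context] + [Format] + [Constraints]
# formula as a small builder. Every field is required, so a missing
# component fails loudly instead of silently producing a vague prompt.
@dataclass
class Prompt:
    role: str
    task: str
    context: str
    output_format: str
    constraints: str

    def render(self) -> str:
        """Join the five components into a single prompt string."""
        return " ".join([self.role, self.task, self.context,
                         self.output_format, self.constraints])

tagline_prompt = Prompt(
    role="You are an expert B2B marketing copywriter.",
    task="Write a tagline for the launch of a new SaaS product.",
    context=("The product is called Routly. It automates corporate expense "
             "management. Target: CFOs at 50-200 person companies."),
    output_format="3 variants, 10 words max each.",
    constraints="Avoid 'revolutionary' and 'innovative.' No forced wordplay.",
)

print(tagline_prompt.render())
```

Omitting any field raises a `TypeError` at construction time, which is the programmatic equivalent of the checklist: you can't send the prompt until every component exists.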

[Image: Professional writing structured notes in a notebook next to a laptop — applying the prompt engineering formula to plan effective AI instructions]

The same principle applies to image prompts. If you use AI to generate images, a vague prompt produces the same problem: generic, non-replicable output. Our 8-part prompt formula for realistic AI photography shows how to decompose a visual request into precise technical parameters — the same logic applied to image generation.


The Cognitive Error Behind It All: The Curse of Knowledge

There's a reason we write vague prompts even when we think we're being specific: the curse of knowledge.

The person writing the prompt already knows what they want. They know the context, the audience, the tone, the output they're imagining. And they assume the model has access to all that information — because in their mind, it's obvious.

The model knows nothing that isn't written in the prompt.

The practical fix: after writing a prompt, read it as if you were someone with zero knowledge of your context — a colleague from a different company, a freelancer on their first day. Every gap that hypothetical reader would have needs to be filled in the prompt.

The exercise sounds trivial. The results aren't.


Frequently Asked Questions

What is a vague prompt?

A vague prompt is a request to an LLM that lacks contextual specificity — it doesn't define the audience, format, context, or constraints. It produces generic output because the model fills the gaps with statistically probable defaults, which rarely match what you actually need.

How do you write an effective AI prompt?

Use the formula: [Role] + [Specific task] + [Context] + [Format] + [Constraints]. The more detail you provide about who you are, who the output is for, and how it should be structured, the more precise the model's response will be.

Why doesn't AI understand my prompts?

LLMs don't "understand" in the human sense — they predict the most probable next token. If your prompt is ambiguous, the model interprets it using the most statistically common reading for that input string, which isn't necessarily the right one for your specific case.

How long should a prompt be?

There's no fixed ideal. A prompt needs at least a role or context, a specific task, a target audience, an output format, and key constraints. In practice that usually works out to 30–150 words, but length is a consequence of the specificity required, not a goal in itself.

Do ChatGPT and Claude behave differently with the same prompt?

Yes. Each model has different training architectures, different default tones, and different sensitivity to instruction types. A prompt that works perfectly on Claude may produce mediocre results on GPT-4o or Gemini, and vice versa. The base formula is valid for all of them, but fine-tuning depends on the specific model.

What is few-shot prompting?

Few-shot prompting means including one or more examples of the desired output directly in the prompt. Instead of describing what you want in the abstract, you show the model a sample and ask it to follow that pattern. It's one of the most reliable techniques for getting consistently formatted, on-tone output.
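To make the pattern concrete, here is a minimal sketch of how a few-shot prompt is assembled. The helper function, its `Input:`/`Output:` labels, and the example pairs are all invented for illustration; any consistent labeling scheme works, as long as the new input follows the same pattern as the examples.

```python
# Minimal few-shot prompt assembly: prepend input/output example pairs,
# then present the new input in the same pattern and leave the final
# "Output:" open for the model to complete. Labels and examples are
# illustrative inventions, not a standard.
def few_shot_prompt(instruction: str, examples: list[tuple[str, str]],
                    new_input: str) -> str:
    parts = [instruction, ""]
    for example_input, example_output in examples:
        parts += [f"Input: {example_input}", f"Output: {example_output}", ""]
    parts += [f"Input: {new_input}", "Output:"]
    return "\n".join(parts)

prompt = few_shot_prompt(
    "Rewrite each product note as a one-line benefit statement.",
    [("Exports reports to CSV",
      "Get your numbers into any spreadsheet in one click."),
     ("Supports SSO",
      "Your team signs in once and stays secure.")],
    "Tracks expenses automatically",
)
print(prompt)
```

The examples do double duty: they pin down the format and the tone at the same time, which is why few-shot prompting tends to outperform abstract descriptions of either.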


Conclusion

AI isn't telepathic. It doesn't know your industry, your audience, or where the output will be used. It only knows what you write in the prompt.

Take the last prompt you sent. Apply the formula: does it have a role? A defined audience? A format? Explicit constraints? Something is probably missing — and that gap is the explanation for the output that disappointed you.

The next problem we tackle in this series is more subtle: even when prompts are specific, the questions we ask often unconsciously reflect what we want to hear — and AI is far too good at telling us exactly that. We cover it in the article on AI sycophancy and cognitive biases in prompting.

Topics

prompt engineering best practices, what is prompt engineering, prompt engineering tips, how to write effective AI prompts, prompt engineering guide, chain of thought prompting, vague prompts AI, how to improve ChatGPT results, why AI doesn't understand my prompts, effective prompting ChatGPT Claude