Prompt Engineering Basics: Get Better Results from AI

The way you phrase your request matters. A few simple tweaks can dramatically improve AI output quality without paying more or switching tools.

Be specific

Bad: "Write a blog post"

Better: "Write a 1500-word blog post about best AI tools for freelancers targeting freelance writers. Include an intro hook, at least 3 tools with descriptions, and a conclusion with next steps. Use a conversational, friendly tone."

Specificity gives the model guardrails. It knows the length, audience, format, and tone. The output will match your needs much better.

Provide context

Bad: "Summarize this: [article]"

Better: "Summarize this article in 150 words. Focus on the key findings and recommendations, not background. I'll be sending this summary to executives who need the headline facts, not detailed methodology."

Context helps the model prioritize. It understands what matters for your use case.

Give examples

Bad: "Write marketing copy for my product"

Better: "Write marketing copy for my SaaS product [name]. Here is an example of copy I like: [paste competitor example]. Match that style—direct, benefit-focused, conversational, with a strong CTA. Our product solves [problem] for [audience]."

Examples show the model exactly what you want. It picks up tone, style, and structure faster from an example than from a description.

Break complex requests into steps

Bad: "Help me plan my content strategy"

Better: "I'm a consultant focusing on [niche]. Help me with 3 things: 1) What types of content perform best for [niche] consultants? 2) What topics should I write about? 3) What is a realistic publishing schedule for a solo consultant?"

Separate steps reduce confusion. The model can focus on one question at a time and give better answers.

Iterate, don't regenerate

After getting a draft:

  • "This is good but make it more casual" (targeted tone change)
  • "Change the opening to ask a question instead" (specific edit)
  • "Add a section about [topic]" (specific addition)

Iteration guides the model toward your vision. Generic "regenerate" requests often go backward.

Know your model's strengths

  • GPT-4 (ChatGPT): Good at reasoning, writing, code
  • Claude: Excellent at long documents, nuance, avoiding bias
  • Perplexity: Good at research and current information

Match your task to the right tool. Asking Claude to search the web is inefficient; ask Perplexity instead.

Common prompt patterns that work

  1. Role-play: "Act as an [expert]. How would you approach [problem]?"
  2. Structure: "Provide your answer in [format: bullet points, table, outline]."
  3. Constraints: "Answer in under 50 words" or "Use only simple language."
  4. Audience: "Explain [topic] as if I were a [audience type]."

These patterns work because they give the model clear structure to follow.
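If you reuse the same prompts often, the four patterns above can be combined programmatically. Here is a minimal sketch in Python; the function name and template fields are illustrative assumptions, not part of any tool or API.

```python
def build_prompt(role, task, fmt, constraint, audience):
    """Combine the four patterns (role, structure, constraints, audience)
    into a single prompt string. Purely illustrative template."""
    return (
        f"Act as a {role}. "
        f"{task} "
        f"Provide your answer as {fmt}. "
        f"{constraint} "
        f"Explain it as if I were {audience}."
    )

prompt = build_prompt(
    role="senior content strategist",
    task="Outline a blog post about prompt engineering.",
    fmt="bullet points",
    constraint="Answer in under 100 words.",
    audience="a freelance writer new to AI tools",
)
print(prompt)
```

Filling the slots explicitly like this also makes it obvious when a prompt is missing one of the four patterns.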

Practice

Write three prompts for the same task. Make each progressively more specific. Run all three through your favorite AI tool. Compare outputs. You will immediately see how specificity improves results.

Then use that best-performing prompt every time you need that task done. Save your good prompts in a document or Notion database. Build your own library of high-performing prompts.
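A document or Notion database works fine for a prompt library, but even a small local JSON file does the job. A minimal sketch, assuming a simple name-to-prompt mapping (the filename and helper functions are illustrative, not from any library):

```python
import json
from pathlib import Path

# Hypothetical location for the local prompt library.
LIBRARY = Path("prompt_library.json")

def save_prompt(name, prompt):
    """Add or update a named prompt in the library file."""
    library = json.loads(LIBRARY.read_text()) if LIBRARY.exists() else {}
    library[name] = prompt
    LIBRARY.write_text(json.dumps(library, indent=2))

def load_prompt(name):
    """Fetch a saved prompt by name."""
    return json.loads(LIBRARY.read_text())[name]

save_prompt(
    "blog-post",
    "Write a 1500-word blog post about [topic] for [audience]. "
    "Include an intro hook, 3 key points, and a conclusion.",
)
print(load_prompt("blog-post"))
```

Keeping the placeholders like `[topic]` in the saved prompt lets you fill them in per task while reusing the proven structure.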
