
Chain-of-Thought Prompting: Getting AI to Reason Step by Step

When AI Gets Answers Wrong

AI models can make surprising mistakes on tasks that seem simple. They sometimes get the wrong answer to a maths problem, miss a logical step in an argument, or draw conclusions that do not follow from the information given. These failures are not random. They tend to happen when the task requires multiple steps of reasoning, and the model tries to jump to the answer without working through those steps.

Chain-of-thought prompting is a technique that addresses this. It works by asking the model to reason through a problem step by step before giving its answer. The results are often dramatically better.


The Core Idea

When you ask a model to reason step by step, it produces a chain of intermediate reasoning before reaching a conclusion. Each step can be checked, corrected, or built upon. Errors that would be invisible in a single-step answer become visible in the chain and are less likely to propagate.

The simplest way to activate chain-of-thought reasoning is to add a phrase like:

  • "Think through this step by step."
  • "Reason through this carefully before giving your final answer."
  • "Show your reasoning before concluding."
  • "Work through this one step at a time."

That is often all you need. The model shifts from trying to produce an instant answer to working through the problem methodically.
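As a minimal sketch, adding a trigger phrase is just string concatenation before the prompt is sent to whatever model you use (the helper name here is illustrative, not part of any library):

```python
COT_TRIGGER = "Think through this step by step."

def with_chain_of_thought(prompt: str, trigger: str = COT_TRIGGER) -> str:
    """Append a chain-of-thought trigger phrase to the end of a prompt."""
    return f"{prompt.rstrip()}\n\n{trigger}"

prompt = ("A product costs $120. We are offering a 25% discount, "
          "then applying a $15 coupon. What is the final price?")
print(with_chain_of_thought(prompt))
```

The same helper works with any of the trigger phrases listed above; pass a different string as the second argument.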


A Concrete Example

Without chain-of-thought:

A product costs $120. We are offering a 25% discount, then applying a $15 coupon.
What is the final price?

The model might answer $78.75 (wrong) by applying the $15 coupon before the 25% discount, or produce some other result from collapsing the two deductions into one step.

With chain-of-thought:

A product costs $120. We are offering a 25% discount, then applying a $15 coupon.
What is the final price? Think through this step by step.

Now the model works through it:

  • Step 1: 25% of $120 is $30. Price after discount: $120 - $30 = $90.
  • Step 2: Apply the $15 coupon. $90 - $15 = $75.
  • Final price: $75.

With the steps laid out, the reasoning is visible and verifiable: each line can be checked against the arithmetic, and a mistake in the order of operations is caught immediately. For more complex problems, chain-of-thought catches errors that the direct approach misses.
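The arithmetic in this example, and why the order of operations matters, can be checked directly. This is just the calculation itself, not a prompting call:

```python
def final_price(price: float, discount_pct: float, coupon: float) -> float:
    """Apply a percentage discount first, then subtract a flat coupon."""
    discounted = price * (1 - discount_pct / 100)
    return discounted - coupon

# Correct order: discount first, then coupon.
assert final_price(120, 25, 15) == 75.0

# Reversed order (coupon first) gives a different, wrong result.
assert (120 - 15) * (1 - 25 / 100) == 78.75
```

The two orderings disagree by $3.75, which is exactly the kind of silent mistake a visible reasoning chain exposes.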


When to Use Chain-of-Thought Prompting

Chain-of-thought prompting helps most in tasks that involve:

Multi-step reasoning. Anything where the correct answer depends on working through several logical or mathematical steps in sequence.

Decision-making and trade-off analysis. When you want the model to weigh multiple factors and reach a considered conclusion rather than a snap judgment.

Evaluating or critiquing. When you want the model to assess something (a plan, an argument, a piece of writing) and you want its reasoning to be transparent and checkable.

Complex instruction following. When a task has several conditions or constraints that all need to be satisfied simultaneously.

It is less useful (and sometimes counterproductive) for simple factual lookups, short creative tasks, or any task where speed matters more than depth of reasoning.


Structured Chain-of-Thought

For complex tasks, you can make the chain-of-thought more structured by specifying the steps yourself.

We are evaluating whether to expand into the German market.
Think through this in the following order:
1. What do we know about demand for our product category in Germany?
2. What are the main regulatory or compliance considerations?
3. What are the likely costs and timeline for entry?
4. What are the biggest risks?
5. Based on the above, what is your recommendation?

Background information:
[paste your market data]

This gives you control over the reasoning structure. You can make sure the model considers all the factors you care about before reaching a conclusion, rather than fixating on one or two and ignoring others.
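A structured prompt like the one above is easy to assemble programmatically when the questions vary. A possible sketch (the function and its signature are illustrative):

```python
def structured_cot_prompt(question: str, steps: list[str], background: str = "") -> str:
    """Build a prompt that fixes the reasoning order via numbered steps."""
    numbered = "\n".join(f"{i}. {step}" for i, step in enumerate(steps, 1))
    prompt = f"{question}\nThink through this in the following order:\n{numbered}"
    if background:
        prompt += f"\n\nBackground information:\n{background}"
    return prompt

print(structured_cot_prompt(
    "We are evaluating whether to expand into the German market.",
    [
        "What do we know about demand for our product category in Germany?",
        "What are the main regulatory or compliance considerations?",
        "What are the likely costs and timeline for entry?",
        "What are the biggest risks?",
        "Based on the above, what is your recommendation?",
    ],
))
```

Keeping the recommendation as the final numbered step ensures the model weighs every factor before concluding.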


Zero-Shot Chain-of-Thought

Research has found that simply adding "Let's think step by step" to a prompt significantly improves performance on reasoning tasks, even without providing any examples. This is called zero-shot chain-of-thought.

For everyday use, this is the most practical technique: just add "Think through this step by step" or "Let us work through this carefully" to any prompt where you want more deliberate reasoning. It takes two seconds and often makes a noticeable difference.


Using Chain-of-Thought for Decisions

Chain-of-thought is particularly valuable when you want to use AI as a thinking partner for a decision. Instead of just asking for an answer, ask it to walk through the decision with you.

I am deciding between two candidates for a senior marketing role.

Candidate A: 8 years experience, strong brand background, no direct experience with
our industry, excellent references.

Candidate B: 5 years experience, weaker references, direct industry experience,
showed strong analytical thinking in the interview.

We need someone who can start contributing quickly but also grow into a long-term
leadership role.

Think through the trade-offs carefully and then give me your recommendation with
your reasoning.

The model will surface considerations you may have overlooked and present the trade-offs clearly before landing on a recommendation. You may agree or disagree with its conclusion, but the process of reading through the reasoning often helps you clarify your own thinking.
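If you make this kind of decision prompt often, the structure can be templated. A hedged sketch, assuming you describe each option as a short paragraph:

```python
def decision_prompt(decision: str, options: dict[str, str], criteria: str) -> str:
    """Assemble a trade-off prompt from a decision, its options, and criteria."""
    lines = [decision, ""]
    for name, description in options.items():
        lines += [f"{name}: {description}", ""]
    lines += [
        criteria,
        "",
        "Think through the trade-offs carefully and then give me your "
        "recommendation with your reasoning.",
    ]
    return "\n".join(lines)

print(decision_prompt(
    "I am deciding between two candidates for a senior marketing role.",
    {
        "Candidate A": "8 years experience, strong brand background, "
                       "no direct industry experience, excellent references.",
        "Candidate B": "5 years experience, weaker references, direct industry "
                       "experience, strong analytical thinking in the interview.",
    },
    "We need someone who can start contributing quickly but also grow into a "
    "long-term leadership role.",
))
```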


Checking the Chain

One of the main benefits of chain-of-thought prompting is that the reasoning is visible. This means you can check it. Read through the steps and ask:

  • Are the individual steps correct?
  • Does the conclusion actually follow from the reasoning?
  • Did the model consider all the relevant factors?
  • Is there a step where it made an assumption you disagree with?

If you spot an error, you can point to the specific step and correct it: "In step 2 you assumed X, but actually Y. Revise from step 2 onwards." This is much more effective than simply saying the answer is wrong.
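That correction pattern is formulaic enough to capture in a tiny helper (illustrative, not from any library):

```python
def correct_step(step: int, assumption: str, correction: str) -> str:
    """Phrase a targeted correction aimed at one step in the reasoning chain."""
    return (f"In step {step} you assumed {assumption}, but actually "
            f"{correction}. Revise from step {step} onwards.")

print(correct_step(
    2,
    "the coupon applies before the discount",
    "the discount applies first",
))
```

Pointing at a specific step keeps the rest of the chain intact, so the model only reworks the reasoning from the flawed assumption forward.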

In the next step, you will explore the best AI tools for writing SEO blog briefs. Browse the options, pick one that fits your workflow, and try it before continuing.
