Chain-of-Thought Prompt

Chain-of-Thought Prompting: eliciting step-by-step reasoning

Wei et al. (2022) introduced chain-of-thought prompting as a technique to encourage large language models (LLMs) to respond with coherent, step-by-step reasoning. The main contribution lies in proposing and exploring chain-of-thought prompting, demonstrating its effectiveness in eliciting more structured and thoughtful responses from LLMs compared to standard prompts.

Principle of chain-of-thought prompting

Introduced by Wei et al. (2022), chain-of-thought (CoT) prompting enables complex reasoning through intermediate reasoning steps. It can be combined with few-shot prompting to get better results on more complex tasks that require reasoning before responding.

"The odd numbers in this group add up to form an even number: 4, 8, 9, 15, 12, 2, 1.
A: Adding all the odd numbers (9, 15, 1) gives 25. The answer is False.
The odd numbers in this group add up to form an even number: 17, 10, 19, 4, 8, 12, 24.
A: Adding all the odd numbers (17, 19) gives 36. The answer is True.
The odd numbers in this group add up to form an even number: 16, 11, 14, 4, 8, 13, 24.
A: Adding all the odd numbers (11, 13) gives 24. The answer is True.
The odd numbers in this group add up to form an even number: 17, 9, 10, 12, 13, 4, 2.
A: Adding all the odd numbers (17, 9, 13) gives 39. The answer is False.
The odd numbers in this group add up to form an even number: 15, 32, 5, 13, 82, 7, 1.
A:"

The prompt generates the following response:

"The sum of all odd numbers (15, 5, 13, 7, 1) equals 41. The answer is False."

Note that a single few-shot demonstration is already sufficient here, as long as it includes the reasoning step.
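The arithmetic behind these demonstrations can be checked programmatically. A minimal sketch (the function name is illustrative, not part of any library):

```python
def odd_sum_answer(numbers):
    """Replicate the reasoning in the demonstrations: sum the odd
    numbers, then check whether that total is even."""
    total = sum(n for n in numbers if n % 2 == 1)
    return total, total % 2 == 0

# The final, unanswered query from the prompt above:
total, is_even = odd_sum_answer([15, 32, 5, 13, 82, 7, 1])
print(total, is_even)  # 41 False — matching the model's response
```

Running this over each demonstration line confirms that every worked example in the prompt is internally consistent, which matters: errors in the demonstrations tend to propagate into the model's own reasoning chains.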

Zero Shot CoT

Zero-shot chain-of-thought (Zero-shot-CoT) prompting (Kojima et al., 2022) is a later iteration of the CoT mechanism, where "zero-shot" means the model performs the reasoning without being given any worked examples of the task in the prompt. Simply appending the phrase "Let's think step by step" to a query leads LLMs to generate a sequential reasoning chain, which in turn proves crucial for obtaining more accurate answers. The technique rests on the idea that the model, much like a human, benefits from working through detailed, logical steps before producing a response.

Automatic CoT

When applying chain-of-thought prompting with demonstrations, the process involves manually crafting effective and diverse examples. This manual effort can lead to suboptimal demonstrations. Zhang et al. (2022) propose eliminating the manual effort by leveraging LLMs with the "Let's think step by step" prompt to generate reasoning chains for the demonstrations one by one. This automated process can still produce errors in the generated chains, and to mitigate their effect, the diversity of the demonstrations matters. Their work proposes Auto-CoT, which samples questions with diversity and generates reasoning chains to construct the demonstrations.

Auto-CoT consists of two main steps:

  1. Question clustering: partition the questions of a given dataset into a few clusters
  2. Demonstration sampling: select a representative question from each cluster and generate its reasoning chain using Zero-Shot-CoT with simple heuristics

A simple heuristic could be limiting the length of the questions (e.g., 60 tokens) and the number of steps in the rationale (e.g., 5 reasoning steps). This encourages the model to use simple and accurate demonstrations.
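The sampling step can be sketched as follows. This is a hypothetical illustration, not the paper's code: it assumes the clusters have already been computed (e.g., k-means over sentence embeddings, with each cluster's questions sorted by distance to the centroid) and that a `generate_cot` helper calls an LLM with the "Let's think step by step" prompt; all names are made up for the sketch.

```python
from typing import Callable, List, Tuple

def select_demonstrations(
    clusters: List[List[str]],
    generate_cot: Callable[[str], str],
    max_question_tokens: int = 60,
    max_steps: int = 5,
) -> List[Tuple[str, str]]:
    """Pick one (question, reasoning chain) demonstration per cluster,
    keeping the first candidate that passes the simplicity heuristics."""
    demos = []
    for questions in clusters:
        for q in questions:
            chain = generate_cot(q)
            n_steps = chain.count("\n") + 1  # assume one step per line
            if len(q.split()) <= max_question_tokens and n_steps <= max_steps:
                demos.append((q, chain))
                break  # one representative per cluster
    return demos
```

Clustering first and then sampling one question per cluster is what gives Auto-CoT its diversity: even if some generated chains contain errors, the demonstrations are unlikely to all fail in the same way.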

Golden chain-of-thought

The golden chain of thought provides an approach to generating responses to instruction-based queries. The methodology leverages a set of ground-truth chain-of-thought solutions embedded in the prompt, significantly simplifying the model's task by eliminating the need for independent CoT generation. Alongside it, a new benchmark of detective puzzles was designed to assess the abductive reasoning capabilities of LLMs, which also serves to evaluate the golden CoT. GPT-4 performs well on it, with a puzzle-solving rate of 83%, compared to 38% for standard CoT.

In this setup, we always add the mystery name, the list of suspects, and the mystery content (body) to the prompt. When we want to invoke chain-of-thought reasoning, we also append the following:

Full answer:
Let's think step by step.

When we want to provide a golden chain of thought instead, we add the following to the prompt:

Solution: {solution}

Finally, we always ask for the final answer with:

Final answer:
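The prompt assembly described above can be sketched as a small template function. The field labels and function name below are illustrative, not the benchmark's exact template:

```python
def build_puzzle_prompt(mystery, suspects, body, solution=None, cot=False):
    """Assemble a detective-puzzle prompt: mystery name, suspects, and
    body are always present; optionally add a CoT trigger or a golden
    (ground-truth) solution; always end by asking for the final answer."""
    parts = [
        f"Mystery: {mystery}",
        "Suspects: " + ", ".join(suspects),
        body,
    ]
    if cot:  # invoke chain-of-thought reasoning
        parts.append("Full answer:\nLet's think step by step.")
    if solution is not None:  # golden chain of thought
        parts.append(f"Solution: {solution}")
    parts.append("Final answer:")
    return "\n\n".join(parts)
```

With `solution` supplied, the model only has to read off the conclusion from the ground-truth reasoning, which is precisely why the golden CoT setting is so much easier than generating the chain independently.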