The modular DECOMP (Decomposed Prompting) methodology is designed to handle complex tasks by breaking them down into simpler, more manageable subtasks. It leverages the capabilities of LLMs through a systematic process in which each subtask is handled by a specialized manager. This approach not only simplifies problem solving but also improves the flexibility and efficiency of task management.
First, a decomposition prompt describes the process of solving a complex task by breaking it down into smaller subtasks. Each of these subtasks is then handled by a specific subtask manager. These managers can answer a subtask directly with a simpler LLM prompt, decompose it further into even simpler subtasks, or delegate it to a symbolic function such as an API call.
This technique has three key advantages: each subtask manager can be optimized independently with examples targeted at its own subtask; a subtask that is still too complex can itself be decomposed recursively; and a manager can be swapped out for a more capable model or a symbolic function without changing the rest of the pipeline.
Let's see this decomposition in action with an example. Suppose we need to concatenate the first letter of each word in a string, using spaces as separators. This can be accomplished by dividing the problem into three subtasks:
Split the string into a list of words.
Extract the first letter of each word.
Concatenate the extracted letters, using spaces as separators.
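The three subtasks above can be sketched as ordinary functions (the function names here are illustrative, not part of DECOMP itself), which makes the decomposition concrete before handing each step to a subtask manager:

```python
def split_words(s: str) -> list[str]:
    # Subtask 1: split the string into a list of words.
    return s.split()

def first_letters(words: list[str]) -> list[str]:
    # Subtask 2: extract the first letter of each word.
    return [w[0] for w in words]

def merge_with_spaces(letters: list[str]) -> str:
    # Subtask 3: concatenate the letters, using spaces as separators.
    return " ".join(letters)

result = merge_with_spaces(first_letters(split_words("Jack Ryan")))
print(result)  # "J R"
```

In DECOMP, each of these steps would be performed by a subtask manager (an LLM prompt or a symbolic function) rather than hand-written code; the point is that the output of one subtask feeds the next.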
First, the decomposer specifies the sequence of questions and the corresponding subtasks:
QC: Concatenate the first letter of every word in "Jack Ryan" using spaces
Q1: [split] What are the words in "Jack Ryan"?
#1: ["Jack", "Ryan"]
Q2: (foreach) [str_pos] What is the first letter of #1?
#2: ["J", "R"]
Q3: [merge] Concatenate #2 with spaces
#3: "J R"
Q4: [EOQ]
The decomposer prompt determines the first subtask to perform: word splitting in this case. The subtask is handled by the split subtask manager, and the generated response is added to the decomposer prompt to obtain the second subtask. The process continues until the decomposer prompt produces [EOQ]. At this point, no tasks remain, and the last response is returned as the solution.
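This control loop can be sketched in a few lines of Python. This is a minimal illustration, not the paper's actual implementation: the decomposer is replaced by a scripted list of (operator, argument) steps, and the handler names and lambdas are hypothetical stand-ins for LLM prompts or symbolic functions:

```python
# Hypothetical subtask managers keyed by operator name. Each takes the
# current argument and the previous subtask's answer.
HANDLERS = {
    "split":   lambda arg, prev: arg.split(),
    "str_pos": lambda arg, prev: [w[0] for w in prev],
    "merge":   lambda arg, prev: " ".join(prev),
}

def run_decomp(steps):
    """Dispatch each (operator, argument) step to its subtask manager.

    In real DECOMP the next step is generated by the decomposer prompt
    after seeing the previous answer; here the steps are pre-scripted.
    """
    result = None
    for op, arg in steps:
        if op == "EOQ":
            # No tasks remain; the last answer is the solution.
            break
        result = HANDLERS[op](arg, result)
    return result

answer = run_decomp([
    ("split", "Jack Ryan"),
    ("str_pos", None),
    ("merge", None),
    ("EOQ", None),
])
print(answer)  # "J R"
```

The loop mirrors the process described above: each generated answer is fed forward until the decomposer emits [EOQ] and the final answer is returned.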
An implementation of DECOMP is available in its dedicated GitHub repository.