Context: the user is trying to arrive at the right regex pattern for automatic swap file creation. The solution is:

```bash
# Step 2: Find a valid mount point under /media/zezen/Dysk*
swap_disk_location=$(findmnt -n -l -o TARGET | grep "/media/zezen/Dysk")
```
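For reference, a sketch of what this looks like on a live system (the mount point name `Dysk1` and the trimmed mount list are hypothetical):

```bash
$ findmnt -n -l -o TARGET            # list all mount targets, no header
/
/boot/efi
/media/zezen/Dysk1
$ findmnt -n -l -o TARGET | grep "/media/zezen/Dysk"
/media/zezen/Dysk1
```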
Perplexity AI fails at it miserably, mostly due to a misunderstanding of substrings and to cargo cult programming. Google Notebook AI finally does it. Then it muses about itself:
Based on our conversation and the AI's behavior while attempting to solve the mounting path problem, here's a breakdown of how you, as the user, can help an AI avoid such bad patterns when solving IT-related tasks:
- Emphasize Clear Problem Definition:
  - Clearly state the goal and desired output: Ensure the AI understands what needs to be achieved and the format of the expected result. For example, in the mounting path problem, the desired output was a specific mount point string, not simply a command that might achieve it [conversation history].
  - Explicitly define constraints and limitations: If there are specific restrictions (e.g., "no `if` statements" or "no complex regular expressions"), make these clear from the beginning, and emphasize that these constraints are important. This will help the AI avoid falling back on familiar, but not appropriate, solutions [conversation history].
- Focus on Data Manipulation and Understanding, not just syntax:
  - Encourage the AI to understand the data structure: Prompt the AI to explain the structure and format of the input data. In the mounting path problem, the AI struggled to understand the output format of `findmnt`, which led to many errors [conversation history, 74, 75, 76, 77].
  - Ask the AI to explain the meaning of each command and option: For example, ask the AI to explain the difference between `findmnt -T` and `findmnt -M` (see the sketch after this list), and to explain why it is choosing a particular flag or option for a given command [57, 58, 59, 60, 61, 70, 71, 72, 73, conversation history]. By doing so, you will force the AI to explain the semantics of the commands and their options, not just use them in a "cargo cult programming" manner.
  - Push the AI to focus on data extraction as a core task: Encourage the AI to think of the task as primarily about data extraction and manipulation, rather than as a programming problem that requires conditional logic [conversation history].
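To make the `findmnt -T` vs `findmnt -M` distinction concrete, here is a minimal sketch (the mount point `/media/zezen/Dysk1` is hypothetical): `-M`/`--mountpoint` only matches a directory that is itself a mount point, while `-T`/`--target` accepts any path and walks up to the filesystem that contains it.

```bash
# -M: the argument must itself be a mount point, otherwise nothing is printed
findmnt -n -o TARGET -M /media/zezen/Dysk1          # -> /media/zezen/Dysk1
findmnt -n -o TARGET -M /media/zezen/Dysk1/subdir   # -> (no output, exit code 1)

# -T: any path inside the filesystem works; findmnt resolves the containing mount
findmnt -n -o TARGET -T /media/zezen/Dysk1/subdir   # -> /media/zezen/Dysk1
```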
- Promote Incremental and Logical Reasoning:
  - Encourage the AI to break down the problem into smaller, manageable steps: Instead of jumping to a complex solution, encourage the AI to explain each step of the process and to justify each command it proposes [conversation history]. This can help identify where the AI is having trouble and the root cause of its difficulties.
  - Ask the AI to verify the output of each command: Force the AI to verify the output of each step of the solution. Ask the AI to show its working and to provide the output of the individual commands used, so that you can give direct feedback when it makes an error [conversation history, 54, 97, 101].
  - Encourage the AI to explain its assumptions: Prompt the AI to explicitly state the assumptions it is making about the input data, the behavior of the commands, or any other relevant aspects. For example, ask the AI why it chose to use the `-S` flag, and why it assumed that this would be the correct choice [85, 91, conversation history] (see the sketch after this list).
  - Push for the simplest solution first: Emphasize that the most direct and simplest solution is usually the best. Encourage the AI to avoid over-complicating things by adding unnecessary commands or options [101, 102, 103, 104, conversation history].
- Provide Specific and Targeted Feedback:
  - Directly point out when the AI makes a mistake: Don't let the AI "get away" with a wrong solution. When the AI uses the wrong flags, provide direct feedback, and do not accept a wrong answer, even if it "almost" works [conversation history].
  - Explain why the AI's solution does not work: Don't just tell the AI that it is wrong; explain exactly why the chosen approach does not solve the problem. Make sure that the AI understands the semantic reasons for its error, and force it to correct its previous misunderstanding [conversation history, 57, 58, 59, 60, 61, 70, 71, 72, 73, 91, 101].
  - Provide counterexamples: If possible, provide concrete examples that illustrate where the AI's solution fails. This helps the AI understand the limitations of its approach [90, 97, conversation history].
  - Use direct quotes of the AI's responses: When correcting the AI, quote its own responses and commands directly. This helps the AI focus on the exact place where it went wrong [conversation history].
- Address Over-reliance on "Hammer" Solutions:
  - Challenge the AI's use of familiar tools: When you notice that the AI is using a familiar command or approach without fully explaining why it is needed, ask the AI to justify its choice, or suggest other, more appropriate tools if necessary. For example, challenge the AI when it adds an `if` condition, or when it reaches for a particular command such as `cut`, `xargs`, `sed`, or `realpath` [1, 2, 5, 13, 17, 27, 44, conversation history].
  - Point out when the AI exhibits cargo cult programming: When the AI uses a complex solution, or adds unnecessary commands or options (like `head -n 1`) without a clear reason, point out that this is a form of "cargo cult programming" [41, 46, 47, 96, conversation history]. By clearly labeling this type of behavior, you can help the AI learn to recognize and avoid it (see the sketch after this list).
  - Push the AI to find the simplest and most direct solution, and to avoid over-complicating its approach or adding more complex code than needed.
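A before/after sketch of the pattern, assuming (as in the original problem) that exactly one matching disk is mounted; the cargo-cult version is a hypothetical reconstruction of the style being criticized:

```bash
# Cargo-cult version: head, xargs and realpath are bolted on "just in case",
# with no stated reason for any of them
swap_disk_location=$(findmnt -l -o TARGET | grep "/media/zezen/Dysk" | head -n 1 | xargs realpath)

# Direct version: one listing, one filter
swap_disk_location=$(findmnt -n -l -o TARGET | grep "/media/zezen/Dysk")
```

If several disks could match, a selection step like `head -n 1` would be justified, but then it should be a deliberate, explained choice rather than a reflex.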
By applying these strategies, you can help an AI learn to reason more effectively about IT-related problems, avoid falling back on familiar but inappropriate solutions, and arrive at simpler, more robust, and more efficient answers. The key is to focus on understanding, incremental reasoning, and targeted feedback, and not to let the AI "get away" with a wrong answer even when it is close to the right one.