Chain of Verification Pattern

Generative AI and Risk

The decision to use Generative AI is fraught. Potential users must weigh ethical concerns as well as possible social, political, economic and environmental impacts. Once the decision to use AI has been made, the user must also be aware of more individual issues such as privacy, intellectual property and safety. These are critical considerations that are informed by, and in turn guide, the individual’s values, needs and norms. Standard implementation practices are essential for all GenAI use and are adequate for more basic needs. For foundational prompt development, complex project generation or any situation where accuracy is a paramount concern, however, a more rigorous strategy such as the Chain of Verification is warranted.

Chain of Verification

Generative AI has access to much of the information created through the 20th century and the first quarter of the 21st century. The sheer amount of information is difficult to comprehend. This is the material the AI sorts through and selects from when generating the content it presents to the user. Because AI systems are trained on all of this information, genuine facts and inaccuracies alike, GenAI can be incredibly useful or it can propagate false and damaging information. The Chain of Verification strategy provides a systematic approach to cross-checking and validating AI-generated content, supporting a high standard of accuracy and reliability. This prompting strategy is taken from Yi Zhou (2023).

The sequence structure for this strategy is as follows:

  • Create an initial prompt
  • The GenAI provides a response to the prompt
  • Analyze the AI response and identify areas that may benefit from verification
  • Develop verification questions pertinent to the content and ambiguities of the generated content
  • These questions can be generated by the human or with guidance from the AI
  • Note: Using the AI alone can introduce bias into this process.
  • It is essential to be alert to any inadvertent errors introduced during the verification process; users are encouraged to trust but verify the AI output
  • The human is accountable
  • Pose the generated verification questions to the AI model
  • The AI answers these verification questions independently
  • The answers are meant as a fact check
  • The AI compares the answers from the verification questions against its initial response
  • Based on this comparison, the AI model either confirms its response or generates a revised final response

Users may choose to break down the verification planning further by initiating an additional feedback loop, or multiple feedback loops.
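
To make the sequence concrete, here is a minimal sketch of the loop in Python. The generate() helper is a hypothetical stand-in for whichever GenAI model or API is actually being used, and the prompt wording is illustrative only, not prescriptive.

def generate(prompt: str) -> str:
    """Hypothetical helper: send one prompt to your chosen GenAI model."""
    raise NotImplementedError("Wrap your preferred model or API here.")

def chain_of_verification(initial_prompt: str) -> str:
    # Steps 1-2: initial prompt and baseline response.
    baseline = generate(initial_prompt)

    # Step 3: ask for standalone verification questions targeting the
    # claims in the baseline. The human should review and curate this list.
    raw = generate(
        "Turn the factual claims in the following response into short, "
        "standalone verification questions, one per line:\n\n" + baseline
    )
    questions = [line.strip() for line in raw.splitlines() if line.strip()]

    # Step 4: pose each question on its own, without the baseline attached,
    # so the answers act as an independent fact check.
    answers = [generate(q) for q in questions]

    # Steps 5-6: compare the verification answers with the baseline and
    # either confirm it or produce a revised final response.
    qa_pairs = "\n".join(f"Q: {q}\nA: {a}" for q, a in zip(questions, answers))
    return generate(
        "Original response:\n" + baseline + "\n\n"
        "Verification questions and answers:\n" + qa_pairs + "\n\n"
        "Using the verification answers, either confirm the original response "
        "or rewrite it to correct any inaccuracies."
    )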

Implementation

1. Initial prompt

The user creates an effective prompt asking the AI to present information or complete a task.
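
For example, an initial prompt might look like the following; the placeholder and wording are purely illustrative.

# Illustrative initial prompt; replace the bracketed placeholder with the
# actual topic or task.
initial_prompt = (
    "Provide a 300-word overview of [your topic], including key dates, "
    "figures and the sources you are drawing on."
)
print(initial_prompt)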

2. Review the initial response

The AI provides a response to the user’s prompt. The user reviews the response and decides which points should be assessed for validity and accuracy. The user may choose to ask the AI to do this analysis as well.
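
If the user asks the AI to help with this review, a prompt along the following lines can be used; the wording is an illustrative template, not a required form.

# Illustrative template for asking the model to flag points worth verifying.
review_prompt_template = (
    "Here is a response you generated earlier:\n\n{response}\n\n"
    "List the specific factual claims, figures, names and dates in this "
    "response that would most benefit from independent verification."
)

# Example use, where response_text is the AI's initial response.
response_text = "The AI's initial response goes here."
print(review_prompt_template.format(response=response_text))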

3. Formulate verification questions

Using all of the available assessments, the user curates the best set of questions to input to the AI to verify the previously generated material.
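
Where the AI is asked to help draft the questions, a template such as the one below can be used, with the user still curating the final list; the wording is a hypothetical example.

# Illustrative template for turning flagged claims into standalone
# verification questions that can be answered without the original response.
question_generation_prompt = (
    "For each of the claims listed below, write one standalone verification "
    "question that could be answered by someone who has not seen the "
    "original response:\n\n{flagged_claims}"
)
print(question_generation_prompt.format(flagged_claims="- Claim 1\n- Claim 2"))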

4. Input verification questions

The user inputs the verification questions to the AI so that it can answer each one independently.
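
One way to keep the answers independent is to send each question as its own prompt, without the original response attached. A minimal sketch, again assuming a hypothetical generate() helper:

# Pose each curated question in isolation so the answers serve as an
# independent fact check rather than a restatement of the original response.
def generate(prompt: str) -> str:
    """Hypothetical helper: send one prompt to your chosen GenAI model."""
    raise NotImplementedError("Wrap your preferred model or API here.")

def collect_verification_answers(questions: list[str]) -> dict[str, str]:
    return {question: generate(question) for question in questions}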

5. Analyze verification responses

The user reviews the verification responses, looking for evidence that the original response is accurate and for any areas that may require further verification.
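
The cross-check itself can also be delegated to the AI, provided the user reviews the result; the template below is one illustrative way to phrase that comparison step.

# Illustrative template for comparing verification answers against the
# original response and either confirming or revising it.
comparison_prompt_template = (
    "Original response:\n{original}\n\n"
    "Verification questions and answers:\n{qa_pairs}\n\n"
    "Compare the verification answers with the original response. If they "
    "agree, confirm the response; if they do not, produce a revised final "
    "response and note what was changed."
)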

6. Refinement

The user accepts the original response or refines the verification questions and inputs them into the AI. The user determines how many iterations are appropriate.
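
Because the number of iterations is a human judgment call, it can help to cap the loop explicitly. A rough sketch, with hypothetical placeholders for the verification round and the accept/refine decision:

# Rough sketch of the refinement loop with an explicit cap on iterations.
def verify_once(response: str) -> str:
    """Hypothetical placeholder: run one verification round, return the revision."""
    raise NotImplementedError

def is_satisfactory(response: str) -> bool:
    """Hypothetical placeholder: the human decides whether to accept."""
    raise NotImplementedError

def refine(response: str, max_rounds: int = 3) -> str:
    for _ in range(max_rounds):
        if is_satisfactory(response):
            break
        response = verify_once(response)
    return response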

Pattern in Action Example #1