Prompt Engineering – Risk Overview

Generative AI and Risk

The decision to use Generative AI is fraught.  Potential users must weigh ethical questions alongside potential social, political, economic, and environmental impacts. Once the decision to use AI has been made, the user must also be aware of more individual issues such as privacy, intellectual property, and safety.  These are critical considerations, informed by and guiding the individual's values, needs, and norms.  This resource will not address these concerns.

More concrete concerns focus on the products generated by the AI.  There are three significant and broad concerns with these products:

  1. Incorrect information, including hallucinated facts and fabricated sources
  2. Bias in the generated material
  3. Purposefully filtered or limited responses

These concerns are discussed below, along with some standard practices that can help to alleviate them.

Incorrect Information

The information Generative AI provides in response to your prompt may or may not be accurate.  Depending on your prompt, the response may not be complex or nuanced enough to address the full scope of what you asked, or the material it generates may be factually incorrect.

Hallucination

GenAI can provide an informative response to a prompt, along with sources for where it obtained the information. In some cases, both the material and the sources the AI provides are completely accurate; in others, the AI has fabricated the information as well as the sources cited. This is often referred to as a hallucination.

Bias

The current models that power Generative AI are trained on much of the material that exists online.  Most of the information online is Western in origin and comes from a Caucasian and androcentric perspective.  As a result, much of the material generated by AI is biased toward a Western, Caucasian, and androcentric perspective. At best, this leaves people who are not Western, Caucasian, or male without representation in material generated by AI; at worst, it excludes them entirely from programs and services that use Generative AI.

The inaccuracies described above are incidental to the programming, training, and data sources of Generative AI.  That is, Generative AIs have not been deliberately programmed to return results in this way; these inaccuracies exist as a by-product of the confluence of factors brought together to enable this technology.  As Generative AI improves, it is these areas that will see the greatest improvement.

Some basic approaches to working around these inaccuracies include:

The User Is Accountable

Generative AI may create the content, but in all cases it is the user who makes the decision to use the created information or not. It is the responsibility of the user to verify the information before deciding to use it. If there are errors in the content that the user selects, then the responsibility for distributing inaccurate content lies with the user.

Verify The Information

Check that the information the AI provides is accurate. If it provides sources, make sure that the sources exist.  In many cases the AI will provide links to the websites it obtained its information from. Follow each link and make sure a) the information is located on the linked page, and b) the information and source author reflect the material you are presenting.
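
Part of this check can be scripted. The sketch below is a minimal example, assuming Python with the third-party requests library; the URL and phrase in the citations list are hypothetical placeholders for whatever the AI actually cited. It only confirms that a link resolves and that a quoted phrase appears on the page; it does not replace reading the source yourself.

  # Minimal sketch: spot-check that AI-cited sources exist and contain the claimed text.
  # Requires the third-party "requests" library; the entries below are hypothetical examples.
  import requests

  citations = [
      # (URL provided by the AI, a short phrase it attributed to that source)
      ("https://example.com/article", "a phrase the AI claims appears on this page"),
  ]

  for url, phrase in citations:
      try:
          response = requests.get(url, timeout=10)
      except requests.RequestException as error:
          print(f"UNREACHABLE  {url}  ({error})")
          continue

      if response.status_code != 200:
          print(f"MISSING      {url}  (HTTP {response.status_code})")
      elif phrase.lower() not in response.text.lower():
          print(f"CHECK        {url}  (phrase not found on page; verify manually)")
      else:
          print(f"OK           {url}  (link and phrase found; still read the page)")

Even when every link checks out, confirm that the page actually supports the claim being made and that the author is a credible source for your context.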

Filtered Responses

Generative AI will generate a response to any prompt it receives.  Before any generated response is delivered to the user, it is assessed for tone and content.  Every AI has a filter of some description to verify that the information generated is appropriate. Who determines and programs what counts as appropriate shapes the nature and scope of the material the user receives.

Unlike the preceding inaccuracies, these response limitations are purposefully programmed into the systems. They are unlikely to change with improvements to the AI, but they will change to keep pace with what the corporate culture of the Generative AI's vendor feels is appropriate.

Ask the AI What Its Limitations Are.  The AI is aware of its limitations and can articulate them.  I asked Copilot what parameters it uses when it considers what information to present to the user.  This was Copilot's response (Jan 20, 2025):

Yes, there are guidelines and filters that shape how I provide information. These guidelines ensure that the information I share is accurate, safe, and respectful. Here are a few key points:

  1. Accuracy & Relevance: I strive to provide correct and up-to-date information. If I don’t have enough information or if I’m unsure, I’ll let you know.
  2. Harm Mitigation: I avoid sharing content that could cause physical, emotional, or financial harm. This includes avoiding sensitive or controversial topics in a way that could be harmful.
  3. Copyright Compliance: I respect copyright laws and regulations. I can summarize copyrighted content briefly but won’t provide full verbatim excerpts or links to pirated content.
  4. Transparency & Humility: If a request exceeds my capabilities or doesn’t align with these guidelines, I’ll inform you politely.

These filters help ensure that our interactions are safe, respectful, and productive. If you have any specific questions or need information, feel free to ask!

It is important to note that these filters will change without notice.  When beginning an important project using GenAI, it is good to have these limitations identified at the outset.

Clarify Your Need.  Challenging and difficult topics are often discussed in class specifically to explore why they are challenging and difficult.  It can be very difficult to have AI address or present these challenging or difficult materials, and the extent of this can vary significantly from AI product to AI product.

Outlining your role as an educator and providing some context around the topic's value can lead to the AI presenting useful information.  For example, a prompt that opens by stating you are an instructor preparing a class discussion, and that explains why the material is being examined, is more likely to receive a substantive answer than the bare topic alone.  If students are intended to work with AI on sensitive topics, the AI's ability to present information on that topic must be verified before the activity can be assigned.

The preceding sections represent very basic approaches for managing risk around the use of Generative AI. These approaches should be understood and implemented by everyone who chooses to use this technology.