Generative AI Guidelines

Used properly, generative artificial intelligence – including ChatGPT, Bard, DALL-E and a host of others – is a valuable tool that creates efficiencies and speeds workflows. Many marketing and communications materials can benefit from the proper use of generative AI. However, generative AI should be used to enhance, not replace, human-generated content. Generative AI is a tool – it is not a substitute for your editorial judgment.

These guidelines are designed to help UF’s communications and marketing professionals engage effectively with this rapidly emerging and evolving technology and understand its benefits and limitations as they relate to their work at UF.

As generative AI continues to evolve, these guidelines will be updated to reflect its new capabilities and address any new issues that may arise from them.

ETHICS AND INTEGRITY

LEARNING TO USE GENERATIVE AI

APPROPRIATE USES

INAPPROPRIATE USES

SENSITIVE INFORMATION

ETHICS AND INTEGRITY

Ethical considerations should be at the forefront of every interaction with generative AI. Because output can so easily be put to actual use, it is critical to guard against AI-generated content making its way whole cloth into published materials. Risks of using generative AI include plagiarism, improper image manipulation, copyright issues, misinformation, and deep fakes.

The credibility of the university and its communications and marketing operations rests on each individual’s commitment to the responsible use of generative AI. Transparency, honesty, and integrity are paramount. Therefore, the following points should be taken into consideration:

  • Content created solely using generative AI must never be presented as original human-created work.
  • Any content optimized or enhanced using generative AI:
    • Must be vetted and approved by appropriate subject matter experts, managers, editors, supervisors or other leadership before it reaches its final audience. Special attention should be given to guarding against false information, misrepresentation, and plagiarism.
    • Must be credited under the work as follows:
      • For copy: “Portions of this story were created or edited using generative AI.”
      • For imagery: “This image was created or edited by image-to-image or text-to-image generative AI.”
  • Generative AI is only as sound as the data it draws from – remember the maxim “garbage in, garbage out.” Be mindful that generative AI can amplify existing biases, especially in the area of image creation or manipulation. Watch for them in output and correct for accuracy as needed. Additional information may be found here.
  • When using generative AI for image creation or photo manipulation, pay special attention to copyright laws and usage rights, intellectual property, consent and privacy, and images that can create false narratives or damage to an individual’s reputation. For photos to be shared with the news media, consult the Associated Press Code of Ethics for Photojournalists and the National Press Photographers Association Code of Ethics, both of which can be found here. Also consult the PRSA Code of Ethics, which can be found here.

Additionally, although AI fact checkers are available, none are considered effective enough to be recommended here. Should that change, it will be noted here.


LEARNING TO USE GENERATIVE AI

The more a user teaches generative AI, the better the output will be. That starts with the prompt the user provides. The following general tips will improve the user experience:

  • Teach generative AI your style, acronyms, etc.; for instance, always capitalize “Go Gators!”
  • Train generative AI to learn your voice and brand
  • Be clear and concise; remember that short, vague prompts tend to return vague output
  • Remember that prompts can be iterative and revised to hone the output
  • Provide context
  • Use examples
  • Use language consistently
  • Experiment

For expanded guidance, such as how to create more detailed prompts, the following links may be helpful:

  • The art of the prompt: How to get the best out of generative AI (Microsoft)
  • Want Better Answers From Generative AI? Write Better Prompts (Salesforce)
  • How to Write AI Art Prompts [Examples + Templates] (Hootsuite)

APPROPRIATE USES

Generative AI is most useful as a starting point for creating communications and marketing content. It can perform well in the context of sparking and inspiring new ideas, seeing projects from a fresh perspective and even overcoming writer’s block. Examples of the best uses for generative AI include:

  • Brainstorming
  • Copy editing
  • Outline development
  • Planning design layouts
  • Image generation
  • Frameworks
  • Storyboarding
  • Summarizing complex ideas
  • Data analysis
  • Photo descriptions or alternative text
  • Highly structured routine content (e.g., drafting Q&A documents)

INAPPROPRIATE USES

Because generative AI can provide false information, known in the industry as “hallucinations,” it should not be used for content that is personal in nature. Using generative AI can also undermine the perceived sincerity of a statement, note, or direct message. Examples of content to avoid creating with generative AI include:

  • Feature stories, human interest pieces
  • Messages or statements in which the author wants to communicate sincerity
  • Eulogies
  • Biographies or CVs
  • Memorials
  • Tributes

In addition, using generative AI to create social media posts should be approached with extreme caution or avoided altogether due to the risk of inadvertently posting false information and imagery with potentially negative – and instant – consequences.

SENSITIVE INFORMATION

Only information intended for public consumption should be used with generative AI. Publicly available generative AI applications commonly employ user interactions as training data, which means that any data you provide as part of your prompts may show up in subsequent responses to other users.

Communicators should also avoid using generative AI when creating any written material about an embargoed journal article, because doing so could make that information public and potentially violate the embargo.

Examples of sensitive information include, but are not limited to, the following:

  • Anything regulated by state or federal law, such as information protected under HIPAA or FERPA
  • Social Security numbers
  • Dates of birth
  • Confidential donor information
  • Proprietary information (trade secrets)
  • Intellectual property

The privacy policy of OpenAI, the company that developed ChatGPT, can be found here.