Responsible use of Generative AI for Non-academic Purposes at CCA
New Generative AI tools and platforms are rolling out faster than any previous technology. While the capabilities of these tools and platforms are compelling, they raise significant and specific concerns around security, privacy, accuracy, and bias.
We have provided the following guidelines for the non-academic use of Generative AI at CCA to ensure it is used in a way that protects everyone's personal and institutional privacy and security.
For guidelines and policies for using Generative AI for academic purposes, please reach out to your academic programs.
Data privacy and security
While Generative AI may be new, it is just one of many technologies in use at CCA. And, as with all technologies, the use of AI is subject to CCA’s Acceptable Use Policy (AUP) and Information Security Policy.
However, the use of Generative AI brings with it specific security and privacy concerns. Most Generative AI tools learn from the information shared with them, incorporating that content into their models. This may lead to the information being reused and exposed in unpredictable ways.
Unless specific exceptions are made, you must not provide sensitive or confidential information when using AI platforms or tools.
This applies to all AI platforms, applications, or functions, even those provided to you by CCA such as Google Workspace, Zoom, and the Adobe Suite.
Accuracy
Even the best AI tools can generate inaccurate output. It is your responsibility to review, understand, and validate that output, particularly before sharing it with others or reusing it in other works.
Bias
AI tools reflect the biases inherent in the models used to train them. Be particularly aware of overly simplified or biased AI-generated output that may omit important data or reinforce harmful stereotypes.
Clearly identify AI-generated output
When appropriate, clearly identify AI-generated output and cite the source, allowing others to trust and validate the output.
Generative AI and Images
The most controversial use of AI is the generation of still and moving images, music, and other creative output.
Generative AI platforms have a troubling track record of copying and reusing the creative work of others. And while some Generative AI tools are working to combat this by training their models on inputs specifically created for that purpose, it is too soon to say how effective this will be at protecting the creative output of individual artists.
At this time, our guidance is to not use generative art in any materials that will be seen by the general public.
Additionally, any generative art you do create for internal use should always be clearly labeled as such, with the source cited.
We encourage you to use the creative output of actual human artists, particularly the members of our community (with permission, of course!).
Zoom's AI Companion and the use of AI to generate meeting transcripts and summaries
To ensure the security and privacy of all participants, the only AI tool allowed in CCA meetings is Zoom's AI Companion, which is currently available to all staff.
Zoom's implementation of Generative AI provides a high level of security and privacy that is consistent with our AUP.
All meeting participants will see clear messaging that AI Companion is enabled and have the opportunity to request that the meeting owners disable the feature.
Even though we allow the use of AI Companion, please note that if the summaries, prompts, and responses generated from a meeting contain sensitive or confidential information, they are covered under CCA’s AUP and must be stored and shared in ways appropriate for that content.
Google Gemini
When you need a general purpose Generative AI tool, we strongly recommend you use Google Gemini. Using Google Gemini while logged into your CCA Google Workspace account ensures that your prompts and generated content are not used to train Google’s AI platform.
Additionally, Google provides tools to help you evaluate the accuracy of Gemini's responses. Look for the "Sources and related content" section and the "Double Check" button following your results.
As Google says, “Gemini may display inaccurate info, including about people, so double-check its responses.”