Overview
Staff are encouraged to explore AI tools, but it is essential to stay informed and mindful when using this rapidly evolving technology. The following guidelines help protect University data and ensure adherence to contractual and ethical responsibilities.
Principles for AI
According to the AI Working Groups Charter, established by the Interim Chief AI Officer for the University of Oklahoma, the AI Working Group for Governance and Policy (“AI Governance Working Group”) is tasked to “ensure responsible and ethical AI use through the development of clear guidelines, policies, and oversight mechanisms.” Building on the foundations of the National Institute of Standards and Technology’s (“NIST”) Artificial Intelligence Risk Management Framework for AI Risks and Trustworthiness (AI RMF 1.0, Section 3 [NIST AI 100-1], fig. 1), the AI Governance Working Group has established the following set of principles to guide the development of AI guidelines and policies for implementation at the University of Oklahoma.
These principles are written to guide the work of the AI Governance Working Group with the goal of balancing innovation and enablement with ethical and secure deployment to “enhance innovation across education, research, and operation[s]” at the University of Oklahoma. The principles are set forth in a two-part structure: each principle below pairs an aspirational component that is idealistic with an operational component that is pragmatic.
Lastly, these principles are written with the understanding that they are subject to change given the rapid advancement of AI and of the standards developing around it. This includes monitoring the newly introduced NIST AI Standards “Zero Drafts” Pilot Project as those standards continue to take shape.

Visit Ethical and Trustworthy AI Usage Principles to learn more.
Foundational Information
For the purposes of these principles, the following foundational information regarding AI systems should apply:
Definition of Artificial Intelligence: The NIST AI RMF 1.0 (adapted from OECD Recommendation on AI:2019; ISO/IEC 22989:2022) refers to an AI system as “an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.”
Deployment of AI Systems: The university recognizes that AI systems are deployed for a variety of use cases, with varying levels of complexity, and can be provided internally or through third parties via direct web-based use, APIs, or other means. As used in these principles and elsewhere, the following categories of AI systems apply:
- Experimental Deployment: AI systems developed and used for prototyping, research exploration, or early-phase trials within academic environments.
- Custom In-House Deployment: AI systems developed and used entirely within the university using proprietary or institution-owned data and infrastructure. These systems offer full transparency and accountability, requiring rigorous adherence to internal controls across the AI lifecycle—design, training, tuning, testing, and deployment.
- Foundational Model Deployment: AI systems reliant on large, pre-trained foundational models (e.g., GPT-4, Gemini, Deepseek R1, Microsoft 365 Copilot, etc.) that are locally deployed and/or customized using institutional data (such as fine-tuning, retrieval-augmented generation, or other similar technologies).
- Integrated Deployment: AI systems embedded within enterprise software (Banner, Microsoft 365, Adobe Firefly, Zoom AI, Gradescope, etc.) or provided via third-party APIs or platforms.
- Public-Facing Deployment: AI systems that interact directly with university users or the public (e.g., ChatGPT, Grammarly, Microsoft Copilot Chat, chatbots, recommendation engines, educational tools, etc.).
- Embedded Deployment: AI systems embedded in devices or infrastructure systems (e.g., smart campus sensors, autonomous lab instruments).
OU’s GenAI Offerings
As the use of generative AI and machine learning tools continues to grow, it’s important to choose tools that align with both the user’s needs and the user’s data responsibilities. The University of Oklahoma offers access to AI tools via the AI Resource Center. Some of these tools are provided under institutional licenses and may include enhanced data privacy protections or security features. However, not all tools share the same terms of use, cost structure, or data handling practices. When using AI tools, staff should review the specific terms associated with each platform, especially how their data is handled. While some tools are configured to prevent user inputs from being used for model training, no system can eliminate all risk of data exposure. Staff in professional colleges (i.e., law and health sciences), or those who work with special populations or protected data (e.g., Native American or education data), may have additional requirements that mandate more careful or limited use of these systems.
AI Potential and Operations
AI tools can support staff across a wide range of roles and career stages by assisting with tasks such as data analysis, coding, writing, visualization, and literature review. Recent advancements—such as GPT-4—have shown that these tools can help clean datasets, propose analytical approaches, perform basic statistical tests, and even generate content for academic or administrative purposes.
While these tools can significantly improve efficiency and productivity, they also come with important limitations, including the potential for hallucinations (confident but incorrect outputs), biases, and fabricated information. It is essential to critically evaluate AI-generated outputs and avoid overreliance on these tools without verification.
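As one illustration of this verification step, the sketch below (Python, with hypothetical data; the SciPy library is assumed to be available) independently re-runs a statistical test that an AI chatbot claims to have performed, so the reported numbers can be checked before they are cited:

    # Minimal sketch: independently re-check a statistic that an AI tool
    # reported, rather than trusting the reported value. Data are hypothetical.
    from scipy import stats

    group_a = [72, 85, 78, 90, 66, 81]   # hypothetical scores
    group_b = [68, 74, 70, 83, 65, 72]

    # Re-run the test the AI tool claimed to have performed.
    t_stat, p_value = stats.ttest_ind(group_a, group_b)
    print(f"t = {t_stat:.3f}, p = {p_value:.3f}")

    # Compare these against the AI-reported values before using them in a
    # report; a mismatch signals a hallucinated or mis-computed result.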
As AI capabilities continue to grow, so does the importance of using them responsibly and effectively. Selecting the right tool for the task, understanding its limitations, and ensuring transparency and integrity are key to maximizing the value of AI while upholding academic and professional standards.
Principles Related to Using AI in Operations
1. Accountability and Responsible Use
Staff should use AI systems only for tasks where those systems perform well and hallucinate infrequently. Staff should ensure they have a sound understanding of the capabilities and limitations of AI tools within the context of a specific tool implementation. As AI models are predictive systems and lack true understanding, staff should verify all outputs for accuracy and attribution and should be able to attest that this has been done in all cases, detailing the methods used to do so.
Staff are accountable for any AI outputs, outcomes, and effects throughout the use of AI. To ensure accountability, staff should identify the person(s) responsible for validating or approving AI outputs and should ensure meaningful human oversight over AI-driven processes. When using AI tools to generate code, staff should ensure that AI-generated code carries out operations as expected.
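One lightweight way to meet this expectation, sketched below in Python (the normalize_netid function is a hypothetical stand-in for any AI-generated snippet), is to run a few human-written checks over known inputs and edge cases before the code is put to use:

    # Sketch: validate an AI-generated helper with known cases before relying
    # on it. `normalize_netid` is a hypothetical AI-generated function pasted
    # in for review.

    def normalize_netid(raw: str) -> str:
        """AI-generated example: trim whitespace and lowercase a NetID."""
        return raw.strip().lower()

    # Human-written checks: expected behavior plus an edge case.
    assert normalize_netid("  JDoe42 ") == "jdoe42"
    assert normalize_netid("ABC123") == "abc123"
    assert normalize_netid("") == ""          # edge case: empty input
    print("All checks passed; code behaves as expected on these cases.")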
When using AI tools to analyze or otherwise process research data provided by community partners, external entities, and other populations, staff should obtain consent from those partners before uploading such data to AI tools. Consent must be informed, specific to intended use, and documented in writing. Staff should communicate transparently about how the data partners contributed is stored, shared, and made accessible to other parties by any AI tool used in the research process.
When uploading copyrighted data to AI tools, staff must ensure that all such uses fall within the fair use doctrine. The Office of Legal Counsel can provide guidance regarding permissible use of copyrighted materials.
2. Documentation
Staff who use AI to support administrative or operational tasks should ensure their use is transparent, well-documented, and appropriate to the context. When AI contributes to materials shared externally—such as grant documentation, reports, or presentations—it is important to clearly explain how the tool was used and how its outputs were reviewed or integrated into the final work.
This includes identifying the AI tool and version, describing its purpose and how it supported the task, and clarifying the extent of human oversight. Staff should also be prepared to explain how the AI generated its output, the type of data or prompts used, and any known limitations or risks. These practices help ensure that AI is used responsibly, align with institutional standards, and meet the expectations of external sponsors, partners, and professional organizations, many of which are developing their own AI-related policies and requirements.
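As an illustration only (the wording below is hypothetical, not a required template), a disclosure accompanying a report might read: “Portions of this report were drafted with Microsoft 365 Copilot (accessed May 2025) to summarize survey free-text responses; all summaries were checked against the original responses and edited by the project lead before inclusion.”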
3. Account for and Limit Bias in Administrative Use of AI
When using AI tools to support administrative or operational work, it’s important to be aware that these systems can reflect or amplify certain types of bias—such as stereotypes, assumptions, or patterns that may not be accurate or inclusive. This can impact the fairness, quality, or appropriateness of communications, decisions, or data interpretations generated with AI assistance.
Staff should use AI tools thoughtfully by reviewing outputs for accuracy, fairness, and inclusivity, especially when materials are shared with diverse audiences or used to inform policy or service decisions. Human oversight remains essential, and staff are encouraged to question and correct AI-generated content when needed to ensure that institutional values and expectations are upheld (“Bias in AI”, 2025).
4. Privacy Protection
Under no circumstances should staff input personal, private, or HIPAA-, FERPA-, Common Rule-, or Export Control-protected information into an unsecured GenAI system. Personal data includes any information that may relate to an identified or identifiable person. Some data carry additional restrictions and protocols (e.g., Indigenous data) and should not be shared with these models without additional consent. Staff should follow University of Oklahoma policies.
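As a purely illustrative sketch (it is not a substitute for these requirements, and the regex patterns below catch far less than HIPAA, FERPA, or Export Control rules cover), a simple pre-submission check in Python might flag obvious identifiers before any text is pasted into an unsecured GenAI tool:

    # Illustrative only, NOT a substitute for policy: flag obvious
    # identifiers in text before it is pasted into an unsecured GenAI tool.
    # These example patterns cover only a small fraction of protected data.
    import re

    PATTERNS = {
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    }

    def flag_identifiers(text: str) -> list[str]:
        """Return the kinds of identifiers found, so the text can be withheld."""
        return [name for name, pat in PATTERNS.items() if pat.search(text)]

    # Hypothetical draft text with a fabricated address and number.
    draft = "Contact jdoe@ou.edu about student 123-45-6789 before Friday."
    found = flag_identifiers(draft)
    if found:
        print(f"Do not submit: possible identifiers detected ({', '.join(found)}).")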
Staff must exercise caution when using AI-powered tools or third-party bots that connect to virtual meetings—such as Zoom or Microsoft Teams—for the purposes of transcription, recording, or automatic summarization. These tools may join meetings without an explicit invite, potentially violating university policies, participant consent expectations, and privacy regulations.
All meeting organizers and participants are expected to review and comply with the University's guidance on managing third-party apps and bots, as outlined in OU IT’s Knowledgebase Article on Uninvited Meeting Bots. Before enabling such tools, ensure that all attendees are aware of and have consented to the use of AI for recording or summarizing meeting content. Unauthorized use of such tools in meetings involving sensitive information, student data, or instructional activity may result in violations of institutional privacy or data protection policies.
5. Institutional Oversight
To use an AI system not provided by OU, complete the IT Security Assessment (which may include a privacy assessment depending on the data types involved). Do not share sensitive, confidential, or regulated data with any commercial or open-source AI solution or model unless an IT security assessment has been completed and a university contract that includes appropriate data protection language (such as FERPA contract language) has been executed between OU and the provider (e.g., a Purchase or Software Agreement and, if applicable, a Business Associate Agreement (BAA)). Use of the system prior to full approval may expose the University to legal and data security risks.
6. Education and Training
Staff are encouraged to participate in AI ethics and best-practice training to ensure responsible use of AI tools. The University will provide institutional resources including information sessions, resource guides, and consultative services to support staff in learning about AI capabilities, risks, and ethical considerations.
Helpful AI Learning Resources
The AI space is rapidly growing and expanding, with new tools and capabilities being revealed every few days. Hence, understanding what to look for in these tools is key. When considering tools outside of what the University offers, be aware of the following when choosing a tool for your needs:
- AI may be Biased: The datasets used to train AI models contain biases that can impact output. Be aware of this and exercise critical judgment when interpreting results. For example, according to some articles, when AI was asked to create images of people in specialized professions, it showed both younger and older people, but the older people were always men, reinforcing gendered stereotypes about the role of women in the workplace. Learn more about bias in AI through reputable sources.
Questions to Ask Yourself
When using AI, consider the following reflective questions:
As AI continues to evolve, it presents new opportunities and challenges in the workplace. By staying informed, adhering to best practices, and making ethical decisions, OU staff can effectively integrate AI tools while upholding university values. Responsible AI use ensures that we maximize its benefits while safeguarding privacy, security, and professional integrity.
Acknowledgment
This document was drafted with the assistance of ChatGPT, an AI language model developed by OpenAI, based on guidance from the NIST Artificial Intelligence Risk Management Framework (2023), Gartner’s EU AI Act readiness series (2023), and internal governance requirements. Human oversight, subject matter expertise, and institutional review were applied to ensure accuracy, relevance, and alignment with organizational goals.
References