Overview
Artificial Intelligence (AI) is reshaping the landscape of higher education, offering new opportunities to enhance teaching and support student learning across all academic disciplines—from the sciences and engineering to the social sciences, humanities, creative arts, and the health sciences. As a transformative tool, AI can enrich how faculty design, deliver, and personalize instruction, while also introducing students to emerging ways of thinking and problem-solving.
At the same time, AI is contributing to the evolution of academic fields, prompting the growth of new areas of study and shifting the focus of others. Faculty play a key role in helping students navigate these changes by integrating AI thoughtfully into teaching practices and preparing learners for a future shaped by rapid technological advancement.
Principles for AI
According to the AI Working Groups Charter, established by the Interim Chief AI Officer for the University of Oklahoma, the AI Working Group for Governance and Policy (“AI Governance Working Group”) is tasked to “ensure responsible and ethical AI use through the development of clear guidelines, policies, and oversight mechanisms.” Building on the National Institute of Standards and Technology’s (“NIST”) Artificial Intelligence Risk Management Framework (AI RMF 1.0), specifically its treatment of AI risks and trustworthiness (NIST AI 100-1, Section 3, fig. 1), the AI Governance Working Group has established the following principles to guide the development of AI guidelines and policies for implementation at the University of Oklahoma.
These principles are written to guide the work of the AI Governance Working Group, with the goal of balancing innovation and enablement against ethical and secure deployment to “enhance innovation across education, research, and operation[s]” at the University of Oklahoma. Each principle below is presented in two parts: an aspirational component that is idealistic and an operational component that is pragmatic.
Lastly, these principles are subject to change given the broad and rapidly advancing nature of AI and the standards being developed around it. This includes monitoring NIST’s newly introduced AI Standards “Zero Drafts” Pilot Project as those standards continue to develop.

Visit Ethical and Trustworthy AI Usage Principles to learn more.
Foundational Information
For the purposes of these principles, the following foundational information regarding AI systems should apply:
Definition of Artificial Intelligence: The NIST AI RMF 1.0 (adapted from OECD Recommendation on AI:2019; ISO/IEC 22989:2022) refers to an AI system as “an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.”
Deployment of AI Systems: The university recognizes that AI systems are deployed for a variety of use cases, with varying levels of complexity, and can be provided internally or through third parties via direct web-based use, APIs, or other means. As used in these principles and elsewhere, the following categories of AI systems apply:
- Experimental Deployment: AI systems developed and used for prototyping, research exploration, or early-phase trials within academic environments.
- Custom In-House Deployment: AI systems developed and used entirely within the university using proprietary or institution-owned data and infrastructure. These systems offer full transparency and accountability, requiring rigorous adherence to internal controls across the AI lifecycle—design, training, tuning, testing, and deployment.
- Foundational Model Deployment: AI systems reliant on large, pre-trained foundational models (e.g., GPT-4, Gemini, DeepSeek R1, Microsoft 365 Copilot) that are locally deployed and/or customized using institutional data through fine-tuning, retrieval-augmented generation (RAG), or similar techniques (a minimal sketch of the RAG pattern appears after this list).
- Integrated Deployment: AI systems embedded within enterprise software (Banner, Microsoft 365, Adobe Firefly, Zoom AI, Gradescope, etc.) or provided via third-party APIs or platforms.
- Public-Facing Deployment: AI systems that interact directly with university users or the public (e.g., ChatGPT, Grammarly, Microsoft Copilot Chat, chatbots, recommendation engines, educational tools, etc.).
- Embedded Deployment: AI systems embedded in devices or infrastructure systems (e.g., smart campus sensors, autonomous lab instruments).
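For readers unfamiliar with the retrieval-augmented generation (RAG) pattern named above, the sketch below illustrates the basic flow: retrieve the institutional documents most relevant to a question, then supply them as context in the model prompt. This is a minimal illustration under stated assumptions; the toy corpus, the bag-of-words retriever, and the call_llm() placeholder are hypothetical stand-ins, not an approved OU configuration or any specific vendor’s API.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# The corpus, scoring, and call_llm() below are hypothetical
# placeholders for illustration only.
from collections import Counter
import math

# Toy "institutional data": in practice, indexed university documents.
DOCUMENTS = [
    "Course syllabi must state the instructor's policy on AI tool use.",
    "FERPA-protected student records may not be entered into public AI tools.",
    "The AI Resource Center lists institutionally licensed GenAI tools.",
]

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Bag-of-words cosine similarity between two token-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = Counter(query.lower().split())
    ranked = sorted(
        DOCUMENTS,
        key=lambda d: cosine_similarity(q, Counter(d.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def call_llm(prompt: str) -> str:
    """Placeholder for a call to a locally deployed foundational model."""
    return f"[model response to prompt of {len(prompt)} characters]"

def answer(question: str) -> str:
    # Ground the model's answer in retrieved institutional context.
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)

print(answer("What must a syllabus say about AI?"))
```

In a production deployment, the retriever would typically use vector embeddings rather than word counts, and the placeholder would call a locally deployed or contracted model; the governance questions raised in these principles apply at each of those steps.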
OU’s GenAI Offerings
As the use of generative AI and machine learning tools continues to grow, it is important to choose tools that align with both the user’s needs and the user’s data responsibilities. The University of Oklahoma offers access to AI tools via the AI Resource Center. Some of these tools are provided under institutional licenses and may include enhanced data privacy protections or security features. However, not all tools share the same terms of use, cost structure, or data handling practices. When using AI tools, faculty should review the specific terms associated with each platform, especially how their data is handled. While some tools are configured to prevent user inputs from being used for model training, no system can eliminate all risk of data exposure. Faculty in professional colleges (i.e., law and health sciences), or faculty who work with special populations or protected data (e.g., Native or education data), may have additional requirements that mandate more careful or limited use of these systems.
AI Potential and Academic/Teaching Implications
Because large language models can write quickly and competently, they can assist students in completing many common types of assessments. Text outputs from language models can be used to answer essay questions, write papers, and reply to discussion board posts, simulating recall and understanding with little effort or engagement. These capabilities carry both benefits and risks. For instance, these tools are often used to generate background material for papers, complete with citations; however, current versions are prone to inventing fictitious papers that sound plausible but were never published (a type of hallucination).
Language models generalize and summarize existing knowledge based on probability predictions of word sequences rather than copying text verbatim, so it may be impossible to identify their use with certainty. While detection tools like Turnitin or GPTZero may report the probability of AI authorship, they are easily circumvented and cannot provide definitive proof of cheating; false positives and negatives are possible, and even likely. Because these tools have a high error rate and their results can be misleading or unreliable, the University does not recommend using AI-detection tools to identify potential student misuse of generative AI. Faculty must not rely on these technologies as the basis for assessing academic misconduct, and students should not be penalized solely on the results of AI-detection tools.
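To illustrate why a detector’s reported “probability of AI authorship” cannot, on its own, establish misconduct, the short calculation below applies Bayes’ rule under assumed error rates. The sensitivity, false positive rate, and prevalence figures are illustrative assumptions, not measured properties of Turnitin, GPTZero, or any other product.

```python
# Illustrative only: the rates below are assumptions, not measured
# properties of any specific AI-detection tool.
sensitivity = 0.90       # assumed P(flagged | AI-written)
false_positive = 0.05    # assumed P(flagged | human-written)
prevalence = 0.10        # assumed share of submissions that are AI-written

# Bayes' rule: probability a flagged submission is actually AI-written.
p_flag = sensitivity * prevalence + false_positive * (1 - prevalence)
ppv = sensitivity * prevalence / p_flag
print(f"P(AI-written | flagged) = {ppv:.2f}")  # ~0.67 under these assumptions
```

Even under these fairly generous assumptions, roughly one in three flagged submissions would be honest student work, which is why detector output alone cannot support a misconduct finding.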
If an instructor believes that an academic integrity violation has occurred—whether related to unauthorized AI use or otherwise—they should follow the established university procedures and refer the case to the appropriate academic integrity office for review and adjudication. Upholding due process and consistency in handling these matters ensures fairness for all students and maintains the integrity of the learning environment.
Principles Related to Using AI in Teaching
1. Accountability and Responsible Use
Faculty play a critical role in setting a tone of ethical, transparent, and intentional use of AI in the classroom. Responsible use begins with clear communication about which AI tools are permitted, the rationale behind those decisions, and how students are expected to engage with them. Faculty should actively guide students in understanding the limitations, risks, and potential of AI technologies. Emphasizing that these tools should supplement – not replace – student effort supports both academic integrity and deeper learning. Regular classroom discussions about the appropriate and ethical use of AI, along with clearly stated syllabus policies and consequences, help build a shared understanding of accountability between faculty and students.
2. Authentic Instructor-Student Interaction
AI tools should complement—not replace—the essential role of instructor presence, interaction, and feedback in the learning environment. Faculty are expected to maintain regular and substantive interaction with students, as required by federal regulations that distinguish distance education from correspondence education (34 CFR § 600.2). This includes proactive engagement, meaningful dialogue, and timely, personalized feedback that supports student learning and development. While AI tools may assist in streamlining certain teaching tasks (e.g., drafting quiz questions, suggesting feedback language), they must not be used as a substitute for authentic instructor engagement. The University of Oklahoma maintains that the majority of instructional interaction—particularly feedback on assignments, participation in discussion, and guidance on course progress—must be carried out directly by the instructor. This standard ensures a quality educational experience and upholds the distinct value of an OU education.
3. Documentation
Clear and consistent documentation of generative AI expectations is essential for transparency and fairness. Faculty should include statements in course syllabi that define their policies for the use of AI tools, referencing broader institutional standards on academic integrity. These policies may vary in scope—from unrestricted use to complete prohibition—but should always clarify what constitutes appropriate use, when disclosure is required, and how to properly cite AI-generated content. By documenting these expectations in the syllabus and individual assignments, faculty ensure that students are aware of their responsibilities and can engage with AI in ways that support course goals. For example:
Allowed: If faculty permit unrestricted use of AI, a policy might read: “Students are welcome to use AI tools for brainstorming, research, and refining ideas, as long as they properly cite any AI-generated content.”
Allowed with Restrictions: A policy that encourages transparency in how students use AI might read: “Before collaborating with an AI chatbot on your work for this course, please request permission by sending me a note that describes (a) how you intend to use the tool and (b) how using it will enhance your learning. Any use of AI to complete an assignment must be acknowledged in a citation that includes the prompt you submitted to the bot, the date of access, and the URL of the program.”
Strictly Prohibited: If faculty think students’ learning is best supported by avoiding AI altogether, the course policy might read: “Collaboration with ChatGPT or other AI composition software is not permitted in this course.”
4. Account for and Limit Bias in Use of AI
When using AI tools to support administrative or operational work, it is important to be aware that these systems can reflect or amplify certain types of bias—such as stereotypes, assumptions, or patterns that may not be accurate or inclusive. This can impact the fairness, quality, or appropriateness of communications, decisions, or data interpretations generated with AI assistance.
Faculty should use AI tools thoughtfully by reviewing outputs for accuracy, fairness, and inclusivity, especially when materials are shared with diverse audiences or used to inform policy or service decisions. Human oversight remains essential, and faculty are encouraged to question and correct AI-generated content when needed to ensure that institutional values and expectations are upheld (“Bias in AI,” 2025).
5. Course and Curriculum Design
The rapid evolution of AI technologies calls for a thoughtful reevaluation of course design and assessment practices. Faculty should critically examine how generative AI intersects with their learning objectives, disciplinary standards, and instructional goals. This may involve updating course content to include AI competencies, revising assignments to promote original thinking and process-oriented learning, or rethinking assessments to ensure they accurately reflect student understanding. As AI becomes increasingly integrated into professional and academic environments, faculty should prepare students to use these tools ethically and effectively within their field. Course design decisions should be grounded in both the opportunities presented by AI and the risks it introduces to academic rigor and student development.
6. Privacy Protection
Under no circumstances should faculty input personal or private information, or information protected by HIPAA, FERPA, the Common Rule, or Export Control regulations, into an unsecured GenAI system. Personal data includes any information that may relate to an identified or identifiable person. Some data carry additional restrictions and protocols (e.g., Indigenous data) and should not be shared with these models without additional consent. Researchers should follow University of Oklahoma policies.
Faculty must exercise caution when using AI-powered tools or third-party bots that connect to virtual meetings—such as Zoom or Microsoft Teams—for transcription, recording, or automatic summarization. These tools may join meetings without an explicit invitation, potentially violating university policies, participant consent expectations, and privacy regulations.
All meeting organizers and participants are expected to review and comply with the University's guidance on managing third-party apps and bots, as outlined in OU IT’s Knowledgebase Article on Uninvited Meeting Bots. Before enabling such tools, ensure that all attendees are aware of and have consented to the use of AI for recording or summarizing meeting content. Unauthorized use of such tools in meetings involving sensitive information, student data, or instructional activity may result in violations of institutional privacy or data protection policies.
7. Institutional Oversight
To use an AI system not provided by OU, complete the IT Security Assessment (which may include a privacy assessment, depending on the data types involved). Faculty should not share sensitive, confidential, or regulated data with any commercial or open-source AI solution or model unless an IT security assessment has been completed and a university contract that includes appropriate data protection language (such as FERPA contract language) has been executed between OU and the provider (e.g., a Purchase or Software Agreement and, if applicable, a Business Associate Agreement (BAA)). Use of the system prior to full approval may expose the University to legal and data security risks.
8. Education and Training
Faculty have a unique opportunity to help students build digital literacy and ethical awareness by incorporating discussion and instruction around AI into their teaching practices. Educating students on the capabilities and limitations of generative AI tools enhances critical thinking and supports responsible use. Instructors are encouraged to introduce AI-related content early in the course and continue reinforcing expectations throughout the term. Discipline-specific examples, classroom demonstrations, and reflective assignments can help students engage with AI in meaningful ways. Faculty should also take advantage of available resources to stay informed about emerging tools and pedagogical strategies that support AI integration.
Acknowledgement
This document was drafted with the assistance of ChatGPT, an AI language model developed by OpenAI, based on guidance from the NIST Artificial Intelligence Risk Management Framework (2023), Gartner’s EU AI Act readiness series (2023), and internal governance requirements. Human oversight, subject matter expertise, and institutional review were applied to ensure accuracy, relevance, and alignment with organizational goals.
References