DRAFT - AI Usage Guidelines for Students

Overview

At the University of Oklahoma, we are excited about and committed to helping you explore how AI tools can support your learning and boost your productivity. We offer guidance on prioritizing your learning while you experiment with AI tools, whether you are using AI to brainstorm ideas, summarize class notes, improve writing, streamline routine daily tasks, or tackle challenging problems. Ultimately, students should learn enough about AI to become responsible practitioners and/or informed critics. Please remember that each course at OU sets its own policies for using AI tools. If you are unsure about what's allowed, refer to your syllabus statement and feel free to ask your instructor.

Principles for AI 

According to the AI Working Groups Charter, established by the Interim Chief AI Officer for the University of Oklahoma, the AI Working Group for Governance and Policy (“AI Governance Working Group”) is tasked to “ensure responsible and ethical AI use through the development of clear guidelines, policies, and oversight mechanisms.”  Building on the foundations of the National Institute of Standards and Technology’s (“NIST”) Artificial Intelligence Risk Management Framework for AI Risks and Trustworthiness (AI RMF 1.0, Section 3 [NIST AI 100-1], fig. 1), the AI Governance Working Group has established the following set of principles to guide the development of AI guidelines and policies for implementation at the University of Oklahoma.   

These principles are written to guide the work of the AI Governance Working Group, with the goal of balancing innovation and enablement with ethical and secure deployment to "enhance innovation across education, research, and operation[s]" at the University of Oklahoma. Each principle is set forth in two parts: an aspirational component that is idealistic and an operational component that is pragmatic.

Lastly, these principles are written with the understanding that they are subject to change due to the broad and rapidly advancing nature of AI and the standards being developed accordingly.  This includes monitoring the newly introduced NIST AI Standards “Zero Drafts” Pilot Project to continually develop AI standards. 

[Image: OU AI Principles]

Visit Ethical and Trustworthy AI Usage Principles to learn more. 

Foundational Information 

For the purposes of these principles, the following foundational information regarding AI systems applies:

Definition of Artificial Intelligence: As used in the NIST AI RMF 1.0 (adapted from OECD Recommendation on AI:2019; ISO/IEC 22989:2022), AI “refers to an AI system as an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.” 

Deployment of AI Systems: The university recognizes that AI systems are deployed for a variety of use cases, with varying levels of complexity, and can be provided internally or through third parties via direct web-based use, APIs, or other means. As used in these principles and elsewhere, the following categories of AI systems apply:

  • Experimental Deployment: AI systems developed and used for prototyping, research exploration, or early-phase trials within academic environments. 
  • Custom In-House Deployment: AI systems developed and used entirely within the university using proprietary or institution-owned data and infrastructure.  These systems offer full transparency and accountability, requiring rigorous adherence to internal controls across the AI lifecycle—design, training, tuning, testing, and deployment. 
  • Foundational Model Deployment: AI systems reliant on large, pre-trained foundational models (e.g., GPT-4, Gemini, DeepSeek R1, Microsoft 365 Copilot) that are locally deployed and/or customized using institutional data (such as fine-tuning, retrieval-augmented generation, or other similar technologies).
  • Integrated Deployment: AI systems embedded within enterprise software (Banner, Microsoft 365, Adobe Firefly, Zoom AI, Gradescope, etc.) or provided via third-party APIs or platforms. 
  • Public-Facing Deployment: AI systems that interact directly with university users or the public (e.g., ChatGPT, Grammarly, Microsoft Copilot Chat, chatbots, recommendation engines, educational tools, etc.). 
  • Embedded Deployment: AI systems embedded in devices or infrastructure systems (e.g., smart campus sensors, autonomous lab instruments).   

OU’s GenAI Offerings 

If permitted, you may use AI tools in research, coursework, or projects. With the rise of numerous Generative AI (GenAI)-based tools, it is important to know how to choose the best tool for your needs.

As GenAI and machine learning tools continue to proliferate, it is important to choose tools that align with your needs and your data responsibilities. The University of Oklahoma offers access to several AI tools via the AI Resource Center. These tools are provided under institutional licenses and may include enhanced data privacy protections or security features. We recommend using these tools over similar technologies whenever possible. However, not all tools share the same terms of use, cost structure, or data handling practices. When using AI tools, please review the specific terms associated with each platform, especially how your data is handled. While some tools are configured to prevent your inputs from being used for model training, no system can eliminate all data exposure risks. We are closely monitoring emerging AI trends, opportunities, and uncertainties.

Discuss the appropriate course of action with your advisors and/or instructors when using AI solutions for your academic, research, and scholarly activities. Do not share sensitive, confidential, or regulated data with any commercial or open-source AI solution or model unless an IT Security Assessment has been completed and an agreement has been executed between OU and the provider (e.g., a Purchase or Software Agreement and a Business Associate Agreement (BAA), if applicable). (References: OU Risk Assessment Standard, OU Identity and Access Management Policy.) Please remember that some information may have additional restrictions and associated protocols (e.g., Indigenous, healthcare, or subjects' data), and these types of information should not be shared with these models without additional consent. See the OU Acceptable Use Policy and the OU CyberSecurity Policy to ensure the tools you use are legitimate and that you follow safe computing practices.

Choosing a GenAI Tool to Use

The GenAI space is expanding rapidly, with new tools and capabilities released every few days, so understanding what to look for in these tools is key. When considering tools beyond what the University offers, keep the following in mind:

  • Privacy Risks: Data shared with external GenAI tools is often not private and may be accessible by third parties. Do not share personal, financial, or sensitive information. Additionally, for AI tools supported by OU, using a personal account instead of your official OU account does not provide the same level of data protection. To help safeguard your information, we recommend always using your OU account when accessing these supported AI tools.

  • Misleading Costs: Some AI tools appear free at first but limit key features, run time, or results behind a paywall, requiring users to subscribe or pay for full functionality. Be cautious of services that request credit card information upfront, which may indicate automatic subscriptions or hidden fees.

  • Understanding Limitations: AI tools generate responses based on statistical patterns, not true understanding, and they can fabricate ("hallucinate") data. They may also carry biases, particularly favoring Standard American English, which can disadvantage other languages, dialects, and writing styles. If the data used to train an AI model is biased, its results may also be biased, and models can reinforce existing inequalities when historical data is disproportionate. For instance, some reports found that when users asked AI to create images of people in specialized professions, the tools depicted both younger and older people, but the older people were almost always men, reinforcing gendered bias about the role of women in the workplace. AI models also often underrepresent or misrepresent marginalized cultural perspectives when their training data lacks those voices. These examples highlight why it is crucial to approach AI-generated content critically and to be aware of how bias can influence outputs.

  • Use Reputable Sources: When using external AI tools, verify the product's or tool's legitimacy. Check whether OU IT has already assessed the AI tool for security risks. Use of a tool prior to full approval may expose the University to legal and data security risks.

Considering the Limitations of Using GenAI 

  • GenAI is not sentient: AI models simulate intelligence by identifying and replicating patterns in data, but they do not possess self-awareness, consciousness, emotions, or independent thought. Their responses are based purely on statistical associations, not genuine understanding or intentional reasoning. 

  • GenAI is biased: AI models are trained on extensive historical data collections, which often contain human assumptions, stereotypes, and social inequalities. As a result, AI can reflect and even reinforce these biases in its outputs. Because of this, AI tools should not be relied upon for ethical deliberation, sensitive judgment, or decision-making without careful human oversight.

  • GenAI can mislead: AI-generated content may include false or inaccurate information, often presented confidently and convincingly, making it difficult to spot errors immediately. Sometimes, AI systems can hallucinate, producing names, data, quotes, or sources that do not exist. Because of this, verifying all AI-generated information against credible, trusted sources is essential before using it. 

Using AI for Learning 

There are ways that you can use GenAI to develop your knowledge and skills. At the same time, GenAI can easily impede your learning by offering shortcuts and allowing you to avoid the hard work that learning requires. Here are a few ways you might use GenAI to assist, rather than undermine, your learning process. 

  • As a Search Tool: AI-assisted search tools like Google and Copilot can provide summaries along with references, and they can be useful for brainstorming and quickly sketching ideas. However, most AI chatbots provide broad overviews and lack access to many academic sources. Always cross-check information against academic literature and other valid sources.

  • For Summarization: OU-supported AI tools like Copilot, NotebookLM, and Zoom AI Companion can generate quick summaries of materials and meetings, aiding note-taking and improving comprehension. However, it is essential to adapt these summaries to your needs and improve on the content yourself. Please do not rely solely on AI summaries; use them as a canvas for sketching your own ideas.

  • For Studying: Use AI to create practice quizzes and test knowledge. Treat AI as your student and have it ask you questions instead of providing answers to deepen your understanding of a subject. 

  • For Writing and Problem-Solving: AI tools can assist with brainstorming and editing but should not replace original thought processes. Academic policies often restrict AI use for assignments—refer to your syllabus and consult your instructor. 

  • AI Learning Resources: Educate yourself on how to use AI effectively with OU resources such as: 

  • Presentations on GenAI use in writing are provided by the OU Writing Center and are available by request with two weeks' notice.

Using AI in Your Courses 

  • Be Aware of Your AI Usage: GenAI encompasses many technologies beyond chatbots like ChatGPT, Gemini, or Copilot. Software such as Grammarly, for instance, also uses GenAI. You need to be aware of how and when you are using all AI technologies while completing your coursework. While your instructors may have no problem with you using Grammarly (or something similar) to correct spelling and grammar errors, these technologies offer much more than basic editing; for instance, Grammarly can revise your writing to sound more "academic" or "confident." Check with your instructor about acceptable AI usage to avoid potential policy violations.

  • Instructor Policies: Each course may have different AI use guidelines. It's important to carefully review your syllabus to understand what is allowed and what isn't. If the policy is unclear or not addressed, ask your instructor for clarification before using AI tools in your coursework. 

  • Academic Integrity: Use AI to enhance your learning, not replace it. Avoid using GenAI in ways that compromise academic integrity. Always disclose and cite your use of AI when it is allowed in your course or research.   

  • Do No Harm: As a student, how you use GenAI is about more than just abiding by policies or maintaining academic integrity. Using GenAI during your coursework can potentially harm your instructors and peers – an outcome you must avoid. The most common way GenAI causes harm in your courses is by making your instructor or peers read machine-generated text (something few people would willingly choose to do) when they believe they are reading something written by a human (you).

Using AI for Career Readiness 

  • AI in the Job Market: AI literacy is becoming increasingly important in the workforce. Some industries require AI skills, while others restrict AI use. Becoming AI literate lets you speak to AI's use in the particular industry or job you are pursuing and understand the ethical and other issues involved.

  • Research Considerations: Sensitive or identifiable data must not be entered into AI tools used for research. Consult your advisor before using AI for University-sponsored work.

Use of AI Tools in Virtual Meetings 

Students must exercise caution when using AI-powered tools or third-party bots that connect to virtual meetings—such as Zoom or Microsoft Teams—for transcription, recording, or automatic summarization. These tools may join meetings without an explicit invitation, potentially violating university policies, participant consent expectations, and privacy regulations.

All meeting organizers and participants are expected to review and comply with the University's guidance on managing third-party apps and bots, as outlined in OU IT’s Knowledgebase Article on Uninvited Meeting Bots. Before enabling such tools, ensure that all attendees are aware of and have consented to the use of AI for recording or summarizing meeting content. Unauthorized use of such tools in meetings involving sensitive information, student data, or instructional activity may result in violations of institutional privacy or data protection policies. 

Questions to Consider When Using AI    

AI tools can be valuable resources for research, studying, and skill development but must be used responsibly. Prioritize critical thinking, verify AI-generated information, and follow course-specific guidelines. Understanding AI's limitations and ethical considerations will help you become an informed user academically and professionally. 

As AI transforms academic and professional spaces, consider the following: 

  • How will I integrate AI tools into my academic work to enhance, rather than curb, my learning and critical thinking skills?

  • Does AI support or hinder my mastery of course objectives? 

  • Is the content I generate and submit for courses my original work, and is it accurate and verifiable?

  • How will I treat AI-generated content in my academic work? 

  • Is my use of AI for assignments, projects, and group work fair and equitable to my peers and instructor?

If You Have Concerns 

Talk with your instructor if you have concerns about how GenAI is being used or limited in a course. Training materials may also be available to help you understand the tools. 

Details


Article ID: 3406
Created
Wed 4/16/25 8:00 AM
Modified
Fri 5/2/25 9:31 AM