DRAFT - AI Usage Guidelines for Research

Overview

Artificial Intelligence (AI) will create new opportunities across the entire spectrum of research activities, from the natural sciences, social sciences, economics and political science, and humanities to engineering, the biomedical and health sciences, and the creative arts. AI will serve as a catalyst for profound transformations across these domains. Concurrently, it will foster the emergence of novel fields of study while potentially diminishing the prominence of others.

Principles for AI 

According to the AI Working Groups Charter, established by the Interim Chief AI Officer for the University of Oklahoma, the AI Working Group for Governance and Policy (“AI Governance Working Group”) is tasked to “ensure responsible and ethical AI use through the development of clear guidelines, policies, and oversight mechanisms.”  Building on the foundations of the National Institute of Standards and Technology’s (“NIST”) Artificial Intelligence Risk Management Framework for AI Risks and Trustworthiness (AI RMF 1.0, Section 3 [NIST AI 100-1], fig. 1), the AI Governance Working Group has established the following set of principles to guide the development of AI guidelines and policies for implementation at the University of Oklahoma.   

These principles are written to guide the work of the AI Governance Working Group, with the goal of balancing innovation and enablement with ethical and secure deployment to "enhance innovation across education, research, and operation[s]" at the University of Oklahoma. The principles follow a two-part structure: each principle below consists of an aspirational component that is idealistic and an operational component that is pragmatic.

Lastly, these principles are written with the understanding that they are subject to change given the broad and rapidly advancing nature of AI and the standards being developed alongside it. This includes monitoring the newly introduced NIST AI Standards "Zero Drafts" Pilot Project as AI standards continue to be developed.

Visit Ethical and Trustworthy AI Usage Principles to learn more. 

Foundational Information 

For the purposes of these principles, the following foundational information regarding AI systems should apply: 

Definition of Artificial Intelligence: As used in the NIST AI RMF 1.0 (adapted from OECD Recommendation on AI:2019; ISO/IEC 22989:2022), AI “refers to an AI system as an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.” 

Deployment of AI Systems: The university recognizes that AI systems are deployed for a variety of use cases, with varying levels of complexity, and can be provided internally or through third parties via direct web-based use, APIs, or other means. As used in these principles and elsewhere, the following categories of AI systems apply:

  • Experimental Deployment: AI systems developed and used for prototyping, research exploration, or early-phase trials within academic environments. 
  • Custom In-House Deployment: AI systems developed and used entirely within the university using proprietary or institution-owned data and infrastructure.  These systems offer full transparency and accountability, requiring rigorous adherence to internal controls across the AI lifecycle—design, training, tuning, testing, and deployment. 
  • Foundational Model Deployment: AI systems reliant on large, pre-trained foundational models (e.g., GPT-4, Gemini, DeepSeek R1, Microsoft 365 Copilot) that are locally deployed and/or customized using institutional data (such as fine-tuning, retrieval-augmented generation, or other similar technologies). 
  • Integrated Deployment: AI systems embedded within enterprise software (Banner, Microsoft 365, Adobe Firefly, Zoom AI, Gradescope, etc.) or provided via third-party APIs or platforms. 
  • Public-Facing Deployment: AI systems that interact directly with university users or the public (e.g., ChatGPT, Grammarly, Microsoft Copilot Chat, chatbots, recommendation engines, educational tools, etc.). 
  • Embedded Deployment: AI systems embedded in devices or infrastructure systems (e.g., smart campus sensors, autonomous lab instruments).   

OU’s GenAI Offerings 

As the use of generative AI and machine learning tools continues to grow, it is important to choose tools that align with both the user's needs and the user's data responsibilities. The University of Oklahoma offers access to AI tools via the AI Resource Center. Some of these tools are provided under institutional licenses and may include enhanced data privacy protections or security features. However, not all tools share the same terms of use, cost structure, or data handling practices. When using AI tools, faculty should review the specific terms associated with each platform, especially how their data is handled. While some tools are configured to prevent user inputs from being used for model training, no system can eliminate all risk of data exposure. Faculty in professional colleges (i.e., law and health sciences), or who work with special populations or restricted data (e.g., Native or education data), may have additional requirements that mandate more careful or limited use of these systems. 

AI Potential and Research 

AI can assist researchers in different roles and at all career stages by supporting data analysis, coding, writing, visualization, and literature synthesis. Recent advancements demonstrate that models like GPT-4 can clean datasets, develop analytical strategies, run statistical tests, and generate academic content. However, while these tools enhance efficiency, they also pose risks, including hallucinations, biases, and fabrications. 

As AI evolves, researchers must balance innovation with integrity. Some academic journals and federal funding agencies have restricted GenAI use in manuscripts and grant applications. It is crucial for researchers to familiarize themselves with funding agencies’ and journals’ guidelines to ensure compliance. 

For AI to truly drive discoveries, it will also be essential for researchers to understand AI tools’ abilities and limitations and leverage them effectively, selecting the appropriate tool for the research need in question. 

Principles Related to Using AI in Research 

1. Accountability and Responsible Use 

Researchers should use AI systems in research only for tasks where those systems perform well and exhibit few hallucinations. Researchers should ensure they have a sound understanding of the capabilities and limitations of AI tools within the context of a specific tool's implementation. Because AI models are predictive systems and lack true understanding, researchers should verify all outputs for accuracy and attribution and attest that this has been done in all cases, detailing the methods used to do so. 

Researchers are accountable for any AI outputs, outcomes, and effects throughout the research process. To ensure accountability, researchers should identify the person(s) responsible for validating or approving AI outputs and should ensure meaningful human oversight over AI-driven processes within a research project. When using AI tools to generate code, researchers should ensure that AI-generated code carries out operations as expected. 
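
One way to meet this expectation is to pair AI-generated code with tests written independently by the researcher. The sketch below is illustrative only: normalize_scores stands in for a hypothetical AI-drafted helper, and the assertions encode the researcher's own expectations rather than anything produced by the AI tool.

    # Illustrative sketch: independently checking an AI-drafted helper function.
    # normalize_scores is a hypothetical example of AI-generated code; the test
    # below encodes the researcher's own expected behavior.

    def normalize_scores(scores):
        # Hypothetical AI-generated implementation: min-max rescaling to [0, 1].
        lo, hi = min(scores), max(scores)
        if hi == lo:
            return [0.0 for _ in scores]
        return [(s - lo) / (hi - lo) for s in scores]

    def test_normalize_scores():
        # Ordinary case: endpoints map to 0 and 1, and order is preserved.
        assert normalize_scores([2, 4, 6]) == [0.0, 0.5, 1.0]
        # Edge case the researcher cares about: identical inputs must not divide by zero.
        assert normalize_scores([5, 5, 5]) == [0.0, 0.0, 0.0]

    if __name__ == "__main__":
        test_normalize_scores()
        print("AI-generated helper passed the researcher-written checks.")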

When using AI tools to analyze or otherwise process research data provided by community partners, external entities, and other populations, researchers should obtain written consent from those partners before uploading such data to AI tools. Researchers should communicate transparently with research partners about how the data they contributed is stored, shared, and made accessible to other parties by any AI tool used in the research process. 

When uploading copyrighted data to AI tools, researchers must ensure that all such uses fall within the fair use doctrine. The Office of Technology Commercialization can provide guidance regarding permissible use of copyrighted materials. 

2. Documentation 

All use of AI in research should be fully documented and reported in detail to sponsors for grants, editors for manuscripts and publications, reviewers for conferences, and audiences for invited talks and presentations. Of note, sponsors, journals, and professional societies are developing their own best practices and policies in tandem, so researchers should be aware of additional requirements beyond University policies. 

When using AI in research, researchers should clearly identify the AI tool or system used, including its name and version. Researchers should explain the purpose and context for using AI (e.g., data analysis, drafting content), describe the stage(s) at which AI is used (input processing, output, or post-processing), and clarify how an AI system is supporting human analysis. 

Researchers should document how the AI generates outputs. Researchers should state the origin and type of data used to train or fine-tune the AI system, disclose known limitations or biases associated with the model, and share assumptions or constraints that influence how the AI tool is applied in context.  
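
As one illustration of how this documentation might be captured alongside project files, the sketch below records AI use in a simple machine-readable format. The field names and values are hypothetical examples and would need to be adapted to a given sponsor's or journal's disclosure requirements.

    # Illustrative sketch: recording AI use for a project as a JSON disclosure file.
    # All field names and values below are hypothetical examples, not required fields.
    import json

    ai_use_record = {
        "tool": "ExampleGPT",                  # name of the AI tool or system
        "version": "2025-03",                  # model or release version
        "purpose": "drafting exploratory data analysis code",
        "stage": "post-processing",            # input processing, output, or post-processing
        "human_oversight": "all generated code reviewed and tested by the PI",
        "training_data_notes": "vendor-reported; not trained or fine-tuned on project data",
        "known_limitations": "may fabricate citations; outputs verified manually",
    }

    with open("ai_use_disclosure.json", "w") as f:
        json.dump(ai_use_record, f, indent=2)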

3. Account for and Limit Bias 

Bias in AI systems should be understood, quantified, described, and mitigated in order to ensure that quality research is produced when AI tools are incorporated into research workflows. This is especially true when applying outputs to research involving human participants or using data from such studies. 

Selection bias, confirmation bias, measurement bias, stereotyping bias, and out-group homogeneity bias are all common forms of bias that can be introduced or amplified by AI tools. Researchers can mitigate bias by ensuring that AI tools used are trained on diverse and representative data, by using bias detection tools, by continuously auditing AI systems to evaluate fairness of outputs, and by keeping human review as a critical component of any AI workflow within a research lifecycle (“Bias in AI”, 2025). 
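
As a minimal illustration of the kind of auditing described above, the sketch below compares positive-outcome rates across groups in a set of model outputs. The groups, predictions, and interpretation are hypothetical, and a real audit would use fairness metrics appropriate to the study design and data.

    # Illustrative sketch: a minimal fairness check on hypothetical model outputs.
    # Reports groups whose rate of positive predictions diverges from the overall rate.
    from collections import defaultdict

    # Hypothetical (group, prediction) pairs from an AI-assisted screening step.
    predictions = [("A", 1), ("A", 0), ("A", 1), ("B", 0), ("B", 0), ("B", 1)]

    counts = defaultdict(lambda: [0, 0])  # group -> [positive predictions, total]
    for group, label in predictions:
        counts[group][0] += label
        counts[group][1] += 1

    overall_rate = sum(label for _, label in predictions) / len(predictions)
    for group, (positives, total) in counts.items():
        rate = positives / total
        print(f"group {group}: positive rate {rate:.2f} (gap vs overall {rate - overall_rate:+.2f})")
    # A large gap is a prompt for human review, not an automatic finding of bias.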

4. Privacy Protection 

Under no circumstances should researchers input personal, private, or HIPAA-, FERPA-, Common Rule-, or Export Control-protected information into an unsecured GenAI system. Personal data includes any information that may relate to an identified or identifiable person. Some data carry additional restrictions and protocols (e.g., Indigenous data), and these types of information should not be shared with these models without additional consent. Researchers should follow University of Oklahoma policies. 

Researchers must exercise caution when using AI-powered tools or third-party bots that connect to virtual meetings (such as Zoom or Microsoft Teams) for the purposes of transcription, recording, or automatic summarization. These tools may join meetings without an explicit invitation, potentially violating university policies, participant consent expectations, and privacy regulations. 

All meeting organizers and participants are expected to review and comply with the University's guidance on managing third-party apps and bots, as outlined in OU IT’s Knowledgebase Article on Uninvited Meeting Bots. Before enabling such tools, ensure that all attendees are aware of and have consented to the use of AI for recording or summarizing meeting content. Unauthorized use of such tools in meetings involving sensitive information, student data, or instructional activity may result in violations of institutional privacy or data protection policies. 

5. Institutional Oversight 

The use of any GenAI system not provided by OU or not on the approved software list requires a completed IT Security Assessment before the system is used with an OU account or OU-related data. See the OU Acceptable Use Policy and the OU CyberSecurity Policy. Use of a tool prior to full approval may expose the University to legal and data security risks. 

The institution will develop processes to monitor and evaluate AI-related research, ensuring compliance with research integrity guidelines. Research output, including quality, impact, and relevance, will be assessed, and a culture of responsible AI research will be fostered through education, training, and institutional support. 

6. Education and Training 

Researchers are encouraged to participate in AI ethics and best-practice training to ensure responsible use of AI tools. The University will provide institutional resources including information sessions, resource guides, and consultative services to support researchers in learning about AI capabilities, risks, and ethical considerations. 

7. Collaboration and Partnerships 

Researchers should be aware of guidelines related to external collaborations involving any AI research. If AI is used in joint research efforts, funding policies, or industry partnerships, researchers should ensure compliance with institutional and funding agency requirements.  

Paid or unpaid relationships (contractual or otherwise) with technology service providers surrounding the use of AI should be coordinated with University IT and/or the relevant IT Advisory Committees. Vendors should demonstrate compliance with institutional AI standards, applicable regulatory requirements (e.g., EU AI Act, NIST AI RMF), and contractual safeguards. 

8. Intellectual Property and Human Subjects Research 

AI-generated work may raise questions about intellectual property ownership. Researchers should be aware of relevant policies and seek guidance on AI-created content ownership from the Office of Technology Commercialization. Additionally, research involving human subjects must adhere to ethical standards, ensuring that AI does not compromise participant rights (including the CARE principles), consent processes, or data integrity. Researchers should be aware that the use of AI tools (e.g., LLMs) with IP data might constitute a public disclosure and affect patent and copyright filings. 

Acknowledgement 

This document was drafted with the assistance of ChatGPT, an AI language model developed by OpenAI, based on guidance from the NIST Artificial Intelligence Risk Management Framework (2023), Gartner’s EU AI Act readiness series (2023), and internal governance requirements. Human oversight, subject matter expertise, and institutional review were applied to ensure accuracy, relevance, and alignment with organizational goals. 

References 

  • Gartner. (2023a). Getting Ready for the EU AI Act: Assess Your Exposure to High-Risk Use Cases. Gartner Research. 

  • Gartner. (2023b). Getting Ready for the EU AI Act: Conduct an AI Impact Assessment. Gartner Research. 

  • Gartner. (2023c). Getting Ready for the EU AI Act: Build Capabilities for AI Governance. Gartner Research.