DRAFT - Ethical and Trustworthy AI Usage Principles

Summary

Building on the foundations of the National Institute of Standards and Technology's ("NIST") Artificial Intelligence Risk Management Framework and its characteristics of trustworthy AI (AI RMF 1.0, Section 3 [NIST AI 100-1], fig. 1), the AI Governance Working Group has established the following set of principles to guide the development of AI guidelines and policies for implementation at the University of Oklahoma.

Body

Introduction

According to the AI Working Group's Charter, established by the Interim Chief AI Officer for the University of Oklahoma, the AI Working Group for Governance and Policy ("AI Governance Working Group") is tasked to "ensure responsible and ethical AI use through the development of clear guidelines, policies, and oversight mechanisms." Building on the foundations of the National Institute of Standards and Technology's ("NIST") Artificial Intelligence Risk Management Framework and its characteristics of trustworthy AI (AI RMF 1.0, Section 3 [NIST AI 100-1], fig. 1), the AI Governance Working Group has established the following set of principles to guide the development of AI guidelines and policies for implementation at the University of Oklahoma.

These principles are written to guide the work of the AI Governance Working Group with the goal of balancing innovation and enablement with ethical and secure deployment to "enhance innovation across education, research, and operation[s]" at the University of Oklahoma. The principles follow a two-part structure: each principle below pairs an aspirational component, which is idealistic, with an operational component, which is pragmatic.

Lastly, these principles are written with the understanding that they are subject to change, given the rapid advancement of AI and of the standards being developed for it. This includes monitoring NIST's newly introduced AI Standards "Zero Drafts" Pilot Project, which aims to accelerate the development of AI standards.

Foundational Information

For the purposes of these principles, the following foundational information regarding AI systems should apply:

Definition of Artificial Intelligence: As used in the NIST AI RMF 1.0 (adapted from OECD Recommendation on AI:2019; ISO/IEC 22989:2022), AI “refers to an AI system as an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.”

Deployment of AI Systems: The working group recognizes that AI systems are deployed for a variety of use cases, with varying levels of complexity, and can be provided internally or through third parties via direct web-based use, APIs, or other means. As used in these principles and elsewhere, the following categories of AI systems apply:

  • Experimental Deployment:  AI systems developed and used for prototyping, research exploration, or early-phase trials within academic environments.
  • Custom In-House Deployment: AI systems developed and used entirely within the university using proprietary or institution-owned data and infrastructure. These systems offer full transparency and accountability, requiring adherence to internal controls across the AI lifecycle: design, training, tuning, testing, and deployment.
  • Foundational Model Deployment:  AI systems reliant on large, pre-trained foundational models (e.g., GPT-4, Gemini, Deepseek R1, Microsoft 365 Copilot, etc.) that are locally deployed and/or customized using institutional data (such as fine-tuning, retrieval-augmented generation, or other similar technologies).
  • Integrated Deployment: AI systems embedded within enterprise software (Banner, Microsoft 365, Adobe Firefly, Zoom AI, Gradescope, Apple Intelligence, etc.) or provided via third-party APIs or platforms.
  • Public-Facing Deployment: AI systems that interact directly with university users or the public (e.g., ChatGPT, Grammarly, Microsoft Copilot Chat, chatbots, recommendation engines, educational tools, etc.).
  • Embedded Deployment: AI systems embedded in devices or infrastructure systems (e.g., smart campus sensors, autonomous lab instruments).

 

OU AI Working Group Principles for AI


1. Transparent

Aspirational: AI systems should foster trust by being open in their design, function, process, and use across all applications and departments at the University, especially in teaching, research, publications, and other high-stakes contexts.

Operational: AI system use at the University of Oklahoma should be documented, including the system name, version, role in the workflow, and data inputs. AI use should be disclosed to the appropriate users whenever AI is involved in producing decisions or content that affect academic or other high-impact outcomes.

2. Accountable

Aspirational: Human oversight must anchor every AI system deployment, with responsibility for its outcomes clearly assigned and maintained throughout all AI system lifecycles.

Operational: Each AI system should have designated personnel accountable for verifying outputs, managing risk, and ensuring compliance with institutional and legal standards.

3. Fair

Aspirational: AI systems should actively support equity by proactively identifying and reducing bias in data, design, and application across diverse populations and contexts.

Operational: Bias detection and mitigation must be applied to AI systems through formal bias audits and human review of outcomes, especially when AI systems are deployed in teaching, research, publications, and other high-stakes contexts.

4. Culturally Adept

Aspirational: AI systems must honor cultural diversity, local context, and community values in how they are designed and applied.

Operational: AI systems should be evaluated for linguistic and cultural relevance during design and deployment, with input from representative user groups.

5. Efficient

Aspirational: AI initiatives should serve strategic institutional goals, ensuring efficient use of resources and alignment with academic, research, and operational priorities as set forth in the University strategic plan.

Operational: In line with those goals, proposals for new AI systems must justify the need over existing solutions and include lifecycle plans addressing maintenance, decommissioning, and environmental impact.

6. Safe and Reduces Harm

Aspirational: AI should be designed to minimize physical, emotional, financial, and legal harms to individuals, society, and the environment, avoiding both direct and systemic negative consequences.

Operational: AI systems must have risk assessments and safety testing before deployment, with documented procedures for shutdown, human override, and incident responses in place.

7. Accessible

Aspirational: AI systems should be inclusive, offering equitable access to users of all abilities and circumstances.

Operational: Public- and campus-facing AI systems must meet Web Content Accessibility Guidelines (WCAG) standards and be compatible with assistive technologies to ensure accessibility for all users.

8. Secure, Resilient and Compliant

Aspirational: AI systems must uphold institutional security standards and be resilient against manipulation or misuse.

Operational: Cybersecurity and adversarial-testing controls should be integrated into AI development, deployment, and monitoring processes. Before using an AI system not provided by OU, an IT Security Assessment should be completed. Sensitive, confidential, or regulated data should not be shared with any commercial or open-source AI solution or model unless an IT Security Assessment has been completed and a university contract that includes appropriate data-protection language (such as FERPA contract language or a Business Associate Agreement (BAA)) has been executed between OU and the provider.

9. Continuous Learning

Aspirational: Effective and ethical AI use depends on ongoing education for all stakeholders, fostering responsible innovation. All members of the university community should receive mandatory AI training and education.

Operational: The university should offer AI-focused training, resource guides, and consultations to proactively develop AI competency at the university. AI system users should be required to complete appropriate training modules before using an AI system.

10. Supports Innovation

Aspirational: AI governance at OU should cultivate a spirit of curiosity, experimentation, and boundary-pushing that advances discovery, creativity, and societal impact. Innovation should be empowered, not impeded, by policy—so long as it is guided by shared values and ethical integrity.

Operational: OU’s AI governance documents should encourage innovation, experimentation, and creative inquiry while ensuring safe and responsible development. Policies should include clear pathways for pilot-testing and research involving AI, with guardrails to assess potential impacts and a streamlined review process for ethical and security vetting.

 

Acknowledgment & Disclosure

Some content in this document was drafted with the assistance of ChatGPT, an AI language model developed by OpenAI, based on guidance from the NIST Artificial Intelligence Risk Management Framework (2023), Gartner’s EU AI Act readiness series (2023), and internal governance requirements. Human oversight, subject matter expertise, and institutional review were applied to ensure accuracy, relevance, and alignment with organizational goals.

 

References

  • OpenAI. (2025). ChatGPT (GPT-4.5 and 4o models). https://chat.openai.com
  • Gartner. (2023a). Getting Ready for the EU AI Act: Assess Your Exposure to High-Risk Use Cases. Gartner Research.
  • Gartner. (2023b). Getting Ready for the EU AI Act: Conduct an AI Impact Assessment. Gartner Research.
  • Gartner. (2023c). Getting Ready for the EU AI Act: Build Capabilities for AI Governance. Gartner Research.
  • National Institute of Standards and Technology. (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0). U.S. Department of Commerce. https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf
 

Details

Article ID: 3408
Created
Thu 4/17/25 5:03 PM
Modified
Fri 5/2/25 9:29 AM
