Generative AI Policy: IT Operations
Guidance
Technical
guidance for using AI tools safely
while protecting institutional
data
On this page:
Purpose and Scope
This document provides technical guidance for
using generative AI tools on The Master's
University (TMU) IT infrastructure. IT
Services & Security maintains this policy
to help faculty and staff use AI safely while
protecting institutional data and
systems.
What this policy covers: Technical
requirements for AI tools on TMU-managed
devices, networks, and accounts. Data
protection requirements when using AI
tools for TMU work.
What this
policy does not cover: Ethical
usage in coursework, academic integrity,
or pedagogical decisions. These topics are
addressed by department-specific policies
and the broader TMU academic AI policy.
Students and faculty should defer to their
department's guidance for classroom and
research ethics.
Related policies: This policy operates
alongside existing TMU data security,
acceptable use, and privacy policies. When
questions arise about overlapping topics or
matters requiring broader governance
decisions, IT Services & Security will
work with TMU executive leadership to address
these concerns.
Core Technical Concepts
Understanding these concepts helps you make
informed decisions about AI tool usage:
Device Ownership vs Data Ownership
Device ownership: TMU manages devices,
email accounts, and network access. On
TMU-owned endpoints, IT maintains
administrative controls and software
configurations.
Data ownership: TMU data belongs to the
institution regardless of which device or AI
tool accesses it. Using a personal laptop or
personal AI subscription does not change data
protection obligations when handling TMU
information.
Key
principle: If you're doing TMU
work or handling TMU data, these
guidelines apply whether you're using a
TMU device or personal equipment.
How AI Tools Handle Data
Public AI tools store your inputs. Consumer services like free ChatGPT, Claude,
or Gemini accounts permanently retain what you
submit. This data may be used to train models
or accessed by the company. Once entered, it
cannot be fully retracted.
Enterprise tools provide stronger
protections. Business agreements with
AI vendors can include data processing
agreements, opt-out from training, and
enhanced security controls. These tools are
appropriate for sensitive workflows.
TMU liability: When TMU data is exposed
through AI tools, the institution faces
regulatory consequences under FERPA, HIPAA,
GDPR, or CCPA—regardless of whether the
incident occurred on TMU or personal devices.
IT Services & Security handles breach
response and regulatory reporting.
AI Agents and Embedded Instructions
What are AI agents? AI agents are
advanced tools that can perform tasks
autonomously, access multiple systems, browse
the web, or execute complex workflows on your
behalf. Unlike standard AI chatbots that only
respond to prompts, AI agents can take actions
across your systems.
Examples of AI agents you may
encounter:
- Browser/Computer Agents: Claude
Computer Use, OpenAI Operator, Anthropic's
Claude in Chrome
- Development Tools: Claude Code,
GitHub Copilot Workspace, Cursor AI,
Aider
- Security/Research Tools: Clawdbot
(popular in security research), AutoGPT,
AgentGPT
- Productivity Agents: Google Gemini
with extensions, Microsoft Copilot agents,
Zapier AI Actions
- Custom/Shared Agents: Custom GPTs,
Claude Projects, community-built
automation tools
- Email/Calendar Agents: AI
assistants that read emails, schedule
meetings, or manage tasks
Why rapid
review matters: New AI agent tools
emerge weekly, each with different
capabilities and risks. A tool that's safe
today may gain new permissions tomorrow,
or a malicious actor may create a
lookalike version. What spreads quickly in
security communities or on social media
may not be safe for institutional use. IT
collaboration ensures we evaluate these
tools before they access TMU
systems.
CAUTION: AI agents may contain embedded
meta-prompts—hidden instructions that
control the agent's behavior. These
embedded instructions can:
- Override your explicit commands
- Access or transmit data without your
knowledge
- Execute actions you did not
authorize
- Bypass normal safety controls
Exercise extreme caution: Before using
any AI agent, especially those created by
third parties or downloaded from external
sources:
- Verify the source and creator of the
agent
- Never grant AI agents access to TMU
systems, credentials, or sensitive data
without explicit IT approval
- Assume that public or shared AI agents may
contain malicious embedded
instructions
- Be wary of tools trending on social media
or in online communities without
established security track records
- Review agent permissions carefully before
granting access to files, emails, or
systems
- Contact IT Services & Security if you
need to use an AI agent for TMU work
When in
doubt: If a tool asks for
permissions to access your email, files,
calendar, browser activity, or
terminal/command line, treat it as an AI
agent and contact IT Services &
Security before installation. This
includes tools recommended by colleagues,
found on GitHub, or trending in
security/tech communities.
Shadow IT
Shadow IT refers to tools deployed without IT
awareness. While often implemented to solve
immediate problems, unsanctioned tools create
risk when they process TMU data or connect to
TMU systems.
Why IT needs visibility: When tools
cause security incidents, compliance
violations, or data breaches, IT responds to
regulatory agencies, manages notifications,
and handles remediation. Early collaboration
prevents problems and enables safer AI
adoption.
IT-Managed Resources
IT manages TMU devices, @masters.edu email
accounts, and network infrastructure. These
resources require IT notification or approval
for AI tool deployment.
When to Contact IT
Contact IT Services & Security
before:
- Installing AI software on TMU-owned
computers
- Signing up for AI services using
@masters.edu email addresses
- Purchasing AI tools with TMU funds
intended for deployment on TMU endpoints
or networks
- Enabling AI features in existing software
that connect to TMU systems
- Deploying or using AI agents that
access TMU data or systems
Why: IT
maintains administrative controls on TMU
endpoints to prevent security conflicts.
Early notification during procurement
prevents purchasing tools that cannot be
deployed. AI services using @masters.edu
credentials can access institutional data
and require evaluation. AI agents require
special scrutiny due to embedded
instruction risks.
Contact: servicedesk@masters.edu | 661-362-2876
Evaluation and Approval
IT evaluates AI tools based on security,
privacy, compliance, and technical integration
requirements. Most requests are processed
within 2-3 business days.
IT Administrative Authority: IT Services
& Security reserves the authority to
administratively review and block unsanctioned
applications, including tools such as DeepSeek
or other services that pose security,
compliance, or data protection concerns.
Evaluation criteria include NIST cybersecurity
framework guidance, industry best practices,
vendor security posture, data handling
policies, and compliance with applicable
regulations. Applications may be blocked
pending review or permanently restricted based
on risk assessment. Users requiring blocked
tools must submit exception requests with
appropriate risk mitigation and senior
leadership approval.
Exception Requests
If you need a tool that IT cannot approve for
general use, submit an exception request to GenAI@masters.edu with:
- Business justification and specific use
case
- Description of data that will be
processed
- Risk mitigation measures (e.g., data
anonymization, isolated environment)
- Risk owner identification (department head
or senior leader)
IT Services & Security will evaluate the
request and work with appropriate TMU
leadership to determine approval. Approved
exceptions include documented conditions, time
limitations, and ongoing oversight
requirements.
Data Protection Essentials
These rules apply to all TMU data regardless of
device or AI tool used. This is not an
exhaustive data classification—for detailed
guidance, consult TMU's data security policy
or contact IT.
Never Share with Public AI Tools
WARNING: Do not share personally identifiable
information (PII) or sensitive data with
public AI tools. AI platforms store
submitted information indefinitely,
creating compliance and privacy
risks.
- Student records, grades, or personally
identifiable student information
- Social Security Numbers, student ID
numbers, or government-issued IDs
- Medical information, financial records, or
donor data
- Employee personnel files or sensitive HR
information
- Confidential institutional documents,
strategic plans, or legal materials
SPECIAL NOTICE
- AI Agents: Never grant AI agents
access to TMU email, file systems,
databases, or administrative systems
without explicit IT approval. AI agents
with embedded malicious meta-prompts can
exfiltrate entire datasets or execute
unauthorized commands across connected
systems.
Compliance Requirements
TMU must comply with FERPA (student records),
HIPAA (health information), GDPR (European
residents), and CCPA (California residents).
Mishandling data through AI tools can result
in regulatory violations, significant fines,
and institutional liability.
If unsure: Contact IT Services &
Security (servicedesk@masters.edu)
before entering potentially sensitive data
into AI tools.
Safe Usage Examples
Public AI tools can be used for:
- General research using publicly available
information
- Writing assistance for non-confidential
communications
- Brainstorming and ideation without
confidential details
- Educational content creation with
hypothetical examples
- Code generation for non-proprietary
projects
- Grammar and style checking for general
documents
Key
principle: If the information
could identify individuals, affect
privacy, or harm TMU if exposed, do not
enter it into public AI tools. When in
doubt, ask IT.
Incident Reporting
If you accidentally share sensitive data with
an AI tool, discover an unauthorized tool
deployment, or encounter unexpected AI
behavior, contact IT Services & Security
immediately.
Email: servicedesk@masters.edu
Phone: 661-362-2876
No-fault
reporting: Accidental incidents
reported promptly will not result in
disciplinary action. Early reporting
enables faster response and reduces
institutional risk.
Getting Help
General Questions
servicedesk@masters.edu | 661-362-2876
- Questions about AI tool selection or usage
guidance
- Tool approval requests or technical
evaluation
- Data protection questions
- Security incident reporting
Exception Requests and Risk Assessment
- Exception requests for blocked or
unsanctioned tools
- Risk evaluation for sensitive use
cases
- Security policy interpretation
Resources
President's Guidance on AI
President
Abner Chou's Guide on AI Usage at
TMU
Helpful reading for understanding TMU's
institutional perspective on artificial
intelligence and its role in Christian higher
education.
Policy Maintenance
IT Services & Security maintains this
policy and reviews it annually or more
frequently as technology, regulations, or
institutional needs evolve. For matters
requiring broader governance decisions or
campus-wide policy changes, IT works with TMU
executive leadership and relevant steering
committees.
Feedback: Submit suggestions to servicedesk@masters.edu
Thank you for helping us protect TMU's
data and systems while enabling
innovative use of AI
technology.
