University of North America Student AI Policy
Summary
The University of North America (the University) supports the responsible use of Artificial Intelligence (AI) tools to enhance learning while maintaining academic integrity. This policy document is student-focused and is divided into two sections: a General AI Policy (broad rules for any AI use in academics) and a Generative AI Policy (specific guidelines for content-generating AI tools like ChatGPT). Below is a brief overview:
- Academic Integrity First: Students must produce their own work. Using AI to complete assignments or exams without permission is considered cheating. All work must comply with the University's Academic Integrity Policy (Honor Code).
- Instructor Guidance: Always follow your instructor's rules on AI use. If an assignment doesn't explicitly allow AI, assume it's not allowed. When in doubt, ask your instructor before using any AI tool.
- Permitted Use & Transparency: If AI use is permitted for an assignment, you must use it ethically and openly disclose it. Any significant contribution from an AI (beyond basic proofreading) should be cited or acknowledged per the guidelines below.
- Prohibited Use: Submitting AI-generated content as your own work (without authorization and attribution) is plagiarism and a violation of policy. Using AI in exams or other restricted settings is forbidden.
- Consequences: Misuse of AI tools is treated as academic misconduct. Violations will be handled through established disciplinary procedures and may result in a failing grade, course failure, or even suspension/expulsion for serious or repeated offenses.
- Data Privacy: Protect your and others' privacy. Do not enter confidential or sensitive information into AI tools - these tools may store or share data. The University's data privacy rules apply.
- Support & Education: The University provides resources like workshops, writing center support, and library guides to help students learn how to use AI appropriately. This policy will be updated as technology and best practices evolve.
General AI Policy
This section outlines the general policies governing student use of any AI tools in academic work. It applies to all undergraduate and graduate students across all courses and programs at the University of North America. All academic work - including essays, projects, lab reports, coding assignments, presentations, exams, and any other graded submissions - falls under this policy.
Purpose and Philosophy
The University of North America recognizes that AI technology offers significant opportunities to enrich learning, creativity, and productivity. Tools like AI-assisted writing or coding programs can enhance the learning experience, help generate ideas, and prepare students for a future where AI is prevalent. However, these tools also pose challenges: if misused, AI can undermine academic integrity, shortcut the learning process, introduce biases or errors, and raise ethical concerns.
Our core philosophy is to uphold academic integrity, intellectual honesty, and authentic learning. AI should be used to support students' education, not replace their own effort and critical thinking. We expect students to be the authors of their work, developing their own understanding and skills. In essence, use AI as a tool, not a crutch - it can help you learn, but it cannot do the learning for you. Any use of AI must be responsible and in line with the University's values.
Definitions
To clarify this policy, the following definitions apply:
- Artificial Intelligence (AI): Any computer-based system or software that performs tasks which normally require human intelligence. This includes things like learning, problem-solving, decision-making, and content generation.
- Generative AI: A subset of AI tools designed to produce new content (text, images, code, audio, video, etc.) in response to user prompts. Examples: ChatGPT or Bard (text generation), DALL-E or Midjourney (image generation), GitHub Copilot (code suggestions).
- AI Tools: Any application, website, or software that uses AI or generative AI capabilities. This ranges from general AI-based services to specialized tools integrated in software (for instance, an AI writing assistant or translation tool).
- Academic Work: Any work product for academic credit or evaluation. This includes homework, essays, research papers, theses, dissertations, projects, lab reports, coding assignments, presentations, quizzes, exams, or any other assignment that contributes to a course grade or degree requirement.
- Unauthorized Use: Any use of AI tools that is not permitted under this policy or specific instructor guidelines. This also includes failing to properly acknowledge AI assistance when it was allowed (undisclosed use).
Guiding Principles
The following principles guide acceptable AI use at the University:
- Academic Integrity: All submitted work must adhere to the University of North America's Academic Integrity Policy (Honor Code). If you use AI in a way that is not allowed and represent the output as your own work, it is academic dishonesty. Likewise, using AI without proper attribution when required is dishonest. Always maintain honesty in your academic efforts.
- Transparency & Attribution: When AI tools are used (with permission), you must be transparent about it. Clearly disclose any AI contributions to your work and cite them appropriately, just as you would cite any source or collaborator. Honesty about when and how you used AI is mandatory.
- Critical Engagement: You are responsible for the content you submit, even if an AI helped generate it. AI outputs can be inaccurate, biased, or nonsensical ("hallucinations"). Always critically review and verify AI-generated material. Never accept AI output as automatically correct - cross-check facts, correct errors, and ensure the final work reflects accurate understanding.
- Learning & Skill Development: The purpose of assignments is to develop your knowledge and skills. Over-reliance on AI can prevent you from learning important concepts in research, writing, problem-solving, etc. Use of AI should not replace critical thinking or original effort. Students should ensure they are learning from the process, not just turning in AI work.
- Instructor Discretion: Instructors have the authority to set specific rules about AI use for their classes. Some may encourage certain AI tools for learning; others may ban them for certain tasks. Always defer to the rules your instructor sets (which may be stricter than the general policy). The University supports instructors in integrating AI appropriately into their teaching to enhance learning while upholding integrity.
General Rules and Instructor Authority
Default Rule - No Unauthorized AI in Assignments: Unless your instructor has explicitly given permission for a particular assignment, you may not use AI tools to produce any part of your submitted work. Submitting work (in whole or part) generated by AI without permission is considered an academic integrity violation. In practical terms, if the syllabus or assignment instructions do not say you can use an AI tool, assume that you cannot use it for that assignment.
Instructor's Authority & Instructions: Instructors will clearly communicate their course-specific AI policies (usually in the syllabus, assignment guidelines, or class announcements). They may allow or disallow AI use depending on the assignment. It is the student's responsibility to know and follow these rules. Instructors may specify:
- Whether any AI tools are permitted or prohibited on a given assignment.
- Which types of AI use are acceptable (for example, an instructor might allow using AI for brainstorming ideas or checking grammar, but not for writing the final essay, or allow AI for coding help but not for solving exam problems).
- How to document AI use if it's allowed (e.g., requiring a note in your submission about how you used the AI or including the AI-generated draft as an appendix).
Always pay attention to these instructions. If you are unsure about an assignment's AI policy, consult your instructor before using any AI tool. Failure to follow an instructor's specific AI guidelines is a violation of this policy.
Student Responsibilities
If you choose to use AI tools in your academic work (in ways that are allowed), you have several key responsibilities:
- Understand Course Policies: Read your course syllabus and assignment instructions carefully for any rules on AI use. Different classes may have different rules. When in doubt, always ask your instructor for clarification before using an AI tool on an assignment.
- Use AI as a Supplement, Not a Substitute: Even when AI is allowed, it should support your own work. You should not let the AI do all the thinking or create the entire assignment for you. Make sure the final submission reflects your own understanding and effort. For instance, you might use an AI to get ideas or check grammar, but the analysis, argument, or solution should be your own.
- Attribute and Disclose Properly: Whenever you use AI in a permitted manner, you must disclose that use. Follow the guidelines on how to cite or explain your use of AI (see the Generative AI Policy section on Disclosure and Citation). Never present AI-generated material as if you wrote it entirely yourself, and never try to hide the fact that you used an AI tool if it was part of your process.
- Verify and Validate Content: Always critically evaluate the output from AI tools. If you incorporate any information or text from an AI, double-check facts, verify calculations, and ensure sources are correctly used. You are responsible for any errors or falsehoods in your work, even if an AI generated them. It is not an excuse to say "the AI got it wrong." Treat AI suggestions as you would information from any unverified source - with skepticism, and verify it independently.
- Respect Ethical and Legal Boundaries: Do not use AI in ways that violate ethical norms or laws. For example, do not use AI to fabricate data, create false citations, or assist in any form of plagiarism. Do not have an AI write a paper for you and then try to paraphrase to hide it - this is still plagiarism. Also, do not use AI tools to generate content that infringes on someone's copyright or intellectual property.
- Protect Data Privacy: Be mindful of what you input into AI tools. Do not enter sensitive personal information (about yourself or others), confidential academic or research data, or any non-public information related to the University into a third-party AI service. Many AI tools store user inputs, and those inputs could be seen by others or used to train the AI. For your own privacy and safety, and to comply with data protection policies, never share information that should be kept private. (For instance, do not paste the text of an unpublished research paper or someone's personal details into an AI chatbot.)
- Acknowledge Limitations: Understand that AI tools have limitations. They do not truly understand meaning and can produce irrelevant or incorrect answers. They also lack human judgment. Use your own judgment when interpreting AI output. Don't let the AI's answer stop you from thinking further or researching deeper. If something generated by AI seems suspect or too good to be true, investigate it yourself.
- Maintain Originality: Ultimately, ensure that the work you submit is yours. Even if you use AI for help, the final structure, ideas, and conclusions should be the result of your own learning process. You should be able to explain and defend any work you turn in without relying on the AI. In other words, using AI is not a substitute for knowing the material.
Data Privacy & Security
When using AI tools, students must exercise caution regarding the data and content they share:
- Think Before You Input: Assume that anything you type or upload into an online AI service could potentially be seen, stored, or used by others. Do not input private or sensitive information into AI tools that are not explicitly approved by the University for such data.
- Prohibited Data to Share: Never share personal identifiers (your own or others' names, addresses, ID numbers, etc.), health or financial information, unpublished research or thesis content, confidential University documents, or any other sensitive data through AI platforms. Similarly, do not use AI to transform or analyze data that is confidential or protected unless you have explicit permission and the tool is authorized for that use.
- University Policies Apply: Comply with the University's data protection and privacy policies when using AI. For example, if certain data is protected by law or University policy (like student records under FERPA, or research data under an NDA), putting it into an external AI service would violate those rules. Use AI tools at your own risk with regard to privacy; the University cannot control how third-party AI services use your data.
- Security Awareness: Understand that free or public AI tools often use your inputs to improve their algorithms. If an AI service is not contractually bound to protect University data, you should assume your inputs are not secure. When in doubt, don't share it.
Enforcement and Consequences
Using AI in ways that violate this policy (or an instructor's specific rules) is a form of academic misconduct. The University treats such violations seriously, just as it would instances of plagiarism or cheating.
- Academic Integrity Policy: Any unauthorized or undisclosed use of AI tools in academic work is considered a violation of the University of North America's Academic Integrity Policy. For example, submitting an essay or coding assignment that was written by an AI (without permission) is cheating. Failing to cite AI assistance when required is also a violation.
- Reporting and Investigation: Suspected violations of this AI policy will be handled under the University's established academic misconduct procedures. Typically, if an instructor or grader believes a student has misused AI, they will document the concern and report it to the appropriate body (such as the Academic Integrity Office or Dean's office). An investigation or review will follow the same process as any academic dishonesty case. This may involve meeting with the student, gathering evidence (which could include AI output logs or use of AI-detection tools), and allowing the student to respond.
- Use of Detection Tools: Be aware that the University or instructors may employ plagiarism detection and AI-detection software to help identify uncredited AI-generated content. While such tools are not infallible, they can flag content for further review. Students should not rely on the idea that "AI usage can't be proven"; attempting to evade detection is itself dishonest and not guaranteed to succeed.
- Consequences: If a student is found to have violated this policy, they will face consequences as outlined in the Academic Integrity Policy. Depending on the severity of the violation, consequences may include:
- A failing grade or zero on the assignment in question.
- A failing grade for the entire course.
- Academic disciplinary sanctions such as probation, suspension, or expulsion from the University (for very serious or repeated offenses).
Lesser sanctions might be applied for minor or first-time offenses, and more severe sanctions for egregious cases. The exact outcome is determined through the academic misconduct process.
- Opportunity to Appeal: Students have the right to due process. If you are found responsible for a violation, you can typically appeal the decision or sanction through the standard University academic grievance or appeals process (as detailed in the Academic Integrity Policy or Student Handbook).
- Enforcement of Instructor Rules: If you violate a specific AI rule set by your instructor (even if not explicitly stated in this general policy), it is still considered a violation. For instance, if an instructor allowed AI for brainstorming but you used it to write the entire assignment, that breach will be treated as misconduct. Always align with both this policy and any additional course-specific policies.
Training and Resources
The University is committed to helping students understand how to use AI effectively and ethically:
- Educational Resources: Students are encouraged to take advantage of University resources on AI usage. The University Library, Writing Center, and IT Services offer guides and tutorials on topics such as citing AI sources, evaluating AI-generated content, and maintaining academic integrity in the age of AI.
- Workshops and Seminars: The University will provide workshops, seminars, or online modules about AI tools each semester. These sessions cover how AI can be used in research and writing, what the pitfalls are, and how to avoid academic misconduct. All students (and faculty) are urged to participate to stay informed about best practices.
- Consultations: If you are unsure how this policy applies to a specific situation, you can seek guidance. Your instructors and academic advisors can help clarify expectations for particular courses. Additionally, offices like the Academic Integrity Office or the Center for Teaching and Learning can provide advice on proper use of AI in academics.
- Staying Updated: Because AI tools evolve quickly, the University's online resource hub will be updated regularly with FAQs, examples of proper vs. improper use, and links to articles or guidelines. Check the University's academic integrity or IT website for a dedicated page on AI tool usage.
Policy Review and Updates
AI technology and its role in education are changing rapidly. The University of North America will review this AI policy on a regular basis (at least annually) and update it as needed:
- Regular Review: A designated committee (for example, the Academic Affairs Committee in collaboration with the Office of the Provost and IT governance representatives) will review this policy periodically. They will consider new developments in AI, feedback from the University community, and evolving academic standards. Updates will be made to address emerging issues or tools.
- Notification of Changes: If the policy is revised, students will be notified through official channels (such as email announcements or updated syllabi). The latest version of the AI policy will be published in the Student Handbook and on the University's website.
- Community Input: The University welcomes input from students and faculty. If you have concerns or suggestions regarding AI use in academics, you can share feedback with your student representatives or directly with the Academic Integrity Office. Such feedback may be considered in future policy revisions.
(The General AI Policy above provides the overarching rules and expectations for all AI tool use. Below, the Generative AI Policy section offers more detailed guidance specifically for generative AI tools, which create content. The two sections are intended to be consistent; generative AI use must follow both the general rules and the specific guidelines.)
Generative AI Policy
Generative AI tools are AI systems specifically designed to produce new content (text, images, code, audio, etc.) based on prompts or questions. Common examples include large language models like ChatGPT or Google Bard (which generate text in response to questions), image generators like DALL-E or Midjourney, and coding assistants like GitHub Copilot. These technologies can be very powerful and useful, but in an academic context they require special guidelines to prevent misuse. This section details the University's policy for student use of generative AI tools in academic work. These guidelines build upon the General AI Policy, focusing on scenarios unique to generative AI. (If this section is provided independently, students should still refer to the University's General AI Policy for additional context and rules.)
Overall Stance: Using generative AI in your coursework is considered similar to getting help from another person or source. Unless an instructor explicitly allows it for a task, you should not use generative AI to produce work you will submit. If it is allowed, you must follow all the rules of proper use and citation. Always treat generative AI assistance as something you need to credit and use responsibly, just as you would any external aid.
Permissible Uses of Generative AI (When Allowed)
There are many productive ways generative AI can contribute to learning if your instructor permits it for an assignment. Below are examples of acceptable uses provided you have authorization and you properly disclose your use of AI. Remember, these are only permissible in contexts explicitly allowed by your instructor:
- Brainstorming and Idea Generation: Using an AI tool to brainstorm topics, research questions, or approaches to a problem. For instance, you might ask ChatGPT to suggest angles for an essay or to explain a concept in different ways to spark your own ideas. This can help you get started or overcome writer's block, but you would still develop and write the actual content yourself.
- Grammar and Style Assistance: Using AI to improve the writing mechanics of text you have written. This is similar to using a spelling/grammar checker or tools like Grammarly. For example, you can have an AI suggest re-phrasings for clarity or fix grammatical errors in your draft. The key is that you wrote the original draft - the AI is just helping polish language, not generating the ideas or analysis.
- Outlining and Summarizing: Having an AI summarize complex source material or generate an initial outline for a paper as a starting point. For example, after researching a topic, you could ask an AI to summarize a journal article to ensure you understood it, or to propose a possible outline for your argument. You would then critically evaluate that outline and modify it as you see fit. Any outline or summary from the AI should be treated as a suggestion, not a final product.
- Coding Help and Debugging: Using AI coding assistants to help find errors in your code or to understand how to implement a concept. If programming assignments allow AI, you might use a tool like GitHub Copilot or ChatGPT to get hints for fixing a bug or to see examples of how a particular algorithm can be written. This can facilitate learning if you study the AI's suggestion and integrate it with understanding. You should not, however, use AI to write an entire program for you if the assignment is meant to test your coding ability (unless explicitly allowed).
- Practice and Study Aids: Using generative AI to practice course material in ungraded contexts. For example, you can ask an AI to generate sample problems, quiz questions, or explanations to test your knowledge while studying. This use is generally fine because it is separate from actual assignment submission; it's akin to using flashcards or practice quizzes. (If you then use AI-generated content directly in a graded assignment, that would need permission and disclosure.) Always ensure that using AI in this manner does not violate any exam rules - never use it during a closed-book exam.
Important: Even in the above permissible scenarios, you must credit the AI's contribution if any of the AI-generated content or suggestions make it into your submitted work. (How to do this is covered in the Disclosure and Citation section below.) Also, you should double-check any information the AI provides. For instance, if an AI suggests an outline or code snippet, verify that it is correct and make sure you understand it fully. Permissible use is about enhancing your own work and learning - you should remain in control of the process and the content.
Impermissible Uses of Generative AI (Prohibited Behavior)
Unless explicitly authorized by your instructor for a specific pedagogical reason, the following uses of generative AI are considered violations of University policy. These actions undermine academic integrity and are not allowed:
- Submitting AI-Generated Work as Your Own: You may not have an AI write your essay, solve your problem set, answer exam questions, or produce any assignment for you, and then submit it as if you did it. This includes copy-pasting AI-generated text and making only minor edits. If the AI's output constitutes a significant portion of the work and you haven't been given permission, this is cheating. For example, asking ChatGPT to write your history paper or using an AI to generate a lab report and turning it in is strictly forbidden.
- Using AI on Exams or Restricted Assessments: Any use of generative AI (or any AI) during closed-book exams, online tests, or any assessment where external assistance is not allowed is prohibited. This is equivalent to getting unauthorized help. Even if an exam is take-home, you must follow the rules given - if the instructor says no outside assistance or only certain resources, AI falls under that restriction unless stated otherwise.
- Automating Programming Assignments: If an assignment is meant to evaluate your programming or problem-solving skills, you should not use AI to generate the solution code or answers in lieu of doing the work. For instance, using an AI to write an entire code assignment, or to complete math problem sets step-by-step for you, is not allowed. (Using AI to debug or get hints is different - see permissible uses - but outright having it do the assignment is misconduct.)
- Unverified or Misrepresented Content: You should not present AI-generated content (text, images, analyses, etc.) as factual or as your own analysis without verification. For example, it's impermissible to use an AI to generate a data analysis or citation and then include it in your paper without checking it. AI can produce fabricated references or incorrect data; turning that in not only risks being wrong, it's academically dishonest if done knowingly. Always verify any AI-provided facts or sources. Including AI-created material in your work without proper attribution is also misrepresentation (see next point).
- Failure to Disclose Allowed AI Use: If your instructor does permit AI assistance and you use it, you must not hide it. Failing to disclose AI involvement when it was required by this policy or by the instructor is a violation. For instance, if you used an AI writing assistant to draft a portion of your essay, and the policy or your instructor expects you to note that, you must do so. Omitting that information is considered misleading and against the transparency principle.
- Violating Copyright or IP via AI: Do not use AI tools in ways that infringe on intellectual property rights. This includes, for example, using an AI to generate content that is essentially plagiarizing an author's work, or feeding copyrighted material into an AI to produce a summary when you do not have rights to that material. Also, you should not submit AI-generated artwork or text that closely mimics someone else's style or content in a way that violates plagiarism or licensing rules. Standard academic rules about plagiarism and copyright apply fully to AI-generated content.
In summary, any attempt to shortcut your learning or misrepresent the origin of your work using generative AI is impermissible. If you are ever unsure whether a use of AI is allowed, assume it is not, or get explicit clarification from your instructor. The default expectation is that the work you submit is your own, created by you without unapproved aid.
Disclosure and Citation of AI Usage
Whenever you use generative AI in a way that contributes to your submitted work, you must clearly disclose this use. This applies even when AI use is allowed - transparency is required so that your instructors understand how the work was produced. Proper disclosure has two parts: a general statement of use, and specific citations if applicable.
1. General Disclosure Statement: Include a brief statement in your assignment (for example, at the end of your essay, in a footnote, or in a cover page) explaining how you used the AI tool. This statement should mention the name of the tool, the date or time frame you used it, and the purpose for which you used it. Be as specific as necessary to make clear what portion or aspect of the work had AI assistance. For instance, you might write something like:
Example: "I used ChatGPT (GPT-4) on March 15, 2025, to help brainstorm arguments for this essay and to improve the wording of two paragraphs in the draft. The AI was given an outline of my essay and prompted for suggestions on expanding certain points. I reviewed and edited all content myself. All final ideas and conclusions are my own, with the AI used only for support in phrasing and idea generation."
In more technical work, an example might be:
Example: "Parts of the code were developed with assistance from GitHub Copilot. Specifically, Copilot was used on April 2, 2025, to suggest solutions for the sorting algorithm and to identify a bug in the merge function. I ensured all code was tested and fully understood before inclusion."
The key is honesty. This statement will not count against you if AI use was allowed; in fact, it demonstrates integrity. If a certain format is preferred by your instructor (some may ask for a specific section or style of acknowledgment), follow those instructions.
2. In-Text Citations (if needed): If you directly include content generated by AI word-for-word or incorporate an idea that is uniquely from the AI and not common knowledge, treat it like a source. For example, if an AI gave you a particular phrase or a distinctive piece of information that you include in your essay, you might add a citation like you would for a book or article. There is not yet a universal standard for citing AI, but you could do something like: "(Source: ChatGPT, response to prompt about climate data on March 10, 2025)". Follow any citation format recommended by your instructor or discipline. The important part is to make it clear what content came from an AI tool.
If the AI content is more general and you've already disclosed it in a statement, you might not need separate inline citations for every small AI-assisted edit. Use your judgment and when in doubt, cite.
Note on Style Guides: Citation standards for AI are evolving. Some style guides (MLA, APA, etc.) have begun to issue guidelines for citing AI-generated text. Students should stay updated on these or ask a librarian/instructor for the proper way to cite AI in their field.
Record-Keeping: It's a good practice to keep copies of your AI interactions (prompts and responses) when you use an AI tool for an assignment. In some cases, an instructor might request to see how you used the AI (e.g., to verify that your use was within allowed bounds or to understand a specific citation). Having a saved chat transcript or log can help demonstrate your process. This also protects you in case there is any question later about what you did. Always use AI in logged environments (most tools allow you to save your chat history) so you can produce evidence of your academic honesty if needed.
Academic Integrity and Misuse of Generative AI
Any misuse of generative AI tools - such as using them when it's not allowed, or failing to follow the above disclosure rules - is a violation of academic integrity. The same enforcement procedures and consequences described in the General AI Policy Enforcement section apply here:
- If an instructor suspects you have submitted AI-generated work without permission or without proper disclosure, it will be treated as a suspected cheating or plagiarism incident.
- The case will be referred to the appropriate academic integrity body and investigated. You may be asked to provide evidence of your work process (which is why keeping records of AI use is important).
- Proven violations will result in disciplinary action. This can include failing the assignment or course and further sanctions as described earlier (up to suspension/expulsion, depending on the severity).
- Always remember: Claiming ignorance of the policy is not an excuse. This document serves as your notice of what is expected. When you use generative AI, do so openly and responsibly, or not at all.
By adhering to these generative AI guidelines, you can safely incorporate new AI tools into your learning process without compromising your integrity or academic development.
Questions and Additional Support
Questions or Uncertainty: If you have any questions about whether a particular use of AI is allowed, or how to properly credit AI assistance, you should reach out for clarification before you proceed. Your first point of contact can be your course instructor (for questions about class-specific rules). For general questions about this AI policy or academic integrity, you can contact the Academic Integrity Office or the Dean of Students office for guidance. It's always better to ask in advance than to inadvertently violate the policy.
Conclusion: The University of North America is excited about the potential of AI technology as a learning aid, but maintaining academic honesty and rigor is paramount. By following this policy, students will ensure that they use AI tools in a way that enhances their education instead of undermining it. The University community is expected to uphold these standards so that degrees and grades continue to reflect genuine student achievement. Remember, academic integrity is a shared responsibility - together, we can integrate AI into education in an ethical and meaningful way.