Lamar University administrators are working to create university policy in conjunction with the appropriate university subcommittees. In the meantime, faculty can rely on existing academic honesty policies as a resource for how students should conduct themselves regarding the use of generative AI.
Representing work not done by the student as their own is already a policy violation, and this includes work generated by an AI system that is not properly credited or allowed by the instructor. Instructors have the discretion to explore these tools in the classroom, and CTLE recommends that faculty decide whether these tools align with their pedagogical aims before allowing or disallowing use.
Many faculty are concerned about the potential for academic dishonesty stemming from AI. While that concern is valid, it is difficult to identify evidence of such use. Artificial intelligence detection software claims to find such evidence, but because of the high false positive rates associated with this software, Lamar University does not currently support its use for assessing students' potential use of AI.
It is important to review all current and relevant handbooks and policies. Adhere to existing policies and procedures and be mindful of updates as they occur. In the meantime, faculty should be clear with the students they teach and advise about their policies on permitted uses, if any, of generative AI in classes and on academic work. Students are also encouraged to ask their instructors for clarification about these policies as needed.
Publicly available or published university information (Green Category) can be freely used in AI Tools. Usage should align with the Data Governance Policy.
Examples of Published Data
Reminder: Always review generated content before use
Generative AI works to provide the most likely response, not the most truthful one. For this reason, generated content must be critically evaluated. AI-generated content can be inaccurate, misleading, or entirely fabricated (sometimes called hallucinations), or may contain copyrighted material. You, not LU, are responsible for any content that you publish that includes AI-generated material.
Given the recommendation to input only public information when generating AI output, it is essential to consider whether any AI-informed work may be shared or retrieved later. Instructors should emphasize to students that these tools are not to be used for creating content intended to remain private (e.g., for research purposes) or to be claimed as their intellectual property.
For faculty members, who often have greater access to confidential, proprietary, or sensitive information, this consideration is even more important.
ChatGPT and similar AI Tools must not be used with personal, confidential, proprietary, or sensitive information unless covered by a university contract specifically protecting such data.
Examples of Controlled Data
Examples of Confidential Data
AI Tools must not be used to generate non-public content, including proprietary or unpublished research, legal analysis, recruitment decisions, completion of academic work not allowed by the instructor, non-public instructional materials, and direct grading.
AI Tools must not be used for any activity that is illegal, fraudulent, or violates state, federal, LU, or TSUS policies.
CTLE support is available 8:00 AM until 5:00 PM, Monday through Friday. We are committed to the highest ideals of confidentiality in all matters.
Ashley L. Dockens | Director of the Center for Teaching and Learning Enhancement
For questions about the CTLE: dept_CTLE@lamar.edu
For questions about Blackboard: blackboard@lamar.edu