Project Details
Free online artificial intelligence language models (AI LMs) are enhancing online experiences and are likely to continue to evolve, and students across all stages of education are using these AI programs, such as ChatGPT. ChatGPT can converse with students and answer their questions; however, the answers are only as good as the input or materials provided. If the student provides limited or no input, the AI is more likely to present false information with confidence, that is, to hallucinate. As defined by ChatGPT itself, a hallucination occurs when a “model generates outputs that are plausible sounding but factually incorrect, nonsensical, or not grounded in reality.” This project will involve posing prepared questions on abnormal disease processes, drawn from veterinary medical curricula, to AI LMs such as ChatGPT. Responses will be compared across three conditions: when no input is provided, when poor input is provided, and when peer-reviewed or published materials are provided. Results will be shared, including differences in response quality, subjects that more often produce hallucinations, and the relative quality of different AI LMs.
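The three-condition comparison could be organized in code along the following lines. This is only a sketch: the question text, the context snippets, and the `build_messages` helper are illustrative placeholders, not the project's actual materials. In practice each message payload would be sent to a chat-style API (for example, OpenAI's) and the responses would then be scored for accuracy.

```python
# Sketch of the three trial conditions described above.
# Question text and context snippets are placeholders.

def build_messages(question, context=None):
    """Build a chat-style message list for one trial condition.

    If `context` is given, it is prepended as reference material;
    otherwise the model receives the bare question (the "no input"
    condition).
    """
    messages = [{"role": "system",
                 "content": "You are assisting a veterinary student."}]
    if context:
        messages.append({"role": "user",
                         "content": "Reference material:\n" + context})
    messages.append({"role": "user", "content": question})
    return messages


question = "Describe the pathogenesis of canine parvoviral enteritis."

# One payload per condition: no input, poor input, published input.
conditions = {
    "no_input": build_messages(question),
    "poor_input": build_messages(question, context="Parvo makes dogs sick."),
    "published_input": build_messages(
        question,
        context="(Excerpt from a peer-reviewed veterinary pathology text)"),
}

for name, msgs in conditions.items():
    print(name, "->", len(msgs), "messages")
```

Keeping the question text identical across all three payloads means any difference in response quality can be attributed to the quality of the supplied context rather than the phrasing of the question.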
Computer skills
Communication skills
Independence
Student will be supervised by Dr. Watson.
