
DAsH

Research Guide for DAsH (or digital humanities) resources and tools

AI and ChatGPT Usage

According to the Manhattan University Community Standards and Student Code of Conduct, the unacknowledged use of AI within academic work can fall under the definition of plagiarism and cheating. The section of the Code on Academic Integrity Violations includes this text:

Cheating: Cheating is the use of or attempted use of inappropriate, unauthorized or prohibited materials, information, sources, study aids, devices, or assistance of others (including AI engines) in any academic exercise or examination in an effort to misrepresent mastery of material.

Plagiarism. Plagiarism occurs when a person represents work (e.g. words, ideas, phrases, sentences, data, etc.) which is not their own as their own work without acknowledgement or credit. This includes work generated by AI engines.

If you want to know whether acknowledged or cited AI use is allowed in a course, or whether your professor has guidelines on how it can be used, ask the professor of each of your courses for guidance. The rules will vary from one course to another.

What's Wrong with Using ChatGPT or AI for Research?

How ChatGPT Works

ChatGPT and similar products that label themselves AI operate using what are called LLMs, or Large Language Models. ChatGPT does not draw on a large base of information, weigh all the facts, and reason its way to an answer. It works more like the auto-complete or auto-fill on your phone, email, or word-processing software, which predicts the next word in a sentence based on the words that come before it in your document and the sequences in which words appeared in the past documents that make up its training data. So if you type "Abraham Lincoln was the s" and Microsoft Word auto-fills "sixteenth U.S. President," it isn't because Word is consulting a list of U.S. Presidents; it's because many documents in Word's auto-complete training data contain sentences that start with "Abraham Lincoln was the" and continue with "sixteenth U.S. President," so Word predicts that is what you want to type.
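To make that concrete, here is a minimal, purely illustrative Python sketch of next-word prediction at its simplest: a bigram model that counts which word most often follows another in a tiny made-up corpus. Real LLMs are vastly more sophisticated, but the core move, predicting the next word from patterns in past text, is the same.

```python
from collections import Counter, defaultdict

# A tiny made-up corpus standing in for the model's training data.
corpus = (
    "abraham lincoln was the sixteenth u.s. president . "
    "abraham lincoln was the sixteenth u.s. president . "
    "george washington was the first u.s. president ."
).split()

# Count how often each word follows each other word in the corpus.
next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # 'sixteenth'
```

Nothing in this sketch "knows" who Lincoln was; "sixteenth" wins only because it follows "the" most often in the toy corpus.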

ChatGPT uses the same mechanism, but it draws on an unimaginably large pool of data, takes many more preceding words into account when predicting the next one than your phone does, and has had at least some vetting of its answers by engineers.
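Context size matters too. Continuing the toy sketch above (and still purely illustrative), a model that conditions on the previous two words instead of one can resolve ambiguities a one-word model cannot:

```python
from collections import Counter, defaultdict

# Two toy sentences where the word after "the" depends on earlier context.
corpus = ("the president gave the speech . "
          "the speech pleased the president .").split()

# Bigram: predict from one previous word; trigram: predict from two.
bigram, trigram = defaultdict(Counter), defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    bigram[w1][w2] += 1
for w1, w2, w3 in zip(corpus, corpus[1:], corpus[2:]):
    trigram[(w1, w2)][w3] += 1

print(bigram["the"].most_common())           # 'president' and 'speech' tie: ambiguous
print(trigram[("gave", "the")].most_common())  # only 'speech': extra context resolves it
```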

More information on how ChatGPT is configured is available in this New York Times article.

What That Means For Its Answers

Because ChatGPT's answers come from this word-prediction process, and because you don't know exactly which sources it consulted or why it chose the answer it did, its responses can have these problems:

  • They can contain information that is incorrect, outdated or hallucinated. 
  • They can give biased responses due to the source material they pull from.
  • Due to randomness in their algorithms, their output isn't always reproducible. Ask ChatGPT the same question twice, or have a friend ask the same question, and you can get a different answer each time; change the prompt, and the answer changes again. (The sketch after this list illustrates why.)
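Why aren't answers reproducible? Chat models typically don't always pick the single most likely next word; they sample from a probability distribution over candidates (how adventurous that sampling is is often controlled by a "temperature" setting). Here is a tiny sketch with invented numbers:

```python
import random

# Invented next-word probabilities a model might assign after some prompt.
candidates = ["sixteenth", "first", "greatest", "tallest"]
weights = [0.70, 0.15, 0.10, 0.05]

# Sampling instead of always taking the top word means repeated runs of
# the same prompt can produce different output.
for run in range(1, 4):
    print(f"run {run}:", random.choices(candidates, weights=weights)[0])
```

Most runs will print "sixteenth," but some won't, and nothing flags the runs that don't.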

Because of that, it is important to double-check the claims of any ChatGPT or other AI answer elsewhere. Even if ChatGPT tells you the sources its answer came from, go to those sources to confirm it. A cited source may not exist, or, if it does, it may make a completely different claim than ChatGPT says it does.

Key Takeaways

ChatGPT and other AI tools are not wizards or oracles. They are more like a friend who is quick to offer an answer but who, you've noticed, only half listens when someone explains something to them.

ChatGPT and other AI tools are not neutral or objective just because they are computer-generated: their data and configuration come from humans, with all our failings and biases.

ChatGPT and other AI tools should never be your last step in research.