Georgia Tech Is Trying to Keep a ChatGPT-Powered Teaching Assistant From ‘Hallucinating’
EdSurge
Jeffrey R. Young
June 22, 2023
A college probably wouldn’t hire a teaching assistant who tends to lie to students about course content or deadlines. So despite the recent buzz about how new AI software like ChatGPT could serve as a helper in classes, there’s widespread concern about the tendency of the technology to simply make up facts.
Researchers at the Georgia Institute of Technology think they may have a way to keep the chatbots honest. And they’re testing the approach in three online courses this summer.
At stake is whether it is even possible to tame so-called “large language models” like ChatGPT, which are usually trained with information drawn from the internet and are designed to spit out answers that fit predictable patterns rather than hew strictly to reality.
“ChatGPT doesn’t care about facts, it just cares about what’s the next most-probable word in a string of words,” explains Sandeep Kakar, a research scientist at Georgia Tech. “It’s like a conceited human who will present a detailed lie with a straight face, and so it’s hard to detect. I call it a brat that’s not afraid to lie to impress the parents. It has problems saying, ‘I don’t know.’”
As a result, researchers and companies working to develop consumer products using these new AI bots, including in education, are searching for ways to keep them from unexpected bouts of fabrication.