In short, ChatGPT knows a lot, but not everything. It was trained on hundreds of gigabytes of writing from the Internet, but the Internet does not contain the whole of human knowledge. While it can provide extremely helpful explanations (imagine being able to ask Wikipedia virtually any question you want), it sometimes fabricates information based on its understanding of what words usually follow others. As the official ChatGPT FAQ puts it, "ChatGPT will occasionally make up facts or 'hallucinate' outputs." These "hallucinations" can include fabricated citations.
As another example, we asked it for information about one of our librarians. Its response sounded plausible but was largely false (incorrect information underlined in red):
When asked, ChatGPT currently claims that it does not have direct access to subscription-based databases such as JSTOR; its training data came from publicly available texts.