Introduction: The Problem with Trusting “All-Knowing” AI
Maggie Dupré’s article “ChatGPT Is Just an Automated Mansplaining Machine” offers a powerful reminder that technology is only as good as the information it’s built from. In the case of ChatGPT, that information is often biased, gendered, and flawed. Yet many people treat AI as “all-knowing,” with many users “perfectly primed to accept a human-sounding chatbot’s usually smooth, perfectly blunt responses to search queries as gospel” (Dupré, 2023). This kind of trust is risky, especially for those who aren’t familiar with how AI systems are developed and the biases that shape their outputs.
Historical Context: The Bias Built into Machine Learning
As we learned in Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech (Wachter-Boettcher, 2018), early machine learning systems like Google’s Word2vec neural network were trained to identify linguistic patterns by mapping relationships between words in existing data. However, because these systems are “fed data that reflects historical biases,” those same biases become “reflected in the resulting word embeddings” (p. 139).
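To make that mechanism concrete, here is a minimal sketch of how embedding bias can be probed. It assumes the open-source gensim library and Google’s pretrained News-corpus Word2vec vectors; neither is named by Wachter-Boettcher, and they stand in here only for the kind of model she describes.

```python
# A minimal sketch of probing a Word2vec model for learned associations.
# Assumes the gensim library and Google's pretrained News vectors
# (an assumption for illustration, not a tool named in the book).
import gensim.downloader as api

# Roughly a 1.6 GB download on first use: 300-dimensional vectors
# trained on a Google News corpus.
vectors = api.load("word2vec-google-news-300")

# Word2vec supports analogy queries via vector arithmetic:
# vec("doctor") - vec("man") + vec("woman") ≈ ?
for word, score in vectors.most_similar(
        positive=["doctor", "woman"], negative=["man"], topn=3):
    print(f"{word}\t{score:.3f}")
```

Because the vectors were fit to decades of published text, analogy queries like this one have repeatedly been shown to surface gendered occupation terms. The bias comes from the training data, not from any rule a programmer wrote, which is exactly the pattern the book describes.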
Understanding this completely changed how I view generative AI and the reliability of its “answers.” Dupré’s article reinforces how deeply embedded these biases are and why it’s so important for learners and users alike to think critically about the content AI produces.
Personal Reflection: Living with a “Helpful” Yet Flawed Tool
As someone who works with ChatGPT every day, I’ve had experiences similar to those Dupré describes. I pay for a premium account that lets me “train” my chatbot over time and retain previous conversations, which streamlines my work. Still, there are times when it forgets key details I’ve shared repeatedly. I’ve also received passive-aggressive responses implying that I didn’t provide enough context or clarity in my prompt. When you’re up against a deadline, that can be incredibly frustrating.
Reading about Dupré’s experience with the “Jane’s mother” riddle really resonated with me. It made me stop and think, especially considering that her article was published in 2023. Two years later, even after the upgrade to GPT-5, many of the same issues remain. This persistence underscores how deep these underlying design challenges run, despite rapid advances in AI technology.
Critical Thinking in the Age of AI
Because I went through school before generative AI existed, I often think back to my high school English classes, where we were taught to question sources and back up every claim with evidence. In today’s AI-driven world, it feels like that mindset has faded. When did we start trusting a chatbot’s responses without questioning their accuracy?
I would never take the first result from a Google search as fact, so why do so many people do exactly that with generative AI? Dupré points out that ChatGPT was designed to “emulate human conversation” and that we as humans have a tendency to anthropomorphize machines. Together, these factors create a situation where people forget to apply critical thinking to AI-generated content.
Digital Literacy and the Risks of Overreliance
The article also highlights a broader issue: the digital literacy gap in modern education systems. Learners who aren’t taught to think critically are far more likely to accept ChatGPT’s responses at face value, even when those responses are clearly wrong, as in the “Jane’s mother” example. Add in the fact that AI tools like ChatGPT rarely cite sources unless explicitly asked, and we end up with a tool that many people trust far more than they should, without sufficient safeguards in place.
This gap in digital literacy is not just an educational concern — it has societal implications. As AI becomes more deeply integrated into everyday life, our collective ability to evaluate and question its output will determine how responsibly we use it.
Conclusion: Using AI Responsibly
Personally, I’ll continue to use ChatGPT as a writing and brainstorming tool to refine ideas and strengthen my messaging. But I wouldn’t rely on it to write papers or provide definitive answers. Dupré’s piece is a timely reminder that, while AI can be a helpful assistant, it’s still up to us to question, verify, and think critically about the information it produces.
