Like many educators around the world, I’ve spent recent weeks taking a deep dive into the fascinating world of generative AI technologies, with tools such as ChatGPT and DALL-E taking center stage. As someone immersed in edtech and with a background in early AI development, I’ve been frequently asked by colleagues and friends to share insights on what these advancements mean for the future of teaching and learning.
Since ChatGPT’s public debut, there’s been considerable buzz, particularly around its ability to generate human-like responses to prompts that mirror typical high school and college writing assignments. A primary concern echoing through educational circles is whether these sophisticated chatbots have inadvertently opened the floodgates to academic dishonesty, making it virtually impossible to prevent students from cheating on homework, research papers, and other academic tasks.
But is this apprehension truly warranted?
One of my most memorable high school experiences was a lively debate with my teacher, Andrew Glassman, concerning two powerful poems capturing the grim realities of World War I: Wilfred Owen’s “Dulce et Decorum Est” and Robert Frost’s “A Soldier”. I was profoundly moved by the raw emotion and directness of Owen’s verse, while Mr. Glassman championed Frost’s poem for its sophisticated subtlety in figurative language.
Recalling this intellectual sparring match, I decided to engage ChatGPT in a similar discussion:
A key characteristic anyone experimenting with ChatGPT quickly notices is its marked aversion to expressing opinions, even when those opinions are informed or presented conditionally. Pose questions about Mac versus PC superiority, hybrid versus electric car choices, or whether Magnus Carlsen outranks Bobby Fischer in chess, and ChatGPT skillfully avoids taking a definitive stance. This cautious approach is understandable; OpenAI, ChatGPT’s developer, is clearly aiming to prevent accusations of bias, misinformation, or venturing beyond factual reporting. However, this very design choice introduces a bias of its own, and a significant limitation, hindering ChatGPT’s potential as a truly versatile artificial intelligence. It’s a primary reason why, in its current form, ChatGPT would likely struggle to pass the Turing Test or to outsmart the Voight-Kampff test from Blade Runner.
In fact, ChatGPT’s reluctance to offer opinions is so pronounced that it often resists even when explicitly prompted to take a position and argue for it. Interestingly, it remains meticulous in adhering to formatting requests. As seen in its response to a modified prompt (apologies for the typo in the image), ChatGPT provides the same noncommittal answer, yet manages to stretch it into three paragraphs:
Given this key constraint of ChatGPT at its current, early stage, what if we shift our focus away from seeking convictions or defended positions and instead direct it toward textual analysis? Consider this prompt, the kind any ninth-grade English teacher would expect their students to tackle accurately and confidently:
At its core, this prompt tests ChatGPT’s understanding of the fundamental difference between metaphor and simile and its ability to provide relevant examples. However, for a student to earn top marks on such an assignment, they would also need to compare how poets utilize these figurative language tools to enrich their writing and deepen audience engagement both emotionally and intellectually. Here’s ChatGPT’s response:
ChatGPT makes one intriguing observation, suggesting that Frost’s use of metaphor and simile is more subtle than Owen’s (surprisingly venturing into opinion territory), but it notably fails to provide concrete examples to support this assertion. Even more concerning is its apparent misunderstanding of the distinction between metaphor and simile. For example, the response correctly identifies “Bent double, like old beggars…” as a simile, but it then labels “His cuts and bruises heal as fast as he can fight” as an example of metaphor or simile, when it is neither.
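For readers who want to rerun this experiment outside the chat interface, here is a minimal sketch using OpenAI’s Python package. To be clear about my assumptions: it relies on the pre-1.0 `openai` library and the gpt-3.5-turbo chat model, and the prompt text is my paraphrase of the assignment above, not the exact wording from my screenshot.

```python
# A minimal sketch of posing the metaphor-vs-simile prompt programmatically.
# Assumptions: the pre-1.0 `openai` package is installed (pip install "openai<1"),
# OPENAI_API_KEY is set in the environment, and the prompt below is a paraphrase
# of the original assignment, not its exact wording.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

prompt = (
    "Explain the difference between metaphor and simile, then compare how "
    "Wilfred Owen's 'Dulce et Decorum Est' and Robert Frost's 'A Soldier' "
    "use each device, supporting the comparison with specific examples "
    "from both poems."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)

print(response["choices"][0]["message"]["content"])
```

Because the model samples its output, running the same prompt a few times yields noticeably different answers, which is worth keeping in mind before drawing conclusions from any single response.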
Key Learnings About AI for Educators
So, what are the essential takeaways for educators from this exploration into AI tools like ChatGPT? For me, the advent of AI, whether through chatbots or other forms, reinforces fundamental principles that effective educators have always recognized:
1. Emphasize Higher-Order Thinking: The “Hows” and “Whys” Over the “Whos,” “Whats,” and “Whens”
It is increasingly crucial to guide students toward exploring and explaining the processes and reasons behind phenomena, rather than merely memorizing factual information. AI excels at factual recall and information retrieval, making rote learning less relevant. The focus should shift to analytical and critical thinking, skills that require deeper engagement and understanding. This is a core lesson about AI’s role in education: it pushes us to rethink our pedagogical approaches.
2. Cultivate Opinion Formation and Argumentation: Taking a Stand
Encourage students to develop their own perspectives, form opinions, and demonstrate their grasp of facts and concepts by constructing compelling arguments. As humans, we naturally start with opinions and then refine them as we come to understand supporting or contradictory facts. Current AI algorithms operate within the realm of facts and rules; developing more nuanced and informed opinions is a capability still in progress. Understanding AI’s current limitations is vital to shaping effective educational strategies.
3. Critical Evaluation of AI Output: Questioning Basic Understanding
Do not automatically assume that AIs like ChatGPT possess a fundamental grasp of basic facts or concepts. In the example above, ChatGPT confidently presented inaccurate examples of metaphors and similes. My familiarity with literary devices allowed me to spot these errors quickly, but it raises the question of how often users receive similarly flawed explanations or examples of more complex concepts without realizing the inaccuracies. This highlights a crucial lesson about AI: it is a tool that requires critical evaluation, not blind acceptance.
4. Rethinking Assessment: From Answering Prompts to Evaluating AI Responses
This experiment leads to a significant consideration: instead of primarily worrying about AI-driven plagiarism and developing AI to detect AI (echoing Blade Runner scenarios), educators should seize this moment to creatively reimagine exam and homework assignments. For instance, in teaching metaphor and simile in poetry, rather than assigning the initial prompt I gave ChatGPT, I would present students with ChatGPT’s response and task them with evaluating and grading it! This approach turns AI from a potential cheating tool into a subject of critical analysis itself, transforming how students learn about AI and its implications.
AI technologies like ChatGPT are still in their nascent stages, and their rapid improvement is undeniable. Understanding their current capabilities and limitations is essential for educators as they navigate the implications for their work with students. The solution isn’t an arms race that pits AI against anti-AI detection tools. Instead, educators and students should treat this period as an opportunity to rethink educational routines and methods, fundamentally enhancing how we teach students to think and articulate themselves in an ever-evolving world.
I am keen to hear your thoughts. Please share your comments below.