Harvard’s AI Adventure: A Bold Step or a Risky Leap?
Imagine a world where AI holds more significance than fire or electricity. Now, picture Harvard University stepping into this world, leading the way with a daring experiment. The renowned institution is handing over the reins of its popular coding course, CS50, to an AI instructor. It’s like launching a spaceship into the unexplored cosmos of AI.
At the centre of this bold venture is Professor David Malan. He dreams of a future where AI becomes a personal tutor for every student, available round the clock. He envisions AI taking over routine tasks, freeing human teachers to engage in deeper, one-on-one interactions with students. In his eyes, this could reshape education, making it more akin to a “personal apprenticeship”.
However, not everyone is on board with this high-speed journey. Critics argue that the AI models in use, GPT-3.5 and GPT-4, don't always generate correct code. They worry that students, who are paying for their education, are being used as test subjects in a potentially flawed experiment.
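The reliability concern is easy to illustrate. Below is a hypothetical sketch (not actual CS50 material or verified model output) of the kind of subtle bug a code-generation model can plausibly produce: a Python function with a mutable default argument, which looks correct but silently shares state between calls, alongside the standard fix.

```python
# Hypothetical illustration only: a subtle bug of the kind critics worry
# an AI assistant might hand to a beginner.

def append_item_buggy(item, items=[]):
    """Looks fine, but the default list is created once and reused."""
    items.append(item)  # bug: every call without `items` shares one list
    return items


def append_item_fixed(item, items=None):
    """Standard fix: create a fresh list on each call when none is given."""
    if items is None:
        items = []
    items.append(item)
    return items
```

A beginner calling `append_item_buggy` twice would see earlier results leak into later ones, while `append_item_fixed` behaves as expected. The example is illustrative, not a claim about what CS50's AI tutor actually suggests.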
Despite the turbulence, Professor Malan stays the course. He acknowledges the possibility of errors but views them as minor hiccups on the path to a more efficient, personalised education system. He firmly believes that the potential benefits of AI in education far outweigh the risks.
In my opinion, the use of AI in education is a double-edged sword. On one side, it offers personalised learning, constant support, and a lighter workload for teachers. On the other, it raises concerns about reliability, the potential loss of human touch in education, and possible privacy issues.
A recent study, “War of the chatbots: Bard, Bing Chat, ChatGPT, Ernie and beyond. The new AI gold rush and its impact on higher education,” offers an interesting viewpoint. The study suggests that while AI models like GPT-4 show promise in higher education, they are far from perfect. The study concludes that despite all the hype, there are currently no top-performing students among the AI models.
In a nutshell, Harvard’s AI instructor experiment is a daring leap into the future of education. It’s an exciting journey, but one that requires us to stay alert and think critically, whether we’re learning from humans or software. As we embark on this adventure, let’s keep Professor Malan’s words in mind: “We’ll make clear to students that they should always think critically when taking in information as input, be it from humans or software.” After all, in this brave new world of AI, critical thinking might just be our most valuable compass.