The Great AI Debate: The pleasures and the pitfalls.
“The emergence of artificial intelligence (AI) and the technological singularity is a threat to human survival.” – Stephen Hawking, physicist, BBC interview, 2014
What is all the fuss about?
It all happened so quickly that it was a while before anyone gave realistic consideration to the actual impact of AI and how to go about regulating it. Couple that with the many misconceptions about what AI is and what it can do, and the result is widespread confusion and, often, fear.
The discussion about artificial intelligence continues to escalate. One side embraces the advantages and possibilities of AI-driven content. The other approaches the subject with a fair amount of fear and paranoia. When someone like physicist Stephen Hawking sees the need to warn about the potential consequences of the emergence of AI, it seems we should take note. However, the advantages that AI provides should not be discounted. What impact could AI have on you personally? What implications are there for human life in general?
Should you be afraid?
On a personal level, job loss is a major concern. AI is performing tasks from computer programming to management functions, and even CEO roles. AI can produce efficient computer code, pass exams, create artwork, successfully interview for jobs, and improve cancer screening techniques. It’s fair to assume that a computer could be coming for some aspect of your job.
The AI Myth: Mind vs Machine
The essential difference between human intelligence and artificial intelligence is that humans know they must learn in order to survive, and can gather, sort and use everything around them to do so. Machines can only turn microscopic switches on and off according to a pattern dictated by electrical impulses. The switches get smaller, the impulses get more energy efficient, the processing gets faster, but there is no awareness or internal motivation.
But there is certainly potential for being replaced by a machine that can do your job. Considering the history of automation, this is nothing new. There are increasing reports of companies turning to generative AI models to replace writers, content creators and customer-support functions. However, the outlook is not entirely doom and gloom. A number of people have managed to pivot, working with the new tools and taking advantage of the opportunities they offer.
Many people in education have embraced the possibilities of AI, particularly when it comes to generating content and graphics. One of the biggest stumbling blocks to producing excellent training and teaching is access to quality material. AI models such as ChatGPT can almost instantly produce content in any format or style the user requests. Within certain limitations, it is accurate and direct.
AI is also able to generate or suggest images, graphics, charts and diagrams when given a set of requirements or parameters, saving time, expense and effort.
In our experience, ChatGPT’s ability to interpret requirements and suggest appropriate material has been good to excellent so far, across subjects as diverse as marketing, coding and management skills development. Nevertheless, we kept a strict watch on the accuracy of the material it supplied.
What to watch out for.
What do you need to watch out for? Large language models are still inclined to invent (or “hallucinate”) information that may not be true. Occasionally, a model will ignore certain aspects of a subject or include outdated concepts because it is not current with the latest data. So take care, when building a prompt, to stipulate clearly what you need to know or include. Be prepared to drill down and prompt in different ways to get the level of detail you need, as in the sketch below.
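As a rough illustration of what a clearly stipulated prompt might look like, here is a minimal sketch using the OpenAI Python library. The model name, the training scenario and the wording of the instructions are our own assumptions, not a recommendation of a particular tool or approach.

from openai import OpenAI

client = OpenAI()  # assumes an API key is available in the environment

# Spell out format, length, audience, style and how to handle uncertainty,
# rather than asking a vague, open-ended question.
prompt = (
    "Write a 150-word introduction to phishing awareness for new employees. "
    "Use plain UK English, list exactly three warning signs, "
    "and if you are unsure of a fact, say so rather than guessing."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; substitute whichever model you use
    messages=[
        {"role": "system", "content": "You are a training-content writer."},
        {"role": "user", "content": prompt},
    ],
)

print(response.choices[0].message.content)

If the first answer is too shallow, follow up in the same conversation with narrower prompts (for example, “expand the second warning sign with a workplace example”) rather than starting again from scratch.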
You must fact-check for accuracy. In most cases this is easily done with a quick search of the key terms, something you probably would have done anyway when researching your subject. If you are creating code, test it wherever possible; a brief example follows. Surprisingly, arithmetic is still something these models often get wrong, so don’t rely on any numerical results without checking them.
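As a minimal sketch of that kind of check, suppose a model supplied the small Python function below to convert quiz scores into percentages (the function and the test values are our own hypothetical example). A few spot checks against answers you can work out by hand will catch the most obvious mistakes.

# Hypothetical AI-generated function: convert a quiz score to a percentage.
def score_to_percentage(correct, total):
    return (correct / total) * 100

# Spot-check against results we can verify by hand before trusting it.
assert abs(score_to_percentage(18, 20) - 90.0) < 1e-9
assert abs(score_to_percentage(7, 28) - 25.0) < 1e-9
assert abs(score_to_percentage(0, 40) - 0.0) < 1e-9

print("All spot checks passed")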
Another problem is oversimplification: AI cannot judge when certain nuances or details are important for a complete understanding of a subject.
In this respect, humans still have a critical role to play in editing, checking, guiding and reviewing AI output.
Keep the words of author Margaret Atwood in mind: “Machines don’t have morals, and neither do the people who program them.” Although AI language models produce language that convincingly mimics reasoning and accountability, they do not reason and may never do so in truly human terms. The humans who feed data to AI systems are free agents: they are subject to personal and professional whims and fallacies that they have little motivation to control or eliminate. Until AI development is properly regulated, it is critical to check, check and check again for bias and mistakes.
Conclusion
It’s a vast subject and justifiably a cause for concern. At this point, though, AI systems are still dependent on humans for all of their knowledge. What we put in is what we get out. While AI can make predictions and extrapolate results faster and more accurately than humans, it cannot ‘learn’ about the world without us. Our knowledge is still relevant, and so are our human experience and insight. No machine comes close to matching those qualities yet.
The Learning Studio has a wealth of experience in creating engaging and effective content. We’re also excited about the possibilities AI offers and about using its benefits to make our eLearning as good as it can be. We would be happy to share what we’ve learned about both human and AI content development!
Kerushan Naidoo
Head of Moodle Development