
Who’s afraid of ChatGPT?


I posted about the limitations of Artificial Intelligence in my blog entry in July 2022. Little did I know then that by the end of the year and the beginning of 2023, everybody would be talking about ChatGPT. Some consider it something that will cause disturbance and chaos in education and publishing. Others think it’s the way of the future – an oracle beyond Google that we can ask about a variety of topics, rather than something that merely points us to websites, products, and map directions.

For those not in the know, ChatGPT is an Artificial Intelligence model that allows users to interact with it in a conversational way. The dialogue lets users continue the conversation thread with follow-up questions – even disagreements. ChatGPT will politely apologise and then continue the conversation with you. Google has now answered the challenge by introducing its own chatbot, Bard.

In a way, the AI-led conversation tool is not novel. In the 1960s, researchers at MIT created a natural language processing program called ELIZA. ELIZA allowed users to interact with it by inviting questions and responding to their statements. It simulated conversation through pattern-matching and created the impression that it understood you, by producing the most plausibly appropriate responses. There’s a clone of ELIZA that you can try.

The Fear and Hysteria

The hysteria over ChatGPT is driven by its ability to produce program code, essays, and passages on demand. It seemingly covers a lot of ground and ‘knows’ about a lot of things. You can ask it to tell a joke, write you a 1,000-word essay on Shakespeare, produce R code to create a co-occurrence table, or give a dot-point explanation of why dogs are superior to cats.
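To give a concrete sense of the kind of request involved, here is a minimal sketch of what a co-occurrence table amounts to – counting how often pairs of words appear together in the same sentence. It’s written in Python purely for illustration (the request above was for R), and the sample sentences are invented:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_table(sentences):
    """Count how often each pair of words appears in the same sentence."""
    counts = Counter()
    for sentence in sentences:
        # Sort the unique words so each pair has one canonical ordering.
        words = sorted(set(sentence.lower().split()))
        for pair in combinations(words, 2):
            counts[pair] += 1
    return counts

table = cooccurrence_table([
    "dogs are superior",
    "cats are superior",
    "dogs and cats",
])
print(table[("are", "superior")])  # 2 – the pair occurs in two sentences
```

This is roughly the shape of code ChatGPT will hand back in milliseconds – and exactly the kind of draft that still needs a human eye for tokenisation, stop words, and edge cases.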

ChatGPT assembles the information it can glean from the Internet, based on the request it receives, and finds the most appropriate way to structure the code or passage through its internal logic. The answer is scarily close to something a person could have created – however, on some occasions the end result contains statements or facts that are incorrect. I tried asking ChatGPT: “What is the Ehrenberg-Bass Institute?” and among the professional-sounding sentences, it included names I don’t recognise as having established the Institute. The Ehrenberg-Bass Institute was not created by somebody called ‘Professor Richard J. Ehrenberg’.

There is fear that lecturers, examiners, and teachers will not be able to distinguish work genuinely produced by students from work written with ChatGPT. That fear then leads to the doom-and-gloom thinking that students will no longer think for themselves, and that this is the start of a Skynet-like system where the computers become our overlords.

Let’s be Pragmatic and Strategic, Instead!

It helps to be open-minded but sceptical and strategic when appraising the role of ChatGPT and its successors. Within the education sector in Australia, I’m glad that the University of South Australia (where I am based!) is cautiously pragmatic about ChatGPT rather than rejecting it outright.

The hysteria over ChatGPT reminds me of the fear and paranoia about big data analytics in market research circles in the mid-2000s. There was fear that big data analytics would decimate the industry and that companies would end up running their own surveys through services like SurveyMonkey. I was part of a national body during that period, and at the time I advocated including big data analysts among market research practitioners. It seemed logical, given the abundance of learning and knowledge-sharing between the two areas. More than a decade later, market research has evolved into something more mature than it was – and big data analytics certainly allows businesses and organisations to make better data-driven decisions. There is great synergy between the two.

As mentioned in my blog post in October 2022, Artificial Intelligence is simply a means to reach an end more efficiently. It’s not the be-all and end-all, nor is it the black plague. It may not be the most effective – but it is certainly more efficient. A programmer can ask ChatGPT to construct decent R code in a matter of seconds, which can then be refined by a human mind. It’s a much better way to manage effort and labour, isn’t it? The challenge is to direct the effort towards refinement, and not to blindly and lazily accept whatever ChatGPT or any other AI system throws at us.

I still remember the old days when constructing a program in C or C++ meant consulting books and creating it from scratch. Every time. Now, with Stack Exchange and YouTube clips, I can leverage the work of others who have made inroads into tasks similar to mine. The shortcuts allow me to direct my resources towards refining and customising that work for the logical challenge I need to solve.

Just consider ChatGPT and the like to be in the same league as a ready-to-heat meal – rather than spending hours in the kitchen, you can direct your effort into refining it later on, or into other cerebral activities that need doing. And as with a ready-to-heat meal, that doesn’t mean what ChatGPT produces is better. In many cases, it is probably sub-par compared with what you could create from scratch.

If we want to venture to a new galaxy, there are many mundane things we should let go of and automate, so they can be executed more efficiently. We can’t colonise Mars if we insist that all calculations be done by hand. Doing things smarter with AI does not exempt us from verifying its work and exercising our judgment – otherwise we end up blindly accepting an AI’s wacky answers, or with an AI system that turns racist and sexist through the training it received. On the other hand, I would love an AI system that automates my data wrangling and saves me hours of finicky coding work.

We should continue to leave the lateral and creative thinking to superior human brains: building analogies, connecting seemingly unrelated theories and phenomena, and injecting feeling, humour, and emotion.

This is what machines can’t do.

PS: This article was not written by ChatGPT. I wanted to ‘cook it’ from scratch.
