Is ChatGPT Worth The Hype?


Zahoor Reza | April 1, 2023
Since becoming available for public use in late November 2022, ChatGPT has generated a lot of hype and taken the Internet by storm. According to data from the analytics firm Similarweb, this incredibly powerful AI tool developed by OpenAI surpassed the coveted 100 million monthly active users mark by January 2023, in the span of just two months.

For those who are not in the know, ChatGPT is an autoregressive language generation model with 175 billion parameters that is trained on a large body of text data and uses deep learning techniques to understand and swiftly respond to users' text-based inputs.
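
To make the term "autoregressive" slightly more concrete, the toy sketch below (my own illustration, in no way OpenAI's actual model, and nothing like ChatGPT's scale) shows the basic loop such systems follow: predict the next token from the text produced so far, append it to the context, and repeat. ChatGPT performs the same loop with a 175-billion-parameter neural network in place of a hand-written lookup table.

```python
# A toy sketch (not OpenAI's model) of what "autoregressive" means:
# each new token is chosen using only the tokens generated so far,
# then appended to the context before the next step. The bigram table
# below is entirely made up for illustration.
import random

next_words = {
    "the":   ["cat", "dog", "model"],
    "cat":   ["sat", "slept"],
    "dog":   ["barked", "sat"],
    "model": ["responded", "predicted"],
}

def generate(prompt: str, steps: int = 4) -> str:
    tokens = prompt.split()
    for _ in range(steps):
        candidates = next_words.get(tokens[-1])  # condition only on prior output
        if not candidates:                       # stop when the toy table runs dry
            break
        tokens.append(random.choice(candidates))
    return " ".join(tokens)

print(generate("the"))  # e.g. "the cat slept"
```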

"Technology," English novelist CP Snow told the New York Times in 1971, "is a queer thing. It brings you great gifts with one hand, and it stabs you in the back with the other."

While much has been written and said about it being the next big leap and a tipping point for AI technology, there has also been much hue and cry from those who fear that it often generates biased and incorrect responses, and that it provides a glide path for students to plagiarize essays and cheat on assignments and online exams. Then there are also those who are concerned that it will eventually replace human workers in jobs like copywriting, content writing, editing and digital marketing.

Before delving into why these worries about the fastest-growing consumer application are blown out of proportion, let's first briefly discuss what it is good for.

From writing code to simplifying difficult concepts, generating essays and producing human-like responses to users' prompts, ChatGPT's applications span a wide range of industries. Hence, it comes as no surprise that Microsoft has plowed big money into the chatbot's developer, with a reported $10 billion investment in OpenAI.

Having said that, it's important not to overlook the fact that ChatGPT's limitations far outweigh its applications. Three of its notable limitations - occasionally generating incorrect information, producing harmful instructions or biased content, and limited knowledge of the world and events after 2021 - are emblazoned on the tool's main interface for all users to see.

As alluded to above, one of the main areas of concern surrounding the use of ChatGPT is the tool being used as a means of cheating on assignments and online exams. To prevent any such misuse, a number of educational institutions globally - school districts such as Oakland Unified in California, Seattle Public Schools, Baltimore County Public Schools in Maryland and New York City's Department of Education, as well as universities such as Bangalore's RV University and France's Sciences Po - have imposed restrictions on access to ChatGPT.

As I see it, these fears are a bit unfounded. To begin with, one of the main reasons many strongly advise against relying on ChatGPT to understand any new concept is that it often presents incorrect information with such remarkable confidence that someone without knowledge of the subject will believe that information to be true.

For instance, when biostatistician Adrian Olszewski asked ChatGPT whether logistic regression can be used for regression tasks, its response varied from stating that logistic regression is unable to perform regression tasks to asserting that it can, before eventually suggesting that Olszewski consult a teacher for clarification. Olszewski, being a biostatistician, would have known the correct answer beforehand, but anyone trying to understand logistic regression without any prior knowledge would have been left scratching their heads after reading such a contradictory response.
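
To see why the question invites confusion in the first place, consider the brief sketch below (my own illustration, not Olszewski's, using the scikit-learn library and made-up toy data): logistic regression is ordinarily used as a classifier, yet what it fits under the hood is a regression of class probabilities on the inputs, so a confident one-sentence answer in either direction is easy to get wrong.

```python
# A minimal sketch (not from the exchange described above) of the ambiguity:
# scikit-learn's LogisticRegression is a classifier by default, but it also
# exposes the continuous probabilities it regresses. The toy data is made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]])  # e.g. hours studied
y = np.array([0, 0, 0, 1, 1, 1])                          # pass/fail labels

model = LogisticRegression().fit(X, y)

print(model.predict([[3.5]]))        # a discrete class label, e.g. [1]
print(model.predict_proba([[3.5]]))  # the continuous probability estimate behind it
```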

ChatGPT cannot be blindly trusted even when asking it for a list of sources on a particular subject you are writing an essay about. On many occasions, when asked to suggest books or movies or provide a list of nearby locations, it has presented books, movies and places that do not actually exist, a phenomenon that has been referred to as “hallucination.”

It is also worth noting that just as the number of AI writing assistants has increased, so has the number of applications designed to determine whether a piece of text was generated by an AI. Not only has ChatGPT failed the Turing Test, a method proposed by Alan Turing in 1950 to ascertain a machine's ability to exhibit human-like intelligence, but one can also easily discern whether a piece of text has been generated by AI using a tool like GPTZero, designed by Princeton student Edward Tian.
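
GPTZero's exact methodology is its own, but the broad idea behind such detectors, reportedly based in part on how predictable a passage looks to a language model (its "perplexity"), can be sketched. The example below is my own illustration, not GPTZero's code, and assumes the open-source Hugging Face transformers library and the small GPT-2 model.

```python
# A rough sketch (not GPTZero) of one signal AI-text detectors reportedly use:
# perplexity, i.e. how predictable a passage is to a language model. Lower
# perplexity is treated as one (far from conclusive) hint of machine authorship.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy of next-token predictions
    return torch.exp(loss).item()

print(perplexity("The quick brown fox jumps over the lazy dog."))
```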

Human-centric technologist Kyle Simpson, in a post on LinkedIn, sums up the limitations of ChatGPT, and why artificial intelligence cannot trump human intelligence, as well as anyone.

"If you ask a good question to ChatGPT, and you get back a good response, especially one that's detailed, has step by step instructions to answer/solve what you asked, etc.... That means, almost certainly, that at least one human author answered a question similar/identical to the one you asked, in a well-organized way (perhaps in a couple of different posts)," Simpson wrote.

He added, “ChatGPT is collecting, synthesizing and summarizing blog posts for you. It's not figuring out how to weave entirely distinct concepts together in a novel (and correct) way. You have a human to thank for that. But unfortunately, ChatGPT won't tell you which human to thank.”

None of this is to say that imposing restrictions on access to ChatGPT is the right course of action. Rather, given the breakneck pace of advancements in the world of artificial intelligence, the focus should be on learning how to use AI efficiently, responsibly and ethically.