In November 2022, OpenAI launched ChatGPT (Chat Generative Pre-Trained Transformer). OpenAI is an AI research company whose stated mission is to “ensure that artificial general intelligence benefits all of humanity.”
What is ChatGPT?
ChatGPT is an AI chatbot that can converse with its users, answer their questions, produce stories, pass university-level exams, fix computer code, and write creative essays and songs as well as academic-style papers. ChatGPT’s popularity stems from its ability to quickly produce coherent, and sometimes correct, outputs that can appear to be written by a human.
ChatGPT is currently open for anyone to use, and its answers and writings have gone viral across the Internet, which has been abuzz with discussions about the potential impacts and ethics of the model.
How Reliable is ChatGPT?
Discussing the possible impacts of AI, OpenAI’s own CEO, Sam Altman, said, “I think the good case [for A.I.] is just so unbelievably good that you sound like a crazy person talking about it…I think the worst case is lights-out for all of us.” This ambivalence is reflected in many reports on the ChatGPT model.
Amid the flurry of opinion pieces and articles on the model, the New York Times published a letter to the editor written mostly by ChatGPT. The letter responded to an opinion essay titled “How ChatGPT Hijacks Democracy.” Generated in seconds, the brief letter states, “[T]he notion that ChatGPT could be used to compromise democratic processes is fear-based speculation that is not rooted in reality. It is important to approach new technologies with caution and to understand their capabilities and limitations. However, it is also essential not to exaggerate their potential dangers and to consider how they can be used in a positive and responsible manner.”
Yet there are already valid concerns about ChatGPT’s production of misinformation. The misinformation tracker NewsGuard tested ChatGPT’s responses to misinformation and ultimately referred to it as “the next great misinformation superspreader.”
NewsGuard found that ChatGPT produced false narratives in 80% of its trials. If circulated, such misinformation can mislead readers, especially those unfamiliar with the news topic in question.
Moreover, the model can be used to generate student essays and news reports, and to replicate the work of artists, which has raised concerns about plagiarism and legality.
OpenAI has acknowledged that its model has limitations, including incorrect responses. This is partly because ChatGPT’s training data extends only to 2021, leaving it with no knowledge of events since then.
In many articles about ChatGPT, journalists have incorporated text produced by the model to show how seamlessly it can be blended into human-written writing. While this is done to showcase ChatGPT’s outputs, it raises the question of how the model will shape the future of journalism and media.
Will ChatGPT Transform Media and Journalism?
Some writers are already voicing concerns about their job security amid an already difficult period for the media and news industries.
Media outlets such as BuzzFeed have announced that they are expanding their use of AI in online content while cutting jobs to save costs, though a spokesperson maintained that AI will not be used to create journalism.
Nevertheless, these developments point to the fact that AI will change, and in some cases is already changing, journalism and media.
On the other hand, many observers are more optimistic, arguing that ChatGPT will not affect every form of journalism and media, particularly work that requires interviewing people and reporting on niche stories. Moreover, OpenAI is reportedly continuing to enhance its model, having placed safeguards against violent or otherwise problematic prompts.
How Are Institutions and Companies Reacting to ChatGPT?
Given these wide-ranging concerns and discussions, responses to the technology have varied. Microsoft has invested billions in the technology. Meanwhile, academic journals, universities such as the prestigious French institution Sciences Po, and schools in New York and Seattle have banned the use of ChatGPT. Others have suggested engaging critically with the model through regulation and other measures rather than banning it.
These varied responses highlight that the rapid development of AI technology requires thoughtful reflection and critical efforts to understand the impacts of these technologies.
Source: NewsGuard Misinformation Monitor