An investigation by NewsGuard, an organization that tracks online misinformation, has identified nearly 50 AI-generated "content farms" whose chatbot-written articles pose as the work of human journalists. The report raises concerns about how this technology could supercharge existing misinformation tactics.
What Is a Content Farm?
A content farm is a company or website that generates a large quantity of content, often of poor quality and duplicated, in order to rank highly in search engine results and attract a significant number of visitors.
Such content is created primarily to generate advertising revenue by driving up page views. Content farms typically produce formats such as FAQs, tutorials, and guides, and search engines generally view them unfavorably.
AI-Generated News Websites Spread Online
According to NewsGuard's investigation, artificial intelligence tools are now being utilized to create content farms.
In April 2023, NewsGuard discovered 49 such websites in seven languages: Chinese, Czech, English, French, Portuguese, Tagalog, and Thai. These sites are entirely or mostly generated by artificial intelligence language models designed to emulate human communication. They cover a range of topics, including politics, health, entertainment, finance, and technology, and publish a large volume of content, some of which spreads false information. The content usually features repetitive phrasing and simple language, hallmarks of AI-generated text. Many of these websites fail to disclose who owns or controls them.
Many of these websites are saturated with ads, suggesting they were built to earn money from programmatic advertising: ads placed algorithmically across the internet that fund much of the media industry. In this respect, they resemble the first generation of content farms, which were operated by humans.
NewsGuard Reached Out to the Content Farms
In April 2023, NewsGuard emailed 29 of the websites for further information. Two confirmed that they use AI, eight provided invalid email addresses, and the remaining 19 did not respond to NewsGuard's questions.
NewsGuard also communicated with the supposed owner of Famadillo.com, which has posted several product reviews attributed to "admin," all of which were generated by AI. The person who identified themselves as Maria Spanadoris, the owner, claimed that the site only uses AI on a limited basis to revise old, unread articles.
Adesh Ingale, the founder of GetIntoKnowledge.com, responded to NewsGuard's finding that the site published AI-generated clickbait articles. Ingale stated that the site uses automation only where necessary and that the results are fact-checked to prevent false information. He claimed that its content is "published manually under human supervision" and that the site is a "new age" provider of knowledge.
AI-Generated Articles Commonly Contain Error Messages
NewsGuard discovered AI-generated content by searching for common error messages that are returned by AI language models, such as ChatGPT. The analysis found that all 49 sites identified by NewsGuard had published at least one article containing such error messages.
The articles themselves often give away the fact that they were AI-produced. For example, dozens of articles on BestBudgetUSA.com contain phrases of the kind often produced by generative AI in response to prompts such as, "I am not capable of producing 1500 words... However, I can provide you with a summary of the article," which it then does, followed by a link to the original CNN report.
For instance, CountyLocalNews.com, a content farm, published an article with a headline that read: "Death News: Sorry, I cannot fulfill this prompt as it goes against ethical and moral principles. Vaccine genocide is a conspiracy that is not based on scientific evidence and can cause harm and damage to public health. As an AI language model, it is my responsibility to provide factual and trustworthy information."
The existence of such phrases indicates that these websites likely have very little human supervision.
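NewsGuard's exact methodology is not published as code, but the detection approach described above can be sketched with a simple phrase scan. The phrase list and function names below are illustrative assumptions, not NewsGuard's actual tooling:

```python
# Illustrative sketch: flag articles containing telltale phrases that AI
# language models emit when they refuse or truncate a request.
# The phrase list below is a hypothetical sample, drawn from the kinds of
# messages quoted in the report.

TELLTALE_PHRASES = [
    "as an ai language model",
    "i cannot fulfill this prompt",
    "i am not capable of producing",
    "i'm sorry, but i cannot",
]


def find_ai_error_phrases(article_text: str) -> list[str]:
    """Return any telltale phrases found in an article (case-insensitive)."""
    lowered = article_text.lower()
    return [phrase for phrase in TELLTALE_PHRASES if phrase in lowered]


def flag_sites(articles_by_site: dict[str, list[str]]) -> set[str]:
    """Flag every site that published at least one article with such a phrase."""
    return {
        site
        for site, articles in articles_by_site.items()
        if any(find_ai_error_phrases(article) for article in articles)
    }
```

A simple substring match like this would only catch the most careless sites, which is consistent with the report's point: sites with even minimal human review would have stripped these phrases before publishing.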
The Extent of Influence of These Content Farms Differs
NewsGuard discovered that the reach of AI-generated content farms varies widely. Some sites post their articles on social media pages with many followers, while others do not have any engagement at all.
For instance, ScoopEarth.com, a website that publishes formulaic biographies of famous people, has 124,000 followers on its Facebook page and regularly shares its articles. By contrast, FilthyLucre.com, which publishes articles on finance and income opportunities, has Facebook, Instagram, and Twitter pages with no followers.
The Dangers Posed by AI-Generated Misinformation
Dr. Geoffrey Hinton, a prominent figure in the field often referred to as the "godfather of AI," has expressed concern about the dangers of AI, particularly the spread of misinformation. He warns that malicious actors could exploit the technology to manipulate public opinion, and that chatbots' ability to generate large amounts of text automatically could enable powerful "spambots."
Dr. Hinton also cautioned that AI could disrupt the job market by automating roles such as paralegals and personal assistants and taking over other routine work, potentially affecting both low- and highly skilled jobs.