AI Article Farms and Unethical Use of ChatGPT

With the introduction of incredible technology from OpenAI and other tech pioneers, there has been constant talk about artificial intelligence and its current and potential applications. Alongside the discourse, of course, there has been plenty of talk about the dangers and threats of AI, especially as far as entertainment content goes.

There is another corner of content creation where the AI conversation has reached as well, and that’s the rapid generation of articles—clickbait articles, specifically. So, how exactly is AI being used to generate clickbait articles, and why is this worth the discussion? Let’s take a look.

AI Article Generation

To be clear, AI article generation is not a new trick. What’s new is that OpenAI and others have brought the technology to the forefront for everybody to use. Before, similar technology was available only to certain companies behind closed doors, in large part because of the cost originally associated with running these algorithms. As Forbes reports, the AP has been using AI to generate quick, simple, and informative articles for years. Often, an intentionally detailed narrative is what matters, but when it’s crucial to get the facts out as quickly as possible, the AP opts for automated generation.

At least in principle, keeping this AI article generation technology in the hands of organizations like the AP was a way to make sure it would only be used for responsible, appropriate purposes. Whether or not that logic ever held true is another conversation, but now the question has become: what happens when everybody has the ability to churn out articles practically automatically? The answer seems to be that some businesses are doing exactly that, churning out articles that are basically just long-form ads.

AI Article Farms and Clickbait

It’s no secret that we’re constantly surrounded by ads: from the giant billboards we drive past, to the banners plastered on public transportation, to the unsolicited videos and images that pop up on our screens before and around the content we’re looking for. Even the content we are looking for is often promoting some product or service in some way. As consumers, we need to be conscious of this reality and engage critically with the content we consume; keeping a healthy distance between the content we enjoy and the marketing it includes is something many of us already do automatically.

However, what happens when there is no distance between the content and the marketing? With today’s easy-to-use AI tools, technically original articles can be generated in seconds, which means companies can create entire articles for the sole purpose of marketing, without caring much about the content, all at essentially no cost. If the AI is fed the right keywords and other SEO tricks, these empty, automatically generated articles may also be among the first, if not the only, results on your search page, even when you’re looking for genuine information or sincere content. This rapid, practically automatic, AI-assisted generation of ad-like articles is what some have dubbed AI article farms.

Practically eliminating the cost of this sort of copywriting is a blessing for many brands’ marketing efforts, but it quickly becomes dangerous baggage for consumers, who now have new obstacles to navigate when trying to simply enjoy their online experience.

AI Concerns

There are many legitimate concerns embedded in the current AI discourse, and when it comes to article generation, many of the consequences are clear. First and foremost, AI article generation threatens to pollute the internet with useless, empty content that serves only as a vehicle for ads and clicks. With virtually no cost to produce these articles, the risk of them flooding our search engines is very real. That can make even a simple Google search a complicated task, let alone engaging openly and confidently with online content.

Another concern is the actual content of these articles. GPT-4 promises big things as far as accuracy and precision in generated content go, but until those improvements arrive, services like ChatGPT are still little more than fun chatbots or proofs of concept. OpenAI itself acknowledges that ChatGPT fairly often generates false or misleading content; it is simply not meant to be a watertight AI assistant yet. So generating countless articles for the sole sake of advertising, while also running the risk of spreading misinformation, is a problem that shouldn’t be taken lightly.

The recurring theme in these discussions is accountability. Just as the risks and concerns of this openly available AI software are evident, so are the benefits and the potential. The software continues to be made more reliable and more responsible, but it is up to us to hold one another accountable for irresponsible use of this technology, and to concern ourselves with keeping our streams of information as reliable as possible rather than letting them slip into even less reliable territory.

Living Pono is dedicated to communicating business management concepts with Hawaiian values. Founded by Kevin May, an established and successful leader and mentor, Living Pono is your destination to learn about how to live your life righteously and how that can have positive effects in your career. If you have any questions, please leave a comment below or contact us here. Also, join our mailing list below, so you can be alerted when a new article is released.

Finally, consider following the Living Pono Podcast to listen to episodes about living righteously, business management concepts, and interviews with business leaders.
