With the arrival of ChatGPT, several ethical and social concerns have surfaced regarding generative AI, artificial intelligence that creates original work. Rather than simply analyzing data or generating numbers, generative AI can create drawings, hold unscripted conversations with humans, and even author convincing articles from a handful of keywords or prompts. Generative AI is a potentially helpful tool, but alongside its practical utility, such powerful technology can also give rise to serious risks and negative consequences. One industry that has voiced these concerns is the media industry.
Why might the media worry about new technology? The answer lies in the nature of media itself. Media has always served as a means of communication and of spreading information: newspapers, magazines, and books exist to distribute information to the public. The popularization of the internet and social media gave rise to a form of media that combines information with creativity, known as creative media. Creative media focuses on telling stories in ways that resonate with people, which supposedly rendered it safe from AI replacement; only humans, the thinking went, could come up with new and innovative ideas. The emergence of generative AI, however, shows that even a computer program can be creative.
Thus, generative AI poses a threat to creative media, as humans no longer have a clear advantage. This encroachment is not limited to simple content creation such as writing or drawing; it extends to more complex formats such as internet streaming. AI-powered media creators have already managed to blend in among human ones. Recently, a popular Twitch channel featured an AI streamer named Neuro-sama, whose behavior was nearly indistinguishable from that of human streamers: she played games, interacted with viewers, and expressed her opinions on relevant topics.
The growing sophistication and creative capability of the generative AI industry threaten many people in the media industry. If AI content becomes indistinguishable from human content, who would hire a human when an AI could produce comparable work with far less effort?
Beginning in the 2000s, many media companies started to use AI as an effective management tool. Because AI has a nearly limitless capacity for information, these systems outperform their human counterparts at information management. Building on that early use, generative AI now powers various recommendation algorithms on YouTube, Instagram, and other social media. Platforms such as Snapchat have built chatbots based on generative AI to entertain their users, and others such as Meta and YouTube are developing generative AI tailored to user entertainment. Many other forms of media are predicted to be replaced by generative AI to a significant extent. Studies predict that by 2025, 30 percent of outbound marketing messages from large organizations will be synthetically generated, up from less than 2 percent in 2022, and that by 2030 a major blockbuster film will be released with 90 percent of its content generated by AI.
Beyond AI's capacity to take human jobs in the media industry, new voice- and image-generation technology can spread misinformation on an unprecedented scale. Say a political candidate wants to tarnish an opponent's reputation on social media by spreading a rumor that the rival is having an affair: new AI image tools can create a hyper-realistic picture of that rival on a date with another partner. If someone wants to defame South Korean president Yoon Suk-yeol by fabricating an audio clip of him saying something unpatriotic such as "I hate Korea," new AI voice technology can easily generate it from a few keywords. In the wrong hands, such AI-generated falsehoods could spread quickly to the public through social media and then into the news. In the future, fabricated images and quotes could easily slip through the cracks of fact-checking and infiltrate mainstream news media.
There are, of course, ways to channel generative AI for good. AI companies are working to prevent generative models from producing false or unethical information. OpenAI, for example, has built a system that automatically filters unethical prompts so that ChatGPT cannot answer questions related to the fabrication of information, war, genocide, and similar topics. Additionally, new jobs such as generative AI researcher and prompt engineer have emerged, dedicated to finding the keywords and prompts that optimize a model's performance.
As AI becomes ever more present in our lives, we must change the way we think about modern media. More than ever, the burden is on us to separate fact from fiction on social media platforms, because AI can fabricate information at scale. The age of AI could become an era of great progress and connection, and it is our responsibility as humans to make sure that it does.