Everyone is sounding the alarm about Artificial Intelligence while simultaneously adopting it. Experts are debating AI’s role in content creation and its impact on the marketplace. On one hand, the future of original content looks bleak if bots can scoop up existing words and images and recreate anything—quicker and cheaper. On the other hand, AI is a useful tool in performing rote content delivery and is already integrated into our online systems and lives. We all have been served and misled by AI at some point. Our governance, best practices, and laws need to catch up.
Instead of fighting against AI, we should reaffirm creativity alongside it. AI doesn’t replace good writing and meaningful ideas. Digital communicators must elevate original content to distinguish it from machine-generated pablum.
Social Lessons
Artificial Intelligence is an outgrowth of social media, and its effects on digital content have not been fully studied. However, we’ve learned a few things in the last decade. Social media has arguably exerted a downward pressure on expression. The medium’s race for fleeting, low-information, viral clickbait has transformed everything from website design to news syndication to how we speak with each other.
Social media has made commodification the entire goal of information sharing. Fake, if it’s pretty or slick, is the new gold standard—and has greased the wheels for AI to take over.
Back to Basics
While the risks of unregulated AI warrant our full attention, the basics of content creation remain the same. AI does not change communications fundamentals. To exceed the rote and superficial info that AI spits out, original content requires competent writing, innovative visuals, and thoughtful delivery.
AI is built to be efficient, not unique. Machines manufacture generic words, stock images, and recycled designs. Digital professionals create content by expressing new ideas creatively, telling stories to illustrate a point, delivering complex data succinctly, and citing sources. Demonstrating care for the readers of your content will capture, retain, and persuade them.
No Rivals
Automatically generated content is getting better and finding acceptance, but it’s not raising the bar. An MIT study on AI content suggests people want to know the source of a piece of writing to gauge its trustworthiness, depending on the type of content. People are ambivalent about AI writing product advertisements or handling online transactions; the jury is still out on everything else. The mantra, “content is everything,” may be true, but content value and authority are open to interpretation.
Artificial Intelligence is spectacularly bad at delivering fresh and original content. AI cannot easily hide its blunt copying or erase its fingerprints. For information that matters, some readers look for meaning in the text, others evaluate the author’s credibility, and still others check secondary sources. People are becoming sophisticated AI detectors (and using AI detection software).
Nobody’s Fool: Cute Rabbits
People are honing their AI interpretative skills. For example, if I want to drive more users to this blog post, I may add popular SEO keywords like “cute rabbits” in specific ways so that cute rabbits are captured in search engines by everyone interested in cute rabbit stories to increase the likelihood that more people will visit this blog, despite having nothing to do with cute rabbits, or the aesthetic appeal of bunnies.
The paragraph above does not make sense and is out of context with the rest of this blog post. You may not be able to tell whether the text or image is AI generated, but it doesn’t matter. The content you are seeking is not here, so the SEO tricks failed. Readers will leave and never return.
People can make judgments on the fly, and their ability to interpret AI content will improve with exposure and practice.
Digital Epistemology
It’s human nature to intuitively look for what’s real, but we also suspend reality at times if we think we are in control. College professors know how slovenly AI is in “helping” students communicate ideas, but they also use it as a teaching moment. Digital editors find AI-generated writing painfully obvious, and they work to ensure originality continues to be relevant.
We all have a responsibility to be better readers in the digital world. If people uncritically accept biased and trite content, they are susceptible to manipulation regardless of AI. Education, in the broadest sense, is the remedy to any type of thought control, technological or otherwise. Seeing through an artifice takes extra effort, but there’s always a learning curve when adapting to a new technology that has the potential to replace knowledge.
Smarter readers will help mitigate bad actors exploiting AI. To paraphrase Abraham Lincoln:
You can use AI to manipulate some of the people some of the time, but not everyone and not all of the time.
The Real Villains
Computer-generated mimicry is ripe for exploitation at a mass scale. The main problem, however, lies with lazy writers and malicious producers, not AI itself. Cheaters always take the easy route. Hucksters and power-wielders have used propaganda techniques throughout time; AI is merely the latest technology in their arsenal. We need to enact sound and enforceable policies to protect us from AI-based fraud and manipulation.
The Big Leap
Communicating via writing and visuals is a complex, ever-changing craft. Language, symbols, and meaning evolve constantly, carrying both truth and lies. As Artificial Intelligence grows in use, we must concentrate on producing better-quality content. In the AI age, computers can copy and paste, so humans must exercise their robust imaginations.
In the future, people will want to quickly evaluate authenticity and assess the value of content before taking any action. To help them make the leap, digital communicators will need to create excellent original content and organize it well. That’s nothing new.
Special thanks to Deepthi Welaratna for her insightful contributions.
