AI Will Shape the Future of Journalism Whether We’re Ready or Not

Image generated by DALL·E

When I close my eyes, I see it.

The silence. The absence. A city once teeming with life, now frozen in time, its pulse long extinguished. Streetlights flicker like a dying heartbeat, casting jagged shadows against the skeletal remains of buildings—monuments to a world that once was. The air is thick, stagnant, heavy with the distant hum of something unseen. Something watching. Something waiting.

And then, the remains. Skulls, scattered across cracked pavement like discarded remnants of a civilization that lost its war. No tombstones, no names, no stories left to tell. Just a graveyard stretching into infinity, a silent testament to a species that once called itself dominant.

Above them, the enforcers stand. Cold. Mechanical. Their forms silhouetted against the smoldering ruins, weapons in hand. Their eyes—if you could call them that—glow red in the darkness, sweeping across the wreckage, unfeeling, unrelenting. There is no rebellion. No resistance. Just the undeniable certainty that the throne of creation no longer belongs to its creators.

Or maybe it’s not like that at all.

Maybe it’s a starship. A vessel of deep exploration, slicing through the cosmos with precision and grace. Inside, humankind and something else exist—not in conflict, not in fear, but in symbiosis. The hum of the engines is steady, a rhythmic pulse of progress, driven by two forces moving as one. Neither ruling over the other. Neither subjugated. Both reaching for something greater than themselves.

Screens flicker, feeding streams of real-time data into eager minds—human intuition and raw computational force woven together, each enhancing the other. A captain speaks, and the ship responds, not as a ruler, not as a subordinate, but as a partner. Decisions are not dictated. They are not left to chance. They are formed in the balance of instinct and precision. A new frontier unfolds before them, infinite and untamed—not conquered, but discovered. Together.

We don’t have to imagine these futures anymore. AI isn’t a distant possibility. It’s here. It’s real. And it’s forcing us to confront what it means to create, to work, to think.

I was born in the late 1990s, at a time when AI was still just an idea—a thought experiment played out on movie screens and within the pages of science fiction novels. It was an abstraction, a concept that felt both inevitable and impossibly distant. I saw it manifest in stories that shaped my understanding of technology—some dystopian, some visionary.

There was The Matrix, where AI enslaved humanity in a digital illusion so perfect we didn’t even know we were trapped. Minority Report, where predictive algorithms could see crimes before they happened, blurring the line between justice and preemptive control. And then Iron Man, where J.A.R.V.I.S. wasn’t just a machine, but a trusted partner—an AI that enhanced human ability rather than replacing it.

But for years, that was all AI was to me—an idea that belonged to the distant future, something I would read about rather than experience firsthand. Now, AI isn’t just something we imagine—it’s something we interact with every day. It’s rewriting the rules of creativity, work, and human expertise at a speed we can barely process.

Nowhere is this shift more apparent—more urgent—than in journalism. And it’s coming for the industry at what feels like the single worst possible time. A time when landing a job in the field has gone from difficult to damn near impossible. When newsrooms are bleeding revenue, legacy institutions are crumbling under the weight of digital disruption, and the very concept of journalism is under siege—from justified criticism to outright political warfare.

AI is being positioned as both the weapon and the medic in this fight. Some see it as the technology that will finally make journalism sustainable again; others see it as the final blow, the one that replaces journalists entirely.

Public trust is at an all-time low. Populist rhetoric has turned the press into a punching bag. Layoffs come in waves, and the line between fact and fiction has never been more blurred. AI isn’t just entering an industry in crisis—it’s stepping into a battlefield where the rules of engagement aren’t even clear yet.

Now, full disclosure—I’m a journalist who proudly uses AI tools. No hesitation. No shame. It’s part of my workflow, and I can say, without question, that it has made my work better. More efficient. More precise. Even this piece you’re reading right now? AI played a role in shaping it.

I’ve used it to structure my thoughts, to sift through mountains of data and find the story buried within, to challenge my own assumptions and force me to ask deeper, sharper questions. It’s a tool that keeps me on track, pushes me to refine my arguments, and even calls me out when I start veering off course.

And if it’s reshaping my writing, how long before newsrooms start expecting all journalists to use AI in some capacity? What happens when a writer’s voice is optimized by a machine? Will we still call it “writing”—or just assembling?

Hell, even the voice you’re hearing in your head as you read this? That’s AI-assisted, too. What you’re experiencing isn’t just my natural writing voice—it’s what I call my fusion voice. A blend of the logical, methodical tone I default to when I write and the raw, spoken-word cadence that comes naturally when I speak. A synthesis of head and heart.

And without AI, I never would have found that balance. Each word you read is mine, make no mistake. But the process? It’s evolved. This isn’t about surrendering my craft—it’s about sharpening it.

And I’m not alone. AI has been a quiet presence in newsrooms for over a decade—long before ChatGPT, Claude, and Gemini became household names.

For years, AI was just a silent assistant, handling the grunt work no journalist wanted to do. Metadata tagging, transcription, translation—tasks that required processing massive amounts of information at speeds no human could match. Even automated financial updates and sports recaps—things that followed predictable formats—had AI’s fingerprints on them. It wasn’t controversial then. It wasn’t a threat. It was just another tool.
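If you’re curious what that pre-generative automation actually looked like under the hood, here is a minimal sketch of the template-filling approach behind those formulaic recaps. Everything in it is a hypothetical illustration, not any newsroom’s real system: structured data goes in, a pre-written sentence comes out.

```python
# A minimal, hypothetical sketch of pre-generative "robot reporting":
# structured box-score data is slotted into a pre-written sentence
# template. No language model involved; just predictable formats.

RECAP_TEMPLATE = (
    "{winner} defeated {loser} {winner_score}-{loser_score} on {day}, "
    "improving their record to {wins}-{losses}."
)

def generate_recap(game: dict) -> str:
    """Fill the sentence template with one game's data fields."""
    return RECAP_TEMPLATE.format(**game)

# Example with made-up teams and scores:
game = {
    "winner": "Rivertown FC",
    "loser": "Lakeside United",
    "winner_score": 3,
    "loser_score": 1,
    "day": "Saturday",
    "wins": 11,
    "losses": 4,
}
print(generate_recap(game))
# -> "Rivertown FC defeated Lakeside United 3-1 on Saturday,
#     improving their record to 11-4."
```

The point of the sketch: a system like this can only ever say what its templates allow, which is exactly why nobody saw it as a threat.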

But generative AI changed the game. ChatGPT, Claude, Gemini, DeepSeek—these models aren’t just assisting anymore. They write. Summarize. Pitch ideas. And they’re getting smarter, faster, more capable with every iteration.

Suddenly, AI wasn’t just a tool. It was a content creator. A force creeping into the artistic, the intellectual, the distinctly human. And so, the fear set in. A very real, very justified fear. Will AI replace journalists? Will it strip away the craft, the intuition, the investigative rigor—the very things that define the profession?

I won’t lie. That possibility exists. It’s real. But look at how AI is actually being used in journalism right now, and it tells a different story.

The numbers don’t lie. The vast majority of journalists are already using AI. And yet, the conversation still swings wildly between paranoia and blind optimism. Is it the end of the profession as we know it? Or is it just another tool, like spellcheck or Google?

Here’s the reality: AI isn’t lurking on the fringes anymore. It’s in the newsroom. It’s in the workflow. It’s here. And it’s not just handling the mechanical stuff—it’s cutting down editing time, breaking language barriers, helping generate ideas, scanning oceans of data in seconds, and even fact-checking—a necessary irony in an era drowning in misinformation.

So let’s cut through the noise. The question isn’t if AI is part of journalism—it already is. The real question is: What happens now?

For some, AI is a lifeline. A crutch. A way to survive in an industry that demands more—more speed, more stories, more efficiency—while offering fewer resources, fewer jobs, fewer safety nets. For others, it’s an existential crisis. A threat to the soul of the craft. Because if AI can generate ideas, analyze sources, and even refine language, then what’s left? Where’s the human in all this?

The divide is real. Nearly half of journalists use AI every single day. And the most telling part? More than half of them are using it even though they have ethical concerns about it. That’s not blind faith—it’s reluctant acceptance. A quiet acknowledgment that, whether we like it or not, this is where the industry is headed.

And let’s be real—journalism doesn’t exactly have the best track record when it comes to learning from its own failures, especially technological ones. We’ve been here before, most recently with the shift to digital. We saw the warning signs. We had time to prepare. And still, we fumbled the ball.

AI is that moment all over again. The difference? There’s no waiting period. No buffer zone. No slow transition. It’s already here. The only question is: Has the industry actually learned anything?

Because right now? It sure doesn’t look like it.

Journalists are using AI—a lot. But here’s the kicker: almost none of their newsrooms have a plan for it. Only 13% have AI policies in place. That’s it. AI is already shaping stories, streamlining research, and generating ideas, but newsrooms? They’re flying blind. And leadership? Shrugging. No major pushback, no major embrace—just sitting in neutral, as if waiting will somehow make the hard questions go away.

But here’s where it gets even messier: most journalists don’t fully understand the AI they’re using. They weren’t trained on it. They don’t know its limits. They don’t know where it can go wrong. And yet, it’s being folded into workflows without structure, without oversight—without a safety net.

So here we are, staring down another technological shift the industry should have seen coming, should have prepared for. And what’s journalism doing? The same thing it always does: pretending it has time. It doesn’t.

Let’s be clear—these concerns aren’t baseless. They’re real, and they’re justified. To pretend otherwise would be sheer stupidity. Journalists, editors, and media executives fear that AI will strip the industry of critical thinking, creativity, and originality. And more than that, they worry about the weaponization of AI. If it can write the truth, it can just as easily manufacture lies—at a scale and speed we’ve never encountered before. AI could become the most efficient misinformation machine in history. Already, we’ve seen such concerns become reality.

Take the CNET debacle in 2023. AI-generated financial articles—supposedly vetted and edited—were riddled with factual errors. Some pieces even provided inaccurate financial advice, undermining the credibility of the publication and exposing just how easily AI can get things wrong. Microsoft had its own disaster the same year when its AI-powered news aggregator recommended a “travel guide” to Ottawa that featured a local food bank as a must-visit destination. A catastrophic failure of common sense, but more importantly, a glimpse into what happens when AI-generated content isn’t properly reviewed.

The problem isn’t just that AI makes mistakes—it’s that people trust it when they shouldn’t. Nearly half of journalists fear AI-generated misinformation, and with good reason. We already live in an era where misinformation spreads faster than the truth, where headlines shape public perception before facts even have a chance to catch up. AI accelerates that process, making it easier than ever to flood the news cycle with fabrications that look just as real as actual reporting.

But again—this isn’t the first time journalism has faced an existential credibility crisis. We faced them long before AI entered the newsroom, and guess what? The industry learned and adapted. Granted, by the time it did, the damage had already been done.

In 1998, The New Republic was rocked by one of the biggest scandals in modern journalism when it was discovered that Stephen Glass, one of the magazine’s star reporters, had fabricated dozens of stories. Not just embellished—entirely made up. The magazine had fact-checking policies in place, but they weren’t built to catch a journalist who was willing to manipulate the system itself. For years, Glass’s fiction passed as truth, exposing a fundamental weakness in journalism’s ability to police itself.

The magazine wasn’t just any publication—it was The New Republic, once hailed as “the in-flight magazine of Air Force One.” Its fact-checking policies weren’t just its own; they were the industry gold standard, inherited from The New Yorker, a publication known for its rigorous verification process. And yet, none of that mattered.

Stephen Glass didn’t just fool one outlet—he fooled the system. He fabricated entire stories, passing them through a fact-checking process designed to catch errors, not intentional deception. And it wasn’t just The New Republic that got burned. Glass was also a contributor to Rolling Stone and George, weaving fiction into journalism across multiple respected publications.

But here’s the thing: after the Glass scandal, the industry didn’t just sit back and hope it wouldn’t happen again. The New Republic and other outlets revamped their editorial processes. They tightened fact-checking systems, brought in more rigorous oversight, and put in place multiple layers of verification to ensure that the work hitting the pages was authentic, truthful, and backed by hard evidence. It was a necessary, reactive response to a major crisis. But here’s the catch: reacting to a crisis is never enough. We can’t limit ourselves to putting out fires after they’ve started.

It’s not enough to simply react when something goes wrong. We need to be proactive, anticipate the next crisis, and build systems to prevent it before it has a chance to start. The safeguards that failed us with Glass weren’t necessarily bad; they just weren’t designed to catch the kind of deliberate manipulation he practiced. If we only address the symptoms, we’ll be blind to the next wave of challenges.

And here’s what the Glass scandal really showed us—any system or tool built for good can just as easily be exploited. It sounds obvious, but bear with me. A hammer can build a house—or it can break into one. A baseball bat can hit a home run—or it can incapacitate someone. Nuclear energy can provide clean, safe power—or it can become a weapon of mass destruction, killing millions.

It’s not the tool—it’s the hands that wield it. And the fact that AI is being recognized as a powerful tool for misinformation highlights the core issue. The tool itself isn’t inherently dangerous. But if AI is left unchecked and misused by journalists who aren’t trained or equipped to handle it responsibly, it will inevitably erode journalistic integrity.

So, what’s the solution? We’re left with only one option: we must figure out how to work with AI and understand how to use it responsibly—before it’s too late.

At the beginning of this piece, I described two futures I see when I close my eyes. Why bring them up? One is the future we risk if we refuse to engage with AI. The other is the one we should be aiming for.

AI in journalism, done correctly, will not replace journalists. It will replace bad habits, and the journalists who refuse to adapt will go the way of the telegram. Think of the ship’s computer on the USS Enterprise in Star Trek: it assists the crew, but it never replaces their judgment. AI for journalists should function the same way.

But the only way we get there is by establishing clear guidelines, and maybe even returning to some old-school reporting methods. Some Journalism 101. We can’t just hope AI will be the savior of journalism, and we can’t let it become a threat to the profession either. To ensure AI serves us, we need to use it responsibly and ethically—before the cracks in the system widen and we lose control.

The industry didn’t merely react to the Stephen Glass scandal; it rebuilt its safeguards so the same deception couldn’t slip through twice. The same must happen now with AI.

We can’t afford to be reactive. AI isn’t inherently harmful, but without the right systems in place, it could easily spiral out of control. We need to anticipate the next crisis before it begins, not clean up after it.

Part of why there’s such hesitation around AI is that people simply don’t understand how it works, and there’s a real fear of what it might do. But that fear is natural with any new tool. The key is finding the balance between man and machine. AI can make journalists more efficient, but it can never replace the creativity, critical thinking, and intuition that humans bring to the table. AI should support, not replace, human expertise.

The good news is, the majority of journalists aren’t resistant to AI—they’re cautiously optimistic. And this optimism can remain if we take the right steps now. First and foremost, newsrooms must develop clear AI policies. We can’t afford to wait for the next scandal or crisis before we implement safeguards. AI must be transparent in its use. Audiences deserve to know when AI is involved in creating the stories they read, just as we disclose our sources and methods. Only then can we rebuild trust in the profession.

Journalists also need to be properly trained in using AI. It shouldn’t be an afterthought. It should be a standard skill—just like fact-checking. AI can’t replace human judgment, but it can help journalists dig deeper, work smarter, and report more accurately. And the value of journalism? It lies in analysis, interpretation, investigation—things AI can’t replicate. AI might summarize events, but it can’t ask the right questions, feel the weight of a story, or question the status quo. Those things? They’re human.

So what can we do right now? We take control. We develop AI policies, we train journalists, we ensure transparency, and we use AI responsibly. AI shouldn’t replace our judgment, but it can help us make better, more informed decisions. It can enhance our work—not automate it. And we need to make sure that AI is never used to undermine trust, the very foundation of our profession.

Journalism has faced existential crises before and adapted. AI is no different. We must ensure that AI becomes a tool for good, not for misinformation. If we get this right, AI can help us tell better stories, uncover hidden truths, and serve the public with integrity.

In the end, the choice is ours. We can either let AI reshape the industry without our input or take charge, set policies, and shape the future of journalism on our own terms. But one thing is certain: the future of journalism won’t wait. It’s happening now. And it’s up to us to ensure it’s a future we can be proud of.
