Every business is struggling with how to incorporate artificial intelligence into its workflow, and journalism is no different. At CNN we have strict rules against writing any content using AI, but other news organizations have experimented with online content written entirely by AI. And AI-written content is only one small use case for incorporating AI into the future of journalism. AI is pretty good at writing like a person but even better at recognizing patterns, so it can help news organizations tailor their content and their advertising to individual users based on their patterns of online behavior.
I believe AI can go even further. This fall we gathered at the University of Missouri-Columbia, one of the pre-eminent journalism schools, to look into the future of using AI in storytelling. We visited a company called Healium, which has been using virtual reality technology to tell stories. These experiences can improve sleep, mindfulness, focus and positivity. Imagine if, instead of turning to their phones, our kids put on a VR headset and swam with whales under the ocean. These experiences have been shown to affect brainwave activity and could help kids learn to be present and concentrate for longer periods of time.
We also did group projects looking at deeper ways AI could be applied to journalism. My group looked at how AI could be used to prevent future school shootings by analyzing online activity for certain patterns. These would not be the obvious red flags like, “I want to be a school shooter.” Rather, AI can identify deeper patterns in words and behavior and search for those. Other groups looked at digital mausoleums, apps that could play therapist to kids and other ways to incorporate AI into storytelling. The possibilities are endless.
Unfortunately, with so few guardrails, the dangers are also endless. Could kids turn to AI for companionship? We have already seen that happen: a 14-year-old Florida boy killed himself after being encouraged by an online chatbot. AI-written articles from CNET, ESPN and other major news outlets have been found to contain factual errors. And one company, Hoodline, even created fake bios for reporters who were actually bots, publishing their articles as if bots were the next wave of journalists.
One thing is clear about the future of AI: humans must be involved in the process. Whether it is fact-checking articles, supervising how AI interacts with your children or monitoring data patterns for inconsistencies, humans are the key to making AI work long-term. How will you incorporate AI into your business?