One of the most prominent problems with AI in content creation is trust. Readers feel cheated when they open an Instagram reel, a news article, or a tweet and their spidey senses start to tingle.
“Something feels off. Is this AI?” And there it is. The question lots of people are afraid to ask themselves these days: Is x thing AI?
Well… maybe. We’ve come to a point where AI-generated images are strikingly realistic, and so are voice, video and most definitely text outputs.
But the problem of trust doesn’t lie exclusively with AI. It is a publishing problem. Content creators are increasingly using AI tools to generate synthetic content, and the results are not being properly labeled. While one marketing agency might be upfront about its latest campaigns being developed partially or entirely with AI, other businesses prefer not to disclose anything about their creative processes.
But why is that? We all know about the proliferation of open-source AI models and how companies around the world are driving the cost of using them to almost zero. There’s no competitive advantage in using AI to write blog posts, create social media images, or generate videos without human intervention; since anyone has access to these tools today, any advantage wouldn’t come from the use of AI itself. So why the secrecy?
At DailyBot, for example, we love AI in some regards and not in others. We certainly love seeing our PMs and designers use AI to prototype new tools and bridge the gap between developers and product teams. We’re not big fans of holding AI over people’s heads as a permanent threat that their jobs will soon become “obsolete.”
We value AI as a tool that dramatically frees up workers’ time and reduces busywork so people can focus on more important tasks. To figure out a safe way for AI to fulfill this role, we must be proactive in advocating for responsible use of the tools, technologies, and workflows that make up this new ecosystem.
As artificial intelligence matures, we will eventually see systems that deploy, maintain, and deprecate AIs for all kinds of jobs. It’s in our best interest to get ahead and propose more ordered, systematic, and responsible ways of using AI that truly benefit humanity: systems that maintain integrity in the way we educate or entertain people with content, rather than being just the byproduct of a false arms race toward the next business objective.
We’re still far from definitive solutions to the trust problem of AI-generated content, but I don’t want to leave you empty-handed, so here’s a thought: what if we created designated spaces for AI within the constraints of each project?
For instance, if your business wants to bring AI into its content strategy, why not outline from the very beginning a set of rules everyone should follow? This way you protect your editorial workflows and can be transparent with your audience about which workflows use AI, to what extent, and which remain entirely human.
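To make that concrete, here is a minimal sketch of what such a rulebook could look like if you kept it as a small, reviewable policy file. Everything in it is hypothetical for the sake of illustration: the workflow names, the involvement levels, and the disclosure labels are examples, not a prescribed standard.

```typescript
// Hypothetical content policy: which workflows may use AI, to what extent,
// and how that usage is disclosed to the audience.
type AiInvolvement = "none" | "assisted" | "generated";

interface WorkflowPolicy {
  workflow: string;         // e.g. "blog-posts", "social-images"
  aiInvolvement: AiInvolvement;
  disclosure: string;       // label shown to readers, if any
  humanReview: boolean;     // does a person approve before publishing?
}

const contentPolicy: WorkflowPolicy[] = [
  { workflow: "blog-posts",    aiInvolvement: "assisted",  disclosure: "Drafted with AI, edited by our team", humanReview: true },
  { workflow: "social-images", aiInvolvement: "generated", disclosure: "AI-generated image",                  humanReview: true },
  { workflow: "interviews",    aiInvolvement: "none",      disclosure: "",                                    humanReview: true },
];

// A simple check a publishing pipeline could run before anything goes live.
function requiresLabel(workflow: string): boolean {
  const policy = contentPolicy.find((p) => p.workflow === workflow);
  // Unknown workflows default to requiring a label, erring on the side of disclosure.
  return policy ? policy.aiInvolvement !== "none" : true;
}
```

The exact format matters far less than the fact that the rules are written down, agreed on, and visible to everyone who publishes under your name.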
It’s more than understandable if some people don’t want to interact with AI at all, but as this technology evolves, the problem will only get worse as AI-generated content becomes harder to tell apart from human work. Taking action and creating designated spaces for AI to do its job while you do yours seems like a first step in the fight against a world where “nothing is authentic.”