Designated spaces for AI

Published on February 12, 2025

One of AI’s most prominent problems when it comes to content creation is trust. Readers feel cheated when they open an Instagram reel, a news article, or a tweet and their spidey senses start to tingle.

“Something feels off. Is this AI?” And there it is. The question lots of people are afraid to ask themselves these days: Is x thing AI?

Well… maybe. We’ve come to a point where AI-generated images are strikingly realistic, and so are voice, video and most definitely text outputs. 

But the problem of trust doesn’t lie exclusively with AI. It is a publishing problem. Content creators are increasingly using AI tools to generate synthetic content, and the results are not being properly labeled. While one marketing agency might be upfront about stating that its latest campaigns were developed either partially or entirely with AI, other businesses prefer not to disclose anything about their creative processes.

But why is that? We all know about the proliferation of open-source AI models and how companies around the world are driving usage costs to almost zero. There’s no competitive advantage in using AI to write blog posts, create social media images, or generate videos without human intervention; or at least, any such advantage wouldn’t come from the use of AI itself, since anyone has access to these tools today. So why the secrecy?

At DailyBot, for example, we love AI in some regards and not in others. We certainly love to see our PMs or designers taking advantage of AI tools to prototype new tools and bridge the gap between developers and product teams. We’re not big fans of holding AI over people’s heads as a permanent threat that their jobs may soon become “obsolete.”

We value AI as a tool that dramatically frees up workers’ time and reduces busywork so people can focus on more important tasks. For AI to fulfill this role safely, we must be proactive in advocating for responsible use of the tools, technologies, and workflows that make up this new ecosystem.

As artificial intelligence matures, we will eventually see systems that deploy, maintain, and deprecate AIs for all kinds of jobs. It’s in our best interest to get ahead and propose more ordered, systematic, and responsible ways of using AI that truly benefit humanity: systems that maintain integrity in the way we educate and entertain people, and that aren’t just the byproduct of a false arms race towards the next business objective.

We’re still far from definitive solutions to the trust problem of AI-generated content, but I don’t want to leave you empty-handed, so here’s a thought: what if we created designated spaces for AI within the constraints of each project?

For instance, if your business wants to bring AI into its content strategy, why not outline from the very beginning a set of rules everyone should follow? That way you protect your editorial workflows and can be transparent with your audience about which workflows use AI, to what extent, and which remain fully human.
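To make that concrete, here is a minimal sketch of what such a rulebook could look like if a team chose to encode it. Everything in it is hypothetical and just one possible shape: the workflow names, the involvement levels, and the `disclosure_label` helper are all illustrative, not an existing tool or standard.

```python
from dataclasses import dataclass
from enum import Enum


class AIInvolvement(Enum):
    """How much of a workflow is delegated to AI (a hypothetical scale)."""
    NONE = "all human"
    ASSISTED = "human-led, AI-assisted"
    GENERATED = "AI-generated, human-reviewed"


@dataclass(frozen=True)
class WorkflowRule:
    workflow: str           # e.g. "blog drafts", "social images"
    involvement: AIInvolvement
    must_disclose: bool     # does this designated space require a public label?


# The team's designated spaces, agreed on before any AI touches the pipeline.
EDITORIAL_RULES = [
    WorkflowRule("research summaries", AIInvolvement.ASSISTED, must_disclose=True),
    WorkflowRule("social media images", AIInvolvement.GENERATED, must_disclose=True),
    WorkflowRule("opinion pieces", AIInvolvement.NONE, must_disclose=False),
]


def disclosure_label(workflow: str) -> str:
    """Return the transparency label a published piece should carry."""
    for rule in EDITORIAL_RULES:
        if rule.workflow == workflow:
            if rule.must_disclose:
                return f"This piece is {rule.involvement.value}."
            return ""  # no label required for all-human spaces
    raise ValueError(f"'{workflow}' has no designated space; define one first.")


print(disclosure_label("social media images"))
# -> This piece is AI-generated, human-reviewed.
```

The point isn’t the code itself but the discipline it enforces: if a workflow isn’t on the list, it has no designated space, and that forces the conversation to happen before anything gets published unlabeled.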

It’s more than understandable if some people don’t want to interact with AI at all, but as the technology evolves, the problem will only grow as AI-generated content becomes harder to tell apart from human work. Taking action and creating designated spaces for AI to do its job while you do yours seems like a first step in the fight against a world where “nothing is authentic.”
