Gartner outlines new ways for businesses to protect themselves from AI-generated data.

Businesses need to be far more careful about the data they trust, new research from Gartner warns, since a growing share of it is now generated by AI.
As companies pour more money into generative AI (a recent Gartner survey found that 84% plan to increase spending on it this year), a concern is growing: future AI models may end up training on the output of earlier models, a degenerative feedback loop known as “model collapse.”
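To see why that loop is dangerous, consider a toy simulation (our illustration, not Gartner’s analysis): a simple statistical model is fitted to data, then each new generation is trained only on samples drawn from the previous generation’s model. Because generative models tend to under-represent rare events, diversity erodes with every round.

    import random
    import statistics

    # Toy sketch of "model collapse": fit a Gaussian "model" to data, then
    # train each new generation solely on samples drawn from the previous
    # generation's model. To mimic a model that under-represents rare
    # events, samples beyond two standard deviations are dropped before
    # refitting, so the distribution's spread shrinks every generation.

    random.seed(7)

    # Generation 0: "human" data drawn from a standard normal distribution.
    data = [random.gauss(0.0, 1.0) for _ in range(1000)]

    for generation in range(10):
        mu = statistics.fmean(data)
        sigma = statistics.stdev(data)
        print(f"generation {generation}: mean={mu:+.3f}, stdev={sigma:.3f}")
        # The next generation trains only on the previous model's output,
        # with tail samples lost (the model "forgets" rare events).
        samples = [random.gauss(mu, sigma) for _ in range(1000)]
        data = [x for x in samples if abs(x - mu) <= 2 * sigma]

Each generation’s estimated spread shrinks, which is the statistical signature of collapse: the models converge on a bland average and lose the tails of the original human data.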
To mitigate this risk, Gartner advises companies to change how they handle unverified data. Its recommendations include appointing an AI governance leader who works closely with data and analytics experts; improving collaboration across departments by creating cross-functional groups that draw on cybersecurity, data, and analytics staff; and updating existing security and data management policies to address the risks posed by AI-generated data.
Gartner expects that by 2028, half of all organizations will be forced to adopt a “zero-trust” approach to data governance because of the sheer volume of untrustworthy AI-generated data flooding their systems.
“Businesses can’t just blindly trust data anymore or assume it came from a human,” Gartner managing VP Wan Fui Chan explained in a statement. “With AI-generated data becoming so widespread and often impossible to tell apart from human-made data, adopting a ‘zero-trust’ mindset – one that insists on strong authentication and verification – is absolutely crucial to protect a company’s operations and finances.”
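Gartner does not prescribe a particular verification mechanism, but in practice a zero-trust posture means rejecting by default any record that cannot prove its origin. A minimal sketch, assuming producers authenticate each record with an HMAC over a shared secret (the key, record fields, and function names here are illustrative, not part of any Gartner guidance):

    import hmac
    import hashlib

    # Illustrative zero-trust ingestion check: every record must carry an
    # HMAC tag proving it came from a trusted, authenticated producer.
    # Records that fail verification are rejected, not trusted by default.

    SECRET_KEY = b"shared-secret-from-key-management"  # hypothetical key

    def sign_record(payload: bytes) -> str:
        """Producer side: attach an HMAC-SHA256 tag to a record."""
        return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

    def verify_record(payload: bytes, tag: str) -> bool:
        """Consumer side: accept the record only if the tag checks out."""
        expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, tag)

    # A record from an authenticated pipeline verifies; an unattributed
    # (possibly AI-generated) record with no valid tag is rejected.
    record = b'{"source": "sales-db", "value": 1200}'
    tag = sign_record(record)
    print(verify_record(record, tag))                    # True: ingest
    print(verify_record(b'{"source": "unknown"}', tag))  # False: reject

In a real deployment the shared secret would live in a key management system and signing would happen at the pipeline boundary, but the principle is the same: data is untrusted until it authenticates.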
What makes things even more complicated, Chan added, is that governments around the world will likely have different ideas about how to handle AI. “Rules might vary quite a bit from one region to another, with some places wanting much tighter controls on AI-generated content, while others might take a more relaxed view,” he noted.
For a clear example of how AI-generated errors can undermine data governance, look no further than Deloitte Australia, which had to refund part of a government contract after its final report was found to contain AI-generated mistakes, including citations to legal sources that did not exist.
This article was originally published on CIO.
