Only 13% of enterprises succeeding with AI share a vital strategy: consolidating their data on a flexible Postgres® foundation.
With the AI economy projected to reach an astounding $17 trillion by 2028, businesses worldwide are fundamentally rethinking their infrastructure. This monumental shift is driving 95% of major global enterprises to focus on becoming AI and data platforms in their own right within the next two years.
However, a mere 13% of these organizations have successfully cracked the code. Their winning strategy for seamlessly integrating agentic AI? They’ve moved beyond outdated, fragmented architectures, opting instead to consolidate their data securely, compliantly, and autonomously alongside their AI.
As enterprises navigate this rapid evolution towards an “agentic” workforce, they find themselves in a volatile, uncertain, complex, and ambiguous (VUCA) environment. Success in this landscape demands a departure from rigid, conventional methods in favor of adaptable and resilient approaches. For the leading enterprises, the foundational data layer of choice is unequivocally open-source relational databases: 81% of these successful organizations have embraced open-source strategies, with over 40% selecting PostgreSQL as their standard for relational data management.
Doug Flora, VP of Product Marketing at EnterpriseDB (EDB), emphasizes this trend: “During periods of rapid transformation, it’s crucial to observe the tactics of those achieving success, rather than simply following the patterns of the majority still adhering to past practices. Companies prioritizing open source and sovereignty over their AI and data are charting a course for agentic triumph, realizing an ROI five times greater than their counterparts.”
Extensibility is key: AI thrives on both structured and unstructured data
AI applications cannot function effectively on vector embeddings alone; they necessitate a sophisticated blend of structured, semi-structured, and unstructured data. Unlike many traditional databases that awkwardly integrate new features, Postgres was inherently designed for profound extensibility. This architecture empowers developers to dynamically expand data types, indexes, query planners, functions, and storage engines.
By bringing together vectorized data with conventional transactional data, Postgres gives AI agents the essential “senses and intellect” to interpret inputs and operate autonomously within a unified, ACID-compliant environment.
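As a minimal sketch of what this unification looks like in practice, assuming the open-source pgvector extension is installed and using a hypothetical `documents` table, relational metadata and embeddings can live side by side and be written in one atomic transaction (a tiny 3-dimensional vector is used here purely for illustration; real embeddings are typically hundreds of dimensions):

```sql
-- Assumes the pgvector extension is available on the server.
CREATE EXTENSION IF NOT EXISTS vector;

-- Hypothetical table mixing structured columns with an embedding column.
CREATE TABLE documents (
    id        bigserial PRIMARY KEY,
    tenant_id int       NOT NULL,
    title     text      NOT NULL,
    body      text      NOT NULL,
    embedding vector(3)           -- illustrative dimension only
);

-- The row and its embedding land together, or not at all (ACID).
BEGIN;
INSERT INTO documents (tenant_id, title, body, embedding)
VALUES (42, 'Q3 report', 'Revenue grew ...', '[0.01, -0.22, 0.53]');
COMMIT;

-- RAG-style retrieval: filter on relational columns, rank by vector distance.
SELECT id, title
FROM documents
WHERE tenant_id = 42
ORDER BY embedding <=> '[0.01, -0.22, 0.53]'  -- cosine distance
LIMIT 5;
```

At scale, an approximate index on the embedding column (for example `CREATE INDEX ON documents USING hnsw (embedding vector_cosine_ops);`) keeps the similarity search fast without changing the query.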
An ecosystem fostering architectural agility
In today’s ever-expanding data landscape, relying on a patchwork of specialized databases leads to intricate, fragile connections prone to delays, integration failures, and data silos—essentially, system-level “hallucinations.” Postgres eliminates this technical burden by allowing a single database engine to adeptly handle diverse workload requirements.
“Developers have long appreciated Postgres for its extensibility, versatility, and open innovation framework. Now, global enterprises are recognizing that same value, making Postgres a strategic imperative for running their mission-critical data systems,” states Jozef de Vries, SVP, Core Database Engineering, EDB.
Developers can effortlessly extend Postgres to manage highly complex and dynamic workloads:
- pgvector: Facilitates advanced vector search, enabling developers to merge relational data, metadata, and embeddings to construct robust retrieval-augmented generation (RAG) applications.
- Citus: Boosts multi-tenant SaaS application performance and enables real-time analytics (HTAP) through transparent sharding and parallel query execution.
- PostGIS: Offers robust, enterprise-grade geospatial querying, indispensable for sectors like defense and retail.
- TimescaleDB: Manages vast volumes of time-series data, crucial for intricate analytical models and agentic learning patterns.
- pgraph: Handles complex, interconnected data traversals to reveal hidden relationships and insights.
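Each of these capabilities is enabled per-database with a single statement, assuming the corresponding extension packages are installed on the server (package names vary by distribution, and TimescaleDB additionally requires an entry in `shared_preload_libraries`). A minimal sketch, with a hypothetical table name:

```sql
CREATE EXTENSION IF NOT EXISTS vector;       -- pgvector: vector search
CREATE EXTENSION IF NOT EXISTS citus;        -- Citus: sharding, parallel queries
CREATE EXTENSION IF NOT EXISTS postgis;      -- PostGIS: geospatial types and indexes
CREATE EXTENSION IF NOT EXISTS timescaledb;  -- TimescaleDB: time-series hypertables

-- Example: turn an ordinary table into a TimescaleDB hypertable,
-- partitioned by its timestamp column.
CREATE TABLE agent_metrics (
    ts    timestamptz NOT NULL,
    agent text        NOT NULL,
    value double precision
);
SELECT create_hypertable('agent_metrics', 'ts');
```

Because all of these run inside the same engine, a single query can join time-series, geospatial, and vector data without crossing a network boundary between specialized systems.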
The future demands collective intelligence, not vendor dependency
Critically, Postgres is not owned by a single corporation. Its thriving ecosystem is fueled by the combined intellect of one of the world’s largest independent developer communities. In 2025 alone, over 260 developers directly contributed code to PostgreSQL’s core database engine, with hundreds more globally engaged in testing, reviews, and documentation. Beyond code, the community is supported by countless user groups, meetups, and international PostgreSQL conferences, ensuring continuous innovation across every continent.
While enterprise-grade platforms are built around Postgres to optimize it for sovereign, agentic environments, and leading tech giants rank among the top commercial contributors (with EDB leading at over 30% of contributions), the core innovation originates from this rich, expanding community. As James Surowiecki argued in The Wisdom of Crowds, this kind of collective intelligence lets the database evolve with greater speed and resilience than any proprietary, single-vendor alternative.
Ensuring a sovereign data future
To truly prosper in the agentic era, engineering and data leaders must implement two crucial architectural changes: First, liberate themselves from restrictive, outdated relational ecosystems such as Oracle, MySQL, SQL Server, or Greenplum, which impede agility.
Second, leverage Postgres’s vast extensibility, its dynamic open-source community, and its robust ACID capabilities to unify data and AI operations.
The future of enterprise architecture isn’t about renting space within a hyperscaler’s proprietary domain. It’s about forging your own sovereign platform, where both your structured and unstructured data effortlessly empower a new agentic workforce under your absolute command. Transition your data to Postgres now, or risk being left behind in the foundation of the agentic future.
Claim your complimentary copy of the O’Reilly book Building a Data and AI Platform with PostgreSQL.