
This article originally appeared on my Substack, Data Gravity, and was written in collaboration with Colin Zima and Jonathan Griffiths at Omni, as well as Erik Kuld at Inventive.
Business Intelligence (BI) has never stood still. Every decade, a new wave of tools emerges—promising to fix past problems but introducing new ones. In the 1990s, BI was rigid and IT-controlled, requiring SQL expertise and expensive infrastructure. The 2000s brought Tableau and Qlik, making data accessible but leading to data chaos and conflicting reports. The 2010s reintroduced governance and modeling with Looker and dbt, ensuring consistency but creating bottlenecks for business users.
Today, BI is fragmented. Some teams model everything in dbt, others rely on BI tools (Looker, Sigma, Tableau), and many use a mix of both. AI has entered the space, but without structured business logic, it struggles to deliver useful insights. The market for BI software reached $30 billion in 2024, yet companies still wrestle with balancing governance, self-service, and automation. Data teams need control, while business users want instant access to insights—without SQL barriers. The challenge is finding a single solution that works for both.
New BI vendors are reshaping the landscape by merging governance, flexibility, and AI-driven analytics—Omni integrates deep data modeling with self-service speed, while Inventive pushes the boundaries of AI-powered data exploration, reducing reliance on SQL and manual queries.
The History of BI: From Legacy Tools to AI-First BI
BI has evolved through four distinct waves, oscillating between centralized governance and self-service analytics. Each era brought innovations but also new challenges, shaping the BI tools we use today.
1990s: The Era of Legacy Data Modeling

In the 1990s, BI was built for structured, enterprise-wide reporting but was slow, expensive, and IT-controlled. Tools like BusinessObjects (later acquired by SAP), MicroStrategy, Cognos (later acquired by IBM), and Oracle BI required specialized expertise, extensive data modeling, and on-premises infrastructure. These platforms operated on predefined schemas and OLAP cubes, making report changes a long, complex process.
Strengths: Governance, security, and structured reporting for enterprises.
Challenges: High infrastructure costs, slow report generation, rigid data modeling, and reliance on on-premises hardware.
These BI systems ran entirely on-prem, requiring dedicated servers and large IT teams to maintain them. Scaling meant buying more physical hardware, making growth slow and expensive. ETL processes loaded data overnight, so reports were often outdated by the time they were used. Business users had little flexibility: reports relied on pre-aggregated OLAP cubes, and any new query had to go through IT—sometimes taking weeks.
While these tools ensured data accuracy and governance, they were rigid, costly, and reactive—paving the way for self-service BI in the 2000s.
2000s: The Rise of Self-Service BI & Data Visualization

The 2000s saw a shift away from IT-controlled BI with the rise of self-service analytics. Tools like Tableau and Qlik introduced drag-and-drop visualization, allowing business users to explore data without relying on IT (Microsoft's Power BI followed in the 2010s). These tools used in-memory processing engines, making analysis much faster than traditional OLAP cubes.
Strengths: Faster insights, easier access to data, and more intuitive interfaces.
Challenges: Lack of governance, inconsistent metrics, and fragmented reporting.
While self-service BI empowered business users, it also led to data chaos. Without centralized governance, teams often built conflicting reports from different data sources, making it difficult to trust the numbers. Many organizations still relied on on-premises databases, meaning data extracts had to be manually refreshed, leading to outdated reports.
Despite these challenges, the 2000s marked a turning point for BI adoption: analytics reached millions of users, putting data in more employees' hands than ever before and setting the stage for governed data modeling in the 2010s.
2010s: The Return of Governance & Modern Data Modeling

The 2010s were defined by a mass migration to cloud data warehouses. As companies moved from on-prem databases to Snowflake, BigQuery, and Redshift, data became more scalable and accessible. This shift disrupted traditional BI, as organizations no longer needed to pre-aggregate data in OLAP cubes—instead, they could query massive datasets directly in the cloud. However, self-service BI tools still lacked governance, leading to inconsistent metrics and data sprawl.
Strengths: Centralized data governance, scalable cloud-based analytics, and improved collaboration.
Challenges: Steep learning curves, reliance on data teams, and slower iteration cycles.
In response, Looker (2012) introduced LookML, a modeling layer that allowed companies to define a central data model while still enabling business users to explore insights. This approach ensured consistent metrics but also created new bottlenecks, as any changes required data team involvement.
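To make the modeling-layer idea concrete, here is a minimal sketch in Python rather than actual LookML; the metric names, SQL fragments, and table are illustrative assumptions:

```python
# Minimal sketch of what a modeling layer does (illustrative Python,
# not actual LookML): metrics are defined once, centrally, and every
# query is generated from the same shared definitions.

SEMANTIC_MODEL = {
    "revenue": "SUM(amount)",
    "active_users": "COUNT(DISTINCT user_id)",
}

def build_query(metric: str, dimension: str, table: str = "orders") -> str:
    """Generate SQL from the governed definition instead of letting each
    analyst hand-write (and subtly redefine) the metric."""
    return (
        f"SELECT {dimension}, {SEMANTIC_MODEL[metric]} AS {metric} "
        f"FROM {table} GROUP BY {dimension}"
    )

# Two teams asking for "revenue by region" get identical SQL -- the
# consistency guarantee a modeling layer provides.
print(build_query("revenue", "region"))
```

The trade-off the article describes falls out directly: changing a definition in `SEMANTIC_MODEL` fixes every downstream report at once, but it also means every change routes through whoever owns the model.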
Around the same time, dbt (2016) changed how organizations handled data transformation. Instead of modeling data inside BI tools, dbt enabled SQL-based transformations directly in the cloud warehouse, leading to three distinct BI workflow patterns:
- No modeling in BI → All transformations happen in dbt.
- Some modeling in BI → A hybrid approach with dbt and BI tools.
- Traditional BI modeling → Using LookML or other proprietary methods.
As BI became more complex, Sigma Computing emerged as an alternative to Looker, offering a spreadsheet-style BI tool that queried cloud data warehouses directly. Sigma simplified data exploration for business users but lacked robust data modeling, making it a difficult fit for organizations that required strong governance. It did, however, offer an interface most users already knew and worked exceptionally well with Snowflake.
With BI decoupled from data transformation, organizations now had more flexibility but also more complexity. Many relied on multiple BI tools—Looker for governed reporting, Tableau for ad-hoc visualization, dbt for data modeling, and Sigma for spreadsheet-driven analysis. While cloud warehouses made analytics faster and more scalable, BI itself became fragmented, setting the stage for the next evolution in the 2020s.
2020s: Fragmentation & Convergence for AI-First BI

The 2020s have brought AI-driven automation and the push to unify fragmented BI tools. As organizations continue relying on cloud data warehouses like Snowflake, BigQuery, and Databricks, they now face a different challenge—too many BI tools, data silos, and lack of self-service access.

Strengths: AI-powered analytics, embedded intelligence, and unified BI platforms.
Challenges: BI tool sprawl, lack of a single source of truth, and difficulty integrating AI with structured business logic.
Companies today struggle with three major BI problems:
- Data bottlenecks → Business users often need SQL or IT help to query data.
- Data chaos → Different teams report conflicting numbers, leading to mistrust.
- Too many tools → Organizations use Looker for governance, Tableau for ad-hoc analysis, dbt for modeling, and spreadsheets for flexibility—creating inefficiencies.
Despite the explosion of BI tools, true innovation in BI has stalled. Looker (acquired by Google), Tableau (acquired by Salesforce), and Power BI (part of Microsoft) now sit inside tech giants, where product evolution has slowed. The need for a modern, deeply integrated BI tool is stronger than ever:
- Legacy BI is outdated → Many enterprises still rely on older, rigid BI tools that struggle with cloud-native workloads.
- dbt changed the analytics stack → No major BI tool has been built since dbt became central to analytics engineering, yet most BI tools lack deep dbt integration.
- BI is critical for decision-making → Being data-driven is no longer optional—leaders expect real-time insights, not static dashboards.
New BI Vendors Are Driving the Next Wave
In response, new BI platforms are emerging to consolidate workflows and integrate AI. Omni and Inventive, both founded by early members of the Looker team, are at the forefront of this shift.

Omni blends governance with self-service analytics, allowing users to query data through point-and-click interfaces, SQL, Excel formulas, and AI. Unlike traditional BI tools that force organizations to choose between governed modeling and ad-hoc exploration, Omni provides a centralized yet flexible data model, ensuring consistency without slowing down decision-making. Users can start with a simple workbook and promote metrics to a shared data model, creating a single source of truth while maintaining the freedom to explore data independently.
Omni also integrates deeply with dbt, allowing data teams to sync transformations, version control models, and apply changes across their analytics stack without manual duplication. Developers benefit from Git-based workflows, dynamic environments, and push-to-dbt features, bridging the gap between data transformation and BI. This makes Omni an ideal solution for companies looking to consolidate their BI tools, reducing reliance on multiple platforms while enhancing governance, performance, and usability. See the full Omni product demo here.
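As a conceptual sketch of that workbook-to-model promotion workflow (plain Python, not Omni's actual API; all names and SQL fragments are illustrative assumptions):

```python
# Conceptual sketch of "promote from workbook to shared model" -- not
# Omni's actual API; names and SQL fragments are illustrative.

shared_model = {"revenue": "SUM(amount)"}      # governed, org-wide definitions

class Workbook:
    """Scratch space where an analyst can iterate without governance."""
    def __init__(self):
        self.local_metrics = {}

    def define(self, name: str, sql: str):
        self.local_metrics[name] = sql         # visible only in this workbook

    def promote(self, name: str):
        # Once trusted, the definition graduates into the shared model
        # and becomes the single source of truth for everyone.
        shared_model[name] = self.local_metrics.pop(name)

wb = Workbook()
wb.define("churn_rate", "COUNT_IF(status = 'churned') / COUNT(*)")
wb.promote("churn_rate")                       # now governed and reusable
```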

Inventive is transforming embedded analytics with AI-driven agents that automate customer interactions and data analysis. Instead of relying on manual report requests, SaaS companies can integrate AI Analysts directly into their products, allowing customers to generate insights via natural language. These agents handle the full analytics workflow—interpreting requests, running queries, generating reports, and iterating based on feedback—without human intervention.
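As a rough sketch of that loop (illustrative only, not Inventive's implementation; the `llm` and `warehouse` clients and their methods are hypothetical):

```python
# Illustrative "AI analyst" loop -- not Inventive's actual implementation.
# `llm` and `warehouse` are hypothetical clients supplied by the caller.

MAX_ATTEMPTS = 3

def ai_analyst(question: str, llm, warehouse) -> str:
    """Interpret a request, run it, and iterate on failures without a human."""
    sql = llm.generate_sql(question)            # interpret the request
    for _ in range(MAX_ATTEMPTS):
        try:
            rows = warehouse.run(sql)           # execute the query
        except Exception as err:
            # iterate: feed the error back and request a corrected query
            sql = llm.repair_sql(sql, error=str(err))
            continue
        return llm.summarize(question, rows)    # report in plain language
    return "Unable to answer; escalating to a human analyst."
```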
The Future of BI: AI-First, But Structured
While AI is transforming BI, it still relies on structured business logic to be effective. AI copilots can assist users in querying data, but without a governed data model, AI struggles to provide accurate, contextual insights. The future of BI will be AI-powered but rooted in strong modeling practices, blending automation with governance to create a unified, intelligent analytics experience.
A major driver of this shift is Text-to-SQL, which lets non-technical users query complex datasets in natural language. By removing the need for SQL expertise, AI-powered querying is making BI more accessible than ever. However, without a well-structured semantic layer, Text-to-SQL tools risk generating inconsistent or misleading results. The same leaders who drove Looker’s rise in the 2010s are now shaping the AI-first BI era, ensuring that data remains trustworthy, accessible, and actionable—without sacrificing governance or accuracy.
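One way to picture that guardrail, as a hedged sketch with assumed names (the `llm.choose` call is hypothetical): the model never free-generates SQL against raw tables; it only selects from governed metric and dimension definitions, and the SQL is assembled deterministically from those vetted pieces.

```python
# Sketch of Text-to-SQL constrained by a semantic layer. All names are
# illustrative assumptions; `llm.choose` is a hypothetical client call.

METRICS = {
    "revenue": "SUM(amount)",
    "active_users": "COUNT(DISTINCT user_id)",
}
DIMENSIONS = {
    "region": "region",
    "month": "DATE_TRUNC('month', created_at)",
}

def answer(question: str, llm) -> str:
    # The model's job shrinks to classification: it picks a governed
    # metric and dimension, and never writes raw SQL itself.
    choice = llm.choose(
        question,
        metrics=list(METRICS),
        dimensions=list(DIMENSIONS),
    )
    # SQL is assembled from vetted definitions, so the answer cannot
    # drift from the governed model.
    return (
        f"SELECT {DIMENSIONS[choice['dimension']]} AS {choice['dimension']}, "
        f"{METRICS[choice['metric']]} AS {choice['metric']} "
        f"FROM orders GROUP BY 1"
    )
```

The trade-off is the familiar one from the Looker era: the model can only answer questions the semantic layer anticipates, which is precisely why strong modeling practices remain the foundation of AI-first BI.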