
A New Era: What’s Changing with dbt Fusion


At Coalesce 2025, dbt Labs laid out a bold vision for the next generation of analytics engineering and how dbt is evolving to meet it: faster iteration, deeper intelligence, tighter governance, and native readiness for AI workflows. Key among the announcements is the general availability and broader rollout of the Fusion engine and its associated capabilities (dbt Developer Hub, CRN).

Here are the high-level shifts to be aware of:

  1. Performance & scale — Fusion has been rewritten in Rust to deliver substantially faster parsing, compilation, and feedback loops (dbt Labs, dbt Developer Hub).

  2. Native SQL comprehension + state-awareness — The engine doesn’t just execute SQL, it understands it (dependencies, lineage, types). It also has a model of the “state” of your project and warehouse, so it can optimize what actually runs (dbt Labs).

  3. Governance & AI readiness — With the explosion of AI use cases, dbt is positioning itself as the control plane: managed lineage, metadata, semantic layer, agents, and more, built on top of Fusion (dbt Labs).

So: this isn’t just another incremental release. If you are responsible for growing your analytics engineering practice, aligning with AI-ready data workflows, or scaling teams and pipelines, this shift is material.



Key Feature Announcements at Coalesce 2025

Below are the standout features from the Coalesce announcements (and supporting docs) that data engineering teams should pay attention to.


1. General Availability of MCP Server

One of the big reveals: the dbt MCP Server is now GA (dbt Developer Hub).

What it is: MCP (Model Context Protocol) is a standardized way for AI agents and external tools to plug into the governed data graph that dbt maintains.

Why it matters: If your organization plans to integrate LLMs or agents over your structured data, this gives a managed, auditable, trusted way to surface transformations, models, lineage, and metadata.

Implication: Map your architecture to include the MCP endpoint (or equivalent) so your analytics pipelines, semantic layer, and AI/ML workflows share a common definition of dimensional models, metrics, and lineage.
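To make this concrete, here is a minimal sketch of the agent side: a client that lists the tools a dbt MCP server exposes, using the MCP Python client SDK. The `uvx dbt-mcp` launch command and the `DBT_PROJECT_DIR` environment variable are assumptions about how the server is invoked; check the dbt MCP documentation for the exact setup your version expects.

```python
# Minimal sketch: list the tools a dbt MCP server exposes, using the MCP
# Python client SDK (pip install mcp). The launch command and env var below
# are assumptions -- consult the dbt MCP docs for the real invocation.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Spawn the dbt MCP server as a subprocess and talk to it over stdio.
    server = StdioServerParameters(
        command="uvx",
        args=["dbt-mcp"],  # assumed entry point
        env={"DBT_PROJECT_DIR": "/path/to/your/dbt/project"},  # assumed env var
    )
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            for tool in tools.tools:
                print(f"{tool.name}: {tool.description}")


asyncio.run(main())
```

An agent framework would perform the same handshake, then call those tools (model lookups, lineage queries, metric requests) instead of hitting the warehouse directly.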



2. New Platform & Adapter Support

Among the announcements:

  • Fusion engine support (beta/GA) for multiple warehouses, e.g., BigQuery adapter support in beta (dbt Developer Hub).

  • Support for Databricks via Fusion (dbt Labs).

  • More identity/governance features: e.g., SCIM via Microsoft Entra ID is now GA (dbt Developer Hub).

Why this matters: If your architecture spans multiple platforms (Snowflake, BigQuery, Databricks, Redshift), evaluate what “multi-warehouse” readiness looks like under Fusion. Identity and governance surfaces also matter as you scale.

Implication: Map out your warehouse strategy against Fusion-supported adapters; audit your identity/SSO/SCIM configurations now.



3. State-Aware Orchestration & Cost Savings

One of the big value props: Fusion supports “state-aware orchestration” (in preview), meaning that dbt knows what is already materialized and only rebuilds what needs rebuilding (dbt Developer Hub, CRN).

Why this matters: In large projects, rebuilding unchanged models wastes compute, cost, and time. This capability can reduce turnaround times and cloud spend.

Reported outcomes: Early customers are seeing roughly 10% reductions in warehouse spend (dbt Labs).

Implication: If your organization has long build times or high compute cost in the transformation phase, this is a key area to test. Be prepared to capture metrics pre- and post-migration.
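For context, here is a minimal sketch of the manual pattern that state-aware orchestration automates. In dbt Core (1.5+) you approximate it with state selectors and deferral through the programmatic runner; the `prod-run-artifacts/` path to a production manifest is an assumption about your setup, and Fusion’s built-in version removes the need to manage these artifacts and selectors yourself.

```python
# Minimal sketch of the dbt Core pattern that Fusion's state-aware
# orchestration automates: compare against production artifacts and rebuild
# only changed models plus their downstream dependents (dbt >= 1.5).
from dbt.cli.main import dbtRunner, dbtRunnerResult

runner = dbtRunner()

# "state:modified+" selects models whose definitions changed relative to the
# manifest in --state, plus everything downstream; --defer reads unchanged
# upstream relations from production instead of rebuilding them.
result: dbtRunnerResult = runner.invoke([
    "build",
    "--select", "state:modified+",
    "--defer",
    "--state", "prod-run-artifacts/",  # assumed path to a prod manifest.json
])

print("success:", result.success)
```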



4. Developer Experience & Local Tooling

Alongside the core engine, there’s a major update for how developers build with dbt:

  • The Fusion engine powers a new VS Code extension (beta) with real-time code validation, autocomplete, go-to-definition, and inline previews (dbt Labs).

  • Native SQL comprehension: errors and issues in SQL can be caught before they ever hit the warehouse (dbt Labs).

Why this matters: Developer productivity, fewer “big bang” compilation failures, faster local iteration.

Implication: Update your dev workflow: install the extension and re-train your team on the new feedback loops. Capture productivity improvements as part of your ROI business case.



5. Semantic Layer, Catalog, Governance Enhancements

Several features announced align with the control plane idea:

  • Ability to view the history of settings changes for projects, environments, and jobs (dbt Developer Hub).

  • Semantic Layer expansions: e.g., support for new platforms and a GraphQL API endpoint for saved queries (dbt Developer Hub); a call sketch follows below.

  • Metadata, lineage, and cataloging improvements (dbt Labs).

Why this matters: As analytics engineering matures, some of the biggest risks are data trust, visibility, and governance. These enhancements help you build for scale.

Implication: Consider hooking your existing metadata/lineage stack (or building one) into the new capabilities. Evaluate gaps in your documentation, cataloging, and governance practices.
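As an illustration of the saved-queries endpoint, here is a minimal sketch of a GraphQL call from Python. The host URL, the `savedQueries` field and its arguments, and the bearer-token header are illustrative assumptions; the Semantic Layer GraphQL docs define the actual schema and your region-specific endpoint.

```python
# Minimal sketch: query a Semantic Layer GraphQL endpoint with a service
# token. URL, field names, and variables are assumptions for illustration.
import os

import requests

GRAPHQL_URL = "https://semantic-layer.cloud.getdbt.com/api/graphql"  # assumed host

QUERY = """
query SavedQueries($environmentId: BigInt!) {
  savedQueries(environmentId: $environmentId) {  # hypothetical field names
    name
    description
  }
}
"""

response = requests.post(
    GRAPHQL_URL,
    json={"query": QUERY, "variables": {"environmentId": 12345}},  # your env id
    headers={"Authorization": f"Bearer {os.environ['DBT_SERVICE_TOKEN']}"},
    timeout=30,
)
response.raise_for_status()
print(response.json())
```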



Practical Migration & Planning Considerations

Given a background in data engineering and analytics engineering, pipeline building, and governance, here are some practical questions and next steps to plan your migration to (or adoption of) Fusion.


Assess current state

  • Catalogue your existing dbt project: how many models, how many CI/CD runs, build times, compute costs (a quick way to take this inventory is sketched after this list).

  • Identify pain points in your existing architecture: long parse/compile times? Too many rebuilds? Cost overruns?

  • What warehouses/adapters are you using? Are they supported by Fusion today (or in roadmap)?

  • What is your readiness in terms of governance and metadata (lineage, catalog, semantic layer)?
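As a starting point for that inventory, here is a minimal sketch that sizes up a project from the `manifest.json` artifact dbt writes to `target/` after a parse or compile. Counting nodes by resource type yields the model/test/seed tallies the first bullet asks for.

```python
# Minimal sketch: inventory an existing dbt project from its manifest.json
# (produced in target/ by `dbt parse` or `dbt compile`).
import json
from collections import Counter
from pathlib import Path

manifest = json.loads(Path("target/manifest.json").read_text())

# Tally nodes by type: models, tests, seeds, snapshots, etc.
counts = Counter(node["resource_type"] for node in manifest["nodes"].values())
print(dict(counts))  # e.g. {'model': 412, 'test': 980, 'seed': 7}
```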


Plan for adoption

  • Start with non-critical pilot projects on Fusion (beta if applicable) to measure real improvements (parse time, run time, cost).

  • Evaluate dev workflow changes: the VS Code extension, local feedback loops. Update your team training and tooling.

  • Plan for governance and identity enhancements: SCIM, audit logs, history of settings changes.

  • Evaluate cross-platform data strategy: if you have multi-warehouse, plan how Fusion’s adapter roadmap affects your stack.


Think about ROI & value capture

  • Track metrics: build/compile times, developer idle time, warehouse compute cost, time-to-insight (see the run-results sketch after this list).

  • Model cost savings: if Fusion can reduce rebuilds via state awareness, what percent of your pipeline is “wasted rebuilds”?

  • Time to value: faster dev iteration = faster analytics delivery to business consumers.

  • Risk reduction: fewer errors, stronger lineage and governance.
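To capture those metrics, here is a minimal sketch that reads dbt’s `run_results.json` artifact after an invocation. Per-node statuses and execution times are the raw inputs for estimating build duration and what share of a pipeline is wasted rebuilds; run it before and after a Fusion pilot to compare.

```python
# Minimal sketch: pull build metrics out of run_results.json (written to
# target/ after each dbt run/build) for before/after comparisons.
import json
from collections import Counter
from pathlib import Path

run_results = json.loads(Path("target/run_results.json").read_text())

# Status counts (success, error, skipped, ...) and total per-node runtime.
statuses = Counter(r["status"] for r in run_results["results"])
total_seconds = sum(r.get("execution_time", 0.0) for r in run_results["results"])

print("node statuses:", dict(statuses))
print(f"total node execution time: {total_seconds:.1f}s")
print("wall-clock elapsed:", run_results.get("elapsed_time"), "s")
```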


Upgrade path & migration details

  • Review the supported features and limitations of Fusion. For example: some dbt Core features may not yet be fully supported under Fusion (dbt Developer Hub).

  • For large projects, splitting things into smaller logical units/models may help.

  • Coordinate with your version control/CI/CD pipeline to ensure compatibility with the new engine.

  • Communicate with stakeholders: there will be changes in tooling (the VS Code extension), build processes, and possibly monitoring.



Why This Matters for You

If your work spans analytics engineering, pipeline building, and governance (scaling pipelines, standardizing field names and types, managing metadata), this announcement is highly relevant:

  • Scaling Analytics Engineering: As your projects grow (number of models, domains, and domain complexity), faster iteration and cheaper rebuilds matter.

  • Standardizing & Normalizing: The stronger lineage, metadata, and state-awareness features reinforce the discipline of structured, governed transformations you are already practicing (e.g., surrogate keys, standardized field names).

  • AI-Readiness: If you are exploring broader data and AI workflows, Fusion + MCP + the semantic layer let you push data pipelines into AI/LLM use cases more confidently (trusted data, governed control plane).

  • Consulting/Platform Thinking: If you contract across industries, familiarity with the next generation of the dbt stack (Fusion and its associated tooling) positions you well to architect future data transformation platforms, drive migrations, and advise clients.



Key Take-Away & Next Steps

  • The dbt Fusion engine and its ecosystem represent a substantial platform upgrade, not just an incremental release.

  • Performance improvements (parse/compile/run) plus cost savings plus governance capabilities add up to a compelling reason to evaluate an upgrade now.

  • But: you need to plan carefully. Ensure your stack (warehouses/adapters, dev workflows, governance practices) aligns with Fusion readiness.

  • Given your role, build a migration-readiness plan now: pilot, measure, then define an upgrade strategy.

  • Communicate to your stakeholders: this is a forward-looking investment in analytics engineering productivity, governance, and AI readiness.

