
Releases: databricks/dbt-databricks

v1.7.3

12 Dec 23:22

What's Changed

The big change in this release is that we fixed the issue where every single dbt action initiated a new connection to Databricks. We will now reuse a connection if there is a thread-local connection that matches the compute the user has selected.

This change will be most apparent if your dbt operations are very short-lived, such as tests against a small table, since less time is now spent on connection negotiation; for longer operations, the time spent computing and transmitting the result set outweighs the time spent connecting.

If for some unforeseen reason this change negatively impacts performance:

a.) You can turn it off by setting the DBT_DATABRICKS_LONG_SESSIONS environment variable to false.
b.) Please file an issue so we can investigate.

Fixes

Under the Hood

  • Refactor macro tests so that we can move macros by @benc-db in #524
  • Updating Python Functional Tests by @benc-db in #526
  • Refactoring to align with dbt-core organization: Part I by @benc-db in #525

Full Changelog: v1.7.2...v1.7.3

1.7.2

30 Nov 19:05

The big news is that the ability to choose separate compute by model is now available. Until I get updated docs out, please look here for usage notes: #333 (comment)
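As a rough illustration while the docs are pending, here is a minimal sketch of the shape this configuration takes, based on the provisional notes in #333: named compute entries are declared on the output in profiles.yml and referenced from model config via databricks_compute. All profile, compute, host, path, and model names below are placeholders, not values from this release:

```yaml
# profiles.yml (sketch; every name and path here is a placeholder)
my_profile:
  target: dev
  outputs:
    dev:
      type: databricks
      catalog: my_catalog
      schema: my_schema
      host: my-workspace.cloud.databricks.com
      http_path: /sql/1.0/warehouses/default_warehouse_id   # default compute
      token: "{{ env_var('DATABRICKS_TOKEN') }}"
      compute:
        HeavyCompute:
          http_path: /sql/1.0/warehouses/bigger_warehouse_id
---
# dbt_project.yml (sketch): route one model to the named compute
models:
  my_project:
    my_heavy_model:
      +databricks_compute: HeavyCompute
```

In this sketch, models that do not set databricks_compute would keep using the target's default http_path.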

What's Changed

Full Changelog: v1.7.2b2...v1.7.2

1.5.7

30 Nov 17:57

Fixes

This release declares that the 1.5.x branch is not compatible with databricks-sql-connector version 3.0.0.

Full Changelog: v1.5.6...v1.5.7

1.7.2b2

16 Nov 21:10 · c0a9416
Pre-release

This is a beta release for testing the ability to specify compute on a per-model basis. For full instructions on how to use this capability, see #333 for now, where the provisional instructions are included. DO NOT RELY ON THIS CAPABILITY FOR PRODUCTION WORKLOADS YET. We are looking for users to try out this feature and report any bugs they encounter.

Full Changelog: v1.7.1...v1.7.2b2

1.7.1

14 Nov 00:01

Under the Hood

  • Revert to client-side filtering for large projects in an attempt to improve the performance of doc generation by @benc-db (thanks @mikealfare for the help) (503)

Full Changelog: v1.7.0...v1.7.1

1.7.0

09 Nov 22:41

What's Changed

This release is mostly about performance and compatibility with dbt-core 1.7.x. Expect more in the coming weeks on expanded config, and config change management, for Materialized Views and Streaming Tables.

Features

Under the Hood

New Contributors

Full Changelog: v1.7.0rc1...v1.7.0

1.6.7

09 Nov 18:09

Under the Hood

  • Updating to dbt-spark 1.6.1

v1.7.0 RC1

13 Oct 20:00 · c901f8c
Pre-release

What's Changed

  • Getting compatibility with 1.7.0 RC by @benc-db in #479
  • As part of the above change, fixed a bug with constraints where, if a column had both a primary key constraint and a not null constraint (a prerequisite for a primary key), the run could fail depending on the order in which the constraints were applied.
  • Also as part of the above, added support for specifying foreign key constraints using the dbt constraint expression syntax. Currently this support is restricted to single-column foreign keys; see the sketch after this list.
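To make the shape of this concrete, here is a minimal schema.yml sketch of a single-column foreign key expressed with the dbt constraint syntax; the model, column, and referenced-table names are placeholders rather than anything from the release:

```yaml
models:
  - name: orders
    config:
      contract:
        enforced: true
    columns:
      - name: id
        data_type: bigint
        constraints:
          - type: not_null      # prerequisite for primary_key
          - type: primary_key
      - name: customer_id
        data_type: bigint
        constraints:
          # Single-column foreign key; the referenced table and column
          # are supplied through the constraint expression.
          - type: foreign_key
            expression: my_catalog.my_schema.customers (id)
```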

Full Changelog: v1.6.6...v1.7.0rc1

v1.6.6

10 Oct 16:23

What's Changed

  • Ensure optimize is run with liquid_clustered_by by @benc-db in #463 (see the sketch after this list)
  • fix vscode pylance import errors by @dataders in #471
  • Revert python library install behavior if index_url not specified by @benc-db in #472
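For reference, a minimal dbt_project.yml sketch of the liquid_clustered_by config that this optimize fix touches; the project, model, and column names are placeholders:

```yaml
# dbt_project.yml (sketch; names are placeholders)
models:
  my_project:
    events:
      +materialized: table
      +liquid_clustered_by: event_date   # column used for liquid clustering
```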

Full Changelog: v1.6.5...v1.6.6

1.5.6

29 Sep 15:54 · 55c3dcd

What's Changed

Includes the following:

  • Updated the Databricks SDK dependency so as to prevent reliance on an insecure version of requests (460)
  • Updated the logic for submitting Python jobs so that if the cluster is already starting, we wait for it to become available rather than failing (461)
  • Add fetchmany, resolves #408 (Thanks @NodeJSmith) (#409)
  • Relaxed the constraint on databricks-sql-connector to allow newer versions (#436)
  • Follow-up: re-implemented the fix for the issue where the show tables extended command is limited to 2048 characters (#326). Set DBT_DESCRIBE_TABLE_2048_CHAR_BYPASS to true to enable this behaviour.

Full Changelog: v1.5.5...v1.5.6