- Fixed unresolvable Maven Central POM for the uber JAR. The published POM no longer declares a transitive dependency on the internal `databricks-jdbc-core` coordinate (which is not published to Maven Central), restoring resolution for downstream consumers (#1431).
- Added `CallableStatement` support with IN parameters. `Connection.prepareCall()` now returns a working `DatabricksCallableStatement` that supports positional parameter binding and execution via `{call proc(?)}` JDBC escape syntax. OUT/INOUT parameters and named parameters throw `SQLFeatureNotSupportedException`.
- Added AI coding agent detection to the User-Agent header. When the driver is invoked by a known AI coding agent (e.g. Claude Code, Cursor, Gemini CLI), `agent/<product>` is appended to the User-Agent string.
- Added support for using SQL SHOW commands for Thrift-mode metadata operations (`getTables`, `getColumns`, `getSchemas`, `getFunctions`, `getPrimaryKeys`, `getImportedKeys`, `getCrossReference`). Enable by setting `UseQueryForMetadata=1`. This aligns Thrift metadata behavior with Statement Execution API (SEA) mode.
- Improved error messages for cancelled statements: operations cancelled via `Statement.cancel()` or closed connections now return SQL state `HY008` (operation cancelled) instead of generic error codes, making it easier for applications to detect and handle cancellations.
- Fixed race condition between chunk download error handling and result set close that could cause invalid state transition warnings (`CHUNK_RELEASED -> DOWNLOAD_FAILED`) during Arrow Cloud Fetch operations in resource-constrained environments.
- Fixed `EnableBatchedInserts` silently falling back to individual execution when table or schema names contain special characters (e.g., hyphens) inside backtick-quoted identifiers. Added a warn log when the fallback occurs.
- Fixed `IntervalConverter` crash (`IllegalArgumentException: Invalid interval metadata`) when INTERVAL columns are returned via CloudFetch. Arrow metadata from CloudFetch uses underscored format (`INTERVAL_YEAR_MONTH`, `INTERVAL_DAY_TIME`), which the driver's regex did not accept.
- Fixed `Statement` being prematurely closed after queries that return inline results, which prevented re-execution, `getResultSet()`, and `getExecutionResult()` from working. Statements now remain open and reusable until explicitly closed by the caller.
- Fixed primitive types within complex types (ARRAY, MAP, STRUCT) not being correctly parsed when Arrow serialization uses alternate formats: TIMESTAMP/TIMESTAMP_NTZ as epoch microseconds or component arrays, and BINARY as base64-encoded strings.
- Fixed `PARSE_SYNTAX_ERROR` for column names containing special characters (e.g., dots) when `EnableBatchedInserts` is enabled, by re-quoting column names with backticks in reconstructed multi-row INSERT statements.
- Fixed Volume ingestion for SEA mode, which was broken due to the statement being closed prematurely.
- Fixed unclear `error: [null]` messages during transient HTTP failures (e.g. 502 Bad Gateway) in Thrift polling. Error messages now include server error details and use SQL state `08S01` (communication link failure) so callers can identify retryable errors. Also fixed `DatabricksError` (RuntimeException) from the SDK client being unhandled in CloudFetch download paths.
- Fixed escaped pattern characters in catalogName for `getSchemas`, as the returned catalogName should be unescaped.
- Fixed `getColumnClassName()` returning null for VARIANT columns in SEA mode by adding VARIANT to the type system.
- Fixed `getColumns()` returning `DATA_TYPE=0` (NULL) for GEOMETRY/GEOGRAPHY columns in Thrift mode. Now returns `Types.VARCHAR` (12) when geospatial is disabled and `Types.OTHER` (1111) when enabled, consistent with SEA mode.
- Fixed `getCrossReference()` returning 0 rows when parent args are passed in uppercase. The client-side filter used case-sensitive comparison against server-returned lowercase names.
- Added `DatabaseMetaData.getProcedures()` and `DatabaseMetaData.getProcedureColumns()` to discover stored procedures and their parameters. Queries `information_schema.routines` and `information_schema.parameters` using parameterized SQL for both SEA and Thrift transports.
- Added connection property `OAuthWebServerTimeout` to configure the OAuth browser authentication timeout for U2M (user-to-machine) flows, and changed the hardcoded 1-hour timeout to a 120-second default.
- Added connection property `UseQueryForMetadata` to use SQL SHOW commands instead of Thrift RPCs for metadata operations (getCatalogs, getSchemas, getTables, getColumns, getFunctions). This fixes incorrect wildcard matching where `_` was treated as a single-character wildcard in Thrift metadata pattern filters.
- Added connection property `TreatMetadataCatalogNameAsPattern` to control whether catalog names are treated as patterns in Thrift metadata RPCs. When disabled (default), an unescaped `_` in catalog names is escaped to prevent single-character wildcard matching. This aligns with the JDBC spec, which treats catalogName as an identifier, not a pattern.
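The wildcard-escaping behaviour described above can be sketched in plain Java. This is an illustrative helper only, not the driver's actual implementation; the method name and the backslash escape convention are assumptions:

```java
class CatalogNameEscaper {
    // Escape SQL pattern wildcards so a catalog name such as "my_catalog"
    // matches literally instead of "_" acting as a single-character wildcard.
    // Hypothetical helper; the driver's real escape logic may differ.
    static String escapeWildcards(String catalogName) {
        StringBuilder sb = new StringBuilder(catalogName.length());
        for (int i = 0; i < catalogName.length(); i++) {
            char c = catalogName.charAt(i);
            if (c == '_' || c == '%') {
                sb.append('\\'); // escape the wildcard character
            }
            sb.append(c);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(escapeWildcards("my_catalog")); // prints my\_catalog
    }
}
```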
- Bumped `com.fasterxml.jackson.core:jackson-core` from 2.18.3 to 2.18.6.
- Fat jar now routes SDK and Apache HTTP client logs through Java Util Logging (JUL), removing the need for external logging libraries.
- Added Apache Arrow on-heap memory management for processing Arrow query results. Previously, Arrow result processing was unusable on JDK 16+ without passing the `--add-opens=java.base/java.nio=ALL-UNNAMED` JVM argument, due to stricter encapsulation of internal APIs. With this change, no JVM argument is required; the driver automatically falls back to an on-heap memory path that uses standard JVM heap allocation instead of direct memory access.
- Log timestamps now explicitly display the timezone.
- [Breaking Change] `PreparedStatement.setTimestamp(int, Timestamp, Calendar)` now properly applies Calendar timezone conversion using the LocalDateTime pattern (in line with `getTimestamp`). Previously the Calendar parameter was ineffective.
- `DatabaseMetaData.getColumns()` with a null catalog parameter now retrieves columns from all available catalogs when using the SQL Execution API.
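The Calendar semantics can be illustrated with plain `java.time` code. This is a sketch of the interpretation the fix applies (not the driver's code): the Timestamp's wall-clock fields are resolved in the Calendar's timezone to produce the instant sent to the server:

```java
import java.sql.Timestamp;
import java.time.Instant;
import java.time.LocalDateTime;
import java.util.Calendar;
import java.util.TimeZone;

class TimestampCalendarSketch {
    // Interpret the Timestamp's wall-clock fields in the Calendar's timezone,
    // mirroring the LocalDateTime-based conversion described above.
    static Instant toInstant(Timestamp ts, Calendar cal) {
        LocalDateTime wallClock = ts.toLocalDateTime();
        return wallClock.atZone(cal.getTimeZone().toZoneId()).toInstant();
    }

    public static void main(String[] args) {
        Calendar utc = Calendar.getInstance(TimeZone.getTimeZone("UTC"));
        // 12:00 wall-clock interpreted as UTC -> 2024-01-01T12:00:00Z
        System.out.println(toInstant(Timestamp.valueOf("2024-01-01 12:00:00"), utc));
    }
}
```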
- Fixed statement timeout when the server returns `TIMEDOUT_STATE` directly in the `ExecuteStatement` response (e.g. query queued under load); the driver now throws `SQLTimeoutException` instead of `DatabricksHttpException`.
- Fixed Thrift polling infinite loop when server restarts invalidate operation handles, and added a configurable timeout (`MetadataOperationTimeout`, default 300s) with sleep between polls for metadata operations.
- Fixed `DatabricksParameterMetaData.countParameters` and `DatabricksStatement.trimCommentsAndWhitespaces` with a `SqlCommentParser` utility class.
- Fixed `rollback()` to throw `SQLException` when called in auto-commit mode (no active transaction), aligning with the JDBC spec. Previously it silently sent a ROLLBACK command to the server.
- Fixed `fetchAutoCommitStateFromServer()` to accept both `"1"`/`"0"` and `"true"`/`"false"` responses from the `SET AUTOCOMMIT` query, since different server implementations return different formats.
- Fixed socket leak in the SDK HTTP client that prevented CRaC checkpointing. The SDK's connection pool was not shut down on `connection.close()`, leaving TCP sockets open.
- Fixed `IdleConnectionEvictor` thread leak in long-running services. The feature-flags context shared per host was ref-counted incorrectly and held a stale connection UUID after the owning connection closed; on the next 15-minute refresh it silently recreated an HTTP client (and its evictor thread) that was never cleaned up. Connection UUIDs are now tracked idempotently and the stored connection context is updated when the owning connection closes.
- Fixed Date fields within complex types (ARRAY, STRUCT, MAP) being returned as epoch day integers instead of proper date values.
- Fixed `DatabaseMetaData.getColumns()` returning the column type name in `COLUMN_DEF` for columns with no default value. `COLUMN_DEF` now correctly returns `null` per the JDBC specification.
- Coalesce concurrent expired cloud fetch link refreshes into a single batch FetchResults RPC to prevent thread pool exhaustion under high concurrency.
- Added streaming prefetch mode for Thrift inline results (columnar and Arrow) with background batch prefetching and configurable sliding window for improved throughput.
- Added `EnableInlineStreaming` connection parameter to enable/disable streaming mode (default: enabled).
- Added `ThriftMaxBatchesInMemory` connection parameter to control the sliding window size for streaming (default: 3).
- Added support for disabling CloudFetch via `EnableQueryResultDownload=0` to use inline Arrow results instead.
- Added `EnableMetricViewMetadata` connection parameter to enable/disable the Metric View table type (default: disabled).
- Added `NonRowcountQueryPrefixes` connection parameter to specify comma-separated query prefixes that should return result sets instead of row counts.
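A rough sketch of how such a prefix list could be evaluated. This is illustrative only; the method name and the exact matching rules (trimming, case folding) are assumptions, not the driver's documented behavior:

```java
import java.util.Locale;

class NonRowcountPrefixSketch {
    // Decide whether a statement should yield a result set rather than a row
    // count, based on a comma-separated prefix list such as "SHOW,DESCRIBE".
    // Hypothetical helper mirroring the NonRowcountQueryPrefixes idea.
    static boolean returnsResultSet(String sql, String prefixesParam) {
        String normalized = sql.trim().toUpperCase(Locale.ROOT);
        for (String prefix : prefixesParam.split(",")) {
            if (normalized.startsWith(prefix.trim().toUpperCase(Locale.ROOT))) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(returnsResultSet("  show tables", "SHOW,DESCRIBE")); // true
        System.out.println(returnsResultSet("INSERT INTO t VALUES (1)", "SHOW,DESCRIBE")); // false
    }
}
```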
- Enhanced error logging for token exchange failures.
- Geospatial column type names now include SRID information (e.g., `GEOMETRY(4326)` instead of `GEOMETRY`).
- Implemented lazy loading for inline Arrow results, fetching Arrow batches on demand instead of all at once. This improves memory usage and initial response time for large result sets when using the Thrift protocol with Arrow format.
- Enhanced `enableMultipleCatalogSupport` behavior: when this parameter is disabled (`enableMultipleCatalogSupport=0`), metadata operations (such as `getSchemas()`, `getTables()`, `getColumns()`, etc.) now return results only when the catalog parameter is either `null` or matches the current catalog. For any other catalog name, an empty result set is returned. This ensures metadata queries are restricted to the current catalog context. When enabled (`enableMultipleCatalogSupport=1`), metadata operations continue to work across all accessible catalogs.
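The single-catalog filtering rule above boils down to a small predicate. Illustrative sketch only (the real comparison may differ, e.g. in case sensitivity):

```java
class CatalogFilterSketch {
    // With enableMultipleCatalogSupport=0, metadata calls return rows only
    // when the requested catalog is null or equals the connection's current
    // catalog. Hypothetical helper; case handling is an assumption.
    static boolean shouldReturnResults(String requestedCatalog, String currentCatalog) {
        return requestedCatalog == null || requestedCatalog.equals(currentCatalog);
    }

    public static void main(String[] args) {
        System.out.println(shouldReturnResults(null, "main"));    // true
        System.out.println(shouldReturnResults("other", "main")); // false
    }
}
```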
- Fixed `getTypeInfo()` and `getClientInfoProperties()` to return fresh ResultSet instances on each call instead of shared static instances. This resolves issues where calling these methods multiple times would fail due to exhausted cursor state (Issue #1178).
- Fixed complex data type metadata support when retrieving 0 rows in Arrow format
- Normalized TIMESTAMP_NTZ to TIMESTAMP in Thrift path for consistency with SEA behavior
- Fixed complex types not being returned as objects in SEA Inline mode when `EnableComplexDatatypeSupport=true`.
- Fixed `StringIndexOutOfBoundsException` when parsing complex data types in Thrift CloudFetch mode. The issue occurred when metadata contained incomplete type information (e.g., a bare "ARRAY" without its element type). Now retrieves complete type information from Arrow metadata.
- Fixed timeout exception handling to throw `SQLTimeoutException` instead of `DatabricksHttpException` when queries time out during the result fetching phase. This completes the timeout exception fix to handle both query execution polling and result fetching phases.
- Fixed `getResultSet()` to return null for DML statements to honour the JDBC spec.
- Added token caching for all authentication providers to reduce token endpoint calls.
- We will be rolling out the use of the Databricks SQL Execution API by default for queries submitted on DBSQL. To continue using the Databricks Thrift Server backend for execution, set `UseThriftClient` to `1`.
- Changed default value of `IgnoreTransactions` from `0` to `1` to disable multi-statement transactions by default. Preview participants can opt in by setting `IgnoreTransactions=0`. Also updated `supportsTransactions()` to respect this flag.
- [PECOBLR-1131] Fix incorrect refetching of expired CloudFetch links when using Thrift protocol.
- Fixed logging to respect params when the driver is shaded.
- Fixed `isWildcard` to return true only when the value is `*`.
- Log timestamps now explicitly display timezone.
- [Breaking Change] `PreparedStatement.setTimestamp(int, Timestamp, Calendar)` now properly applies Calendar timezone conversion using the LocalDateTime pattern (in line with `getTimestamp`). Previously the Calendar parameter was ineffective.
- `DatabaseMetaData.getColumns()` with a null catalog parameter now retrieves columns from all catalogs when using the SQL Execution API, aligning the behaviour with Thrift. `DatabaseMetaData.getFunctions()` with a null catalog parameter now retrieves functions from the current catalog when using the SQL Execution API, aligning the behaviour with Thrift.
- Fix timeout exception handling to throw `SQLTimeoutException` instead of `DatabricksSQLException` when queries time out.
- Removed dangerous global timezone modification that caused race conditions.
- Fixed `Statement.getLargeUpdateCount()` to return -1 instead of throwing an exception when there are no more results or the result is not an update count.
- CVE-2025-66566: updated the lz4-java dependency to 1.10.1.
- Fix `INVALID_IDENTIFIER` error when using catalog/schema/table names with hyphens or special characters in SQL Exec API metadata operations (`getSchemas()`, `getTables()`, `getColumns()`, etc.) and connection methods (`setCatalog()`, `setSchema()`). Per Databricks identifier rules, special characters are now properly enclosed in backticks.
- Fix `Auth_Scope` handling inconsistency in Azure U2M OAuth.
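Backtick quoting per Databricks identifier rules can be sketched as follows. Illustrative helper only; doubling embedded backticks is the usual convention, but the driver's exact quoting rules may differ:

```java
class IdentifierQuoter {
    // Wrap an identifier in backticks, doubling any embedded backticks, so
    // names with hyphens or other special characters parse as identifiers.
    static String quote(String identifier) {
        return "`" + identifier.replace("`", "``") + "`";
    }

    public static void main(String[] args) {
        System.out.println(quote("my-schema")); // prints `my-schema`
    }
}
```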
- Added the `EnableTokenFederation` URL param to enable or disable the token federation feature. By default it is set to 1.
- Added the `ApiRetriableHttpCodes` and `ApiRetryTimeout` URL params to enable retries for specific HTTP codes irrespective of the Retry-After header. By default the HTTP codes list is empty.
- Added validation for positive integer configuration properties (`RowsFetchedPerBlock`, `BatchInsertSize`, etc.) to prevent hangs and errors when set to zero or negative values.
- Updated Circuit breaker to be triggered by 429 errors too.
- Refactored chunk download to keep a sliding window of chunk links. The window advances as the main thread consumes chunks. These changes can be enabled using the connection property `EnableStreamingChunkProvider=1` and are expected to make chunk downloads faster and more robust.
- Added separate circuit breaker to handle 429 from SQL Exec API connection creation calls, and fall back to Thrift.
- Fix driver crash when using `INTERVAL` types.
- Fix connection failure in restricted environments when `LogLevel.OFF` is used.
- Fix U2M by including SDK OAuth HTML callback resources.
- Fix microsecond precision loss in `PreparedStatement.setTimestamp(int, Timestamp, Calendar)` and address thread-safety issues with global timezone modification.
- Fix metadata methods (`getColumns`, `getFunctions`, `getPrimaryKeys`, `getImportedKeys`) to return empty ResultSets instead of throwing exceptions when the catalog parameter is NULL, for the SQL Exec API.
- Added support for high-performance batched writes with parameter interpolation:
  - `supportManyParameters=1`: enables parameter interpolation to bypass the 256-parameter limit (default: 0)
  - `EnableBatchedInserts=1`: enables multi-row INSERT batching (default: 0)
  - `BatchInsertSize=<SIZE>`: maximum rows per batch (default: 1000)
  - Note: large batches are chunked for execution. If a chunk fails, previous chunks remain committed (no transaction rollback). Consider using staging tables for critical workflows.
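Conceptually, the batching rewrites many single-row INSERTs into chunked multi-row statements. A simplified sketch of the chunking (not the driver's code; in the driver, the row tuples come from interpolated parameters, and `batchSize` corresponds to `BatchInsertSize`):

```java
import java.util.ArrayList;
import java.util.List;

class BatchInsertSketch {
    // Group pre-rendered row tuples into chunks of at most batchSize rows,
    // emitting one multi-row INSERT statement per chunk.
    static List<String> buildInserts(String table, List<String> rowTuples, int batchSize) {
        List<String> statements = new ArrayList<>();
        for (int i = 0; i < rowTuples.size(); i += batchSize) {
            List<String> chunk = rowTuples.subList(i, Math.min(i + batchSize, rowTuples.size()));
            statements.add("INSERT INTO " + table + " VALUES " + String.join(", ", chunk));
        }
        return statements;
    }

    public static void main(String[] args) {
        System.out.println(buildInserts("t", List.of("(1)", "(2)", "(3)"), 2));
    }
}
```

Because each chunk executes as its own statement, a failure in one chunk leaves earlier chunks committed, which is exactly why the note above suggests staging tables for critical workflows.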
- Added feature-flag integration for SQL Exec API rollout.
- Call statements will return result sets in the response.
- Added a gating flag for enabling geospatial support: `EnableGeoSpatialSupport`. By default it is disabled.
- Minimized OAuth requests by reducing calls in feature flags and telemetry.
- Geospatial `getWKB()` now returns OGC-compliant WKB values.
- Fix: SQLInterpolator failing to escape temporal fields and special characters.
- Fixed: Errors in table creation when using BIGINT, SMALLINT, TINYINT, or VOID types.
- Fixed: PreparedStatement.getMetaData() now correctly reports TINYINT columns as Types.TINYINT (java.lang.Byte) instead of Types.SMALLINT (java.lang.Integer).
- Fixed: TINYINT to String conversion to return numeric representation (e.g., "65") instead of character representation (e.g., "A").
- Fixed: Complex types (Structs, arrays, maps) now show detailed type information in metadata calls in Thrift mode
- Fixed: incorrect chunk download/processing status codes.
- Shade SLF4J to avoid conflicts with user applications.
- Added support for geospatial data types. (Use v3.0.5+ for OGC compliant WKB support)
- Added support for telemetry log levels, which can be controlled via the connection parameter `TelemetryLogLevel`. This allows users to configure the verbosity of telemetry logging from OFF to TRACE.
- Added full support for JDBC transaction control methods in Databricks. Transaction support in Databricks is currently available as a Private Preview. The `IgnoreTransactions` connection parameter can be set to `1` to disable or no-op transaction control methods.
- Added a new config attribute `DisableOauthRefreshToken` to control whether refresh tokens are requested in OAuth exchanges. By default, the driver does not include the `offline_access` scope. If `offline_access` is explicitly provided by the user, it is preserved and not removed.
- Updated SDK version from 0.65.0 to 0.69.0.
- Fixed SQL syntax error when LIKE queries contain empty ESCAPE clauses.
- Fix: driver failing to authenticate on token update in U2M flow.
- Fix: driver failing to parse complex data types with nullable attributes.
- Fixed: Resolved SDK token-caching regression causing token refresh on every call. SDK is now configured once to avoid excessive token endpoint hits and rate limiting.
- Fixed: TimestampConverter.toString() returning ISO8601 format with timezone conversion instead of SQL standard format.
- Fixed: Driver not loading complete JSON result in the case of SEA Inline without Arrow
- Fixed token endpoint regression, which caused excessive token refresh calls
- Added `enableMultipleCatalogSupport` connection parameter to control catalog metadata behavior.
- Fixed complex data type conversion issues by improving StringConverter to handle Databricks complex objects (arrays/maps/structs), JDBC arrays/structs, and generic collections.
- Fixed ComplexDataTypeParser to correctly parse ISO timestamps with T separators and timezone offsets, preventing Arrow ingestion failures.
- Enabled direct results by default in SEA mode to improve latency for short and small queries.
- Telemetry data is now captured more efficiently and consistently due to enhancements in the log and connection close flush logic.
- Updated Databricks SDK version to v0.65.0 (This is to fix OAuthClient to properly encode complex query parameters.)
- Added IgnoreTransactions connection parameter to silently ignore transaction method calls.
- Fixed state leaking issue in thrift client.
- Fixed timestamp values returning only milliseconds instead of the full nanosecond precision.
- Fixed Statement.getUpdateCount() for DML queries.
- Query Tags support: added the ability to attach key-value tags to SQL queries for analytical purposes; the tags appear in the `system.query.history` table. Example: `jdbc:databricks://host;QUERY_TAGS=team:marketing,dashboard:abc123`. (This feature is in private preview.)
- SQL Scripting support: added support for SQL Scripting.
- Added a client property `enableVolumeOperations` to enable GET/PUT/REMOVE volume operations on a stream. For backward compatibility, `allowedVolumeIngestionPaths` can also be used for the REMOVE operation.
- Support for fetching schemas across all catalogs (when the catalog is specified as null or a wildcard) in the `DatabaseMetaData#getSchemas` API in SQL Execution mode.
- Configurable SQL validation in `isValid()`: added the `EnableSQLValidationForIsValid` connection property to control whether the `isValid()` method executes an actual SQL query for server-side validation. Default value is 0.
- Implement multi-row INSERT batching optimization for prepared statements to improve performance when executing large batches of INSERT operations.
- Implement lazy/incremental fetching for columnar results when using Databricks JDBC in Thrift mode without Arrow support. The change modifies the behavior from buffering entire result sets in memory to maintaining only a limited number of rows at a time, reducing peak heap memory usage and preventing OutOfMemory errors.
- Added new artifact `databricks-jdbc-thin` for a thin jar with runtime dependency metadata.
- Introduce a memory-efficient columnar data access mechanism for JDBC result processing.
- Added support for using a custom Discovery URL in U2M flows on AWS and GCP.
- Databricks SDK dependency upgraded to latest version 0.64.0
- Integrated Azure U2M flow into driver for improved stability.
- Fixed `ResultSet.getString` for Boolean columns in the metadata result set.
- Fixed volume operations not completing unless the ResultSet is fully iterated.
- Fixed `connection.getMetadata().getColumns()` to return the correct SQL data type code for complex type columns.
- Fixed a bug in the JDBC driver's metadata parsing for nested decimal fields within struct types.
- Fixed case-sensitive table search in `connection.getMetadata().getTables()`.
- Fixed `connection.getMetadata().getColumns()` to return the correct scale.
- Added support for providing custom HTTP options: `HttpMaxConnectionsPerRoute` and `HttpConnectionRequestTimeout`.
- Add V2 of chunk download using an async HTTP client, with corresponding implementations of `AbstractRemoteChunkProvider` and `AbstractArrowResultChunk`.
- Telemetry is enabled by default, subject to server-side rollout.
- Fixed Statement.getUpdateCount to return -1 for non-DML queries.
- Fixed Statement.setMaxRows(0) to be interpreted as no limit.
- Fixed retry behaviour to not throw an exception when there is no retry-after header for 503 and 429 status codes.
- Fixed encoded UserAgent parsing in BI tools.
- Fixed setting empty schema as the default schema in the spark session.
- Added DCO (Developer Certificate of Origin) check workflow for pull requests to ensure all commits are properly signed off.
- Added support for SSL client certificate authentication via parameter: SSLTrustStoreProvider
- Provide an option to push telemetry logs (using the flag `ForceEnableTelemetry=1`). For more details, see the documentation.
- Added `putFiles` methods in `DBFSVolumeClient` for async multi-file upload.
- Added validation on UID param to ensure it is either not set or set to 'token'.
- Added CloudFetch download speed logging at INFO level
- Added vendor error codes to SQLExceptions raised for incorrect UID, host or token.
- Column name support for JDBC ResultSet operations is now case-insensitive
- Updated arrow to 17.0.0 to resolve CVE-2024-52338
- Updated commons-lang3 to 3.18.0 to resolve CVE-2025-48924
- Enhanced SSL certificate path validation error messages to provide actionable troubleshooting steps.
- Fixed Bouncy Castle registration conflicts by using local provider instance instead of global security registration.
- Fixed Azure U2M authentication issue.
- Fixed unchecked exception thrown in delete session
- Fixed ParameterMetaData.getParameterCount() to return total parameter count from SQL parsing instead of bound parameter count, aligning with JDBC standards
- Added support for DoD (.mil) domains
- Enables fetching of metadata for SELECT queries using PreparedStatement prior to setting parameters or executing the query.
- Added support for SSL client certificate authentication via keystore configuration parameters: SSLKeyStore, SSLKeyStorePwd, SSLKeyStoreType, and SSLKeyStoreProvider.
- Updated JDBC URL regex to accept valid connection strings that were incorrectly rejected.
- Updated decimal conversion logic to fix numeric values missing decimal precision.
- Support for fetching tables and views across all catalogs using SHOW TABLES FROM/IN ALL CATALOGS in the SQL Exec API.
- Support for Token Exchange in OAuth flows, wherein third-party tokens are exchanged for InHouse tokens.
- Support for polling of statementStatus and sqlState for async SQL execution.
- Support for REAL, NUMERIC, CHAR, and BIGINT JDBC types in the `PreparedStatement.setObject` method.
- Support for the INTERVAL data type.
- Added explicit null check for Arrow value vector when the value is not set and Arrow null checking is disabled.
- Support for token cache in the OAuth U2M flow using the configuration parameters `EnableTokenCache` and `TokenCachePassPhrase`.
- Support for additional SSL functionality, including use of system trust stores (`UseSystemTruststore`) and allowing self-signed certificates (via `AllowSelfSignedCerts`).
- Added support for `getImportedKeys` and `getCrossReference` in SQL Exec API mode.
- Modified E2E tests to validate driver behavior under multi-threaded access patterns.
- Improved error handling through telemetry by throwing custom exceptions across the repository.
- Fixed bug where batch prepared statements could lead to backward-incompatible error scenarios.
- Corrected setting of decimal types in prepared statement executions.
- Resolved NullPointerException (NPE) that occurred during ResultSet and Connection operations in multithreaded environment.
- Support for connection parameter SocketTimeout.
- Gracefully handle the server-returned Thrift version in the open session response.
- Added OWASP security check in the repository.
- Updated SDK to the latest version (0.44.0).
- Added descriptive messages in Thrift error scenarios.
- BigDecimal is now set correctly to NULL if null value is provided.
- Fixed issue with JDBC URL not being parsed correctly when compute path is provided via properties.
- Addressed CVE vulnerabilities (CVE-2024-47535, CVE-2025-25193, CVE-2023-33953)
- Fixed a bug with PreparedStatement decimal parameters in the Thrift flow.
- Introduces a centralized timeout check and automatic cancellation for statements
- Allows specifying a default size for STRING columns (set to 255 by default) via the `defaultStringColumnLength` connection parameter
- Implements a custom retry strategy to handle long-running tasks and connection attempts
- Added support for Azure Managed Identity based authentication
- Adds existence checks for volumes, objects, and prefixes to improve operational coverage
- Allows adjusting the number of rows retrieved in each fetch operation for better performance via the `RowsFetchedPerBlock` parameter
- Allows overriding the default OAuth redirect port (8020) with a single port or comma-separated list of ports using `OAuth2RedirectUrlPort`
- Support for custom headers in the JDBC URL via the `http.header.<key>=<value>` connection parameter
- Removes the hard-coded default poll interval configuration in favor of a user-defined parameter for greater flexibility
- Adjusts the handling of NULL and non-NULL boolean values
- Ensures the driver respects the configured limit on the number of rows returned
- Improves retry behaviour to cover all operations, relying solely on the total retry time specified via the driver URL parameter
- Returns an exception instead of -1 when a column is not found
- Fixed columnType conversion for Variant and Timestamp_NTZ types
- Fixed a minor issue with string handling of whitespace.
- Support for complex data types, including MAP, ARRAY, and STRUCT.
- Support for TIMESTAMP_NTZ and VARIANT data types.
- Extended support for prepared statement when using thrift DBSQL/all-purpose clusters.
- Improved backward compatibility with the latest Databricks driver.
- Improved driver performance for large queries by optimizing chunk handling.
- Configurable HTTP connection pool size for better resource management.
- Support for Azure Active Directory (AAD) Service Principal in M2M OAuth.
- Implemented java.sql.Driver#getPropertyInfo to fetch driver properties.
- Set Thrift mode as the default for the driver.
- Improved driver telemetry (opt-in feature) for better monitoring and debugging.
- Enhanced test infrastructure to improve accuracy and reliability.
- Added SQL state support in SEA mode.
- Changes to JDBC URL parameters (to ensure compatibility with the latest Databricks driver):
- Removed catalog in favour of ConnCatalog
- Removed schema in favour of ConnSchema
- Renamed OAuthDiscoveryURL to OIDCDiscoveryEndpoint
- Renamed OAuth2TokenEndpoint to OAuth2ConnAuthTokenEndpoint
- Renamed OAuth2AuthorizationEndPoint to OAuth2ConnAuthAuthorizeEndpoint
- Renamed OAuthDiscoveryMode to EnableOIDCDiscovery
- Renamed OAuthRefreshToken to Auth_RefreshToken
- Ensured TIMESTAMP columns are returned in local time.
- Resolved inconsistencies in schema and catalog retrieval from the Connection class.
- Fixed minor issues with metadata fetching in Thrift mode.
- Addressed incorrect handling of access tokens provided via client info.
- Corrected the driver version reported by DatabaseMetaData.
- Fixed case-sensitive behaviour while fetching client info.
- Telemetry support in OSS JDBC.
- Support for fetching connection ID and closing connections by connection ID.
- Stream support implementation in the UC Volume DBFS Client.
- Hybrid result support added to the driver (for both metadata and executed queries).
- Support for complex data types.
- Apache Async HTTP Client 5.3 added for parallel query result downloads, optimizing query fetching and resource cleanup.
- Enhanced end-to-end testing for M2M and DBFS UCVolume operations, including improved logging and proxy handling.
- Removed the version check SQL call when connection is established.
- Fixed statement ID extraction from Thrift GUID.
- Made volume operations flag backward-compatible with the existing Databricks driver.
- Improved backward compatibility of ResultSetMetadata with the legacy driver.
- Fix schema in connection string
- Run queries in async mode in the thrift client.
- Added GET and DELETE operations for the DBFS client, enabling full UC Volume operations (PUT, GET, DELETE) without spinning up DB compute.
- Do not send repeated DBSQL version queries.
- Skip SEA compatibility check if null or empty DBSQL version is returned by the workspace.
- Skips SEA check when DBSQL version string is blank space.
- Updated SDK version to resolve CVEs.
- Eliminated the statement execution thread pool.
- Fixed UC volume GET operation.
- Fixed async execution in SEA mode.
- Fixed and updated the SDK version to resolve CVEs.
- Added GCP OAuth support: Use Google ID (service account email) with a custom JWT or Google Credentials.
- SQL state added in thrift flow
- Add readable statement-Id for thrift
- Added Client to perform UC Volume operations without the need of spinning up your DB compute
- Add compression for SEA flow
- Updated support for large queries in thrift flow
- Throw exceptions in case unsupported old DBSQL versions are used (i.e., before DBR V15.2)
- Deploy reduced POM during release
- Improve executor service management
- Certificate revocation properties only apply when provided
- Create a new HTTP client for each connection
- Accept customer userAgent without errors
- Added compression in the Thrift protocol flow.
- Added support for asynchronous query execution.
- Implemented `executeBatch` for batch operations.
- Added a method to extract disposition from result set metadata.
- Optimised memory allocation for type converters.
- Enhanced logging for better traceability.
- Improved performance in the Thrift protocol flow.
- Upgraded `commons-io` to address a security vulnerability (CVE mitigation).
- Ensured thread safety in `DatabricksPooledConnection`.
- Set the UBER jar as the default jar for distribution.
- Refined result chunk management for better efficiency.
- Enhanced integration tests for broader coverage.
- Increased unit test coverage threshold to 85%.
- Improved interaction with Thrift-server client.
- Fixed compatibility issue with other drivers in the driver manager.
- Support proxy ignore list.
- OSS Readiness improvements.
- Improve Logging.
- Add SSL Truststore URL params to allow configuring custom SSL truststore.
- Accept Pass-through access token as part of JDBC connector parameter.
- Aligned the `getTables` Thrift call with JDBC standards.
- Improved metadata functions.
- Fixed memory leaks and made chunk download thread-safe.
- Fixed issues with prepared statements in Thrift and set default timestamps.
- Fixed issues with empty table types, null pointer in `IS_GENERATEDCOLUMN`, and ordinal position.
- Increased retry attempts for chunk downloads to enhance resilience.
- Fixed exceptions being thrown for statement timeouts and cancel futures.
- Improved UC Volume code.
- Remove cyclic dependencies in package
- Fallback mechanism for smoother token refresh flow.
- Retry logic to improve chunk download reliability.
- Improved logging for timeouts and statement execution for better issue tracking.
- Timestamp logging in UTC to avoid skew caused by local timezones.
- Passthrough token handling with backward compatibility for the existing driver.
- Continued improvements towards OSS readiness.
- Aligned the `getTables` Thrift call with JDBC standards.
- Improved accuracy of column metadata, fixing issues with empty table types, null pointer in `IS_GENERATEDCOLUMN`, and ordinal position.
- Passthrough token handling for backward compatibility.
- Fixed memory leaks and made chunk download thread-safe.
- Fixed issues with prepared statements in Thrift and set default timestamps.
- Increased retry attempts for chunk downloads to enhance resilience.
- Exceptions are now thrown for statement timeouts and cancel futures.
- OSS readiness changes.
- M2M JWT support.
- Credential provider OAuthRefresh.
- Commands to run benchmarking tests.
- Compiling logic for benchmarking workflows.
- Fixed metadata and TableType issues.
- Fixed precision and scale for certain dataTypes.
- Minor bug for UC Volume in Thrift mode.
- SLF4J support for default SDK mode.
- Deprecated username handling.
- Catalog and schema not set by default.
- Support for Input Stream in UC Volume Operations.
- Metadata fixes.
- Redacted passwords from logging.
- Release OSS JDBC driver for Public Preview.
- Initial beta release of Databricks JDBC OSS Driver for Public Preview.
- Stable release before Public Preview.
- All-purpose cluster support and logging support.
- First stable release with support for SQL warehouses.