Databricks Delta supports most of the options provided by Spark SQL DataFrame read and write APIs for performing batch reads and writes on tables.
Databricks Delta supports creating tables in the metastore using standard DDL:
CREATE TABLE events (
  date DATE,
  eventId STRING,
  eventType STRING,
  data STRING)
USING DELTA
PARTITIONED BY (date)
LOCATION '/delta/events'
When you create a table in the metastore using Databricks Delta, it creates a symlink-like pointer in the metastore to the transaction log and data that are stored on DBFS. This pointer makes it easier for other users to discover and refer to the data without having to worry about exactly where it is stored. However, the metastore is not the source of truth about what is valid in the table - that responsibility stays with Databricks Delta.
When you run CREATE TABLE with a LOCATION that already contains data stored using Databricks Delta, Databricks Delta does the following:
If you specify only the table name and location, for example:
CREATE TABLE events USING DELTA LOCATION '/delta/events'
the table in the Hive metastore automatically inherits the schema, partitioning, and table properties of the existing data. This functionality can be used to “import” data into the metastore.
If you specify any configuration (schema, partitioning, or table properties), Databricks Delta verifies that the specification exactly matches the configuration of the existing data.
If the specified configuration does not exactly match the configuration of the data, Databricks Delta throws an exception that describes the discrepancy.
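As a sketch of what such a mismatch looks like (reusing the events table defined earlier; the exact exception text depends on the runtime version), declaring a partition column that differs from the existing data fails:

```sql
-- The data at /delta/events is already partitioned by (date), so declaring
-- a different partition column raises an exception describing the mismatch.
CREATE TABLE events (
  date DATE,
  eventId STRING,
  eventType STRING,
  data STRING)
USING DELTA
PARTITIONED BY (eventType)  -- mismatch: existing data is partitioned by (date)
LOCATION '/delta/events'
```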
You can partition data to speed up queries or DML that have predicates involving the partition columns.
To partition data when you create a Databricks Delta table, specify PARTITIONED BY columns.
To control the location of the table, you can optionally specify the LOCATION as a path on DBFS.
Tables created with a specified LOCATION are considered unmanaged by the metastore. Unlike a managed table, where no path is specified, an unmanaged table’s files are not deleted when you DROP the table.
You can query a Databricks Delta table by specifying either its path:
SELECT * FROM delta.`/delta/events`
or table name:
SELECT * FROM events
The results returned automatically reflect the most recent snapshot of the table for any query; you never need to run REFRESH TABLE. Databricks Delta automatically uses partitioning and statistics to read the minimum amount of data when there are applicable predicates in the query.
You can optionally specify partition columns when creating a new Databricks Delta table. Partitioning is used to speed up queries or DML that have predicates involving the partition columns. A common pattern is to partition by date.
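For example, with the date-partitioned events table defined earlier, queries that filter on the partition column can skip all non-matching partitions (a sketch; the exact pruning behavior depends on the data layout):

```sql
-- Only the partitions for January 2017 are scanned, thanks to partition pruning.
SELECT eventType, count(*)
FROM events
WHERE date >= '2017-01-01' AND date <= '2017-01-31'
GROUP BY eventType
```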
Using append mode, you can atomically add new data to an existing Databricks Delta table.
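In SQL, an append can be sketched as an INSERT INTO (assuming a hypothetical staging table newEvents with a schema compatible with events):

```sql
-- Atomically appends the rows of newEvents to the events table.
INSERT INTO events
SELECT * FROM newEvents
```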
To atomically replace all of the data in a table, you can use overwrite mode:
df.write
  .format("delta")
  .mode("overwrite")
  .save("/delta/events")
You can selectively overwrite only the data that matches predicates over partition columns. The following command atomically replaces the month of January with the data in df:
df.write
  .format("delta")
  .mode("overwrite")
  .option("replaceWhere", "date >= '2017-01-01' AND date <= '2017-01-31'")
  .save("/delta/events")
This sample code writes out the data in df, validates that it all falls within the specified partitions, and performs an atomic replacement.
Unlike the file APIs in Apache Spark, Databricks Delta remembers and enforces the schema of a table. This means that by default overwrites do not replace the schema of an existing table.
The UPDATE statement allows you to apply expressions to change the value of columns when a row matches a predicate. For example, you can use UPDATE to fix a spelling mistake in the eventType column:
UPDATE events SET eventType = 'click' WHERE eventType = 'clck'
Similar to delete, update operations automatically make use of the partitioning of the table when possible.
The MERGE INTO statement allows you to merge a set of updates and insertions into an existing dataset. For example, the following statement takes a stream of updates and merges it into the events table. When an event with the same eventId is already present, Databricks Delta updates the data column using the given expression. When there is no matching event, Databricks Delta adds a new row.
Here’s a worked example:
MERGE INTO events
USING updates
ON events.eventId = updates.eventId
WHEN MATCHED THEN
  UPDATE SET events.data = updates.data
WHEN NOT MATCHED THEN
  INSERT (date, eventId, data) VALUES (date, eventId, data)
You must specify a value for every column in your table when you perform an INSERT (for example, when there is no matching row in the existing dataset). However, you do not need to update all values.
MERGE INTO requires that the update table is small; there are no size requirements on the destination table. If your workload does not satisfy this requirement, try using separate INSERT and UPDATE statements.
Databricks Delta tables allow you to remove data that matches a predicate. For instance, to delete all events from before 2017, you can run the following DML:
DELETE FROM events WHERE date < '2017-01-01'
Delete operations automatically make use of the partitioning of the table when possible. This optimization means that it will be significantly faster to delete data based on partition predicates.
Databricks Delta automatically validates that the schema of the DataFrame being written is compatible with the schema of the table. Columns that are present in the table but not in the DataFrame are set to null. If there are extra columns in the DataFrame that are not present in the table, this operation throws an exception. Databricks Delta provides DDL to explicitly add new columns and the ability to update the schema automatically.
If you specify other options, such as partitionBy, in combination with append mode, Databricks Delta validates that they match and throws an error for any mismatch. When partitionBy is not present, appends automatically follow the partitioning of the existing data.
This feature requires Databricks Runtime 4.1 or higher.
Databricks Delta lets you update the schema of a table. The following types of changes are supported:
- Adding new columns (at arbitrary positions)
- Reordering existing columns
- Adding or removing invariants from a column
You can make these changes explicitly using DDL or implicitly using DML.
Streams reading from a Databricks Delta table must be restarted after a schema change.
You can use the following DDL to explicitly change the schema of a table.
ALTER TABLE table_name ADD COLUMNS (col_name data_type [COMMENT col_comment] [AFTER colA_name|BEFORE colB_name], ...)
By default, nullability is true.
To add a column to a nested field, use:
ALTER TABLE boxes ADD COLUMNS (col_name.nested_col_name STRING)
If the schema before running ALTER TABLE boxes ADD COLUMNS (colB.nested STRING BEFORE field2) is:
- root
| - colA
| - colB
| +-field1
| +-field2
the schema after is:
- root
| - colA
| - colB
| +-field1
| +-nested
| +-field2
Adding nested columns is supported only for structs. Arrays and maps are not supported.
Change column comment or ordering
ALTER TABLE table_name CHANGE [COLUMN] col_name [COMMENT col_comment] [FIRST|AFTER colA_name|BEFORE colB_name]
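Following the syntax above, a concrete example against the events table used throughout might look like this (the comment text is illustrative):

```sql
-- Documents the eventType column and places it immediately after eventId.
ALTER TABLE events CHANGE COLUMN eventType COMMENT 'type of user event' AFTER eventId
```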
Databricks Delta can automatically update the schema of a table as part of a DML transaction (either appending or overwriting), and make the schema compatible with the data being written.
Columns that are present in the DataFrame but missing from the table are automatically added as part of a write transaction when either of the following is true:
When both options are specified, the option from the DataFrameWriter takes precedence. The added columns are appended to the end of the struct they are present in. Case is preserved when appending a new column.
mergeSchema is not supported when table access control is enabled (as it elevates a request that requires MODIFY to one that requires ALL PRIVILEGES).
mergeSchema cannot be used with INSERT INTO or .write.insertInto().
Columns that are NullType are dropped from the DataFrame when writing into Databricks Delta (because Parquet doesn’t support NullType), but are still stored in the schema. When a different data type is received for such a column, Databricks Delta merges the schema to the new data type. If Databricks Delta receives a NullType for an existing column, the old schema is kept and the new column is dropped during the write.
NullType is not supported in streaming. Because you must set schemas when using streaming, this should be very rare.
NullType is also not accepted for complex types such as ArrayType and MapType.
By default, overwriting the data in a table does not overwrite the schema. When overwriting a table using replaceWhere, you may still want to override the schema of the data being written. You can choose to replace the schema and partitioning of the table by setting the overwriteSchema option to true.
Databricks Delta supports the creation of views on top of Databricks Delta tables just like you might with a data source table. These views integrate with table access control to allow for column and row level security. The core challenge when you operate with views is resolving the schemas. If you alter a Databricks Delta table schema, you must recreate derivative views to account for any additions to the schema. For instance, if you add a new column to a Databricks Delta table, you must make sure that this column is available in the appropriate views built on top of that base table.
You can store your own metadata as a table property using TBLPROPERTIES.
In addition, to tailor behavior and performance, Databricks Delta supports a number of Databricks Delta table properties:
- To block deletes and modifications of a table: delta.appendOnly=true
- To configure the number of columns for which statistics are collected: delta.dataSkippingNumIndexedCols=<number-of-columns>. This property takes effect only for new data that is written out.
- To randomize file prefixes to avoid hot spots in S3 metadata: delta.randomizeFilePrefixes=true. See AWS file naming best practices for details.
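Setting one of these properties can be sketched with standard DDL, for example (using the statistics property described above; the column count is illustrative):

```sql
-- Collect statistics on the first 5 columns of newly written data.
ALTER TABLE events SET TBLPROPERTIES ('delta.dataSkippingNumIndexedCols' = '5')
```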
Databricks Delta disallows setting any other table properties that begin with the delta. prefix.
Databricks Delta has rich features for exploring table metadata. It supports standard commands such as DESC TABLE and, in addition, provides the following unique commands:
DESCRIBE DETAIL - provides information about schema, partitioning, table size, and so on. For example, you can see the current reader and writer versions of a table by running DESCRIBE DETAIL.
DESCRIBE HISTORY - provides provenance information, including the operation, user, and so on, for each write to a table. This information is not recorded by versions of Databricks Runtime below 4.1, and tables created using those versions show this information as null. Table history is retained for 30 days.
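Usage follows the same pattern as other DESC-style commands, for example:

```sql
-- Shows schema, partitioning, table size, and reader/writer protocol versions.
DESCRIBE DETAIL events

-- Shows one row per write to the table, including operation and user.
DESCRIBE HISTORY events
```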
The Data sidebar provides a visual view of this detailed table information and history for Databricks Delta tables. In addition to the table schema and sample data, you can click the History tab to see the table history.