Databricks Delta supports the
OPTIMIZE operation, which optimizes the layout of data stored in DBFS to improve query speed. Databricks Delta supports two layout algorithms: bin-packing and ZOrdering.
This topic describes how to run
OPTIMIZE and how the two layout algorithms work. The Databricks Delta Frequently asked questions explains why
OPTIMIZE is not automatic and has recommendations on how often to run it.
Databricks Delta can improve the speed of read queries from a table by coalescing small files into larger ones. You can trigger compaction by running the OPTIMIZE command:
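OPTIMIZE events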
If you have a large amount of data and only want to optimize a subset of it, you can specify an optional partition predicate using WHERE:
OPTIMIZE events WHERE date >= '2017-01-01'
Readers of Databricks Delta tables use snapshot isolation, which means that they are not interrupted when
OPTIMIZE removes unnecessary files from the transaction log.
Bin-packing optimization is idempotent, meaning that if it is run twice on the same dataset, the second instance has no effect. Moreover,
OPTIMIZE makes no data-related changes to the table, so a read before and after an
OPTIMIZE has the same results.
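For example, the following sequence on the events table from the examples above illustrates this: the two SELECT statements return the same count, because OPTIMIZE only rewrites the file layout, not the data.
SELECT count(*) FROM events;
OPTIMIZE events;
SELECT count(*) FROM events; -- same result as the first query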
Running OPTIMIZE on a table that is a streaming source does not affect any current or future streams that treat this table as a source.
ZOrdering is a technique to colocate related information in the same set of files. This co-locality is automatically used by Databricks Delta data-skipping algorithms to dramatically reduce the amount of data that needs to be read. To ZOrder data, you must specify the columns to order on:
OPTIMIZE events WHERE date >= current_timestamp() - INTERVAL 1 day ZORDER BY (eventType)
You can specify multiple columns for
ZORDER BY as a comma-separated list. However, the effectiveness of the locality drops with each additional column.
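For example, to Z-Order on two columns (city is a hypothetical column here, shown only to illustrate the comma-separated syntax; choose columns you expect to filter on):
OPTIMIZE events WHERE date >= '2017-01-01' ZORDER BY (eventType, city) -- city is a hypothetical column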
ZOrdering is not an idempotent operation; it rearranges all of the data that matches the given filter. Therefore, we suggest that you limit it to new data, using partition filters when possible.
Data skipping information is collected automatically when you write data into a Databricks Delta table. To provide faster queries, Databricks Delta takes advantage of this information (min and max values) at query time. You do not need to configure data skipping.
To ensure that concurrent readers can continue reading a stale snapshot of a table, Databricks Delta leaves deleted files on DBFS for a period of time. To save on storage costs, you should occasionally clean up these invalid files using the VACUUM command:
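VACUUM events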
You can also specify
DRY RUN to test the vacuum and return a list of files to be deleted:
VACUUM events DRY RUN
The VACUUM command removes any files that are no longer referenced in the transaction log for the table and are older than a retention threshold. The default threshold is 7 days, but you can specify an alternate retention interval. For example, to delete all stale files older than 8 days, you can execute the following SQL command:
VACUUM boxes RETAIN 192 HOURS
Databricks does not recommend that you set a retention interval shorter than 7 days because old snapshots and uncommitted files can still be in use by concurrent readers or writers to the table. If
VACUUM cleans up active files, concurrent readers can fail or, worse, tables can be corrupted when
VACUUM deletes files that have not yet been committed. Databricks Delta has a safety check to prevent you from running a dangerous
VACUUM command. If you are certain that there are no operations being performed on this table that take longer than the retention interval you plan to specify, you can turn off this safety check by setting the SQL conf
spark.databricks.delta.retentionDurationCheck.enabled to false. You must choose an interval that is longer than the longest running concurrent transaction and the longest period that any stream can lag behind the most recent update to the table.
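For example, to turn off the safety check in SQL before running a VACUUM with a short retention interval (re-enable the check afterwards):
SET spark.databricks.delta.retentionDurationCheck.enabled = false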