Databricks Concepts

This topic introduces the fundamental concepts you need to understand to use Databricks effectively.

Workspace

A Workspace is the root folder for Databricks. The Workspace stores notebooks, libraries, and dashboards.

This section describes the objects that you work with in a Databricks Workspace.

Databricks File System (DBFS)
A filesystem abstraction layer over a blob store. It contains directories, which can contain files (data files, libraries, and images), and other directories. DBFS is automatically populated with some datasets that you can use to learn and experiment with Databricks.
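
For example, you can browse DBFS directly from a notebook using the dbutils.fs utilities (a minimal sketch in Python; the datasets available under /databricks-datasets vary by workspace, and the README.md path is assumed to be present):

    # List the sample datasets that Databricks mounts into DBFS
    display(dbutils.fs.ls("/databricks-datasets"))

    # Preview the first bytes of a file stored in DBFS
    print(dbutils.fs.head("/databricks-datasets/README.md"))
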
Notebook
A web-based interface to documents that contain runnable commands, visualizations, and narrative text.
Command
Code that runs in a notebook. A command operates on files and tables. Commands can be run in sequence, referring to the output of one or more previously run commands.
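
For example, two Python commands run in sequence, where the second refers to the DataFrame created by the first (a minimal sketch; the file path and column name are illustrative):

    # Command 1: read a CSV file from DBFS into a DataFrame
    df = spark.read.option("header", "true").csv("/databricks-datasets/path/to/file.csv")

    # Command 2: run later in the notebook, operating on the output of command 1;
    # display() renders the result as a table with built-in visualization options
    display(df.groupBy("some_column").count())
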
Visualization
A graphical rendering of table data and the output of notebook commands.
Dashboard
An interface that provides organized access to visualizations.
Library
A package of code available to the execution context running on your cluster. Databricks Runtime includes many libraries and you can add your own.
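
For example, on recent Databricks Runtime versions you can add a notebook-scoped Python library with the %pip magic command (a sketch; requests is just an example package, and the %pip command should go in its own cell near the top of the notebook):

    %pip install requests

In a later command, the installed package is available to the notebook's execution context like any library bundled with Databricks Runtime:

    import requests  # now importable alongside the bundled libraries
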
Folder
User-defined container for notebooks, dashboards, and libraries.
Archive
A package of notebooks that can be exported from and imported into Databricks.
Databricks Runtime
The set of core components that run on the clusters managed by Databricks. Databricks Runtime includes Apache Spark but also adds a number of components and updates that substantially improve the usability, performance, and security of big data analytics.

Data Management

This section describes the objects that hold the data on which you perform analytics and feed into machine learning algorithms.

Database
A collection of information that is organized so that it can be easily accessed, managed, and updated.
Table
A representation of structured data. You query tables with Spark SQL and Apache Spark APIs. A table typically consists of multiple partitions.
Partition
A portion of a table. By splitting a large table into smaller partitions, queries that access only a fraction of the data can run faster because there is less data to scan, as the example at the end of this section shows.
Metastore
The component that stores all of the structural information for the various tables and partitions in the data warehouse, including column and column type information, the serializers and deserializers necessary to read and write data, and the corresponding files where the data is stored.
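
The following sketch ties these concepts together: it creates a partitioned table whose schema, partitioning, and storage location are recorded in the metastore, then queries a single partition (Python with Spark SQL; table and column names are illustrative):

    # Create a table partitioned by a date column; the metastore records its
    # schema, partitioning, and the location of the underlying files
    spark.sql("""
        CREATE TABLE IF NOT EXISTS events (
            event_id STRING,
            event_type STRING,
            event_date DATE
        )
        USING parquet
        PARTITIONED BY (event_date)
    """)

    # A query that filters on the partition column only scans the matching partition
    display(spark.sql(
        "SELECT event_type, COUNT(*) AS n FROM events "
        "WHERE event_date = '2018-01-01' GROUP BY event_type"))

    # Inspect the metadata the metastore holds for this table
    display(spark.sql("DESCRIBE EXTENDED events"))
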

Computation Management

This section describes concepts that you need to know to run analytic and machine learning computations in Databricks.

Cluster
A set of computation resources and configurations on which you run notebooks and jobs.
Execution context
The state for a REPL environment for each supported programming language: Python, R, Scala, and SQL.
Job
A way of running a notebook or library either immediately or on a scheduled basis.

Model Management

This section describes concepts that you need to know to build machine learning models.

Model
A set of known dimensions that serves as the framework for training a machine to make predictions: the initial structure imposed on a function before it is trained.
Trained Model
The outcome of the training process: a mathematical mapping from inputs to outputs.
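
For example, in Apache Spark ML the distinction looks like this (a minimal sketch; training_df and test_df are assumed to be DataFrames with "features" and "label" columns):

    from pyspark.ml.classification import LogisticRegression

    # Model: the chosen structure and hyperparameters, not yet fit to any data
    lr = LogisticRegression(maxIter=10, regParam=0.01)

    # Trained model: the outcome of the training process, a concrete mapping
    # from feature vectors to predictions
    lr_model = lr.fit(training_df)
    predictions = lr_model.transform(test_df)
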

Authentication and Authorization

This section describes concepts that you need to know when you manage Databricks users and their access to Databricks workspace objects.

User
A unique individual who has access to the system.
Group
A collection of users.
Access control list (ACL)
A list of permissions attached to the Workspace, cluster, job, or table. An ACL specifies which users or system processes are granted access to the objects, as well as what operations are allowed on the objects. Each entry in a typical ACL specifies a subject and an operation.