
Create custom model serving endpoints

This article describes how to create model serving endpoints that serve custom models using Databricks Model Serving.

Model Serving provides the following options for serving endpoint creation:

  • The Serving UI
  • REST API
  • MLflow Deployments SDK

For creating endpoints that serve generative AI models, see Create foundation model serving endpoints.

Requirements

  • Your workspace must be in a supported region.
  • If you use custom libraries or libraries from a private mirror server with your model, see Use custom Python libraries with Model Serving before you create the model endpoint.
  • To create endpoints with the MLflow Deployments SDK, you need the MLflow Deployments client, which is included in the mlflow Python package. To create the client, run:
Python
import mlflow.deployments

client = mlflow.deployments.get_deploy_client("databricks")

Access control

To understand access control options for managing model serving endpoints, see Manage permissions on your model serving endpoint.

Create an endpoint

You can create an endpoint for model serving with the Serving UI.

  1. Click Serving in the sidebar to display the Serving UI.

  2. Click Create serving endpoint.

    Model serving pane in Databricks UI

For models registered in the Workspace model registry or models in Unity Catalog:

  1. In the Name field, provide a name for your endpoint.

  2. In the Served entities section:

    1. Click into the Entity field to open the Select served entity form.
    2. Select the type of model you want to serve. The form dynamically updates based on your selection.
    3. Select which model and model version you want to serve.
    4. Select the percentage of traffic to route to your served model.
    5. Select what size compute to use. You can use CPU or GPU compute for your workloads. See GPU workload types for more information on the available GPU compute types.
    6. Under Compute Scale-out, select the compute scale-out size that corresponds to the number of requests this served model can process at the same time. This number should be roughly equal to QPS x model run time.
      Available sizes are Small for 0-4 requests, Medium for 8-16 requests, and Large for 16-64 requests.
    7. Specify if the endpoint should scale to zero when not in use.
    8. Under Advanced configuration, you can add an instance profile to connect to AWS resources from your endpoint.
  3. Click Create. The Serving endpoints page appears with Serving endpoint state shown as Not Ready.

    Create a model serving endpoint
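
If you prefer to create the endpoint programmatically, the MLflow Deployments SDK accepts the same configuration as the UI. The following is a minimal sketch; the endpoint name, model name, and version are placeholders, and it assumes a model already registered in Unity Catalog.

Python
import mlflow.deployments

client = mlflow.deployments.get_deploy_client("databricks")

endpoint = client.create_endpoint(
    name="my-custom-endpoint",  # placeholder endpoint name
    config={
        "served_entities": [
            {
                # Placeholder Unity Catalog model and version
                "entity_name": "catalog.schema.my_model",
                "entity_version": "1",
                # Small = 0-4 concurrent requests (see Compute Scale-out above)
                "workload_size": "Small",
                "scale_to_zero_enabled": True,
            }
        ]
    },
)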

GPU workload types

GPU deployment is compatible with the following package versions:

  • PyTorch 1.13.0 - 2.0.1
  • TensorFlow 2.5.0 - 2.13.0
  • MLflow 2.4.0 and above

To deploy your models using GPUs, include the workload_type field in your endpoint configuration during endpoint creation, or as an endpoint configuration update using the API. To configure your endpoint for GPU workloads with the Serving UI, select the desired GPU type from the Compute Type dropdown.

JSON
{
  "served_entities": [{
    "entity_name": "catalog.schema.ads1",
    "entity_version": "2",
    "workload_type": "GPU_MEDIUM",
    "workload_size": "Small",
    "scale_to_zero_enabled": false
  }]
}
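
The same configuration can be supplied programmatically. As a hedged sketch, here is the equivalent call through the MLflow Deployments SDK; the endpoint name is a placeholder.

Python
import mlflow.deployments

client = mlflow.deployments.get_deploy_client("databricks")

client.create_endpoint(
    name="ads-gpu-endpoint",  # placeholder endpoint name
    config={
        "served_entities": [
            {
                "entity_name": "catalog.schema.ads1",
                "entity_version": "2",
                "workload_type": "GPU_MEDIUM",  # 1xA10G, 24GB GPU memory
                "workload_size": "Small",
                "scale_to_zero_enabled": False,
            }
        ]
    },
)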

The following table summarizes the supported GPU workload types.

GPU workload type    GPU instance    GPU memory
GPU_SMALL            1xT4            16GB
GPU_MEDIUM           1xA10G          24GB
MULTIGPU_MEDIUM      4xA10G          96GB
GPU_MEDIUM_8         8xA10G          192GB

Modify a custom model endpoint

After enabling a custom model endpoint, you can update the compute configuration as needed. This capability is particularly helpful if you need additional resources for your model. Workload size and compute configuration play a key role in the resources allocated to serve your model.

Until the new configuration is ready, the old configuration keeps serving prediction traffic. While an update is in progress, another update cannot be made; however, you can cancel an in-progress update from the Serving UI.

After you enable a model endpoint, select Edit endpoint to modify the compute configuration of your endpoint.

You can do the following:

  • Choose from a few workload sizes, and autoscaling is automatically configured within the workload size.
  • Specify if your endpoint should scale down to zero when not in use.
  • Modify the percent of traffic to route to your served model.

You can cancel an in-progress configuration update by selecting Cancel update on the top right of the endpoint's details page. This functionality is only available in the Serving UI.
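
Configuration updates themselves can be submitted outside the UI. For example, here is a minimal sketch of an update through the MLflow Deployments SDK; the endpoint and model names are placeholders.

Python
import mlflow.deployments

client = mlflow.deployments.get_deploy_client("databricks")

client.update_endpoint(
    endpoint="my-custom-endpoint",  # placeholder: an existing endpoint
    config={
        "served_entities": [
            {
                "entity_name": "catalog.schema.my_model",
                "entity_version": "2",      # roll forward to a newer version
                "workload_size": "Medium",  # scale up to 8-16 concurrent requests
                "scale_to_zero_enabled": True,
            }
        ]
    },
)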

Score a model endpoint

To score your model, send requests to the model serving endpoint.
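
For example, here is a minimal sketch of scoring through the MLflow Deployments SDK, assuming a placeholder endpoint whose model expects two numeric features:

Python
import mlflow.deployments

client = mlflow.deployments.get_deploy_client("databricks")

response = client.predict(
    endpoint="my-custom-endpoint",  # placeholder endpoint name
    inputs={
        "dataframe_records": [
            # Placeholder input schema; match your model's signature
            {"feature_1": 1.0, "feature_2": 2.0}
        ]
    },
)
print(response)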

Additional resources

Notebook examples

The following notebooks include different Databricks registered models that you can use to get up and running with model serving endpoints. For additional examples, see Tutorial: Deploy and query a custom model.

The model examples can be imported into the workspace by following the directions in Import a notebook. After you choose and create a model from one of the examples, register it in Unity Catalog, and then follow the UI workflow steps for model serving.
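
As a brief sketch of the registration step, assuming a scikit-learn model and a placeholder Unity Catalog name (Unity Catalog registration requires a model signature, which logging an input example infers):

Python
import mlflow
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Register models in Unity Catalog rather than the workspace registry
mlflow.set_registry_uri("databricks-uc")

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier().fit(X, y)

with mlflow.start_run():
    mlflow.sklearn.log_model(
        model,
        artifact_path="model",
        input_example=X[:5],  # used to infer the required model signature
        registered_model_name="catalog.schema.iris_classifier",  # placeholder
    )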

Train and register a scikit-learn model for model serving notebook

Train and register a HuggingFace model for model serving notebook
