The Databricks integration is API Key based and does not require setting up an app.

Required Credentials of a Linked Account

To connect the Databricks integration, a Linked Account (end user) needs to provide the following:
  1. API Key (Personal Access Token)
  2. Base URL
The steps below explain how a Linked Account can obtain these credentials.

Getting Credentials of Databricks

To acquire the required credentials and connect a Linked Account, follow the steps below:
  1. Log in to your Databricks workspace at your workspace URL (e.g., https://<workspace-id>.azuredatabricks.net).
  2. Click on your profile icon in the top right corner and select Settings.
Navigation for Personal Access Token
  3. Navigate to Developer in the left menu, then click the Manage button next to Access tokens.
  4. Click Generate new token.
Getting Personal Access Token
  5. Provide a Comment (e.g., “Refold Integration”) and set a Lifetime for the token. Select the scopes that the Refold Databricks connector should have access to, then click Generate.
  6. Copy the token displayed on the screen. This is your API Key.
Obtain the API Key
The token is displayed only once. Copy and store it immediately; if it is lost, you will need to revoke it and generate a new one.
  7. Your Base URL is the workspace URL you used to log in (e.g., https://<workspace-id>.azuredatabricks.net).
The Linked Account or end user now has all the credentials required to connect with Databricks.
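As a quick sanity check, the two credentials can be verified with a direct call to the Databricks REST API. The sketch below is illustrative and not part of Refold; it assumes the Base URL and API Key are exported as the `DATABRICKS_HOST` and `DATABRICKS_TOKEN` environment variables (the Databricks CLI convention) and uses the SCIM `Me` endpoint, which any valid personal access token can call.

```python
import os
import urllib.request

def auth_headers(token: str) -> dict:
    # Databricks personal access tokens are sent as a Bearer token.
    return {"Authorization": f"Bearer {token}"}

def verify_credentials(base_url: str, token: str) -> bool:
    # GET /api/2.0/preview/scim/v2/Me returns the calling user's details
    # for any token that can reach the workspace.
    url = f"{base_url.rstrip('/')}/api/2.0/preview/scim/v2/Me"
    req = urllib.request.Request(url, headers=auth_headers(token))
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status == 200

# Illustrative usage; skipped when the environment variables are not set.
host = os.environ.get("DATABRICKS_HOST")
token = os.environ.get("DATABRICKS_TOKEN")
if host and token:
    print("credentials ok:", verify_credentials(host, token))
```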

Actions and triggers

In Refold, you can orchestrate your use cases using Databricks actions and triggers. The following Databricks actions and triggers are supported by Refold.
Catalogs
  1. Query Catalogs - Gets an array of catalogs in the metastore in Databricks.
  2. Create Catalog - Creates a new catalog instance in the parent metastore in Databricks.
  3. Retrieve Catalog - Gets the specified catalog in a metastore in Databricks.
  4. Update Catalog - Updates the catalog that matches the supplied name in Databricks.
  5. Delete Catalog - Deletes the catalog that matches the supplied name in Databricks.

Clusters
  1. Create Cluster - Create a new cluster in Databricks.
  2. List Clusters - List all pinned and active clusters in Databricks.
  3. Get Cluster Info - Get information about a specific cluster in Databricks.
  4. Query Clusters - Return information about all pinned, active, and recently terminated clusters in Databricks.
  5. Start Cluster - Start a terminated cluster in Databricks.
  6. Restart Cluster - Restart a running cluster in Databricks.
  7. Terminate Cluster - Terminate a cluster in Databricks.
  8. Delete Cluster - Permanently terminates a cluster and removes it asynchronously in Databricks.
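The cluster lifecycle actions map onto the Databricks Clusters API. A minimal sketch, assuming a placeholder cluster ID and our own `cluster_request` helper: note that the API's `delete` endpoint terminates a cluster, while `permanent-delete` removes it entirely, mirroring the Terminate/Delete distinction above.

```python
import json
import urllib.request

def cluster_request(base_url: str, action: str, cluster_id: str):
    # Build the URL and JSON body for a Clusters API 2.1 lifecycle call.
    # action: "start", "restart", "delete" (terminate), "permanent-delete".
    url = f"{base_url.rstrip('/')}/api/2.1/clusters/{action}"
    return url, {"cluster_id": cluster_id}

def cluster_action(base_url: str, token: str, action: str, cluster_id: str) -> dict:
    url, body = cluster_request(base_url, action, cluster_id)
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)

# e.g. cluster_action(base_url, token, "start", "0123-456789-abcde123")
```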

Groups
  1. List Groups - Get all group details in Databricks.
  2. Create Group - Create a new group in Databricks.
  3. Update Group - Update group details in Databricks.
  4. Delete Group - Delete a group in Databricks.

Jobs
  1. Create Job - Create a new job in Databricks.
  2. Query Jobs - Retrieves a list of jobs in Databricks.
  3. Show Job - Retrieves the details for a single job in Databricks.
  4. Update Job - Add, update, or remove specific settings of an existing job in Databricks.
  5. Delete Job - Delete a job in Databricks.
  6. Run Job - Run a job and return the run ID of the triggered run in Databricks.
  7. Get Job Run - Retrieves the metadata of a run in Databricks.
  8. Cancel Run - Cancels a job run or a task run asynchronously in Databricks.
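The Run Job and Get Job Run pair correspond to the Jobs API's `run-now` and `runs/get` endpoints. A hedged sketch of triggering a run and polling its state until it finishes (the job ID is a placeholder, and the helper names are our own, not Refold's):

```python
import json
import time
import urllib.request

TERMINAL_STATES = {"TERMINATED", "SKIPPED", "INTERNAL_ERROR"}

def is_finished(life_cycle_state: str) -> bool:
    # A run is done once it reaches a terminal life-cycle state.
    return life_cycle_state in TERMINAL_STATES

def _call(base_url, token, method, path, body=None):
    # Minimal JSON wrapper around the Databricks REST API.
    req = urllib.request.Request(
        f"{base_url.rstrip('/')}{path}",
        data=json.dumps(body).encode() if body is not None else None,
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method=method,
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)

def run_job_and_wait(base_url, token, job_id, poll_seconds=10):
    # POST /api/2.1/jobs/run-now returns the run_id of the triggered run.
    run_id = _call(base_url, token, "POST", "/api/2.1/jobs/run-now",
                   {"job_id": job_id})["run_id"]
    while True:
        # GET /api/2.1/jobs/runs/get returns the run's metadata.
        run = _call(base_url, token, "GET",
                    f"/api/2.1/jobs/runs/get?run_id={run_id}")
        if is_finished(run["state"]["life_cycle_state"]):
            return run
        time.sleep(poll_seconds)
```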

Schemas
  1. Query Schemas - Gets an array of schemas for a catalog in the metastore in Databricks.
  2. Create Schema - Creates a new schema for a catalog in the metastore in Databricks.
  3. Retrieve Schema - Gets the specified schema within the metastore in Databricks.
  4. Update Schema - Updates a schema for a catalog in Databricks.
  5. Delete Schema - Deletes the specified schema from the parent catalog in Databricks.

SQL Statements
  1. Execute Statement - Execute a SQL statement and optionally await its results in Databricks.
  2. Retrieve Statement - Poll for the status and results of a SQL statement execution in Databricks.
  3. Retrieve Statement Result Chunk - Fetch a paginated chunk of results from a completed SQL statement in Databricks.
  4. Cancel Statement Execution - Request that an executing SQL statement be canceled in Databricks.
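The four statement actions map onto the Databricks SQL Statement Execution API. A sketch of Execute Statement with a short synchronous wait (the warehouse ID is a placeholder and `statement_payload` is our own helper):

```python
import json
import urllib.request

def statement_payload(statement: str, warehouse_id: str, wait_timeout: str = "30s") -> dict:
    # Body for POST /api/2.0/sql/statements: run `statement` on the given
    # SQL warehouse and wait up to `wait_timeout` for an inline result.
    return {"statement": statement,
            "warehouse_id": warehouse_id,
            "wait_timeout": wait_timeout}

def execute_statement(base_url: str, token: str, statement: str, warehouse_id: str) -> dict:
    req = urllib.request.Request(
        f"{base_url.rstrip('/')}/api/2.0/sql/statements",
        data=json.dumps(statement_payload(statement, warehouse_id)).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        # The response carries a statement_id for later Retrieve Statement,
        # Retrieve Statement Result Chunk, and Cancel Statement Execution calls.
        return json.load(resp)
```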

Tables
  1. Query Tables - Gets an array of all tables for a catalog and schema in the metastore in Databricks.
  2. Retrieve Table - Gets a table from the metastore for a specific catalog and schema in Databricks.
  3. Check Table Exists - Checks if a table exists in the metastore for a specific catalog and schema in Databricks.
  4. Query Table Summaries - Gets an array of summaries for tables under a schema and catalog in Databricks.
  5. Delete Table - Deletes a table from the specified parent catalog and schema in Databricks.

Users
  1. List Users - Get details for all users in Databricks.
  2. Create User - Create a new user in Databricks.
  3. Get User - Get user details by ID in Databricks.
  4. Update User - Update an existing user in Databricks.
  5. Delete User - Delete a user in Databricks.

Volumes
  1. Query Volumes - Gets an array of volumes for a catalog and schema in the metastore in Databricks.
  2. Create Volume - Creates a new volume in Databricks.
  3. Retrieve Volume - Gets a volume from the metastore for a specific catalog and schema in Databricks.
  4. Update Volume - Updates the specified volume in Databricks.
  5. Delete Volume - Deletes a volume from the specified parent catalog and schema in Databricks.

SQL Warehouses
  1. Query Warehouses - Lists all SQL warehouses that a user has manager permissions on in Databricks.
  2. Create Warehouse - Creates a new SQL warehouse in Databricks.
  3. Retrieve Warehouse - Gets the information for a single SQL warehouse in Databricks.
  4. Update Warehouse - Updates the configuration for a SQL warehouse in Databricks.
  5. Start Warehouse - Starts a SQL warehouse in Databricks.
  6. Stop Warehouse - Stops a SQL warehouse in Databricks.
  7. Delete Warehouse - Deletes a SQL warehouse in Databricks.

Workspace
  1. Import Workspace - Import a workspace object (notebook or file) or the contents of an entire directory in Databricks.
  2. Export Workspace - Export a workspace object or the contents of an entire directory in Databricks.
  3. List Workspace - List the contents of a directory in Databricks.
  4. Retrieve Object Status - Gets the status of an object or directory in Databricks.
  5. Delete Workspace - Deletes an object or a directory in Databricks.
  6. Query Directories - Lists the contents of a directory, or the object if it is not a directory in Databricks.
  7. Create Directory - Creates the specified directory and any necessary parent directories in Databricks.

Other
  1. HTTP Request - Make HTTP API calls to any documented Databricks REST API.
  2. Incremental Sync - Check for new data in the endpoint.
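The HTTP Request action is the escape hatch for any documented Databricks REST endpoint not covered by the actions above. Conceptually it is just an authenticated call against the workspace Base URL; a minimal sketch (the path-joining helper is our own):

```python
import json
import urllib.request

def api_url(base_url: str, path: str) -> str:
    # Join the workspace Base URL with a documented REST API path.
    return f"{base_url.rstrip('/')}/{path.lstrip('/')}"

def http_request(base_url: str, token: str, method: str, path: str, body=None) -> dict:
    req = urllib.request.Request(
        api_url(base_url, path),
        data=json.dumps(body).encode() if body is not None else None,
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method=method,
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)

# e.g. http_request(base_url, token, "GET", "/api/2.0/clusters/list")
```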