Purple Drive
Role: Azure DevOps Engineer
Location: Louisville, KY
Responsibilities
- Design and implement CI/CD pipelines for Databricks notebooks and workflows using Azure DevOps.
- Manage source control integration between Databricks and Git repositories.
- Automate deployment of Databricks clusters, jobs, and libraries using Infrastructure as Code (IaC) tools such as Terraform or ARM templates.
- Monitor and optimize Databricks environments for performance and cost efficiency.
- Collaborate with data engineering and analytics teams to streamline development and deployment processes.
- Ensure compliance with security and governance standards in Azure and Databricks environments.

Required Skills
- Strong experience with Azure DevOps (Pipelines, Repos, Artifacts).
- Hands-on experience with Azure Databricks (clusters, jobs, notebooks).
- Proficiency in Python, SQL, and PySpark for data processing.
- Knowledge of Terraform or other IaC tools for Azure resource provisioning.
- Familiarity with Git workflows and branching strategies.
- Understanding of Azure services (Data Lake, Key Vault, Storage Accounts).
- Experience with CI/CD automation for data platforms.

Nice-to-Have
- Experience with MLflow for model tracking and deployment.
- Knowledge of Azure Monitor and Log Analytics for observability.
- Exposure to data governance tools such as Unity Catalog.