ZipRecruiter
Overview
Seeking a Databricks Architect to lead the design and delivery of large-scale data platforms built on Databricks.
Key Responsibilities
Architect and implement Databricks lakehouse solutions (Delta Lake, Spark, Unity Catalog, MLflow).
Design and optimize ETL/ELT pipelines, data ingestion, and real-time streaming frameworks.
Integrate Databricks with cloud services (AWS, Azure, GCP) and enterprise data systems.
Establish data governance, security, and compliance frameworks.
Lead migration efforts from legacy platforms to Databricks.
Collaborate with business stakeholders, data engineers, and data scientists to deliver end-to-end solutions.
Required Skills
Proven hands-on experience as a Databricks Architect / Senior Data Engineer.
Strong experience with data pipelines, including Cribl, Confluent, and other high-capacity data interchange platforms.
Strong expertise in Databricks ecosystem (Spark, Delta Lake, SQL Analytics, MLflow).
Cloud experience (Azure Databricks preferred; AWS or GCP acceptable).
Strong coding background in Python, PySpark, SQL, Scala.
Knowledge of data security, compliance, and governance standards.
U.S. work authorization required; must be based in the U.S.
Databricks certifications.
Experience with BI tools (Power BI, Tableau) and automation/DevOps (Terraform, GitHub Actions, Jenkins).
Prior consulting/enterprise implementation experience.
Remote work