Concero

Lead Databricks Architect

Concero, Dallas, Texas, United States, 75215


The Lead Databricks Architect will be responsible for designing, implementing, and managing enterprise data lake environments on Azure, using Databricks and other cloud technologies. The role focuses on building ingestion solutions, collaborating on DataOps processes, developing solutions, managing security access, and ensuring compliance with auditability and FinOps requirements.

Responsibilities

Partner with stakeholders to gather requirements, document processes, and translate business needs into actionable stories.

Align stakeholders to ensure shared understanding and commitment to project goals.

Track, groom, and manage backlog and change requests following DevOps standards.

Analyze current processes and propose improvements to increase efficiency and quality.

Prioritize work and plan sprints following the 3D standard and its principles.

Candidate Profile

6+ years of experience.

Top Skills Required

Azure Platform Core:

Azure Databricks, Data Factory, Synapse, ADLS Gen2, and Key Vault for unified data engineering and analytics.
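For illustration, a minimal sketch of how these services typically meet in a Databricks notebook: a Key Vault-backed secret scope supplies a storage credential, and the job reads directly from ADLS Gen2. The scope, secret, account, and container names below are hypothetical, and `dbutils`/`spark` are assumed ambient in the Databricks notebook environment.

```python
# Pull a storage account key from a Key Vault-backed secret scope
# (names are placeholders, not from this posting).
storage_key = dbutils.secrets.get(scope="kv-backed-scope", key="lake-account-key")

# Configure access to a hypothetical ADLS Gen2 account.
spark.conf.set(
    "fs.azure.account.key.examplelake.dfs.core.windows.net",
    storage_key,
)

# Read raw data straight from the lake.
df = spark.read.parquet("abfss://raw@examplelake.dfs.core.windows.net/sales/")
df.show(5)
```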

Infrastructure-as-Code (IaC):

Terraform and Bicep for automated, consistent environment provisioning and configuration.
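Terraform and Bicep are declarative languages in their own right; as a rough Python analogue of the provisioning step they automate, here is a hedged sketch using the azure-identity and azure-mgmt-resource SDKs. The subscription ID, resource group name, and tags are placeholders; real IaC would keep this definition in version-controlled HCL or Bicep rather than imperative code.

```python
# Rough Python analogue of a Terraform/Bicep resource-group definition.
# Subscription ID and names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")
client.resource_groups.create_or_update(
    "rg-datalake-dev",
    {"location": "eastus", "tags": {"env": "dev"}},
)
```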

Programming & Orchestration:

PySpark, SQL, and Python for pipeline development; Git-based version control for collaboration.
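A minimal PySpark pipeline sketch in that vein: read raw CSV, apply a light transform, and persist a Delta table. Paths, column names, and the three-level table name (a Unity Catalog convention) are hypothetical.

```python
# Minimal batch pipeline sketch: raw CSV -> cleaned Delta table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided ambiently on Databricks

orders = (
    spark.read.option("header", "true")
    .csv("abfss://raw@examplelake.dfs.core.windows.net/orders/")  # placeholder path
    .withColumn("order_date", F.to_date("order_date"))
    .filter(F.col("amount").cast("double") > 0)  # drop non-positive amounts
)

(orders.write.format("delta")
    .mode("overwrite")
    .saveAsTable("main.bronze.orders"))  # hypothetical catalog.schema.table
```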

DevOps Automation:

Azure DevOps or GitHub Actions for CI/CD pipelines and automated Databricks deployments.
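As a hedged sketch of one step such a pipeline might run, the script below imports a notebook into a workspace via the Databricks REST Workspace API. The host and token would be injected as pipeline secrets by Azure DevOps or GitHub Actions; the file and workspace paths are hypothetical.

```python
# Sketch of a CI deploy step: push a notebook to a Databricks workspace.
import base64
import os

import requests

host = os.environ["DATABRICKS_HOST"]    # e.g. https://adb-....azuredatabricks.net
token = os.environ["DATABRICKS_TOKEN"]  # injected as a pipeline secret

with open("notebooks/etl_orders.py", "rb") as f:  # hypothetical repo path
    content = base64.b64encode(f.read()).decode()

resp = requests.post(
    f"{host}/api/2.0/workspace/import",
    headers={"Authorization": f"Bearer {token}"},
    json={
        "path": "/Shared/etl_orders",
        "format": "SOURCE",
        "language": "PYTHON",
        "content": content,
        "overwrite": True,
    },
    timeout=30,
)
resp.raise_for_status()
```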

Governance & Security:

Unity Catalog, Collibra, Azure IAM/RBAC, and network isolation with Private Link and VNets.
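For a flavor of the Unity Catalog piece, a hedged sketch of table-level access control issued as standard SQL from a notebook. Catalog, schema, table, and group names are hypothetical; Collibra integration and Private Link/VNet isolation are configured outside any snippet like this.

```python
# Sketch of Unity Catalog grants via spark.sql (names are placeholders).
spark.sql("GRANT USE CATALOG ON CATALOG main TO `data-analysts`")
spark.sql("GRANT USE SCHEMA ON SCHEMA main.silver TO `data-analysts`")
spark.sql("GRANT SELECT ON TABLE main.silver.orders TO `data-analysts`")
```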

Data Streaming & Delivery:

Kafka or Event Hub for real-time ingestion; Power BI and Fabric for analytics consumption.
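A hedged sketch of the ingestion side: Spark Structured Streaming reading from a Kafka endpoint (Event Hubs also exposes a Kafka-compatible surface on port 9093) and landing events in a Delta table. Broker, topic, table, and checkpoint paths are hypothetical, and the SASL authentication options Event Hubs requires are omitted for brevity.

```python
# Streaming ingestion sketch: Kafka-compatible source -> Delta table.
# `spark` is assumed ambient on Databricks; auth options omitted.
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "example-ns.servicebus.windows.net:9093")
    .option("subscribe", "orders")          # hypothetical topic
    .option("startingOffsets", "latest")
    .load()
)

events = raw.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)", "timestamp")

(events.writeStream.format("delta")
    .option("checkpointLocation",
            "abfss://chk@examplelake.dfs.core.windows.net/orders/")  # placeholder
    .toTable("main.bronze.orders_stream"))
```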

AI/ML Enablement:

MLflow and Feature Store for model tracking, deployment, and reproducibility.
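To illustrate the MLflow half, a minimal tracking sketch: parameters, metrics, and the model artifact are logged so a run can be reproduced and deployed later. The model and data are toy stand-ins, not from the posting.

```python
# Minimal MLflow tracking sketch with a toy scikit-learn model.
import mlflow
import mlflow.sklearn
from sklearn.linear_model import LogisticRegression

X, y = [[0.0], [1.0], [2.0], [3.0]], [0, 0, 1, 1]  # toy data

with mlflow.start_run(run_name="orders-propensity-demo"):  # hypothetical name
    model = LogisticRegression(C=0.5).fit(X, y)
    mlflow.log_param("C", 0.5)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, "model")  # versioned, deployable artifact
```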

Preferred Skills / Nice‑to‑Haves

Strong background in data engineering and cloud data architecture.

Solid understanding of data modeling, ETL/ELT, and pipeline orchestration for both batch and streaming workloads.

Experience designing governed, secure, and scalable data environments—preferably in regulated industries (healthcare, finance, etc.).

Proficient with data governance concepts (lineage, metadata, access control, FinOps).

Able to automate infrastructure and workflows using modern DevOps practices.

Familiar with machine learning enablement, feature store patterns, and analytics data delivery.

Certifications / Degrees Required

Bachelor’s Degree Required

Systems or Tools Used

Epic, Workday, Strata, Qualtrics, Imaging systems, Research applications, RTLS, DAX, HL7, FHIR

Reporting / Documentation Systems

Develop and optimize ETL processes to ensure high-quality data is available for analysis and reporting.

Job Information

Seniority level: Mid‑Senior level

Employment type: Full‑time

Job function: Information Technology

Industries: IT Services and IT Consulting
