Elan Partners
Direct Hire Opportunity
No Sponsorship
Hybrid
Seeking a seasoned Data Engineer with hands-on Databricks experience and a proven track record of delivering multiple end-to-end implementations. The ideal candidate has 5+ years of experience, with at least two full projects delivered from inception to production, and can move comfortably between coding, pipeline engineering, and high-level design. This role is best suited for someone who thrives in a build-from-scratch environment and has experience both designing and executing solutions. Beyond strong technical execution, the ideal candidate has the ability to "show, not just tell": writing Spark notebooks, transforming raw data into Medallion layers, and leading others through best practices in modern data engineering.
Required Qualifications
- 5+ years of professional experience in data engineering
- Demonstrated experience completing 2+ end-to-end Databricks/Azure implementations
- Strong proficiency in Databricks, Apache Spark, SQL, and Python
- Hands-on expertise with data pipeline development: ingestion, transformation, and orchestration
- Experience building and optimizing Delta Lake tables and implementing Medallion Architecture (Bronze/Silver/Gold layers)
- Proven ability to write and execute Spark notebooks and work directly with DataFrames
- Ability to design and implement real-time or batch data pipelines, including data modeling and transformations
- Strong understanding of ETL/ELT patterns, schema design, and data quality practices

Preferred Qualifications
- Experience mentoring small teams or consultants
- Exposure to Unity Catalog or other data governance solutions
- Familiarity with Kafka/event streaming
- Experience with CI/CD for data pipelines
- Background in large-scale data environments
- Exposure to AI/ML workloads and collaboration with data scientists

Key Responsibilities
- Design, build, and maintain data pipelines within Databricks (Azure, Spark, SQL, Python)
- Implement ingestion from diverse sources (flat files, relational DBs, APIs, JSON, XML)
- Engineer data flows using the Medallion Architecture (Bronze/Silver/Gold)
- Develop, organize, and test pipelines for both real-time and batch data processing
- Optimize and manage Delta Lake tables for scalability and performance
- Collaborate with architects on system-level architecture and design while owning pipeline and data flow design
- Implement data quality checks, monitoring, and security controls
- Partner with data scientists and analysts to enable AI/ML and BI initiatives
- Contribute to GitHub source control and CI/CD processes