Interactive Resources - iR
About the Role
We are seeking a highly skilled Databricks Data Engineer with strong expertise in modern data engineering, Azure cloud technologies, and Lakehouse architectures. This role is ideal for someone who thrives in dynamic environments, enjoys solving complex data challenges, and can lead end-to-end delivery of scalable data solutions.
What We're Looking For
- 8+ years designing and delivering scalable data pipelines in modern data platforms
- Deep experience in data engineering, data warehousing, and enterprise-grade solution delivery
- Ability to lead cross-functional initiatives in matrixed teams
- Advanced skills in SQL, Python, and ETL/ELT development, including performance tuning
- Hands-on experience with Azure, Snowflake, and Databricks, including system integrations
Key Responsibilities
- Design, build, and optimize large-scale data pipelines on the Databricks Lakehouse platform
- Modernize and enhance cloud-based data ecosystems on Azure, contributing to architecture, modeling, security, and CI/CD
- Use Apache Airflow and similar tools for workflow automation and orchestration
- Work with financial or regulated datasets while ensuring strong compliance and governance
- Drive best practices in data quality, lineage, cataloging, and metadata management
Primary Technical Skills
- Develop and optimize ETL/ELT pipelines using Python, PySpark, Spark SQL, and Databricks Notebooks (see the sketch after this list)
- Design efficient Delta Lake models for reliability and performance
- Implement and manage Unity Catalog for governance, RBAC, lineage, and secure data sharing
- Build reusable frameworks using Databricks Workflows, Repos, and Delta Live Tables
- Create scalable ingestion pipelines for APIs, databases, files, streaming sources, and MDM systems
- Automate ingestion and workflows using Python and REST APIs
- Support downstream analytics for BI, data science, and application workloads
- Write optimized SQL/T-SQL queries, stored procedures, and curated datasets
- Automate DevOps workflows, testing pipelines, and workspace configurations
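To make the day-to-day work concrete, here is a minimal PySpark/Delta Lake sketch of the kind of pipeline described above. It is illustrative only: the landing path, column names, and target table (analytics.orders) are hypothetical, and it assumes a Databricks cluster where the Delta format is available.

    # Minimal PySpark ETL sketch: read raw files, apply light transforms,
    # and append to a Delta table. Names and paths are illustrative only.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders-etl").getOrCreate()

    # Ingest raw JSON landed by an upstream process (hypothetical path).
    raw = spark.read.json("/mnt/landing/orders/")

    # Light cleanup: type the timestamp and drop obvious duplicates.
    clean = (
        raw.withColumn("order_ts", F.to_timestamp("order_ts"))
           .dropDuplicates(["order_id"])
    )

    # Write as a Delta table partitioned by date for downstream BI workloads.
    (clean.withColumn("order_date", F.to_date("order_ts"))
          .write.format("delta")
          .mode("append")
          .partitionBy("order_date")
          .saveAsTable("analytics.orders"))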
Additional Skills
- Azure: Data Factory, Data Lake, Key Vault, Logic Apps, Functions
- CI/CD: Azure DevOps
- Orchestration: Apache Airflow (plus; see the sketch after this list)
- Streaming: Delta Live Tables
- MDM: Profisee (nice-to-have)
- Databases: SQL Server, Cosmos DB
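For orchestration, a minimal Airflow sketch of triggering an existing Databricks job on a daily schedule. It assumes Airflow 2.4+ with the Databricks provider package installed; the DAG id, connection id, and job_id are hypothetical.

    # Minimal Airflow DAG sketch: trigger an existing Databricks job daily.
    from datetime import datetime

    from airflow import DAG
    from airflow.providers.databricks.operators.databricks import (
        DatabricksRunNowOperator,
    )

    with DAG(
        dag_id="daily_orders_refresh",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        run_job = DatabricksRunNowOperator(
            task_id="run_orders_etl",
            databricks_conn_id="databricks_default",  # connection configured in Airflow
            job_id=12345,                             # hypothetical Databricks job ID
        )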
Soft Skills
- Strong analytical and problem-solving mindset
- Excellent communication and cross-team collaboration
- Detail-oriented with a high sense of ownership and accountability