Job Summary
We are seeking a Principal Software Engineer with extensive experience in the Databricks platform and advanced big data engineering practices. This role involves architecting and implementing scalable, secure, and high‑performing data solutions to power the Ad Platforms Data platform and support mission‑critical data initiatives, including real‑time and batch processing pipelines. As a hands‑on technical leader, you will design data architectures that scale to handle terabytes‑per‑hour throughput, ensuring optimal performance, compliance, and alignment with business needs. You will play a pivotal role in guiding teams, collaborating across departments, and driving innovation.
Technical Leadership and Architecture
Architect and implement highly scalable, secure, and efficient ETL/ELT pipelines for both real‑time and batch data processing.
Design complex, multi‑source data ecosystems and ensure seamless integration with tools like Databricks, Delta Lake, and Snowflake.
Optimize data pipelines and workflows for performance, cost, and reliability, leveraging cutting‑edge tools like Airflow, Spark, Flink, and Databricks Delta Sharing.
Lead the adoption of best practices in data warehousing, lakehouse architecture, and metadata management.
Champion data security, privacy, and governance, implementing frameworks for lineage tracking, cataloging, and compliance (e.g., SOX, GDPR, CCPA).
Drive implementation of Unity Catalog for data governance and cross‑account querying.
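To make the responsibilities above concrete, here is a minimal, illustrative PySpark sketch of a batch ETL step that lands ad-event data in a Unity Catalog-governed Delta table. The bucket path, catalog, schema, and table names are hypothetical placeholders, not details from this posting.

```python
# Minimal sketch: batch ETL into a Unity Catalog-governed Delta table.
# All paths and table names below are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("ad-events-batch-etl").getOrCreate()

# Read raw ad-event data landed in S3 (illustrative path).
raw = spark.read.json("s3://example-bucket/ad-events/raw/")

# Basic transformation: normalize timestamps and drop malformed rows.
cleaned = (
    raw.withColumn("event_ts", F.to_timestamp("event_ts"))
       .dropna(subset=["event_id", "event_ts"])
)

# Write to a Delta table registered in Unity Catalog using the
# three-level namespace (catalog.schema.table).
(cleaned.write
    .format("delta")
    .mode("append")
    .saveAsTable("ads_catalog.bronze.ad_events"))
```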
Platform Development and Operational Excellence
Assist in setting up and managing AWS environments, including S3, IAM roles, network peering, and other infrastructure components.
Oversee Airflow integration with Databricks Spark clusters, manage user access, and implement robust secret management solutions (see the orchestration sketch after this list).
Lead the CI/CD process with gated code releases, ensuring code quality and scalable deployment processes.
Implement solutions ranging from the exciting to the mundane, such as Databricks UniForm (Iceberg, Delta) format integration with Snowflake and lightweight tooling to support cost projection.
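For the Airflow-with-Databricks item above, a minimal orchestration sketch is shown below, assuming the Databricks provider package for Airflow. The connection id, cluster spec, and notebook path are hypothetical placeholders; credentials would live in the Airflow connection or a secret backend rather than in the DAG.

```python
# Minimal sketch: triggering a Databricks Spark job from an Airflow DAG.
# Connection id, cluster spec, and notebook path are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.databricks.operators.databricks import DatabricksSubmitRunOperator

with DAG(
    dag_id="ad_events_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_etl = DatabricksSubmitRunOperator(
        task_id="run_ad_events_etl",
        databricks_conn_id="databricks_default",  # credentials resolved via Airflow's secrets
        new_cluster={
            "spark_version": "13.3.x-scala2.12",
            "node_type_id": "i3.xlarge",
            "num_workers": 2,
        },
        notebook_task={"notebook_path": "/Repos/ad-platform/etl/ad_events_daily"},
    )
```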
Data Engineering and Optimization
Optimize Spark Streaming and ETL workflows, leveraging deep expertise in Spark performance tuning (see the streaming sketch after this list).
Ensure best practices in data ingestion, transformation, and loading, focusing on highly performant and governed data models.
Drive improvements in Delta Lake ecosystems and lakehouse architecture, including Delta Sharing for distributed querying.
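For the streaming item above, here is a minimal incremental-ingestion sketch into Delta Lake. The source path, checkpoint location, and table name are hypothetical, and the cloudFiles (Auto Loader) source is Databricks-specific; on open-source Spark this would be a plain file stream.

```python
# Minimal sketch: Spark Structured Streaming ingestion into Delta Lake.
# Source path, checkpoint location, and table name are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("ad-events-streaming").getOrCreate()

# Incremental read of newly arriving files via the Databricks Auto Loader source.
events = (
    spark.readStream
         .format("cloudFiles")
         .option("cloudFiles.format", "json")
         .load("s3://example-bucket/ad-events/stream/")
)

# Light transformation before landing in the bronze layer.
bronze = events.withColumn("ingest_ts", F.current_timestamp())

# Stream into a Delta table; the checkpoint gives exactly-once sink semantics.
(bronze.writeStream
    .format("delta")
    .option("checkpointLocation", "s3://example-bucket/checkpoints/ad_events_bronze/")
    .trigger(availableNow=True)
    .toTable("ads_catalog.bronze.ad_events_stream"))
```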
Collaboration and Communication
Collaborate closely with cross‑functional teams, including Ad Server architects, to align on data strategies and designs.
Foster open communication during architectural discussions, confidently defending decisions while welcoming feedback.
Bridge the gap between diverse team requirements and technical capabilities to deliver cohesive, scalable solutions.
Innovation and Mentorship
Stay ahead of emerging trends in big data technologies, cloud platforms (AWS, Azure, GCP), and distributed computing.
Mentor engineers and architects, fostering a culture of innovation, knowledge sharing, and technical excellence.
Basic Qualifications
Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a related technical discipline is highly desirable.
10+ years of experience in software engineering with a focus on big data platforms.
Proven expertise in building scalable systems capable of processing terabytes of data per hour.
Hands‑on experience with Databricks, Spark, Flink, Airflow, and Delta Lake ecosystems.
Proficiency in Python (including PySpark), Java, or Scala.
Extensive knowledge of data warehousing, data lakes, and real‑time streaming systems.
Hands‑on experience with AWS cloud services and data solutions (e.g., S3, Redshift, Athena, IAM).
Deep understanding of data compliance (e.g., SOX, GDPR) and governance.
Strong skills in data lineage, cataloging, and metadata management.
Preferred Qualifications
Experience in the media advertising domain, including linear and digital Ad Sales.
Expertise with tools like ERStudio, ERWin, Alation, or Collibra for metadata management.
Familiarity with Iceberg format integration and other modern data formats.
Certifications in Databricks, AWS, or relevant big data technologies.
Ability to manage/lead a team of high‑performing data engineers and architects.
Strong curiosity about how Disney delivers the Magic and a desire to be a part of it.
Compensation
Hiring range in Los Angeles, CA: $184,300 – $247,100 per year; in San Francisco, CA: $201,900 – $270,700 per year; in Seattle, WA: $193,100 – $258,900 per year. Base pay depends on internal equity, candidate knowledge, skills, experience, and geography. Bonus and long‑term incentives may be offered, along with standard medical, financial, and other benefits appropriate to the level and position.
Apply
If you are interested, please press the Apply button and follow the application process.