Swinerton
Senior Data Engineer – Swinerton
Join Swinerton as a Senior Data Engineer and lead cutting‑edge data engineering initiatives across our cloud‑based lakehouse platforms on Azure.
Compensation:
$160,000.00 – $180,000.00 Annual Salary
Responsibilities
Architect, optimize, and evolve data storage solutions and data models, including medallion architecture (bronze, silver, gold layers), for scalable and cost‑efficient lakehouse platforms on Azure and Databricks.
Integrate data from diverse internal and external sources, ensuring interoperability and consistency across platforms.
Apply data governance, security, privacy, and compliance standards within engineering solutions, following organizational and regulatory guidelines.
Design, build, and maintain scalable ETL/ELT pipelines for ingesting, transforming, and delivering data from diverse sources using Azure Data Factory, Databricks, and related tools.
Implement workflow orchestration and transformation tooling (e.g., Airflow for scheduling, dbt for transformation management) to manage complex data workflows.
Ensure data quality, consistency, and reliability through robust validation, monitoring, and error‑handling processes.
Implement privacy and security best practices in data pipelines (e.g., data masking, encryption, role‑based access control).
Engage with data scientists, analysts, and business stakeholders to understand data requirements and deliver solutions that drive business value.
Communicate complex technical concepts through clear, actionable insights and documentation tailored to both technical and non‑technical audiences.
Foster alignment and adoption of data engineering solutions by building strong relationships and ensuring business relevance.
Mentor and support junior engineers, promoting a culture of learning and excellence.
Optimize data engineering capabilities by leveraging existing and emerging tools and technologies, focusing on performance, cost efficiency, scalability, data workflow management, and reliable deployment.
Build and manage real‑time/streaming data pipelines using Azure Event Hubs, Kafka, or Spark Structured Streaming.
Collaborate with data scientists to enable seamless integration of data pipelines with analytics and machine learning workflows, including ML model deployment and monitoring.
Complete other responsibilities as assigned.
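The medallion pattern named above (bronze: raw ingest, silver: validated and typed, gold: business aggregates) can be sketched in a few lines. This is a minimal, dependency-free illustration, not Swinerton's implementation; in the actual role these layers would be Delta tables on Azure Databricks, and the record fields shown here are hypothetical.

```python
# Hypothetical raw rows as they might land in a bronze layer (schema-on-read).
bronze = [
    {"project": "A", "cost": "1200.50", "date": "2025-01-03"},
    {"project": "A", "cost": "bad",     "date": "2025-01-04"},  # malformed cost
    {"project": "B", "cost": "800.00",  "date": "2025-01-03"},
]

def to_silver(rows):
    """Validate and type-cast bronze rows, dropping records that fail checks.

    In production you would typically route failures to a quarantine table
    rather than silently discarding them.
    """
    out = []
    for r in rows:
        try:
            out.append({"project": r["project"],
                        "cost": float(r["cost"]),
                        "date": r["date"]})
        except (KeyError, ValueError):
            continue  # failed validation: excluded from the silver layer
    return out

def to_gold(rows):
    """Aggregate silver rows into business-level cost totals per project."""
    totals = {}
    for r in rows:
        totals[r["project"]] = totals.get(r["project"], 0.0) + r["cost"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'A': 1200.5, 'B': 800.0}
```

The point of the layering is that each stage has one job: ingestion never blocks on bad data, validation rules live in one place, and downstream consumers read only the curated gold tables.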
Qualifications
Bachelor’s degree in Computer Science, Engineering, or a related quantitative field required; Master’s degree preferred.
5+ years in data engineering or related roles, with a track record of delivering production‑ready data solutions.
Deep expertise in Azure data services (e.g., Azure Databricks, Azure Data Lake).
Advanced experience with Databricks, including Spark, Delta Lake, and medallion architecture.
Experience with workflow orchestration and transformation tools (e.g., Airflow, dbt).
Proficiency in SQL and Python (including pandas, PySpark, or similar frameworks).
Experience with real‑time/streaming data solutions (e.g., Azure Event Hubs, Kafka, Spark Structured Streaming).
Experience with CI/CD for data pipelines and infrastructure as code (Azure DevOps, GitHub Actions, Terraform).
Experience with containerization (Docker, Kubernetes) and/or serverless compute (Azure Functions) is a plus.
Experience with GenAI/LLM integration (e.g., vector databases, RAG pipelines) is a plus.
Excellent problem‑solving, critical thinking, and communication skills.
Construction industry experience is preferred but not required.
Ability to work independently in a remote or hybrid environment with minimal supervision.
Benefits
This role is eligible for medical, dental, vision, 401(k) with company matching, Employee Stock Ownership Program, paid vacation, paid sick leave, paid holidays, bereavement leave, employee assistance program, pre‑tax flexible spending accounts, basic term life insurance, disability insurance, financial wellness coaching, educational assistance, Care.com membership, ClassPass fitness membership, DashPass delivery membership, plus voluntary benefits such as additional term life insurance, long‑term care insurance, critical illness, accidental injury, pet insurance, legal plan, identity theft protection, and others.
Anticipated Job Application Deadline:
10/27/2025