Mondo Staffing
Apply now: Senior Data Engineer, hybrid in Burbank, CA. The start date is September 30, 2025, for this 4-month contract position with potential extension.
Job Title: Senior Data Engineer
Location-Type: Hybrid (3 days onsite - Burbank, CA)
Start Date: September 30, 2025 (or 2 weeks from offer)
Duration: 4 months (Contract, potential extension)
Compensation Range: $80.00 - $85.00/hr W2

Job Description:
We are seeking a Senior Data Engineer to join a lean-agile product delivery team focused on building scalable, governed, and AI/ML-ready data solutions. As part of a cross-functional pod, you will design and implement high-performance data pipelines, support analytics and machine learning workflows, and embed governance into all aspects of data delivery. This role requires strong AWS expertise, hands-on engineering, and the ability to collaborate across engineering, product, and architecture teams.

Day-to-Day Responsibilities:
- Design & Build Scalable Data Pipelines: Develop batch and streaming pipelines using AWS-native tools (Glue, Lambda, Step Functions, Kinesis) and orchestration frameworks like Airflow.
- Optimize & Monitor: Ensure pipelines are resilient, cost-efficient, and scalable.
- Enable Analytics & AI/ML: Deliver structured, high-quality data to BI tools and ML workflows; partner with data scientists to support feature engineering and model deployment.
- Ensure Governance & Quality: Embed validation, lineage, tagging, and metadata standards into pipelines; contribute to the enterprise data catalog.
- Collaborate & Mentor: Participate in Agile ceremonies, architecture syncs, and backlog refinement. Mentor junior engineers and advocate for reusable services across pods.
Requirements:

Must-Haves:
- 7 years of experience in data engineering, with hands-on expertise in AWS services (Glue, Kinesis, Lambda, RDS, DynamoDB, S3).
- Proficiency with SQL, Python, and PySpark for data transformations.
- Experience with orchestration tools such as Airflow or Step Functions.
- Proven ability to optimize pipelines for both batch and streaming use cases.
- Understanding of data governance practices, including lineage, validation, and cataloging.
- Experience with modern data platforms such as Snowflake, Databricks, Redshift, or Informatica.
Nice-to-Haves:
- Experience influencing platform-first approaches across pods.
- Strong collaboration and mentoring skills.
- Knowledge of advanced governance practices and large-scale data platform operations.
Soft Skills:
- Excellent communication skills for cross-functional collaboration.
- Ability to mentor and guide junior engineers.
- Proactive problem solver with strong organizational and teamwork skills.