Sr. Data Engineer
Mondo Staffing - Hughson, California, United States, 95326
Overview
Job Title:
Sr. Data Engineer, Data Solutions\n \n
Location-Type:
Onsite, Houston, TX (4 days/week, Mon-Thurs; remote Fridays)
Start Date:
2 weeks from offer
Duration:
6-month contract (potential to extend)
Compensation Range:
$50 - $70/hr (W2)
Role Overview:
The client is seeking a Sr. Data Engineer to build and optimize data pipelines, ETL processes, and cloud-based data architectures. The role focuses on Python development, cloud data integration, and support for advanced analytics and BI initiatives. You'll collaborate with cross-functional teams to enhance data flows and drive data product innovation.
Day-to-Day Responsibilities:
- Develop Python modules using NumPy, Pandas, and dynamic programming techniques
- Design, build, and optimize ELT/ETL pipelines using AWS and Snowflake
- Manage data orchestration, transformation, and job automation
- Troubleshoot and optimize complex SQL queries
- Collaborate with Data Architects, Analysts, BI Engineers, and Product Owners to define data transformation needs
- Use cloud integration tools (Matillion, Informatica Cloud, AWS Glue, etc.)
- Develop data validation processes to ensure data quality
- Support continuous improvement and issue resolution of data processes
Must-Haves:
- Bachelor's degree in Computer Science, Engineering, or a related field
- 8 years of experience in data engineering, ETL, data warehouse, and data lake development
- Strong SQL expertise and experience with relational databases
- Experience with Snowflake, Amazon Redshift, or Google BigQuery
- Proficiency in AWS services: EC2, S3, Lambda, SQS, SNS, etc.
- Experience with cloud integration tools (Matillion, Dell Boomi, Informatica Cloud, Talend, AWS Glue)
- Knowledge of GitHub version control and its integration with ETL pipelines
- Familiarity with Spark, Hadoop, NoSQL, APIs, and streaming data platforms
- Scripting experience in Python (preferred), Java, or Scala
- Agile/Scrum development experience
Soft Skills:
- Excellent communication and collaboration skills
- Strong problem-solving abilities and attention to detail
- Ability to multitask and meet tight deadlines
- Highly motivated and self-directed
- Comfortable working with technical and business stakeholders
Nice-to-Haves:
- Experience with data quality and metadata management tools
- Exposure to BI platforms and data visualization tools
- Experience in building event-driven data architectures