Jobgether
This position is posted by Jobgether on behalf of a partner company. We are currently looking for a Data Engineer in California (USA).
As a Data Engineer, you will play a key role in building and optimizing data pipelines, enabling the seamless flow of information across modern cloud platforms and analytics systems. This role involves working with large-scale, complex datasets to design ETL processes, integrate data sources, and ensure the reliability and scalability of the underlying infrastructure. You will collaborate with data scientists, analysts, and business teams to create solutions that unlock insights and support decision‑making. This is an excellent opportunity to apply your technical expertise while contributing to data‑driven innovation in a fast‑paced environment.
Accountabilities
Design, build, and maintain robust ETL pipelines to support large‑scale data processing
Work with cloud data platforms (AWS, Azure, GCP) to architect scalable data solutions
Develop and optimize data models, ensuring high performance and accessibility
Integrate structured and unstructured data from multiple sources into analytics‑ready systems
Implement data quality checks, monitoring, and performance tuning for reliability
Collaborate with cross‑functional teams to deliver solutions aligned with business needs
Support real‑time data streaming and automation initiatives using modern tools and frameworks
Requirements
Bachelor’s degree in Computer Science, Engineering, Information Systems, or a related field
Strong experience with Python or Java for data engineering tasks
Hands‑on expertise with frameworks such as Apache Spark, Airflow, and Databricks
Proficiency in ETL development and cloud‑based data platforms (AWS S3, Redshift, Snowflake, or BigQuery)
Solid knowledge of relational and non‑relational databases, data modeling, and query optimization
Familiarity with infrastructure‑as‑code and DevOps practices for data systems
Experience collaborating with data scientists and analysts to support machine learning and analytics use cases
Strong problem‑solving skills and ability to work in fast‑paced, agile environments
Certifications in AWS, Microsoft, or Google Cloud are a plus
Benefits
Competitive salary package tailored to experience and skills
Full healthcare coverage and wellness benefits
Flexible remote work options with opportunities for collaboration and mentorship
Access to ongoing certification support and professional development
Career growth opportunities in cutting‑edge data engineering and cloud technologies
Thank you for your interest!