SAN R&D Business Solutions
Job Title: Data Engineer
Location: California
Employment Type: C2C, W2
Visa Eligibility: H1B, GC, USC, and H4 candidates may apply
About the Job
We are seeking a skilled and proactive Data Engineer to join our team and contribute to the design, development, and maintenance of scalable data pipelines. In this role, you will play a key part in transforming raw data into actionable insights, supporting analytics, business intelligence, and AI/ML initiatives across the organization. You will collaborate with cross‑functional teams to ensure data reliability, efficiency, and quality while driving process improvements and automation.
Responsibilities
Build, maintain, and optimize data pipelines and ETL/ELT workflows across cloud and on‑premise environments.
Transform data from multiple structured and unstructured sources into usable formats for analytics and reporting.
Develop, maintain, and optimize relational and non‑relational data models, schemas, and storage solutions.
Ensure high‑quality, reliable, and secure data pipelines, implementing data validation, profiling, and monitoring.
Collaborate with data scientists, analysts, and business stakeholders to understand requirements and deliver solutions that meet business needs.
Design and implement automation and CI/CD processes for data workflows, integrating orchestration tools such as Airflow, Jenkins, or similar platforms.
Identify opportunities to optimize performance, reduce latency, and improve operational efficiency of data pipelines.
Document and test data solutions, ensuring accuracy, compliance, and maintainability.
Stay current with emerging technologies, cloud platforms, and data engineering best practices.
Qualifications
8+ years of experience in data engineering, data integration, or similar roles.
Strong experience with cloud platforms such as AWS, Azure, or GCP, including cloud‑native data services.
Hands‑on expertise in Python, SQL, Spark, Hadoop, Kafka, and ETL/ELT tools.
Experience with both relational and NoSQL databases (MySQL, PostgreSQL, MongoDB, Cassandra, etc.).
Proficient in building scalable, high‑performance data pipelines and implementing data governance and security practices.
Familiarity with CI/CD, automation, and containerization (Docker, Kubernetes) in data workflows.
Strong collaboration and communication skills, capable of explaining technical concepts to both technical and non‑technical stakeholders.
Bachelor’s or Master’s degree in Computer Science, Data Engineering, Information Systems, or a related field.