The Judge Group
Sr Data Platform Engineer (Elk Grove)
The Judge Group, Elk Grove, California, United States, 95759
Hybrid role 3X a week in office in Elk Grove, CA; no remote capabilities
This is a direct hire opportunity.
Summary: We're seeking a seasoned Senior Data Platform Engineer to design, build, and optimize scalable data solutions that power analytics, reporting, and AI/ML initiatives. This full-time role is hands-on, working with architects, analysts, and business stakeholders to ensure data systems are reliable, secure, and high-performing.
Responsibilities:
- Build and maintain robust data pipelines (structured, semi-structured, unstructured).
- Implement ETL workflows with Spark, Delta Lake, and cloud-native tools.
- Support big data platforms (Databricks, Snowflake, GCP) in production.
- Troubleshoot and optimize SQL queries, Spark jobs, and workloads.
- Ensure governance, security, and compliance across data systems.
- Integrate workflows into CI/CD pipelines with Git, Jenkins, Terraform.
- Collaborate cross-functionally to translate business needs into technical solutions.
Qualifications:
- 7+ years in data engineering with production pipeline experience.
- Expertise in the Spark ecosystem, Databricks, Snowflake, GCP.
- Strong skills in PySpark, Python, SQL.
- Experience with RAG systems, semantic search, and LLM integration.
- Familiarity with Kafka, Pub/Sub, vector databases.
- Proven ability to optimize ETL jobs and troubleshoot production issues.
- Agile team experience and excellent communication skills.
- Certifications in Databricks, Snowflake, GCP, or Azure.
- Exposure to Airflow, BI tools (Power BI, Looker Studio).