Epsilon Data Management, LLC
Overview
Collaborate with decision scientists to enable and optimize AI/ML workflows through data engineering and platform support.
What you’ll do:
Provide support for Spark, Hive, and Hadoop jobs, including troubleshooting, performance analysis, and optimization.
Participate in agile sprint cycles, reviewing designs and ensuring successful delivery.
Contribute to best practices for application development.
Gather requirements for platform and application enhancements and implement them.
Continuously learn and expand your technical skills in data engineering and cloud technologies.
Support the migration of existing data or applications to cloud platforms (AWS, Azure, Databricks, or GCP).
Learn and contribute to solutions on the Databricks platform, enabling data science and analytics at scale.
About you:
3+ years of software development experience in a scalable, distributed, or multi-node environment.
Proficient in programming with Scala, Python, or Java; comfortable building data-driven solutions at scale.
Familiarity with Apache Spark and exposure to Hadoop, Hive, or related big data technologies.
Experience with cloud platforms (AWS, Azure, Databricks, or GCP) and an interest in cloud migration projects.
Excited to learn and work with the Databricks platform.
Exposure to modern data tools and frameworks such as Kubernetes, Docker, and Airflow (a plus).
Strong problem‑solving skills with the ability to own problems end‑to‑end and deliver results.
Consultative attitude—comfortable being “first in,” building relationships, communicating broadly, and tackling challenges head‑on.
Collaborative teammate eager to learn from peers and mentors while contributing to a culture of growth.
Motivated to grow your career within a dynamic, innovative company.
What you’ll bring:
BA/BS in Computer Science or related discipline.
At least 3 years of relevant professional experience.
Hadoop certification is a plus.
Spark certification is a plus.
Base Salary: $62,250 - $103,750
Actual compensation within the range will be dependent upon, but not limited to, the individual’s skills, experience, qualifications, location, and applicable employment laws. The salary pay range is subject to change and may be modified at any time.
Epsilon is an Equal Opportunity Employer.
Epsilon’s policy does not discriminate against any applicant or employee based on protected characteristics. Epsilon will provide accommodations to applicants who need them to complete the application process.