Epsilon

Research Engineer - ML / Systems

Epsilon, San Francisco, California, United States, 94199


About Us

We're tackling one of healthcare's most critical challenges in medical imaging and diagnostics. Our company operates at the intersection of cutting‑edge AI and clinical practice, building technology that directly impacts patient outcomes. We've assembled one of the industry's most comprehensive and diverse medical imaging datasets and have a proven product‑market fit with a substantial customer pipeline already in place. Role Overview

We're seeking a

Research Engineer

to bridge the gap between research and production, building

ML infrastructure

and

data systems

for medical imaging at scale. You'll own critical data pipelines that unify live production traffic with offline datasets, design storage solutions for multimodal medical data, and build training + inference infrastructure that enables our research team to iterate rapidly. This role requires someone who can move fluidly between

model training, data engineering, ML systems,

and

production deployment . Key Responsibilities

- Build and optimize distributed ML infrastructure for training foundation models on large-scale medical imaging datasets.
- Design and implement robust data pipelines to collect, process, and store large-scale multimodal medical imaging data from both production traffic and offline sources.
- Build centralized data storage solutions with standardized formats (e.g., protobufs) that enable efficient retrieval and training across the organization (see the sketch after this list).
- Create model inference pipelines and evaluation frameworks that work seamlessly across research experimentation and production deployment.
- Collaborate with researchers to rapidly prototype new ideas and translate them into production-ready code.
- Own end-to-end delivery of ML systems from experimentation through deployment and monitoring.
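
To give a concrete, purely illustrative sense of what a standardized record might look like, here is a minimal Python sketch. It uses a dataclass as a stand-in for what would more likely be a versioned protobuf message in production, and every field name in it is hypothetical rather than a description of our actual schema.

    from dataclasses import dataclass, field, asdict
    from typing import Optional
    import json

    # Hypothetical unified record for one imaging study; in production this
    # would more likely be a protobuf message with a versioned schema.
    @dataclass
    class ImagingRecord:
        study_id: str                        # de-identified study identifier
        modality: str                        # e.g. "CT", "MR", "CR"
        source: str                          # "production" or "offline"
        pixel_uri: str                       # pointer into blob storage, not raw pixels
        report_text: Optional[str] = None    # paired free-text report, if available
        labels: dict = field(default_factory=dict)  # task-specific annotations

    def to_jsonl_line(record: ImagingRecord) -> str:
        """Serialize one record for a centralized, append-only store."""
        return json.dumps(asdict(record), sort_keys=True)

    # Records from live traffic and from offline sources are normalized into
    # the same shape before any training or evaluation job sees them.
    example = ImagingRecord(
        study_id="study-000",
        modality="CT",
        source="offline",
        pixel_uri="s3://example-bucket/studies/study-000.npz",
    )
    print(to_jsonl_line(example))

The point of the sketch is the single shared shape, not the specific fields: once production traffic and offline datasets are normalized into one schema, retrieval and training code can stay agnostic about where a record came from.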

- 3+ years building ML infrastructure, data pipelines, or ML systems in production.
- Strong Python skills and expertise in PyTorch or JAX.
- Hands-on experience with data pipeline technologies (e.g., Spark, Airflow, BigQuery, Snowflake, Databricks, Chalk) and schema design.
- Experience with distributed systems, cloud infrastructure (AWS/GCP), and containerization (Docker/Kubernetes).
- Track record of building scalable data systems and shipping production ML infrastructure.
- Ability to move quickly and handle competing priorities in a fast-paced environment.

Preferred Qualifications

- Experience with medical imaging formats (DICOM) and healthcare data standards (see the sketch after this list).
- Background in distributed training frameworks (PyTorch Lightning, DeepSpeed, Accelerate).
- Familiarity with MLOps practices and model deployment pipelines.
- Experience with privacy-preserving data systems and HIPAA compliance.
- Contributions to open-source ML or data infrastructure projects.
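
For candidates who haven't worked with medical imaging before, the short Python sketch below shows roughly what reading a DICOM file looks like. It assumes the open-source pydicom library and a placeholder file path; it is illustrative only and not drawn from our codebase.

    import pydicom

    # Read a single DICOM file (the path is a placeholder for illustration).
    ds = pydicom.dcmread("example.dcm")

    # Standard DICOM tags carry acquisition and clinical metadata; in practice
    # these would be de-identified before leaving the clinical environment.
    # Direct attribute access assumes the tags are present in the file.
    print("Modality:", ds.Modality)
    print("Study UID:", ds.StudyInstanceUID)

    # pixel_array decodes the image data into a NumPy array suitable for
    # preprocessing and model input.
    pixels = ds.pixel_array
    print("Image shape:", pixels.shape, "dtype:", pixels.dtype)

Healthcare data standards go well beyond file parsing (de-identification, consent handling, and HIPAA-compliant storage and transfer), which is part of why this experience is preferred rather than required.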
