Data Platform Engineer
Radix Trading, Chicago, Illinois, United States, 60290
About the Role
We're looking for an experienced Data Platform Engineer to join our DevOps/Automation team and help reshape the way we design, operate, and scale our infrastructure. This role is ideal for someone who has a deep technical background in Kubernetes, Terraform, and modern data pipeline orchestration, and who is eager to take ownership of building a reliable, standardized platform to support mission-critical processes.
You'll work closely with engineers across research, high-performance computing (HPC), and trading operations to streamline our data pipelines, improve observability and alerting, and introduce best-in-class tooling to replace fragmented, homegrown solutions. Over time, you'll have the opportunity to lead and grow the team, playing a central role in redefining our automation strategy.
Why Join Us
This is a unique opportunity to shape the foundation of our data platform at a firm where technical excellence is highly valued. You'll work alongside some of the brightest minds in trading and engineering while tackling complex, high-impact challenges. The systems you design will directly improve the speed, reliability, and scalability of our core operations and set the stage for our next phase of growth.
What You'll Do
- Design and implement scalable, production-grade data pipelines, moving away from siloed, bespoke systems toward a unified platform that supports alerting, monitoring, and recovery.
- Drive adoption of modern orchestration and workflow technologies such as Airflow or similar job scheduling tools.
- Build and maintain infrastructure using Kubernetes, Terraform, and related tools to ensure consistency and reliability.
- Partner with HPC and data teams to integrate storage systems, job scheduling, and parallelized compute environments.
- Collaborate with engineers to identify areas for process improvement and propose open-source or commercial solutions that improve efficiency.
- Act as a subject matter expert for observability, performance monitoring, and failure recovery across critical data workflows.
- Provide technical leadership and mentorship to junior engineers, with a path toward leading the automation team.
What We're Looking For
- 2-7 years of relevant experience in software, data, and/or infrastructure engineering.
- Strong background in data engineering or data platforms, including pipeline design and workflow orchestration tools (e.g., Airflow, Slurm).
- Deep expertise in Kubernetes and Terraform: you've built, deployed, and managed complex systems, well beyond surface-level experience.
- Experience with monitoring and observability tools to ensure reliability and fast recovery when issues arise.
- Familiarity with large-scale systems and environments such as trading firms, banks, or financial data providers (a plus).
- The ability to balance day-to-day operational work with long-term re-architecture and strategic initiatives.
- Comfort collaborating across teams and influencing technical direction in a growing organization.
- A continuous-improvement mindset and an eagerness to bring new ideas and open-source best practices to the table.
Nice-to-Have
- Experience with high-performance or parallel file systems and integrating with distributed compute environments.
- Experience with processing and managing extremely large volumes of data.
- Exposure to cloud-native environments and cost-optimization strategies.
- Background in finance or trading.
Compensation
We offer competitive compensation, including salary and performance-based bonuses, along with comprehensive benefits.