Optiver US LLC
Optiver Chicago is seeking a seasoned Data Architect to support and advance the data capabilities of our local research team and contribute to our broader Lakehouse architecture vision. In this role, you’ll work with cutting-edge tools like Databricks, Apache Spark, and Delta Lake, translating experimental research workflows into scalable, production‑grade systems.
As a critical engineering presence in Chicago, you’ll directly influence local data strategy while collaborating with our Global Research and Data Platform teams. Your work will drive real‑time insights, enable predictive analytics, and power decision‑making across the trading lifecycle.
This role combines hands‑on data engineering, cross‑regional collaboration, and platform innovation—all within a high‑performance, data‑driven environment.
What you’ll do
Design, build, and maintain reliable ETL/ELT pipelines using Spark, Structured Streaming, Databricks, and our in‑house high‑performance tools
Optimize and productionize research workflows with a strong focus on scalability, resilience, and performance tuning
Collaborate with power users to develop and share reusable patterns, templates, and onboarding pathways
Define, document, and enforce data engineering best practices
Mentor junior engineers and drive a culture of continuous learning and DataOps excellence
What you’ll get
Immediate impact on the data systems powering world‑class research and real‑time trading decisions
A unique opportunity to shape Lakehouse engineering in the United States and influence global data architecture
High autonomy to own complex workflows and build reusable “how‑to” solutions across teams
Close collaboration with quant researchers and traders to unlock predictive insights and trading alpha
Partnership with best‑in‑class engineers across Chicago, Amsterdam, and Sydney
In addition, you’ll receive:
The opportunity to work alongside best‑in‑class professionals from over 40 different countries
401(k) match up to 50%
Comprehensive health, mental health, dental, vision, disability, and life coverage
Extensive office perks, including breakfast, lunch and snacks, regular social events, clubs, sporting leagues and more
Who you are
5+ years of hands‑on experience in data engineering, delivering robust pipelines at scale
Advanced Python skills and deep experience with Apache Spark and the Databricks platform
Familiarity with Delta Lake, streaming data systems (e.g., Kafka), and distributed compute environments
Solid understanding of cloud‑native data architectures (preferably AWS) and infrastructure cost optimization principles
Proficiency in relational databases (e.g., PostgreSQL) and modern orchestration tools
Bonus: Experience with system‑level languages (e.g., C++, Rust) and exposure to MLOps or MLflow
Proven ability to lead projects independently and deliver outcomes in fast‑paced environments
Clear communicator who collaborates well with researchers, traders, and engineers alike
Enthusiastic mentor and strong advocate for engineering rigor and platform scalability
Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical discipline
Who we are
At Optiver, our mission is to improve the market by injecting liquidity, providing accurate pricing, increasing transparency and stabilizing the market no matter the conditions. With a focus on continuous improvement, we prioritize safeguarding the health and efficiency of the markets for all participants. As one of the largest market‑making institutions, we are a respected partner on 100+ exchanges across the globe.
Our differences are our edge. Optiver does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, physical or mental disability, or other legally protected characteristics.
Below is the expected base salary for this position. This is a good‑faith estimate of the base pay scale, and offers will ultimately be determined based on experience, education, skill set, and performance in the interview process. This position is also eligible for a discretionary bonus (as determined by Optiver) and Optiver’s benefits package, which includes the benefits listed above.