Hyatt
Senior Machine Learning/MLOps Engineer (Remote Opportunity)
Hyatt, Chicago, Illinois, United States, 60290
Summary: At Hyatt, we are working to advance care through data‑driven decisions and automation. This mission serves as the foundation for every decision as we create the future of travel. We can’t do that without the best talent – talent that is innovative, curious, and driven to create exceptional experiences for our guests, customers, owners and colleagues.
In this role you will design and implement algorithmic product architectures across Personalization, Generative AI, Forecasting, and Decision Science domains, as well as foundational MLOps frameworks to bring our machine learning models to life across the full lifecycle of the product including data ingestion, ML processing, and results delivery/activation. This role combines deep technical modeling expertise with infrastructure engineering skills to design, build, and operate end‑to‑end ML/AI systems at scale. You'll work across the full ML lifecycle – from distributed training and model optimization to production deployment and monitoring.
This role will work cross‑functionally with various data science teams, data engineering teams, and data architecture teams. The ideal candidate can serve as both solutions architect and hands‑on implementation engineer and guide the team towards best‑in‑class MLOps platform and algorithmic product implementations.
You will be part of a ground‑floor, hands‑on, highly visible team that is positioned for growth, highly collaborative, and passionate about machine learning and AI. Applying the latest techniques and approaches across data science, machine learning, and AI isn't just a nice‑to‑have – it's a must.
Benefits
Annual allotment of free hotel stays at Hyatt hotels globally
Flexible work schedule and location
Work‑life benefits, including wellbeing initiatives such as a complimentary Headspace subscription and a discount at the on‑site fitness center
A global family assistance policy with paid time off following the birth or adoption of a child as well as financial assistance for adoption
Paid Time Off, Medical, Dental, Vision, and 401(k) with company match
Our Commitment to Diversity, Equity, and Inclusion
Our success is underpinned by a diverse, equitable, and inclusive culture, and we are committed to diversity across the board: who we hire and develop, the organizations we support, and the partners we buy from and work with. We constantly strive to reflect the world we care for with teams that achieve and grow together.
Who You Are
As our ideal candidate, you understand the power and purpose of our culture of care, and embody our core values of Empathy, Inclusion, Integrity, Experimentation, Respect and Wellbeing. You enjoy working with others, are results driven and are looking for a variety of opportunities to develop personally and professionally.
Qualifications
Infrastructure Design & AI‑Services Architecture
Partner with data scientists to design AI‑services and architectures that activate ML models and maximize their impact, such as real‑time streaming use‑cases and offline batch optimizations
Lead the design and implementation of ML infrastructure solutions, including data ingestion pipelines, feature processing, model training, and serving environments
Build and maintain scalable inference systems for real‑time and batch predictions (an illustrative serving sketch follows this list)
Deploy models across various compute environments (EC2, EKS, SageMaker, specialized inference chips)
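To make the real‑time serving responsibility above concrete, here is a minimal, hypothetical sketch in Python of a single‑model scoring service. The FastAPI framework choice, route name, payload shape, and artifact path are illustrative assumptions, not a description of Hyatt's actual stack.

```python
# Hypothetical real-time scoring service: loads one model artifact and serves
# low-latency, single-record predictions. All names here are placeholders.
from fastapi import FastAPI
from pydantic import BaseModel
import joblib

app = FastAPI(title="personalization-scorer")

# Assumed artifact path; in practice the model would come from a model registry.
model = joblib.load("model.joblib")

class ScoreRequest(BaseModel):
    features: list[float]

class ScoreResponse(BaseModel):
    score: float

@app.post("/score", response_model=ScoreResponse)
def score(req: ScoreRequest) -> ScoreResponse:
    # One record per request keeps latency low; large batch workloads would
    # run through an offline pipeline rather than this endpoint.
    prediction = float(model.predict([req.features])[0])
    return ScoreResponse(score=prediction)
```

Run locally with, for example, `uvicorn service:app` if the file is saved as `service.py`; the same container image could then be deployed to EKS or a SageMaker endpoint.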
MLOps Platform & Pipeline Automation
Implement, evolve, and maintain our MLOps platform, technology, and processes, including the Feature Store, ML Observability, ML Governance, and Training and Deployment pipelines
Create and maintain automated workflows for model training, evaluation, and deployment using infrastructure‑as‑code patterns (an illustrative pipeline sketch follows this list)
Build MLOps platforms and tooling that abstract complex engineering tasks for data science teams
Implement CI/CD pipelines for both model artifacts and infrastructure components
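As one way to picture the automated training workflow described above, the sketch below uses an Airflow DAG (Airflow 2.4+ syntax is assumed) with placeholder train, evaluate, and deploy steps; the task bodies and schedule are illustrative, not an actual Hyatt pipeline.

```python
# Hypothetical train -> evaluate -> deploy workflow expressed as an Airflow DAG.
# Task bodies are placeholders; real steps would pull features, write artifacts,
# gate on evaluation metrics, and promote the model to serving.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def train_model(**context):
    ...  # pull features, fit the model, write the artifact to object storage

def evaluate_model(**context):
    ...  # score a holdout set and raise if metrics regress, blocking deployment

def deploy_model(**context):
    ...  # register the approved artifact and roll it out to serving

with DAG(
    dag_id="ml_training_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    train = PythonOperator(task_id="train", python_callable=train_model)
    evaluate = PythonOperator(task_id="evaluate", python_callable=evaluate_model)
    deploy = PythonOperator(task_id="deploy", python_callable=deploy_model)

    train >> evaluate >> deploy
```

The same structure applies whether the orchestration layer is Airflow, Step Functions, or SageMaker Pipelines; the DAG definition itself can be versioned and delivered through the CI/CD pipelines mentioned above.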
Model Development & Optimization
Design, implement, and optimize machine learning models including deep learning architectures, LLMs, and specialized models (e.g., BERT‑based classifiers) across Personalization, Generative AI, Forecasting, and Decision Science domains
Implement distributed training workflows using PyTorch and other frameworks
Fine‑tune large language models and optimize inference performance using model compilation and optimization tools such as the Neuron compiler for AWS Inferentia, ONNX, and vLLM (an illustrative ONNX export sketch follows this list)
Optimize models for specific hardware targets (GPU, TPU, AWS Inferentia/Trainium)
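As a small illustration of the model‑optimization work above, the sketch below exports a toy PyTorch model to ONNX and sanity‑checks it with ONNX Runtime; the model, input shape, and file name are placeholders.

```python
# Hypothetical ONNX export for optimized inference. The tiny model and the
# (1, 16) input shape are placeholders, not a real production model.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
model.eval()

dummy_input = torch.randn(1, 16)

torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["features"],
    output_names=["score"],
    # A dynamic batch dimension lets the same graph serve single-record
    # and batched requests.
    dynamic_axes={"features": {0: "batch"}, "score": {0: "batch"}},
)

# Sanity-check the exported graph with ONNX Runtime.
import onnxruntime as ort

session = ort.InferenceSession("model.onnx")
outputs = session.run(None, {"features": dummy_input.numpy()})
print(outputs[0].shape)
```

Hardware‑specific paths such as the Neuron compiler for Inferentia/Trainium follow a broadly similar trace‑and‑compile pattern, though the exact tooling differs.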
Performance & Operational Excellence
Enhance and maintain existing AI‑services as needed to maximize the impact of the algorithmic product
Monitor ML systems for performance, accuracy, latency, and cost optimization
Conduct performance profiling and optimization of training and inference workloads
Implement observability and monitoring solutions across the ML stack
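For the monitoring and observability responsibilities above, a minimal, hypothetical sketch using the Prometheus Python client is shown below; the metric names, labels, and simulated inference body are illustrative assumptions.

```python
# Hypothetical inference metrics exposed for Prometheus scraping: a prediction
# counter and a latency histogram. The model call is simulated with a sleep.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

PREDICTIONS = Counter(
    "predictions_total", "Number of predictions served", ["model_version"]
)
LATENCY = Histogram(
    "prediction_latency_seconds", "End-to-end prediction latency in seconds"
)

def predict(features):
    with LATENCY.time():
        time.sleep(random.uniform(0.005, 0.02))  # placeholder for real inference
        PREDICTIONS.labels(model_version="v1").inc()
        return 0.0

if __name__ == "__main__":
    start_http_server(8000)  # metrics are scraped from http://localhost:8000/metrics
    while True:
        predict([0.1, 0.2, 0.3])
```

The same counters and histograms can feed CloudWatch or Grafana dashboards and drive alerts on latency, error rate, and prediction‑volume drift.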
Cross‑functional Partnership & Technical Leadership
Partner with the data engineering team to ensure data science's data needs are met in the format and cadence required for maximum impact
Partner with the data architecture, data governance, and security teams to ensure solutions meet required standards
Mentor team members on both modeling techniques and infrastructure best practices
Stay up to date with the latest AI and MLOps design patterns, as well as AWS machine learning services
Machine Learning Engineering Qualifications
Experience Required
Master's degree in Computer Science, Software Engineering, Machine Learning, or a related field is required
5+ years of experience implementing AI solutions in a cloud environment, with a focus on AI‑services and MLOps foundations; hospitality experience is not required
3+ years of hands‑on experience with both ML model development and production infrastructure
Technical Competencies
Cloud & Infrastructure: Expertise in AWS cloud services (EC2, EKS, S3, SageMaker, Inferentia/Trainium), Terraform/CloudFormation, Docker, Kubernetes
Data & Processing: Expertise in Python, SQL, PySpark, Apache Spark, Airflow, Kinesis, feature stores, model serving frameworks
Development & Operations: Experience with streaming and batch data architectures at scale, DevOps and CI/CD concepts (GitHub Actions, CodePipeline), monitoring (CloudWatch, Prometheus, MLflow)
Machine Learning & Deep Learning: PyTorch, TensorFlow, distributed training, LLM fine‑tuning, transformer architectures, model optimization, ONNX, vLLM, hardware‑specific optimizations
Additional Requirements
Experience operating in an Agile Methodology environment
Experience building end‑to‑end ML systems from research to production
Excellent communication and teamwork skills
Position will not require customer‑facing interactions
Desired Qualifications
Previous work on recommendation systems, NLP applications, or real‑time inference systems
Experience with MLOps platform development and feature store implementations
Familiarity with security and compliance standards in cloud environments
The position responsibilities outlined above are in no way to be construed as all‑encompassing. Other duties, responsibilities, and qualifications may be required and/or assigned as necessary.
Research shows that women, people of color and other historically excluded groups tend to apply to jobs only if they meet all of the listed job qualifications. Unsure if you check every box, but feeling inspired to enhance your career? Apply. We’d love to consider your unique experiences and how you could make Hyatt even better.
The salary range for this position is $130,000 to $170,000. This position is also eligible to earn incentive awards and an annual bonus. The final pay rate/salary offered to the successful candidate will depend on experience, skill level and other qualifications for the role, as well as the location of the performance of work. Pay for the successful candidate will meet local requirements, including the local minimum wage rate.
Job Details
Seniority level: Mid‑Senior level
Employment type: Full‑time
Job function: Engineering and Information Technology
Industry: Hospitality