General Motors
Staff ML Engineer, Inference Platform
General Motors, Sunnyvale, California, United States, 94087
Hybrid
This role is categorized as hybrid. The successful candidate is expected to report to the Sunnyvale Technical Center, CA at least three times per week, or at another frequency dictated by the business.
This job is eligible for relocation assistance.
About the Team
The ML Inference Platform is part of the AI Compute Platforms organization within Infrastructure Platforms. Our team owns the cloud-agnostic, reliable, and cost-efficient platform that powers GM’s AI efforts. We’re proud to serve as the AI infrastructure platform for teams developing autonomous vehicles (L3/L4/L5), as well as other groups building AI-driven products for GM and its customers. We enable rapid innovation and feature development by optimizing for high-priority, ML-centric use cases. Our platform supports the serving of state-of-the-art (SOTA) machine learning models for experimental and bulk inference, with a focus on performance, availability, concurrency, and scalability. We’re committed to maximizing GPU utilization across platforms (B200, H100, A100, and more) while maintaining reliability and cost efficiency.
About the Role
We are seeking a Staff ML Infrastructure Engineer to help build and scale robust compute platforms for ML workflows. In this role, you’ll work closely with ML engineers and researchers to ensure efficient model serving and inference in production for workflows such as data mining, labeling, model distillation, and simulation. This is a high-impact opportunity to influence the future of AI infrastructure at GM. You will play a key role in shaping the architecture, roadmap, and user experience of a robust ML inference service supporting real-time, batch, and experimental inference needs. The ideal candidate brings experience designing distributed systems for ML, strong problem-solving skills, and a product mindset focused on platform usability and reliability.
What you’ll be doing
Design and implement core platform backend software components.
Collaborate with ML engineers and researchers to understand critical workflows, translate them into platform requirements, and deliver incremental value.
Lead technical decision-making on model serving strategies, orchestration, caching, model versioning, and auto-scaling mechanisms.
Drive the development of monitoring, observability, and metrics to ensure reliability, performance, and resource optimization of inference services.
Proactively research and integrate state-of-the‑art model serving frameworks, hardware accelerators, and distributed computing techniques.
Lead large‑scale technical initiatives across GM’s ML ecosystem.
Raise the engineering bar through technical leadership, establishing best practices.
Contribute to open source projects; represent GM in relevant communities.
Minimum Requirements
8+ years of industry experience, with a focus on machine learning systems or high-performance backend services.
Expertise in Go, Python, C++, or other relevant programming languages.
Expertise in ML inference and model serving frameworks (e.g., Triton, Ray Serve, vLLM).
Strong communication skills and a proven ability to drive cross‑functional initiatives.
Experience working with cloud platforms such as GCP, Azure, or AWS.
Ability to thrive in a dynamic, multi‑tasking environment with ever‑evolving priorities.
Preferred Qualifications
Hands‑on experience building ML infrastructure platforms for model serving/inference.
Experience working with or designing interfaces, APIs, and clients for ML workflows.
Experience with the Ray framework and/or vLLM.
Experience with distributed systems and large-scale data processing.
Familiarity with telemetry and other feedback loops that inform product improvements.
Familiarity with hardware acceleration (GPUs) and optimizations for inference workloads.
Contributions to open‑source ML serving frameworks.
Why Join Us
If you’re excited to tackle some of today’s most complex engineering challenges, see the impact of your work in real-world AV applications, and help shape the future of AI infrastructure at GM, this is the team for you.
Compensation
The compensation information is a good faith estimate only. It is based on what a successful applicant might be paid in accordance with applicable state laws, and may not be representative for positions located outside of New York, Colorado, California, or Washington.
The expected base compensation for this role is $195,000 - $298,000. Actual base compensation within the identified range will vary based on factors relevant to the position.
Bonus Potential:
An incentive pay program offers payouts based on company performance, job level, and individual performance.
Benefits:
GM offers a variety of health and wellbeing benefit programs. Benefit options include medical, dental, vision, Health Savings Account, Flexible Spending Accounts, retirement savings plan, sickness and accident benefits, life insurance, paid vacation & holidays, tuition assistance programs, employee assistance program, GM vehicle discounts and more.
Company Vehicle
Upon successful completion of a motor vehicle report review, you will be eligible to participate in a company vehicle evaluation program, through which you will be assigned a General Motors vehicle to drive and evaluate. Note: program participants are required to purchase/lease a qualifying GM vehicle every four years unless one of a limited number of exceptions applies.