Amazon Web Services (AWS)
Senior Delivery Consultant – Senior Machine Learning Engineer, AWS Professional Services
Herndon, Virginia, United States, 22070
Description
Are you excited about building software solutions around large, complex Machine Learning (ML) and Artificial Intelligence (AI) systems? Want to help the largest global enterprises derive business value through the adoption and automation of Generative AI (GenAI)? Excited by using massive amounts of disparate data to develop AI/ML models? Eager to apply AI/ML to a diverse array of enterprise use cases? Thrilled to be a key part of Amazon, which has been investing in Machine Learning for decades—pioneering and shaping the world’s AI technology?
The Amazon Web Services Professional Services (ProServe) team is seeking a skilled ML Engineer to join us as a Delivery Consultant. In this role, you'll work closely with customers to design, implement, and manage AWS AI/ML and GenAI solutions that meet their technical requirements and business objectives. You will lead customer‑focused project teams, act as a technical leader, and perform hands‑on development of ML solutions with exceptional quality.
Possessing a deep understanding of AWS products and services, you will be proficient in architecting complex, scalable, and secure AI/ML and GenAI solutions tailored to each customer. You’ll work closely with stakeholders to gather requirements, assess current infrastructure, and propose effective migration strategies to AWS. As a trusted advisor, you will guide customers on industry trends, emerging technologies, and innovative solutions, while leading implementation, ensuring best‑practice adherence, optimizing performance, and managing risk throughout the project lifecycle.
Key Job Responsibilities
Lead project teams and implement end‑to‑end AI/ML and GenAI projects—from understanding business needs to data preparation, model development, deployment, and monitoring.
Design and implement machine learning pipelines that support high‑performance, reliable, scalable, and secure ML workloads.
Design scalable MLOps solutions using AWS services and leverage GenAI solutions when applicable.
Collaborate with cross‑functional teams (Applied Science, DevOps, Data Engineering, Cloud Infrastructure, Applications) to prepare, analyze, and operationalize data and AI/ML models.
Serve as a trusted advisor to customers on AI/ML, GenAI solutions, and cloud architectures.
Share knowledge and best practices within the organization through mentoring, training, publications, and reusable artifacts.
Ensure solutions meet industry standards and support customers in advancing their AI/ML, GenAI, and cloud adoption strategies.
Basic Qualifications
5+ years of cloud architecture and implementation experience.
Bachelor’s degree in Computer Science, Engineering, or related field (or equivalent experience).
8+ years leading technical teams and hands‑on experience in data, software, or ML engineering, with strong distributed‑computing knowledge (e.g., data pipelines, training and inference, ML infrastructure design).
5+ years of experience developing predictive modeling, natural language processing, and deep learning solutions, with a proven track record of building and deploying ML models in the cloud (e.g., Amazon SageMaker).
5+ years developing with SQL, Python, and at least one additional programming language (Java, Scala, JavaScript, TypeScript). Proficient with leading ML libraries and frameworks (TensorFlow, PyTorch).
Preferred Qualifications
Proficiency in AWS services such as SageMaker, Bedrock, EC2, ECS, EKS, OpenSearch, Step Functions, VPC, and CloudFormation.
AWS Professional certifications (Solutions Architect Professional, DevOps Engineer Professional, etc.).
Experience with automation (Terraform, Python), Infrastructure as Code (CloudFormation, CDK), Containers, and CI/CD pipelines.
Knowledge of common security and compliance standards (HIPAA, GDPR).
Strong communication skills and ability to explain complex concepts to technical and non‑technical audiences, and lead technical teams in customer projects.
Experience building ML pipelines with MLOps best practices, including data preprocessing, model hosting, feature selection, hyperparameter tuning, distributed & GPU training, deployment, monitoring, and retraining.
Experience with MLOps frameworks (MLflow, Kubeflow), orchestration tools (Airflow, AWS Step Functions), and building applications using GenAI technologies (LLMs, vector stores, LangChain, prompt engineering).
Amazon is an equal‑opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status. Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information.
Our compensation reflects the cost of labor across several US geographic markets. The base pay for this position ranges from $138,200 per year in our lowest geographic market up to $239,000 per year in our highest geographic market. Pay is based on a number of factors including market location and may vary depending on job‑related knowledge, skills, and experience. Amazon is a total‑compensation company. Dependent on the position offered, equity, sign‑on payments, and other forms of compensation may be provided as part of a total compensation package, in addition to a full range of medical, financial, and other benefits. For more information, please visit https://www.aboutamazon.com/workplace/employee-benefits.
This position will remain posted until filled. Applicants should apply via our internal or external career site.
Job ID: A3017067
Location: Herndon, VA
Salary Range: $118,200 – $204,300
Employment Type: Full‑time