NVIDIA

Solutions Architect, AI and ML

NVIDIA, Seattle, Washington, US 98127


NVIDIA is building the world’s leading AI company and we are looking for an experienced Cloud Solution Architect to help customers adopt GPU hardware and software and build ML/DL solutions on cloud platforms.

What You Will Be Doing

Help cloud customers craft, deploy, and maintain scalable, GPU-accelerated inference pipelines on cloud ML services and Kubernetes for large language models (LLMs) and generative AI workloads.

Drive performance tuning with TensorRT/TensorRT-LLM, vLLM, Dynamo, and Triton Inference Server to improve GPU utilization and model efficiency.

Collaborate with multi-functional teams (engineering, product) and offer technical mentorship to cloud customers implementing AI inference at scale.

Build custom PoCs for solutions that address customers’ critical business needs using NVIDIA hardware and software technology.

Partner with Sales Account Managers and Developer Relations Managers to identify and secure new business opportunities for NVIDIA ML/DL products and other software solutions.

Prepare and deliver technical content to customers, including presentations on purpose-built solutions and workshops on NVIDIA products and solutions.

Conduct regular technical customer meetings covering project/product roadmaps, feature discussions, and introductions to new technologies. Establish close technical ties with customers to facilitate rapid resolution of their issues.

What We Need To See

BS/MS/PhD in Electrical/Computer Engineering, Computer Science, Statistics, Physics, or other Engineering fields or equivalent experience.

3+ years in solutions architecture with a proven track record of moving AI inference from PoC to production in cloud computing environments such as AWS, GCP, or Azure.

3+ years of hands-on experience with deep learning frameworks such as PyTorch and TensorFlow.

Excellent knowledge of the theory and practice of LLM and DL inference.

Strong fundamentals in programming, optimizations, and software design, especially in Python.

Experience with containerization and orchestration technologies such as Docker and Kubernetes, as well as monitoring and observability solutions for AI deployments.

Knowledge of inference technologies such as NVIDIA NIM, TensorRT-LLM, Dynamo, Triton Inference Server, and vLLM.

Strong problem-solving and debugging skills in GPU environments.

Excellent presentation, communication, and collaboration skills.

Ways To Stand Out From The Crowd

AWS, GCP, or Azure Professional Solutions Architect certification.

Experience optimizing and deploying large Mixture-of-Experts (MoE) LLMs at scale.

Active contributions to open-source AI inference projects (e.g., vLLM, TensorRT-LLM, Dynamo, SGLang, Triton, or similar).

Experience with multi-GPU, multi-node inference technologies such as tensor parallelism/expert parallelism, disaggregated serving, LWS, MPI, EFA/InfiniBand, and NVLink/PCIe.

Experience developing and integrating monitoring and alerting solutions using Prometheus, Grafana, and NVIDIA DCGM, and GPU performance analysis with tools like NVIDIA Nsight Systems.

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is $120,000 – $189,750 for Level 2, and $148,000 – $235,750 for Level 3. You will also be eligible for equity and benefits.

NVIDIA is committed to fostering a diverse work environment and is proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.

Applications for this job will be accepted at least until October 21, 2025.
