
Solution Architect - OEM AI Software

NVIDIA, Granite Heights, Wisconsin, United States


Overview

NVIDIA is seeking an outstanding Solution Architect to help grow our OEM enterprise AI business. Our solution architects work across teams to help partners and customers with the latest Accelerated Computing, Generative AI, and AI Factory deployments. You will become a trusted technical advisor to our OEM partners and work on software solutions that enable enterprise Generative AI workflows. This role is an excellent opportunity to work on an interdisciplinary team using the latest technologies at NVIDIA.

What You Will Be Doing

- Work with our OEM partners to architect enterprise-grade, end-to-end generative AI software solutions.
- Collaborate closely with OEM partners' software development teams to craft world-class joint AI solutions.
- Collaborate with sales and business development teams to support pre-sales activities, including technical presentations and demonstrations of Generative AI capabilities.
- Work closely with NVIDIA engineering teams to provide feedback and contribute to the evolution of generative AI software.
- Engage directly with customers and partners to understand their requirements and challenges.
- Lead workshops and design sessions to define and refine generative AI solutions, with a strong emphasis on enterprise workflows.
- Implement strategies for efficient and effective training of LLMs to achieve peak performance.
- Design and implement RAG-based workflows to improve content generation and information retrieval.

What We Need To See

- 3-5+ years of hands-on experience as a solution architect or in a similar role, with a specific focus on AI solutions.
- BS, MS, or PhD in Computer Science, Electrical/Computer Engineering, Physics, Mathematics, another engineering discipline, or a related field (or equivalent experience).
- Proven track record of successfully deploying and optimizing Generative AI models for inference in production environments.
- Expertise in training and fine-tuning LLMs using popular frameworks such as TensorFlow, PyTorch, or Hugging Face Transformers.
- Proficiency in model deployment and optimization techniques for efficient inference on various hardware platforms, with a focus on GPUs.
- Solid understanding of GPU cluster architecture and the ability to leverage parallel processing for accelerated model training and inference.
- Excellent communication and collaboration skills, with the ability to articulate complex technical concepts to both technical and non-technical audiences.
- Experience leading workshops and training sessions and presenting technical solutions to diverse audiences.

Ways To Stand Out

- Experience deploying Generative AI models in cloud environments and on on-premises infrastructure.
- Experience with NVIDIA GPUs and software libraries such as NVIDIA NIM, NVIDIA NeMo Framework, NVIDIA Triton Inference Server, TensorRT, and TensorRT-LLM.
- Proven ability to optimize LLMs for inference speed, memory efficiency, and resource utilization.
- Familiarity with Docker (or equivalent containerization technologies) and Kubernetes for scalable and efficient model deployment.
- Deep understanding of GPU cluster architecture, parallel computing, and distributed computing concepts.

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 120,000 USD - 189,750 USD for Level 2, and 148,000 USD - 235,750 USD for Level 3. You will also be eligible for equity and benefits. Applications for this job will be accepted at least until July 29, 2025.

EEO Statement:

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. We do not discriminate on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
