Lead AI/ML Engineer - Generative AI & LLMs (LangChain, Python, MLOps)
Build and lead production-grade generative AI solutions, LLM applications, and MLOps infrastructure across cloud environments.
Experience: 10 - 25 Years
Mandatory Skills:
LangChain, LangGraph, Python, JavaScript, AWS Bedrock, Orchestration, PyTorch/TensorFlow/Hugging Face, MLOps
What You’ll Do
Lead and deliver end-to-end AI solutions. Responsibilities are grouped for quick scanning:
Architecture & System Design
Architect and implement advanced AI and machine learning systems that solve complex business problems.
Design scalable cloud AI architectures (including AWS) and production ML infrastructure.
LLM Applications & Generative AI
Lead the design and deployment of LLM-based applications using frameworks like LangChain, LlamaIndex, and vector databases.
Design and build AI copilots, agents, and generative workflows that integrate seamlessly into modern software ecosystems.
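To make the copilot/agent work concrete, here is a minimal sketch of an LLM-backed chain using LangChain, assuming the langchain-openai and langchain-core packages and an OPENAI_API_KEY in the environment; the model name and prompts are placeholder assumptions, not details from the role:

    from langchain_core.output_parsers import StrOutputParser
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_openai import ChatOpenAI

    # Placeholder model and prompt; swap in whatever the engagement actually uses.
    llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
    prompt = ChatPromptTemplate.from_messages([
        ("system", "You are a copilot embedded in an internal engineering tool."),
        ("human", "{question}"),
    ])

    chain = prompt | llm | StrOutputParser()   # LCEL pipeline: prompt -> model -> plain string
    print(chain.invoke({"question": "How do I rotate my API key?"}))

The same chain can later be orchestrated as a LangGraph graph or exposed behind a service once it moves from prototype to product.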
Modeling & Engineering
Develop end-to-end ML pipelines from data acquisition and model training to deployment and monitoring.
Apply deep expertise in NLP, computer vision, or predictive modeling to build intelligent, real-time systems.
Evaluate and fine-tune foundation models for custom enterprise use cases.
Explore and implement retrieval-augmented generation (RAG), semantic search, and multi-modal reasoning techniques.
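A minimal sketch of the retrieval half of a RAG / semantic search pipeline, using FAISS (named in the requirements below); the random vectors stand in for real document and query embeddings:

    import faiss
    import numpy as np

    d = 384                                                        # embedding dimension (e.g. a small sentence encoder)
    doc_vectors = np.random.random((1000, d)).astype("float32")    # stand-ins for embedded document chunks
    query_vector = np.random.random((1, d)).astype("float32")      # stand-in for the embedded user query

    index = faiss.IndexFlatL2(d)                                   # exact L2 nearest-neighbour index
    index.add(doc_vectors)
    distances, ids = index.search(query_vector, 5)                 # top-5 chunks to pass to the LLM as context
    print(ids[0])

In production the flat index would typically be replaced by an approximate index or a managed vector database such as Pinecone or Weaviate.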
MLOps & Deployment
Lead model deployment, orchestration, monitoring, and lifecycle management using MLOps best practices.
Contribute to internal AI frameworks, toolkits, and accelerators to speed up solution delivery.
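As one illustrative slice of lifecycle management, a tracking sketch using MLflow; MLflow is an assumption here (the posting does not name a specific MLOps stack), and the experiment name and logged values are placeholders:

    import mlflow

    mlflow.set_experiment("llm-copilot-eval")               # hypothetical experiment name

    with mlflow.start_run(run_name="baseline"):
        mlflow.log_param("base_model", "open-source-llm")   # placeholder configuration
        mlflow.log_param("temperature", 0.0)
        mlflow.log_metric("eval_accuracy", 0.87)            # placeholder evaluation score

Runs logged this way feed model comparison, promotion decisions, and monitoring of drift across the model lifecycle.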
Leadership & Collaboration
Collaborate with cross-functional product, design, and engineering teams to define intelligent experiences.
Mentor engineers on AI architecture, model lifecycle best practices, and ethical/secure use of machine learning.
Drive client-facing AI strategy conversations and champion experimentation and innovation.
Requirements
You’ll bring:
10+ years of software engineering experience with a strong focus on AI/ML and intelligent systems
3+ years in a technical leadership role, building and deploying machine learning systems in production
Deep expertise in Python and modern AI/ML libraries (e.g., PyTorch, TensorFlow, Hugging Face Transformers)
Experience with large models (OpenAI, Anthropic, Cohere, open source LLMs) and prompt engineering
Familiarity with vector databases (e.g., Pinecone, Weaviate, FAISS) and scalable ML infrastructure
Knowledge of AI system design, data engineering for ML, model evaluation, and MLOps practices
Experience integrating AI capabilities into full-stack applications and cloud environments, specifically within AWS (see the sketch after this list)
Strong communication skills and a consulting mindset—able to confidently lead client-facing discussions on AI strategy
Passion for experimentation, innovation, and shaping the future of applied AI
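For the full-stack integration point above, a minimal sketch of exposing an AI capability behind an HTTP endpoint; FastAPI and uvicorn are assumptions rather than stated requirements, and the echoed answer stands in for a real model call:

    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class Query(BaseModel):
        question: str

    @app.post("/ask")
    def ask(query: Query) -> dict:
        # Stand-in for a real model call (e.g. the LangChain chain sketched earlier).
        return {"answer": f"echo: {query.question}"}

    # Run with: uvicorn app:app --reload   (assumes this file is saved as app.py)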