Saxon Global
Gen AI Architect
If you’re interested, please share your latest resume with me at
Ramesh.s@saxonglobal.com .
Duration: FTE/Contract
Job Summary
Educational Qualification*:
Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
Experience Range:
Total IT 15+ & 10-12 years of experience in AI/ML-related roles, with a strong focus on LLMs & Agentic AI technology.
Primary (Must have skills)
Generative AI Solution Architecture (2–3 years): Proven experience in designing and architecting GenAI applications, including Retrieval-Augmented Generation (RAG), LLM orchestration (LangChain, LangGraph), and advanced prompt design strategies.
Backend & Integration Expertise (5+ years): Strong background in architecting Python-based microservices, APIs, and orchestration layers that enable tool invocation, context management, and task decomposition across cloud-native environments (Azure Functions, GCP Cloud Functions, Kubernetes).
Enterprise LLM Architecture (2–3 years): Hands-on experience in architecting end-to-end LLM solutions using Azure OpenAI, Azure AI Studio, Hugging Face models, and GCP Vertex AI, ensuring scalability, security, and performance.
RAG & Data Pipeline Design (2–3 years): Expertise in designing and optimizing RAG pipelines, including enterprise data ingestion, embedding generation, and vector search using Azure Cognitive Search, Pinecone, Weaviate, FAISS, or GCP Vertex AI Matching Engine.
LLM Optimization & Adaptation (2–3 years): Experience in implementing fine-tuning and parameter-efficient tuning approaches (LoRA, QLoRA, PEFT) and integrating memory modules (long-term, short-term, episodic) to enhance agent intelligence.
Multi-Agent Orchestration (2–3 years): Skilled in designing multi-agent frameworks and orchestration pipelines with LangChain, AutoGen, or DSPy, enabling goal‑driven planning, task decomposition, and tool/API invocation.
Performance Engineering (2–3 years): Experience in optimizing GCP Vertex AI models for latency, throughput, and scalability in enterprise‑grade deployments.
AI Application Integration (2–3 years): Proven ability to integrate OpenAI and third‑party models into enterprise applications via APIs and custom connectors (MuleSoft, Apigee, Azure APIM).
Governance & Guardrails (1–2 years): Hands‑on experience in implementing security, compliance, and governance frameworks for LLM‑based applications, including content moderation, data protection, and responsible AI guardrails.
Job Description of Role (RNR) - To be Evaluated by Technical Panel
Key technical skills: As a Technical Architect specializing in LLMs and Agentic AI, you will own the architecture, strategy, and delivery of enterprise‑grade AI solutions.
Primary Responsibilities:
Architect Scalable GenAI Solutions: Lead the design of enterprise architectures for LLM and multi‑agent systems, ensuring scalability, resilience, and security across Azure and GCP platforms.
Technology Strategy & Guidance: Provide strategic technical leadership to customers and internal teams, aligning GenAI projects with business outcomes.
LLM & RAG Applications: Architect and guide development of LLM‑powered applications, assistants, and RAG pipelines for structured and unstructured data.
Agentic AI Frameworks: Define and implement agentic AI architectures leveraging frameworks like LangGraph, AutoGen, DSPy, and cloud‑native orchestration tools.
Integration & APIs: Oversee integration of OpenAI, Azure OpenAI, and GCP Vertex AI models into enterprise systems, including MuleSoft Apigee connectors.
LLMOps & Governance: Establish LLMOps practices (CI/CD, monitoring, optimization, cost control) and enforce responsible AI guardrails (bias detection, prompt injection protection, hallucination reduction).
Enterprise Governance: Lead architecture reviews, governance boards, and technical design authority for all LLM initiatives.
Collaboration: Partner with data scientists, engineers, and business teams to translate use cases into scalable, secure solutions.
Documentation & Standards: Define and maintain best practices, playbooks, and technical documentation for enterprise adoption.
Monitoring & Observability: Guide implementation of AgentOps dashboards for usage, adoption, ingestion health, and platform performance visibility.
Secondary Responsibilities:
Innovation & Research: Stay ahead of advancements in OpenAI, Azure AI, and GCP Vertex AI, evaluating new features and approaches for enterprise adoption.
Proof of Concepts: Lead or sponsor PoCs to validate feasibility, ROI, and technical fit for new AI capabilities.
Ecosystem Expertise: Remain current on Azure AI services (Cognitive Search, AI Studio, Cognitive Services) and GCP AI stack (Vertex AI, BigQuery, Matching Engine).
Business Alignment: Collaborate with product and business leadership to prioritize high‑value AI initiatives with measurable outcomes.
Mentorship: Coach engineering teams on LLM solution design, performance tuning, and evaluation techniques.
Soft skills / other skills
Communication Skills: Communicate effectively with internal and customer stakeholders. Approach: verbal, emails and instant messages.
Interpersonal Skills: Strong interpersonal skills to build and maintain productive relationships with team members & customer representatives. Provide constructive feedback during code reviews and be open to receiving feedback on your own code.
Problem‑Solving and Analytical Thinking: Capability to troubleshoot and resolve issues efficiently. Analytical mindset. Ability to bring idea into reality through technology implementation & adoption.
Task/Work Updates: Prior experience in working on Agile/Scrum projects with exposure to tools like Jira/Azure DevOps. Provides regular updates, proactive and due diligent to carry out responsibilities.
Secondary Skills to be Planned Post Hiring
Knowledge of MCP's and A2A SDK
Version Control: Proficiency with version control tools like Git.
Agile Methodologies: Experience working in Agile development environments.
Seniority level Mid-Senior level
Employment type Contract
Job function Information Technology
Industries Staffing and Recruiting
Location & Salary San Jose, CA – $83,600 - $158,700 (2 weeks ago)
#J-18808-Ljbffr
Ramesh.s@saxonglobal.com .
Duration: FTE/Contract
Job Summary
Educational Qualification*:
Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
Experience Range:
Total IT 15+ & 10-12 years of experience in AI/ML-related roles, with a strong focus on LLMs & Agentic AI technology.
Primary (Must have skills)
Generative AI Solution Architecture (2–3 years): Proven experience in designing and architecting GenAI applications, including Retrieval-Augmented Generation (RAG), LLM orchestration (LangChain, LangGraph), and advanced prompt design strategies.
Backend & Integration Expertise (5+ years): Strong background in architecting Python-based microservices, APIs, and orchestration layers that enable tool invocation, context management, and task decomposition across cloud-native environments (Azure Functions, GCP Cloud Functions, Kubernetes).
Enterprise LLM Architecture (2–3 years): Hands-on experience in architecting end-to-end LLM solutions using Azure OpenAI, Azure AI Studio, Hugging Face models, and GCP Vertex AI, ensuring scalability, security, and performance.
RAG & Data Pipeline Design (2–3 years): Expertise in designing and optimizing RAG pipelines, including enterprise data ingestion, embedding generation, and vector search using Azure Cognitive Search, Pinecone, Weaviate, FAISS, or GCP Vertex AI Matching Engine.
LLM Optimization & Adaptation (2–3 years): Experience in implementing fine-tuning and parameter-efficient tuning approaches (LoRA, QLoRA, PEFT) and integrating memory modules (long-term, short-term, episodic) to enhance agent intelligence.
Multi-Agent Orchestration (2–3 years): Skilled in designing multi-agent frameworks and orchestration pipelines with LangChain, AutoGen, or DSPy, enabling goal‑driven planning, task decomposition, and tool/API invocation.
Performance Engineering (2–3 years): Experience in optimizing GCP Vertex AI models for latency, throughput, and scalability in enterprise‑grade deployments.
AI Application Integration (2–3 years): Proven ability to integrate OpenAI and third‑party models into enterprise applications via APIs and custom connectors (MuleSoft, Apigee, Azure APIM).
Governance & Guardrails (1–2 years): Hands‑on experience in implementing security, compliance, and governance frameworks for LLM‑based applications, including content moderation, data protection, and responsible AI guardrails.
Job Description of Role (RNR) - To be Evaluated by Technical Panel
Key technical skills: As a Technical Architect specializing in LLMs and Agentic AI, you will own the architecture, strategy, and delivery of enterprise‑grade AI solutions.
Primary Responsibilities:
Architect Scalable GenAI Solutions: Lead the design of enterprise architectures for LLM and multi‑agent systems, ensuring scalability, resilience, and security across Azure and GCP platforms.
Technology Strategy & Guidance: Provide strategic technical leadership to customers and internal teams, aligning GenAI projects with business outcomes.
LLM & RAG Applications: Architect and guide development of LLM‑powered applications, assistants, and RAG pipelines for structured and unstructured data.
Agentic AI Frameworks: Define and implement agentic AI architectures leveraging frameworks like LangGraph, AutoGen, DSPy, and cloud‑native orchestration tools.
Integration & APIs: Oversee integration of OpenAI, Azure OpenAI, and GCP Vertex AI models into enterprise systems, including MuleSoft Apigee connectors.
LLMOps & Governance: Establish LLMOps practices (CI/CD, monitoring, optimization, cost control) and enforce responsible AI guardrails (bias detection, prompt injection protection, hallucination reduction).
Enterprise Governance: Lead architecture reviews, governance boards, and technical design authority for all LLM initiatives.
Collaboration: Partner with data scientists, engineers, and business teams to translate use cases into scalable, secure solutions.
Documentation & Standards: Define and maintain best practices, playbooks, and technical documentation for enterprise adoption.
Monitoring & Observability: Guide implementation of AgentOps dashboards for usage, adoption, ingestion health, and platform performance visibility.
Secondary Responsibilities:
Innovation & Research: Stay ahead of advancements in OpenAI, Azure AI, and GCP Vertex AI, evaluating new features and approaches for enterprise adoption.
Proof of Concepts: Lead or sponsor PoCs to validate feasibility, ROI, and technical fit for new AI capabilities.
Ecosystem Expertise: Remain current on Azure AI services (Cognitive Search, AI Studio, Cognitive Services) and GCP AI stack (Vertex AI, BigQuery, Matching Engine).
Business Alignment: Collaborate with product and business leadership to prioritize high‑value AI initiatives with measurable outcomes.
Mentorship: Coach engineering teams on LLM solution design, performance tuning, and evaluation techniques.
Soft skills / other skills
Communication Skills: Communicate effectively with internal and customer stakeholders. Approach: verbal, emails and instant messages.
Interpersonal Skills: Strong interpersonal skills to build and maintain productive relationships with team members & customer representatives. Provide constructive feedback during code reviews and be open to receiving feedback on your own code.
Problem‑Solving and Analytical Thinking: Capability to troubleshoot and resolve issues efficiently. Analytical mindset. Ability to bring idea into reality through technology implementation & adoption.
Task/Work Updates: Prior experience in working on Agile/Scrum projects with exposure to tools like Jira/Azure DevOps. Provides regular updates, proactive and due diligent to carry out responsibilities.
Secondary Skills to be Planned Post Hiring
Knowledge of MCP's and A2A SDK
Version Control: Proficiency with version control tools like Git.
Agile Methodologies: Experience working in Agile development environments.
Seniority level Mid-Senior level
Employment type Contract
Job function Information Technology
Industries Staffing and Recruiting
Location & Salary San Jose, CA – $83,600 - $158,700 (2 weeks ago)
#J-18808-Ljbffr