Intellibee
Principal Security Architect – GenAI Security & Readiness, Charlotte, NC, US
Intellibee, Charlotte, North Carolina, United States, 28245
Principal Security Architect – GenAI Security & Readiness (Data Protection Team)

Top Skills
- Hands-on experience with GenAI frameworks (LangChain, LangGraph, Langfuse), LLMs (GPT-4o, Claude, Llama 3), and vector databases.
- Understanding of cybersecurity frameworks.
- Fluent in Agile; project and stakeholder management.
- Demonstrated ability to build strong relationships; critical thinking; detail-oriented and organized.
- Self-starter able to take ownership of work and drive it to completion.
- Experience working with senior-level leaders; executive presence.
- Excellent written and verbal communication skills; collaborative team player able to challenge respectfully.
- CISSP, CISM, or CCSP preferred but not required.

Position Summary
We are seeking a visionary and technically adept Principal Security Architect to lead GenAI security and readiness initiatives within the Data Protection team. This role will be instrumental in shaping enterprise-wide strategies for securing generative AI platforms, ensuring compliance, and enhancing data protection capabilities across cloud and hybrid environments. The ideal candidate will bring deep expertise in AI/ML systems, cloud-native architectures, and cybersecurity frameworks, with a strong focus on operational readiness and risk mitigation.

Key Responsibilities

GenAI Security Strategy & Architecture
- Define and implement security architecture for GenAI platforms, including LLMs, RAG pipelines, agentic workflows, and inference services.
- Lead threat modeling and risk assessments for GenAI components such as LangChain, LangGraph, Langfuse, and vector databases (e.g., OpenSearch, Milvus).
- Establish governance and guardrails for prompt engineering, model evaluation (RAGAs, G-Eval), and responsible AI practices.

Data Protection & Compliance
- Collaborate with the Data Protection team to integrate GenAI security controls into existing DLP, encryption, and classification frameworks.
- Ensure alignment with regulatory standards (e.g., NIST CSF, ISO 27001, CIS) and support audit readiness for GenAI deployments.
- Evaluate and enforce IAM, data residency, and privacy controls across AWS, Azure, and hybrid environments.

Operational Readiness & Monitoring
- Build observability and monitoring frameworks for GenAI systems using tools such as Prometheus, Grafana, ELK, and Langfuse.
- Develop automated pipelines for model deployment, evaluation, and rollback using Kubernetes, Helm, and CI/CD tools (e.g., Argo CD, CircleCI).
- Lead incident response planning and tabletop exercises for GenAI-related security events.

Cross-Functional Enablement
- Partner with engineering, product, and business units to embed security into GenAI use cases such as chatbots, knowledge assistants, and document summarization.
- Deliver training and awareness programs on GenAI security risks and best practices.
- Represent the Data Protection team in enterprise AI steering committees and working groups.

Minimum Qualifications
- Bachelor's degree in Computer Science, Cybersecurity, or a related field.
- 10+ years of experience in cybersecurity, with 1+ years in AI/ML or GenAI security.
- Hands-on experience with GenAI frameworks (LangChain, LangGraph, Langfuse), LLMs (GPT-4o, Claude, Llama 3), and vector databases.
- Proficiency in Python, Kubernetes, AWS (Bedrock, SageMaker), Azure Databricks, and MLOps tools (MLflow, Argo Workflows).
- Strong understanding of data protection principles, encryption, and cloud security.

Preferred Qualifications
- Master's degree in Cybersecurity, AI/ML, or a related discipline.
- Certifications: CISSP, CCSP, AWS Certified Machine Learning – Specialty, or equivalent.
- Experience in highly regulated industries (e.g., financial services, insurance).
- Familiarity with data governance tools (e.g., Collibra, Alation) and GenAI evaluation frameworks (RAGAs, Guardrails).
- Exposure to agentic AI design patterns and multi-agent orchestration.

Success Measures
- Deployment of secure and compliant GenAI platforms within 6–12 months.
- Reduction in GenAI-related data exposure and model misuse incidents.
- Positive feedback from stakeholders on security enablement and collaboration.
- Demonstrated leadership in GenAI security architecture and readiness planning.