The Hartford
Principal Security Engineer - GenAI and Emerging Tech - Remote
The Hartford, Chicago, Illinois, United States, 60290
Join The Hartford as a Principal Security Engineer to set the security direction and requirements for the company’s secure use of AI/GenAI capabilities and lead the charge in evaluating, recommending, and helping implement new and emerging security capabilities.
The role reports directly to the Chief Information Security Officer (CISO) and is an essential leadership position that partners closely with other technology leaders, offering the right person the opportunity to help shape our future security practices.
Responsibilities
Partnering with key stakeholders and technology partners to provide leadership, direction, and support for the company’s continued GenAI priorities, bringing a security perspective that balances risk with business imperatives and delivery timeframes
Designing and developing architectures, frameworks, and requirements for the secure consumption of AI/GenAI capabilities across various patterns and usages, including internally maintained models, as well as Software as a Service (SaaS) solutions
Performing threat modeling and risk assessments against GenAI use cases, recommending security requirements, and monitoring adherence to guidance
Working with development teams, data scientists, and security professionals to design and implement security measures that protect AI models against threats and vulnerabilities such as prompt injection, inference attacks, data poisoning, and model theft
Representing the organization in leadership discussions, risk governance councils, and various AI/GenAI working teams
Leading the cybersecurity team’s efforts to continuously monitor, assess and evaluate emerging security technologies, partnering with the enterprise Innovation team to proactively identify and recommend potential new capabilities
Qualifications
5+ years’ experience as a security professional with a focus on security architecture responsibilities related to cloud security, threat modeling, identity and access management, authentication, network security, software engineering, cryptography, penetration testing, mobile security, and/or infrastructure services
AI/ML Security Leadership: Proven expertise in securing Generative AI systems, with successful implementation of AI security frameworks.
Generative AI & LLMs: Hands‑on experience leading AI/ML initiatives using large language models (LLMs) and platforms such as GCP Vertex AI, AWS Bedrock, SageMaker, ChatGPT, etc.
Cross‑Platform AI Security: Deep knowledge of securing AI applications and platform products across major cloud providers (AWS, GCP, Microsoft Azure) and AI ecosystems, including Microsoft Copilot and other enterprise‑grade LLMs.
Cloud Security Engineering: Experience designing and deploying robust cloud security architectures for AI/ML workloads across AWS and Google Cloud.
Threat Modeling & Risk Mitigation: Subject matter expert in identifying and mitigating AI‑specific attack surfaces and threats.
End‑to‑End AI Security Strategy: Demonstrated ability to lead the development and execution of comprehensive AI/ML security strategies, integrating secure model development, deployment, and monitoring practices.
Certified Information Systems Security Professional (CISSP), Certified Information Security Manager (CISM), and/or cloud- and AI-specific certifications are highly desirable.
Candidate Requirements
Candidate must be authorized to work in the US without company sponsorship. The company will not support the STEM OPT I‑983 Training Plan endorsement for this position.
Compensation
The listed annualized base pay range is $149,360 – $224,040. Actual base pay could vary and may be above or below the listed range based on performance, proficiency, and demonstration of competencies required for the role.
Equal Opportunity Employer/Sex/Race/Color/Veterans/Disability/Sexual Orientation/Gender Identity or Expression/Religion/Age.