The Hartford
Sr Staff Data Engineer – GE07DE
We’re determined to make a difference and are proud to be an insurance company that goes well beyond coverages and policies. Working here means having every opportunity to achieve your goals – and to help others accomplish theirs, too. Join our team as we help shape the future.
Join our team as a Sr GenAI Data Engineer and lead the charge in developing cutting‑edge AI solutions and data engineering strategies. Embrace our core values of innovation, collaboration, and excellence as you unlock unparalleled growth opportunities in the dynamic field of AI and data engineering. Shape the future of technology with us! Apply now to be part of our innovative journey and make a significant impact!
Key Responsibilities
Primary job responsibilities include:
Develop AI‑driven systems to improve data capabilities, ensuring compliance with industry best practices. Implement efficient Retrieval‑Augmented Generation (RAG) architectures and integrate with enterprise data infrastructure.
Collaborate with cross‑functional teams to integrate solutions into operational processes and systems supporting various functions.
Stay up to date with industry advancements in AI and apply modern technologies and methodologies to our systems.
Design, build, and maintain scalable and robust real‑time data streaming pipelines using technologies such as GCP, Vertex AI, S3, AWS Bedrock, Spark Streaming, or similar.
Develop data domains and data products for various consumption archetypes, including Reporting, Data Science, AI/ML, and Analytics.
Ensure the reliability, availability, and scalability of data pipelines and systems through effective monitoring, alerting, and incident management.
Implement best practices in reliability engineering, including redundancy, fault tolerance, and disaster recovery strategies.
Collaborate closely with DevOps and infrastructure teams to ensure seamless deployment, operation, and maintenance of data systems.
Mentor junior team members and engage in communities of practice to deliver high‑quality data and AI solutions while promoting best practices, standards, and adoption of reusable patterns.
Apply AI solutions to insurance‑specific data use cases and challenges.
Partner with architects and stakeholders to influence and implement the vision of the AI and data pipelines while safeguarding the integrity and scalability of the environment.
Required Skills & Experience
8+ years of strong hands‑on programming experience in Python.
7+ years of data engineering experience, including data solutions, SQL and NoSQL, Snowflake, ETL/ELT tools, CI/CD, Big Data, cloud technologies (AWS/Google/Azure), and Python/Spark.
3+ years focused on supporting Generative AI technologies.
2+ years of implementing production‑ready enterprise‑grade GenAI data solutions.
3+ years of experience implementing Retrieval‑Augmented Generation (RAG) pipelines.
3+ years of experience with vector databases and graph databases, including implementation and optimization.
3+ years of experience processing and leveraging unstructured data for GenAI applications.
3+ years of experience implementing scalable AI‑driven data systems supporting agentic solutions (AWS Lambda, S3, EC2, LangChain, LangGraph).
3+ years of experience building AI pipelines that bring together structured, semi‑structured, and unstructured data, including pre‑processing with extraction, chunking, embedding, and grounding strategies; semantic modeling; and preparing the data for models and agentic solutions.
Nice to Have
Experience with prompt engineering techniques for large language models.
Experience implementing data governance practices on a large‑scale data platform, including data quality, lineage, and data catalogue capture, applied holistically, strategically, and dynamically.
Experience with multi‑cloud and hybrid AI solutions.
AI certifications, experience in P&C or Employee Benefits industry, knowledge of natural language processing (NLP) and computer vision technologies.
Contributions to open‑source AI projects or research publications in the field of Generative AI.
Candidate must be authorized to work in the US without company sponsorship.
The company will not support the STEM OPT I-983 Training Plan endorsement for this position.
Compensation
The listed annualized base pay range is $135,040 – $202,560. Actual base pay could vary and may be above or below the listed range based on factors including but not limited to performance, proficiency, and demonstration of competencies required for the role. The base pay is just one component of The Hartford’s total compensation package for employees. Other rewards may include short‑term or annual bonuses, long‑term incentives, and on‑the‑spot recognition.
Equal Opportunity Employer/Sex/Race/Color/Veterans/Disability/Sexual Orientation/Gender Identity or Expression/Religion/Age