The Hartford

Sr Staff Data Engineer - Hybrid

The Hartford, Chicago, Illinois, United States, 60290


We’re determined to make a difference and are proud to be an insurance company that goes well beyond coverages and policies. Working here means having every opportunity to achieve your goals – and to help others accomplish theirs, too. Join our team as we help shape the future.

The Sr Staff AI Data Engineer is responsible for implementing AI data pipelines that bring together structured, semi-structured, and unstructured data to support AI and agentic solutions. This includes pre-processing with extraction, chunking, embedding, and grounding strategies to prepare the data for use.

This role has a hybrid work schedule, with the expectation of working in an office location (Hartford, CT; Chicago, IL; Columbus, OH; or Charlotte, NC) three days a week (Tuesday through Thursday).

Responsibilities

Serve as the AI data engineering lead, implementing AI data pipelines that bring together structured, semi-structured, and unstructured data to support AI and agentic solutions.

Develop AI-driven systems to improve data capabilities, ensuring compliance with industry best practices.

Implement efficient Retrieval-Augmented Generation (RAG) architectures and integrate with enterprise data infrastructure.

Collaborate with cross-functional teams to integrate solutions into operational processes and systems supporting various functions.

Stay up to date with industry advancements in AI and apply modern technologies and methodologies to our systems.

Design, build, and maintain scalable and robust real-time data streaming pipelines using technologies such as Apache Kafka, AWS Kinesis, Spark Streaming, or similar.

Develop data domains and data products for various consumption archetypes including Reporting, Data Science, AI/ML, Analytics, etc.

Ensure the reliability, availability, and scalability of data pipelines and systems through effective monitoring, alerting, and incident management.

Implement best practices in reliability engineering, including redundancy, fault tolerance, and disaster recovery strategies.

Collaborate closely with DevOps and infrastructure teams to ensure seamless deployment, operation, and maintenance of data systems.

Mentor junior team members and engage in communities of practice to deliver high-quality data and AI solutions while promoting best practices, standards, and adoption of reusable patterns.

Develop graph database solutions for complex data relationships supporting AI systems.

Apply AI solutions to insurance-specific data use cases and challenges.

Partner with architects and stakeholders to influence and implement the vision of the AI and data pipelines while safeguarding the integrity and scalability of the environment.

Qualifications

Bachelor’s or Master’s degree in Computer Science, Artificial Intelligence, or a related field.

8+ years of strong hands‑on data engineering experience including data solutions, SQL and NoSQL, Snowflake, ETL/ELT tools, CI/CD, big data, and cloud technologies (AWS/Google/Azure).

Strong programming skills in Python and familiarity with deep learning frameworks such as PyTorch or TensorFlow.

Experience implementing data governance practices, including data quality, lineage, data catalog capture, and other large-scale platform practices.

Experience with cloud platforms (AWS, GCP, or Azure) and containerization technologies (Docker, Kubernetes).

Strong written and verbal communication skills and the ability to explain technical concepts to various stakeholders.

Preferred Qualifications

Experience in multi‑cloud hybrid AI solutions.

AI certifications.

Experience in the Employee Benefits industry.

Knowledge of natural language processing (NLP) and computer vision technologies.

Contributions to open‑source AI projects or research publications in the field of Generative AI.

Experience with building AI pipelines that bring together structured, semi‑structured and unstructured data, including pre‑processing, chunking, embedding, grounding, semantic modeling, and data preparation for models and Agentic solutions.

Experience with vector databases, graph databases, NoSQL, and document databases, including design, implementation, and optimization (e.g., AWS OpenSearch, GCP Vertex AI, Neo4j, Spanner Graph, Neptune, MongoDB, DynamoDB).

3+ years of AI/ML experience with 1+ year of data engineering experience focused on supporting Generative AI technologies.

Hands‑on experience implementing production‑ready enterprise‑grade AI data solutions.

Experience with prompt engineering techniques for large language models.

Experience implementing Retrieval‑Augmented Generation (RAG) pipelines and integrating retrieval mechanisms with language models.

Proficiency in implementing scalable AI-driven data systems supporting agentic solutions (e.g., AWS Lambda, S3, EC2, LangChain, LangGraph).

Compensation

The listed annualized base pay range is $135,040 – $202,560. Actual base pay could vary and may be above or below the listed range based on factors including but not limited to performance, proficiency, and demonstration of competencies required for the role. The base pay is just one component of The Hartford’s total compensation package for employees. Other rewards may include short-term or annual bonuses, long-term incentives, and on-the-spot recognition.

Equal Opportunity Employer

Equal Opportunity Employer/Sex/Race/Color/Veterans/Disability/Sexual Orientation/Gender Identity or Expression/Religion/Age

Seniority level: Mid-Senior level

Employment type: Full-time

Job function: Information Technology
