COVU

Senior Data Engineer

COVU, Los Angeles, California, United States, 90079



COVU is a venture‑backed technology startup transforming the insurance industry. We empower independent agencies with AI‑driven insights and digitized operations, enabling them to manage risk more effectively. Our team is building an AI‑first company set to redefine the future of insurance distribution.

Role Overview

We are seeking an experienced and product‑focused Senior Data Engineer to be a core member of our Platform product team. This is a high‑impact role where you will play a pivotal part in evolving our core data infrastructure. Your primary mission will be to develop key components of our "Policy Journal" — the foundational data asset that will serve as the single source of truth for all policy, commission, and client accounting information. You will work closely with the Lead Data Engineer and business stakeholders to translate requirements into robust data models and scalable pipelines that drive analytics and operational efficiency for our agents, managers, and leadership. This role requires a blend of greenfield development, strategic refactoring of existing systems, and a deep understanding of how to create trusted, high‑quality data products.

What You’ll Do

Develop the Policy Journal: Build and maintain our master data solution that unifies policy, commission, and accounting data from sources like IVANS and Applied EPIC, implementing data models and pipelines for the "gold record" powering our platform.

Ensure Data Quality and Reliability: Implement robust data quality checks, monitoring, and alerting to ensure accuracy and timeliness of all data pipelines and champion best practices in data governance and engineering.

Build the Foundational Analytics Platform: Implement and enhance our analytics framework using modern tooling (Snowflake, dbt, Airflow), building and optimizing critical pipelines to transform raw data into clean, reliable, and performant dimensional models for business intelligence.

Modernize Core ETL Processes: Refactor our existing Java & SQL (PostgreSQL) ETL system, resolve core issues (data duplication, performance bottlenecks), rewrite critical components in Python, and migrate orchestration to Airflow.

Implement Data Quality Frameworks: Build and execute automated data validation frameworks within our QA strategy, writing tests that ensure accuracy, completeness, and integrity of pipelines and the Policy Journal.

Collaborate and Contribute to Design: Partner with product managers, the Lead Data Engineer, and business stakeholders to understand complex requirements and translate them into well‑designed, maintainable solutions.

What We’re Looking For

5+ years of experience in data engineering, building and maintaining scalable production pipelines.

Expert‑level proficiency in Python and SQL.

Strong experience with modern data stack technologies (cloud data warehouse, orchestrator, transformation tools).

Hands‑on experience with AWS data services (S3, Glue, Lambda, RDS).

Experience in the insurtech industry and familiarity with insurance data concepts.

Demonstrated ability to design and implement robust data models for analytics and reporting.

A pragmatic problem‑solver, able to analyze and refactor complex legacy systems; the ability to read Java/Hibernate logic is a plus.

Excellent communication skills and ability to collaborate with technical and non‑technical stakeholders.

Bonus Points For

Direct experience working with data from Agency Management Systems such as Applied EPIC, NowCerts, or EZLynx.

Direct experience with carrier data (ACORD XML, IVANS AL3).

Experience with BI tools such as Tableau, Looker, or Power BI.

Prior experience in a startup or other fast‑paced, agile environment.

Application Process

Intro call with People team.

Technical interviews.

Final interview with leadership.
