IsoTalent
Sr. Data Engineer
Location/Onsite:
Remote - Utah
Our client seeks a skilled Sr. Data Engineer to join their growing technology team in the GovTech and healthcare space. Do you have deep experience building and optimizing large-scale data pipelines? Are you passionate about data quality, accuracy, and security? Do you thrive on solving complex integration challenges and collaborating across teams? If yes, this may be the perfect Sr. Data Engineer position for you. Keep scrolling to see what this company has to offer.
The Perks!
- Competitive salary: $120,000-$190,000, based on experience
- Comprehensive health, dental, and vision benefits
- 401(k)
- Generous PTO
- $100/month wellness benefit
- $50/month phone bill stipend
A Day in the Life of the Sr. Data Engineer
In this role, you'll design, build, and maintain the data backbone that powers analytics, reporting, and operational workflows for our client's platform. You'll own end-to-end pipelines, integrations, and data modeling while collaborating with product, engineering, and compliance teams to ensure data is accurate, accessible, and secure.
Responsibilities include:
- Architect, develop, and operate scalable ETL/ELT pipelines in Python using Apache Airflow, AWS Glue, or similar tools
- Ingest and transform data from internal services and third-party APIs (REST, streaming, webhooks)
- Design and maintain schemas in AWS Redshift, Snowflake, or similar warehouses
- Optimize table structures, partitioning, and indexing for performance and cost efficiency
- Build and maintain integrations with external systems (payment gateways, identity providers, data vendors)
- Implement monitoring, retries, and alerting for reliable data flow
- Enforce data governance, encryption, and access controls in compliance with HIPAA, FedRAMP, and SOC 2
- Provide self-service data access and documentation for stakeholders
- Tune performance and identify opportunities to reduce AWS costs
- Evaluate and prototype emerging data technologies such as Spark, Kafka, or dbt
Requirements and Qualifications:
- 7+ years in data engineering or analytics engineering roles
- Experience building and managing ETL/ELT pipelines in Python with tools like Airflow or AWS Glue
- Hands-on experience with cloud data warehouses (e.g., Redshift, Snowflake)
- Proven success integrating and transforming data from APIs, streaming platforms, and message queues
- Familiarity with HIPAA, FedRAMP, and SOC 2 compliance
- Strong communication skills and the ability to collaborate across teams
- Ability to work remotely in Utah
About the Hiring Company: Our client is a fast-growing technology company at the intersection of GovTech and healthcare. Their platform helps streamline workflows, enhance data-driven decision-making, and improve outcomes for organizations that serve the public. They are committed to building robust, secure, and high-performance data systems that drive meaningful impact.
Come Join Our Data Engineering Team! Start by filling out this 3-minute, mobile-friendly application here. We look forward to hearing from you!