Virtasant is a global technology services company with a network of over 4,000 technology professionals across 130+ countries. We specialize in cloud architecture, infrastructure, migration, and optimization, helping enterprises scale efficiently while maintaining cost control.
Our clients range from Fortune 500 companies to fast-growing startups, relying on us to build high-performance infrastructure, optimize cloud environments, and enable continuous delivery at scale.
About the Role
We’re looking for a Senior Data Engineer to join us and work with our client’s Data Platform team. Our client is a leading healthcare technology company, dedicated to transforming the patient and provider experience through innovative, data‑driven solutions.
You will architect and build core services, automation tools, and integrations that power our client’s data ecosystem. You’ll own high‑impact platform components, improve pipeline reliability and observability, and partner closely with data engineering, analytics, and DevOps to advance the scalability and developer experience of our client’s data platform.
Location
Based in Latin America. If you are based in the contiguous US states, you can apply here instead: https://virtasant.teamtailor.com/jobs/6884308-senior-data-engineer-data-platform-usa. If you are based neither in Latin America nor in the contiguous US states, we won’t be able to consider your application for this role.
Responsibilities
Build Automation & Tooling: develop scalable backend services, APIs, and internal tools to automate data platform workflows (e.g. data onboarding, validation, pipeline orchestration, schema tracking, quality monitoring).
Data Platform Integration: integrate tools with core data infrastructure, building pipelines (Airflow, Spark, dbt, Kafka, Snowflake, or similar) to expose capabilities via APIs and UIs.
Observability & Governance: build visualization and monitoring components for data lineage, job health, and quality metrics.
Collaboration: work cross‑functionally with data engineering, product, and DevOps teams to define requirements and deliver end‑to‑end solutions.
Qualifications
7+ years of experience in data engineering or software development with at least 5 years building production‑grade data or platform services.
Strong programming skills in Python and SQL, and experience with at least one major data platform (Snowflake, BigQuery, Redshift, or similar).
Experience developing tooling for schema evolution, data contracts, and developer self‑service.
Deep experience with streaming, distributed compute, or S3‑based table formats (Spark, Kafka, Iceberg/Delta/Hudi).
Experience with schema governance, metadata systems, and data quality frameworks.
Understanding of orchestration tools (Airflow, Dagster, Prefect, etc.).
Solid grasp of CI/CD and Docker.
At least 2 years of experience in AWS.
Experience building data pipelines using dbt.
Nice‑to‑Haves
Experience with data observability, data catalog, or metadata management tools.
Experience working with healthcare data (X12, FHIR).
Proven experience in data migration projects (moving from legacy technologies to modern platforms).
Experience building internal developer platforms or data portals.
Understanding of authentication/authorization (OAuth2, JWT, SSO).
Recruitment Process
Technical Interview (45 min)
Screening interview with client’s hiring manager (30 min)
Client technical interview (45 min)
We strive to move efficiently between steps to keep the recruitment process as fast as possible.
Benefits
Fully remote within Latin America, full‑time (40h/week)
Stable, long‑term independent contract agreement
Work hours: US Eastern Time office hours