J.A.P Tech Consulting
Location: Remote – Poland / US timezone overlap required
Employment Type: B2B
Hourly Rate Range (B2B): 150 – 200 PLN/h
About the Role
We're looking for a Senior Data Engineer to join an enterprise‑grade analytics and data platform team serving a large US‑based organization. You'll be responsible for building and maintaining robust data pipelines, ensuring data quality, and enabling advanced analytics at scale.
The ideal candidate is highly skilled in Python, SQL, and cloud data solutions (AWS), with experience across both batch and streaming pipelines.
Key Responsibilities
- Design and implement scalable ETL/ELT pipelines
- Work with data architects and business stakeholders to define data models and flows
- Manage data ingestion, transformation, and validation processes
- Optimize performance across large datasets and complex pipelines
- Integrate structured and unstructured data from diverse sources (APIs, databases, flat files)
- Maintain and evolve data lake and warehouse infrastructure
Required Skills
- 5+ years of experience as a Data Engineer
- Strong hands‑on experience with Python and SQL
- Expertise with AWS services: S3, Lambda, Glue, Redshift, EMR
- Familiarity with data orchestration tools (Airflow, Step Functions, or similar)
- Solid understanding of data warehousing concepts
- Experience working with large‑scale data pipelines
- Strong analytical and communication skills
Nice to Have
- Experience with Snowflake or Databricks
- Proficiency in data governance and data catalog tools
- Familiarity with CI/CD for data (e.g., dbt, Terraform for infra)
- Prior exposure to healthcare, fintech, or logistics domains

Timezone: Must overlap with US Eastern Time (min. 4 hours)
Why Work with JAP Tech Consulting?
- Remote‑first contracts with US‑based clients
- Transparent B2B cooperation
- Long‑term engagements with cutting‑edge tech
Read our Candidate Privacy Policy.