Madfish
At TechBiz Global, we provide recruitment services to the top clients in our portfolio. We are currently seeking a Data Engineer to join one of our clients' teams. If you're looking for an exciting opportunity to grow in an innovative environment, this could be the perfect fit for you.
Hybrid work: 2-3 days from the office
Location: Warsaw
Key Responsibilities
Design, develop, and maintain ETL/ELT pipelines using Apache Airflow and AWS Glue (see the orchestration sketch after this list).
Build robust and scalable data architectures leveraging AWS services such as S3, Lambda, CloudWatch, Kinesis, and Redshift.
Integrate real-time and batch data pipelines using Kafka and AWS streaming solutions.
Ensure data quality, reliability, and performance through effective monitoring, debugging, and optimization.
Collaborate with cross-functional teams to understand data requirements and deliver efficient solutions.
Manage version control and collaborative workflows using Git.
Implement infrastructure-as-code solutions with Terraform and Ansible to automate deployments.
Establish CI/CD pipelines to streamline testing, deployment, and versioning processes.
Document data models, workflows, and architecture to support transparency and scalability.
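For a flavor of the day-to-day work, here is a minimal sketch of the kind of Airflow orchestration described above, assuming Airflow 2.x with the Amazon provider package installed. The DAG name, Glue job name, and validation step are hypothetical placeholders, not an actual client pipeline:

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator
    from airflow.providers.amazon.aws.operators.glue import GlueJobOperator

    def validate_extract(**context):
        # Hypothetical lightweight data-quality check on the day's extract.
        print("validating extract for", context["ds"])

    with DAG(
        dag_id="raw_events_etl",            # hypothetical pipeline name
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        validate = PythonOperator(
            task_id="validate_extract",
            python_callable=validate_extract,
        )
        transform = GlueJobOperator(
            task_id="run_glue_transform",
            job_name="events_transform",    # hypothetical Glue job
            wait_for_completion=True,
        )
        validate >> transform               # Glue transform runs after validation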
Key Requirements
Core Languages: Proficient in SQL, Python, and PySpark.
Framework Knowledge: Experience with Apache Airflow, AWS Glue, Kafka, and Redshift (see the Kafka consumer sketch after this list).
Cloud & DevOps: Strong hands‑on experience with the AWS stack (Lambda, S3, CloudWatch, SNS/SQS, Kinesis).
Infrastructure Automation: Practical experience with Terraform and Ansible.
Version Control & CI/CD: Skilled in Git and familiar with continuous integration and delivery pipelines.
Debugging & Monitoring: Proven ability to maintain, monitor, and optimize ETL pipelines for performance and reliability.
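To illustrate the streaming side of the role, a minimal consumer sketch that micro-batches Kafka messages to S3, assuming kafka-python and boto3 are available. The topic, broker address, batch size, and bucket name are all hypothetical placeholders:

    import json
    from datetime import datetime, timezone

    import boto3
    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "events",                             # hypothetical topic
        bootstrap_servers="broker:9092",      # hypothetical broker
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
        auto_offset_reset="earliest",
    )
    s3 = boto3.client("s3")

    batch = []
    for message in consumer:
        batch.append(message.value)
        if len(batch) >= 500:                 # flush each micro-batch to S3
            key = "raw/" + datetime.now(timezone.utc).isoformat() + ".json"
            s3.put_object(
                Bucket="analytics-raw",       # hypothetical bucket
                Key=key,
                Body=json.dumps(batch).encode("utf-8"),
            )
            batch = []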
Qualifications
Bachelor’s or Master’s degree in Computer Science, Information Technology, Data Engineering, or a related field.
4+ years of hands‑on experience as a Data Engineer.
Strong understanding of data modeling, warehousing, and distributed systems.
Excellent English communication skills, both written and verbal.