Tech Providers
Data Engineer (Builder)
Experience Needed
5–7 years in data engineering or database development.
Hands‑on experience with SQL Server ETL/ELT pipelines.
Experience integrating pipelines with cloud services (AWS Glue, Azure Data Factory, GCP Dataflow).
Familiarity with streaming technologies (Kafka, Kinesis).
Experience in data modeling and architecture design.
Proficiency in Python, Scala, or Java programming for pipeline development.
Exposure to DevOps automation (Terraform, Ansible) and containerization (Docker, Kubernetes).
DevOps and automation maturity with certifications (HashiCorp Terraform Associate, AWS DevOps Engineer) and containerization (Docker, Kubernetes).
Preferred: Advanced programming depth with applied coursework or certifications (Python Institute PCPP, Scala Professional Certification).
Preferred: Data modeling specialization with advanced coursework or vendor‑specific training (Snowflake, AWS Big Data Specialty).
Education
Bachelor’s degree in Computer Science, Software Engineering, or related technical field.
Certifications (Preferred)
AWS Certified Data Engineer
Azure Data Engineer Associate
Google Professional Data Engineer
Aptitudes
Problem‑solving and optimization mindset.
Documentation skills: ability to produce pipeline diagrams, technical specifications, and compliance documentation.
Strong debugging and troubleshooting skills.
Collaborative aptitude for working with governance and SQL engineers.
Full command of the English language, both written and verbal, is mandatory.
Software Use
SQL Server (ETL/ELT pipelines, stored procedures).
Orchestration tools (Airflow, dbt).
Cloud integration services (AWS Glue, Azure Data Factory, GCP Dataflow).
Observability tools (OpenLineage, Monte Carlo).
DevOps automation tools (Terraform, Ansible).
Containerization platforms (Docker, Kubernetes).
Skill Evaluation
The Data Engineer role is the technical backbone of modernization, focusing on pipelines, integrations, and cloud readiness. Preferred certifications strengthen expertise in automation and containerization. This role ensures scalable, reliable, and secure data infrastructure, enabling analytics and governance.