IBM
Introduction
At IBM, work is more than a job - it's a calling: To build. To design. To code. To consult. To think along with clients and sell. To make markets. To invent. To collaborate. Not just to do something better, but to attempt things you've never thought possible. Are you ready to lead in this new era of technology and solve some of the world's most challenging problems? If so, let’s talk.
Your Role and Responsibilities
We are looking for a motivated person experienced with building data warehouses and analytics systems in the cloud (AWS, Azure, GCP, Snowflake). This role focuses on the design and development of Snowflake Data Cloud solutions, including data ingestion pipelines, data architecture, data governance, and security. You will develop database architectures and data warehouses and ensure optimal data delivery across ongoing customer projects, while leading technical teams and supporting customers’ next‑generation data initiatives.
What We Are Looking For
We are looking for a Sr. Consultant, Data Engineer to join our growing team of experts. The ideal candidate is an experienced data pipeline builder and migration specialist who enjoys optimizing data systems from the ground up. You will work on the design and development of Snowflake Data Cloud solutions. The work includes data ingestion pipelines, data architecture, data governance, and security.
Required Technical and Professional Expertise
Bachelor’s degree in engineering, computer science, or an equivalent field.
Expertise in evaluating, selecting, and integrating ingestion technologies to solve complex data challenges.
Leadership in architectural decisions for high‑throughput data ingestion frameworks, including real‑time data processing and analytics.
Mentorship of junior engineers in best practices for data ingestion, performance tuning, and troubleshooting.
5+ years in related technical roles – data management, database development, ETL, data warehouses and pipelines.
Experience designing and developing data warehouses (Teradata, Oracle Exadata, Netezza, SQL Server, Spark).
Experience building ETL / ELT ingestion pipelines with tools such as DataStage, Informatica, Matillion.
SQL scripting.
Cloud experience on AWS (Azure or GCP experience also valuable).
Python and Scala scripting required.
Ability to prepare reports and present to internal and customer stakeholders.
Sound problem‑solving skills and an action‑oriented mindset.
Strong interpersonal skills including assertiveness and ability to build strong client relationships.
Ability to work in Agile teams.
Experience hiring, developing, and managing a technical team.
Location
This role can be performed from anywhere in the US.
Background
As of April 2025, Hakkoda has been acquired by IBM and will be integrated into the IBM organization. Your recruitment process will be managed by IBM, which will be the hiring entity.