Bridgehead IT Inc.
Position Summary
The Data Integration Engineer will own the design, build, and operation of scalable data pipelines and warehouse models that power analytics and operational reporting. You’ll integrate diverse sources (databases, APIs, SaaS apps, flat files) and engineer performant ELT/ETL in the cloud. You’ll collaborate closely with analytics, app dev, and business stakeholders to turn requirements into trusted, production-grade datasets.
Responsibilities
Data Pipeline Development
Design and implement scalable ETL/ELT pipelines (batch and near-real-time) to ingest from databases, APIs, SaaS, and flat files into Snowflake, Azure Synapse, or similar.
Build integrations using tools such as Azure Data Factory (ADF), Fivetran, CData Sync, and Boomi; extend with custom code where needed.
Write clean, maintainable code (primarily SQL, plus Python or PHP when required for custom connectors, transformations, or microservices).
Optimize workflows for performance, reliability, and scalability (partitioning, parallelism, incremental loads, CDC, idempotency, retry/rollback); an illustrative sketch follows this list.
Manage data warehouse platforms such as Azure Synapse and Snowflake.
Troubleshoot data pipeline failures and errors.
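For illustration only, a minimal sketch of the kind of idempotent incremental load this work involves, written in Snowflake-style SQL; the table names (stg.orders, dw.orders) and the last_modified watermark column are hypothetical placeholders, not part of any Bridgehead IT system.

    -- Hypothetical incremental load: pull only rows newer than the current
    -- watermark and MERGE them so reruns do not create duplicates (idempotent).
    MERGE INTO dw.orders AS tgt
    USING (
        SELECT order_id, customer_id, order_total, last_modified
        FROM stg.orders
        WHERE last_modified > (
            SELECT COALESCE(MAX(last_modified), '1900-01-01'::TIMESTAMP) FROM dw.orders
        )
    ) AS src
    ON tgt.order_id = src.order_id
    WHEN MATCHED THEN UPDATE SET
        tgt.customer_id   = src.customer_id,
        tgt.order_total   = src.order_total,
        tgt.last_modified = src.last_modified
    WHEN NOT MATCHED THEN INSERT (order_id, customer_id, order_total, last_modified)
        VALUES (src.order_id, src.customer_id, src.order_total, src.last_modified);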
Data Warehouse Management
Develop and maintain data models, schemas, views, and stored procedures; manage staging/core/mart layers and source-to-target mappings.
Implement data quality validation and monitoring (null/dup/range checks, schema drift detection, reconciliation); example checks are sketched after this list.
Respond to and troubleshoot identified errors.
Apply warehouse best practices (clustering/partitioning, cost governance, role-based access, tagging/lineage).
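As a hedged illustration of the validation bullet above, a few representative checks in plain SQL; the tables and columns (stg.orders, dw.orders, order_id, order_total) are assumed placeholders.

    -- Null check on a required business key
    SELECT COUNT(*) AS null_keys
    FROM dw.orders
    WHERE order_id IS NULL;

    -- Duplicate check on the business key
    SELECT order_id, COUNT(*) AS copies
    FROM dw.orders
    GROUP BY order_id
    HAVING COUNT(*) > 1;

    -- Range check on a numeric measure
    SELECT COUNT(*) AS out_of_range
    FROM dw.orders
    WHERE order_total < 0;

    -- Row-count reconciliation between staging and the warehouse layer
    SELECT
        (SELECT COUNT(*) FROM stg.orders) AS staged_rows,
        (SELECT COUNT(*) FROM dw.orders)  AS loaded_rows;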
SQL & Python Support
Write and refactor complex SQL queries (window functions, CTE chains) and tune query performance (explain plans, pruning, join strategies); a brief example follows this list.
Create and/or troubleshoot Python notebooks (packaging, scheduling, secret mgmt) and integration into pipelines.
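One small, hypothetical example of the CTE-and-window-function style mentioned above (table and column names are placeholders): keeping only the latest record per customer.

    -- Rank each customer's orders by recency, then keep the newest one.
    WITH ranked AS (
        SELECT
            customer_id,
            order_id,
            order_total,
            ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY last_modified DESC) AS rn
        FROM dw.orders
    )
    SELECT customer_id, order_id, order_total
    FROM ranked
    WHERE rn = 1;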
Collaboration & Documentation
Partner with data analysts and stakeholders to clarify requirements and acceptance criteria, and translate them into source-to-target mappings and technical designs.
Maintain technical specs, data flow diagrams, and operational procedures; contribute to standards and reusable patterns.
Qualifications
The ideal candidate will possess the following abilities, attributes, experience, and skills:
4+ years’ experience in Data Warehousing and Data Engineering.
Strong experience with Data Warehouse as a Service (DWaaS) platforms (Snowflake, BigQuery, etc.).
Strong SQL skills and ability to write queries and data extracts.
Experience working with a variety of database types.
Experience working with and troubleshooting different ETL tools such as Azure Data Factory, Boomi, Fivetran, and CData Sync.
Strong understanding of DWaaS database architecture and ability to design and build optimal data processing pipelines.
Demonstrated skill in designing highly scalable ETL processes with complex data transformations and data formats, including data cleansing, data quality assessment, error handling, and monitoring.
Ability to design, develop, manage, and monitor complex ETL data pipelines and support them through all environments.
Experience with Python/JavaScript or other scripting languages is a plus.
Provide support and troubleshooting for data platforms.
Manage and prioritize multiple assignments.
Ability to work independently and as part of a team.
Provide technical guidance and mentoring for other team members.
Good communication and cross-functional skills.
Bridgehead IT is proud to be an equal opportunity workplace and an affirmative action employer.