Northbound Executive Search
Overview
We are seeking a skilled Data Engineer to join our data team. The role involves managing and enhancing enterprise data warehouses, creating data pipelines to integrate third-party data, monitoring daily data loads, improving integration processes, and developing comprehensive data quality and monitoring tools.
This position requires a collaborative team player who can work across different business functions, quickly learn complex concepts, and communicate effectively with both technical and non-technical stakeholders. The ideal candidate is detail-oriented, self-motivated, curious, and able to perform under time-sensitive deadlines.
Responsibilities
Manage and enhance SQL Server–based enterprise data warehouses.
Design, develop, and maintain ETL/SSIS workflows and APIs for ingesting external data sources.
Translate complex business requirements into clear technical specifications.
Monitor daily and month-end data warehouse processes and resolve issues promptly.
Perform SQL Server tuning and warehouse performance optimization.
Reconcile data from various sources and consolidate it into data marts.
Develop and maintain SSAS Tabular models or cubes for reporting needs.
Document data logic, processes, key datasets, and data flows.
Contribute to modern data platform initiatives using tools such as Python, Spark, DuckDB, Delta Lake, and related technologies.
Implement data quality checks, validations, and anomaly detection frameworks.
Support the development of metadata catalogs, data lineage tracking, and data governance processes.
Assist with infrastructure support and participate in future-proofing and cloud integration efforts.
Qualifications and Skills
Bachelor’s degree in Computer Science or a related field.
3–5 years of experience with SQL, data warehousing, SSIS, SQL Server, cloud databases, and Python.
Strong understanding of data modeling methodologies.
Proven experience creating, debugging, and optimizing ETL processes.
Skilled in handling large datasets and query optimization.
Effective at working with business teams to capture and implement requirements.
Strong written and verbal communication skills.
Experience with analytics and visualization tools, SSAS, or equivalent platforms.
Familiarity with Parquet, Delta Lake, Apache Arrow, or similar formats.
Experience with Spark, Pandas, Polars, dbt, DuckDB, or Databricks is a plus.
Exposure to metadata/cataloging tools, version control (Git), and CI/CD.
Seniority level
Associate
Employment type
Full-time
Job function
Data Engineering
Industries
Investment Management