Huron
Huron is a global consultancy that collaborates with clients to drive strategic growth, ignite innovation, and navigate constant change. We help clients accelerate operational, digital, and cultural transformation, enabling the change they need to own their future.
We’re looking for a Data Engineer (Manager) to join our Data Science & Machine Learning team in the Commercial Digital practice. In this role you will own the full data lifecycle, from source integration through analytics‑ready delivery, while leading and developing a team of data engineers.
What You’ll Do
Lead and mentor junior data engineers—provide technical guidance, conduct code reviews, and support professional development. Foster a culture of continuous learning and high-quality engineering practices within the team.
Manage complex multi‑workstream data engineering projects—oversee project planning, resource allocation, and delivery timelines.
Design and architect end‑to‑end data solutions—from source extraction and ingestion through transformation, quality validation, and delivery.
Lead development of modern data transformation layers using dbt—implement modular SQL models, testing frameworks, documentation, and CI/CD practices.
Architect lakehouse solutions using open table formats (Delta Lake, Apache Iceberg) on Microsoft Fabric, Snowflake, and Databricks.
Establish DataOps best practices—define and implement CI/CD pipelines for data assets, data quality monitoring, observability, lineage tracking, and automated testing standards.
Serve as a trusted advisor to clients—build long‑standing partnerships, understand business problems, translate data requirements into technical solutions, and communicate architecture decisions to both technical and executive audiences.
Contribute to business development—participate in business development activities, develop reusable assets and methodologies, and help shape the technical direction of Huron’s data engineering capabilities.
Required Qualifications
5+ years of hands‑on experience building and deploying data pipelines in production.
Experience leading and developing technical teams—including coaching, mentorship, code review, and performance management.
Strong SQL and Python programming skills with deep experience in PySpark for distributed data processing.
Experience building data pipelines that serve AI/ML systems—including feature engineering workflows, vector embeddings for RAG, and data quality frameworks.
Experience with modern data transformation tools, especially dbt.
Experience with cloud data platforms and lakehouse architectures—Snowflake, Databricks, Microsoft Fabric, and familiarity with open table formats.
Proficiency with workflow orchestration tools such as Apache Airflow, Dagster, Prefect, or Azure Data Factory.
Solid foundation in data modeling concepts: dimensional modeling, data vault, normalization/denormalization.
Excellent communication and client management skills.
Bachelor’s degree in Computer Science, Engineering, Mathematics, or related technical field.
Willingness to travel approximately 30% to client sites as needed.
Preferred Qualifications
Experience in Financial Services, Manufacturing, or Energy & Utilities industries.
Background in building data infrastructure for ML/AI systems—including feature stores, training data pipelines, vector databases for RAG/LLM workloads, or model serving architectures.
Experience with real‑time and streaming data architectures using Kafka, Spark Streaming, Flink, or Azure Event Hubs.
Familiarity with MCP (Model Context Protocol), A2A (Agent‑to‑Agent), or similar standards for AI system data integration.
Experience with data quality and observability frameworks such as Great Expectations, Soda, Monte Carlo, or dbt tests at enterprise scale.
Knowledge of data governance, cataloging, and lineage tools (Unity Catalog, Purview, Alation, or similar).
Experience with high‑performance Python data tools such as Polars or DuckDB.
Cloud certifications (Snowflake SnowPro, Databricks Data Engineer, Azure Data Engineer, or AWS Data Analytics).
Consulting experience or demonstrated ability to work across multiple domains.
Contributions to open‑source data engineering projects or active participation in the dbt/data community.
Master’s degree or PhD in a technical field.
Why Huron
Variety that accelerates your growth.
In consulting, you’ll work across industries and data architectures that would take a decade to encounter at a single company. Our Commercial segment spans Financial Services, Manufacturing, Energy & Utilities, and more—each engagement is a new data ecosystem to master and a new platform to ship.
Impact you can measure.
Our clients are Fortune 500 companies making significant investments in data infrastructure. The pipelines you build will power real decisions—the ML models that drive production schedules, the dashboards that inform pricing strategies, the data products that enable self‑service analytics.
A team that builds.
Huron’s Data Science & Machine Learning team is a close‑knit group of practitioners. We write code, build pipelines, and deploy platforms.
Investment in your development.
We provide resources for continuous learning, conference attendance, and certification.
Position level: Manager
Seniority level: Mid‑Senior level
Employment type: Full‑time
Job function: Information Technology
Industries: Business Consulting and Services