SustainableHR PEO & Recruiting

Data Engineer

SustainableHR PEO & Recruiting, Madison, Wisconsin 53774, US


Base pay range: $90,000.00/yr to $110,000.00/yr

This range is provided by SustainableHR PEO & Recruiting. Your actual pay will be based on your skills and experience; talk with your recruiter to learn more.

We’re partnering with a Madison-based organization that’s investing heavily in a modern data platform to deliver reliable, scalable, and well-governed data across analytics, applications, and integrations.

This Data Engineer will play a key role in designing and building production-grade data pipelines, models, and services that support Power BI reporting, APIs, and downstream systems. You’ll collaborate closely with Infrastructure, QA, Database Administration, and application teams to deliver automated, observable, and secure data workflows.

This is a hybrid role. Candidates must be located in the Madison, WI area and able to work on-site as needed.

What You’ll Do

Data Architecture & Pipelines

Design and evolve canonical data models, data marts, and lake/warehouse structures

Establish standards for schema design, naming conventions, partitioning, and change data capture (CDC)

Build resilient batch and streaming pipelines using Microsoft Fabric Data Factory, Spark notebooks, and Lakehouse tables

Design and optimize Delta/Parquet tables in OneLake and Direct Lake models for Power BI

Create reusable ingestion and transformation frameworks focused on performance and reliability

Integrations & APIs

Develop secure data services and APIs supporting applications, reporting, and partner integrations

Define and publish data contracts (OpenAPI/Swagger) with versioning and deprecation standards

Partner with DBA and Infrastructure teams to enforce least-privilege access

Infrastructure as Code & DevOps

Author and maintain IaC modules using Bicep/ARM (and where appropriate, Terraform or Ansible)

Own CI/CD pipelines for data, configuration, and infrastructure changes

Collaborate with QA on unit, integration, and regression testing across data workflows

Observability, Reliability & Governance

Implement logging, lineage, metrics, and alerting for pipelines and datasets

Define SLAs for data freshness and quality

Tune Spark performance and manage cloud costs

Apply data quality rules, role-based access control (RBAC), sensitivity labeling, and audit standards

Work cross-functionally with Infrastructure, QA, DBA, and application teams

Contribute to documentation, knowledge sharing, and modern data engineering best practices

What We’re Looking For

Required Experience

3+ years building and operating production ETL/ELT pipelines

Apache Spark experience (Microsoft Fabric, Synapse, or Databricks)

Strong T-SQL and Python skills

Experience with streaming platforms such as Azure Event Hubs or Kafka

Experience implementing Change Data Capture (CDC)

Infrastructure as Code and CI/CD (Azure DevOps)

API design for data services (REST/OpenAPI, versioning, authentication)

Preferred Experience

Microsoft Fabric Lakehouse architecture and Power BI Direct Lake optimization

Exposure to Kusto Query Language (KQL), Eventstream, or Eventhouse

Experience with data lineage, metadata, or cost governance tools
