Prosum
Sr. Data Engineer – Hybrid (Midtown Phoenix)
We are seeking a Sr. Data Engineer to join our team in Midtown Phoenix (hybrid schedule: 3 days per week onsite, 2 days remote). In this role, you will design, build, and optimize enterprise data platforms and analytics solutions leveraging Databricks, integrate with an on-premises MS SQL Server Data Warehouse, and enable insights through Power BI. You will drive the development of high-performance data pipelines, ensure data quality, and define architecture, integration, and modeling strategies to support advanced analytics and business intelligence across the enterprise.

Base pay range
$120,000.00/yr - $140,000.00/yr

Responsibilities
- Design and implement scalable ETL/ELT pipelines using Databricks (PySpark, Delta Lake) on AWS.
- Utilize AWS services (S3, Lambda, EventBridge, IAM) for data ingestion and orchestration.
- Build and manage ingestion architecture (bronze/silver layers) in Delta Lake.
- Develop connectors and strategies for data movement between on-prem MS SQL Server and cloud environments.
- Optimize ingestion from legacy systems to the data lake with attention to security and latency.
- Implement data models supporting both operational and analytical use cases.
- Ensure data quality, lineage, and governance using Unity Catalog or equivalent tools.
- Create efficient data views and aggregate tables to improve reporting performance.
- Optimize Spark jobs, SQL queries, and Power BI DAX queries for speed and scalability.
- Design and maintain ETL processes using SSIS to load data into data warehouses/marts; implement logging, error handling, incremental loading, and automation with SQL Server Agent.
- Guide onshore/offshore teams, perform code reviews, and collaborate with analysts, DBAs, and cross-functional stakeholders.
- Tune T-SQL queries, SSIS packages, SSAS processing, and SSRS report execution.
- Implement automated testing for ETL workflows; champion high-quality software practices including TDD.

Qualifications
- Bachelor’s degree in Computer Science, Information Systems, Data Engineering, or a related field.
- 6+ years of data engineering experience with cloud and hybrid data platforms.
- Expertise in Databricks, Delta Lake, PySpark, Python, and Java.
- Strong experience with AWS (S3, Glue, IAM, Lambda, Step Functions).
- Proficiency with MS SQL Server (performance tuning, stored procedures, SSIS/ETL migration).
- Knowledge of data modeling, partitioning, and optimization strategies.
- Experience with CI/CD pipelines (Bitbucket, Azure DevOps, Jenkins, etc.).
- Technical expertise with Microsoft tools: MSSQL, SSIS, SSAS, SSRS, batch scripting.
- Familiarity with Unity Catalog, Databricks SQL, and RBAC.
- Understanding of enterprise data warehouse concepts (SCD, CDC).
- Exposure to DevOps/DataOps practices and IaC (Terraform, CloudFormation).
- AWS Certified Data Analytics certification.
- Power BI Data Analyst Associate certification.

Seniority level
Mid-Senior level

Employment type
Full-time

Job function
Data Engineering
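To give candidates a concrete flavor of the warehousing concepts called out above (SCD, CDC, incremental loading), here is a minimal, framework-agnostic sketch of Slowly Changing Dimension Type 2 versioning in plain Python. This is illustrative only, not the team's actual implementation: in practice this logic would live in a Delta Lake MERGE or an SSIS lookup pattern, and the names `DimCustomer` and `apply_scd2` are invented for the example.

```python
from dataclasses import dataclass, replace
from datetime import date
from typing import Optional

# Hypothetical dimension row; a real implementation would target a Delta
# table or a SQL Server dimension table, not in-memory Python objects.
@dataclass(frozen=True)
class DimCustomer:
    customer_id: int          # business key
    city: str                 # tracked attribute
    valid_from: date
    valid_to: Optional[date]  # None means this is the open-ended version
    is_current: bool

def apply_scd2(current_rows, incoming, today):
    """Apply SCD Type 2: expire changed versions and append new ones.

    current_rows: existing dimension versions.
    incoming: dict mapping business key -> latest attribute value.
    Returns the updated list of versions.
    """
    out = []
    seen = set()
    for row in current_rows:
        if (row.is_current and row.customer_id in incoming
                and incoming[row.customer_id] != row.city):
            # Attribute changed: close out the old version...
            out.append(replace(row, valid_to=today, is_current=False))
            # ...and open a new current version.
            out.append(DimCustomer(row.customer_id, incoming[row.customer_id],
                                   today, None, True))
        else:
            out.append(row)
        seen.add(row.customer_id)
    # Business keys never seen before get a first version.
    for key, city in incoming.items():
        if key not in seen:
            out.append(DimCustomer(key, city, today, None, True))
    return out
```

The same expire-and-insert shape carries over directly to a Delta Lake `MERGE INTO` with a `WHEN MATCHED` update plus an insert of the new version row.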