Blue-Grace Logistics LLC
Data Engineer III - Hybrid - Riverview, FL
Blue-Grace Logistics LLC, Riverview, Florida, US, 33568
Job Summary
Role Overview
Design, build, and maintain data pipelines and infrastructure supporting enterprise analytics and reporting. Work across the full data lifecycle from source systems through data lakes to consumption layers, with increasing responsibility and complexity based on level.
Level Distinctions
Level 1 (0-2 years):
Supports existing pipelines, assists with data loads, performs data quality validation, and learns core technologies under guidance.
Level 2 (2-5 years):
Independently builds ETL/ELT workflows, optimizes existing pipelines, troubleshoots production issues, and contributes to architecture decisions.
Level 3 (5+ years):
Architects complex data solutions, leads major initiatives, mentors junior team members, and drives technical innovation.
Key Responsibilities
Develop and maintain incremental data pipelines using SSIS, PySpark notebooks, and Python
Implement and optimize ETL/ELT processes for Azure SQL databases and data lake environments
Build meta-driven, scalable data ingestion frameworks supporting SCD Type 2 methodology (see the first sketch after this list)
Write and optimize T‑SQL stored procedures, views, and dynamic SQL for data transformations
Develop Python‑based automation including Azure Functions, blob triggers, and API integrations (see the second sketch after this list)
Maintain data quality, perform root cause analysis, and resolve data pipeline issues
Support CI/CD processes including code reviews, Git version control, and Azure DevOps workflows
Monitor pipeline performance and implement optimization strategies
Assist with data modeling, indexing strategies, and schema evolution
Collaborate with business stakeholders and the Analytics team to meet reporting requirements
Participate in data platform modernization and improvement initiatives
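For candidates unfamiliar with the SCD Type 2 pattern referenced above, here is a minimal, illustrative sketch using a Delta Lake MERGE from PySpark. It is not this team's implementation: the paths, table, and column names (dim_customer, customer_id, address) are hypothetical placeholders that a meta-driven framework would instead supply from configuration.

```python
# Minimal SCD Type 2 sketch with PySpark + Delta Lake (delta-spark assumed).
# All paths and column names below are hypothetical placeholders.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

dim = DeltaTable.forPath(spark, "/lake/silver/dim_customer")   # existing dimension
updates = spark.read.parquet("/lake/bronze/customer_updates")  # incoming change feed

# A changed key needs two actions in one MERGE: expire the old version and
# insert a new one. Stage changed rows a second time under a NULL merge key
# so they fall through to the insert branch.
changed = (
    updates.alias("u")
    .join(dim.toDF().alias("d"), F.col("u.customer_id") == F.col("d.customer_id"))
    .where("d.is_current = true AND u.address <> d.address")
    .select(F.lit(None).alias("merge_key"), "u.*")
)
staged = updates.select(F.col("customer_id").alias("merge_key"), "*").unionByName(changed)

(
    dim.alias("d")
    .merge(staged.alias("s"), "d.customer_id = s.merge_key AND d.is_current = true")
    .whenMatchedUpdate(
        condition="d.address <> s.address",  # a tracked attribute changed
        set={"is_current": "false", "valid_to": "current_timestamp()"},
    )
    .whenNotMatchedInsert(
        values={
            "customer_id": "s.customer_id",
            "address": "s.address",
            "is_current": "true",
            "valid_from": "current_timestamp()",
            "valid_to": "null",
        }
    )
    .execute()
)
```

Unchanged rows match but fail the update condition (no-op), changed rows are expired and re-inserted as the new current version, and brand-new keys fall straight through to the insert branch.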
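Likewise, the blob-trigger automation above might look like the second sketch below. It assumes the Azure Functions Python v2 programming model; the container path and connection setting name are placeholders, and the downstream processing is left open.

```python
# Minimal blob-trigger sketch using the Azure Functions Python v2 model.
# The container path "raw-drops" and the "StorageConnection" app setting
# are hypothetical placeholders.
import logging

import azure.functions as func

app = func.FunctionApp()

@app.blob_trigger(
    arg_name="blob",
    path="raw-drops/{name}",          # fires when a blob lands in this container
    connection="StorageConnection",   # app setting holding the storage connection string
)
def ingest_blob(blob: func.InputStream) -> None:
    # Log basic metadata, then hand the payload to downstream processing.
    logging.info("Processing blob %s (%s bytes)", blob.name, blob.length)
    payload = blob.read()  # raw bytes; parse, validate, and stage as the pipeline requires
    # ... e.g. write to a staging table or queue a message for the next step
```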
Required Skills
Proficiency with SQL Server, T‑SQL, and database concepts (indexing, query optimization)
Experience with ETL/ELT tools and concepts (SSIS or similar preferred)
Understanding of data warehousing, dimensional modeling, and SCD methodologies
Familiarity with Azure cloud services (especially Azure SQL Database)
Ability to troubleshoot data issues and perform data analysis
Strong problem‑solving skills and attention to detail
Experience with version control (Git) and collaborative development
Preferred Skills (varies by level)
Python programming for data engineering and automation
PySpark and distributed data processing
Azure Functions, Logic Apps, or similar serverless technologies
Experience with VisualCron, Azure Data Factory, or workflow orchestration tools
Knowledge of Tableau, Power BI, or data visualization platforms
Redgate tools (SQL Prompt, Source Control, SQL Monitor)
Understanding of CI/CD practices and Azure DevOps
Modern data platform experience (lakehouse architectures, Delta Lake, etc.)
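As a small taste of the lakehouse item above: Delta Lake versions every write in a transaction log, which enables time-travel reads for audits and replays. The sketch below assumes a Spark session configured with the delta-spark package; the table path and sample data are invented for illustration.

```python
# Small Delta Lake sketch: append to a bronze table, then time-travel a read.
# The path and sample rows are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

orders = spark.createDataFrame(
    [(1, "2024-01-05", 125.00), (2, "2024-01-06", 89.50)],
    ["order_id", "order_date", "amount"],
)

# Each append becomes a new table version in the Delta transaction log.
orders.write.format("delta").mode("append").save("/lake/bronze/orders")

# Time travel: read the table as of an earlier version.
v0 = spark.read.format("delta").option("versionAsOf", 0).load("/lake/bronze/orders")
v0.show()
```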