RIT Solutions, Inc.
Senior Data Engineer - Remote
Seeking an experienced Data Engineer with strong skills in Azure Data Factory (ADF) and Databricks. The role involves refactoring applications from AWS to Azure and transitioning existing PHP processes to ETL pipelines built with ADF. Python and PySpark are great to have; MySQL and healthcare EDI protocols are nice to have.

The purpose of this position is to perform data development functions, including: designing new or enhancing existing enterprise database systems; maintaining and/or developing critical data processes; unit and system testing; and support and help-desk tasks. The role also requires defining and adopting best practices for each data development function, as well as for visualization and ETL processes. This position is also responsible for architecting ETL functions between a multitude of relational databases and external data files.

Essential Duties and Responsibilities
• Work with a highly dynamic team focused on digital transformation.
• Understand the domain and business processes to implement successful data pipelines.
• Provide work status and coordinate with other Data Engineers.
• Manage customer deliverables and regularly report status via weekly/monthly reviews.
• Design, develop, and maintain ETL processes as well as stored procedures, functions, and views.
• Program in T-SQL against relational databases, including currently supported versions of Microsoft SQL Server.
• Write high-performance SQL queries using JOINs, CROSS APPLY, aggregate queries, MERGE, and PIVOT.
• Design normalized database tables with proper indexing and constraints.
• Perform SQL query tuning and performance optimization on complex and inefficient queries.
• Provide guidance on when to use table variables, temporary tables, or CTEs appropriately when dealing with large datasets.
• Collaborate with DBAs on database design and performance enhancements.
• Lead all phases of the software development life cycle in a team environment.
• Debug existing code and troubleshoot issues.
• Design and provide a framework for maintaining the existing data warehouse for reporting and data analytics.
• Follow best practices to design, develop, test, and document ETL processes.