Integrated Resources
Software Engineering Senior Advisor
Integrated Resources, Chicago, Illinois, United States, 60290
Job Title: Software Engineering Senior Advisor
Location: Remote
Duration: 7+ months to start (possible extension)
Job Description:
Provides counsel and advice to top management on significant engineering matters, often requiring coordination between organizations. Designs and develops a consolidated, conformed enterprise data warehouse and data lake that store all critical Customer, Provider, Claims, Client, and Benefits data. Manages highly complex processes that impact the greater organization. Designs, develops, and implements methods, processes, tools, and analyses to sift through large amounts of data stored in a data warehouse or data mart to find relationships and patterns. May lead or manage sizable projects. Participates in the delivery of the definitive enterprise information environment that enables strategic decision-making capabilities across the enterprise via analytics and reporting. Focuses on providing thought leadership and technical expertise across multiple disciplines. Recognized internally as "the go-to person" for the most complex Information Management assignments.
Required Skill Set:
• Bachelor's degree required
• 8+ years of diverse technology experience, with a minimum of 5 years of experience in software development, with a bachelor's in computer science or equivalent
• Experience with AWS capabilities: Glue, DynamoDB, Lambda, Redshift, Elasticsearch
• Experience with APIs and Terraform
• Software development experience with Python, PySpark, and Apache Spark
• Proficiency in SQL, relational and non-relational databases, query optimization, and data modeling
• Strong knowledge of data integration (e.g., streaming, batch, error and replay) and data analysis techniques
• Experience with GitHub, Jenkins, and Terraform
• Experience with Teradata (Vantage) or any RDBMS system / ETL tools
• Good experience designing and developing data pipelines for data ingestion and transformation using Spark
• Excellent at troubleshooting performance and data skew issues
• Working knowledge of implementing data lake ETL using AWS Glue, Databricks, etc.
• Experience with large-scale distributed relational and NoSQL database systems
• Excellent communication skills
• Provides recommendations and designs optimal configurations for large-scale deployments
Desired:
• Experience in healthcare
• Experience with data and analytics