Data Engineer III
Alpine Physician Partners
Overview
Are you looking to work for a company that has been recognized as a Top Place to Work for over a decade? Apply today to join a company committed to putting its employees first.
Position Summary
The Data Engineer III is a senior-level data engineering role responsible for designing and building data pipelines that enable value-based healthcare, population health management, and enterprise analytics. The data engineer will design, develop, maintain, and support a cloud-based (Microsoft Azure) big data platform using modern data engineering design patterns and tools. This position will also bridge the gap between an on-premises, SQL-based data warehousing solution and the Azure Databricks Lakehouse architecture, learning and supporting the existing proprietary system while working with technical leaders to further develop plans to migrate to a cloud-based solution.
Essential Functions
Owns solution design blueprints and architecture for enterprise data platform features and functionality, including data ingestion, data integration, data pipelines, data models, data quality, and data governance.
Plays a technical leadership role, leading other team members and guiding them on solution design blueprints, data solution development, and best practices for our enterprise data platform.
Designs, builds, and maintains scalable, automated data pipelines to enable Reporting, Data Visualization, Advanced Analytics, Data Science, and Machine Learning solutions.
Supports critical data pipelines with a scalable distributed architecture, including data ingestion (streaming, events, and batch), data integration (ETL, ELT, Azure Data Factory), and distributed data processing using Databricks Data & Analytics and Azure Cloud Technology Stacks.
Builds cloud data solutions using multiple technologies, such as SQL, Python, Data Lake (Databricks Delta Lake), Cloud Data Warehouse (Azure Synapse), RDBMSs, and NoSQL databases.
Takes part in regular Scrum activities (daily standup meetings, weekly planning poker, post-sprint retrospectives, etc.).
Addresses issues in data flow logic and data model design, as well as the interoperability of new datasets with existing data models.
Participates in and conducts code-reviews for all changes to the codebase and conveys coding standards clearly and concisely.
Tests work on each assignment before working with Product Owners to ensure business requirements are fulfilled.
Coordinates requirement identification and recommends new data features in conjunction with the development manager, product owners, and department managers.
Acts as a subject matter expert by sharing information and providing support and training to others, as well as spearheading team projects and establishing goals and milestones for projects.
Ensures goals and commitments to the team are met.
Adheres to the company’s Compliance Program and to federal and state laws and regulations.
Other duties as assigned.
Knowledge, Skills and Abilities
Minimum 6 years of experience creating robust, enterprise-grade data engineering pipelines using SQL, Python, Apache Spark, ETL/ELT, the Databricks technology stack, Azure cloud services, and cloud-based data and analytics platforms required; 8-10 years preferred.
Minimum 3 years of experience in solution design blueprinting and leading technical team members toward the delivery of robust, enterprise-grade data platform solutions required.
Strong proficiency in SQL and data analysis required.
Experience in distributed data (structured, semi-structured, unstructured, streaming) processing techniques using Apache Spark, Hadoop, Hive, Kafka, and big data ecosystem technologies preferred.
Experience in data modeling and design for data warehouse, relational databases, and NoSQL data stores preferred.
Ability to mentor others
Excellent verbal and written communication skills
Great customer service skills
Great teamwork and leadership skills
Independent problem-solving skills
Self-motivated and self-managed
Proficient in Microsoft Office Suite
Qualifications
Bachelor’s degree in computer science, information systems, or equivalent work experience
7+ years working with the following concepts and technologies:
Relational and Dimensional Data Models, T-SQL, Microsoft SQL Server, etc.
Data integration strategies and supporting technologies
Electronic data exchange (FHIR, HL7, CSV, EDI, etc.)
Experience migrating on-premises data warehousing solutions to Azure solutions (preferred)
Experience with Visual Studio and .NET technologies such as C# (preferred)
Experience working with common health care datasets (preferred)
Experience working on an agile/scrum-driven software development team
A HIPAA-compliant home office for all remote or telecommuting positions, as outlined by company policies and procedures
Salary Range: $90,000-$135,000