HonorVet Technologies
Sr Data Analyst Snowflake ADF
HonorVet Technologies, Little Rock, Arkansas, United States, 72208
Job Title: Sr Data Analyst Snowflake ADF
Contract Type: 6-month contract (to hire)
Responsibilities:
• Lead the development and management of ETL processes for Program Integrity data marts in Snowflake and SQL Server.
• Design, build, and maintain data pipelines using Azure Data Factory (ADF) to support ETL operations and Peer Group Profiling procedures and outputs.
• Implement and manage DevOps practices to ensure efficient data loading and workload balancing within the Program Integrity data mart.
• Collaborate with cross-functional teams to ensure seamless data integration and consistent data flow across platforms.
• Monitor, optimize, and troubleshoot data pipelines and workflows to maintain performance and reliability.
• Independently identify and resolve complex technical issues to maintain operational efficiency.
• Communicate effectively and foster collaboration across teams to support project execution and alignment.
• Apply advanced analytical skills and attention to detail to uphold data accuracy and quality in all deliverables.
• Lead cloud-based project delivery, ensuring adherence to timelines, scope, and performance benchmarks.
• Ensure data security and compliance with relevant industry standards and regulatory requirements.
Requirements:
• Over 5 years of hands-on expertise in cloud-based data engineering and data warehousing, with a strong emphasis on building scalable and efficient data solutions.
• More than 2 years of focused experience in Snowflake, including the design, development, and maintenance of automated data ingestion workflows using Snowflake SQL procedures and Python scripting.
• Practical experience with Azure data tools such as Azure Data Factory (ADF), SQL Server, and Blob Storage, alongside Snowflake.
• Skilled in managing various data formats (CSV, JSON, VARIANT) and executing data loading/exporting tasks using SnowSQL, with orchestration via ADF (a minimal loading sketch follows this list).
• Proficient in using data science and analytics tools like Snowpark, Apache Spark, pandas, NumPy, and Scikit-learn for complex data processing and modeling.
• Strong experience with Python and other scripting languages for data manipulation and automation.
• Proven ability to develop cloud-native ETL processes and data pipelines using modern technologies on Azure and Snowflake.
• Demonstrated excellence in analytical thinking and independent problem-solving.
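As a hedged illustration of the loading task above: a minimal Python sketch that lands JSON files into a VARIANT column with COPY INTO. The account, stage, and table names are hypothetical placeholders (the posting names no objects), and the same COPY INTO statement could equally be run from the SnowSQL CLI or an ADF activity.

```python
import os
import snowflake.connector

# Hypothetical connection and object names -- the posting does not
# specify an account, stage, or table, so these are placeholders.
conn = snowflake.connector.connect(
    account=os.environ["SF_ACCOUNT"],
    user=os.environ["SF_USER"],
    password=os.environ["SF_PASSWORD"],
    warehouse="ETL_WH",
    database="PI_DB",
    schema="STAGING",
)
cur = conn.cursor()

# Land raw JSON claim files in a single VARIANT column; downstream
# procedures can then flatten the payload into typed mart tables.
cur.execute("CREATE TABLE IF NOT EXISTS RAW_CLAIMS (payload VARIANT)")
cur.execute("""
    COPY INTO RAW_CLAIMS
    FROM @claims_stage/incoming/
    FILE_FORMAT = (TYPE = 'JSON')
    ON_ERROR = 'ABORT_STATEMENT'
""")
print(cur.fetchall())  # COPY INTO returns one status row per file
cur.close()
conn.close()
```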
Preferred Skills:
• Certification in Azure or Snowflake
• Experience with data modeling and database design
• Knowledge of data governance and data quality best practices
• Familiarity with other cloud platforms (e.g., AWS, Google Cloud)
Project: This individual will use a mapping document to write the ETL process that loads the PI data mart and will test the load process. They will create a balancing routine to be executed after each data load, working closely with data analysts and QA testers to ensure the data is loaded correctly. The ETL process will include procedures for summarizing data. They will also be responsible for creating the ADF pipelines that move data from SQL Server to Snowflake and from Snowflake to SQL Server.
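To make the balancing routine concrete: below is a minimal sketch of a post-load check that compares row counts and summed paid amounts between the SQL Server source and the Snowflake target. All table, column, and connection names are hypothetical assumptions, since the posting leaves the actual objects to the project's mapping document.

```python
import os
import pyodbc
import snowflake.connector

# Hypothetical queries -- real table and column names would come
# from the mapping document; more measures could be added the same way.
SRC_SQL = "SELECT COUNT(*), SUM(paid_amount) FROM dbo.claims"
TGT_SQL = "SELECT COUNT(*), SUM(PAID_AMOUNT) FROM PI_MART.CLAIMS"

src_cur = pyodbc.connect(os.environ["MSSQL_CONN_STR"]).cursor()
src_count, src_sum = src_cur.execute(SRC_SQL).fetchone()

sf_conn = snowflake.connector.connect(
    account=os.environ["SF_ACCOUNT"],
    user=os.environ["SF_USER"],
    password=os.environ["SF_PASSWORD"],
    warehouse="ETL_WH",
    database="PI_DB",
)
tgt_cur = sf_conn.cursor()
tgt_cur.execute(TGT_SQL)
tgt_count, tgt_sum = tgt_cur.fetchone()

# Fail loudly so a discrepancy is caught right after the load,
# before summarization procedures run against unbalanced data.
if (src_count, src_sum) != (tgt_count, tgt_sum):
    raise RuntimeError(
        f"Balancing failed: source ({src_count}, {src_sum}) "
        f"vs target ({tgt_count}, {tgt_sum})"
    )
print("Load balanced: row counts and paid amounts match.")
```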
Ideal Background:
• Experience setting up DDL, mapping data, and building extract, transform, and load (ETL) procedures in Snowflake and SQL Server.
• Experience with Azure services, including Azure Data Factory pipelines for ETL and movement of data between Snowflake and SQL Server.
• Experience identifying and creating balancing procedures for medical claim data, and the ability to resolve balancing issues.
• Ability to clearly communicate processes to technical as well as non-technical staff.
• Experience creating algorithms or models with tools such as Snowpark, Apache Spark, pandas, NumPy, and Scikit-learn for complex data processing and modeling (see the profiling sketch after this list).
• Experience with Python for data manipulation and automation.
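As a sketch of the modeling side mentioned above: a small pandas example of one common peer-group-profiling step, scoring each provider against its specialty peers and flagging outliers for review. The input file and every column name here are illustrative assumptions, not details from the posting.

```python
import pandas as pd

# Hypothetical input: one row per provider with a peer-group label
# (specialty) and summarized claim metrics from the data mart.
df = pd.read_csv("provider_claim_summary.csv")

# Z-score each provider's average paid per claim within its own
# specialty, so providers are compared only against true peers.
grp = df.groupby("specialty")["avg_paid_per_claim"]
df["peer_z"] = (df["avg_paid_per_claim"] - grp.transform("mean")) / grp.transform("std")

# Flag providers more than three standard deviations from their
# peers as candidates for program-integrity review.
outliers = df[df["peer_z"].abs() > 3]
print(outliers[["provider_id", "specialty", "peer_z"]])
```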
Top 3 Requirements:
• Over 5 years of hands-on expertise in cloud-based data engineering and data warehousing, with a strong emphasis on building scalable and efficient data solutions.
• More than 2 years of focused experience in Snowflake, including the design, development, and maintenance of automated data ingestion workflows using Snowflake SQL procedures and Python scripting.
• Practical experience with Azure data tools such as Azure Data Factory (ADF), SQL Server, and Blob Storage, alongside Snowflake.
What experience will set candidates apart from one another? Recent medical claim processing experience. Data science and analytics experience.
Team: 1 Product Owner, 1 PM/Implementation Manager, 1 SME/Analytics Manager, 2 ETL/Algorithm Developers, 2 Report Developers, and 7 Data Analysts.
Interview Process: 2 rounds of interviews, with verbal quizzes to confirm the individual's knowledge. There is no written/virtual test.