KellyMitchell Group
Job Summary
Our client is seeking a Database Engineering Manager to join their team! This position is located in Denver, Colorado.
Responsibilities
- Building and maintaining ETL processes from various sources into the company data environment, such as Snowflake, Azure, and Fabric
- Building views and monitoring pipelines to ensure they run smoothly
- Designing and prototyping new data structures and exploring how to leverage new tools or platforms
- Creating proof-of-concept models and ensuring data is organized for easy access and analysis
- Working with other teams to understand data needs, defining requirements, and creating data views that can be shared across the organization
- Ensuring proper data governance, security, and access control
- Integrating data from various sources to create a unified and comprehensive dataset
- Testing data pipelines, validating data accuracy, and ensuring the integrity of the data structures
Desired Skills/Experience
- Proficient in writing and optimizing advanced SQL queries
- Hands-on experience working with large corporate datasets, not just academic or lab projects
- Experience with cloud data platforms like Redshift, Snowflake, Fabric, or Azure is essential, particularly in exploring data, handling flat files, and leveraging ETL processes
- Proficient in building and orchestrating ETL pipelines using tools like Apache, Fabric Data Flows, or similar technologies
- Experience with Spark in a Python environment is a plus
- Familiarity with the properties and concepts of major machine learning techniques is a plus
Benefits
- Medical, Dental, & Vision Insurance Plans
- Employee-Owned Profit Sharing (ESOP)
- 401K offered
The approximate pay range for this position is $90,000 - $105,000. Please note that the pay range provided is a good faith estimate. Final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.