Phaxis
Overview
The Data Engineer will partner with a wide range of business teams to implement analytical and data solutions that drive business value and customer satisfaction. This role is responsible for collecting, storing, processing, analyzing, and modeling large data sets and for building applications and solutions on top of that data. The primary focus will be on building, maintaining, implementing, monitoring, supporting, and integrating analytical and data solutions with the architecture used across the company.
How You'll Shine
Maintain and monitor our analytics data warehouses and data platform.
Design, implement, test, deploy, and maintain stable, secure, and scalable data engineering solutions and pipelines in support of data and analytics projects, including integrating new data sources into the central data warehouse and moving data out to applications and affiliates.
Provide hands-on development, deployment, maintenance, and support of cloud and on-premises solutions, web service infrastructure, and supporting technologies.
Produce scalable, replicable code and engineering solutions that automate repetitive data management tasks.
Work with project managers, business analysts, data scientists and other groups to translate requirements into technical specifications.
Collaborate with key stakeholders to ensure data infrastructure meets business needs in a scalable way.
Critically assess the technical strategy, identify gaps, and propose creative solutions.
Qualifications
Bachelor’s degree in computer and information science required; Master’s degree preferred.
Snowflake and Python certification preferred but not required.
Excellent listening, interpersonal, communication (written & verbal) and problem-solving skills.
Ability to collect and compile relevant data; extremely organized with great attention to detail.
Strong ability to analyze information and think systematically; strong business analysis skills.
Good understanding of the company’s business processes and the industry at large.
Good working SQL knowledge and experience with relational databases; familiarity with a variety of databases.
Experience building and optimizing data pipelines and data sets using scripting languages or ETL tools.
Ability to perform root cause analysis on data processes to answer business questions and identify opportunities for improvement.
Strong analytic skills related to working with unstructured datasets.
Ability to build and use APIs to push and pull data from various data systems and platforms.
Ability to build processes supporting the extraction, transformation, and loading of data into data structures.
Experience manipulating, processing and extracting value from large, disconnected datasets; ability to build data models and manage data warehouses.
3 years of related data engineering/IT experience.
1+ years of proven experience with Apache Spark, Hadoop, Java/Scala, Python and AWS.
1+ years of proven experience with Microsoft .NET technologies (C#, VB.NET) and experience designing, developing and deploying Windows & Web applications.
2+ years of experience in data modeling/database development using PL/SQL, SQL Server 2016 or later, and Snowflake.
1+ years of proven experience building data pipelines and ETL processes in cloud and on-premises environments using Snowpipe, Informatica, Airflow, Kafka, etc.
Equivalent experience may be accepted in lieu of the education requirement.
Seniority level: Mid-Senior level
Employment type: Other
Job function: Information Technology
Industries: Hospitality