CapB InfoteK
Overview
Hadoop Data Engineer role in New York at CapB InfoteK. The ETL Hadoop Data Engineer will be responsible for analyzing business requirements and designing, developing, and implementing highly efficient, scalable ETL processes. The candidate will perform daily project functions with a focus on meeting business objectives on time in a rapidly changing work environment, and should be able to lead and drive globally distributed teams to achieve those objectives.
Responsibilities
- Analyze business requirements; design, develop, and implement highly efficient, scalable ETL processes.
- Perform daily project functions with a focus on meeting business objectives on time in a rapidly changing environment.
- Lead and drive globally distributed teams to achieve business objectives.
Required Skills
- 3-8 years of hands-on experience with ETL and Hadoop.
- Knowledge of Hadoop ecosystem components (Hive, Impala, Spark) and experience applying them to practical problems.
- Experience with relational databases (Teradata, DB2, Oracle, SQL Server).
- Hands-on experience writing shell scripts on Unix.
- Experience with data warehousing ETL tools and MPP database systems.
- Understanding of conceptual, logical, and physical data models, and of dimensional and relational data model design.
- Ability to analyze functional specifications, identify data sources, and work with source system teams and data analysts to define data extraction methodologies.
- Strong SQL skills, with experience in Teradata, DB2, Oracle, and PL/SQL.
- Ability to maintain batch processing jobs and respond to production issues.
- Effective communication with stakeholders.
- Knowledge of data analysis, data profiling, and root cause analysis.
- Ability to understand banking system processes and data flows.
- Ability to work independently and to lead and mentor a team.
Seniority level
Mid-Senior level
Employment type
Full-time
Job function
Information Technology