Saransh Inc
Overview
Position :
Big Data Engineer (12 Years Experience Only)
Location :
New York City, NY
Skills :
Coding expert in Java / Python, the Hadoop ecosystem (HDFS), Spark, and Hive
Strong work experience; experience in an Agile environment preferred
Responsibilities
Designs and builds scalable data pipelines, integrates diverse sources, and optimizes storage / processing using the Hadoop ecosystem and Greenplum (see the illustrative sketch after this list).
Ensures data quality, security, and compliance through governance frameworks.
Implements orchestration, monitoring, and performance tuning for reliable, cost-efficient operations.
Brings expertise in the Hadoop ecosystem (HDFS, Hive, Spark, Kafka) and MPP databases such as Greenplum for large-scale data processing and optimization.
Collaborates with Data Owners and stakeholders to translate business rules into technical solutions.
Delivers curated datasets, lineage, and documentation aligned with SLAs and regulatory standards.
Acts as a subject matter expert with experience interacting with clients, understanding requirements, and guiding the team.
Documents requirements clearly with a defined scope, and plays an anchor role in setting the right expectations and delivering on schedule.
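For illustration only, a minimal PySpark sketch of the kind of pipeline work described above: reading raw events from HDFS, applying a simple data-quality filter, and writing a curated Hive table. The paths, table names, and columns are hypothetical and not part of this posting.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical example: curate raw order events into a Hive table.
spark = (
    SparkSession.builder
    .appName("curate_orders")  # hypothetical job name
    .enableHiveSupport()
    .getOrCreate()
)

# Read raw events from a hypothetical HDFS location.
raw = spark.read.parquet("hdfs:///data/raw/orders")

# Basic quality filter: drop records missing an order id or amount,
# then stamp each row with the ingestion date for partitioning.
curated = (
    raw.filter(F.col("order_id").isNotNull() & F.col("amount").isNotNull())
       .withColumn("ingest_date", F.current_date())
)

# Write the curated dataset as a partitioned Hive table.
(curated.write
    .mode("overwrite")
    .partitionBy("ingest_date")
    .saveAsTable("analytics.orders_curated"))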
Key Skills
Apache Hive
S3
Hadoop
Redshift
Spark
AWS
Apache Pig
NoSQL
Big Data
Data Warehouse
Kafka
Scala
Employment Details
Employment Type :
Full Time
Experience :
12 years
Vacancy :
1