zAnswer LLC
Role: Big Data Engineer (Lead Level)
Location: New York, NY (Onsite)
Job Type: Contract
Duration: Long Term
Overview
We are seeking a highly skilled Data Engineer with deep expertise in Big Data technologies, data lakes, and modern analytics platforms. The ideal candidate will design, build, and optimize scalable data pipelines that support advanced analytics and business intelligence. This role requires strong hands-on experience with Hadoop ecosystems, Snowflake, Kafka, Spark, and other distributed data platforms.
Responsibilities
Design, develop, and maintain data pipelines for ingesting, transforming, and delivering large-scale datasets.
Manage and optimize data lake architectures to ensure scalability, reliability, and performance.
Implement and support Hadoop-based solutions for distributed data processing.
Integrate and manage Snowflake for cloud-based data warehousing and analytics.
Build and maintain real-time streaming solutions using Kafka.
Develop and optimize Spark applications for batch and streaming workloads.
Collaborate with data analysts, scientists, and business stakeholders to deliver actionable insights.
Ensure data quality, governance, and security across all platforms.
Monitor and troubleshoot data pipelines to maintain high availability and performance.
Skills & Qualifications
Core Skills
Big Data Ecosystem: Hadoop (HDFS, Hive, Pig, MapReduce), Spark, Kafka.
Cloud Data Warehousing: Snowflake (preferred), Redshift, BigQuery.
Data Lake Management: Experience with large-scale data storage and retrieval.
Data Pipelines: ETL/ELT design, orchestration tools (Airflow, NiFi, etc.).
Programming & Scripting: Python, Scala, Java, SQL.
Data Analysis: Strong ability to query, analyze, and interpret large datasets.
Distributed Systems: Understanding of scalability, fault tolerance, and performance optimization.
DevOps & Automation: CI/CD pipelines, containerization (Docker, Kubernetes).
Visualization & BI Tools: Familiarity with Tableau, Power BI, or similar.
Preferred Qualifications
12+ years of experience in data engineering or big data roles.
Experience with cloud platforms (AWS, Azure, GCP).
Strong problem‑solving and analytical mindset.
Excellent communication and collaboration skills.