Purple Drive

Big Data Engineer

Purple Drive, Sunnyvale, California, United States, 94087


Key Responsibilities

- Design, develop, and maintain big data pipelines leveraging Hadoop, Spark, and Kafka.
- Develop and optimize J2EE backend services for large-scale data processing and integration.
- Program in Python, Scala, and Java to build robust data and backend solutions.
- Architect and implement cloud-native solutions on GCP and Azure platforms.
- Design and maintain data models, data warehousing solutions, and BI tools to enable data-driven decision making.
- Implement and manage workflow orchestration frameworks such as Automic and Airflow for data pipeline automation.
- Collaborate with cross-functional teams, including data scientists, analysts, and DevOps engineers, to deliver end-to-end data solutions.
- Ensure data quality, security, and scalability across all platforms.
- Troubleshoot and optimize performance of large-scale distributed systems.

Required Skills & Qualifications

- Bachelor's/Master's degree in Computer Science, Engineering, or a related field (or equivalent experience).
- 6-8 years of experience in big data engineering and backend development.
- Expertise in the Hadoop ecosystem (HDFS, MapReduce, Hive, Spark) and Kafka.
- Strong programming skills in Python, Scala, and Java.
- Hands-on experience with J2EE backend development.
- Proven experience with cloud services (GCP, Azure) for data engineering and application deployment.
- Proficiency in data modeling, warehousing concepts, and BI tools.
- Experience with workflow orchestration tools such as Airflow and Automic.
- Strong problem-solving and debugging skills in large-scale distributed environments.

Preferred Skills (Nice to Have)

- Familiarity with containerization and orchestration (Docker, Kubernetes).
- Knowledge of streaming frameworks (Spark Streaming, Flink, Storm).
- Exposure to CI/CD pipelines and DevOps practices.
- Experience in Agile/Scrum development environments.