Empower Professionals
Role Data Engineer
Location Alpharetta, GA (Hybrid)
Duration 12 Months
Must Have Skills
Java
Python
Apache Spark
Job Description
We are seeking a highly skilled Senior Data Engineer with strong expertise in designing, building, and optimizing large-scale data processing systems. The ideal candidate has hands‑on experience with Java, Python, and modern Big Data technologies, along with the ability to build high‑performance, scalable data pipelines.
Key Responsibilities
Develop, optimize, and maintain data processing applications using Java and Python to ensure high performance, scalability, and reliability.
Design and implement large‑scale data processing workflows using Apache Spark, including building complex Spark applications for efficient data transformation and analytics (see the sketch after this list).
Work with Hadoop, HDFS, and cloud‑based Big Data ecosystems to manage and process high‑volume datasets effectively.
Collaborate with cross‑functional teams to build robust data solutions supporting business analytics and machine‑learning initiatives.
Troubleshoot, tune, and enhance data pipelines to meet performance standards and operational SLAs.
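To illustrate the kind of Spark pipeline work described above, here is a minimal PySpark sketch of a batch transformation job. The input and output paths, the events dataset, and its columns (user_id, event_timestamp, bytes_processed) are hypothetical placeholders for illustration, not details from this posting.

```python
# Minimal sketch of a batch Spark transformation pipeline.
# Paths, dataset, and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("events-daily-aggregation")
    .getOrCreate()
)

# Read raw event data (assumed to land as Parquet in HDFS or cloud object storage).
events = spark.read.parquet("hdfs:///data/raw/events/")

# Clean and transform: drop malformed rows, derive a date column, aggregate per user.
daily_activity = (
    events
    .where(F.col("user_id").isNotNull())
    .withColumn("event_date", F.to_date("event_timestamp"))
    .groupBy("user_id", "event_date")
    .agg(
        F.count("*").alias("event_count"),
        F.sum("bytes_processed").alias("total_bytes"),
    )
)

# Write the curated dataset, partitioned by date for efficient downstream reads.
(
    daily_activity
    .write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("hdfs:///data/curated/daily_activity/")
)

spark.stop()
```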
Required Skills & Experience
4 to 8 years of hands‑on experience as a Data Engineer or in similar Big Data engineering roles.
Strong programming expertise in Java and Python.
Advanced experience with Apache Spark (RDDs, DataFrames, Spark SQL, and performance tuning); a brief sketch follows this list.
Solid understanding of Hadoop, HDFS, and cloud Big Data technologies (AWS, Azure, or Google Cloud Platform preferred).
Experience with distributed systems, ETL/ELT workflows, and scalable data architectures.
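As a brief illustration of the Spark SQL and DataFrame skills listed above, the sketch below expresses the same aggregation both ways and applies basic caching. The orders dataset, its columns, and the storage path are assumptions for illustration only.

```python
# Minimal sketch of Spark SQL vs. the DataFrame API on the same (hypothetical) dataset.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-sql-example").getOrCreate()

orders = spark.read.parquet("s3a://example-bucket/orders/")

# Register the DataFrame as a temporary view so it can be queried with Spark SQL.
orders.createOrReplaceTempView("orders")

top_customers_sql = spark.sql("""
    SELECT customer_id, SUM(amount) AS total_spend
    FROM orders
    WHERE status = 'COMPLETED'
    GROUP BY customer_id
    ORDER BY total_spend DESC
    LIMIT 10
""")

# The same query expressed with the DataFrame API.
top_customers_df = (
    orders
    .where(F.col("status") == "COMPLETED")
    .groupBy("customer_id")
    .agg(F.sum("amount").alias("total_spend"))
    .orderBy(F.desc("total_spend"))
    .limit(10)
)

# Basic performance tuning: cache a reused intermediate result and inspect its plan.
top_customers_df.cache()
top_customers_df.explain()
```

Both formulations are optimized by Catalyst into equivalent logical plans, so the choice between SQL and the DataFrame API is largely a matter of readability and team convention.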