Compunnel
We are seeking a highly experienced Senior Data Engineer with a strong background in big data technologies, data modeling, and ETL development.
The ideal candidate will have over 10 years of experience in data engineering and analysis, with hands-on expertise in the Hadoop ecosystem, Python, Kafka, and cloud-based data processing.
This role requires strong communication skills and the ability to work in Agile environments, with a preference for candidates located in Cleveland or Pittsburgh.
Key Responsibilities
Design, develop, and maintain scalable data pipelines and ETL processes.
Work extensively with the Hadoop ecosystem, including PySpark, HBase, Hive, Pig, Sqoop, Scala, Flume, HDFS, and MapReduce.
Develop and optimize data workflows using Python and Kafka.
Apply strong knowledge of data modeling, data design, and database concepts.
Perform data pre-processing, extraction, ingestion, normalization, quality checks, and loading.
Collaborate with cross-functional teams and stakeholders in Agile development environments.
Participate in client-facing discussions, providing technical leadership and coordination across the SDLC.
Contribute to the design and implementation of machine learning and AI solutions (preferred).
Leverage AWS for data processing and analytics (preferred).
Utilize data modeling tools such as Erwin (preferred).
Required Qualifications
10+ years of experience in Data Engineering and Data Analysis.
Hands-on experience with the Hadoop stack and related technologies.
Proficiency in Python and Kafka.
Strong understanding of ETL processes, data quality, and data normalization.
Experience working in Agile methodologies and with tools such as Jira.
Excellent communication and leadership skills in client-facing roles.
Bachelor's or Master's degree in Computer Science or a related field.
Preferred Qualifications
Experience with machine learning models and AI.
Familiarity with AWS data services and cloud-based analytics.
Experience using Erwin or similar data modeling tools.
Preferred location: Cleveland or Pittsburgh.