Maveric Systems
Key Responsibilities
- Design, develop, and optimize scalable data pipelines using Apache Spark, Python, and Hadoop (a minimal sketch of such a pipeline follows this list).
- Implement robust data ingestion, transformation, and storage solutions for large-scale datasets.
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
- Manage and deploy Big Data tools and frameworks including Kafka, Hive, HBase, and Flink.
- Ensure data quality, integrity, and availability across distributed systems.
- Conduct performance tuning and benchmarking of Big Data applications.
- Implement data governance practices including metadata management and data lineage tracking.
- Stay current with emerging technologies and integrate them into the data ecosystem as needed.
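The sketch below is illustrative only, not part of the role's specification: a minimal PySpark Structured Streaming pipeline that ingests events from Kafka, applies a simple transformation, and stores the result as Parquet on HDFS. The broker address, topic name, and HDFS paths are hypothetical placeholders, and the sketch assumes the spark-sql-kafka connector package is on the Spark classpath.

# Minimal sketch only; broker, topic, and paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("event-pipeline-sketch")
    .getOrCreate()
)

# Ingest: read raw events from a Kafka topic (requires the
# spark-sql-kafka connector on the classpath).
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "events")                     # hypothetical topic
    .load()
)

# Transform: Kafka values arrive as bytes; cast to string and
# keep only non-empty payloads with their arrival timestamps.
events = (
    raw.select(
        F.col("value").cast("string").alias("payload"),
        F.col("timestamp"),
    )
    .where(F.length("payload") > 0)
)

# Store: append the cleaned stream to Parquet on HDFS, with a
# checkpoint location so a restarted job can recover its progress.
query = (
    events.writeStream
    .format("parquet")
    .option("path", "hdfs:///warehouse/events")            # hypothetical path
    .option("checkpointLocation", "hdfs:///checkpoints/events")
    .start()
)
query.awaitTermination()

The checkpoint location is what lets a restarted job resume from its last committed Kafka offsets rather than reprocessing the whole topic, which is the usual basis for the data-availability and recovery guarantees the role calls for.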
Required Qualifications
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- 7+ years of experience in software development with a focus on Big Data technologies.
- Strong programming skills in Python, Java, or Scala.
- Hands-on experience with Hadoop, Spark, Kafka, and NoSQL databases.
- Experience building and maintaining ETL/ELT pipelines.
- Familiarity with cloud platforms (AWS, GCP, or Azure) is a plus.
- Excellent problem-solving and communication skills.
Seniority level
Mid-Senior level
Employment type
Full-time
Job function
Information Technology