eTeam
Must Have Skills: Hive, Spark, Pig
Detailed Job Description:
• Hands-on experience in data engineering, with at least 8 years in Hive development.
• Design, develop, and maintain scalable data pipelines using HiveQL and Apache Hive.
• Implement data governance, lineage, and quality checks across data workflows.
• Tune Hive queries for performance and scalability in production environments.
• Integrate Hive with other tools in the Hadoop ecosystem (e.g., Spark, Pig, Oozie, Sqoop); a brief illustrative sketch follows this list.
• Familiarity with cloud platforms (AWS, Azure, GCP) and data lake architectures.
• Strong proficiency in HiveQL, SQL, and data modeling.
• Work with large datasets in the Hadoop Distributed File System (HDFS) and optimize query performance.
• Experience with Unix/Linux shell scripting and version control systems (e.g., Git).
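For candidates unfamiliar with how these duties fit together, the following is a minimal PySpark sketch of the kind of Hive-on-Spark work described above: reading a Hive metastore table from Spark and filtering on a partition column so partitions can be pruned. The database, table, and column names (sales_db.daily_sales, sale_date, region, amount) are hypothetical placeholders, and the environment is assumed to already have Spark configured against a Hive metastore; this is illustrative only, not a prescribed implementation.

    from pyspark.sql import SparkSession

    # Enable Hive support so Spark can resolve tables registered in the Hive metastore.
    spark = (
        SparkSession.builder
        .appName("hive-spark-integration-sketch")
        .enableHiveSupport()
        .getOrCreate()
    )

    # Filtering on the (assumed) partition column sale_date allows partition pruning,
    # a common first step when tuning Hive queries over large HDFS-backed tables.
    daily_totals = spark.sql("""
        SELECT region, SUM(amount) AS total_amount
        FROM sales_db.daily_sales
        WHERE sale_date = '2024-01-01'
        GROUP BY region
    """)

    daily_totals.show()
    spark.stop()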