Diverse Lynx
Overview
Python, Scala, Java, Telecom domain knowledge, Data Warehouse/Databricks knowledge, NoSQL/JSON document stores (e.g., MongoDB), data mappers (e.g., NiFi), data analytics, Spark, Kafka, and Azure technologies
Responsibilities
Competent Big Data/Databricks engineer who is independent, results-driven, and capable of taking business requirements and building out the technology to take them to production. Big Data Engineer with expert-level experience in the Hadoop ecosystem and real-time analytics tools, including PySpark, Scala Spark, Hive, Hadoop CLI, MapReduce, Storm, Kafka, Lambda Architecture, and MongoDB. Familiar with job-scheduling challenges in Hadoop. Experienced in creating and submitting Spark jobs, including performance tuning and scalability. Experience with real-time stream processing technologies such as Spark Structured Streaming and Kafka. Expertise in Python/Spark and their related libraries and frameworks. Experience in building pipelines and in the work involved in deployment. Unix/Linux expertise; comfortable with the Linux operating system and shell scripting. Experience with Azure Cache. Design, development, and unit and integration testing of complex data pipelines that handle large data volumes to derive insights. Ability to optimize code to run efficiently within the stipulated SLA. Excellent problem-solving skills, attention to detail, and a focus on quality and timely delivery of assigned tasks.
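For illustration only, a minimal sketch of the kind of real-time pipeline this role describes: a PySpark Structured Streaming job that reads JSON events from a Kafka topic and writes them to storage. The broker address, topic name, schema, and paths are hypothetical placeholders, not details from this posting.

    # Illustrative sketch only; requires the spark-sql-kafka connector package at runtime.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, from_json
    from pyspark.sql.types import StructType, StructField, StringType, LongType

    spark = (
        SparkSession.builder
        .appName("kafka-events-stream")  # hypothetical application name
        .getOrCreate()
    )

    # Hypothetical schema for the JSON payload carried in the Kafka value field.
    event_schema = StructType([
        StructField("event_id", StringType()),
        StructField("event_type", StringType()),
        StructField("event_ts", LongType()),
    ])

    # Read the raw stream from Kafka (broker and topic are placeholders).
    raw = (
        spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("subscribe", "telecom-events")
        .option("startingOffsets", "latest")
        .load()
    )

    # Parse the JSON value into typed columns.
    events = (
        raw.select(from_json(col("value").cast("string"), event_schema).alias("e"))
        .select("e.*")
    )

    # Write the parsed events to a parquet sink with checkpointing (paths are placeholders).
    query = (
        events.writeStream
        .format("parquet")
        .option("path", "/mnt/data/events")
        .option("checkpointLocation", "/mnt/checkpoints/events")
        .outputMode("append")
        .start()
    )

    query.awaitTermination()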
Qualifications
Any additional relevant experience or certifications related to Big Data engineering, Spark, Hadoop, streaming technologies, or cloud platforms may be considered as supporting qualifications.
Diverse Lynx LLC is an Equal Employment Opportunity employer. All qualified applicants will receive due consideration for employment without any discrimination. All applicants will be evaluated solely on the basis of their ability, competence and their proven capability to perform the functions outlined in the corresponding role. We promote and support a diverse workforce across all levels in the company.