Diverse Lynx
Python, Scala, Java, telecom domain knowledge, data warehouse/Databricks knowledge, NoSQL/JSON document stores (e.g., MongoDB), data mappers (e.g., NiFi), data analytics, Spark, Kafka, and Azure technologies
Competent Big Data/Databricks engineer who is independent, results-driven, and capable of taking business requirements and building out the technologies to take them to production.
Big Data Engineer with expert-level experience in the Hadoop ecosystem and real-time analytics tools, including PySpark, Scala Spark, Hive, Hadoop CLI, MapReduce, Storm, Kafka, Lambda Architecture, and MongoDB
Familiar with job scheduling challenges in Hadoop
Experienced in creating and submitting Spark jobs
Experience in high-performance tuning and scalability
Experience working with real-time stream processing technologies such as Spark Structured Streaming and Kafka
Expertise in Python/Spark and their related libraries and frameworks
Experience in building pipelines and handling the effort involved in deployment
Unix/Linux expertise; comfortable with the Linux operating system and shell scripting
Experience with Azure Cache
Design, development, and unit and integration testing of complex data pipelines that handle large data volumes to derive insights
Ability to optimize code to run efficiently within stipulated SLAs
Excellent problem-solving skills, with attention to detail and a focus on quality and timely delivery of assigned tasks
Diverse Lynx LLC is an Equal Employment Opportunity employer. All qualified applicants will receive due consideration for employment without any discrimination. All applicants will be evaluated solely on the basis of their ability, competence and their proven capability to perform the functions outlined in the corresponding role. We promote and support a diverse workforce across all levels in the company.