Jobsbridge
Jobs Bridge Inc is among the fastest-growing IT staffing and professional services organizations, with its own job portal.
Jobs Bridge works closely with a large number of IT organizations on the most in-demand technology skill sets.
Job Description
The ideal candidate will have experience with Hadoop, Big Data, and related technologies such as Flume, Storm, and Hive.

Job Details:
- Total Experience: 2 years
- Max Salary: Not Mentioned
- Employment Type: Direct Jobs (Full Time)
- Domain: Any

OPT and EAD candidates are eligible to apply.

Job Responsibilities:
- Build distributed, scalable, and reliable data pipelines that ingest and process data at scale and in real time.
- Collaborate with other teams to design, develop, and deploy data tools that support both operations and product use cases.
- Perform offline analysis of large data sets using components from the Hadoop ecosystem.
- Evaluate and advise on technical aspects of open work requests in the product backlog with the project lead.
- Own product features from the development phase through to production deployment.
- Evaluate big data technologies and prototype solutions to improve our data processing architecture.

Candidate Profile:
- BS in Computer Science or a related area
- Minimum 2 years of experience on a Big Data platform
- Proficiency with Java, Python, Scala, HBase, Hive, MapReduce, ETL, Kafka, Mongo, Postgres, and visualization technologies
- A flair for data, schemas, and data models, and for efficiency across the big data life cycle
- Understanding of automated QA needs related to Big Data
- Understanding of various visualization platforms (Tableau, D3.js, others)
- Proficiency with agile or lean development practices
- Strong object-oriented design and analysis skills
- Excellent technical and organizational skills
- Excellent written and verbal communication skills

Desired Skill Sets:
- Batch processing: Hadoop MapReduce, Cascading/Scalding, Apache Spark
- Stream processing: Apache Storm, Akka, Samza, Spark Streaming
- ETL tools: DataStage, Informatica