Ideavat Inc
Jobs Bridge Inc is among the fastest-growing IT staffing / professional services organizations and operates its own job portal.
Jobs Bridge works closely with a large number of IT organizations across the most in-demand technology skill sets.
Job Description
OPT and EAD candidates can apply.
Desired Skills and Experience
Job Responsibilities:
- Build distributed, scalable, and reliable data pipelines that ingest and process data at scale and in real time.
- Collaborate with other teams to design, develop, and deploy data tools supporting both operations and product use cases.
- Perform offline analysis of large data sets using components from the Hadoop ecosystem.
- Evaluate and advise on technical aspects of open work requests in the product backlog with the project lead.
- Own product features from development through to production deployment.
- Evaluate big data technologies and prototype solutions to improve the data processing architecture.
Candidate Profile:
- BSc in Computer Science or a related field.
- Minimum 2 years of experience on Big Data platforms.
- Proficiency with Java, Python, Scala, HBase, Hive, MapReduce, ETL, Kafka, MongoDB, Postgres, and visualization technologies.
- Strong understanding of data schemas, data models, and efficiency across the big data lifecycle.
- Knowledge of automated QA needs related to Big Data.
- Experience with visualization platforms such as Tableau, D3.js, or others.
- Proficiency with agile or lean development practices.
- Strong object-oriented design and analysis skills.
- Excellent technical, organizational, written, and verbal communication skills.
Top skill sets / technologies in the ideal candidate include:
- Batch processing: Hadoop MapReduce, Cascading/Scalding, Apache Spark
- Stream processing: Apache Storm, Akka, Samza, Spark Streaming
- ETL tools: DataStage, Informatica
Technologies we use include: R, Java, Hadoop/MapReduce, Flume, Storm, Kafka, MemSQL, Pig, Hive, ETL.