Sonoma Consulting
Halo Group is a premier provider of IT talent. We place technology experts within
the teams of the world’s leading companies to help them build innovative
businesses that keep them one step closer to their customers and one step
ahead of the competition. We offer a meaningful work environment for
employees, attractive and interesting engagements for consultants, and cutting-edge
digital innovation for our customers.
We delight in helping our customers execute their digital vision. On projects big or small, Halo Group knows that by combining the highest-quality talent with our unwavering support, we become an invaluable extension of the team. Halo Group's experienced consultants in Detroit, Atlanta, and Dallas specialize in all areas of product/project governance, UX/UI, multi-platform applications, quality assurance/testing, cloud computing, and data analytics.
Since its inception, Halo Group has received numerous awards and recognitions, including:
INC 5000
Future 50
101 Best and Brightest
Michigan 50 Companies to Watch
Goldline Research - “Most Dependable Companies”
Ernst & Young - “Entrepreneur of the Year” Finalist
Job Description
Responsibilities
• Selecting and integrating the Big Data tools and frameworks required to provide requested capabilities
• Implementing data ingestion and ETL processes on Hadoop
• Monitoring performance and advising on any necessary infrastructure changes
• Defining data retention policies
• Designing and building data processing pipelines for structured and unstructured data using tools and frameworks in the Hadoop ecosystem (see the sketch after the qualifications list)
• Developing applications that scale to handle millions of events/records
• Designing and launching scalable, reliable, and efficient processes to move, transform, and report on large amounts of data
• Participating in meetings with the business (account/product management, data scientists) to gather new requirements
• Following our Agile software development process, with daily scrums and monthly sprints
• Working collaboratively on a cross-functional team with a wide range of experience levels
Qualifications
• Bachelor's degree and 8+ years of relevant experience, or Master's degree and 6+ years of relevant experience
• 4+ years in industry implementing big data solutions on Hadoop
• Proficient understanding of distributed computing principles
• Proficiency with Hadoop v2, MapReduce, and HDFS
• Experience building stream-processing systems using solutions such as Storm, or Kafka with Spark Streaming
• Good knowledge of Big Data querying tools such as Pig, Hive, and Phoenix
• Experience with Spark
• Experience integrating data from multiple data sources
• Experience with one or two NoSQL/graph databases, such as HBase, Cassandra, MongoDB, or Neo4j
• Proficiency in programming languages such as Scala, Java, and Python
• Experience with Linux and shell scripting
• Experience with relational databases (SQL)
• Experience working with real-time data feeds
• Experience working with unstructured data
• Experience implementing Sqoop jobs to import/export data to and from Hadoop
• Knowledge of various ETL techniques and frameworks, such as Pig, Hive, or Flume
• Experience with messaging systems such as Kafka or RabbitMQ
• Experience with Big Data ML toolkits such as Mahout, Spark MLlib, or H2O
• Good understanding of the Lambda Architecture, along with its advantages and drawbacks
• Experience with the Hortonworks Data Platform (HDP)
• Experience with some or all of the following supporting Hadoop administration and security frameworks: HCatalog, Drill, NiFi, Oozie, Falcon, Ranger, Ambari, Zeppelin
• Bachelor's degree or foreign equivalent from an accredited institution required; three years of progressive experience in the specialty will be considered in lieu of each year of education
• At least 2 years of experience with Information Technology
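By way of illustration, the following is a minimal sketch of the Kafka-to-Spark-Streaming ingestion pattern this role works with, written in PySpark. The broker address, topic name, event schema, and HDFS paths are placeholders for illustration only, not details from this posting.

# A minimal sketch of a Kafka -> Spark Structured Streaming -> HDFS pipeline.
# All names (broker, topic, schema fields, paths) are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, LongType

spark = (SparkSession.builder
         .appName("event-ingest-sketch")
         .getOrCreate())

# Hypothetical JSON event schema; a real feed would define its own.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("user_id", StringType()),
    StructField("ts", LongType()),
])

# Read a stream of JSON events from a Kafka topic and parse each record.
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "events")
          .load()
          .select(from_json(col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Land the parsed records on HDFS as Parquet; the checkpoint directory
# makes the pipeline restartable after failures.
query = (events.writeStream
         .format("parquet")
         .option("path", "hdfs:///data/events")
         .option("checkpointLocation", "hdfs:///checkpoints/events")
         .start())

query.awaitTermination()

A production pipeline would add schema management, monitoring, and retention handling, in line with the responsibilities listed above.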
Additional Information
** U.S. citizens and those authorized to work in the United States without sponsorship are encouraged to apply; we are unable to provide visa sponsorship at this time. This is a full-time, permanent opportunity, open only to U.S. citizens and green card holders. ** All your information will be kept confidential according to EEO guidelines.