BizTek People
Job Posting
Job Information
Job Opening ID: 6730
Date Opened: 10/07/2020
Job Type: Contract
Language Skills: English
Location: Beaverton
Industry: Manufacturing
City: Beaverton
State/Province: Oregon
Country: United States
Zip/Postal Code: 97003

Job Description
Role Responsibilities:
- Design and implement data products and features in collaboration with product owners, data analysts, and business partners using Agile/Scrum methodology
- Contribute to the overall architecture, frameworks, and patterns for processing and storing large data volumes
- Translate product backlog items into engineering designs and logical units of work
- Profile and analyze data to design scalable solutions
- Define and apply appropriate data acquisition and consumption strategies for given technical scenarios
- Design and implement distributed data processing pipelines using tools and languages prevalent in the big data ecosystem
- Build utilities, user-defined functions, libraries, and frameworks to better enable data flow patterns
- Implement complex automated routines using workflow orchestration tools
- Work with architecture, engineering leads, and other teams to ensure quality solutions are implemented and engineering best practices are defined and adhered to
- Anticipate, identify, and solve issues concerning data management to improve data quality
- Build and incorporate automated unit tests and participate in integration testing efforts
- Utilize and advance continuous integration and deployment frameworks
- Troubleshoot data issues and perform root cause analysis
- Work across teams to resolve operational and performance issues

Requirements
The following qualifications and technical skills will position you well for this role:
- MS/BS in Computer Science or a related technical discipline
- 5+ years of experience in large-scale software development, including 3+ years of big data experience
- Strong programming experience, Python preferred
- Extensive experience working with Hadoop and related processing frameworks such as Spark and Hive
- Experience with messaging/streaming/complex event processing tooling and frameworks, with an emphasis on Spark Streaming or Structured Streaming and Apache NiFi
- Good understanding of file formats including JSON, Parquet, and Avro
- Familiarity with data warehousing, dimensional modeling, and ETL development
- Experience with relational database (RDBMS) systems, SQL, and SQL analytical functions
- Experience with workflow orchestration tools like Apache Airflow

Skill Set
Big Data