Pennsylvania Staffing
Job Opportunity at Zoom
You will build, optimize, and maintain data pipelines using modern big data technologies such as Apache Spark and Kafka to handle high-volume data and support AI. The team is a close-knit group of seven experienced senior engineers that powers all of Zoom's AI features and enables data-driven decision-making across the organization. You will work with seasoned engineers and analysts to deliver data solutions, leveraging modern big data, distributed processing, and cloud technologies.

Responsibilities
- Designing, developing, and maintaining ETL processes and data pipelines for structured and unstructured data, and optimizing data warehouses and analytical data models.
- Writing and tuning complex SQL queries to support analytics, reporting, and application needs.
- Collaborating with stakeholders to translate business requirements into data transformations and workflows.
- Implementing distributed data processing solutions using Apache Spark and Apache Flink with Scala or Python.
- Building streaming and messaging solutions using Apache Kafka.
- Working with and optimizing solutions in the big data ecosystem (e.g., Hive, Presto, HDFS, Parquet, ORC).
- Deploying and managing workloads using Kubernetes (K8s) in cloud or hybrid environments.
- Developing scripts in Linux/Unix shell for automation and data processing tasks.

Qualifications
- Bachelor's degree in Computer Science, Data Engineering, or a related field, OR 13 years of relevant professional experience.
- Expertise in ETL design, development, and data warehousing concepts.
- Expertise in SQL, including writing and optimizing complex queries.
- Deep expertise in Python programming, with a focus on object-oriented principles.
- Experience with distributed data processing frameworks such as Apache Spark and Apache Flink.
- Experience with AWS or other major cloud platforms.
- Proficiency in Linux/Unix shell scripting.
- Solid understanding of the big data ecosystem, including tools such as Hive, Presto, Hadoop, HDFS, Parquet, and ORC.

Salary Range
Minimum: $73,300.00
Maximum: $169,500.00

In addition to the base salary and/or on-target earnings listed, Zoom has a total direct compensation philosophy that takes into consideration base salary, bonus, and equity value.

Note: Starting pay will be based on a number of factors and commensurate with qualifications and experience. We also have a location-based compensation structure; there may be a different range for candidates in this and other locations.