Sigmaways Inc
Overview
If you are passionate about building reliable data platforms and thrive in a fast-paced, collaborative environment, come join our dynamic team. As a Senior Big Data Engineer, you will design and scale next-generation data platforms with Spark, Hadoop, Hive, Kafka, and cloud technologies such as AWS, Azure, and Snowflake. In this role, you will develop pipelines, optimize big data ecosystems, and partner with product, engineering, and data science teams to deliver impactful business insights.
Responsibilities
Design, develop, and support data applications and platforms with a focus on Big Data/Hadoop, Python/Spark, and related technologies.
Partner with leadership to conceptualize next-generation products, contribute to the technical architecture of the data platform, resolve production issues, and drive continuous process improvements.
Collaborate closely with product management, business stakeholders, engineers, analysts, and data scientists.
Take end-to-end ownership of components, from inception through production release.
Recommend and implement software engineering best practices with enterprise-wide impact.
Ensure quality, security, maintainability, and cost-effectiveness of delivered solutions.
Contribute expertise across the full software development lifecycle (coding, testing, deployment) and mentor peers on best practices.
Stay current with emerging technologies and adapt quickly to new tools and approaches.
Qualifications
Bachelor’s degree in Computer Science, Software Engineering, or a related field.
At least 7 years of experience designing, developing, and maintaining Big Data platforms, including data lakes, operational data marts, and analytics data warehouses.
Deep expertise in Spark, Hadoop/MR, Hive, Kafka, and distributed data ecosystems.
Hands-on experience building and managing data ingestion, validation, transformation, and consumption pipelines using tools such as Hive, Spark, EMR, Glue ETL/Catalog, and Snowflake.
Strong background in ETL development with tools such as Cloudera/Hadoop, NiFi, and Spark.
Proficiency with SQL databases (PostgreSQL, MySQL/MariaDB).
Solid understanding of AWS and Azure cloud infrastructure, distributed systems, and reliability engineering practices.
Experience with infrastructure-as-code and CI/CD tools, including Terraform, Jenkins, Kubernetes, and Docker.
Familiarity with APIs and integration patterns.
Strong programming skills in Python and shell scripting.