Masscom Corporation
Timing:
Must be available between 6 PM and 12 AM IST (the remaining hours are flexible during the daytime)
Working days:
5 days a week
Experience:
5+ years of experience

About Us:
Since 2015, Masscom Corp has focused on providing high-quality services at a reasonable cost to start-ups and mid-scale businesses. While our client roster includes a few "enterprise-scale" organizations, our primary focus remains small- to medium-scale businesses, especially start-ups. Given this focus, we truly understand the day-to-day challenges of delivering technology solutions to this segment. Our extensive array of services helps our customers leverage next-generation technologies to build, transform, and manage their IT operations as a natural extension of their in-house teams. We provide a unique mix of domain expertise in cutting-edge Digital Product Engineering, Infrastructure Engineering, and Support Services to keep up with day-to-day operations.

Key Responsibilities
- Design, build, and maintain scalable ETL/ELT data pipelines using Apache Spark and Airflow
- Develop efficient data ingestion and transformation processes for structured and semi-structured data
- Implement and optimize Snowflake data models for high-performance analytics and BI reporting
- Collaborate with Data Scientists to support machine learning workflows, data preparation, and feature engineering
- Write performant, reusable Python and SQL code to support automation and pipeline orchestration
- Ensure data quality, reliability, and observability across production workflows
- Participate in architecture reviews and contribute to improving data engineering standards and best practices

Required Skills & Experience
- 5+ years of hands-on experience in data engineering roles
- Strong expertise in Apache Spark (PySpark or Scala preferred)
- Proficient in Apache Airflow for workflow orchestration
- Advanced proficiency in Python programming
- Strong command of SQL for data manipulation and analytics
- Solid experience with Snowflake or similar modern data warehouse technologies
- Experience building data pipelines in cloud-based environments (e.g., AWS, GCP, or Azure)

Nice-to-Have
- Exposure to machine learning workflows or experience supporting ML pipelines
- Familiarity with CI/CD practices, containerization (Docker), and version control (Git)
- Experience with real-time data processing or event streaming tools (e.g., Kafka)

Why Join Masscom?
- Permanent work from home
- Generous Paid Time Off (PTO), vacation, and holidays