aKUBE
Overview
Rate Range: Up to $82/hr on W2, depending on experience (no C2C, 1099, or subcontract)
Work Authorization: GC, USC, and all valid EADs except OPT, CPT, and H1B

Must Have:
- 4+ years of experience in big data and/or data-intensive projects
- 4+ years of hands-on Python development
- Strong experience with PySpark
- Experience with AWS services: Redshift, S3, DynamoDB, SageMaker, Athena, Lambda
- Experience with Databricks, Snowflake, and Jenkins
- Strong knowledge of data engineering practices: data pipelines, ETL, data governance, metadata management, and data lineage
- Experience with APIs, data wrangling, and advanced data transformations

Responsibilities:
- Design, build, and maintain batch and real-time data pipelines integrating first-, second-, and third-party data sources
- Lead the design and evolution of BI Delta Lake infrastructure to support analytics and reporting
- Develop data catalogs, validation routines, error logging, and monitoring solutions for high-quality datasets
- Build integrations with marketing, media, and subscription platforms to optimize KPIs
- Partner with the Data Architect to enable attribution, segmentation, and activation capabilities across business teams
- Collaborate with product, lifecycle, and marketing teams to democratize insights and improve engagement through data-driven solutions
- Coach engineers and BI team members on best practices for building large-scale, governed data platforms

Qualifications:
- Bachelor's degree in a STEM field (required)
- Excellent communicator and collaborator, able to connect technical solutions with business outcomes
- Demonstrated ability to deliver solutions under evolving data conditions
- Strong problem-solving and analytical skills with intellectual curiosity

Preferred:
- Experience with marketing technology stacks, CDPs (mParticle, Hightouch), ML platform integrations, experimentation frameworks, and front-end/full-stack development
- Familiarity with binary serialization formats (Parquet, Avro, Thrift)

Seniority level
Mid-Senior level

Employment type
Contract

Job function
Information Technology, Engineering, and Other

Industries
Information Technology & Services, Broadcast Media Production and Distribution, and Technology, Information and Media