PlayOn! Sports

Staff Software Engineer, Data

PlayOn! Sports, Alpharetta, Georgia, United States, 30239


Overview

As a Staff Software Engineer on our Data Platform, you’ll be the senior IC who sets the bar for real‑time, reliable, and cost‑efficient data systems that fuel short‑form video generation, recommendations, and executive‑grade analytics. You’ll architect and build end‑to‑end data products—from streaming ingestion and Flink SQL transforms to Snowflake analytical models and SQLmesh validations—that serve both ML/AI and BI needs with clear SLAs. You’ll harden contracts, lineage, and data quality; optimize cost and latency; and lead design reviews, mentoring senior engineers while partnering with DS/AI Engineering, BI/Analytics, and brand teams across NFHS Network, GoFan, and MaxPreps.

You combine deep systems thinking with pragmatic delivery. You’re fluent in SQL and Python, confident with Kafka + Flink (or equivalent), and at home modeling messy real‑world domains like sports events, tickets, subscriptions, and ad yield. You write crisp design docs, ship iteratively, instrument everything, and translate ambiguous business goals into measurable outcomes.

The Outcomes You’ll Deliver

- Reduce end‑to‑end latency for priority streaming pipelines (e.g., scoreboard → short‑form video features) with clearly defined SLAs and runbooks.
- Establish data contracts, tests, and lineage for one high‑impact domain (e.g., ticketing funnels or subscription attribution), enabling new BI/ML use cases.
- Deliver measurable cost/performance wins across Snowflake/Kafka/S3 (e.g., storage compaction, task scheduling, partitioning) with transparent dashboards.

In This Role, You Can Expect To

- Design and own real‑time and batch pipelines (Confluent Cloud/Kafka, Flink SQL, Snowflake, SQLmesh) with reliability and observability built in.
- Model domain‑rich schemas for sports/video/ticketing/subscriptions/ads that serve both analytics and ML feature needs.
- Implement data quality, contracts, testing, and lineage (dbt tests, Great Expectations, OpenLineage/Marquez, or similar).
- Drive SLOs, alerting, and remediation playbooks to minimize data downtime and MTTR; lead blameless postmortems.
- Scale event ingestion/CDC from sources like GA4/Segment, app stores, payments (Stripe), and third‑party sports systems; manage schema evolution.

To Thrive In This Role, You Have
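To illustrate the data-contract and testing work described above, here is a minimal, library-free sketch. The record fields and rules are hypothetical examples, not PlayOn!'s actual schemas; production pipelines would use a framework such as Great Expectations or dbt tests.

```python
# Hypothetical contract for a ticket-sale event (illustrative only).
TICKET_SALE_CONTRACT = [("ticket_id", str), ("event_id", str), ("price_cents", int)]

def validate(record: dict) -> list[str]:
    """Return a list of contract violations for one raw record."""
    errors = []
    for field, expected_type in TICKET_SALE_CONTRACT:
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            errors.append(f"bad type for {field}: expected {expected_type.__name__}")
    # Domain rule: prices are never negative.
    if isinstance(record.get("price_cents"), int) and record["price_cents"] < 0:
        errors.append("price_cents must be non-negative")
    return errors

good = {"ticket_id": "t1", "event_id": "e1", "price_cents": 1500}
bad = {"ticket_id": "t2", "price_cents": -5}
print(validate(good))  # []
print(validate(bad))   # ['missing field: event_id', 'price_cents must be non-negative']
```

Running checks like these at the ingestion boundary is what makes downstream SLAs and lineage claims trustworthy: violations are caught and quarantined before they reach analytical models.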

- 8–12+ years in data engineering (or equivalent) with ownership of mission‑critical pipelines and data products at scale.
- Expert SQL and strong Python; deep knowledge of incremental processing, partitioning, compaction, and schema design.
- Hands‑on experience with real‑time systems (Kafka + Flink or similar) and modern ELT in a warehouse (Snowflake preferred).
- A proven reliability practice: contracts, tests, lineage, SLAs/SLOs, and effective incident response.
- Experience modeling complex domains and enabling both BI and ML/feature‑store use cases.
- Clear, concise design docs and stakeholder communication; the ability to lead cross‑team initiatives.

How You Play
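As a sketch of the incremental-processing pattern named in the requirements: process only rows newer than a high watermark, then advance it. This is a simplified illustration with assumed field names; a real pipeline would persist the watermark durably and handle late-arriving data.

```python
def process_increment(rows: list[dict], watermark: int) -> tuple[list[dict], int]:
    """Select rows newer than the watermark; return (new_rows, new_watermark)."""
    new_rows = [r for r in rows if r["updated_at"] > watermark]
    new_watermark = max((r["updated_at"] for r in new_rows), default=watermark)
    return new_rows, new_watermark

rows = [
    {"id": 1, "updated_at": 10},
    {"id": 2, "updated_at": 20},
    {"id": 3, "updated_at": 30},
]
batch1, wm = process_increment(rows, watermark=0)  # picks up all three rows
rows.append({"id": 4, "updated_at": 40})
batch2, wm = process_increment(rows, wm)           # picks up only id 4
```

The same idea underlies incremental models in SQLmesh/dbt and CDC consumption from Kafka: each run reads only what changed since the last committed position, keeping cost proportional to new data rather than total data.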

Ownership over Participation

You take responsibility for achieving holistic outcomes, prioritize key objectives, and adapt quickly when situations require a different approach. You follow through even against the toughest challenges.

Team over Stars

You are a bridge builder, establishing processes and relationships with teams outside your own. You work to rally around common goals, find win‑win solutions, compromise when necessary, and help others succeed.

Growth over Comfort

You are driven by a desire to grow and actively seek opportunities to expand your comfort zone, skills, and confidence. You embrace new challenges with curiosity, accepting discomfort and failure as opportunities to learn.

Fairness over Popularity

You approach decisions with a scientist’s mindset, challenging your assumptions and remaining objective. You consider long‑term impact rather than relying on short‑term gains, proactively seek others’ perspectives, and manage emotions in decision‑making.
