Atlanta Hawks

Analytics Engineer

Atlanta Hawks, Atlanta, Georgia, United States, 30383


Overview

We are a professional basketball team and a state-of-the-art arena/entertainment venue that specializes in creating memorable experiences for each guest. We live for the fast-paced world of sports and live entertainment, and we strive to deliver wonderful experiences that create lasting memories. We look for teammates who share the same enthusiasm and excellence.

Who Are You?

An enthusiastic lover of sports, live entertainment, and people. You have a true passion for engaging in meaningful interactions and creating memorable experiences for all guests. You strive to be helpful, engaging, and knowledgeable about the Atlanta Hawks and State Farm Arena. You enjoy being part of an exciting and dynamic group and are committed to continuously enhancing the productivity and effectiveness of your team. You work hard and celebrate hard, and you strive for guests to be positively impacted by their interactions with you.

Analytics Engineer

The Analytics Engineer is responsible for designing, developing, and maintaining our data models, ensuring that our enterprise data architecture is robust, scalable, and aligned with business needs. Sitting at the intersection of data engineering and business intelligence, the Analytics Engineer will convert raw, ingested data into clean, curated, and governed datasets that serve as the foundation for reporting, analytics, marketing activation, and operational decision-making across the company.

In this position, you will play a key role in developing the data transformation layer within our warehouse. You will define and maintain business logic, build reusable data models, and ensure that metrics provided through semantic models are reliable, consistent, and thoroughly documented. This includes enabling activation workflows by integrating with Customer Data Platforms (CDPs) and reverse ETL tools, ensuring data can flow back into the business applications where it drives customer engagement and business value.

Key Responsibilities

Data Modeling & Transformation

- Design and implement robust data models that capture business processes in a way that is technically efficient and intuitive, supporting both operational and analytical applications.
- Architect transformations that normalize and organize ingested data into curated, dimensional structures (fact and dimension tables, incremental models, semantic layers).
- Continuously optimize queries and transformations to balance performance, cost efficiency, and usability at scale.
- Implement development workflows using tools like dbt, ensuring maintainable, version-controlled transformations (see the sketch below).
- Assist in building semantic layers or standardized data marts that enable consistent reporting across BI and analytics platforms.
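For an illustrative picture of this workflow, here is a minimal sketch of a dbt incremental model; the model, source, and column names (fct_ticket_sales, stg_ticket_sales, loaded_at) are hypothetical examples, not systems referenced in this posting:

```sql
-- models/marts/fct_ticket_sales.sql (hypothetical model name)
-- A minimal dbt incremental model: on scheduled runs, only rows newer
-- than the current max(loaded_at) in the target table are reprocessed.

{{ config(
    materialized='incremental',
    unique_key='ticket_sale_id'
) }}

select
    ticket_sale_id,
    event_id,
    customer_id,
    sale_amount,
    loaded_at
from {{ ref('stg_ticket_sales') }}

{% if is_incremental() %}
  -- on incremental runs, filter to rows loaded since the last build
  where loaded_at > (select max(loaded_at) from {{ this }})
{% endif %}
```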

Dimensional Modeling & Warehouse Design

- Apply well-established modeling methodologies (Kimball, Inmon, or hybrid) to create fact and dimension tables that are intuitive to explore and flexible for BI and operational use cases.
- Design data structures that account for slowly changing dimensions, surrogate keys, and evolving hierarchies (see the snapshot sketch below).
- Optimize warehouse schemas to maximize performance of common queries and dashboards while maintaining scalability.
- Provide guidance on how analysts and BI engineers should access curated data to maximize usability and efficiency.
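One common way to handle slowly changing dimensions in a dbt workflow is a snapshot, which implements Type 2 history tracking. A minimal sketch, assuming a hypothetical CRM source and columns:

```sql
-- snapshots/customers_snapshot.sql (hypothetical source and columns)
-- dbt snapshots implement SCD Type 2: when a watched column changes,
-- the existing row is end-dated and a new current row is inserted.

{% snapshot customers_snapshot %}

{{ config(
    target_schema='snapshots',
    unique_key='customer_id',
    strategy='check',
    check_cols=['email', 'postal_code', 'ticket_plan_tier']
) }}

select
    customer_id,
    email,
    postal_code,
    ticket_plan_tier
from {{ source('crm', 'customers') }}

{% endsnapshot %}
```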

Data Cleaning, Enrichment, & Quality Assurance

- Standardize raw input data from multiple ingestion pipelines, resolving schema inconsistencies, handling null values, normalizing dimensions, and reconciling discrepancies across systems (see the staging-model sketch below).
- Enhance raw data through enrichment pipelines (e.g., mapping IDs, joining reference data, and appending third-party sources) to deliver richer datasets for business teams.
- Build a systematic approach to data quality monitoring within the warehouse, including checks for completeness, freshness, anomalies, duplicates, and referential integrity.
- Proactively surface quality issues to upstream data engineers or source system owners and drive resolution workflows.
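As an illustrative sketch of this kind of standardization, a staging model might trim and normalize identifiers, fill null dimensions with an explicit "unknown" member, and deduplicate on the natural key; the source and column names here are hypothetical:

```sql
-- models/staging/stg_ticket_sales.sql (hypothetical staging model)
-- Standardizes one raw feed: normalizes identifiers, fills null
-- dimensions, and deduplicates on the natural key, keeping the most
-- recently loaded row per ticket_sale_id.

with ranked as (

    select
        trim(lower(ticket_sale_id)) as ticket_sale_id,
        coalesce(nullif(trim(channel), ''), 'unknown') as channel,
        sale_amount,
        loaded_at,
        row_number() over (
            partition by trim(lower(ticket_sale_id))
            order by loaded_at desc
        ) as rn
    from {{ source('ticketing', 'raw_ticket_sales') }}

)

select ticket_sale_id, channel, sale_amount, loaded_at
from ranked
where rn = 1
```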

Testing, Documentation, & Governance

- Implement automated data testing frameworks to validate assumptions and prevent issues from reaching production (e.g., uniqueness constraints, non-null checks, schema integrity, reference checks); see the test sketch below.
- Integrate testing and validation into CI/CD workflows, ensuring code changes are reviewed and tested before release.
- Document data models, transformations, lineage, and business logic in internal wikis/tools and self-serve documentation systems like dbt Docs.
- Establish and enforce governance standards around naming conventions, folder structures, code review practices, and data definitions.
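In dbt, uniqueness and non-null checks of the kind mentioned above are typically declared as generic tests in YAML, while custom assertions can be written as singular SQL tests. A minimal sketch of the latter, reusing the hypothetical model from earlier:

```sql
-- tests/assert_sale_amount_non_negative.sql (hypothetical singular test)
-- dbt runs this query during `dbt test`; any rows returned are
-- reported as failures, so the build breaks on negative sale amounts.

select
    ticket_sale_id,
    sale_amount
from {{ ref('fct_ticket_sales') }}
where sale_amount < 0
```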

Cross-Functional Collaboration & Enablement

- Work closely with Data Engineers to align on pipeline designs, ensuring ingestion delivers the raw elements needed to build reliable transformations.
- Partner with BI Engineers and business analysts to understand reporting needs and build curated datasets that shorten the path to insight.
- Act as a domain expert for business metrics, advising stakeholders on dataset usage and metric design trade-offs.
- Contribute to building a culture of data literacy by creating self-service data products that empower stakeholders to explore data with minimal technical barriers.

Requirements

- Strong proficiency in SQL and experience with ELT transformation tools (e.g., dbt).
- Deep understanding of dimensional modeling and warehouse design principles.
- Experience with modern cloud data platforms (Databricks, Snowflake, BigQuery).
- Familiarity with business intelligence tools (Tableau, Looker, Power BI) and their data integration patterns.
- Exposure to Customer Data Platforms (CDPs) and/or reverse ETL workflows, and to how curated warehouse data can be activated in marketing, sales, or customer engagement platforms.
- Knowledge of testing frameworks, Git-based workflows, and CI/CD for analytics.
- Strong communication skills for bridging technical and business audiences.

Preferred Qualifications

- Hands-on experience with data orchestration tools (Airflow, Dagster, Prefect).
- Familiarity with reverse ETL tools (Hightouch, Census) or CDPs (Segment, RudderStack, mParticle, Amperity).
- Exposure to Python or similar languages for lightweight data manipulation and automation.
- Background in data governance, data cataloging, or enterprise data management practices.
- Experience working in a fast-paced SaaS or product-driven organization.

We are an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, sex, sexual orientation, age, disability, gender identity, marital or veteran status, or any other protected class. If this opportunity looks exciting to you, please complete the application process. Go Hawks!
