Xponential Fitness
Principal Data Engineer
This range is provided by Xponential Fitness. Your actual pay will be based on your skills and experience — talk with your recruiter to learn more.
Base pay range: $165,000.00 - $195,000.00 per year
Who We Are
Xponential Fitness, Inc. (NYSE: XPOF) is a leading global franchisor of health and wellness brands. Through its mission to make boutique health and wellness experiences accessible to everyone, the Company operates a diversified platform of five brands spanning verticals including Pilates, barre, stretching, strength‑training, and yoga. In partnership with its franchisees, Xponential brands offer energetic, accessible, and personalized workout experiences led by highly qualified instructors in studio locations across 49 U.S. states, Puerto Rico, and 30 countries globally. Xponential’s portfolio of brands includes Club Pilates, StretchLab, YogaSix, Pure Barre, and BFT.
Summary
The Principal Data Engineer will lead the design and evolution of Xponential Fitness’s enterprise data architecture, enabling scalable, secure, and high‑performing data infrastructure that powers real‑time analytics, AI/ML capabilities, and strategic decision‑making across the organization. This role sits at the intersection of data strategy and engineering execution, responsible for building an integrated data ecosystem that supports Xponential’s diverse portfolio of brands and digital platforms. The Principal Data Engineer will partner cross‑functionally with leaders in AI, application development, business intelligence, and cybersecurity to ensure the right data is available, trustworthy, and actionable across the enterprise. This leader will play a critical role in advancing the company’s data modernization agenda, establishing best‑in‑class data practices, and unlocking value through insights, automation, and innovation.
Key Responsibilities
Design and implement resilient, cloud‑native data architectures supporting both batch and real‑time pipelines.
Lead the ingestion, transformation, and orchestration of data via Fivetran, Apache Airflow, and Python‑based ETL/ELT.
Standardize pipelines from member management and point‑of‑sale systems, digital platforms, and MarTech tools into a centralized lakehouse and warehouse.
Partner with software engineering teams to ensure pipelines are CI/CD‑enabled using GitHub Actions and CodePipeline.
Optimize compute, storage, and processing layers to ensure scalable, secure, and cost‑effective data operations.
Integrate modern container orchestration, caching, and task automation approaches to support data enrichment, transformation, and delivery at scale.
Leverage infrastructure‑as‑code and CI/CD pipelines to standardize deployments and reduce operational overhead.
Align data platform architecture with application and DevOps workflows to support consistent, governed, and observable services across brands and environments.
Collaborate with AI engineers to enable end‑to‑end MLOps, feature engineering pipelines, and training data provisioning.
Ensure pipelines support model retraining, scoring, and inference workloads across ECS and Lambda environments.
Prepare time‑series, transactional, and behavioral datasets for model consumption.
Define and enforce data governance policies including lineage, metadata management, and data quality rules.
Implement encryption, RBAC, and masking strategies to protect personal and sensitive business data.
Ensure infrastructure and data flows meet regulatory and contractual obligations (e.g., SOX, PCI, GDPR).
Instrument data workflows with CloudWatch, Kinesis Firehose, Sumo Logic, Sentry, and New Relic for real‑time visibility.
Tune Snowflake performance, control costs, and monitor data freshness across the platform.
Automate validation and anomaly detection to ensure continuous pipeline reliability.
Mentor data engineers, promoting best practices in scalable design, modular pipeline development, and IaC.
Lead architecture reviews and cross‑functional design sessions across data, application, and security teams.
Translate technical decisions into business impact narratives for leadership and stakeholders.
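As a concrete illustration of the validation and anomaly‑detection work described above, the following is a minimal Python sketch. The field names and thresholds are hypothetical, chosen for illustration; they do not reflect Xponential's actual schemas or tooling.

```python
from statistics import mean, stdev

# Hypothetical required schema for a transaction record
REQUIRED_FIELDS = {"member_id", "studio_id", "amount"}

def validate_rows(rows):
    """Split rows into (valid, invalid) based on required-field presence."""
    valid, invalid = [], []
    for row in rows:
        (valid if REQUIRED_FIELDS <= row.keys() else invalid).append(row)
    return valid, invalid

def flag_anomalies(values, z_threshold=3.0):
    """Return indices of values whose z-score exceeds the threshold."""
    if len(values) < 2:
        return []  # not enough data to estimate spread
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []  # all values identical; nothing to flag
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > z_threshold]
```

In practice a check like this would run as a task inside the orchestration layer (e.g., an Airflow DAG step) so that invalid or anomalous batches are quarantined before loading into the warehouse.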
Benefits
Competitive salary range of $165,000 - $195,000, depending on experience
Comprehensive Medical, Dental, and Vision benefits
Opportunities for additional bonuses
Cell phone allowance
401(k) plan with 4% company match
Complimentary corporate memberships to XPLUS and XPASS
Discounts up to 30% on retail brand merchandise
On Campus Amenities: Reborn Coffee Shop, Hangar 24, Mini Putting Green, Basketball Court, Bird Sanctuary, Car Washing Services (M/W), Dry Cleaning Services
Qualifications
10+ years of experience in data engineering, cloud architecture, or big data infrastructure, including 5+ years in a senior or leadership capacity with a track record of building scalable data platforms. Proven ability to lead complex cross‑functional initiatives and influence architectural decisions across technology and business teams.
Expertise in ELT/ETL design, real‑time streaming, data modeling, and orchestration frameworks.
Hands‑on experience with scalable compute (e.g., container‑based workloads), relational and non‑relational storage, caching systems, and infrastructure automation tools.
Proficient in tools like Snowflake, dbt, Apache Airflow, Fivetran, and orchestration via GitHub Actions or CodePipeline.
Strong skills in SQL and Python; experienced with CI/CD workflows and infrastructure‑as‑code.
Familiarity with graph‑based data modeling and platforms such as Neo4j or Amazon Neptune for relationship‑driven use cases.
Experience implementing log aggregation, container monitoring, and data pipeline observability using tools such as CloudWatch, Sumo Logic, Sentry, or New Relic.
Experience partnering with AI/ML teams to design pipelines that support model development, training, and deployment. Exposure to MLOps principles and feature engineering workflows.
Familiarity with regulatory requirements (SOX, PCI, GDPR) and best practices for data security, access control, and metadata management.
Bachelor’s or Master’s degree in Computer Science, Engineering, Information Systems, or a related field.
Seniority level: Mid‑Senior level
Employment type: Full‑time
Job function: Information Technology
Industries: Wellness and Fitness Services
Xponential Fitness LLC provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.