
Senior Data Engineer

Way, Austin, Texas, US 78716


Headquartered in Austin, Texas, with its EMEA HQ in Paris, Way is the category‑leading B2B technology platform empowering brands to unlock the power of experiences. In a world where 76% of consumers prefer spending on experiences over material goods, Way enables brands to adapt to this shift with cutting‑edge technology.

Founded in 2020, Way began as a solution for hospitality brands to drive brand loyalty and generate experiential revenue at scale. Industry leaders like Hyatt Hotels, Hilton, Trailborn, and Auberge Resorts Collection rely on Way’s all‑in‑one experiential platform to launch unforgettable experiences — from hot air balloon rides in Mexico City to truffle hunting in the French countryside.

Way has achieved significant milestones, including a $20 million Series A funding round in late 2022, led by Tiger Global and MSD Capital (Michael Dell), at a $100 million valuation. As the company continues its rapid growth, we’re seeking visionary, driven team players to join our dynamic environment, where challenges are met with unmatched rewards as we transform the hospitality and experiences industry globally.

We are seeking an experienced Senior Data Engineer to establish and lead our data infrastructure as an early member of our data team. This role will be responsible for building our entire data platform from the ground up, implementing a comprehensive data lake and pipeline architecture, and establishing a culture of data‑driven decision‑making throughout our organization. The ideal candidate will serve as both a technical leader and data advocate, driving automation excellence while flexing into data science and analytics responsibilities as needed.

Key Responsibilities

Data Infrastructure & Pipeline Architecture

Design and implement a comprehensive data lake architecture using modern cloud‑native technologies

Build scalable ETL/ELT pipelines for real‑time and batch data processing across all data sources

Establish data ingestion frameworks to collect data from application APIs, third‑party services, and databases

Architect automated data quality monitoring, validation, and alerting systems (a minimal sketch of this kind of check follows this list)

Create robust data warehousing solutions optimized for analytics and business intelligence

Implement DataOps practices with automated testing and deployment pipelines (CI/CD for data)
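
To make the data quality work above concrete, here is a minimal Python sketch of an automated null-rate check of the kind this role would build out. The QualityRule class, column names, and thresholds are hypothetical illustrations, not Way’s actual schema or tooling.

```python
from dataclasses import dataclass

# Hypothetical rule for one ingested table; names and thresholds
# are illustrative only.
@dataclass
class QualityRule:
    column: str
    max_null_fraction: float

def check_nulls(rows: list[dict], rules: list[QualityRule]) -> list[str]:
    """Return human-readable violations suitable for alerting."""
    violations = []
    total = len(rows)
    if total == 0:
        return ["table is empty"]
    for rule in rules:
        nulls = sum(1 for row in rows if row.get(rule.column) is None)
        if nulls / total > rule.max_null_fraction:
            violations.append(
                f"{rule.column}: {nulls}/{total} nulls exceeds "
                f"{rule.max_null_fraction:.0%} threshold"
            )
    return violations

if __name__ == "__main__":
    sample = [
        {"booking_id": 1, "experience": "hot air balloon"},
        {"booking_id": 2, "experience": None},
    ]
    rules = [QualityRule("experience", max_null_fraction=0.25)]
    print(check_nulls(sample, rules))  # flags 50% nulls vs. 25% threshold
```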

Data Engineering & Analysis

Develop and maintain Python‑based data processing frameworks and utilities

Build automated data pipeline orchestration using Apache Airflow or similar tools (see the DAG sketch after this list)

Create streaming data processing solutions using Apache Kafka, Kinesis, or Pub/Sub

Implement infrastructure as code for all data platform components (Terraform, CloudFormation)

Establish feature stores and data models that support both operational and analytical workloads

Optimize data storage costs and query performance across the entire platform

Collaborate with product and business teams to identify key metrics, KPIs, and analytical requirements

Build automated reporting dashboards and self‑service business intelligence tools

Support predictive modeling initiatives and A/B testing frameworks
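
As a reference point for the orchestration bullet above, here is a minimal sketch of a daily ETL DAG, assuming Apache Airflow 2.4+ (which introduced the `schedule` keyword). The DAG name and task bodies are placeholders, not an actual Way pipeline.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Hypothetical task bodies; real tasks would call ingestion,
# transformation, and warehouse-load code.
def extract():
    print("pull bookings from the source API")

def transform():
    print("clean and model the day's bookings")

def load():
    print("load modeled rows into the warehouse")

with DAG(
    dag_id="daily_bookings_etl",   # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Linear dependency chain: extract, then transform, then load.
    extract_task >> transform_task >> load_task
```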

Required Experience

Minimum of 5 years of hands‑on data engineering experience with increasing responsibility

7+ years preferred in data engineering, analytics engineering, or data platform roles

Proven track record of building data systems from scratch or leading data infrastructure transformations

Experience working as a solo data engineer or in small, autonomous data teams

Required Technical Skills

Python proficiency required – demonstrated experience building data pipelines, ETL frameworks, and automation scripts

SQL expertise – advanced knowledge of complex queries, performance optimization, and data modeling

Strong experience with cloud platforms (AWS, GCP, or Azure) and their data services

Proficiency with data lake technologies (Delta Lake, Apache Iceberg, or Apache Hudi)

Experience with data orchestration tools (Apache Airflow, Prefect, Dagster, or similar)

Knowledge of streaming data technologies such as Apache Kafka, Kinesis, or Pub/Sub (a minimal consumer sketch follows this list)

Familiarity with data warehouse technologies (Snowflake, BigQuery, Redshift, Databricks)

Understanding of containerization (Docker, Kubernetes) and infrastructure as code

Experience with version control systems (Git) and collaborative development workflows
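
And for the streaming bullet above, one plausible shape of the work, sketched with the open-source kafka-python client. The topic name, broker address, consumer group, and event fields are assumptions for illustration only.

```python
import json

from kafka import KafkaConsumer  # pip install kafka-python

# Hypothetical topic and broker for illustration.
consumer = KafkaConsumer(
    "booking-events",
    bootstrap_servers="localhost:9092",
    group_id="analytics-loader",
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

# Consume indefinitely; each message is a parsed JSON event.
for message in consumer:
    event = message.value
    # A real pipeline would land these in the lake or warehouse;
    # here we just print two assumed fields.
    print(event.get("experience_id"), event.get("amount"))
```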

Preferred Qualifications

Background in real‑time analytics and event‑driven architectures

Previous experience in startup or fast‑paced environments

Understanding of data privacy regulations (GDPR, CCPA) and security best practices

Experience with performance monitoring and observability tools

Knowledge of dimensional modeling and data warehouse design patterns

Benefits

Compensation includes a highly competitive salary, generous equity, medical, dental, and vision coverage paid 100% by the company, a 401(k) plan, and other travel‑related perks

$500 annual Experience Stipend (can be used at any of our client partners)
