Remote (worldwide)

Staff Data Engineer

Remote (worldwide) | Snowflake, Arizona, United States, 85937


Hi there! We’re looking for a Staff Data Engineer to take ownership of the data models and infrastructure that power analytics across our company. This is our first dedicated data engineering hire and a high-impact opportunity to shape our semantic layer, streamline our modern data stack, and unlock self-service analytics for the entire business. You’ll be the go-to person for building a high-trust, governed data foundation that enables everyone, from executives to product teams, to make smarter, faster decisions.

What we value

- High agency, high impact: you’re energized by building and improving, not just maintaining the status quo
- Collaboration: you thrive on partnering with analysts, product teams, and data scientists to turn innovative ideas into reality
- Pragmatism: you balance speed with scalability, building solutions that work today and scale over the long haul
- Continuous improvement: you’re always looking for ways to make models cleaner, pipelines more efficient, and governance stronger
- Curiosity: you want to learn, experiment, and evolve the data stack as the company and its products grow
- AI-ready data: you build pipelines and models that not only serve today’s reporting needs but are structured and governed to support tomorrow’s machine learning and AI use cases

What you’ll do

- Own and evolve our semantic layer: design, document, and optimize dbt models that drive business KPIs and enable self-service analytics
- Administer and improve our data stack (Snowflake, dbt, Sigma, Stitch, Fivetran), ensuring reliability, scalability, and best practices
- Partner with analytics and product teams to deliver trusted, AI-ready data that supports both internal decision-making and new product features
- Lead improvements in efficiency and governance: streamline dbt runtimes, refactor inefficient SQL, reduce Snowflake costs, and implement RBAC and documentation standards
- Evaluate and shape the future of our orchestration and data infrastructure, including opportunities for front-end data apps (e.g., Streamlit)

What we’re looking for

- 5–8 years of hands-on experience as a Data or Analytics Engineer in a fast-paced, high-growth environment
- Must-have: advanced SQL, dbt, and Python; you’re comfortable writing complex queries, building robust models, and automating workflows
- Experience with Snowflake administration and optimization
- Familiarity with ETL tools (such as Stitch and Fivetran) and with modeling and orchestration frameworks (such as dbt Cloud and Airflow)
- GCP experience is nice to have; Terraform experience is a plus
- A track record of building and improving data systems, not just operating them
- Excitement about joining early, taking ownership, and having a direct impact on the company’s growth

What success looks like in this role

Within your first 3–6 months, you will:

- Deliver clean, documented dbt models that define core business metrics
- Ensure reliable pipelines from key systems into Snowflake
- Improve query performance and reduce data costs
- Establish baseline governance and documentation standards
- Enable analysts to build trusted dashboards on top of consistent data
- Lay the groundwork for AI/ML by structuring data for future use cases

About the Data Team

We’re building the data foundation to power the next phase of our company’s growth. With a modern stack already in place (Snowflake, dbt, Sigma, Fivetran/Stitch), we’re ready to take things to the next level: a governed, documented, and trusted semantic layer, a world-class self-service analytics experience, and a data platform that scales with our AI-first future.

Email, push notifications, text messages, in-app messages, webhooks: automated and powered by your data.
