HyperFi
Senior Platform Engineer at HyperFi
Join to apply for the Senior Platform Engineer role at HyperFi.
We’re building the kind of platform we always wanted to use: fast, flexible, and built for making sense of real-world complexity. Behind the scenes is a robust, event‑driven architecture that connects systems, abstracts messy workflows, and leaves room for smart automation. The surface is clean and simple. The interactions are seamless and intuitive. The machinery underneath is anything but. That’s where you come in.
We’re a well‑networked founding team with strong execution roots and a clear roadmap. We’re backed, focused, and delivering fast.
We’re looking for a Senior Platform Engineer to join early — someone who knows what “production ready” really means, and who’s excited to build the infrastructure, observability, and tooling that power everything else. You’ll work closely with our engineers and CTO to design and evolve the foundation of our systems — from GCP Terraform modules to Databricks pipelines and deployment flows. This is a clean‑slate environment: no legacy, no gatekeeping, just impact. Have strong opinions? We want to hear them.
What You’ll Do
Own and evolve our Terraform‑based GCP stack — provisioning services, networking, and secrets
Design and operate CI/CD pipelines (GitHub Actions, Cloud Build) to keep deploys fast and safe
Implement and maintain observability (metrics, logs, traces, alerts) with tools like DataDog or OpenTelemetry
Partner with backend engineers to define service boundaries, infrastructure modules, and deployment patterns
Help shape Databricks and Unity Catalog infrastructure, including workspace automation and secrets management
Drive reliability practices — health checks, rollbacks, autoscaling, graceful degradation (a minimal sketch follows this list)
Build internal developer tooling that makes local development and preview environments smooth and repeatable
Be a multiplier: improve our developer experience, not just uptime
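To make the reliability bullet above concrete, here is a minimal, hypothetical sketch of a Sanic health‑check endpoint that degrades gracefully when a dependency is slow or unavailable. The names (check_db, check_pubsub, /healthz) and the timeout value are illustrative assumptions, not taken from HyperFi’s codebase.

    # Hypothetical health-check endpoint for an async Sanic service.
    # check_db / check_pubsub are placeholder probes, not real HyperFi code.
    import asyncio
    from sanic import Sanic, response

    app = Sanic("hyperfi_example")

    async def check_db() -> bool:
        # Placeholder: replace with a real connection ping (e.g. SELECT 1 against Postgres).
        return True

    async def check_pubsub() -> bool:
        # Placeholder: replace with a real broker/topic check.
        return True

    @app.get("/healthz")
    async def healthz(request):
        # Run dependency checks concurrently with a short timeout so a slow
        # dependency degrades the response instead of hanging the probe.
        try:
            db_ok, bus_ok = await asyncio.wait_for(
                asyncio.gather(check_db(), check_pubsub()), timeout=2.0
            )
        except asyncio.TimeoutError:
            db_ok = bus_ok = False

        # Treat the database as critical and the event bus as degradable,
        # so a bus outage surfaces as "degraded" rather than a hard failure.
        status = 200 if db_ok else 503
        return response.json(
            {"db": db_ok, "pubsub": bus_ok, "degraded": not (db_ok and bus_ok)},
            status=status,
        )

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8000)

In practice this kind of endpoint is what load balancers, rollbacks, and autoscaling policies key off, which is why it sits alongside the other reliability practices above.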
Tech Stack (So Far)
Infra: Terraform, GCP (Cloud Run, Pub/Sub, GKE, CloudSQL, Secrets Manager)
CI/CD: GitHub Actions, Cloud Build
Runtime: Python (Sanic + async services), Postgres, Databricks, event‑driven pipelines
Observability: OpenTelemetry, DataDog (you’ll help decide), GCP Monitoring (see the tracing sketch after this list)
Everything as code: infrastructure, schema, pipelines
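As an illustration of how the runtime and observability pieces above can fit together, here is a minimal, hypothetical OpenTelemetry tracing setup for a Python event handler. The tracer name and span attributes are assumptions, and the ConsoleSpanExporter stands in for whatever backend (DataDog, OTLP, GCP Monitoring) the team ultimately picks.

    # Hypothetical example: tracing one unit of work in an event-driven pipeline.
    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

    # Wire up a tracer provider once at service startup.
    provider = TracerProvider()
    provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer("hyperfi.pipeline.example")  # illustrative name

    def handle_event(event: dict) -> None:
        # Each event gets its own span; attributes make traces searchable later.
        with tracer.start_as_current_span("handle_event") as span:
            span.set_attribute("event.type", event.get("type", "unknown"))
            # ... business logic goes here ...

    if __name__ == "__main__":
        handle_event({"type": "demo"})

Swapping the exporter is the only change needed to move from local console output to a hosted backend, which keeps the instrumentation decision (DataDog vs. alternatives) reversible.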
How We Build
Engineers come first: your time, focus, and judgment are respected
Deep work > chaos: fixed cycles & cooldowns protect focus and keep context switching low
Autonomy is the default: trusted builders who own outcomes, no babysitters
Ship daily, safely: merge early, integrate vertically, ship often, use feature flags, and keep momentum
Outcomes over optics: solve real problems, not ticket soup
Voice matters: from week one, contribute, improve something, and shape how we build
Senior peers, no ego: collaborate in a high‑trust, async‑friendly environment
Bold problems, cool tech: work on complex challenges that actually move the needle
Fun is part of it: we move fast, but we also celebrate wins and laugh together
What We’re Looking For
6–8 years of experience in DevOps, Platform, or SRE‑type roles
Expert‑level Terraform and strong command of GCP primitives
Proven ability to own CI/CD pipelines and production deployment workflows
Hands‑on familiarity with Python services, especially async backends
Practical knowledge of observability, security, and IaC best practices
Startup‑ready mindset: pragmatic, lean, self‑directed, allergic to bureaucracy
Confident English communication for cross‑hub collaboration
Bonus If You
Have run DataDog, Prometheus, Grafana, or OpenTelemetry in production
Built ephemeral or preview environments for dev/test
Automated Databricks or Unity Catalog workspace provisioning
Designed internal CLI tools or DX improvements
Survived (and learned from) a real‑world incident or outage
Location & Compensation
Must be based in San Francisco, Las Vegas, or Tel Aviv
Full‑time role with competitive comp
Flexible hours, async‑friendly culture, engineering‑led environment