DatologyAI

Software Engineer, Cloud Infrastructure

DatologyAI, Redwood City, California, United States, 94061

About the Company

Models are what they eat. But a large portion of training compute is wasted on data that has already been learned, is irrelevant, or is even harmful, leading to worse models that cost more to train and deploy.

At DatologyAI, we’ve built a state-of-the-art data curation suite to automatically curate and optimize petabytes of data to create the best possible training data for your models. Training on curated data can dramatically reduce training time and cost (7-40x faster training, depending on the use case), dramatically increase model performance (as if you had trained on >10x more raw data) without increasing the cost of training, and allow smaller models with fewer than half the parameters to outperform larger models despite using far less compute at inference time, substantially reducing the cost of deployment. For more details, check out our recent blog posts sharing our high-level results for text models and image-text models.

We raised a total of $57.5M in two rounds, a Seed and a Series A. Our investors include Felicis Ventures, Radical Ventures, Amplify Partners, Microsoft, Amazon, and AI visionaries like Geoff Hinton, Yann LeCun, Jeff Dean, and many others who deeply understand the importance and difficulty of identifying and optimizing the best possible training data for models. Our team has pioneered this frontier research area and has the deep expertise in both data research and data engineering necessary to solve this incredibly challenging problem and make data curation easy for anyone who wants to train their own model on their own data.

This role is based in Redwood City, CA. We are in the office 4 days a week.

About the Role

We’re looking for an experienced Cloud Infrastructure Engineer to join our core team at DatologyAI. In this role, you will lead the design, build, and operation of highly available, secure, and scalable cloud infrastructure that powers our training, inference, and data curation pipelines. You’ll work closely with engineering, research, and product teams to define how we deploy and manage compute resources across AWS and other cloud providers. As a key early hire, you’ll have the opportunity to make a deep technical and cultural impact.

What You’ll Work On

Architect and maintain our multi-cloud infrastructure (primarily AWS, potentially Azure/GCP), with a focus on reliability, security, and scalability

Define and implement infrastructure-as-code best practices using Terraform, CloudFormation, Pulumi, and similar technologies

Design and manage Kubernetes-based systems for model training, inference, and data processing workloads

Optimize our CI/CD pipelines and streamline deployment of services across environments

Build monitoring, alerting, and logging systems to ensure high system availability and observability

Collaborate with research and engineering teams to provide infrastructure support for training large-scale ML models

Ensure our infrastructure supports various deployment models (cloud, on-prem, hybrid) for enterprise use cases

Drive cost-efficiency strategies across compute and storage resources

Respond to and resolve infrastructure-related incidents with a sense of ownership and urgency

About You

You’ve led or helped build robust infrastructure systems at a startup or fast-moving engineering organization

Deep experience working with cloud providers (especially AWS), ideally with exposure to multi-cloud or hybrid-cloud setups

Strong with Kubernetes, Terraform, and containerized architectures

Confident with systems-level debugging: networking issues, memory leaks, resource bottlenecks, etc.

Comfortable writing clean, maintainable scripts in Bash, Python, or Go

You care deeply about building secure and scalable systems and take pride in reliable infrastructure

You’re collaborative, humble, and ready to own high-impact projects end-to-end

Nice to Have

Experience supporting infrastructure for ML workloads (training pipelines, inference clusters, GPU orchestration)

Built or scaled infrastructure for teams working with large-scale datasets

Exposure to cost monitoring and optimization tools in cloud environments

Background supporting compliance and security in enterprise deployments

Compensation

At DatologyAI, we are dedicated to rewarding talent with a highly competitive salary and significant equity. The salary for this position ranges from $180,000 to $250,000.

The candidate's starting pay will be determined based on job-related skills, experience, qualifications, and interview performance.

We offer a comprehensive benefits package to support our employees' well-being and professional growth:

100% covered health benefits (medical, vision, and dental).

401(k) plan with a generous 4% company match.

Unlimited PTO policy.

Annual $2,000 wellness stipend.

Annual $1,000 learning and development stipend.

Daily lunches and snacks provided in our office!

Relocation assistance for employees moving to the Bay Area.
