Troveo AI

Software Engineer, Delivery

Troveo AI, San Francisco, California, United States, 94199

About Troveo

Troveo is building the next-generation data platform to train AI video models. We offer the world’s largest library of AI video training data—featuring millions of hours of licensed video content. Our end-to-end data pipeline connects creators, rights holders, and AI research labs, enabling scalable, compliant, and innovative uses of video for AI applications and model development.

We are an early-stage, high-growth venture backed by forward-thinking investors, and we’re seeking a deeply technical engineer to help build and optimize the backbone of our content delivery systems.

Role Overview

As a Software Engineer, Delivery, you’ll own the reliability, performance, and scalability of Troveo’s video content delivery infrastructure. This role is highly hands-on, blending systems engineering with data-centric development to ensure seamless transfer and processing of petabyte-scale video data.

You’ll work across data transport, distributed processing, and client integration layers, building efficient, fault-tolerant systems that power Troveo’s end-to-end AI data pipeline. Ideal candidates have a strong command of algorithms, concurrency, and network programming, paired with a pragmatic mindset for maintaining production-grade reliability.

Key Responsibilities

Core Delivery Engineering

Design, build, and maintain robust delivery pipelines that handle large-scale video ingestion, transformation, and distribution across distributed systems.

Optimize throughput, latency, and fault-tolerance across Troveo’s global data delivery layer.

Implement monitoring, redundancy, and recovery mechanisms to maintain system reliability at scale.

Collaborate with platform and ML teams to ensure smooth data handoffs into analytics, training, and indexing workflows.

Systems Design & Optimization

Apply strong fundamentals in algorithms, data structures, and concurrency to optimize data movement and task scheduling.

Develop and tune software for high-performance, parallel data processing and low-latency streaming workloads.

Implement and optimize both OLAP and OLTP integrations—bridging analytics warehouses and transactional databases for real-time delivery insights.

Leverage tools like Python, Go, or Node.js to build efficient services and automation frameworks.

Network & Distributed Systems

Build and maintain network-aware systems that support high-throughput video delivery using TCP/UDP, socket programming, and custom streaming protocols.

Profile, benchmark, and optimize data transmission across multi-region infrastructure.

Contribute to distributed coordination mechanisms to ensure system consistency and efficient data replication.

Reliability & Maintenance

Own production operations for delivery services—implement alerting, observability, and incident response workflows.

Partner with infrastructure engineers to scale compute and storage resources dynamically.

Drive continuous improvement in uptime, throughput, and cost efficiency.

Qualifications & Experience

4–6 years of experience in software engineering, with a focus on distributed systems, networking, or data infrastructure.

Deep understanding of algorithms, data structures, and concurrency control.

Proven experience building systems that interact with both OLAP (e.g., Snowflake, BigQuery, Redshift) and OLTP (e.g., Postgres, MySQL, DynamoDB) layers.

Strong proficiency in Python, Go, or Node.js for systems-level development.

Familiarity with network programming principles—including TCP/UDP protocols, sockets, and performance optimization for high-throughput data streams.

Experience operating within distributed, data-heavy production environments.

Clear, pragmatic communication skills; capable of collaborating closely with data, ML, and platform teams.

Nice to Have

Experience designing and implementing microservices architectures.

Familiarity with vector databases, Elasticsearch, or similar search/indexing technologies.

Exposure to modern streaming frameworks or distributed task queues (e.g., Kafka, Celery, Airflow).

Knowledge of cloud infrastructure operations (AWS preferred).

Location & Compensation

Location: Strong preference for candidates based in the San Francisco Bay Area.

Compensation: $120,000 – $160,000 base salary + meaningful equity participation.

Why Join Troveo?

Work at the cutting edge of AI, video, and distributed data infrastructure.

Build the systems that deliver and power the world’s largest AI video datasets.

Collaborate with a world-class team of engineers, researchers, and industry experts.

High autonomy, high impact—your work will directly shape Troveo’s core delivery platform.

Competitive compensation with significant equity upside.
