Delve
About Delve
The high-stakes world of public affairs runs on intelligence, but the way professionals track issues and anticipate risks hasn’t kept up with the speed of modern decision-making—until now.

At Delve, we’re pioneering AI-driven public affairs intelligence, helping teams move faster, cut through the noise, and stay ahead of the conversation. No more fragmented research. No more outdated workflows. Just powerful, AI-enhanced insights that drive real impact.

We’re out of stealth and scaling fast. Some of the most influential firms in public affairs are already using our platform, and we’re backed by top leaders and firms in public affairs and tech.

This is your chance to be part of something big—not just another AI company, but a team redefining how mission-critical knowledge work gets done. We move fast, solve with AI first, lean into curiosity, and build with precision and transparency. If that sounds like the kind of team you want to be part of, now is the time to join us.

Description
You're a systems-minded engineer who thrives on building the data infrastructure behind AI-driven products. You get excited by large-scale scraping, scalable pipelines, and optimizing backends for performance and reliability. You understand how data flows through systems, how to make it usable at scale, and how to build cloud-native platforms that don't break under pressure.

At Delve, we’re building the next generation of public affairs intelligence—helping teams track issues, anticipate risks, and make smarter decisions with less manual work. As our Data Engineer, you’ll architect and own the ingestion, transformation, and delivery infrastructure that powers our AI.

What You’ll Do
Build & Optimize Data Ingestion Systems
- Design and maintain high-volume web scraping and data ingestion pipelines.
- Implement robust web scraping frameworks to pull structured and unstructured data from diverse sources.
- Handle rate-limiting, retries, deduplication, and normalization at scale.

Build and Scale Data Pipelines
- Develop scalable ETL workflows for processing and transforming large datasets.
- Automate data ingestion, storage, and retrieval processes.
- Optimize pipeline performance for speed, cost efficiency, and reliability.

Own the Data Infrastructure
- Build and manage data storage infrastructure and vector databases optimized for retrieval.
- Deploy APIs and services that deliver enriched data to AI models and applications.
- Implement monitoring and alerting to ensure system performance and availability.
- Ensure data flows efficiently through our pipelines into downstream systems.

Enrich & Structure Data for AI Models
- Collaborate with AI/ML engineers to support high-accuracy retrieval performance.
- Normalize and format data to support chunking strategies for embedding.
- Implement data cleaning and enrichment techniques to improve data usability.

Who You Are
Engineer First, Data Always – You have 3+ years of backend or infrastructure engineering experience, ideally with data-intensive applications.
Python Native – You write clean, modular Python and know your way around the standard data stack.
Scraping Pro – You’ve worked with tools like Scrapy, Playwright, or custom-built ingestion systems.
ETL & API Savvy – You know how to move, clean, and serve data efficiently and reliably.
Cloud-Literate – You’re fluent in AWS and understand modern cloud-native architecture.
Curious and Collaborative – You ask good questions, move fast with purpose, and enjoy solving hard problems as part of a team.

Bonus Points If You:
- Have worked on vector databases (e.g., FAISS, Pinecone, pgvector) and LLM data pipelines.
- Know how to optimize for chunking, tokenization, or embedding strategies.
- Have experience with Django or similar Python web frameworks.
- Have built AI-powered SaaS products or search/retrieval platforms.

Why Join Us
Shape the Backbone of an AI Platform – Your systems will power the insights public affairs teams rely on.
Build With a Mission-Driven Team – We’re engineers, analysts, and policy pros working together to build tech that matters.
Operate Like an Owner – You’ll have autonomy, impact, and a direct line to leadership.
Hybrid Flexibility, Competitive Comp – $100,000–$180,000 salary range, stock options, benefits, and a hybrid schedule from our D.C. office.

If you want to build elegant systems that empower AI-driven insights and solve complex data infrastructure problems, we want to hear from you.