Darwin Recruitment
Senior Software Engineer – Scientific Data Systems
Remote (US) or Hybrid | Full-Time | Defense & Intelligence Sector
A company working at the intersection of satellites, RF detection, and data analytics is hiring a Senior Software Engineer to join their algorithms group. This team owns the pipeline that pushes massive volumes of radio frequency data through DSP and geolocation algorithms, then turns that into a product for government and commercial customers.
This isn’t a research role. It’s a production software role where code needs to be clean, memory‑efficient, tested, and reliable. The algorithms already exist. Your job is to build the plumbing that connects them together, handles large volumes of incoming data, and delivers the output in a repeatable, scalable way.
This sits in the DSP org, working directly with signal processing, ML, and data engineers. They’re looking for someone who’s built real production systems in Python and C++, has a strong grasp of computational complexity, and is comfortable navigating large‑scale scientific or time‑series data.
You’d be doing things like
Building and maintaining performant code in Python and C++
Creating reusable components that support DSP and geolocation algorithms
Handling large datasets and deploying software into a containerized, cloud‑based environment
Supporting CI/CD pipelines and writing automated tests
Collaborating across teams to keep software stable, scalable, and reliable
You’ll need
5+ years of experience writing production software in Python and C++
Familiarity with cloud environments (AWS preferred) and containerized deployments (Docker, Kubernetes)
Comfort working in Linux-based environments
Experience with standard Python libraries like NumPy, pandas, and SciPy
Solid understanding of software performance, memory, and data complexity
Experience with Git-based CI/CD (GitLab preferred)
Nice to have
Background in signal processing or experience working with RF datasets
Experience with orchestration tools like Airflow or Dagster
Familiarity with pybind11 or building C++/Python integrations
Exposure to data quality systems and validation tools
Experience deploying data or ML-based products to customers
The ideal fit
You’ve built and shipped software that handles complex data at scale. Maybe you worked at a company delivering data feeds or running data platforms as a product. You understand how to move large datasets around, and why code structure and computational cost matter when things get big. You’re not afraid to own a chunk of code and keep it running.
Adam Slade