Andiamo
Software Engineer - Interactive AI Deep-Research Platform
Andiamo, New York, New York, us, 10261
Software Engineer — Agent Orchestration & Autonomous Research (AI × Finance)
Role Overview
Help build the intelligent backbone that powers autonomous AI research for institutional finance. As a Software Engineer, you'll design the core orchestration layer that coordinates specialized agents, steers multi-step research workflows, and executes mission-critical analysis with reliability, observability, and security at scale.
Why This Role
- Foundation work: Ship the systems that enable agents to reason, collaborate, and deliver answers on real financial questions.
- High leverage: Your architecture decisions will shape how autonomous research is performed across multiple enterprises.
- Production impact: What you build will run against live data, under real SLAs, and support high-stakes decisions.
Your First 90 Days
- Design and launch a production multi-agent workflow with senior mentorship and clear success metrics.
- Stand up an agent-orchestration service (state machines, task queues, retries, circuit breakers) for a real research use case.
- Own a core subsystem end-to-end, from architecture and implementation to deployment and on-call.
- See your code power autonomous research for a top institutional client.
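To give a flavor of the reliability patterns named above (retries, circuit breakers), here is a minimal sketch in Python. The class and function names are illustrative, not part of any existing codebase; a production service would likely use a workflow engine such as Temporal instead of hand-rolling this.

```python
import time


class CircuitBreaker:
    """Opens after `max_failures` consecutive failures; rejects calls while open."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def allow(self):
        if self.opened_at is None:
            return True
        if time.monotonic() - self.opened_at >= self.reset_after:
            # Half-open: let one probe call through after the cooldown.
            self.opened_at = None
            self.failures = 0
            return True
        return False

    def record(self, ok):
        if ok:
            self.failures = 0
        else:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()


def call_with_retries(task, breaker, attempts=3, base_delay=0.01):
    """Retry `task` with exponential backoff, honoring the circuit breaker."""
    for attempt in range(attempts):
        if not breaker.allow():
            raise RuntimeError("circuit open")
        try:
            result = task()
            breaker.record(ok=True)
            return result
        except Exception:
            breaker.record(ok=False)
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

The breaker keeps a failing downstream agent from being hammered while it recovers, while the backoff absorbs transient errors without operator intervention.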
What You’ll Build
- Agent Orchestration & Workflow Engines: Coordination services that route work across specialized agents, enforce deadlines, and guarantee idempotent execution.
- Multi-Agent Architecture: Communication patterns, task delegation, result synthesis, and dynamic resource allocation across heterogeneous workloads.
- Autonomous Execution Frameworks: Long-running, multi-phase flows with automatic backoff, error recovery, and graceful degradation, while preserving human oversight.
- Model Integration & Routing: Interfaces for multiple LLM providers and domain models with fallback rules, A/B evaluation, and cost/latency budgets.
- Real-Time Data Pipelines: Event-driven ingestion for market data, filings, news, and alternative datasets that trigger agent workflows.
- Memory & Context: Vector search and knowledge graphs that maintain long-horizon context, reuse prior analysis, and improve retrieval quality.
- Enterprise-Grade Reliability: Authentication, authorization, audit logs, and compliance-ready telemetry fit for regulated customers.
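As one concrete illustration of the model routing described above, the sketch below tries providers in priority order and falls back on error or budget overrun. The provider interface (a name paired with a callable) and the function name are hypothetical, invented for this example; real integrations would go through each vendor's SDK.

```python
import time


def route_completion(prompt, providers, budget_s=2.0):
    """Try (name, call) provider pairs in order; fall back on error or budget overrun.

    `call(prompt)` is assumed to return completion text or raise on failure.
    """
    errors = []
    deadline = time.monotonic() + budget_s
    for name, call in providers:
        if time.monotonic() >= deadline:
            errors.append((name, "skipped: latency budget exhausted"))
            continue
        try:
            return name, call(prompt)
        except Exception as exc:
            errors.append((name, str(exc)))
    raise RuntimeError(f"all providers failed: {errors}")
```

A fuller router would also weigh per-provider cost and log which branch served each request for A/B evaluation.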
What We’re Looking For (Must-Haves)
- 2+ years building production-scale distributed systems or backend services.
- Strong CS fundamentals: algorithms, concurrency, systems design, and debugging multi-service architectures.
- Experience with agent frameworks or multi-agent coordination patterns.
- Fluency in Python, Node.js, or Rust; comfortable with microservices and event-driven designs.
- Hands-on experience with vector databases and semantic retrieval (e.g., Pinecone, Weaviate, Chroma).
- Track record of shipping resilient systems that support real users and tight SLAs.
- Clear technical communication with both engineers and business stakeholders.
Nice to Have
- Background building AI/ML production systems, workflow orchestration, or autonomous agent platforms.
- Familiarity with financial data sources, APIs, or enterprise integrations.
- Experience with Kafka (or similar), Kubernetes, and infrastructure-as-code.
- Mentoring or technical leadership experience; setting architectural direction.
- Startup experience or substantial side projects built from scratch.
How We Work
- Ownership: Engineers own outcomes end-to-end: design, ship, measure, iterate.
- Evidence-driven: We prioritize telemetry, experimentation, and rigorous post-incident learning.
- Safety & Reliability: Guardrails, fallbacks, and observability are first-class citizens.
Mentorship & Growth
- Weekly 1:1s with senior engineers experienced in enterprise-scale distributed systems.
- Deep architectural reviews and guidance on multi-agent/agentic system design.
- A clear path toward technical leadership and ownership of critical subsystems.
- A "learn by shipping" culture: production systems powering real research.
Representative Tech Stack
- Backend: Python, Node.js, Rust; PostgreSQL, Redis
- AI/ML: Multiple LLM providers; embeddings; vector databases
- Infrastructure: AWS, Docker, Kubernetes, Temporal, Kafka, Airflow
- Observability: Metrics, logs, traces (e.g., Datadog)
- Tooling: Git, GitHub Actions, Pulumi