Net2Source (N2S)
Role Description
Join a horizontal engineering team supporting 600+ application teams on a mission to elevate engineering maturity across the organization. This team drives standards, guidelines, platform capabilities, and large-scale technical debt remediation.
In this role, you will develop advanced agentic AI workflows to automatically analyze codebases, detect technical debt, and generate high-quality fixes—from vulnerability patches to dependency and language upgrades. This is a hands‑on, high‑impact opportunity to shape the future of automated software modernization.
Key Responsibilities
Design, develop, and maintain LLM-powered multi‑agent workflows for code analysis, remediation proposals, and safe patch generation
Implement agentic patterns such as planning/execution loops, tool orchestration, sandboxing, guardrails, and failure recovery
Build scalable automation systems for technical debt remediation, including language/runtime upgrades, dependency modernization, vulnerability patching, and configuration drift correction
Collaborate with Developer Experience and Platform teams to define engineering standards and reusable best practices
Architect and optimize RAG pipelines, including chunking strategies, embeddings, hybrid search, reranking, and retrieval policies
Develop evaluation frameworks for LLMs, RAG, and multi‑agent workflows, including offline datasets, validation metrics, statistical testing, and A/B experiments
Contribute to backend systems using Python, distributed systems, microservices, PostgreSQL, DBT, vector databases, caching, streaming, and queueing technologies
Build CI/CD pipelines and observability dashboards, and conduct performance analysis across model, retrieval, and network layers
Work cross‑functionally with product, platform, and security teams to take prototypes to production‑grade services
Communicate effectively with stakeholders, produce high‑quality technical documentation, and mentor junior engineers
Must‑Have Qualifications
5+ years of experience building production‑grade systems with end‑to‑end ownership
Expertise in Python, software engineering best practices, testing strategies, CI/CD, and system design
Hands‑on experience shipping LLM‑powered features (e.g., autonomous workflows, function calling) with measurable reliability or latency improvements
Strong understanding of multi‑agent architectures including planners, executors, and tool routing
Deep knowledge of RAG systems: chunking, embeddings, vector/hybrid search, retrieval policies
Experience evaluating LLMs and agent workflows using statistical reasoning and validation techniques
Proficiency with AWS (Lambda, ECS/EKS, S3, API Gateway, EC2, IAM) and Infrastructure‑as‑Code
Experience with observability tools (e.g., Datadog) covering logging, tracing, and metrics
Familiarity with PostgreSQL, DBT, data modeling, schema evolution, and performance tuning
Knowledge of vector databases such as Pinecone or pgvector
Experience designing or optimizing CI/CD pipelines (GitHub Actions or similar)
Proven track record in application modernization, dependency management, and technical debt reduction
Ability to rapidly prototype, validate, and transition solutions into production
Preferred Skills
Experience designing agent infrastructure with sandboxing, tool isolation, and fail‑safe execution
Background in large‑scale platform engineering or developer experience tooling
Understanding of enterprise AI security, compliance, and privacy requirements
Strong architectural communication skills, including RFC development and technical diagramming
Attributes
Adaptable, proactive problem solver
Strong ownership mindset with excellent collaboration and communication skills
Comfortable working in fast‑paced, ambiguous R&D environments
Passionate about building high‑leverage platform capabilities that support hundreds of engineering teams
Seniority Level: Mid‑Senior level
Employment Type: Contract
Job Function: Information Technology
Industries: Banking and Financial Services