Progress Software Corporation
We are Progress (Nasdaq: PRGS) - the trusted provider of software that enables our customers to develop, deploy and manage responsible, AI-powered applications and experiences with agility and ease.
We’re proud to have a diverse, global team where we value the individual and enrich our culture by considering varied perspectives because we believe people power progress. Join us as an AI Engineer and help us do what we do best: propelling business forward.
You’ll work on our product, Progress Agentic RAG, a cutting-edge system that combines retrieval-augmented generation (RAG) and advanced agentic pipelines to transform how humans interact with unstructured data.
In this role, you will:
Architect and implement advanced RAG pipelines, including cutting-edge extraction, memory systems, and tool-augmented agents.
Lead the integration of LLMs with external APIs, databases, and internal knowledge graphs.
Architect and optimize multi-agent systems, ensuring agents can collaborate, delegate tasks, and interact seamlessly with both internal and external tools and APIs.
Take part in LLM fine-tuning projects, helping to plan and architect efficient, reproducible pipelines.
Mentor junior engineers and collaborate with researchers to bring cutting-edge NLP ideas into production.
Optimize performance, latency, and cost across the stack.
Own the lifecycle of NLP components: from design to deployment and monitoring.
Contribute to internal tooling.
Your Background:
Excellent communication and team collaboration skills.
7+ years of experience in software engineering, with at least 5 in NLP or ML.
Deep understanding of LLMs, Transformers, and RAG architectures.
Strong Python skills and experience with frameworks such as FastAPI and Hugging Face.
Experience deploying ML models in production through specialized inference servers such as Triton Inference Server, vLLM, SGLang, or TGI.
Proficient in managing the full ML production lifecycle, including CI/CD pipelines, containerization with Docker, orchestration using Kubernetes, and monitoring of services.
Experience with services deployed on any major cloud provider (AWS, Azure, GCP).
Familiarity with vector databases.
Excellent leadership skills.
Comfortable working in a remote-first environment.
Additionally, it would be beneficial if you have:
Strong communication and interpersonal skills (spoken and written).
Experience in building evaluation frameworks for LLM or RAG environments.
Experience with agentic frameworks or autonomous agents.
Contributions to open-source NLP or ML projects.
Passion for staying up to date with the latest NLP research.
Compensation:
Generous remuneration package
Employee Stock Purchase Plan Enrollment
Vacation, Family, and Health:
23 vacation days annually
Birthday day off
Community service time off
International Women’s Day – March 8 is an official holiday for all employees
Life and Medical Insurance