Insight Global
We are in search of a Software Engineer to help build the next generation of intelligent infrastructure for the Studio Digital Supply Chain. This role blends backend software engineering, workflow orchestration, and AI to deliver agentic platforms that support workflows such as mastering, localization, and delivery.
This is a hands-on engineering role that demands strong execution, adaptability, and comfort working at the pace of evolving AI technologies.
Key Responsibilities
Design & Build AI Middleware: • Build API gateways, wrappers, and middleware services that expose studio systems and data to AI agents. • Integrate tools, metadata, and content systems through the Model Context Protocol (MCP). • Standardize how agents interface with internal services across the supply chain.
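As a rough illustration of this kind of middleware (not a description of any existing studio system), the sketch below wraps a hypothetical internal metadata endpoint as an MCP tool using the MCP Python SDK's FastMCP helper; the service URL, tool name, and response fields are placeholder assumptions.

```python
# Illustrative only: exposes a hypothetical internal title-metadata endpoint as an
# MCP tool so agents can call it through a standard interface. The URL, tool name,
# and fields are placeholders, not real studio systems.
import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("studio-metadata-gateway")

@mcp.tool()
def get_title_metadata(title_id: str) -> dict:
    """Fetch mastering/localization metadata for a title from an internal API."""
    resp = httpx.get(f"https://metadata.internal.example/titles/{title_id}", timeout=10.0)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    # Serves the tool over stdio so an agent runtime can discover and invoke it.
    mcp.run()
```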
Build Agentic Workflows: • Leverage agentic platforms such as LangGraph or LangChain to manage tool execution, context routing, and decision logic. • Implement tracing, audit logging, and governance checkpoints including human approval and fallback paths. • Integrate orchestration into media workflows such as mastering, localization, and delivery.
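As a rough illustration of the orchestration pattern (a sketch with assumed state fields, node logic, and QC threshold, not a prescribed design), the example below uses LangGraph to route a low-confidence QC result to a human-review checkpoint before delivery.

```python
# Illustrative only: a tiny LangGraph workflow with a governance checkpoint.
# Node logic, state fields, and the QC threshold are placeholder assumptions.
from typing import TypedDict
from langgraph.graph import StateGraph, END

class DeliveryState(TypedDict):
    title_id: str
    qc_score: float
    approved: bool

def run_qc(state: DeliveryState) -> dict:
    # Placeholder for an automated QC step (e.g., checking a localized master).
    return {"qc_score": 0.93}

def human_review(state: DeliveryState) -> dict:
    # Placeholder for a human-approval checkpoint before delivery proceeds.
    return {"approved": True}

def deliver(state: DeliveryState) -> dict:
    return {"approved": True}

def route_after_qc(state: DeliveryState) -> str:
    # Low-confidence results fall back to a human reviewer instead of auto-delivering.
    return "deliver" if state["qc_score"] >= 0.9 else "human_review"

graph = StateGraph(DeliveryState)
graph.add_node("run_qc", run_qc)
graph.add_node("human_review", human_review)
graph.add_node("deliver", deliver)
graph.set_entry_point("run_qc")
graph.add_conditional_edges("run_qc", route_after_qc, {"deliver": "deliver", "human_review": "human_review"})
graph.add_edge("human_review", "deliver")
graph.add_edge("deliver", END)

app = graph.compile()
print(app.invoke({"title_id": "tt-0001", "qc_score": 0.0, "approved": False}))
```

In practice, the human-review node would pause for an actual approver and emit trace and audit events rather than auto-approving as it does in this sketch.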
Productize and Optimize AI Services: • Translate prototypes into production-ready microservices using Java, TypeScript, or Python, deployed via Kubernetes and managed through CI/CD pipelines. • Implement event-driven services using Kafka, EventBridge, or SNS/SQS to support loosely coupled, fault-tolerant communication across services. • Build and maintain APIs with clear contracts, emphasizing reliability, observability, and long-term maintainability.
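As a rough illustration of the event-driven style (a sketch assuming kafka-python, a placeholder topic and event schema, and a local broker), the example below shows a mastering service emitting an event that a downstream localization service consumes independently.

```python
# Illustrative only: a loosely coupled event flow using kafka-python. The topic
# name, event schema, and broker address are placeholder assumptions.
import json
from kafka import KafkaConsumer, KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
# A mastering service emits an event when a new master is ready...
producer.send("masters.ready", {"title_id": "tt-0001", "locale": "en-US"})
producer.flush()

# ...and a downstream localization service consumes it on its own schedule.
consumer = KafkaConsumer(
    "masters.ready",
    bootstrap_servers="localhost:9092",
    group_id="localization-service",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
)
for message in consumer:
    print("queue localization for", message.value["title_id"])
    break  # single message is enough for this sketch
```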
Lead, Collaborate, and Mentor: • Work closely with ML engineers, platform teams, and product managers to deliver business-ready AI capabilities. • Translate defined requirements into production-ready code. • Participate in sprint planning, standups, and delivery checkpoints.
We are a company committed to creating diverse and inclusive environments where people can bring their full, authentic selves to work every day. We are an equal opportunity/affirmative action employer that believes everyone matters. Qualified candidates will receive consideration for employment regardless of their race, color, ethnicity, religion, sex (including pregnancy), sexual orientation, gender identity and expression, marital status, national origin, ancestry, genetic factors, age, disability, protected veteran status, military or uniformed service member status, or any other status or characteristic protected by applicable laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please send a request to HR@insightglobal.com. To learn more about how we collect, keep, and process your private information, please review Insight Global's Workforce Privacy Policy: https://insightglobal.com/workforce-privacy-policy/.
Required Skills & Experience • Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field. • 5+ years of backend or systems engineering experience, with at least 1 year building AI-powered or workflow-centric platforms. • Proficient in Java, Python, or Node.js, with a deep understanding of testing, observability, and systems at scale. • Production experience with cloud platforms such as AWS, and container orchestration with Docker and Kubernetes. • Hands-on experience deploying infrastructure via Terraform, CDK, or similar infrastructure-as-code frameworks. • Experience building or integrating workflow engines, ideally in LLM or GenAI contexts. • Clear, structured communication skills and a track record of technical leadership in cross-functional teams.
Nice to Have Skills & Experience • Experience with LLM frameworks and APIs such as LangGraph, LangChain, OpenAI, or Anthropic. • Background in media, entertainment, or content operations, including localization or post-production workflows. • Experience building multi-tenant or shared platform services. • Exposure to Agile delivery practices and DevOps automation at scale. • Contributions to GenAI open-source projects or internal AI/ML infrastructure tooling.
Benefit packages for this role will start on the 31st day of employment and include medical, dental, and vision insurance, as well as HSA, FSA, and DCFSA account options, and 401k retirement account access with employer matching. Employees in this role are also entitled to paid sick leave and/or other paid time off as provided by applicable law.