Netspend Corporation

ML Ops Developer III

Netspend Corporation, Austin, Texas, US 78716


About the Company:

Ouro is a global, vertically integrated financial services and technology company dedicated to delivering innovative financial empowerment solutions to consumers worldwide. Ouro’s financial products and services span prepaid, debit, cross-border payments, and loyalty solutions for consumers and enterprise partners.

Ouro's flagship product, Netspend, provides prepaid and debit account solutions that connect customers with secure, convenient access to global payment networks so they can manage their money and make everyday purchases. With a nationwide U.S. retail network, customers can purchase and reload Netspend products at 130,000 reload points and more than 100,000 distribution locations.

Since Ouro's founding in 1999 by industry pioneers Roy and Bertrand Sosa, Ouro products have processed billions of dollars in transaction volume and served millions of customers worldwide. The company is headquartered in Austin, Texas with regional offices around the world. Learn more at www.ouro.com.

Role Summary

The MLOps Developer will join a centralized MLOps Engineering team responsible for productionizing machine learning and generative AI workloads at enterprise scale. The role will drive the design, automation, deployment, observability, and governance of ML and LLM platforms using AWS SageMaker and Amazon Bedrock.

This position requires close collaboration with Data Science (DS) teams to support model development, training, validation, and deployment into production. You will also be responsible for evolving and optimizing ML workflows, continuously improving automation, reliability, and security to meet emerging business and platform requirements.

Key Responsibilities

Architect, deploy, and operate development and production MLOps platforms on AWS (SageMaker, Bedrock)

Build and maintain CI/CD pipelines for ML model training and deployment

Implement Infrastructure as Code (IaC) using Terraform

Manage AWS cloud components, including IAM, VPC, EKS, Lambda, security, networking, monitoring, and compliance

Automate the end-to-end ML model lifecycle (training, deployment, endpoints, monitoring, and failure detection)

Configure and manage cloud observability (logging, alerts, and dashboards using CloudWatch and related monitoring tools)

Enable secure LLM onboarding, prompt orchestration, and governance using Amazon Bedrock

Ensure platform reliability, scalability, security, and regulatory compliance

Partner with DS and Engineering to support ML model productionization and release governance

Required Skills

Advanced proficiency in Python

Strong experience in Terraform (IaC) for AWS infrastructure automation

Hands-on knowledge of CI/CD, DevOps, and deployment governance

Experience with AWS ML/AI ecosystem: SageMaker, Bedrock, IAM, VPC, EKS, Lambda, CloudWatch, cloud security, and monitoring

Practical experience with ML model deployment, endpoints, and production support

Solid understanding of cloud security, networking, logging, and observability

Knowledge of MLOps best practices and ML system design

Preferred Qualifications

7+ years of experience in AI/ML engineering or platform roles

Experience with AWS SageMaker endpoints, pipelines, and model hosting

Experience integrating, orchestrating, or governing LLM workloads using Amazon Bedrock

Prior experience with ML deployments in production environments

Familiarity with Terraform modules and EKS-based deployments

Knowledge of ML observability, monitoring, and failure detection

Experience in FinTech or enterprise data platforms is an advantage

Why This Role Matters

This is a mission-critical engineering role that ensures ML and LLM workloads are deployed securely, reliably, and efficiently at enterprise scale. The role supports long-term AI platform evolution and accelerates scalable ML and generative AI adoption across the organization.
