Prosum
Overview
Location: Scottsdale, AZ (Hybrid Work Schedule)
Term: Full-time / Direct Hire

Job Summary
We are seeking a skilled ML Ops Engineer with hands-on experience in AWS, Python, SQL, and Spark or PySpark to help deploy, monitor, and maintain machine learning models in production environments. The ideal candidate will have a strong background in putting models into production, automating ML workflows, and collaborating closely with data scientists and engineers to ensure scalable, reliable, and efficient ML systems.

Responsibilities
- Collaborate with data scientists and ML engineers to deploy machine learning models into production.
- Design, implement, and maintain robust CI/CD pipelines for model training, validation, and deployment.
- Monitor and maintain models in production, ensuring optimal performance and reliability.
- Manage model versioning, rollback strategies, and updates across environments.
- Build and optimize data processing pipelines using Spark or PySpark.
- Write production-ready code in Python for ML workflows and automation tasks.
- Develop and maintain scalable data workflows using SQL and AWS-native data tools.
- Utilize AWS services (such as S3, SageMaker, Lambda, ECS, EMR, Glue) for model deployment and data engineering tasks.
- Work closely with cross-functional teams to ensure models meet business and performance requirements.
- Troubleshoot and resolve issues in ML pipelines and production environments.

Required Qualifications
- 3+ years of experience in ML Ops, Data Engineering, or Machine Learning Engineering roles.
- Proven experience putting models into production and maintaining them post-deployment.
- Proficiency in Python, including ML libraries and scripting.
- Strong expertise in SQL and working with large datasets.
- Hands-on experience with Spark or PySpark in distributed data processing environments.
- Solid understanding of, and experience with, AWS cloud services relevant to ML Ops.
- Experience with containerization (e.g., Docker) and orchestration tools (e.g., Kubernetes, Airflow).
- Familiarity with monitoring and logging tools for model observability and alerting.
- Strong problem-solving skills and the ability to work independently or as part of a team.

Preferred Qualifications
- Experience with Amazon SageMaker or other ML model deployment platforms.
- Familiarity with ML lifecycle tools (MLflow, TFX, Kubeflow, etc.).
- Experience working in agile environments and using DevOps best practices.

This position does not offer sponsorship. Candidates must be legally authorized to work in the United States without sponsorship now or in the future.