Compunnel, Inc.

AWS Machine Learning Operations Engineer

Compunnel, Inc., Durham, North Carolina, United States, 27703


Posted 03/28/2025 | Contract | Active

Job Description:

Job Summary:
We are seeking a skilled and experienced Machine Learning Engineer to join our team and deploy machine learning models in the AWS cloud environment. The ideal candidate will have a strong background in Python software development, data engineering, and building data pipelines, with hands-on experience deploying and optimizing ML models. You will collaborate closely with data scientists to fine-tune models and solve data-related challenges while using AWS services such as SageMaker, Lambda, and others.

Key Responsibilities:
- Model Deployment & Optimization: Deploy and optimize machine learning models in the AWS cloud environment, focusing on scalability and performance. A minimal deployment sketch follows this list.
- Data Pipeline Development: Design and engineer data solutions and build efficient data pipelines, ensuring smooth data flow across the system.
- Collaboration with Data Scientists: Work closely with data scientists to adjust and optimize ML models and queries for better performance.
- Data Latency Management: Address data latency issues and ensure minimal delays in data processing.
- SageMaker Usage: Leverage AWS SageMaker for machine learning model training, deployment, and management.
- ETL & Data Engineering: Apply strong ETL skills to process, clean, and prepare data for machine learning applications.
- Automation & DevOps: Implement CI/CD and DevOps automation using tools such as Jenkins and Terraform to streamline development processes.
- AI/ML Projects: Contribute to AI, machine learning, and deep learning projects, helping scale and optimize solutions.
- Collaborative Environment: Actively engage in code reviews and pair programming, and contribute to a continuous learning environment within the team.
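The sketch below illustrates the kind of deployment task described above: registering a model artifact and standing up a real-time SageMaker endpoint with boto3. It is a minimal, illustrative example, not part of the role description; every name in it (region, container image, S3 path, IAM role, endpoint names, instance type) is a hypothetical placeholder.

import boto3

# SageMaker control-plane client (region is an assumed example).
sm = boto3.client("sagemaker", region_name="us-east-1")

# Register the model; container image and S3 model artifact are placeholders.
sm.create_model(
    ModelName="demo-model",
    PrimaryContainer={
        "Image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/demo-image:latest",
        "ModelDataUrl": "s3://demo-bucket/models/model.tar.gz",
    },
    ExecutionRoleArn="arn:aws:iam::123456789012:role/DemoSageMakerRole",
)

# Describe how the model should be hosted (instance type and count are illustrative).
sm.create_endpoint_config(
    EndpointConfigName="demo-endpoint-config",
    ProductionVariants=[{
        "VariantName": "AllTraffic",
        "ModelName": "demo-model",
        "InstanceType": "ml.m5.large",
        "InitialInstanceCount": 1,
    }],
)

# Create the real-time endpoint and wait until it is in service.
sm.create_endpoint(EndpointName="demo-endpoint", EndpointConfigName="demo-endpoint-config")
sm.get_waiter("endpoint_in_service").wait(EndpointName="demo-endpoint")

In practice, calls like these would typically be driven by the CI/CD tooling mentioned above (for example, Jenkins pipelines or Terraform) rather than run by hand.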

Required Qualifications:
- Experience: 5+ years of experience in Python software development and building robust data pipelines.
- ETL Skills: Strong skills in ETL processes, with a focus on data transformation and cleaning.
- AWS Services: Extensive experience deploying machine learning models in AWS, including services such as SageMaker, Lambda, SQS, SNS, Athena, Glue, and ECR.
- ML Model Optimization: Proven experience tuning and optimizing machine learning models for deployment at scale.
- CI/CD & DevOps: Hands-on experience with CI/CD pipelines and DevOps automation tools such as Jenkins and Terraform.
- Machine Learning & AI Projects: Prior experience working on AI, machine learning, or deep learning projects.
- Collaboration: Desire to work in a collaborative environment, with a focus on continuous learning, code review, and pair programming.
- Data Engineering: Strong background in data engineering, with specific experience in MLOps and exposure to data science/AI.

Preferred Qualifications:
- Advanced ML/AI Experience: Additional experience with advanced machine learning techniques or deep learning frameworks.
- Cloud Certifications: AWS certifications or similar cloud and machine learning certifications.
- DevOps Automation Tools: Familiarity with additional DevOps tools and automation frameworks is a plus.

Certifications (if any):

AWS Certified Solutions Architect – Associate or Professional
