Tech Pro Inc.
Position: MLOps Engineer
Location: Baltimore, MD (5 days on-site, local candidates preferred)

Overview
We are seeking an experienced MLOps Engineer to support the deployment, monitoring, and optimization of machine learning systems in production environments. The role involves building and maintaining ML pipelines, automating workflows, managing infrastructure, and collaborating with data science and engineering teams to ensure reliable model performance.

Responsibilities
Design, build, and manage end-to-end ML pipelines, including data ingestion, preprocessing, model training, validation, and deployment.
Integrate ML models into production environments using containerization and cloud platforms such as AWS or Azure.
Implement monitoring and alerting systems to track model performance and system health.
Automate repetitive tasks within the ML lifecycle to improve reliability and reduce manual effort.
Optimize the performance and scalability of machine learning models and related infrastructure.
Collaborate with data scientists and engineers to align MLOps processes with business and technical requirements.
Maintain and manage cloud-based infrastructure for ML workloads using services such as AWS SageMaker, Azure ML, or equivalent.

Qualifications
Proficiency in Python for automation, scripting, and integration with ML frameworks (TensorFlow, PyTorch, scikit-learn).
Experience with cloud platforms (AWS, Azure) and managed ML services (e.g., AWS SageMaker, Azure ML).
Experience with CI/CD pipelines, version control (Git), and DevOps tooling (Jenkins, GitHub Actions, etc.).
Strong understanding of both relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB).
Solid background in machine learning concepts, model serving, and MLOps best practices.
Experience with containerization (Docker) and orchestration (Kubernetes) is a plus.