VELVETECH LLC
From IT consulting to building custom tech solutions and integrating enterprise systems, our services span the full range of software development to accelerate your project delivery.
With deep expertise in technologies, data, and processes, we deliver digital solutions that blend seamlessly into your business environment and boost its efficiency.
Position overview
We are seeking an experienced MLOps Engineer with deep expertise in cloud services such as Azure ML and AWS SageMaker to provide end-to-end MLOps operations and services. The successful candidate will cover the entire ML lifecycle, including developing pipelines for model development, preparing training data, training models, deploying models, and monitoring and testing models in production. The role also requires LLMOps experience, particularly in deploying open-source LLMs such as Llama, Mistral, and R1, as well as experimenting with different ML/LLM models, calculating and monitoring evaluation metrics, and delivering the best AI solutions for various marketplace tasks, such as product category classification, product data enrichment, dynamic price optimization, and price and demand forecasting.

Technologies
- MLOps and Cloud Services – Azure ML Studio, AWS SageMaker, MLflow, Databricks
- Containerization and Orchestration – Docker, Kubernetes
- Model Monitoring and Deployment – BentoML, Ollama, Hugging Face
- LLMOps Tools and Libraries – vLLM, LiteLLM, LangChain, Langfuse
- CI/CD and Workflow Automation – GitLab, Azure DevOps

Responsibilities
MLOps and ML Lifecycle Management:
- Develop and optimize MLOps pipelines for scalable model development and deployment.
- Automate model training, deployment, monitoring, and testing workflows.
- Manage data pipelines, ensuring efficient training data preparation.
- Implement model performance tracking and versioning using MLflow.

LLMOps and Large-Scale AI Deployments:
- Deploy and fine-tune open-source LLMs for various business use cases.
- Utilize vLLM, LiteLLM, BentoML, and Ollama for optimized LLM inference and deployment.
- Monitor and evaluate LLM performance across metrics including latency, accuracy, and cost-effectiveness.

AI Solutions for Marketplace Optimization:
- Develop AI-based solutions for product category classification and product data enrichment.
- Implement dynamic price optimization models and forecast price and demand trends.
- Build AI systems that select optimal recommerce channels for selling products based on business factors such as price seasonality and inventory costs.

Requirements
- Software engineering: Python, ML frameworks (PyTorch, TensorFlow), microservices, data pipelines.
- Problem-solving: Evaluating and fine-tuning ML/LLM models, tracking metrics, optimizing AI solutions.

This position offers a unique opportunity to work at the intersection of MLOps, LLMOps, and AI-driven business solutions. If you have the technical depth and a passion for building scalable, production-grade AI solutions, we encourage you to apply!
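For a flavor of the hands-on work: the role above includes calculating and monitoring evaluation metrics for models in production. A minimal sketch of that kind of monitoring loop (the metric names, sample records, and category labels are purely illustrative, not part of the posting) might look like:

```python
# Hedged sketch: aggregating evaluation metrics for model monitoring.
# Record format and metric names are illustrative assumptions.
from statistics import mean

def evaluate_predictions(records):
    """records: iterable of (predicted_label, true_label, latency_ms)."""
    records = list(records)
    # Accuracy as the fraction of predictions matching ground truth.
    accuracy = mean(pred == true for pred, true, _ in records)
    # Average inference latency across the batch.
    avg_latency_ms = mean(lat for _, _, lat in records)
    return {"accuracy": accuracy, "avg_latency_ms": avg_latency_ms}

# Hypothetical batch: three classification calls, two correct.
metrics = evaluate_predictions([
    ("electronics", "electronics", 12.0),
    ("toys", "toys", 15.0),
    ("books", "garden", 9.0),
])
```

In practice these aggregates would be logged to a tracker such as MLflow rather than returned in a dict, but the computation is the same.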
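Similarly, for the LLM inference tooling named in the responsibilities (vLLM, LiteLLM, Ollama): vLLM and LiteLLM both speak the OpenAI-compatible chat schema, so composing a request body is a common task. A minimal sketch (the model name and prompt are placeholder assumptions) might be:

```python
# Hedged sketch: building the JSON body for an OpenAI-compatible
# /v1/chat/completions call, as served by e.g. `vllm serve <model>`.
import json

def build_chat_request(model, prompt, temperature=0.2):
    """Return the JSON body for a chat-completions request."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    })

# Placeholder model name and marketplace-style prompt.
body = build_chat_request(
    "meta-llama/Llama-3.1-8B-Instruct",
    "Classify this product title into a category: 'USB-C charging cable'",
)
```

In production this body would be POSTed to the serving endpoint; only the transport and authentication differ between the tools listed.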