Machinify, Inc.
Machinify is the leading provider of AI-powered software products that transform healthcare claims and payment operations. Each year, the healthcare industry generates over $200B in claims mispayments, creating enormous waste, friction, and frustration for all participants: patients, providers, and especially payers. Machinify's revolutionary AI platform has enabled the company to develop and deploy, at light speed, industry-specific products that increase the speed and accuracy of claims processing by orders of magnitude.
What You'll Do
- Design and implement feature stores to power multiple ML models, balancing common reusable features with client-specific extensions.
- Build and own end-to-end ML pipelines for training, versioning, and deploying models as containerized services in production.
- Establish and maintain CI/CD and continuous training pipelines that ensure fast, reliable, and reproducible ML deployments.
- Collaborate closely with Data Science and Data Engineering teams to accelerate experimentation cycles and translate models into robust production services.
- Implement monitoring systems to track model performance, detect data/concept drift, and trigger retraining workflows when needed.
- Drive engineering best practices across the ML lifecycle, including reproducibility, observability, and scalability of ML systems.
- Provide technical leadership on architecture and tooling decisions for MLOps and production ML systems.

What You'll Bring
- Expertise in machine learning engineering and MLOps, with hands-on experience building production-grade ML systems.
- Strong programming skills in Python (Java/Scala a plus).
- Experience with feature stores (e.g., SageMaker, Feast, Hopsworks, Tecton, or equivalent).
- Familiarity with ML lifecycle tools such as MLflow, Kubeflow, DVC, or equivalent for experiment tracking and model versioning.
- Experience deploying ML models via Docker and Kubernetes, and serving them with frameworks such as KServe, Seldon, or BentoML.
- Strong background in workflow orchestration (Airflow, Prefect, Kubeflow Pipelines) and CI/CD pipelines (GitHub Actions, GitLab CI, Jenkins, etc.).
- Familiarity with cloud ML platforms (AWS SageMaker, GCP Vertex AI, or Azure ML).
- Knowledge of monitoring and observability tools (Prometheus, Grafana, or similar) and a passion for tracking metrics.
- Strong computer science foundations (data structures, distributed systems, asynchronous programming).
- Solid collaboration skills to work effectively with Data Science, Data Engineering, and cross-functional stakeholders.

Technical Requirements
- Bachelor's or Master's in Computer Science, Engineering, or a related field.
- 6+ years of experience building and shipping production ML solutions end-to-end.
- Demonstrated experience owning ML pipelines and deployments in production environments.
- Strong testing and code quality discipline, with experience contributing to and improving enterprise-grade systems.
Equal Employment Opportunity at Machinify
Machinify is committed to hiring talented and qualified individuals with diverse backgrounds for all of its positions. Machinify believes that the gathering and celebration of unique backgrounds, qualities, and cultures enriches the workplace.
See our Candidate Privacy Notice at: https://www.machinify.com/candidate-privacy-notice/