Verticalmove, Inc
Senior Software Engineer - Platform AI Infrastructure
Verticalmove, Inc, Boston, Massachusetts, us, 02298
Picture a company redefining how life sciences harness data — one that turns the noise of fragmented scientific systems into clarity that accelerates discovery, development and, ultimately, human progress.
They are a pioneer in Scientific Data Cloud, building a cloud‑native ecosystem engineered specifically for life sciences, connecting laboratory instruments, informatics systems and analytics applications into a single, intelligent network. Trusted by the world’s leading biopharma innovators, their open platform serves as the digital nervous system for scientific operations, enabling researchers to unlock insights at unprecedented scale.
We’re building a next‑generation AI platform that empowers scientists and engineers to operationalize advanced machine learning models at global scale, operating at the intersection of life sciences, cloud computing and AI.
ATTN – Please read carefully: We cannot sponsor new visas or transfer existing visas. We are only considering US citizens or green card holders.
100% Remote – However, we require our team to be co‑located in the Boston, MA area for occasional design meetings. If you are not located in the Boston, MA area, your resume will not be considered.
What You’ll Do
As a Senior Platform Engineer, you’ll help architect and build our client’s proprietary, next‑generation AI platform — their internal equivalent to AWS SageMaker. This platform will serve as the foundation for developing, training and deploying advanced AI models across global scientific and biopharma environments.
Much like SageMaker, this system will enable teams to:
Rapidly build, train and deploy AI models at scale
Seamlessly integrate data pipelines for high‑volume ingestion and transformation
Deliver secure, reliable and production‑grade AI workflows across distributed cloud infrastructure
You’ll collaborate across data, AI and engineering teams to design resilient systems that power the company’s most ambitious machine learning and scientific data initiatives — enabling automation, scalability and operational excellence at the intersection of AI and life sciences.
Responsibilities
Architect, build and maintain cloud‑native infrastructure for AI and data workloads using platforms like Databricks and AWS Bedrock.
Develop scalable data pipelines to ingest, transform and serve data for ML, analytics and scientific applications.
Implement infrastructure‑as‑code using tools such as CloudFormation and AWS CDK to ensure consistency and security.
Partner with AI engineers and data scientists to optimize model deployment, monitoring and performance.
Lead observability best practices, including advanced monitoring, alerting and logging across AI systems.
Evolve the AI platform to support emerging frameworks, data modalities and use cases.
Research and recommend cutting‑edge tools and approaches to improve scalability, cost‑efficiency and speed.
Integrate AI and LLM‑based architectures (e.g., retrieval‑augmented generation) into production environments.
What You Bring
Preferred Experience
Familiarity with emerging LLM orchestration frameworks (e.g., DSPy) for complex prompt pipelines.
Experience with vector databases / embedding stores (e.g., OpenSearch, Pinecone) for semantic search and retrieval.
Understanding of LLM cost optimization, latency reduction and usage analytics at scale.
Required Experience
7+ years of professional experience in software or infrastructure engineering, including production AI systems.
Proven expertise in building and maintaining ML infrastructure, including model deployment, lifecycle management and automation.
Deep knowledge of AWS and modern infrastructure‑as‑code frameworks (ideally CDK).
Expert‑level proficiency in TypeScript and Python for backend and API development.
Hands‑on experience with Databricks MLflow, including model registration, versioning and serving.
Strong understanding of containerization (Docker), CI/CD pipelines and orchestration tools (e.g., ECS).
Demonstrated ability to design secure, scalable, and fault‑tolerant infrastructure for real‑time and batch AI workloads.
Excellent communication skills and the ability to collaborate effectively with cross‑functional teams.
Seniority Level: Mid‑Senior level
Employment Type: Full‑time
Job Function: Engineering and Information Technology