CARRERA AGENCY
DevOps, CI/CD, CloudFormation, Terraform, Docker, Amazon Web Services (AWS), Linux, GitHub, Kubernetes, High Performance Computing (HPC)
Reference # Job-11762
Job Description
We are seeking a skilled DevOps / Systems Engineer to design, implement, and maintain cloud and DevOps infrastructure with a focus on reliability, scalability, and security. This role offers the opportunity to work with data-intensive workflows, bioinformatics pipelines, and scientific information systems while serving as a technical subject‑matter expert across multiple teams.
This is an exciting opportunity to join a cutting‑edge life sciences organization as a DevOps Engineer, where you’ll play a crucial role in supporting groundbreaking research through robust cloud infrastructure and automation. You’ll work at the intersection of technology and science, designing and maintaining systems that power genomics, bioinformatics, and AI‑enabled processes while collaborating with scientists, data engineers, and developers to accelerate innovation in healthcare.
Overview
Start Date: ASAP
Duration: 6-month contract-to-hire position
Location: San Diego, CA; hybrid (onsite Tuesday, Wednesday, and Thursday)
Compensation: Salary starting at $120K upon conversion to client employee, plus a target bonus program and a comprehensive benefits package.
Key Responsibilities
Cloud Infrastructure & Automation
Design, build, and maintain secure AWS cloud environments supporting genomics, imaging, computational biology, and laboratory applications
Implement and manage Infrastructure as Code (IaC) using tools such as Terraform or CloudFormation for reproducible and scalable deployments
Build, automate, and administer organizational infrastructure, including microservices architectures and serverless technologies
DevOps & CI/CD
Administer and enhance the organization’s GitHub environment, including repository structure, workflow improvements, and access governance
Develop and maintain CI/CD pipelines for bioinformatics workflows, internal tools, and GMP‑related applications
Automate deployment of containerized workflows using Docker and Kubernetes
Systems Management & Monitoring
Manage Linux and Windows servers supporting laboratory and GMP systems in high-performance computing environments
Monitor, troubleshoot, and optimize compute, storage, and network resources for large‑scale scientific datasets
Implement system monitoring, observability, and alerting solutions to maintain uptime for mission‑critical systems
Security & Compliance
Apply cloud security best practices and ensure alignment with GxP, HIPAA, and FDA 21 CFR Part 11 compliance requirements
Support secure handling and protection of sensitive research and patient data
Collaboration & Leadership
Collaborate with developers, data scientists, and bioinformaticians to integrate computational workflows and applications into production environments
Serve as a DevOps and cloud subject‑matter expert, providing training and guidance to cross‑functional partners
Contribute technical insights to architectural discussions and tooling evaluations
Technical Skills
Infrastructure as Code: Expertise with Terraform, CloudFormation, or similar for automated deployments
CI/CD Pipelines: Hands‑on experience building and optimizing workflows
Containerization & Orchestration: Skilled in Docker and Kubernetes
Configuration Management: Familiarity with Ansible, Puppet, or Chef
Scripting & Automation: Strong scripting skills in Python, Bash, and PowerShell
Monitoring & Logging: Well‑versed in tools like CloudWatch, Prometheus, Grafana, and ELK
System Engineering: Broad experience including microservices architecture and serverless technologies
Networking & Security: Knowledge of networking fundamentals and security best practices
HPC & Data Processing: Experience with HPC systems and large‑scale workflows
Regulatory Compliance: Familiar with GxP, HIPAA, and FDA 21 CFR Part 11 requirements
Professional Skills
Strong analytical mindset with attention to detail and accuracy
Clear written and verbal communication skills, with the ability to document methods and results
Works effectively with operational leaders, scientists, engineers, and IT staff
Understanding of business processes and the ability to analyze them using process mapping and optimization techniques
Able to manage multiple projects, prioritize tasks, and seek guidance when appropriate
Thrives in fast‑paced, evolving environments
Focused on reliability, consistency, and continuous improvement
Willingness to guide partners on DevOps best practices and tooling
Eager to expand skills in evolving data governance and analytics practices
Education & Experience
Bachelor’s degree in Computer Science, Information Technology, Bioinformatics, or related field; or equivalent experience
Minimum 3 years in DevOps and Systems Engineering roles, preferably in biotech or healthcare
Hands‑on experience with at least one major cloud provider (AWS, Azure, GCP)
Certifications such as AWS Certified DevOps Engineer, Azure DevOps Expert, or similar preferred
Apply today!
For more information, please email: Job-11762@thecarreraagency.com