Palo Alto Networks

Principal Engineer Machine Learning (MLOps DLP Detection)

Palo Alto Networks, Santa Clara, California, US 95053


Our Mission

At Palo Alto Networks® everything starts and ends with our mission: Being the cybersecurity partner of choice, protecting our digital way of life. Our vision is a world where each day is safer and more secure than the one before. We are a company built on the foundation of challenging and disrupting the way things are done, and we’re looking for innovators who are as committed to shaping the future of cybersecurity as we are.

Who We Are

We believe collaboration thrives in person. That’s why most of our teams work from the office full time, with flexibility when it’s needed. This model supports real-time problem-solving, stronger relationships, and the kind of precision that drives great outcomes.

Your Career

We are looking for a Principal MLOps Engineer to lead the design, development, and operation of production-grade machine learning infrastructure at scale. In this role, you will architect robust pipelines, deploy and monitor ML models, and ensure reliability, reproducibility, and governance across our AI/ML ecosystem. You will work at the intersection of ML, DevOps, and cloud systems, enabling our teams to accelerate experimentation while ensuring secure, efficient, and compliant deployments.

Location

This role is based at our Santa Clara, California headquarters campus, with three days in the office per week. This is not a remote role.

Your Impact

Architect, design, and lead the implementation of the entire ML lifecycle, including model development and deployment workflows that move models seamlessly from initial experimentation to complex cloud and hybrid production environments.

Develop and maintain highly automated, resilient systems that enable the continuous training, rigorous testing, deployment, real-time monitoring, and robust rollback of machine learning models in production, ensuring performance meets massive scale demands.

Establish and enforce state-of-the-art practices for model versioning, reproducibility, auditing, lineage tracking, and compliance across the entire model inventory.

Develop comprehensive, real-time monitoring, alerting, and logging solutions focused on deep operational health, model performance analysis (e.g., drift detection; a sketch follows this list), and business-metric impact.

Act as the primary driver for efficiency, pioneering best practices in Infrastructure-as-Code (IaC), sophisticated container orchestration, and continuous delivery (CD) to reduce operational toil.

Partner closely with Security and Product Engineering teams to define requirements and deliver robust, secure, production-ready AI systems.

Continuously evaluate, prototype, and introduce cutting-edge tools, frameworks, and practices that fundamentally elevate the scalability, reliability, and security posture of our production ML operations.

Strategically manage and optimize ML infrastructure resources to drive down operational costs, improve efficiency, and reduce model bootstrapping times.
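
As a concrete illustration of the drift-detection monitoring mentioned above, here is a minimal sketch, assuming Python with NumPy and SciPy; the feature sample, window size, and alpha threshold are hypothetical examples, not part of this job description.

```python
import numpy as np
from scipy.stats import ks_2samp  # two-sample Kolmogorov-Smirnov test

def feature_drifted(reference: np.ndarray, live: np.ndarray,
                    alpha: float = 0.01) -> bool:
    """Flag drift when a live feature's distribution diverges from the
    training-time reference distribution."""
    result = ks_2samp(reference, live)
    return result.pvalue < alpha  # low p-value: distributions likely differ

# Hypothetical usage: compare a training-time sample of one input feature
# against a recent window of production traffic.
rng = np.random.default_rng(0)
reference_sample = rng.normal(loc=0.0, scale=1.0, size=5_000)
live_window = rng.normal(loc=0.4, scale=1.0, size=5_000)  # shifted mean

if feature_drifted(reference_sample, live_window):
    print("Drift detected: raise an alert and consider triggering retraining.")
```

In a production system, a check like this would run per feature over sliding windows, feed an alerting pipeline, and gate automated retraining rather than print to stdout.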

Your Experience

8+ years of software, DevOps, or ML engineering experience, including 3+ years focused on advanced MLOps, ML platforms, or production ML infrastructure, and 5+ years building ML models

Deep expertise in building scalable, production-grade systems, with strong programming skills in Python, Go, or Java.

Expertise in leveraging cloud platforms (AWS, GCP, Azure) and container orchestration (Kubernetes, Docker) for ML workloads.

Proven hands-on experience in the ML Infrastructure lifecycle, including:

Model Serving: TensorFlow Serving, TorchServe, or Triton Inference Server (TIS).

Workflow Orchestration: Airflow, Kubeflow, MLflow, Ray, Vertex AI, or SageMaker.

Advanced Inferencing Techniques (mandatory): demonstrable ability to apply hardware/software acceleration and optimization techniques such as TensorRT (TRT), Triton Inference Server (TIS), ONNX Runtime, model distillation, quantization, and pruning (a quantization sketch follows this list).

Strong hands-on experience with comprehensive CI/CD pipelines, infrastructure-as-code (Terraform, Helm), and robust monitoring/observability solutions such as Prometheus, Grafana, and the ELK/EFK stack (a metrics sketch follows this list).

Comprehensive knowledge of data pipelines, feature stores, and high-throughput streaming systems (Kafka, Spark, Flink).

Expertise in operationalizing ML models, including model monitoring, drift detection, automated retraining pipelines, and maintaining strong governance and security frameworks.

A strong track record of influencing cross-functional stakeholders, defining organizational best practices, and actively mentoring engineers at all levels.

Unwavering passion for operational excellence and for building highly scalable, secure, mission-critical ML systems.

MS/PhD in Computer Science, Data Science, or Engineering.
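
As an illustration of the optimization techniques named above, here is a minimal post-training dynamic quantization sketch, assuming PyTorch; the toy two-layer model is a hypothetical stand-in for a production network, not anything specific to this role.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a production model.
model = nn.Sequential(
    nn.Linear(512, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
).eval()

# Post-training dynamic quantization: Linear weights are stored as int8,
# and activations are quantized on the fly at inference time, which
# typically shrinks the model and speeds up CPU inference.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
with torch.no_grad():
    print(quantized(x).shape)  # same interface as the original model
```

In practice, a quantized model like this would be benchmarked against the float32 baseline for accuracy and latency and, depending on the serving stack, exported for a runtime such as ONNX Runtime or TensorRT.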
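
And as an illustration of the monitoring/observability requirement, here is a minimal sketch of exposing model-serving metrics with the Prometheus Python client; the metric names, port, and predict stub are hypothetical.

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Hypothetical serving metrics: request count and end-to-end latency.
PREDICTIONS = Counter("model_predictions_total", "Total prediction requests")
LATENCY = Histogram("model_prediction_latency_seconds", "Prediction latency")

def predict(features):
    """Stub standing in for real model inference."""
    time.sleep(random.uniform(0.005, 0.02))
    return 0

@LATENCY.time()  # records each call's duration in the histogram
def handle_request(features):
    PREDICTIONS.inc()
    return predict(features)

if __name__ == "__main__":
    start_http_server(8000)  # Prometheus scrapes :8000/metrics
    while True:
        handle_request([0.0] * 16)
```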

Equal Opportunity & EEO Statement

We’re committed to providing reasonable accommodations for all qualified individuals with a disability. If you require assistance or accommodation due to a disability or special need, please contact us at accommodations@paloaltonetworks.com.

Palo Alto Networks is an equal opportunity employer. We celebrate diversity in our workplace, and all qualified applicants will receive consideration for employment without regard to age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or other legally protected characteristics.

All your information will be kept confidential according to EEO guidelines.

Is this role eligible for immigration sponsorship? Yes

Seniority level: Associate

Employment type: Full-time

Job function: Computer and Network Security
