Microsoft
Principal Applied Scientist – Security AI Models
Security is one of the most critical priorities for our customers in a world awash in digital threats, regulatory scrutiny, and estate complexity. Microsoft Security aspires to make the world a safer place for all. We want to reshape security and empower every user, customer, and developer with a security cloud that protects them with end‑to‑end, simplified solutions. The Microsoft Security organization accelerates Microsoft’s mission and bold ambitions to ensure that our company and industry are securing digital technology platforms, devices, and clouds in our customers’ heterogeneous environments, as well as ensuring the security of our own internal estate. Our culture is centered on a growth mindset, inspiring excellence, and encouraging teams and leaders to bring their best each day.
The Security Models Training team builds and operates large‑scale AI training and adaptation engines that power Microsoft Security products, turning cutting‑edge research into dependable, production‑ready capabilities. As a Principal Applied Scientist – Security AI Models, you will lead end‑to‑end model development for security scenarios, including privacy‑aware data curation, continual pretraining, task‑focused fine‑tuning, reinforcement learning, and rigorous evaluation. You will drive training efficiency on distributed GPU systems, deepen model reasoning and tool‑use skills, and embed responsible AI and compliance into every stage of the workflow. The role is hands‑on and impact‑focused: you will partner closely with engineering and product teams to translate innovations into shipped experiences, design objective benchmarks and quality gates, and mentor scientists and engineers to scale results across globally distributed teams. You will combine strong coding and experimentation with a systems mindset to accelerate iteration cycles, improve throughput and reliability, and help shape the next generation of secure, trustworthy AI for our customers.
Responsibilities
Execute the full modeling lifecycle for security scenarios, from data ingestion and curation to training, evaluation, deployment, and monitoring.
Design and operate privacy‑preserving data workflows, including anonymization, templating, synthetic augmentation, and quantitative utility measurement.
Develop and maintain fine‑tuning and adaptation recipes for transformer models, including parameter‑efficient methods and reinforcement learning from human or synthetic feedback.
Contribute to objective benchmarks, metrics, and automated gates for accuracy, robustness, safety, and performance to enable repeatable model shipping.
Collaborate with engineering and product teams to productionize models, harden pipelines, and meet service‑level objectives for latency, throughput, and availability.
Uphold high‑quality documentation and experiment hygiene and foster a culture of rapid iteration grounded in responsible AI principles.
Stay current with the latest AI advances and help translate promising techniques into practical, measurable impact.
Required Qualifications
Bachelor’s Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 6+ years related experience.
OR Master’s Degree in the same fields AND 4+ years related experience.
OR Doctorate in the same fields AND 3+ years related experience.
OR equivalent experience.
5+ years experience creating publications (e.g., patents, libraries, peer‑reviewed academic papers).
Proficiency in Python and PyTorch, with hands‑on experience building and debugging large‑scale training jobs.
Other Requirements
Ability to meet Microsoft, customer and/or government security screening requirements is required for this role. These requirements include specialized security screenings such as the Microsoft Cloud Background Check, which must be passed upon hire/transfer and every two years thereafter.
Preferred Qualifications
Security domain experience in one or more areas: security operations, threat intelligence, malware analysis, vulnerability and posture management, anomaly detection, phishing and fraud detection, or cloud identity and access.
Experience with distributed training and scaling techniques, e.g., DeepSpeed, FSDP, ZeRO, model and pipeline parallelism, mixed precision, and profiling.
Experience with privacy‑preserving ML including differential privacy concepts, privacy risk assessment, and utility measurement on privatized data.
The typical base pay range for this role across the U.S. is USD $139,900 – $274,800 per year, with higher ranges for the San Francisco Bay area and New York City metropolitan area ($188,000 – $304,200). Microsoft will accept applications for the role until November 16, 2025.
Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.