Cisco Systems, Inc.
Machine Learning Engineer PhD (Full Time) - United States
Cisco Systems, Inc., San Francisco, California, United States, 94199
Please note this posting is to advertise potential job opportunities. This exact role may not be open today but could open in the near future. When you apply, a Cisco representative may contact you directly if a relevant position opens.
Meet the Team Join our innovative engineering team focused on building next-generation AI/ML solutions. You’ll collaborate with skilled colleagues across platform, security, release engineering, and support teams to deliver high-impact products and keep them running reliably.
Your Impact Dive into the development and implementation of cutting‑edge generative AI applications using the latest large language models—think GPT‑4, Claude, Llama, and beyond! Take on the challenge of optimizing neural networks for natural language processing and machine perception, drawing on a toolkit that includes convolutional and transformer‑based models, student‑teacher frameworks, distillation, and generative adversarial networks (GANs). Performance, scalability, and reliability are front and center as models are trained, fine‑tuned, and put through their paces for real‑world deployment.
Collaboration is at the heart of this role—work alongside talented engineers and cross‑functional teams to gather and prep data, design custom layers, and automate model deployment. Experimentation with new technologies and ongoing learning are always encouraged. Production‑ready code, robust testing, and creative problem‑solving all play a part in bringing innovative AI solutions to life. What an exciting place to grow and make an impact!
Minimum Qualifications
Recent graduate or in your final year of studies toward a PhD in Computer Science, Electrical Engineering, Artificial Intelligence, Machine Learning, or a related field.
3+ years of experience in backend development using Go or Python.
Understanding of LLM infrastructure and optimization, validated by technical interview responses, project documentation, or relevant publications.
Hands‑on experience with model building and AI/LLM research, demonstrated through portfolio work, code samples, technical assessments, or documented academic or professional projects.
Preferred Qualifications
Experience working with inference engines (e.g., vLLM, Triton, TorchServe).
Knowledge of GPU architecture and optimization.
Familiarity with agent frameworks.
Exposure to cloud native solutions and platforms.
Experience with cybersecurity principles and Python programming, including common AI libraries.
Familiarity with distributed systems and asynchronous programming models.