TEMP-TEAM PTE LTD
About the Company:
Our client is a rapidly growing AI security company focused on enabling safe, trustworthy, and compliant AI adoption globally. Their platform is one of the first purpose-built systems for GenAI and autonomous systems, empowering enterprises with real-time threat detection, compliance alignment, and AI governance. Their AI-RMF platform helps organizations detect and mitigate real-world threats such as hallucinations, adversarial prompts, agentic misalignment, and regulatory risks, including those outlined in OWASP's top GenAI threats. Designed to integrate seamlessly into existing ML pipelines and security frameworks, the platform supports both hybrid and on-premises deployments.

About the Role:
We are hiring a Senior AI Security Architect to lead the design and technical blueprint of our client's AI Risk Management Platform. This is a key role in which you'll bridge AI/ML technologies, cybersecurity, and regulatory compliance in a fast-paced, high-impact environment. This is not a conventional AI research role; it's a leadership opportunity for an architect who understands the security implications of GenAI systems, AI governance, and multi-agent risk frameworks.

Key Responsibilities:
Lead solution architecture for AI-RMF platform components across cloud, on-premises, and hybrid environments.
Design blueprints to prevent hallucinations, bias, drift, prompt injection, and adversarial attacks in AI models.
Integrate explainability (XAI), risk scoring, and regulatory compliance controls into system architecture.
Define orchestration patterns for multi-agent systems and distributed AI safety agents.
Align architectural decisions with evolving regulations (NIST AI RMF, ISO/IEC 42001, EU AI Act, Singapore IMDA).
Partner with ML engineers, security experts, and compliance leads to embed governance into AI pipelines.
Conduct threat modeling, architectural reviews, and continuous optimization for system scalability and resilience.
Ensure security, observability, and auditability across all AI development workflows.

Requirements:
Master's or PhD in Computer Science, Artificial Intelligence, Data Science, or a related field.
8–12 years of relevant experience in AI/ML systems architecture, ideally with exposure to AI security and governance.
Strong foundation in ML frameworks (e.g., TensorFlow, PyTorch, Hugging Face), MLOps tools (e.g., Kubeflow, MLflow), and cloud-native architecture (Azure preferred).
Experience with AI observability, threat detection tools, and agent-based systems (e.g., OpenPromptGuard, Rebuff, Anthropic Red Teaming, Pinecone, LangSmith, TruLens, RAIL).
Familiarity with regulatory frameworks (e.g., SOC 2, ISO 27001, GDPR, EU AI Act) and cybersecurity best practices.
Knowledge of LLM security threats, including hallucinations, prompt injection, memory poisoning, drift, and privilege escalation.
Hands-on experience with explainability tools (e.g., SHAP, LIME) and containerization (Docker, Kubernetes).

What's Offered:
Be part of a first-of-its-kind AI risk management product, shaping the future of secure and responsible AI.
Work with a global team of leading AI researchers, engineers, and compliance experts.
Opportunity to influence global standards for AI safety and governance.
Competitive salary, performance incentives, and equity options.
Flexible hybrid work arrangements.

Equal Opportunity Statement:
We are committed to building a diverse and inclusive workplace. All applicants will be considered based on merit, regardless of age, race, gender, religion, marital status, family responsibilities, or disability. Where a role requires proficiency in a specific language (e.g., Mandarin), it is solely to liaise with regional stakeholders or for regulatory documentation purposes.

Interested applicants, please write to shc@juhlerprofessionals.com.sg with your updated resume in Word document format. Kindly include your current remuneration, notice period, and expected salary.

R1325699 EA01C3135