Salesforce, Inc.
Salesforce is the global leader in Customer Relationship Management (CRM), bringing companies and customers together in the digital age. Founded in 1999, Salesforce enables companies of every size and industry to take advantage of powerful technologies—cloud, mobile, social, IoT, AI, voice, and blockchain—to create a 360° view of their customers. We are committed to a diverse and inclusive workforce and a culture that provides great opportunities for all.
Role Overview
We’re seeking a hands-on Lead Member of Technical Staff (LMTS) who blends software engineering rigor with data engineering depth and ML practicality. You will design, build, and operate high-throughput data platforms and production ML/analytics services that power Agentic security experiences on the Security Data Fabric. This role bridges complex data challenges and scalable, production-ready software—integrating pipelines, models, and APIs directly into Salesforce and adjacent cloud services.
Roles & Responsibilities
Architecture & Data Platforms
* Lead design/implementation of scalable data models and domain contracts; ensure performance, integrity, and governance.
* Build and optimize ETL/ELT and streaming workloads (batch and near-real-time) with strong SLAs on quality, latency, and cost.
* Drive platform reliability/observability: SLIs/SLOs, lineage, completeness, freshness, and automated parity tests.
Analytics, ML & Decisioning
* Develop, validate, and deploy statistical/ML models and risk-scoring services that deliver actionable insights.
* Productionize models as services/microservices with clear interfaces, feature stores, and monitoring for drift & performance.
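Drift monitoring of the kind described above is often done by comparing live feature distributions against a training-time baseline. A minimal sketch, assuming a Population Stability Index (PSI) check with the commonly cited 0.2 alerting threshold; all data and names here are illustrative:

```python
# Minimal sketch of feature-drift monitoring via the Population
# Stability Index (PSI). Baseline and live samples are synthetic.
import math

def psi(expected, actual, bins=10):
    """PSI between a baseline sample and a live sample of one feature."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def frac(sample, b):
        count = sum(1 for x in sample
                    if lo + b * width <= x < lo + (b + 1) * width)
        return max(count / len(sample), 1e-6)  # clamp to avoid log(0)

    return sum(
        (frac(actual, b) - frac(expected, b))
        * math.log(frac(actual, b) / frac(expected, b))
        for b in range(bins)
    )

baseline = [0.1 * i for i in range(100)]    # training-time distribution
live = [0.1 * i + 3.0 for i in range(100)]  # serving traffic, shifted

score = psi(baseline, live)
drifted = score > 0.2  # 0.2 is a common heuristic threshold, not a standard
```

In practice this check would run per-feature on a schedule and feed the same observability stack as the latency and freshness SLIs.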
Product & APIs
* Ship secure, well-tested software that integrates pipelines and models into applications, APIs, and microservices.
* Expose read-only and action APIs for partner systems; enable dashboards (Tableau/CRMA) for executive and customer reporting.
Leadership & Collaboration
* Provide technical leadership and mentorship; raise the quality bar via design reviews, code reviews, and documentation.
* Partner with product, security, and platform teams to translate business problems into pragmatic technical solutions.
* Stay current on data/ML/cloud trends; evaluate and introduce tools and patterns that move the needle.
Agentic AI: Additional Requirements (Nice to Have)
Why this matters: Build agentic workflows that detect, reason, and act safely at scale.
Core Experience
* Demonstrated delivery of agentic planning/acting loops (e.g., tool/function calling, ReAct/Reflexion-style patterns), and multi-agent orchestration (role specialization, delegation, handoffs).
* Robust tool adapters behind typed JSON schemas for action systems (e.g., GUS, Midgard, Data Cloud, Security Hub/GSX/FDP, CI/Git); retries, idempotency, and side-effect control.
* Retrieval & memory at scale (RAG, hybrid search, query rewrite, re-ranking), with strong grounding and token budget control.
* Evaluation & quality: golden sets, rubric scoring, agent telemetry (thought/action traces), A/B and canary gates for prompts & tools.
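The planning/acting loop and typed tool adapters above can be sketched as follows. This is a hedged illustration only: the tool names, registry, and plan format are hypothetical stand-ins, not real GUS/Data Cloud/Security Hub APIs, and in a real system the plan would come from an LLM's function-calling output rather than being supplied directly:

```python
# Sketch of a bounded agentic act loop over a registry of tool adapters.
# Side effects are confined to the adapters; the loop records a trace
# (the "thought/action" telemetry mentioned above).
from typing import Callable

TOOLS: dict[str, Callable[[dict], dict]] = {
    # Hypothetical adapters; each takes/returns a JSON-like dict.
    "lookup_asset": lambda args: {"asset": args["id"], "owner": "sec-team"},
    "open_ticket": lambda args: {"ticket": f"T-{hash(args['summary']) % 1000}"},
}

def run_agent(goal: str, plan: list[dict], max_steps: int = 5) -> list[dict]:
    """Execute a plan of tool calls with a bounded iteration depth."""
    trace = []
    for step in plan[:max_steps]:        # hard cap on loop depth
        name, args = step["tool"], step["args"]
        if name not in TOOLS:
            trace.append({"tool": name, "error": "unknown tool"})
            break                        # fail closed on unknown actions
        result = TOOLS[name](args)
        trace.append({"tool": name, "args": args, "result": result})
    return trace

trace = run_agent(
    goal="triage vulnerable asset",
    plan=[
        {"tool": "lookup_asset", "args": {"id": "a-42"}},
        {"tool": "open_ticket", "args": {"summary": "patch a-42"}},
    ],
)
```

Retries, idempotency keys, and schema validation would wrap each adapter rather than live in the loop, keeping side-effect control local to the tool.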
Architecture & Operations
* Safety envelopes: autonomy modes (manual/confirm/auto), policy/guardrail engines, approvals, spend caps, and blast-radius limits.
* Observability: end-to-end traces from perception → plan → tools → effects; metrics for solve rate, handoff rate, iteration depth, latency, cost.
* Reliability: bounded loops/timeouts, circuit breakers, dead-letter queues, compensating actions; deterministic fallbacks.
* Cost/Perf: caching, batching, and streaming; model routing and fallback chains tuned to SLOs and unit economics.
* Simulation: dry-run/shadow modes, replay harnesses, synthetic incidents, and policy tests before enabling autonomy.
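The reliability patterns above (bounded retries, circuit breakers, deterministic fallbacks) can be illustrated with a minimal sketch. All names here are hypothetical; a production system would more likely use a resilience library or service-mesh policy than hand-rolled code:

```python
# Illustrative circuit breaker with a deterministic, safe fallback.
# After `failure_threshold` consecutive failures the breaker opens and
# the system fails fast to the fallback instead of calling upstream.
class CircuitBreaker:
    def __init__(self, failure_threshold: int = 3):
        self.failures = 0
        self.threshold = failure_threshold

    @property
    def open(self) -> bool:
        return self.failures >= self.threshold

    def call(self, fn, fallback):
        if self.open:                # breaker tripped: fail fast
            return fallback()
        try:
            result = fn()
            self.failures = 0        # success resets the breaker
            return result
        except Exception:
            self.failures += 1
            return fallback()

breaker = CircuitBreaker(failure_threshold=2)

def flaky_model_call():
    raise TimeoutError("upstream model timed out")

def deterministic_fallback():
    # Safe default: withdraw autonomy and escalate to a human.
    return {"verdict": "needs_human_review"}

results = [breaker.call(flaky_model_call, deterministic_fallback)
           for _ in range(3)]
```

The deterministic fallback is what makes the autonomy envelope safe: when the model path degrades, the agent's blast radius collapses to a no-op plus escalation.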
Security & Governance
* Secure-by-design PII handling, RBAC/ABAC, complete audit trails for agent actions; GovCloud/export-control awareness.
* Content safety & red-teaming; hallucination/grounding checks; provenance and model risk documentation.
Required Qualifications
* Bachelor’s or Master’s in CS, Data Science, Statistics, Engineering, or related quantitative field (or equivalent practical experience).
* 8+ years in data engineering / software engineering operating large-scale, high-throughput, low-latency pipelines.
* Expertise with Airflow, Spark, Hadoop, Kafka, Flink (or equivalents).
* Strong proficiency in Python, Scala, or Java; solid SQL and experience with at least one NoSQL store.
* Practical understanding of statistical modeling and machine learning with production deployments.
* Public cloud experience (AWS/Azure/GCP) and managed data services.
* Ability to communicate complex technical concepts clearly to technical and non-technical audiences.
* Proven problem solving, attention to detail, and results orientation.
* Working knowledge of data privacy regulations (e.g., GDPR, CCPA) and secure data handling.
* Agentic AI (Nice to have): Proven experience shipping agentic workflows (planning + tool use) with measurable outcomes (solve rate, MTTR reduction, cost/tx); strong RAG foundation and action APIs behind autonomy envelopes; implemented guardrails (policies, approvals, rate limits) and comprehensive telemetry.
Preferred Qualifications
* MS in Software Engineering or related field.
* Salesforce data ecosystem: Tableau CRM/CRMA, Salesforce Data Cloud, Marketing Cloud Personalization, MuleSoft.
* Containers/infra: Docker, Kubernetes, Terraform; CI/CD for data & ML (testing, canary/blue-green, IaC).
* Experience with stream processing and real-time analytics (CRMA/Tableau).
* Open-source contributions or strong portfolio.
* Nice to have: Salesforce depth (data model, Apex/LWC, REST/SOAP/Bulk APIs, integration patterns); Salesforce certifications (Platform Developer, Data Architecture & Management Designer).
* Nice to have (Agentic): Multi-agent systems (supervisor/planner patterns), LangGraph/AutoGen/CrewAI (or equivalent), vector DBs & re-rankers, model routing; security domain familiarity (OCSF, vuln/asset graphs, runtime exploitability) and GovCloud constraints.