Bespoke Technologies LLC
BT-145 - Machine Learning Engineer
Location: Dulles (fully on-site, no remote option)
**A Poly clearance is required to apply; candidates without a Poly clearance will not be considered.**
What you will do in this role:
- Design, implement, and maintain scalable backend services and APIs in a containerized cloud environment (AWS preferred); a minimal illustrative sketch follows this list
- Build mission-critical production applications focused on data discovery, analysis, and secure data delivery
- Integrate with cloud services and data platforms to expose high-value data through secure, performant interfaces
- Contribute to application features that integrate LLMs, agents, or ML models into production systems
- Collaborate in a Lean Agile environment with teammates and stakeholders, participating in code reviews, system design, and continuous improvement
- Work with CI/CD pipelines, modern build tools, and testing frameworks to ensure quality, security, and delivery speed
- Monitor and improve the performance and reliability of services, APIs, and data-driven components
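To give a flavor of the service work above, here is a minimal, illustrative FastAPI sketch of a data-discovery endpoint. The route, data model, and in-memory catalog are assumptions made for the example only, not a description of the actual systems.

```python
# Illustrative only: a minimal FastAPI service exposing a single data-discovery
# endpoint. The route, model fields, and in-memory catalog are hypothetical and
# stand in for whatever data platform the real services integrate with.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="example-data-service")


class Dataset(BaseModel):
    id: str
    name: str
    classification: str  # e.g. "unclassified", "restricted"


# Hypothetical in-memory catalog; a production service would query a real data platform.
_CATALOG = {
    "ds-001": Dataset(id="ds-001", name="flight-telemetry", classification="restricted"),
}


@app.get("/datasets/{dataset_id}", response_model=Dataset)
def get_dataset(dataset_id: str) -> Dataset:
    """Return catalog metadata for a single dataset, or 404 if it is unknown."""
    dataset = _CATALOG.get(dataset_id)
    if dataset is None:
        raise HTTPException(status_code=404, detail="dataset not found")
    return dataset
```

In practice a service like this would be containerized (Docker) and deployed to the AWS environment described above.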
Skills that would make you highly effective in this role:
- Strong Python application development skills with experience in modern frameworks (FastAPI preferred; Flask or Django acceptable)
- Experience designing and implementing scalable, maintainable, OOP-based software in distributed systems
- Curiosity about LLM prompt engineering, context engineering, or agentic applications
- Proficiency with source control (Git) and CI/CD pipelines (AWS CodeBuild preferred; Jenkins, GitLab CI, GitHub Actions acceptable)
- Familiarity with DevSecOps practices, containerization (Docker, Kubernetes), and cloud infrastructure
- Experience with testing frameworks (PyTest preferred; unittest acceptable); an illustrative test sketch follows this list
- Experience with Python project and dependency management tools (Poetry preferred; uv, make, pip, conda acceptable)
- Effective written and verbal communication skills for technical collaboration
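To illustrate the testing expectations, here is a small PyTest sketch that exercises the hypothetical endpoint from the earlier sketch using FastAPI's TestClient. The module path `example_service.main` is an assumption for the example and would match however the real project is laid out.

```python
# Illustrative only: PyTest tests for the hypothetical data-discovery endpoint,
# using FastAPI's built-in TestClient. Tests like these would typically run on
# every commit in a CI/CD pipeline (CodeBuild, GitLab CI, GitHub Actions, etc.).
from fastapi.testclient import TestClient

from example_service.main import app  # hypothetical module path for the sketch above

client = TestClient(app)


def test_get_known_dataset_returns_metadata():
    response = client.get("/datasets/ds-001")
    assert response.status_code == 200
    assert response.json()["name"] == "flight-telemetry"


def test_get_unknown_dataset_returns_404():
    response = client.get("/datasets/does-not-exist")
    assert response.status_code == 404
```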
Skills that would make you an above-and-beyond candidate:
- Experience with agents or LLM workflows: prompt engineering, data pipelines, and agent/multi-agent workflows (LangChain, LangGraph)
- Familiarity with ML frameworks (e.g., scikit-learn, TensorFlow, PyTorch) and NLP libraries (e.g., spaCy, Hugging Face Transformers); an illustrative pipeline sketch follows this list
- Hands-on experience with MLOps or model-serving tools (e.g., MLflow, SageMaker, Kubeflow)
- Familiarity with observability stacks (Prometheus/Grafana preferred; CloudWatch, ELK/EFK acceptable)
- Experience with event-driven and streaming systems (Kafka, Kinesis, SQS/SNS, AWS Step Functions)
- Knowledge of Infrastructure as Code (Terraform) and modern deployment pipelines
- Contributions to open-source projects, community efforts, or personal projects
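For the ML-framework side, here is a minimal, illustrative scikit-learn text-classification pipeline; the toy documents and labels are fabricated solely for the sketch and do not represent any real data.

```python
# Illustrative only: a tiny scikit-learn text-classification pipeline of the
# kind that might sit behind an ML-backed application feature. The documents
# and labels below are toy data invented for this sketch.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

texts = [
    "sensor reading nominal",
    "sensor reading nominal again",
    "anomaly detected in telemetry",
    "telemetry anomaly flagged",
] * 10  # repeat so the toy split has enough samples per class
labels = [0, 0, 1, 1] * 10

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.25, random_state=42, stratify=labels
)

model = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```

A pipeline like this could then be tracked or served with the MLOps tools listed above (MLflow, SageMaker, etc.).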