Intellibus
The AI Engineering Architect & Technical Coach is a hands‑on engineering leader responsible for designing, building, and guiding the technical foundation of an enterprise‑scale AI transformation program.
This role bridges architecture, execution, and mentorship, ensuring that AI experiments, data pipelines, and production systems are technically sound, scalable, and reusable across squads.
You’ll work side by side with engineers in the AI Skunk Works, Data Foundations, and Engineering Excellence squads, setting engineering standards, unblocking delivery, and embedding best practices in cloud, data, and AI systems.
Your north star: make sure every AI experiment can scale cleanly, securely, and reliably.
Key Responsibilities
Design the technical architecture for AI and data initiatives — including ingestion, transformation, and model deployment pipelines.
Define and document reference architectures, API standards, and reusability frameworks.
Collaborate with data engineers to build scalable ETL/ELT pipelines and feature stores that feed AI models.
Ensure solutions adhere to security, compliance, and governance requirements.
Evaluate and optimize cloud infrastructure (AWS, Azure, or GCP) for cost, performance, and resilience.
Partner with Skunk Works leads to make AI experiments technically viable.
Set up data pipelines, connectors, and lightweight back‑end APIs for pilot experiments.
Optimize workflows for 2‑week sprint cycles — enabling rapid iteration and testing.
Ensure each experiment’s architecture supports clean handoff to production once validated.
Evaluate and integrate AI tools, APIs, or SDKs (e.g., OpenAI, Hugging Face, Vertex AI, Azure AI Studio).
Hands‑On Development & Coaching
Act as a player‑coach able to prototype, debug, and code alongside engineers.
Build or review Python scripts, SQL queries, APIs, and pipeline automation.
Coach engineers on coding standards, CI/CD automation, observability, and testing practices.
Conduct stability reviews and code walk‑throughs to raise engineering quality.
Lead “Engineering Excellence” workshops on reliability, scalability, and AI deployment hygiene.
Build deep trust with both the client’s technical teams and Intellibus engineers.
Engineering Quality & Platform Improvement
Define and enforce engineering excellence standards: stability, scalability, and security.
Implement automation in build, deploy, and monitoring pipelines.
Lead incident reviews and root‑cause analyses to improve reliability metrics.
Collaborate with the Engineering Excellence squad to uplift delivery velocity and reduce incidents.
Work closely with the Director of AI Practice & Transformation on cross‑squad technical strategy.
Key Qualifications
8–15 years in software, data, or AI engineering; 3–5 years in lead or architect‑level roles.
Proficiency in Python, SQL, and cloud‑native architectures (AWS, Azure, or GCP).
Hands‑on experience with data‑pipeline frameworks (Airflow, dbt, Kafka, Spark).
Experience with ML model deployment (MLflow, SageMaker, Vertex AI, or custom API deployment).
Knowledge of container orchestration (Docker, Kubernetes) and CI/CD tools (GitHub Actions, Jenkins).
Strong communicator who can explain complex technical concepts simply to non‑technical stakeholders.
Bonus: Experience in retail systems (POS, inventory, merchandising, supply chain) or large‑scale data integrations.
Location: Based in or near Phoenix, AZ (preferred).
Success Profile
50% Architect – designs reusable systems and processes for AI experimentation.
30% Engineer – writes, reviews, and deploys framework‑level code, training engineers to use it effectively.
20% Coach – teaches and unblocks other coaches in the Engineering Excellence squad.
Always outcome‑oriented – ensuring every technical effort delivers measurable business value.
If you are interested, please apply, and our team will reach out to you within the hour.