Tential
We are seeking a hands-on, principle-driven Data Architect to design and build the foundational data platform for our client's AI-enabled advisory future. This role is central to the "Data for AI" workstream, requiring an individual who is not only an expert in modern data stacks (Azure/Databricks) but also has deep foundational knowledge of data modeling, data engineering patterns, and system design.
You will be responsible for architecting scalable, governed, and discoverable data systems that power memory, context, and intelligent services across global advisory platforms. Success in this role depends on the ability to translate business needs into robust, well-modeled data architectures that are primed for AI and machine learning consumption.
Key Responsibilities
Architect & Design:
Lead the design and implementation of enterprise-scale data platforms on Azure using Databricks as the core analytical engine. Create end-to-end architecture blueprints that are scalable, secure, and cost-effective.

Data Modeling & Engineering Excellence:
Define and enforce standards for data modeling (e.g., dimensional, data vault), data ingestion patterns (batch and streaming), and transformation frameworks. Articulate and implement strategies for managing complex data relationships and lineage.

Platform Optimization:
Serve as the Databricks subject matter expert: tune environments for peak performance (cluster configurations, query optimization), manage Delta Lake tables, and leverage Unity Catalog for governance.

AI/ML Data Enablement:
Architect data systems that directly support AI workloads, including designing for feature stores and vector databases for RAG pipelines, and ensuring data is structured for efficient model training and inference.

Governance & Metadata Strategy:
Implement and champion robust data governance, security, and compliance practices using tools like Azure Purview. Design and maintain a comprehensive metadata strategy to ensure full data discoverability and context for both humans and AI agents.

Leadership & Mentorship:
Mentor data engineers, conduct rigorous design and code reviews, and collaborate with global teams to elevate the organization's overall data capabilities.

Required Qualifications & Competencies
8+ years of progressive experience in data architecture and engineering, with a proven track record in enterprise-scale environments.

Deep Foundational Knowledge:
Must possess expert-level understanding of:

Data Modeling: Demonstrable expertise in designing and explaining different data modeling techniques (e.g., normalized, star schema, data vault) and their trade-offs.

Data Engineering Fundamentals: Ability to articulate and implement patterns for managing data relationships, data quality, lineage, and complex data transformations.
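To make the modeling trade-off concrete, here is a minimal, hypothetical star-schema sketch in plain Python: a fact table carrying measures, a dimension table carrying descriptive attributes, and an aggregation that resolves attributes through the dimension's surrogate key. All names and figures are illustrative, not taken from any real system.

```python
# Dimension table: surrogate key -> descriptive attributes (hypothetical data).
dim_customer = {
    1: {"name": "Acme Corp", "region": "EMEA"},
    2: {"name": "Globex", "region": "AMER"},
}

# Fact table: each row references the dimension by surrogate key and
# carries an additive measure.
fact_sales = [
    {"customer_key": 1, "amount": 120.0},
    {"customer_key": 2, "amount": 75.0},
    {"customer_key": 1, "amount": 30.0},
]

def revenue_by_region(facts, dim):
    """Aggregate the fact table by an attribute resolved via the dimension."""
    totals = {}
    for row in facts:
        region = dim[row["customer_key"]]["region"]
        totals[region] = totals.get(region, 0.0) + row["amount"]
    return totals

print(revenue_by_region(fact_sales, dim_customer))  # {'EMEA': 150.0, 'AMER': 75.0}
```

The same data in a fully normalized model would split the dimension further (customer, region, etc.), trading query simplicity for update consistency; the star form favors analytical reads.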
Advanced Databricks Expertise:
Hands-on, production-level experience with:
Delta Lake, including performance tuning with Z-Ordering, OPTIMIZE, and VACUUM
Unity Catalog for security and governance
Delta Live Tables for pipeline orchestration
Cluster configuration and Spark performance tuning
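As a sketch of the routine table maintenance implied above, the following Python helper assembles the standard Delta Lake SQL statements: OPTIMIZE compacts small files (with ZORDER BY co-locating related data for file skipping), and VACUUM removes files outside the retention window. The table name and columns are hypothetical; on Databricks these statements would typically be executed via spark.sql from a scheduled job.

```python
def delta_maintenance_sql(table, zorder_cols, retain_hours=168):
    """Return the two standard Delta Lake maintenance statements for a table:
    OPTIMIZE ... ZORDER BY to compact and co-locate data files, and
    VACUUM ... RETAIN to clean up files older than the retention window."""
    return [
        f"OPTIMIZE {table} ZORDER BY ({', '.join(zorder_cols)})",
        f"VACUUM {table} RETAIN {retain_hours} HOURS",
    ]

# Hypothetical table and clustering columns, printed for inspection.
for stmt in delta_maintenance_sql("advisory.events", ["client_id", "event_date"]):
    print(stmt)
# OPTIMIZE advisory.events ZORDER BY (client_id, event_date)
# VACUUM advisory.events RETAIN 168 HOURS
```

The 168-hour default mirrors Delta's standard 7-day VACUUM retention; shortening it risks breaking time travel and in-flight readers.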
Proven Azure Data Stack Proficiency:
Strong hands-on experience with Azure Data Factory, Azure Data Lake Storage (ADLS Gen2), and Azure Synapse Analytics.

Architectural Prowess:
Demonstrated experience in designing and building modern data platforms, with a clear understanding of Lakehouse architecture and its implementation.

Strong Communicator:
Excellent verbal and written communication skills, with the ability to explain complex technical concepts to both technical and non-technical stakeholders.

What Differentiates a Top Candidate
AI/ML Data Infrastructure Experience:
Practical experience in building data platforms that directly feed AI/ML systems. This includes:
Hands-on experience with vector databases (e.g., Pinecone, Weaviate, Azure AI Search) and designing RAG pipelines.
Understanding of feature stores and MLOps principles.
Experience preparing and managing data for training and fine-tuning large language models (LLMs).
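For illustration, the retrieval core of a RAG pipeline reduces to nearest-neighbor search over embeddings. The toy Python below uses hand-written vectors and brute-force cosine similarity in place of a real embedding model and a managed vector database such as Azure AI Search, Pinecone, or Weaviate; the document texts and vectors are invented for the example.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy "vector store": chunk text paired with a hypothetical embedding.
# In production these vectors come from an embedding model and live in
# a vector database with an approximate nearest-neighbor index.
store = [
    ("Q3 advisory fee schedule", [0.9, 0.1, 0.0]),
    ("Client onboarding checklist", [0.1, 0.8, 0.2]),
    ("Portfolio rebalancing policy", [0.2, 0.1, 0.9]),
]

def retrieve(query_vec, k=2):
    """Rank stored chunks by cosine similarity to the query embedding."""
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

print(retrieve([0.85, 0.15, 0.05], k=1))  # ['Q3 advisory fee schedule']
```

The retrieved chunks would then be inserted into the LLM prompt as grounding context, which is the "augmented generation" half of the pipeline.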
Builder Mentality:
A proven desire not only to design but also to implement, optimize, and troubleshoot complex data systems hands-on.

Strategic Thinker:
Ability to contribute to long-term technical roadmaps, balancing immediate project needs with strategic platform evolution.
Location: US
Commitment: Full-time
#LI-JD1 #REMOTE