University of Texas

Director of AI Platforms, Texas Institute for Electronics

University of Texas, Austin, Texas, US, 78716


**Job Posting Title:** Director of AI Platforms, Texas Institute for Electronics

**Hiring Department:** Texas Institute for Electronics

**Position Open To:** All Applicants

**Weekly Scheduled Hours:** 40

**FLSA Status:** To Be Determined at Offer

**Earliest Start Date:** Ongoing

**Position Duration:** Expected to Continue

**Location:** AUSTIN, TX

**Job Details:**

## **General Notes**

### About TIE

Texas Institute for Electronics (TIE) is a transformative, well-funded semiconductor foundry venture combining the agility of a startup with the scale of a national initiative.

### Our Mission

A key part of our mission is to advance the state of the art in 3D heterogeneous integration (3DHI), chiplet-based architectures, and multi-component microsystems, catalyzing breakthroughs across microelectronics, artificial intelligence, quantum computing, high-performance computing, and next-generation healthcare devices.

### Our Impact

Backed by $1.4 billion in combined funding from DARPA, Texas state initiatives, and strategic partners, we are building foundational capabilities in advanced packaging and integrated design infrastructure to restore U.S. leadership in microelectronics manufacturing.

### Our Technology

Our 3DHI and chiplet integration platforms combine novel thermal management and advanced interconnect solutions to deliver unprecedented performance and energy efficiency. Operating at the intersection of defense electronics and commercial markets, TIE offers a rare opportunity to reimagine an industry from the ground up and build transformative products with global impact.

UT Austin, recognized by Forbes, provides outstanding compensation and benefits packages that include:

* Competitive health benefits (employee premiums covered at 100%, family premiums at 50%)
* Voluntary Vision, Dental, Life, and Disability insurance options
* Generous paid vacation, sick time, and holidays
* Teachers Retirement System of Texas, a defined benefit retirement plan, with 8.25% employer matching funds
* Additional Voluntary Retirement Programs: Tax Sheltered Annuity 403(b) and a Deferred Compensation program 457(b)
* Flexible spending account options for medical and childcare expenses
* Robust free training access through LinkedIn Learning plus professional conference opportunities
* Tuition assistance
* Expansive employee discount program including athletic tickets
* Free access to UT Austin's libraries and museums with staff ID card
* Free rides on all UT Shuttle and Austin CapMetro buses with staff ID card


## **Purpose**

Drive the design, deployment, and optimization of enterprise-grade LLM systems, ensuring scalable, secure, and high-performance AI solutions tailored to complex organizational needs. Lead technical innovation across architecture, MLOps, and retrieval-augmented generation to deliver impactful, privacy-compliant AI capabilities.

## **Responsibilities**

* Define and lead the software architecture and implementation roadmap for a scalable, modular AI infrastructure platform. You will work across backend, orchestration, and deployment layers, focusing on performance, security, and reliability.
* Build and manage a high-caliber engineering team, including backend developers, platform engineers, and site reliability engineers. You will be responsible for hiring, mentoring, and setting a culture of technical excellence and operational discipline.
* Own core services that power AI pipelines, including APIs for data ingestion and transformation, orchestration of model inference jobs, and integration with LLM orchestration layers and vector stores.
* Establish technical strategy and design standards that support rapid prototyping, automated testing, and code reuse across teams. You will define best practices and lead by example in system design, code reviews, and architectural discussions.
* Lead on-premise deployment strategy, ensuring our stack is optimized for hybrid environments. You will manage challenges around air-gapped deployments, resource management, and update rollouts in constrained environments.
* Collaborate cross-functionally with AI engineering, product management, and customer success to align engineering priorities with product goals. You'll help translate high-level needs into deliverable milestones.
* Implement and maintain CI/CD pipelines and DevOps best practices, focusing on security, observability, rollback safety, and developer productivity.
* Develop and enforce SLAs/SLOs for critical services, putting in place monitoring, alerting, and incident response practices that ensure uptime and stability in enterprise-grade deployments.
* Stay on top of evolving technologies in distributed systems, containerization, service mesh, observability, and developer tooling, bringing in the best ideas to future-proof our platform.

Other related functions as assigned.

## **Required Qualifications**

* BS in Computer Science, Engineering, or a related field.
* 8 or more years of software engineering experience, including 3 or more years focused on AI/ML and LLM-based applications.
* Deep knowledge of LLM architectures and tools. You understand transformer models inside and out and are fluent in the surrounding ecosystem, from tokenization and embedding techniques to prompt engineering and fine-tuning methods.
* Proven track record of productionizing LLM applications end-to-end. You have built and deployed AI-powered solutions (using both commercial APIs and open-source models) into real-world production environments, including experience with on-prem or private cloud deployments of AI systems.
* Hands-on experience with the LLM tech stack, including building pipelines with vector databases (for embedding storage/search) and using LLM orchestration frameworks like LangChain or LlamaIndex to compose prompts, tools, and data retrieval.
* Experience with modern model serving and scaling, including familiarity with frameworks such as vLLM, LMDeploy, Ray (for distributed inference), or Triton Inference Server to optimize runtime performance of large models.
* Exceptional engineering and problem-solving skills. You can design elegant solutions for complex challenges and debug issues across the ML stack (data, model, infrastructure) when things go wrong.
* Excellent communication skills. You know how to articulate complex technical concepts clearly and adjust your message for engineers, founders, or other stakeholders. You can document architectures, write clear project plans, and mentor others by explaining the "why" behind technical decisions.
* Ability to work effectively in fast-paced environments. You act with urgency, adapt quickly to new information, and take ownership of your work.
* Execution mindset. You have demonstrated experience driving projects forward in a hands-on role without heavy process or management overhead. You excel at managing multiple priorities, staying organized, and delivering results in a lean team setting.

*Relevant education and experience may be substituted as appropriate.*

## **Preferred Qualifications**

* MS or PhD in Computer Science, Machine Learning, or a related discipline.
* Prior technical leadership experience. Experience leading an engineering team or serving as a tech lead for complex AI/ML projects, with the ability to mentor others and a history of managing project roadmaps or teams in previous roles.
* Domain expertise in NLP/LLMs. Publications, open-source contributions, or recognized expertise in the NLP/LLM field (e.g., contributions to Transformer libraries, research in language modeling) will set you apart.
* Enterprise AI experience. Familiarity with the unique challenges of applying AI in enterprise settings, such as handling sensitive data, ensuring compliance (e.g., GDPR, SOC 2), or integrating with enterprise IT systems, is a plus.

## **Salary Range**

TIE