TCS USA / Avance Consulting
Senior Python Developer with AI/ML/NLP Experience
TCS USA / Avance Consulting, Addison, Texas, United States, 75001
Skill: Senior Python Developer with AI/ML/NLP Experience
We are looking for a highly experienced AI/ML/NLP Engineer with expert-level skills in Python programming and proven success in building and tuning production-grade machine learning and natural language processing models.
The ideal candidate will have a strong track record of delivering AI-driven systems end-to-end and be capable of leading projects, mentoring teams, and solving complex AI problems at scale.
Key Responsibilities:
Lead the design, development, and deployment of advanced AI/ML/NLP models and solutions using Python.
Fine-tune large-scale pre-trained models (e.g., BERT, GPT, T5, LLaMA, Mistral, Claude) for domain-specific tasks (an illustrative fine-tuning sketch follows this list).
Build custom models from scratch where needed, including data preprocessing, model architecture design, training, evaluation, and optimization.
Perform hyperparameter tuning, prompt engineering, embedding generation, and model explainability analysis.
Build end-to-end ML pipelines, ensuring scalability, modularity, and reusability using tools like MLflow, Airflow, Ray, and Kubeflow.
Mentor and guide junior team members and contribute to technical strategy and architecture.
Collaborate with cross-functional teams including data engineering, DevOps, product, and business stakeholders.
Maintain strong documentation, reproducibility, and model governance practices.
Stay updated with cutting-edge research and translate innovations into real-world applications.
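For illustration only, and not part of the role's formal requirements: the kind of parameter-efficient fine-tuning work described above might look like the minimal sketch below, assuming Hugging Face Transformers, Datasets, and PEFT. The checkpoint, dataset, target modules, and hyperparameters are placeholders chosen for brevity, not a prescribed setup.

```python
# Minimal LoRA fine-tuning sketch (illustrative placeholders throughout).
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model

base_model = "distilbert-base-uncased"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=2)

# Wrap the frozen base model with low-rank adapters; only the adapter
# weights are trained. target_modules matches DistilBERT's attention projections.
lora_cfg = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                      target_modules=["q_lin", "v_lin"], task_type="SEQ_CLS")
model = get_peft_model(model, lora_cfg)

dataset = load_dataset("imdb")  # example public dataset

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=256)

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="./lora-out",
    per_device_train_batch_size=16,
    num_train_epochs=1,
    learning_rate=2e-4,
    logging_steps=50,
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
    eval_dataset=tokenized["test"].select(range(500)),
)
trainer.train()
```

In practice the same pattern extends to generative models (task_type "CAUSAL_LM"), with hyperparameter tuning, evaluation, and experiment tracking layered on top via tools such as MLflow.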
Core Skills Required:
Expert-level Python skills with strong software engineering discipline (testing, modularity, CI/CD).
Deep hands-on expertise in Machine Learning, Deep Learning, NLP, and LLM fine-tuning.
Experience with transformers and large models using libraries like Hugging Face Transformers, LangChain, and OpenAI APIs.
Advanced knowledge of PyTorch and/or TensorFlow for deep learning tasks.
Strong experience with vector databases (e.g., FAISS, Pinecone, Weaviate) and RAG architectures; a minimal retrieval sketch follows this list.
Practical experience deploying AI/ML systems on cloud platforms (AWS, Azure, GCP) and using Docker/Kubernetes.
Experience with retrieval systems, semantic search, or conversational AI/chatbot frameworks.
Proven success with model optimization techniques such as quantization, distillation, LoRA, and PEFT.
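For illustration only: the retrieval half of a RAG architecture referenced above could be sketched as below, assuming FAISS and sentence-transformers. The embedding model and documents are placeholders, and the LLM generation step, chunking, and metadata filtering of a production system are omitted.

```python
# Minimal semantic-retrieval sketch for RAG (illustrative placeholders).
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

documents = [
    "Reset your password from the account settings page.",
    "Invoices are emailed on the first business day of each month.",
    "Support is available 24/7 via chat and email.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedding model
embeddings = np.asarray(
    encoder.encode(documents, normalize_embeddings=True), dtype="float32"
)

# Inner product on normalized vectors is equivalent to cosine similarity.
index = faiss.IndexFlatIP(embeddings.shape[1])
index.add(embeddings)

def retrieve(query: str, k: int = 2):
    """Return the k most similar documents and their similarity scores."""
    q = np.asarray(
        encoder.encode([query], normalize_embeddings=True), dtype="float32"
    )
    scores, ids = index.search(q, k)
    return [(documents[i], float(s)) for i, s in zip(ids[0], scores[0])]

print(retrieve("How do I change my password?"))
```

In a full RAG system, the retrieved passages would be injected into an LLM prompt to ground the generated answer.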