Experis
Position Title: (2) Data Engineer - LLM & Generative AI Integration
Location: Columbus, OH
Role: 3-month contract
Rate: $80/hr
Citizenship: US Citizens
Interviews: will be on-site
Position 1 - will be integrating large language models and generative AI systems into the environment. Mandatory skills: Databricks, Azure, Cloud Computing, Python, SQL, Semantic Kernel or LlamaIndex.
Position 2 - will be creating the data pipelines from beginning to end: transforming the data and getting it ready for AI. Mandatory skills: CI/CD pipeline experience, Azure Databricks, SQL, Python. This position will communicate more with business users, so it requires strong communication and collaboration skills.
Overview
We are looking for an innovative Data Engineer to lead the integration of Large Language Models (LLMs) and Generative AI systems within our enterprise data ecosystem. This role focuses on designing, automating, and optimizing data pipelines and interfaces that connect curated enterprise data with advanced AI models. You will bridge the gap between data engineering and AI innovation, delivering secure, scalable, and high-performance systems that power next-generation language-driven applications.
Key Responsibilities
- Design, build, and optimize data pipelines supporting LLM-powered systems and AI applications.
- Integrate Generative AI and LLM technologies (OpenAI, Anthropic, Azure OpenAI, LLaMA, Mistral, etc.) with enterprise data sources.
- Develop and maintain Retrieval-Augmented Generation (RAG) pipelines connecting structured and unstructured data to model contexts.
- Collaborate with data scientists, ML engineers, and AI researchers to align data quality with model performance.
- Implement agentic system architectures and orchestration frameworks (LangChain, Semantic Kernel, or similar).
- Enforce AI security, governance, and compliance best practices for responsible data use.
- Automate LLM evaluation, fine-tuning, and deployment workflows where applicable.
- Monitor and troubleshoot AI data pipelines for performance, accuracy, and scalability.
- Document design patterns, integration frameworks, and operational playbooks.
Required Skills & Qualifications
- Proven experience as a Data Engineer or ML Engineer working with LLM or Generative AI integrations.
- Strong programming skills in Python, SQL, and distributed data frameworks (Spark, Databricks).
- Hands-on experience with RAG architectures, vector databases (Pinecone, Weaviate, Chroma, FAISS), and embedding pipelines.
- Familiarity with frameworks such as LangChain, LlamaIndex, and Semantic Kernel.
- Knowledge of AI security and privacy, including prompt injection prevention and data governance.
- Solid understanding of cloud-based AI infrastructure, preferably Azure AI Services, Azure Databricks, and Azure OpenAI Service.
- Strong problem-solving skills and ability to collaborate across data, infrastructure, and AI teams.
- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
Preferred Qualifications
- Experience fine-tuning or customizing LLMs for enterprise use cases.
- Familiarity with MLflow, MLOps, and CI/CD pipelines for model deployment.
- Knowledge of medallion data architecture and Delta Lake for AI-ready data management.
- Experience with real-time data systems (Kafka, Event Hubs) for streaming AI applications.
- Contributions to open-source AI projects or enterprise AI integrations.