Equiliem
Public Trust - Can start without it but must be able to obtain it
Occasional onsite (once a quarter) in the Gaithersburg area
Job Description
Client is seeking a GenAI Data Automation Engineer to design and implement innovative, AI-driven automation solutions across AWS and Azure hybrid environments. You will be responsible for building intelligent, scalable data pipelines and automations that integrate cloud services, enterprise tools, and Generative AI to support mission‑critical analytics, reporting, and customer engagement platforms. The ideal candidate is mission-focused and delivery-oriented, and applies critical thinking to create innovative capabilities and solve technical issues.
Who we are
Client is a Fortune 500 technology, engineering, and science solutions and services leader working to solve the world’s toughest challenges in the defense, intelligence, civil, and health markets. Client Civil Group helps the government modernize operations with leading-edge AI/ML-driven data management and analytics solutions. We are trusted partners to both government and highly regulated commercial customers looking for transformative solutions in mission IT, security, software, engineering, and operations. We work with our customers including the FAA, DOE, DOJ, NASA, National Science Foundation, Transportation Security Administration, Customs and Border Protection, airports, and electric utilities to make the world safer, healthier, and more efficient.
In this role you will
Design and maintain data pipelines in AWS using S3, RDS/SQL Server, Glue, Lambda, EMR, DynamoDB, and Step Functions
Develop ETL/ELT processes to move data between multiple systems, including DynamoDB → SQL Server (AWS) and AWS ↔ Azure SQL
Integrate Amazon Connect and NICE inContact CRM data into the enterprise data pipeline for analytics and operational reporting
Engineer and enhance ingestion pipelines with Apache Spark, Flume, and Kafka for real‑time and batch processing into Apache Solr and AWS OpenSearch platforms
Leverage Generative AI services and frameworks (AWS Bedrock, Amazon Q, Azure OpenAI, Hugging Face, LangChain) to create automated processes for vector generation and embedding from unstructured data to support Generative AI models
Automate data quality checks, metadata tagging, and lineage tracking
Enhance ingestion/ETL with LLM‑assisted transformation and anomaly detection
Build conversational BI interfaces that allow natural language access to Solr and SQL data
Develop AI‑powered copilots for pipeline monitoring and automated troubleshooting
Implement SQL Server stored procedures, indexing, query optimization, profiling, and execution plan tuning to maximize performance
Apply CI/CD best practices using GitHub, Jenkins, or Azure DevOps for both data pipelines and GenAI model integration
Ensure security and compliance through IAM, KMS encryption, VPC isolation, RBAC, and firewalls
Support Agile DevOps processes with sprint‑based delivery of pipeline and AI‑enabled features
Required Qualifications
BS in Computer Science or a related field with 2+ years of data engineering and automation experience
Hands‑on experience with LLMs and Generative AI frameworks such as AWS Bedrock, Azure OpenAI, or open-source platforms
Hands‑on experience with SQL, SSIS, Python, Spark, Bash, PowerShell, and AWS/Azure CLIs
Experience with AWS services such as S3, RDS/SQL Server, Glue, Lambda, EMR, and DynamoDB
Familiarity with Apache Flume, Kafka, and Solr for large‑scale data ingestion and search
Experience integrating REST API calls into data pipelines and workflows
Familiarity with JIRA and GitHub / Azure DevOps / Jenkins for SDLC and CI/CD automation
Strong troubleshooting and performance optimization skills in SQL, Spark, or other data engineering solutions
Experience operationalizing Generative AI (GenAI Ops) pipelines, including model deployment, monitoring, retraining, and lifecycle management for LLMs and AI‑enabled data workflows
Good communication and presentation skills
US citizenship and the ability to obtain a Public Trust clearance
Preferred Qualifications (plus)
Experience implementing RAG pipelines, embeddings, and vector search with Solr, OpenSearch, FAISS, Pinecone, or pgvector
Experience with multi‑cloud data integration (AWS ↔ Azure SQL)
Familiarity with Microsoft BizTalk and SSIS for SQL Server ETL workflows
Knowledge of data lineage/governance tools (Purview, Unity Catalog, AWS Glue Catalog)
Familiarity with Infrastructure‑as‑Code (Terraform/CloudFormation, Bicep) for automated deployments
Experience with compliance frameworks (FedRAMP, PCI‑DSS, HIPAA)
Seniority level: Mid‑Senior level
Employment type: Contract
Job function: Consulting
Industries: IT Services and IT Consulting; Aviation and Aerospace Component Manufacturing