Expedient
Join Expedient's AI CTRL product team as a Data Engineer, where you'll transform complex, unstructured datasets into clean, AI‑ready data for enterprise clients. Working on‑site in Cleveland, you'll play a critical role in ensuring client data drives accurate AI/ML decision‑making within our enterprise cloud environment.
Key Responsibilities
Transform Data for AI: Ingest, structure, and cleanse large unstructured datasets, making them suitable for AI/ML model training and analysis.
Build Data Pipelines: Design and maintain scalable ETL workflows that integrate data from diverse sources into our AI platform.
Ensure Data Quality: Implement validation and cleansing processes that directly impact AI model accuracy.
Collaborate Cross‑Functionally: Work closely with our Agentic AI professional services team and Solution Architects to meet AI use‑case requirements.
Engage with Clients: Explain complex data concepts in simple terms, helping stakeholders understand how quality data drives better AI outcomes.
Optimize & Innovate: Troubleshoot pipeline issues, enhance performance, and stay current with emerging data engineering technologies.
Qualifications
Experience: 5+ years designing and building data pipelines and ETL processes in production environments.
Technical Expertise: Proficiency in Python, SQL, and ETL frameworks (Apache Airflow, AWS Glue, etc.); strong knowledge of SQL/NoSQL databases; experience with unstructured data transformation; familiarity with big data frameworks and AI/ML concepts.
Key Attributes: Adaptable in fast‑paced environments, strong ownership mentality, entrepreneurial mindset, excellent communicator, passionate about solving business problems with data.
Education: Bachelor's degree in Computer Science, Data Engineering, or a related field (or equivalent practical experience).
Location: Cleveland, Ohio office. This is an on‑site role; regional travel may be required.
The salary range is $125,000 to $175,000 annually, based on experience and skills.