JCD Staffing
**This position requires either US Citizenship or Green Card status. Please do not apply if you do not meet this requirement.**
About the Role:
Drive AI innovation by building robust data pipelines that power machine learning applications. As a Data Engineer, you'll transform raw data into actionable insights, enabling AI-driven solutions for industries like retail, finance, and healthcare, while ensuring data integrity and scalability.
Responsibilities:
- Design and maintain scalable data pipelines using Apache Spark, Kafka, or Snowflake.
- Ensure high-quality, real-time data for ML model training and inference.
- Collaborate with data scientists to preprocess and transform data for AI models.
- Implement data governance, security, and compliance best practices (e.g., GDPR).
- Optimize data storage and retrieval for large-scale AI applications.
- Automate ETL processes to streamline data workflows and reduce latency.

Qualifications:
- 3+ years of data engineering experience, with exposure to AI/ML pipelines.
- Proficiency in SQL, Python, and big data tools (Spark, Hadoop, Kafka).
- Experience with cloud platforms (AWS, Azure, or GCP) for data processing.
- Strong analytical and problem-solving skills for complex data challenges.
- Knowledge of data modeling and schema design for AI use cases.
- Excellent teamwork and communication skills for cross-functional collaboration.