Jobs via Dice
Data Architect with Snowflake and DBT
Location:
Cincinnati, OH (Onsite)
Employment Type:
Contract on W2
Job Summary
We are seeking a highly skilled Data Architect. The ideal candidate will design, implement, and manage data architectures that enable scalable AI/ML solutions to solve real-world business problems.
Key Responsibilities
Design and implement scalable data architectures to support AI/ML workloads.
Work closely with Data Scientists, ML Engineers, and business teams to understand requirements and convert them into technical solutions.
Build and maintain data pipelines (ETL/ELT) to prepare structured and unstructured data for modeling.
Ensure data quality, governance, privacy, and compliance standards are met.
Design data lakes, warehouses, and MLOps pipelines using tools like Databricks, Snowflake, AWS/Google Cloud Platform/Azure, etc.
Optimize data storage and processing to support large-scale AI/ML training and inference.
Define and manage metadata, master data, and data catalog solutions.
Evaluate and implement AI/ML tools and frameworks (e.g., TensorFlow, PyTorch, MLflow).
Lead data modeling, schema design, and performance tuning.
Collaborate with DevOps for CI/CD of ML models and data workflows.
Required Skills & Qualifications
Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
5+ years of experience in data architecture and engineering.
Strong expertise in AI/ML concepts and model lifecycle.
Proficiency in SQL, Python, and data modeling techniques.
Experience with big data platforms: Hadoop, Spark, Kafka, etc.
Hands-on experience with cloud platforms (AWS, Azure, or Google Cloud Platform) and services like S3, Redshift, BigQuery, etc.
Experience in building and managing MLOps pipelines.
Familiarity with containerization (Docker, Kubernetes) and workflow tools like Airflow.
Preferred Skills
Experience with data governance tools (e.g., Collibra, Alation).
Knowledge of AI/ML model interpretability and ethical AI practices.
Seniority Level:
Mid-Senior