Davita Inc.
We are seeking an experienced Senior Data and AI Engineer to join our team. The Data and AI Engineer is responsible for designing, building, and maintaining robust data pipelines and AI systems that support the bank's digital transformation, analytics, and automation initiatives. This role blends advanced data engineering with AI/ML expertise to deliver scalable, secure, and high-performing solutions that enable business insights, operational efficiency, and innovative customer experiences.
The ideal candidate will have expertise in cloud-based ETL tools such as Azure Data Factory (ADF), Apache Airflow, and dbt; experience implementing medallion-style data architectures on modern data warehouse platforms such as BigQuery, Snowflake, and Redshift; proficiency with ML frameworks such as PyTorch and TensorFlow; familiarity with gen-AI platforms such as Google Vertex AI, Snowflake Cortex AI, and AWS Bedrock; and experience with CI/CD pipelines, DevOps/MLOps practices, and version control (GitHub Actions or similar).
Responsibilities:
Design, implement, and maintain scalable data pipelines using ADF and dbt
Develop and optimize ELT processes within a medallion architecture (Bronze, Silver, Gold, and Semantic layers); see the sketch following this list for an illustration
Collaborate with data governance teams, analysts, and other stakeholders to understand data requirements and deliver high-quality datasets
Implement data quality checks and monitoring throughout the data lifecycle
Optimize query performance and data models for efficient analytics
Contribute to data governance and documentation efforts
Design and implement ML and AI models to enhance data insights and automation
Integrate analytical models into existing data pipelines and workflows
Incorporate DevOps, MLOps, and AIOps principles for continuous delivery, monitoring, and automated maintenance of AI systems
Monitor and optimize the performance of AI models and data pipelines, ensuring reliability and scalability
Stay updated with the latest AI and machine learning technologies and best practices
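To make the medallion and data quality responsibilities above concrete, the sketch below shows a minimal Bronze-to-Silver promotion with simple quality gates. It is illustrative only: the table, columns, and pandas-based approach are assumptions, and a production pipeline would typically express these steps as ADF activities, dbt models, or Spark jobs.

import pandas as pd

# Hypothetical Bronze-layer extract: raw, untyped transaction records.
bronze = pd.DataFrame({
    "txn_id": ["T1", "T2", "T2", "T3"],
    "amount": ["100.50", "75.00", "75.00", None],
    "posted_at": ["2024-01-02", "2024-01-03", "2024-01-03", "2024-01-04"],
})

def bronze_to_silver(df: pd.DataFrame) -> pd.DataFrame:
    """Promote Bronze records to Silver: dedupe, cast types, drop bad rows."""
    silver = df.drop_duplicates(subset="txn_id").copy()
    silver["amount"] = pd.to_numeric(silver["amount"], errors="coerce")
    silver["posted_at"] = pd.to_datetime(silver["posted_at"], errors="coerce")
    return silver.dropna(subset=["amount", "posted_at"])

def quality_checks(df: pd.DataFrame) -> None:
    """Simple data quality gates of the sort run between layers."""
    assert df["txn_id"].is_unique, "duplicate transaction ids in Silver"
    assert (df["amount"] > 0).all(), "non-positive amounts in Silver"

silver = bronze_to_silver(bronze)
quality_checks(silver)
print(silver)

In practice these assertions would map to dbt tests or pipeline validation steps rather than inline Python, but the layering logic is the same.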
Requirements:
Bachelor's degree in Computer Science, Engineering, or related field
5+ years of experience as a Data Engineer
Strong proficiency in SQL and Python
Hands-on experience with Azure Data Factory, AWS Glue, or Apache Airflow for workflow orchestration
Expertise in using dbt for data transformation and modeling
Experience implementing medallion architecture or similar multi-layer data architectures
Familiarity with cloud data platforms (e.g., BigQuery, Snowflake, or Redshift)
Knowledge of data warehousing concepts and dimensional modeling
Experience with developing ML and statistical models
Strong problem-solving skills and attention to detail
Excellent communication skills and ability to work in a collaborative environment
Preferred Qualifications:
Experience with Delta Lake or similar data lakehouse technologies
Familiarity with AI/ML frameworks (e.g., TensorFlow, PyTorch), NLP, and generative AI models
Knowledge of data governance and compliance requirements
Experience with CI/CD pipelines, DevOps/MLOps practices, and version control (GitHub Actions or similar)
Experience dealing with data at financial institutions/banks
Experience with the FIS IBS core banking system
Familiarity with Kafka, Kinesis, or similar data streaming services (see the sketch after this list)
Familiarity with microservices-based and event-driven architecture
Experience with efficient code development and debugging using gen-AI tools such as GitHub Copilot
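As a point of reference for the streaming item above, here is a minimal sketch of an event-driven consumer using the kafka-python client. The topic name, broker address, consumer group, and message schema are assumptions for illustration only.

import json
from kafka import KafkaConsumer  # kafka-python client

# Hypothetical topic and broker; real values come from the deployment.
consumer = KafkaConsumer(
    "transactions",                      # assumed topic name
    bootstrap_servers="localhost:9092",  # assumed broker address
    group_id="silver-loader",            # assumed consumer group
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # In a real pipeline this would validate the event and land it in the
    # Bronze layer for downstream medallion processing.
    print(f"received txn {event.get('txn_id')} at offset {message.offset}")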