Onebridge
Onebridge, a Marlabs Company, is a global AI and Data Analytics Consulting Firm that empowers organizations worldwide to drive better outcomes through data and technology. Since 2005, we have partnered with some of the largest healthcare, life sciences, financial services, and government entities across the globe. We have an exciting opportunity for a highly skilled DataOps Consultant to join our innovative and dynamic team.
Employment Type: Full Time
Location: Indianapolis, IN - Hybrid

DataOps Consultant | About You

As a DataOps Consultant, you are responsible for the seamless integration, automation, and optimization of data pipelines and infrastructure. You excel at collaborating with cross-functional teams to deliver scalable, efficient data solutions that meet business needs. With expertise in cloud platforms, data processing tools, and version control, you keep data operations reliable and performant. Your focus on data integrity, quality, and continuous improvement drives the success of data workflows. Proactive by nature, you stay ahead of industry trends and solve complex challenges to strengthen the organization's data ecosystem.
Develop, deploy, and maintain scalable and efficient data pipelines that handle large volumes of data from various sources.
Collaborate with Data Engineers, Data Scientists, and Business Analysts to ensure that data solutions meet business requirements and optimize workflows.
Monitor, troubleshoot, and optimize data pipelines to ensure high availability, reliability, and performance.
Implement and maintain automation for data ingestion, transformation, and deployment processes to improve efficiency.
Ensure data quality by implementing validation checks and continuous monitoring to detect and resolve issues (a brief illustrative sketch follows this list).
Document data processes, pipeline configurations, and troubleshooting steps to maintain clarity and consistency across teams.
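As a rough illustration of the validation work described above, the sketch below shows a minimal, hypothetical data-quality check in Python using pandas. The column names, rules, and sample data are placeholders for illustration only, not actual client requirements.

```python
# Illustrative sketch only: a simple data-quality gate of the kind this role
# would implement. The 'orders' schema and rules here are hypothetical.
import pandas as pd


def validate_orders(df: pd.DataFrame) -> list[str]:
    """Return a list of data-quality issues found in the frame."""
    issues = []
    # Completeness: required columns must be present.
    for col in ("order_id", "amount", "created_at"):
        if col not in df.columns:
            issues.append(f"missing column: {col}")
    # Uniqueness: order IDs should not repeat.
    if "order_id" in df.columns and df["order_id"].duplicated().any():
        issues.append("duplicate order_id values")
    # Range check: monetary amounts should be non-negative.
    if "amount" in df.columns and (df["amount"] < 0).any():
        issues.append("negative amounts")
    return issues


if __name__ == "__main__":
    # Deliberately bad sample data to show the checks firing.
    sample = pd.DataFrame({"order_id": [1, 1],
                           "amount": [10.0, -5.0],
                           "created_at": ["2024-01-01", "2024-01-02"]})
    for issue in validate_orders(sample):
        print("DQ issue:", issue)
```

In practice a check like this would run inside the pipeline itself and feed alerts into the continuous monitoring mentioned above, so issues are caught before bad records reach downstream consumers.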
5+ years of experience in DataOps or a related field, with strong hands-on experience using cloud platforms (AWS, Azure, Google Cloud) for data storage, processing, and analytics.
Proficiency in programming languages such as Python, Java, or Scala for building and maintaining data pipelines.
Experience with data orchestration tools such as Apache Airflow, Azure Data Factory, or similar tools for automating data workflows (a minimal Airflow sketch appears after this list).
Expertise in big data processing frameworks (e.g., Apache Kafka, Apache Spark, Hadoop) for handling large volumes of data.
Hands-on experience with version control systems such as Git for managing code and deployment pipelines.
Solid understanding of data governance, security best practices, and regulatory compliance standards (e.g., GDPR, HIPAA).
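For a concrete flavor of the orchestration stack named above, here is a minimal, hypothetical Apache Airflow DAG (assuming Airflow 2.4+ and Python) wiring an extract step through a validation gate to a load step. The DAG name, tasks, and schedule are illustrative placeholders, not a prescribed implementation.

```python
# Illustrative sketch only: a minimal Airflow 2.4+ DAG of the kind this role
# would build and maintain. All names and data here are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder: pull raw records from a source system (API, bucket, etc.).
    return [{"id": 1, "value": 42}]


def validate(ti):
    # Placeholder data-quality gate: fail fast if required fields are missing.
    records = ti.xcom_pull(task_ids="extract")
    assert all("id" in r and "value" in r for r in records), "schema check failed"


def load(ti):
    # Placeholder: write the validated records to the warehouse.
    records = ti.xcom_pull(task_ids="extract")
    print(f"loading {len(records)} records")


with DAG(
    dag_id="example_ingestion_pipeline",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="validate", python_callable=validate)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3  # extract, then validate, then load
```

Putting the validation task between extract and load means a failed quality check halts the run before anything is written, which is one common way to enforce the data-integrity focus this role calls for.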