DataOps Engineer
Varite - Kansas City
Location: USA (Frisco/Kansas)
Hybrid Position
Pay Range: $60-$65/hr on W2
Job Summary:
We are seeking a skilled and motivated DataOps Engineer specializing in Kafka, Databricks, and Snowflake. The ideal candidate possesses a unique skill set for managing and optimizing data pipelines and operations within a modern data architecture. This role combines DataOps principles with data engineering to ensure data quality, reliability, and efficient delivery.
Years of experience needed:
- 4-6 years of experience as a DataOps Engineer
- Telecommunications billing experience is preferred.
Technical Skills:
- Hands-on experience with Apache Kafka clusters for real-time data streaming, event processing, and building decoupled data architectures (see the sketch after this list).
- Experience working with Databricks for data engineering and building ETL/ELT processes within the Databricks environment.
- Experience leveraging Snowflake as a cloud-based data warehouse for analytical workloads.
- Hands-on experience building and supporting data workflows and analytics, with a focus on data availability, quality, and resiliency.
- Familiarity with version control systems, particularly Git, and collaboration platforms.
- Knowledge of infrastructure-as-code (IaC) principles and tools, such as Ansible and Terraform.
- Strong problem-solving skills, a proactive approach to system health, and the ability to troubleshoot complex issues.
- A solid background in system administration, infrastructure management, or software engineering.
- Experience in incident response, post-incident analysis, and implementing preventive measures.
- Familiarity with observability tools, monitoring, and alerting systems.
- A commitment to balancing reliability concerns with continuous innovation and development.
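As a rough illustration of the kind of real-time Kafka work this role involves, here is a minimal Python consumer sketch using the confluent-kafka client. The broker address, topic name, and consumer group are placeholder assumptions for illustration, not details from this posting.

```python
# Minimal sketch of a Kafka consumer for real-time event processing.
# Assumptions (not from the posting): a broker at localhost:9092, a topic
# named "billing-events", and a consumer group "dataops-demo".
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # assumed broker address
    "group.id": "dataops-demo",             # assumed consumer group
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["billing-events"])      # assumed topic name

try:
    while True:
        msg = consumer.poll(1.0)            # wait up to 1s for a message
        if msg is None:
            continue
        if msg.error():
            # Log and skip non-fatal errors; a production pipeline
            # would alert here instead of printing.
            print(f"Consumer error: {msg.error()}")
            continue
        # Message values arrive as raw bytes; decode before processing.
        print(f"Received: {msg.value().decode('utf-8')}")
finally:
    consumer.close()                        # commit offsets, leave the group
```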
Certifications Needed:
- None
Key Responsibilities:
- Build and support data workflows and analytics, focusing on data availability, quality, and resiliency.
- Design, build, and maintain robust data pipelines for both batch and real-time processing, using Kafka for streaming data ingestion and Databricks for complex transformations and analytics (a minimal pipeline sketch follows this list).
- Integrate data from various sources into Snowflake for business intelligence and reporting.
- Implement CI/CD pipelines for data solutions, automate deployments, and apply monitoring and alerting practices to ensure data quality and system availability.
- Apply knowledge of infrastructure-as-code (IaC) principles and tools, including Ansible and Terraform, to manage and scale our infrastructure.
- Demonstrate strong problem-solving skills to troubleshoot complex issues efficiently.
- Leverage your background in system administration, infrastructure management, or software engineering to optimize our systems.
- Actively participate in incident response and post-incident analysis, and implement preventive measures to ensure system stability and reliability.
- Implement observability tools, monitoring, and alerting systems to proactively identify and resolve potential issues.
- Strike a balance between ensuring system reliability and promoting continuous innovation and development.
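To make the pipeline responsibility above concrete, here is a minimal PySpark Structured Streaming sketch of the Kafka-to-Databricks pattern the role describes. The broker address, topic, event schema, and storage paths are illustrative assumptions, not details from this posting; curated Delta output like this would typically then be loaded into Snowflake for BI and reporting.

```python
# Minimal sketch: stream billing events from Kafka into a Delta table on
# Databricks. Broker, topic, schema, and paths are assumptions for
# illustration only.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("billing-stream").getOrCreate()

# Assumed shape of the incoming JSON events.
event_schema = StructType([
    StructField("account_id", StringType()),
    StructField("charge_type", StringType()),
    StructField("event_time", TimestampType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # assumed broker
    .option("subscribe", "billing-events")                # assumed topic
    .load()
)

# Kafka delivers values as bytes; cast to string, then parse the JSON payload.
events = (
    raw.selectExpr("CAST(value AS STRING) AS json")
    .select(from_json(col("json"), event_schema).alias("e"))
    .select("e.*")
)

# Write to a Delta table with checkpointing so the stream can recover
# its progress after restarts.
query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/billing")  # assumed path
    .outputMode("append")
    .start("/tmp/delta/billing_events")                        # assumed path
)
query.awaitTermination()
```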