
Senior Data Engineer


Job Description

BioCatch is the leader in Behavioral Biometrics, a technology that leverages machine learning to analyze an online user's physical and cognitive digital behavior to protect individuals online. BioCatch's mission is to unlock the power of behavior and deliver actionable insights to create a digital world where identity, trust, and ease coexist. Today, 32 of the world's 100 largest banks and 210 financial institutions in total rely on BioCatch Connect to combat fraud, facilitate digital transformation, and grow customer relationships. BioCatch's Client Innovation Board, an industry-led initiative including American Express, Barclays, Citi Ventures, and National Australia Bank, helps BioCatch identify creative and cutting-edge ways to leverage the unique attributes of behavior for fraud prevention. With over a decade of analyzing data, more than 80 registered patents, and unparalleled experience, BioCatch continues to innovate to solve tomorrow's problems. For more information, please visit www.biocatch.com.

Main responsibilities:

- Set the direction of our data architecture and determine the right tools for the right jobs: we collaborate on the requirements, then you call the shots on what gets built.
- Manage end-to-end execution of high-performance, large-scale data-driven projects, including design, implementation, and ongoing maintenance (a brief pipeline sketch follows this list).
- Optimize and monitor the team's cloud costs.
- Design and build monitoring tools to ensure the efficiency and reliability of data processes.
- Implement CI/CD for data workflows.
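To make the pipeline responsibility concrete, here is a minimal sketch of the kind of streaming job this role would own: Kafka in, Delta Lake out, via PySpark Structured Streaming. The broker address, topic, event schema, and paths are all hypothetical placeholders, and the spark-sql-kafka and delta-spark connectors are assumed to be on the classpath.

```python
# A minimal sketch of a Kafka-to-Delta streaming pipeline in PySpark.
# Broker, topic, schema, and paths are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = (
    SparkSession.builder.appName("behavior-events-stream")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Illustrative event schema; the real payload is product-defined.
schema = StructType([
    StructField("session_id", StringType()),
    StructField("event_type", StringType()),
    StructField("ts", TimestampType()),
])

# Read raw events from Kafka and parse the JSON value column.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "kafka:9092")  # hypothetical broker
    .option("subscribe", "behavior-events")           # hypothetical topic
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Write the parsed stream to a partitioned Delta table with checkpointing.
query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/chk/behavior-events")  # hypothetical path
    .partitionBy("event_type")
    .start("/lake/behavior_events")                        # hypothetical path
)
query.awaitTermination()
```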

Requirements

- 5+ years of experience in data engineering and big data at large scale. - Must
- Extensive experience with data lakes, warehouses, and lakehouses (Snowflake, Delta Lake, Iceberg, BigQuery, Redshift). - Must
- Hands-on experience with Kafka, RabbitMQ, or similar for real-time data processing. - Must
- Strong programming skills in Python or at least one OOP language (e.g., Java, Scala). - Must
- Experience working with any of the major cloud providers: Azure, Google Cloud, AWS. - Must
- Hands-on experience designing, building, and maintaining batch and streaming data pipelines using Python/PySpark for large-scale workloads. - Must
- Familiarity with containerization and orchestration concepts, with proficiency in Docker and Kubernetes. - Must
- Expertise in ETL development, data modeling, and data warehousing best practices. - Big Advantage
- Knowledge of monitoring and observability tooling (Datadog, Prometheus, ELK, etc.). - Big Advantage
- Experience with transactional data lake formats like Delta Lake and Iceberg, including schema evolution, partitioning strategies, and performance tuning (see the sketch after this list). - Big Advantage
- Proven experience developing data-driven microservices for processing and managing data. - Big Advantage
- Experience with infrastructure as code, deployment automation, and CI/CD practices using tools such as Helm, ArgoCD, Terraform, GitHub Actions, and Jenkins. - Big Advantage
- Experience with design pattern concepts. - Advantage
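As a brief illustration of the Delta Lake schema-evolution requirement above, the following hedged sketch appends a batch whose DataFrame carries a new column and lets the table schema evolve via the mergeSchema write option. The table path and column names are hypothetical and continue the placeholder names from the earlier sketch.

```python
# A hedged sketch of Delta Lake schema evolution: appending a batch that
# carries a new "channel" column. Path and columns are hypothetical;
# the delta-spark package is assumed to be on the classpath.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("delta-schema-evolution")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

new_batch = spark.createDataFrame(
    [("s1", "login", "mobile")],
    ["session_id", "event_type", "channel"],  # "channel" is the new column
)

(
    new_batch.write.format("delta")
    .mode("append")
    .option("mergeSchema", "true")  # allow the table schema to evolve on write
    .partitionBy("event_type")      # must match the table's existing partitioning
    .save("/lake/behavior_events")  # hypothetical table path
)
```

The same mergeSchema option applies to streaming writes; incompatible changes such as column type changes require an explicit overwrite with overwriteSchema instead.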

Our stack: Azure, GCP, Databricks, Snowflake, Airflow, Spark, Kafka, Kubernetes, Neo4J, AeroSpike, ELK, DataDog, microservices, Python, SQL

Your stack: Proven strong back-end software engineering skills, the ability to think for yourself and challenge common assumptions, a commitment to high-quality execution, and an embrace of collaboration.