BilgeAdam Technologies GmbH
Overview
BGTS International Business Unit is looking for Data DevOps Engineers to join our team and work on core banking and credit card systems. This is a great opportunity to build scalable, high-performance solutions in the financial domain. As a Data DevOps Engineer, you will help us build, deploy, and maintain a modern, automated, cloud-native, continuously delivered, and distributed data platform. The platform comprises the operational relational databases, data pipelines, and supplementary data services that work in concert to provide the company with accurate, timely, and actionable metrics and insights, and it is business critical for many other teams. Day to day, you may collaborate with our data engineering team to deliver new features and services for the data platform, assist our data science team in maintaining our ML-related services, support our product engineering teams in configuring and customizing our data infrastructure to fit their needs, or pair with other Data DevOps Engineers to bring cutting-edge data technologies and infrastructure to fruition.
Our Ideal Candidate
We seek candidates with in-depth experience in the skills and technologies below and the motivation to fill any gaps on the job. More importantly, we look for people who are highly logical, respect best practices while applying critical thinking, adapt to new situations, work independently to deliver projects end-to-end, communicate well in English, collaborate effectively with teammates and stakeholders, and are eager to join a high-performing team and advance their careers with us.
Responsibilities
Build, deploy, and maintain a modern, automated, cloud-native, continuously delivered and distributed data platform.
Manage operational relational databases, data pipelines, and supplementary data services to provide accurate metrics and insights.
Collaborate with data engineering, data science, and product engineering teams to implement features, ML services, and customized data infrastructure.
Adopt and promote best practices in DevOps, cloud, and data platform engineering within a high-performing team.
Requirements
General AWS or cloud services: Glue, EMR, EC2, ELB, EFS, S3, Lambda, API Gateway, IAM, CloudWatch, DMS
Infrastructure, configuration, and deployments as code: Terraform, Ansible
DevOps tools and services: Git, CircleCI, Jenkins, Artifactory, Spinnaker, AWS CodePipeline, Chef
Container management and orchestration: Docker, Docker Swarm, ECS, EKS/Kubernetes, Mesos
Relational databases: PostgreSQL preferred; Oracle is a plus. General SQL systems experience required
Log ingestion, monitoring, and alerting: Elastic Stack, Datadog, Prometheus, Grafana
General computing concepts and expertise: Unix environments, network configuration, distributed and cloud computing, service resilience and scalability strategies
Agile/Lean project methodologies and rituals: Scrum, Kanban
Nice to have
Distributed messaging and event streaming systems: Kafka, Pulsar, RabbitMQ, Google Pub/Sub
Modern NoSQL / analytical / graph data stores: DynamoDB, Redis, Athena, Presto, Snowflake, Neo4j, Neptune
Data governance and security: certificate management, encryption methods, network security strategies