Jobgether
As a Data and AI Engineer, you will design, implement, and optimize advanced data pipelines and backend integrations to support next‑generation cybersecurity and data‑intensive platforms. You will work on scalable ingestion, enrichment, and annotation workflows while enabling AI‑driven features such as natural language querying and intelligent analytics. This role involves close collaboration with cross‑functional teams to deliver secure, high‑performance, and reliable solutions. Your work will directly impact the architecture, efficiency, and intelligence of complex systems, bridging data engineering, applied AI, and cybersecurity in innovative ways. You will also have opportunities to shape cloud and on‑premises deployments and influence the adoption of AI technologies across the organization.
Accountabilities
Design and build high‑performance data pipelines for ingestion, transformation, and enrichment of large datasets
Implement workflows for data correlation, annotation, and contextual enrichment to support analytics and AI features
Develop and maintain database schemas, optimizing storage strategies for performance and scalability
Integrate AI/ML models into data workflows, including RAG pipelines and embeddings for advanced analytics
Ensure reliability and scalability of pipelines and services, including monitoring, error handling, and performance tuning
Collaborate with DevOps, product, and security teams to deploy, document, and transfer knowledge of solutions
Requirements
BSc/MSc in Computer Science, Data Engineering, or a related field, or equivalent practical experience
5+ years of experience in data engineering or big data analytics, with exposure to AI/ML integration
Proficiency in Python (Pandas, PySpark, FastAPI) and familiarity with Java/Scala for Spark workflows
Experience designing data pipelines, modeling schemas, and managing large‑scale datasets
Hands‑on experience with big data technologies such as Spark, Iceberg, Hive, Presto/Trino, and Superset
Strong SQL skills and experience with at least one NoSQL or distributed storage solution
Practical experience building and deploying APIs and services in cloud or on‑prem environments
Strong problem‑solving, debugging, and communication skills; proficient in English
Nice‑to‑have: experience with RAG pipelines, LLM applications, GPU acceleration, containerization (Docker/Kubernetes), and cybersecurity concepts
Benefits
Competitive salary and growth opportunities in a cutting‑edge cybersecurity and AI environment
Remote work flexibility within EMEA
Opportunity to work on innovative projects combining AI, data engineering, and security
Collaboration with a multidisciplinary team of engineers, researchers, and AI specialists
Professional development and exposure to next‑generation AI and cybersecurity technologies
Access to advanced tools, cloud resources, and modern data/AI infrastructure