About the Role
We are looking for a Data Engineer to join our team and help design, build, and maintain scalable data infrastructure. As a Data Engineer, you will be responsible for managing data pipelines, optimizing data workflows, and ensuring the availability and reliability of data for analytics and business intelligence. You will work closely with data scientists, analysts, and software engineers to enable data-driven decision-making.

Responsibilities:
- Design, develop, and maintain ETL/ELT data pipelines for efficient data processing.
- Build and optimize data warehouses, data lakes, and real-time data streaming architectures.
- Work with structured and unstructured data from multiple sources to ensure data consistency and integrity.
- Optimize database performance, storage, and indexing strategies.
- Collaborate with data scientists and analysts to provide data access and support advanced analytics.
- Ensure data security, governance, and compliance with industry standards.
- Implement data quality monitoring and anomaly detection mechanisms.
- Automate data processing and deployment using CI/CD pipelines.
- Stay updated on emerging big data technologies and best practices.

Requirements:
- 3+ years of experience as a Data Engineer or in a similar role.
- Strong experience with SQL and database management systems (PostgreSQL, MySQL, SQL Server).
- Experience with big data technologies (Apache Spark, Hadoop, Kafka).
- Proficiency in cloud data platforms (AWS, Azure, or Google Cloud).
- Knowledge of ETL/ELT tools (Airflow, DBT, Talend, Informatica).
- Strong programming skills in Python, Scala, or Java.
- Experience with data warehousing solutions (Snowflake, Redshift, BigQuery).
- Familiarity with data modeling, data governance, and security best practices.
- Hands-on experience with CI/CD pipelines and Infrastructure as Code (Terraform, CloudFormation).

Nice to Have:
- Experience with real-time data processing (Flink, Kinesis, Pub/Sub).
- Knowledge of NoSQL databases (MongoDB, Cassandra, DynamoDB).
- Understanding of machine learning pipelines and MLOps.
- Familiarity with BI tools (Tableau, Power BI, Looker).

What We Offer:
- Competitive salary and performance-based bonuses
- Flexible working arrangements
- Private medical care and wellness programs
- Continuous learning opportunities and certifications
- Participation in international conferences and training
- Collaborative and innovative work environment