Douge International
We are looking for a skilled Data Engineer to join our data team and help build scalable data pipelines, optimize data infrastructure, and enable analytics that support strategic business goals.

Responsibilities:
Design, build, and maintain robust, scalable data pipelines and ETL processes.
Develop and maintain data architecture that ensures data availability, quality, and integrity.
Collaborate with data analysts, data scientists, and cross-functional teams to understand data needs and deliver reliable data solutions.
Optimize data workflows and processing for performance and cost-efficiency.
Implement data governance, security, and compliance best practices.
Monitor and troubleshoot data systems to ensure continuous operation.
Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
3+ years of experience as a Data Engineer or in a similar role.
Proficiency with SQL and experience working with relational databases (e.g., PostgreSQL, MySQL).
Strong programming skills in Python, Java, or Scala.
Experience with big data technologies (e.g., Spark, Hadoop, Hive).
Familiarity with cloud platforms (e.g., AWS, GCP, or Azure) and tools like S3, Redshift, BigQuery, or Snowflake.
Knowledge of data warehousing concepts, data modeling, and ETL frameworks.
Experience with version control (Git) and CI/CD pipelines is a plus.
Nice to Have:
Experience with orchestration tools (e.g., Airflow, dbt, Prefect).
Familiarity with containerization (Docker, Kubernetes).
Background in analytics or data science.
What We Offer:
Competitive salary and equity package
Flexible work environment (remote/hybrid)
Health, dental, and vision insurance
Generous PTO and parental leave
Professional development budget
How to Apply:
Submit your resume and a brief cover letter outlining your experience.