Blossom HR
About the Job

Data Engineer
Basic Requirements
- Experience building both streaming and batch data pipelines/ETL, with a solid understanding of design principles.
- Expertise in Python, PostgreSQL, and PL/pgSQL development, along with administration of large databases focused on performance and production support in cloud-native environments.
- Experience with scalability solutions, multi-region replication, and failover strategies.
- Familiarity with data warehouse technologies such as Trino, ClickHouse, and Airflow.
- Bachelor's degree in Computer Science, Engineering, or a related technical discipline, or equivalent practical experience.
- Strong programming skills in at least one programming language.
- Proficiency in English.

Preferred Requirements
- Knowledge of Kubernetes and Docker.
- 4+ years of relevant experience in data engineering or related fields.
- Experience with agile development methodologies.
- Proven track record of delivering and maintaining web-scale data systems in production.
- Experience working with Kafka, preferably Redpanda and Redpanda Connect.

The Ideal Candidate
- Passionate about data engineering and scalable data systems.
- Interested in creating innovative solutions and evolving system architecture for robustness and maintainability.
- Enjoys writing high-quality code and pushing the boundaries of technical capabilities.
- Brings enthusiasm to the team, with the ability to focus and deliver quality work on schedule.

Responsibilities
- Build scalable, reliable data pipelines that deliver accurate data feeds from internal and external sources.
- Maintain scalable, high-performance relational and non-relational databases deployed in the cloud.
- Collaborate on architecture design, focusing on solutions that are scalable and secure.
- Strive to make data systems operate with near real-time performance.
- Design, document, automate, and execute test plans to ensure dataset quality.
- Participate in feature generation and analysis to support data-driven decision-making.