Nova Talent
Senior Data Engineer
We are seeking a skilled Data Engineer to join our team and drive our data infrastructure forward. In this role, you will focus primarily on maintaining and enhancing our data warehouse and pipelines (80%) while also contributing to data analysis and reporting initiatives (20%). You'll work closely with cross-functional stakeholders to build robust data solutions and deliver actionable insights through compelling visualizations.

Key Responsibilities

Data Engineering
- Infrastructure Management: Maintain, enhance, and optimize our existing data warehouse architecture and ETL pipelines.
- Pipeline Development: Design and implement scalable ETL/ELT processes that ensure data quality, integrity, and timeliness.
- Performance Optimization: Monitor and improve pipeline performance, troubleshoot issues, and implement best practices.
- Documentation: Create and maintain comprehensive documentation for data engineering processes, architecture, and configurations.

Data Analysis & Reporting
- Stakeholder Collaboration: Partner with business teams to gather requirements and translate them into technical solutions.
- Report Development: Build and maintain Power BI dashboards and reports that drive business decisions.
- Data Modeling: Develop new data models and enhance existing ones to support advanced analytics.
- Insight Communication: Transform complex data findings into clear, actionable insights for various departments.

Required Qualifications

Technical Skills
- Programming & Query Languages: Strong proficiency in Python, SQL, and PySpark.
- Big Data Platforms: Experience with cloud data platforms such as Snowflake, BigQuery, and Databricks; Databricks experience is highly preferred.
- Orchestration Tools: Proven experience with workflow orchestration tools (Airflow preferred).
- Cloud Platforms: Experience with AWS (preferred), Azure, or Google Cloud Platform.
- Data Visualization: Proficiency in Power BI (preferred) or Tableau.
- Database Systems: Familiarity with relational database management systems (RDBMS).

Development Practices
- Version Control: Proficient with Git for code management and collaboration.
- CI/CD: Hands-on experience implementing and maintaining continuous integration/continuous deployment pipelines.
- Documentation: Strong ability to create clear technical documentation.

Experience & Communication
- Professional Experience: 3+ years in data engineering or closely related roles.
- Language Requirements: Fluent English communication skills for effective collaboration with U.S.-based team members.
- Pipeline Expertise: Demonstrated experience building and maintaining production data pipelines.