Data Engineer
Aurigait - Snowflake, Arizona, United States, 85937
Overview
- Design and implement ETL/ELT pipelines using Apache Spark and orchestration tools (Airflow/Dagster).
- Build and optimize data models on Snowflake and cloud platforms.
- Collaborate with analytics teams to deliver reliable data for reporting and ML initiatives.
- Monitor pipeline performance, troubleshoot data quality issues, and implement testing frameworks.
- Contribute to data architecture decisions and work with cross-functional teams to deliver quality data solutions.

Required Skills & Experience
- 2-4 years of experience in data engineering or a related field
- Strong proficiency with Snowflake, including data modeling, performance optimization, and cost management
- Hands-on experience building data pipelines with Apache Spark (PySpark)
- Experience with workflow orchestration tools (Airflow, Dagster, or similar)
- Proficiency with dbt for data transformation, modeling, and testing
- Proficiency in Python and SQL for data processing and analysis
- Experience with cloud platforms (AWS, Azure, or GCP) and their data services
- Understanding of data warehouse concepts, dimensional modeling, and data lake architectures

Preferred Qualifications
- Experience with infrastructure-as-code tools (Terraform, CloudFormation)
- Knowledge of streaming technologies (Kafka, Kinesis, Pub/Sub)
- Familiarity with containerization (Docker, Kubernetes)
- Experience with data quality frameworks and monitoring tools
- Understanding of CI/CD practices for data pipelines
- Knowledge of data catalog and governance tools
- Advanced dbt features, including macros, packages, and documentation
- Experience with table format technologies (Apache Iceberg, Apache Hudi)

Technical Environment
- Data Warehouse: Snowflake
- Transformation: dbt
- Version Control: Git
- Monitoring: DataDog, Grafana, or similar