FlightSafety International
About FlightSafety International
FlightSafety International is the world’s premier professional aviation training company and supplier of flight simulators, visual systems and displays to commercial, government and military organizations. The company provides training for pilots, technicians and other aviation professionals from 167 countries and independent territories. FlightSafety operates the world’s largest fleet of advanced full-flight simulators and delivers award-winning maintenance training at Learning Centers and training locations in the United States, Canada, France and the United Kingdom.
Purpose of Position
The Senior Data Engineer is a hands-on technical expert responsible for designing, building, and maintaining modern data pipelines and architectures in a cloud-based environment. This role supports enterprise analytics by enabling scalable, reliable, and automated data solutions using Azure, Databricks, DBT, and Airflow. The engineer collaborates across teams to deliver high-quality data products that drive business intelligence and advanced analytics.
Tasks and Responsibilities
Design and develop scalable ETL/ELT pipelines using Azure Data Factory (ADF), Databricks, and DBT
Implement real-time and batch data processing using Delta Live Tables (DLT); see the sketch after this list
Orchestrate data workflows using Databricks LakeFlow, Apache Airflow, and ADF pipelines
Design and implement Data Vault 2.0 models for cloud-based data warehousing
Develop data ingestion and replication solutions using tools such as Fivetran, SQDR, Rivery, or custom Python scripts
Write Python and PySpark code for data transformation, cleansing, and automation
Monitor and optimize pipeline performance, ensuring data quality and reliability
Collaborate with analysts, architects, and business stakeholders to understand data needs and deliver consistent datasets
Maintain documentation for data flows, models, and pipeline logic
Support data governance, metadata management, and compliance initiatives
Participate in Agile ceremonies and contribute to sprint planning, reviews, and retrospectives
Troubleshoot and resolve production issues related to data pipelines and integrations
Contribute to CI/CD automation and DevOps practices for data engineering components
Provide input on architectural decisions and participate in enterprise data strategy discussions
Travel infrequently, as needed
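To make the DLT responsibility above concrete, here is a minimal sketch of the kind of Delta Live Tables pipeline this role builds: incremental ingestion plus a cleansed layer with a data-quality expectation. The table names, landing path, and quality rule are hypothetical illustrations, not FlightSafety systems, and the code runs only inside a Databricks DLT pipeline, where the spark session is provided implicitly.

import dlt
from pyspark.sql import functions as F


@dlt.table(comment="Raw events ingested incrementally with Auto Loader")
def events_raw():
    return (
        spark.readStream.format("cloudFiles")        # Auto Loader
        .option("cloudFiles.format", "json")
        .load("/mnt/landing/events/")                # hypothetical path
    )


@dlt.table(comment="Cleansed events with a basic data-quality expectation")
@dlt.expect_or_drop("valid_event_id", "event_id IS NOT NULL")  # drop bad rows
def events_clean():
    return (
        dlt.read_stream("events_raw")
        .withColumn("ingested_at", F.current_timestamp())
        .dropDuplicates(["event_id"])
    )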
Minimum Education
Bachelor's degree from an accredited institution or equivalent industry experience
Minimum Experience
10+ years of experience in software or data engineering roles
5+ years of experience in enterprise ETL, analytics, and reporting (SSIS, SSAS, SSRS)
3+ years of experience with Azure Data Factory and Azure Data Lake Storage
2+ years of experience with Databricks, Delta Live Tables, and Unity Catalog
2+ years of experience with Python and PySpark
Experience with DBT for data modeling and transformation
Experience with Apache Airflow for workflow orchestration
Experience with Data Vault 2.0 modeling (certification preferred; see the sketch after this list)
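As a concrete illustration of the Data Vault 2.0 point above, the following is a minimal PySpark sketch of an idempotent hub load: deduplicate the business key, derive a hash key, stamp load date and record source, and anti-join against the existing hub. The table names, business key, and record source are hypothetical assumptions.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("hub_customer_load").getOrCreate()

# Hypothetical staging table holding the raw business keys
staged = spark.read.table("stg.customers")

hub_rows = (
    staged
    .select("customer_bk")                                  # business key
    .dropDuplicates(["customer_bk"])
    .withColumn("customer_hk",                              # SHA-256 hash key
                F.sha2(F.upper(F.trim(F.col("customer_bk"))), 256))
    .withColumn("load_dts", F.current_timestamp())          # load timestamp
    .withColumn("record_source", F.lit("crm"))              # record source
)

# Idempotent insert: keep only hash keys not already present in the hub
existing = spark.read.table("rv.hub_customer").select("customer_hk")
new_rows = hub_rows.join(existing, on="customer_hk", how="left_anti")
new_rows.write.mode("append").saveAsTable("rv.hub_customer")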
Knowledge, Skills, Abilities
Strong understanding of ELT/ETL concepts, data modeling, and cloud data platforms
Experience with data ingestion and replication tools (e.g., Fivetran, SQDR, Rivery)
Proficiency in building data solutions using Python, PySpark, and DBT
Experience orchestrating workflows using Apache Airflow and Databricks LakeFlow (see the Airflow sketch after this list)
Familiarity with DevOps practices, CI/CD pipelines, and version control systems (Git, TFS)
Expert-level SQL development, including T-SQL, stored procedures, UDFs, and performance tuning
Experience working in Agile/Scrum environments
Knowledge of data governance, metadata management, and data quality (DQ) frameworks
Familiarity with Apache Spark, Kafka, and NoSQL technologies
Exposure to AI/ML concepts and tools, preferably within Databricks
Strong communication skills and ability to collaborate with cross-functional teams
Ability to manage multiple priorities and deliver high-quality solutions under tight deadlines
Experience in the education or aviation industries is a plus
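To make the orchestration expectation concrete, here is a minimal Apache Airflow sketch (assuming Airflow 2.4+ for the schedule argument) chaining a hypothetical ingestion script into a dbt run. The DAG id, schedule, paths, and shell commands are illustrative assumptions only.

from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_elt",                      # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                       # run once per day
    catchup=False,
) as dag:
    ingest = BashOperator(
        task_id="ingest",
        bash_command="python /opt/jobs/ingest.py",      # hypothetical script
    )
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt",  # dbt transformations
    )
    ingest >> dbt_run    # ingestion must complete before dbt transforms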