Falcon Smart IT (FalconSmartIT)
Tech Lead Azure Data Engineer with SAP HANA
Falcon Smart IT (FalconSmartIT), Santa Clara, California, US, 95053
Job Title: Tech Lead Azure Data Engineer with SAP HANA
Job Type: Full-Time

Job Description:
Task:
Migrate from SAP to a Databricks (DBX)-based tech stack (DPaaS 2.0).

Role: Technical Lead, Data Engineering & Migration

Key Responsibilities:

Databricks & Delta Lake: Hands-on experience with Databricks, including development in PySpark or Spark SQL, efficient use of Delta Lake for scalable data pipelines, and data lineage in Databricks.

Azure Data Factory: Building and managing ETL pipelines in ADF; orchestrating Databricks, Blob Storage, and SAP sources; monitoring, error handling, and pipeline performance tuning.

Performance Optimization: Tuning workload performance on Databricks.

Python & PySpark: Writing robust, maintainable data processing scripts; using Python/Spark for custom transformations and integration logic.

SAP HANA Knowledge: Integrating SAP data with other platforms; handling large-scale SAP data extraction, transformation, and migration.

Good-to-Have Skills:

SAP Data Services (ETL Tools): Creating, deploying, and optimizing data jobs in SAP BODS/Data Services; working with complex mappings and SAP-specific data types; handling change data capture (CDC) scenarios.

Data Profiling & Validation: Experience with data profiling, validation, and reconciliation during migrations.

Azure Cloud & Networking: Understanding of Azure compute, storage, networking, and security services; experience resolving firewall, VPN, and VNet issues that impact data pipelines; familiarity with IAM, RBAC, and secure credential storage.
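The data profiling and reconciliation work listed above can be sketched in plain Python. This is a minimal, hypothetical illustration, not part of the role description: a real migration would run equivalent checks in PySpark against the SAP extract and the Delta Lake target, and the `profile`/`reconcile` helpers and sample column names below are invented for the example.

```python
from hashlib import sha256

def profile(rows):
    """Profile a table extract: row count plus an order-independent
    checksum per column, for comparing source against migrated target."""
    checksums = {}
    for col in (rows[0] if rows else []):
        # XOR of per-value hashes is order-independent, so source and
        # target rows need not be sorted identically before comparing.
        acc = 0
        for row in rows:
            acc ^= int.from_bytes(sha256(repr(row[col]).encode()).digest()[:8], "big")
        checksums[col] = acc
    return {"rows": len(rows), "checksums": checksums}

def reconcile(source_rows, target_rows):
    """Return a list of discrepancies between source and target extracts."""
    src, tgt = profile(source_rows), profile(target_rows)
    issues = []
    if src["rows"] != tgt["rows"]:
        issues.append(f"row count mismatch: {src['rows']} vs {tgt['rows']}")
    for col, chk in src["checksums"].items():
        if tgt["checksums"].get(col) != chk:
            issues.append(f"checksum mismatch in column {col!r}")
    return issues

# Hypothetical sample data: a clean migration and one with a drifted value.
src = [{"id": 1, "amount": 10.0}, {"id": 2, "amount": 20.5}]
ok = reconcile(src, list(reversed(src)))          # → [] (order ignored)
bad = reconcile(src, [{"id": 1, "amount": 10.0},
                      {"id": 2, "amount": 99.9}])  # flags 'amount'
```

In practice the same idea scales by computing counts and per-column aggregates with Spark on both sides of the migration and comparing the summaries, rather than pulling rows into driver memory.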