Galent
Location: Jersey City, NJ
Must Have
12 years of experience working on Azure and Databricks Lakehouse implementations, with the ability to translate business requirements into technical solutions for data curation and consumption
Experience building a Databricks Lakehouse from a data modeling and ELT transformation perspective
Experience in SQL programming, PySpark, and ADLS
2+ years of experience working as a Data Engineer, developing on the Azure Modern Data Warehouse platform (Azure Data Lake, Azure Data Factory, Azure Databricks, Databricks Delta Lake)
Good to Have
3+ years of experience in SQL
Experience handling structured and unstructured datasets
Good knowledge of DevOps tools and processes
Expertise in Databricks Delta Lake implementation, Databricks architecture, and configuration
Expertise in building and orchestrating pipelines in Azure Data Factory
Prior experience migrating SQL Server to the cloud
Experience in effort estimation for new projects
Proven ability to drive complex design and development efforts in an agile environment
Prior experience migrating on-premises SQL Server to Delta Lake
Experience automating data pipelines through a CI/CD delivery methodology
Expertise in Databricks and in building ETL pipelines
Typical Azure services used: ADX, ADF, Databricks, ADLS for data ETL and storage
Expertise in Azure Data Factory Data Flow, Azure Databricks and Data Lake Storage
Design and implement ETL components for multiple applications
Good understanding of data modeling for a data lake
Hands‑on experience using secure file transfer tools like Kiteworks
Oracle PL/SQL knowledge a plus
Soft Skills
Strong written and oral communication skills (onsite role)
Strong ability to work with client stakeholders
Requirement review and work effort estimation
Hands-on experience with Unity Catalog
Experience with reporting tools such as Power BI or Mosaic AI tools
Seniority Level: Mid-Senior Level
Employment Type: Contract
Job Function: Consulting
Industries: Technology, Information and Internet