ValueMomentum
Location: Edison, NJ (onsite 5 days a week)
Responsibilities
Develop modern data solutions and architecture for cloud-native data platforms.
Build cost-effective infrastructure in Databricks and orchestrate workflows using Databricks/ADF.
Lead data strategy sessions focused on scalability, performance, and flexibility.
Build CI/CD pipelines for Databricks in Azure DevOps.
Collaborate with customers to implement solutions for data modernization.
Create training plans and learning materials to upskill VM associates.
Develop industry-specific and domain-oriented solutions for customer data needs.
Build a smart operations framework for DataOps and MLOps.
Requirements
Should have 10+ years of experience, with the last 4 years spent implementing cloud-native, end-to-end data solutions from ingestion to consumption, supporting a variety of needs such as modern data warehousing, BI, insights, and analytics.
Should have experience architecting and implementing end-to-end modern data solutions using Azure and advanced data processing frameworks such as Databricks.
Experience with Databricks, PySpark, and modern data platforms.
Proficiency in cloud-native architecture and data governance.
Strong experience migrating on-premises solutions (Spark, Hadoop) to the cloud (Databricks).
Understanding of Agile/Scrum methodologies.
Demonstrated knowledge of data warehouse concepts, with a strong understanding of cloud-native databases and columnar database architectures.
Ability to work with Data Engineering, Data Management, BI, and Analytics teams in a complex IT development environment.
Good appreciation of, and at least one implementation experience with, data engineering processing substrates such as ETL tools, Kafka, and ELT techniques.
Knowledge of Data Mesh and data product design and implementation will be an added advantage.
Seniority level
Mid-Senior level
Employment type
Full-time
Job function
Information Technology
Industries
IT Services and IT Consulting