LTIMindtree

Senior Specialist - Package Implementation

LTIMindtree, Hartford, Connecticut, US 06112


Role description

An Oracle Engineer with Oracle Fusion and Databricks experience is responsible for designing, developing, and maintaining scalable, efficient data solutions that integrate data from various sources, including Oracle Fusion applications, and process it within the Databricks environment.

Key Responsibilities

Data Pipeline Development

Design, build, and optimize robust ETL/ELT pipelines to ingest, transform, and load data from Oracle Fusion applications and other sources into the Databricks Lakehouse. This involves using PySpark, SQL, and Databricks notebooks.
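For illustration, the extract-transform-load pattern described above can be sketched in plain Python. This is a stand-in, not a real Databricks pipeline: `sqlite3` plays the role of the Oracle source (which would normally be a JDBC or API read into a Spark DataFrame), and a plain list stands in for a Delta table; table and column names are hypothetical.

```python
import sqlite3

# Stand-in "source": an in-memory table mimicking an Oracle Fusion extract.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE gl_journal (journal_id INTEGER, amount REAL, currency TEXT)")
src.executemany(
    "INSERT INTO gl_journal VALUES (?, ?, ?)",
    [(1, 100.0, "usd"), (2, -25.5, "usd"), (3, 40.0, "eur")],
)

def extract(conn):
    """Extract: pull raw rows from the source system."""
    return conn.execute("SELECT journal_id, amount, currency FROM gl_journal").fetchall()

def transform(rows):
    """Transform: normalize currency codes and drop negative amounts."""
    return [
        {"journal_id": jid, "amount": amt, "currency": cur.upper()}
        for jid, amt, cur in rows
        if amt >= 0
    ]

def load(records, target):
    """Load: append cleaned records to the target (stand-in for a Delta write)."""
    target.extend(records)
    return len(records)

lakehouse_table = []  # stand-in for a Delta table
loaded = load(transform(extract(src)), lakehouse_table)
print(loaded)  # 2 rows survive the filter
```

In a real pipeline each stage would be a PySpark DataFrame operation inside a Databricks notebook or job, with the final write going to Delta Lake.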

Databricks Platform Expertise

Leverage Databricks functionalities such as Delta Lake, Unity Catalog, and Spark optimization techniques to ensure data quality, performance, and governance.

Oracle Fusion Integration

Develop connectors and integration strategies to extract data from Oracle Fusion modules (e.g., Financials, HCM, SCM) using APIs, SQL, or other appropriate methods.
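Extraction over the Fusion REST APIs is typically paginated. As a sketch, the helper below computes next-page query parameters from a response body; the field names (`hasMore`, `offset`, `items`) follow common Oracle Fusion REST response conventions but should be verified against the specific module's API reference, and the sample responses are hypothetical.

```python
def next_page_params(response, limit=500):
    """Given a Fusion-style REST response dict, return query parameters
    for the next page, or None when pagination is exhausted.
    (Field names are assumptions; check the target module's API docs.)"""
    if not response.get("hasMore"):
        return None
    return {"offset": response.get("offset", 0) + limit, "limit": limit}

# Hypothetical first page of a resource collection
page1 = {"items": [{"id": 1}], "hasMore": True, "offset": 0, "limit": 500}
print(next_page_params(page1))      # {'offset': 500, 'limit': 500}

# Hypothetical final page
page_last = {"items": [{"id": 2}], "hasMore": False, "offset": 500, "limit": 500}
print(next_page_params(page_last))  # None
```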

Data Modeling and Warehousing

Design and implement data models within Databricks, potentially following a medallion architecture, to support analytical and reporting requirements.
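The medallion idea (bronze = raw, silver = cleaned and typed, gold = aggregated for reporting) can be sketched with plain Python structures standing in for Delta tables; the order data and layer logic here are illustrative only.

```python
# Bronze: raw ingested records, strings as landed (one row is malformed).
bronze = [
    {"order_id": "1", "amount": "10.0", "region": "east"},
    {"order_id": "2", "amount": "bad",  "region": "east"},
    {"order_id": "3", "amount": "5.5",  "region": "west"},
]

def to_silver(rows):
    """Silver: cast types and drop rows that fail parsing."""
    out = []
    for r in rows:
        try:
            out.append({"order_id": int(r["order_id"]),
                        "amount": float(r["amount"]),
                        "region": r["region"]})
        except ValueError:
            continue  # a real pipeline would quarantine, not silently drop
    return out

def to_gold(rows):
    """Gold: aggregate cleaned rows into a reporting-ready summary."""
    totals = {}
    for r in rows:
        totals[r["region"]] = totals.get(r["region"], 0.0) + r["amount"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'east': 10.0, 'west': 5.5}
```

In Databricks, each layer would be its own Delta table, with the silver and gold steps typically expressed as PySpark or SQL transformations.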

Performance Optimization

Tune Spark jobs and optimize data processing within Databricks for efficiency and cost-effectiveness.

Data Quality and Governance

Implement data quality checks, error handling, and data validation frameworks to ensure data integrity. Adhere to data governance policies and security best practices.
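A validation framework of the kind described can be as simple as a set of named predicate rules applied per record, with failures collected for error handling. The rules and records below are hypothetical examples, not part of any specific library.

```python
# Each rule is a (name, predicate) pair evaluated against one record.
RULES = [
    ("amount_non_negative", lambda r: r["amount"] >= 0),
    ("currency_is_iso",     lambda r: len(r["currency"]) == 3),
]

def validate(records, rules=RULES):
    """Split records into passing rows and (row, failed_rule_names) pairs."""
    good, bad = [], []
    for rec in records:
        failed = [name for name, pred in rules if not pred(rec)]
        if failed:
            bad.append((rec, failed))
        else:
            good.append(rec)
    return good, bad

good, bad = validate([
    {"amount": 10.0, "currency": "USD"},
    {"amount": -1.0, "currency": "US"},
])
print(len(good), len(bad))  # 1 1
```

The same shape scales up naturally: in Databricks the predicates would become DataFrame filter expressions, and the failed rows would be written to a quarantine table for review.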

Collaboration

Work closely with data architects, data scientists, business analysts, and other stakeholders to understand data requirements and deliver solutions that meet business needs.

Automation and CI/CD

Develop automation scripts and implement CI/CD pipelines for Databricks workflows and deployments.

Troubleshooting and Support

Provide operational support, troubleshoot data-related issues, and perform root cause analysis.

Required Skills and Qualifications

Strong proficiency in Databricks

Including PySpark/Scala, Delta Lake, Unity Catalog, and Databricks notebooks.

Experience with Oracle Fusion

Knowledge of Oracle Fusion data structures, APIs, and data extraction methods.

Expertise in SQL

For querying, manipulating, and optimizing data in both Oracle and Databricks.

Cloud Platform Experience

Familiarity with a major cloud provider (e.g., AWS, Azure, GCP) where Databricks is deployed.

Data Warehousing and ETL/ELT Concepts

Solid understanding of data warehousing principles and experience in building and optimizing data pipelines

Problem-solving and Analytical Skills

Ability to analyze complex data issues and propose effective solutions

Communication and Collaboration

Strong interpersonal skills to work effectively within cross-functional teams.