JPMorgan Chase & Co.

Database Databricks Engineer - Lead Data Engineer

JPMorgan Chase & Co., Jersey City, New Jersey, United States, 07390


Join us as we embark on a journey of collaboration and innovation, where your unique skills and talents will be valued and celebrated. Together we will create a brighter future and make a meaningful difference.

As a Lead Data Engineer at JPMorgan Chase within Corporate Technology, you are an integral part of an agile team that works to enhance, build, and deliver data collection, storage, access, and analytics solutions in a secure, stable, and scalable way. As a core technical contributor, you are responsible for maintaining critical data pipelines and architectures across multiple technical areas within various business functions in support of the firm's business objectives.

Regulatory, Controls & Op Risk Technology (RCORT) is part of Corporate Technology. It designs and develops the business products required to identify and manage JPMC's regulatory obligations and associated firm policies, the control environment that manages compliance and operational risks across all Lines of Business (LOBs) and Corporate Functions, and the calculation of Regulatory Capital and projection of Operational Risk Losses.

Job responsibilities

Generates data models for their team using firmwide tooling, linear algebra, statistics, and geometric algorithms

Delivers data collection, storage, access, and analytics data platform solutions in a secure, stable, and scalable way

Implements database back-up, recovery, and archiving strategy

Evaluates and reports on access control processes to determine effectiveness of data asset security with minimal supervision

Adds to team culture of diversity, equity, inclusion, and respect

Required qualifications, capabilities, and skills

Formal training or certification on data engineering concepts and 5+ years applied experience

Hands-on experience with open-source distributed SQL engines such as Presto, Apache Drill, or Dremio, and with an ingestion/processing stack such as Hadoop, Spark, and Kafka, is a must

Extensive experience with AWS Databricks and Hadoop is a must

Experience and proficiency across the data lifecycle

Experience with database back-up, recovery, and archiving strategy

Proficient knowledge of linear algebra, statistics, and geometric algorithms

Working experience with both relational and NoSQL databases

Preferred qualifications, capabilities, and skills

Exposure to lakehouse platforms such as Snowflake

Experience with infrastructure automation technologies such as Docker and Kubernetes (K8s)
