JPMorgan Chase & Co.
Principal Software Engineer - (Data Engineering / Java / Python) | Risk Technology
JPMorgan Chase & Co., New York, New York, US 10261
If you are looking for a game-changing career, working for one of the world's leading financial institutions, you’ve come to the right place.
As a Principal Software Engineer at JPMorganChase within the Corporate Risk Technology team, you will provide expertise and engineering excellence as an integral part of an agile team to enhance, build, and deliver trusted market-leading technology products in a secure, stable, and scalable way. Leverage your advanced technical capabilities and collaborate with colleagues across the organization to drive best-in-class outcomes across various technologies to support one or more of the firm’s portfolios.
Job responsibilities
Architect and implement complex, scalable data engineering and coding frameworks and solutions using modern software design principles.
Lead the design and development of secure, high-quality production code for data-intensive applications; review and mentor other engineers.
Drive adoption of advanced technical methods and practices aligned with the latest industry standards.
Advise cross-functional teams on technological matters within your domain of expertise.
Serve as the function’s go-to subject matter expert.
Create durable, reusable software frameworks that are leveraged across teams and functions.
Influence leaders and senior stakeholders across business, product, and technology teams.
Champion the firm’s culture of diversity, opportunity, inclusion, and respect.
Contribute to the development of technical methods in specialized fields in line with the latest product development methodologies.
Required qualifications, capabilities, and skills
Formal training or certification on data engineering concepts and 10+ years of applied experience
Strong proficiency in data engineering, data architecture, and AI/ML, with hands-on experience designing, implementing, testing, and ensuring the operational stability of large-scale enterprise data platforms and solutions
Expertise in one or more programming languages, e.g., Java, Python, or C/C++
Advanced working knowledge of relational and NoSQL databases, data lake architectures, data mesh concepts, and data governance.
Practical experience with cloud-native data platforms (AWS, Azure, GCP).
Experience leading technical teams and projects as a Tech Lead or Data Architect
Experience in large-scale data processing using microservices, API design, Kafka, Redis, Memcached, observability tooling (Dynatrace, Splunk, Grafana, or similar), and orchestration (Airflow, Temporal)
Advanced knowledge of software application development and technical processes
Ability to present and effectively communicate with Senior Leaders and Executives
Experience in Computer Science, Computer Engineering, Mathematics, or a related technical field
Preferred qualifications, capabilities, and skills
Deep hands-on experience with Spark/PySpark and other big data processing technologies
Experience with modern data technologies and cloud‑based solutions, such as Databricks or Snowflake.
Expertise in modern, open‑source table formats and catalog services for managing large‑scale data in data lakes, such as Apache Iceberg.
Experience with federated database entitlement tools such as Immuta or similar
Familiarity with data consumption tools like Dremio or Starburst.
Knowledge of the financial services industry and its IT systems