J.P. Morgan
Senior Principal Software Engineer - Databricks
J.P. Morgan, Jersey City, New Jersey, United States, 07390
Join JPMorgan Chase as a Senior Principal Software Engineer – Shape the Future of Data & Analytics
Are you passionate about building innovative data solutions that drive real business impact? The Chief Data & Analytics Office (CDAO) at JPMorgan Chase is responsible for accelerating the firm's data and analytics journey. This includes ensuring the quality, integrity, and security of the company's data; leveraging that data to generate insights and drive decision-making; and harnessing artificial intelligence and machine learning technologies to develop new products, improve productivity, and enhance risk management effectively and responsibly.

As a Senior Principal Software Engineer on our AIML Data Platforms team within the CDAO, you'll play a pivotal role in architecting and implementing advanced data platforms that power insights and innovation across the firm. You'll work with technologies such as Databricks, Spark, and cloud platforms, collaborating with talented teams to solve complex challenges and deliver high-quality solutions.

Job Responsibilities:
- Execute creative software solutions, design, development, and technical troubleshooting, thinking beyond routine or conventional approaches to build solutions or break down technical problems
- Develop secure, high-quality production code, and review and debug code written by others
- Lead the design and development of scalable data pipelines and analytics solutions using Databricks, Spark, and cloud platforms
- Define and implement best practices for data engineering, data lake architecture, and distributed computing
- Solve the company's most challenging cloud data platform problems by building innovative technical solutions around data lake tools
- Identify opportunities to eliminate or automate remediation of recurring issues to improve the overall operational stability of software applications and systems
- Contribute to a team culture of diversity, opportunity, inclusion, and respect
- Mentor and guide technical teams, fostering a culture of continuous learning and excellence in software engineering practices

Required qualifications, capabilities, and skills:
- Formal training or certification in software engineering concepts and 10+ years of applied experience
- Expert-level proficiency in Databricks, Apache Spark, and distributed data processing
- Hands-on experience developing Python and/or Java applications with automated unit testing
- Hands-on practical experience delivering system design, application development, testing, and operational stability
- Hands-on practical experience with Terraform development and an understanding of Terraform Enterprise
- Hands-on experience with GitHub or Bitbucket for version control, Jenkins for builds, and PyPI/Maven Artifactory integrations
- Knowledge of Big Data distributed compute frameworks such as Spark

Preferred qualifications, capabilities, and skills:
- Experience with Agile development processes (Scrum/Kanban) using JIRA
- Experience building data pipelines using Spark
- Experience managing the product release lifecycle at enterprise level