Jobs via Dice
Big Data Engineer
Onsite Interview in Alpharetta, GA
This position is for a Big Data Engineer on the Morgan Stanley Wealth Management Framework CoE team, based in Morgan Stanley's Alpharetta or New York offices. The CoE team is responsible for defining and governing the data platforms. We are looking for colleagues with a strong sense of ownership and the ability to drive solutions. The role is primarily responsible for automating existing processes and bringing new ideas and innovations. The ideal candidate is a self-motivated team player committed to delivering on time and able to work with minimal supervision.
Responsibilities
Design & Develop new automation framework for ETL processing Support existing framework and become the technical point of contact for all related teams Enhance existing ETL automation framework as per user requirements Performance tuning of spark, snowflake ETL jobs New technology POC and suitability analysis for Cloud migration Process optimization with the help of automation and new utility development Work in collaboration for any issues and new features Support any batch issue Support application teams with any queries Required Skills
7+ years of data engineering experience
Strong UNIX shell and Python scripting knowledge
Strong in Spark
Strong knowledge of SQL
Hands-on knowledge of how HDFS, Hive, Impala, and Spark work
Strong logical reasoning capabilities
Working knowledge of GitHub, DevOps, CI/CD, and enterprise code management tools
Strong collaboration and communication skills
Strong team-player skills and excellent written and verbal communication skills
Ability to create and maintain a positive environment of shared success
Ability to execute and prioritize tasks and resolve issues without aid from a direct manager or project sponsor
Good to have: working experience with Snowflake and any data integration tool (e.g., Informatica Cloud)
Primary Skills
Python, Big Data, Apache Spark
Seniority level
Mid-Senior level
Employment type
Full-time
Job function
Engineering and Information Technology
Industries
Software Development