New Jersey Staffing
EY Enterprise Data Architect
At EY, we're all in to shape your future with confidence. We'll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world.

The opportunity:
As an EY Enterprise Data Architect within the EY SAP Enterprise Data Management Initiative, you will lead the definition, design, and execution of scalable data architecture strategies to support enterprise-wide data migration and transformation programs. This role ensures data integrity, consistency, scalability, and alignment with the organization's enterprise data standards, while enabling long-term data governance.

Your key responsibilities:
- Lead the end-to-end data architecture strategy across SAP BDC initiatives, including data modeling, transformation, and governance.
- Define target and transition architectures for both SAP and non-SAP data domains, integrating with modern platforms.
- Ensure data platform alignment with architectural principles across hybrid cloud environments.
- Design and validate data flows, lineage, and integration points between legacy systems, SAP S/4HANA, and cloud-based platforms.
- Collaborate with migration teams to drive effective ETL/ELT strategy and data quality frameworks.
- Identify and mitigate data risks, including data duplication, data loss, latency, and security compliance gaps.
- Guide tool selection, integration architecture, and usage of cloud-native services.
- Lead the design and implementation of data pipelines, notebooks, and workflows on the Databricks platform supporting SAP data migration and analytics use cases.
- Develop, optimize, and tune Spark jobs for large-scale, distributed data processing.
- Collaborate with data architects to align Databricks solutions with enterprise data governance and architecture principles.
- Enable data engineers and scientists by building reusable libraries, APIs, and data transformation frameworks using Databricks.
- Integrate Databricks with SAP data sources and external platforms.
- Establish and promote best practices around security, cost optimization, and performance on Databricks.
- Participate in tooling and automation efforts such as CI/CD pipelines for Databricks assets.
- Provide training, mentorship, and knowledge sharing to junior team members and other stakeholders.
- Stay current with Databricks and Apache Spark ecosystem updates and innovations.
- Collaborate in cross-functional teams leveraging NVIDIA GPU acceleration or AI/ML frameworks integrated with Databricks.

Skills and attributes for success:
- Strong background in enterprise data architecture and SAP data management.
- Experience working with cloud providers and platforms.
- Proficiency in data modeling, data governance, and metadata management.
- Experience designing and governing large-scale data migration and transformation initiatives.
- Knowledge of data migration tools.
- Solid understanding of data privacy and compliance standards.
- Strong leadership, stakeholder management, and cross-functional collaboration skills.
- Familiarity with architecture frameworks.
- Strong expertise in the Databricks platform and Apache Spark.
- Experience designing and developing large-scale ETL pipelines.
- Familiarity with cloud ecosystems.
- Knowledge of data architecture, governance, and security practices on cloud data platforms.
- Ability to troubleshoot and optimize Spark job performance.
- Understanding of data science and machine learning workflows on Databricks.
- Strong communication skills.

Other requirements:
- Experience working in SAP data migration or BDC projects.
- Agile mindset and experience working in DevOps/CI-CD environments.
- Relevant certifications are beneficial.
- Ability to work across global teams.

Job requirements:
Education: BS/MS in Computer Science, Data Engineering, or a related field.
Experience: 12-15+ years of experience in enterprise data architecture, including at least 5 years in SAP BDC or related data transformation environments and 3+ years specifically on Databricks and Spark-based data platforms.

What we offer you:
We offer a comprehensive compensation and benefits package where you'll be rewarded based on your performance and recognized for the value you bring to the business. Join us in our team-led and leader-enabled hybrid model. Under our flexible vacation policy, you'll decide how much vacation time you need based on your personal circumstances.

Are you ready to shape your future with confidence? Apply today.