Bull City Talent Group
Position Summary
Bull City Talent Group's client has an immediate need for an Enterprise Data Engineer to join an in-flight project, designing data models that meet business requirements and align with established data design guidelines and standards. The ideal candidate will have a minimum of 3 years of data engineering experience with Azure Data Lake, Azure Data Factory, and Azure Databricks, along with extensive experience in PySpark, Python, and SQL. Prior experience must also include modeling SAP/ERP data. The resource will report to the consulting team and will work under aggressive deadlines and time constraints.
Job Duties & Responsibilities
Assemble large, complex data sets that meet functional / non-functional business requirements
Identify, design, and implement internal process improvements: automate manual processes, optimize data delivery, re-design infrastructure for greater scalability, etc.
Leverage the appropriate infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and ‘big data’ technologies.
Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics
Work with stakeholders, including the Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs
Requirements
Strong experience in building modern data platforms that leverage Azure Data Lake, Azure Data Factory and Azure Databricks.
Proficient in PySpark, Python and SQL.
Strong experience designing data models that meet business needs and conform to design guidelines and standards is a must.
Prior experience working with and modeling SAP/ERP data