Leeds Professional Resources
Information Technology Executive Recruiter
Position Overview

This role is responsible for architecting and deploying reliable data workflows that ensure high standards of quality, integrity, and security in cloud-based ecosystems. The position involves automating repetitive processes through scripting, applying advanced data engineering methods, and researching emerging tools to improve performance and scalability. The focus is on building and maintaining solutions that strengthen data warehouse capabilities while supporting both existing and newly developed systems.

Key Responsibilities

- Create and manage end-to-end data pipelines to extract, transform, and load information from diverse sources into cloud-hosted warehouse environments.
- Design and maintain ETL solutions using platforms such as Fivetran, Matillion, SSIS, SSAS, SSRS, and AWS-based Snowflake Dataflow.
- Enhance query performance and storage efficiency through techniques such as partitioning, indexing, and data compression.
- Develop and sustain large-scale data processing frameworks in AWS or other cloud platforms.
- Partner with analysts and business stakeholders to ensure information is accurate, accessible, and ready for analytical use.
- Oversee the design, integration, and deployment of complex systems.
- Monitor operations, troubleshoot issues, and maintain optimal performance of data flows and infrastructure.
- Apply strong security practices and permissions management to safeguard sensitive information in cloud environments.
- Use scripting languages, including Python and Bash, to automate recurring or resource-intensive processes.

Qualifications

- Education: Bachelor’s degree in Computer Science, Engineering, or a closely related discipline.
- Experience: At least four years working with data warehousing, cloud data solutions, or complex data systems.
- Proficiency in data modeling, SQL, and scripting with tools such as Python or PowerShell.
- Background in both modern and legacy system environments.
- Strong knowledge of OLAP cubes and dimensional modeling concepts.
- Proven leadership ability and experience coaching other engineers.
- Skilled in engaging with stakeholders, delivering results, and driving innovation.
- Capable of managing priorities in a dynamic, high-demand environment.
- Familiarity with agile development methods, QA best practices, and user-focused design.
- Proficient in collaboration tools like Jira, Confluence, and Lucidchart.
- Deep understanding of database platforms including SQL Server, Oracle, MySQL, SAP HANA, and Snowflake.