Data Freelance Hub
Senior SQL and ETL Engineer | Remote
Data Freelance Hub, California, Missouri, United States, 65018
⭐ Featured Role | Apply direct with Data Freelance Hub
This is a fully remote, 12+ month engagement for a Senior SQL and ETL Engineer at a competitive pay rate. Key skills include SQL, ETL tools, data modelling, and Python/PySpark; experience with data warehouse architecture and big data technologies is required.
Skills Required
Strong expertise in SQL, PL/SQL, and T‑SQL with advanced query tuning, stored procedure optimisation, and relational data modelling across Oracle, SQL Server, PostgreSQL, and MySQL (see the tuning sketch after this list).
Proficiency in modern ETL/ELT tools including Azure Synapse Analytics, Azure Data Factory, and SSIS, with the ability to design scalable ingestion, transformation, and loading workflows.
Ability to design and implement data warehouse data models (star schema, snowflake, dimensional hierarchies) and optimise them for analytics and large‑scale reporting; see the star‑schema sketch after this list.
Strong understanding of data integration, validation, cleansing, profiling, and end‑to‑end data quality processes to ensure accuracy and consistency across systems (a validation sketch follows this list).
Knowledge of enterprise data warehouse architecture, including staging layers, data marts, data lakes, and cloud‑based ingestion frameworks.
Experience applying best practices for scalable, maintainable ETL engineering, including metadata‑driven design and automation; a metadata‑driven loader sketch follows this list.
Proficiency in Python and PySpark (plus familiarity with shell and Perl scripting) for automating ETL pipelines, handling semi‑structured data, and transforming large datasets; see the PySpark flattening sketch after this list.
Experience handling structured and semi‑structured data formats (CSV, JSON, XML, Parquet) and consuming REST APIs for ingestion (an API‑ingestion sketch follows this list).
Knowledge of data security and compliance practices, including credential management, encryption, and governance in Azure.
Expertise in tuning ETL and data warehouse performance through indexing, partitioning, caching strategies, and pipeline optimisation.
Familiarity with CI/CD workflows using Git/GitHub Actions for ETL deployment across Dev, QA, and Production environments.
Ability to collaborate with analysts and business stakeholders, translating complex requirements into actionable datasets, KPIs, and reporting structures.
Experience developing and optimising SQL, PL/SQL, and T‑SQL logic, including stored procedures, functions, performance tuning, and advanced relational modelling across Oracle and SQL Server.
Experience working with mainframe systems for data extraction, mapping, and conversion into modern ETL/ELT pipelines.
Experience designing, orchestrating, and deploying ETL/ELT pipelines using Azure Synapse Analytics, Azure Data Factory, SSIS, and Azure DevOps CI/CD workflows.
Experience building and maintaining enterprise data warehouses using Oracle, SQL Server, Teradata, or cloud data platforms.
Experience working with big data technologies such as Apache Spark, PySpark, or Hadoop for large‑scale data transformation.
Experience integrating structured and semi‑structured data (CSV, XML, JSON, Parquet) and consuming APIs using Python/PySpark.
Experience supporting production ETL operations, troubleshooting pipeline failures, conducting root‑cause analysis, and ensuring SLAs for daily, monthly, or regulatory reporting workloads.
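As a flavour of the tuning work called out above, here is a minimal sketch using pyodbc against SQL Server. The dbo.orders table, the index name, and the connection string are hypothetical, not from the posting; the point is the pattern of a covering index plus one set‑based aggregate replacing row‑by‑row lookups.

    import pyodbc  # assumes an ODBC driver for SQL Server is installed

    # Hypothetical connection string; substitute your own server and database.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 18 for SQL Server};"
        "SERVER=myserver;DATABASE=dw;Trusted_Connection=yes;"
    )
    cur = conn.cursor()

    # A covering index answers the aggregate below from the index alone,
    # avoiding a full scan of a wide fact table.
    cur.execute("""
        CREATE INDEX IX_orders_customer_date
        ON dbo.orders (customer_id, order_date)
        INCLUDE (amount);
    """)

    # One set-based aggregate instead of a round trip per customer.
    cur.execute("""
        SELECT customer_id, SUM(amount) AS total_amount
        FROM dbo.orders
        WHERE order_date >= ?
        GROUP BY customer_id;
    """, "2024-01-01")
    rows = cur.fetchall()
    conn.commit()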
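Next, a minimal star‑schema sketch. It uses the standard‑library sqlite3 module purely so the DDL runs anywhere; dim_date, dim_customer, and fact_sales are invented example tables, and production types and surrogate‑key generation would differ on Oracle or SQL Server.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        -- Dimensions carry the descriptive attributes used for slicing.
        CREATE TABLE dim_date (
            date_key  INTEGER PRIMARY KEY,  -- e.g. 20240131
            full_date TEXT NOT NULL,
            month     INTEGER NOT NULL,
            year      INTEGER NOT NULL
        );
        CREATE TABLE dim_customer (
            customer_key INTEGER PRIMARY KEY,  -- surrogate key
            customer_id  TEXT NOT NULL,        -- natural/business key
            segment      TEXT
        );
        -- The fact table holds measures plus a foreign key per dimension.
        CREATE TABLE fact_sales (
            date_key     INTEGER NOT NULL REFERENCES dim_date(date_key),
            customer_key INTEGER NOT NULL REFERENCES dim_customer(customer_key),
            quantity     INTEGER NOT NULL,
            amount       REAL NOT NULL
        );
    """)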
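A minimal data‑validation sketch in PySpark, assuming a local Spark session; the sample rows and the two rules (required column populated, no duplicate rows) are illustrative only.

    from pyspark.sql import SparkSession
    import pyspark.sql.functions as F

    spark = SparkSession.builder.appName("dq-checks").getOrCreate()

    # Stand-in for a staging extract; in practice this is read from the lake.
    df = spark.createDataFrame(
        [("c1", "2024-01-01", 10.0),
         ("c1", "2024-01-01", 10.0),   # duplicate row
         (None, "2024-01-02", 5.0)],   # missing customer_id
        ["customer_id", "order_date", "amount"],
    )

    checks = {
        "null_customer_id": df.filter(F.col("customer_id").isNull()).count(),
        "duplicate_rows": df.count() - df.dropDuplicates().count(),
    }
    for name, offenders in checks.items():
        print(f"{name}: {'OK' if offenders == 0 else f'{offenders} offending rows'}")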
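One common reading of metadata‑driven design, sketched below: feed definitions live in data (a Python list standing in for a control table), and a single generic routine loads every feed, so adding a source means adding a row rather than writing new code. The paths, formats, and staging table names are assumptions for illustration.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("meta-etl").getOrCreate()

    # Control "table": one entry per feed; hypothetical paths and targets.
    FEEDS = [
        {"name": "orders", "path": "/landing/orders/*.json", "format": "json",
         "target": "stg_orders", "partition_by": "order_date"},
        {"name": "customers", "path": "/landing/customers/*.csv", "format": "csv",
         "target": "stg_customers", "partition_by": None},
    ]

    def load_feed(feed: dict) -> None:
        """Generic loader: read the feed, write it to its staging table."""
        df = (spark.read.format(feed["format"])
              .option("header", "true")   # used by CSV, ignored by JSON
              .load(feed["path"]))
        writer = df.write.mode("overwrite")
        if feed["partition_by"]:
            writer = writer.partitionBy(feed["partition_by"])
        writer.saveAsTable(feed["target"])

    for feed in FEEDS:
        load_feed(feed)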
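A PySpark sketch of the semi‑structured pattern above: read JSON, flatten a nested struct, explode an array, and write partitioned Parquet. The event schema (event_id, customer.id, items, event_ts) and both paths are hypothetical.

    from pyspark.sql import SparkSession
    import pyspark.sql.functions as F

    spark = SparkSession.builder.appName("json-to-parquet").getOrCreate()

    # Hypothetical nested event feed (JSON lines).
    raw = spark.read.json("/landing/events/*.json")

    flat = (
        raw
        .withColumn("item", F.explode("items"))         # one row per array element
        .select(
            "event_id",
            F.col("customer.id").alias("customer_id"),  # struct field -> column
            F.col("item.sku").alias("sku"),
            F.col("item.qty").alias("qty"),
            F.to_date("event_ts").alias("event_date"),
        )
    )

    # Columnar, partitioned output keeps downstream scans cheap.
    flat.write.mode("append").partitionBy("event_date").parquet("/curated/events/")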
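Finally, a minimal REST‑ingestion sketch using the requests package; the endpoint, pagination scheme, and field names are invented, so read it as the shape of the loop rather than a real API.

    import json
    import requests  # third-party package, assumed available

    # Hypothetical paginated endpoint.
    url = "https://api.example.com/v1/orders"
    records, page = [], 1
    while True:
        resp = requests.get(url, params={"page": page}, timeout=30)
        resp.raise_for_status()
        batch = resp.json()["results"]
        if not batch:          # empty page signals the end
            break
        records.extend(batch)
        page += 1

    # Land the raw payload as JSON lines for downstream PySpark ingestion.
    with open("orders.jsonl", "w") as f:
        for rec in records:
            f.write(json.dumps(rec) + "\n")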
Freelance data hiring powered by an engaged, trusted community — not a CV database.
85 Great Portland Street, London, England, W1W 7LT