Qode
Job Title: AWS Glue Data Engineer
Location: Fort Mill, SC or New York City, NY (Hybrid)
Job Summary
We are seeking a highly skilled AWS Glue Data Engineer to design, develop, and optimize large-scale data pipelines and ETL workflows on AWS. The ideal candidate will have strong expertise in AWS cloud-native data services, data modeling, and pipeline orchestration, with hands-on experience building robust, scalable data solutions for enterprise environments.
Key Responsibilities
- Design, develop, and maintain ETL pipelines using AWS Glue, Glue Studio, and the Glue Data Catalog.
- Ingest, transform, and load large datasets from structured and unstructured sources into AWS data lakes and warehouses.
- Work with S3, Redshift, Athena, Lambda, and Step Functions for data storage, querying, and orchestration.
- Build and optimize PySpark/Scala scripts within AWS Glue for complex transformations (see the sketch after this list).
- Implement data quality checks, lineage, and monitoring across pipelines.
- Collaborate with business analysts, data scientists, and product teams to deliver reliable data solutions.
- Ensure compliance with data security, governance, and regulatory requirements (BFSI experience preferred).
- Troubleshoot production issues and optimize pipeline performance.
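For context only, here is a minimal sketch of the kind of Glue ETL job this role involves, assuming a PySpark Glue job script; the catalog database, table, bucket, and column names are hypothetical placeholders, not references to any real system.

```python
# Minimal AWS Glue PySpark job sketch (illustrative only; all names are hypothetical).
import sys
from awsglue.transforms import DropNullFields
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a source table registered in the Glue Data Catalog.
source = glue_context.create_dynamic_frame.from_catalog(
    database="example_db",          # hypothetical catalog database
    table_name="raw_transactions",  # hypothetical source table
)

# Example transformations: drop fields that contain only nulls, rename a column.
cleaned = DropNullFields.apply(frame=source)
renamed = cleaned.rename_field("txn_amt", "transaction_amount")

# Write curated Parquet to S3, where Athena or Redshift Spectrum can query it.
glue_context.write_dynamic_frame.from_options(
    frame=renamed,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/transactions/"},
    format="parquet",
)

job.commit()
```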
Required Qualifications
- 12+ years of experience in Data Engineering, with at least 5 years on AWS cloud data services.
- Strong expertise in AWS Glue, S3, Redshift, Athena, Lambda, Step Functions, and CloudWatch.
- Proficiency in PySpark, Python, and SQL for ETL and data transformations.
- Experience in data modeling (star, snowflake, and dimensional models) and performance tuning (see the sketch after this list).
- Hands-on experience with data lake/data warehouse architecture and implementation.
- Strong problem-solving skills and the ability to work in Agile/Scrum environments.
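Likewise for illustration only: a short PySpark fragment showing the star-schema pattern referenced above, where measures sit in a central fact table joined to dimension tables on surrogate keys. Every table, column, and path name here is invented for the example.

```python
# Illustrative star-schema aggregation in PySpark (all names are hypothetical).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("star-schema-example").getOrCreate()

# Hypothetical curated-zone tables stored as Parquet.
fact_sales = spark.read.parquet("s3://example-bucket/curated/fact_sales/")
dim_customer = spark.read.parquet("s3://example-bucket/curated/dim_customer/")
dim_date = spark.read.parquet("s3://example-bucket/curated/dim_date/")

# Star schema: measures live in the fact table; descriptive attributes live in
# dimension tables and are joined in on shared surrogate keys.
monthly_revenue = (
    fact_sales
    .join(dim_customer, "customer_key")
    .join(dim_date, "date_key")
    .groupBy("customer_segment", "year", "month")
    .agg(F.sum("sale_amount").alias("revenue"))
)
monthly_revenue.show()
```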
Preferred Qualifications
- Experience in the BFSI / Wealth Management domain.
- AWS Certified Data Analytics – Specialty or AWS Solutions Architect certification.
- Familiarity with CI/CD pipelines for data engineering (CodePipeline, Jenkins, GitHub Actions).
- Knowledge of BI/visualization tools such as Tableau, Power BI, and QuickSight.
Education
Bachelor’s degree in Computer Science, Information Technology, Engineering, or a related field; Master’s degree preferred.