Programmers.io
Dear Applicants: this position requires visa-independent candidates.
Note: OPT, CPT, and H-1B visa holders are not eligible at this time.
Responsibilities
Design, develop, and maintain scalable ETL pipelines using AWS Glue
Collaborate with data engineers and analysts to understand data requirements
Build and manage data extraction, transformation, and loading processes
Optimize and troubleshoot existing Glue jobs and workflows
Ensure data quality, integrity, and security throughout the ETL process
Integrate AWS Glue with other AWS services like S3, Lambda, Redshift, and Step Functions
Maintain documentation of data workflows and processes
Stay updated with the latest AWS tools and best practices
Required Skills
Strong hands‑on experience with AWS Glue, PySpark, and Python
Proficiency in SQL and working with structured/unstructured data (JSON, CSV, Parquet)
Experience with data warehousing concepts and tools
Familiarity with CI/CD pipelines, Terraform, and scripting (PowerShell, Bash)
Solid understanding of data modeling, data integration, and data management
Exposure to AWS Batch, Step Functions, and the AWS Glue Data Catalog
Seniority Level: Mid-Senior level
Employment Type: Full-time
Job Function: Information Technology
Industries: Information Services
Location: Chicago, IL