Data Freelance Hub
⭐ Featured Role | Apply directly with Data Freelance Hub
This role is for an AWS Data Engineer, offering a 6-month contract at $60.00 - $65.00 per hour. Candidates must have strong Python/PySpark skills, AWS experience (Glue, Redshift), and solid SQL proficiency. On-site work is required.
Reston, VA 20191, United States
Key Responsibilities
Build and maintain ETL pipelines using Python and PySpark on AWS Glue and other compute platforms (a minimal sketch follows this list)
Orchestrate workflows with AWS Step Functions and serverless components (Lambda)
Implement messaging and event-driven patterns using AWS SNS and SQS
Design and optimize data storage and querying in Amazon Redshift
Write performant SQL for data transformations, validation, and reporting
Ensure data quality, monitoring, error handling, and operational support for pipelines
Collaborate with data consumers, engineers, and stakeholders to translate requirements into solutions
Contribute to CI/CD, infrastructure-as-code, and documentation for reproducible deployment
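For candidates wondering what the day-to-day work looks like, here is a minimal sketch of the kind of Glue job the first responsibility describes. It is illustrative only: the S3 paths, column names, and transform steps are hypothetical assumptions, not details taken from this posting.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# Standard Glue job bootstrap: resolve the job name passed in by the Glue runtime.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Hypothetical source location -- replace with a real S3 path.
raw = spark.read.json("s3://example-raw-bucket/events/")

# Typical PySpark transform work: normalize types, validate, deduplicate.
cleaned = (
    raw.withColumn("event_ts", F.to_timestamp("event_ts"))
       .filter(F.col("event_id").isNotNull())
       .dropDuplicates(["event_id"])
)

# Land curated Parquet partitioned by date; Redshift can ingest it via COPY
# or query it in place through Spectrum.
(cleaned
    .withColumn("event_date", F.to_date("event_ts"))
    .write.mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-curated-bucket/events/"))

job.commit()
```

Apart from the Glue bootstrap, the transform logic is plain PySpark, which is why the responsibility also mentions other compute platforms: the same code pattern carries over to EMR or similar runtimes.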
Required Skills
Strong experience with Python and PySpark for large-scale data processing
Proven hands-on experience with AWS services: Lambda, SNS, SQS, Glue, Redshift, Step Functions (see the event-driven sketch after this list)
Solid SQL skills and familiarity with data modeling and query optimization
Experience with ETL best practices, data quality checks, and monitoring/alerting
Familiarity with version control (Git) and basic DevOps/CI-CD workflows
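To make the serverless skills concrete, below is a hedged sketch of how Lambda, SNS, SQS, and Step Functions typically fit together in this kind of pipeline: a Lambda handler consumes SQS messages fanned out from an SNS topic and starts a Step Functions execution per event. The environment variable name and message fields are assumptions for illustration, not details of this role.

```python
import json
import os

import boto3

sfn = boto3.client("stepfunctions")

# Hypothetical configuration -- in practice the ARN is injected by the deployment.
STATE_MACHINE_ARN = os.environ["PIPELINE_STATE_MACHINE_ARN"]


def handler(event, context):
    """Lambda entry point for an SQS trigger.

    With SNS -> SQS fan-out (and raw message delivery disabled), each SQS
    record body is an SNS envelope whose 'Message' field carries the
    original event payload.
    """
    for record in event["Records"]:
        envelope = json.loads(record["body"])
        payload = json.loads(envelope["Message"])

        # Start one pipeline run per event; Step Functions then owns
        # retries, branching, and error states.
        sfn.start_execution(
            stateMachineArn=STATE_MACHINE_ARN,
            input=json.dumps(payload),
        )
```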
Compensation & Benefits
Pay: $60.00 - $65.00 per hour
Expected hours: 40 per week
Benefits: 401(k), Dental insurance, Flexible schedule, Health insurance, Vision insurance
Work Location: In person
Additional Information
Freelance data hiring powered by an engaged, trusted community — not a CV database.
85 Great Portland Street, London, England, W1W 7LT