Smart IT Frame LLC
Seeking an AWS Data Engineer to design, build, and maintain scalable data pipelines and ETL solutions using Python/PySpark and AWS managed services to support analytics and data product needs.
Key Responsibilities
Build and maintain ETL pipelines using Python and PySpark on AWS Glue and other compute platforms (a minimal pipeline sketch follows this list)
Orchestrate workflows with AWS Step Functions and serverless components (Lambda)
Implement messaging and event-driven patterns using AWS SNS and SQS
Design and optimize data storage and querying in Amazon Redshift
Write performant SQL for data transformations, validation, and reporting
Ensure data quality, monitoring, error handling, and operational support for pipelines
Collaborate with data consumers, engineers, and stakeholders to translate requirements into solutions
Contribute to CI/CD, infrastructure-as-code, and documentation for reproducible deployments
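For illustration only (not part of the role requirements), here is a minimal PySpark sketch of the kind of ETL pipeline described above; the bucket paths, column names, and job name are hypothetical placeholders:

    # Minimal PySpark ETL sketch: extract raw CSV from S3, apply basic
    # data-quality transforms, and load partitioned Parquet for Redshift.
    # All paths and column names are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders_etl").getOrCreate()

    # Extract: raw CSV landed in S3 (placeholder path)
    raw = spark.read.csv("s3://example-raw/orders/", header=True, inferSchema=True)

    # Transform: deduplicate, drop incomplete rows, normalize types
    clean = (
        raw.dropDuplicates(["order_id"])
           .filter(F.col("order_total").isNotNull())
           .withColumn("order_date", F.to_date("order_ts"))
    )

    # Load: partitioned Parquet, queryable via Redshift Spectrum or COPY
    clean.write.mode("overwrite").partitionBy("order_date").parquet(
        "s3://example-curated/orders/"
    )

On AWS Glue the same logic would typically run inside a Glue job via the GlueContext wrapper; plain PySpark is used here to keep the sketch self-contained.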
Required Skills
Strong experience with Python and PySpark for large-scale data processing
Proven hands-on experience with AWS services: Lambda, SNS, SQS, Glue, Redshift, Step Functions (a short event-publishing sketch follows this list)
Solid SQL skills and familiarity with data modeling and query optimization
Experience with ETL best practices, data quality checks, and monitoring/alerting
Familiarity with version control (Git) and basic DevOps/CI‑CD workflows
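As a hedged sketch of the event-driven SNS/SQS pattern named above (the topic ARN, message fields, and function name are hypothetical):

    # Publish a pipeline-completion event to SNS; subscribed SQS queues
    # receive the fan-out. The topic ARN and payload are hypothetical.
    import json
    import boto3

    sns = boto3.client("sns")

    def notify_pipeline_complete(job_name: str, row_count: int) -> None:
        """Emit a completion event for downstream consumers."""
        sns.publish(
            TopicArn="arn:aws:sns:us-east-1:123456789012:pipeline-events",
            Subject=f"{job_name} complete",
            Message=json.dumps({"job": job_name, "rows": row_count}),
        )

    notify_pipeline_complete("orders_etl", 120000)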
Seniority level
Mid‑Senior level
Employment type
Full‑time
Job function
Other
Industries
Software Development; IT Services and IT Consulting