
AWS Data Engineer (12 Years, Hybrid – Dallas, TX) – Komm Force Solutions

Rishi Writes, Dallas, Texas, United States, 75215


About the Role – AWS Cloud Data Engineer

We are hiring a skilled AWS Data Engineer to help develop and optimize cloud-native data pipelines using Python, AWS Glue, Redshift, and Kafka. This hybrid role in Dallas, TX offers the chance to work on mission-critical ETL workflows and data transformation solutions across large datasets.

Key Responsibilities – Python-Based ETL & AWS Data Engineering

Build and automate ETL pipelines using Python for cloud data lakes and warehouses (see the Glue job sketch after this list)

Integrate with AWS services like S3, Glue, EMR, Redshift, Athena, Kinesis, and SageMaker

Design and execute automated testing frameworks for data validation and integrity

Create dashboards and reports to support data visualization and insights

Collaborate with analysts, product managers, and developers to deliver accurate data solutions

Perform data migration from on-premises to AWS environments

Apply DevOps and DataOps practices for pipeline optimization

Troubleshoot data pipeline issues, investigate anomalies, and resolve performance bottlenecks
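For context on the ETL work described above, here is a minimal sketch of a Python AWS Glue job of the kind this role involves. The S3 paths, job arguments, and field mappings are hypothetical placeholders, not details from this posting.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Hypothetical job arguments; real source/target S3 paths would come from the team's config.
args = getResolvedOptions(sys.argv, ["JOB_NAME", "source_path", "target_path"])

glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw JSON events from the S3 data lake landing zone.
raw = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": [args["source_path"]]},
    format="json",
)

# Rename and retype only the fields downstream reports need (placeholder schema).
curated = ApplyMapping.apply(
    frame=raw,
    mappings=[
        ("orderId", "string", "order_id", "string"),
        ("orderTotal", "double", "order_total", "double"),
        ("orderDate", "string", "order_date", "date"),
    ],
)

# Write curated data back to S3 as Parquet, partitioned by date for Athena or Redshift Spectrum.
glue_context.write_dynamic_frame.from_options(
    frame=curated,
    connection_type="s3",
    connection_options={"path": args["target_path"], "partitionKeys": ["order_date"]},
    format="parquet",
)

job.commit()
```

A production job would add job bookmarks, error handling, and environment-specific configuration on top of this skeleton.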

Required Skills – Python, AWS Glue, Redshift, and Kafka

Proficiency in Python programming for data automation

Hands-on experience with AWS services: S3, Glue, Redshift, EMR, Athena, SageMaker

Experience with streaming tools like Kafka and structured data pipelines (see the consumer sketch after this list)

Strong command of SQL, Unix/Linux scripting, and CI/CD tools

Knowledge of ETL technologies such as Informatica, Ab Initio, Alteryx, or AWS Glue

Experience with cloud data lakes and on-prem to cloud migration strategies

Familiarity with testing and validating ETL workflows
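As a rough illustration of the streaming and storage skills listed above, the sketch below uses kafka-python and boto3 to land micro-batches of Kafka messages in S3. The broker address, topic, and bucket name are made-up examples, not systems named in this posting.

```python
import json

import boto3
from kafka import KafkaConsumer  # kafka-python

# Hypothetical broker, topic, and bucket, used purely for illustration.
BROKER = "localhost:9092"
TOPIC = "orders"
BUCKET = "example-data-lake-raw"

consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKER,
    group_id="s3-landing-writer",
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
s3 = boto3.client("s3")

batch, batch_id = [], 0
for message in consumer:
    batch.append(message.value)
    if len(batch) >= 500:  # flush a micro-batch to the S3 landing zone
        key = f"landing/orders/batch-{batch_id:06d}.json"
        body = "\n".join(json.dumps(record) for record in batch).encode("utf-8")
        s3.put_object(Bucket=BUCKET, Key=key, Body=body)
        batch, batch_id = [], batch_id + 1
```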

Preferred Experience – Data Science and DevOps in Cloud

Exposure to machine learning platforms like SageMaker, H2O, or ML Studio

Knowledge of Jenkins, GitLab, and CI/CD practices

Familiarity with Agile and Waterfall methodologies

Experience with test case management and defect tracking tools

Hands-on testing experience with S3, HDFS, and similar storage tools (see the example validation tests after this list)
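To give a feel for the data-validation side of the role, here is a minimal pytest-style sketch that checks a curated dataset. The S3 path and column names are hypothetical; reading s3:// paths with pandas assumes s3fs and pyarrow are installed.

```python
import pandas as pd
import pytest


@pytest.fixture(scope="module")
def curated_orders() -> pd.DataFrame:
    # Hypothetical curated dataset produced by an upstream ETL job.
    return pd.read_parquet("s3://example-data-lake-curated/orders/")


def test_order_ids_are_never_null(curated_orders):
    assert curated_orders["order_id"].notna().all(), "order_id must not be null after the ETL step"


def test_order_totals_are_non_negative(curated_orders):
    assert (curated_orders["order_total"] >= 0).all(), "negative totals usually indicate a mapping bug"
```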

Soft Skills – Collaboration and Ownership

Strong communication and ability to explain technical concepts clearly

Self-motivated and proactive, with a strong ownership mindset

Ability to guide junior developers or testers during data operations

Problem-solving and debugging skills across complex data environments

Flexible and adaptive in hybrid work setups

Ready to Apply?

If you are passionate about building high-impact data engineering solutions using Python and AWS, this is your chance to grow with a cloud-first team. Let’s discuss your next career move.

15 FAQs About This AWS Data Engineer Role

1. What is the primary focus of this AWS Data Engineer position?

To build, test, and manage automated cloud-based ETL pipelines using Python and AWS services.

2. Is this a remote position?

It’s a hybrid role requiring partial onsite presence in Dallas, TX.

3. How much experience is required?

Candidates with 6–10 years of data engineering and cloud infrastructure experience are preferred.

4. What tools are commonly used in this role?

Python, AWS Glue, Redshift, EMR, Kafka, Jenkins, GitLab, SQL, and Unix scripting.

5. Is experience in data science mandatory?

Not mandatory, but familiarity with platforms like SageMaker or H2O is a bonus.

6. Will I work on real-time data or batch pipelines?

Both – the role involves streaming platforms like Kafka and batch processing using AWS tools.

7. Is prior DevOps experience necessary?

DevOps and DataOps exposure is highly preferred to support CI/CD for data pipelines.

8. Will I work directly with business stakeholders?

Yes, you’ll collaborate closely with analysts, developers, and product teams.

9. Is on-prem to cloud migration experience needed?

Yes, data migration experience is a plus.

10. Will I need to design dashboards or just backend pipelines?

You’ll support report creation and help design dashboards for data visualization.

11. What kind of scripting is expected?

Unix/Linux shell scripting and Python automation for testing and deployment.

12. What’s the interview process like?

Initial screening, followed by a technical interview and data engineering assessment.

13. Do I need experience with specific testing tools?

Experience with test case management and defect tracking tools is preferred.

14. What are typical KPIs or success metrics in this role?

Uptime, accuracy, data freshness, test coverage, and pipeline performance.

15. Can I expect growth opportunities?

Yes, you’ll be working on cutting-edge cloud solutions with potential for leadership.
