Jobs via Dice
AWS Data Engineer - Python, PySpark, EMR, Spark, Kafka/Kinesis, SQL/NoSQL
Jobs via Dice, Chicago, Illinois, United States
Data Engineer (Contract)
People Force Consulting Inc. is seeking a highly skilled Data Engineer with deep expertise in Python, PySpark, AWS, and big data ecosystems to drive scalable, real‑time data solutions using CI/CD and stream‑processing frameworks. A minimum of 8 years of experience is required, and candidates must attend an in‑person interview. Location: Chicago, IL.
Apply via Dice today!
Mandatory Skills
Python, PySpark, AWS
Good to Have Skills
EMR, Kafka/Kinesis
Responsibilities
Proficient developer in multiple languages; Python is a must, with ability to learn new ones quickly.
Expertise in SQL (complex queries), relational databases (preferably PostgreSQL), and NoSQL databases such as Redis and Elasticsearch.
Extensive big data experience, including EMR, Spark, and Kafka/Kinesis; able to optimize data pipelines, architectures, and datasets.
Hands‑on AWS experience – Lambda, Glue, Athena, Kinesis, IAM, EMR/PySpark, Docker.
Proficient in CI/CD development using Git, Terraform, and agile methodologies.
Comfortable with stream‑processing systems (Storm, Spark Streaming) and workflow management tools (Airflow); see the sketch after this list.
Exposure to knowledge graph technologies (Graph DB, OWL, SPARQL) is a plus.
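To give a sense of the stream‑processing work listed above, here is a minimal PySpark Structured Streaming sketch. It assumes a Spark 3.x environment with the spark-sql-kafka-0-10 connector on the classpath, and the broker address and "events" topic are hypothetical placeholders; this is an illustrative sketch, not code from the role itself.

```python
# Minimal sketch: consume events from Kafka and compute per-minute counts.
# Assumes Spark 3.x with the spark-sql-kafka-0-10 connector available.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("events-stream-demo")
    .getOrCreate()
)

# Read a stream of raw events from Kafka; broker and topic are placeholders.
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
)

# Kafka delivers the payload as binary; cast it to string and derive a
# simple windowed count as an illustrative aggregation.
counts = (
    raw.selectExpr("CAST(value AS STRING) AS payload", "timestamp")
    .withWatermark("timestamp", "5 minutes")
    .groupBy(F.window("timestamp", "1 minute"))
    .count()
)

# Write rolling counts to the console; a production job would target S3,
# a database, or another Kafka topic instead.
query = (
    counts.writeStream
    .outputMode("append")
    .trigger(processingTime="1 minute")
    .format("console")
    .option("truncate", "false")
    .start()
)

query.awaitTermination()
```

In practice, a job like this would write to durable storage rather than the console and would be scheduled and monitored with a workflow tool such as Airflow, as noted in the responsibilities.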
Qualifications
Engineering degree (BE/ME/BTech/MTech/BSc/MSc). Technical certification in multiple technologies is desirable.
At least 8 years of experience in data engineering.
Experience in Python, PySpark, AWS.
Seniority level: Mid‑Senior level
Employment type: Full‑time (Contract)
Job function: Information Technology
Industries: Software Development