AARATECH

Data Engineer

AARATECH, Chicago, Illinois, United States


Data Engineer – Data Pipeline Specialist

Aaratech Inc. is a specialized IT consulting and staffing company that places elite engineering talent into high‑impact roles at leading U.S. organizations. We focus on modern technologies across cloud, data, and software disciplines. Our client engagements offer long‑term stability, competitive compensation, and opportunities to work on cutting‑edge data projects.

Base Pay Range: $60,000.00/yr – $80,000.00/yr

Location: United States (Remote / On‑site, based on client needs)

Employment Type: Full‑time (Contract or Contract‑to‑Hire)

Experience Level: Mid‑level (3–4 years)

Eligibility: Open to U.S. Citizens and Green Card holders only. We do not offer visa sponsorship.

Responsibilities

Design and develop scalable data pipelines to support batch and real‑time processing.

Implement efficient extract, transform, load (ETL) processes using Apache Spark and dbt.

Develop and optimize queries using SQL for data analysis and warehousing.

Build and maintain data warehousing solutions on Snowflake or BigQuery.

Collaborate with business and technical teams to gather requirements and build accurate data models.

Write reusable and maintainable code in Python for data ingestion, processing, and automation.

Ensure end‑to‑end data processing integrity, scalability, and performance.

Follow best practices for data governance, security, and compliance.

Required Skills & Experience

3–4 years of experience in data engineering or a similar role.

Strong proficiency in SQL and Python.

Experience with ETL frameworks and building data pipelines.

Solid understanding of data warehousing concepts and architecture.

Hands‑on experience with Snowflake, Apache Spark, or similar big data technologies.

Proven experience in data modeling and data schema design.

Exposure to data processing frameworks and performance optimization techniques.

Familiarity with cloud platforms like AWS, GCP, or Azure.

Nice to Have

Experience with streaming data pipelines (Kafka, Kinesis).

Exposure to CI/CD practices in data development.

Prior work in consulting or multi‑client environments.

Understanding of data quality frameworks and monitoring strategies.
