AARATECH
Base pay range: $60,000.00/yr - $80,000.00/yr
This range is provided by AARATECH. Your actual pay will be based on your skills and experience; talk with your recruiter to learn more.
Eligibility: Open to U.S. citizens and Green Card holders only. We do not offer visa sponsorship.
About the Role
Aaratech Inc. is a specialized IT consulting and staffing company that places elite engineering talent into high-impact roles at leading U.S. organizations. We focus on modern technologies across cloud, data, and software disciplines. Our client engagements offer long-term stability, competitive compensation, and the opportunity to work on cutting-edge data projects.
We are seeking a Data Engineer with 3–4 years of experience for a client-facing role focused on building and maintaining scalable data pipelines, robust data models, and modern data warehousing solutions. You'll work with a variety of tools and frameworks, including Apache Spark, Snowflake, and Python, to deliver clean, reliable, and timely data for advanced analytics and reporting.
Responsibilities
- Design and develop scalable data pipelines to support batch and real-time processing
- Implement efficient Extract, Transform, Load (ETL) processes using tools like Apache Spark and dbt
- Develop and optimize queries using SQL for data analysis and warehousing
- Build and maintain data warehousing solutions using platforms like Snowflake or BigQuery
- Collaborate with business and technical teams to gather requirements and create accurate data models
- Write reusable and maintainable code in Python for data ingestion, processing, and automation
- Ensure end-to-end data processing integrity, scalability, and performance
- Follow best practices for data governance, security, and compliance
Required Skills & Experience
- 3–4 years of experience in data engineering or a similar role
- Strong proficiency in SQL and Python
- Experience with Extract, Transform, Load (ETL) frameworks and building data pipelines
- Solid understanding of data warehousing concepts and architecture
- Hands-on experience with Snowflake, Apache Spark, or similar big data technologies
- Proven experience in data modeling and data schema design
- Exposure to data processing frameworks and performance optimization techniques
- Familiarity with cloud platforms like AWS, GCP, or Azure
Nice to Have
- Experience with streaming data pipelines (e.g., Kafka, Kinesis)
- Exposure to CI/CD practices in data development
- Prior work in consulting or multi-client environments
- Understanding of data quality frameworks and monitoring strategies