RIT Solutions, Inc.
Role Overview:
We are seeking a Jr. Data Engineer with 3-5 years of experience to contribute to data pipeline development and optimization. This role will focus on building reliable and scalable data flows, supporting business reporting, and working closely with senior engineers on large-scale data warehousing initiatives.

Key Responsibilities:
- Develop and maintain ETL/ELT pipelines and workflows for data warehousing projects.
- Work with SQL, Hadoop, Spark, and Python to process and transform large datasets.
- Support integration of structured and unstructured data from multiple sources.
- Collaborate with senior engineers to implement best practices in coding, performance tuning, and data quality.
- Gain exposure to Apache NiFi for real-time/streaming data ingestion.
- Contribute to projects involving cloud platforms (AWS, Azure, or GCP).
- Document workflows and data lineage, and provide operational support for data systems.

Required Skills & Experience:
- 3-5 years of hands-on experience in data engineering / data warehousing development.
- Strong proficiency in SQL and working knowledge of Python (or other programming languages).
- Practical experience with Hadoop and Apache Spark.
- Exposure to Apache NiFi and cloud-based data concepts is desirable.
- Strong analytical and debugging skills.
- Ability to work in a collaborative environment and contribute to agile delivery.