TriTech Enterprise Systems

Data/ETL Developer

TriTech Enterprise Systems, Baltimore, Maryland, United States, 21276


TriTech Enterprise Systems, Inc. (TriTech) is seeking a "Data/ETL Developer" to support a State of Maryland contract.

This is a hybrid position located in Baltimore, Maryland.

The candidate will be responsible for designing, building, and maintaining data pipelines and infrastructure to support data-driven decisions and analytics.

The candidate's tasks will include the following:

Design, develop, and maintain data pipelines and extract, transform, load (ETL) processes to collect, process, and store structured and unstructured data

Build data architecture and storage solutions, including data lakehouses, data lakes, data warehouses, and data marts, to support analytics and reporting

Develop data reliability, efficiency, and quality checks and processes

Prepare data for data modeling

Monitor and optimize data architecture and data processing systems

Collaborate with multiple teams to understand requirements and objectives

Administer testing and troubleshooting related to performance, reliability, and scalability

Create and update documentation

Additional Responsibilities: In addition to the responsibilities listed above, the individual will be expected to use data architecture and modeling techniques to do the following:

Design and implement robust, scalable data models to support the PMM application, analytics, and business intelligence initiatives

Optimize data warehousing solutions and manage data migrations in the AWS ecosystem, utilizing Amazon Redshift, RDS, and DocumentDB services

ETL Development:

Develop and maintain scalable ETL pipelines using AWS Glue and other AWS services to enhance data collection, integration, and aggregation

Ensure data integrity and timeliness in the data pipeline, troubleshooting any issues that arise during data processing

Data Integration:

Integrate data from various sources using AWS technologies, ensuring seamless data flow across systems

Collaborate with stakeholders to define data ingestion requirements and implement solutions to meet business needs

Performance Optimization:

Monitor, tune, and manage database performance to ensure efficient data loads and queries

Implement best practices for data management within AWS to optimize storage and computing costs

Security and Compliance:

Ensure all data practices comply with regulatory requirements and department policies

Implement and maintain security measures to protect data within AWS services

Team Collaboration and Leadership:

Lead and mentor junior data engineers and team members on AWS best practices and technical challenges

Collaborate with the UI/API team, business analysts, and other stakeholders to support data-driven decision-making

Innovation and Continuous Improvement:

Explore and adopt new technologies within the AWS cloud to enhance the capabilities of the data platform

Continuously improve existing systems by analyzing business needs and technology trends

Education:

This position requires a bachelor's or master's degree from an accredited college or university with a major in computer science, statistics, mathematics, economics, or a related field.

Three (3) years of equivalent experience in a related field may be substituted for the bachelor's degree.

General Experience:

The proposed candidate must have a minimum of three (3) years of experience as a data engineer.

Specialized Experience:

The candidate should have experience as a data engineer or in a similar role, with a strong understanding of data architecture and ETL processes.

The candidate should be proficient in programming languages for data processing and knowledgeable about distributed computing and parallel processing:

Minimum 5+ years of ETL coding experience

Proficiency in programming languages such as Python and SQL for data processing and automation

Experience with distributed computing frameworks like Apache Spark or similar technologies

Experience with the AWS data environment, primarily Glue, S3, DocumentDB, Redshift, RDS, Athena, etc.

Experience with data warehouses/RDBMS like Redshift and NoSQL data stores such as DocumentDB, DynamoDB, OpenSearch, etc.

Experience in building data lakes using AWS Lake Formation

Experience with workflow orchestration and scheduling tools like AWS Step Functions, AWS MWAA, etc.

Strong understanding of relational databases (including tables, views, indexes, table spaces)

Experience with source control tools such as GitHub and related CI/CD processes

Ability to analyze a company's data needs

Strong problem-solving skills

Experience with the SDLC and Agile methodologies

TriTech is an Equal Opportunity Employer.
