INTELLISWIFT INC

AWS Data Engineer 3851

INTELLISWIFT INC, Torrance, California, United States, 90504


Overview

Daily Tasks Performed: Develop and maintain data integration solutions. The role focuses on designing, building, and optimizing data integration pipelines, ensuring data quality, and enabling business intelligence and analytics through robust data architectures on AWS.

Responsibilities

Design and implement data integration workflows using AWS Glue, EMR, Lambda, and Redshift.
Use PySpark, Spark, and Python to process large datasets.
Extract, transform, and load data into target systems.
Build ETL pipelines using Iceberg.
Validate and cleanse data.
Ensure data quality and integrity by implementing monitoring, validation, and error handling mechanisms within data pipelines.
Tune and optimize data workflows to meet SLAs; ensure scalability of data integration on AWS cloud infrastructure.
Apply data analysis and data warehousing concepts (star/snowflake schema design, dimensional modeling, and reporting enablement).
Resolve performance bottlenecks.
Optimize data processing to enhance Redshift performance.
Refine integration processes.
Support business intelligence and analytics: translate business requirements into technical specifications and coded data pipelines; ensure data is integrated for business intelligence and analytics; meet data requirements.
Maintain documentation and compliance: document all data integration processes, workflows, and technical and system specifications; ensure compliance with data governance policies, industry standards, and regulatory requirements.

What will this person be working on?

Design, development, and management of data integration processes: integrating data from diverse sources, transforming it to meet business requirements, and loading it into target systems such as data warehouses or data lakes.

Position Success Criteria (Desired) - ANTS

Bachelor's degree in computer science, information technology, or a related field; a master's degree can be advantageous.
4-6+ years of experience in data engineering, database design, and ETL processes.
Experience with Iceberg.
5+ years with programming languages such as PySpark, Python, and SQL.
5+ years of experience with AWS tools and technologies (S3, EMR, Glue, Athena, Redshift, Postgres, RDS, Lambda, PySpark).
3+ years of experience with databases, data marts, and data warehouses.
ETL development, system integration, and CI/CD implementation.
Experience with complex database objects used to move changed data across multiple environments.
Solid understanding of data security, privacy, and compliance.
Participation in agile development processes, including sprint planning, stand-ups, and retrospectives.
Ability to provide technical guidance and mentorship to junior developers.
