NAM Info Inc
Data Engineer
We are looking for a Data Engineer with deep expertise in Snowflake and the AWS ecosystem. You will design, build, and maintain modern data pipelines that power analytical, operational, and AI/ML workloads.
Key Responsibilities
Design and develop end‑to‑end ETL/ELT pipelines for structured and unstructured data into Snowflake and AWS data lakes.
Orchestrate data ingestion and transformation using tools such as AWS Glue, dbt, Informatica, Talend, Matillion, and Apache NiFi.
Implement incremental loads, CDC, and real‑time streaming with Kafka, Kinesis, or Debezium.
Model data for analytics using dimensional and Data Vault architectures; optimize Snowflake performance with clustering and materialized views.
Automate workflows in Apache Airflow, AWS Step Functions, or Glue Workflows, and set up CI/CD pipelines with Git and Jenkins.
Ensure data quality, governance, and security aligned with GDPR, SOC‑2, and enterprise IAM policies.
Required Skills & Qualifications
Bachelor’s or Master’s in Computer Science, Data Engineering, or related field.
8–10 years of data engineering experience in cloud environments.
Advanced proficiency in Snowflake schema design, query optimization, and cost management.
Hands‑on experience with ETL tools: AWS Glue, dbt, Informatica, Talend, Matillion, Apache NiFi, or Azure Data Factory.
Strong SQL and programming skills in Python, Scala, or Java.
Experience with data modeling (3NF, Star, Snowflake, Data Vault 2.0) and orchestration.
Knowledge of data cataloging, lineage, and metadata management.
Bonus: real‑time streaming, CDC pipelines, API ingestion, or machine‑learning data prep.
Please submit your resume to jnehru@nam-it.com.