Code17Tek
Overview
We are seeking an experienced AWS & Snowflake Data Engineer to design, build, and maintain cloud-based data pipelines, warehouses, and analytics platforms. The ideal candidate will have a strong background in AWS data services, Snowflake architecture, and ETL/ELT development using modern data engineering practices. You will be responsible for building scalable, cost‑efficient data solutions that support advanced analytics, business intelligence, and machine learning initiatives.
Key Responsibilities
- Design, develop, and optimize data pipelines to ingest data from multiple sources into Snowflake using AWS Glue, Lambda, Step Functions, or Airflow (see the illustrative ingestion sketch after this list).
- Create and manage Snowflake databases, warehouses, schemas, roles, and access controls.
- Develop ETL/ELT processes to transform, cleanse, and load structured and semi‑structured data (JSON, Parquet, etc.).
- Implement data modeling best practices (Star/Snowflake schema) for analytics and reporting.
- Use Python, SQL, or PySpark to automate data integration and validation.
- Leverage AWS S3, Glue, Lambda, Redshift, Kinesis, Step Functions, and CloudFormation/Terraform for data architecture and automation.
- Manage IAM roles, networking, and resource configuration for secure and efficient access to Snowflake and AWS services.
- Implement monitoring, logging, and cost optimization for data workflows using CloudWatch and other observability tools.
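To illustrate the first responsibility above, here is a minimal sketch of an S3-to-Snowflake ingestion step of the kind that could run inside a Lambda function or an Airflow task. All names (the RAW_EVENTS table, the S3_EVENTS_STAGE external stage, the warehouse/database/schema, and the environment variables) are hypothetical placeholders, not part of this posting, and it assumes the stage and file format already exist in Snowflake.

import os
import snowflake.connector

def load_events_from_s3() -> None:
    # Connect using credentials supplied via environment variables (hypothetical names).
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="LOAD_WH",
        database="ANALYTICS",
        schema="RAW",
    )
    try:
        cur = conn.cursor()
        # Copy semi-structured Parquet files from an external S3 stage into a raw table,
        # matching columns by name so new optional fields do not break the load.
        cur.execute(
            """
            COPY INTO RAW_EVENTS
            FROM @S3_EVENTS_STAGE/events/
            FILE_FORMAT = (TYPE = PARQUET)
            MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
            """
        )
    finally:
        conn.close()

if __name__ == "__main__":
    load_events_from_s3()

In an Airflow deployment the same COPY statement would typically be wrapped in a task (for example via the Snowflake provider's operator) rather than run as a standalone script.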
Data Quality, Security & Governance
- Ensure data accuracy, completeness, and consistency across environments.
- Implement data lineage and cataloging using tools such as AWS Glue Data Catalog or Collibra.
- Enforce security and compliance through role‑based access control, encryption, and auditing (see the role-grant sketch after this list).
- Work closely with Data Analysts, Data Scientists, and BI Developers to enable self‑service data access.
- Partner with business stakeholders to understand data needs and translate them into scalable technical solutions.
- Participate in code reviews, architecture discussions, and CI/CD deployments for data workflows.
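As a concrete example of the role-based access control mentioned above, the following sketch provisions a read-only reporting role in Snowflake. The role, warehouse, database, and schema names (ANALYST_ROLE, REPORTING_WH, ANALYTICS, REPORTING) are placeholders for illustration only.

import os
import snowflake.connector

# Grant statements for a hypothetical read-only reporting role.
RBAC_STATEMENTS = [
    "CREATE ROLE IF NOT EXISTS ANALYST_ROLE",
    "GRANT USAGE ON WAREHOUSE REPORTING_WH TO ROLE ANALYST_ROLE",
    "GRANT USAGE ON DATABASE ANALYTICS TO ROLE ANALYST_ROLE",
    "GRANT USAGE ON SCHEMA ANALYTICS.REPORTING TO ROLE ANALYST_ROLE",
    "GRANT SELECT ON ALL TABLES IN SCHEMA ANALYTICS.REPORTING TO ROLE ANALYST_ROLE",
    # Future grants keep the role current as new tables are created.
    "GRANT SELECT ON FUTURE TABLES IN SCHEMA ANALYTICS.REPORTING TO ROLE ANALYST_ROLE",
]

def apply_rbac() -> None:
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        role="SECURITYADMIN",  # role and grant management typically requires elevated privileges
    )
    try:
        cur = conn.cursor()
        for statement in RBAC_STATEMENTS:
            cur.execute(statement)
    finally:
        conn.close()

if __name__ == "__main__":
    apply_rbac()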
Required Skills
- 5+ years of experience in Data Engineering or a related field.
- Strong experience with AWS Data Services (S3, Glue, Lambda, Redshift, Step Functions).
- Hands‑on expertise in Snowflake (data warehousing, schema design, query optimization).
- Proficiency in SQL and Python/PySpark for data transformations (a short transformation sketch follows this list).
- Experience with data pipeline orchestration (Airflow, Step Functions, or similar).
- Solid understanding of data modeling, performance tuning, and cost management.
- Familiarity with version control (Git) and CI/CD pipelines (CodePipeline, Jenkins, or GitHub Actions).
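The kind of PySpark transformation referred to above might look like the following sketch, which deduplicates semi-structured JSON events, derives a partition column, and writes curated Parquet back to S3. The bucket names, paths, and column names are hypothetical.

from pyspark.sql import SparkSession, functions as F

def transform_events() -> None:
    spark = SparkSession.builder.appName("events_transform").getOrCreate()

    # Read raw JSON events from S3 (hypothetical bucket and prefix).
    raw = spark.read.json("s3://example-raw-bucket/events/")

    cleaned = (
        raw
        .dropDuplicates(["event_id"])                      # remove replayed events
        .withColumn("event_date", F.to_date("event_ts"))   # derive a partition column
        .filter(F.col("event_id").isNotNull())             # basic validation
    )

    # Write curated Parquet, partitioned for downstream analytics.
    (
        cleaned.write
        .mode("overwrite")
        .partitionBy("event_date")
        .parquet("s3://example-curated-bucket/events/")
    )

if __name__ == "__main__":
    transform_events()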
Preferred Skills
- Experience with Databricks or EMR for large‑scale data processing.
- Exposure to API‑based data ingestion (REST, GraphQL).
- Knowledge of data cataloging and lineage tools (Purview, Collibra, Alation).
- Familiarity with Terraform or CloudFormation for IaC.
- Experience in Agile/Scrum environments.