Artech L.L.C.
Location: Portland, OR (local or willing to relocate)
Job Description
We are seeking a highly skilled Databricks Data Engineer with a minimum of 10 years of total experience, including strong expertise in the retail industry. The ideal candidate will be responsible for designing, developing, and optimizing data pipelines and architectures to support advanced analytics and business intelligence initiatives. This role requires proficiency in Python, SQL, cloud platforms, and ETL tools within a retail-focused data ecosystem.
Key Responsibilities
Design, develop, and maintain scalable data pipelines using Databricks and Snowflake.
Work with Python libraries such as Pandas, NumPy, PySpark, pyodbc, pymssql, Requests, Boto3, SimpleSalesforce, and the built-in json module for efficient data processing.
Optimize and enhance SQL queries, stored procedures, triggers, and schema designs for RDBMS (MSSQL/MySQL) and NoSQL (DynamoDB/MongoDB/Redis) databases.
Develop and manage REST APIs to integrate various data sources and applications.
Implement AWS cloud solutions using AWS Data Exchange, Athena, CloudFormation, Lambda, S3, the AWS Console, IAM, STS, EC2, and EMR.
Use ETL tools such as Apache Airflow, AWS Glue, Azure Data Factory, Talend, and Alteryx to orchestrate and automate data workflows.
Work with Hadoop and Hive for big data processing and analysis.
Collaborate with cross-functional teams to understand business needs and develop efficient data solutions that drive decision-making in the retail domain.
Ensure data quality, governance, and security across all data assets and pipelines.
Required Qualifications
10+ years of total experience in data engineering and data processing.
6+ years of hands-on experience in Python programming for data processing and analytics.
4+ years of experience working with Databricks and Snowflake.
4+ years of expertise in SQL development, performance tuning, and RDBMS/NoSQL databases.
4+ years of experience designing and managing REST APIs.
2+ years of experience with AWS data services.
2+ years of hands-on experience with ETL tools such as Apache Airflow, AWS Glue, Azure Data Factory, Talend, or Alteryx.
1+ year of experience with Hadoop and Hive.
Strong understanding of retail industry data needs and best practices.
Excellent problem-solving, analytical, and communication skills.
Preferred Qualifications
Experience with real-time data processing and streaming technologies.
Familiarity with machine learning and AI-driven analytics.
Certifications in Databricks, AWS, or Snowflake.
This is an exciting opportunity to work on cutting‑edge data engineering solutions in a fast‑paced retail environment. If you are passionate about leveraging data to drive business success and innovation, we encourage you to apply!
Seniority Level: Mid-Senior level
Employment Type: Contract
Job Function: Information Technology
Industries: Technology, Information and Media