neteffects

Senior Data Engineer (Databricks)

neteffects, Atlanta, Georgia, United States, 30383

Overview

We are seeking a Senior Data Engineer with strong hands-on experience in Databricks and modern data engineering practices. This role is ideal for an engineer who enjoys building, optimizing, and maintaining scalable data pipelines that enable analytics, business intelligence, and data-driven decision-making across the organization. The ideal candidate will be highly proficient in data modeling, data pipeline orchestration, ETL/ELT design, and cloud data engineering (preferably AWS), with deep knowledge of the Medallion Data Architecture (Bronze, Silver, Gold layers) to ensure data reliability, scalability, and reusability.

Key Responsibilities

Design & Build Data Pipelines:

Architect, implement, and maintain scalable end-to-end data pipelines using

Databricks ,

Spark , and related technologies. Medallion Data Architecture:

Design and implement data workflows following the

Medallion (Bronze–Silver–Gold)

architecture — ensuring structured, quality-controlled data flow from raw ingestion to curated and analytics-ready datasets. Data Transformation & Optimization:

Develop efficient data processing and transformation workflows to support analytics and reporting use cases. Data Integration:

Integrate diverse data sources including APIs, databases, and cloud storage into unified datasets. Performance Tuning:

Optimize

Spark

jobs, queries, and workflows for efficiency, scalability, and cost-effectiveness. Collaboration:

Work closely with cross-functional teams (data science, analytics, and business units) to design and implement data solutions aligned with business goals. Data Quality & Validation:

Implement robust validation, monitoring, and observability processes to ensure data accuracy, completeness, and reliability. Automation & Governance:

Contribute to

data governance ,

security , and

automation

initiatives within the data ecosystem. Cloud Environment:

Leverage

AWS

services (e.g.,

S3 ,

Glue ,

Lambda ,

Redshift ) to build and deploy cloud-native data solutions. Qualifications
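
For context on the Medallion responsibility above, here is a minimal sketch of a Bronze–Silver–Gold flow in PySpark with Delta Lake. It assumes a Databricks-style environment where Delta is available and the bronze/silver/gold schemas already exist; the orders dataset, source path, table names, and columns are hypothetical placeholders, not a description of this team's actual pipelines.

    # Hypothetical Bronze -> Silver -> Gold flow; assumes Delta Lake is available
    # (e.g., a Databricks cluster) and the bronze/silver/gold schemas exist.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()

    # Bronze: land raw source data as-is, tagged with ingestion metadata.
    bronze = (
        spark.read.json("s3://example-bucket/raw/orders/")  # placeholder source path
        .withColumn("_ingested_at", F.current_timestamp())
    )
    bronze.write.format("delta").mode("append").saveAsTable("bronze.orders")

    # Silver: deduplicate, validate, and enforce types for downstream use.
    silver = (
        spark.read.table("bronze.orders")
        .dropDuplicates(["order_id"])
        .filter(F.col("order_total").isNotNull())
        .withColumn("order_total", F.col("order_total").cast("decimal(12,2)"))
    )
    silver.write.format("delta").mode("overwrite").saveAsTable("silver.orders")

    # Gold: curated, analytics-ready aggregates for BI and reporting.
    gold = (
        spark.read.table("silver.orders")
        .groupBy("customer_id")
        .agg(
            F.sum("order_total").alias("lifetime_value"),
            F.count("order_id").alias("order_count"),
        )
    )
    gold.write.format("delta").mode("overwrite").saveAsTable("gold.customer_orders")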

Qualifications

- Bachelor’s or Master’s degree in Computer Science, Information Systems, or related field.
- 5+ years of experience as a Data Engineer or Senior Data Engineer in enterprise-scale environments.
- Proven hands-on experience with Databricks, Apache Spark, and PySpark for large-scale data engineering and analytics.
- Strong understanding of Medallion Data Architecture and experience implementing Bronze, Silver, and Gold data layers within a Databricks or lakehouse environment.
- Proficiency in Python and SQL for data manipulation, automation, and orchestration.
- Experience designing and maintaining ETL/ELT processes and data pipelines for large datasets.
- Working knowledge of AWS (preferred) or other cloud platforms (Azure, GCP).
- Familiarity with data modeling, schema design, and performance tuning in data lake or data warehouse environments.
- Solid understanding of data governance, security, and compliance principles.
- Excellent communication, analytical, and problem-solving skills.
- Strong teamwork skills with the ability to collaborate across distributed teams.

Nice to Have

- Experience with tools like Fivetran, Prophecy, or Precisely Connect.
- Exposure to Delta Lake, Airflow, or dbt.
- Prior experience developing in Lakehouse environments or data mesh architectures.
- Familiarity with CI/CD practices for data pipelines.
- Experience working in Agile or DevOps-oriented environments.

Benefits

- Health insurance
- Health savings account
- Dental insurance
- Vision insurance
- Flexible spending accounts
- Life insurance
- Retirement plan

All qualified applicants will receive consideration for employment without regard to age, race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
