Revel IT

Senior Data Analytics Engineer

Revel IT, Columbus, Ohio, United States, 43224


We are seeking a highly skilled Analytics Data Engineer with deep expertise in building scalable data solutions on the AWS platform. The ideal candidate is an expert in Python and PySpark, with strong working knowledge of SQL.

This engineer will play a critical role in translating business and end‑user needs into robust analytics products—spanning ingestion, transformation, curation, and enablement for downstream reporting and visualization.

You will work closely with both business stakeholders and IT teams to design, develop, and deploy advanced data pipelines and analytical capabilities that power enterprise decision‑making.

Key Responsibilities

Design, develop, and optimize scalable data ingestion pipelines using Python, PySpark, and AWS-native services.

Build end-to-end solutions to move large-scale data from source systems into AWS environments (e.g., S3, Redshift, DynamoDB, RDS).

Develop and maintain robust data transformation and curation processes to support analytics, dashboards, and business intelligence tools.

Implement best practices for data quality, validation, auditing, and error‑handling within pipelines.
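To give a concrete flavor of the data-quality and error-handling work described above, here is a minimal sketch of an in-pipeline validation step. Plain Python is used for brevity (in practice this logic would typically run as a PySpark job), and the field names and rules are hypothetical:

```python
# Hypothetical row-level validation step for an ingestion pipeline.
# Valid records continue downstream; failing records are quarantined
# with a reason, supporting auditing and error handling.

def validate_rows(rows, required_fields=("order_id", "amount")):
    """Split rows into valid records and quarantined errors."""
    valid, errors = [], []
    for row in rows:
        missing = [f for f in required_fields if row.get(f) is None]
        if missing:
            errors.append({"row": row, "reason": f"missing fields: {missing}"})
        elif row["amount"] < 0:
            errors.append({"row": row, "reason": "negative amount"})
        else:
            valid.append(row)
    return valid, errors

batch = [
    {"order_id": 1, "amount": 9.99},
    {"order_id": 2, "amount": None},
    {"order_id": 3, "amount": -5.0},
]
good, bad = validate_rows(batch)
```

The same split-and-quarantine pattern maps directly onto a PySpark `DataFrame` filter, with the error records written to a dead-letter location for review.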

Analytics Solution Design

Collaborate with business users to understand analytical needs and translate them into technical specifications, data models, and solution architectures.

Build curated datasets optimized for reporting, visualization, machine learning, and self‑service analytics.

Contribute to solution design for analytics products leveraging AWS services such as AWS Glue, Lambda, EMR, Athena, Step Functions, Redshift, Kinesis, and Lake Formation.

Cross‑Functional Collaboration

Work with IT and business partners to define requirements, architecture, and KPIs for analytical solutions.

Participate in Daily Scrum meetings, code reviews, and architecture discussions to ensure alignment with enterprise data strategy and coding standards.

Provide mentorship and guidance to junior engineers and analysts as needed.

Employ strong skills in Python, PySpark, and SQL to support data engineering tasks, broader system integration requirements, and application layer needs.

Implement scripts, utilities, and micro‑services as needed to support analytics workloads.

Required Qualifications

5+ years of professional experience in data engineering, analytics engineering, or full-stack data development roles.

Expert-level proficiency in Python and PySpark.

Strong working knowledge of SQL and other programming languages.

Demonstrated experience designing and delivering big‑data ingestion and transformation solutions through AWS.

Hands‑on experience with AWS services such as Glue, EMR, Lambda, Redshift, S3, Kinesis, CloudFormation, and IAM.

Strong understanding of data warehousing, ETL/ELT, distributed computing, and data modeling.

Ability to partner effectively with business stakeholders and translate requirements into technical solutions.

Strong problem‑solving skills and the ability to work independently in a fast‑paced environment.

Preferred Qualifications

Experience with BI/visualization tools such as Tableau.

Experience building CI/CD pipelines for data products (e.g., Jenkins, GitHub Actions).

Familiarity with machine learning workflows or MLOps frameworks.

Knowledge of metadata management, data governance, and data lineage tools.

Seniority level: Mid‑Senior level

Employment type: Other

Job function: Information Technology

Industries: Electrical Equipment Manufacturing
