Precision Technologies

Data Engineer

Precision Technologies, Trenton, New Jersey, United States


Job Summary:

We are seeking a Senior Data Engineer with 10+ years of hands‑on experience in designing, building, and optimizing scalable, high‑performance data platforms and pipelines. The ideal candidate will have deep expertise in data ingestion, ETL/ELT, data warehousing, big‑data processing, cloud‑native architectures, real‑time streaming, and analytics enablement, and will partner closely with Analytics, Data Science, Product, and Engineering teams across the full data lifecycle.

Key Responsibilities

Design, develop, and maintain end‑to‑end data pipelines for structured, semi‑structured, and unstructured data using batch and real‑time processing frameworks.

Build and optimize ETL/ELT pipelines using Python, SQL, PySpark, Spark SQL, and orchestration tools such as Apache Airflow, Azure Data Factory, AWS Glue, or Prefect.

Develop scalable big‑data solutions using Apache Spark, the Hadoop ecosystem, and distributed processing techniques for high‑volume data workloads.

Design and manage cloud‑based data platforms on AWS, Azure, or GCP, including Data Lakes, Lakehouse architectures, and Cloud Data Warehouses.

Implement and optimize data warehousing solutions using Snowflake, Amazon Redshift, Google BigQuery, Azure Synapse, and dimensional modeling techniques.

Develop data transformation layers to support analytics, reporting, and machine learning workloads, ensuring high data quality and performance.

Build and maintain real‑time and streaming data pipelines using Apache Kafka, Kafka Streams, Spark Streaming, or Azure Event Hubs.

Design and implement data models using star schema, snowflake schema, and normalized models, aligned with business and analytics requirements.

Ensure data quality, validation, reconciliation, and lineage using data quality frameworks, metadata management, and governance tools.

Implement data security, access controls, encryption, and compliance standards across platforms, supporting PII, GDPR, SOC, and regulatory requirements.

Optimize SQL performance, partitioning strategies, indexing, and query tuning across relational and analytical databases.

Containerize and deploy data workloads using Docker and orchestrate pipelines using Kubernetes where applicable.

Build and maintain CI/CD pipelines for data engineering workflows using Jenkins, GitHub Actions, Azure DevOps, and infrastructure‑as‑code tools.

Monitor and troubleshoot data pipelines using logging, alerting, and observability tools to ensure reliability and SLA adherence.

Collaborate with Data Architects, Data Scientists, BI Developers, and Product teams to deliver analytics‑ready datasets.

Support UAT, production releases, incident management, and root cause analysis for data platforms.

Lead architecture decisions, conduct code reviews, mentor junior engineers, and enforce data engineering best practices.

Drive data modernization, cloud migration, and performance optimization initiatives across enterprise data ecosystems.

Required Skills

Programming & Querying: Python, SQL, PySpark, Spark SQL

Big Data: Apache Spark, Hadoop, Kafka

Streaming: Kafka, Spark Streaming, Event Hubs

Data Modeling: Star Schema, Snowflake Schema, Dimensional Modeling

Methodologies: Agile, SDLC, DataOps

Seniority Level: Mid‑Senior level

Employment Type: Full‑time

Job Function: Engineering and Information Technology

Industries: IT Services and IT Consulting

Location: Jersey City, NJ
