Data Engineer
Tential Solutions
We’re partnering with a Big 4 consulting firm to add a Data Engineer to their team supporting a major banking and credit organization. This role focuses on building and optimizing scalable, cloud-based data pipelines using Python, Java, SQL, AWS, Spark, Databricks, and EMR. You’ll work across consulting and client teams to deliver reliable data solutions that power analytics, risk, and credit decisioning use cases. This position is fully remote.
Responsibilities
Design, build, and maintain scalable data pipelines and ETL/ELT processes using Python, Java, and SQL.
Develop and optimize distributed data processing workloads using Spark (batch and/or streaming) on AWS (a minimal illustrative sketch follows this list).
Build and manage data workflows on AWS, leveraging services such as EMR, S3, Lambda, Glue, and related components as appropriate.
Use Databricks to develop, schedule, and monitor notebooks, jobs, and workflows supporting analytics and data products.
Implement data models and structures that support banking/credit analytics, reporting, and downstream applications (e.g., risk, fraud, portfolio, customer insights).
Monitor, troubleshoot, and tune pipeline performance, reliability, and cost in a production cloud environment.
Collaborate with consultants, client stakeholders, data analysts, and data scientists to understand requirements and translate them into technical solutions.
Apply best practices for code quality, testing, version control, and CI/CD within the data environment.
Contribute to documentation, standards, and reusable components to improve consistency and speed across the data engineering team.
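For context, the following is a minimal, hypothetical sketch of the kind of batch ETL work described above: a PySpark job that reads raw files from S3, applies light cleansing, and writes partitioned Parquet back out. The bucket paths, column names, and schema are illustrative assumptions, not details taken from this posting.

# Minimal PySpark batch ETL sketch (hypothetical paths and columns): read raw
# credit-transaction CSVs from S3, apply light cleansing, write partitioned Parquet.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("credit-txn-etl").getOrCreate()

# Hypothetical raw landing zone on S3
raw = (
    spark.read
    .option("header", "true")
    .csv("s3://example-raw-bucket/credit_transactions/")
)

cleaned = (
    raw
    .withColumn("txn_amount", F.col("txn_amount").cast("double"))
    .withColumn("txn_date", F.to_date("txn_date", "yyyy-MM-dd"))
    .dropna(subset=["account_id", "txn_amount"])   # drop incomplete records
    .dropDuplicates(["txn_id"])                    # de-duplicate on transaction id
)

# Hypothetical curated zone, partitioned for downstream analytics
(
    cleaned.write
    .mode("overwrite")
    .partitionBy("txn_date")
    .parquet("s3://example-curated-bucket/credit_transactions/")
)

spark.stop()

In practice, a job like this would typically be packaged and scheduled as an EMR step or a Databricks job, in line with the responsibilities above.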
Required Qualifications
Strong hands‑on experience with Python and Java for data engineering, ETL/ELT, or backend data services.
Advanced SQL skills, including complex queries, performance tuning, and working with large, relational datasets.
Production experience on AWS, ideally with services such as EMR, S3, Lambda, Glue, IAM, and CloudWatch.
Practical experience building and optimizing Spark jobs (PySpark, Spark SQL, or Scala).
Hands‑on experience with Databricks (notebooks, clusters, jobs, and/or Delta Lake); a small illustrative Delta Lake sketch follows this list.
Proven experience building and supporting reliable, performant data pipelines in a modern cloud environment.
Solid understanding of data warehousing concepts, data modeling, and best practices for structured and semi‑structured data.
Experience working in collaborative engineering environments (Git, code reviews, branching strategies).
Strong communication skills and comfort working in a consulting/client‑facing environment.
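As a companion to the Spark and Databricks qualifications above, here is a small, hypothetical Spark SQL / Delta Lake example that rolls curated transactions up into a daily summary table. Table locations and column names are illustrative assumptions, and it presumes an environment (such as a Databricks cluster) where Delta Lake is available.

# Hypothetical Spark / Delta Lake sketch: aggregate curated transactions into a
# daily summary. Paths and columns are placeholders; assumes Delta Lake is available.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("portfolio-daily-summary").getOrCreate()

# Hypothetical curated Delta table produced by an upstream pipeline
txns = spark.read.format("delta").load("/mnt/curated/credit_transactions")

daily_summary = (
    txns.groupBy("txn_date", "portfolio_segment")
    .agg(
        F.count("txn_id").alias("txn_count"),
        F.sum("txn_amount").alias("total_amount"),
    )
)

# Overwrite the hypothetical summary table consumed by reporting and BI teams
(
    daily_summary.write
    .format("delta")
    .mode("overwrite")
    .save("/mnt/curated/portfolio_daily_summary")
)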
Preferred Qualifications (Nice To Have)
Experience in banking, credit, financial services, or highly regulated environments.
Background with streaming data (e.g., Spark Streaming, Kafka, Kinesis) and real-time or near-real-time data processing.
Familiarity with orchestration tools (e.g., Airflow, Databricks jobs scheduler, Step Functions).
Experience supporting analytics, BI, or data science teams (e.g., building curated datasets, feature stores, or semantic layers).
Seniority Level: Entry level. Employment type: Contract. Job function: Information Technology.
Remote: Yes