Endava

Senior Data Engineer

Endava, Deerfield, Illinois, United States, 60063


Overview

Senior Data Engineer role at Endava. This description reflects the responsibilities, qualifications, and related information from the original posting.

Company Description

Technology is our how. And people are our why. For over two decades, we have been harnessing technology to drive meaningful change. By combining world-class engineering, industry expertise and a people-centric mindset, we consult and partner with leading brands from various industries to create dynamic platforms and intelligent digital experiences that drive innovation and transform businesses. From prototype to real-world impact - be part of a global shift by doing work that matters.

Responsibilities

- Lead onshore technical engineers and act as the lead engineer for each domain
- Collaborate with Solution Architects to build the understanding needed to provide engineering-level guidance to nearshore teams
- Translate architecture designs into engineering-level guidance
- Own the breakdown of architectural designs into sprint-ready engineering tasks for the assigned domain
- Ensure delivery to the solution architecture within the domain
- Answer design questions for sprint teams
- Participate in backlog refinement sessions to align the team with the architecture
- Act as the internal technical expert supporting Product Design and Engineering teams in demonstrations and technical discussions
- Create and update documentation such as solution design patterns and engineering standards
- Lead technical discussions around APIs, integrations, data flow, compliance, and security
- Serve as a trusted advisor on best practices, technical implementation, and scalability
- Design and implement highly performant and scalable ETL/ELT data pipelines using Databricks (Spark) and orchestration tools
- Develop data ingestion, transformation, and enrichment workflows using Scala, Python, and advanced SQL
- Support real-time and batch processing needs with optimized workflows, adhering to best practices for scalability and maintainability
- Leverage Databricks to develop and manage Spark-based data pipelines and analytics workloads
- Implement Unity Catalog for centralized data governance, access control, lineage, and auditability
- Manage Delta Lake schemas and optimize data lake performance using Z-ordering, compaction, and partitioning strategies
- Integrate testing and validation frameworks into pipelines using tools such as Great Expectations or custom frameworks
- Monitor data flows for anomalies, failures, and integrity issues, and proactively troubleshoot and resolve them
- Build automated deployment pipelines for data workflows using Azure DevOps, GitHub Actions, or similar tools
- Apply version control and collaborative development practices for notebooks, pipelines, and data models
- Partner with Data Scientists, BI teams, and Product stakeholders to understand data needs and translate them into scalable solutions
- Mentor junior engineers and help standardize practices across the data engineering team

Qualifications
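As a small illustration of the in-pipeline validation work mentioned among the responsibilities (Great Expectations or custom frameworks), here is a minimal sketch in plain Python. It is not part of the posting: the check functions, column names, and sample rows are all hypothetical, and a production pipeline would run equivalent checks against Spark DataFrames or via a dedicated library.

```python
# Minimal sketch of a custom data-quality check layer, in the spirit of
# "integrate testing and validation frameworks into pipelines".
# All names and sample data here are hypothetical.

def check_not_null(rows, column):
    """Return rows where `column` is missing or None."""
    return [r for r in rows if r.get(column) is None]

def check_in_range(rows, column, lo, hi):
    """Return rows where `column` is present but outside [lo, hi]."""
    return [r for r in rows
            if r.get(column) is not None and not (lo <= r[column] <= hi)]

def validate(rows):
    """Run all checks; map each failing check name to its bad rows."""
    failures = {
        "quantity_not_null": check_not_null(rows, "quantity"),
        "quantity_in_range": check_in_range(rows, "quantity", 0, 1000),
    }
    return {name: bad for name, bad in failures.items() if bad}

rows = [
    {"order_id": 1, "quantity": 5},
    {"order_id": 2, "quantity": None},   # fails the not-null check
    {"order_id": 3, "quantity": 5000},   # fails the range check
]
report = validate(rows)
```

A pipeline step would typically fail the run, or route bad rows to a quarantine table, whenever `report` is non-empty.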

- 5+ years of professional experience in data engineering roles
- Experience with Azure Cloud and hands-on experience with cloud services such as ADF, Data Lake, and Key Vault
- Hands-on expertise with Databricks, including Spark tuning and cluster management
- Proficiency in Scala, Python, and SQL
- Experience with Azure SQL, Cosmos DB, and PostgreSQL
- Knowledge of databases (SQL and/or NoSQL) and data modeling
- Proven experience building and managing data pipelines in production
- Strong understanding of the Data Lakehouse, Delta Lake, and the medallion architecture
- Experience with data governance, access control, and lineage using Unity Catalog
- Must be able to work onsite 2-3 days a week

Desirable

- Pharmacy business domain experience
- Exposure to DevOps, CI/CD pipelines, and containerization (Docker, Kubernetes)
- dbt for transformations and documentation
- Azure Event Hubs or Kafka for streaming data ingestion
- Airflow, Synapse, or Power BI for orchestration and reporting integration
- Experience with Infrastructure-as-Code tools such as Terraform or Bicep for provisioning Azure data resources
- Exposure to data observability platforms (e.g., Monte Carlo, OpenLineage)
- Knowledge of data mesh and distributed data ownership models

Additional Information

Discover some of the global benefits that empower our people to become the best version of themselves: Finance, Career Development, Learning Opportunities, Work-Life Balance, Health, Community.

USA Benefits (full-time roles only): healthcare and benefits, Flexible Spending Accounts, employer-paid life insurance, Health Savings Account, 401(k) with employer match.

Additional Employee Requirements

- Participation in both internal and external meetings via video calls, as necessary
- Ability to go into corporate or client offices to work onsite, as necessary
- Prolonged periods of remaining stationary at a desk and working on a computer, as necessary
- Ability to bend, kneel, crouch, and reach overhead, as necessary
- Hand-eye coordination necessary to operate computers and various pieces of office equipment
- Vision abilities including close vision, toleration of fluorescent lighting, and adjusting focus
- For positions that require business travel and/or event attendance, the ability to lift 25 lbs, as necessary
- For positions that require business travel and/or event attendance, a valid driver's license and acceptable driving record

Reasonable accommodations will be made to enable employees requiring accommodations to perform the essential functions of their jobs, absent undue hardship.

Seniority level

Mid-Senior level

Employment type

Full-time

Job function

Information Technology

Industries: IT Services and IT Consulting
