84.51°
Overview
Senior Data Engineer role at 84.51°. You will collaborate with cross-functional teams, leverage cutting-edge technologies, and ensure scalable, efficient, and secure data engineering practices. You’ll primarily work within Azure, building an application that specializes in campaign management.
Responsibilities
Take Ownership: own systems, processes, and the tech stack while driving features to completion through all phases of the SDLC, including internal- and external-facing applications and process improvement activities.
Provide Technical Leadership: offer guidance to ensure alignment across ongoing projects and facilitate collaboration across teams to solve complex data engineering challenges.
Build and Maintain Data Pipelines: design, build, and maintain scalable, efficient, and reliable data pipelines for data ingestion, transformation, and integration across sources and destinations, using tools such as Kafka, Databricks, and similar toolsets.
Drive Innovation: leverage technologies to modernize and extend core data assets (SQL-based, NoSQL-based, cloud-based, and real-time streaming platforms).
Implement Automated Testing: design and implement automated unit, integration, and performance testing frameworks to ensure data quality, reliability, and compliance with standards.
Optimize Data Workflows: optimize data workflows for performance, cost efficiency, and scalability.
Mentor Team Members: mentor team members in data engineering principles, patterns, and processes to promote best practices.
Draft and Review Documentation: draft and review architectural diagrams, interface specifications, and other design documents to communicate data solutions and requirements clearly.
Requirements
Bachelor’s degree, typically in Computer Science, MIS, Mathematics, Business Analytics, or another STEM field.
Proven experience leading cross-functional teams and managing complex projects from inception to completion.
3+ years of professional data development experience.
3+ years of experience with SQL and NoSQL technologies.
2+ years of experience building and maintaining data pipelines and workflows.
2+ years of experience developing with Python and PySpark.
Experience developing within Databricks.
Experience with CI/CD pipelines and processes.
Experience with automated unit, integration, and performance testing.
Experience with version control software such as Git.
Full understanding of ETL and data warehousing concepts.
Strong understanding of Agile principles (Scrum).
Preferred Qualifications
Knowledge of Structured Streaming (Spark, Kafka, EventHub, or similar).
Experience with GitHub SaaS/GitHub Actions.
Experience with Service-Oriented Architecture.
Experience with containerization technologies such as Docker and Kubernetes.
Pay Transparency and Benefits
The stated salary range represents the entire span applicable across all geographic markets, from lowest to highest. Actual salary offers will be determined by factors including location, experience, knowledge, skills, and market data. In addition to salary, this position is eligible for variable compensation.
Benefits include: Health (Medical, Dental, Vision); 401(k) with Roth option and matching; Health Savings Account with matching contribution; AD&D and supplemental insurance.
Happiness: hybrid work environment; paid time off including 5 weeks of vacation, wellness days, floating holidays, and company holidays; paid leave for maternity, paternity, and family care.
Pay Range: $97,000.00/yr - $166,750.00/yr
Seniority level: Mid-Senior level
Employment type: Full-time
Job function: Information Technology
Industries: Business Consulting and Services