Jazwares

Senior Data Engineer

Jazwares, Fort Lauderdale, Florida, US 33336


General Purpose

We are seeking a highly skilled and experienced Senior Data Engineer to join our Data and BI team. This role is critical in designing, building, and maintaining our core data infrastructure and pipelines. You will be instrumental in ensuring data availability, reliability, and performance, working closely with our Analytics Engineers and other stakeholders to support our evolving data needs. This position requires deep technical expertise in our modern data stack and a passion for building robust, automated, and secure data solutions.

Duties and Responsibilities

Databricks Platform Management:

- Administer, optimize, and scale our Databricks Lakehouse environment, ensuring high performance, cost efficiency, and operational excellence.

- Design and implement data ingestion patterns into Databricks using Delta Lake, optimizing for large-scale data processing and storage.

- Manage and enforce Unity Catalog for data governance, access control, and metadata management.

- Develop, optimize, and troubleshoot complex Spark jobs (PySpark/Scala) for data processing and transformation within Databricks (see the sketch below).
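
To give candidates a concrete sense of this work, below is a minimal PySpark sketch of one such pattern: landing raw source files in a Delta table and granting read access through Unity Catalog. All catalog, schema, table, and storage names are illustrative assumptions, not actual Jazwares conventions.

```python
# Minimal sketch (illustrative names throughout): batch-ingest raw JSON into
# a Delta table, then grant read access via Unity Catalog SQL. Intended to
# run on a Databricks cluster, where `spark` is provided by the runtime.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Read a batch of raw source files (e.g., an ERP export) from cloud storage.
raw = spark.read.format("json").load(
    "abfss://landing@example.dfs.core.windows.net/erp/orders/"  # hypothetical path
)

# Stamp each record before landing it in the bronze layer of the lakehouse.
bronze = raw.withColumn("_ingested_at", F.current_timestamp())

# Write as Delta; mergeSchema tolerates additive source-schema drift.
(
    bronze.write.format("delta")
    .mode("append")
    .option("mergeSchema", "true")
    .saveAsTable("main.bronze.erp_orders")  # three-level Unity Catalog name
)

# Unity Catalog access control is plain SQL: grant read access to a group.
spark.sql("GRANT SELECT ON TABLE main.bronze.erp_orders TO `data-analysts`")
```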

Data Ingestion & Orchestration:

- Manage and extend data ingestion pipelines using Airbyte, including configuring connectors, monitoring syncs, and ensuring data quality and reliability from diverse source systems (e.g., ERP, CRM, marketing, supply chain).

- Orchestrate and automate data pipelines and dbt models using Databricks Workflows, integrating with other orchestration tools where needed (see the sketch below).
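
As one example of the glue code this duty can involve, the sketch below triggers an Airbyte sync and polls the resulting job, the kind of step an orchestrator task might run. It assumes the open-source Airbyte server's Config API; the host URL and connection ID are placeholders.

```python
# Minimal sketch: trigger an Airbyte connection sync and wait for it to
# finish. Assumes the open-source Airbyte server's Config API; the host and
# connection ID below are hypothetical placeholders.
import time

import requests

AIRBYTE_URL = "http://airbyte.internal:8001/api/v1"     # hypothetical host
CONNECTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder UUID


def trigger_sync() -> int:
    """Kick off a sync for one connection and return the Airbyte job id."""
    resp = requests.post(
        f"{AIRBYTE_URL}/connections/sync",
        json={"connectionId": CONNECTION_ID},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["job"]["id"]


def wait_for_job(job_id: int, poll_seconds: int = 30) -> str:
    """Poll until the job leaves a non-terminal state; return final status."""
    while True:
        resp = requests.post(f"{AIRBYTE_URL}/jobs/get", json={"id": job_id}, timeout=30)
        resp.raise_for_status()
        status = resp.json()["job"]["status"]
        if status not in ("pending", "running", "incomplete"):
            return status
        time.sleep(poll_seconds)


if __name__ == "__main__":
    final_status = wait_for_job(trigger_sync())
    if final_status != "succeeded":
        raise SystemExit(f"Airbyte sync ended with status: {final_status}")
```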

Data Transformation & Modeling (dbt):

- Collaborate with Analytics Engineers to translate business requirements into efficient and scalable data models using dbt.

- Implement dbt best practices for modularity, testing, documentation, and version control.

- Ensure seamless integration of dbt projects with Databricks for robust data transformation (see the sketch below).
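
One way this integration shows up in practice is invoking dbt from a Databricks Workflows task so models and their tests gate the pipeline. The sketch below assumes dbt-core 1.5+ (which added programmatic invocation) with a Databricks profile already configured; the selector is a placeholder.

```python
# Minimal sketch: run and test dbt models from Python, e.g., inside a
# Databricks Workflows task. Assumes dbt-core >= 1.5 and a configured
# Databricks profile; the selector below is a hypothetical example.
from dbt.cli.main import dbtRunner, dbtRunnerResult


def run_dbt(select: str) -> None:
    # `dbt build` runs models, tests, seeds, and snapshots in DAG order,
    # so transformation and testing fail or succeed as a single gated step.
    res: dbtRunnerResult = dbtRunner().invoke(["build", "--select", select])
    if not res.success:
        raise RuntimeError(f"dbt build failed for selector {select!r}")


if __name__ == "__main__":
    run_dbt("staging+")  # hypothetical selector: staging models plus descendants
```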

Infrastructure as Code (IaC) & Automation (Terraform, GitHub Actions):

- Develop, maintain, and enhance our data platform infrastructure and security configurations using Terraform. This includes provisioning Databricks workspaces, SQL Endpoints, Unity Catalog objects (catalogs, schemas, external locations, grants), and network components.

- Implement and manage CI/CD pipelines for data pipelines, dbt projects, and infrastructure deployments using GitHub Actions (see the sketch after this list).

- Automate operational tasks, monitoring, and alerting for the data platform.
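
To make the CI/CD side concrete, below is a minimal sketch of a pre-merge gate a GitHub Actions step might call for the Terraform code that provisions these resources. It assumes the Terraform CLI is on PATH; the module directory is a placeholder, and a full `terraform plan` against remote state would typically follow in the workflow.

```python
# Minimal sketch: a pre-merge Terraform gate (format check + validate) that a
# GitHub Actions step could invoke. Assumes the Terraform CLI is installed;
# the module path below is a hypothetical placeholder.
import subprocess
import sys

INFRA_DIR = "infra/databricks"  # hypothetical Terraform module path


def tf(*args: str) -> int:
    """Run a terraform subcommand in the module directory; return exit code."""
    return subprocess.run(["terraform", *args], cwd=INFRA_DIR).returncode


def main() -> None:
    # Style gate: fail if any .tf file is not canonically formatted.
    if tf("fmt", "-check", "-recursive") != 0:
        sys.exit("terraform fmt check failed")
    # Initialize providers without touching remote state, then validate the
    # configuration's syntax and internal consistency.
    if tf("init", "-backend=false") != 0 or tf("validate") != 0:
        sys.exit("terraform validate failed")
    print("Terraform configuration is formatted and valid")


if __name__ == "__main__":
    main()
```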

Security & Governance:

- Implement and enforce DevSecOps principles across the data stack, embedding security into every stage of the data lifecycle.

- Work closely with security teams to ensure compliance with data privacy regulations and internal security policies.

- Manage and rotate credentials securely, e.g., using Databricks secret scopes or cloud secret managers (see the sketch below).
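
As a small illustration of this practice, the sketch below reads a source-system credential from a Databricks secret scope instead of hardcoding it. `dbutils` and `spark` exist only inside the Databricks runtime, and the scope, key, host, and table names are hypothetical.

```python
# Minimal sketch (Databricks notebook/job context): pull JDBC credentials
# from a secret scope rather than embedding them in code. `dbutils` and
# `spark` are provided by the Databricks runtime; all names are hypothetical.
host = "erp.example.com"
user = dbutils.secrets.get(scope="erp", key="username")
password = dbutils.secrets.get(scope="erp", key="password")

# Secrets fetched this way are redacted in notebook output and driver logs,
# and rotating the credential in the scope requires no code change.
orders = (
    spark.read.format("jdbc")
    .option("url", f"jdbc:sqlserver://{host};databaseName=erp")
    .option("user", user)
    .option("password", password)
    .option("dbtable", "dbo.orders")  # hypothetical source table
    .load()
)
```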

Collaboration & Mentorship:

- Partner effectively with Analytics Engineers, Data Scientists, and business stakeholders to deliver high-quality data solutions.

- Provide technical guidance and mentorship to junior team members.

- Champion data engineering best practices, code quality, and documentation standards.

- Perform other duties as assigned.

Manages People:

No

Required Qualifications

Education / Years of Experience

- Bachelor's degree in Computer Science, Data Engineering, or a related technical field required.

- 5+ years of progressive experience as a Data Engineer, with a strong focus on cloud-based data platforms.

- Deep expertise in Databricks, including extensive experience with Spark (PySpark/Scala), Delta Lake, Unity Catalog, Databricks SQL, and platform administration.

- Proven experience with dbt for data modeling, transformation, and testing.

- Hands-on experience with Airbyte (or similar modern data ingestion tools like Fivetran, Stitch) for building and managing data pipelines.

- Strong proficiency with Terraform for defining, provisioning, and managing cloud infrastructure and Databricks resources as code.

- Expertise in Git and GitHub Actions for version control and implementing robust CI/CD pipelines.

- Proficiency in SQL and at least one programming language (Python strongly preferred, Scala is a plus).

- Solid understanding of data warehousing, data lake, and lakehouse architectures.

- Experience with cloud platforms (Azure, AWS, or GCP), particularly related to networking, storage, and identity & access management.

- Excellent problem-solving skills, attention to detail, and ability to troubleshoot complex data issues.

- Strong communication and collaboration skills, with the ability to articulate technical concepts to diverse audiences.

- Certifications in Databricks, Terraform, or dbt preferred.

Knowledge, Skills, Abilities, and Other Characteristics (KSAOs)

- Experience in the CPG (Consumer Packaged Goods) industry.

- Familiarity with data visualization tools like Sigma or Tableau.

- Experience with streaming data technologies (e.g., Kafka, Spark Streaming).

- Knowledge of data governance frameworks and tools.

- Relevant certifications (e.g., Databricks Certified Data Engineer, HashiCorp Certified Terraform Associate).

Working Conditions

Environment:

Office

Extreme Exposures:

None

Schedule:

Regular office

Physical Requirements:

Light; lifting up to 25 lbs.

Must be able to sit at a desk for long periods of time.

Travel Required:

No

This job description is not designed to cover or contain a comprehensive listing of activities, duties, or responsibilities that are required of the employee. Duties, responsibilities, and activities may change, or new ones may be assigned at any time with or without notice.


Equal Opportunity Employer

This employer is required to notify all applicants of their rights pursuant to federal employment laws. For further information, please review the Know Your Rights notice from the Department of Labor.