GlobalPoint
Overview
Seattle-based client. Location: Seattle, WA. Duration: 12 months. 4 days per week in-office. Core skills: data governance with Databricks Unity Catalog. 10-12+ years of experience in data engineering or a related field. Must coordinate with offshore teams.
Responsibilities
Design, build, and deploy data extraction, transformation, and loading (ETL) processes and pipelines from various sources, including databases, APIs, and data files (a sketch follows this list).
Develop and support data pipelines within a cloud data platform such as Databricks.
Build data models that reflect domain expertise, meet current business needs, and remain flexible as strategy evolves.
Monitor and optimize Databricks cluster performance, ensuring cost-effective scaling and resource utilization.
Communicate technical concepts to non-technical audiences in both written and verbal form.
Implement and maintain Delta Lake for optimized data storage, ensuring data reliability, performance, and versioning.
Automate CI/CD pipelines for data workflows using Azure DevOps.
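To make the day-to-day work concrete, here is a minimal PySpark sketch of the kind of pipeline described above: extracting raw files, cleansing them, and loading a Unity Catalog-governed Delta table, followed by routine Delta maintenance. The catalog, schema, table, path, and column names are hypothetical placeholders, not details from this posting.

```python
from pyspark.sql import SparkSession, functions as F

# On Databricks a SparkSession named `spark` is already provided; this
# line just makes the sketch self-contained elsewhere.
spark = SparkSession.builder.getOrCreate()

# Extract: read raw JSON landed by an upstream ingest (hypothetical path).
raw = spark.read.json("/Volumes/main/raw/orders/")

# Transform: deduplicate, enforce types, and drop invalid rows.
orders = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .filter(F.col("amount") > 0)
)

# Load: append to a Delta table governed by Unity Catalog
# (three-level catalog.schema.table naming).
orders.write.format("delta").mode("append").saveAsTable("main.sales.orders")

# Routine Delta Lake maintenance: compact small files, then clean up
# stale files past the default retention window.
spark.sql("OPTIMIZE main.sales.orders")
spark.sql("VACUUM main.sales.orders")
```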
Qualifications
Experience in a cloud environment (Azure preferred) with a strong understanding of cloud data architecture.
Hands-on experience with the Databricks cloud data platform.
Experience migrating to Unity Catalog.
Experience with workflow orchestration (e.g., Databricks Jobs, Azure Data Factory pipelines; see the sketch after this list).
Programming languages: Python/PySpark, SQL, or Scala.
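As one example of the orchestration experience above, here is a minimal sketch using the databricks-sdk for Python to register the notebook pipeline as a scheduled Databricks Job. The job name, notebook path, cluster ID, and cron expression are hypothetical placeholders.

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import jobs

# Credentials are resolved from the environment or a configured CLI profile.
w = WorkspaceClient()

created = w.jobs.create(
    name="nightly-orders-etl",  # hypothetical job name
    tasks=[
        jobs.Task(
            task_key="ingest_and_load",
            notebook_task=jobs.NotebookTask(
                notebook_path="/Repos/etl/orders_etl"  # hypothetical path
            ),
            existing_cluster_id="0000-000000-example0",  # hypothetical cluster
        )
    ],
    schedule=jobs.CronSchedule(
        quartz_cron_expression="0 0 2 * * ?",  # run daily at 2:00 AM
        timezone_id="America/Los_Angeles",
    ),
)
print(f"Created job {created.job_id}")
```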
Requirements
Location: Seattle, WA (4 days per week in-office, as noted above). Employment type: Full-time. Seniority level: Mid-Senior level. Industry: IT Services and IT Consulting.