Georgia IT Inc
Principal - Data Engineer - Bellevue, WA (Hybrid)
Georgia IT Inc, Bellevue, Washington, us, 98009
Principal - Data Engineer
Location:
Bellevue, WA (Hybrid)
Position Type: Contract
Rate: DOE
US citizen or Green Card holder preferred; no third-party agencies (C2C).
Job description:
We operate in an Azure + Databricks Lakehouse. We'll need a person with:
• Azure experience - ADF for orchestration, ADLS for storage, Azure DevOps for CI/CD
• Databricks experience - all compute/ETL runs on Databricks and is written in Spark (PySpark, Spark SQL)
• PowerShell experience - this is our scripting language of choice
• SQL proficiency - it's used everywhere (T-SQL, PostgreSQL)
• Proficiency with Parquet and Delta formats
Additionally, they will need experience in:
• SDLC + CI/CD - we follow a standard deployment process (dev, test, prod) that includes peer-reviewed code; they need to be comfortable with standard DevOps practices
• A deep understanding of indexes and partitioning
• Optimizing code for performance (able to read a Spark DAG and determine where the cost-based optimizer is spending the most resources)
• Writing code in a manner that it can run repeatedly and produce the same state (we have a custom SQL deployment framework)
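The last requirement above describes idempotent deployment code. As a minimal sketch of the idea (using Python's sqlite3 as a stand-in for T-SQL/PostgreSQL; the table and column names are hypothetical, not from the posting's framework):

```python
import sqlite3

def deploy(conn):
    # Idempotent DDL: safe to run repeatedly, always converges to the same schema.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS dim_customer (id INTEGER PRIMARY KEY, name TEXT)"
    )
    # Idempotent DML: upsert instead of a blind INSERT, so reruns produce the same state.
    conn.execute(
        "INSERT INTO dim_customer (id, name) VALUES (1, 'Acme') "
        "ON CONFLICT(id) DO UPDATE SET name = excluded.name"
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
deploy(conn)
deploy(conn)  # running the script twice leaves the database in the same state
rows = conn.execute("SELECT id, name FROM dim_customer").fetchall()
```

Running `deploy` any number of times yields one row, `(1, 'Acme')`; that rerun-safety is what "produce the same state" asks for.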