Intelliswift

Data Engineer II

Intelliswift, Seattle, Washington, US 98127


Pay rate range: $65/hr. to $70/hr. on W2. 5 days in the office.

Must Have:
- Cloud technologies (e.g., Microsoft Azure, AWS) - 3 yrs
- Python - 2 yrs
- SQL - 3 yrs

BASIC QUALIFICATIONS

- 5+ years of data engineering experience
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with SQL
- Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS
- Experience mentoring team members on best practices

Preferred:
- Industry experience in the data center field
- Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
- Experience operating large data warehouses

Degree or Certification: Bachelor's degree required (Computer Science preferred, but experience in the data field is acceptable).

Required:
- Develop and maintain automated ETL pipelines (with monitoring) using scripting languages such as Python, Spark, and SQL, together with AWS services such as S3, Glue, Lambda, SNS, SQS, and KMS (a minimal sketch follows this list).
- Implement and support reporting and analytics infrastructure for internal business customers.
- Develop and maintain data security and permissions solutions for enterprise-scale data warehouse and data lake implementations, including data encryption and database user access controls and logging.
- Develop data objects for business analytics using data modeling techniques.
- Develop and optimize data warehouse and data lake tables using best practices for DDL, physical and logical tables, data partitioning, compression, and parallelization.
- Develop and maintain data warehouse and data lake metadata, data catalog, and user documentation for internal business customers.
- Work with internal business customers and software development teams to gather and document requirements for data publishing and data consumption via data warehouse, data lake, and analytics solutions.
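As a loose illustration of the first responsibility above, here is a minimal Python sketch of a monitored pipeline trigger: an S3-driven Lambda function that starts a Glue job and raises an SNS alert on failure. The job name, topic ARN, and environment variable names are hypothetical and are not specified by this posting.

import os
import boto3

glue = boto3.client("glue")
sns = boto3.client("sns")

# Hypothetical names; the posting does not specify actual jobs or topics.
GLUE_JOB_NAME = os.environ.get("GLUE_JOB_NAME", "orders_to_parquet")
ALERT_TOPIC_ARN = os.environ["ALERT_TOPIC_ARN"]


def handler(event, context):
    """Lambda entry point, fired by an S3 object-created event."""
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    try:
        # Kick off the Glue job that transforms the newly landed raw file.
        run = glue.start_job_run(
            JobName=GLUE_JOB_NAME,
            Arguments={"--source_path": f"s3://{bucket}/{key}"},
        )
        return {"job_run_id": run["JobRunId"]}
    except Exception as exc:
        # Basic monitoring hook: alert operators via SNS, then re-raise so
        # the failure also shows up in CloudWatch logs and metrics.
        sns.publish(
            TopicArn=ALERT_TOPIC_ARN,
            Subject="ETL trigger failed",
            Message=f"Could not start {GLUE_JOB_NAME} for s3://{bucket}/{key}: {exc}",
        )
        raise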

Job Description

We're looking for a Senior Data Engineer to help us grow our Data Lake and Data Warehouse systems, which are being built on a serverless architecture with 100% native AWS components, including Redshift Spectrum, Athena, S3, Lambda, Glue, EMR, Kinesis, SNS, CloudWatch, and more! We own a world-class data lake that drives multi-billion-dollar decisions on a regular cadence, and we're looking to fill the lake faster, with as little human intervention as possible, and to democratize the data in it.
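To make the stack above concrete, the following is a minimal sketch of how such a data lake is commonly queried through Athena against a Glue-cataloged database using boto3. The database name, results bucket, and table are hypothetical examples, not details from this posting.

import time
import boto3

athena = boto3.client("athena")

DATABASE = "analytics_lake"                       # hypothetical Glue catalog database
OUTPUT_LOCATION = "s3://example-athena-results/"  # hypothetical results bucket


def run_query(sql):
    """Submit an Athena query, poll until it finishes, and return the rows."""
    execution = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": DATABASE},
        ResultConfiguration={"OutputLocation": OUTPUT_LOCATION},
    )
    query_id = execution["QueryExecutionId"]

    # Poll the query state until it reaches a terminal status.
    while True:
        state = athena.get_query_execution(QueryExecutionId=query_id)[
            "QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)

    if state != "SUCCEEDED":
        raise RuntimeError(f"Athena query ended in state {state}")

    return athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]


if __name__ == "__main__":
    # Example: count rows in one partition of a hypothetical orders table.
    print(run_query("SELECT count(*) FROM orders WHERE dt = '2024-01-01'"))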

Our Data Engineers build the ETL and analytics solutions for our internal customers to answer questions with data and drive critical improvements for the business. Our Data Engineers use best practices in software engineering, data management, data storage, data computing, and distributed systems. We are passionate about solving business problems with data!