iSoftTek Solutions Inc
Position: Data Engineer III
Location: Seattle, WA 98121
Duration: 17 Months
Job Type: Contract
Work Type: Onsite

Job Description:
The Infrastructure Automation team is responsible for delivering the software that powers our infrastructure.

Responsibilities:
- As a Data Engineer, you will work in one of the world's largest and most complex data warehouse environments, developing and supporting the analytic technologies that give our customers timely, flexible, and structured access to their data.
- Design, build, and maintain scalable, reliable, and reusable data pipelines and infrastructure that support analytics, reporting, and strategic decision-making.
- Design and implement a platform using third-party and in-house reporting tools; model metadata; build reports and dashboards.
- Work with business customers to understand their business requirements and implement solutions that support analytical and reporting needs.
- Explore source systems, data flows, and business processes to uncover opportunities, ensure data accuracy and completeness, and drive improvements in data quality and usability.

Required Skills & Experience:
- 7+ years of related experience.
- Experience with data modeling, warehousing, and building ETL pipelines.
- Strong experience with SQL.
- Experience in at least one modern scripting or programming language, such as Python or Java.
- Strong analytical skills, with the ability to translate business requirements into technical data solutions.
- Excellent communication skills, with the ability to collaborate across technical and business teams.
- A strong candidate can partner directly with business owners to understand their requirements and provide data that helps them observe patterns and spot anomalies.
Preferred Qualifications:
- Experience with AWS technologies such as Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions.
- Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases).
- 3+ years of experience preparing data for direct use in visualization tools such as Tableau.

KPIs: Meet requirements; how they action solutions; etc.

Leadership Principles: Deliver Results; Dive Deep

Top 3 must-have hard skills:
1. Strong experience with SQL
2. Experience in at least one modern scripting or programming language, such as Python or Java
3. Experience with data modeling, warehousing, and building ETL pipelines