Amazon
Business Intelligence Engineer, RBS ARTS
Overview
The ideal candidate is a self-starter who is passionate about discovering and solving complicated problems, learning complex systems, working with numbers, and organizing and communicating data and reports. You are detail-oriented and organized, able to handle multiple projects at once and to deal with ambiguity and rapidly changing priorities. You will bring expertise in process optimization and systems thinking, and will engage directly with multiple internal teams to drive business projects and automation for the RBS team. Candidates must succeed both as individual contributors and in a team setting, and must be customer-centric. Our environment is fast-paced and requires someone who is flexible and comfortable with deadline-driven work.
Responsibilities
Design, develop, and operate scalable, performant data warehouse (Redshift) tables, data pipelines, reports, and dashboards.
Develop moderately to highly complex data processing jobs using appropriate technologies (e.g. SQL, Python, Spark, AWS Lambda).
Develop dashboards and reports.
Collaborate with stakeholders to understand business domains, requirements, and expectations, and work with owners of data source systems to understand their capabilities and limitations.
Deliver minimally to moderately complex data analysis, collaborating with Data Science as complexity increases.
Actively manage the timeline and deliverables of projects, anticipate risks and resolve issues.
Adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation.
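As a purely illustrative sketch of the kind of data processing job described above (SQL extraction, aggregation, and validation before loading), here is a minimal example using Python's standard `sqlite3` module as a stand-in for a Redshift connection; the `orders` table and its `marketplace`/`revenue` columns are hypothetical, not part of any actual RBS system:

```python
import sqlite3

# In-memory SQLite stands in for a Redshift connection in this sketch;
# the table and column names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (marketplace TEXT, revenue REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("US", 120.0), ("US", 80.0), ("DE", 50.0)],
)

# Transform: aggregate revenue per marketplace in SQL.
rows = conn.execute(
    "SELECT marketplace, SUM(revenue) AS total "
    "FROM orders GROUP BY marketplace ORDER BY marketplace"
).fetchall()

# Validate before loading downstream: aggregated totals must
# reconcile with the source table (a basic data-integrity check).
source_total = conn.execute("SELECT SUM(revenue) FROM orders").fetchone()[0]
assert abs(sum(total for _, total in rows) - source_total) < 1e-9

print(rows)  # [('DE', 50.0), ('US', 200.0)]
```

In a production pipeline the same extract-transform-validate shape would run against the warehouse itself, with the validation step gating the load of the downstream reporting tables.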
Basic Qualifications
3+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL databases, or similar
Experience with data visualization using Tableau, Quicksight, or similar tools
Experience with data modeling, warehousing and building ETL pipelines
Experience with statistical analysis packages such as R, SAS, and MATLAB
Experience using SQL to pull data from a database or data warehouse and scripting experience (Python) to process data for modeling
5+ years of relevant professional experience in business intelligence, analytics, statistics, data engineering, data science or related field
Experience with data modeling, SQL, ETL, data warehousing, and data lakes
Strong experience with engineering and operations best practices (version control, data quality/testing, monitoring, etc.)
Expert-level SQL
Proficiency with one or more general-purpose programming languages (e.g. Python, Java, Scala)
Knowledge of AWS products such as Redshift, Quicksight, and Lambda
Excellent verbal/written communication and data presentation skills, including the ability to succinctly summarize key findings and to communicate effectively with both business and technical teams
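Several of the qualifications above (test design, data quality, validation) amount to routine data-quality checks on pipeline output. A minimal, purely hypothetical sketch of such checks in Python (the `order_id` and `revenue` fields are invented for illustration):

```python
def check_quality(rows, required_keys):
    """Return a list of data-quality issues found in a batch of records.

    The checks are illustrative: non-empty batch, no missing required
    fields, and no negative revenue values.
    """
    issues = []
    if not rows:
        issues.append("batch is empty")
    for i, row in enumerate(rows):
        missing = [k for k in required_keys if row.get(k) is None]
        if missing:
            issues.append(f"row {i}: missing {missing}")
        if (row.get("revenue") or 0) < 0:
            issues.append(f"row {i}: negative revenue")
    return issues

batch = [
    {"order_id": 1, "revenue": 10.0},
    {"order_id": 2, "revenue": -3.0},
    {"order_id": None, "revenue": 5.0},
]
print(check_quality(batch, ["order_id", "revenue"]))
```

In practice such checks would run as automated tests inside the pipeline, with failures surfaced through monitoring rather than printed to the console.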
Preferred Qualifications
Experience with scripting and automation tools
Familiarity with Infrastructure as Code (IaC) tools such as AWS CDK
Knowledge of AWS services such as SQS, SNS, CloudWatch and DynamoDB
Understanding of DevOps practices, including CI/CD pipelines and monitoring solutions
Understanding of cloud services, serverless architecture, and systems integration
Experience with data-specific programming languages/packages such as R or Python Pandas
Experience with AWS solutions such as EC2, DynamoDB, S3, and EMR
Knowledge of machine learning techniques and concepts
Additional Information
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status.