Brillio
About Brillio: Brillio is one of the fastest-growing digital technology service providers and a partner of choice for many Fortune 1000 companies seeking to turn disruption into a competitive advantage through innovative digital adoption. Brillio, renowned for its world-class professionals, referred to as "Brillians", distinguishes itself through their ability to seamlessly integrate cutting-edge digital and design thinking skills with an unwavering dedication to client satisfaction.
Brillio takes pride in its status as an employer of choice, consistently attracting the most exceptional and talented individuals thanks to its unwavering emphasis on contemporary, groundbreaking technologies and exclusive digital projects. Brillio's relentless commitment to providing an exceptional experience to its Brillians and nurturing their full potential consistently earns it the Great Place to Work® certification year after year.
Senior Data Specialist
Primary Skills
Must have: Python, SQL and PL/SQL, AWS, PostgreSQL, S3, Glue
Good to have: CDK, GitHub
Job Description
We are looking for an experienced AWS Lead Data Engineer to design, build, and manage robust, scalable, and high-performance data pipelines and data platforms on AWS. The ideal candidate will have a strong foundation in ETL fundamentals, data modeling, and modern data architecture, with hands-on expertise across a broad spectrum of AWS services including Athena, Glue, Step Functions, Lambda, S3, and Lake Formation.
Key Responsibilities:
Design and implement scalable ETL/ELT pipelines using AWS Glue, Spark (PySpark), and Step Functions.
Work with structured and semi‑structured data using Athena, S3, and Lake Formation to enable efficient querying and access control.
Develop and deploy serverless data processing solutions using AWS Lambda and integrate them into pipeline orchestration.
Perform advanced SQL and PL/SQL development for data transformation, analysis, and performance tuning.
Build data lakes and data warehouses using S3, Aurora, and Athena.
Implement data governance, security, and access control strategies using AWS tools including Lake Formation, CloudFront, EBS/EFS, and IAM.
Develop and maintain metadata, lineage, and data cataloging capabilities.
Participate in data modeling exercises for both OLTP and OLAP environments.
Work closely with data scientists, analysts, and business stakeholders to understand data requirements and deliver actionable insights.
Monitor, debug, and optimize data pipelines for reliability and performance.
Required Skills & Experience:
Strong experience with AWS data services: Glue, Athena, Step Functions, Lambda, Lake Formation, S3, EC2, Aurora, EBS/EFS, CloudFront.
Proficient in PySpark, Python, SQL (basic and advanced), and PL/SQL.
Solid understanding of ETL/ELT processes and data warehousing concepts.
Familiarity with modern data platform fundamentals and distributed data processing.
Experience in data modeling (conceptual, logical, physical) for analytical and operational use cases.
Experience with orchestration and workflow management tools within AWS.
Strong debugging and performance tuning skills across the data stack.
Know what it’s like to work and grow at Brillio.