Diverse Lynx
AWS Cloud Engineer | Seattle, WA / St. Louis, MO / Plano, TX / Dallas, TX / Houston, TX – Hybrid | Full-time | 5–15 years of experience
Responsibilities
Data Pipeline Development: Design, develop, and optimize ETL/ELT pipelines using AWS & Databricks services such as Unity Catalog, PySpark, AWS Glue, Lambda, Step Functions, and Apache Airflow.
Data Integration: Integrate data from various sources, including relational databases, APIs, and streaming data, ensuring high data quality and consistency.
Cloud Infrastructure Management: Build and manage scalable, secure, and cost-efficient data infrastructure using AWS services like S3, Redshift, Athena, and RDS.
Data Modeling: Create and maintain data models to support analytics and reporting requirements, ensuring efficient querying and storage.
Performance Optimization: Monitor and optimize the performance of data pipelines, databases, and queries to meet SLAs and reduce costs.
Collaboration: Work closely with data scientists, analysts, and software engineers to understand data needs and deliver solutions that enable business insights.
Security and Compliance: Implement best practices for data security, encryption, and compliance with regulations such as GDPR, CCPA, or ITAR.
Automation: Automate repetitive tasks and processes using scripting (Python, Bash) and Infrastructure as Code (e.g., Terraform, AWS CloudFormation).
Agile Development: Build and optimize CI/CD pipelines to enable rapid and reliable software releases using GitLab in an Agile environment.
Monitoring and Troubleshooting: Set up monitoring and alerting for data pipelines and infrastructure, and troubleshoot issues to ensure high availability.
Qualifications
5–15 years of experience in data engineering or cloud engineering roles.
Strong experience with AWS data services (S3, Glue, Redshift, Athena, Lambda, Step Functions, Kinesis).
Experience with Unity Catalog, PySpark, AWS Glue, Lambda, Step Functions, and Apache Airflow.
Programming skills in Python, Scala, or PySpark for data processing and automation.
Expertise in SQL and experience with relational and NoSQL databases (e.g., RDS, DynamoDB).
Data modeling and ETL/ELT pipeline design across AWS and Databricks services.
Automation and Infrastructure as Code (Terraform, AWS CloudFormation).
Experience with CI/CD in an Agile environment (GitLab preferred).
Strong problem-solving, collaboration, and communication skills; ability to work with cross-functional teams.
Diverse Lynx LLC is an Equal Employment Opportunity employer. All qualified applicants will receive due consideration for employment without any discrimination. All applicants will be evaluated solely on the basis of their ability, competence and their proven capability to perform the functions outlined in the corresponding role. We promote and support a diverse workforce across all levels in the company.