eTeam
1. Prior experience working with TMO is highly preferred
2. Do not resubmit profiles that have already been evaluated recently
JD:
- Architect, deploy, and manage scalable AWS infrastructure using best practices (CloudFormation, Terraform preferred).
- Integrate cloud-native services (ECS, Lambda, RDS, S3, IAM, CloudWatch, etc.).
- Hands-on experience with AWS Glue (ETL jobs, crawlers, Glue Studio, Glue APIs).
- Deep understanding of Apache Spark architecture and development (PySpark/Scala preferred).
- Experience working with large-scale data ingestion and processing in distributed environments.
- Experience with AWS services: S3, Redshift, RDS, Athena, Kinesis, EMR.
- Strong scripting skills (Python, Bash, or similar).
- Hands-on experience with Kubernetes to deploy and manage services.
- Knowledge of containerization technologies (Docker, Helm charts, etc.).
- Familiarity with Infrastructure as Code (Ansible, AWS CDK, or CloudFormation).
- Experience with monitoring/logging tools (AppD, Splunk, Grafana, ELK Stack).
- Solid understanding of networking concepts (VPC, Subnets, Security Groups, Load Balancers).
- Design, implement, and maintain CI/CD pipelines to enable rapid and reliable delivery of software to the platform.
- Use software to improve the availability, scalability, latency, and efficiency of the platform.
- Work with developers and other team members to design and implement CI/CD pipelines for efficient and reliable software delivery.
- Automate and maintain services that manage infrastructure, platforms, databases, and application components in all environments.
- Actively identify and reduce manual processes by replacing them with automation for efficiency and accuracy.

Preferred Qualifications
- AWS Certifications (AWS Developer, DevOps Engineer, or similar).
- CKAD or CKA.