Effulgent INC
Salary: $90,000 - $96,000 | Job Type: Full-Time | Location: Texas | Posted: 2025-02-14
The Cloud Data Engineer designs, builds, and maintains cloud-based data infrastructure and pipelines, working with cloud services to ensure efficient data storage, processing, and integration in support of analytics and business intelligence needs.
Key Responsibilities:
- Design and implement scalable data architectures in cloud environments (AWS, Azure, GCP).
- Build and manage cloud-based data warehouses (Snowflake, Redshift, BigQuery).
- Develop and optimize ETL/ELT pipelines for data ingestion, transformation, and processing.
- Automate workflows using tools like Apache Airflow, AWS Glue, or Azure Data Factory.
- Work with big data technologies (Apache Spark, Hadoop, Kafka) for batch and streaming data.
- Implement real-time data processing solutions for analytics and reporting.
- Ensure data security, compliance (GDPR, HIPAA), and best practices in cloud environments.
- Implement role-based access control and data encryption strategies.
- Work closely with data scientists, analysts, and software engineers to optimize data workflows.
- Monitor and optimize cloud data storage and computing costs.

Required Skills & Qualifications:
Education:
- Bachelor’s or Master’s degree in Computer Science, Data Engineering, IT, or a related field.

Technical Skills:
- Proficiency in SQL and NoSQL databases (PostgreSQL, MongoDB, DynamoDB).
- Strong programming skills in Python, Java, or Scala.
- Experience with cloud services:
  - AWS: S3, Redshift, Glue, Lambda, EMR, Kinesis
  - Azure: Data Factory, Synapse, Cosmos DB, Blob Storage
  - GCP: BigQuery, Dataflow, Pub/Sub, Cloud Storage
- Knowledge of Infrastructure as Code (Terraform, CloudFormation).
- Experience with CI/CD pipelines for data deployment.

Experience:
- Minimum of 5 years of relevant experience required.

Preferred Qualifications (Nice-to-Have):
- Certifications (AWS Certified Data Analytics, Google Professional Data Engineer, etc.).
- Experience with Kubernetes and Docker for containerized data applications.
- Familiarity with MLOps and AI/ML model deployment in cloud environments.