ShiftCode Analytics
Big Data Engineer - AI Team: Hearst Corporation
ShiftCode Analytics, Troy, Michigan, United States, 48083
Interview: Video
Visa: USC, GC, GC EAD, H4
This is hybrid from day 1 (local candidates only).
Description:
Please include with your submission: MM/DD of DOB, last 4 of SSN, visa type with expiration date, DL copy, and LinkedIn profile.
NO VOIP OR GOOGLE VOICE NUMBERS - NO LENGTHY RESUMES PLEASE
LOCAL CANDIDATES ONLY
Qualifications & Skills:
- Bachelor's degree or higher in Computer Science, Engineering, Mathematics, or a related field
- At least 5 years of experience in data engineering or a similar role (previous DBA experience is a plus)
- Experience with big data frameworks and tools such as Spark, Hadoop, Kafka, and Hive
- Expert in SQL, including knowledge of efficient query and schema design, DDL, data modeling, and use of stored procedures
- Proficient in at least one programming language, such as Python, Go, or Java
- Experience with CI/CD, containerization (e.g., Docker, K8s), and orchestration (e.g., Airflow)
- Experience building production systems with modern ETL, ELT, and data systems such as AWS Glue, Databricks, Snowflake, Elastic, and Azure Cognitive Search
- Experience deploying data infrastructure on cloud platforms (AWS, Azure, or GCP)
- Strong knowledge of data quality, data governance, and data security principles and practices
- Excellent communication, collaboration, and problem-solving skills