Middle Data Engineer
Vnsilicon - Snowflake, Arizona, United States, 85937
Work at Vnsilicon
Overview
At Vietnam Silicon, we are on a mission to innovate and create world-class technology solutions.

Responsibilities
- Develop and maintain data pipelines for ingesting, transforming, and storing datasets to support AI and analytics workloads, under the guidance of senior engineers.
- Implement Extract, Transform, Load (ETL) processes using Python, SQL, and data orchestration tools.
- Support the development and optimization of cloud-based data infrastructure on platforms such as AWS, Azure, or GCP.
- Assist in integrating datasets with AI/ML models, ensuring data quality and accessibility for embeddings and model training pipelines.
- Contribute to the implementation of data warehousing solutions (e.g., Redshift, Snowflake) to enable analytics and AI applications.
- Help develop monitoring and logging systems to ensure data pipeline reliability and performance.
- Collaborate with cross-functional teams, including data scientists, to translate business requirements into data engineering solutions.
- Support team knowledge-sharing and contribute to Vietnam Silicon's data engineering initiatives in the region.
- Perform other tasks assigned by the Company or Line Manager.

Requirements
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related technical field.
- 3+ years of professional experience in data engineering or related roles.
- Proficiency in Python, SQL, and data pipeline tools (e.g., Apache Airflow, Spark).
- Experience with cloud platforms (e.g., AWS, Azure, GCP) and data storage solutions (e.g., Redshift, Postgres, Snowflake).
- Familiarity with designing and implementing ETL processes for datasets.
- Basic experience integrating data pipelines with AI/ML workloads, such as embeddings or model training.
- Good problem-solving and communication skills.
- Ability to work effectively in cross-functional teams.

Preferred
- Exposure to distributed computing frameworks (e.g., PySpark, Hadoop) for data processing.
- Familiarity with MLOps practices for integrating data pipelines with AI model deployment.
- Knowledge of Southeast Asian data privacy regulations or regional business contexts.
- Contributions to open-source data engineering or AI projects.
- Experience with real-time or streaming data pipelines (e.g., Kafka, Flink).
- Basic proficiency in building data visualization tools or dashboards (e.g., Streamlit, Tableau, Superset).

Recruitment Process
1. Application Review
2. Initial Conversation
3. Technical Interview
4. Final Discussion
5. Offer & Welcome