Solugenix Corporation
Sr. Data Engineers
Scottsdale, AZ
Full-Time
Job ID 25-09748
Job Description: Sr. Data Engineers for Scottsdale, AZ location. Design, build, and maintain robust data pipelines and architectures on the Databricks platform. Core skills encompass building big data pipelines and ETL workflows using Hadoop, Hive, HDFS, and Spark, along with cloud services such as AWS S3, AWS EMR, Lambda, Glue, and Redshift. Leverage advanced AWS platform capabilities and Databricks to process large-scale datasets and to develop and deploy machine learning models that power data-driven business growth. Write Hive queries to surface business insights. Utilize AWS Cloud services, including S3, Athena, EMR, Lambda, Step Functions, and Redshift. Help define data quality rule models and error correction strategies. Determine business requirements for data management. Maintain data warehousing and create documentation detailing best practices. Technical Environment: Databricks, HDFS, MapReduce, Hive, Spark, Python, Hadoop, MySQL, SQL Server, MongoDB, Oracle, AWS S3, EMR, Lambda, AWS Step Functions, AWS Glue.
Education or Experience: Bachelor's degree in Computer Science, Computer Engineering, Electronics Engineering, or a related engineering field, plus 3 years of experience in big data engineering required.
Required Technical Skills: Databricks, HDFS, MapReduce, Hive, Spark, Python, Hadoop, MySQL, SQL Server, MongoDB, Oracle, AWS S3, EMR, Lambda, AWS Step Functions, AWS Glue. Some telecommuting permitted.