Georgia Staffing
Data Engineering Role
Duties:

- Design, develop, and modify end-to-end data engineering solutions, including data transformation, ETL, and business intelligence, on cloud platforms.
- Perform data analysis and profiling on AWS S3 datasets using Python and SQL/PySpark.
- Deliver data engineering projects on AWS, integrating S3, Lambda, Glue, and EMR.
- Build data pipelines using Spark via Databricks to migrate data from AWS S3 buckets to cloud databases such as Snowflake and Redshift.
- Identify internal and external data sources and develop a data management plan aligned with the organization's data strategy.
- Perform end-to-end analysis, design, development, and testing in technologies such as Spark, Amazon Web Services (AWS), Snowflake, EMR, and Tableau to migrate legacy reporting applications to open-source technologies on cloud-based platforms.
- Plan and execute big data solutions using technologies such as Hadoop, including complete life-cycle management of a Hadoop solution.
- Develop sustainable data-driven solutions with cutting-edge technologies to meet Capital One anti-money-laundering compliance requirements.
- Coordinate and collaborate with cross-functional teams, stakeholders, and vendors to keep the enterprise data system running smoothly.
- Create data monitoring models for each product and work with the marketing team to create models ahead of new releases.

Educational qualifications: Bachelor's degree or foreign equivalent degree in Computer Science, Information Systems, Technology, Engineering, or a related field.

Experience: Requires five (5) years of progressive work experience in the job offered or in related IT occupations, such as Data Scientist, Solution Architect, Enterprise Architect, or related.

Offered salary: $135,533