Georgia Staffing
Job Posting
Analyze, design, develop, and implement relational database and data warehousing systems using IBM DataStage (InfoSphere Information Server, WebSphere, Ascential DataStage).
Design parallel jobs using stages such as Join, Merge, Lookup, Remove Duplicates, Filter, Data Set, Lookup File Set, Complex Flat File, Modify, Aggregator, and XML.
Integrate data from various sources (DB2 UDB, SQL Server, PL/SQL, Oracle, Teradata, XML, and MS Access) into the data staging area.
Work with DataStage Manager, Designer, Administrator, and Director, and provide oversight of database and server performance, including leadership on SQL performance tuning.
Write, implement, and test triggers, procedures, and functions in PL/SQL and Oracle.
Perform database programming for data warehouse schemas, applying dimensional modeling (star schema and snowflake schema modeling).
Gather functional and technical requirements and review them with business analysts and architects.
Design and develop data pipelines and BI reports using tools such as Databricks, PySpark, Power BI, SQL Server, Oracle, Hive scripts, Synapse, and Azure Data Factory.
Create high-level design (HLD) and low-level design (LLD) documents for ETL and business intelligence reporting requirements; perform unit testing of developed jobs and data testing to validate data transformation logic.
Analyze the data generated by business processes, defining granularity, mapping data elements from source to target, and creating indexes and aggregate tables for data warehouse design and development.
Perform data analysis using Databricks and PySpark to find the root cause of critical defects, and perform reconciliation analysis to identify data gaps and verify the accuracy of developed reports (see the reconciliation sketch following this posting).
Develop strategies for extraction, transformation, and loading (ETL) mechanisms, and provide analytics solutions, prototypes, and dashboards to help the business make strategic decisions.
Perform quality assurance and data analysis of ETL and BI applications, creating SQL test scripts (see the data-quality check sketch following this posting).
Design, develop, document, and test ETL jobs and mappings as server and parallel jobs using DataStage to populate tables in data warehouses and data marts.
Educational qualifications: Bachelor's degree or foreign equivalent degree in Computer Science, Information Systems, Technology, Engineering, or any related field.
Requires five (5) years of progressive work experience in the job offered or in related I.T. occupations, such as Software Developer, Data Engineer, or a related occupation.
Offered salary: $152,131
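The reconciliation duty above is the kind of check that can be expressed in a few lines of PySpark. The sketch below is illustrative only and is not part of the posting: the SparkSession setup is standard, but the table names (stg_orders, dw_fact_orders) and the key column (order_id) are hypothetical placeholders, and the check assumes the job compares a staging extract against the warehouse fact table it feeds.

```python
# Minimal PySpark reconciliation sketch. Table and column names
# (stg_orders, dw_fact_orders, order_id) are hypothetical placeholders,
# not names taken from the posting.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("reconciliation-check").getOrCreate()

source = spark.table("stg_orders")        # staging-area extract
target = spark.table("dw_fact_orders")    # warehouse fact table

# Row-count comparison: a quick signal of missing or duplicated loads.
src_count = source.count()
tgt_count = target.count()
print(f"source rows={src_count}, target rows={tgt_count}, diff={src_count - tgt_count}")

# Key-level gap analysis: source keys that never reached the target.
missing_in_target = (
    source.select("order_id")
    .subtract(target.select("order_id"))
)
print(f"keys missing in target: {missing_in_target.count()}")
missing_in_target.show(20, truncate=False)
```

Comparing only the key column with subtract keeps the check cheap relative to a full-row diff, which is usually enough to localize a load gap before moving on to column-level comparison.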
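Likewise, the SQL test scripts mentioned for ETL and BI quality assurance could take the form of data-quality checks run through Spark SQL, as sketched below. This is again an assumption-laden illustration rather than part of the posting: the table and column names (dw_dim_customer, dw_fact_orders, customer_key) are hypothetical, and the two checks shown (duplicate surrogate keys in a dimension, NULL foreign keys in a fact) are common examples, not requirements taken from the listing.

```python
# Minimal sketch of SQL-style data-quality checks run from PySpark.
# The table and column names (dw_dim_customer, dw_fact_orders,
# customer_key) are hypothetical; adapt them to the actual schema.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("etl-qa-checks").getOrCreate()

# Duplicate-key check: a star-schema dimension should have exactly
# one row per surrogate key.
duplicates = spark.sql("""
    SELECT customer_key, COUNT(*) AS row_count
    FROM dw_dim_customer
    GROUP BY customer_key
    HAVING COUNT(*) > 1
""")

# Null-key check: fact rows must not lose their dimension reference.
null_keys = spark.sql("""
    SELECT COUNT(*) AS null_key_rows
    FROM dw_fact_orders
    WHERE customer_key IS NULL
""")

assert duplicates.count() == 0, "duplicate surrogate keys found in dw_dim_customer"
assert null_keys.first()["null_key_rows"] == 0, "fact rows with NULL customer_key"
print("QA checks passed")
```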