Compunnel Inc.
Data Integration Developer (Local to Bay Area only)
Compunnel Inc., San Francisco, California, United States, 94199
Overview
Data Integration Developer (Local to Bay Area only). Base pay range may vary; actual pay will be based on skills and experience.

Job details
Location: Pleasanton, CA (local candidates only); no relocation or remote options available.
Employment type: Full-time
Base pay range: $60.00/hr - $70.00/hr

Responsibilities
Build and optimize ETL pipelines using DataStage 11.7.
Manage large-scale data workflows across hybrid cloud environments.
Develop automation scripts using Shell, Perl, and AWK on Unix/Linux.
Integrate data from various platforms, including Hadoop, SAS, and MicroStrategy.
Apply best practices for performance, security, and reliability.
Migrate and manage metadata, data mappings, and scheduling tools.
Contribute to training and documentation for internal teams.

Qualifications
8+ years of hands-on experience with IBM DataStage (v11.3 or higher).
Strong expertise in IBM DataStage, cloud-based ETL tools, and data warehousing platforms such as AWS Redshift.
Strong grounding in ETL architecture, data ingestion, and transformation techniques.
Experience with cloud infrastructure (AWS, GCP, Azure) and data lake architecture.
Experience with scripting and scheduling on Unix/Linux platforms.
Proficiency in advanced SQL and stored procedures across multiple database platforms.
Familiarity with data replication, CDC, and SOA/ESB architectures.
DataStage administration and performance tuning experience is a plus.
Exposure to predictive analytics architecture is a bonus.