Jobs via Dice
Overview
Role: Ab Initio Developer
Location: Charlotte, NC (Hybrid)
Duration: 6+ months

We are seeking a skilled Ab Initio Developer with strong experience in Python and PySpark to join our data engineering team. The ideal candidate will be responsible for designing, developing, and maintaining large-scale data integration and processing solutions, ensuring high performance and data quality.

Responsibilities
- Develop and maintain ETL workflows using Ab Initio.
- Collaborate with data analysts and other stakeholders to gather requirements and translate them into effective data pipelines.
- Implement data processing solutions utilizing Python and PySpark for big data processing tasks.
- Optimize existing data workflows for performance and reliability.
- Perform data validation, cleaning, and transformation activities (see the sketch after this list).
- Troubleshoot and resolve data pipeline issues in production environments.
- Document technical specifications and develop best practices for data processing.
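For context, here is a minimal PySpark sketch of the kind of validation, cleaning, and transformation work described above. The input path, the column names (customer_id, order_total, order_date), and the output path are illustrative assumptions, not details from this posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("order-cleaning-sketch").getOrCreate()

# Hypothetical raw input; path and schema are assumptions for illustration.
orders = spark.read.option("header", True).csv("/data/raw/orders.csv")

cleaned = (
    orders
    # Validation: drop rows missing the primary key.
    .filter(F.col("customer_id").isNotNull())
    # Cleaning: normalize types; failed casts become NULL rather than erroring.
    .withColumn("order_total", F.col("order_total").cast("double"))
    .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
    # Transformation: flag suspect rows instead of silently dropping them.
    .withColumn(
        "is_valid",
        F.col("order_total").isNotNull() & (F.col("order_total") >= 0),
    )
)

# Quick data-quality summary before handing the data downstream.
cleaned.groupBy("is_valid").count().show()

cleaned.write.mode("overwrite").parquet("/data/clean/orders")
```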
Requirements
- Proven experience as an Ab Initio Developer in a data engineering environment.
- Strong proficiency in Python programming.
- Hands-on experience with PySpark (Spark's Python API).
- Good understanding of big data technologies and concepts (Hadoop, Spark, etc.).
- Familiarity with SQL and relational databases.
- Experience with data modeling and schema design.
- Strong problem-solving skills and attention to detail.
- Excellent communication and teamwork skills.
Preferred (not required)
- Experience with cloud platforms such as AWS, Azure, or Google Cloud Platform.
- Knowledge of other big data tools and frameworks (e.g., Kafka, Hive).
- Familiarity with orchestration tools like Airflow or Oozie (see the sketch below).
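As a brief sketch of the orchestration side, this is how a daily ETL step might be scheduled with Airflow (assuming Airflow 2.4 or later). The DAG id, schedule, and the run_etl callable are hypothetical placeholders, not part of this role's actual setup.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def run_etl():
    # Placeholder for kicking off an Ab Initio graph or PySpark job.
    print("ETL step executed")


with DAG(
    dag_id="daily_orders_etl",   # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",           # `schedule` requires Airflow 2.4+
    catchup=False,
) as dag:
    etl_task = PythonOperator(task_id="run_etl", python_callable=run_etl)
```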