Accord Technologies Inc
Sr. Big Data Engineer
Accord Technologies Inc, Charlotte, North Carolina, United States, 28245
We are seeking a Big Data Engineer to join our Enterprise Data & Analytics team.
In this role, you will design and implement data solutions to manage, process, and analyze financial data at scale, supporting functions like risk management, customer analytics, fraud detection, and regulatory compliance.
Responsibilities
Design and build enterprise-scale data pipelines processing billions of transactions daily
Optimize Hadoop/Spark ecosystems for performance and reliability
Develop real-time data streaming solutions using Kafka/Flume
Implement data governance and quality frameworks for financial data
Collaborate with data scientists to productionize ML models
Migrate and modernize legacy data systems to cloud-native architectures (AWS/GCP)
Ensure solutions meet banking security and compliance standards (CCAR, BCBS 239)
Qualifications
9+ years of experience with Big Data technologies
Experience with Hadoop, Spark, Hive, Impala
Proficiency in cloud platforms such as AWS (EMR, S3, Glue) or GCP (Dataproc, BigQuery)
Programming skills in Scala, Python, or Java
Knowledge of data modeling with SQL databases and with NoSQL databases such as HBase and Cassandra
Experience with DevOps and CI/CD tooling such as Git, Jenkins, and Terraform
Banking or financial services experience
Knowledge of data mesh, warehouse, or lakehouse architectures
Certifications such as AWS or GCP Data Engineer, Cloudera, or Databricks are a plus
Additional Information
Seniority level: Mid-Senior level
Employment type: Full-time
Job function: Engineering and IT