Global Alliant Inc
This position requires U.S. Citizenship due to a security clearance requirement.
Responsibilities
- Design, code, and maintain applications and data processing solutions using Java, Kotlin, Scala, and Apache Spark.
- Design and implement data loading and transformation for large datasets.
- Process data in a variety of formats and compression codecs.
- Use Spark SQL, DataFrames, and Datasets for efficient data manipulation and querying within Spark applications (illustrated in the sketch after this list).
- Optimize Spark applications, including tuning configurations, managing memory, and fine-tuning data serialization and task partitioning.
- Collaborate with cross-functional teams to understand data requirements and deliver solutions.
- Write clean, efficient, and well-documented code.
- Follow test-driven development practices; develop and execute unit, integration, and end-to-end tests.
- Identify and resolve issues during development and in production.
- Contribute to overall DevOps work across builds, application deployment stages, and releases.
- Stay current with advancements in Kotlin, Scala, Spark, and related technologies.
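To give a flavor of the day-to-day work described above, here is a minimal Scala sketch of a Spark job that loads a dataset, transforms it with the DataFrame API and Spark SQL, and tunes serialization and partitioning before writing. All specifics (the example-bucket paths and an events schema with userId, eventType, and timestamp columns) are illustrative assumptions, not details from this posting.

```scala
// Illustrative Spark job: paths and column names below are placeholders.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object EventAggregator {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("event-aggregator")
      // Kryo serialization is a common tuning choice for Spark jobs.
      .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .getOrCreate()

    // Load a columnar dataset; Spark infers the schema from Parquet metadata.
    val events = spark.read.parquet("s3://example-bucket/events.parquet")

    // DataFrame API: filter, group, and aggregate.
    val dailyCounts = events
      .filter(col("eventType") === "click")
      .groupBy(col("userId"), to_date(col("timestamp")).as("day"))
      .agg(count("*").as("clicks"))

    // Spark SQL over the same data via a temporary view.
    events.createOrReplaceTempView("events")
    val topUsers = spark.sql(
      "SELECT userId, COUNT(*) AS total FROM events GROUP BY userId ORDER BY total DESC LIMIT 10")

    // Repartition before writing to control task parallelism and output file count.
    dailyCounts.repartition(200, col("day"))
      .write.mode("overwrite").partitionBy("day")
      .parquet("s3://example-bucket/daily-click-counts")

    topUsers.show()
    spark.stop()
  }
}
```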
Required Experience and Skills
- Minimum 5 years of experience in software development with a focus on Java, Kotlin, Scala, and Apache Spark.
- Strong proficiency in Java and Kotlin, with experience in functional programming concepts.
- Deep understanding of Scala and its ecosystem, including frameworks and libraries.
- Extensive experience with Apache Spark and its core concepts, including RDDs, DataFrames, and Spark SQL.
- Experience with big data technologies and distributed computing concepts.
- Proficiency in SQL and experience with relational databases.
- Experience with AWS cloud-based data architecture and services.
- Experience with version control systems such as Git.
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration skills.