R2 Technologies
R2 Technologies Corporation (R2) is a technology services provider headquartered in Alpharetta, GA, with expertise in a range of cutting-edge technologies. R2 specializes in Java, .NET, Big Data, cloud computing, artificial intelligence (AI), machine learning (ML), software development, project management, SAP, and enterprise resource planning (ERP) systems. Additionally, R2 offers highly skilled resources and productivity platforms that enable clients to rapidly deliver business value to their stakeholders.
R2's strength lies in providing platform-based solutions and in architecting and designing enterprise solutions, leveraging cloud technologies such as Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure to deliver scalable and cost-effective solutions.
R2's expertise in AI and ML enables clients to leverage the power of data to make data-driven decisions and improve their overall performance. R2 also provides solutions for internet of things (IoT) and blockchain technologies, which can help clients improve their supply chain management and streamline their operations.
Since its inception, R2 has rapidly grown to become one of the most respected and trusted technology companies in the United States, providing product development and staffing services to a diverse range of clients, including small and midsize businesses, as well as Fortune 1000 companies.
Job Title: Big Data Engineer
Location: Alpharetta, GA.
Type: Full-time / Contract
Overview:
We are seeking a skilled and experienced Big Data Engineer to join our team. The ideal candidate will have strong programming expertise and experience building scalable and reliable applications.
Responsibilities:
• Understand the enterprise architecture within the context of existing platforms, services, and strategic direction.
• Take a broad, horizontal enterprise view across all technical disciplines to evaluate interoperability and incorporate it into the solution architecture.
• Design end-to-end solutions with sound technical architecture within a Big Data analytics framework, along with customized solutions that are scalable, with a primary focus on performance, quality, maintainability, cost, and testability.
• Deliver innovative solutions within the platform to establish common components while allowing customization of solutions for different products.
• Demonstrate knowledge of software development technologies, principles, methods, tools, and practices; industry standards and trends; and current web and database technologies.
Required Skills:
• 5+ years of experience with Big Data pipelines, including Spark and Scala (an illustrative PySpark sketch follows this list)
• 5+ years of experience working with internal stakeholders, including Networking, API Marketplace, and InfoSec, as well as external third parties, to build new services
• 3+ years of AWS experience building enterprise-scale applications and services
• 3+ years of experience with AWS services, including ECS Fargate, EKS, Docker containers, Lambda, EMR, EC2, SNS/SQS, MSK, S3, and RDS
• 3+ years of experience building scaled data platforms and enterprise products on the AWS cloud
• 3+ years of experience building enterprise-level AWS infrastructure using Terraform or CloudFormation templates
• 2+ years of experience working with orchestration services, with hands-on experience with Airflow
• 2+ years of scripting experience with Shell or Python
• Experience using DevOps and CI/CD concepts to create deployment pipelines
• Programming skills for data processing, availability, scalability, clustering, microservices, multi-threaded development, and performance patterns
• 3+ years of experience in production support and data reprocessing at any stage of the data pipeline
• Experience with Python or Java, and Spark
• Experience with Apache Kafka
• Experience in data cleaning, visualization, and reporting using Redshift
• Experience with AWS EMR or Hadoop, and MapReduce
• Experience in data mining
• Experience processing large amounts of structured and unstructured data, including integrating data from multiple sources
• Experience with AWS EC2, AWS RDS, AWS EBS, AWS IAM, and AWS S3
• Experience using SQL with MySQL or Oracle databases
• 2+ years of experience developing distributed big data processing applications using NoSQL databases such as MongoDB, or Spark, and building applications with immutable infrastructure in the AWS (Amazon Web Services) cloud using automation technologies such as Terraform, Ansible, or CloudFormation
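To make the pipeline expectations above concrete, here is a minimal, illustrative PySpark sketch (not R2's codebase): it reads raw JSON events from a hypothetical S3 bucket, applies a simple cleaning transformation, and writes the result back to S3 as partitioned Parquet. The bucket paths and column names are assumptions for illustration only.

# Minimal illustrative sketch only; bucket names and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-event-ingest").getOrCreate()

# Read raw JSON events from a hypothetical S3 location.
raw = spark.read.json("s3a://example-raw-bucket/events/")

# Basic cleaning: drop rows missing an event id, parse the timestamp,
# and derive a date column for partitioning.
cleaned = (
    raw.dropna(subset=["event_id"])
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("event_date", F.to_date("event_ts"))
)

# Write the cleaned data back to S3 as Parquet, partitioned by date.
cleaned.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3a://example-curated-bucket/events/"
)

A job like this would typically be packaged and run on EMR or ECS and scheduled through an orchestrator such as Airflow, in line with the requirements listed above.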
Optional Skills:
• Experience designing and implementing highly performant data ingestion pipelines from multiple sources using Apache Spark and/or Azure Databricks
• Efficiency in handling data: tracking data lineage, ensuring data quality, and improving data discoverability
• Experience integrating end-to-end data pipelines that move data from source systems to target data repositories, ensuring data quality and consistency are always maintained
• Knowledge of engineering and operational excellence using standard methodologies
• Comfortable using PySpark APIs to perform advanced data transformations
• Familiarity with implementing classes in Python (see the illustrative sketch following this list)
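As a small illustration of the last two points, the sketch below wraps a PySpark aggregation in a Python class. The class name, columns, and aggregation are hypothetical and meant only to show the style of work, not a prescribed design.

# Illustrative sketch only; names and logic are hypothetical.
from pyspark.sql import DataFrame
from pyspark.sql import functions as F

class DailyEventAggregator:
    """Counts events per user per day from a cleaned events DataFrame."""

    def __init__(self, user_col: str = "user_id", date_col: str = "event_date"):
        self.user_col = user_col
        self.date_col = date_col

    def transform(self, events: DataFrame) -> DataFrame:
        # Group by user and day, counting events for each combination.
        return (
            events.groupBy(self.user_col, self.date_col)
                  .agg(F.count("*").alias("event_count"))
        )

In practice, a class like this would be unit-tested in isolation and invoked from a pipeline job or an Airflow task.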
Qualifications:
• Bachelor's degree in Computer Science, Engineering, or a related field.
• Relevant certification.
Attributes:
We are seeking a candidate who is passionate, intelligent, and a critical thinker. The ideal candidate should be a proactive communicator, documenting their work clearly and succinctly. They should be detail-oriented, thoughtful, and respectful, with a focus on teamwork. The candidate should possess strong problem-solving skills and be able to work both independently and as part of a team. They should be able to adapt to changing requirements and maintain a positive attitude in a fast-paced environment.
What's In It for You?
We offer competitive benefits, pay, and bonus potential, including group health insurance, vision and dental insurance, and paid vacation.
Skills:
Big Data, Spark, Scala, PySpark