CereCore
Position Summary
We are seeking a Principal Big Data Engineer to join our team in Nashville, TN. The Principal Data Engineer serves as a primary development resource, designing, writing, testing, implementing, documenting, and maintaining NextGen solutions for GCP Cloud enterprise data initiatives. The role requires working closely with data teams, frequently in a matrixed environment as part of a broader project team. Because GCP/Hadoop technology and practice are emerging and fast-evolving, the position requires staying well-informed of technological advancements and putting new innovations into effective practice. In addition, this position requires a candidate who can analyze business requirements and design, build, test, and implement solutions with minimal supervision. This candidate will have a track record of participation in successful projects in a fast-paced, mixed-team environment. The role sits on the DT&I team, working on the Patient Insyste POD project.
Responsibilities
Communication and interpersonal skills
Problem-solving and critical thinking skills
Understanding of strategic imperatives
Technology and business knowledge
Provide application development for specific business environments.
Build and support a GCP-based ecosystem designed for enterprise-wide analysis of structured, semi-structured, and unstructured data.
Bring new data sources into GCP, transform and load them to databases, and support regular requests to move data from one cluster to another.
Develop a strong understanding of the relevant product area, codebase, and/or systems.
Demonstrate proficiency in data analysis, programming, and software engineering.
Work closely with the Lead Architect and Product Owner to define, design, and build new features and improve existing products.
Produce high-quality code with good test coverage, using modern abstractions and frameworks.
Work independently and complete tasks on schedule by exercising strong judgment and problem-solving skills.
Collaborate closely with team members to execute development initiatives using Agile practices and principles.
Participate in the deployment, change, configuration, management, administration, and maintenance of deployment processes and systems.
Effectively prioritize workload to meet deadlines and work objectives.
Work in an environment with rapidly changing business requirements and priorities.
Work collaboratively with Data Scientists and business and IT leaders throughout the company to understand their needs and use cases.
Work closely with management, architects, and other teams to develop and implement projects.
Actively participate in technical group discussions and adopt new technologies to improve development and operations.
Requirements
Strong understanding of best practices and standards for GCP data process design and implementation. Two-plus years of hands-on experience with the GCP platform, including experience with many of the following components:
Cloud Run, GKE, Cloud Functions
Spark Streaming, Kafka, Pub/Sub
Bigtable, Firestore, Cloud SQL, Cloud Spanner
HL7, FHIR, JSON, Avro, Parquet
Python, Java, Terraform
BigQuery, Dataflow, Data Fusion
Cloud Composer, Dataproc, CI/CD, Cloud Logging
Vertex AI, NLP, GitHub
Ability to multitask and balance competing priorities.
Ability to define and utilize best-practice techniques and to impose order in a fast-changing environment.
Strong problem-solving skills.
Strong verbal, written, and interpersonal skills, including a desire to work within a highly matrixed, team-oriented environment.
Experience in the healthcare domain.
Experience with patient data.
Databases:
RDBMS: MS SQL Server, Teradata, Oracle
NoSQL: HBase, Cassandra, MongoDB
In-memory, columnar, and other emerging technologies
Build Systems: TFS, Maven, Ant
Source Control Systems: Git, Mercurial
Continuous Integration Systems: Jenkins or Bamboo
Certifications (a plus, but not required): GCP Professional Data Engineer