CatchProbe Intelligence Technologies

Senior Database Engineer (Elastic/Mongo/Hadoop)

CatchProbe Intelligence Technologies, San Francisco, California, United States, 94199

Workplace Type: Remote
Region: San Francisco, CA

Job Description

Must have experience with MongoDB installations, upgrades, and ongoing support

Responsible for administration, maintenance, performance analysis, and capacity planning for MongoDB/Elastic/Hadoop clusters.

Coordinate and plan with application teams on MongoDB capacity planning for new applications.

Should have working knowledge of the MongoDB command-line tools – mongodump, mongoexport, mongorestore, mongoimport, mongostat, mongotop

Must be well-versed in JSON and in writing MongoDB queries, both in shell scripts and interactively in the mongo shell
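
As a rough illustration of this kind of JSON query scripting (the collection, fields, and values below are hypothetical, not part of the role), a filter can be built as a plain Python dict and serialized for use in a shell script or driver call:

```python
import json

# Hypothetical filter: in-flight orders over $100, expressed as a
# plain dict mirroring MongoDB's query document syntax.
order_filter = {
    "status": {"$in": ["pending", "processing"]},
    "total": {"$gt": 100},
}

# Serialize so the filter can be embedded in a shell script or
# passed to a mongo shell / driver invocation as JSON.
filter_json = json.dumps(order_filter, sort_keys=True)
print(filter_json)
```

Because query documents are ordinary JSON, building them programmatically like this keeps shell scripts free of hand-escaped query strings.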

Should be able to support sharded clusters, including performing upgrades and other configuration maintenance on them

Must be able to monitor and manage capacity requirements across all aspects – CPU, memory, and storage
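
A minimal sketch of such a capacity check, using only the Python standard library (the path and the 80% threshold are illustrative; a real deployment would also sample memory and database-level metrics, e.g. via mongostat or a monitoring agent):

```python
import os
import shutil

def capacity_report(path="/", disk_warn_pct=80.0):
    """Report CPU count and disk usage for `path`, flagging high usage.

    Illustrative only: threshold and path are assumptions, and memory
    sampling is deliberately omitted (it is platform-specific).
    """
    usage = shutil.disk_usage(path)
    used_pct = 100.0 * usage.used / usage.total
    return {
        "cpus": os.cpu_count(),
        "disk_used_pct": round(used_pct, 1),
        "disk_alert": used_pct >= disk_warn_pct,
    }

report = capacity_report()
print(report)
```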

Must be able to assist application teams with assessment and/or resolution of performance bottlenecks observed in the MongoDB Database tier of their stack

Must be aware of the different authentication/authorization methods used in MongoDB – SCRAM-SHA-1, x.509, LDAP – including reconfiguring instances with TLS/SSL and/or LDAP
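
For orientation, a mongod configuration fragment of the kind this reconfiguration work involves might look like the following (certificate paths and the mechanism list are examples, not a drop-in config):

```yaml
# Illustrative mongod.conf fragment: enable authorization and require TLS.
security:
  authorization: enabled
setParameter:
  authenticationMechanisms: SCRAM-SHA-1,SCRAM-SHA-256
net:
  tls:
    mode: requireTLS
    certificateKeyFile: /etc/ssl/mongodb.pem
    CAFile: /etc/ssl/ca.pem
```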

Candidate must also be able to develop automated solutions for ad-hoc script execution requests, ad-hoc report generation, upgrades, and installations
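
One small sketch of what such automation can look like – assembling a mongodump backup invocation programmatically rather than hand-typing it (the URI, output path, and database name are placeholders; a real job would also handle credentials, retention, and error reporting):

```python
import shlex

def build_mongodump_cmd(uri, out_dir, db=None, gzip=True):
    """Assemble a mongodump command line for an automated backup job."""
    cmd = ["mongodump", f"--uri={uri}", f"--out={out_dir}"]
    if db:
        cmd.append(f"--db={db}")
    if gzip:
        cmd.append("--gzip")
    return cmd

cmd = build_mongodump_cmd("mongodb://localhost:27017", "/backups/nightly", db="app")
print(shlex.join(cmd))
```

The returned list can be passed directly to `subprocess.run`, which avoids shell-quoting bugs in scheduled jobs.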

Experienced in NoSQL DB technologies

Must know how to use MongoDB Cloud Manager and share relevant metrics for a given deployment when an issue arises

Must have experience with Docker – deploying MongoDB containers and supporting all aspects of MongoDB administration within a container
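
As a point of reference, a containerized MongoDB deployment of this kind is often described with a Compose file along these lines (image tag, ports, volume, and replica-set name are examples only):

```yaml
# Illustrative docker-compose fragment for a single MongoDB container.
services:
  mongo:
    image: mongo:6.0
    command: ["mongod", "--replSet", "rs0", "--bind_ip_all"]
    ports:
      - "27017:27017"
    volumes:
      - mongo-data:/data/db
volumes:
  mongo-data:
```

The named volume keeps `/data/db` (MongoDB's data directory) outside the container's writable layer, so data survives container replacement during upgrades.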

Knowledge of administration and support of Hadoop systems will be an added advantage

Deploy Hadoop (big data) clusters; commission and decommission nodes; track jobs; monitor services such as ZooKeeper, HBase, and Solr indexing; configure NameNode HA; and schedule and configure backups and restores

Develop scripts to review logs and alert on long-running queries
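
A minimal sketch of such a log-review script, assuming MongoDB's structured JSON log format (4.4+); the sample lines and the 1000 ms threshold are illustrative, and a real script would tail the live log file and send alerts rather than print:

```python
import json

# Sample lines in MongoDB's structured (JSON) log format; in practice
# these would be read from mongod.log.
SAMPLE_LOG = [
    '{"t":{"$date":"2024-05-01T10:00:00Z"},"c":"COMMAND","msg":"Slow query",'
    '"attr":{"ns":"app.orders","durationMillis":2300}}',
    '{"t":{"$date":"2024-05-01T10:00:01Z"},"c":"NETWORK",'
    '"msg":"Connection accepted","attr":{}}',
]

def long_running(lines, threshold_ms=1000):
    """Yield (namespace, durationMillis) for entries over the threshold."""
    for line in lines:
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip any non-JSON lines
        dur = entry.get("attr", {}).get("durationMillis")
        if dur is not None and dur >= threshold_ms:
            yield entry["attr"].get("ns", "?"), dur

alerts = list(long_running(SAMPLE_LOG))
print(alerts)  # [('app.orders', 2300)]
```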

Demonstrable expertise in deployment and use of Postgres/MySQL, Kafka/Kinesis, etc.

Strong scripting experience with Python (preferred), and Shell (secondary)

Individually build services, and expose internal APIs for these services that allow other teams and workflows to use data infrastructure automation components.

Required Skills

Strong understanding of various relational and non-relational database technologies – their benefits, downsides, and best use cases – and the ability to help application teams choose the correct database technology for their specific business use case

5+ years installing, automating, scaling, and supporting NoSQL databases such as MongoDB, Elasticsearch, and Hadoop, among other emerging technologies

1–2 years of experience working with databases in public clouds such as Hetzner, AWS, Azure, and GCP

Proficiency in automation

Knowledge of Ansible, Python, Terraform

Willingness and commitment to learn other database, automation, and cloud technologies

Great communication and collaboration skills

Software development experience and knowledge of modern software development processes

Knowledge/experience in programming languages and web technologies such as Java, Go, Node.js, HTML, CSS, and Bootstrap is a plus

Knowledge/experience on AI/ML is a big plus

Ability to multi-task and prioritize with little to no supervision, while providing team leadership

Ability to work well under pressure

Consistent exercise of independent judgment and discretion in matters of significance

Excellent communication skills

Highly driven, highly involved, highly proactive

Datalake cluster ownership and technical point of contact for all applications on Hadoop cluster

Responsible for new application onboarding in the Datalake by reviewing requirements and designs

Assist existing and new applications in arriving at the most optimized and suitable solutions for their requirements

L3 point of contact for issues related to Hadoop platform

Core Responsibilities

Develop solutions for very complex and wide-reaching systems engineering problems, set new policies and procedures, create systems engineering and architectural documentation

Operating systems & disk management: provide in-depth knowledge, mentor junior team members, create basic task automation scripts

Database platform management: master understanding of database concepts, availability, performance, usage and configuration, set up, fix and tune complex replication

Storage and backup: set up, solve problems and tune complex SAN software issues, maintain policies and documentation

Scripting and development: develop software in several modern languages, design horizontally-scalable solutions and apply professional standards

Networking: recommend or help architect an entire system, perform network sniffing, understand protocols

Application technologies: provide recommendations and advice regarding web services, OS and storage, liaise with development, QA and business teams

Analyze systems, make recommendations to prevent problems, lead issue resolution activities

Lead end-to-end audit of monitors and alarms, define requirements for new tools

Apply time and project management skills to lead resolution of issues, communicate necessary information, consult with clients or third-party vendors

Regular, consistent and punctual attendance

Seniority level:

Mid-Senior level

Employment type:

Full-time

Job function:

Information Technology

Industries:

Software Development
