Oracle
Senior Principal Member of Technical Staff (OCI-Data)
At Oracle Cloud Infrastructure (OCI), we are shaping the future of enterprise cloud with a diverse team of innovators committed to excellence. Blending the agility of a start-up with the global reach of the world's leading enterprise software company, we empower our teams to create breakthrough solutions and deliver value to our customers. Our org (OCI Horizon Data Platform & Analytics) manages the fully self-service data platform for 450 partner teams in OCI. We enable and build advanced analytics, machine learning, and generative AI-powered solutions and applications that provide actionable insights and drive impactful decision-making across OCI, including finance, products, operations, support, and more.
Responsibilities
Design and build a scalable data platform and data lake engineered for performance, scale, availability, and data quality.
Lead the design, implementation, and optimization of scalable, high-performing, and reliable data pipelines over large-volume data sets, enabling actionable insights and ML workloads for our partner teams in OCI.
Build and enable data and operational governance for the full lifecycle of the data journey from ingestion to consumption.
Build interfaces on top of curated data sets that enable customers to extract insights from diverse sources. Deliver the backend and related integration work for NL-to-SQL GenAI assistants and agents.
Architect, design, and implement secure, scalable, and highly available infrastructure for cutting‑edge AI‑driven analytics solutions on Oracle Cloud Infrastructure.
Drive proactive query optimization efforts.
Lead and directly contribute to the end‑to‑end implementation and deployment of complex analytics and ML pipelines, ensuring robust performance, high reliability, and minimal downtime across multi‑region environments.
Influence deliverables of multiple engineering teams by leading design reviews and contributing to development and design standards.
Champion and execute automation initiatives, building advanced CI/CD pipelines and comprehensive lifecycle policy enforcement to streamline development and operations.
Set, enact, and maintain high standards for infrastructure security, compliance, and operational excellence at scale.
Collaborate closely with engineering, data science, and business teams—translating strategic goals into innovative, production‑grade solutions and actively addressing operational issues.
Mentor and guide engineers while remaining actively engaged in coding, architecture reviews, and technical problem‑solving.
Basic Qualifications
Bachelor’s degree in Computer Science, Engineering, or a related quantitative field.
10+ years of hands-on software or data engineering experience, including direct responsibility for designing, building, and operating well-governed, large-scale data infrastructure, data platforms, and data lakes.
Extensive programming and coding expertise in Java/Python/Scala plus SQL, with a proven track record of delivering production‑quality data solutions.
Excellent knowledge of and experience with RDBMS and non-relational data stores, with demonstrated SQL expertise. Deep query performance tuning experience is highly desirable but not mandatory.
Solid experience in building and enabling data and operational governance for the full lifecycle of the data journey from ingestion to consumption.
Hands‑on experience in extensible data pipeline development and orchestration with tools like Apache Airflow, Step Functions, etc.
Experience architecting, implementing, and operating secure, scalable, and highly available cloud‑based data infrastructure.
Solid experience in implementing software engineering security best practices and contributing to design/engineering for SecOps.
Practical experience deploying and optimizing CI/CD pipelines, automation tools, and infrastructure as code (e.g., Terraform, Ansible).
Proven experience influencing and partnering with product management and TPMs to translate business requirements into data and engineering specifications.
Solid understanding of DevOps/MLOps practices. Hands-on experience building, deploying, and managing analytics/ML models in production is desirable.
Proven ability to design, evolve, and maintain RESTful APIs (OpenAPI/Swagger).
Proven ability to drill into code, resolve complex technical challenges, and actively contribute to problem resolution in high‑impact projects.
Effective communication and collaboration skills, with the ability to mentor others while remaining involved in implementation deliverables.
Experience and knowledge in containerization technologies.
Preferred Qualifications
Master’s or higher degree in Computer Science or a related field.
12+ years of professional experience in data infrastructure, cloud infrastructure, backend engineering, DevOps, or MLOps roles.
Recognized certifications in data platforms, cloud computing, security, or DevOps methodologies.
Demonstrated experience operating and optimizing large‑scale analytics workloads (Spark, Autonomous Data Warehouse, Object Storage).
Experience building production-grade MLOps workflows and deploying ML/AI models at scale is highly desirable.
Experience designing secure, automated, and compliant infrastructure in regulated environments.
Experience building or enabling NL-to-SQL GenAI assistants and agents.
Advanced coding skills in Python and Java, plus experience with distributed computing frameworks (Spark, Flink) and building lakes on non‑relational stores.
Prior roles supporting major production systems, operations, cloud support, or similar high‑availability domains.
Strong knowledge of cloud platform (AWS, OCI, Azure) architectures and operational best practices.
Deep understanding of data structures, algorithms, and software engineering best practices.
Experience in progressive rollout strategies and driving the technical vision for AI‑powered analytics solutions.
Benefits
Medical, dental, and vision insurance, including expert medical opinion
Short-term and long-term disability
Life insurance and AD&D
Supplemental life insurance (Employee/Spouse/Child)
Health care and dependent care Flexible Spending Accounts
Pre‑tax commuter and parking benefits
401(k) Savings and Investment Plan with company match
Paid time off: Flexible Vacation is provided to all eligible employees assigned to a salaried (non‑overtime eligible) position. Accrued Vacation is provided to all other employees eligible for vacation benefits. For employees working at least 35 hours per week, the vacation accrual rate is 13 days annually for the first three years of employment and 18 days annually for subsequent years of employment. Vacation accrual is prorated for employees working between 20 and 34 hours per week. Employees working fewer than 20 hours per week are not eligible for vacation.
11 paid holidays
Paid sick leave: 72 hours of paid sick leave upon date of hire. Refreshes each calendar year. Unused balance will carry over each year up to a maximum cap of 112 hours.
Paid parental leave
Adoption assistance
Employee Stock Purchase Plan
Financial planning and group legal
Voluntary benefits including auto, homeowner and pet insurance
Compensation US: Hiring range in USD: $96,800 to $251,600 per year. May be eligible for bonus, equity, and compensation deferral. Oracle maintains broad salary ranges for its roles to account for variations in knowledge, skills, experience, market conditions, and locations, as well as to reflect Oracle’s differing products, industries, and lines of business. Candidates are typically placed into the range based on the preceding factors as well as internal peer equity.
EEO Statement Certain US customer or client‑facing roles may be required to comply with applicable requirements, such as immunization and occupational health mandates. Range and benefit information provided in this posting are specific to the stated locations only. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.