Onyx Point, Inc.
TO BE CONSIDERED FOR THIS POSITION YOU MUST CURRENTLY HAVE AN ACTIVE TS/SCI WITH POLYGRAPH SECURITY CLEARANCE WITH THE FEDERAL GOVERNMENT. (U.S. CITIZENSHIP REQUIRED.)
What you’ll do:
Work with team members to operationalize data pipelines and supporting cloud infrastructure
Collaborate with external data producers and consumers to obtain and provide data through interfaces such as REST APIs and S3
Provide day‑to‑day support for deploying Python‑native data pipelines and performing data engineering tasks to enable data brokering and exchange capabilities
Provide Tier 2/3 troubleshooting and incident resolution support for data pipelines in Production
What you’ll need to succeed:
Active TS/SCI with the ability to obtain and maintain a CI polygraph required
4+ years of proven experience in data engineering, with expertise in designing, developing, and maintaining data ingestion, transformation, and loading pipelines and components
Demonstrated experience in designing and deploying data pipelines leveraging AWS cloud infrastructure across multiple classification domains (e.g., IL5 to IL6+)
Experience with Infrastructure‑as‑Code (IaC) tools, including Terraform, CloudFormation, or Ansible, to automate deployment of data pipeline cloud infrastructure
Understanding of RMF security principles and hands‑on experience implementing security controls for data pipelines in cloud environments
Strong scripting and programming skills in languages such as Go, Python, and Bash
Experience with data pipeline tools and technologies such as Nifi, Hadoop, HDFS, and Kafka. Experience implementing data pipelines in the Cloudera Data Platform environment is highly preferred.
Strong communication skills, with the ability to clearly convey complex technical concepts
Compensation: We are committed to providing fair and competitive compensation. The salary range for this position is $78,000 to $250,000 per year. This range reflects the compensation offered across the locations where we hire. The exact salary will be determined based on the candidate's work location, specific role, skill set, and level of expertise.
Benefits:
Health Coverage: Medical, dental, and vision insurance
Additional Insurance: Basic Life/AD&D, Voluntary Life/AD&D, Short and Long‑Term Disability, Accident, Critical Illness, Hospitalization Indemnity, and Pet Insurance
Retirement Plan: 401(k) plan with company match
Paid Time Off: Generous PTO, paid holidays, parental leave, and more
Wellness: Access to wellness programs and mental health support
Professional Development: Opportunities for growth, including tuition reimbursement
Additional Perks:
Flexible work arrangements, including remote work options
Flexible Spending Accounts (FSAs)
Employee referral programs
Bonus opportunities
Technology allowance
A diverse, inclusive, and supportive workplace culture