Red Hat
Senior Data Architect - Product Security
Red Hat, Raleigh, North Carolina, United States, 27601
We are seeking a Senior Data Engineer to close a critical gap in our Product Security capabilities. Today, our security data is fragmented and inconsistent, limiting visibility, timely insights, and effective risk management. Previous efforts with Data Analysts and Data Scientists were insufficient: while those roles excel at interpreting or predicting from existing data, they do not provide the foundational engineering expertise required to consolidate, structure, and scale our data ecosystem.
This role is not an add‑on to existing engineering responsibilities. It requires a dedicated expert to design, build, and maintain the data infrastructure that underpins security intelligence. By investing in a Data Engineer, we can transform scattered information into a unified platform that enables proactive, data‑driven risk mitigation.
What You Will Do
Define and implement the strategic vision for a secure, scalable, and reliable data architecture.
Build a centralized data ontology, creating consistency and a shared language across all tools, systems, and teams.
Design, optimize, and maintain complex ETL/ELT pipelines (Python, SQL, etc.) to consolidate data from diverse sources into warehouses, lakes, or analytics platforms.
Automate data flows end‑to‑end, incorporating intelligent automation and AI‑driven solutions to improve efficiency and security operations.
Establish and enforce data governance practices, ensuring data quality, accuracy, compliance, and cost‑efficient performance at scale.
Enable teams with BI tools (e.g., Tableau) to create dashboards, reports, and analytics that surface actionable insights and trends.
Act as the bridge between Product Security teams, translating requirements into scalable solutions and ensuring alignment across stakeholders.
Assess and recommend emerging data and AI/ML technologies that can strengthen security data capabilities and enhance automation.
What You Will Bring
Bachelor’s or Master’s degree in Computer Science, IT, Business Administration, or a related field.
5+ years of experience as a Data Engineer or in a related role, with proven expertise in data pipeline development and architecture.
Strong programming skills (Python, Java, Scala) and deep proficiency in SQL.
Hands‑on experience with relational and NoSQL databases (PostgreSQL, MySQL, MongoDB, Elasticsearch).
Expertise in building and optimizing ETL/ELT processes and data warehouses.
Experience with big data technologies (Hadoop, Spark, Kafka) and cloud platforms (AWS, Azure, GCP) is highly desirable.
Solid understanding of data modeling and schema design principles.
Experience with Tableau or CRM Analytics (CRMA) is preferred.
Familiarity with version control systems (Git).
Strong problem‑solving, analytical, and communication skills.
Comfortable working in a fast‑paced, globally distributed environment with minimal supervision.
Proactive mindset, leveraging AI‑assisted tools to accelerate development and enhance automation throughout the lifecycle.
Benefits
Comprehensive medical, dental, and vision coverage
Flexible Spending Account for healthcare and dependent care
Health Savings Account (available with the high-deductible medical plan)
Retirement 401(k) with employer match
Paid time off and holidays
Paid parental leave plans for all new parents
Leave benefits including disability, paid family medical leave, and paid military leave
Employee stock purchase plan, family planning reimbursement, tuition reimbursement, transportation expense account, employee assistance program, and more!
Equal Opportunity Policy (EEO)
Red Hat is proud to be an equal opportunity workplace and an affirmative action employer. We review applications for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, citizenship, age, veteran status, genetic information, physical or mental disability, medical condition, marital status, or any other basis prohibited by law.
Red Hat supports individuals with disabilities and provides reasonable accommodations to job applicants. If you need assistance completing our online job application, email application‑assistance@redhat.com.