BigBear.ai
Overview
BigBear.ai is seeking a highly skilled and motivated Data Engineer to join our Data Architecture & Engineering team. In this role, you will design and build scalable, secure, and efficient data pipelines that transform raw data into actionable insights. You’ll work across cloud environments, databases, and development teams to support mission-critical applications, analytics, and reporting. This position requires expertise in ETL development, AWS cloud services, and database management, along with a strong foundation in Agile software development practices.
What you will do
- Design and develop end-to-end data pipelines that support ingestion, transformation, and delivery of structured and unstructured data, using tools and frameworks aligned with ETL architecture best practices.
- Build and maintain ETL processes that ensure data quality, consistency, and performance across multiple environments.
- Integrate data pipelines with Amazon Web Services (AWS), leveraging services such as S3, Lambda, Glue, and Redshift to enable scalable and secure data processing.
- Develop and maintain schemas, data dictionaries, and transformation logic to support robust data architecture and documentation.
- Manage and monitor production datasets, ensuring fault tolerance, redundancy, and data integrity across systems.
- Collaborate with cross-functional teams to design and launch new data features, including documentation of dataflows, capabilities, and technical support procedures.
- Support database management and administration tasks such as performance tuning, indexing, and access control.
- Apply Agile software development practices to deliver iterative improvements and respond to evolving data needs.
- Develop and integrate APIs to support data access and interoperability across systems.
- Work with cloud messaging APIs and implement push notification mechanisms where applicable.
- Continuously explore and adopt new tools, technologies, and methods to improve data engineering practices and support mission-driven outcomes.
What you need to have
- Clearance: Must possess and maintain a TS/SCI clearance
- Bachelor’s degree in Computer Science, Engineering, Information Systems, or a related field
- 4–8 years of relevant experience in data engineering, ETL development, or database administration
- Proficiency in Amazon Web Services (AWS) and cloud-based data integration
- Strong experience with ETL architecture, data pipeline development, and end-to-end data processing
- Solid understanding of database management, including schema design, indexing, and optimization
- Experience with API development and integration
- Ability to manage and transform raw data into usable formats for analytics and reporting
- Familiarity with Agile methodologies and collaborative development environments
- Strong problem-solving skills, attention to detail, and ability to provide technical support for data systems
What we'd like you to have
- Experience with cloud messaging APIs and push notification systems
- Hands-on experience with database administration and performance tuning
- Keen interest in learning and applying the latest data tools and technologies to solve real-world problems
- Experience supporting technical support functions related to data infrastructure
- Familiarity with national security or mission-driven data environments
About BigBear.ai
BigBear.ai is a leading provider of AI-powered decision intelligence solutions for national security, supply chain management, and digital identity. Customers and partners rely on BigBear.ai’s predictive analytics capabilities in highly complex, distributed, mission-based operating environments. Headquartered in McLean, Virginia, BigBear.ai is a public company traded on the NYSE under the symbol BBAI. For more information, visit https://bigbear.ai/ and follow BigBear.ai on LinkedIn: @BigBear.ai and X: @BigBearai. BigBear.ai is an equal opportunity employer for all protected groups, including protected veterans and individuals with disabilities.