Overview
BigBear.ai is seeking a Machine Learning Data Engineer in the Northern Virginia area. The successful candidate will work within an ML product implementation team on a fast-paced, first-of-its-kind, multi-year contract.
What You Will Do
Rapidly develop custom dataset ingestion workflows using a combination of Python, Java, SQL, and the KNIME Analytics Platform
Develop and troubleshoot ETL functions in Oracle Database, AWS, Linux, and KNIME
Create schemas and data models, and perform custom feature engineering
Create automated job flows and monitor daily database performance
Provide development support to the Scrum Master in an agile software development environment
Work as a key member of an AWS native solution software development team and identify risks and bottlenecks associated with big data environments
What You Need To Have
Bachelor's Degree and 0 to 2 years of experience
1+ years of experience in ETL and database development
1+ years of Python experience
What We'd Like You To Have
Java skills
Experience with distributed processing
Production implementation experience
Experience with containerization (e.g., Docker, Kubernetes)
About BigBear.ai
BigBear.ai is a leading provider of AI‑powered decision intelligence solutions for national security, supply chain management, and digital identity. Customers and partners rely on BigBear.ai's predictive analytics capabilities in highly complex, distributed, mission‑based operating environments. Headquartered in McLean, Virginia, BigBear.ai is a public company traded on the NYSE under the symbol BBAI. For more information, visit and follow BigBear.ai on LinkedIn: @BigBear.ai and X: @BigBearai.
BigBear.ai is an equal opportunity employer for all protected groups, including protected veterans and individuals with disabilities.