Procurement Sciences
Company Overview:
Procurement Sciences AI (PSci.AI) is at the vanguard of generative artificial intelligence, transforming the government contracting sector as a Series A rocketship proudly backed by Battery Ventures, a top-1% global technology venture capital firm. As a venture-backed B2B SaaS company, we are dedicated to revolutionizing how federal, state, and local businesses approach government contracting with disruptive AI capabilities. Our team is committed to addressing customer pain points through an AI-first strategy, ensuring our solutions are effective and ahead of the curve. Our flagship platform, known for its "Win More Bids" value proposition, enhances revenue streams for our clients while driving unparalleled operational efficiencies. By harnessing generative AI tailored to the government contracting domain, we offer a unique competitive advantage. Our collaboration with Battery Ventures provides the resources and support to rapidly scale our innovations, redefining success standards and promising a quantum leap in value generation and operational excellence for our clients.

Job Description:
We are seeking a skilled Data Engineer to join our team, focusing on building and optimizing our data infrastructure. The ideal candidate will have experience with modern data stack tools, cloud platforms (Azure), and strong SQL skills.

Mandatory Requirements:

Relational Databases: Strong understanding of schema design, normalization, indexing, and query optimization. Hands-on experience with PostgreSQL for data modeling and optimization.
Data Processing & Transformation: Proficiency with dbt, SQL, and Python for data transformation and cleansing. Experience with workflow orchestration tools such as Airflow or Azure Data Factory.
Data Integration: Experience ingesting data from various sources (APIs, flat files, and government databases such as SAM.gov). Familiarity with ETL/ELT tools such as Airbyte and PyAirbyte, and with metadata governance tools like OpenMetadata.
Cloud Platforms: Hands-on experience with Azure Data Factory, Azure Blob Storage, and Azure Databricks. Understanding of Delta Lake for data lakehouse management and of Kubernetes as a container orchestrator and control plane.
Programming & Scripting: Proficient in Python for data manipulation and automation. Strong SQL skills, including writing complex queries with CTEs, window functions, and optimizations.
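As a small illustration of the SQL fluency this requirement describes, here is a minimal sketch combining a CTE with a window function, run through Python's built-in sqlite3 module. The awards table, its columns, and the data are hypothetical, invented purely for the example; they are not part of this posting or any PSci.AI schema.

```python
import sqlite3

# Hypothetical contract-awards table, used only to demonstrate a CTE
# feeding a window function.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE awards (agency TEXT, vendor TEXT, amount REAL);
    INSERT INTO awards VALUES
        ('GSA', 'Acme',    100.0),
        ('GSA', 'Globex',  250.0),
        ('DOD', 'Acme',    900.0),
        ('DOD', 'Initech',  40.0);
""")

# The CTE aggregates totals per agency/vendor; the window function then
# ranks vendors by total award amount within each agency.
query = """
WITH totals AS (
    SELECT agency, vendor, SUM(amount) AS total
    FROM awards
    GROUP BY agency, vendor
)
SELECT agency, vendor, total,
       RANK() OVER (PARTITION BY agency ORDER BY total DESC) AS rnk
FROM totals
ORDER BY agency, rnk;
"""
for row in conn.execute(query):
    print(row)
```

The same pattern (aggregate in a CTE, rank with a window function) carries over directly to PostgreSQL, which the role lists as its primary relational database.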
Version Control: Experience with Git, including branching strategies and collaborative development.
Soft Skills: Excellent communication skills to explain complex concepts to technical and non-technical stakeholders. Strong problem-solving skills and the ability to troubleshoot data issues efficiently. Ability to collaborate effectively within a team and to lead technical initiatives.
Desired (Nice-to-Have) Skills:

Columnar Databases: Experience with ClickHouse and optimization techniques for analytical queries.
In-Memory OLAP Databases: Familiarity with DuckDB for read-only analytical workloads.
Modeling Techniques: Knowledge of dimensional modeling (star/snowflake schemas) and Data Vault approaches.
API Integration & Event-Driven Architecture: Understanding of OpenAPI, Kafka, and event-driven design patterns.
DevOps/DataOps Experience: Familiarity with GitOps, CI/CD pipelines, and infrastructure-as-code tools like Pulumi and ARM/Bicep.
Generative AI & Knowledge Graphs: Understanding of LangChain/LangGraph, knowledge graphs, the Model Context Protocol, and semantic embeddings.
Domain Knowledge: Familiarity with US government procurement regulations (FAR, DFARS) and data sources such as USASpending.gov.
Location: Remote
Experience Level: Mid-Senior
Employment Type: Full-time

If you have a passion for working with large datasets and enjoy building scalable data solutions, we'd love to hear from you!