Granica
Staff Software Engineer – Foundational Data Systems for AI
Granica, San Francisco, California, United States, 94199
Granica is an AI research and systems company building the infrastructure for a new kind of intelligence: one that is structured, efficient, and deeply integrated with data. Our systems operate at exabyte scale, processing petabytes of data each day for some of the world’s most prominent enterprises in finance, technology, and industry. These systems are already making a measurable difference in how global organizations use data to deploy AI safely and efficiently.
We believe that the next generation of enterprise AI will not come from larger models but from more efficient data systems. By advancing the frontier of how data is represented, stored, and transformed, we aim to make large‑scale intelligence creation sustainable and adaptive.
Our long‑term vision is Efficient Intelligence: AI that learns using fewer resources, generalizes from less data, and reasons through structure rather than scale. To reach that vision, we are first building the Foundational Data Systems that make structured AI possible.
Base pay range: $240,000 – $290,000 per year
What You’ll Build
Global Metadata Substrate. Define and evolve the global metadata and transactional substrate that powers atomic consistency and schema evolution across exabyte‑scale data systems.
Adaptive Engines. Architect self‑optimizing systems that continuously reorganize and compress data based on access patterns, achieving order‑of‑magnitude efficiency gains.
Intelligent Data Layouts. Pioneer new approaches to encoding and layout that push the theoretical limits of signal per byte read.
Autonomous Compute Pipelines. Lead development of distributed compute platforms that scale predictably and maintain reliability under extreme load and failure conditions.
Research to Production. Collaborate with Granica Research to translate advances in compression and probabilistic modeling into production‑grade, industry‑defining systems.
Latency as Intelligence. Drive system‑wide initiatives to minimize latency from insight to decision, enabling faster model learning and data‑driven reasoning.
What You Bring
Mastery of distributed systems: consensus, replication, consistency, and performance at scale.
Proven track record of architecting and delivering large‑scale data or compute systems with measurable 10× impact.
Expertise with columnar formats and low‑level data representation techniques.
Deep production experience with Spark, Flink, or next‑generation compute frameworks.
Fluency in Java, Rust, Go, or C++, emphasizing simplicity, performance, and maintainability.
Demonstrated leadership—mentoring senior engineers, influencing architecture, and scaling technical excellence.
Systems intuition rooted in theory: compression, entropy, and information efficiency.
Bonus
Familiarity with Iceberg, Delta Lake, or Hudi.
Published or open‑source contributions in distributed systems, compression, or data representation.
Passion for bridging research and production to define the next frontier of efficient AI infrastructure.
Why Granica
Fundamental Research Meets Enterprise Impact. Work at the intersection of science and engineering, turning foundational research into deployed systems serving enterprise workloads at exabyte scale.
AI by Design. Build the infrastructure that defines how efficiently the world can create and apply intelligence.
Real Ownership. Design primitives that will underpin the next decade of AI infrastructure.
High‑Trust Environment. Deep technical work, minimal bureaucracy, shared mission.
Enduring Horizon. Backed by NEA, Bain Capital, and leading figures from tech and business. We are building a generational company for decades, not for quarters or a single product cycle.
Compensation & Benefits
Competitive salary, meaningful equity, and substantial bonus for top performers.
Flexible time off plus comprehensive health coverage for you and your family.
Support for research, publication, and deep technical exploration.
Join us to build the foundational data systems that power the future of enterprise AI. At Granica, you will shape the fundamental infrastructure that makes intelligence itself efficient, structured, and enduring.