Granica Computing, Inc.
Granica is redefining how enterprises prepare and optimize data at the most fundamental layer of the AI stack—where raw information becomes usable intelligence. Our technology operates deep in the data infrastructure layer, making data efficient, secure, and ready for scale.
We eliminate the hidden inefficiencies in modern data platforms—slashing storage and compute costs, accelerating pipelines, and boosting platform efficiency. The result: 60%+ lower storage costs, up to 60% lower compute spend, 3× faster data processing, and 20% overall efficiency gains.
Why It Matters
Massive data should fuel innovation, not drain budgets. We remove the bottlenecks holding AI and analytics back—making data lighter, faster, and smarter so teams can ship breakthroughs, not babysit storage and compute bills.
Who We Are
World-renowned researchers in compression, information theory, and data systems
Elite engineers from Google, Pure Storage, Cohesity, and top cloud teams
Enterprise sellers who turn ROI into seven-figure wins
Powered by World-Class Investors & Customers
$65M+ raised from NEA, Bain Capital, A* Capital, and operators behind Okta, Eventbrite, Tesla, and Databricks. Our platform already processes hundreds of petabytes for industry leaders.
Smarter Infrastructure for the AI Era
We make data efficient, safe, and ready for scale—smarter, more foundational infrastructure for the AI era. Our technology integrates directly with modern data stacks like Snowflake, Databricks, and S3-based data lakes, enabling:
60%+ reduction in storage costs and up to 60% lower compute spend
3x faster data processing
20% platform efficiency gains
Trusted by Industry Leaders
Enterprise leaders globally already rely on Granica to cut costs, boost performance, and unlock more value from their existing data platforms.
A Deep Tech Approach to AI
We’re unlocking the layers beneath platforms like Snowflake and Databricks, making them faster, cheaper, and more AI-native. We combine advanced research with practical productization, powered by a dual-track strategy:
Research: Led by Chief Scientist Andrea Montanari (Stanford Professor), we publish 1–2 top-tier papers per quarter.
Product: Actively processing 100+ PB today and targeting exabyte scale by Q4 2025.
Our Mission
To convert entropy into intelligence, so every builder—human or AI—can make the impossible real. We’re building the default data substrate for AI, and a generational company built to endure beyond any single product cycle.
WHAT YOU’LL DO
This is a deep systems role for someone who lives and breathes lakehouse internals, knows open source cold, and wants to push the limits of what’s possible with Delta, Iceberg, and Parquet at petabyte scale.
Own the ACID backbone.
Design and harden Delta and Iceberg transaction layers so that petabyte-scale tables can time-travel in microseconds and schema evolution becomes a non-event.
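For concreteness, a minimal sketch of the two table mechanics named above, using standard Spark and Iceberg APIs (the catalog, table, and timestamp are illustrative assumptions, not Granica code):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("iceberg-time-travel-sketch")
  .getOrCreate()

// Time travel: read the table exactly as it existed at an earlier snapshot.
val asOfYesterday = spark.read
  .format("iceberg")
  .option("as-of-timestamp", "1700000000000") // epoch millis; illustrative value
  .load("lake.analytics.events")

// Schema evolution in Iceberg is a metadata-only commit: no data files are
// rewritten, which is what makes adding a column a "non-event".
spark.sql("ALTER TABLE lake.analytics.events ADD COLUMN region STRING")
```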
Turn metadata into rocket fuel.
Build compaction, caching, and pruning services that take millions of file pointers from lookup to query plan in under 50 ms.
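The pruning trick itself is simple to state; doing it for millions of files in tens of milliseconds is the hard part. A toy sketch with hypothetical types (not an actual Granica or Iceberg API):

```scala
// Hypothetical per-file metadata: object-store path plus min/max statistics
// for one column, as a manifest or metadata service would track them.
final case class FileStats(path: String, min: Long, max: Long)

// Keep only files whose [min, max] range can overlap the predicate range;
// every other file is dropped from the plan without ever being opened.
def prune(files: Seq[FileStats], lo: Long, hi: Long): Seq[FileStats] =
  files.filter(f => f.max >= lo && f.min <= hi)

// e.g. prune(manifestFiles, lo = 100L, hi = 200L) before scheduling the scan
```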
Squeeze more signal per byte.
Optimize Parquet layouts—from column ordering to dictionary encoding and bit-packing, bloom filters, and zone-map indexes—to cut scan I/O by 10x on real-world workloads.
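To make those levers concrete, a sketch written against Spark’s Parquet writer; the column names and paths are hypothetical, and the two options are standard parquet-mr writer settings assumed to be forwarded through Spark’s write options:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("parquet-layout-sketch").getOrCreate()
val df = spark.read.parquet("s3a://example-bucket/events_raw/") // illustrative input

// Sorting clusters similar values together, which makes dictionary and
// run-length encoding bite harder and tightens per-row-group min/max stats,
// i.e. the zone maps that scans use to skip data.
df.sort("tenant_id", "event_time")
  .write
  .option("parquet.enable.dictionary", "true")              // dictionary + bit-packing
  .option("parquet.bloom.filter.enabled#tenant_id", "true") // per-column bloom filter
  .parquet("s3a://example-bucket/events/")
```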
Ship adaptive indexing with research.
Co-invent machine-driven indexes that learn access patterns and automatically re-partition nightly—no more manual “analyze table” ever again.
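A deliberately toy version of that loop, to show the shape of the idea (the counts, column names, and paths are hypothetical, and `df` is the DataFrame from the sketch above):

```scala
import org.apache.spark.sql.functions.col

// Filter-frequency counts mined from a (hypothetical) query log.
val filterCounts: Map[String, Long] =
  Map("tenant_id" -> 9421L, "event_time" -> 310L, "region" -> 42L)

// Pick the hottest predicate column as the clustering key...
val clusterKey = filterCounts.maxBy(_._2)._1

// ...and cluster on it in the nightly rewrite, so tomorrow's scans on that
// column prune well without anyone running a manual "analyze table".
df.repartition(col(clusterKey))
  .sortWithinPartitions(clusterKey)
  .write.mode("overwrite")
  .parquet("s3a://example-bucket/events_clustered/")
```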
Scale the engine, not the babysitting.
Write Spark/Scala pipelines that autoscale across S3, GCS, and ADLS; expose observability hooks; and survive chaos drills without triggering a pager storm.
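For a flavor of the operational side, a minimal sketch of the autoscaling and observability knobs such a pipeline sets up front (the values are illustrative, not recommendations):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("autoscaling-pipeline-sketch")
  // Dynamic allocation lets the executor count grow and shrink with load.
  .config("spark.dynamicAllocation.enabled", "true")
  .config("spark.dynamicAllocation.minExecutors", "2")
  .config("spark.dynamicAllocation.maxExecutors", "200")
  // Namespace the built-in metrics so dashboards and alerts can find them.
  .config("spark.metrics.namespace", "lakehouse_pipeline")
  .getOrCreate()

// The same read works against any object store; only the URI scheme changes:
// "s3a://..." (S3), "gs://..." (GCS), "abfss://..." (ADLS).
val events = spark.read.parquet("s3a://example-bucket/events/")
```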
Code for longevity.
Write clean, test-soaked Java, Scala, or Go. Document key invariants so future teams extend the system—instead of rewriting it.
Measure success in human latency.
If analysts see their dashboards refresh in blink-level time, you’ve won. Publish your breakthrough and mentor the next engineer to raise the bar again.
WHAT WE’RE LOOKING FOR
You’ve built systems where petabyte-scale performance, resilience, and clarity of design all matter. You thrive at the intersection of infrastructure engineering and applied research, and care deeply about both how something works and how well it works at scale. We’re looking for someone with experience in:
Lakehouse and Transactional Data Systems
Proven expertise with formats like Delta Lake or Apache Iceberg, including ACID-compliant table design, schema evolution, and time-travel mechanics.
Columnar Storage Optimization
Deep knowledge of Parquet, including techniques like column ordering, dictionary encoding, bit-packing, bloom filters, and zone maps to reduce scan I/O and improve query efficiency.
Metadata and Indexing Systems
Experience building metadata-driven services—compaction, caching, pruning, and adaptive indexing that accelerate query planning and eliminate manual tuning.
Distributed Compute at Scale
Production-grade Spark/Scala pipeline development across object stores like S3, GCS, and ADLS, with an eye toward autoscaling, resilience, and observability.
Programming for Scale and Longevity
Strong coding skills in Java, Scala, or Go, with a focus on clean, testable code and a documentation mindset that enables future engineers to build on your work, not rewrite it.
Resilient Systems and Observability
You’ve designed systems that survive chaos drills, avoid pager storms, and surface the right metrics to keep complex infrastructure calm and visible.
Latency as a Product Metric
You think in terms of human latency—how fast a dashboard feels to the analyst, not just the system. You take pride in chasing down every unnecessary millisecond.
Mentorship and Engineering Rigor
You publish your breakthroughs, mentor peers, and contribute to a culture of engineering excellence and continuous learning.
WHY JOIN GRANICA
If you’ve helped build the modern data stack at a large company—Databricks, Snowflake, Confluent, or similar—you already know how critical lakehouse infrastructure is to AI and analytics at scale. At Granica, you’ll take that knowledge and apply it where it matters most: at the most fundamental layer in the data ecosystem.
Own the product, not just the feature.
At Granica, you won’t be optimizing edge cases or maintaining legacy systems. You’ll architect and build foundational components that define how enterprises manage and optimize data for AI.
Move faster, go deeper.
No multi-month review cycles or layers of abstraction—just high-agency engineering work where great ideas ship weekly. You’ll work directly with the founding team, engage closely with design partners, and see your impact hit production fast.
Work on hard, meaningful problems.
From transaction layer design in Delta and Iceberg, to petabyte-scale compaction and schema evolution, to adaptive indexing and cost-aware query planning—this is deep systems engineering at scale.
Join a team of expert builders.
Our engineers have designed the core internals of cloud-scale data systems, and we maintain a culture of peer-driven learning, hands-on prototyping, and technical storytelling.
Core Differentiation:
We’re focused on unlocking a deeper layer of AI infrastructure. By optimizing the way data is stored, processed, and retrieved, we make platforms like Snowflake and Databricks faster, more cost-efficient, and more AI-native. Our work sits at the most fundamental layer of the AI stack: where raw data becomes usable intelligence.
Be part of something early—without the chaos.
Granica has already secured $65M+ from NEA, Bain Capital Ventures, A* Capital, and legendary operators from Okta, Tesla, and Databricks.
Grow with the company.
You’ll have the chance to grow into a technical leadership role, mentor future hires, and shape both the engineering culture and product direction as we scale.
COMPENSATION & BENEFITS
Competitive salary and meaningful equity
Flexible hybrid work (Bay Area HQ)
Unlimited PTO + quarterly recharge days
Premium health, vision, and dental
Team offsites, deep tech talks, and learning stipends
Help build the foundational infrastructure for the AI era
Granica is an equal opportunity employer.
We celebrate diversity and are committed to creating an inclusive environment for all employees.