Blend360
Company Description
Blend is a premier AI services provider, committed to co-creating meaningful impact for its clients through the power of data science, AI, technology, and people. With a mission to fuel bold visions, Blend tackles significant challenges by seamlessly aligning human expertise with artificial intelligence. The company is dedicated to unlocking value and fostering innovation for its clients by harnessing world-class people and data-driven strategy. We believe that the power of people and AI can have a meaningful impact on your world, creating more fulfilling work and projects for our people and clients. For more information, visit www.blend360.com.
Job Description
This is a contract position with the possibility of conversion to permanent hire.
We're seeking a Mid-Level Data Engineer with hands-on experience configuring and optimizing Apache Iceberg infrastructure. The role involves building out our foundational Iceberg data lakehouse architecture and integrating it with key cloud and analytics platforms. You will be a core part of our data engineering team, working closely with analytics and BI teams to ensure seamless data access and usability.
Key Responsibilities
Iceberg Infrastructure Configuration
- Design and implement the initial Apache Iceberg infrastructure (a minimal setup sketch follows below).
- Ensure compatibility and optimization for batch and streaming data use cases.
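To give a concrete sense of what the initial setup involves, below is a minimal sketch of creating a partitioned Iceberg table with PyIceberg against a REST catalog backed by a GCS warehouse. The catalog name, endpoint, bucket, and table identifier are placeholder assumptions for illustration, not our actual environment.

```python
from pyiceberg.catalog import load_catalog
from pyiceberg.partitioning import PartitionField, PartitionSpec
from pyiceberg.schema import Schema
from pyiceberg.transforms import DayTransform
from pyiceberg.types import LongType, NestedField, StringType, TimestampType

# Hypothetical REST catalog and GCS warehouse; real values would come from configuration.
catalog = load_catalog(
    "lakehouse",
    uri="https://iceberg-catalog.example.com",
    warehouse="gs://example-lakehouse",
)

# Simple event schema; Iceberg tracks columns by explicit field ID rather than by name.
# All fields are kept nullable here to keep the example permissive.
schema = Schema(
    NestedField(field_id=1, name="event_id", field_type=LongType(), required=False),
    NestedField(field_id=2, name="event_type", field_type=StringType(), required=False),
    NestedField(field_id=3, name="event_ts", field_type=TimestampType(), required=False),
)

# Partition by day of event_ts so both batch backfills and streaming appends prune well.
partition_spec = PartitionSpec(
    PartitionField(source_id=3, field_id=1000, transform=DayTransform(), name="event_day")
)

table = catalog.create_table(
    identifier="analytics.events",
    schema=schema,
    partition_spec=partition_spec,
)
print(f"Created analytics.events at {table.location()}")
```

The same table definition could equally be created through Spark SQL or another engine; the point of the sketch is that the table format and partitioning are decided here, independently of the engines that later read it.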
Platform Integration & Connectivity
Set up and manage connections between Iceberg and the platforms below (a GCP-side sketch follows the list):
- Google Cloud Platform (GCP)
- Snowflake, with a focus on Federated Data Warehousing (FDW)
- MicroStrategy
- Looker
- Power BI (lower priority, but still considered for downstream enablement)
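As one example of the GCP side of that connectivity, the sketch below registers an existing Iceberg table as a BigQuery external table over its metadata file, so GCP-native tools can query the lakehouse data in place. The project, dataset, connection, and metadata path are hypothetical, and the exact DDL options should be verified against current BigQuery documentation.

```python
from google.cloud import bigquery

# Hypothetical project; authentication comes from the environment (e.g., a service account).
client = bigquery.Client(project="example-project")

# Point BigQuery at the Iceberg table's current metadata file in GCS via a BigLake connection.
ddl = """
CREATE OR REPLACE EXTERNAL TABLE `example-project.analytics.events`
WITH CONNECTION `example-project.us.lakehouse-connection`
OPTIONS (
  format = 'ICEBERG',
  uris = ['gs://example-lakehouse/analytics/events/metadata/v3.metadata.json']
);
"""

client.query(ddl).result()  # blocks until the DDL job completes
```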
Data Pipeline Development
- Build and deploy initial pipelines for data flow from Snowflake → Iceberg → MicroStrategy (sketched below).
- Monitor and optimize data ingestion, transformation, and delivery.
- Ensure data quality, lineage, and security compliance throughout the pipeline.
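To make the intended flow concrete, here is a minimal sketch of a daily Airflow DAG that pulls the previous day's rows from Snowflake and appends them to the Iceberg table, which MicroStrategy (or any engine connected to the lakehouse) then reads. Connection parameters, table names, and the catalog name are illustrative assumptions; in a real deployment credentials would live in Airflow connections or a secrets backend.

```python
from datetime import datetime

import pyarrow as pa
from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False, tags=["lakehouse"])
def snowflake_to_iceberg():
    """Illustrative Snowflake -> Iceberg daily load; BI tools query the Iceberg table downstream."""

    @task
    def extract_events() -> list[dict]:
        # Hypothetical source table and credentials, hard-coded only to keep the sketch short.
        import snowflake.connector

        conn = snowflake.connector.connect(
            account="example_account",
            user="example_user",
            password="example_password",
            warehouse="ANALYTICS_WH",
            database="RAW",
            schema="EVENTS",
        )
        try:
            cur = conn.cursor(snowflake.connector.DictCursor)
            cur.execute(
                "SELECT event_id, event_type, event_ts "
                "FROM events WHERE event_ts >= DATEADD(day, -1, CURRENT_DATE)"
            )
            # Snowflake returns uppercase column names; normalize to match the Iceberg schema.
            return [{k.lower(): v for k, v in row.items()} for row in cur.fetchall()]
        finally:
            conn.close()

    @task
    def append_to_iceberg(rows: list[dict]) -> None:
        # Assumes the analytics.events table created during the infrastructure setup phase.
        from pyiceberg.catalog import load_catalog

        catalog = load_catalog("lakehouse")  # catalog properties resolved from local config/env
        table = catalog.load_table("analytics.events")
        table.append(pa.Table.from_pylist(rows))

    append_to_iceberg(extract_events())


snowflake_to_iceberg()
```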
Collaboration & Documentation
- Collaborate cross-functionally with data science, analytics, and DevOps teams.
- Document configuration, design patterns, and integration processes.
Qualifications
Required:
- 3-5 years of experience in data engineering or a related field.
- Proven experience configuring and managing Apache Iceberg environments.
- Hands-on experience with Snowflake, including familiarity with FDW.
- Experience integrating cloud storage systems and query engines (e.g., BigQuery, GCP).
- Working knowledge of BI tools: MicroStrategy, Looker, Power BI.
- Proficiency in Python, SQL, and data orchestration tools (e.g., Airflow).
- Strong understanding of data lakehouse architecture and performance optimization.
Preferred:
- Familiarity with secure data sharing and access control across tools.
- Knowledge of metadata catalogs such as Apache Hive, AWS Glue, or Unity Catalog.
- Background in working with distributed data systems and cloud-native environments.
Additional Information
Blend is a premier AI services provider, committed to co-creating meaningful impact for its clients through the power of data science, AI, technology, and people. With a mission to fuel bold visions, Blend tackles significant challenges by seamlessly aligning human expertise with artificial intelligence. The company is dedicated to unlocking value and fostering innovation for its clients by harnessing world-class people and data-driven strategy. We believe that the power of people and AI can have a meaningful impact on your world, creating more fulfilling work and projects for our people and clients. For more information, visit www.blend360.com.