Iconic Hearts Holdings, Inc

Sr. Data Engineer

Iconic Hearts Holdings, Inc, Culver City, California, United States, 90232

Iconic Hearts Holdings, Inc. is an AI-native social media platform best known for creating Sendit, the Gen-Z icebreaker app that blends connection and creativity. Every product we launch puts large-scale AI and vector search at its core. Join a tight-knit, fast-shipping team that values autonomy, craftsmanship, and outsized impact.

Why We're Hiring

Iconic Hearts is seeking a highly skilled and motivated Data Engineer to join our dynamic team. In this role, you will be responsible for designing, building, and maintaining the data infrastructure that powers our AI-driven social media platform. You will work closely with teams across the company to enable the development of innovative features and to surface actionable insights from vast datasets. The ideal candidate will have a strong background in data engineering, with specific experience handling social media data and supporting AI and machine learning workflows.

What You'll Do

- Data Pipeline Development: Design, construct, and maintain scalable and reliable data pipelines to ingest, process, and transform large volumes of structured and unstructured data from various social media sources and APIs (a minimal pipeline sketch follows this list).
- AI and Machine Learning Support: Build and manage data infrastructure to support the entire lifecycle of machine learning models, including training, evaluation, and deployment. Collaborate with data scientists to understand their data requirements and ensure they have access to high-quality, clean, and well-structured data.
- Data Architecture: Develop and manage data architectures optimized for AI and machine learning applications, including data lakes and data warehouses.
- Data Quality and Governance: Implement processes and systems to monitor data quality, ensuring accuracy, consistency, and reliability. Own data privacy and security best practices.
- Performance Optimization: Monitor and optimize the performance of data pipelines and database queries to ensure efficiency and cost-effectiveness.
- Collaboration: Work cross-functionally with data scientists, software engineers, and product teams to deliver end-to-end data solutions that align with business objectives.
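
As a concrete illustration of the kind of streaming pipeline this role owns, here is a minimal sketch using Apache Beam on the GCP services named below (Pub/Sub, Dataflow, BigQuery). The project, topic, table, and field names are hypothetical, and the BigQuery table is assumed to already exist.

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse_event(raw: bytes) -> dict:
    """Decode a raw Pub/Sub message into a flat event record (hypothetical schema)."""
    event = json.loads(raw.decode("utf-8"))
    return {
        "user_id": event["user_id"],
        "event_type": event["event_type"],
        "created_at": event["created_at"],
    }


def run() -> None:
    # Pass --runner=DataflowRunner (plus project/region flags) to run on Dataflow.
    options = PipelineOptions(streaming=True)
    with beam.Pipeline(options=options) as pipeline:
        (
            pipeline
            | "ReadEvents" >> beam.io.ReadFromPubSub(topic="projects/my-project/topics/app-events")
            | "ParseJSON" >> beam.Map(parse_event)
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                "my-project:analytics.app_events",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,  # table pre-created
            )
        )


if __name__ == "__main__":
    run()
```

In production the parse step would also route malformed payloads to a dead-letter output rather than crash, but the shape above is the core ingest-transform-load loop.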

What We're Looking For

- 3-5 years designing large-scale data pipelines in a production environment.
- Advanced Python, TypeScript, and SQL skills.
- Proven experience with vector databases (ElasticSearch, AlloyDB) and building RAG or semantic search systems (see the search sketch after this list).
- Deep familiarity with GCP data services (BigQuery, Dataflow, Pub/Sub, Cloud Functions) and Infrastructure-as-Code.
- Strong grasp of data modeling, partitioning, and optimization for petabyte-scale analytics.
- Knowledge of ML training workflows (PyTorch, Transformers, Unsloth) and experience deploying training/inference code to GPUs.
- Track record of shipping resilient, well-documented code in an agile startup setting.
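
To make the vector-search requirement concrete, here is a minimal semantic-search sketch against Elasticsearch 8.x approximate kNN. The index name (posts), vector field (embedding), and embedding model are assumptions; the index is assumed to have a dense_vector mapping whose dims match the model (384 here).

```python
from elasticsearch import Elasticsearch
from sentence_transformers import SentenceTransformer

es = Elasticsearch("http://localhost:9200")  # hypothetical local cluster
model = SentenceTransformer("all-MiniLM-L6-v2")  # produces 384-dim embeddings


def semantic_search(query: str, k: int = 5) -> list[str]:
    """Embed the query and run approximate kNN over a dense_vector field."""
    query_vector = model.encode(query).tolist()
    resp = es.search(
        index="posts",
        knn={
            "field": "embedding",
            "query_vector": query_vector,
            "k": k,
            "num_candidates": 100,  # wider candidate pool: better recall, more cost
        },
        source=["text"],
    )
    return [hit["_source"]["text"] for hit in resp["hits"]["hits"]]


if __name__ == "__main__":
    for text in semantic_search("icebreaker questions for new friends"):
        print(text)
```

In a RAG system, the returned passages would then be injected into an LLM prompt as grounding context.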

Nice-to-Haves

- Experience with Snowflake or other cloud data warehouses.
- Exposure to Kubernetes and serverless GPU pipelines.
- Familiarity with privacy-enhancing technologies (differential privacy, data masking); a small illustration follows.
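
For candidates unfamiliar with the two privacy techniques named above, here is a toy sketch of both: salted-hash masking of identifiers, and a Laplace-noised count (the standard epsilon-differential-privacy mechanism for a counting query, which has sensitivity 1). The salt value and epsilon are placeholders.

```python
import hashlib

import numpy as np


def mask_user_id(user_id: str, salt: str = "rotate-me-regularly") -> str:
    """Pseudonymize an identifier with a salted hash (simple data masking)."""
    return hashlib.sha256((salt + user_id).encode("utf-8")).hexdigest()[:16]


def dp_count(true_count: int, epsilon: float = 1.0) -> float:
    """Release a count with Laplace noise of scale 1/epsilon (epsilon-DP, sensitivity 1)."""
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)


if __name__ == "__main__":
    print(mask_user_id("user-12345"))
    print(dp_count(10_000, epsilon=0.5))
```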