Nclusion, Inc.
Software Engineer - Data Platform
Nclusion, Inc., Palo Alto, California, United States, 94306
Nclusion is on a mission to provide traditional financial services to the 1.5 billion people worldwide who lack access today. Without a secure way to save, invest, or transfer money, individuals cannot build short- or long-term wealth. We're changing that by bridging the gap between traditional banking and the communities that need it most.
About the Role
As a founding member of our Data Platform team, you’ll lead the design and development of a scalable, company-wide data platform. You’ll have the freedom to choose the best tools and technologies, while working closely with teams across engineering, product, and operations. This role offers a unique opportunity to shape the foundation of our data strategy and how we leverage AI/ML to enhance the platform’s capabilities and impact in the future.
What You’ll Do
Build and Maintain the Data Platform:
Contribute to the design, implementation, and optimization of our data platform and infrastructure, working with the product team to integrate with our production systems
Build Data Infrastructure:
Proactively identify and implement improvements to our data infrastructure for performance, scalability, and cost-efficiency
Design and Implement Data Pipelines:
Architect and build efficient, reliable real-time and offline pipelines to ingest, transform, and load data from various sources (e.g., databases, APIs, streaming platforms)
Ensure Data Quality and Integrity:
Implement monitoring, troubleshooting, and validation processes to guarantee the accuracy, consistency, and reliability of our data
Collaborate with Cross-Functional Teams:
Work across different teams to understand their data needs and provide them with the infrastructure and tools they need to succeed
What You Bring to the Table
5+ years of experience as a Software Engineer or in a similar role
Experience in building production‑grade product systems that work with large‑scale or complex datasets. Experience deploying machine learning models in production systems is a plus.
Strong proficiency in SQL and experience working with relational databases (e.g., PostgreSQL, MySQL)
Experience with data warehousing solutions (e.g., Snowflake, Redshift, BigQuery)
Experience with ETL/ELT tools and frameworks (e.g., Apache Airflow, Talend, Informatica)
Experience with at least one programming language (e.g., Python, Scala, Java)
Familiarity with big data technologies and frameworks (e.g., Spark, Hadoop, Kafka) is a plus
Experience with cloud platforms (e.g., AWS, Azure, GCP) and their data services is highly desirable
Understanding of data modeling principles and best practices
Excellent problem‑solving and analytical skills
Strong communication and collaboration skills
A Bachelor's or Master's degree in Computer Science, Engineering, or a related field (or equivalent practical experience)
Benefits and Perks
401(k) with 4% match
Flexible PTO – We focus on impact, not tracking vacation days. We encourage a minimum of 14 days.
Yearly Salary: $150,000 - $260,000
In our commitment to fostering a diverse and inclusive workplace, we value the unique perspectives and experiences each individual brings to our team. We encourage all candidates, regardless of background, to apply. Your skills, talents, and potential contributions matter deeply to us, and we believe in creating an environment where everyone has an opportunity to thrive. We recognize that meeting every listed requirement may not always be possible, but we value passion, determination, and a willingness to learn. Your application is an opportunity for us to discover the exceptional qualities you bring.
Ready to apply?