The Recruiting Guy
Job Title: Software Engineer (Data Platform)
Location: Remote; United States Only
Employment Type: Salaried W2 Full-Time
Base Pay Range: $125,000 - $200,000 per year

About The Company
We represent a rapidly growing data company in NYC that’s redefining how real‑world assets are represented and traded on public blockchains. Their platform serves investors, issuers, and financial institutions by providing reliable analytics, market intelligence, and transparent data on tokenized assets across the globe. They’re trusted by leading players in finance and blockchain for their accuracy, scale, and forward‑thinking approach to digital asset infrastructure. It’s an exciting opportunity to join a team that’s helping shape the future of real‑world asset tokenization and build technology that’s changing how the financial world connects.

Responsibilities
- Build and scale core data systems and APIs that serve product‑level analytics
- Collaborate with application engineers to ensure clean data flow between backend systems and end‑user features
- Develop and optimize data pipelines using PySpark and Databricks
- Work closely with the lead data engineer on system architecture and data infrastructure design
- Participate in system design discussions focused on scalability, performance, and maintainability
- Contribute to the full software development lifecycle, from design through deployment
- Support product and engineering teams by turning raw data into usable insights

Ideal Background
- 4 to 5+ years of software engineering experience, preferably focused on large‑scale data systems
- Strong proficiency in Python and experience with PySpark
- Experience with distributed frameworks such as Apache Spark, Beam, Flink, or Kafka Streams
- Proven ability to design, build, and maintain production‑grade data pipelines and APIs
- Background in computer science, computer engineering, applied mathematics, or a related field (top 50 university or equivalent rigor preferred)
- Experience working on data‑driven products rather than internal BI or reporting systems
- Strong communication skills and the ability to explain technical tradeoffs clearly
- High attention to detail, an ownership mindset, and a passion for building high‑quality systems

Nice to Have
- Experience in fintech, blockchain, or other data‑intensive environments
- Hands‑on experience with Databricks or real‑time streaming data systems
- Demonstrated curiosity and craftsmanship through side projects or open‑source work

Seniority Level
Mid‑Senior level

Skills: Beam, Apache Spark, Software Engineering, API, Flink, Python, Kafka Streams