DW Search
Base pay range: $150,000.00/yr – $200,000.00/yr
Location and Work Arrangement
New York – on‑site, with flexibility when needed.
About the Role
We’re hiring a Data Engineer to design and scale cloud‑native data infrastructure that powers analytics, automation, and AI across trading, portfolio operations, and internal business teams. The role sits close to decision‑makers, providing direct visibility into real commercial problems that require clean, reliable, and well‑modelled data.
Responsibilities
Designing and building scalable, cloud‑based data pipelines in a modern Azure environment that power predictive models and advanced analytics.
Ingesting financial, operational, and third‑party data from APIs into scalable storage layers.
Developing modular, production‑grade ELT workflows and dbt transformations (SQL, Python) for analytics and machine learning.
Orchestrating workloads with Airflow and modern CI/CD practices.
Modelling data for analytics, BI, forecasting, and machine‑learning use cases.
Optimising data architectures for performance, cost, and reliability.
Working closely with data scientists, software engineers, and investment teams.
Troubleshooting and improving existing data processes and infrastructure.
Maintaining high standards around data governance, quality, and documentation.
Qualifications
Solid grounding in Python for data engineering and automation.
Strong SQL skills and experience with modern cloud warehouses (ideally Snowflake).
Hands‑on experience with workflow orchestration tools.
Comfortable working with dbt or similar transformation frameworks.
Experience in Azure preferred; strong engineers with AWS/GCP backgrounds will be considered – the role uses Azure for internal projects and AWS for portfolio‑company projects.
Understanding of infrastructure‑as‑code (Terraform, Pulumi, or similar).
Ability to simplify and communicate complex technical ideas.
Curiosity, ownership, and comfort working in a fast‑moving environment.
Why This Role Is Different
You’ll join an elite, high‑autonomy engineering group that acts as an internal technical strike team. The work is varied, senior‑facing, and commercially meaningful – with the chance to shape how a major global organisation uses data and AI.
Additional Information
If you want to work with a modern stack, solve real business problems, and build production systems that matter, apply now.
Sponsorship is sadly not available.