Tiverton
Data Operations Associate
Company Description
TIVERTON is an investment firm exclusively focused on the food and production agriculture sector. The firm oversees $2.2+ billion of assets across debt and equity strategies in the US. The team combines professionals with deep agricultural operating experience and financial expertise to provide tailored, long-term capital solutions to the space. For more information, please visit www.tiverton.ag.
Job Description
Position Summary: Tiverton is seeking a Data Operations Associate to support our investment process and portfolio operations through data engineering, analytics, and AI-powered automation. This hybrid role combines data infrastructure development with investment analytics, working across deal sourcing, due diligence, portfolio monitoring, and LP reporting. The ideal candidate is a technically proficient generalist who enjoys building solutions across the full data stack – from pipeline engineering to business intelligence – and is excited to apply AI/ML tools to solve real-world problems in agricultural private equity. This role offers broad exposure to both the investment side (deal flow, due diligence, and fund analytics) and the operations side (portfolio company data, reporting automation, and other analytics). The successful candidate will be self-motivated and energized by working with a group of thoughtful, smart, and skilled colleagues. He or she will enjoy being part of a young, hungry, and collaborative organization focused on becoming the pre-eminent investment firm in US agriculture.
Primary Responsibilities
Data Infrastructure & Pipeline Engineering (40%)
Build and maintain ETL pipelines pulling data from internal and external sources into our Snowflake data warehouse (a minimal sketch of this pattern follows this list)
Develop Python and SQL automation scripts for recurring data processes
Manage Snowflake data warehouse – schema design, query optimization, and data modeling
Build API integrations for third‑party data sources (pricing data, B2B data providers, market intelligence)
Implement data quality checks, validation rules, and monitoring to ensure pipeline reliability
Create web scraping solutions for data collection from public sources
Maintain code repositories with proper version control and documentation
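The responsibilities above describe a recurring extract-validate-load pattern. Below is a minimal sketch of that pattern in Python; the API endpoint, credentials, table, and field names are illustrative placeholders rather than details from this posting.

    """Minimal ETL sketch: pull pricing data from a hypothetical REST API,
    run basic quality checks, and load it into Snowflake.
    Endpoint, credentials, and table/column names are all placeholders."""
    import os

    import pandas as pd
    import requests
    import snowflake.connector
    from snowflake.connector.pandas_tools import write_pandas

    API_URL = "https://api.example.com/v1/crop-prices"  # hypothetical source

    def extract() -> pd.DataFrame:
        resp = requests.get(API_URL, timeout=30)
        resp.raise_for_status()
        return pd.DataFrame(resp.json()["records"])

    def validate(df: pd.DataFrame) -> pd.DataFrame:
        # Simple quality gates before anything touches the warehouse.
        if df.empty:
            raise ValueError("API returned no records")
        if df["price"].isna().any() or (df["price"] <= 0).any():
            raise ValueError("missing or non-positive prices in feed")
        return df.drop_duplicates(subset=["commodity", "date"])

    def load(df: pd.DataFrame) -> None:
        conn = snowflake.connector.connect(
            account=os.environ["SNOWFLAKE_ACCOUNT"],
            user=os.environ["SNOWFLAKE_USER"],
            password=os.environ["SNOWFLAKE_PASSWORD"],
            database="ANALYTICS",  # placeholder database
            schema="MARKET_DATA",  # placeholder schema
        )
        try:
            write_pandas(conn, df, table_name="CROP_PRICES", auto_create_table=True)
        finally:
            conn.close()

    if __name__ == "__main__":
        load(validate(extract()))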
Investment Analytics & Deal Support (30%)
Support deal pipeline analytics and sourcing workflows in our CRM
Build models and analytics for sector trends (crop prices, land values, farm credit metrics); a trend-calculation sketch follows this list
Extract and analyze data from appraisal documents, financial statements, and industry reports
Develop due diligence analytical frameworks and data rooms for new investments
Create LP reporting dashboards and automated quarterly reporting processes
Support investment team with ad‑hoc analytical requests and data visualization
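Much of the sector-trend work above reduces to standard time-series transforms. A minimal pandas sketch, with made-up land-value numbers purely for illustration:

    """Trend sketch: 12-month rolling average and year-over-year change
    on a hypothetical monthly land-values series."""
    import pandas as pd

    # Illustrative data; in practice this would come from the warehouse.
    df = pd.DataFrame({
        "date": pd.date_range("2022-01-01", periods=36, freq="MS"),
        "value_per_acre": range(5000, 5360, 10),
    }).set_index("date")

    df["rolling_12m"] = df["value_per_acre"].rolling(12).mean()
    df["yoy_pct"] = df["value_per_acre"].pct_change(12) * 100
    print(df.tail())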
AI/ML Implementation & Automation (20%)
Leverage LLMs (OpenAI, Claude) to accelerate document analysis, data extraction, and research workflows; a minimal extraction sketch follows this list
Build AI‑powered automation for deal screening, document processing, and data enrichment
Implement intelligent solutions for pattern recognition, anomaly detection, and data quality
Use prompt engineering and AI coding assistants to rapidly prototype analytical tools
Develop RAG (Retrieval‑Augmented Generation) systems for knowledge management
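A common building block behind the bullets above is structured extraction from documents. A minimal sketch using the OpenAI Python SDK; the model name, prompt, and field list are assumptions for illustration (the posting names OpenAI and Claude, and an Anthropic-based version would be analogous):

    """LLM extraction sketch: pull structured fields out of an appraisal
    excerpt as JSON. Model, prompt, and field names are illustrative only."""
    import json

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    PROMPT = (
        "Extract these fields from the appraisal excerpt as JSON: "
        "property_acres, appraised_value_usd, county, appraisal_date. "
        "Use null for anything not stated."
    )

    def extract_fields(document_text: str) -> dict:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; any JSON-capable model works
            response_format={"type": "json_object"},
            messages=[
                {"role": "system", "content": PROMPT},
                {"role": "user", "content": document_text},
            ],
        )
        return json.loads(resp.choices[0].message.content)

    print(extract_fields("240 acres in Wake County appraised at $2.1M on 2024-03-15."))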
Portfolio Company Support & Reporting (10%)
Support portfolio company reporting requirements and data requests
Build dashboards and reporting tools for portfolio operations teams
Troubleshoot data issues and provide technical support to portfolio companies
Partner with investment team to ensure clean, reliable data for portfolio monitoring
Qualifications
Technical Skills
Strong proficiency in Python (pandas, requests, sqlalchemy) and SQL for data analysis and automation
Experience with data pipelines, ETL processes/tools (e.g., Fivetran), or data engineering workflows
Working knowledge of cloud data warehouses (Snowflake, Databricks, BigQuery, Redshift)
Proficiency in business intelligence tools (Power BI, Tableau, Sigma, or Looker)
Advanced Excel skills including complex formulas, pivot tables, and data modeling
Experience with API integrations and web scraping (REST APIs, Beautiful Soup, or similar); a minimal scraping sketch follows this list
Comfortable with AI/ML tools: LangChain, OpenAI API, Claude API, or similar frameworks
Git version control and collaborative development workflows
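For the API-and-scraping skills above, a minimal collection sketch with requests and Beautiful Soup against a placeholder URL (the page structure is assumed for illustration):

    """Scraping sketch: read a simple three-column price table from a
    hypothetical public page into pandas. URL and layout are placeholders."""
    import pandas as pd
    import requests
    from bs4 import BeautifulSoup

    URL = "https://example.gov/weekly-crop-prices"  # placeholder URL

    resp = requests.get(URL, timeout=30)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")

    rows = []
    for tr in soup.select("table tr")[1:]:  # skip the header row
        cells = [td.get_text(strip=True) for td in tr.find_all("td")]
        if len(cells) == 3:  # assumed commodity/unit/price layout
            rows.append(cells)

    df = pd.DataFrame(rows, columns=["commodity", "unit", "price"])
    print(df.head())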
Business & Analytical Skills
Ability to translate business problems into technical solutions
Strong problem‑solving skills – can debug data issues independently
Understanding of financial concepts and private equity metrics is helpful but not required
Strong communication skills – can explain technical concepts to non‑technical stakeholders
Self‑directed with ability to prioritize and manage multiple projects
Detail‑oriented with focus on data quality and reliability
Experience & Background
1‑3 years of experience in data engineering, analytics, data science, or related technical roles
Bachelor’s degree in Computer Science, Data Science, Engineering, Finance, or related field
Internship or project experience with data pipelines, analytics, or automation is acceptable
Preferred / Nice to Have
Experience building LLM‑powered applications or automation tools
Familiarity with CRM systems (Affinity, Salesforce) or investment workflow tools
Experience with document processing and unstructured data extraction
Knowledge of ML libraries (scikit‑learn, numpy) and model deployment
Exposure to private equity, venture capital, or investment banking
Understanding of DevOps practices – testing, monitoring, CI/CD
Knowledge of agricultural markets, farm credit systems, or commodity data
Additional Information
Please submit examples of ETL/data pipeline-related technical projects (GitHub repos, portfolio sites, or project descriptions welcome)
Competitive compensation package with eligibility for an annual bonus based on individual and Company performance
Generous PTO and paid holiday policy
Employee benefits package including Healthcare, Dental, Vision, Group Life Insurance, and 401(k)
Location: Raleigh, North Carolina