Dutchie
About Dutchie
Founded in 2017, Dutchie is a comprehensive technology platform powering dispensary operations while providing consumers with safe and easy access to cannabis. Dutchie aims to further support the positive societal change the cannabis industry brings to the world through wellness benefits, social justice, and tax revenue that empowers local communities. Powering thousands of dispensaries across 40+ markets throughout the United States and Canada, Dutchie is the leading technology company in the cannabis space and was named one of Fast Company’s 10 Most Innovative Companies in North America and listed two years in a row on LinkedIn’s Top 50 Startups.
Dutchie has raised over $600M in funding to date, backed by D1 Capital Partners, Tiger Global, Dragoneer, DFJ Growth, Thrive Capital, Howard Schultz, Snoop Dogg’s Casa Verde Capital, Gron Ventures, members of the founding team at DoorDash, Kevin Durant’s Thirty Five Ventures, and other notable angel investors.
About the Role
We are seeking a motivated Senior Data Engineer to contribute to our data strategy, architecture, and infrastructure. This individual will be a dependable team contributor, applying their growing expertise to implement effective data solutions. The ideal candidate will have solid experience with modern data engineering tools and platforms, databases, and cloud technologies. We are looking for someone who excels at building reliable data systems and is eager to grow their skills while contributing to better customer experiences and smarter business decisions.
What You’ll Do
Technical Contribution
Implement scalable and reliable data pipelines following established best practices and frameworks
Learn and apply data engineering patterns for performance, scalability, and maintainability
Collaborate with senior team members, actively seeking mentorship and growth opportunities
Work independently on discrete data projects from end to end with milestone check-ins
Data Infrastructure Development
Build and maintain ETL/ELT pipelines using tools such as Fivetran and Dagster
Work with data warehouses, with a focus on Snowflake
Learn Infrastructure-as-Code practices (e.g., Pulumi) for resource management
Deploy and manage data services in Kubernetes environments
Data Management and Modeling
Design and maintain data models across database technologies, including SQL Server, PostgreSQL, MongoDB, and AWS RDS
Develop data models to support business intelligence and analytics needs
Learn database performance optimization and data governance practices
Cloud Development
Implement cloud-native solutions in AWS
Learn to optimize cost, performance, and scalability of cloud infrastructure
Monitoring and Quality
Work with observability platforms (e.g., Datadog) to monitor data workflows and system health
Participate in logging, alerting, and dashboard maintenance
Learning and Growth
Stay current with industry trends and learn new tools and methodologies
Collaborate with stakeholders to understand business requirements and translate them into technical solutions
Soft Skills
Strong problem-solving abilities and analytical thinking
Good communication skills, with growing ability to convey technical concepts
Collaborative mindset with willingness to learn from senior team members
What You Bring
3-5 years of hands-on experience in data engineering or a related field
Solid experience with modern data tools and platforms, including Snowflake, Fivetran, and Dagster
Proficiency with database technologies: SQL Server, PostgreSQL, and MongoDB
Experience with AWS cloud services, especially data-centric products like S3, RDS, and DMS
Experience with Infrastructure-as-Code (e.g., Pulumi) and container orchestration tools like Kubernetes
Experience in data modeling, schema design, and basic database optimization
Experience with dbt
Familiarity with observability tools such as Datadog, Grafana, or Prometheus
Proficiency in programming languages such as Python for data engineering tasks, with some exposure to application development languages like C# or Ruby
It’s a Bonus if You Have
Experience with additional cloud platforms such as Azure or GCP
Basic knowledge of distributed systems and big data technologies
Familiarity with CI/CD pipelines and version control systems
You’ll Get
We are targeting a starting salary of $125,000–$200,000 based on the intended level for this role.
In addition to cash compensation, our total rewards package includes:
Full medical benefits including dental and vision plans to ensure you always have the best care.
Equity packages in the form of stock options to all employees.
Technology (hardware, software, reading materials, etc.) allowance
Flexible vacation and sick days
At Dutchie, we’re committed to providing an environment of mutual respect where equal employment opportunities are available to all applicants and teammates without regard to race, color, religion, sex, sexual orientation, gender identity or expression, pregnancy, age, national origin, disability status, genetic information, protected veteran status, or any other characteristic protected by law. Dutchie believes that diversity and inclusion among our teammates is critical to our success, and we seek to recruit, develop and retain the most talented people from a diverse candidate pool.