Bill.com
At BILL, we believe in empowering the businesses that drive our economy. By replacing outdated financial processes with innovative tools, we help businesses—from startups to established brands—make smarter decisions and gain control of their operations. And we don’t stop there: we’re creating the future of financial automation so businesses can spend more time on what matters.
Working here means you become part of a vision-driven team that’s ready to tackle challenges and build cutting-edge solutions. We value purpose, drive, and curiosity—and we thrive in a fast-paced, ever-changing environment. Whether in one of our offices in San Jose, CA, Draper, UT, or working remotely, BILLders collaborate to deliver real impact for businesses that need more time in their busy weeks.
BILL builds high-performing teams and we seek to hire the best talent for every role. We’re committed to building a workplace that fosters inclusion and diverse perspectives, valuing each person’s unique skills and experiences. We’d love to hear from you—you might be just what we’re looking for, whether in this role or another.
Let’s give businesses more time for what matters.
At BILL, the Data Operations team is central to building capabilities and best practices that enhance the usability of data across the company. We engage with stakeholders to understand their data needs, collect requirements, develop tailored solutions, deploy these solutions to our data ecosystem, and showcase the impactful results. Our engineers bring a sense of pragmatic perfectionism to their work, striving for excellence every day and delivering quality data products that our stakeholders rely on.
Responsibilities:
• Build and manage robust data pipelines that support scalable and efficient operations across various data platforms.
• Work closely with different teams to translate business requirements into sustainable technical solutions, facilitating effective data usage and integration.
• Participate in designing, implementing, and refining data models and database schemas to effectively support business functionalities and operations.
• Collaborate on migration projects to modern data platforms like Trino, using Iceberg as the table format, and enhance data flow and architecture to improve data reliability, efficiency, and quality.
• Engage in continuous optimization of data models and pipelines, contributing to infrastructure migrations and improvements in CI processes and Airflow orchestrations.
• Develop reusable classes, components, and modular scripts to automate and enhance daily tasks and workflows, improving efficiency for both stakeholders and team operations.

We’d love to chat if you have:
• BS/BA in Computer Science, Information Systems, Mathematics, or a related technical field, or equivalent practical experience.
• At least 3 years of experience in data warehousing roles, demonstrating expertise in large-scale data architecture design, implementation, and maintenance.
• Proficiency in advanced SQL and familiarity with database management practices, including experience with cloud data warehouses such as Snowflake, Redshift, or similar platforms.
• Experience in financial services and/or SaaS companies is a plus, along with a strong understanding of industry-specific data requirements and compliance issues.

Technical experience we’d love to see:
• Python: Adept at scripting in Python for data manipulation and integration tasks, with experience in Object-Oriented Programming (OOP).
• SQL, dbt, and Data Modeling:
  o Adept with advanced SQL techniques for querying, data transformation, and performance optimization.
  o Familiarity with dbt (data build tool) for managing data transformation workflows.
  o Strong understanding of data modeling best practices, with expertise in normalization and denormalization techniques to optimize analytical queries and database performance.
• ETL/ELT Processes: Extensive experience designing, building, and optimizing ETL/ELT data pipelines, including both batch and streaming data processing.
• Version Control: Significant experience with version control, branching, and collaboration on GitHub/GitLab.
• Data Visualization: Familiarity with Tableau or similar tools.
• Collaboration and Communication: Excellent documentation skills and the ability to work closely with diverse teams to translate business requirements into technical solutions.
• DevOps Practices: Knowledge of unit testing, CI/CD, and repository management.
• Technologies: Familiarity with Docker and cloud technologies such as AWS.
• Prompt Engineering for LLMs: Experience crafting and refining prompts for LLMs like GPT is a plus.

Visa Sponsorship: Please note that this position is not eligible for visa sponsorship. Applicants must have authorization to work in the United States without requiring visa sponsorship now or in the future.