Verndale
About the Data Engineer Position
At Verndale, we are building a next-generation Data & Analytics practice designed to help organizations transform their complex data ecosystems into engines of clarity and intelligent action. Our vision is to empower clients to unlock hidden opportunities, optimize operations, and accelerate profitable growth through modern data architectures, trusted data management, advanced analytics, and AI-driven solutions. As part of a digital experience services firm with a strong heritage in customer experience and marketing technology, our Data & Analytics team extends Verndale’s core capabilities to deliver seamlessly integrated, insight-driven solutions that fuel personalization, operational excellence, and strategic foresight. We are pragmatic innovators, grounded in Verndale’s entrepreneurial culture, who design scalable, well-governed data platforms and embed intelligence into business workflows, enabling clients to thrive in the era of generative AI and beyond.
What You’ll Do
- Design, build, and maintain robust ETL/ELT pipelines (batch and streaming) that move data from diverse source systems into analytics, AI, and machine learning platforms.
- Develop and optimize data architectures (data lakes, warehouses, lakehouses, marts) that support real-time and batch analytics, ML workflows, and personalization use cases.
- Build automation and orchestration for data pipelines using modern workflow tools (Airflow, Prefect, dbt, Azure Data Factory, etc.).
- Partner with data architects, analysts, and client stakeholders to translate business requirements into technical designs and deliver data products with impact.
- Ensure data quality, lineage, observability, and security through validation, monitoring, and governance best practices.
- Optimize performance and cost-efficiency of pipelines and storage, leveraging partitioning, indexing, and query tuning.
- Support cloud-first architectures across AWS, Azure, and GCP, integrating services for compute, storage, and real-time data processing.
- Contribute to proofs of concept, platform migrations, and modernization efforts for client data platforms.
- Mentor junior engineers and contribute to the team's culture of technical excellence and continuous learning.

What We're Looking For
- 2–4+ years of experience building production-grade data pipelines and data systems.
- Strong proficiency in SQL, Python, or Java/Scala, with experience in distributed data frameworks (Apache Spark, Presto, EMR).
- Experience with modern data platforms (Snowflake, BigQuery, Redshift, Databricks, Synapse).
- Hands-on skills with streaming technologies (Kafka, Kinesis, Pub/Sub) and real-time data ingestion.
- Knowledge of data modeling principles and schema design for analytics and operational systems.
- Familiarity with DevOps and CI/CD practices for data, including version control (GitHub/GitLab), testing, and deployment automation.
- Exposure to data governance, security, compliance, and observability frameworks (lineage, monitoring, metadata/catalog tools).
- Ability to collaborate in a consulting/client-facing environment, balancing technical depth with pragmatic delivery.
- Experience using generative AI tools (e.g., ChatGPT, Gemini, Claude) to accelerate code development, ETL pipeline design, and troubleshooting.
- Excellent communication skills, with the ability to explain trade-offs and design choices to both technical and non-technical stakeholders.

Nice-to-Have Skills
- Experience with MLOps and preparing data pipelines for ML/AI model training, deployment, and monitoring.
- Familiarity with modern data stack tools (dbt, Fivetran, Matillion, Terraform) and infrastructure-as-code practices.
- Background in real-time personalization or event-driven architectures.
- Exposure to NoSQL and graph databases (MongoDB, Cosmos DB, DynamoDB, Neo4j).
- Experience with containerization/orchestration platforms (Docker, Kubernetes, ECS).
- Familiarity with metadata management, semantic layers, or data catalog tools (e.g., Unity Catalog) and/or data observability tooling.
- Prior experience in a digital experience or consulting services environment, ideally integrating data with DXP, MarTech, or Commerce platforms.
- Contributions to open source, thought leadership, or participation in data/AI community events.

Ten Great Reasons to Work at Verndale
1. We are a rapidly growing company that is just as entrepreneurial today as when we were founded in 1998.
2. We are relentlessly curious and enthusiastically solve our clients' complex business problems through technology, data, and design.
3. We foster a culture that enables every person in the organization to do the best work of their career.
4. We offer regular training and professional development to move careers forward.
5. Client and employee satisfaction are our two most important business metrics.
6. We celebrate and champion diversity, equity, and inclusion.
7. We offer generous paid company holidays, vacation, and paid sick time to every employee starting on day one.
8. We provide top-of-the-line benefits including health, dental, vision, 401(k), LTD, STD, life insurance, EAP, HRA, and more.
9. We support a healthy work/life balance.
10. We are fully remote enabled and embrace the evolving definition of the workplace.

About Verndale
Verndale is a digital experience agency dedicated to driving growth by helping businesses create meaningful human connections in an increasingly digital world.

Verndale is an Equal Opportunity Employer. All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status, or any other basis protected by federal, state, or local law.

Compensation & Benefits
$85,000 - $120,000 USD