Werner Enterprises
Data Engineer
As a Data Engineer at Werner, you will design and maintain scalable, governed ETL/ELT processes and data models to support business operations and application needs. In addition to core engineering and data warehouse governance responsibilities, you will play a key role in enabling AI/ML initiatives by preparing structured datasets, collaborating with data scientists and analysts, and leveraging AI-driven tools such as copilots and agents to automate routine tasks and boost productivity. Your work will contribute to a modern, cloud-first data ecosystem that supports operational efficiency, advanced analytics, and data-informed decision-making.

Responsibilities:
- Develop, adapt, and test jobs, using appropriate tools, to extract, clean, transform, load, and model data.
- Work with team members and business subject matter experts to adapt, design, and deploy data models for enterprise-wide consumption.
- Work with applications teams to replicate and map raw data to appropriate data warehouse targets and models, using automation or AI where appropriate to improve efficiency and reusability.
- Collaborate with team members and business units to develop diagrams, flowcharts, and documentation based on system processes and business requirements.
- Engage with Enterprise and Application Architecture and senior team members to design and maintain a healthy, modern development and deployment environment for data engineering processes.
- Apply concepts of component reusability and automation to drive efficiency across data management, and evaluate and use AI-accelerated development tools such as copilots, autonomous agents, and intelligent assistants to further streamline and automate workflows.
- Assist senior team members with the governance and maintenance of our cloud data warehouse environment, including virtual warehouse, user, and role management.
- Ensure high data quality across source systems and data models, engaging with end users and engineering teams to identify and resolve data quality issues.
- Prepare and optimize datasets to support AI/ML use cases while partnering with data scientists, analysts, and business teams to develop, validate, and operationalize data-driven solutions.
- Research and resolve production support issues as they occur, including participation in a rotating on-call schedule.
- Accomplish team and organization goals by completing related tasks as needed.
- Perform other duties as assigned by supervisory personnel.
- Maintain timely and regular attendance according to the scheduled shift as determined by supervisory personnel.

Qualifications:
- Bachelor's degree in computer science, M.I.S., mathematics, or a similar discipline.
- At least 4 years of Data Engineering, Analytics, or Data Science experience required.
- Strong SQL skills and strong knowledge of data engineering processes and applications (e.g., Matillion, dbt, Fivetran, Qlik Replicate, or comparable products), including relevant AI-powered features.
- Understanding of testing best practices for data engineering processes and applications, including the use of automated testing frameworks and AI-assisted validation tools to improve reliability and efficiency.
- Proficiency with cloud data warehouse platforms and environments (e.g., Snowflake or a comparable platform), including permissions, governance (masking, encryption, etc.), and AI-powered features.
- Familiarity with cloud data warehouse management, including virtual warehouses, databases, schemas, and tables.
- Familiarity with user and role management (DAC, RBAC, etc.).
- Proficiency with cloud platforms (Azure, etc.) and Python for scripting, API integration, data manipulation, and automation; comfortable leveraging AI programming copilots to enhance efficiency and productivity.
- Familiarity with GitHub and CI/CD for data platforms, with an emphasis on integrating automation and AI capabilities to accelerate delivery and reduce manual effort on repetitive or routine workloads.
- Interest or experience in AI/ML workflows, including preparing data for model development, working with prompt-based agents, or leveraging intelligent development assistants.
- Strong analytical and problem-solving skills to perform requirements analysis and data engineering process design.
- Knowledge of, and ability to work in, an agile environment.
- Strong communication skills and a desire to build strong relationships with colleagues and key individuals within the business.
- Demonstrated ability to quickly and continuously learn and apply new technologies.