Campbell Scientific

IT Data Engineer

Campbell Scientific, Logan, Utah, US 84322

Location: Logan, UT

Closing Date: End of Day, November 4, 2025

Administration is the backbone of an organization. Every individual working within this function connects our departments together, ensuring a smooth and accurate flow of information from one part of the organization to the next. We are looking for an IT Data Engineer to help keep our teams connected across the organization.

What’s in it For You?

A premium benefits package. We offer competitive Paid Time Off; Medical, Dental, Vision, and Hearing Insurance with no premiums for full-time (40-hour-week) employees; Long-Term and Short-Term Disability; AD&D; a 401(k) and Profit-Sharing Plan; and gym memberships.

Industry competitive salaries.

A great work culture where we work hard and make the time to enjoy both our work and the people around us.

Challenging and engaging work that makes a difference on a global scale.

What You’ll Work On

As a Data Engineer specializing in data warehousing, you will be responsible for designing, developing, and maintaining data pipelines that support our data warehouse architecture. You will work closely with data scientists, analysts, and other technical teams to ensure that data is collected, transformed, and stored efficiently, providing reliable data for business intelligence and decision-making.

You’ll Support Your Team by Performing the Following Key Tasks

Data Pipeline Development: Design, build, and optimize scalable ETL (Extract, Transform, Load) processes to move data from source systems into the data warehouse.

Data Modeling: Develop and maintain effective data models to ensure data is organized in a way that is easy to query and analyze, including star schemas, snowflake schemas, and normalization.

Data Warehouse Maintenance: Oversee setup, management, and performance tuning of data warehouse platforms, ensuring data is properly stored, backed up, and available for analytical processes.

Data Integration: Integrate data from multiple internal and external sources into the data warehouse, ensuring accurate and timely data synchronization.

Data Transformation: Design data models and develop data transformation processes to ensure data is clean, reliable, and ready for analysis.

Query Optimization: Collaborate with data analysts and data scientists to optimize queries and provide clean, reliable, and well-structured data.

Data Quality, Governance & Security: Monitor data flows and implement error-handling routines; enforce data governance practices to maintain consistency, accuracy, and privacy.

Collaboration with Cross-Functional Teams: Work with business intelligence analysts, data scientists, and other stakeholders to understand data needs and ensure data availability for reports and analytics.

Troubleshooting & Problem-Solving: Investigate and resolve data issues, performance bottlenecks, and errors in ETL processes or the data warehouse environment.

Documentation & Reporting: Maintain clear documentation of data models, workflows, and processes; communicate project progress and data pipeline health to key stakeholders.

Continuous Improvement: Identify opportunities to improve data pipeline efficiency, performance, and scalability; stay current with emerging data technologies and trends.

What We’re Looking For

Bachelor’s degree in Computer Science, Information Technology, Data Engineering, or a related field, or equivalent experience.

Minimum of 2 years of experience as a data engineer or in a similar role with a focus on Microsoft Fabric, Azure data services, or related platforms.

Technical Expertise:

Strong experience with Microsoft Fabric components such as Data Factory, Data Lake, Data Warehouse, and Power BI.

Proficiency in ETL/ELT processes and designing data pipelines using tools within the Microsoft ecosystem.

Experience with SQL and data querying languages to extract, transform, and manipulate data.

Familiarity with programming languages such as Python, Scala, or Spark for data processing and transformation.

Cloud Technologies: Solid experience with cloud platforms, particularly Microsoft Azure, and other cloud-based data storage, processing, and analytics services.

Data Modeling & Transformation: Experience designing data models, schema, and performing data transformations in a cloud environment.

Data Governance & Security: Familiarity with data governance, security, and compliance frameworks for handling sensitive data in the cloud.

Problem-Solving & Troubleshooting: Strong ability to identify and resolve data pipeline issues, optimize performance, and ensure seamless data flow.

Collaboration & Communication: Excellent communication skills, with the ability to work cross-functionally with data analysts, business stakeholders, and IT teams.

Work Environment

General office environment with an assigned workstation, computer, and other hardware peripherals. Casual dress allowed. High interaction with systems users throughout the company.

Physical Requirements

Must be able to lift, carry, and maneuver up to 40 lbs. of equipment over short distances as needed while fulfilling job responsibilities.

Equal Opportunity

Campbell Scientific is an Equal Opportunity Employer.
