Infoverity, Inc.

Databricks Engineer

Infoverity, Inc., Dublin, Ohio, United States, 43016


Dublin, United States | Posted on 08/11/2025

Infoverity is a leading systems integrator and global professional services firm driven to simplify and maximize the value of its clients’ information. Founded in 2011, Infoverity provides complementary services for many digital initiatives, including MDM and PIM Strategy and Implementation, Data Governance and Analytics, Content Management, Data Integration, Enterprise Hosting, and Operational Services that help large enterprises in the retail, consumer goods, manufacturing, financial, and healthcare sectors get more value from their information. Infoverity, a 100% employee-owned company, has been named to the Inc. 5000 and recognized by IDG’s Computerworld as one of the Best Places to Work in IT, as a Wonderful Workplace for Young Professionals, and as a “Best Place to Work” by Inc. Magazine and Business First. Infoverity’s global headquarters is in Dublin, Ohio; the EMEA headquarters and Global Development Center are in Valencia, Spain; additional offices are located in Germany and India.

Job Description

Infoverity is seeking an experienced Databricks Engineer to join our dynamic consulting team, helping clients architect, build, and optimize modern data platforms. In this role, you will design and implement scalable data solutions, leveraging cloud technologies, specifically Databricks, to drive innovation in AI/ML, data warehousing, and data integration. As a key player at Infoverity, you will collaborate with cross-functional teams, including data scientists, business analysts, and solution architects, to deliver high-impact data solutions tailored to our clients’ needs.

Key Responsibilities

Data Engineering & Architecture: Design and implement scalable, cloud-based data pipelines to ingest, transform, and store data from various sources.

Data Warehousing & Integration: Develop and optimize ETL/ELT workflows, ensuring efficient data movement between systems, with Databricks as the primary platform.

AI/ML Enablement: Work alongside data science teams to support feature engineering, model training, and ML operations (MLOps) in cloud environments.

Data Modeling & Governance: Develop data models, schemas, and best practices to ensure data integrity, consistency, and security.

Performance Optimization: Monitor, troubleshoot, and optimize query performance, ensuring scalability and efficiency.

Consulting & Client Engagement: Work directly with clients to understand business needs, recommend best practices, and deliver tailored data solutions.

Data Architecture Design: Design and implement scalable, high-performance data architectures using Databricks.

Requirement Gathering: Lead requirement-gathering sessions with clients and internal teams to understand business needs and define best practices for solutions, including data workflows.

Cloud Collaboration: Collaborate with cloud platform teams to optimize data storage and retrieval in environments like AWS S3, Azure Data Lake, and Delta Lake, among others.

Workflow Optimization: Translate complex data processes, such as those built in Alteryx and Tableau, into optimized Databricks workflows using PySpark and SQL (a minimal sketch follows this list).

Automation Development: Develop reusable automation scripts to streamline workflow migrations and improve operational efficiency.

Development Support: Provide hands-on development and troubleshooting support to ensure smooth implementation and optimal performance.

Governance & Best Practices: Partner with cross-functional teams to establish data governance frameworks, best practices, and standardized reporting processes.

Training & Support: Deliver training, documentation, and ongoing support to empower users and enhance organizational data literacy.
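
To make the workflow-translation responsibility above concrete, here is a minimal PySpark sketch; it is not taken from Infoverity's codebase, and all table names, columns, and paths are hypothetical. It re-expresses a simple filter-and-aggregate flow, the kind an Alteryx workflow or Tableau data-prep step might contain, as a Databricks-style job that writes a Delta table:

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders_etl").getOrCreate()

    # Ingest: read a raw CSV drop (path and schema are placeholders).
    raw = (spark.read
           .option("header", "true")
           .option("inferSchema", "true")
           .csv("/mnt/raw/orders.csv"))

    # Transform: the filter-and-aggregate step a visual ETL tool would model.
    daily_revenue = (raw
        .filter(F.col("status") == "shipped")
        .withColumn("order_date", F.to_date("order_ts"))
        .groupBy("order_date")
        .agg(F.sum("amount").alias("revenue")))

    # Load: persist as a Delta table for downstream BI tools such as Tableau.
    (daily_revenue.write
        .format("delta")
        .mode("overwrite")
        .saveAsTable("analytics.daily_revenue"))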

Requirements

Required Qualifications

Minimum of 2 to 3 years of experience in data engineering, data architecture, and data integration.

Strong expertise in Databricks, Snowflake, and/or Microsoft Fabric.

Proficiency in SQL, Python, Spark, and distributed data processing frameworks.

Experience with cloud platforms (Azure, AWS, or GCP) and their native data services.

Hands-on experience with ETL/ELT development, data pipelines, and data warehousing.

Knowledge of AI/ML workflows, including feature engineering and ML model deployment.

Strong understanding of data governance, security, and compliance best practices.

Excellent problem-solving, communication, and client-facing consulting skills.

Ability to work independently and as part of a team.

Preferred Qualifications

Certifications in Databricks, Snowflake, Microsoft Fabric, or a cloud platform (Azure, AWS, GCP).

Experience with Apache Airflow, dbt, Delta Lake, or similar data orchestration and transformation tools (a brief Airflow sketch follows this list).

Familiarity with DevOps, CI/CD pipelines, and Infrastructure as Code (IaC) tools like Terraform.
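
As a rough illustration of the orchestration experience listed above, here is a minimal Apache Airflow DAG sketch (assuming Airflow 2.x; the DAG id, task names, and callable are hypothetical) that chains two pipeline steps of the kind a Databricks migration might schedule:

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def run_step(**context):
        # Placeholder step; in practice this might trigger a Databricks job run.
        print("running step for logical date", context["ds"])

    with DAG(
        dag_id="daily_orders_etl",
        start_date=datetime(2025, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        extract = PythonOperator(task_id="extract", python_callable=run_step)
        transform = PythonOperator(task_id="transform", python_callable=run_step)
        extract >> transform  # run extract before transform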
