iLink Digital
Job Description
About The Company:
iLink is a Global Software Solution Provider and Systems Integrator that delivers next-generation technology solutions to help clients solve complex business challenges, improve organizational effectiveness, increase business productivity, realize sustainable enterprise value, and transform their businesses inside-out. iLink integrates software systems and develops custom applications, components, and frameworks on the latest platforms for IT departments, commercial accounts, application service providers (ASPs), and independent software vendors (ISVs). iLink solutions are used in a broad range of industries and functions, including healthcare, telecom, government, oil and gas, education, and life sciences. iLink's expertise includes Cloud Computing & Application Modernization, Data Management & Analytics, Enterprise Mobility, Portal, Collaboration & Social Employee Engagement, Embedded Systems, and User Experience Design.
What makes iLink Systems' offerings unique is that we use pre-built frameworks designed to accelerate software development and the implementation of business processes for our clients. iLink has over 60 frameworks (solution accelerators), both industry-specific and horizontal, that can be easily customized and enhanced to meet your current business challenges.
Requirements
We are seeking a hands-on Databricks Architect with deep experience in designing, implementing, and operating large-scale data platforms (work experience in semiconductor manufacturing is a big plus). The ideal candidate has worked with the latest Databricks capabilities, such as Unity Catalog and Delta Live Tables (DLT). The candidate should have experience with data ingestion tools such as Fivetran and HVR, along with familiarity with real-time/streaming, batch, and sensor/OT/ET data flows. The candidate will architect end-to-end data solutions that support yield improvement, process control, predictive maintenance, and operational efficiency, with strict governance, security, and scalability.
Experience and JD:
- 10+ years of overall experience and 5+ years of architecture experience in data architecture/data engineering roles, with hands-on work on major enterprise data platforms.
- Proven hands-on experience with Databricks, especially with modern features such as:
  - Unity Catalog: implementing catalogs, schemas, permissions, external/managed tables, security, lineage, etc.
  - Delta Live Tables (DLT): building reliable pipelines, CDC, transformations, data quality, scaling/performance tuning.
- Experience with data ingestion tools such as Fivetran for SaaS/ERP/relational sources, plus experience integrating HVR or equivalent for high-velocity change data capture or replication.
- Strong working knowledge of cloud infrastructure (AWS or Azure), storage (object stores, data lakes), compute scaling, and cluster management within Databricks.
- Proficiency in programming with Python/PySpark and working with Spark/SQL; good understanding of streaming vs. batch processing.
- Deep understanding of data governance, security, and compliance: role-based access control (RBAC), attribute-based access control, encryption, audit logs, data privacy handling, and compliance requirements.
- Operational excellence: reliability, monitoring, observability, metrics; experience with failover/backup/DR strategies.
- Strong communication skills: able to work with domain experts and engineering teams, translate business requirements into technical solutions, and document architecture and trade-offs.
- Experience with performance tuning of Spark jobs, optimizing data storage formats, partitioning, and schema design to support high-throughput, low-latency workloads.
Nice to have
- Experience with machine learning/predictive analytics, especially using Databricks MLflow or integrating ML pipelines.
- Experience with infrastructure-as-code (IaC) tools for provisioning data platform components, cluster policies, and configurations (Terraform, Azure ARM/Bicep, AWS CloudFormation).
- Knowledge of additional tools/frameworks: real-time streaming platforms (e.g., Kafka, Event Hubs), BI/dashboards, and data catalog/lineage tools beyond Unity Catalog.
- Experience with cost optimization in large data platforms: storage, compute, and housekeeping (e.g., vacuuming, compaction, deleted-file cleanup).
Benefits
- Competitive salaries
- Medical, Dental, Vision Insurance
- Disability, Life & AD&D Insurance
- 401K with Generous Company Match
- Paid Vacation and Personal Leave
- Pre-Paid Commute Options
- Employee Referral Bonuses
- Performance-Based Bonuses
- Flexible Work Options & Fun Culture
- Continuing Education Reimbursements
- In-House Technology Training
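To give candidates a concrete sense of the Delta Live Tables work described in the requirements, below is a minimal pipeline sketch using the Databricks `dlt` Python API. This is an illustration only, not part of the role's codebase: it runs exclusively inside a Databricks DLT pipeline (the `dlt` module and the `spark` session are supplied by the runtime), and the table names and landing path are hypothetical.

```python
# Minimal Delta Live Tables (DLT) pipeline sketch.
# Runs only inside a Databricks DLT pipeline; `dlt` and `spark`
# are provided by the Databricks runtime, not installed locally.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw sensor readings ingested via Auto Loader")
def raw_sensor_readings():
    # Auto Loader ("cloudFiles") incrementally picks up new files;
    # the landing path below is a hypothetical example.
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/mnt/landing/sensors/")
    )

@dlt.table(comment="Cleaned readings with basic data-quality gates")
@dlt.expect_or_drop("valid_tool_id", "tool_id IS NOT NULL")
@dlt.expect_or_drop("valid_timestamp", "event_ts IS NOT NULL")
def clean_sensor_readings():
    # Expectations above drop rows that fail the quality checks;
    # DLT tracks pass/fail metrics for observability.
    return (
        dlt.read_stream("raw_sensor_readings")
        .withColumn("event_ts", F.to_timestamp("event_ts"))
    )
```

The pattern shown (streaming ingestion, declarative tables, expectation-based data quality) is the kind of pipeline design this role would own and scale.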