Conflux Systems
We are seeking a motivated Data Engineer to join our team and support the modernization of our data estate. This role focuses on assisting with data pipeline development, migrating legacy systems, and maintaining scalable, secure, and efficient data solutions built on modern technologies, particularly Microsoft Fabric and Azure-based platforms.
Work Location & Attendance
Must be physically located in Georgia
On-site: Tuesday to Thursday (at the manager's discretion)
Mandatory in-person meetings: All Hands, Enterprise Applications, DECAL All Staff
Position Type: Contract
Experience Required: 2-3 years
Key Responsibilities
Assist in designing, building, and maintaining ETL / ELT data pipelines using Microsoft Fabric and Azure Databricks.
Support migration and maintenance of SSIS packages from legacy systems.
Implement medallion architecture (Bronze, Silver, Gold) for data lifecycle and quality (see the sketch after this list).
Create and manage notebooks (Fabric Notebooks, Databricks) for data transformation using Python, SQL, and Spark.
Build curated datasets to support Power BI dashboards.
Collaborate with data analysts and business stakeholders to deliver fit-for-purpose data assets.
Apply data governance policies in line with Microsoft Purview or Unity Catalog.
Support monitoring, logging, and CI / CD automation using Azure DevOps.
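For illustration only, here is a minimal sketch of the kind of Bronze-to-Silver medallion transformation this role would maintain in a Fabric or Databricks notebook using PySpark. The table names (bronze.orders, silver.orders) and columns (order_id, order_ts, amount) are hypothetical placeholders, not details from this posting.

```python
# Minimal sketch of a Bronze -> Silver medallion step in a Spark notebook.
# Table and column names below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Fabric and Databricks notebooks provide a SparkSession automatically;
# getOrCreate() simply reuses it.
spark = SparkSession.builder.getOrCreate()

# Read raw, append-only data from the Bronze layer.
bronze_df = spark.read.table("bronze.orders")

# Cleanse and conform: deduplicate, enforce types, drop obviously bad rows.
silver_df = (
    bronze_df
    .dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .filter(F.col("order_id").isNotNull())
)

# Persist the curated result to the Silver layer as a Delta table,
# ready for further Gold-layer aggregation or Power BI datasets.
silver_df.write.format("delta").mode("overwrite").saveAsTable("silver.orders")
```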
Technical Stack
Microsoft Fabric (Dataflows, Pipelines, Notebooks, OneLake)
Azure Databricks
SQL Server / SQL Managed Instances
Power BI
SSIS (migration and maintenance)
LangGraph and RAG DB (for advanced data workflows)
Qualifications
Bachelor's degree in Computer Science, Information Systems, or a related field.
2-3 years of experience in data engineering or related roles.
Proficiency in SQL, Python, and Spark.
Familiarity with LangGraph and RAG DB concepts.
Hands-on experience with Microsoft Fabric and Power BI.
Understanding of ETL / ELT pipelines and data warehousing concepts.
Preferred
Knowledge of CI / CD automation with Azure DevOps.
Familiarity with data governance tools (Microsoft Purview, Unity Catalog).
Experience with SSIS package migration and maintenance.
Key Skills
Apache Hive, S3, Hadoop, Redshift, Spark, AWS, Apache Pig, NoSQL, Big Data, Data Warehouse, Kafka, Scala
Employment Type: Full Time
Vacancy: 1