Resource Informatics Group
Remote - Senior MS Fabric Engineer
Resource Informatics Group, Los Angeles, California, United States, 90079
Role: Senior MS Fabric Engineer
Location: Remote
Duration: Long term
Rates: DOE
Prefer USC/GC (client is not sponsoring visas for this opportunity)
Experience:
8+ years of hands-on experience in designing, implementing, and supporting data warehousing and business intelligence solutions, with a strong focus on Microsoft Fabric or similar tools within the Microsoft data stack (e.g., Azure Synapse Analytics, Azure Data Factory, Azure Databricks).
Strong proficiency with Microsoft Fabric components:
- Data Engineering (Spark Notebooks, Spark Job Definitions)
- Data Factory (Dataflows Gen2, Pipelines)
- Data Warehouse / SQL analytics endpoints
- OneLake and Lakehouse architecture
- Power BI integration and DAX
Solid understanding of data modeling (dimensional modeling, star schemas), ETL/ELT processes, and data integration patterns.
Proficiency in SQL, PySpark, and/or T-SQL. Familiarity with KQL (Kusto Query Language) is a plus.
Experience with big data technologies like Spark.
Strong understanding of data and analytics concepts, including data governance, data warehousing, and structured/unstructured data.
Knowledge of software development best practices (e.g., code modularity, documentation, version control - Git).
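For context, here is a minimal PySpark sketch of the kind of Lakehouse ELT work this role involves: loading a raw file from a Fabric Lakehouse, typing the columns, and writing a Delta table. It is illustrative only; the file path, column names, and the fact_orders table are hypothetical, not part of the client's requirements.

```python
from pyspark.sql import SparkSession, functions as F

# In a Fabric notebook the `spark` session is pre-created; building one
# explicitly keeps this sketch self-contained and runnable elsewhere.
spark = SparkSession.builder.getOrCreate()

# Hypothetical raw file landed in the Lakehouse Files area.
raw = spark.read.option("header", True).csv("Files/raw/orders.csv")

# Light ELT: cast columns to proper types and stamp the load time.
orders = (
    raw.withColumn("order_date", F.to_date("order_date"))
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .withColumn("loaded_at", F.current_timestamp())
)

# Write a managed table (hypothetical star-schema fact table name);
# Fabric Lakehouse tables are stored in Delta format by default.
orders.write.format("delta").mode("overwrite").saveAsTable("fact_orders")
```

In practice, notebooks like this would be orchestrated with Data Factory Pipelines or Dataflows Gen2, and the resulting tables surfaced to Power BI through the SQL analytics endpoint.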