Data Freelance Hub
Subject Matter Expert (SME) – Big Data
Data Freelance Hub, California, Missouri, United States, 65018
Job Summary
This is a part‑time, remote contract role for a Subject Matter Expert (SME) in Big Data. It requires 8+ years in data engineering or analytics, Microsoft certifications, and experience in regulated industries. Strong skills in Hadoop, Azure, SQL, and cloud platforms are essential.
Location: United States (Remote) | Job Type: Contract / Part‑time
Responsibilities
Analyze/create learning objectives for each course.
Review/create course outline for each course.
Review video scripts (7‑9 per course), confirm the technical accuracy of the content, and suggest edits and updates as required. Incorporate one round of internal and client feedback.
Provide relevant static or recorded demos/screencasts to be integrated into the videos; check the code and technical accuracy before handing off the demos for integration (a minimal sketch of such demo code appears after this list). Incorporate one round of internal and client feedback.
For AI-, software-, or tool‑based courses, suggest relevant freeware. Write or review the code and test it for correctness.
Review readings (4‑6 per course, each up to 1,200 words), confirm the technical accuracy of the content, and suggest edits and updates as required. Incorporate one round of internal and client feedback.
Create hands‑on activities (1‑2 labs or another client‑preferred format) per course. Incorporate one round of internal and client feedback.
Review practice quizzes and graded assessments (5 files, each comprising 5‑10 questions), suggest suitable edits, and confirm technical accuracy. Incorporate one round of internal and client feedback.
Record talking‑head videos (onsite or virtually via Zoom) for each course, amounting to approximately 20‑25 minutes of video.
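As a rough illustration of the short, runnable demo code this role would provide and verify for screencasts, here is a minimal PySpark sketch; the app name, input file, and column names are hypothetical placeholders, not deliverables specified by the client.

    # Minimal PySpark demo sketch; file and column names are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("demo-sketch").getOrCreate()

    # Hypothetical input: a CSV of web events with a 'country' column.
    events = spark.read.option("header", True).csv("events.csv")

    # Count events per country, largest first -- a one-screen
    # transformation that fits a short screencast segment.
    per_country = (
        events.groupBy("country")
              .agg(F.count("*").alias("events"))
              .orderBy(F.desc("events"))
    )
    per_country.show(10)

    spark.stop()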
Qualifications
8+ years of experience in data engineering, big data architecture, or analytics roles.
Microsoft certifications such as Azure Data Engineer Associate (DP-203), Azure Solutions Architect Expert, or Azure AI Engineer Associate.
Experience working with education platforms.
Bachelor’s or Master’s degree in Computer Science, Data Science, Engineering, or a related field.
Experience working in regulated industries (e.g., finance, healthcare) with a focus on data compliance and privacy.
Familiarity with AI/ML frameworks like TensorFlow, PyTorch, or MLlib.
Certifications in cloud platforms or big data technologies (e.g., AWS Big Data Specialty, GCP Data Engineer).
Strong expertise in Hadoop ecosystem (HDFS, Hive, Pig, HBase) and Apache Spark.
Strong expertise in Microsoft Fabric, Azure Synapse Analytics, Azure Data Lake Storage, Azure Databricks, Azure Data Factory, Azure Stream Analytics, and Microsoft Purview.
Proficiency in data integration tools and frameworks like Apache NiFi, Airflow, or Talend.
Experience with cloud data warehouses (AWS Redshift, Azure Synapse, Google BigQuery) and data lake/storage solutions.
Hands‑on experience with SQL, Python, Scala, or Java.
Solid understanding of data warehousing, data modeling, and real‑time data streaming (e.g., Kafka, Flink); see the streaming sketch after this list.
Familiarity with BI tools like Power BI, Tableau, or Looker.
Strong problem‑solving and communication skills with the ability to explain technical concepts to non‑technical stakeholders.
Ability to safeguard the integrity, security, and confidentiality of shared confidential information.
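For the real‑time streaming qualification above, a minimal Spark Structured Streaming sketch follows; it assumes a local Kafka broker, a hypothetical 'clicks' topic, and the spark-sql-kafka-0-10 connector on the classpath.

    # Streaming sketch: running click counts per key from a Kafka topic.
    # Broker address and topic name are assumptions for illustration.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("stream-sketch").getOrCreate()

    # Read the topic as an unbounded DataFrame.
    clicks = (
        spark.readStream.format("kafka")
             .option("kafka.bootstrap.servers", "localhost:9092")
             .option("subscribe", "clicks")
             .load()
    )

    # Kafka keys arrive as bytes; cast to string and count records per key.
    counts = (
        clicks.selectExpr("CAST(key AS STRING) AS key")
              .groupBy("key")
              .count()
    )

    # Print the running counts to the console until interrupted.
    query = counts.writeStream.outputMode("complete").format("console").start()
    query.awaitTermination()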
Key Tags: ADLS, HDFS, Apache Spark, Python, Java, Scala, SQL, Data Architecture, Data Modeling, Data Engineering, Data Science, Data Lake, Storage, Cloud, AWS, Redshift, GCP, BigQuery, Azure, Azure Databricks, Azure Data Factory, Azure Synapse Analytics, Azure Stream Analytics, Talend, Pig, HBase, Kafka, Apache NiFi, Airflow, AI, ML, TensorFlow, PyTorch, Microsoft Power BI, Tableau, Looker, Retool, BI, Security, Compliance, Automation, Computer Science, Big Data.