eTeam
Job Title: Azure Data Engineer
Location: Washington, DC (Onsite)
Duration: 12 Months
Must Have Skills: • Azure Databricks • PowerBI
Detailed Job Description: Key Responsibilities:
• Monitor and troubleshoot Databricks jobs.
• Apply a strong understanding of database fundamentals, including relational database design and PL/SQL.
• Bring substantial hands-on experience and knowledge of PowerBI.
• Work closely with clients and support teams to review and manage deliverables.
• Design and optimize ETL workflows using Databricks and Azure Data Factory.
• Manage data ingestion.
• Maintain code versioning using DevOps and follow SDLC best practices.
• Perform root cause analysis and performance tuning using the Spark UI, DAGs, and logs.
Required Skills & Qualifications:
• Hands-on experience with Spark, Delta Lake, and PySpark.
• Proficiency in SQL, Python, and Azure cloud services.
• Experience with Azure Synapse and Blob Storage.
• Familiarity with CI/CD pipelines and monitoring tools.
• Proficiency in PowerBI and SQL or PL/SQL.
• Strong understanding of database design and data modeling.
• Excellent problem-solving skills and attention to detail.
• Strong communication, customer-facing, and team-collaboration skills.
• 5+ years of experience with Azure (ADF, Synapse) and Databricks in production environments.
Minimum Years of Experience: 8-10 years
Certifications Needed: BE
Top 3 responsibilities you would expect the Subcon to shoulder and execute:
• Hands-on experience with Spark, Delta Lake, and PySpark.
• Strong communication and customer-facing skills.
• Perform root cause analysis and performance tuning using the Spark UI, DAGs, and logs.