Purple Drive
AWS Databricks Developer
Location: Boston, MA (Onsite)
Type: Contract
Role Overview
We are seeking a highly experienced AWS Databricks Developer with strong expertise in cloud-based development, big data engineering, and advanced data modeling. The ideal candidate will have a proven track record of building scalable data pipelines, integrating AWS cloud services with Databricks, and delivering solutions in an agile environment.
Key Responsibilities
- Design, develop, and optimize data engineering pipelines using Databricks on AWS.
- Implement data integration solutions leveraging AWS cloud services.
- Build scalable and reliable data models using PySpark, Spark, and SQL.
- Develop solutions using Java, J2EE, Python, and Scala for complex data workflows.
- Write and optimize complex SQL queries for data extraction, transformation, and analysis.
- Collaborate with cross-functional teams in an Agile delivery environment.
- Apply DevOps best practices with tools such as Jenkins, Git, and CI/CD pipelines.
- Troubleshoot performance bottlenecks and ensure high-quality, reliable data solutions.
- Provide technical leadership and contribute to architectural discussions.

Required Skills & Experience
- 10+ years of overall IT experience.
- 5+ years of hands-on experience with Databricks development on AWS Cloud.
- Strong expertise in Java, J2EE, Python, and Scala.
- Solid understanding of Big Data frameworks, data modeling, and PySpark.
- Deep knowledge of Apache Spark technologies for building data pipelines.
- Strong proficiency in complex SQL queries and query optimization.
- Experience with Agile methodologies and team collaboration tools.
- Exposure to DevOps practices, including CI/CD pipelines, Jenkins, and Git.
- Excellent communication and problem-solving skills.

Tech Stack
Cloud: AWS (S3, Lambda, Glue, EMR, etc.)
Data Engineering: Databricks, PySpark, Spark, Scala
Programming: Java, J2EE, Python
Database: SQL (Advanced, Complex Queries)
Tools: Jenkins, Git, CI/CD