VySystems
Senior Data Engineer with Databricks and Python
Locations: Jersey City, NJ & Wilmington, DE (Day 1 onsite; local candidates to New Jersey & Delaware only)
Please share suitable resumes to ram.r@vysystems.com or via LinkedIn: https://www.linkedin.com/in/ramkumarjhen/.
Key Responsibilities
Design and build ETL/ELT pipelines using Databricks and PySpark.
Develop and maintain data models and data warehouse structures (dimensional modeling, star/snowflake schemas).
Optimize data workflows for performance, scalability, and cost.
Work with cloud platforms (Azure/AWS/GCP) for storage, compute, and orchestration.
Ensure data quality, reliability, and security across pipelines.
Collaborate with cross-functional teams (Data Science, BI, Product).
Write clean, reusable code and follow engineering best practices.
Troubleshoot issues in production data pipelines.
Required Skills
Strong hands‑on skills in Databricks, PySpark, and SQL.
Experience with data warehouse concepts, ETL frameworks, and batch/streaming pipelines.
Solid understanding of Delta Lake and Lakehouse architecture.
Experience with at least one cloud platform (Azure preferred).
Experience with workflow orchestration tools (Airflow, Azure Data Factory, Prefect, etc.).
Nice to Have
Experience with CI/CD for data pipelines.
Knowledge of data governance tools (Unity Catalog or similar).
Exposure to ML data preparation pipelines.
Soft Skills
Strong communication and documentation skills.
Ability to work independently and mentor others.
Problem‑solver with a focus on delivering business value.
Please share suitable resumes to ram.r@vysystems.com or via LinkedIn: https://www.linkedin.com/in/ramkumarjhen/.
Please attach your updated resume.
Kindly fill in the following details:
1. Years of experience:
2. Current location:
3. Updated resume attached:
Thanks & Regards, Ramkumar.R | Sr. Technical Recruiter
4701 Patrick Henry Drive Building 16, Santa Clara, CA 95054, USA.