Purple Drive
Job Description: Palantir Data Engineer

We are seeking a highly skilled Palantir Data Engineer to design, develop, and maintain scalable data solutions using Palantir Foundry and associated technologies. The ideal candidate will have hands-on experience in data modeling, pipeline building, and analytics enablement across large-scale enterprise environments.

Key Responsibilities:
- Develop, deploy, and optimize data pipelines and workflows within Palantir Foundry using Pipeline Builder and AIP.
- Collaborate with cross-functional teams to design data solutions supporting analytical and operational needs.
- Utilize Python, PySpark, and TypeScript to process, transform, and analyze large datasets.
- Write efficient SQL queries for data extraction, transformation, and validation.
- Manage version control and deployment processes using Git.
- Work with Apache Spark and Hadoop ecosystems for distributed data processing and optimization.
- Ensure data quality, governance, and performance tuning across the entire data lifecycle.

Required Skills & Experience:
- Strong expertise in Palantir Foundry and big data frameworks (Spark, Hadoop).
- Proficiency in Python, PySpark, TypeScript, and SQL.
- Hands-on experience with data integration, transformation, and orchestration.
- Excellent analytical, debugging, and problem-solving abilities.