Smart Bot Systems
Job Title: Senior Data Engineer with Databricks and Python Experience (5)
Location: Plano, TX and Wilmington, DE (5 days onsite; no hybrid)
Salary: $130-135K plus benefits
Job Description
We are looking for a Senior Data Engineer with strong experience in Databricks, PySpark, and modern data warehouse systems. The ideal candidate can design, build, and optimize scalable data pipelines and work closely with analytics, product, and engineering teams.
Key Responsibilities
Design and build ETL/ELT pipelines using Databricks and PySpark (see the sketch after this list)
Develop and maintain data models and data warehouse structures (dimensional modeling, star/snowflake schemas)
Optimize data workflows for performance, scalability, and cost
Work with cloud platforms (Azure/AWS/GCP) for storage, compute, and orchestration
Ensure data quality, reliability, and security across pipelines
Collaborate with cross-functional teams (Data Science, BI, Product)
Write clean, reusable code and follow engineering best practices
Troubleshoot issues in production data pipelines
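To make the first responsibility concrete, here is a minimal sketch of the kind of Databricks-style PySpark ETL job this role involves. It is illustrative only: the table names (raw_orders, curated_orders) and columns (order_id, order_total, order_ts) are hypothetical, and it assumes a SparkSession and Delta table support as provided on Databricks.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw data (hypothetical source table)
raw = spark.read.table("raw_orders")

# Transform: deduplicate, filter bad rows, derive a date column
curated = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_total") > 0)
       .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write out as a Delta table (the default format on Databricks)
curated.write.format("delta").mode("overwrite").saveAsTable("curated_orders")
```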
Required Skills
Strong hands-on skills in Databricks, PySpark, and SQL
Experience with data warehouse concepts, ETL frameworks, and batch/streaming pipelines
Solid understanding of Delta Lake and Lakehouse architecture (see the upsert sketch after this list)
Experience with at least one cloud platform (Azure preferred)
Experience with workflow orchestration tools (Airflow, ADF, Prefect, etc.)
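For the Delta Lake/Lakehouse skill above, a minimal upsert (MERGE) sketch in PySpark, one of the core Lakehouse patterns. It assumes the delta-spark package, a running SparkSession, the hypothetical curated_orders table from the previous sketch, and a hypothetical orders_daily_feed source:

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

target = DeltaTable.forName(spark, "curated_orders")
updates = spark.read.table("orders_daily_feed")

# Upsert: update rows whose order_id already exists, insert the rest
(target.alias("t")
    .merge(updates.alias("u"), "t.order_id = u.order_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())
```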
Nice to Have
Experience with CI/CD for data pipelines
Knowledge of data governance tools (Unity Catalog or similar)
Exposure to ML data preparation pipelines
Soft Skills
Strong communication and documentation skills
Ability to work independently and mentor others
Problem-solver with a focus on delivering business value
Key Skills: Apache Hive, S3, Hadoop, Redshift, Spark, AWS, Apache Pig, NoSQL, Big Data, Data Warehouse, Kafka, Scala