Dexian DISYS
Data Engineer
On site 4 days a week in Dearborn, MI
W2 Only - no sponsorship offered for this position
Python, Cloud (preferably GCP), SQL, Apache Spark, Kafka, Data Mapping & Cataloging, CI/CD
Dexian is seeking a Data Engineer to design, build, and maintain scalable data infrastructure and pipelines in support of advanced product development and AI-driven initiatives. This is a hands-on, operations-focused backend role. The engineer will "hold the fort" operationally - ensuring pipelines are healthy, data is mapped and cataloged, and the right data is in the right place. You'll work closely with another data engineer who is currently writing APIs, mapping data elements, and building advanced Python scripts to normalize and pipeline data. This role ensures the ongoing health and scalability of pipelines while also enabling Dexian to industrialize AI applications and scale innovative models into enterprise-ready solutions.

Key Responsibilities
- Collaborate with business, technology, and AI/ML teams to define data requirements and delivery standards
- Partner with another data engineer to manage backend operations, pipeline stability, and scaling
- Write advanced Python and SQL scripts for normalization, transformation, and cataloging of data (a minimal sketch follows this list)
- Design, build, and maintain cloud-native ETL pipelines (Kafka, Spark, Beam, GCP Dataflow, Pub/Sub, EventArc)
- Architect and implement data warehouses and unified data models to integrate siloed data sources
- Perform data mapping and cataloging to ensure accuracy, traceability, and consistency
- Automate pipeline orchestration, event-driven triggers, and infrastructure provisioning
- Troubleshoot and optimize data workflows for performance and scalability
- Support CI/CD processes with Git/GitHub and Cloud Build
- Work cross-functionally to integrate new applications into existing data models and scale them effectively
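For illustration only, here is a minimal sketch of the kind of normalization and cataloging script this role involves. The file names, the event_ts column, and the catalog layout are hypothetical, and it assumes pandas (plus pyarrow for Parquet output) is available; it is not Dexian's actual codebase.

    # Illustrative sketch only: normalize a raw extract and record a catalog entry.
    # File names, column names, and the catalog layout are hypothetical.
    import json
    import pandas as pd

    def normalize(df: pd.DataFrame) -> pd.DataFrame:
        """Standardize column names, trim string fields, and coerce timestamps."""
        df = df.rename(columns=lambda c: c.strip().lower().replace(" ", "_"))
        for col in df.select_dtypes(include="object").columns:
            df[col] = df[col].str.strip()
        if "event_ts" in df.columns:  # hypothetical timestamp column
            df["event_ts"] = pd.to_datetime(df["event_ts"], errors="coerce", utc=True)
        return df

    def catalog_entry(df: pd.DataFrame, source: str) -> dict:
        """Build a lightweight mapping/catalog record for traceability."""
        return {
            "source": source,
            "row_count": int(len(df)),
            "columns": {col: str(dtype) for col, dtype in df.dtypes.items()},
        }

    if __name__ == "__main__":
        raw = pd.read_csv("vehicle_telemetry.csv")  # hypothetical source extract
        clean = normalize(raw)
        clean.to_parquet("vehicle_telemetry.parquet", index=False)  # requires pyarrow or fastparquet
        with open("catalog.json", "w") as fh:
            json.dump(catalog_entry(clean, "vehicle_telemetry.csv"), fh, indent=2)

In practice, logic like this would typically run inside a Beam/Dataflow or Spark job rather than as a standalone script.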
Must-Have Skills
Candidates must bring the following as a complete package:
- Advanced Python (scripting, backend operations, transformations)
- Advanced SQL (complex queries, backend optimization)
- Apache Spark/Kafka (large-scale ETL/data processing)
- Cloud experience (GCP preferred, but AWS or Azure equally acceptable if adaptable)
- Data mapping and cataloging (governance, traceability, accuracy)
- Event-driven pipeline design (e.g., EventArc, Pub/Sub, AWS equivalents; see the sketch after this list)
- Data warehouse design & cloud-native storage (BigQuery, Snowflake, Redshift)
- CI/CD pipeline tools (GitHub, Cloud Build, or equivalents)
- Familiarity with data governance and orchestration practices
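For the event-driven pipeline item above, a minimal Pub/Sub pull-subscriber sketch is shown below. It assumes the google-cloud-pubsub client library; PROJECT_ID, SUBSCRIPTION_ID, and the handler body are hypothetical placeholders, and the handler simply acknowledges messages rather than triggering a real downstream job.

    # Illustrative sketch only: consume events from a Pub/Sub subscription.
    # PROJECT_ID and SUBSCRIPTION_ID are hypothetical placeholders.
    from concurrent.futures import TimeoutError
    from google.cloud import pubsub_v1

    PROJECT_ID = "example-project"
    SUBSCRIPTION_ID = "ingest-events"

    def handle(message: pubsub_v1.subscriber.message.Message) -> None:
        # A real pipeline trigger would launch a Dataflow job or a transformation step here.
        print(f"Received event: {message.data!r}")
        message.ack()

    def main() -> None:
        subscriber = pubsub_v1.SubscriberClient()
        subscription_path = subscriber.subscription_path(PROJECT_ID, SUBSCRIPTION_ID)
        streaming_pull = subscriber.subscribe(subscription_path, callback=handle)
        print(f"Listening on {subscription_path} ...")
        with subscriber:
            try:
                streaming_pull.result(timeout=60)  # listen for up to 60 seconds
            except TimeoutError:
                streaming_pull.cancel()            # stop pulling
                streaming_pull.result()            # wait for shutdown to complete

    if __name__ == "__main__":
        main()

In production, the same subscription would more likely drive a Dataflow or Cloud Run worker than a long-running script.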
Nice-to-Have Skills
(Not required, but will help a candidate stand out):
- Java or PowerShell scripting
- Awareness of ML/AI concepts - curiosity and willingness to learn are valued

Experience & Education
- ~5 years of data engineering experience
- Bachelor's Degree in Computer Science, Data Engineering, or related field (Master's in Data Science is a plus)
- Industry background flexible - non-automotive engineers can succeed if able to adapt quickly

Personality & Team Fit
- Ownership mentality: treats projects like their own, takes initiative without reminders
- Curious & proactive: stays current with new technologies, eager to learn
- Positive & collaborative: fits into a young, energetic team; keeps the atmosphere light but focused
- Problem solver: navigates cross-functional challenges and proposes actionable solutions
- Willing to put in extra effort (early/late work when needed) to help the team succeed

Dexian is an Equal Opportunity Employer that recruits and hires qualified candidates without regard to race, religion, sex, sexual orientation, gender identity, age, national origin, ancestry, citizenship, disability, or veteran status.