Jobs via Dice
Data Solutions Architect (Local Candidates Required)
Jobs via Dice, Phoenix, Arizona, United States, 85003
Our client, TexcelVision Inc., is seeking a Data Solutions Architect to contribute to the design, modernization, and optimization of enterprise‑scale data systems, as well as to the maintenance and operations strategy for CHP.
Overview
The architect will design and implement data systems that organize, store, and manage data within our cloud data platform. Responsibilities include continuous maintenance and operations work in the cloud; reviewing and analyzing CHP’s data infrastructure; planning future database solutions; and implementing systems to support data management for CHP users. The role ensures data integrity and adherence to data governance standards by identifying and resolving data discrepancies and quality issues.
Location: Phoenix, AZ
Key Responsibilities
Design scalable data lake and data architectures using Databricks and cloud‑native services.
Develop metadata‑driven, parameterized ingestion frameworks and multi‑layer data architectures (a sketch of the ingestion pattern follows this list).
Optimize data workloads and performance.
Define data governance frameworks for CHP.
Design and develop robust data pipelines.
Architect AI systems, including retrieval‑augmented generation (RAG) workflows and prompt engineering (a minimal RAG sketch also follows this list).
Lead cloud migration initiatives from legacy systems to modern data platforms.
Provide architectural guidance, best practices, and technical leadership across teams.
Build documentation, reusable modules, and standardized patterns.
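
For illustration only, a minimal Python sketch of the metadata‑driven, parameterized ingestion pattern this role calls for. The config fields, function names, and sample sources below are hypothetical; a production version would typically run as parameterized Databricks or ADF jobs rather than plain Python.

from dataclasses import dataclass, field

@dataclass
class SourceConfig:
    # Hypothetical metadata record describing one source system.
    name: str
    path: str
    fmt: str          # e.g. "csv" or "json"
    target_table: str
    options: dict = field(default_factory=dict)

def ingest(cfg: SourceConfig) -> None:
    # Stand-in for the real load; on Databricks this would be something
    # like spark.read.format(cfg.fmt).options(**cfg.options).load(cfg.path).
    print(f"Loading {cfg.name} from {cfg.path} as {cfg.fmt} into {cfg.target_table}")

# The framework iterates over a metadata registry instead of hard-coding
# one pipeline per source; adding a source means adding a config row.
registry = [
    SourceConfig("claims", "/landing/claims/*.csv", "csv", "bronze.claims", {"header": "true"}),
    SourceConfig("members", "/landing/members/*.json", "json", "bronze.members"),
]

for cfg in registry:
    ingest(cfg)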
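And a toy, in‑memory sketch of the RAG workflow responsibility, assuming the usual retrieve‑then‑prompt shape. The keyword‑overlap scoring is a stand‑in for vector similarity, and the documents are invented; a real build would use a vector store and an LLM endpoint such as Azure OpenAI.

def retrieve(question: str, documents: list, k: int = 2) -> list:
    # Naive keyword-overlap scoring; a real system would use embeddings.
    q_terms = set(question.lower().split())
    scored = sorted(documents, key=lambda d: len(q_terms & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(question: str, context: list) -> str:
    # Prompt-engineering step: ground the model in the retrieved context.
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {question}"

docs = [
    "Claims data lands in the bronze layer.",
    "Member records are refreshed nightly.",
    "Ingestion jobs are parameterized via metadata.",
]
question = "Where does claims data land?"
print(build_prompt(question, retrieve(question, docs)))
# The assembled prompt would then be sent to the LLM endpoint.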
Required Skills and Experience
Strong expertise in cloud platforms, primarily Azure or AWS.
Hands‑on experience with Databricks.
Deep proficiency in Python and SQL.
Expertise in building ETL/ELT pipelines and Azure Data Factory (ADF) workflows.
Experience architecting data lakes and implementing data governance frameworks.
Hands‑on experience with CI/CD, DevOps, and Git‑based development.
Ability to translate business requirements into technical architecture.
Technical Expertise
Programming: Python, SQL, R
Big Data: Hadoop, Spark, Kafka, Hive
Cloud Platforms: Azure (ADF, Databricks, Azure OpenAI), AWS
Data Warehousing: Redshift, SQL Server
ETL/ELT Tools: SSIS
Required Education and Experience
Bachelor's degree in Computer Science, Information Technology, Information Systems, Engineering, or a related field.
6+ years of experience in data engineering or .NET development.
Seniority Level: Mid‑Senior level
Employment Type: Full‑time
Job Function: Engineering and Information Technology