NovaLink Solutions
Data Architect – Local to Arizona Candidates Only
NovaLink Solutions, Phoenix, Arizona, United States, 85003
Key Responsibilities
Design scalable data lake and data platform architectures using Databricks and cloud-native services.
Develop metadata‑driven, parameterized ingestion frameworks and multi‑layer data architectures.
Optimize data workloads and performance.
Define data governance frameworks for CHP.
Design and develop robust data pipelines.
Architect AI systems, including RAG workflows and prompt engineering.
Lead cloud migration initiatives from legacy systems to modern data platforms.
Provide architectural guidance, best practices, and technical leadership across teams.
Build documentation, reusable modules, and standardized patterns.
Required Skills and Experience
Strong expertise in cloud platforms, primarily Azure or AWS.
Hands‑on experience with Databricks.
Deep proficiency in Python and SQL.
Expertise in building ETL/ELT pipelines and ADF workflows.
Experience architecting data lakes and implementing data governance frameworks.
Hands‑on experience with CI/CD, DevOps, and Git‑based development.
Ability to translate business requirements into technical architecture.
Technical Expertise
Programming: Python, SQL, R
Big Data: Hadoop, Spark, Kafka, Hive
Cloud Platforms: Azure (ADF, Databricks, Azure OpenAI), AWS
Data Warehousing: Redshift, SQL Server
ETL/ELT Tools: SSIS