Augusta Hitech

Data Architect

Augusta Hitech, Plano, Texas, US 75086

We are seeking an experienced Data Architect to design and lead enterprise-level data solutions that support advanced analytics, business intelligence, and AI initiatives. The ideal candidate will bring deep expertise in data warehousing, big data engineering, and cloud architecture, ensuring the organization’s data ecosystem is scalable, secure, and future-ready.

Key Responsibilities

Design and maintain the enterprise data architecture blueprint to support analytics, reporting, and data science initiatives.

Lead the development of data models, pipelines, and integration frameworks across on‑prem and cloud environments.

Define and enforce data standards, governance, and metadata management policies.

Design scalable ETL/ELT pipelines using tools such as Spark, Hive, SQL, and Databricks.

Implement best practices in data warehousing, data lakes, and data marts for performance and reliability.

Architect and manage data solutions in cloud environments (Azure, AWS, GCP).

Lead data platform modernization and migration initiatives from legacy systems to cloud‑native architectures.

Partner with engineering teams to enable real‑time data processing and streaming architectures (Kafka, Spark Streaming).

Support the adoption of AI/ML capabilities through modernized and well‑structured data pipelines.

Work closely with business, engineering, and analytics teams to translate business needs into technical data solutions.

Collaborate with enterprise architects to align data architecture with overall IT strategy.

Provide technical leadership to data engineers, analysts, and developers.

Ensure data solutions are designed for scalability, quality, and compliance.

Qualifications

Required

Bachelor’s or Master’s degree in Computer Science, Engineering, or related discipline.

10+ years of experience in data architecture, data engineering, or enterprise data solutions.

Hands‑on experience with SQL, Python, Spark, Hive, Hadoop, and ETL frameworks.

Strong understanding of data warehousing concepts and data modeling techniques (dimensional, relational, and NoSQL).

Experience with cloud data ecosystems (Azure Data Factory, Databricks, AWS Glue, GCP BigQuery).

Familiarity with data governance, data quality, and metadata tools such as Apache Atlas or Informatica.

Preferred

Certification in Databricks, Azure Data Engineer, or AWS Data Architect.

Experience with AI/ML data pipelines and modern DevOps practices.

Knowledge of data visualization and BI tools (Looker, Power BI, Tableau).

Strong analytical, leadership, and communication skills.

Key Skills

Data Modeling

Data Warehousing

Databricks

Spark

Python

Hive

ETL/ELT

Data Governance

SQL

Data Integration

Metadata Management

Analytics Enablement

Seniority level: Mid‑Senior level

Employment type: Contract

Job function: Engineering and Information Technology

Industries: IT Services and IT Consulting
