Agile Resources, Inc.

Data Architect (Atlanta)

Agile Resources, Inc., Atlanta, Georgia, United States, 30383

Location:

Hybrid Remote in Atlanta (3 days onsite/week).

Note:

100% onsite required for the first six months.

Employment Type:

Permanent / Direct Hire / Full-time

Salary:

Up to $180,000 (depending on experience) + bonus

The Role:

We're seeking a highly skilled and hands-on Data Architect to lead the design, implementation, and ongoing evolution of our enterprise-grade data systems. This role is crucial for building scalable, secure, and intelligent data infrastructure that supports core analytics, operational excellence, and future AI initiatives. Success requires a seasoned technologist who can seamlessly integrate cloud-native services with traditional data warehousing to create a modern, unified data platform.

What You'll Do:

Architecture & Strategy: Lead the design and implementation of modern data platforms, including Data Lakes, Data Warehouses, and Lakehouse architectures, to enable a single source of truth for the enterprise.
Data Modeling & Integration: Architect unified data models that support both modular monoliths and microservices-based platforms. Design and optimize high-volume, low-latency streaming/batch ETL/ELT pipelines.
Technical Leadership: Drive the technical execution across the entire data lifecycle. Build and optimize core data processing scripts using Spark and Python (see the pipeline sketch after this list).
Governance & Quality: Define and enforce standards for data governance, metadata management, and data observability across distributed systems. Implement automated data lineage tracking, schema evolution, and data quality monitoring.
Cloud Infrastructure: Configure and manage cloud-native data services, including core data storage and event ingestion infrastructure.
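
To give a concrete flavor of the hands-on work described above, here is a minimal batch ETL sketch in PySpark that lands raw files as a Delta table. The storage path, column names, and table name are illustrative assumptions, not details of the actual platform.

    # Minimal batch ETL sketch: read raw files from the lake, apply light
    # cleansing, and publish the result as a Delta table.
    # All paths and names below are illustrative assumptions.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders_bronze_to_silver").getOrCreate()

    # Read raw landing-zone files (hypothetical ADLS path).
    raw = (
        spark.read.option("header", "true")
        .csv("abfss://landing@examplelake.dfs.core.windows.net/orders/")
    )

    # Basic cleansing: type the amount, drop rows missing the key, deduplicate.
    cleaned = (
        raw.withColumn("order_amount", F.col("order_amount").cast("double"))
        .dropna(subset=["order_id"])
        .dropDuplicates(["order_id"])
    )

    # Publish as a Delta table so downstream consumers get ACID reads.
    cleaned.write.format("delta").mode("overwrite").saveAsTable("silver.orders")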

Required Experience:

Experience: 10+ years of proven experience in enterprise data architecture and engineering.
Core Platform Expertise: Strong, hands-on experience with the Azure Data Ecosystem, including Azure Data Lake Storage (ADLS), Azure Synapse Analytics (or an equivalent cloud data warehouse), and Azure Purview (or an equivalent data catalog).
Processing: Deep expertise in Databricks (or Apache Spark) for ETL/ELT pipeline implementation, using Delta Lake and SQL Server (or an equivalent RDBMS).
Coding & Scripting: Strong proficiency in Python, Spark, and advanced SQL.
Data Governance: Hands-on experience implementing data lineage tracking and data quality monitoring, e.g., using Great Expectations or dbt (a framework-agnostic sketch of such a check follows this list).
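
Data quality monitoring of the kind described above can start as a few hard checks run after each load. Below is a minimal, framework-agnostic sketch in PySpark; in practice a tool such as Great Expectations or dbt tests would own these rules, and the table name and rules here are illustrative assumptions.

    # Minimal, framework-agnostic data quality check in PySpark.
    # The table name and the rules themselves are illustrative assumptions.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders_quality_check").getOrCreate()

    df = spark.table("silver.orders")  # hypothetical table from the ETL step

    metrics = df.agg(
        F.count("*").alias("row_count"),
        F.coalesce(F.sum(F.col("order_id").isNull().cast("int")), F.lit(0))
            .alias("null_order_ids"),
        F.coalesce(F.sum((F.col("order_amount") < 0).cast("int")), F.lit(0))
            .alias("negative_amounts"),
    ).first()

    # Surface the metrics, then fail the run if any hard rule is violated.
    print(metrics.asDict())
    assert metrics["row_count"] > 0, "silver.orders is empty"
    assert metrics["null_order_ids"] == 0, "found rows with a null order_id"
    assert metrics["negative_amounts"] == 0, "found negative order amounts"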

Preferred Skills:

Semantic Technologies: Hands-on experience developing ontology frameworks using OWL, RDF, and SPARQL to enable semantic interoperability (see the sketch after this list).
Advanced AI Data: Experience integrating structured/unstructured data into Knowledge Graphs and Vector Databases.
Streaming/Telemetry: Experience developing and maintaining semantic telemetry pipelines using services like Azure Event Hubs or Kafka.
Emerging Concepts: Exposure to linked data ecosystems, data mesh, or data fabric concepts.
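
As a small illustration of the semantic-technology skills listed above, the sketch below builds a few RDF triples with rdflib and queries them with SPARQL. The ontology namespace, classes, and properties are illustrative assumptions.

    # Minimal rdflib sketch: model a dataset and its owning domain as RDF
    # triples, then query them with SPARQL. The namespace, classes, and
    # properties are illustrative assumptions.
    from rdflib import Graph, Literal, Namespace, RDF

    EX = Namespace("http://example.org/ontology/")

    g = Graph()
    g.bind("ex", EX)

    orders = EX.OrdersDataset
    g.add((orders, RDF.type, EX.Dataset))
    g.add((orders, EX.ownedBy, EX.SalesDomain))
    g.add((orders, EX.hasLabel, Literal("Orders (silver)")))

    # Which datasets does the Sales domain own?
    query = """
        PREFIX ex: <http://example.org/ontology/>
        SELECT ?dataset ?label WHERE {
            ?dataset a ex:Dataset ;
                     ex:ownedBy ex:SalesDomain ;
                     ex:hasLabel ?label .
        }
    """

    for row in g.query(query):
        print(row.dataset, row.label)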