1872 Consulting

Data Architect

1872 Consulting, Chicago, Illinois, United States, 60290


Data Architect, Chicago, IL (hybrid: 3 days onsite in the Loop, 2 days work from home)

Summary

We're looking for a Data Architect with a background in Azure, Databricks, Snowflake, and Python, the primary tech stack we use to design, build, and optimize data pipelines. This role is foundational to our ability to deliver best-in-class AI models, machine learning, analytics, and BI: our industry-leading real estate data platform powers critical business decisions in our space.

What you'll be doing

- Design, develop, and maintain robust data pipelines using Databricks to process and transform large-scale datasets
- Write efficient, reusable, and scalable SQL and/or Python code for data ingestion, transformation, and integration
- Optimize data workflows for performance, reliability, and cost-efficiency in cloud environments
- Collaborate with data scientists, analysts, and stakeholders to understand data requirements and deliver solutions
- Implement data governance, security, and compliance best practices within data pipelines
- Monitor and troubleshoot data pipeline issues, ensuring high availability and data integrity
- Leverage Databricks for advanced analytics, machine learning workflows, and real-time data processing
- Integrate data from various sources (APIs, databases, blob storage, SFTP, streams) into the data lake for centralized storage and querying
- Mentor junior data engineers and contribute to team knowledge sharing
- Stay current on emerging data technologies and recommend improvements to existing systems
- Tune and optimize all data ingestion and data integration processes, including the data platform and databases

Skills we're seeking

7+ years of experience with Data Architecture / Data Engineering

Must have strong experience designing, building and optimizing data pipelines

2+ years of experience specifically as a Data Architect

Must have strong experience with Azure

Must have strong experience with Databricks

Must have strong experience with Snowflake

Must have advanced proficiency with SQL

Must have data modeling experience

Understanding of data schema modeling (dimensions, measures, slowly changing dimensions) and of the Inmon vs. Kimball approaches (3NF vs. star schema)

Must have excellent communication skills

Nice-to-haves (in this order)

- Strong Python experience, or other programming experience (Python highly preferred), for data processing, scripting, and automation
- Bachelor's or Master's degree in Computer Science or a related IT or data field
- Experience with Azure Durable Functions
- Experience with Spark
- Experience with DuckDB
- Experience with Parquet, Delta Lake, and/or Iceberg formats
- Experience with ELT and/or Bronze, Silver, Gold transformation layers
- Experience with data governance