CloudTech Innovations
Overview
We are seeking a seasoned Solution Architect with a strong background in designing and implementing scalable data and cloud architectures. The ideal candidate will have hands-on experience with Databricks, modern data platforms, and enterprise-grade cloud solutions. You’ll collaborate with cross-functional teams to define technical roadmaps, architect high-performing solutions, and drive innovation across analytics and data-driven platforms.
Responsibilities
Design and lead implementation of end-to-end cloud-native data platforms leveraging tools such as Databricks, Delta Lake, and MLflow.
Define architecture for large-scale ETL/ELT pipelines, data lakes, and real-time/streaming data solutions.
Collaborate with data engineers, data scientists, and stakeholders to convert business goals into scalable technical solutions.
Integrate Databricks notebooks, Apache Spark, and cloud-native services (e.g., AWS Glue, Azure Data Factory) for batch and real-time data processing (a brief pipeline sketch follows this list).
Implement data governance and security using tools such as Unity Catalog, IAM, and encryption at rest and in transit.
Define integration patterns using REST APIs, event-driven messaging (e.g., Kafka or Google Cloud Pub/Sub), and distributed systems design.
Participate in architectural reviews and performance tuning across distributed compute frameworks.
Stay current with emerging tools and technologies in data architecture, cloud infrastructure, and MLOps.
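For illustration only, here is a minimal sketch of the kind of batch-plus-streaming Delta Lake pipeline the responsibilities above describe. Every storage path, broker address, topic name, and the event schema below are hypothetical placeholders, not details of this role.

from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

# On Databricks a SparkSession named `spark` already exists; building one here
# keeps the sketch self-contained outside a notebook.
spark = SparkSession.builder.appName("lakehouse-sketch").getOrCreate()

# Hypothetical event schema, for illustration only.
event_schema = StructType([
    StructField("user_id", StringType()),
    StructField("action", StringType()),
    StructField("ts", TimestampType()),
])

# Batch path: land raw JSON from object storage into a bronze Delta table.
(spark.read.schema(event_schema)
    .json("s3://example-bucket/raw/events/")            # hypothetical path
    .withColumn("ingest_date", F.current_date())
    .write.format("delta")
    .mode("append")
    .save("s3://example-bucket/bronze/events/"))        # hypothetical path

# Streaming path: feed the same Delta table in near real time from Kafka
# (requires the spark-sql-kafka connector on the cluster).
raw = (spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
    .option("subscribe", "events")                      # hypothetical topic
    .load())

parsed = (raw
    .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*"))

(parsed.writeStream.format("delta")
    .option("checkpointLocation", "s3://example-bucket/_chk/events/")  # hypothetical
    .outputMode("append")
    .start("s3://example-bucket/bronze/events/"))

Because Delta tables provide ACID guarantees, batch and streaming writers can share the same target table, which is one reason Delta Lake appears throughout this posting.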
Required Qualifications
Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.
10+ years of experience in enterprise software or data architecture roles.
Strong hands-on experience with Databricks, Apache Spark, and Delta Lake.
Proficiency in at least one cloud platform (AWS, Azure, or GCP), with experience in services like S3, ADLS, BigQuery, or Redshift.
Familiarity with streaming platforms such as Kafka, Kinesis, or Azure Event Hubs.
Experience designing and deploying data lakehouses or analytics platforms.
Strong understanding of data modeling, data governance, and pipeline orchestration (Airflow, dbt, or similar; see the sketch after this list).
Skilled in performance optimization, data security best practices, and cloud cost management.
Excellent communication skills for stakeholder management and cross-functional collaboration.
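As a minimal sketch of the pipeline-orchestration experience called out above, the Airflow DAG below triggers an existing Databricks job nightly using the Databricks provider package (apache-airflow-providers-databricks). The DAG id, connection id, job id, and schedule are illustrative assumptions, not details from this posting.

from datetime import datetime

from airflow import DAG
from airflow.providers.databricks.operators.databricks import DatabricksRunNowOperator

# Hypothetical nightly refresh; all identifiers and the schedule are placeholders.
with DAG(
    dag_id="nightly_lakehouse_refresh",
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",   # 02:00 daily (Airflow 2.4+ `schedule` argument)
    catchup=False,
) as dag:
    DatabricksRunNowOperator(
        task_id="refresh_bronze",
        databricks_conn_id="databricks_default",  # hypothetical connection id
        job_id=12345,                             # hypothetical Databricks job id
    )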
Preferred Skills
Certifications in Databricks, AWS/Azure/GCP Solution Architecture, or TOGAF.
Knowledge of ML/AI workflows, model versioning, and MLOps practices.
Experience with Unity Catalog, Great Expectations, or data quality frameworks.
Prior work in regulated environments (e.g., healthcare, finance, insurance) is a plus.
Details
Seniority level: Mid-Senior level
Location: Remote
Employment Type: Contract
Job Function: Engineering and Information Technology
Industries: IT Services and IT Consulting