Argyll Infotech Enterprise Pvt Ltd
Databricks Engineer
Argyll Infotech Enterprise Pvt Ltd, Baltimore, Maryland, United States, 21276
This pay range is provided by Argyll Infotech Enterprise Pvt Ltd. Your actual pay will be based on your skills and experience; talk with your recruiter to learn more.
Base pay: $60.00/hr
Job Role: Databricks Engineer
Location: Maryland
Client: University of Maryland Global Campus
We are seeking a Databricks Engineer to design, build, and operate a Data & AI platform with a strong foundation in the Medallion Architecture (raw/bronze, curated/silver, and mart/gold layers). This platform will orchestrate complex data workflows and scalable ELT pipelines to integrate data from enterprise systems such as PeopleSoft, D2L, and Salesforce, delivering high-quality, governed data for machine learning, AI/BI, and analytics at scale.
You will play a critical role in engineering the infrastructure and workflows that enable seamless data flow across the enterprise, ensure operational excellence, and provide the backbone for strategic decision-making, predictive modeling, and innovation.
Responsibilities
Data & AI Platform Engineering (Databricks-Centric):
Design, implement, and optimize end-to-end data pipelines on Databricks, following the Medallion Architecture principles.
Build robust and scalable ETL/ELT pipelines using Apache Spark and Delta Lake to transform raw (bronze) data into trusted curated (silver) and analytics-ready (gold) data layers, as in the sketch after this list.
Operationalize Databricks workflows for orchestration, dependency management, and pipeline automation.
Apply schema evolution and data versioning to support agile data development.
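A minimal sketch of such a bronze-to-silver promotion, assuming PySpark on Databricks (where a SparkSession named spark is predefined); the table names bronze.raw_events and silver.events and the columns event_id and event_ts are hypothetical:

from pyspark.sql import functions as F

# Read the raw (bronze) layer as landed, without transformation.
bronze_df = spark.read.table("bronze.raw_events")

# Promote to silver: de-duplicate, enforce types, drop unusable rows.
silver_df = (
    bronze_df
    .dropDuplicates(["event_id"])
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .filter(F.col("event_id").isNotNull())
)

# Delta Lake records every table version (time travel) for data versioning,
# and mergeSchema tolerates additive schema evolution from upstream sources.
(silver_df.write
    .format("delta")
    .mode("append")
    .option("mergeSchema", "true")
    .saveAsTable("silver.events"))

Gold tables are built the same way from silver, typically as aggregates shaped for a specific analytics or BI use.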
Platform Integration & Data Ingestion:
Connect and ingest data from enterprise systems such as PeopleSoft, D2L, and Salesforce using APIs, JDBC, or other integration frameworks.
Implement connectors and ingestion frameworks that accommodate structured, semi-structured, and unstructured data.
Design standardized data ingestion processes with automated error handling, retries, and alerting, as in the sketch below.
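For illustration, a JDBC pull into the bronze layer with basic retry and alerting hooks might look like the following sketch; the JDBC URL, secret scope, and table names are placeholders, not actual PeopleSoft endpoints:

import time

def ingest_jdbc(source_table: str, target_table: str, retries: int = 3) -> None:
    """Load one source table into a bronze Delta table, retrying on failure."""
    for attempt in range(1, retries + 1):
        try:
            df = (spark.read.format("jdbc")
                  .option("url", "jdbc:oracle:thin:@//src-host:1521/SVC")        # placeholder URL
                  .option("dbtable", source_table)
                  .option("user", dbutils.secrets.get("ingest", "db-user"))      # Databricks secrets,
                  .option("password", dbutils.secrets.get("ingest", "db-pass"))  # never hard-coded
                  .load())
            df.write.format("delta").mode("append").saveAsTable(target_table)
            return
        except Exception:
            if attempt == retries:
                raise  # let the job fail so workflow-level alerting fires
            time.sleep(30 * attempt)  # back off before the next attempt

ingest_jdbc("PS_STDNT_ENRL", "bronze.peoplesoft_enrollments")  # hypothetical names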
Data Quality, Monitoring, and Governance:
Develop data quality checks, validation rules, and anomaly detection mechanisms to ensure data integrity across all layers; a sketch of one such pattern follows this list.
Integrate monitoring and observability tools (e.g., Databricks metrics, Grafana) to track ETL performance, latency, and failures.
Implement Unity Catalog or equivalent tools for centralized metadata management, data lineage, and governance policy enforcement.
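A simple validate-and-quarantine pattern for those checks, sketched in PySpark (the rules and table names are invented for illustration):

from pyspark.sql import functions as F

df = spark.read.table("silver.events")  # hypothetical silver table

# Declarative validity rules; failing rows are quarantined, never dropped silently.
rules = F.col("event_id").isNotNull() & (F.col("event_ts") <= F.current_timestamp())

valid = df.filter(rules)
invalid = df.filter(~rules)

valid.write.format("delta").mode("append").saveAsTable("gold.events")
invalid.write.format("delta").mode("append").saveAsTable("quarantine.events")

# Surface a metric a dashboard or alert rule (e.g., in Grafana) can watch.
print(f"quality_check failed_rows={invalid.count()} total_rows={df.count()}")

# Delta can also enforce rules at write time via table constraints:
spark.sql("ALTER TABLE gold.events ADD CONSTRAINT event_id_present CHECK (event_id IS NOT NULL)")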
Security, Privacy, and Compliance:
Enforce data security best practices including row-level security, encryption at rest/in transit, and fine-grained access control via Unity Catalog (see the sketch after this list).
Design and implement data masking, tokenization, and anonymization for compliance with privacy regulations (e.g., GDPR, FERPA).
Work with security teams to audit and certify compliance controls.
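As one concrete shape these controls can take, the sketch below issues Unity Catalog row-filter and column-mask DDL from Python; the catalog, schema, table, column, and group names (main.gov, main.gold.students, ssn, data_admins) are all hypothetical:

# Column mask: only members of a privileged group see raw SSNs.
spark.sql("""
    CREATE OR REPLACE FUNCTION main.gov.ssn_mask(ssn STRING)
    RETURN CASE WHEN is_account_group_member('data_admins')
                THEN ssn ELSE '***-**-****' END
""")
spark.sql("ALTER TABLE main.gold.students ALTER COLUMN ssn SET MASK main.gov.ssn_mask")

# Row filter: non-admins only see rows outside the restricted cohort.
spark.sql("""
    CREATE OR REPLACE FUNCTION main.gov.cohort_filter(cohort STRING)
    RETURN is_account_group_member('data_admins') OR cohort <> 'restricted'
""")
spark.sql("ALTER TABLE main.gold.students SET ROW FILTER main.gov.cohort_filter ON (cohort)")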
Seniority level: Entry level
Employment type: Contract
Job function: Engineering and Information Technology
Industries: Software Development