Pransu Tech Solutions
Ten or more years of experience in AI, Data Science, or Software Engineering, including knowledge of the data ecosystem
Bachelor’s degree in Computer Science, Information Systems, or another related field, or equivalent related work experience
Data Modeling: Expertise in designing and implementing data models optimized for storage, retrieval, and analytics within Databricks on AWS, including conceptual, logical, and physical data modeling
Databricks Proficiency: In-depth knowledge and hands-on experience with AWS Databricks platform, including Databricks SQL, Runtime, clusters, notebooks, and integrations.
ELT/ETL Processes: Proficiency in developing pipelines to extract data from various sources, transform it per business requirements, and load it into the central data lake using Databricks tools and Spark (a minimal PySpark sketch follows this list)
Data Integration: Experience integrating data from heterogeneous sources (relational databases, APIs, files) into Databricks while ensuring data quality, consistency, and lineage
Performance Optimization: Ability to optimize data processing workflows and SQL queries in Databricks for performance, scalability, and cost-effectiveness, leveraging partitioning, clustering, caching, and Spark optimization techniques
Data Governance and Security: Understanding of data governance principles and implementing security measures to ensure data integrity, confidentiality, and compliance within the centralized data lake environment
Advanced SQL and Spark Skills: Proficiency in writing complex SQL queries and Spark code (Scala/Python) for data manipulation, transformation, aggregation, and analysis tasks within Databricks notebooks
Cloud Architecture: Understanding of cloud computing principles, AWS architecture, and services for designing scalable and resilient data solutions
Data Visualization: Basic knowledge of data visualization tools (e.g. Tableau) to create insightful visualizations and dashboards for data analysis and reporting purposes
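For context on the ELT, performance-optimization, and Spark items above, here is a minimal, hypothetical PySpark sketch of the kind of pipeline described. The S3 path, the table name lake.bronze_orders, and all column names are illustrative assumptions, not details from this posting; it shows an extract step, a few transformations, caching, and a partitioned write to a Delta table.

```python
# Minimal, hypothetical PySpark sketch of an extract/transform/load step on
# Databricks. Every path, table, and column name here is an illustrative
# assumption, not a detail taken from this posting.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("elt-sketch").getOrCreate()

# Extract: read raw JSON files landed in object storage (path is assumed).
raw = spark.read.json("s3://example-bucket/landing/orders/")

# Transform: de-duplicate and derive business columns.
orders = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("net_amount", F.col("gross_amount") - F.col("discount"))
       .filter(F.col("net_amount") >= 0)
)

# Cache only if the frame is reused by several downstream steps.
orders.cache()

# Load: append to a Delta table, partitioned by date for query pruning.
(orders.write
       .format("delta")
       .mode("append")
       .partitionBy("order_date")
       .saveAsTable("lake.bronze_orders"))
```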
Location: Washington, DC (within 50 miles)
Interview via Skype; LinkedIn profile required
Familiarity with government cloud deployment regulations/compliance policies such as FedRAMP, FISMA, etc.
Leverage financial industry expertise to define conceptual, logical, and physical data models in Databricks to support new and existing business domains
Work with product owners, system architects, data engineers, and vendors to create data models optimized for query performance, compute, and storage costs
Define best practices for the implementation of the Bronze/Silver/Gold data layers of the lakehouse (see the sketch below)
Provide data model documentation and artifacts generated from data, such as a data dictionary and data definitions
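The Bronze/Silver/Gold layering referenced above is typically built as successive Delta tables, each refining the previous layer. The sketch below is a minimal, hypothetical illustration under assumed names (lake.bronze_orders, lake.silver_orders, lake.gold_daily_sales, and all columns); it is not the implementation this role would deliver.

```python
# Hypothetical sketch of Bronze/Silver/Gold (medallion) layers on Databricks.
# All table and column names are assumptions for illustration only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: raw, append-only data as it arrived from the source.
bronze = spark.table("lake.bronze_orders")

# Silver: cleaned, conformed records ready for downstream joins.
silver = (
    bronze.dropDuplicates(["order_id"])
          .filter(F.col("order_id").isNotNull())
)
silver.write.format("delta").mode("overwrite").saveAsTable("lake.silver_orders")

# Gold: business-level aggregate shaped for analytics and reporting.
gold = (
    spark.table("lake.silver_orders")
         .groupBy("order_date", "customer_id")
         .agg(F.sum("net_amount").alias("daily_net_amount"))
)
gold.write.format("delta").mode("overwrite").saveAsTable("lake.gold_daily_sales")
```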
Seniority level: Mid-Senior level
Employment type: Contract
Job function: Information Technology
Industries: IT Services and IT Consulting