Open Insights
Open Insights is a global data firm that specializes in strategizing and implementing Data-as-a-Service. Our Data Ninjas help organizations understand and generate value from their data, improving revenues while reducing costs through open-source technologies. Open Insights operates across the world, with current operations in the United States, India, South Africa, Chile, the United Kingdom, the Middle East, and Europe. Open Insights was founded by Dr. Usama Fayyad to help change the world of data and what it means for data to be an asset in an organization. Our team includes top experts in Analytics and Insights, Big Data Technology and Data Integration, and Data Activation and Customer Retention, as well as a team of data science and applied machine learning experts.
Open Insights is committed to providing all employees with engaging and challenging work, opportunities for growth, an equal voice to drive innovation, and an environment that cultivates authenticity. In return, we look for people who are inquisitive, enjoy solving complex problems, collaborate effectively, think creatively, and provide diverse insights to help us all think better and differently.
Responsibilities
Understand a wide range of data systems across the organization, including but not limited to SQL Server, Teradata, Azure SQL Data Warehouse, and Snowflake environments
Interpret both logical and physical data models
Ingest data into data lakes and data warehouses on technologies such as Snowflake, Teradata, and Azure Synapse
Build scalable and reusable data cleansing algorithms
Transform and merge data using data processing tools like Databricks and Spark
Understand, comply with, and build security features (encryption, masking, and tokenization) on data technology platforms
Design simplified data access patterns for data consumers, e.g., building APIs and enabling self-serve access
Ensure continuous integration and continuous deployment practices are followed
Perform peer code and performance reviews
Collaborate with architects and other software engineers on design
Collaborate with other teams including data science, business, and business intelligence on successful deliveries
Qualifications
Cloud Platforms – Azure, AWS (any)
Data Tools and Workload Processors – Databricks, Snowflake
Agile Frameworks – Kanban, Scrum
Data Lake, Data Lakehouse, and Data Warehouse concepts and tools, preferably on the cloud (e.g., Azure Data Lake, S3, Snowflake, Databricks Lakehouse)
Building data pipelines via tools like Apache NiFi, Azure Data Factory, Apache Spark
Programming Languages – Java, Scala, or Python (any two)
Knowledge of data architecture and design patterns
Strong interpersonal, presentation, communication, and organizational skills
Strong attention to detail
Ability to work in a fast-paced environment
Ability to work both independently and as part of a team
Work Experience
Experience with cloud ecosystems, building ETL pipelines, and working with data lakes and warehouses
Seniority level: Entry level
Employment type: Full-time
Job function: Information Technology
Industries: IT Services and IT Consulting