InOrg Global
[F] Data Architect
Job Summary
We're seeking an experienced Data Architect at the Associate, Mid, or Senior level, with expertise in data engineering across major data platforms. The ideal candidate will have a strong background in Python, SQL, ETL, and data modeling, with experience in tools such as Teradata, Informatica, Hadoop, Spark, PySpark, ADF, and Snowflake. Cloud knowledge (AWS, Azure, or GCP) is a plus. The role requires a willingness to transition and upskill into Databricks and AI/ML projects.

Key Responsibilities
- Design, develop, and maintain large-scale data systems
- Develop and implement ETL processes using various tools and technologies
- Collaborate with cross-functional teams to design and implement data models
- Work with big data tools such as Hadoop, Spark, PySpark, and Kafka
- Develop scalable and efficient data pipelines
- Troubleshoot data-related issues and optimize data systems
- Transition and upskill into Databricks and AI/ML projects

Requirements
- Relevant experience in data engineering
- Strong proficiency in Python, SQL, ETL, and data modeling
- Experience with one or more of the following: Teradata, Informatica, Hadoop, Spark, PySpark, ADF, Snowflake, Scala, Kafka
- Cloud knowledge (AWS, Azure, or GCP) is a plus
- Willingness to learn and adapt to new technologies, specifically Databricks and AI/ML

Nice to Have
- Experience with Databricks
- Knowledge of AI/ML concepts and tools
- Certification in relevant technologies

What We Offer
- Competitive salary and benefits
- Opportunity to work on cutting-edge projects
- Collaborative and dynamic work environment
- Professional growth and development opportunities
- Remote work opportunities and flexible hours

Seniority level: Mid-Senior level
Employment type: Full-time
Job function: Engineering and Information Technology
Industries: Business Consulting and Services