Wipro
Job Overview
We are looking for an experienced Knowledge Graph Engineer/Architect to design, build, and scale our enterprise knowledge graph and semantic data platforms. The ideal candidate has strong experience in knowledge representation, graph databases, ontology modeling, and integrating structured/unstructured data to power search, reasoning, and intelligent applications.
Key Responsibilities
Design and develop knowledge graphs, ontologies, and taxonomies aligned with business domains.
Build semantic models using RDF, OWL, SHACL, or property graph approaches.
Implement and maintain graph databases (e.g., Neo4j, AWS Neptune, Azure Cosmos DB Gremlin API, TigerGraph).
Ingest heterogeneous datasets and convert them into graph structures using ETL pipelines.
Work closely with product, data, and engineering teams to understand requirements and map real‑world entities into graph models.
Develop and optimize SPARQL, Cypher, or Gremlin queries for graph analytics (see the illustrative sketch after this list).
Integrate the knowledge graph with downstream systems (search, RAG pipelines, analytics, APIs).
Ensure data quality, entity resolution, schema alignment, and consistency across sources.
Implement reasoning, inference, metadata enrichment, and graph‑based recommendations.
Conduct POCs, evaluate graph technologies, and define best practices for knowledge graph architecture.
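For context on the hands-on work listed above, here is a minimal, illustrative sketch of graph ingestion and Cypher querying in Python, assuming the official Neo4j driver (v5 API). The endpoint, credentials, node label, and properties are hypothetical placeholders, not Wipro's actual stack or schema.

# Illustrative only: load a few records into Neo4j and query them with Cypher.
# Connection details, labels, and properties are hypothetical placeholders.
from neo4j import GraphDatabase

URI = "bolt://localhost:7687"   # placeholder endpoint
AUTH = ("neo4j", "password")    # placeholder credentials

records = [
    {"name": "Acme Corp", "industry": "Retail"},
    {"name": "Globex", "industry": "Insurance"},
]

def load_companies(tx, rows):
    # MERGE keeps the ingestion idempotent if the pipeline re-runs.
    for row in rows:
        tx.run(
            "MERGE (c:Company {name: $name}) SET c.industry = $industry",
            name=row["name"], industry=row["industry"],
        )

def companies_by_industry(tx, industry):
    result = tx.run(
        "MATCH (c:Company {industry: $industry}) RETURN c.name AS name",
        industry=industry,
    )
    return [record["name"] for record in result]

with GraphDatabase.driver(URI, auth=AUTH) as driver:
    with driver.session() as session:
        session.execute_write(load_companies, records)
        print(session.execute_read(companies_by_industry, "Retail"))

Using MERGE rather than CREATE keeps repeated ingestion runs idempotent, which is one small way the data-quality and consistency responsibilities above surface in practice.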
Required Skills & Experience
3–10+ years of experience in data engineering, semantic technologies, or knowledge graphs.
Strong understanding of graph theory, ontologies, and linked data principles.
Hands‑on experience with at least one major graph database.
Expertise in SPARQL, Cypher, or Gremlin (a minimal RDF/SPARQL sketch follows this list).
Experience with Python or Java for graph data ingestion and pipeline development.
Knowledge of ETL/ELT, data modeling, schema design, and API integration.
Familiarity with modern AI/ML workflows, especially RAG or LLM‑driven applications, is a plus.
Understanding of semantic web standards (RDF/OWL), entity resolution, and graph embeddings.
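As a hedged illustration of the semantic‑web skills above, the sketch below builds a tiny RDF graph with rdflib and runs a SPARQL query over it. The http://example.org/ namespace and the entities in it are invented for the example; real ontologies would typically be modeled in OWL and validated with SHACL.

# Illustrative only: a tiny RDF graph queried with SPARQL via rdflib.
# The example.org namespace and entities are invented for this sketch.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/")

g = Graph()
g.bind("ex", EX)

# Three triples: a product, its label, and the category it belongs to.
g.add((EX.Widget, RDF.type, EX.Product))
g.add((EX.Widget, RDFS.label, Literal("Widget")))
g.add((EX.Widget, EX.inCategory, EX.Hardware))

# SPARQL: list every product together with its category.
query = """
    SELECT ?product ?category
    WHERE {
        ?product a ex:Product ;
                 ex:inCategory ?category .
    }
"""

for row in g.query(query, initNs={"ex": EX}):
    print(row.product, row.category)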
Nice‑to‑Have Skills
Experience with vector databases and hybrid search.
Exposure to LLM orchestration, RAG frameworks, or knowledge‑grounded AI systems.
Background in enterprise domains such as retail, healthcare, insurance, or fintech.
Familiarity with tools like GraphQL, LangChain, Kafka, Airflow, Databricks, or Azure Data Factory.
Experience with knowledge graph visualization tools (e.g., Bloom, GraphXR).
Soft Skills
Strong analytical and problem‑solving ability.
Ability to translate business needs into data models.
Excellent communication and cross‑functional collaboration.
Ownership mindset with ability to work in ambiguous environments.
Education
Bachelor’s or Master’s degree in Computer Science, Data Science, Information Systems, or a related field.
Certifications in semantic technologies or graph databases are a plus.
Job Details
Seniority level: Mid‑Senior level
Employment type: Full‑time
Compensation and Benefits
The expected compensation for this role ranges from $80,000 to $158,000. Final compensation will depend on various factors, including geographical location, minimum wage obligations, skills, and relevant experience.
Based on the position, the role is also eligible for Wipro's standard benefits, including a full range of medical and dental benefit options, disability insurance, paid time off (inclusive of sick leave), and other paid and unpaid leave options.
EEO Statement
Wipro provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws. Applications from veterans and people with disabilities are explicitly welcome.