LLMs4OL 2025 Overview: The 2nd Large Language Models for Ontology Learning Challenge (Academic Article)

abstract

  • We present the results of the 2nd LLMs4OL 2025 Challenge, a shared task designed to evaluate the effectiveness of large language models (LLMs) for ontology learning. The challenge attracted a diverse set of participants who leveraged a broad spectrum of models, including general-purpose LLMs, domain-specific models, and embedding-based systems. Submissions covered multiple subtasks, including Text2Onto, term typing, taxonomy discovery, and non-taxonomic relation extraction. The results highlight that hybrid pipelines integrating commercial LLMs with domain-tuned embeddings and fine-tuning approaches achieved the strongest overall performance, while specialized domain models improved results on biomedical and technical datasets. Key insights include the importance of prompt engineering, retrieval-augmented generation (RAG), and ensemble learning. This paper presents the second benchmark of LLM-driven ontology learning and serves as an overview of the participants’ contributions to the challenge. Building on these contributions, it summarizes findings, highlights emerging strategies, and offers practical insights for researchers and practitioners seeking to align unstructured language with structured knowledge.

authors

  • Babaei Giglou, Hamed
  • D'Souza, Jennifer
  • Mihindukulasooriya, Nandana
  • Auer, Sören

publication date

  • 2025

volume

  • 6