Patent Figure Classification Using Large Vision-Language Models (Chapter)

abstract

  • Patent figure classification facilitates faceted search in patent retrieval systems, enabling efficient prior art search. Existing approaches have explored patent figure classification only for a single aspect, and only for aspects with a limited number of concepts. In recent years, large vision-language models (LVLMs) have shown tremendous performance across numerous computer vision downstream tasks; however, they remain unexplored for patent figure classification. Our work explores the efficacy of LVLMs for patent figure visual question answering (VQA) and classification, focusing on zero-shot and few-shot learning scenarios. For this purpose, we adapt existing patent figure datasets to create two new datasets, PatFigVQA and PatFigCLS, suitable for fine-tuning and evaluation across multiple aspects of patent figures (i.e., type, projection, patent class, and objects). For computationally efficient handling of a large number of classes with an LVLM, we propose a novel tournament-style classification strategy that leverages a series of multiple-choice questions. Experimental results and comparisons of multiple LVLM-based and Convolutional Neural Network (CNN)-based classification approaches in few-shot settings show the feasibility of the proposed approaches.
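The tournament-style strategy mentioned in the abstract can be illustrated with a minimal sketch. This is not the chapter's implementation; it only shows the general shape of the idea: a large label set is narrowed over successive rounds of k-way multiple-choice questions, where `ask_mcq` is a hypothetical callable standing in for an LVLM query about the figure.

```python
def tournament_classify(candidates, ask_mcq, k=4):
    """Narrow a large label set via rounds of k-way multiple-choice questions.

    `ask_mcq(options)` is a hypothetical stand-in for an LVLM prompt that
    shows the patent figure alongside `options` and must return exactly one
    element of `options` (the group's "winner"). Winners of each group
    advance to the next round until a single label remains.
    """
    round_candidates = list(candidates)
    while len(round_candidates) > 1:
        winners = []
        for i in range(0, len(round_candidates), k):
            group = round_candidates[i:i + k]
            # A singleton group advances automatically; otherwise ask the model.
            winners.append(group[0] if len(group) == 1 else ask_mcq(group))
        round_candidates = winners
    return round_candidates[0]


# Usage sketch with a mock "model" that always recognizes one target label.
labels = ["gear", "circuit", "truss bridge", "pump", "lens",
          "valve", "antenna", "rotor", "hinge", "bearing"]

def mock_ask_mcq(options):
    return "truss bridge" if "truss bridge" in options else options[0]

print(tournament_classify(labels, mock_ask_mcq, k=3))  # → truss bridge
```

The point of the design is that each LVLM call sees only k options, so a class vocabulary far larger than a single prompt can accommodate is reduced in roughly log_k(N) rounds of cheap multiple-choice queries.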

publication date

  • 2025

keywords

  • Patent figure classification, patent figure visual question answering, large vision-language models

International Standard Book Number (ISBN) 13

  • 9783031887109
  • 9783031887116

number of pages

  • 17

start page

  • 20

end page

  • 37