Transfer learning for classifying Spanish and English text by clinical specialties

Alexandra Pomares-Quimbaya, Pilar López-Úbeda, Stefan Schulz

Research output: Chapter in book/report/conference proceeding › Chapter › peer-review

2 Citations (Scopus)

Abstract

Transfer learning has demonstrated its potential in natural language processing tasks, where models are pre-trained on large corpora and then tuned to specific tasks. We applied pre-trained transfer models to a Spanish biomedical document classification task. The main goal is to analyze the performance of text classification by clinical specialties using state-of-the-art language models for Spanish, and to compare the results with those of corresponding models in English and of the most important pre-trained model for the biomedical domain. The outcomes present interesting perspectives on the performance of language models that are pre-trained for a particular domain. In particular, we found that BioBERT achieved better results on Spanish texts translated into English than the general-domain model in Spanish and the state-of-the-art multilingual model.

Original language: English
Host publication title: Public Health and Informatics
Host publication subtitle: Proceedings of MIE 2021
Publisher: IOS Press
Pages: 377-381
Number of pages: 5
ISBN (electronic): 9781643681856
ISBN (print): 9781643681849
DOI
Status: Published - 1 Jul. 2021
