Language Models for Multimessenger Astronomy

Abstract

With astronomy increasingly relying on multi-instrument and multi-messenger observations to detect transient phenomena, communication among astronomers has become more critical. Apart from triggering automatic prompt follow-up observations, short reports, e.g., GCN Circulars and Astronomer's Telegrams (ATels), provide essential human-written interpretations and discussions of observations. Unlike machine-readable messages, these reports lack a defined format, making it challenging to associate the reported phenomena with specific objects or sky coordinates. This paper examines the use of large language models (LLMs), i.e., machine learning models trained on text with billions of trainable parameters or more, such as InstructGPT-3 and the open-source Flan-T5-XXL, for extracting information from astronomical reports. The study investigates the zero-shot and few-shot learning capabilities of LLMs and demonstrates several techniques for improving prediction accuracy. Through edge-case examples, it shows the importance of careful prompt engineering when working with LLMs. These findings have significant implications for the development of data-driven applications for astrophysical text analysis.
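The few-shot extraction approach the abstract describes can be illustrated with a minimal sketch: worked input/output examples are prepended to the new report so the model completes the answer in the same format. The example reports, object names, and the "Object/RA/Dec" output schema below are hypothetical illustrations, not the prompts used in the paper.

```python
# Minimal sketch of assembling a few-shot prompt for extracting an object
# name and sky coordinates from a short astronomical report.
# All example texts and the output schema are hypothetical.

FEW_SHOT_EXAMPLES = [
    (
        "Report: Swift detected GRB 230101A at RA=150.25, Dec=-12.80.",
        "Object: GRB 230101A; RA: 150.25; Dec: -12.80",
    ),
    (
        "Report: Follow-up of AT2023abc located at RA=23.10, Dec=+41.27.",
        "Object: AT2023abc; RA: 23.10; Dec: +41.27",
    ),
]

def build_prompt(report: str) -> str:
    """Concatenate an instruction, worked examples, and the new report."""
    lines = ["Extract the object name and coordinates from the report."]
    for example_report, example_answer in FEW_SHOT_EXAMPLES:
        lines.append(example_report)
        lines.append(example_answer)
    lines.append(f"Report: {report}")
    lines.append("Object:")  # cue the model to complete in the same format
    return "\n".join(lines)

prompt = build_prompt("IceCube alert IC-230512A at RA=88.90, Dec=5.45.")
```

The resulting string would then be sent to a completion-style model; zero-shot prompting corresponds to the same construction with the examples list left empty.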

Citation (APA)
Sotnikov, V., & Chaikova, A. (2023). Language Models for Multimessenger Astronomy. Galaxies, 11(3). https://doi.org/10.3390/galaxies11030063
