
Everything is connected: Graph neural networks


Bibliographic Details
Published in: Current Opinion in Structural Biology, 2023-04, Vol. 79, p. 102538, Article 102538
Main Author: Veličković, Petar
Format: Article
Language: English
Description
Summary: In many ways, graphs are the main modality of data we receive from nature, because most of the patterns we see, in both natural and artificial systems, are elegantly representable in the language of graph structures. Prominent examples include molecules (represented as graphs of atoms and bonds), social networks and transportation networks. Key scientific and industrial groups have already recognised this potential, with impacted application areas including traffic forecasting, drug discovery, social network analysis and recommender systems. Further, some of the most successful domains of application for machine learning in previous years (image, text and speech processing) can be seen as special cases of graph representation learning, and there has consequently been a significant exchange of ideas between these areas. The main aim of this short survey is to enable the reader to assimilate the key concepts in the area, and to position graph representation learning in a proper context with related fields.

Highlights:
• Graphs, interconnected structures of nodes and edges, are a key concept for representing natural data.
• Graph neural networks (GNNs) power significant recent advances in scientific discovery and industrial deployment.
• GNNs are a very general language for representation learning, encompassing models such as transformers as a special case.
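The abstract's core object, a graph neural network layer, can be illustrated with a minimal sketch. The snippet below is a hypothetical sum-aggregation message-passing layer in NumPy (the function name and weight matrices are illustrative, not from the surveyed paper): each node combines its own features with an aggregate of its neighbours' features. Because the update depends on the graph only through the adjacency matrix, relabelling the nodes simply relabels the outputs, which is the permutation equivariance the survey emphasises.

```python
import numpy as np

def message_passing_layer(X, A, W_self, W_msg):
    """One sum-aggregation GNN layer (illustrative sketch).

    X      : (n, d) node feature matrix
    A      : (n, n) adjacency matrix
    W_self : (d, k) transform for a node's own features
    W_msg  : (d, k) transform for aggregated neighbour features
    """
    messages = A @ X @ W_msg          # sum neighbour features, then transform
    updated = X @ W_self + messages   # combine with the node's own state
    return np.maximum(updated, 0.0)   # elementwise ReLU nonlinearity

# Tiny 3-node path graph: 0 - 1 - 2
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.eye(3)                         # one-hot node features
rng = np.random.default_rng(0)
W_self = rng.normal(size=(3, 4))
W_msg = rng.normal(size=(3, 4))

H = message_passing_layer(X, A, W_self, W_msg)
print(H.shape)  # one k-dimensional embedding per node
```

Note that if A were replaced by a learned, fully connected attention matrix, this update would resemble a transformer layer, which is one way to read the abstract's claim that transformers arise as a special case of GNNs.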
ISSN: 0959-440X; 1879-033X
DOI: 10.1016/j.sbi.2023.102538