Understanding Graph Neural Networks (GNNs): A Brief Overview

Basics of Graph Neural Networks (GNNs): What? And Why?

Graph neural networks (GNNs) are a class of neural networks that operate on data structured as graphs. By enabling the application of deep learning to graph-structured data, GNNs are set to become an important artificial intelligence (AI) concept in the years ahead. In other words, GNNs can drive advances in domains that do not fit prevailing AI algorithms.

In recent years, neural networks have surged in popularity among the data and AI community, owing to their ability to mimic the neurons of the human brain. Today, neural networks handle tasks ranging from image and object classification and video processing to speech recognition and data mining. According to 'A Comprehensive Survey on Graph Neural Networks', published in IEEE, the data in these areas are typically represented in Euclidean space. However, a growing number of applications generate data from non-Euclidean domains, represented as graphs with complex relationships and interdependencies between objects. Such graph-based data pose a major challenge for machine learning applications.

Enter the graph neural network.

In middle school, most of us learned how graphs help represent mathematical data in a form that can be understood and analyzed with ease. In computer science, unstructured data such as images and text can be modelled as graphs so that graph analysis can be performed on them. In general, a graph is a data structure with two components, vertices and edges, i.e.,

G = (V, E)

Here V refers to the set of vertices and E to the set of edges between them. It is worth noting that the terms 'vertex' and 'node' are often used interchangeably.
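The definition above can be sketched in a few lines of Python. The small graph here is a made-up example, not taken from the article:

```python
# A minimal sketch of G = (V, E): vertices, edges, and an adjacency list.
V = {0, 1, 2, 3}
E = {(0, 1), (0, 2), (1, 2), (2, 3)}

# Build an undirected adjacency list from the edge set.
adj = {v: set() for v in V}
for u, w in E:
    adj[u].add(w)
    adj[w].add(u)

print(adj[2])  # neighbors of vertex 2: {0, 1, 3}
```

Most graph libraries (and GNN frameworks) store the same information as an edge list or a sparse adjacency matrix; the adjacency-list form is simply the easiest to read.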

Today, graphs are used to depict things ranging from social media networks to chemical molecules, and GNNs are an effective framework for representation learning on graphs. They blend an information diffusion mechanism with neural networks, representing a set of transition functions and a set of output functions.

As per the paper "Graph Neural Networks: A Review of Methods and Applications", graph neural networks are connectionist models that capture the dependence of graphs via message passing between the nodes of graphs. In simpler terms, they enable effective representation learning for graph-structured data at either the node level or the graph level. In contrast to standard neural networks, GNNs retain a state that can represent information from a node's neighborhood to arbitrary depth. The paper also mentions that GNNs can simultaneously model the diffusion process on the graph with the RNN kernel.

There is another definition of a graph neural network: it is a form of neural network with two defining attributes:

1. Its input is a graph

2. Its output is permutation invariant
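Permutation invariance can be illustrated with a simple sum-pooling readout: relabeling the nodes (i.e., shuffling the rows of the feature matrix) leaves the graph-level output unchanged. The feature matrix below is a made-up example:

```python
import numpy as np

# Hypothetical node feature matrix: 4 nodes, 3 features each.
X = np.array([[1., 0., 2.],
              [0., 1., 1.],
              [3., 1., 0.],
              [0., 2., 2.]])

def readout(features):
    # Sum pooling over nodes: the result is the same for any node ordering.
    return features.sum(axis=0)

# Shuffle the node order and check the readout does not change.
perm = np.random.permutation(len(X))
assert np.allclose(readout(X), readout(X[perm]))
```

Sum, mean, and max pooling are all permutation-invariant readouts; concatenating node features in a fixed order, by contrast, would not be.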

In a GNN, the nodes aggregate information gathered from neighboring nodes via neural networks. The last layer then combines all of this aggregated information and outputs either a prediction or a classification. The original GNN formulated by Scarselli et al. (2008) used discrete features and called the edge and node features 'labels'. Here, an output function takes the nodes' updated states and the nodes' features as input and produces an output for each node.
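A single neighbor-aggregation step of this kind can be sketched with NumPy. The adjacency matrix, features, and weights below are illustrative placeholders (randomly initialized, not trained), and mean aggregation is just one common choice:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy undirected graph (4 nodes) and random node features.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = rng.normal(size=(4, 3))   # 4 nodes, 3 input features each
W = rng.normal(size=(3, 2))   # weight matrix (learned in a real GNN)

# One message-passing step: each node averages its neighbors' features,
# then applies a linear transform followed by a ReLU nonlinearity.
deg = A.sum(axis=1, keepdims=True)   # node degrees
messages = (A @ H) / deg             # mean over neighbors
H_next = np.maximum(messages @ W, 0.0)
print(H_next.shape)  # (4, 2): updated state per node
```

Stacking several such layers lets information diffuse over increasingly distant neighborhoods, which is the "arbitrary depth" the review paper refers to.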

Though the concept was introduced back in 2005, GNNs started to gain popularity only in the last five years. Ready-to-use implementations of various GNN layers can be found in libraries such as PyTorch Geometric, DGL, and Spektral. Moreover, a graph neural network has an advantage over a convolutional neural network (CNN) in that it is inherently rotation and translation invariant, since there is simply no notion of rotation or translation in graphs. Applying a CNN to graphs is also tricky because of the arbitrary size of the graph and its complex topology, which implies no spatial locality.

There are three main types of graph neural network: recurrent graph neural networks, spatial convolutional networks, and spectral convolutional networks. There are also graph autoencoders and spatial-temporal GNNs. One of the first popular GNNs is the Kipf & Welling graph convolutional network (GCN). Further, there is a concept called quantum graph neural networks (QGNNs), introduced in 2019 by Verdon et al. The authors subdivided their work into two classes: quantum graph recurrent neural networks and quantum graph convolutional networks.
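As a sketch, a single layer of the Kipf & Welling GCN applies the propagation rule H' = σ(D̂^(-1/2) Â D̂^(-1/2) H W), where Â = A + I adds self-loops and D̂ is the degree matrix of Â. The tiny 2-node graph below is only a sanity check, not a trained model:

```python
import numpy as np

def gcn_layer(A, H, W):
    # One Kipf & Welling GCN layer:
    #   H' = ReLU( D^{-1/2} (A + I) D^{-1/2} H W )
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Sanity check: 2-node graph, identity features and weights.
A = np.array([[0., 1.], [1., 0.]])
out = gcn_layer(A, np.eye(2), np.eye(2))
print(out)  # every entry is 0.5
```

The symmetric normalization keeps feature magnitudes stable as layers are stacked, which is one reason this simple rule became a standard baseline.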

Applications of graph neural networks can be grouped as:

• Node classification: Objective: Make a prediction about each node of a graph by assigning a label to every node in the network.

• Link prediction: Objective: Identify the relationship between two entities in a graph and predict the likelihood of their being linked, i.e., find potential or missing edges.

• Graph classification: Objective: Classify an entire graph into one of several categories by attaching a label to the whole graph.

There are graph visualization and graph clustering applications of GNNs too.
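To make the link-prediction task concrete: a common recipe is to score a candidate edge by the dot product of the two nodes' learned embeddings, squashed through a sigmoid. The embeddings below are made-up placeholders standing in for the output of a trained GNN:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical learned node embeddings (in practice, GNN outputs).
Z = np.array([[ 1.0,  0.5],
              [ 0.9,  0.6],
              [-1.0,  0.2]])

def link_score(u, v):
    # Probability-like score that an edge exists between nodes u and v.
    return sigmoid(Z[u] @ Z[v])

# Nodes 0 and 1 have similar embeddings, so their edge scores higher
# than the edge between the dissimilar nodes 0 and 2.
assert link_score(0, 1) > link_score(0, 2)
```

Node classification and graph classification follow the same pattern, with a per-node softmax or a pooled graph-level readout replacing the pairwise score.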

Based on these three main methods, there are numerous real-world use cases of graph neural networks. For instance, by applying GNNs to molecular graphs, scientists can obtain better molecular fingerprints (feature vectors that represent molecules). A team of researchers at Stanford used a graph convolutional network to produce a model that can predict side effects arising from the interaction of multiple drugs.

Uber Eats recommends food items and restaurants using the GraphSAGE network, an inductive representation learning technique for large, evolving graphs.

Graph neural networks also help in traffic prediction by viewing the traffic network as a spatial-temporal graph. Here, the nodes are sensors installed on roads, the edges are weighted by the distance between pairs of nodes, and each node carries the average traffic speed within a time window as a dynamic input feature.

The Google Brain team leverages GNNs to optimize the power, area, and performance of a chip block for new hardware such as Google's TPU. Graph neural networks are used in computer vision too. For example, Magic Leap, a 3D computer vision company, released a GNN architecture called SuperGlue that performs graph matching in real-time videos and is used for tasks such as 3D reconstruction, place recognition, and simultaneous localization and mapping (SLAM).
