Correspondence between representations in visual cortices and neural networks

The study reported similar properties between deep neural networks for predicting attention and the primary visual cortex (V1) of primates. Credit: Nobuhiko Wagatsuma

A research group led by Nobuhiko Wagatsuma, Lecturer at Toho University, Akinori Hidaka, Associate Professor at Tokyo Denki University, and Hiroshi Tamura, Associate Professor at Osaka University, found that the structure of attention-predicting neural networks based on deep learning, as used in the development of artificial intelligence, has characteristics similar to the cerebral mechanisms of primates.

This discovery was made possible by applying a research method for comparing brain activity between monkeys and humans to deep neural networks. The finding might be helpful not only for understanding the cortical mechanisms of attentional selection but also for developing artificial intelligence.

Deep neural networks (DNNs), which are used in the development of artificial intelligence, are mathematical models that acquire mechanisms for solving specific problems through training on large-scale datasets. However, the detailed mechanisms that DNNs acquire through this learning process have not yet been clarified.
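
To make that point concrete, the sketch below shows what "acquiring a mechanism from training" looks like in practice: a tiny network whose behavior is shaped entirely by gradient descent on example data. The architecture and the random data here are illustrative assumptions, not the model used in the study.

```python
# A minimal sketch of a DNN as a parameterized function whose internal
# mechanism is shaped by training rather than programmed by hand.
# The tiny architecture and random data are illustrative assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in "large-scale dataset": random inputs with random class labels.
inputs = torch.randn(256, 64)
labels = torch.randint(0, 2, (256,))

for step in range(100):                      # gradient descent on the loss
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()
    optimizer.step()

# After training, the "mechanism" lives in the learned weights; understanding
# what those weights compute is exactly the open problem the article describes.
```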

A research group led by Nobuhiko Wagatsuma, Lecturer at the Faculty of Science, Toho University, Akinori Hidaka, Associate Professor at the Faculty of Science and Engineering, Tokyo Denki University, and Hiroshi Tamura, Associate Professor at the Graduate School of Frontier Biosciences, Osaka University, found that the response characteristics of DNNs trained to predict attention toward the most important location in an image were consistent with those of the neural representation in the primary visual cortex (V1) of primates. The discovery was made possible by applying, to DNNs, an analysis method designed for comparing the characteristics of neuronal activity in monkeys with those in humans.
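
The article does not spell out the analysis, but comparisons of this kind between species, and between brains and models, are commonly carried out with representational similarity analysis: each system is summarized by how dissimilar its responses to pairs of images are, and the two summaries are then correlated. The sketch below illustrates that general idea with hypothetical data; it is not the authors' exact pipeline, and all names in it are illustrative.

```python
# A minimal sketch of representational similarity analysis (RSA), the kind of
# cross-system comparison described above. The article does not name the exact
# method; all data and variable names here are hypothetical.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(responses):
    """Representational dissimilarity matrix (condensed form).

    responses: (n_images, n_units) array, e.g. V1 spike counts or DNN layer
    activations, one row per stimulus image.
    """
    return pdist(responses, metric="correlation")  # 1 - Pearson r per image pair

rng = np.random.default_rng(0)
n_images = 50
v1_responses = rng.normal(size=(n_images, 120))       # hypothetical monkey V1 data
layer_activations = rng.normal(size=(n_images, 512))  # hypothetical DNN layer

# Two representations "correspond" when image pairs that V1 treats as similar
# are also similar in the DNN layer, i.e. when their RDMs correlate.
rho, p = spearmanr(rdm(v1_responses), rdm(layer_activations))
print(f"RDM correlation: rho={rho:.3f}, p={p:.3f}")
```

Because the RDM depends only on pairwise dissimilarities, not on the number of neurons or units, the same comparison works between a monkey recording, a human measurement, and any layer of an artificial network.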

The results of this study provide important insight into the neural mechanisms of attention. Additionally, applying the attentional mechanisms of primates, including humans, may accelerate the development of artificial intelligence.

Key Points:

  • The correspondence between primate visual cortices and deep neural networks was revealed by applying a research method for comparing neural activity across species, such as humans and monkeys, to artificial neural networks.
  • Recently, deep neural networks have become the main method for developing artificial intelligence. Wagatsuma et al. reported similar properties between deep neural networks for predicting attention and the primary visual cortex (V1) of primates. Additionally, their findings imply that the mechanism of the network for attention prediction might be distinct from that of object-classification networks such as VGG16.
  • Attention is the function that enables us to attend to the most important information at a given moment, and it is also a key concept in the recent development of artificial intelligence (a minimal illustrative sketch follows this list). The results of this research might contribute not only to understanding the neural mechanisms of attentional selection in primates, including humans, but also to developing artificial intelligence.
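
As referenced in the last point above, the model studied in the paper is a saliency map model built on a deep convolutional neural network. The sketch below is not the authors' architecture; under the assumption that torchvision's pretrained VGG16 can stand in as a generic feature extractor, it only illustrates the general idea of reading a map of visual conspicuity out of a CNN layer's pooled feature energy.

```python
# A minimal sketch of reading a saliency map out of a convolutional network.
# This is NOT the authors' model (their architecture is described in the
# eNeuro paper); torchvision's pretrained VGG16 is used here purely as a
# convenient stand-in feature extractor (weights download on first use).
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF

vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()

def saliency_map(image):
    """image: (3, H, W) float tensor in [0, 1]. Returns an (H, W) map."""
    x = TF.normalize(image, mean=[0.485, 0.456, 0.406],
                     std=[0.229, 0.224, 0.225]).unsqueeze(0)
    with torch.no_grad():
        feats = vgg[:17](x)          # activations from an intermediate conv block
    energy = feats.pow(2).sum(dim=1, keepdim=True)  # pooled feature energy per location
    sal = torch.nn.functional.interpolate(energy, size=image.shape[1:],
                                          mode="bilinear", align_corners=False)
    sal = sal.squeeze()
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)

# Example: a random image stands in for a natural scene.
img = torch.rand(3, 224, 224)
print(saliency_map(img).shape)  # torch.Size([224, 224])
```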

More information: Nobuhiko Wagatsuma et al., "Correspondence between monkey visual cortices and layers of a saliency map model based on a deep convolutional neural network for representations of natural images," eNeuro (2020). DOI: 10.1523/ENEURO.0200-20.2020

Provided by Toho University
