BIAS-3D: Brain inspired attentional search model fashioned after what and where/how pathways for target search in 3D environment

Kumari, Sweta and Shobha Amala, V. Y. and Nivethithan, M. and Chakravarthy, V. Srinivasa (2022) BIAS-3D: Brain inspired attentional search model fashioned after what and where/how pathways for target search in 3D environment. Frontiers in Computational Neuroscience, 16. ISSN 1662-5188

Abstract

We propose a brain-inspired attentional search model for target search in a 3D environment, which has two separate channels: one for object classification, analogous to the “what” pathway in the human visual system, and the other for predicting the next location of the camera, analogous to the “where” pathway. To evaluate the proposed model, we generated 3D Cluttered Cube datasets, in which each cube has an image on one vertical face and clutter or background images on the other faces. The camera goes around each cube on a circular orbit and determines the identity of the image pasted on the face. The images pasted on the cube faces were drawn from the MNIST handwritten digit, QuickDraw, and RGB MNIST handwritten digit datasets. The attentional input of three concentric cropped windows, resembling the high-resolution central fovea and low-resolution periphery of the retina, flows through a Classifier Network and a Camera Motion Network. The Classifier Network classifies the current view into one of the target classes or the clutter. The Camera Motion Network predicts the camera's next position on the orbit by varying the azimuthal angle θ; the camera performs one of three actions: move right, move left, or do not move. The Camera-Position Network injects the camera's current position (θ) into the higher feature levels of the Classifier Network and the Camera Motion Network. The Camera Motion Network is trained using Q-learning, where the reward is 1 if the Classifier Network gives the correct classification and 0 otherwise. The total loss is the sum of the mean squared temporal-difference loss and the cross-entropy loss, and the model is trained end-to-end by backpropagating this total loss using the Adam optimizer. Results on two grayscale image datasets and one RGB image dataset show that the proposed model discovers the desired search pattern to find the target face on the cube and classifies the target face accurately.
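
For illustration, the following is a minimal sketch, not the authors' code, of the ideas described in the abstract, written in PyTorch: three concentric centre crops resized to a common resolution stand in for the fovea/periphery input, a shared encoder feeds a Classifier head ("what") and a Camera Motion Q-value head ("where/how") with the camera angle θ appended at the higher feature level, and a single training step combines the cross-entropy loss with the mean squared temporal-difference error. All window sizes, layer widths, hyperparameters, and the random tensors are assumptions made only to keep the sketch self-contained.

# Hypothetical sketch (not the published model): foveated three-window input
# and the combined classification + Q-learning loss described in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F

def foveated_crops(image, sizes=(16, 32, 64), out_size=16):
    """Crop three concentric windows around the image centre and resize each
    to the same resolution, mimicking a high-resolution fovea and a
    low-resolution periphery. Window sizes here are assumptions."""
    _, h, w = image.shape
    cy, cx = h // 2, w // 2
    crops = []
    for s in sizes:
        half = s // 2
        win = image[:, cy - half:cy + half, cx - half:cx + half]
        win = F.interpolate(win.unsqueeze(0), size=(out_size, out_size),
                            mode="bilinear", align_corners=False)
        crops.append(win.squeeze(0))
    return torch.cat(crops, dim=0)            # stack the windows along channels

class SearchModel(nn.Module):
    """Toy 'what'/'where' split: a shared encoder feeds a Classifier head and a
    Camera Motion (Q-value) head; the camera angle theta is appended at the
    higher feature level, as the abstract describes."""
    def __init__(self, in_ch=9, n_classes=11, n_actions=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())
        feat = 32 * 4 * 4 + 1                  # +1 for theta
        self.classifier = nn.Linear(feat, n_classes)      # "what" pathway
        self.camera_motion = nn.Linear(feat, n_actions)   # "where/how" pathway

    def forward(self, x, theta):
        h = torch.cat([self.encoder(x), theta.unsqueeze(1)], dim=1)
        return self.classifier(h), self.camera_motion(h)

# One illustrative training step: reward is 1 for a correct classification,
# 0 otherwise; total loss = TD mean-squared error + cross-entropy.
model = SearchModel()
optim = torch.optim.Adam(model.parameters(), lr=1e-4)
gamma = 0.9

x_t = torch.stack([foveated_crops(torch.rand(3, 64, 64)) for _ in range(8)])
x_next = torch.stack([foveated_crops(torch.rand(3, 64, 64)) for _ in range(8)])
theta_t, theta_next = torch.rand(8), torch.rand(8)
labels = torch.randint(0, 11, (8,))            # 10 digit classes + clutter
actions = torch.randint(0, 3, (8,))            # left / right / stay

logits, q_t = model(x_t, theta_t)
with torch.no_grad():
    _, q_next = model(x_next, theta_next)
reward = (logits.argmax(dim=1) == labels).float()
td_target = reward + gamma * q_next.max(dim=1).values
td_error = td_target - q_t.gather(1, actions.unsqueeze(1)).squeeze(1)

total_loss = F.cross_entropy(logits, labels) + td_error.pow(2).mean()
optim.zero_grad()
total_loss.backward()
optim.step()

In the actual model, x_next would be the view rendered after the camera executes the chosen action on its orbit around the cube; the random tensors above stand in for that environment only to keep the example runnable.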

Item Type: Article
Subjects: Impact Archive > Medical Science
Depositing User: Managing Editor
Date Deposited: 25 Mar 2023 12:32
Last Modified: 08 Feb 2024 03:58
URI: http://research.sdpublishers.net/id/eprint/1927
