Search results: Found 6

Listing 1 - 6 of 6
What can simple brains teach us about how vision works

Authors: --- --- ---
Book Series: Frontiers Research Topics --- ISSN: 1664-8714 --- ISBN: 9782889196784 --- Pages: 290 --- DOI: 10.3389/978-2-88919-678-4 --- Language: English
Publisher: Frontiers Media SA
Subject: Science (General) --- Neurology
Added to DOAB on : 2016-08-16 10:34:25

Abstract

Vision is the process of extracting behaviorally relevant information from the patterns of light that fall on the retina as the eyes sample the outside world. Traditionally, nonhuman primates (macaque monkeys in particular) have been viewed by many as the animal model of choice for investigating the neuronal substrates of visual processing, not only because their visual systems closely mirror our own, but also because it is often assumed that “simpler” brains lack advanced visual processing machinery. However, this narrow view of visual neuroscience ignores the fact that vision is widely distributed throughout the animal kingdom, enabling a wide repertoire of complex behaviors in species from insects to birds, fish, and mammals. Recent years have seen a resurgence of interest in alternative animal models for vision research, especially rodents. This resurgence is partly due to the availability of increasingly powerful experimental approaches (e.g., optogenetics and two-photon imaging) that are challenging to apply to their full potential in primates. Meanwhile, even more phylogenetically distant species such as birds, fish, and insects have long been workhorse animal models for gaining insight into the core computations underlying visual processing. In many cases, these animal models are valuable precisely because their visual systems are simpler than the primate visual system. Simpler systems are often easier to understand, and studying a diversity of neuronal systems that achieve similar functions can focus attention on those computational principles that are universal and essential. This Research Topic provides a survey of the state of the art in the use of animal models of visual function that are alternatives to the macaque. It includes original research, methods articles, reviews, and opinions that exploit a variety of animal models (including rodents, birds, fish, and insects, as well as a small New World monkey, the marmoset) to investigate visual function. The experimental approaches covered by these studies range from psychophysics and electrophysiology to histology and genetics, testifying to the richness and depth of visual neuroscience in non-macaque species.

Hierarchical Object Representations in the Visual Cortex and Computer Vision

Authors: --- ---
Book Series: Frontiers Research Topics --- ISSN: 1664-8714 --- ISBN: 9782889197989 --- Pages: 290 --- DOI: 10.3389/978-2-88919-798-9 --- Language: English
Publisher: Frontiers Media SA
Subject: Neurology --- Science (General)
Added to DOAB on : 2017-02-03 17:04:57

Abstract

Over the past 40 years, neurobiology and computational neuroscience have shown that a deeper understanding of visual processes in humans and non-human primates can lead to important advances in computational perception theories and systems. One of the main difficulties in designing automatic vision systems is developing a mechanism that can recognize - or simply find - an object under all the variations that may occur in a natural scene, with the ease of the primate visual system. The area of the primate brain dedicated to analyzing visual information is the visual cortex. The visual cortex performs a wide variety of complex tasks by means of simple operations. These seemingly simple operations are applied across several layers of neurons organized into a hierarchy, with successive layers representing increasingly complex, abstract intermediate processing stages. In this Research Topic we bring together current efforts in neurophysiology and computer vision in order to 1) understand how the visual cortex encodes an object, from a starting point where neurons respond to lines, bars, or edges up to a representation at the top of the hierarchy that is invariant to illumination, size, location, viewpoint, and rotation, and robust to occlusion and clutter; and 2) understand how the design of automatic vision systems can benefit from that knowledge to approach human accuracy, efficiency, and robustness to variations.
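The filter-rectify-pool hierarchy this abstract describes can be sketched in a few lines of NumPy. Everything below (the oriented kernels, the toy 16x16 image, the "corner" unit) is an invented illustration of the general scheme, not code from any article in the collection:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D cross-correlation of a single-channel image with a kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(img, size=2):
    """Non-overlapping max pooling: halves resolution, grows the receptive field."""
    h, w = img.shape
    h, w = h - h % size, w - w % size
    return img[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# Stage 1: oriented edge detectors, loosely analogous to V1 simple cells.
vertical = np.array([[-1, 0, 1]] * 3, dtype=float)
horizontal = vertical.T

img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0  # a bright square on a dark background

# Each stage: linear filtering, rectification (nonlinearity), then pooling.
v_edges = max_pool(np.maximum(conv2d(img, vertical), 0))
h_edges = max_pool(np.maximum(conv2d(img, horizontal), 0))

# Stage 2: a "corner" unit that responds only where both edge types coincide -
# a more abstract feature computed over a larger receptive field.
corners = np.minimum(v_edges, h_edges)
print(corners.shape)  # a coarser map than the 16x16 input
```

Stacking more such stages is exactly what gives modern convolutional networks (and, on this view, the ventral stream) their increasingly invariant object representations.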

Integrating Computational and Neural Findings in Visual Object Perception

Authors: --- ---
Book Series: Frontiers Research Topics --- ISSN: 1664-8714 --- ISBN: 9782889198733 --- Pages: 137 --- DOI: 10.3389/978-2-88919-873-3 --- Language: English
Publisher: Frontiers Media SA
Subject: Neurology --- Science (General)
Added to DOAB on : 2016-01-19 14:05:46

Abstract

The articles in this Research Topic provide a state-of-the-art overview of current progress in integrating computational and empirical research on visual object recognition. Developments in this exciting multidisciplinary field have recently gained momentum: high-performance computing has enabled breakthroughs in computer vision and computational neuroscience. In parallel, innovative machine learning applications have become available for data mining the large-scale, high-resolution brain data acquired with (ultra-high field) fMRI and dense multi-unit recordings. Finally, new techniques for integrating such rich simulated and empirical datasets for direct model testing could aid the development of a comprehensive brain model. We hope that this Research Topic contributes to these encouraging advances and inspires future research avenues in computational and empirical neuroscience.

The impact of learning to read on visual processing

Authors: ---
Book Series: Frontiers Research Topics --- ISSN: 1664-8714 --- ISBN: 9782889197163 --- Pages: 73 --- DOI: 10.3389/978-2-88919-716-3 --- Language: English
Publisher: Frontiers Media SA
Subject: Psychology --- Science (General)
Added to DOAB on : 2016-04-07 11:22:02

Abstract

Reading sits at the interface between the visual and spoken language domains. A growing body of research indicates that learning to read strongly affects non-linguistic visual object processing, both at the behavioral level (e.g., on mirror-image processing - enantiomorphy) and at the brain level (e.g., inducing top-down effects as well as neural competition effects). Yet many questions about the exact nature, locus, and consequences of these effects remain unanswered. The current Special Topic aims to contribute to our understanding of how a cultural activity such as reading modulates visual processing by providing a landmark forum in which researchers define the state of the art and future directions on this issue. We thus welcome reviews of current work, original research, and opinion articles that focus on the impact of literacy on cognitive and/or brain visual processes. In addition to studies directly focused on this topic, we will consider as highly relevant evidence on reading and visual processes in typical and atypical development, including adults differing in schooling and literacy, as well as neuropsychological cases (e.g., developmental dyslexia). We also encourage researchers studying nonhuman primate visual processing to consider the potential contribution of their work to this Special Topic.

Remote Sensing based Building Extraction

Authors: --- --- ---
ISBN: 9783039283828 9783039283835 --- Pages: 442 --- DOI: 10.3390/books978-3-03928-383-5 --- Language: English
Publisher: MDPI - Multidisciplinary Digital Publishing Institute
Subject: Technology (General) --- General and Civil Engineering --- Construction
Added to DOAB on : 2020-04-07 23:07:09

Abstract

Building extraction from remote sensing data plays an important role in urban planning, disaster management, navigation, updating geographic databases, and several other geospatial applications. Even though significant research has been carried out for more than two decades, the success of automatic building extraction and modeling is still largely impeded by scene complexity, incomplete cue extraction, and the sensor dependency of data. Most recently, deep neural networks (DNNs) have been widely applied to achieve high classification accuracy in various areas, including land-cover and land-use classification. Intelligent and innovative algorithms are therefore needed for automatic building extraction and modeling to succeed. This Special Issue focuses on newly developed methods for classification and feature extraction from remote sensing data for automatic building extraction and 3D building reconstruction.

Keywords

roof segmentation --- outline extraction --- convolutional neural network --- boundary regulated network --- very high resolution imagery --- building boundary extraction --- active contour model --- high resolution optical images --- LiDAR --- richer convolution features --- building edges detection --- high spatial resolution remote sensing imagery --- building --- modelling --- reconstruction --- change detection --- point cloud --- 3-D --- building extraction --- deep learning --- attention mechanism --- very high resolution --- imagery --- building detection --- aerial images --- feature-level-fusion --- straight-line segment matching --- occlusion --- building regularization technique --- point clouds --- boundary extraction --- regularization --- building reconstruction --- digital building height --- 3D urban expansion --- land-use --- DTM extraction --- open data --- developing city --- accuracy analysis --- building index --- feature extraction --- mathematical morphology --- morphological attribute filter --- morphological profile --- semantic segmentation --- data fusion --- high-resolution satellite images --- GIS data --- high-resolution aerial images --- generative adversarial network --- Inria aerial image labeling dataset --- Massachusetts buildings dataset --- simple linear iterative clustering (SLIC) --- multiscale Siamese convolutional networks (MSCNs) --- binary decision network --- unmanned aerial vehicle (UAV) --- image fusion --- high spatial resolution remotely sensed imagery --- object recognition --- method comparison --- LiDAR point cloud --- elevation map --- Gabor filter --- feature fusion --- urban building extraction --- deep convolutional neural network --- VHR remote sensing imagery --- U-Net --- remote sensing --- web-net --- ultra-hierarchical sampling --- 3D reconstruction --- indoor modelling --- mobile laser scanning --- 5G signal simulation --- high-resolution aerial imagery --- fully convolutional network
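Several keywords above (LiDAR, elevation map, DTM extraction) point to the classical elevation-based route to building extraction: threshold a normalized digital surface model (DSM minus DTM) and discard small above-ground blobs such as trees or vehicles. A minimal sketch of that idea, with made-up thresholds and a toy scene:

```python
import numpy as np
from collections import deque

def extract_buildings(ndsm, height_thresh=2.5, min_area=8):
    """Label above-ground blobs in a normalized DSM and keep the large ones.

    ndsm: 2-D array of heights above terrain (DSM minus DTM), in metres.
    Blobs smaller than min_area pixels (trees, cars, noise) are discarded.
    """
    mask = ndsm > height_thresh
    labels = np.zeros(ndsm.shape, dtype=int)
    current = 0
    for sy, sx in zip(*np.nonzero(mask)):
        if labels[sy, sx]:
            continue  # already part of an earlier blob
        current += 1
        blob, queue = [], deque([(sy, sx)])
        labels[sy, sx] = current
        while queue:  # 4-connected flood fill
            y, x = queue.popleft()
            blob.append((y, x))
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    queue.append((ny, nx))
        if len(blob) < min_area:  # too small to be a building roof
            for y, x in blob:
                labels[y, x] = 0
    return labels

# Toy scene: one 4x5-pixel "building" (6 m tall) and one noisy 3 m pixel.
scene = np.zeros((12, 12))
scene[2:6, 3:8] = 6.0
scene[9, 9] = 3.0
labels = extract_buildings(scene)
print(np.count_nonzero(labels))  # only the 20-pixel building survives
```

The deep-learning methods in this Special Issue replace the hand-set threshold and area filter with learned per-pixel classifiers, but the output they regularize is the same kind of labeled footprint map.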

Visual Sensors

Authors: ---
ISBN: 9783039283385 9783039283392 --- Pages: 738 --- DOI: 10.3390/books978-3-03928-339-2 --- Language: English
Publisher: MDPI - Multidisciplinary Digital Publishing Institute
Subject: Technology (General) --- General and Civil Engineering
Added to DOAB on : 2020-04-07 23:07:09

Abstract

Visual sensors capture a large quantity of information from the environment around them. A wide variety of visual systems can be found, from classical monocular systems to omnidirectional, RGB-D, and more sophisticated 3D systems. Each configuration has specific characteristics that make it useful for solving different problems. Their range of applications is wide and varied, including robotics, industry, agriculture, quality control, visual inspection, surveillance, autonomous driving, and navigation aid systems. This book presents several problems that employ visual sensors; among them, we highlight visual SLAM, image retrieval, manipulation, calibration, object recognition, and navigation.

Keywords

3D reconstruction --- RGB-D sensor --- non-rigid reconstruction --- pedestrian detection --- boosted decision tree --- scale invariance --- receptive field correspondence --- soft decision tree --- single-shot 3D shape measurement --- digital image correlation --- warp function --- inverse compositional Gauss-Newton algorithm --- UAV image --- dynamic programming --- seam-line --- optical flow --- image mosaic --- iris recognition --- presentation attack detection --- convolutional neural network --- support vector machines --- content-based image retrieval --- textile retrieval --- textile localization --- texture retrieval --- texture description --- visual sensors --- iris segmentation --- semantic segmentation --- convolutional neural network (CNN) --- visible light and near-infrared light camera sensors --- laser sensor --- line scan camera --- lane marking detection --- support vector machine (SVM) --- image binarization --- lane marking reconstruction --- automated design --- vision system --- FOV --- illumination --- recognition algorithm --- action localization --- action segmentation --- 3D ConvNets --- LSTM --- image retrieval --- hybrid histogram descriptor --- perceptually uniform histogram --- motif co-occurrence histogram --- omnidirectional imaging --- visual localization --- catadioptric sensor --- visual information fusion --- image processing --- underwater imaging --- embedded systems --- stereo vision --- visual odometry --- handshape recognition --- sign language --- finger alphabet --- skeletal data --- ego-motion estimation --- stereo --- RGB-D --- mobile robots --- around view monitor (AVM) system --- automatic calibration --- lane marking --- parking assist system --- advanced driver assistance system (ADAS) --- pose estimation --- symmetry axis --- point cloud --- sweet pepper --- semantic mapping --- RGB-D SLAM --- visual mapping --- indoor visual SLAM --- adaptive model --- motion estimation --- stereo camera --- person re-identification --- end-to-end architecture --- appearance-temporal features --- Siamese network --- pivotal frames --- visual tracking --- correlation filters --- motion-aware --- adaptive update strategy --- confidence response map --- camera calibration --- Gray code --- checkerboard --- visual sensor --- human visual system --- local parallel cross pattern --- straight wing aircraft --- structure extraction --- consistent line clustering --- parallel line --- planes intersection --- salient region detection --- appearance based model --- regression based model --- human visual attention --- background dictionary --- quality control --- fringe projection profilometry --- depth image registration --- speed measurement --- stereo-vision --- large field of view --- vibration --- calibration --- CLOSIB --- statistical information of gray-levels differences --- Local Binary Patterns --- texture classification --- SLAM --- indoor environment --- Manhattan frame estimation --- orientation relevance --- spatial transformation --- robotic welding --- seam tracking --- visual detection --- narrow butt joint --- GTAW --- LRF --- extrinsic calibration --- sensors combination --- geometric moments --- camera pose --- rotation-angle --- measurement error --- robotics --- robot manipulation --- depth vision --- star image prediction --- star sensor --- Richardson-Lucy algorithm --- neural network --- tightly-coupled VIO --- fused point and line feature matching --- pose estimates --- simplified initialization strategy --- patrol robot --- map representation --- vision-guided robotic grasping --- object recognition --- global feature descriptor --- iterative closest point
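Many of the keywords above (camera calibration, camera pose, visual odometry) rest on the same pinhole projection model, x ~ K [R|t] X. A minimal sketch follows; the intrinsic matrix K, the pose, and the points are made-up values for illustration, and lens distortion is ignored:

```python
import numpy as np

def project(points_3d, K, R, t):
    """Project 3-D world points to pixel coordinates with a pinhole camera."""
    cam = points_3d @ R.T + t          # world frame -> camera frame
    uv = cam[:, :2] / cam[:, 2:3]      # perspective divide by depth
    pix = uv @ K[:2, :2].T + K[:2, 2]  # apply focal lengths and principal point
    return pix

# Assumed intrinsics: 800 px focal length, principal point at (320, 240).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 5.0])  # camera 5 m from the points
X = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0]])
print(project(X, K, R, t))  # first point lands on the principal point
```

Calibration estimates K (and distortion) from known patterns such as checkerboards; pose estimation and visual odometry invert this mapping to recover R and t from observed pixels.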
