Access: Full Conference, Full Conference 1-Day

Date/Time: 7 December 2016, 9:00 am - 10:45 am
Venue: Sicily 2405, Level 1
Location: The Venetian Macao


Diminished Reality Based on Image Inpainting Considering Background Geometry

Summary: Diminished reality aims to remove real objects from video images and fill in the missing regions with plausible background textures in real time. Most conventional methods based on image inpainting achieve diminished reality by assuming that the background around a target object is almost planar. This paper proposes a new diminished reality method that considers background geometries under fewer constraints than conventional methods. In this study, we approximate the background geometry by combining local planes, and improve the quality of image inpainting by correcting the perspective distortion of texture and limiting the search area for finding similar textures as exemplars. The temporal coherence of texture is preserved using the geometries and the camera pose estimated by visual simultaneous localization and mapping (visual SLAM). The mask region that includes a target object is robustly set in each frame by projecting a 3D region, rather than by tracking the object in 2D image space. The effectiveness of the proposed method is demonstrated in several experimental environments.
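The mask-projection step lends itself to a short illustration. The following Python sketch (a minimal toy, not the authors' code) projects a fixed 3D box enclosing the target object into the current frame using a camera pose of the kind visual SLAM provides; the intrinsics K, the pose (R, t), and the box corners are hypothetical inputs, and the box is assumed to lie in front of the camera.

import numpy as np
import cv2

def project_mask(box_corners_w, K, R, t, image_size):
    """Rasterize the image-space footprint of a 3D box around the target.

    box_corners_w : (8, 3) corners of a 3D region enclosing the object (world).
    K             : (3, 3) camera intrinsic matrix.
    R, t          : world-to-camera rotation (3, 3) and translation (3,),
                    e.g. the per-frame pose estimated by visual SLAM.
    image_size    : (height, width) of the video frame.
    """
    # World -> camera -> pixel coordinates (pinhole projection).
    # Assumes all corners are in front of the camera (positive depth).
    pc = R @ box_corners_w.T + t.reshape(3, 1)      # (3, 8) camera coords
    uv = K @ pc
    uv = (uv[:2] / uv[2]).T.astype(np.int32)        # (8, 2) pixel coords

    # The filled convex hull of the projected corners is the per-frame
    # inpainting mask; no 2D tracking of the object itself is needed.
    mask = np.zeros(image_size, dtype=np.uint8)
    cv2.fillConvexPoly(mask, cv2.convexHull(uv), 255)
    return mask

Because the 3D region is fixed in the world, the mask stays consistent across frames even under appearance changes that would defeat an image-space tracker.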

Author(s): Norihiko Kawai, Nara Institute of Science and Technology
Tomokazu Sato, Nara Institute of Science and Technology
Naokazu Yokoya, Nara Institute of Science and Technology

Speaker(s): Norihiko Kawai, Nara Institute of Science and Technology

Gaussian Light Field Estimation for Chromatic Aberration Calibration of Optical See-Through Head-Mounted Displays

Summary: We propose a method to calibrate view-dependent chromatic aberrations of near-eye displays, especially Optical See-Through Head-Mounted Displays (OST-HMDs). Imperfections in HMD optics cause channel-wise image shifts and blurs, known as chromatic aberrations, that degrade the image quality of the display at the user's viewpoint. If we could estimate these aberrations perfectly, we could mitigate their effect by applying correction techniques from computational photography, as is done for cameras. Unfortunately, directly applying existing camera calibration techniques to OST-HMDs is not straightforward. Unlike ordinary imaging systems, aberrations in OST-HMDs are view-dependent, i.e., the optical characteristics change dynamically with the user's current viewpoint. This makes the problem challenging, since we must ideally measure aberrations over the entire 3D eyebox in which a user would see an image. To overcome this problem, we model the view-dependent aberration as a Gaussian Light Field (GLF), which stores the spatial information of the display screen as a light field and the aberration as a Gaussian kernel, i.e., a point-spread function (PSF). We first describe both our GLF model and a calibration procedure for learning the GLF of a given OST-HMD. We then apply our calibration method to two OST-HMDs that use different optics: a cubic prism and holographic gratings. The results show that our method achieves significantly better accuracy in PSF estimation, with a boost of about 2 to 7 dB in peak signal-to-noise ratio (PSNR).
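To make the model concrete, the Python sketch below (an illustrative toy, not the paper's calibration code) simulates such an aberration by shifting and blurring each color channel independently; the per-channel (dx, dy, sigma) triples are hypothetical values standing in for what one would read out of a calibrated GLF at the current eye position.

import numpy as np
from scipy.ndimage import gaussian_filter, shift

def apply_channel_psfs(image, psf_params):
    """Simulate view-dependent chromatic aberration on an (H, W, 3) float image.

    psf_params: one (dx, dy, sigma) triple per color channel - the
    channel-wise image shift (in pixels) and the std. dev. of the Gaussian
    PSF, as would be sampled from a calibrated GLF at the current viewpoint.
    """
    out = np.empty_like(image)
    for c, (dx, dy, sigma) in enumerate(psf_params):
        # scipy.ndimage.shift takes offsets in (row, col) order, i.e. (dy, dx).
        shifted = shift(image[..., c], (dy, dx), order=1, mode="nearest")
        out[..., c] = gaussian_filter(shifted, sigma)
    return out

# Hypothetical GLF sample: red and blue shift in opposite directions and
# blur more than green - a typical transverse chromatic-aberration pattern.
psf_params = [(+1.2, +0.3, 1.8),   # R: (dx, dy, sigma)
              ( 0.0,  0.0, 1.0),   # G
              (-1.4, -0.2, 2.1)]   # B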

Author(s): Yuta Itoh, Technical University of Munich
Toshiyuki Amano, Wakayama University
Daisuke Iwai, Osaka University
Gudrun Klinker, Technical University of Munich

Speaker(s): Yuta Itoh, Technical University of Munich

A Real-time Augmented Reality System to See-Through Cars

Summary: One of the most hazardous driving scenarios is overtaking a slower vehicle: the front vehicle (being overtaken) can occlude an important part of the rear driver's field of view, and this lack of visibility is the most probable cause of accidents in this context. Recent research suggests that augmented reality applied to assisted driving can significantly reduce the risk of accidents. In this paper, we present a real-time, markerless system to see through cars. For this purpose, two cars are equipped with cameras and an appropriate wireless communication system. The stereo vision system mounted on the front car is used to create a sparse 3D map of the environment in which the rear car can be localized. Using this inter-car pose estimation, a synthetic image is generated to overcome the occlusion and create a seamless see-through effect that preserves the structure of the scene.
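The compositing stage can be sketched compactly. The Python snippet below is a simplified stand-in for the authors' pipeline: a single homography H replaces the pose-based warp derived from the shared sparse 3D map (adequate only for a roughly planar scene), and the front-car image is blended semi-transparently into the masked region so the occluding car remains faintly visible.

import numpy as np
import cv2

def see_through_composite(rear_img, front_img, H, occluder_mask, alpha=0.7):
    """Blend the warped front-car view into the region it occludes.

    rear_img      : (H, W, 3) frame from the rear (driver's) camera.
    front_img     : (H, W, 3) frame transmitted from the front car.
    H             : 3x3 homography mapping front-camera pixels into the
                    rear view (a stand-in for the full pose-based warp).
    occluder_mask : (H, W) uint8 mask of the front vehicle in rear_img.
    """
    h, w = rear_img.shape[:2]
    warped = cv2.warpPerspective(front_img, H, (w, h))

    # A semi-transparent blend keeps the occluding car faintly visible,
    # which preserves the structure of the scene for the driver.
    blend = cv2.addWeighted(warped, alpha, rear_img, 1.0 - alpha, 0.0)
    out = rear_img.copy()
    out[occluder_mask > 0] = blend[occluder_mask > 0]
    return out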

Author(s): Francois Rameau, Korea Advanced Institute of Science and Technology
Hyowon Ha, Korea Advanced Institute of Science and Technology
Kyungdon Joo, Korea Advanced Institute of Science and Technology
Jinsoo Choi, Korea Advanced Institute of Science and Technology
Kibaek Park, Korea Advanced Institute of Science and Technology
In So Kweon, Korea Advanced Institute of Science and Technology

Speaker(s): Francois Rameau, Korea Advanced Institute of Science and Technology

Simultaneous Localization and Appearance Estimation with a Consumer RGB-D Camera

Summary: Acquiring general material appearance with hand-held consumer RGB-D cameras is difficult for casual users, due to inaccuracies in the reconstructed camera poses and geometry, as well as the unknown lighting that is coupled with the materials in the measured color images. To tackle these challenges, we present a novel technique, called Simultaneous Localization and Appearance Estimation (SLAE), for estimating spatially varying isotropic surface reflectance solely from color and depth images captured with an RGB-D camera under unknown environment illumination. The core of our approach is a joint optimization that alternates among solving for plausible camera poses, materials, environment lighting, and normals. To refine camera poses, we exploit the rich spatial and view-dependent variations of the materials, treating the object as a self-calibrating model for localization. To recover the unknown lighting, the measured color images, along with the current estimate of the materials, are used in a global optimization, solved efficiently by exploiting sparsity in the wavelet domain. We demonstrate the substantially improved quality of the estimated appearance on a variety of everyday objects.
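The alternating structure is the heart of the method, and a toy version fits in a few lines. The Python example below (synthetic data; not the paper's solver, which also refines poses and normals) reduces the idea to a Lambertian model, observed = albedo x (normal . light): with the lighting fixed, the per-material albedos have closed-form least-squares updates, and with the materials fixed, the light is a linear least-squares solve, so the two are refined in turn. The recovered albedo/light pair is identifiable only up to a global scale, mirroring the usual reflectance-lighting ambiguity.

import numpy as np

# Synthetic scene: 500 surface points sharing 5 materials, one distant light.
rng = np.random.default_rng(0)
n_pts, n_mat = 500, 5
mat_id = rng.integers(0, n_mat, size=n_pts)          # material label per point
normals = rng.normal(size=(n_pts, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
true_light = np.array([0.3, 0.5, 0.8])
normals[normals @ true_light < 0] *= -1              # keep all points lit
true_albedo = rng.uniform(0.2, 0.9, size=n_mat)
obs = true_albedo[mat_id] * (normals @ true_light)   # Lambertian "pixels"

albedo = np.full(n_mat, 0.5)                         # initial guesses
light = np.array([0.0, 0.0, 1.0])
for _ in range(50):
    s = normals @ light
    lit = s > 0                          # only lit points constrain the fit
    # Materials step (lighting held fixed): per-material 1D least squares.
    for k in range(n_mat):
        i = lit & (mat_id == k)
        albedo[k] = (obs[i] @ s[i]) / max(s[i] @ s[i], 1e-9)
    # Lighting step (materials held fixed): linear least squares.
    A = albedo[mat_id[lit]][:, None] * normals[lit]
    light, *_ = np.linalg.lstsq(A, obs[lit], rcond=None)

# Albedo and light are only determined up to a shared scale, so compare
# the normalized light direction with the ground truth.
print("estimated:", light / np.linalg.norm(light))
print("true     :", true_light / np.linalg.norm(true_light))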

Author(s): Hongzhi Wu, Zhejiang University
Zhaotian Wang, Zhejiang University
Kun Zhou, Zhejiang University

Speaker(s): Hongzhi Wu, Zhejiang University