Design and Analysis of Directional Front Projection Screens

Publication date: Available online 23 May 2018
Source: Computers & Graphics
Author(s): Michal Piovarči, Michael Wessely, Michał Jagielski, Marc Alexa, Wojciech Matusik, Piotr Didyk
Traditional displays and screens are designed to maximize the perceived image quality across all viewing directions. However, there is usually a wide range of directions (e.g., towards side walls and the ceiling) for which the displayed content does not need to be provided. Ignoring this fact wastes energy, since a significant amount of light is reflected towards these regions. In this work, we propose a new type of front projection screen: directional screens. They are composed of tiny, highly reflective surfaces which reflect the light coming from a projector only towards the audience. Additionally, they avoid “hot-spotting” and can support non-standard audience layouts. In this paper, we describe the design process and provide a feasibility analysis of the new screens. We also validate the approach in simulations and by fabricating several fragments of big screens. We demonstrate that, thanks to the customization, our solution can provide up to three times the gain of traditional high-gain screens and up to eight times the brightness of a matte screen.
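The core geometric idea, reflecting projector light only towards the audience, can be illustrated with a small sketch. For a perfectly mirror-like micro-facet, the facet normal that sends light from the projector to a chosen audience point is the half-vector between the two directions. The function name and vector convention below are illustrative, not from the paper:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def facet_normal(screen_pt, projector, audience_pt):
    """Orientation of a tiny mirror at screen_pt that reflects light
    arriving from the projector toward a chosen audience point.
    For an ideal mirror, the facet normal is the (normalized)
    half-vector between the direction back to the projector and the
    direction toward the viewer."""
    to_proj = normalize(tuple(p - s for p, s in zip(projector, screen_pt)))
    to_view = normalize(tuple(a - s for a, s in zip(audience_pt, screen_pt)))
    return normalize(tuple(p + v for p, v in zip(to_proj, to_view)))
```

With the projector and the viewer placed symmetrically about the screen normal, the facet normal reduces to the screen normal itself, as expected for a conventional flat screen.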

Full article: Design and Analysis of Directional Front Projection Screens

Constraint-based point set denoising using normal voting tensor and restricted quadratic error metrics

Publication date: Available online 23 May 2018
Source: Computers & Graphics
Author(s): Sunil Kumar Yadav, Ulrich Reitebuch, Martin Skrodzki, Eric Zimmermann, Konrad Polthier
In many applications, point set surfaces are acquired by 3D scanners. During this acquisition process, noise and outliers are inevitable. For a high fidelity surface reconstruction from a noisy point set, a feature preserving point set denoising operation has to be performed to remove noise and outliers from the input point set. To suppress these undesired components while preserving features, we introduce an anisotropic point set denoising algorithm in the normal voting tensor framework. The proposed method consists of three different stages that are iteratively applied to the input: in the first stage, noisy vertex normals, initially computed using principal component analysis, are processed using a vertex-based normal voting tensor and binary eigenvalue optimization. In the second stage, feature points are categorized into corners, edges, and surface patches using a weighted covariance matrix, which is computed based on the processed vertex normals. In the last stage, vertex positions are updated according to the processed vertex normals using restricted quadratic error metrics. For the vertex updates, we add different constraints to the quadratic error metric based on feature (edges and corners) and non-feature (planar) vertices. Finally, we show in several experiments that our method is robust and comparable to state-of-the-art methods.
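The PCA normal initialization in the first stage can be sketched in its 2D analogue, where the eigenpair of the 2x2 neighborhood covariance matrix has a closed form: the normal is the eigenvector of the smallest eigenvalue. This is a minimal illustration of the initialization step only, not the voting-tensor processing itself:

```python
import math

def pca_normal_2d(points):
    """Estimate the normal of a noisy 2D point neighborhood as the
    eigenvector of the smallest eigenvalue of the covariance matrix
    (the 2D analogue of the PCA normal initialization)."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    cxx = sum((p[0] - mx) ** 2 for p in points) / n
    cyy = sum((p[1] - my) ** 2 for p in points) / n
    cxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    if abs(cxy) < 1e-12:
        # axis-aligned neighborhood: normal is the axis of least spread
        return (1.0, 0.0) if cxx <= cyy else (0.0, 1.0)
    # smallest eigenvalue of [[cxx, cxy], [cxy, cyy]]
    lam = 0.5 * ((cxx + cyy) - math.sqrt((cxx - cyy) ** 2 + 4.0 * cxy ** 2))
    # (C - lam I) v = 0  =>  v = (cxy, lam - cxx) is a valid eigenvector
    vx, vy = cxy, lam - cxx
    norm = math.hypot(vx, vy)
    return (vx / norm, vy / norm)
```

For points scattered near a horizontal line, the estimated normal is (close to) vertical; on a real point set this would be run per vertex over its k-nearest neighbors.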

Full article: Constraint-based point set denoising using normal voting tensor and restricted quadratic error metrics

Hierarchical representation for rasterized planar face complexes

Publication date: Available online 24 May 2018
Source: Computers & Graphics
Author(s): Guillaume Damiand, Aldo Gonzalez-Lorenzo, Jarek Rossignac, Florent Dupont
A useful example of a Planar Face Complex (PFC) is a connected network of streets, each modeled by a zero-thickness curve. The union of these decomposes the plane into faces that may be topologically complex. The previously proposed rasterized representation of the PFC (abbreviated rPFC) (1) uses a fixed resolution pixel grid, (2) quantizes the geometry of the vertices and edges to pixel resolution, (3) assumes that no street is contained in a single pixel, and (4) encodes the graph connectivity using a small and fixed number of bits per pixel by decomposing the exact topology into per-pixel descriptors. The hierarchical (irregular) version of the rPFC (abbreviated hPFC) proposed here improves on the rPFC in several ways: (1) it uses an adaptively constructed tree to eliminate the “no street in a pixel” constraint of the rPFC, making it possible to represent any PFC topology exactly, and (2) for the models tested, and more generally for models with relatively large empty regions, it reduces the storage cost significantly.
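The adaptive-tree idea can be sketched with a toy quadtree that subdivides a cell only while it holds too much content, so large empty regions stay as single cheap leaves. Here "at most one point per leaf" stands in for the paper's actual criterion (the local street topology fitting a fixed per-pixel descriptor), and all names are illustrative:

```python
def build_quadtree(points, x0, y0, size, max_items=1, depth=0, max_depth=8):
    """Adaptively subdivide a square cell until each leaf holds at most
    max_items entities (a stand-in for 'the local topology fits the
    fixed-size descriptor'). Returns a nested dict."""
    inside = [p for p in points
              if x0 <= p[0] < x0 + size and y0 <= p[1] < y0 + size]
    if len(inside) <= max_items or depth == max_depth:
        return {"leaf": True, "items": inside}
    half = size / 2.0
    return {"leaf": False, "children": [
        build_quadtree(inside, x0 + dx * half, y0 + dy * half, half,
                       max_items, depth + 1, max_depth)
        for dy in (0, 1) for dx in (0, 1)]}
```

Two far-apart points in a unit cell force exactly one split; the two empty child cells cost only a leaf marker each, which is the storage saving the hPFC exploits for sparse regions.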

Full article: Hierarchical representation for rasterized planar face complexes

Joint planar parameterization of segmented parts and cage deformation for dense correspondence

Publication date: Available online 24 May 2018
Source: Computers & Graphics
Author(s): Srinivasan Ramachandran, Donya Ghafourzadeh, Martin de Lasa, Tiberiu Popa, Eric Paquette
In this paper, we present a robust and efficient approach for computing a dense registration between two surface meshes. The proposed approach exploits a user-provided sparse set of landmarks, positioned at semantic locations, along with closed paths connecting sequences of landmarks. The approach segments the mesh and then flattens the segmented parts using angle-based flattening and low distortion boundary constraints. It adjusts the segmented parts with a cage deformation to align the interior landmarks. As a last step, our approach extracts the dense registration from the flattened and deformed segmented parts. The approach is capable of handling a wide range of surfaces, and is not limited to genus-zero surfaces. It handles small features, such as fingers and facial attributes, as well as non-isometric pairs and pairs in different poses. The results show that the proposed approach is superior to current state-of-the-art methods.

Full article: Joint planar parameterization of segmented parts and cage deformation for dense correspondence

Guided proceduralization: Optimizing geometry processing and grammar extraction for architectural models

Publication date: Available online 25 May 2018
Source: Computers & Graphics
Author(s): İlke Demir, Daniel G. Aliaga
We describe a guided proceduralization framework that optimizes geometry processing on architectural input models to extract target grammars. We aim to provide efficient artistic workflows by creating procedural representations from existing 3D models, where the procedural expressiveness is controlled by the user. Architectural reconstruction and modeling tasks have so far been handled either as time-consuming manual processes or through procedural generation that is difficult to control and influence artistically. We bridge the gap between creation and generation by converting existing manually modeled architecture into procedurally editable parametrized models, and by carrying the guidance into the procedural domain, letting the user define the target procedural representation. Additionally, we propose various applications of such procedural representations, including guided completion of point cloud models, controllable 3D city modeling, and other benefits of procedural modeling.
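To make "procedurally editable parametrized model" concrete, here is a toy split-grammar derivation: nonterminals expand according to user-controlled parameters until only terminals remain. The rule set and symbol names are invented for illustration and are not the grammars extracted by the paper:

```python
# Hypothetical split grammar: a facade splits into floors, a floor into
# windows; "window" is a terminal symbol.
rules = {
    "facade": lambda p: [("floor", p)] * p["floors"],
    "floor": lambda p: [("window", p)] * p["windows_per_floor"],
    "window": None,  # terminal: no further expansion
}

def derive(symbol, params):
    """Recursively expand a symbol into its terminal symbols."""
    rule = rules[symbol]
    if rule is None:
        return [symbol]
    out = []
    for child, child_params in rule(params):
        out.extend(derive(child, child_params))
    return out
```

Editing `floors` or `windows_per_floor` regenerates a consistent variant of the model, which is the kind of post-hoc editability that proceduralizing an existing mesh provides.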

Full article: Guided proceduralization: Optimizing geometry processing and grammar extraction for architectural models

GLAM: Glycogen-derived Lactate Absorption Map for visual analysis of dense and sparse surface reconstructions of rodent brain structures on desktop systems and virtual environments

Publication date: Available online 21 May 2018
Source: Computers & Graphics
Author(s): Marco Agus, Daniya Boges, Nicolas Gagnon, Pierre J. Magistretti, Markus Hadwiger, Corrado Calí
The human brain contains about one hundred billion neurons, but they cannot work properly without ultrastructural and metabolic support. For this reason, mammalian brains host another type of cell, called “glial cells”, whose role is to maintain proper conditions for efficient neuronal function. One type of glial cell, the astrocyte, is involved in particular in the metabolic support of neurons: it feeds them with lactate, a byproduct of the glucose that astrocytes take up from blood vessels and store in another form, glycogen granules. These energy-storage molecules, whose morphology resembles spheres with diameters ranging roughly from 10 to 80 nanometers, can be easily recognized using electron microscopy, the only technique whose resolution is high enough to resolve them. Understanding and quantifying their distribution is of particular relevance for neuroscientists who want to understand where and when neurons use energy in this form. To answer this question, we developed a visualization technique, dubbed GLAM (Glycogen-derived Lactate Absorption Map), customized for analyzing the interaction of astrocytic glycogen with surrounding neurites in order to formulate hypotheses on the energy absorption mechanisms. The method integrates high-resolution surface reconstructions of neurites, astrocytes, and the energy sources in the form of glycogen granules, obtained from different automated serial electron microscopy methods such as focused ion beam scanning electron microscopy (FIB-SEM) or serial block-face electron microscopy (SBEM), together with an absorption map computed as a radiance transfer mechanism. The resulting visual representation provides an immediate and comprehensible illustration of the areas in which the probability of lactate shuttling is higher. The computed dataset can then be explored and quantified in 3D space, either using 3D modeling software or virtual reality environments.
Domain scientists have evaluated the technique, either by using the computed maps to formulate functional hypotheses or by planning sparse reconstructions to avoid excessive occlusion. Furthermore, we conducted a pioneering user study showing that immersive VR setups ease the investigation of the areas of interest and the analysis of the absorption patterns in the cellular structures.
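A drastically simplified stand-in for such an absorption map: for each neurite surface point, accumulate a distance-weighted contribution from every nearby glycogen granule. The linear falloff and the function name are assumptions for illustration; the paper computes the map as a radiance-transfer-style mechanism over full surface reconstructions:

```python
import math

def absorption(surface_pts, granules, influence_radius):
    """Toy absorption map: for each surface point, sum contributions of
    glycogen granules within influence_radius, falling off linearly
    with distance. Higher values suggest more likely lactate shuttling."""
    out = []
    for s in surface_pts:
        a = 0.0
        for g in granules:
            d = math.dist(s, g)
            if d < influence_radius:
                a += 1.0 - d / influence_radius
        out.append(a)
    return out
```

The resulting per-point scalar could then be mapped to a color scale on the neurite surface for inspection on a desktop or in VR.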

Full article: GLAM: Glycogen-derived Lactate Absorption Map for visual analysis of dense and sparse surface reconstructions of rodent brain structures on desktop systems and virtual environments

3D synthesis of man-made objects based on fine-grained parts

Publication date: Available online 22 May 2018
Source: Computers & Graphics
Author(s): Diego Gonzalez, Oliver van Kaick
We present a novel approach for 3D shape synthesis from a collection of existing models. The main idea of our approach is to synthesize shapes by recombining fine-grained parts extracted from the existing models based purely on the objects’ geometry. Thus, unlike most previous works, a key advantage of our method is that it does not require a semantic segmentation, nor part correspondences between the shapes of the input set. Our method uses a template shape to guide the synthesis. After extracting a set of fine-grained segments from the input dataset, we compute the similarity between the segments in the collection and the segments of the template using shape descriptors. Next, we use the similarity estimates to select, from the set of fine-grained segments, compatible replacements for each part of the template. By sampling different segments for each part of the template, and by using different templates, our method can synthesize many distinct shapes that have a variety of local fine details. Additionally, we maintain the plausibility of the objects by preserving the general structure of the template. We show with several experiments performed on different datasets that our algorithm can be used for synthesizing a wide variety of man-made objects.
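The descriptor-based replacement selection can be sketched as a nearest-neighbor query in descriptor space. The descriptors here are plain feature vectors and the L2 metric is an assumption; the paper's actual shape descriptors and sampling strategy are richer:

```python
def best_replacement(template_descriptor, candidates):
    """Pick the candidate segment whose shape descriptor is closest (L2)
    to the descriptor of a template part.
    candidates: list of (segment_id, descriptor) pairs."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(candidates, key=lambda kv: dist(template_descriptor, kv[1]))[0]
```

Sampling among the top-k closest candidates instead of taking the single minimum would yield the shape variety described above while keeping replacements geometrically compatible.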

Full article: 3D synthesis of man-made objects based on fine-grained parts

Texturing and inpainting a complete tubular 3D object reconstructed from partial views

Publication date: Available online 22 May 2018
Source: Computers & Graphics
Author(s): Julien Fayer, Bastien Durix, Simone Gasparini, Géraldine Morin
We present a novel approach to texture 3D tubular objects reconstructed from partial views. Starting from a few images of the object, we rely on a 3D reconstruction approach that represents the object as a composition of several parametric surfaces, more specifically canal surfaces. Such a representation enables a complete reconstruction of the object, even for parts that are hidden or not visible in the input images. The proposed texturing method maps the input images onto the parametric surface of the object and completes the parts of the surface not visible in any image through an inpainting process. In particular, we first propose a method to select, for each 3D canal surface, the most suitable images and fuse them together for texturing the surface. This process is regulated by a confidence criterion that selects images based on their position and orientation w.r.t. the surface. We also introduce a global method to fuse the images that takes into account their exposure differences. Finally, we propose methods to complete or inpaint the texture in the hidden parts of the surface according to the type of texture.
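A confidence criterion of this kind is commonly a cosine weighting between the surface normal and the direction toward the camera, with confidence-weighted averaging for fusion. The exact criterion in the paper may differ; this is a minimal sketch under that assumption:

```python
def view_confidence(surface_normal, view_dir):
    """Confidence of a camera for texturing a surface point: cosine of
    the angle between the outward surface normal and the direction
    toward the camera; zero for grazing or back-facing views."""
    dot = sum(n * v for n, v in zip(surface_normal, view_dir))
    return max(0.0, dot)

def fuse(texels):
    """Confidence-weighted average of candidate texel values.
    texels: list of (value, confidence) pairs."""
    total = sum(c for _, c in texels)
    if total == 0.0:
        return None  # no camera sees this point; leave it for inpainting
    return sum(val * c for val, c in texels) / total
```

Points where every confidence is zero are exactly the hidden regions that the inpainting stage must fill.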

Full article: Texturing and inpainting a complete tubular 3D object reconstructed from partial views

Real-time field aligned stripe patterns

Publication date: Available online 22 May 2018
Source: Computers & Graphics
Author(s): Nils Lichtenberg, Noeska Smit, Christian Hansen, Kai Lawonn
In this paper, we present a parameterization technique that can be applied to surface meshes in real-time without time-consuming preprocessing steps. The parameterization is suitable for the display of (un-)oriented patterns and texture patches, and for sampling a surface in a periodic fashion. The method is inspired by existing work that solves a global optimization problem to generate a continuous stripe pattern on the surface, from which texture coordinates can be derived. We propose a local optimization approach that is suitable for parallel execution on the GPU, which drastically reduces computation time. With this, we achieve on-the-fly texturing of medium-sized 3D surface meshes (up to 70 k vertices). The algorithm takes a tangent vector field as input and aligns the texture coordinates to it. Our technique achieves real-time parameterization of the surface meshes by employing a parallelizable local search algorithm that converges to a local minimum in a few iterations. The real-time calculation allows for live parameter updates and the determination of varying texture coordinates. Furthermore, the method can handle non-manifold meshes. The technique is useful in various applications, e.g., biomedical visualization and flow visualization. We highlight our method’s potential by providing usage scenarios for several applications.
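For intuition, the degenerate case of a globally constant tangent field has a closed-form stripe parameterization: project each vertex onto the tangent direction and wrap by the stripe period. The paper's contribution is the local optimization that handles arbitrary, spatially varying fields; this sketch only shows what the resulting coordinate encodes:

```python
def stripe_coords(vertices, tangent, period):
    """Per-vertex stripe coordinate for a constant tangent field:
    project each vertex onto the tangent direction and wrap by the
    stripe period, giving a value in [0, 1) along the stripes."""
    return [((sum(p * t for p, t in zip(v, tangent)) / period) % 1.0)
            for v in vertices]
```

Thresholding the wrapped coordinate (e.g., below 0.5 vs. above) then renders alternating stripes perpendicular to the tangent direction.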

Full article: Real-time field aligned stripe patterns