Construction of bounding volume hierarchies with SAH cost approximation on temporary subtrees

Publication date: February 2017
Source: Computers & Graphics, Volume 62
Author(s): Dominik Wodniok, Michael Goesele
Advances in research on quality metrics for bounding volume hierarchies (BVHs) have shown that greedy top-down SAH builders construct BVHs with superior traversal performance, even though the resulting SAH values are higher than those achieved by more sophisticated builders. Motivated by this observation, we examine three construction algorithms that use recursive SAH values of temporarily constructed SAH-built BVHs to guide the construction, and perform an extensive evaluation. The resulting BVHs achieve up to 37% better trace performance for primary rays and up to 32% better trace performance for secondary diffuse rays compared to standard plane sweeping without applying spatial splits. Allowing spatial splits still yields up to 18% and 15% better performance, respectively. While our approach is not suitable for real-time BVH construction, we show that the proposed algorithm has subquadratic computational complexity in the number of primitives, which renders it usable in practical applications. Additionally, we describe an approach for improving the forecasting abilities of the SAH-EPO ray tracing performance predictor, which also increases its relevance for primary rays.
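The surface area heuristic (SAH) at the heart of this abstract estimates the cost of a candidate split from the surface areas of the child bounding boxes. As a minimal illustrative sketch (the cost constants and function names here are assumptions, not taken from the paper):

```python
# Illustrative SAH cost for one candidate split of a BVH node.
# c_trav and c_isect are assumed traversal/intersection costs.

def surface_area(bbox):
    """Surface area of an axis-aligned box given as ((min), (max))."""
    (x0, y0, z0), (x1, y1, z1) = bbox
    dx, dy, dz = x1 - x0, y1 - y0, z1 - z0
    return 2.0 * (dx * dy + dy * dz + dz * dx)

def sah_cost(parent, left, right, n_left, n_right,
             c_trav=1.0, c_isect=1.0):
    """Estimated cost of splitting `parent` into `left`/`right`,
    holding n_left and n_right primitives respectively."""
    sa_p = surface_area(parent)
    return (c_trav
            + surface_area(left) / sa_p * n_left * c_isect
            + surface_area(right) / sa_p * n_right * c_isect)
```

A greedy top-down builder evaluates this cost for every candidate plane and recurses on the cheapest one; the paper's contribution is to replace the children's primitive counts with recursive SAH values measured on temporarily built subtrees.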

Full article: Construction of bounding volume hierarchies with SAH cost approximation on temporary subtrees

The FAUST framework: Free-form annotations on unfolding vascular structures for treatment planning

Publication date: June 2017
Source: Computers & Graphics, Volume 65
Author(s): Patrick Saalfeld, Sylvia Glaßer, Oliver Beuing, Bernhard Preim
For complex interventions, such as stenting of a cerebral aneurysm, treatment planning is mandatory. Sketching can support the physician, as it requires active engagement with complex spatial relations and bears great potential to improve communication. Such sketches are typically employed as direct annotations on 2D medical image data or printouts. Annotating 3D planning models is more difficult due to possible occlusions of the complex spatial anatomy of vascular structures. Furthermore, the annotations should adapt to view changes and deforming structures. Therefore, we developed the FAUST framework, which allows creating 3D annotations by freely sketching in the 3D environment. In addition to generic annotations, the framework supports the physician in creating the most common treatment options by sketching only single strokes. We allow an interactive unfolding of vascular structures, with annotations adapting so that they still convey their meta information. Our framework is realized on the zSpace, which combines a semi-immersive stereoscopic display and a stylus with ray-based interaction techniques. We conducted a user study with computer scientists, carried out a demo session with a neuroradiologist, and assessed the performance. The user study revealed a positive rating of the interaction techniques and a high sense of presence. The neuroradiologist stated that our framework can support treatment planning and leads to a better understanding of anatomical structures. Our performance evaluation showed that our sketching approach is usable in real time with a large number of annotations. Furthermore, our approach can be adapted to a wider range of applications, including medical documentation.

Full article: The FAUST framework: Free-form annotations on unfolding vascular structures for treatment planning

Art-directed watercolor stylization of 3D animations in real-time

Publication date: June 2017
Source: Computers & Graphics, Volume 65
Author(s): S.E. Montesdeoca, H.S. Seah, H.-M. Rall, D. Benvenuti
This paper presents a direct stylization system to render 3D animated geometry as watercolor-painted animation. Featuring low-level art direction in real time, our approach focuses on letting users paint custom stylization parameters in the 3D scene. These painted parameters drive watercolor effects in object space, providing localized control and emulating the characteristic appearance of traditional watercolor. For this purpose, the parameters alter the object-space geometric representations and are rasterized to coherently control and enhance further image-space effects. The watercolor effects are simulated through improved and novel algorithms that recreate hand tremors, pigment turbulence, color bleeding, edge darkening, paper distortion, and granulation, all of which are essential characteristic effects of traditional watercolor. The proposed direct stylization system scales well with scene complexity, can be implemented in most rendering pipelines, and can be adapted to simulate a wide range of watercolor looks. The simulation is compared to previous approaches and is evaluated through a user study involving professional CG artists, who spent over 50 hours stylizing their own assets and sharing their feedback about the watercolor stylization, the direct stylization system, and their needs as artists using non-photorealistic rendering (NPR).

Full article: Art-directed watercolor stylization of 3D animations in real-time

Daniele Panozzo Associate Editor of the year

Prof. Daniele Panozzo from NYU is the Computers & Graphics Associate Editor of the Year. His performance has been outstanding in fulfilling the duties of an Editorial Board member. Indeed, Daniele has co-guest-edited the special section on SMI 2017 with Marco Attene and Sylvain Lefebvre, as well as the special section on SIBGRAPI 2017, while upholding the Journal's standards of thoroughness, quality, and timeliness. Furthermore, since 2015, Prof. Panozzo has pioneered the Seal of Replicability to promote the peer verification of results in computer graphics. In his regular duties as Associate Editor, Daniele has applied rigorous quality standards while fostering the Journal's distinctive flair, strengthening its standing and impact in the computer graphics community. Please join me in congratulating Prof. Panozzo on a job well done.

Falcon: Visual analysis of large, irregularly sampled, and multivariate time series data in additive manufacturing

Publication date: April 2017
Source: Computers & Graphics, Volume 63
Author(s): Chad A. Steed, William Halsey, Ryan Dehoff, Sean L. Yoder, Vincent Paquit, Sarah Powers
Flexible visual analysis of long, high-resolution, and irregularly sampled time series data from multiple sensor streams is a challenge in several domains. In the field of additive manufacturing, this capability is critical for realizing the full potential of large-scale 3D printers. In this paper, we propose a visual analytics approach that helps additive manufacturing researchers acquire a deep understanding of patterns in log and imagery data collected by 3D printers. Specific goals include discovering patterns related to defects and system performance issues, optimizing build configurations to avoid defects, and increasing production efficiency. We introduce Falcon, a new visual analytics system that allows users to interactively explore large, time-oriented data sets from multiple linked perspectives. Falcon provides overviews, detailed views, and unique segmented time series visualizations, all with adjustable scale options. To illustrate the effectiveness of Falcon at providing thorough and efficient knowledge discovery, we present a practical case study involving experts in additive manufacturing and data from a large-scale 3D printer. Although the focus of this paper is on additive manufacturing, the techniques described are applicable to the analysis of any quantitative time series.
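A recurring preprocessing step for the irregularly sampled sensor streams described above is aligning them on a common uniform time grid before linked views can compare them. A minimal sketch of that step (the function name and linear-interpolation choice are assumptions for illustration, not Falcon's actual pipeline):

```python
import numpy as np

def resample_uniform(t, v, n):
    """Resample an irregularly sampled series (times t, values v)
    onto n uniformly spaced steps via linear interpolation."""
    tu = np.linspace(t[0], t[-1], n)
    return tu, np.interp(tu, t, v)
```

Multiple streams resampled onto the same grid can then be compared sample-by-sample, which is what linked overview and detail views typically require.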

Full article: Falcon: Visual analysis of large, irregularly sampled, and multivariate time series data in additive manufacturing

On Visibility & Empty-Region Graphs

Publication date: Available online 3 June 2017
Source: Computers & Graphics
Author(s): Sagi Katz, Ayellet Tal
Empty-region graphs are well studied in Computer Graphics, Geometric Modeling, and Computational Geometry, as well as in Robotics and Computer Vision. The vertices of these graphs are points in space, and two vertices are connected by an arc if there exists an empty region of a certain shape and size between them. In most of the graphs discussed in the literature, the empty region is assumed to be a circle or the union/intersection of circles. In this paper we propose a new type of empty-region graph—the γ-visibility graph. This graph can accommodate a variety of shapes of empty regions and may be defined in any dimension. Interestingly, we show that commonly used shapes are special cases of our graph; in this sense, our graph generalizes some empty-region graphs. Though this paper is mostly theoretical, it may have practical implications: the numerous applications that make use of empty-region graphs would be able to select the shape that best suits the problem at hand.
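As a concrete example of the circle-based empty-region graphs the abstract refers to, the Gabriel graph connects two points exactly when the disk having them as diameter contains no other point. A minimal 2D membership test (not the paper's γ-visibility construction, just the classical special case):

```python
# Gabriel-graph edge test: p and q are connected iff the closed
# disk with segment pq as diameter is empty of all other points.

def gabriel_edge(p, q, points, eps=1e-12):
    """True if the diameter disk of (p, q) contains no other point."""
    cx, cy = (p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0
    r2 = ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) / 4.0
    for s in points:
        if s is p or s is q:
            continue
        if (s[0] - cx) ** 2 + (s[1] - cy) ** 2 < r2 - eps:
            return False
    return True
```

Swapping the diameter disk for a different region shape is precisely the degree of freedom the γ-visibility graph generalizes.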

Full article: On Visibility & Empty-Region Graphs

Real-time performance-driven finger motion synthesis

Publication date: June 2017
Source: Computers & Graphics, Volume 65
Author(s): Christos Mousas, Christos-Nikolaos Anagnostopoulos
This paper presents a method to estimate and synthesize the motion of a character's fingers in real time during the performance capture process. For the motion estimation and synthesis process, different motion datasets are used that contain the full-body motion of a character, including the motion of its fingers. The motion datasets have been pre-processed so that they can be handled efficiently in the real-time estimation process. During the runtime of the application, the system recognizes the actions of the user's hands and assembles the necessary motion of the character's fingers. Using a hierarchical hidden Markov model (HMM), the system learns the phase of the gestures as well as the progress of the motion. To eliminate the search for the most probable motion, prior constraints between segment states were assigned manually. At runtime, using the forward HMM algorithm, the system synthesizes the necessary motion of a character's fingers in real time. The presented methodology is evaluated both numerically (system performance and estimation rate) and perceptually. The results show that, even when a reduced number of finger gestures is used, the synthesized motion can be described as perceptually consistent with the ground-truth motion data.
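The forward algorithm mentioned in the abstract computes the likelihood of an observation sequence under an HMM by propagating per-state probabilities one observation at a time. A textbook sketch (a flat HMM rather than the paper's hierarchical model; variable names are illustrative):

```python
import numpy as np

def forward(obs, pi, A, B):
    """Forward algorithm: P(observation sequence | HMM).
    pi: initial state probabilities, shape (n,)
    A:  state transition matrix, shape (n, n)
    B:  emission probabilities, shape (n, m)
    obs: sequence of observation symbol indices."""
    alpha = pi * B[:, obs[0]]          # initialize with first symbol
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate, then weight by emission
    return float(alpha.sum())
```

Because each step only updates the running `alpha` vector, the likelihood is available incrementally, which is what makes this family of models usable for runtime gesture recognition.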

Full article: Real-time performance-driven finger motion synthesis

Assessment of multi-exposure HDR image deghosting methods

Publication date: April 2017
Source: Computers & Graphics, Volume 63
Author(s): Kanita Karađuzović-Hadžiabdić, Jasminka Hasić Telalović, Rafał K. Mantiuk
To avoid motion artefacts when merging multiple exposures into a high dynamic range (HDR) image, a number of HDR deghosting algorithms have been proposed. However, these algorithms do not work equally well on all types of scenes, and some may even introduce additional artefacts. As the number of proposed deghosting methods is increasing rapidly, there is an immediate need to evaluate and compare them. Even though subjective methods of evaluation provide reliable means of testing, they are often cumbersome and need to be repeated for each newly proposed method or even its slight modification. Because of that, there is a need for objective quality metrics that provide automatic means of evaluating HDR deghosting algorithms. In this work, we explore several computational approaches to the quantitative evaluation of multi-exposure HDR deghosting algorithms and demonstrate their results on five state-of-the-art algorithms. In order to perform a comprehensive evaluation, a new dataset consisting of 36 scenes has been created, where each scene provides a different challenge for a deghosting algorithm. The quality of the HDR images produced by each deghosting method is measured in a subjective experiment and then evaluated using objective metrics. As this paper is an extension of our conference paper, we add one more objective quality metric, UDQM, to the evaluation. Furthermore, the analysis of the objective and subjective experiments is performed and explained more extensively in this work. By testing the correlation between objective metrics and subjective scores, we find that, of the tested metrics, HDR-VDP-2 is the most reliable for evaluating HDR deghosting algorithms. The results also show that for most of the tested scenes, Sen et al.'s deghosting method outperforms the other evaluated methods.
The observations based on the obtained results can be used as a vital guide in the development of new HDR deghosting algorithms, which would be robust to a variety of scenes and could produce high quality results.
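The validation step described above, correlating objective metric outputs with subjective scores, is commonly done with a rank correlation such as Spearman's ρ, which only assumes a monotonic relationship. A minimal no-ties implementation (the paper does not specify which correlation coefficient was used, so this is illustrative):

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation for tie-free samples:
    Pearson correlation computed on the ranks of x and y."""
    rx = np.argsort(np.argsort(x)).astype(float)  # ranks of x
    ry = np.argsort(np.argsort(y)).astype(float)  # ranks of y
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))
```

A metric whose scores rank the deghosting results in the same order as the subjective experiment yields ρ close to 1, which is the sense in which a metric like HDR-VDP-2 can be called "most reliable".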

Full article: Assessment of multi-exposure HDR image deghosting methods

Interactive screenspace fragment rendering for direct illumination from area lights using gradient aware subdivision and radial basis function interpolation

Publication date: May 2017
Source: Computers & Graphics, Volume 64
Author(s): Ming Di Koa, Henry Johan, Alexei Sourin
Interactive rendering of direct illumination from area lights in virtual worlds has always proven challenging. In this paper, we propose a deferred multi-resolution approach for rendering direct illumination from area lights. Our approach subdivides screen space into multi-resolution 2D fragments, in which higher-resolution fragments are generated and placed in regions with geometric, depth, and visibility-to-light discontinuities. Compared to former techniques that use inter-fragment binary visibility tests, our intra-fragment technique is able to detect shadows more efficiently while using fewer fragments. We also make use of gradient information across our binary visibility tests to further allocate higher-resolution fragments to regions with larger visibility discontinuities. Our technique utilizes the stream-compaction feature of the transform feedback shader (TFS) in the graphics shading pipeline to filter fragments into multiple streams for soft shadow refinement. The bindless texture extension of the graphics pipeline allows us to easily process all these generated fragments in an unsorted manner. A single-pass screen-space irradiance upsampling scheme, which uses radial basis functions (RBFs) with an adaptive variance scaling factor, is proposed for interpolating the generated fragments. This reduces artifacts caused by large fragments and also requires fewer fragments to produce reasonable results. Our technique does not require precomputation and is able to render diffuse materials at interactive rates.
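The RBF interpolation the abstract relies on reconstructs a smooth irradiance field from scattered fragment samples. A minimal sketch with a plain Gaussian kernel (a fixed `sigma` rather than the paper's adaptive variance scaling, which is the part their scheme adds):

```python
import numpy as np

def rbf_interpolate(centers, values, queries, sigma=1.0):
    """Gaussian RBF interpolation of scattered samples.
    centers: (n, d) sample positions, values: (n,) sampled irradiance,
    queries: (m, d) positions at which to evaluate the interpolant."""
    def phi(a, b):
        # Pairwise Gaussian kernel matrix between point sets a and b.
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    # Solve for weights so the interpolant reproduces the samples.
    w = np.linalg.solve(phi(centers, centers), values)
    return phi(queries, centers) @ w
```

Scaling `sigma` per sample, as the paper proposes, lets large fragments blend over wider footprints while small fragments stay sharp, which is what suppresses the blocky artifacts a fixed kernel width produces.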

Full article: Interactive screenspace fragment rendering for direct illumination from area lights using gradient aware subdivision and radial basis function interpolation