Welcome to the IKCEST Journal
IEEE Transactions on Visualization and Computer Graphics

Archived papers: 810
Towards Natural Language Interfaces for Data Visualization: A Survey
Leixian Shen, Enya Shen, Yuyu Luo, Xiaocong Yang, Xuming Hu, Xiongshuai Zhang, Zhiwei Tai, Jianmin Wang
Keywords: Data visualization; Visualization; Natural language processing; Task analysis; Human computer interaction; Software; Data mining; Data analysis; Interactive systems; Natural language interfaces; Academic research; Advanced natural language processing technologies; Classic information visualization pipeline; Commercial software; Complementary input modality; Data transformation; Direct manipulation; Engaging user experience; Towards Natural Language Interfaces; V-NLI community; V-NLI layer; V-NLI systems; Visual analytics; Visual mapping; Visualization tools; Visualization-oriented Natural Language Interfaces; Survey
Abstract: Utilizing Visualization-oriented Natural Language Interfaces (V-NLI) as a complementary input modality to direct manipulation for visual analytics can provide an engaging user experience. It enables users to focus on their tasks rather than having to worry about how to operate visualization tools on the interface. In the past two decades, leveraging advanced natural language processing technologies, numerous V-NLI systems have been developed in academic research and commercial software, especially in recent years. In this article, we conduct a comprehensive review of the existing V-NLIs. To classify each article, we develop categorical dimensions based on a classic information visualization pipeline extended with a V-NLI layer, comprising seven stages: query interpretation, data transformation, visual mapping, view transformation, human interaction, dialogue management, and presentation. Finally, we shed light on several promising directions for future work in the V-NLI community.
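As an illustration of the survey's extended pipeline, the sketch below wires the seven stages named in the abstract into an ordered chain of placeholder functions. Only the stage names come from the abstract; the function bodies and the state dictionary are hypothetical scaffolding, not the survey's code.

```python
# Minimal sketch (not from the paper): the seven V-NLI pipeline stages from
# the abstract, modeled as an ordered chain of placeholder stage functions.
from typing import Any, Callable, Dict, List

STAGES: List[str] = [
    "query_interpretation", "data_transformation", "visual_mapping",
    "view_transformation", "human_interaction", "dialogue_management",
    "presentation",
]

def make_placeholder(stage: str) -> Callable[[Dict[str, Any]], Dict[str, Any]]:
    """Return a stage function that just records that the stage ran."""
    def stage_fn(state: Dict[str, Any]) -> Dict[str, Any]:
        state.setdefault("trace", []).append(stage)
        return state
    return stage_fn

def run(query: str) -> Dict[str, Any]:
    state: Dict[str, Any] = {"query": query}
    for stage_fn in (make_placeholder(s) for s in STAGES):
        state = stage_fn(state)
    return state

print(run("show average price by month")["trace"])
```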
Aesthetics++: Refining Graphic Designs by Exploring Design Principles and Human Preference
Wenyuan Kong, Zhaoyun Jiang, Shizhao Sun, Zhuoning Guo, Weiwei Cui, Ting Liu, Jianguang Lou, Dongmei Zhang
Keywords: Visualization; Statistical analysis; Refining; Prototypes; Color; Data mining; Interactive systems; Learning (artificial intelligence); Trial-and-error process; Aesthetically pleasing; Aesthetics; Candidate designs; Data-driven candidate evaluation stage; Design principle-guided candidate generation stage; Generated candidates; Good design knowledge; Graphic designs; Humans; Improved aesthetic quality; Leveraging design principles; Refined design; Refined version; Visual attributes; Automatic refinement suggestion; Design principles; Data-driven approach; Aesthetic quality
Abstract: During the creation of graphic designs, individuals inevitably spend a lot of time and effort adjusting the visual attributes (e.g., positions, colors, and fonts) of elements to make them more aesthetically pleasing. It is a trial-and-error process that requires repetitive edits and relies on good design knowledge. In this work, we seek to alleviate this difficulty by automatically suggesting aesthetic improvements, i.e., taking an existing design as the input and generating a refined version with improved aesthetic quality as the output. This goal presents two challenges: proposing a refined design based on the user-given one, and assessing whether the new design is aesthetically better. To cope with these challenges, we propose a design principle-guided candidate generation stage and a data-driven candidate evaluation stage. In the candidate generation stage, we generate candidate designs by leveraging design principles as guidance to make changes around the existing design. In the candidate evaluation stage, we learn a ranking model on a dataset that reflects humans’ aesthetic preferences, and use it to choose the most aesthetically pleasing of the generated candidates. We implement a prototype system on presentation slides and demonstrate the effectiveness of our approach through quantitative analysis, sample results, and user studies.
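The abstract's two-stage generate-and-rank idea can be sketched in a few lines. The snippet below is a toy illustration, not the authors' system: the single design principle (nudging x-positions toward a layout grid) and the scoring function are stand-ins for the paper's design principles and its learned human-preference ranking model.

```python
# Toy generate-and-rank sketch of the two-stage approach in the abstract.
import random

def generate_candidates(design, n=20, grid=8):
    """Stage 1: principle-guided candidates near the existing design."""
    candidates = []
    for _ in range(n):
        cand = dict(design)
        # Example principle: move toward grid alignment, with small jitter.
        snapped = round(design["x"] / grid) * grid
        cand["x"] = snapped + random.randint(-2, 2)
        candidates.append(cand)
    return candidates

def aesthetic_score(design, grid=8):
    """Stage 2 stand-in: a real system would use a ranking model trained on
    human preference data; here, grid alignment is rewarded."""
    return -abs(design["x"] % grid)

original = {"x": 37, "color": "#336699"}
best = max(generate_candidates(original), key=aesthetic_score)
print(best)
```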
Impulse Fluid Simulation
Fan Feng, Jinyuan Liu, Shiying Xiong, Shuqi Yang, Yaorui Zhang, Bo Zhu
Keywords: Mathematical models; Numerical models; Computational modeling; Animation; Surface tension; Computer graphics; Harmonic analysis; Boundary layers; Computational fluid dynamics; Flow simulation; Navier-Stokes equations; Vortices; Auxiliary variable; Cartesian grid; Fluid impulse; Fluid simulation tasks including smoke; Free-surface fluid; Harmonic boundary treatment; Impulse fluid simulation; Impulse gauge transformation; Impulse solver; Impulse stretching; Impulse-form equations; Impulse-velocity formulation; Incompressible flow velocities; Incompressible Navier-Stokes solver; Rich vortical flow details; Simulation algorithm; Surface tension effects; Surface-tension flow; Fluid simulation; Vortical structures; Gauge methods; Physics-based animation
Abstract: We propose a new incompressible Navier–Stokes solver based on the impulse gauge transformation. The mathematical model of our approach draws from the impulse–velocity formulation of the Navier–Stokes equations, which evolves the fluid impulse as an auxiliary variable of the system that can be projected to obtain the incompressible flow velocities at the end of each time step. We solve the impulse-form equations numerically on a Cartesian grid. At the heart of our simulation algorithm are a novel model to treat the impulse stretching and a harmonic boundary treatment to incorporate surface tension effects accurately. We also build an impulse PIC/FLIP solver to support free-surface fluid simulation. Our impulse solver can naturally produce rich vortical flow details without artificial enhancements. We showcase this feature by using our solver to facilitate a wide range of fluid simulation tasks including smoke, liquid, and surface-tension flow. In addition, we discuss a convenient mechanism in our framework to control the scale and strength of the fluid's turbulent effects.
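For reference, a standard impulse-velocity (gauge) formulation of inviscid incompressible flow consistent with the abstract's description is written out below; viscosity and surface tension are omitted, and the paper's specific stretching model and harmonic boundary treatment are not reproduced here.

```latex
% Impulse as a gauge variable, its transport with the stretching term the
% abstract mentions, and the projection recovering a divergence-free
% velocity at the end of each time step.
\begin{aligned}
  \mathbf{m} &= \mathbf{u} + \nabla\varphi \\
  \frac{\partial \mathbf{m}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{m}
    &= -(\nabla\mathbf{u})^{\top}\mathbf{m} \\
  \nabla^{2}\varphi = \nabla\cdot\mathbf{m}, \qquad
  \mathbf{u} &= \mathbf{m} - \nabla\varphi, \qquad
  \nabla\cdot\mathbf{u} = 0
\end{aligned}
```

Advecting the impulse and projecting only at the end of each step is what allows such a solver to retain the vortical detail the abstract highlights.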
Perceptual Assessment of Image and Depth Quality of Dynamically Depth-Compressed Scene for Automultiscopic 3D Display
Yamato Miyashita, Yasuhito Sawahata, Kazuteru Komine
Keywords: Three-dimensional displays; Image reconstruction; Visualization; Stereo image processing; Geometry; Tracking; Real-time systems; A3D display simulator; Automultiscopic 3D; Deeper depth; Depth enhancing effect; Depth quality; Depth reconstruction capabilities; Dynamic depth compression; Dynamically depth-compressed scene; Original perceptual quality; Perceptual quality; Perceptual quality degradation; Physical depth; Scene depth; Scene geometry; Substantially deep scenes; Compression technologies; Depth cues; Perception and psychophysics; Volumetric
Abstract: This article discusses the depth range that automultiscopic 3D (A3D) displays should reproduce to ensure adequate perceptual quality of substantially deep scenes. These displays usually need sufficient depth reconstruction capabilities covering the whole scene depth, but the inherent hardware restrictions of these displays often make this difficult, particularly for showing deep scenes. Previous studies have addressed this limitation by introducing depth compression, which contracts the scene depth into a smaller depth range by modifying the scene geometry, assuming that the scenes were represented as CG data. The previous results showed that reconstructing a physical depth of only 1 m suffices to show scenes with much deeper depth without large perceptual quality degradation. However, reconstructing a depth of 1 m is still challenging for actual A3D displays. In this study, focusing on a personal viewing situation, we introduce a dynamic depth compression that combines viewpoint tracking with the previous approach and examine the extent to which scene depths can be compressed while keeping the original perceptual quality. Taking into account the viewer's viewpoint movements, which were considered a cause of unnaturalness in the previous approach, we performed an experiment with an A3D display simulator and found that a depth of just 10 cm was sufficient for showing deep scenes without inducing a feeling of unnaturalness. Next, we investigated whether the simulation results were valid on a real A3D display and found that the dynamic approach induced better perceptual quality than the static one even on the real display, and that it had a depth-enhancing effect without any hardware updates. These results suggest that providing a physical depth of 10 cm on personalized A3D displays is enough for showing much deeper 3D scenes with appealing subjective quality.
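As a concrete (and deliberately simplistic) reading of depth compression, the sketch below linearly contracts a scene's depth range into a small physical budget such as the 10 cm the study found sufficient. The paper's actual compression modifies scene geometry and need not be this linear remap.

```python
# Generic linear depth contraction (an illustration, not the paper's method):
# remap scene depth from [z_near, z_far] into a small physical depth budget
# in front of the viewer, preserving depth order.
def compress_depth(z, z_near, z_far, depth_budget=0.10):
    """Map a point's depth z (metres from the viewer) into
    [z_near, z_near + depth_budget]."""
    t = (z - z_near) / (z_far - z_near)     # normalized depth in [0, 1]
    return z_near + t * depth_budget

# A scene spanning 1-50 m is squeezed into a 10 cm physical depth range.
for z in (1.0, 10.0, 50.0):
    print(z, "->", round(compress_depth(z, 1.0, 50.0), 4))
```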
Reinforcement Learning for Load-Balanced Parallel Particle Tracing
Jiayi Xu, Hanqi Guo, Han-Wei Shen, Mukund Raj, Skylar W. Wurster, Tom Peterka
Keywords: Costs; Heuristic algorithms; Estimation; Load modeling; Data models; Computational modeling; Adaptation models; Distributed-memory systems; Learning (artificial intelligence); Optimization; Reinforcement learning; Resource allocation; Algorithm adapts; Communication cost model; Data blocks; Donation actions; Donation strategy; High-order workload estimation model; High-workload processes; Load balance; Load-balanced parallel particle tracing; Low-workload processes; Minimized communication costs; Online reinforcement learning paradigm; Parallel efficiency; Parallel particle tracing performance; Particle data exchange costs; Program execution time; RL agents; RL-based work donation algorithm; Weather simulation data; Distributed and parallel particle tracing; Dynamic load balancing
Abstract: We explore an online reinforcement learning (RL) paradigm to dynamically optimize parallel particle tracing performance in distributed-memory systems. Our method combines three novel components: (1) a work donation algorithm, (2) a high-order workload estimation model, and (3) a communication cost model. First, we design an RL-based work donation algorithm. Our algorithm monitors the workloads of processes and creates RL agents to donate data blocks and particles from high-workload processes to low-workload processes to minimize program execution time. The agents learn the donation strategy on the fly based on reward and cost functions designed to consider processes' workload changes and the data transfer costs of donation actions. Second, we propose a workload estimation model that helps RL agents estimate the workload distribution of processes in future computations. Third, we design a communication cost model that considers both block and particle data exchange costs, helping RL agents make effective decisions with minimized communication costs. We demonstrate that our algorithm adapts to different flow behaviors in large-scale fluid dynamics, ocean, and weather simulation data. Our algorithm improves parallel particle tracing performance in terms of parallel efficiency, load balance, and costs of I/O and communication in evaluations with up to 16,384 processors.
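A minimal sketch of the work-donation idea follows, under toy assumptions: an epsilon-greedy agent on the most loaded process picks a donation target, and the reward is the makespan reduction minus a flat stand-in for the paper's communication cost model. None of this is the authors' implementation; the workload values, learning rate, and transfer cost are invented.

```python
# Toy RL work-donation step (illustration only, not the paper's algorithm).
import random

def donation_step(workloads, q, alpha=0.5, eps=0.2, transfer_cost=0.3):
    """The most loaded process donates half of its excess to a recipient
    chosen epsilon-greedily from learned values q[(donor, target)]."""
    donor = max(range(len(workloads)), key=lambda p: workloads[p])
    others = [p for p in range(len(workloads)) if p != donor]
    if random.random() < eps:
        target = random.choice(others)          # explore
    else:                                       # exploit learned values
        target = max(others, key=lambda p: q.get((donor, p), 0.0))
    makespan_before = max(workloads)
    amount = (workloads[donor] - workloads[target]) / 2.0
    workloads[donor] -= amount
    workloads[target] += amount
    # Reward: makespan reduction minus a (stand-in) communication cost.
    reward = (makespan_before - max(workloads)) - transfer_cost
    old = q.get((donor, target), 0.0)
    q[(donor, target)] = old + alpha * (reward - old)
    return target, reward

workloads, q = [9.0, 2.0, 1.0, 4.0], {}
for _ in range(10):
    donation_step(workloads, q)
print([round(w, 2) for w in workloads])
```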
Adaptive Joint Optimization for 3D Reconstruction With Differentiable Rendering
Jingbo Zhang, Ziyu Wan, Jing Liao
Keywords: Cameras; Optimization; Geometry; Three-dimensional displays; Solid modeling; Image reconstruction; Rendering (computer graphics); Image colour analysis; Image texture; 3D reconstruction; Adaptive interleaving strategy; Adaptive joint optimization; Camera drifting; Camera pose; Differentiable rendering; Fine-scale geometry; High-fidelity texture; Mesh distortion; Optimization stability; RGB-D sensors; Texture ghosting; Texture optimization; Geometry refinement
Abstract: Due to inevitable noise introduced during scanning and quantization, 3D reconstruction via RGB-D sensors suffers from errors in both geometry and texture, leading to artifacts such as camera drifting, mesh distortion, texture ghosting, and blurriness. Given an imperfect reconstructed 3D model, most previous methods have focused on refining either the geometry, the texture, or the camera pose. Consequently, previous joint optimization methods have used different optimization schemes and objectives for each component, forming a complicated system. In this paper, we propose a novel optimization approach based on differentiable rendering, which integrates the optimization of camera pose, geometry, and texture into a unified framework by enforcing consistency between the rendered results and the corresponding RGB-D inputs. Based on the unified framework, we introduce a joint optimization approach to fully exploit the inter-relationships among the three objective components, and describe an adaptive interleaving strategy to improve optimization stability and efficiency. Using differentiable rendering, an image-level adversarial loss is applied to further improve the 3D model, making it more photorealistic. Experiments on synthetic and real data using quantitative and qualitative evaluation demonstrate the superiority of our approach in recovering both fine-scale geometry and high-fidelity texture.
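The unified framework with interleaved parameter groups can be caricatured in a few lines of PyTorch. In the sketch below, the render function is a made-up smooth stand-in for a differentiable renderer and the target vector stands in for an RGB-D input; only the alternating-optimization structure reflects the abstract.

```python
# Toy interleaved optimization of pose, geometry, and texture against a
# rendering-consistency loss (illustration only, not the paper's system).
import torch

def render(pose, geometry, texture):
    # Hypothetical differentiable renderer: any smooth map from the three
    # parameter groups to "pixels" serves to illustrate the optimization.
    return texture * torch.sin(geometry + pose)

target = torch.tensor([0.9, 0.1, 0.5])            # stand-in for RGB-D input
pose = torch.zeros(3, requires_grad=True)
geometry = torch.zeros(3, requires_grad=True)
texture = torch.ones(3, requires_grad=True)
opts = {
    "pose": torch.optim.Adam([pose], lr=0.05),
    "geometry": torch.optim.Adam([geometry], lr=0.05),
    "texture": torch.optim.Adam([texture], lr=0.05),
}

for step in range(300):
    name = ("pose", "geometry", "texture")[step % 3]  # interleave the groups
    opt = opts[name]
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(render(pose, geometry, texture), target)
    loss.backward()
    opt.step()                                    # update one group per step

print(float(loss))
```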
GNNLens: A Visual Analytics Approach for Prediction Error Diagnosis of Graph Neural Networks
Zhihua Jin, Yong Wang, Qianwen Wang, Yao Ming, Tengfei Ma, Huamin Qu
Keywords: Analytical models; Deep learning; Predictive models; Visual analytics; Data models; Convolutional neural networks; Task analysis; Data analysis; Data visualization; Graph neural networks; Graph theory; Multilayer perceptrons; Neural nets; Recurrent neural networks; Analyzing GNNs; CNNs; Deep learning techniques; Deep neural networks; Error patterns; Feature Matrix View; Graph analysis tasks; Graph structure; Graph View; Model developers; Parallel Sets View; Possible errors; Prediction error diagnosis; RNNs; Visual analytics approach; Visual analytics studies; Error diagnosis; Visualization
Abstract: Graph Neural Networks (GNNs) aim to extend deep learning techniques to graph data and have achieved significant progress in graph analysis tasks (e.g., node classification) in recent years. However, like other deep neural networks such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), GNNs behave like black boxes, with their details hidden from model developers and users. It is therefore difficult to diagnose possible errors of GNNs. Although many visual analytics studies have been done on CNNs and RNNs, little research has addressed the challenges for GNNs. This paper fills the research gap with an interactive visual analysis tool, GNNLens, to assist model developers and users in understanding and analyzing GNNs. Specifically, the Parallel Sets View and Projection View enable users to quickly identify and validate error patterns in the set of wrong predictions, while the Graph View and Feature Matrix View offer a detailed analysis of individual nodes to assist users in forming hypotheses about the error patterns. Since GNNs jointly model the graph structure and the node features, we reveal the relative influences of the two types of information by comparing the predictions of three models: GNN, Multi-Layer Perceptron (MLP), and GNN Without Using Features (GNNWUF). Two case studies and interviews with domain experts demonstrate the effectiveness of GNNLens in facilitating the understanding of GNN models and their errors.
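The three-model comparison the abstract describes reduces, at its core, to set logic over predictions. The snippet below illustrates this with invented prediction arrays: for each node the GNN gets wrong, agreement with the feature-only MLP or the structure-only GNNWUF hints at which information source matters.

```python
# Sketch of the three-model comparison idea: attribute a GNN's errors to
# structure or features by checking the two ablated models. All arrays here
# are made up for illustration.
import numpy as np

y_true = np.array([0, 1, 1, 0, 2, 2])
preds = {
    "GNN":    np.array([0, 1, 0, 0, 2, 1]),   # structure + features
    "MLP":    np.array([0, 0, 1, 0, 2, 1]),   # features only
    "GNNWUF": np.array([0, 1, 0, 1, 2, 2]),   # structure only
}

correct = {name: p == y_true for name, p in preds.items()}
for node in np.flatnonzero(~correct["GNN"]):
    hint = ("features help here" if correct["MLP"][node] else
            "structure helps here" if correct["GNNWUF"][node] else
            "both sources fail")
    print(f"node {node}: GNN wrong; {hint}")
```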
Visual Reasoning for Uncertainty in Spatio-Temporal Events of Historical Figures
Wei Zhang, Siwei Tan, Siming Chen, Linghao Meng, Tianye Zhang, Rongchen Zhu, Wei Chen
Keywords: Uncertainty; Visualization; Cognition; Data visualization; Biographies; Task analysis; Data mining; History; Information retrieval; Semantic networks; China Biographical Database Project; Digitized humanity information; Historical database; Historical figures; Historical phenomena; Missing data; Spatio-temporal events; Spatio-temporal information; Uncertain events; Uncertainty visualization; Visual reasoning system; Visual reasoning
Abstract: The development of digitized humanity information provides a new perspective on data-oriented studies of history. Many previous studies have ignored uncertainty in the exploration of historical figures and events, which has limited researchers' ability to capture the complex processes associated with historical phenomena. We propose a visual reasoning system to support reasoning about the uncertainty associated with spatio-temporal events of historical figures, based on data from the China Biographical Database Project. We build a knowledge graph of entities extracted from a historical database to capture uncertainty generated by missing data and errors. The proposed system uses a chronological overview, a map view, and an interpersonal relation matrix to describe and analyze heterogeneous information about events. The system also includes uncertainty visualization to identify uncertain events with missing or imprecise spatio-temporal information. Results from case studies and expert evaluations suggest that the visual reasoning system is able to quantify and reduce uncertainty generated by the data.
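One simple way to operationalize the abstract's notion of uncertainty from missing or imprecise spatio-temporal information is a per-event score, sketched below. The field names, weights, and sample records are invented for illustration and are not the CBDB schema.

```python
# Illustrative per-event uncertainty score from missing or imprecise
# spatio-temporal fields (invented fields and weights, not the paper's model).
def event_uncertainty(event):
    """Higher score = more uncertain. Missing fields count fully;
    fields flagged as imprecise count half."""
    score = 0.0
    for field in ("year", "place"):
        if event.get(field) is None:
            score += 1.0
        elif event.get(field + "_imprecise"):
            score += 0.5
    return score

events = [
    {"figure": "A", "year": 1072, "place": "Kaifeng"},
    {"figure": "B", "year": None, "place": "Hangzhou"},
    {"figure": "C", "year": 1100, "year_imprecise": True, "place": None},
]
for e in sorted(events, key=event_uncertainty, reverse=True):
    print(e["figure"], event_uncertainty(e))
```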
StrategyAtlas: Strategy Analysis for Machine Learning Interpretability
Dennis Collaris, Jarke J. van Wijk
Keywords: Data models; Analytical models; Machine learning; Predictive models; Computational modeling; Insurance; Data visualization; Data analysis; Learning (artificial intelligence); Automatic insurance acceptance; Complex ML model; Complex model; Data instances; Global behavior; High-risk environments; Instance-level explanations; Machine learning interpretability; Production model; Professional data scientists; Reference model; Strategy clusters; Visual analytics; Explainable AI
Abstract: Businesses in high-risk environments have been reluctant to adopt modern machine learning approaches due to their complex and uninterpretable nature. Most current solutions provide local, instance-level explanations, but this is insufficient for understanding the model as a whole. In this work, we show that strategy clusters (i.e., groups of data instances that are treated distinctly by the model) can be used to understand the global behavior of a complex ML model. To support effective exploration and understanding of these clusters, we introduce StrategyAtlas, a system designed to analyze and explain model strategies. Furthermore, it supports multiple ways to utilize these strategies for simplifying and improving the reference model. In collaboration with a large insurance company, we present a use case in automatic insurance acceptance and show how professional data scientists were enabled to understand a complex model and improve the production model based on these insights.
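A plausible (hypothetical) realization of strategy clusters is to cluster per-instance explanation vectors, so that instances the model treats alike land together. The sketch below uses random stand-in attribution vectors and scikit-learn's KMeans; the paper's actual clustering pipeline may differ.

```python
# Toy "strategy clusters": group instances by per-instance feature
# contributions so that similarly treated instances fall in one cluster.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# 200 instances x 5 features: stand-in contribution vectors (in a real
# system these would come from an attribution method such as SHAP).
contributions = np.vstack([
    rng.normal(loc=+1.0, size=(100, 5)),   # one model strategy
    rng.normal(loc=-1.0, size=(100, 5)),   # another model strategy
])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(contributions)
print(np.bincount(labels))                 # sizes of the two strategy clusters
```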
Roslingifier: Semi-Automated Storytelling for Animated Scatterplots
Minjeong Shin, Joohee Kim, Yunha Han, Lexing Xie, Mitchell Whitelaw, Bum Chul Kwon, Sungahn Ko, Niklas Elmqvist
Keywords: Data visualization; Visualization; Annotations; Visual effects; Streaming media; Organizations; Natural languages; Computer animation; Interactive systems; Animated scatterplots; Animation; Data presentation; Data-driven storytelling method; Demographic data; Engaging data stories; In-person presenter; Public health; Quality presentations; Roslingifier method; Semi-automated storytelling; Spellbinding public speaker; Storytelling technique; Visual effect; Data-driven storytelling; Narrative visualization; Hans Rosling; Gapminder; Trendalyzer
Abstract: We present Roslingifier, a data-driven storytelling method for animated scatterplots. Like its namesake, Hans Rosling (1948–2017), a professor of public health and a spellbinding public speaker, Roslingifier turns a sequence of entities changing over time, such as countries and continents with their demographic data, into an engaging narrative telling the story of the data. This data-driven storytelling method with an in-person presenter is a new genre of storytelling technique and has never been studied before. In this article, we aim to define a design space for this new genre, data presentation, and provide a semi-automated authoring tool to help presenters create quality presentations. From an in-depth analysis of video clips of presentations using interactive visualizations, we derive three specific techniques to achieve this: natural language narratives, visual effects that highlight events, and temporal branching that changes the playback time of the animation. Our implementation of the Roslingifier method is capable of identifying and clustering significant movements, automatically generating visual highlighting and a narrative for playback, and letting the user customize the result. Two user studies show that Roslingifier allows users to effectively create engaging data stories and that its features help both presenters and viewers find diverse insights.
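Of the capabilities listed above, identifying significant movements is the most mechanical; the sketch below shows one naive version that thresholds per-step displacement of an entity in an animated scatterplot. The trajectory and threshold are invented for the example and are not the paper's detector.

```python
# Naive "significant movement" detection for one animated-scatterplot entity:
# flag time steps whose displacement exceeds a threshold.
import numpy as np

def significant_moves(xy, threshold=1.0):
    """xy: (T, 2) positions of one entity over T time steps.
    Returns the time steps whose displacement exceeds the threshold."""
    steps = np.linalg.norm(np.diff(xy, axis=0), axis=1)
    return np.flatnonzero(steps > threshold) + 1

trajectory = np.array([[0, 0], [0.1, 0.2], [2.5, 1.8], [2.6, 1.9], [5.0, 4.0]])
print(significant_moves(trajectory))   # -> [2 4]
```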