Notes on the Seven Scenarios

Understanding environments and work practices (UWP)

  • with the goal of identifying what features a visualization tool should have

  • The evaluation questions below are quoted from the article

    1. "What is the context of use of visualizations?"

    2. "In which daily activities should the visualization tool be integrated?"

    3. "What types of analyses should the visualization tool support?"

    4. "What are the characteristics of the identified user group and work environments?"

    5. "What data are currently used and what tasks are performed on it?"

    6. "What kind of visualizations are currently in use? How do they help to solve current tasks?"

    7. "What challenges and usage barriers can we see for a visualization tool?"

Evaluating visual data analysis and reasoning (VDAR)

  • main purpose is to determine, through quantifiable measures or subjective insight, how well the visualization tool supports visual data analysis and reasoning

  • focuses on how the tool supports the analysis process as a whole rather than on how a specific feature performs

  • The evaluation questions below are quoted from the article

    1. "Data exploration? How does it support processes aimed at seeking information, searching, filtering, and reading and extracting information?"

    2. "Knowledge discovery? How does it support the schematization of information or the (re-)analysis of theories?"

    3. "Hypothesis generation? How does it support hypothesis generation and interactive examination?"

    4. "Decision making? How does it support the communication and application of analysis results?"

  • usually examined through field or case studies, which are comprehensive and capture how the tool performs in realistic use (data-collection methods include diaries and think-aloud videos)

Evaluating communication through visualization (CTV)

  • purpose is to identify whether and how visualization can aid communication (mostly with a focus on sharing ideas and on education)

  • The evaluation questions below are quoted from the article

    • "Do people learn better and/or faster using the visualization tool?"

    • "Is the tool helpful in explaining and communicating concepts to third parties?"

    • "How do people interact with visualizations installed in public areas? Are they used and/or useful?"

    • "Can useful information be extracted from a casual information visualization?"

Evaluating collaborative data analysis (CDA)

  • the main purpose is to analyze whether the visualization software supports collaboration efforts

  • The evaluation questions below are quoted from the article

    • "Does the tool support effective and efficient collaborative data analysis?"

    • "Does the tool support group insight?"

    • "Is social exchange around and communication about the data facilitated?"

    • "How is the collaborative visualization system used?"

    • "How are certain system features used during collaborative work? What are patterns of system use?"

    • "What is the process of collaborative analysis? What are users' requirements?"

  • a study analyzed the following collaboration aspects: "explicit communication, consequential communication, group awareness, coordination of actions, group insight, subjective work preferences, and general user reactions to the collaborative environment"

Evaluating user performance (UP)

  • main purpose is to objectively measure the impact of different features on user performance metrics such as task completion time and task accuracy

  • The evaluation questions below are quoted from the article

    • "What are the limits of human visual perception and cognition for specific kinds of visual encoding or interaction techniques?"

    • "How does one visualization or interaction technique compare to another as measured by human performance?"
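
  A minimal sketch of the second question, with hypothetical completion-time and accuracy data for two techniques and a hypothetical `summarize` helper (neither comes from the article):

```python
from statistics import mean, stdev

# Hypothetical per-trial completion times (seconds) and correctness (0/1)
# for two visualization techniques in a controlled user study.
times_a = [12.3, 10.8, 14.1, 11.5, 13.0]
times_b = [9.7, 8.9, 10.4, 9.1, 10.0]
correct_a = [1, 1, 0, 1, 1]
correct_b = [1, 1, 1, 1, 0]

def summarize(times, correct):
    """Return the two standard UP metrics: time to complete and accuracy."""
    return {
        "mean_time": mean(times),
        "sd_time": stdev(times),
        "accuracy": sum(correct) / len(correct),
    }

a = summarize(times_a, correct_a)
b = summarize(times_b, correct_b)
print(f"A: {a['mean_time']:.2f}s (sd {a['sd_time']:.2f}), acc {a['accuracy']:.0%}")
print(f"B: {b['mean_time']:.2f}s (sd {b['sd_time']:.2f}), acc {b['accuracy']:.0%}")
```

  In a real UP study these summaries would feed a significance test across participants; the sketch only shows the raw metrics being computed.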

Evaluating user experience (UE)

  • the main purpose is to identify people's subjective reactions to visualization tools

  • quoted example measures include "perceived effectiveness, perceived efficiency, perceived correctness. Other measures include satisfaction, trust, and features liked/disliked, etc."

  • the results of the evaluation are integral to design feedback

  • The evaluation questions below are quoted from the article

    • "What features are seen as useful?"

    • "What features are missing?"

    • "How can features be reworked to improve the supported work processes?"

    • "Are there limitations of the current system which would hinder its adoption?"

    • "Is the tool understandable and can it be learned?"
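
  Subjective measures like these are commonly collected on Likert scales. A minimal sketch of summarizing such responses (the response data and the question wording are hypothetical, not from the article):

```python
from statistics import mean, median

# Hypothetical 5-point Likert responses (1 = strongly disagree, 5 = strongly agree)
# to a statement such as "The tool is understandable and easy to learn".
responses = [5, 4, 4, 3, 5, 2, 4, 4, 5, 3]

# Share of participants who agree or strongly agree ("top-box" score).
top_box = sum(1 for r in responses if r >= 4) / len(responses)

print(f"median: {median(responses)}, mean: {mean(responses):.1f}")
print(f"agree or strongly agree: {top_box:.0%}")
```

  Reporting the median alongside the top-box share is a common choice because Likert responses are ordinal rather than interval data.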

Evaluating visualization algorithms (VA)

  • the main purpose is to score visualization algorithms using quantitative quality and performance metrics

  • performance metrics focus on computational efficiency, while quality metrics assess the resulting views

  • The evaluation questions below are quoted from the article

    • "Which algorithm shows the patterns of interest better?"

    • "Which algorithm provides a more truthful representation of the underlying data?"

    • "Which algorithm produces the least cluttered view?"

    • "Is the algorithm faster than other state-of-the-art techniques? Under what circumstances?"

    • "How does the algorithm scale to different data sizes and complexities?"

    • "How does the algorithm work in extreme cases?"
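
  The scaling question is typically answered with a micro-benchmark over increasing data sizes. A minimal sketch, using Python's built-in `sorted` as a stand-in for the visualization algorithm under evaluation (the `benchmark` helper is hypothetical):

```python
import random
import time

def benchmark(algorithm, sizes, repeats=3):
    """Measure best-of-`repeats` runtime of `algorithm` for each input size."""
    results = {}
    for n in sizes:
        data = [random.random() for _ in range(n)]
        timings = []
        for _ in range(repeats):
            start = time.perf_counter()
            algorithm(data)
            timings.append(time.perf_counter() - start)
        results[n] = min(timings)  # best-of-N reduces scheduling noise
    return results

# `sorted` stands in for the algorithm being scored.
for n, t in benchmark(sorted, [1_000, 10_000, 100_000]).items():
    print(f"n={n:>7}: {t * 1e3:.2f} ms")
```

  Plotting runtime against size (or against data complexity, e.g. graph density) makes the scaling behavior and the extreme cases visible at a glance.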