Last week, I wrote about our new plan to write a visualization paper based on the IQP team's ranking application. At conferences like VAST, there are several different types of papers one can submit, and our first challenge was deciding where to pigeonhole our application. After talking with Professor Harrison, a notable vis expert in our CS department, we determined that the novelty in our project is the user interface. As such, our paper would fall into the category of a systems paper or a design study. According to "Process and Pitfalls in Writing Information Visualization Research Papers", a sort-of guidebook to the world of writing vis, systems papers focus on architectural choices and the design of abstractions. Design study papers, in contrast, demonstrate a new visual design that solves a problem, often using existing algorithms or techniques. If we consider ours a systems paper, we would probably focus on explaining the pipeline (from choosing a dataset, to building a ranking from partial information, to explaining the learned ranking in a table). If we consider it a design study, however, we would talk about the choices we made for the drag-and-drop Build stage and the results of the user study.
Professor Harrison suggested modifying the existing application to highlight some of the active learning mechanics it is supposed to facilitate. These modifications should make the paper more enticing to reviewers.
I made a couple of mockups of the changes we would like to make. Here are a couple of screenshots of the current application:
[Screenshot: Original Build view, with the list view shown.]
[Screenshot: Original Explore view (page 2). This bar chart shows the attribute weights learned by the ranking.]
In the new mockup, the bar chart from Explore serves a new purpose. Instead of showing the weights of the learned ranking, it is now used in Build to show the attribute values of any one object. This way, the user can hover over an object and see what information is known about it. The hope is that this will help the user understand what the program is using to build the ranking, without giving them enough of the raw data to bias their intuition.
[Screenshot: New Build view. Nothing is hovered, so the bar chart is empty.]
[Screenshot: Same view as the previous one; imagine mousing over one of the states.]
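To make the hover interaction concrete, here is a minimal sketch of the underlying logic, assuming each object stores a simple map of attribute values; the names here (RankObject, barHeights, render) are hypothetical and not taken from our codebase.

```typescript
// Hypothetical shape of an object in the Build view.
interface RankObject {
  name: string;
  attributes: Record<string, number>; // raw attribute values
}

// Normalize one object's attribute values to [0, 1] bar heights,
// using the min/max of each attribute across the whole dataset so
// that bars are comparable between hovered objects.
function barHeights(
  hovered: RankObject,
  dataset: RankObject[]
): Record<string, number> {
  const heights: Record<string, number> = {};
  for (const attr of Object.keys(hovered.attributes)) {
    const values = dataset.map((o) => o.attributes[attr]);
    const min = Math.min(...values);
    const max = Math.max(...values);
    heights[attr] =
      max === min ? 0 : (hovered.attributes[attr] - min) / (max - min);
  }
  return heights;
}

// Wiring it up: render on mouseover, clear on mouseout (an empty
// chart matches the "not hovering" state in the mockup).
// element.addEventListener("mouseover", () => render(barHeights(obj, data)));
// element.addEventListener("mouseout", () => render({}));
```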
Another new feature would be a progress bar that shows the user how confident the model is that the ranking reflects what the user wants. Confidence will be calculated using some existing active learning algorithm, and the progress bar would fill toward 100% as confidence increases.
[Screenshot: Progress bar. With this new feature, the ranking is recalculated every time the user adds a new object.]
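Since we haven't pinned down which confidence measure we'll use, here is one illustrative option, a sketch only: treat ranking stability as a proxy for confidence by comparing the ranking before and after the user's latest addition with Kendall's tau, then rescale it to a 0-100% progress value. The function names are hypothetical.

```typescript
// Kendall's tau between two rankings of the same items, each given as
// an ordered array of item ids (best first). Assumes both arrays
// contain exactly the same ids.
function kendallTau(prev: string[], curr: string[]): number {
  const pos = new Map<string, number>();
  curr.forEach((id, i) => pos.set(id, i));
  let concordant = 0;
  let discordant = 0;
  for (let i = 0; i < prev.length; i++) {
    for (let j = i + 1; j < prev.length; j++) {
      // prev ranks item i above item j; check whether curr agrees.
      if (pos.get(prev[i])! < pos.get(prev[j])!) concordant++;
      else discordant++;
    }
  }
  const pairs = (prev.length * (prev.length - 1)) / 2;
  return pairs === 0 ? 1 : (concordant - discordant) / pairs;
}

// Map tau in [-1, 1] onto the 0-100% progress bar: a ranking that
// stops changing as objects are added reads as high confidence.
function progressPercent(prev: string[], curr: string[]): number {
  return Math.round(((kendallTau(prev, curr) + 1) / 2) * 100);
}
```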
The last change would be to modify the table in Explore to show the user the relative weights of the attributes and how much each attribute actually contributes to the model. This will closely mimic the view created by the Podium group.
[Screenshot: Explore view with a redesigned table. The bar charts for each attribute are colored to match the attributes in the Build view bar chart for continuity.]
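For the redesigned table, assuming the learned ranking is a linear model (a weighted sum of normalized attribute values; the actual model may differ), one way to compute each row's per-attribute bar lengths is each attribute's share of the object's total score. Again, the names here are hypothetical.

```typescript
// Per-attribute contribution of one object under a linear ranking
// model: score = sum over attributes of (weight * normalized value).
// Assumes non-negative weights and values normalized to [0, 1].
function contributions(
  weights: Record<string, number>,
  normalized: Record<string, number>
): Record<string, number> {
  const parts: Record<string, number> = {};
  let total = 0;
  for (const attr of Object.keys(weights)) {
    parts[attr] = weights[attr] * (normalized[attr] ?? 0);
    total += parts[attr];
  }
  // Scale so each row's bars sum to 1, as in the Podium-style view.
  for (const attr of Object.keys(parts)) {
    parts[attr] = total === 0 ? 0 : parts[attr] / total;
  }
  return parts;
}
```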
These changes aren't very revolutionary, but we only have a couple of weeks to implement them before submission. Caitlin and I are hoping to recruit members of the IQP team who are masters of the codebase and can make these changes quickly.