dub Site Redesign: Website Prototype

Design Challenge:

The design challenge for this week was to analyze, redesign, and develop a wireframe prototype for a new dub website that includes the following information or content:

  • Announcements: blog or other listing of news and announcements of interest to the community.
  • Directory: listing of faculty, students, affiliates, etc. who are members of the dub community.
  • Calendar: a calendar system for viewing and subscribing to a schedule of dub events (weekly talks, conferences, events, etc.).
  • Seminar: information about the weekly seminar series — schedule, presenters, abstracts, videos, etc.
  • Research: faculty research areas, projects, publications, collaborators, etc.
  • Membership: a members-only section (login required) where dub members can edit their own information on the site (profile, research, etc.).

For those not yet part of the dub community, the site acts as a place to learn more about the people that make up dub, the research projects and publications that come out of this community, and also the weekly seminars.

Those that are already part of dub might be more interested in getting an overview of dub events via a calendar, checking out what the upcoming weekly seminar is going to be about, or adding new projects and publications to their profile for others to access.

Prototype:

I began by studying the existing dub site and some of the research that had already been conducted to understand what was working well and what needed improvement. From this I realized that the dub site has two main audiences (described above): those interested in learning about dub, and those who are already part of the dub community but want to stay updated on dub news and events. It was also clear that the organization of content on the site could be drastically improved.

After defining the main audiences and use cases for the site, I conducted a content audit on the existing site and created a new inventory of the content I wanted to include on the new site.

Once I had an idea of the types of content to include in the new site, I created a mood board using Pinterest to collect inspiration for the design. Then, using good old paper and pen, I sketched out the basic layouts of my main pages. This really helped me to organize my thoughts before jumping into using Axure to create wireframes and an interactive prototype.

The information architecture of the site was important to get right because of the amount of content. In particular, I felt it was important for people to be able to quickly scan and navigate to the information they were most interested in. On the existing site, this was difficult because there wasn't a way to easily filter or search for people, publications, or projects. Different pages had different options for sorting through the content, and related pieces of content weren't linked to one another. To address this inconsistency, I decided to include a secondary navigation menu in the left column of each page.

Additionally, I wanted to make it easy for users to move back and forth between different types of related content, which would make discovering new content on the site easier. For example, in the Research section, when you click into a particular topic such as Health, you can find all publications and projects related to health. And when on a particular project page, such as the Baby Steps project page, you can navigate to any of the collaborators or related publications without having to leave the page. This is not possible on the existing site.
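
To make that kind of cross-linking more concrete, here is a rough sketch of how the underlying content could be modeled so that projects, publications, and people reference one another. The type names and the sample record are purely illustrative assumptions on my part, not the actual data model behind the dub site or my Axure prototype.

```typescript
// Illustrative sketch only: these types and the sample record are hypothetical.

interface Person {
  id: string;
  name: string;
  status: "Faculty" | "Student" | "Affiliate";
}

interface Publication {
  id: string;
  title: string;
  projectId: string; // links the publication back to its project
}

interface Project {
  id: string;
  title: string;
  topics: string[];          // e.g. ["Health"], so a topic page can gather related projects
  collaboratorIds: string[]; // references to Person records
  publicationIds: string[];  // references to Publication records
}

// A project page rendered from a record like this can list its collaborators and
// publications directly, and a topic page such as Health can collect every project
// and publication tagged with that topic.
const exampleProject: Project = {
  id: "baby-steps",
  title: "Baby Steps",
  topics: ["Health"],
  collaboratorIds: ["person-1", "person-2"],
  publicationIds: ["pub-1"],
};
```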

I defined specific goals to address with the design of each page.

For the People section, I wanted to make it easy for users to search and/or filter people by their status – Faculty, Students, or Affiliates. I also wanted to include faces of each member of dub, so that users can scan faces if they can't remember someone's name. For each individual's profile, I wanted to provide a way for users to contact and learn more about the person and their research.
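
A minimal sketch of that search-and-filter behavior might look like the following. It is purely illustrative; the field names and sample data are my own assumptions rather than anything taken from the prototype.

```typescript
// Minimal, hypothetical sketch of the People search/filter behavior.
type Status = "Faculty" | "Student" | "Affiliate";

interface Member {
  name: string;
  status: Status;
}

/** Return directory members matching an optional status filter and a name query. */
function filterPeople(people: Member[], status?: Status, query = ""): Member[] {
  const q = query.trim().toLowerCase();
  return people.filter(
    (p) =>
      (status === undefined || p.status === status) &&
      (q === "" || p.name.toLowerCase().includes(q))
  );
}

// Example: show only students whose names contain "lee".
const directory: Member[] = [
  { name: "Jane Lee", status: "Student" },
  { name: "Sam Park", status: "Faculty" },
];
console.log(filterPeople(directory, "Student", "lee")); // -> [{ name: "Jane Lee", status: "Student" }]
```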

The goal for the Research page was easy navigation by areas of interest, with projects and publications included together rather than separated into different sections.

For Seminars, I wanted to highlight the current week's seminar, but also provide an easy way to navigate the archive of videos by year and quarter.

The key component I wanted to include on the Calendar page was a way to toggle between the big picture and the details. For this I created two views – a monthly calendar overview and a list view. It was also important to include a way to add the dub calendar to a user's personal calendar client, so I added an option to subscribe to the dub calendar as a feed.
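
For the subscription option, the usual pattern is to publish the calendar as an iCalendar (.ics) feed that calendar clients can subscribe to. The sketch below is hypothetical (the feed URL and element ID are made up), just to show the idea:

```typescript
// Hypothetical sketch of the "subscribe to the dub calendar" option.
// The feed URL and element ID below are made up for illustration.
const icsFeedUrl = "https://dub.example.edu/events/dub-calendar.ics";

// Some calendar clients (e.g. Apple Calendar) treat webcal:// links as a
// subscription to the feed rather than a one-time download of the file.
const webcalUrl = icsFeedUrl.replace(/^https:/, "webcal:");

const subscribeLink = document.getElementById("subscribe-link") as HTMLAnchorElement;
subscribeLink.href = webcalUrl;
subscribeLink.textContent = "Subscribe to the dub calendar";
```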

Rather than having separate pages for news, announcements, and blog posts written by the dub community, I combined them all into a single Blog page.

Finally, I wanted to provide an easy way for those new to dub to create accounts and for existing members to easily manage and update their profiles. This functionality was not available on the previous dub site.

The wireframe prototype can be accessed online at http://zlgatc.axshare.com/

Analysis:

What worked well:

Since my prototype was presented to the user in wireframe form, it was easy for him to focus on the content rather than the visual design. This worked well because in my testing I was more concerned with how well the information architecture and layout were working than with the visual design. The user found it easy to navigate and find specific information by filtering with the secondary navigation, and he successfully completed all the tasks I asked of him. However, I did feel that he was not as interested in exploring the site beyond the tasks I gave him, because there were no visual elements of the design drawing him in to explore further.

What needed improvement:

Without images or certain aspects of the visual design fleshed out, it was difficult to show dynamic content and interactions. At times, the user found it difficult to distinguish content that was clickable and fully functional from what was simply a placeholder. The arrow cursor changing to a pointing finger for clickable items was not enough feedback for the user, so I found myself having to verbally explain that the site was not fully functional.

Conclusion & Post Critique Reflections:

At different levels of fidelity I could probably have learned different things from my prototype. With the goal of focusing on information architecture, navigation, and layout, the wireframe prototype I put together was successful. However, I might want to include some higher fidelity elements to signal to the user during testing which parts of the site are fully functional rather than placeholders. I may not have needed to provide so much detail in the types of content I chose to include. In the future, I will probably spend less time looking for just the right content to include and use more representative blocks of sample text for similar levels of testing.

3D Gesture Interaction for TV: A Wizard of Oz Behavioral Prototype

Design Challenge:

The challenge for this assignment was to build and test a behavioral prototype for a gestural user interface for a TV system. The system had to allow for basic video function controls (play, pause, stop, fast forward, rewind, etc.). Our team chose to pursue a 3D gesture system. The goal of our prototype and evaluation was to explore the following design research and usability questions:

  • How can a user effectively control video playback using hand gestures?
  • What are the most intuitive gestures for this application?
  • What level of accuracy is required in this gesture recognition technology?

Prototype:

We began by discussing how to set up our prototype to allow for quick iteration and modification. The two key elements of our prototype were (1) figuring out how to manipulate the video content, outside of the user's sight, to simulate the controls carried out by the user's 3D gestures, and (2) figuring out how to provide feedback to the user while they carried out a specific gesture.

Initially we decided to use a Google Chromecast to wirelessly control the television through a laptop. However, we soon realized that running Chromecast from a Mac introduced glitches and delays that would negatively impact our test. We opted instead to use a Mini DisplayPort-to-VGA adapter to connect the laptop directly to the television set. This allowed us to control the video in real time without any delay perceptible to the user.

To simulate the feedback, we used a laser pointer aimed at the television that mimicked the user’s gesture. We also used this to indicate to the user when the gestures were outside of “the system’s” visible range.

Next, we created three sets of gesture styles to test, which included an open palm style, a fist style, and a thumb style of interaction. 

Gesture instruction sheets for the palm, fist, and thumb interaction styles

With these three varied sets of gestures we wanted to understand how users would perceive the required actions, how easy they would be to conduct, how intuitive each set was, how easy it would be to remember each set of gestures, and whether or not a user would prefer one set to the others.

Next, we decided on roles. We needed a Wizard/operator, a moderator, and a laser feedback controller. We also needed to recruit participants and find a location to conduct the test in the context of a home living room, where a television would likely be set up “in the wild”. 

Setting up our prototype

In our final setup for testing users in context, we had the moderator and user sitting on a sofa facing the television. The operator sat facing the user, able to clearly see each of the user's gestures but out of the user's direct attention. The laser feedback controller also stood out of the user's attention, facing the television, in order to recreate the user's gestures while pointing the laser at the television. A camera on a tripod was set up behind the user to simultaneously capture the user's gestures and the television's response.

The user was given three sets of printed instructions describing each of the three gesture sets, which could be referenced at any time during the test. A Kinect device was added to the television setup as a prop to represent the way in which the system would capture the user's gesture input.

Behavioral prototype set-up

For the evaluation itself, we put together a script describing what we were testing. We asked the user to review each set of instructions before performing the following predefined tasks while watching an episode of Sherlock:

  • Start the video
  • Fast forward to wedding scene and play the video from there
  • Pause the video
  • Rewind to the scene where Sherlock & Aunt are sharing tea
  • If you only wanted to see the last scene in the show, how would you get there?
  • Play again

We also encouraged the user to use a think aloud protocol, so that we could capture his thoughts during the test.

After testing all three gesture sets, we surveyed the user to gather feedback on how easy it was to remember the gestures, gesture preferences, whether or not there were gestures that were particularly awkward or difficult to conduct, our feedback mechanism, and any abnormal or unexpected behaviors.  We ended with an open feedback session to allow the user to share any additional thoughts. 

Analysis:

What worked well:

Testing in a real living room was helpful for understanding the context in which a user would be using the product. This helped us quickly realize that gestures would most likely be conducted while users were sitting, so recommended gestures needed to take this into account.

Regarding gesture types, the user found the palm style and fist style gesture sets equally intuitive, but found the fist style gestures harder to remember. The user had a preference for "the palm style…for sure because it's the most simple...and the instructions were super simple."

The user also noticed and appreciated the feedback of the green laser light because it was a “nice visual to know that action is registering on the screen.” He also stated that it was “helpful for tracking his motion.”

What needed improvement:

Although we felt that the thumb gestures could easily be recognized by the system because of the distinct directional cue created by the shape of the hand, this set was not as well received by the user as the other gesture sets. The user found the rewind gesture in the thumb style set particularly awkward when using just the right hand, and suggested using the left hand for the same gesture.

The user felt that when performing fast forward and rewind using the fist style gestures, it was difficult to know how far to stretch his arm or how sensitive the system would be. 

Although the palm style set of gestures was preferred by the user, we noticed while analyzing the video that when the user repeated the fast forward gesture quickly, the motion could easily be misinterpreted as fast forward, rewind, fast forward, rewind, fast forward. Although the wizard knew the user's intention and could operate the video correctly, it could be difficult for "the system" to distinguish the directional intent of the user.

We also realized through the testing that one of the main challenges for all parties – user, Wizard, and laser operator alike – was understanding when the user's actions were out of range. To make this experience more accurate, our team discussed the possibility of providing some sort of calibration to find a user's midpoint and mapping the video content timeline directly to the distance between the user's hands at the most extreme fast forward and rewind gestures.
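
As a rough illustration of that calibration idea (a hypothetical sketch, not something we built into the Wizard of Oz prototype), the mapping could be as simple as a linear interpolation from the calibrated hand range onto the video timeline:

```typescript
// Hypothetical sketch of the calibration idea discussed above; not part of our prototype.
// Assumes a calibration step has recorded the smallest and largest comfortable
// distances between the user's hands.

interface Calibration {
  minHandDistanceCm: number; // hands nearly together (maps to the start of the video)
  maxHandDistanceCm: number; // most extreme fast forward gesture (maps to the end)
}

/** Map the current distance between the user's hands to a position on the video timeline. */
function handDistanceToSeekTime(
  handDistanceCm: number,
  cal: Calibration,
  videoDurationSec: number
): number {
  const range = cal.maxHandDistanceCm - cal.minHandDistanceCm;
  // Normalize to [0, 1], clamping values outside the calibrated range.
  const t = Math.max(0, Math.min(1, (handDistanceCm - cal.minHandDistanceCm) / range));
  return t * videoDurationSec;
}

// Example: hands 60 cm apart with a calibrated range of 20–100 cm while watching a
// 90-minute episode -> seek to the 45-minute mark (2700 seconds).
console.log(handDistanceToSeekTime(60, { minHandDistanceCm: 20, maxHandDistanceCm: 100 }, 90 * 60));
```

A mapping like this would also give a natural definition of "out of range" – any hand distance falling outside the calibrated bounds – which the laser feedback could then flag to the user.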

Since our Wizard of Oz prototype did not account for other possible moving objects in the space, we felt it would be important to consider how the system would deal with “noise” from other moving objects during viewing.

Conclusion:

Overall, we felt that our behavioral prototype was successful in providing a "quick and dirty" way to test assumptions about the 3D gesture interactions we developed. We learned a great deal not only about the user's gesture preferences, but also about the level of accuracy that the gesture recognition technology would need to account for.

 

GeoJournal Video Prototype

Unlike other weeks in the course, this last week we didn't have an in-class group exercise. Instead, we spent our class time learning about the composition, production, and post-production process for creating product video prototypes. Then, during the week, we worked on producing a one- to two-minute product video applying what we learned.

Design Challenge:

I chose to create a video utilizing my paper prototype to comprehensively and concisely communicate the motivation, usage, and functionality of the GeoJournal app. Find the details of the assignment here.

I set out to highlight some of the use cases for the smartwatch app – moments when a phone might be inaccessible, inconvenient to use, or simply disruptive to the user's experience.

Process:

I began with storyboarding and writing a script that would create a story arc showing the circumstance, including the problem and user need, the actions that can be taken with the app, and finally, the result. With my rough sketches and script in hand, I went on to the filming process.

Armed with a Canon Rebel T2i, we shot at a total of four locations around Seattle (a friend's apartment, Gasworks Park, Milstead & Co. coffee shop, and The Troll) to get the shots outlined in my storyboard. I also created, using Adobe Illustrator, the watch and phone interfaces that would be inserted as overlays into the video. Finally, I had to find the appropriate music and record the voice-over that would set the mood and tell the story.

Then the editing began. This was my first time using Final Cut Pro. It was challenging to get all the pieces just right while paying attention to the pace of the video. Overall, I'm happy with the result and proud that I was able to produce something within such a short turnaround time.

What I learned:

  • It's all about editing, editing, and more editing!
  • It's best to choose and show a simple scenario to get your point across. Trying to do too much at once can confuse the viewer.
  • Video engages many senses at once – sight, sound, motion, and titles – and each of these modes should be used strategically to help inform the viewer.
  • When presenting a value proposition, it's often more impactful to show the larger, overarching, macro-level emotional payoff than to focus on the specific features of a product. This is the difference between a successful commercial and a late-night infomercial.

Pros & cons of video prototyping:

Pros:

  • Video prototyping allows you to tell a story about how you envision the future or ideal state of your product in use.
  • It can be used to gauge interest in your product even before it is fully built. 
  • It allows you to demonstrate the emotional impact your product can have on a user's life. 
  • It can help you pitch your idea to various stakeholders.
  • It is an artifact that can stand alone, without the need for further explanation. 

Cons:

  • Production and editing can be extremely time consuming and expensive.
  • Finding good actors in a short period of time can be challenging.

The final product:

Post critique reflections:

It's all about showing versus telling. Video prototyping is a great tool for testing and sharing a product's value proposition. It's the ultimate tool for storytelling, as it has the potential to provide an immersive and captivating experience and elicit an emotional response from viewers, whether they are internal stakeholders or customers.

Immersion Blender Model Prototype

Design Challenge:

This week's design challenge was to design and evaluate 3D lo-fi prototypes to demonstrate how OXO might apply their core competency in new ways. The goal of the exercise was to help OXO explore opportunities to expand their business into new areas that start to incorporate sensors and digital UI for precision results (think Good Grips meets modernist cuisine tools), as well as expanding into new types of product lines. View assignment details here.

OXO is known for their emphasis on universal design, which makes their products usable for as many people as possible. They focus on inefficiencies of existing products to help improve people’s everyday lives. In order to apply these principles, my design concepts focused on comfort, ease of use, and simplicity. For each concept, I minimized the number of physical controls and attempted to make the interfaces easy to view from any angle.

I created concepts for an immersion blender. I was inspired by everyday products that already exist in the kitchen – a mug, a saltshaker, and a spice jar. I found these products easy to use and comfortable to grasp in both left and right hands.

Process:

In order to test my assumption that these forms would translate well to a handheld immersion blender, I created three prototypes inspired by the items above. I decided to focus on experimenting with the form of the handle and the physical placement of the buttons and screens. I didn't worry too much about the look, finish, or texture of the prototypes.

I chose to prototype only the handle designs and made an interchangeable base to save the time of creating three identical bases. This also allowed for experimenting with how different handles might interact with other base attachments, such as a whisk or various blade attachments that could be powered by the same motor. I also didn't focus on getting the exact weight right, since I found it difficult to test all of these aspects at once. However, I did try to keep the weight within reason by adding clay to weigh down areas where the motor might be located.

I began by sketching out rough concepts before jumping in to build the prototypes using foam core, clay, cardboard, tape, glue, hot glue, etc.

Buttons were placed in areas that were easy to control while the handle was grasped in the hand. For the hourglass prototype, I envisioned the up and down buttons being controlled by the pointer and middle fingers, respectively, to adjust the speed of the blender. For the ball-shaped prototype, I envisioned the controls being operated by the thumb. For the mug handle prototype, I chose a continuous switch-type button that could be operated by the thumb while grasping the handle.

Prototype evaluation:

I tested my prototype in the kitchen of my test participant to get a better idea of how the product would be used in the scenario of blending soup in a pot over the stove, a common use case for handheld immersion blenders. 

I walked her through the scenario and then asked her to explain her thought process while choosing the handle she would most like to use. By using the think-aloud protocol, I was able to discover what worked well and what needed improvement across my three concepts.

In order to accommodate both left- and right-handed usage, I decided to create rotating interfaces for the hourglass and handle-shaped prototypes that could be adjusted to any angle to ensure the most comfortable viewing angle for reading speed and viscosity. This would prevent users from having to interpret readings upside down or sideways. For the round, ball-shaped prototype, I decided to place the interface and controls vertically to ensure visibility for users of either handedness. I also assumed that the ball handle would be grasped from above.

Through the testing, I found that both the mug handle version and the hourglass version worked well, but that the size and spacing of the handle had to be right for maximum comfort; otherwise fingers may feel cramped. The hourglass in particular seemed to be the most comfortable in the hand, with controls that could be pushed by either the thumb or the pointer and middle fingers. In both cases, the displays were easy to view and read from the top. However, I was disappointed to find that the tester didn't notice she could change the angle of the display.

I discovered the most about the ball shaped prototype. Although I intended for the user to grasp the ball from the top using the buttons as a way to orient her hand placement, in reality, she found the ball prototype the least comfortable. She felt more comfortable holding it from the cylinder portion where the display was located. This blocked the ability to read the display easily. She also explained that the angle of the arm required to hold the ball comfortably in the hand was awkward and unnatural.

The tester was also more sensitive to the different weights than I had anticipated. When retesting, I would be clearer about explaining which aspects of the prototype to pay attention to and would also explain some of the limitations of the prototypes being tested.

The concept that was most comfortable and easiest to use by the test participant was the hourglass shape that allowed the user to wrap all of her fingers around the handle. This is not what I would have predicted while building the prototypes. However, building three very divergent concepts in parallel allowed the tester to compare the designs, rather than just giving feedback on a single concept. This provided insights that I would not have been able to capture if testing a single concept.

Since I was able to test many of the assumptions I made about form, comfort, screen interface visibility, button placement, and hand placement of each concept, I’m happy with the results of my testing.  All three prototypes were effective in helping to answer the questions I set out to answer through building and testing these prototypes. 

Post critique reflections:

  • Model prototypes are a great way to test design assumptions quickly and cheaply. 
  • Creating divergent designs in parallel is very effective for exploring new products.
  • Prototypes can help you evaluate various aspects of a design, so it's always good to know what questions you want answered, so that you can design your prototype to the appropriate level of fidelity.
  • What you want to evaluate with your prototype can also help you choose the most appropriate materials used to create your prototype. 
  • The context of the evaluation is important. Simulating a scenario in the context of use helps create a more authentic experience. 

Exercise 2 - 3D Model Prototyping

This week we explored 3D model prototyping through redesigning the ecoATM, a kiosk used for recycling old electronic devices for cash. We were shown some examples of low fidelity cardboard, foam core, and foam prototyping techniques, given access to materials, and set loose.

The redesign had to include the following:

  • a touchscreen for text input and display
  • a compartment for depositing devices
  • a feedback mechanism to indicate to users that their devices are safely deposited and secure from theft
  • a way to authenticate users using a photo ID
  • a way to dispense QR code stickers, receipts, and cash

After doing some research to understand how the current ecoATM operates, we realized that the current user experience was complicated and that the appearance of the kiosks was extremely boxy and unattractive. Users had to take multiple steps and unnecessarily deposit, remove, and redeposit their devices.

To address these issues, we decided to explore a completely new cylindrical kiosk concept to make the ecoATM a destination rather than an afterthought, using an interesting and unexpected form to attract users.

We also wanted to focus on simplifying the experience and providing transparency into the device and ID scanning process.

What I learned from the process:

  • There are positive and negative tradeoffs to building at small scale. Working at small scale lets you explore a concept quickly and provides an overview of how an interaction might work without sweating the details. Our team found it helpful to create a cardboard person at small scale as a point of reference. However, it doesn't allow you to really test the ergonomics of a design, which is a benefit of prototyping at full scale.
  • Sketching in 2D helps for documenting ideas, but actually building in 3D helps you think with your hands. There are ideas that are difficult to describe on paper that can be easily shown in 3D. 
  • Ideas emerge from working. As you work with your hands, not only do you see what doesn't work, but you discover new possibilities that you may not have considered otherwise. 
  • Coordinating work can be challenging when working on physical objects because all the pieces of the design need to fit together.
  • One of the themes brought out through our class critique was the matter of accessibility, which our group didn't consider as much as we should have. However, in our debrief, we talked about other ways our design could have addressed different types of users and even those with disabilities. 

The Pros & Cons of 3D Model Prototyping:

Pros:

  • It's inexpensive and quick. With some very basic, easy-to-find materials – cardboard, poster tubing, card stock, glue, tape, wire, box cutters/X-Acto knives, and scissors – we were able to explore a completely new design concept in less than two hours from start to finish.
  • Good for exploring the shape, form, and size of physical products.
  • Good for testing ergonomics, especially when created in full scale.

Cons:

  • At lower fidelities, prototypes are limited in demonstrating the exact texture and materials that could affect the user experience.
  • Requires a bit more skill and practice in construction than 2D paper prototypes. 
  • Requires physical space to build and work and can get messy quickly. 
  • It can be challenging for multiple people to be working on a single prototype at the same time.