Wrapping Up

It’s the last week of summer research, and the Google Glass team is putting the finishing touches on their project. Grace and Lily thanked their collaborators at Boston University for completing the week-long user studies with Glass. Back at Wellesley, the pair is excited to have finished conducting eight user studies with biology summer researchers.

With all three Google Glasses safely returned home to the HCI Lab, Grace and Lily are polishing up their protocol application. Grace has just incorporated voice commands: when users nod their heads down, the microphone activates and listens for the commands “next,” “previous,” and “cancel.” Meanwhile, Lily has successfully added a bookmark feature, which enables users to mark their place in the protocol. With an upward swipe, the background color changes from black to cyan, providing a visual cue for the step the user last completed.
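The navigation and bookmark behavior described above can be sketched roughly as follows. This is a minimal Python sketch of the app logic only; the class and method names are illustrative assumptions, not the team's actual code, which would be written against the Android/Glass GDK.

```python
class ProtocolViewer:
    """Minimal sketch of the protocol app's navigation and bookmark state."""

    COMMANDS = ("next", "previous", "cancel")

    def __init__(self, steps):
        self.steps = list(steps)
        self.current = 0        # index of the step being displayed
        self.bookmarked = None  # index of the bookmarked (last completed) step
        self.listening = False  # microphone state, toggled by a head nod

    def nod_down(self):
        """A downward head nod activates the microphone."""
        self.listening = True

    def hear(self, command):
        """Dispatch one of the supported voice commands."""
        if not self.listening or command not in self.COMMANDS:
            return
        if command == "next" and self.current < len(self.steps) - 1:
            self.current += 1
        elif command == "previous" and self.current > 0:
            self.current -= 1
        self.listening = False  # "cancel" simply deactivates the mic

    def swipe_up(self):
        """An upward swipe bookmarks the current step."""
        self.bookmarked = self.current

    def background(self):
        """Cyan on the bookmarked step, black everywhere else."""
        return "cyan" if self.current == self.bookmarked else "black"
```

The key design point from the post is that the microphone is gated by the head nod, so stray lab chatter is not interpreted as a command.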

The team has also created a lovely poster for the summer program’s poster session, and they are eager to present their findings at the end of the week. 


“Next” Steps for Google Glass

The Google Glass team has solidified an ambitious to-do list for the last two weeks. As user studies continue during the daytime, Grace and Lily are researching different areas for their app development and beginning work on a research paper.


When the Wellesley biologists are not engaging with Google Glass, Grace and Lily are working to add features to the app. Grace is still focusing on embedding voice commands; by the end of the two weeks, she hopes to support simple “next” and “previous” commands for navigating through the protocols. Meanwhile, Lily plans to add a bookmark feature that will enable users to mark their current progress in the experiment. Afterwards, she wants to start designing an accompanying program that allows users to upload their protocols directly into the app.

At Boston University, the third user has just started his week-long user study and will continue to send feedback. By the end of this week, the Google Glass team will have finished all user studies with the BU researchers.


When Life Gives You Google Glasses, Conduct More User Studies

The Google Glass team has just begun conducting another wave of user studies. The arrival of a third Google Glass (grey) enabled Grace and Lily to kick off a new study with Wellesley’s own synthetic biology students, each of whom also works in a wet-lab environment and uses protocols regularly. At a lunch meeting, Grace and Lily invited 11 Wellesley biology students to play around with their protocol application and munch on Boloco burritos. As the Google Glass team explained the purpose and goals of the study, the biology researchers chimed in with feedback and initial suggestions for further enhancements.

From this collaborative discussion, Grace and Lily noticed that some of the most common suggestions included features to bookmark current steps and to input unknown values through voice commands. The team also learned that lab goggles are potentially problematic, as Google Glass does not fit well over or under them. Companies like XOne seem to be addressing such issues by designing safety goggles with Glass-like features. Fortunately, only one of the Wellesley users is required to wear goggles, and Lily and Grace plan to find her a substitute for the study.

Just this morning, Grace and Lily handed Google Glasses over to two Wellesley students for a three-day study. The biology researchers quickly grew accustomed to their customized protocol apps and immediately started using them. In the meantime, the first week-long user from Boston University is turning the Google Glass over to a second week-long user.

Excitement fills the air, and data is streaming in.

Gesture Recognition with Glass

Lily and Grace continue to add features to their protocol app. Although one of the Glasses is still at BU, progress has been steady as they each develop different features. Lily is currently working on a function that lets users view multiple steps of the protocol at once, an idea that sprang from one of their BU collaborators’ suggestions. Meanwhile, Grace is grappling with voice commands within the app. Her goal is to incorporate speech recognition for commands like “next” or “previous” (to navigate the protocol) without the user having to touch Glass to activate the feature. Another possible feature is incorporating videos into the app, so that users could watch a video demo for certain steps of the protocol.
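The multi-step view Lily is building amounts to showing a small window of steps around the current one. A minimal Python sketch of that windowing logic (the function name and parameters are our own illustration, not the app's actual code):

```python
def step_window(steps, current, before=1, after=1):
    """Return the slice of protocol steps to display around the current one,

    so the user sees the previous and upcoming steps at a glance without
    scrolling. Clamps at the start and end of the protocol.
    """
    lo = max(0, current - before)
    hi = min(len(steps), current + after + 1)
    return steps[lo:hi]
```

For example, on step 3 of a four-step protocol the window would show steps 2–4, while on step 1 it would show only steps 1–2.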

Their friends at Boston University continue to experiment and play around with Glass. Each collaborator will have a week to work extensively with the protocol app that Grace and Lily have developed. At the moment, the app enables the users to move through the protocol steps with hand gestures and touchpad scrolls. The Google Glass team here hopes to continue building off of their collaborators’ suggestions and improving their app as they receive feedback along the way.

Related Links:

 Google Glass Hand Gesture Recognition Demo

The First Glass Study

This past week, Grace and Lily gave one of our Glass units to our BU collaborators to try out. Each person will use the Glass for ~3 days and fill out a daily survey, marking which applications they used, whether Glass responded well to their commands, how they felt about the Glass, etc.

The goal of this study is to better understand Glass’s strengths and weaknesses. Since Glass is such a different technological platform from what we are familiar with, we aim to see whether user ratings for Glass improve over time. Lily stated, “When I first tried out the Glass, I remember that the first day was primarily driven by my curiosity to understand Glass, instead of being captivated by its provided functions. It was strange to get used to, and I was hardly able to make any practical use of it. By the third day, however, I had settled into the comfortable niche of messaging and sending pictures to my friends via Glass, so that I could be chatting even as I cooked a messy dinner.” Hopefully, our participants will experience a similar development.

As for our protocol app, we have finally implemented gesture detection so that the user can swipe through slides with hand-waving motions in front of the camera (as opposed to using the touchpad on the side of Glass). Grace and Lily believe this hands-free interaction is particularly useful for lab researchers, who may have hazardous materials on their gloves. However, the gesture detector follows the hand based on color, which poses problems at times. Our lab participants wear white gloves when working, and unfortunately, many objects in the lab are also white, interfering with the detection. As we further develop this application, we hope to find a solution to this issue.
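A color-threshold tracker of the kind described can be sketched in a few lines of plain Python. The function names and threshold values below are illustrative assumptions, not the team's actual detector; the sketch also makes the reported failure mode concrete: a rule tuned to white gloves fires on every other white object in the frame.

```python
def near_white(pixel, brightness=200, spread=30):
    """Crude per-pixel classifier for 'glove-colored' (white) pixels.

    A pixel counts as glove-colored when all RGB channels are bright and
    close together. Thresholds here are made-up illustrative values.
    """
    r, g, b = pixel
    return min(r, g, b) >= brightness and max(r, g, b) - min(r, g, b) <= spread


def hand_mask(frame):
    """Binary mask over a frame given as a 2-D grid of (r, g, b) tuples.

    The detector would track the largest connected region of True pixels;
    any white bench surface or labware produces competing True regions.
    """
    return [[near_white(px) for px in row] for row in frame]
```

A real implementation would work on camera frames (e.g. thresholding in a color space like HSV and taking connected components), but the interference problem is the same: the mask cannot distinguish a white glove from a white pipette box.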

Double the Vision

The HCI Lab recently welcomed the arrival of our second Google Glass, which boasts a soft sky-blue look. We plan to continue developing on one Glass and testing with the other.

Last Friday, the entire lab traveled to Boston University for the NEGEM Conference (New England iGEM Conference), where Lily and Grace spent some time sharing our work with iGEM teams from BU, Tufts, MIT, Harvard, Rutgers, and WPI. BU’s team was very enthusiastic when they shared their experiences testing out our initial protocol app. We hope to extend our collaboration and user testing to these other teams.

Back in the lab, Lily and Grace are making incremental progress with incorporating gestures into our protocol app. Inspired by OnTheGo Platform’s Ari demo, which enables users to shuffle through photos by swiping in front of the camera, we are working towards our own version. At the moment, our nemesis seems to be calibrating the camera with our hand motions. However, while both Glasses are still in the lab, we’re doubling our efforts and brainpower to figure things out.

BioDesign for Google Glass

Grace and Lily are thrilled to be working with Google Glass this summer! Their goal is to explore the use and acceptance of Google Glass in BioDesign lab settings.

After a few weeks of learning Android tools and familiarizing themselves with Glass’s functions, they have finally developed a prototype of a Google Glass app for BioDesign! They apply a participatory design method in this project, collaborating closely with a biology iGEM (International Genetically Engineered Machine, a student research competition) team at Boston University. They aim to create an app that supports users working in a biology lab, allowing them to view protocols, take pictures, record notes, and set timers. Right before heading out to BU to test the first prototype, they added one more feature to the working app: the ability to pick up where you left off after exiting and restarting the program.

Hopping on the train to BU, they introduced the program to their users and observed how the users interacted with the Glass app while working on their BioDesign experiment. Luckily, some of Glass’s shortcomings (e.g., overheating and going to sleep after 20 seconds) did not pose problems. The team received immediate, positive feedback at the end of the day and looks forward to visiting again in two days.