Gaming Futures: Perspectives of Women in Gaming and Play

Recently, Wellesley College was buzzing with video game events. The Davis Museum unveiled its exhibit The Game Worlds of Jason Rohrer, making it the first art museum to host a solo retrospective of a video game artist. Events surrounding the opening of the exhibit included a talk by Jason Rohrer, a student demo session of video games, and a panel discussion about the perspectives of women in games and play.

Student Demos

Demo night attendee crowd (left); MuSme in action (right)

During the demo session, Wellesley College students showed off video games they designed and developed. In Asiya Yakhina’s mobile social impact game White Gold, players work as cotton pickers in Uzbekistan. The game aims to raise awareness of forced labor in the country and to draw connections between the reality of cotton picking and its impact on the people who do it. Meanwhile, Whitney Fahnbulleh’s choice-based text game Privilege simulates the effects of privilege in daily life by letting players take on the role of an underprivileged individual making innocent decisions, such as whether to go to the store to pick up snacks, with potentially life-altering consequences. Breaking away from traditional platforms and focusing more on play itself, MuSme (above right) by Amal Tidjani, Priscilla Lee, and Eileen Cho is a musical suit that lets children use a suit worn on their body as a keyboard to create music.

Gaming Futures Panel Discussion

To extend the discussion of video games beyond the exhibit, the HCI Lab, in collaboration with Professors Orit Shaer, Dave Olsen, and Nicholas Knouf, organized the event Gaming Futures: Perspectives of Women in Games and Play, a panel dedicated to highlighting the experiences of women in the fields of video game research and development. Panelists included:

  • Katherine Isbister, a full professor in the Department of Computational Media at the University of California, Santa Cruz and a core faculty member in the Center for Games and Playable Media. Her research focuses on emotion and social connection in gaming.
  • Soraya Murray, an assistant professor of film and digital media at University of California, Santa Cruz. She is an interdisciplinary scholar who focuses on contemporary visual culture, with particular interest in contemporary art, cultural studies, and games.
  • Rayla Heide ‘10, a scriptwriter at Riot Games. She began her career at Celestial Tiger Entertainment Limited, where she pitched original concepts for scripted TV shows and features and supported the company’s production and marketing.
  • Cassie Hoef ’15, a technical program manager at Microsoft. She works with graphics developers on AAA games to deliver quality, performant games.
  • Claudia Pederson, an assistant professor of art history in new media and technology at Wichita State University. She studies modern and contemporary art, with a focus on technology, media theory, and social practice.
  • Anna Loparev ’10, a postdoctoral researcher at the Wellesley Human-Computer Interaction Lab. With experience in game design both in industry and academia, she centers her research on the intersection of collaboration, education, and play in next generation user interfaces.

The event attracted an audience from the broader Boston community. One of the major draws was the diversity of perspectives brought by the panelists. From academic halls to bustling boardrooms, from scriptwriting to API development, from a recent college graduate to leaders in their fields, from cultural to historical viewpoints, each panelist contributed a unique perspective that shaped the overall discussion in meaningful ways.

Panelists

After all of the panelists shared insights into their work and emerging topics in game design and development, audience members were encouraged to ask questions, sparking dialogue among panelists and attendees. One of the most in-depth conversations centered on the meaning of art and whether a game can be art. The role of money in game development and its effect on a game’s artistic nature came up several times. Other comments from panelists centered on what we are actually asking when we question games as art, as well as the parallels between this discussion and similar ones raised during the rise of other media, such as photography. The consensus seemed to be that games can in fact be art, but judging whether a particular game is art is a subjective matter.

Another focus was the industry workplace environment and how women are treated and perceived. Experiences varied, but overall panelists felt respected by male co-workers, despite working in a male-dominated setting. When discussing perception in the wider gamer community, sentiments were more pessimistic. Panelists agreed that there is no denying (and unfortunately no easy fix for) the prevalent sexism in the community. The only practical course of action may be to simply bear it and try to develop as many positive female role models as possible. As Professor Murray put it, simply “standing in front of a class and speaking competently about games is an act of activism.”

After the panel, many members of the audience shuffled toward the front to talk to panelists one-on-one. Students were enthusiastic about the readings panelists had suggested and were eager to learn how to enter the fields of video game design and development. Suggested readings included Rise of the Videogame Zinesters: How Freaks, Normals, Amateurs, Artists, Dreamers, Drop-outs, Queers, Housewives, and People Like You Are Taking Back an Art Form by Anna Anthropy; Tomb Raiders and Space Invaders: Videogame Forms and Contexts by Geoff King and Tanya Krzywinska; Values at Play in Digital Games by Mary Flanagan and Helen Nissenbaum; and Better Game Characters by Design: A Psychological Approach by Katherine Isbister.

While one panel will not fix the industry by itself, it succeeded in inspiring the audience to learn more about women in games and play and the issues they face. We hope that more women will join this field and set their own positive examples, paving the way for a more welcoming and inclusive community.

SynFlo: Synthetic Biology Through Playful Interaction

SynFlo is a new project that began this semester in an effort to further explore the role of technology within the domain of synthetic biology. This tangible user interface seeks to convey basic synthetic biology concepts to a diverse audience of children and families in an informal science learning setting. SynFlo mimics a wet-lab protocol used in a real synthetic biology experiment. It uses small micro-computers (Sifteo Cubes) with dynamic actuation and display capabilities, which are mounted onto 3D-printed labware such as petri dishes and beakers. The project team is advised by Prof. Orit Shaer and consists of Evan Segreto ’15, Casey Grote ’14, and Johanna Okerlund ’14, in collaboration with Anja Scholze and Romie Littrell from San Jose’s Tech Museum of Innovation.



sifteo cubes with gene, plasmid, & e. coli

SynFlo was originally conceived as an outreach project for iGEM 2012: an interactive installation that walks participants in a science outreach program through the basic steps of E. chromi, an experiment conducted by the University of Cambridge iGEM team involving the creation of E. coli that serve as color-changing biosensors. The installation consisted of three main parts: a triplet of Sifteo cubes, a triplet of tangibles, and a Microsoft Surface. The Sifteo cubes represented the genes, plasmid, and E. coli, respectively; the tangibles represented the toxins; and the Surface acted as an environment in which to deploy the engineered bacteria. Users progressed through a simulation of the E. chromi experiment as they would in the wet lab.


toxin tangibles & microsoft surface

First, users choose the gene to be used in the experiment, by tilting the cube to scroll and pressing to select. Then, the gene is inserted into the plasmid, which acts as the vehicle for insertion into the E. coli. Users do this by neighboring the gene and plasmid cubes and shaking, simulating a vortex mixer. Finally, the plasmid is inserted into the E. coli by neighboring the plasmid and bacteria cubes. Users can then test their creation by deploying the bacteria within the Microsoft Surface and adding various toxins.

A user floods the environment with a yellow (lead) toxin by touching the tangible to the Surface.

More information regarding this first version of SynFlo can be found here.


The second iteration of SynFlo began in late January as a collaborative effort between the Wellesley HCI Lab and the San Jose Tech Museum of Innovation, after we were contacted by two members of the Tech’s research team, Romie and Anja. They were looking for existing examples of interactive applications that teach children basic synthetic biology concepts and happened to stumble across documentation of the version described above. In collaboration with Romie and Anja, we began implementing a new version of SynFlo, one that would work on the new Sifteo SDK (2.0) and would incorporate more tangible elements. The most recent version of the application, which we used to run preliminary user studies with visitors at the Tech, is described below.

We tested two different implementations of SynFlo: (1) Sifteo cubes alone (similar to the original SynFlo implementation) and (2) Sifteo cubes combined with 3D-printed tangibles and labware. Thus, there are two ways to interact with the system at each step of the experiment.


helixes representing genes

STEP 1. Users select their desired gene. This step does not differ much between versions–in one, users select the tangible representation of their gene, and in the other, users select the Sifteo cube containing the digital representation of their gene.

STEP 2. Users insert the gene into an empty plasmid. In the labware version, they insert the DNA strand into the plasmid beaker. In the other, users simply neighbor the gene and plasmid cubes. In both versions, users shake the gene and plasmid to simulate vortexing.

pouring plasmid into petri dish


STEP 3. Users insert the plasmid into the E. coli. The labware version requires users to “pour” the plasmid into a petri dish containing the E. coli. As in the step above, users neighbor the cubes in the cube-only version.

STEP 4. Users test their E. coli. Using the labware, users “pour” beakers of toxin into the petri dish to introduce the substance to the E. coli’s environment.  Using the cubes alone, users neighbor toxin cubes with the E. coli cube to test for a reaction.
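To make the flow concrete, here is a minimal Python sketch of the four-step protocol in both interaction modes. The event names, gene colors, and toxin-matching logic are our own illustrative simplifications for this post, not the actual Sifteo SDK 2.0 code.

    # Simplified SynFlo session: one object tracks progress through the four steps.
    GENES = {"cyan", "red", "violet"}          # placeholder color genes

    class SynFloSession:
        def __init__(self, use_labware):
            self.use_labware = use_labware     # True: tangibles + labware, False: cubes only
            self.gene = None
            self.plasmid_loaded = False
            self.transformed = False

        def select_gene(self, gene):                    # STEP 1: pick a gene
            if gene in GENES:
                self.gene = gene

        def insert_gene_into_plasmid(self, action):     # STEP 2: "vortex" the gene into the plasmid
            expected = "shake_beaker" if self.use_labware else "neighbor_and_shake"
            if self.gene and action == expected:
                self.plasmid_loaded = True

        def transform_e_coli(self, action):             # STEP 3: get the plasmid into the bacteria
            expected = "pour_into_dish" if self.use_labware else "neighbor"
            if self.plasmid_loaded and action == expected:
                self.transformed = True

        def test_with_toxin(self, toxin):               # STEP 4: engineered E. coli change color
            if self.transformed and toxin == self.gene:
                return "E. coli turn " + self.gene
            return "no reaction"

    session = SynFloSession(use_labware=True)
    session.select_gene("violet")
    session.insert_gene_into_plasmid("shake_beaker")
    session.transform_e_coli("pour_into_dish")
    print(session.test_with_toxin("violet"))   # E. coli turn violet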


In general, we found that large tangibles help attract users: the labware drew people over in a way we didn’t see when working with just the Sifteo cubes. Visitors also spent more time engaged with the tangible version and seemed to remember the process, often repeating it for all the available genes. On average, users who interacted with the tangible representations spent 6.3 minutes engaging with the system, 2.6 minutes more than the 3.7 minutes spent on the cubes alone. Visitors who were watching (not interacting directly) also seemed to find the tangibles more interesting, spending more time watching that version than the cubes-only one. Perhaps most importantly, we asked several children which version they preferred, and they all indicated a preference for the tangible representations, saying that the tangibles made the interaction better and using words like “fun” and “interesting”.

Although the semester is coming to a close, we have big plans for the future of SynFlo. Evan Segreto is presenting a poster on the project at the Wellesley College Computer Science Department’s annual Senior Poster Session. This summer, with our new iGEM 2015 team co-instructed by Orit, Romie, and Anja, we’ll be exploring the interaction space of the MultiTaction surface to supplement our existing tangible prototype and to develop new interactive experiences for communicating synthetic biology to the general public.

TEI Conference at Stanford

A few weeks ago, we traveled to Stanford, CA for the TEI (Tangible, Embedded and Embodied Interaction) Conference. We had a blast: aside from enjoying the beautiful, warm California weather, we were introduced to a variety of interesting projects and frameworks through the paper talks and demo sessions. Stanford’s campus, with its red roofs and sandstone-colored walls, was definitely one to marvel at.

The conference opened with a keynote by Frank Wilson, who spoke about the history of the human hand and its relevance to the world of tangible and interactive technologies. The human hand is important not only from a functional standpoint, but also a personal one. We use our hands from the moment we are born to start exploring and discovering the nature of the physical world; they are arguably our most direct way to interact with and understand the physical objects in front of us. The way young children use their hands to touch, poke, rub, hit, and trace objects offers insight into the nature of all our interactions with the physical world, and this makes our hands an important part of our identity. As interaction designers, it is inspiring to think about our relationship with our hands, and it raises an important research question: how do we design for people who don’t have hands?

During the conference, we also demoed and presented our own project, Eugenie (shown below), a system that combines tangible active tokens with a multitouch surface to support biologists in a bio-design process that involves forming complex queries and exploring a large dataset. Users build rules with the physical tokens and then stamp the rules onto the surface, where they can explore and interpret the results. The goal of the system was to explore how tangible interaction can be applied to collaborative dataset exploration. During the demo session, many people were interested in the block-like structure of the pieces, and even some children were drawn to explore them.
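As a rough illustration of the interaction model (build a rule from tokens, then stamp it to query the dataset), here is a small Python sketch. The part names and attributes below are hypothetical stand-ins, not Eugenie’s actual data or code.

    # Hypothetical candidate parts a biologist might explore.
    candidate_parts = [
        {"name": "partA", "type": "promoter", "strength": "medium"},
        {"name": "partB", "type": "promoter", "strength": "strong"},
        {"name": "partC", "type": "rbs",      "strength": "strong"},
    ]

    def build_rule(tokens):
        # Each physical token contributes one attribute constraint; stacking tokens combines them.
        rule = {}
        for token in tokens:
            rule.update(token)
        return rule

    def stamp(rule, dataset):
        # "Stamping" the assembled rule onto the surface filters the dataset by every constraint.
        return [part for part in dataset
                if all(part.get(attr) == value for attr, value in rule.items())]

    rule = build_rule([{"type": "promoter"}, {"strength": "strong"}])
    print(stamp(rule, candidate_parts))   # -> only the strong promoter, partB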


We also had three students from our TUI Fall 2014 class present their final project, the Emotisphere (shown below), during the TEI poster session. Using an Arduino, galvanic skin sensors, and a heart-rate monitor, the project connects emotion and music: the sensor data is interpreted as an emotion, which is used to generate a song. The sphere itself can be manipulated in different ways (shaking, twisting, rolling) to control various aspects of the music (volume, changing songs, etc.). The attendees seemed to enjoy learning about and playing with the Emotisphere.


It was empowering to come to the conference with our own research to present, but it was also humbling to be among so many amazing people and hear about their interesting research. The rest of the conference was packed with paper talks and demo sessions. The presentations covered everything from analysis frameworks to the latest projects incorporating new forms of interaction. A few unique applications included MugShots, which allows for social interaction in the workplace by adding a small display to a mug; PaperFold, which uses thin-film foldable displays to give users a new way to interact with digital “paper;” VoxBox, a tangible machine that helps solicit feedback from event participants; DisplaySkin, which offers a novel approach to wrist wearables by utilizing pose-aware displays; and SPATA, a spatio-tangible measurement tool that connects virtual design environments with the physical world.

One of the most interesting demos was a toy remote control car that was controlled by a smaller toy remote control car. You would drive the smaller one around, and the larger one would drive in the same pattern. It was such a simple idea, but it makes so much sense: rather than mentally translating the desired path of the car to the left-right/forward-back switches of typical toy car remotes, or even to the motion of a steering wheel, you could simply drive the small car along the path you want and the big car would follow. Another favorite demo was a light-up flower (pictured on the right) connected to sensors in the user’s chair. The flower sits casually on the user’s desk and droops and turns red when they have bad posture, a fun and ambient reminder to sit up straight.

The conference closed with an inspiring keynote by Wendy Mackay that was all about paper. She started by sharing many of the projects she’s worked on that were inspired by paper-based interactions. It is interesting that the guiding design metaphor for the most common computer operating systems is the desktop: there are documents and pages that can be layered, placed next to each other, or put away when not in use. It’s a powerful metaphor, and it makes sense; we are used to working at a physical desktop, and we want our digital environment to feel familiar. However, many people seem to be working toward digital systems that could completely replace the physical desktop, and this is probably not the best approach. If that were to happen, we would lose many of the personal and intimate interactions we have with paper. One example Wendy mentioned was how composers notate their compositions while they are in the process of composing. Everyone has a slightly different system that makes sense only to them, and the symbols they develop naturally take on different meanings based not only on where they are written, but on how they are written. It is hard to build computer systems that allow for such personal and unique interactions. For these reasons, Wendy suggests that rather than creating digital environments to replace the physical ones we have always known, we should strive to introduce computation into physical environments to enhance them and enrich our interactions. Rather than trying to replace the physical world, we should be celebrating it and leveraging our established interactions with it.

Overall, we had a wonderful time at TEI sharing our projects from the HCI Lab and learning about the research practices and projects in the world of tangible, embedded, and embodied interaction. We were inspired by the work and conversations we had with many brilliant computer scientists, artists, and philosophers. The energy from the conference was contagious and we are excited to continue our work in the lab, incorporating some of the energy and inspiration from the conference.

CS320: Tangible User Interfaces FINAL PRESENTATIONS!!!

The students from the CS320 class, Tangible User Interfaces, presented their final interfaces on Monday.


Check out the cool things they made!

Emotisphere

Audience – all people
Goal – Connect emotion and music
Sensors – galvanic skin sensors and heart-rate monitor
Hardware – Arduino Uno

   

Emotisphere senses users’ emotions through biological reactions and translates them into appropriate music. Interaction techniques like twisting for volume control are based on the affordances of spherical objects. In future work, the team will add more interactions (like changing songs), identify a wider range of emotions, and create algorithms to generate musical compositions programmatically based on bio-rhythmic data.
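The pipeline is essentially sensor readings, then an emotion label, then music parameters. Here is a rough Python sketch of that flow; the thresholds, song mapping, and twist handling are placeholders we made up for illustration, not the team’s actual algorithm (the real device runs on an Arduino).

    def classify_emotion(skin_conductance, heart_rate):
        # Map galvanic skin response and heart rate to a coarse emotion label.
        if skin_conductance > 0.7 and heart_rate > 100:
            return "excited"
        if skin_conductance > 0.7:
            return "stressed"
        if heart_rate < 70:
            return "calm"
        return "neutral"

    SONGS = {"excited": "upbeat", "stressed": "soothing", "calm": "ambient", "neutral": "pop"}

    def on_twist(volume, degrees):
        # Twisting the sphere adjusts volume, one of the spherical-affordance interactions.
        return max(0.0, min(1.0, volume + degrees / 360.0))

    emotion = classify_emotion(skin_conductance=0.8, heart_rate=110)
    print(emotion, SONGS[emotion], on_twist(volume=0.5, degrees=90))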

They will be presenting their work as a Works In Progress paper at TEI2015! 

Team Musical Hoodie

Audience – all people
Goal – to share the music one is listening to with the world and show visually the type of music being heard.
Sensors – temperature sensor
Hardware – LilyPad Arduino, capacitive thread

    

Lights on the hoodie flash with the music to create a social music experience. A temperature sensor on the arm also functions as a touch sensor, making the experience more interactive for people outside the sweatshirt.

Do you ever find yourself listening to music but people find you anti-social? Have no fear, Musical Hoodie is here! The MH team created a musical hoodie with a LilyPad and LEDs. Lights on the sweatshirt flash with the beat of the drums in whatever song the user is listening to, so the people around you know what your mood is while you have headphones in. Another feature: when someone taps the sweatshirt on the shoulder, a temperature sensor detects the touch and lights up LEDs on the shoulder.
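A toy-sized Python simulation of the hoodie’s two behaviors is below; the real garment runs on a LilyPad Arduino, and the thresholds here are invented for illustration.

    def beat_to_leds(amplitudes, threshold=0.6):
        # Flash the hoodie LEDs whenever the drum amplitude crosses a threshold.
        return [amp > threshold for amp in amplitudes]

    def detect_tap(temperature_readings, baseline=22.0, jump=3.0):
        # A sudden rise over the ambient baseline is treated as a shoulder tap.
        return any(reading - baseline > jump for reading in temperature_readings)

    print(beat_to_leds([0.2, 0.8, 0.4, 0.9]))   # [False, True, False, True]
    print(detect_tap([22.1, 22.3, 26.5]))       # True -> light the shoulder LEDs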

teacher inTUItion

Audience – elementary school students (ages 7-13) and their teachers
Goal – allow students more direct communication with the teacher
Sensors  – sliders, buttons
Hardware – Phidgets

The inTUItion team interviewed current teachers and students and created a device to poll students for answers, share emotions, and evaluate levels of understanding. They also created a web interface that aggregates information for teachers; each student has a profile with information about their emotions. The web interface also allows a teacher to send feedback to students, such as quiet down, listen up, or a two-minute warning.

Elementary school students have a wide variety of social and emotional learning needs. It can be difficult for some students to communicate with their teachers, and it can be difficult for teachers to monitor all of their students at once. teacher inTUItion allows students to indicate to their teachers their emotion levels (sad/frustrated to happy) and their understanding levels (green for understanding, yellow for confused, red for totally lost). Teachers can provide feedback by clicking a button that lights up in the classroom to tell students to quiet down, listen up, or that they have a two-minute warning to finish an assignment. Each student in the class has a profile, which the teacher can monitor online to dynamically adapt their teaching.
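A minimal sketch of how the teacher-facing side might aggregate student responses is shown below; the field names and broadcast mechanism are assumptions, not the team’s code.

    from collections import Counter

    # Hypothetical per-student state reported by the Phidgets device.
    students = {
        "Ada":   {"emotion": "happy",      "understanding": "green"},
        "Grace": {"emotion": "frustrated", "understanding": "red"},
        "Mary":  {"emotion": "happy",      "understanding": "yellow"},
    }

    def class_summary(profiles):
        # Count understanding levels so the teacher can adapt the lesson at a glance.
        return Counter(profile["understanding"] for profile in profiles.values())

    def broadcast(message):
        # Teacher feedback sent to every student device (e.g., "quiet down", "2 minute warning").
        return {name: message for name in students}

    print(class_summary(students))        # Counter({'green': 1, 'red': 1, 'yellow': 1})
    print(broadcast("2 minute warning"))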

Calvin Calendar

Audience – families, especially those with children and working parents
Goal – preserve handwriting element of schedule planning and integrate with digital calendar software
Hardware – LiveScribe pen, Microsoft Surface

     

Calvin Calendar creates a seamless integration between LiveScribe pen interaction and Surface touch interaction. This integration allows users to remotely upload items onto a central calendar for the whole family, creating an experience of personal and intuitive planning.

On the back end, Calvin Calendar uses LiveScribe to upload notes to Evernote, where they are saved as images, and the Surface application regularly fetches these notes so they can be arranged on the calendar.

As families grow, it can be increasingly difficult to coordinate and keep track of everyone’s schedule. People keep both handwritten and digital calendars, but the two are not usually in sync. Calvin Calendar aims to seamlessly integrate the handwriting and creativity of paper planners with the digital convenience of tools like Google Calendar. Using a LiveScribe pen and a Microsoft Surface, families can keep a central calendar in the home that is dynamically updated. Family members can write each other notes with the LiveScribe pen, keep the information for themselves, and never need to manually sync it with their shared digital calendar.
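At a high level, the sync is a simple polling loop: new handwritten notes appear in Evernote as images, and the Surface application pulls them in for arranging. The sketch below shows that loop in Python; fetch_new_note_images and place_on_calendar are hypothetical stand-ins for the real LiveScribe/Evernote and Surface pieces.

    import time

    def fetch_new_note_images(since):
        # Placeholder: return handwritten notes (stored as images) created after `since`.
        return []   # e.g. [{"image": "soccer_practice.png", "author": "Calvin", "date": "2015-05-02"}]

    def place_on_calendar(note):
        # Placeholder: drop the note image onto the shared Surface calendar for manual arranging.
        print("Added", note["image"], "to", note["date"])

    def sync_loop(poll_seconds=60):
        last_sync = 0
        while True:
            for note in fetch_new_note_images(since=last_sync):
                place_on_calendar(note)
            last_sync = time.time()
            time.sleep(poll_seconds)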

Allergen Aware

Audience – families or groups of people with many dietary restrictions
Goal – Help prevent cross contamination in shared cooking spaces
Hardware – Microsoft Surface

    

The Surface application allows you to create profiles for people with allergies, attach fiducial tags to allergen-containing ingredients, and make groups of people so you can see all of their allergens at once. After choosing a group of people to cook for, you can place ingredients on the Surface, and the fiducial tags will turn the Surface red if an ingredient is unacceptable for that group. You can also check the Surface’s history to see whether something was recently prepared on it that could trigger a reaction.

Friends and family frequently share meals together, but allergens can complicate cooking them. When cooking for people with allergies, it is important to make sure the cooking surface is not contaminated. With Allergen Aware, a Microsoft Surface provides the means to avoid triggering reactions. Allergen Aware allows you to create profiles for friends and family indicating their dietary restrictions. Each person can be added to a group, so you can quickly select the group and import all of its restrictions onto the Surface. When cooking, foods with fiducial tags are set on the Surface. If a food contains a dangerous allergen, the Surface turns red to indicate the problem. The Surface also retains a history of the most recently used allergens, so the next user knows to be cautious.
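The core check is a small set intersection: the active group’s allergens against the tags on whatever lands on the Surface. Here is a minimal Python sketch, assuming fiducial tags map to ingredient names; the profile data is illustrative.

    profiles = {
        "Alice": {"peanuts", "shellfish"},
        "Ben":   {"gluten"},
    }

    def group_allergens(group, profiles):
        # Union of everyone's restrictions for the selected group.
        return set().union(*(profiles[name] for name in group))

    def check_ingredient(ingredient_tags, active_allergens, history):
        # Turn the surface "red" if a tagged ingredient hits an allergen; log it to the history.
        history.append(ingredient_tags)
        return "red" if ingredient_tags & active_allergens else "ok"

    active = group_allergens(["Alice", "Ben"], profiles)
    history = []
    print(check_ingredient({"peanuts", "sugar"}, active, history))   # red
    print(check_ingredient({"rice"}, active, history))               # ok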

NutriGlass

Audience – initially children, but extended to adults
Goal – Make kids (and adults) excited to eat healthily.
Sensors – photo interrupter
Hardware – Kinoma, Google Glass

NutriGlass is a two-part project, starting with an interface developed for the UIST student innovation contest: BenTUI Box.

BenTUI Box used a Kinoma to create an interactive lunchbox that helps kids develop good eating habits. Photo interrupters sense when food sits between them; once the food is eaten, the compartment lights up, making finishing lunch fun.

After UIST, the project moved to Google Glass, and the audience shifted to adults. Using image recognition, Glass overlays information onto your food describing its nutritional benefits. Your current progress toward your daily nutrition requirements is also displayed to help you make healthy choices.
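A small sketch of the daily-progress logic is below. The nutrient targets and the recognized-food output are placeholders; the actual system relies on image recognition on Google Glass.

    DAILY_TARGETS = {"protein_g": 50, "fiber_g": 30, "vitamin_c_mg": 90}

    def update_progress(progress, recognized_food):
        # Add a recognized food's nutrients to today's running totals.
        for nutrient, amount in recognized_food.items():
            progress[nutrient] = progress.get(nutrient, 0) + amount
        return progress

    def overlay_text(progress):
        # Build the short summary that would be overlaid next to the food.
        return ", ".join(f"{n}: {progress.get(n, 0)}/{t}" for n, t in DAILY_TARGETS.items())

    progress = update_progress({}, {"protein_g": 20, "vitamin_c_mg": 30})
    print(overlay_text(progress))   # protein_g: 20/50, fiber_g: 0/30, vitamin_c_mg: 30/90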

The first stage of this project, BenTUI box, was presented at UIST 2014!

TreadLight Timer

Audience – people who like to cook
Goal – simplify timers by creating an easy-to-see, easy-to-use timer interface
Sensors – eventually plan to include a force sensor or button to detect a kick-start
Hardware – Kinoma, plenty of LEDs

     

When cooking, different devices spread around the kitchen each need their own timer, and all of them must be kept track of. Current timers also require you to stand in front of them to check how much time is left. TreadLight Timer addresses these issues by indicating the progress of cooking food with ambient color and light. The lights change from blue to purple to red as the timer counts down, indicating how close the food is to being done.
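A simple way to produce that blue-to-purple-to-red transition is to interpolate the LED color against the fraction of time elapsed. The Python sketch below shows the idea; the exact color stops are our assumption, not the team’s implementation.

    def timer_color(remaining_seconds, total_seconds):
        # Interpolate from blue (just started) through purple to red (almost done).
        fraction_done = 1.0 - remaining_seconds / total_seconds
        red = int(255 * fraction_done)
        blue = int(255 * (1.0 - fraction_done))
        return (red, 0, blue)            # RGB for the LED strip

    print(timer_color(600, 600))   # (0, 0, 255)    blue: nowhere near done
    print(timer_color(300, 600))   # (127, 0, 127)  purple: halfway
    print(timer_color(0, 600))     # (255, 0, 0)    red: done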

This project was presented at UIST 2014!

UIST Student Competition

We brought three different projects to the UIST student competition this year. Every year, the competition centers on a piece of new technology that students are challenged to use in creative and interesting ways. This year, the technology was the Kinoma Create, an Arduino-like board with sensor inputs and outputs as well as a built-in touch screen and web capabilities. Programs for the Kinoma are written in JavaScript, lowering the barrier to entry for people who have never worked with hardware pins before to build interesting hardware systems.

The Kinoma fits in well with an HCI concept called the Internet of Things. If you think of the regular Internet as a bunch of connected pages of digital information, then the Internet of Things is the same, but with physical objects. It is about physical objects knowing information about other physical objects, controlling other objects, knowing information about the environment or from the web, communicating that information, and acting based on that information. Because of the sensor and web capabilities of the Kinoma, it lends itself well to adding objects to the Internet of Things. The student competition from UIST required us to use the Kinomas to invent something for the kitchen or general household.

The competition inspired us to think about our vision for the smart kitchen of the future. If we could build a kitchen from scratch, what would it look like? Our vision is grounded in the Internet of Things: we imagine ways that appliances and objects in the kitchen can know information, and how that can help the inhabitant. The kitchen could know information about the outside world, such as the day’s weather or traffic reports, and could communicate that information to the inhabitant and help them adapt. The kitchen should also be aware of what is in it and what comes and goes, much like a vending machine keeps track of what’s inside it. Interactions with vending machines are much too stiff to model our home fridges after, but you can’t help but be envious of how much a vending machine knows about its contents: what’s in it, how much of each item, and additional information such as cost. The entire kitchen could work like that, with knowledge about the objects in it and how they are being used.

We developed three projects that are consistent with this vision of a smart kitchen: Weather Blender, TreadLight Timer, and BenTUI Box.

Weather Blender

Weather Blender, as its name suggests, is a blender that tells the weather. Based on the weather report for the day, Weather Blender produces a smoothie that reflects the forecast. Weather Blender consists of a blender and a container with four compartments that hold different types of fruit; in our configuration, we use strawberries, mango, banana, and blueberries. When users want a smoothie, they press a button on the Kinoma, which gets the weather from the web, generates a recipe, and uses motors to control flaps in each compartment, allowing the correct proportion of each type of fruit into the blender. If the weather is rainy, the smoothie is blue; for sunny weather, we chose orange; clouds are banana; and weather warnings are strawberry. For example, on a day that is hot and sunny, the smoothie will be mostly orange, with a bit of red.
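The recipe step boils down to mapping forecast conditions to fruit proportions and then to how long each flap stays open. Here is an illustrative Python sketch of that mapping; the weights, function names, and gram amounts are made up for this post (the real device fetches the weather and drives the motors from the Kinoma in JavaScript).

    FRUIT_FOR = {"rain": "blueberries", "sun": "mango", "clouds": "banana", "warning": "strawberries"}

    def recipe_from_forecast(forecast):
        # Turn a forecast (condition -> weight) into fruit proportions that sum to 1.
        total = sum(forecast.values())
        return {FRUIT_FOR[cond]: weight / total for cond, weight in forecast.items()}

    def open_flaps(recipe, smoothie_grams=400):
        # Convert proportions into grams per compartment, i.e., how long each flap stays open.
        return {fruit: round(share * smoothie_grams) for fruit, share in recipe.items()}

    # A hot, sunny day with a heat warning: mostly orange with a bit of red.
    print(open_flaps(recipe_from_forecast({"sun": 0.8, "warning": 0.2})))
    # {'mango': 320, 'strawberries': 80}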

Weather Blender is a fun way to explore the possibilities of smart appliances in the kitchen. They can be useful from an information-display standpoint: they can use ambient means to communicate information about the environment and the world beyond. Rather than having to read a weather report, wouldn’t it be easier to just know the weather from the smoothie you are drinking anyway? Smart appliances can also help people adapt to the state of the world outside. For example, perhaps the Weather Blender sneaks some vitamin D into the smoothie on a rainy day, since it knows the human will be lacking sunlight. See our concept video here.

BenTUI Box


BenTUI Box is about inspiring kids to eat healthily by giving them an interactive lunchbox. When they eat all the food out of a given compartment, that compartment lights up. It gives children an incentive to eat all their food rather than leaving it in their lunchbox and bringing it back home.
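Each compartment’s photo interrupter reports whether food still blocks its beam, and a cleared compartment lights up. The Python sketch below captures that logic; the pin numbers, compartment names, and read function are placeholders for the Kinoma-side code.

    COMPARTMENT_PINS = {"sandwich": 2, "fruit": 3, "veggies": 4, "treat": 5}

    def read_interrupter(pin):
        # Placeholder for reading the photo interrupter: True means the beam is blocked (food present).
        return False

    def update_lights():
        # Light any compartment whose food has been eaten (beam no longer blocked).
        return {name: not read_interrupter(pin) for name, pin in COMPARTMENT_PINS.items()}

    print(update_lights())   # e.g. {'sandwich': True, 'fruit': True, ...} once lunch is finished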

The BenTUI Box demonstrates the Internet of Things because the system gives the lunchbox knowledge of what’s inside it, and the lunchbox reacts as its contents change. The system allows people to interact with a lunchbox in novel ways and demonstrates how a technological intervention can encourage kids to eat healthily. It is a step toward our vision of a kitchen that knows what comes and goes and can help the inhabitant maintain a healthy lifestyle. See our concept video here and a video of our implementation here.

TreadLight Timer

The TreadLight Timer is a system that aims to provide more ambient and easily accessible information to someone cooking in a kitchen. The timer leverages the fact that there are really only three states the cook is interested in: the food is nowhere near done, the food is almost done, and the food is done. The system uses a string of colored lights around the cooking apparatus to communicate the state of the timer ambiently, reducing the need for the cook to walk around the kitchen to locate a centralized timer and spend time reading and interpreting the numbers on a tiny display.

The TreadLight Timer illustrates the use of ambient information to reduce the number of things a user has to think about during a cognitively demanding task. It also centralizes the information closer to where the task is taking place, using previously unused space in the kitchen as a canvas for information. This presentation of information allows a clearer mapping between processes and tasks if the user has multiple things going on. For example, when cooking multiple dishes, the user can have a TreadLight Timer on for each burner on the stove. In our vision of the smart kitchen of the future, the stove would have such a timer built into each of the burners as well as the oven and other appliances that a kitchen-user might want to time.

In conclusion, these projects explore ways in which kitchens can be smarter in terms of the information they have access to, ways they act on that information, and how they communicate to the inhabitant. We are excited to imagine further possibilities for a smart kitchen.

Privacy and HCI for Personal Genomics

While considering an HCI perspective on personal genomics, we partnered with the Personal Genome Project. The nature of the Personal Genome Project (PGP) as an open, online database of personal genomic information raises important questions about participants’ privacy and willingness to share their information publicly. The PGPHCI team, led by Orit Shaer in collaboration with Dr. Oded Nov (NYU) and Dr. Darakhshan Mir (Wellesley College), investigates privacy and sharing in the context of personal genomics.


While conducting an intensive literature review of the field, student researcher Claire Cerda led the team in a discussion of how people have unique attitudes and behaviors when it comes to maintaining their privacy and security. Such behaviors include clearing cookies from a browser before logging off the computer, or covering the keypad when entering a debit card PIN. Some people are very concerned about having their credit card information stolen when they pay for products online, while others are more trusting of the system. These attitudes and behaviors may vary based on, for example, a person’s technical skill or age.

To better understand users’ attitudes and behaviors regarding privacy, the team implemented a privacy index developed by psychology professor Tom Buchanan of the University of Westminster. Buchanan’s privacy index builds upon the work of the well-respected scholar Alan Westin, who was one of the first to study privacy and to develop a way of measuring people’s feelings and behavioral patterns. Buchanan added questions about the internet and online personal security, technology that was not widely available in Westin’s time. The Buchanan index is made up of three separate scales: a privacy concern scale, a technical protection scale, and a general caution scale. The privacy concern scale measures a person’s attitudes, while the technical protection and general caution scales measure a person’s behaviors around privacy and security. The PGPHCI team piloted the index with six individuals. Check out the results of our pilot study below: the graph presents a score between 0 and 4 on each of the three privacy dimensions for our six pilot participants.
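For readers curious how the three scores are produced, here is a simplified Python sketch of the scoring we used for the chart. The items below are paraphrased placeholders rather than the validated questionnaire, and mapping the 1-5 Likert average onto the 0-4 range is our own simplification.

    SCALES = {
        "privacy_concern":      ["worried about online identity theft", "worried about who can access my data"],
        "technical_protection": ["I clear my browser cookies", "I use software to block trackers"],
        "general_caution":      ["I shield the keypad when entering a PIN", "I read privacy policies"],
    }

    def score_scale(responses, items):
        # Average the 1-5 Likert responses for a scale and shift to the 0-4 range used in the chart.
        values = [responses[item] for item in items]
        return sum(values) / len(values) - 1

    # One hypothetical participant who answered 4 to everything except one caution item.
    participant = {item: 4 for items in SCALES.values() for item in items}
    participant["I read privacy policies"] = 2

    for scale, items in SCALES.items():
        print(scale, round(score_scale(participant, items), 2))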


From the pilot test, the team was able to assess the effectiveness of the index and also understand its capabilities in an online survey.

The team also explored the risks and benefits of sharing personal genomic information with different circles of people, for example, with family, friends, scientists, or on social media. The team will test whether making users aware of the risks and benefits of sharing positively influences the amount of data they are willing to share. The study will continue in the coming months, and the PGPHCI team looks forward to presenting its work at future conferences. Stay tuned for more information…