Wednesday, February 12, 2014

The Human Body as a Computing Interface

< Interface /ˈint-ər-ˌfās/: The point of interconnection between two entities.>


Interfaces take their place in our lives in the form of the various devices, analog or digital, with which we normally establish some kind of interaction. This means that interfaces are "tool" extensions of our bodies, such as computers, cell phones, elevators, etc. The concept of an interface is applicable to any situation or process where an exchange or transfer of information takes place. One way of thinking about the interface is as "the area or place of interaction between two different systems, not necessarily technological ones". Traditional computer input devices leverage the dexterity of our limbs through physical transducers such as keys, buttons, and touch screens. While these controls make great use of our abilities in common scenarios, many everyday situations demand the use of our body for purposes other than manipulating an input device (Saponas, 2010, p. 8). Humans are very familiar with their own bodies. By nature, humans gesture with their body parts to express themselves or communicate ideas. Therefore, body parts naturally lend themselves to various interface metaphors that could be used as interaction tools for computerized systems.

For example, imagine rushing to a class on a very cold morning while wearing gloves, when all of a sudden you have to place a phone call to a classmate to remind him to print out a homework assignment; dialing even a simple call on a mobile phone's interface in this situation can be difficult or even impossible. Similarly, when someone is jogging and listening to music on a music player, their arms are typically swinging freely and their eyes are focused on what is in front of them, making it awkward to reach for the controls to skip songs or change the volume. In these situations, people need alternative input techniques for interacting with their computing devices (Saponas, 2009, p. 4).

Appropriating the human body as an input device is appealing not only because we have roughly two square meters of external surface area, but also because much of it is easily accessible by our hands (e.g., arms, upper legs, torso). Furthermore, our sense of how our body is configured in three-dimensional space allows us to accurately interact with our bodies in an eyes-free manner (Harrison, 2010, p. 11).

In terms of interface suitability and human needs, researchers have been looking for ways to provide users with greater mobility and to enable more and more interaction. However, even though the interaction afforded by these new interfaces is greater, users do not always have a clear mental model of how they operate, since in some cases they cease to be intuitive and demand constant relearning. Still, several research areas offer possibilities for incorporating the full body into the interface process, such as speech recognition, gesture detection, computer vision, micro gestures, skin surfaces, body electricity, brain computing, and muscle gestures, among others.

Current research that explores different ways to use the features of one's own body for interacting with computers, presented by the Imaging Research Center of South Korea, has divided this area into four types of human-body-based interfaces:

  1. Body Inspired Metaphor (BIM): Uses various parts of the body as metaphors for interaction.
  2. Body As An Interaction Surface (BAIS): Uses parts of the body as points of interaction. In this model, researchers are investigating which parts of the human body are most suitable to be used as an interface for a given task. They are trying to find the best spot, taking into account cognitive and ergonomic factors. So far, they have found that one of the most plausible locations seems to be the forearm of the non-dominant hand, for its mobility, accessibility to the dominant hand, and visibility, although other parts of the body may be considered, such as the lap (Changseok, 2009, p. 264).
  3. Object-Mapping (OM): Transports the user into the location of the object by having the user "become" it, so that it can be manipulated from a first-person viewpoint, both physically and mentally.
  4. Mixed Mode (MM): A mix of BIM and BAIS.

To draw an example of how the hand, the body, and now the skin are being used in the digital interaction process, we could refer to the film starring Tom Cruise (Minority Report) in which the computer interface is manipulated by hand gestures, or to how a body movement or gesture is recognized by the recent "Project Natal" (commercially known as "Kinect") from Microsoft, developed as an interface for the Xbox 360. No doubt we are in the era of the touchpad, but until recently we were far from thinking that we could operate an interface through gestures alone, as with the prototype Gesture Cube, a "cube" that interprets the movement of the hands so that we can act on different devices, from a short distance, without having to touch a hand tool. The Gesture Cube has a series of sensors that instantly detect the position of the hand and transmit the coordinates to a CPU installed in its interior, so that previously programmed motions allow the execution of a specific task such as opening a program, calling someone, or listening to music.

Unlike what we may think, GestIC, as its creators have called this cube's sensing technology, does not read the position of the hands through an optical process; instead it is equipped with an array of sensors, grouped in fours, that measure the variation of an electric field disturbed by the hand, which changes according to the variation in distance. The interesting addition to this is that the interface allows you to associate a different device with each of the faces of the cube.
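
As a rough illustration of the idea, the sketch below (in Python) shows how readings from a group of four field-sensing electrodes might be combined into a coarse hand-position estimate and then mapped to a previously programmed action. The electrode labels, thresholds, and actions are all hypothetical; this is not the actual GestIC interface, only a minimal sketch of the sensing-to-action pipeline described above.

    # Hypothetical sketch, not the real GestIC interface: combine four electrode
    # readings into a coarse hand position and dispatch a programmed action.
    from typing import Callable, Dict, Tuple

    def estimate_position(readings: Dict[str, float]) -> Tuple[float, float]:
        """Estimate a normalized (x, y) hand position from four electrode readings.

        `readings` maps electrode names ('north', 'south', 'east', 'west') to
        signal strengths that grow as the hand approaches that electrode.
        """
        total = sum(readings.values()) or 1.0
        x = (readings["east"] - readings["west"]) / total
        y = (readings["north"] - readings["south"]) / total
        return x, y

    def classify_motion(x: float, y: float) -> str:
        """Very coarse gesture classification from a single position estimate."""
        if x > 0.3:
            return "swipe_right"
        if x < -0.3:
            return "swipe_left"
        return "hover_center"

    # Previously programmed motions mapped to example device actions.
    ACTIONS: Dict[str, Callable[[], None]] = {
        "swipe_right": lambda: print("next song"),
        "swipe_left": lambda: print("previous song"),
        "hover_center": lambda: print("open program"),
    }

    if __name__ == "__main__":
        sample = {"north": 0.2, "south": 0.2, "east": 0.7, "west": 0.1}
        x, y = estimate_position(sample)
        ACTIONS[classify_motion(x, y)]()  # prints "next song"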

Another area is muscle sensing. While muscle-sensing techniques promise to be a suitable mechanism for a body interface, previous work suffers from several key limitations. In many existing systems, users are tethered to high-end equipment employing gel-based sensors affixed to users' arms with adhesives (Saponas, 2009, p. 19). Other efforts have developed experiments based on the fact that motor neurons stimulate muscle fibers in the skeletal muscles, causing movement or force. This process generates electrical activity that can be measured as a voltage differential changing over time. While the most accurate method of measuring such electrical activity requires inserting fine needles into the muscle, a noisier signal can be obtained using electrodes on the surface of the skin. So far these experiments have yielded little success (Mastnik, 2008, p. 64).
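
The signal described above is essentially surface electromyography (EMG). A common first processing step, regardless of the sensing hardware, is to rectify the raw voltage and smooth it into an amplitude envelope that can be thresholded to detect muscle activation. The following is a minimal sketch of that step in Python using NumPy; the window size and threshold are illustrative values, not those used in the cited work.

    import numpy as np

    def emg_envelope(raw: np.ndarray, window: int = 50) -> np.ndarray:
        """Rectify a raw surface-EMG voltage trace and smooth it with a moving average."""
        rectified = np.abs(raw - raw.mean())   # remove the DC offset, then rectify
        kernel = np.ones(window) / window      # simple moving-average kernel
        return np.convolve(rectified, kernel, mode="same")

    def detect_activation(raw: np.ndarray, threshold: float = 0.1) -> np.ndarray:
        """Return a boolean mask marking samples where the muscle appears active."""
        return emg_envelope(raw) > threshold

    if __name__ == "__main__":
        # Synthetic trace: quiet baseline noise followed by a burst of activity.
        rng = np.random.default_rng(0)
        signal = np.concatenate([rng.normal(0.0, 0.02, 500), rng.normal(0.0, 0.5, 500)])
        active = detect_activation(signal)
        print(f"active samples: {active.sum()} of {active.size}")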

However, a recent project called Skinput, demonstrated by Microsoft Research, represents an enormous advance in this area. Skinput is an input technology that uses bio-acoustic sensing to localize finger taps on the skin. When augmented with a pico-projector, the device can provide a direct-manipulation, graphical user interface on the body (for example, on a person's forearm). The technology was developed by Chris Harrison, Desney Tan, and Dan Morris at Microsoft Research's Computational User Experiences Group.
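
At its core, localizing a tap is a classification problem: the armband's sensors produce a feature vector for each tap, and a trained model maps that vector to a skin location. The Python sketch below shows the general shape of such a pipeline with a simple nearest-centroid classifier; the feature values and location labels are made up, and the real Skinput system uses a much richer feature set and a trained support vector machine.

    import numpy as np

    # Illustrative sketch only: each tap yields a feature vector (e.g., peak
    # amplitude per armband sensor). Training averages labelled examples into
    # per-location centroids; classification picks the nearest centroid.
    class TapLocalizer:
        def __init__(self) -> None:
            self.centroids = {}

        def train(self, examples) -> None:
            """examples maps a location label (e.g., 'wrist') to a list of feature vectors."""
            self.centroids = {
                label: np.mean(vectors, axis=0) for label, vectors in examples.items()
            }

        def classify(self, features: np.ndarray) -> str:
            """Return the label whose centroid is closest to the observed features."""
            return min(
                self.centroids,
                key=lambda label: np.linalg.norm(features - self.centroids[label]),
            )

    if __name__ == "__main__":
        localizer = TapLocalizer()
        localizer.train({
            "wrist":   [np.array([0.9, 0.4, 0.1]), np.array([0.8, 0.5, 0.2])],
            "forearm": [np.array([0.2, 0.6, 0.9]), np.array([0.3, 0.5, 0.8])],
        })
        print(localizer.classify(np.array([0.85, 0.45, 0.15])))  # -> wrist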

Wearable computing and virtual reality would be ideal application areas for body interface technologies. For instance, one of the defining goals of a virtual reality system is to create the feeling of being in the environment, and one cause of breaking presence is the existence of intrusive wired sensing devices. While body-based interfaces may not increase realism, they may still find good uses in imaginary virtual worlds for increasing self-awareness through self-interaction (Changseok, 2009, p. 271).

Another study, conducted by the Nokia Research Center, suggested the concept of "virtual pockets" for opening and saving documents in a wearable computing setting. Virtual pockets are physical pockets on one's clothing, augmented with pressure sensors and woven with a special material for tracking the finger position on the clothing surface. A user can move files between different pockets, a process analogous to "dragging and dropping" in the familiar desktop environment. Using finger pressure, files can be opened or saved. This can be viewed as mapping the desktop space onto the front surface of the upper body (Changseok, 2009, p. 269).
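
Conceptually, this mapping is simple enough to express as a small data structure: each pocket holds a set of files, and pressure events translate into the familiar drag, drop, and open operations. The Python sketch below is only an illustration of that mapping under assumed pocket names and operations; it is not Nokia's implementation.

    from typing import Dict, List

    class VirtualPockets:
        """Illustrative model: physical pockets treated as drop targets for files."""

        def __init__(self, pocket_names: List[str]) -> None:
            self.pockets: Dict[str, List[str]] = {name: [] for name in pocket_names}

        def drop(self, pocket: str, filename: str) -> None:
            """Save a file into a pocket (analogous to dropping it onto a folder)."""
            self.pockets[pocket].append(filename)

        def move(self, src: str, dst: str, filename: str) -> None:
            """Drag a file from one pocket to another."""
            self.pockets[src].remove(filename)
            self.pockets[dst].append(filename)

        def open(self, pocket: str, filename: str) -> str:
            """A firm press on a pocket opens the selected file."""
            if filename not in self.pockets[pocket]:
                raise FileNotFoundError(filename)
            return f"opening {filename} from {pocket}"

    if __name__ == "__main__":
        pockets = VirtualPockets(["chest_left", "chest_right", "hip_left"])
        pockets.drop("chest_left", "notes.txt")
        pockets.move("chest_left", "hip_left", "notes.txt")
        print(pockets.open("hip_left", "notes.txt"))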

When exploring the body as an interaction device, the challenges are how to utilize the body's potential in the interaction context, and what influence and significance the use of the body has on interactive experiences. Characteristics to be considered when utilizing the body include: a small or large degree of bodily involvement in the interaction; less or more accentuation of the significance of the body in the user experience; and finally, a small or large degree of user influence (Karen, 2008, p. 2). According to experts in the subject, body interfaces can contribute to reducing task completion time and errors because they are natural and less confusing to users. However, they also point out that excessive moving of body parts can cause muscle fatigue. Therefore, not all tasks are suitable for association with body parts.

Another trend in the quest to integrate the human body into the interface process states that, in order to accomplish such goals, other areas besides HCI, such as electronics, bioinformatics, and materials science, must evolve in their own subject matter. A research work called Communications Through Virtual Technologies, sponsored by the Association of European Telecoms, concluded that unobtrusive hardware miniaturization is assumed to permit the enabling developments in micro and optical electronics that are required for the use of the body as a computer interaction device. Molecular and atomic manipulation techniques will also be increasingly required to allow the creation of advanced materials, smart materials, and nanotechnologies (Fabrizzio, 2009, p. 33).

In addition to these conclusions, the same study adds that significant advances are also required in the areas of:

a) Self-generating power and micro-power usage in devices.

b) Active devices, such as sensors and actuators, integrated with interface systems so that they respond to the user's senses, posture, and environment, and can change their characteristics through standalone intelligence or networked interaction.

c) Nano devices with lower power consumption, higher operating speeds, and ubiquity.

In the current stage of HCI research, a slight finger tap, an acoustic vibration in the air, a movement of the eyes and tongue, or a pulse in the muscle can become a method for information transmission, and people are not only interacting with computers, but also with every object around them (Hui, 2010, p. 1).

Citations and References


Changseok, C. (2004). Body Based Interfaces. Proceedings of the Fourth IEEE International Conference on Multimodal Interfaces (ICMI '03).
Fabrizzio, D. (2009). Communications Through Virtual Technologies. In Galimberti and G. Riva (Eds.), La comunicazione virtuale. Guerini e Associati, Milano.
Harrison, C. (2010). Skinput: Appropriating the Body as an Input Surface. In Proceedings ACM CHI 2010.
Hui, M. (2010). Human Computer Interaction, A Portal to the Future. Microsoft Research.
Karen, J. (2008). Interaction Design for Public Spaces. ACM MM '08, October 26–31, 2008, Vancouver, British Columbia, Canada.
Mastnik, S. (2008). EMG-based Hand Gesture Recognition for Realtime Biosignal Interfacing. In Proceedings ACM IUI '08, 30–39.
Musilek, P. (2007). A Keystroke and Pointer Control Input Interface for Wearable Computers. In Proceedings IEEE PERCOM '07.
Saponas, T. (2009). Enabling Always-available Input with Muscle-Computer Interfaces. In Proceedings ACM UIST '09.
Saponas, T. (2010). Making Muscle-Computer Interfaces More Practical. In Proceedings ACM CHI 2010.
Saponas, T. (2009). Demonstrating the feasibility of using forearm electromyography for muscle-computer interfaces. In Proceedings ACM CHI ’09.
