This week’s science column…

SCIENCE

By Edward Willett

Getting to know all about you

When Oscar Hammerstein wrote “Getting to know you, getting to know all about you…” for “The King and I,” he wasn’t (I think I can safely say) attempting to predict the future of human-computer interaction. But if researchers are successful, your computer will indeed soon be getting to know all about you.

Scientists at Sandia National Laboratories’ Advanced Concepts Group in Albuquerque, New Mexico, led by project manager Peter Merkle, are attempting to turn a personal computer into what they call an “anthroscope,” meaning a “human-scope.” (Pedantically, that should be “anthroposcope,” but never mind.)

They’ve combined tiny sensors that measure motion, muscle activity, perspiration, pulse, blood oxygen levels and respiration with face-recognition software and a transmitter to create a device called a Personal Assistance Link, or PAL. PAL sends its data to the computer, which analyzes it to determine the user’s emotional and mental state. This information is then passed on both to the user and to the computers of other people within the working group. The goal is to enhance efficiency by ensuring that each person in a group is working on the task they are currently in the best condition to perform.
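
Sandia hasn’t published the details of its analysis, but in rough outline the data flow might look something like this little Python sketch. (The field names, thresholds and teammates are all invented for illustration; they’re not Sandia’s actual software.)

    from dataclasses import dataclass

    # Hypothetical PAL-style sensor snapshot; the fields mirror the
    # measurements described above. Thresholds below are made up.
    @dataclass
    class SensorReading:
        motion: float            # gross body movement, arbitrary units
        muscle_activity: float
        perspiration: float
        pulse_bpm: float
        blood_oxygen_pct: float
        respiration_rate: float

    def estimate_state(r: SensorReading) -> str:
        """Crude stand-in for the real analysis: bucket the user's arousal."""
        if r.pulse_bpm > 110 and r.perspiration > 0.7:
            return "high arousal"
        if r.pulse_bpm < 70 and r.motion < 0.2:
            return "low arousal"
        return "normal"

    def broadcast(user, state, teammates):
        """Pass the estimate on to the computers of the rest of the group."""
        for peer in teammates:
            print(f"[{peer}'s screen] {user} is currently: {state}")

    reading = SensorReading(motion=0.9, muscle_activity=0.8, perspiration=0.85,
                            pulse_bpm=122, blood_oxygen_pct=97.0,
                            respiration_rate=22)
    broadcast("Alice", estimate_state(reading), ["Bob", "Carol"])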

Teams of four to six were assembled to play “a Tom Clancy-based computer game” so the researchers could develop a baseline understanding of human response under stress. Once that was established, the researchers were able to test the effect of this kind of monitoring on team performance.

Preliminary results indicate that making personal sensor readings available to the members of a group results in lower arousal states, improved teamwork and better leadership in longer collaborations. (A “lower arousal state” is one in which less energy is being put into remaining alert, leaving more energy for dealing competently with an ongoing task or threat.)

How does it work? As Merkle describes it, if someone is very excited during the game and that excitement correlates with poor performance, the computer might pop up a message telling him to slow down. Alternatively, the computer might suggest to the team leader that the individual be taken out of the loop and his responsibilities handed to someone else whose sensors indicate she’s in great shape to carry them out.
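
Reduced to its essentials, the decision rule Merkle describes might boil down to something like the following sketch. Again, this is my own loose interpretation, and every score and cutoff in it is invented:

    # Hypothetical feedback rule: when a player's arousal is high and
    # performance is poor, either warn the player or suggest handing the
    # task to a calmer teammate.
    def advise(player, arousal, performance, team_arousal):
        """team_arousal maps each teammate's name to an arousal score."""
        if arousal > 0.8 and performance < 0.5:
            calmest = min(team_arousal, key=team_arousal.get)
            if team_arousal[calmest] < 0.4:
                return f"Suggest to leader: hand {player}'s task to {calmest}."
            return f"Message to {player}: slow down and refocus."
        return "No intervention needed."

    print(advise("Dave", arousal=0.9, performance=0.3,
                 team_arousal={"Erin": 0.35, "Frank": 0.6}))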

In planned future projects involving Sandia, the University of New Mexico and Caltech, detailed EEG recordings will be made of four people simultaneously as they work on a task, and those readings will be correlated with other physiological readings and various social phenomena in an effort to further improve both group and individual performance.

Don’t like the idea of imparting that much information to either your computer or your colleagues? Then you might be more comfortable with the more limited research along these lines being carried out at the Human Media Lab at Queen’s University in Kingston.

Researchers there want computers to pay attention to our needs, by sensing when we are busy and when we are available for interruption with an announcement of e-mail or a reminder of an upcoming appointment.

To create these “attentive” devices, the Queen’s University researchers have focused on the function of eye contact in human conversation. They’ve developed eye contact sensors that allow computers (and other electronic devices) to determine whether a user is present and if that user is looking at the device, eye contact sensing glasses that recognize when people look at each other, and (somewhat eerily) “eye proxy,” a pair of cartoonish robotic eyes with embedded eye contact sensors that allow a device to look back at the user, thus visually indicating that it’s ready to communicate.

Among the applications they’ve developed are an attentive videoconferencing system that communicates eye contact through video images, optimizing bandwidth on the basis of the joint attention of users; attentive cell phones that, in conjunction with eye contact sensing glasses, know when users are in face-to-face conversations and automatically switch themselves from ringing to vibrating alerts; attentive speaker phones that allow users to initiate calls by looking at robotic eyes representing the remote person; attentive televisions that automatically pause when nobody is watching them; and attentive home appliances that allow people to use their eyes as pointing devices and their mouths to issue commands (for example, you’d look at a lamp and say “Turn on” to turn it on; if you weren’t looking at it, it would ignore that phrase if it came up in conversation).
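
To make that last idea concrete, here’s a toy sketch of how a gaze-gated voice command might behave. The lamp class and its sensor flag are purely hypothetical, not the Queen’s researchers’ actual code:

    # A spoken phrase only counts as a command when the appliance's eye
    # contact sensor reports that the user is looking at it.
    class AttentiveLamp:
        def __init__(self):
            self.on = False
            self.user_is_looking = False   # would be fed by an eye contact sensor

        def hear(self, phrase):
            if not self.user_is_looking:
                return                     # "Turn on" in conversation is ignored
            if phrase.lower() == "turn on":
                self.on = True
            elif phrase.lower() == "turn off":
                self.on = False

    lamp = AttentiveLamp()
    lamp.hear("Turn on")           # ignored: nobody is looking at the lamp
    lamp.user_is_looking = True
    lamp.hear("Turn on")           # now the lamp switches on
    print(lamp.on)                 # True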

I spend a lot of time staring at my computer and wondering about its internal state (“Why is it doing that?”). I think it’s about time my computer started staring back at me and wondering about mine.

It only seems fair.
