2014
04.10

Robots that think and act like us have long held a prominent position in science fiction, going back to its humble beginnings, including a stunning “first view” in Fritz Lang’s 1927 film classic Metropolis. Much has been accomplished since then to make the dream a reality, including robots with empathic skills that function as companions for elderly and ill people. The dream has been an autonomously functioning device with near-human capabilities that can stand in for or augment human activity. Humans and robots have worked together on factory production lines for many years.

There is a different line of investigation, however, heating up in terms of patent grants: one that seeks to make our smart devices interact with us the way humans interact with each other, using non-verbal communication such as gestures. Here the clear intent is not to transform a smart device into a “traditional” robot as defined above. Smart devices, used as tools that enable us to do a variety of tasks, remain in the form and use case for which they were originally made; a smartphone is still a phone with applications. This human–machine interaction research is intended to let a device work with a human who may not be able to hold it, tap its screen, or be within range for verbal interaction. Seeing a specific gesture from a distance can activate the device, and other gestures can launch an application.

This is an area of formal research. Carnegie Mellon’s Human-Computer Interaction Institute, for example, is one of a number of prominent centers working to bring our devices and us into better functional alignment. Perusing their current research initiatives makes for interesting reading.

With this in mind, an intriguing grant to Amazon Technologies, Inc., this week is Patent 8,693,726 (“User Identification by Gesture Recognition”). Reading through the Background, it becomes clear that the objective is a less resource-intensive means of identifying a user to a device while maintaining a high, password-like level of security. One resource-intensive form of user identification described in the patent is facial recognition. Instead, the patent proposes using specific gestures captured in the device’s memory: gestures such as tracing a letter in the air, or waving your hand in a certain direction, are compared with the stored gesture to validate the user’s access to the device. The patent covers motion in three dimensions as well as time (the fourth dimension), allowing a sequence of gestures to heighten access security. The use-case implications are interesting, because a gesture recognized from a distance can enable further use of the device by voice, keeping the user hands-free for tasks where holding the device and interacting by touch may prove impractical or dangerous.
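To make the idea concrete: the patent describes comparing a captured gesture path against a stored template. A rough sketch of how such a comparison might work is below; the resampling step, the distance measure, and the acceptance threshold are all my own assumptions for illustration, not details taken from the patent.

```python
# Hypothetical sketch of gesture-template matching. The names, the
# resampling approach, and the threshold are illustrative assumptions,
# not the patent's actual algorithm.
import math

def resample(points, n=16):
    """Resample a gesture path (a list of (x, y, z) tuples) to n evenly
    spaced points, so paths recorded at different speeds can be compared."""
    if len(points) < 2:
        return list(points) * n
    out = []
    for i in range(n):
        t = i * (len(points) - 1) / (n - 1)
        lo = int(t)
        hi = min(lo + 1, len(points) - 1)
        frac = t - lo
        # Linear interpolation between neighboring recorded points.
        out.append(tuple(a + (b - a) * frac
                         for a, b in zip(points[lo], points[hi])))
    return out

def gesture_distance(candidate, template, n=16):
    """Mean Euclidean distance between two resampled gesture paths."""
    rc, rt = resample(candidate, n), resample(template, n)
    return sum(math.dist(p, q) for p, q in zip(rc, rt)) / n

def matches(candidate, template, threshold=0.25):
    """Accept the gesture only if it stays close to the stored template."""
    return gesture_distance(candidate, template) < threshold
```

A sequence of such checks, one per stored gesture, would give the multi-gesture security the patent mentions: access is granted only if every gesture in the sequence matches in order.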

Amazon was also granted Patent 8,694,350 (“Automatically Generating Task Recommendations for Human Task Performers”). The patent covers the required elements of an electronic marketplace for human-performed tasks, complete with a Task Recommendation Generator. The Background section of the document provides a fascinating step-through of the logic, derived from software task-generation techniques and ultimately applied to human tasks, with the recognition that certain tasks benefit from human capabilities such as contextual and cultural awareness. The objective: “By enabling large numbers of unaffiliated or otherwise unrelated task requesters and task performers to interact via the intermediary electronic marketplace in this manner, free-market mechanisms mediated by the Internet or other public computer networks can be used to programmatically harness the collective intelligence of an ensemble of unrelated human task performers.”
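In its simplest conceivable form, a task recommendation generator of this kind might rank open tasks by how well they match what a performer has completed before. The sketch below is purely illustrative; the tag-overlap scoring and every name in it are assumptions of mine, not the patent's method.

```python
# Hypothetical sketch of a task-recommendation step for a human-task
# marketplace. The tag-overlap scoring is an illustrative assumption,
# not the mechanism claimed in the patent.
def recommend_tasks(open_tasks, completed_tags, top_n=3):
    """Rank open tasks by overlap between each task's tags and the tags
    of tasks this performer has completed, returning the best task ids."""
    def score(task):
        return len(set(task["tags"]) & completed_tags)
    ranked = sorted(open_tasks, key=score, reverse=True)
    # Recommend only tasks with at least one matching tag.
    return [t["id"] for t in ranked if score(t) > 0][:top_n]
```

For example, a performer with a history of French translation work would be shown new translation tasks ahead of unrelated image-labeling tasks.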

One can ask: To what end?

No Comment.