Saturday, 16 November 2013

"inFORM" to "Wolverine" & more: a new interaction technique

I have always been a great fan of Hugh Jackman and one of his iconic characters, Logan, in the movie "The Wolverine". Even though it's an action, adventure and fantasy movie, it's a pretty decent piece of work. When I saw the 2013 release, some impressive techniques caught my attention; in fact, some amazing work. In one of the scenes, the villain of the movie (Hal Yamanouchi as Yashida) meets our hero inside a lab. While their conversation was going on, Yashida suddenly got up with the help of a special technique that supported his entire body. That impressive technique got my attention, so I did some research on it. It was inspiring and pretty impressive.

The scene from the movie The Wolverine, where Yashida gets up with the help of a special technique to have a conversation with the hero, Logan
Ok! Now let us go through the impressive technique I was talking about. The "inFORM" system is a state-of-the-art 2.5D shape display that enables dynamic affordances, constraints and actuation of passive objects. That's too scientific, so let me make it simple for you. It's a technique that creates an interface bringing computer-generated 3D objects and motions to life via real-world shapes and movements. With the help of a computer and actuators, we can even manipulate physical 3D objects virtually. Pretty amazing, right? :D :)

inFORM: new interaction techniques for shape change
"inFORM system" was born in an MIT lab under the team Sean Follmer, Daniel Leithinger, Alex Olwal, Akimitsu Hogge and Hiroshi Ishii. Past researches were primarily focused on rendering content and user interface elements through shape out-put, with less dynamically changing User Interfaces. But this research propose on utilizing shape displays in three different ways to mediate interaction: 1) To facilitate by providing dynamic physical affordances through shape change 2) To restrict by guiding users with dynamic physical constraints 3) To manipulate by actuating physical objects. They introduced Dynamic affordances and Constraints with their inFORM system. 

Dynamic affordances function both as perceived affordances and "real" affordances. They are rendered physically and provide mechanical support for interaction. The team combined graphical perceived affordances with dynamic affordances, sometimes switching between the two states. Some of the user interface controls the system supports with respect to dynamic affordances are:
  • Binary switches: Buttons - Buttons are formed by raising pins from the surrounding surface. Users activate a button by touching it or by pushing it into the surface, which is registered as binary input.
Button
  • 1D input: Touch tracks - Touch tracks consist of a line or curve of adjacent raised pins which the user can touch at different locations or slide over.
1D touch track
  • 2D input: Touch surfaces - Touch surfaces are created using multiple pins, which are aligned to form surfaces.
2D touch surface
  • Handles - They provide interaction in the Z dimension. These raised pins can be grabbed and then pulled up or pushed down along one dimension.
Handles
Interactions with dynamic affordances can change shape to reflect a changing program state. While dynamic affordances facilitate user interactions, dynamic constraints limit the possibilities, making some interactions difficult or impossible to perform. Dynamic constraints make the system more legible, but they also guide the user in performing certain interactions through physical contact with the constraints. They can also mediate interactions through tangible tokens or tools.

The system can sense how tokens interact with constraints. Some of the techniques include holding tokens and sensing their presence, restricting movement to one dimension, affecting movements, and interaction with dynamic physical constraints.
The depth of a well determines whether the user can grasp a token contained in it.
Slots with indentations and ramps can be used to guide the user's interactions or to provide haptic feedback
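To make the button idea above more concrete, here is a minimal sketch of how a binary-switch button might be detected on a pin display. All names, heights and thresholds are my own illustrative assumptions, not the actual inFORM software: the idea is simply that a button is a group of raised pins, and a press is registered when a measured pin height drops well below its commanded height.

```python
# Hypothetical sketch of binary-button input on a pin-based shape display.
# Names and thresholds are illustrative assumptions, not inFORM's real code.

RAISED_MM = 20.0           # height the button's pins are driven to
PRESS_THRESHOLD_MM = 5.0   # how far below target counts as a "press"

class PinButton:
    def __init__(self, pin_ids, target_mm=RAISED_MM):
        self.pin_ids = pin_ids      # pins that form the button's surface
        self.target_mm = target_mm
        self.pressed = False

    def update(self, measured_heights):
        """measured_heights: dict pin_id -> measured height in mm."""
        # If the user pushes any of the button's pins well below the
        # commanded height, treat that as the button being held down.
        displaced = any(
            self.target_mm - measured_heights[p] > PRESS_THRESHOLD_MM
            for p in self.pin_ids
        )
        event = displaced and not self.pressed   # rising edge = one click
        self.pressed = displaced
        return event

button = PinButton(pin_ids=[101, 102, 103])
e1 = button.update({101: 20.0, 102: 19.5, 103: 20.0})  # untouched
e2 = button.update({101: 12.0, 102: 19.5, 103: 20.0})  # pushed down
e3 = button.update({101: 12.0, 102: 19.5, 103: 20.0})  # still held
print(e1, e2, e3)  # False True False
```

The edge detection means holding the button down counts as a single press, much like a physical momentary switch.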
There are a lot of parameters taken into account while implementing this system. For example, consider the implementation of the shape display. The system uses 30 × 30 motorized white polystyrene pins in a 381 × 381 mm area. The pins have a 9.525 mm square footprint, with 3.175 mm inter-pin spacing, and can extend up to 100 mm from the surface. Push-pull rods link each pin with an actuator, enabling a dense pin arrangement independent of actuator size and giving the system a height of 1100 mm. Each linkage, a nylon rod inside a plastic housing, transmits bi-directional force from a motorized slide potentiometer through a bend.

Six slide potentiometers are mounted onto a custom-designed PCB, powered by an Atmel ATmega 2560 and dedicated motor drivers. The linear positions are read by the 10-bit A/D converters on the microcontroller, allowing for user input in addition to servoing each pin's position using PID control. 150 boards are arranged in 15 rows of vertical panels, each with 5 × 2 boards. The boards communicate with a PC over five buses bridged to USB. For each pin, the position and PID terms can be updated to provide haptic feedback and variable stiffness. The system adapts to different applications with minimal power consumption based on the requirements.
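The per-pin position servoing described above can be sketched as a textbook PID loop. This is a hedged illustration only: the gains, units and class names are my own assumptions, and the real firmware runs on the ATmega using 10-bit ADC readings rather than floating point.

```python
# Minimal PID position loop for one pin (illustrative gains, not inFORM's).
class PinPID:
    def __init__(self, kp=1.2, ki=0.1, kd=0.05, dt=0.01):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, target_mm, measured_mm):
        """Return a motor command (arbitrary units) driving the pin
        toward target_mm; measured_mm comes from the slide potentiometer."""
        error = target_mm - measured_mm
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Lowering the gains makes a pin "softer": it yields more under a user's
# finger, which is one plausible way to realize variable stiffness per pin.
pid = PinPID()
cmd = pid.step(target_mm=50.0, measured_mm=30.0)  # pin is below its target
print(cmd > 0)  # a positive command pushes the pin up
```

Reading the potentiometer inside the same loop is also what lets the pin double as an input device: a large, sustained error the motor cannot close is a good hint the user is pressing it.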
The inFORM system actuates and detects shape change with 900 mechanical actuators, while user interaction and objects are tracked with an overhead depth camera. A projector provides visual feedback. 
The system uses an overhead depth camera to track users' hands and objects on the surface. A Microsoft Kinect depth sensor is mounted above the surface and calibrated for extrinsic and intrinsic camera parameters. "We are using two Kinects - one mounted in a remote location to capture the remote user, who had two video screens so they can see both the inFORM table surface and other participants," said Follmer. He said that the remote Kinect is mounted on the ceiling and captures the depth image and 2D colour image of the user's hands or any other objects placed under it. This captured data is sent over the network to the inFORM surface, which renders it physically on the inFORM pins. A projector above the inFORM projects the colour image of the remote user's hands onto the rendered shape. The second Kinect is mounted above the inFORM surface, which allows users collocated with the inFORM to interact with it gesturally.
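The remote rendering pipeline above boils down to mapping a depth image onto the pin grid. Here is a rough sketch of that step: the grid size (30 × 30) and maximum pin travel (100 mm) come from the post, but the downsampling method, the depth range and the function names are my own assumptions, not inFORM's actual pipeline.

```python
# Hypothetical sketch: map a Kinect-style depth image onto the 30x30 pin grid.
import numpy as np

PIN_GRID = 30
MAX_HEIGHT_MM = 100.0  # pins extend up to 100 mm from the surface

def depth_to_pin_heights(depth_mm, near_mm=500.0, far_mm=900.0):
    """depth_mm: 2D array of depth values in mm (assumed divisible by 30).
    Returns a 30x30 array of target pin heights in mm."""
    h, w = depth_mm.shape
    # Downsample by averaging rectangular blocks onto the pin grid.
    blocks = depth_mm.reshape(PIN_GRID, h // PIN_GRID, PIN_GRID, w // PIN_GRID)
    coarse = blocks.mean(axis=(1, 3))
    # Nearer objects (e.g. the remote user's hands) become taller pins.
    norm = np.clip((far_mm - coarse) / (far_mm - near_mm), 0.0, 1.0)
    return norm * MAX_HEIGHT_MM

frame = np.full((480, 480), 900.0)   # empty scene: everything far away
frame[200:260, 200:260] = 500.0      # a "hand" close to the camera
heights = depth_to_pin_heights(frame)
print(heights.shape)   # (30, 30)
print(heights.max())   # 100.0 -> the hand region raises pins fully
```

Each 30 × 30 frame of target heights would then be split across the controller boards, where the per-pin PID loops drive the pins toward those targets while the projector overlays the colour image.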
inFORM can actuate devices, for example by sliding and tilting a tablet towards the user
This development is a major breakthrough, and I would say what we saw in The Wolverine is just a beginner's application, with more to come. Shape displays allow for new ways to create physical interactions beyond functionality alone. We can hope for major developments here, opening up a wide variety of applications in the medical, IT and industrial fields. Let's hope for the best. :D :)

If anyone has any other questions, or requests for future posts or how-to posts, you can ask me in the comments or email me. Please don't feel shy at all!

I'm certainly not an expert, but I'll try my hardest to explain what I do know and research what I don't know. I would love to hear your thoughts! Be sure to check back again. I will certainly reply to your comments :)