Welcome to Augmented Human 2015 in Singapore!

Create your schedule and network with your fellow AH participants. Use the hashtag #AH2015 when posting about AH’15 on your social networks to get increased visibility of your AH work!
Monday, March 9
 

08:00 GMT+08

Registration
Monday March 9, 2015 08:00 - 09:00 GMT+08
Marina Bay Sands Hotel, Singapore

09:00 GMT+08

Welcome and Opening
Moderators
Ellen Yi-Luen Do

Georgia Institute of Technology and National University of Singapore
Creating Unique Technology for Everyone!
Suranga Nanayakkara

Singapore University of Technology and Design

Monday March 9, 2015 09:00 - 09:15 GMT+08
Room: Heliconia Jr. Marina Bay Sands Hotel, Singapore

09:15 GMT+08

Opening Keynote: The Way We May Interact
From the abacus to Android, the medium of our interaction has been constantly changing. The wonderful history of this change, and its next stop, excites us and will for a long time to come. At yet another crossroads, we will again break the norms with experiments that may someday become commonplace. But some of the experiments that did not become norms might be just as exciting, and sometimes more so. The goal of these quests is to make us more connected, maybe to each other or to something inanimate. Or maybe it is just an aimless quest. The dream of 'what will be the Next Medium' keeps me awake, and I would love to share its story with you.

Speakers
Pranav Mistry

Vice President, Samsung Research America
Nothing can be and can not be one and at the same time and I am. I am Pranav Mistry. Currently, I am the Head of Think Tank Team and Director of Research at Samsung Research America. Before that I was a Research Assistant and PhD candidate at the MIT Media Lab. In the past, I worked…


Monday March 9, 2015 09:15 - 10:30 GMT+08
Room: Heliconia Jr. Marina Bay Sands Hotel, Singapore

10:30 GMT+08

Coffee Break
Monday March 9, 2015 10:30 - 11:00 GMT+08
Room: Hibiscus Jr. Marina Bay Sands Hotel, Singapore

11:00 GMT+08

Session 1: Wearable Interfaces
Moderators
Jun Rekimoto

The University of Tokyo

Monday March 9, 2015 11:00 - 12:30 GMT+08
Room: Heliconia Jr. Marina Bay Sands Hotel, Singapore

11:01 GMT+08

Vision Enhancement: Defocus Correction via Optical See-Through Head-Mounted Displays
Authors: Yuta Itoh and Gudrun Klinker

Abstract: Vision is our primary, essential sense for perceiving the real world. Human beings have long sought to push the limits of the eye by inventing vision devices such as corrective glasses, sunglasses, telescopes, and night-vision goggles. Recently, Optical See-Through Head-Mounted Displays (OST-HMDs) have penetrated the commercial market. While traditional devices have improved our vision by altering or replacing it, OST-HMDs can augment and mediate it. We believe that future OST-HMDs, combined with wearable sensing systems including image sensors, will dramatically improve our vision capability.

As a step toward this future, this paper investigates Vision Enhancement (VE) techniques via OST-HMDs. We aim to correct optical defects of the human eye, especially defocus, by overlaying a compensation image on the user's actual view so that the filter cancels the aberration. Our contributions are threefold. First, we formulate our method by taking the optical relationships between the OST-HMD and the human eye into consideration. Second, we demonstrate the method in proof-of-concept experiments. Last and most important, we provide a thorough analysis of the results, including limitations of the current system, research issues that must be resolved to realize practical VE systems, and possible solutions to these issues for future research.
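
The core idea lends itself to a compact illustration. Below is a minimal sketch (not the authors' implementation) of defocus precompensation: the displayed overlay is pre-filtered with a regularized inverse of an assumed point-spread function, so that the eye's subsequent blur approximately cancels out. The Gaussian PSF and the regularization constant are stand-ins for a measured defocus model.

```python
# Minimal sketch of defocus precompensation (assumed Gaussian PSF).
import numpy as np

def gaussian_psf(size, sigma):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def precompensate(target, psf, k=0.01):
    """Wiener-style inverse filter: H* / (|H|^2 + k)."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=target.shape)
    G = np.conj(H) / (np.abs(H)**2 + k)           # regularized inverse
    comp = np.real(np.fft.ifft2(np.fft.fft2(target) * G))
    return np.clip(comp, 0.0, 1.0)                # respect display limits

target = np.zeros((64, 64)); target[24:40, 24:40] = 1.0   # toy stimulus
psf = gaussian_psf(15, sigma=2.0)                          # assumed defocus
overlay = precompensate(target, psf)
# The eye blurs the overlay; ideally blur(overlay) ≈ target.
```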

Speakers
Yuta Itoh

Technical University of Munich
Head-mounted displays, HMD calibration


Monday March 9, 2015 11:01 - 11:20 GMT+08
Room: Heliconia Jr. Marina Bay Sands Hotel, Singapore

11:20 GMT+08

Exploring Users’ Attitudes towards Social Interaction Assistance on Google Glass
Authors: Qianli Xu, Michal Arika Mukawa, Joo Hwee Lim, Cheston Yin Chet Tan, Shue Ching Chia, Tian Gan, Liyuan Li and Bappaditya Mandal

Abstract: Wearable vision brings new opportunities for augmenting humans in social interactions. However, along with it come privacy concerns and possible information overload. We explore users’ needs and attitudes toward augmented interaction in face-to-face communication. In particular, we want to find out whether users need additional information when interacting with acquaintances, what information they want to access, and how they use it in their communication. We designed a prototype system on Google Glass that provides the wearer with in-situ personal information about the target person. The prototype was tested with 20 participants in several interaction scenarios. Based on a thorough analysis of users’ behaviors and feedback, we find that users in general appreciated the usefulness of wearable assistance for social interactions. We highlight key technical, behavioral, and social implications of wearable vision for interaction assistance to foster technology advancement.

Monday March 9, 2015 11:20 - 11:35 GMT+08
Room: Heliconia Jr. Marina Bay Sands Hotel, Singapore

11:35 GMT+08

PickRing: Seamless Interaction through Pick-Up Detection
Authors: Katrin Wolf and Jonas Willaredt

Abstract: We frequently switch between devices, and currently we have to unlock most of them. Ideally, such devices should be seamlessly accessible without requiring an unlock action. We introduce PickRing, a wearable sensor that allows seamless interaction with devices by predicting the intention to interact with them through pick-up detection. A cross-correlation between the ring’s and the device’s motion is used as the basis for identifying the intention of device usage. In an experiment, we found that pick-up detection using PickRing costs neither additional effort nor time compared with the pure pick-up action, while having more hedonic qualities and being rated as more attractive than a standard smartphone unlocking technique. Thus, PickRing can reduce the overhead of using devices by seamlessly activating mobile and ubiquitous computers.
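
As a rough illustration of the detection principle, here is a minimal sketch assuming synchronized accelerometer-magnitude streams from the ring and the device; the correlation threshold and signal length are illustrative, not the paper's values.

```python
# Minimal sketch: pick-up intent via cross-correlation of motion signals.
import numpy as np

def normalized_xcorr(a, b):
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return np.correlate(a, b, mode="full") / len(a)

def picked_up(ring_acc, device_acc, threshold=0.6):
    """True if ring and device moved together (same pick-up motion)."""
    return normalized_xcorr(ring_acc, device_acc).max() > threshold

t = np.linspace(0, 1, 100)
lift = np.sin(2 * np.pi * 2 * t)                  # shared pick-up motion
ring = lift + 0.1 * np.random.randn(100)
device = lift + 0.1 * np.random.randn(100)
print(picked_up(ring, device))                    # True -> unlock seamlessly
```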

Speakers
Katrin Wolf

Universität Stuttgart


Monday March 9, 2015 11:35 - 11:55 GMT+08
Room: Heliconia Jr. Marina Bay Sands Hotel, Singapore

11:55 GMT+08

SkinWatch: Skin Gesture Interaction for Smart Watch
Authors: Masa Ogata and Michita Imai

Abstract: We propose SkinWatch, a new interaction modality for wearable devices. SkinWatch provides gesture input by sensing deformation of the skin under a wearable wrist device, also known as a smart watch. Gestures are recognized by matching against learned training data, combined with two-dimensional linear input. The sensing part is small, thin, and stable enough to accept accurate input via the user’s skin. We also implement an anti-error mechanism to prevent unexpected input when the user moves or rotates his or her forearm. The whole sensor costs less than $1.50, and the sensor layer does not exceed 3 mm in height in this prototype. We demonstrate sample applications with a practical task, using two-finger skin gesture input.
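
A minimal sketch of the matching step, assuming the skin-deformation sensor yields a fixed-length feature vector per gesture; the gesture names, template vectors, and rejection distance are illustrative.

```python
# Minimal sketch: nearest-template gesture matching with rejection.
import numpy as np

templates = {                        # averaged training data per gesture
    "swipe_left":  np.array([0.9, 0.1, 0.2, 0.0]),
    "swipe_right": np.array([0.1, 0.9, 0.0, 0.2]),
    "pinch":       np.array([0.5, 0.5, 0.9, 0.8]),
}

def classify(sample, reject_dist=0.5):
    """Nearest template; reject far samples (anti-error for arm motion)."""
    name, dist = min(((g, np.linalg.norm(sample - t))
                      for g, t in templates.items()), key=lambda x: x[1])
    return name if dist < reject_dist else None

print(classify(np.array([0.85, 0.15, 0.25, 0.05])))   # -> swipe_left
print(classify(np.array([3.0, 3.0, 3.0, 3.0])))       # -> None (rejected)
```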

Monday March 9, 2015 11:55 - 12:10 GMT+08
Room: Heliconia Jr. Marina Bay Sands Hotel, Singapore

12:30 GMT+08

Lunch
Monday March 9, 2015 12:30 - 13:30 GMT+08
Room: Hibiscus Jr. Marina Bay Sands Hotel, Singapore

13:30 GMT+08

Session 2: Altered Experiences
Moderators
Kai Kunze

KMD, Keio University

Monday March 9, 2015 13:30 - 15:00 GMT+08
Room: Heliconia Jr. Marina Bay Sands Hotel, Singapore

13:31 GMT+08

Improving Work Productivity by Controlling the Time Rate Displayed by the Virtual Clock
Authors: Yuki Ban, Sho Sakurai, Takuji Narumi, Tomohiro Tanikawa and Michitaka Hirose

Abstract: The main contribution of this paper is a method for unconsciously improving work productivity by controlling the time rate displayed by a virtual clock. Recent work has shown that work efficiency is influenced by various environmental factors. One way to increase work productivity is to improve the work rate over a given duration; it is also becoming clear that time pressure can enhance task performance and work productivity. Both the perceived work rate and this time pressure are evoked by the sensation of time. In this study, we focus on the “clock” as a tool that conveys the rate and length of time by displaying the time sensation as if it were physically existing information. We propose a method to improve a person’s work productivity unconsciously by inducing a false sense of elapsed time with a virtual clock that visually displays a time rate different from the real one. We conducted experiments to investigate how changes in the displayed virtual time rate influence time perception and work efficiency. The experimental results showed that by displaying an accelerated time rate, it is possible to improve work efficiency while time perception remains constant, regardless of whether the relative speed of the displayed time rate is fast or slow.
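
The manipulation itself is simple to express in code. A minimal sketch, assuming a display loop reads the clock below; the rate of 1.2 is illustrative.

```python
# Minimal sketch: a virtual clock whose displayed time runs at a scaled rate.
import time

class VirtualClock:
    def __init__(self, rate=1.2):      # 1.2 => displayed time runs 20% fast
        self.rate = rate
        self.start = time.time()

    def now(self):
        """Displayed seconds since start, scaled by the virtual rate."""
        return (time.time() - self.start) * self.rate

clock = VirtualClock(rate=1.2)
time.sleep(1.0)
print(f"real: 1.0 s, displayed: {clock.now():.2f} s")
```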

Monday March 9, 2015 13:31 - 13:50 GMT+08
Room: Heliconia Jr. Marina Bay Sands Hotel, Singapore

13:50 GMT+08

Gravitamine Spice: A System that Changes the Perception of Eating through Virtual Weight Sensation
Authors: Masaharu Hirose, Karin Iwazaki, Kozue Nojiri, Minato Takeda, Yuta Sugiura and Masahiko Inami

Abstract: The flavor of food is not limited to the sense of taste; it changes according to information perceived through other senses such as hearing, vision, and touch, and through individual experiences or cultural background. “Gravitamine Spice”, which we propose here, focuses on this cross-modality of perception: we perceive the weight of food as we lift the utensils. The system consists of a fork and a seasoning called “OMOMI”. Users can change the weight of the food by adding the seasoning to it. Through this sequence of actions, users can enjoy different dining experiences, which may change the taste of their food or the feeling towards the food while chewing it.

Monday March 9, 2015 13:50 - 14:10 GMT+08
Room: Heliconia Jr. Marina Bay Sands Hotel, Singapore

14:10 GMT+08

B-C-Invisibility Power: Introducing Optical Camouflage Based on Mental Activity in Augmented Reality
Authors: Jonathan Mercier-Ganady, Maud Marchal and Anatole Lécuyer

Abstract: In this paper we introduce a novel interactive approach to optical camouflage called "B-C-Invisibility power". We combine augmented reality and Brain-Computer Interface (BCI) technologies to design a system that, in effect, provides the "power of becoming invisible". Our optical camouflage is produced on a PC monitor combined with an optical tracking system. A cut-out image of the user is computed from a live video stream and superimposed on a prerecorded background image using a transparency effect. The transparency level is controlled by the output of a BCI, enabling the user to control her invisibility directly with mental activity. The mental task required to increase or decrease the invisibility is related to a concentration/relaxation state. Results from a preliminary study based on a simple video game inspired by the Harry Potter universe show that, compared to standard keyboard control, controlling the optical camouflage directly with the BCI could enhance the user experience and the feeling of "having a super-power".
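
A minimal numpy-only sketch of the compositing step, with a scalar concentration score standing in for the real BCI output.

```python
# Minimal sketch: BCI-driven transparency blending of the user's cut-out.
import numpy as np

def composite(frame, background, mask, concentration):
    """More concentration -> more transparent (more 'invisible') user."""
    alpha = 1.0 - np.clip(concentration, 0.0, 1.0)   # user visibility
    blend = alpha * frame + (1.0 - alpha) * background
    return np.where(mask[..., None], blend, background)

h, w = 48, 64
frame = np.random.rand(h, w, 3)           # live video frame
background = np.random.rand(h, w, 3)      # prerecorded empty scene
mask = np.zeros((h, w), bool); mask[10:40, 20:44] = True   # user cut-out
out = composite(frame, background, mask, concentration=0.8)  # mostly invisible
```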

Speakers
Jonathan Mercier-Ganady

PhD Student, Inria/INSA


Monday March 9, 2015 14:10 - 14:25 GMT+08
Room: Heliconia Jr. Marina Bay Sands Hotel, Singapore

14:25 GMT+08

Snow Walking: Motion-Limiting Device that Reproduces the Experience of Walking in Deep Snow
Authors: Tomohiro Yokota, Motohiro Ohtake, Yukihiro Nishimura, Toshiya Yui, Rico Uchikura and Tomoko Hashida

Abstract: We propose “Snow Walking,” a boot-shaped device that reproduces the experience of walking in deep snow. The main purpose of this study is to reproduce the feel of walking in a special environment that we do not experience daily, particularly one that has depth, such as deep snow. When you walk in deep snow, you get three feelings: pulling your foot up from the snow, putting your foot down into it, and your feet crunching across the bottom. You cannot walk in deep snow easily, and with the system you get a special feeling, not only on the sole of your foot but as if your entire foot were buried in the snow. We reproduce these feelings by using a slider, an electromagnet, a vibration speaker, a hook-and-loop fastener, and potato starch.

Speakers
Tomohiro Yokota

Waseda University


Monday March 9, 2015 14:25 - 14:40 GMT+08
Room: Heliconia Jr. Marina Bay Sands Hotel, Singapore

14:40 GMT+08

The Kraftwork and The Knittstruments: Augmenting Knitting With Sound
Authors: Enrique Encinas, Konstantia Koulidou and Robb Mitchell

Abstract: This paper presents a novel example of technological augmentation of a craft practice. By translating the skilled, embodied knowledge of knitting practice into the language of sound, our study explores how audio augmentation of routinized motion patterns affects an individual’s awareness of her bodily movements and alters conventional practice. Four different instruments (the Knittstruments: The ThereKnitt, The KnittHat, The Knittomic, and The KraftWork) were designed and tested in four different locations. This research entails cycles of data collection and analysis based on the action and grounded theory methods of noting, coding and memoing. Analysis of the collected data suggests substantial alterations in the knitters’ performance due to audio feedback, at both the individual and group level, and improvisation in the process of making. We argue that the use of Knittstruments can have relevant consequences in the fields of interface design, wearable computing, and artistic and musical creation in general, and we hope to provide an inspiring new avenue for designers, artists and knitters to explore.

Monday March 9, 2015 14:40 - 14:55 GMT+08
Room: Heliconia Jr. Marina Bay Sands Hotel, Singapore

15:00 GMT+08

Coffee Break
Monday March 9, 2015 15:00 - 15:30 GMT+08
Room: Hibiscus Jr. Marina Bay Sands Hotel, Singapore

15:30 GMT+08

Session 3: Haptics and Exoskeletons
Moderators
Hideki Koike

Tokyo Institute of Technology

Monday March 9, 2015 15:30 - 17:00 GMT+08
Room: Heliconia Jr. Marina Bay Sands Hotel, Singapore

15:31 GMT+08

Augmenting Spatial Skills with Semi-Immersive Interactive Desktop Displays: Do Immersion Cues Matter?
Authors: Erin Solovey, Johanna Okerlund, Cassandra Hoef, Jasmine Davis and Orit Shaer

Abstract: 3D stereoscopic displays for desktop use show promise for augmenting users’ spatial problem solving tasks. These displays have the capacity for different types of immersion cues including binocular parallax, motion parallax, proprioception, and haptics. Such cues can be powerful tools in increasing the realism of the virtual environment by making interactions in the virtual world more similar to interactions in the real non-digital world [21, 32]. However, little work has been done to understand the effects of such immersive cues on users’ understanding of the virtual environment. We present a study in which users solve spatial puzzles with a 3D stereoscopic display under different immersive conditions while we measure their brain workload using fNIRS and ask them subjective workload questions. We conclude that 1) stereoscopic display leads to lower task completion time, lower physical effort, and lower frustration; 2) vibrotactile feedback results in increased perceived immersion and in higher cognitive workload; 3) increased immersion (which combines stereo vision with vibrotactile feedback) does not result in reduced cognitive workload.

Speakers
Orit Shaer

Associate Professor, Wellesley College
Professor of Computer Science and Media Arts and Science at Wellesley College. Director of Wellesley HCI Lab. Researcher of emerging HCI techniques including tangible and embodied interfaces, interactive surfaces, 3D interaction, and wearable technology; applies HCI research to genomics…


Monday March 9, 2015 15:31 - 15:50 GMT+08
Room: Heliconia Jr. Marina Bay Sands Hotel, Singapore

15:50 GMT+08

RippleTouch: Initial Exploration of a Wave Resonant Based Full Body Haptic Interface
Authors: Anusha Withana, Shunsuke Koyama, Daniel Saakes, Kouta Minamizawa, Masahiko Inami and Suranga Nanayakkara

Abstract: We propose RippleTouch, a low-resolution haptic interface that is capable of providing haptic stimulation to multiple areas of the body via a single point of contact. The concept is based on the low-frequency acoustic wave propagation properties of the human body. By stimulating the body with different amplitude-modulated frequencies at a single contact point, we were able to dissipate the wave energy in a particular region of the body, creating haptic stimulation without direct contact. The RippleTouch system was implemented on a regular chair, with four bass-range speakers mounted underneath the seat and driven by a simple stereo audio interface. The system was evaluated to investigate the effect of the frequency characteristics of the amplitude modulation system. Results demonstrate that we can effectively create haptic sensations at different parts of the body with a single contact point (i.e., the chair surface). We believe the RippleTouch concept can serve as a scalable solution for providing full-body haptic feedback in a variety of situations, including entertainment, communication, public spaces and vehicular applications.
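
A minimal sketch of the actuation signal, generating an amplitude-modulated carrier of the kind described; both frequencies are illustrative, not the paper's values.

```python
# Minimal sketch: amplitude-modulated carrier for speaker-driven haptics.
import numpy as np

def am_haptic_signal(carrier_hz=200.0, mod_hz=40.0, seconds=1.0, sr=44100):
    """Carrier shaped by a low-frequency envelope; energy at mod_hz
    propagates through the body from the single contact point."""
    t = np.linspace(0, seconds, int(sr * seconds), endpoint=False)
    envelope = 0.5 * (1 + np.sin(2 * np.pi * mod_hz * t))   # AM envelope
    return envelope * np.sin(2 * np.pi * carrier_hz * t)

signal = am_haptic_signal()   # write to the chair's speakers via audio out
```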

Monday March 9, 2015 15:50 - 16:10 GMT+08
Room: Heliconia Jr. Marina Bay Sands Hotel, Singapore

16:10 GMT+08

Optimal Design for Individualised Passive Assistance
Authors: Robert Matthew, Victor Shia, Masayoshi Tomizuka and Ruzena Bajcsy

Abstract: Assistive devices are capable of restoring independence and function to people suffering from musculoskeletal impairments. Traditional assistive exoskeletons can be divided into active and passive devices, depending on the method used to provide joint torques. The design of these devices often does not take into account the abilities of the individual, leading to complex designs, joint misalignment and muscular atrophy due to over-assistance at each joint.
We present a novel framework for the design of passive assistive devices whereby the device provides the minimal amount of assistance required to maximise the space the wearer can reach. In doing so, we effectively remap their capable torque load over their workspace, exercising existing muscle while ensuring that key points in the workspace can be reached. In this way we hope to reduce the risk of muscular atrophy while assisting with tasks.
We implement two methods for finding the necessary passive device parameters: one considers static loading conditions, while the second simulates the system dynamics using level-set methods. This allows us to determine the set of points at which an individual can hold their arm stationary, the statically achievable workspace (SAW). We show the efficacy of these methods in a number of case studies, which show that individuals with pronounced or asymmetric muscle weakness can have their SAW enlarged, restoring a range of motion.
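
For intuition, a minimal one-degree-of-freedom sketch of the static test underlying the SAW: an angle is statically achievable if the user's capable torque plus the passive spring's torque can cancel gravity. All parameters here are illustrative, not taken from the paper.

```python
# Minimal 1-DoF sketch: statically achievable shoulder angles with a spring.
import numpy as np

def achievable_angles(max_torque, k_spring, rest_angle,
                      m=3.5, g=9.81, l=0.3):
    """Angles (rad) where |gravity torque - spring torque| <= capability."""
    angles = np.linspace(0, np.pi / 2, 91)
    gravity = m * g * l * np.cos(angles)        # torque to hold the arm up
    spring = k_spring * (rest_angle - angles)   # passive assistance
    return angles[np.abs(gravity - spring) <= max_torque]

weak = achievable_angles(max_torque=3.0, k_spring=0.0, rest_angle=0.0)
assisted = achievable_angles(max_torque=3.0, k_spring=12.0, rest_angle=1.2)
print(len(weak), "->", len(assisted), "achievable angles with assistance")
```

Note how the spring also removes some previously reachable angles: assistance remaps, rather than simply enlarges, the achievable set, which is why the paper optimizes the parameters per individual.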

Speakers
Robert Matthew

Graduate Student Researcher, UC Berkeley


Monday March 9, 2015 16:10 - 16:30 GMT+08
Room: Heliconia Jr. Marina Bay Sands Hotel, Singapore

16:30 GMT+08

Design of a Novel Finger Exoskeleton with a Sliding Six-Bar Joint Mechanism
Authors: Mahasak Surakijboworn and Wittaya Wannasuphoprasit

Abstract: The objective of this paper is to propose a novel design for a finger exoskeleton. The design consists of three identical joint mechanisms, each of which adopts a six-bar remote-center-of-motion (RCM) linkage as an equivalent revolute joint, incorporating two prismatic joints to form a closed-chain structure with the finger joint. Cable-and-hose transmission is designed to offload the burden of the prospective driving modules. As a result, the prototype coherently follows finger movement throughout the full range of motion for every size of finger.

Speakers
Mahasak Surakijboworn

Chulalongkorn University


Monday March 9, 2015 16:30 - 16:45 GMT+08
Room: Heliconia Jr. Marina Bay Sands Hotel, Singapore

17:00 GMT+08

Break
Monday March 9, 2015 17:00 - 17:30 GMT+08
Marina Bay Sands Hotel, Singapore

17:30 GMT+08

Spotlight on: Demonstrations & Welcome Reception
Moderators
Weiquan Lu

National University of Singapore
Anusha Withana

Singapore University of Technology and Design

Monday March 9, 2015 17:30 - 20:30 GMT+08
Room: Hibiscus Jr. Marina Bay Sands Hotel, Singapore
 
Tuesday, March 10
 

08:00 GMT+08

Registration
Tuesday March 10, 2015 08:00 - 09:00 GMT+08
Marina Bay Sands Hotel, Singapore

09:00 GMT+08

Announcements
Moderators
Ellen Yi-Luen Do

Georgia Institute of Technology and National University of Singapore
Creating Unique Technology for Everyone!
Suranga Nanayakkara

Singapore University of Technology and Design

Tuesday March 10, 2015 09:00 - 09:15 GMT+08
Room: Heliconia Jr. Marina Bay Sands Hotel, Singapore

09:15 GMT+08

Keynote: Cybathlon 2016, An International Championship for Augmented Parathletes
The Cybathlon is an international championship for racing pilots with disabilities (i.e., parathletes) who use advanced robotic technologies for assistance with daily life activities. The competitions comprise different disciplines that apply the most modern powered devices, such as powered prostheses, wearable exoskeletons, powered wheelchairs, functional electrical stimulation, and novel brain-computer interfaces. The main goal of the Cybathlon is to provide a platform for the development of novel assistive technologies that are useful in the daily lives of persons with different motor disabilities. Furthermore, through the organization of the Cybathlon we will help remove barriers between the public, people with disabilities and science. The first Cybathlon will take place in a large indoor stadium on 8 October 2016 and will be broadcast live all over the world. Thereafter, the Cybathlon will be held periodically, at least every four years.

Speakers
Robert Riener

Department of Health Sciences and Technology, ETH Zurich
Robert Riener studied Mechanical Engineering at TU München, Germany, and University of Maryland, USA. He received a Dr.-Ing. degree in Engineering from the TU München in 1997. After postdoctoral work from 1998-1999 at the Centro di Bioingegneria, Politecnico di Milano, he returned…


Tuesday March 10, 2015 09:15 - 10:30 GMT+08
Room: Heliconia Jr. Marina Bay Sands Hotel, Singapore

10:30 GMT+08

Coffee Break
Tuesday March 10, 2015 10:30 - 11:00 GMT+08
Room: Hibiscus Jr. Marina Bay Sands Hotel, Singapore

11:00 GMT+08

Spotlight on: Posters
Moderators
Weiquan Lu

National University of Singapore
Anusha Withana

Singapore University of Technology and Design

Tuesday March 10, 2015 11:00 - 12:30 GMT+08
Room: Hibiscus Jr. Marina Bay Sands Hotel, Singapore

12:30 GMT+08

Lunch
Tuesday March 10, 2015 12:30 - 13:30 GMT+08
Room: Hibiscus Jr. Marina Bay Sands Hotel, Singapore

13:30 GMT+08

Spotlight on: Student Design Competition
Moderators
Yuichiro Katsumoto

National University of Singapore
Halley Profita

Wearable Technology Researcher, PhD, University of Colorado - Boulder

Tuesday March 10, 2015 13:30 - 15:00 GMT+08
Room: Hibiscus Jr. Marina Bay Sands Hotel, Singapore

15:00 GMT+08

Coffee Break
Tuesday March 10, 2015 15:00 - 15:30 GMT+08
Room: Hibiscus Jr. Marina Bay Sands Hotel, Singapore

15:30 GMT+08

Session 4: Augmenting Realities
Moderators
Woontack Woo

Professor, UVR Lab, Korea Advanced Institute of Science and Technology

Tuesday March 10, 2015 15:30 - 16:45 GMT+08
Room: Heliconia Jr. Marina Bay Sands Hotel, Singapore

15:31 GMT+08

A Life Log System that Recognizes the Objects in a Pocket
Authors: Kota Shimozuru, Tsutomu Terada and Masahiko Tsukamoto

Abstract: A novel approach has been developed for recognizing objects in pockets and for recording the events related to those objects. Information on putting an object into or taking it out of a pocket is closely related to user contexts. For example, when a house key is taken out of a pocket, the owner of the key is likely just getting home. We implemented an objects-in-pocket recognition device, which has a pair of infrared sensors arranged in a matrix, and life-log software that time-stamps each event. We evaluated whether the system could recognize five objects (a smartphone, a ticket, a hand, a key, and lip balm) using template matching. When one registered object (the smartphone, ticket, or key) was put in the pocket, our system recognized the object correctly 91% of the time on average. We also evaluated our system in an action scenario. With our system's time stamps, users could easily remember what they carried on a given day and when they used each item.
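
A minimal sketch of the recognition step, with small binary matrices standing in for the infrared-sensor readings; the templates and similarity measure are illustrative.

```python
# Minimal sketch: template matching of an IR-matrix reading to registered objects.
import numpy as np

templates = {
    "smartphone": np.array([[1, 1, 1], [1, 1, 1], [1, 1, 1]]),
    "key":        np.array([[0, 1, 0], [0, 1, 0], [0, 1, 1]]),
    "ticket":     np.array([[1, 1, 1], [0, 0, 0], [0, 0, 0]]),
}

def recognize(reading):
    """Best-matching registered object for one in-pocket event."""
    score = {name: (reading == t).mean() for name, t in templates.items()}
    return max(score, key=score.get)

event = np.array([[0, 1, 0], [0, 1, 0], [0, 1, 1]])
print(recognize(event))      # -> "key": log "key taken out" with a timestamp
```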

Tuesday March 10, 2015 15:31 - 15:50 GMT+08
Room: Heliconia Jr. Marina Bay Sands Hotel, Singapore

15:50 GMT+08

VISTouch: Dynamic Three-Dimensional Connection between Multiple Mobile Devices
Authors: Masasuke Yasumoto and Takehiro Teraoka

Abstract: Recently, it has become remarkably common for people to own multiple mobile devices, but it is still difficult to use them effectively in combination. In this study, we constructed a new system, “VISTouch”, that achieves a new operational capability and increases user interest in mobile devices by enabling multiple devices to be used in combination dynamically and spatially. Using VISTouch, for example, when a smartphone is spatially connected to a horizontally positioned tablet displaying a map as viewed from above, the devices dynamically obtain their correct relative position; the smartphone displays, in real time, images viewed from its position, direction, and angle, acting as a window into the virtual 3D space. Finally, we applied VISTouch to two applications that use detailed information about the relative positions of multiple devices in real space. These applications demonstrated the improved usability of using multiple devices in combination.

Tuesday March 10, 2015 15:50 - 16:05 GMT+08
Room: Heliconia Jr. Marina Bay Sands Hotel, Singapore

16:05 GMT+08

LumoSpheres: Real-Time Tracking of Flying Objects and Image Projection for a Volumetric Display
Authors: Hiroaki Yamaguchi and Hideki Koike

Abstract: This paper proposes a method for real-time tracking of flying objects and image projection onto them, toward a particle-based volumetric 3D display. First, the concept of the particle-based volumetric 3D display, which uses high-speed cameras and projectors, is described. After pointing out the latency issue in such projector-camera systems, we present our solution: a prediction model combining kinematic laws with a Kalman filter. We conducted experiments that show the accuracy of the projection. We also present an application of our method in entertainment, Digital Juggling.
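
A minimal sketch of the latency-compensation idea: predict where a ballistic object will be after the known projector-camera delay and aim the projection there. A constant-gravity kinematic step stands in here for the paper's full Kalman-filtered estimate.

```python
# Minimal sketch: ballistic prediction to compensate projector-camera latency.
import numpy as np

G = np.array([0.0, -9.81])            # gravity, m/s^2

def predict(pos, vel, latency_s):
    """Position after `latency_s` under ballistic motion."""
    return pos + vel * latency_s + 0.5 * G * latency_s**2

pos = np.array([0.0, 1.5])            # tracked ball position (m)
vel = np.array([0.4, 2.0])            # estimated velocity (m/s)
target = predict(pos, vel, latency_s=0.05)   # aim the projector here
print(target)
```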

Tuesday March 10, 2015 16:05 - 16:20 GMT+08
Room: Heliconia Jr. Marina Bay Sands Hotel, Singapore

16:20 GMT+08

DogPulse: Augmenting the Coordination of Dog Walking through an Ambient Awareness System at Home
Authors: Christoffer Skovgaard, Josephine Raun Thomsen, Nervo Verdezoto and Daniel Vestergaard

Abstract: This paper presents DogPulse, an ambient awareness system to support the coordination of dog walking among family members at home. DogPulse augments a dog collar and leash set to activate an ambient shape-changing lamp and visualize the last time the dog was taken for a walk. The lamp gradually changes its form and pulsates its lights in order to keep the family members aware of the dog walking activity. We report the iterative prototyping of DogPulse, its implementation and its preliminary evaluation. Based on our initial findings, we present the limitations and lessons learned as well as highlight recommendations for future work.

Speakers
Daniel Vestergaard

Aarhus University


Tuesday March 10, 2015 16:20 - 16:35 GMT+08
Room: Heliconia Jr. Marina Bay Sands Hotel, Singapore

17:00 GMT+08

Social Event @ Mount Faber
 
  • 5pm – Shuttle service leaves from Marina Bay Sands

  • 6pm – Cable Car Ride

  • 7pm – Conference Dinner

  • 9:30pm – Shuttle service back to Marina Bay Sands

Tuesday March 10, 2015 17:00 - 21:30 GMT+08
Mount Faber, Singapore
 
Wednesday, March 11
 

08:00 GMT+08

Registration
Wednesday March 11, 2015 08:00 - 09:00 GMT+08
Marina Bay Sands Hotel, Singapore

09:00 GMT+08

Session 5: Learning and Reading
Moderators
Weiquan Lu

National University of Singapore

Wednesday March 11, 2015 09:00 - 10:30 GMT+08
Room: Heliconia Jr. Marina Bay Sands Hotel, Singapore

09:01 GMT+08

Word Out! Learning the Alphabet and Spelling through Full Body Interactions
Authors: Kelly Yap, Clement Zheng, Angela Tay, Ching-Chiuan Yen and Ellen Yi-Luen Do

Abstract: This paper presents Word Out, an interactive game for learning the alphabet and spelling through full-body interaction. Targeted at children 4-7 years old, Word Out employs the Microsoft Kinect to detect the silhouettes of players, who are tasked with twisting and forming their bodies to match the shapes of the letters displayed on the screen. By adopting full-body interactions in games, we aim to promote learning through play, as well as encourage collaboration and kinesthetic learning for children. Over two months, more than 15,000 children played Word Out at two different museums. This paper presents the design and implementation of the Word Out game, shares insights from preliminary analyses of a survey carried out at the museums, and discusses future work.
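
A minimal sketch of the silhouette-to-letter matching, using intersection-over-union between binary masks; the masks and pass threshold are illustrative.

```python
# Minimal sketch: score how well a body silhouette matches a letter shape.
import numpy as np

def shape_match(silhouette, letter_mask):
    """IoU in [0, 1] between body silhouette and letter shape."""
    inter = np.logical_and(silhouette, letter_mask).sum()
    union = np.logical_or(silhouette, letter_mask).sum()
    return inter / union if union else 0.0

letter = np.zeros((8, 8), bool); letter[1:7, 3:5] = True    # crude "I"
body = np.zeros((8, 8), bool);   body[1:7, 3:5] = True      # Kinect silhouette
if shape_match(body, letter) > 0.7:     # threshold tuned in play-testing
    print("Letter matched!")
```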

Speakers
Clement Zheng

National University of Singapore


Wednesday March 11, 2015 09:01 - 09:20 GMT+08
Room: Heliconia Jr. Marina Bay Sands Hotel, Singapore

09:20 GMT+08

Unconscious Learning of Speech Sounds using Mismatch Negativity Neurofeedback
Authors: Ming Chang, Hiroyuki Iizuka, Yasushi Naruse, Hideyuki Ando and Taro Maeda

Abstract: Learning the speech sounds of a foreign language is difficult for adults and often requires significant training and attention. For example, native Japanese speakers are usually unable to differentiate between the “l” and “r” sounds in English; thus, words like “light” and “right” are hardly discriminated. We previously showed that the ability to discriminate similar pure tones can be improved unconsciously using neurofeedback (NF) training with mismatch negativity (MMN), but it was not clear whether this can improve discrimination of the speech sounds of words. We examined whether MMN neurofeedback is effective in helping native Japanese speakers discriminate “light” and “right” in English. Participants improved significantly in speech-sound discrimination through NF training, without attending to the auditory stimuli or being aware of what was to be learned. Individual word-sound recognition also improved significantly. Furthermore, our results indicate a lasting effect of NF training.

Speakers
Ming Chang

Osaka University


Wednesday March 11, 2015 09:20 - 09:35 GMT+08
Room: Heliconia Jr. Marina Bay Sands Hotel, Singapore

09:35 GMT+08

Use of an Intermediate Face between a Learner and a Teacher in Second Language Learning with Shadowing
Authors: Yoko Nakanishi and Yasuto Nakanishi

Abstract: Shadowing is a language-learning method whereby a learner attempts to repeat, i.e., shadow, what he/she hears immediately. We propose displaying a computer-generated intermediate face between a learner and a teacher as an appropriate intermediate scaffold for shadowing. The intermediate face allows the learner to follow a teacher’s face and mouth movements more effectively. We describe a prototype system that generates an intermediate face from real-time camera input and captured video. We also discuss a user study of the prototype system with crowd-sourced participants. The results of the user study suggest that the prototype system provided better pronunciation cues than video-only shadowing techniques.

Wednesday March 11, 2015 09:35 - 09:50 GMT+08
Room: Heliconia Jr. Marina Bay Sands Hotel, Singapore

09:50 GMT+08

Assessment of Stimuli for Supporting Speed Reading on Electronic Devices
Authors: Tilman Dingler, Alireza Sahami Shirazi, Kai Kunze and Albrecht Schmidt

Abstract: Technology has introduced multimedia to tailor information more broadly to our various senses, but by no means has the ability to consume information through reading lost its importance. To cope with the ever-growing amount of textual information to consume, different techniques have been proposed to increase reading efficiency: rapid serial visual presentation (RSVP) has been suggested to increase reading speed by effectively reducing the number of eye movements. Further, moving a pen, finger or the entire hand across text is a common technique among speed readers to help guide eye movements. We adopted these techniques for electronic devices by introducing stimuli on text that guide users' eye movements. In a series of two user studies we sped up users' reading speed to 150% of their normal rate and evaluated effects on text comprehension, mental load, eye movements and subjective perception. Results show that reading speed can be effectively increased by using such stimuli while keeping comprehension rates nearly stable. We observed initial strain on mental load which significantly decreased after a short while. Subjective feedback conveys that kinetic stimuli are better suited for long, complex text on larger displays, whereas RSVP was preferred for short text on small displays.
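
A minimal sketch of RSVP pacing at 150% of a baseline reading rate; the 250 wpm baseline is an illustrative assumption.

```python
# Minimal sketch: rapid serial visual presentation at a sped-up rate.
import time

def rsvp(text, baseline_wpm=250, speedup=1.5):
    delay = 60.0 / (baseline_wpm * speedup)          # seconds per word
    for word in text.split():
        print(f"\r{word:^20}", end="", flush=True)   # single fixation point
        time.sleep(delay)
    print()

rsvp("Reading speed can be increased while keeping comprehension stable")
```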

Wednesday March 11, 2015 09:50 - 10:10 GMT+08
Room: Heliconia Jr. Marina Bay Sands Hotel, Singapore

10:10 GMT+08

How Much Do You Read? – Counting the Number of Words a User Reads Using Electrooculography
Authors: Kai Kunze, Katsutoshi Masai, Yuji Uema and Masahiko Inami

Abstract: We read to acquire knowledge. Reading is a common activity performed in transit and while sitting, for example during the commute to work or at home on the couch. Although reading is associated with high vocabulary skills and even with increased critical thinking, we still know very little about effective reading habits. In this paper, we argue that as a first step to understanding reading habits in real life we need to quantify them with affordable and unobtrusive technology. Towards this goal, we present a system to track how many words a user reads using electrooculography sensors. Compared to previous work, we use active electrodes with a novel on-body placement optimized both for integration into glasses (or other head-worn eyewear) and for reading detection. Using this system, we present an algorithm capable of estimating the number of words read by a user, and evaluate it in a user-independent approach in experiments with 6 users over 4 different devices (8″ and 9″ tablets, paper, and a laptop screen). We achieve an error rate as low as 7% (based on eye motions alone) for the word count estimation (std = 0.5%).
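
A minimal sketch of the estimation idea: in a horizontal EOG trace, reading appears as a staircase of small forward saccades with a large return sweep at each line break. The thresholds and words-per-saccade factor are illustrative, not the paper's trained parameters.

```python
# Minimal sketch: word-count estimation from a horizontal EOG trace.
import numpy as np

def count_words(eog_h, saccade_thr=0.3, sweep_thr=-1.0, words_per_saccade=1.2):
    d = np.diff(eog_h)
    forward = (d > saccade_thr).sum()   # small rightward jumps while reading
    sweeps = (d < sweep_thr).sum()      # large return sweeps = line breaks
    return int(forward * words_per_saccade), sweeps

line = np.repeat(np.arange(8) * 0.5, 10)       # one line: 7 forward saccades
eog = np.concatenate([line, line - 3.5])       # sweep back, read next line
words, lines = count_words(eog)
print(words, "words over", lines + 1, "lines")
```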

Speakers
Katsutoshi Masai

Keio University


Wednesday March 11, 2015 10:10 - 10:25 GMT+08
Room: Heliconia Jr. Marina Bay Sands Hotel, Singapore

10:30 GMT+08

Coffee Break
Wednesday March 11, 2015 10:30 - 11:00 GMT+08
Room: Hibiscus Jr. Marina Bay Sands Hotel, Singapore

11:00 GMT+08

Session 6: Augmenting Sports… and Toilets!
Moderators
Masahiko Inami

Professor, KMD, Keio University

Wednesday March 11, 2015 11:00 - 12:30 GMT+08
Room: Heliconia Jr. Marina Bay Sands Hotel, Singapore

11:01 GMT+08

Designable Sports Field: Sport Design by a Human in Accordance with the Physical Status of the Player
Authors: Ayaka Sato and Jun Rekimoto

Abstract: We present the Designable Sports Field (DSF), an environment where a “designer” designs a sports field in accordance with the physical intensity of the player. Sports motivate players to compete and interact with teammates. However, the rules are fixed; thus, people who lack experience or physical strength often do not enjoy playing, and the levels of the players should preferably match. In coaching, a coach trains players according to their skills, but being a coach requires considerable experience and expertise. We present a DSF application system called SportComposer, in which the roles of “designer” and “player” can be assumed even by amateur players, who participate in the sport with different goals. The designer designs a sports field according to the physical status of the player, such as his/her heart rate, in real time. Thus, the player can play a physical game that matches his/her physical intensity. In experiments conducted in this environment, we tested the system with participants ranging from a small child to adults who are not expert in sports, and confirmed that both the designer and player roles are functional and enjoyable. We also report findings from a demonstration conducted with 92 participants in a public museum.

Wednesday March 11, 2015 11:01 - 11:20 GMT+08
Room: Heliconia Jr. Marina Bay Sands Hotel, Singapore

11:20 GMT+08

Augmented Dodgeball: An Approach to Designing Augmented Sports
Authors: Takuya Nojima, Ngoc Phuong, Takahiro Kai, Toshiki Sato and Hideki Koike

Abstract: Ubiquitous computing offers enhanced interactive, human-centric experiences, including sporting and fitness-based applications. To enhance this experience further, we consider augmenting dodgeball by adding digital elements to a traditional ball game. Achieving this requires an understanding of the game mechanics with participating movable bodies. This paper discusses the design process of a ball- and player-centric interface that uses live data acquisition during gameplay for augmented dodgeball, which is presented as an application of augmented sports. Initial prototype testing shows that player detection can be achieved using a low-energy wireless sensor network, such as that used with fitness sensors, together with a ball with an embedded sensor and proximity tagging.

Wednesday March 11, 2015 11:20 - 11:35 GMT+08
Room: Heliconia Jr. Marina Bay Sands Hotel, Singapore

11:35 GMT+08

A Mobile Augmented Reality System to Enhance Live Sporting Events
Authors: Samantha Bielli and Christopher G. Harris

Abstract: Sporting events broadcast on television or through the internet are often supplemented with statistics and background information on each player. This information is typically only available for sporting events followed by a large number of spectators. Here we describe an Android-based augmented reality (AR) tool built on the Tesseract API that can store and provide augmented information about each participant in nearly any sporting event. This AR tool provides for a more engaging spectator experience for viewing professional and amateur events alike. We also describe the preliminary field tests we have conducted, some identified limitations of our approach, and how we plan to address each in future work.
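
A minimal sketch of such a pipeline using pytesseract, a common Python wrapper for the Tesseract engine the paper builds on: OCR a cropped jersey number from the camera frame, then look up that athlete's stats for the overlay. The roster, file name, and crop are hypothetical; pytesseract and Pillow must be installed.

```python
# Minimal sketch: OCR a jersey number and fetch stats for the AR overlay.
import pytesseract
from PIL import Image

roster = {"23": {"name": "A. Tan", "points_per_game": 18.4}}  # toy stats DB

def lookup_player(jersey_crop):
    digits = pytesseract.image_to_string(
        jersey_crop, config="--psm 7 -c tessedit_char_whitelist=0123456789")
    return roster.get(digits.strip())        # None if OCR finds no match

info = lookup_player(Image.open("jersey_crop.png"))   # hypothetical crop
if info:
    print(f"{info['name']}: {info['points_per_game']} PPG")  # render overlay
```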

Wednesday March 11, 2015 11:35 - 11:50 GMT+08
Room: Heliconia Jr. Marina Bay Sands Hotel, Singapore

11:50 GMT+08

A Teleoperated Bottom Wiper
Authors: Takeo Hamada, Hironori Mitake, Shoichi Hasegawa and Makoto Sato

Abstract: To aid elderly and/or disabled people in cleaning and drying their posterior after defecation, a teleoperated bottom wiper is proposed. The wiper enables a person sitting on the toilet seat to wipe his/her bottom by specifying the wiping position and strength with a computer mouse and keyboard. The proposed teleoperation is novel in that the operator and target are the same person, and the operator feels force feedback through the buttocks instead of the hands. The results of a user study confirmed that users could successfully wipe their buttocks with appropriate position and strength via teleoperation. Since it is controlled by the user, the teleoperated wiper is suitable for accommodating each user's preference of the moment.

Wednesday March 11, 2015 11:50 - 12:10 GMT+08
Room: Heliconia Jr. Marina Bay Sands Hotel, Singapore

12:10 GMT+08

The Toilet Companion: A toilet brush that should be there for you and not for others
Authors: Laurens Boer, Nico Hansen, Ragna Lisa Möller, Ana Neto, Anne Holm Nielsen and Robb Mitchell

Abstract: In this article we present the Toilet Companion: an augmented toilet brush that aims to provide moments of joy in the toilet room and, if necessary, stimulates toilet-goers to use the brush. Based upon the amount of time a user sits on the toilet seat, the brush swings its handle with increasing speed: initially to draw attention to its presence, but over time to give a playful impression. Thereafter, the entire brush makes rapid up-and-down movements to persuade the user to pick it up. In use, it generates beeps in response to human handling, to provide a sense of reward and accompanying pleasure. Despite our aims of providing joy and stimulation, participants in field trials with the Toilet Companion reported experiencing the brush as undesirable, predominantly because the sounds produced by the brush made private toilet-room activities publicly perceivable. The design intervention thus challenged the social boundaries of the otherwise private context of the toilet room, opening up an interesting area for design-ethnographic research on the perception of space, where interactive artifacts can be mobilized to deliberately breach public, social, personal, and intimate spaces.

Wednesday March 11, 2015 12:10 - 12:25 GMT+08
Room: Heliconia Jr. Marina Bay Sands Hotel, Singapore

12:30 GMT+08

Lunch
Wednesday March 11, 2015 12:30 - 13:30 GMT+08
Room: Hibiscus Jr. Marina Bay Sands Hotel, Singapore

13:30 GMT+08

Panel Session: Augmentation and Singularity: The Future of Augmented Human
Moderators
Jun Rekimoto

The University of Tokyo

Speakers
Ellen Yi-Luen Do

Georgia Institute of Technology and National University of Singapore
Creating Unique Technology for Everyone!
Masahiko Inami

Professor, KMD, Keio University
Suranga Nanayakkara

Singapore University of Technology and Design
Takuya Nojima

University of Electro-Communications
Hiroyuki Shinoda

The University of Tokyo


Wednesday March 11, 2015 13:30 - 15:00 GMT+08
Room: Heliconia Jr. Marina Bay Sands Hotel, Singapore

15:00 GMT+08

Coffee Break
Wednesday March 11, 2015 15:00 - 15:20 GMT+08
Room: Hibiscus Jr. Marina Bay Sands Hotel, Singapore

15:20 GMT+08

Closing Keynote: New Frontiers in Sensory Substitution and Sensory Augmentation: Technology and Brain Mechanisms
Abstract: In the first part of the talk I will present new ways to teach the brain to see again in blindness. I'll chart several key steps in this direction, such as developing novel sensory substitution devices (SSDs) and novel training protocols, including virtual training environments, online self-training tools, and serious games. Finally, I will present the concept of the Multisensory Bionic Eye (MBE), a device that combines invasive recovery of visual input with dedicated built-in auditory and tactile components based upon the progress made using SSDs.

In the second part of the talk I will present how the brain learns to process the information arriving from SSDs, by tracking the plastic changes in humans using functional magnetic resonance imaging. Our findings show that brain-area specializations can emerge independently of sensory modality, and suggest that this might be mediated by cultural recycling of cortical circuits molded by distinct specializations and connectivity patterns.

Speakers
Amir Amedi

Sorbonne Universités and the Institut de la Vision, Paris France
Amir is an internationally acclaimed brain scientist with 15 years of experience in the field of brain plasticity and multisensory integration. He has a particular interest in visual rehabilitation. He is an Associate Professor at the Department of Medical Neurobiology at the Hebrew…


Wednesday March 11, 2015 15:20 - 16:35 GMT+08
Room: Heliconia Jr. Marina Bay Sands Hotel, Singapore

16:35 GMT+08

Closing & Award Announcement
Speakers
Ellen Yi-Luen Do

Georgia Institute of Technology and National University of Singapore
Creating Unique Technology for Everyone!
Suranga Nanayakkara

Singapore University of Technology and Design


Wednesday March 11, 2015 16:35 - 17:00 GMT+08
Room: Heliconia Jr. Marina Bay Sands Hotel, Singapore
 