
Mixed Reality / Augmented Reality

Overview

Most of the VR systems we have experienced in this decade lack realism because the environment is synthesized entirely within a computer. To overcome this limitation, researchers have begun to bring the rich information of the real world into VR systems, and technology that deals with the real physical world as well as the synthesized virtual world has become essential. Mixed Reality (MR) is the technology that realizes an environment in which the real and virtual worlds are seamlessly integrated.

Article


Title / Source / Date

"Experience a virtual world as real as life with a semiconductor chip implanted in the brain" (뇌속에 반도체칩 이식 현실같은 가상세계 체험) / Chosun Ilbo / 2003. 5. 29

"A 'world of fantasy' compositing the real and virtual worlds in real time" (현실과 가상세계를 실시간 합성한 '환상의 세계') / Chosun Ilbo / 2003. 1. 10

Project

Title / Sponsor / Term
Program testing and packaging for commercialization of an immersive learning system / Electronics and Telecommunications Research Institute / 2009.5 ~ 2010.1
Development of object tracking and gesture recognition technology for low-cost cameras / Electronics and Telecommunications Research Institute / 2008.3 ~ 2009.2
Development of augmented reality marker recognition technology for KERIS / CREDU / 2007.5 ~ 2007.12
Technical consulting on 3D modeler development using IBR techniques / NVL Soft. / 2007.5 ~ 2007.10
Commercialization testing of geometric markers and a mixed reality toolkit / Electronics and Telecommunications Research Institute / 2006.4 ~ 2007.1
Augmented Reality Technology / Korea Science and Engineering Foundation / 2000.8 ~ 2003.2
Systems for recognizing and synthesizing facial expression and gesture / Ministry of Science and Technology / 1998.11 ~ 2000.11
Research and Development of Computer's Kansei Interface / Ministry of Science and Technology / 1996.11 ~ 1998.10

Research



Augmented Reality

Scalable Mobile Augmented Reality

For augmented reality on mobile devices, scalability and robustness are essential properties. Our recent markerless 3D tracking technology for mobile devices can handle more than 10,000 targets while maintaining the high performance required for immersive augmentation.
Augmented AIM Lab Tour

We have developed markerless 3D tracking technology for multiple targets (more than 200). Using this technology, we built the application "Augmented AIM Lab Tour", which recognizes and tracks the many pictures in our lab.
Markerless Visual Tracking for Augmented Books

The augmented book is an application that uses AR technologies to augment a real book with multimedia elements such as virtual 3D objects, movie clips, or sound clips, with the aim of adding educational value or amusement for users. We present a markerless visual tracking method which recognizes the current page among numerous pages and estimates its 6-DOF pose in real time.
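As a rough illustration of such a pipeline (not the actual system), the sketch below recognizes the visible page by matching ORB features against per-page reference images and then recovers a 6-DOF pose with PnP. The OpenCV calls are standard, but the thresholds, page database layout, and function names are our own assumptions.

```python
# Minimal sketch of page recognition + 6-DOF pose estimation for an augmented
# book, assuming a calibrated camera (K, dist) and one reference image per page.
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def build_page_database(page_images):
    """Extract ORB keypoints and descriptors for each page's reference image."""
    db = []
    for img in page_images:
        kp, des = orb.detectAndCompute(img, None)
        db.append((kp, des, img.shape[1], img.shape[0]))  # keypoints, descriptors, width, height
    return db

def recognise_and_estimate_pose(frame, db, page_size, K, dist):
    """Identify the visible page and estimate its 6-DOF pose.
    page_size: (width, height) of the printed page in metres;
    K, dist: camera intrinsic matrix and distortion coefficients."""
    kp_f, des_f = orb.detectAndCompute(frame, None)
    if des_f is None:
        return None
    # Page recognition: the page whose descriptors match the frame best wins.
    best_page, best_matches = -1, []
    for i, (kp_p, des_p, w, h) in enumerate(db):
        matches = matcher.match(des_p, des_f)
        if len(matches) > len(best_matches):
            best_page, best_matches = i, matches
    if len(best_matches) < 15:                 # too little evidence for any page
        return None
    kp_p, _, w, h = db[best_page]
    # 6-DOF pose: map page-image pixels to metric points on the page plane
    # (z = 0) and solve a PnP problem against the matched frame keypoints.
    sx, sy = page_size[0] / w, page_size[1] / h
    obj_pts = np.float32([[kp_p[m.queryIdx].pt[0] * sx,
                           kp_p[m.queryIdx].pt[1] * sy, 0.0] for m in best_matches])
    img_pts = np.float32([kp_f[m.trainIdx].pt for m in best_matches])
    ok, rvec, tvec, _ = cv2.solvePnPRansac(obj_pts, img_pts, K, dist)
    return (best_page, rvec, tvec) if ok else None
```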

Hybrid Visual Tracking for Augmented Books

The augmented book is a system that augments a book with multimedia elements to provide additional educational value or amusement. A book contains many pages, often with duplicated designs, so tracking a book is quite difficult. For the augmented book, we propose a hybrid visual tracking method which merges the merits of two traditional approaches, fiducial marker tracking and markerless tracking. The new method does not cause visual discomfort and stabilizes camera pose estimation in real time.
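Purely as an illustration of the fallback idea behind a hybrid tracker (the actual method described above is more involved), a minimal sketch might prefer the fiducial-marker pose whenever the marker is confidently detected and otherwise fall back to the markerless estimate. The confidence threshold and class interface below are assumptions.

```python
# Illustrative hybrid-tracking skeleton: marker pose when available and
# confident, markerless pose otherwise, last good pose as a final fallback.
class HybridTracker:
    def __init__(self, min_marker_conf=0.6):
        self.min_marker_conf = min_marker_conf
        self.last_pose = None              # 4x4 camera pose kept for fallback

    def update(self, marker_pose, marker_conf, markerless_pose):
        """Each pose is a 4x4 matrix or None; marker_conf is in [0, 1]."""
        if marker_pose is not None and marker_conf >= self.min_marker_conf:
            self.last_pose = marker_pose
        elif markerless_pose is not None:
            self.last_pose = markerless_pose
        return self.last_pose
```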
The Real-time Posture and Gesture Recognition for an Intelligent Environment

We have developed a real-time posture and gesture recognition system for an intelligent environment. Our system robustly tracks five body parts (the head, two hands, and two feet) and recognizes gestures from them.
Musique

Do you remember the movie "Big"? One of the most impressive scenes in the movie is the one in which the actor plays a keyboard on the floor with his feet. We were motivated by that scene and recreated it using AR. Although no real keyboard exists in the real world, when a player looks at the screen in front of him, he sees a virtual keyboard augmented on the floor and played with his feet.
New Chorongi

We have developed the new Chorongi, which extends the previous Chorongi work in various ways: we can track multiple persons and objects, recognize persons' gestures and voice, and interact with Chorongi in real time. Besides simple actions such as walking, sitting, lying, sniffing, and barking, Chorongi can perform several complex scenarios: it follows only its owner among several persons, follows a rolling real ball, moves toward a sitting person, pretends to die when someone pretends to shoot, and passes between a user's legs when the user stretches them.
The 2nd version of Marker Recognition in e-Learning

We developed the 2nd version of the marker recognition system for e-Learning. The system recognizes both iconic patterns, which are intuitive for users, and bit patterns, which provide a sufficient number of IDs. Compared to the 1st version, its marker size (2.6 cm x 2.6 cm) is much smaller, it recognizes more markers, and it adapts to illumination changes.
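A minimal sketch of the bit-pattern half of such a marker reader is given below, assuming the marker has already been detected and rectified to a square image. The 6x6 grid, the simple row-major ID encoding, and the adaptive-threshold parameters are illustrative assumptions, not the system's actual design.

```python
# Decode a rectified square marker into an integer ID; adaptive thresholding
# keeps the bit cells readable under uneven illumination.
import cv2

GRID = 6  # interior bit cells per side (assumed)

def decode_marker(rectified, grid=GRID):
    gray = cv2.cvtColor(rectified, cv2.COLOR_BGR2GRAY) if rectified.ndim == 3 else rectified
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY, 31, 5)
    h, w = binary.shape
    cell_h, cell_w = h // grid, w // grid
    marker_id = 0
    for r in range(grid):
        for c in range(grid):
            cell = binary[r * cell_h:(r + 1) * cell_h, c * cell_w:(c + 1) * cell_w]
            bit = 1 if cell.mean() > 127 else 0   # mostly white cell -> 1
            marker_id = (marker_id << 1) | bit    # row-major bit packing (assumed)
    return marker_id
```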
Ghost Hunter

Ghost Hunter is a handheld augmented reality game system with a dynamic environment which consists of some movable structures and a controller that enables changes in the virtual world.
The 1st version of Marker Recognition in e-Learning

We developed a marker recognition system for e-Learning, which requires robustness against illumination change, partial occlusion, and the various cameras found in such environments. It recognizes 256 fixed marker patterns and several iconic marker patterns at a small size (20 x 20 pixels).
Mr. Maze

Mr. Maze is a simple mixed reality game motivated by the classic game "Pac-Man"; it makes use of both fiducial marker and markerless tracking. The game consists of two small boards and a large board to which a tilt sensor is attached. We first select the male or female character by placing one of the two small boards near the large board. While playing, the character is steered by tilting the large board in the desired direction.


Human Image Recognition & Synthesis

The human image is an image concerned with human appearance and behavior, such as the face, facial expressions, and gestures. Research on human image recognition and synthesis uses computer vision technologies to detect human appearance and behavior in images and to recognize and track the modeled information, and, combined with 3D graphics modeling and rendering technologies, it enables the generation of realistic human images.

Human Face Motion Analysis

We presented a framework that simultaneously estimates the rigid and the non-rigid motions of the human face. We first test whether the face motion is rigid; in the case of rigid motion, the translation and rotation parameters are estimated, otherwise the non-rigid motion parameters are estimated based on the SMD model using optical flow.
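The sketch below illustrates only the rigid/non-rigid decision step in a simplified form (it does not implement the SMD model): face feature points are tracked with optical flow, a rigid 2D similarity transform is fitted, and a large fitting residual is taken as evidence of non-rigid (expression) motion. The threshold and function names are assumptions.

```python
# Simplified rigid vs non-rigid test on tracked face feature points.
import cv2
import numpy as np

def classify_motion(prev_gray, curr_gray, prev_pts, residual_thresh=2.0):
    """prev_pts: Nx1x2 float32 face feature points from the previous frame."""
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, prev_pts, None)
    good_prev = prev_pts[status.ravel() == 1]
    good_curr = curr_pts[status.ravel() == 1]
    if len(good_prev) < 4:
        return "unknown", None
    # Rotation + translation (+ uniform scale) fit: rigid head motion should
    # explain the flow well, expression changes should not.
    M, _ = cv2.estimateAffinePartial2D(good_prev, good_curr)
    if M is None:
        return "non-rigid", None
    pred = cv2.transform(good_prev.reshape(-1, 1, 2), M).reshape(-1, 2)
    residual = np.linalg.norm(pred - good_curr.reshape(-1, 2), axis=1).mean()
    return ("rigid" if residual < residual_thresh else "non-rigid"), M
```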

Image-based 3D Face Modeling and Expression Animation

- Deformation of a 3D face model
- Deformation of muscle structure in the polygonal 3D face model
- 3D texture map generation
- Texture mapping with 3D face texture map



3D Cyber Character Animation

Oh! My Baby

Oh! My Baby is a full 3D real-time parenting simulation game, developed by Adamsoft Co., which allows you and your mate to create and raise a baby through a variety of events and games. The baby character bears all the distinguishing features of the parents. Your virtual baby recognizes your voice and, depending on how you rear it, develops its own characteristic traits.
Puppeteer

Puppeteer, a 3D cyber character animation authoring tool, was developed with Adamsoft Co. as an application of the general human head/face and body model generation tool.

- Facial animation with simplified action unit
- Real-time motion control
- Smooth skinning
- Real-time lip-sync
- Non-linear character animation editing
- Script-based digital video production
General Human Head/Body Model Generation Tool
This tool focuses on a method for modeling a variety of human heads and bodies from a single general model. Low-level parameters of the human model can be edited directly, and high-level parameters control groups of them to produce various head/body models; a toy sketch of this two-level parameterization follows the lists below.

Modeling Demo
  • Human head/face model generation
  • Human body model generation
Design Issues
  • User-friendly interface
  • Hierarchical structure
  • MPEG-4 SNHC standard
Application Fields
  • Avatar making in multimedia communication
  • Producing computer game characters
  • 3D cyber character animation
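
The sketch below illustrates the two-level parameterization in a blend-shape style representation; this is only an assumption about how such a tool could be organized, and all parameter names and mappings are made up.

```python
# High-level controls expand into weighted low-level morph targets that
# deform the vertices of a general base model.
import numpy as np

def morph(base_vertices, morph_targets, low_level_weights):
    """base_vertices: (V,3); morph_targets: dict name -> (V,3) vertex offsets."""
    v = base_vertices.copy()
    for name, w in low_level_weights.items():
        v += w * morph_targets[name]
    return v

def apply_high_level(high_level, mapping):
    """Expand high-level controls (e.g. 'head_width') into low-level weights."""
    low = {}
    for hl_name, value in high_level.items():
        for ll_name, gain in mapping[hl_name]:
            low[ll_name] = low.get(ll_name, 0.0) + gain * value
    return low

# Tiny usage example with a 4-vertex "head" and made-up parameters.
base = np.zeros((4, 3))
targets = {"jaw_wide": np.array([[1, 0, 0], [-1, 0, 0], [0, 0, 0], [0, 0, 0]], float),
           "skull_tall": np.array([[0, 0, 0], [0, 0, 0], [0, 1, 0], [0, 1, 0]], float)}
mapping = {"head_width": [("jaw_wide", 0.8)], "head_height": [("skull_tall", 1.0)]}
low = apply_high_level({"head_width": 0.5, "head_height": 0.2}, mapping)
print(morph(base, targets, low))
```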




Web-based Multi-user 3D Animation System

AMINET

This system is realized as a distributed system on the WWW. It brings geographically dispersed users together in a single virtual world and supports interaction both among multiple users and between users and the elements of the virtual world. The avatars that represent the users are rendered more realistically through high-quality animation, for example facial expression animation, gestures, and full-body animation.



Interactive Artificial Life in a Virtual Environment

Chorongi

A cyber character is a kind of artificial life in a virtual environment. To be a life form in the virtual world, cyber characters need sensors and control systems. Current cyber characters lack direct, immersive interaction capabilities, so we add interactivity by giving them smart, reactive capabilities to perceive and communicate with the real world.

System overview
  • Character control and animation system
  • Real-time gesture recognition system
  • Voice recognition system
  • Conceptual Graph
Figures: behavior-based control system, animation overview, gesture recognition; demo scene videos: 1, 2, 3.
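
As a purely illustrative sketch of behavior-based control (not Chorongi's actual control system), the snippet below selects, at each tick, the highest-priority behavior whose trigger condition holds in the perceived world state; the behaviors and percept names are made up.

```python
# Behaviour selection: highest-priority behaviour whose condition is satisfied.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Behavior:
    name: str
    priority: int
    condition: Callable[[Dict], bool]   # checks the perceived world state
    action: Callable[[], None]

def select_behavior(behaviors: List[Behavior], world_state: Dict, default: Behavior) -> Behavior:
    candidates = [b for b in behaviors if b.condition(world_state)]
    return max(candidates, key=lambda b: b.priority) if candidates else default

# Tiny usage example with made-up percepts.
idle = Behavior("idle", 0, lambda s: True, lambda: print("wag tail"))
behaviors = [
    Behavior("play dead", 3, lambda s: s.get("gesture") == "shoot", lambda: print("play dead")),
    Behavior("chase ball", 2, lambda s: s.get("ball_moving"), lambda: print("chase ball")),
    Behavior("approach owner", 1, lambda s: s.get("owner_visible"), lambda: print("approach")),
]
select_behavior(behaviors, {"ball_moving": True, "owner_visible": True}, idle).action()
```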


Gesture-based VR Interface

Hand Gesture Recognition System

- Hand feature extraction from one camera
- VR navigation by recognizing hand gesture
- Real-time physically based simulation
- Car navigation simulation in virtual environment


3D Scene Reconstruction

3D Scene Reconstruction

This system is aimed at human motion analysis, modeling, and animation, which includes the following research topics: camera calibration, dynamic image capturing, image-based 3D modeling, mixed reality technology, and human animation.


Level-of-Details (LODs) Modeling of 3D Objects

LODs Modeling of 3D Objects

The marching cube octree (MCO) is a new data structure for representing and generating meshes at various levels of detail (LODs). It combines the data structure of the Marching Cubes algorithm, which generates a mesh from range data, with the octree, a structure widely used in computer graphics. Our LOD model supports adaptive simplification, compression, progressive transmission, view-dependent rendering, and collision detection. Our LOD mesh generation algorithm is faster than previous methods because it references the marching cube octree directly, and the LOD model can be constructed directly from the range data whenever the Marching Cubes algorithm is used to generate the mesh. A toy sketch of the underlying octree-based LOD idea follows the list below.
Advantages of MCO
  • Direct creation from range data to LOD model
  • Reduced rendering time by view dependency
  • 3D object collision detection
  • Progressive transmission
  • Direct LOD mesh generation
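
The toy sketch below illustrates only the underlying octree-based LOD idea, not the MCO algorithm itself: points are bucketed into an octree, each occupied cell keeps an averaged representative vertex, and cutting the tree at a shallower depth yields a coarser level of detail.

```python
# One representative vertex per occupied octree cell at a chosen depth.
import numpy as np

def octree_lod(points, depth):
    points = np.asarray(points, dtype=float)
    lo, hi = points.min(axis=0), points.max(axis=0)
    size = np.maximum(hi - lo, 1e-9)
    n = 2 ** depth                           # cells per axis at this depth
    idx = np.minimum(((points - lo) / size * n).astype(int), n - 1)
    cells = {}
    for key, p in zip(map(tuple, idx), points):
        cells.setdefault(key, []).append(p)
    return np.array([np.mean(ps, axis=0) for ps in cells.values()])

# Example: a deeper cut keeps far more representative vertices (finer detail)
# than a shallow cut (coarser detail).
pts = np.random.rand(10000, 3)
print(len(octree_lod(pts, 2)), len(octree_lod(pts, 5)))
```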