3D Scene Understanding and Imaging Its Situation
--- for Distance Learning and Lecture Archive ---

Minoh Lab, www.kameda-lab.org 2004/10/23


Our research focuses on scene understanding of human working areas using multiple sensors, including multiple cameras. In addition, we are working on intelligent (virtual) cameramen and directors. Video archiving and retrieval are also among our main research topics.


MULVIS-1 MULVIS-1 (MUlti-camera based Video Imaging System) is designed to visualize a lecture in a classroom using multiple (8-12) pan-tilt observation cameras. A virtual director assigns the cameras to virtual cameramen, gives them camera-control instructions, and selects the best video image among them for broadcasting.
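As a minimal sketch of the director's final step (the camera names and quality scores below are illustrative assumptions, not the system's actual selection criteria), the broadcast decision can be modeled as picking the highest-scoring shot among the virtual cameramen:

```python
# Hypothetical sketch of the virtual director's selection step: each virtual
# cameraman reports a quality score for its current shot, and the director
# puts the best one on air. Scores and camera ids are made up for illustration.
def select_broadcast(shots):
    """shots: dict of camera id -> shot quality score; returns the camera to air."""
    return max(shots, key=shots.get)

on_air = select_broadcast({"cam1": 0.4, "cam2": 0.9, "cam3": 0.7})
```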


MULVIS-2 While the virtual director in MULVIS-1 determines which camera films the object in focus in the classroom, MULVIS-2 (Multi User Live Video Imaging System) can provide a customized video stream to each remote student according to his/her preferences. This is a kind of resource assignment problem, since the number of pan-tilt cameras may be insufficient to realize all the videos that the students request.
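One simple way to see the resource assignment problem is a greedy sketch (the student names, targets, and greedy policy below are assumptions for illustration, not the actual MULVIS-2 algorithm): when cameras are scarce, cover the most-requested targets first and let students sharing a target share a stream.

```python
# Hedged sketch of camera assignment under scarcity: remote students request
# targets, but there are fewer pan-tilt cameras than distinct requests, so we
# greedily serve the most popular targets. All names are illustrative.
from collections import Counter

def assign_cameras(requests, num_cameras):
    """requests: dict of student -> requested target; returns target -> camera id."""
    demand = Counter(requests.values())
    # serve the most-requested targets while cameras remain
    chosen = [target for target, _ in demand.most_common(num_cameras)]
    return {target: cam for cam, target in enumerate(chosen)}

requests = {"alice": "speaker", "bob": "whiteboard",
            "carol": "speaker", "dave": "slides"}
assignment = assign_cameras(requests, 2)
```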

Sensor Fusion Approach to Speaker Detection for Imaging

Multisensor This is an improved version of MULVIS-1. It uses not only image processing technology but also acoustic analysis with a microphone array and a position sensor to estimate the speaker's status more precisely.
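The fusion idea can be sketched with inverse-variance weighting (a standard technique; the sensor values, variances, and function below are illustrative assumptions, not the system's actual estimator): each sensor contributes its position estimate weighted by how precise it is.

```python
# Minimal sketch of sensor fusion for speaker localization, assuming each
# sensor reports a position estimate and a variance: combine them by
# inverse-variance weighting. The numbers below are made up for illustration.
def fuse(estimates):
    """estimates: list of (position, variance); returns (fused position, fused variance)."""
    weights = [1.0 / var for _, var in estimates]
    fused = sum(w * pos for (pos, _), w in zip(estimates, weights)) / sum(weights)
    fused_var = 1.0 / sum(weights)  # fused estimate is tighter than any single sensor
    return fused, fused_var

# e.g. vision is precise, the microphone array coarse, the position sensor in between
pos, var = fuse([(2.0, 0.1), (2.6, 0.9), (2.2, 0.3)])
```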


CARMUL CARMUL (Concurrent Automatic Recording for MUltimedia Lecture) captures and records all kinds of information that emerge inside the classroom during a lecture, such as handwriting on the whiteboard, slides with their switching times, voices, and multiple videos of objects in the room.

Minimizing Camera Control Adjustment to Keep Specified Camera-work

Optimal camera control It is uncomfortable to watch jerky video in which a virtual cameraman frequently changes rotation direction and speed to follow a moving object. We propose a new method that dramatically suppresses these frequent adjustments by predicting camera and object status.
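A toy sketch of the idea (the linear predictor, dead-band threshold, and angle values are assumptions for illustration, not the proposed method itself): predict where the object is heading, and re-issue a pan command only when the predicted error exceeds a threshold, so one larger adjustment replaces many small ones.

```python
# Illustrative dead-band pan control: instead of tracking every small movement,
# predict the object's next direction and adjust the camera only when the
# predicted angular error exceeds a dead-band. Values are made up.
def pan_commands(observed_angles, dead_band=5.0):
    """observed_angles: object direction (deg) per frame; returns issued pan targets."""
    commands = []
    cam = observed_angles[0]  # camera starts aimed at the object
    for i in range(1, len(observed_angles)):
        # simple linear prediction of the object's next direction
        predicted = 2 * observed_angles[i] - observed_angles[i - 1]
        if abs(predicted - cam) > dead_band:
            cam = predicted        # one larger adjustment...
            commands.append(cam)   # ...instead of a correction every frame
    return commands

# slow drift is ignored; the fast motion at the end triggers a few big pans
cmds = pan_commands([0, 1, 2, 3, 10, 17, 24])
```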

Brightness Control of Individual Objects inside a 3D Scene

Active light control It is difficult to image two objects of different brightness in one video frame. We propose a light control method that illuminates each object individually in order to keep its brightness in good condition.
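A minimal sketch of per-object lighting (the one-light-per-object pairing, target value, and proportional update are illustrative assumptions, not the proposed controller): scale each object's light so its measured brightness moves toward a common target.

```python
# Hedged sketch of active light control, assuming each object has its own
# controllable light: rescale each light's power so the object's measured
# brightness approaches a shared target level. All numbers are illustrative.
def adjust_lights(powers, measured, target=120.0):
    """Scale each light's power by (target / measured brightness) of its object."""
    return [p * (target / m) for p, m in zip(powers, measured)]

# a dim object gets more light, an over-lit one gets less,
# so both converge toward the same on-screen brightness
new_powers = adjust_lights([50.0, 50.0], measured=[60.0, 240.0])
```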


Please refer to the appropriate articles in these lists.

Join us!

This research has been conducted in Minoh Lab at Kyoto University. I am now pursuing more general research issues in the massive sensing project. Please refer to it if you are interested in my new approaches.

People who are interested in joining the projects may contact Kameda or Prof. Minoh.


As of 2003/03/31.

Leader :
MINOH Michihiko

Researchers :
KAMEDA Yoshinari (1996-2003)
NISHIGUCHI Satoshi (2001-)
YAGI Keisuke (2001-)
MUKUNOKI Masayuki (2001)
Tomasz M. Rutkowski (2002-)
KAKUSHO Koh (2003-)

Students :
KAMIYAMA Daisuke (1996), KICHIYOSHI Kentaro (1996)
MIYAZAKI Hideaki (1996-1998)
MIYATA Shinji (1998)
ISHIZUKA Kentaro (1998-1999)
MURAKAMI Masayuki (1999-2001)
HIGASHI Kazuhide (1999-2001)
ATARASHI Yasutaka (2000-2002), SHINGU Jun (2000-2002)
YOKOO Masahiro (2002-2003),
NIIZU Keiji (2002-), MIKI Kenji (2002-2003)

kameda@image.esys.tsukuba.ac.jp, kameda@media.kyoto-u.ac.jp