The contents of this page are obsolete. The page is preserved at this URL for historical reasons only. The original URL was http://www.mm.media.kyoto-u.ac.jp/members/kameda/...
Please visit www.kameda-lab.org for recent information. (2002/12/06, kameda@ieee.org)




Situation features

The dynamic situation is also described by situation features. The feature extraction methods differ depending on the real space and the activities in question. Most of the situation features are extracted via image processing, because an image sensor does not interfere with the human activities performed in the real space. Since the dynamic situation varies in real time according to the activities, the situation features must also be extracted in real time.
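As a minimal sketch of image-based feature extraction of the kind described above, the fragment below estimates an activity measure from the fraction of pixels that change between two consecutive grayscale frames. The function name, threshold, and frame format are illustrative assumptions, not details of the original system.

```python
import numpy as np

def activation_degree(prev_frame: np.ndarray, frame: np.ndarray,
                      thresh: int = 25) -> float:
    """Hypothetical activity measure: the fraction of pixels whose
    intensity changed noticeably between two 8-bit grayscale frames."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return float(np.mean(diff > thresh))

# Synthetic frames: half of the pixels change between frames.
prev = np.zeros((4, 4), dtype=np.uint8)
cur = prev.copy()
cur[:2, :] = 200
print(activation_degree(prev, cur))  # 0.5
```

Because only per-pixel differences are computed, such a measure can run at frame rate, which matches the real-time requirement stated above.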

For lectures, we use three kinds of situation features: the lecturer's location, the lecturer's voice level, and the activation degree of the student group. The situation feature description of the dynamic situation defined in Table 1 is shown in Table 3. These tables constitute what we call the knowledge representation of the lecture.
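The three situation features named above can be sketched as a single per-frame record. The field names, units, and the example predicate below are hypothetical assumptions for illustration; the original paper does not specify this representation.

```python
from dataclasses import dataclass

@dataclass
class SituationFeatures:
    """Hypothetical per-frame record of the three situation features."""
    lecturer_location: tuple  # (x, y) position of the lecturer (assumed units)
    voice_level: float        # lecturer's voice level, assumed normalized to 0..1
    activation_degree: float  # activity of the student group, assumed 0..1

    def students_active(self, threshold: float = 0.5) -> bool:
        """Illustrative predicate: the student group counts as 'active'
        when its activation degree exceeds a chosen threshold."""
        return self.activation_degree > threshold

# One frame of the real-time feature stream.
frame = SituationFeatures(lecturer_location=(2.0, 1.5),
                          voice_level=0.8,
                          activation_degree=0.3)
print(frame.students_active())  # False under the 0.5 threshold
```

A record like this, updated every frame, is the kind of input a camera-selection rule could consult against the knowledge representation in the tables.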

The extraction methods for the situation features are explained in Section 4.2.



Yoshinari Kameda
Fri Oct 1 16:26:35 JST 1999