The contents of this page are obsolete. The page is preserved at this URL for historical reference only. The original URL was http://www.mm.media.kyoto-u.ac.jp/members/kameda/...
Please visit www.kameda-lab.org for recent information. (2002/12/06, kameda@ieee.org)




Calculation of situation features

 

We implemented a prototype system in a lecture room of the Graduate School of Informatics at Kyoto University.

The target space is imaged by four Hi8 video cameras mounted on pan/tilt units at the centers of the walls, and by four SONY EVI-G20 video cameras fixed at the corners of the lecture room (Figure 3).

  
Figure 3: Camera layout in the lecture room

The system uses three kinds of situation features to detect the five dynamic situations shown in Table 1: the lecturer's location, the lecturer's voice level, and the activation degree of each student group. They are listed in Table 3.

  
Table 3: Situation features and A-components

The lecturer's location is measured by image-based triangulation using the two active cameras (b) and (c) in Figure 3.
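The paper does not spell out the triangulation itself; a minimal sketch, assuming each active camera reports a bearing angle toward the lecturer in a shared floor-plane coordinate frame (camera positions, angles, and the `triangulate` helper are illustrative, not from the paper):

```python
import math

def triangulate(cam1, theta1, cam2, theta2):
    """Locate a target on the floor plane from two bearing rays.

    cam1, cam2: (x, y) camera positions; theta1, theta2: bearing
    angles in radians.  Solves cam1 + t*d1 = cam2 + s*d2 for t,
    then returns the intersection point.
    """
    x1, y1 = cam1
    x2, y2 = cam2
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    # 2-D cross product of the ray directions; zero means parallel rays.
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:
        raise ValueError("rays are parallel; no unique intersection")
    t = ((x2 - x1) * d2[1] - (y2 - y1) * d2[0]) / denom
    return (x1 + t * d1[0], y1 + t * d1[1])
```

For example, a camera at the origin seeing the lecturer at 45° and a camera at (4, 0) seeing the lecturer at 135° place the lecturer at (2, 2).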

To obtain the lecturer's voice level, the lecturer is asked to wear a wireless microphone. The input level at the A/D converter is used directly as the voice level.
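The paper uses the raw A/D input level directly; as one plausible stand-in for "input level", a short sketch that reduces one buffer of A/D samples to a single RMS value (the buffer-based RMS is an assumption, not the paper's exact definition):

```python
def voice_level(samples):
    """Root-mean-square amplitude of one buffer of A/D samples.

    samples: a sequence of signed sample values from the converter.
    Returns 0.0 for an empty buffer.
    """
    if not samples:
        return 0.0
    return (sum(s * s for s in samples) / len(samples)) ** 0.5
```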

We divide the student desks into six groups and call the students at one desk group a student group. The activation degree of a student group is represented by the area of the subtraction region in the image of that group seen from the front, i.e., the number of pixels that change between consecutive frames. Cameras (a) and (d) in Figure 3 are assigned to measure this situation feature.
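The subtraction-area measure can be sketched as simple frame differencing; the grey-level threshold and the plain nested-list frame representation are illustrative assumptions:

```python
def activation_degree(prev_frame, curr_frame, threshold=20):
    """Area (pixel count) of the inter-frame subtraction region.

    prev_frame, curr_frame: equal-sized 2-D lists of grey levels.
    A pixel belongs to the subtraction region when its grey level
    changes by more than `threshold` between the two frames.
    """
    area = 0
    for row_prev, row_curr in zip(prev_frame, curr_frame):
        for p, c in zip(row_prev, row_curr):
            if abs(c - p) > threshold:
                area += 1
    return area
```

A larger area indicates more student motion in that desk group, hence a higher activation degree.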



Yoshinari Kameda
Fri Oct 1 16:26:35 JST 1999