Research
Towards an ambient sensor network
February 19, 2011

One of the starting points for fostering ubiquitous learning support is determining the environmental and personal context of the learning process. A promising way to get there is the use of sensors that measure relevant metrics in situ. Such sensors can be roughly clustered into two categories, distinguished by the contextual information they convey:

(1) Sensors determining personal context information are usually installed as close as possible to the measured entity. The most popular ones in this category are currently, without a doubt, location sensors, e.g. the built-in GPS receivers in modern smartphones. These sensors continuously request the location of the device and thus, under certain conditions, also the location of the device owner, i.e. the learner.
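
To make that tangible: on Android, such continuous location updates are delivered by the platform's LocationManager to a registered LocationListener. The following is only a minimal illustrative sketch (the class name and update intervals are made up, and ACCESS_FINE_LOCATION has to be declared in the manifest), not code from one of our prototypes:

import android.app.Activity;
import android.content.Context;
import android.location.Location;
import android.location.LocationListener;
import android.location.LocationManager;
import android.os.Bundle;

// Illustrative sketch: an Activity registering for continuous GPS fixes.
public class LearnerLocationActivity extends Activity implements LocationListener {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        LocationManager lm = (LocationManager) getSystemService(Context.LOCATION_SERVICE);
        // Request a fix at most every 30 seconds or every 10 meters of movement.
        lm.requestLocationUpdates(LocationManager.GPS_PROVIDER, 30000, 10, this);
    }

    @Override
    public void onLocationChanged(Location location) {
        // The learner's current position, usable as personal context information.
        double latitude = location.getLatitude();
        double longitude = location.getLongitude();
    }

    @Override public void onStatusChanged(String provider, int status, Bundle extras) {}
    @Override public void onProviderEnabled(String provider) {}
    @Override public void onProviderDisabled(String provider) {}
}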

(2) The range of available environmental sensors is wide, from outdoor temperature sensors to indoor motion detectors. An increasing number of these sensors are embedded in the environment in such a way that the collected data can easily be used to determine what is happening in the surroundings, exposing contextual information about the learning environment.

It becomes really interesting when the sensor data gathered in both categories are combined: mapping the personal and the environmental context onto one another makes it possible to deduce information relevant to supporting the learning process. A drawback of dealing with different sensors, however, is the complex and non-uniform data they produce. The challenge is to turn each sensor into a valuable independent information source and then to ease the aggregation and utilization of the gathered data. Technically, this is where concepts like sensor networks and information fusion come into play: integrating sensors into a manageable and extensible network that offers the possibility to combine multiple data sources while still obtaining valuable, refined information.
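
To make the aggregation idea a bit more concrete, one option is to normalize every sensor's output into a common reading format before it enters the network. The class below is purely a hypothetical sketch of such a format (names and fields are invented for illustration), not part of an existing implementation:

import java.util.Date;

// Hypothetical uniform sensor reading, so that heterogeneous sensors
// (GPS, motion, temperature, ...) can later be aggregated and fused.
public class SensorReading {

    public enum Category { PERSONAL, ENVIRONMENTAL }

    private final String sensorId;    // e.g. "medialab.display1.motion"
    private final Category category;  // personal vs. environmental context
    private final String quantity;    // e.g. "motion", "temperature"
    private final double value;       // normalized measurement value
    private final String unit;        // e.g. "celsius", "boolean"
    private final Date timestamp;     // when the measurement was taken

    public SensorReading(String sensorId, Category category, String quantity,
                         double value, String unit, Date timestamp) {
        this.sensorId = sensorId;
        this.category = category;
        this.quantity = quantity;
        this.value = value;
        this.unit = unit;
        this.timestamp = timestamp;
    }

    public String getSensorId()   { return sensorId; }
    public Category getCategory() { return category; }
    public String getQuantity()   { return quantity; }
    public double getValue()      { return value; }
    public String getUnit()       { return unit; }
    public Date getTimestamp()    { return timestamp; }
}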

Talking about whether and how to build up such a sensor network in the Medialab, we noticed that we already have a couple of sensors in place. Most of them emerged either as an integral part or as a side product of our research and development activities. A good example is the use of the built-in location and orientation sensors (compass, accelerometer) in the mobile augmented reality prototypes (ARLearn and Locatory) we developed for Android. There, the personal context information is required to map the existing virtual information correctly onto the environment surrounding the person using the mobile device. The other way round, in my currently running experiment with ambient information displays, we use motion detection and face detection to recognize users in the proximity of the display as well as their interest in it. Both sensors are implemented in Processing, using the display's built-in webcam (see the sketches below). The motion detector simply computes the difference between consecutive image frames, while the face detector uses the open source computer vision library OpenCV for Processing to detect faces within the image.

Back to the envisioned sensor network: we finally decided to start setting up a sensor room within the Medialab, assembling all our available sensors in one place. In addition, we are currently collecting requirements for a reliable and scalable backend solution, and we have started to explore the possibilities of platforms like Arduino, development environments like Processing, tool libraries like openFrameworks, and communication specifications like ZigBee. So there is still a lot of interesting work to do and more blog posts to come reporting on the developments towards an ambient sensor network…
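
As a small taste of how these pieces could fit together, an Arduino-attached sensor can already feed values into Processing over the serial port. The sketch below is only an assumption about one possible setup (the port index and the one-value-per-line message format are invented), not an agreed protocol:

import processing.serial.*;

Serial arduinoPort;

void setup() {
  // Assumes the Arduino is the first serial device; adjust the index as needed.
  arduinoPort = new Serial(this, Serial.list()[0], 9600);
}

void draw() {
  while (arduinoPort.available() > 0) {
    // Assumed message format: one numeric sensor value per line.
    String line = arduinoPort.readStringUntil('\n');
    if (line != null) {
      float value = float(trim(line));
      println("sensor value: " + value);
    }
  }
}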