Abstract: Many applications utilize deeply embedded sensors and actuators that
are tightly coupled with the physical environment in order to perform their
functionality. Sensor, actuators and embedded computation resources used for
implementing such systems usually exhibit regular local configurations, while
the global structure of the subsystems is either not fixed a priori and can
change at runtime or is not known. Examples include systems that use many
randomly distributed sensing boards, each one having a fixed structure of
computation resources and sensing devices, to autonomously detect events and
take proper actions. This paper discusses the requirements of such systems,
their advantages, and the issues involved in developing them. Specifically, we
focus on dynamic adaptation as a distinguishing feature of these systems. This
feature is examined in depth through a collaborative and dynamically adaptive
object-tracking system that has been built in our lab as the
experimental framework of this study. We exploit reconfigurable hardware
devices embedded in a number of networked cameras to achieve this goal.
We justify the need for dynamic adaptation of the system through scenarios and
applications. Experimental results on a set of scenes show that
our system handles different event scenarios effectively through
reconfiguration. Comparison with non-adaptive implementations verifies that
our approach improves the system's robustness to scene variations and
outperforms traditional implementations.