How it works
The mobile processing unit uses a convolutional neural network (CNN) to calculate its current position from a camera image. First, training data consisting of thousands of camera images and their corresponding positions is acquired automatically. This data is then used to further train an existing network and adapt it to the target environment, so that the finished network can determine the position from new images once deployed on the target platforms. To keep the system from degrading over time, updated data is continuously collected on a central processing unit, used for further training of the network, and distributed back to the mobile processors.
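The following is a minimal sketch of this adaptation step, assuming PyTorch and torchvision, a pretrained ResNet-18 standing in for the "existing network", and randomly generated image-position pairs in place of the automatically acquired training data; CNNLok's actual pipeline is not described in detail here.

```python
# Sketch: adapt a pretrained CNN so it regresses a 2-D position from a camera image.
# PyTorch/torchvision, ResNet-18, and the placeholder data are assumptions for illustration.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models

# Start from an existing network (ImageNet weights) and replace the
# classification head with a small regression head for (x, y) coordinates.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

# Placeholder data: camera frames and their reference positions.
# In the described system, these come from the automatic data acquisition step.
images = torch.randn(64, 3, 224, 224)          # camera frames
positions = torch.rand(64, 2) * 100.0          # reference (x, y) positions in metres
loader = DataLoader(TensorDataset(images, positions), batch_size=16, shuffle=True)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Further train the existing network to adapt it to the target environment.
model.train()
for epoch in range(5):
    for batch_images, batch_positions in loader:
        optimizer.zero_grad()
        predicted = model(batch_images)        # forward pass -> estimated (x, y)
        loss = loss_fn(predicted, batch_positions)
        loss.backward()
        optimizer.step()

# When deployed, the fine-tuned network estimates the position of a new image.
model.eval()
with torch.no_grad():
    estimate = model(torch.randn(1, 3, 224, 224))
```

In this setup, the continuous retraining described above would simply repeat the loop on data collected later and push the updated weights back to the mobile processing units.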
System components of CNNLok
The mobile processing unit is typically either a simple smartphone or an ARM- or Intel-based single-board computer with a standard camera. Thanks to this versatility, the platform supports application scenarios that cannot be covered by traditional, infrastructure-based solutions. Adapted motion models and special preprocessing of the collected data allow new data to be fed into an existing positioning system. Because continuously learning more about its environment is processing-intensive, the system needs to be connected to other hardware via a network, a docking station, or similar means. Depending on how dynamic the area is, there may also be a need for a central server with powerful standard deep learning hardware, such as graphics cards or special vector processors. This central processor would take over the duties of the mobile processing units at appropriate times, for example when they need to be charged.
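As an illustration of how a motion model can be combined with per-frame CNN position estimates on the mobile unit, the sketch below runs a simple constant-velocity Kalman filter over noisy position fixes. The state layout, noise values, and frame rate are illustrative assumptions, not CNNLok's actual motion model.

```python
# Sketch: smooth per-frame CNN position estimates with a constant-velocity
# motion model (2-D Kalman filter). All parameters are illustrative assumptions.
import numpy as np

dt = 0.1                                   # assumed time between camera frames (s)
F = np.array([[1, 0, dt, 0],               # state transition for [px, py, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],                # the CNN observes position only
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 0.01                       # process noise (motion uncertainty)
R = np.eye(2) * 0.5                        # measurement noise (CNN estimate error)

x = np.zeros(4)                            # state: position and velocity
P = np.eye(4)                              # state covariance

def fuse(cnn_position):
    """Fold one CNN position fix into the motion model and return the smoothed (x, y)."""
    global x, P
    # Predict where the unit should be, given its last estimated velocity.
    x = F @ x
    P = F @ P @ F.T + Q
    # Correct the prediction with the CNN measurement.
    y = np.asarray(cnn_position, dtype=float) - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x[:2]

# Example: noisy CNN fixes along a straight corridor.
for t in range(20):
    fix = np.array([t * 0.5, 2.0]) + np.random.normal(0.0, 0.3, size=2)
    smoothed = fuse(fix)
```

A filter of this kind is light enough to run on a smartphone or single-board computer, while the training workload described above remains with the central server.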