Indoor pattern recognition in wayfinding for vision impaired people using restricted resources

According to World Health Organization records, 285 million people worldwide are vision impaired; approximately 39 million are totally blind and 246 million have low vision. Wayfinding is challenging for vision impaired people. The project develops deep learning algorithms to assist real-time navigation for vision impaired people. The algorithms must be computationally efficient yet accurate when classifying obstructions and pathways in poor-quality images captured under shadows, blurring and low-light conditions. The developed algorithms attempt to classify static and dynamic obstacles in both indoor and outdoor environments, such as walkways, sidewalks, stairways and path edges. The deep learning algorithms will be implemented to build a comprehensive real-time navigation system using a range of sensors in a handheld device. The system will assist vision impaired people with navigation.
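As an illustration of the kind of classifier involved, the sketch below builds a small TensorFlow/Keras convolutional network for labelling low-resolution images with obstacle or pathway classes. The class names, image size and layer sizes are assumptions for illustration only, not the project's actual model.

```python
# Minimal sketch (assumptions: 96x96 RGB input, four illustrative classes).
# This is not the project's actual architecture.
import tensorflow as tf

NUM_CLASSES = 4  # e.g. walkway, sidewalk, stairway, path edge (assumed labels)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(96, 96, 3)),
    tf.keras.layers.Rescaling(1.0 / 255),           # normalise pixel values
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```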

Principal investigator

Iain Murray kit.chan@curtin.edu.au

Area of science

Computing

Systems used

Zeus

Applications used

TensorFlow (Python), gplearn (Python)
Partner Institution: Curtin University | Project Code:

The Challenge

We did not have sufficient computing facilities or GPU machines to run the algorithms, which are developed in Python with TensorFlow and gplearn. Running the programs took a long time.

The Solution

Because the programs require long computation times and we lacked GPU machines of our own, we moved our computation to Zeus. Simulation and computation times are much shorter when Zeus is used.
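For context, a training script can confirm that it is actually using the GPUs available on a compute node before committing to a long run; a minimal TensorFlow check is sketched below. This is a generic illustration, not a script taken from the project, and the toy model inside the strategy scope is a placeholder.

```python
# Minimal sketch: confirm GPU availability before a long training run.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
print(f"Visible GPUs: {len(gpus)}")

if gpus:
    # Spread training across all visible GPUs on the node.
    strategy = tf.distribute.MirroredStrategy()
else:
    # Fall back to the default (CPU) placement.
    strategy = tf.distribute.get_strategy()

with strategy.scope():
    # Build and compile the model inside the strategy scope
    # so its variables are mirrored across devices.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(96, 96, 3)),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(4, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```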

The Outcome

We have implemented two Python libraries, TensorFlow and gplearn, on Zeus and run our algorithms there.
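For reference, gplearn exposes a scikit-learn style interface for genetic programming; the sketch below fits a SymbolicClassifier on synthetic placeholder features. The data and parameter values are assumptions for illustration, not the project's configuration.

```python
# Minimal sketch: genetic-programming classifier with gplearn.
# Synthetic features stand in for real image-derived features.
from gplearn.genetic import SymbolicClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Two-class toy problem (SymbolicClassifier handles binary labels).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SymbolicClassifier(population_size=500,
                         generations=20,
                         parsimony_coefficient=0.01,
                         random_state=0)
clf.fit(X_train, y_train)

print("Test accuracy:", clf.score(X_test, y_test))
print("Evolved program:", clf._program)
```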