Ethiopian Currency Banknotes for Vision-Impaired People

Published: 9 November 2020| Version 1 | DOI: 10.17632/r76dwc7nnw.1


The Ethiopian Currency Banknotes dataset was collected for my postgraduate thesis in Software Engineering at Addis Ababa Science and Technology University. The general objective of the thesis was to develop a Real-Time Ethiopian Currency Recognition Model for Visually Disabled People Using a Convolutional Neural Network. The Ethiopian currency is called the “Birr”. Ethiopia has five paper banknotes, each with a distinct size, color, and other features: One Birr, Five Birr, Ten Birr, Fifty Birr, and One Hundred Birr. Banknotes of all five denominations, ranging from new to worn, were gathered from different banks, and a large set of custom videos was then recorded to reflect the real-life situation of the vision-impaired community. All data were collected by holding a Samsung Galaxy A10 mobile phone beside the ear of a vision-impaired individual for 10 seconds. To capture real-world challenges, all videos were recorded in uncontrolled environments, deliberately including occlusions such as fingers and shadows as well as varied lighting conditions (moonlight, daylight, artificial light, and transitions from one lighting condition to another). To reduce the time-consuming and tedious task of manual labeling, only 1,700 images of each banknote were selected, for a total of 8,500 images. Ideally, the data should resemble the real world as closely as possible, which is why the custom videos were recorded in the real-world setting of the vision-impaired community. All images in the dataset share the same dimensions: 256 x 256 x 3 (Width x Height x Channels). Every image in the dataset was annotated using the bounding box technique.
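The description does not state how the 1,700 frames per class were chosen from the 10-second videos; as a rough illustration only, one common approach is to sample evenly spaced frames from each recording. The helper below is a hypothetical sketch of that idea, not the author's actual selection method.

```python
def sample_frame_indices(n_frames: int, n_samples: int) -> list:
    """Return evenly spaced frame indices to sample from a video.

    n_frames  -- total frames in the recording (e.g. ~300 for 10 s at 30 fps)
    n_samples -- how many frames to keep for labeling
    """
    if n_samples >= n_frames:
        # Fewer frames than requested samples: keep every frame.
        return list(range(n_frames))
    step = n_frames / n_samples
    return [int(i * step) for i in range(n_samples)]
```

For a 10-second clip at 30 fps, `sample_frame_indices(300, 10)` picks one frame every 30, spreading the samples across lighting changes within the clip.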
For each image, the labeling information was saved as an eXtensible Markup Language (XML) file in PASCAL VOC format, a format compatible with R-CNN and SSD models. The XML file stores important information such as the image name, folder name, image size (width, height, and depth), each bounding box's coordinates (xmin, ymin, xmax, ymax), the class name, and more. I tried to be rational when deciding whether to include or discard banknotes during labeling: banknotes that were fully or partially visible and recognizable were included, whereas banknotes that were unrecognizable because of their size or position were excluded. The dataset is already split into a training set, used to train the model, and a test set, used to evaluate the trained model, at a ratio of 9:1 (90% training, 10% testing). That is, 7,650 images (90% of the dataset) are used to train the model, and 850 images (10%) are used to test it.
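The PASCAL VOC fields listed above (filename, size, class name, and box coordinates) can be read with Python's standard library. This is a minimal sketch assuming the standard VOC tag layout; any extra fields in the dataset's actual files are simply ignored.

```python
import xml.etree.ElementTree as ET

def parse_voc_annotation(xml_text: str) -> dict:
    """Parse one PASCAL VOC annotation into a plain dictionary."""
    root = ET.fromstring(xml_text)
    size = root.find("size")
    boxes = []
    for obj in root.findall("object"):
        bb = obj.find("bndbox")
        boxes.append({
            "class": obj.findtext("name"),
            "xmin": int(bb.findtext("xmin")),
            "ymin": int(bb.findtext("ymin")),
            "xmax": int(bb.findtext("xmax")),
            "ymax": int(bb.findtext("ymax")),
        })
    return {
        "filename": root.findtext("filename"),
        "folder": root.findtext("folder"),
        "width": int(size.findtext("width")),
        "height": int(size.findtext("height")),
        "depth": int(size.findtext("depth")),
        "objects": boxes,
    }
```

Because every image is 256 x 256 x 3, the parsed `width`, `height`, and `depth` should read 256, 256, and 3 for any file in this dataset.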


Steps to reproduce

Download the data and simply use it to train a Convolutional Neural Network model. If any image is modified, the corresponding XML annotation file must be modified together with it.
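Keeping an image and its XML in sync is the most common pitfall: if an image is resized, the stored size and every bounding box must be scaled by the same factors. The sketch below illustrates that bookkeeping for the VOC layout; the function name and the choice of rounding are my assumptions, not part of the dataset's tooling.

```python
import xml.etree.ElementTree as ET

def rescale_annotation(xml_text: str, new_w: int, new_h: int) -> str:
    """Rescale a PASCAL VOC annotation to match a resized image."""
    root = ET.fromstring(xml_text)
    size = root.find("size")
    old_w = int(size.findtext("width"))
    old_h = int(size.findtext("height"))
    sx, sy = new_w / old_w, new_h / old_h

    # Update the stored image size.
    size.find("width").text = str(new_w)
    size.find("height").text = str(new_h)

    # Scale every bounding box: x-coordinates by sx, y-coordinates by sy.
    for bb in root.iter("bndbox"):
        for tag, s in (("xmin", sx), ("xmax", sx), ("ymin", sy), ("ymax", sy)):
            el = bb.find(tag)
            el.text = str(round(int(el.text) * s))

    return ET.tostring(root, encoding="unicode")
```

Running this on an annotation before training guarantees the boxes still land on the banknotes after any resize.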


Object Detection, Object Recognition, Convolutional Neural Network