Most machine vision techniques are unable to tackle the problem of recognising handwritten characters because of the irregularities in each writer's native writing pattern.
Intelligent character recognition (ICR) is an emerging technology that extends optical character recognition (OCR); combined, the two tackle the problem of handwritten character recognition.
Here I’ll show you how to use image processing techniques to build your own handwritten character recognition tool.
1) Basic knowledge of image processing
2) Familiarity with artificial intelligence concepts (support vector machine training)
Install OpenCV for Visual Studio 2015.
Construct the Tool
These are the pre-processing steps:
1) Scan the character set using a scanner in the RGB color space.
2) Apply a segmentation algorithm to separate the characters one by one.
3) Save the character images as .jpg or .png files.
4) Load the image from the saved location.
5) Convert the color image to grayscale, so that every pixel value lies in the range 0-255.
The resulting image will look like Figure 1, and the grayscale image Mat will appear as in Figure 2.
6) Apply binarisation on the grayscale image so that the image mat contains only zeros and ones. For this you can use the following logic: consider T as the threshold value, f(x, y) as the pixel value of the gray image, and g(x, y) as the pixel value of the binary image; then set g(x, y) = 1 when f(x, y) < T (the ink) and g(x, y) = 0 otherwise.
7) Find the effective height and width of the region where the character actually sits inside the cage.
8) For high character recognition accuracy, apply a thinning algorithm to the characters before proceeding with feature extraction.
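Pre-processing steps 6-8 above can be sketched in code. In the sketch below a plain `std::vector` matrix stands in for `cv::Mat` so it has no OpenCV dependency, and Zhang-Suen thinning is used as one common choice, since the text does not name a specific thinning algorithm:

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// A grayscale/binary image as a 2-D vector of ints; stands in for cv::Mat.
using Img = std::vector<std::vector<int>>;

// Step 6: binarisation. Pixels darker than threshold T (ink) become 1
// (ON); everything else becomes 0 (background).
Img binarise(const Img& gray, int T) {
    Img out = gray;
    for (auto& row : out)
        for (auto& px : row) px = (px < T) ? 1 : 0;
    return out;
}

// Step 7: the effective area of the character -- the smallest rectangle
// containing every ON pixel (assumes at least one ON pixel exists).
struct Box { int top, left, height, width; };
Box effectiveBox(const Img& bin) {
    int minR = (int)bin.size(), maxR = -1;
    int minC = (int)bin[0].size(), maxC = -1;
    for (int r = 0; r < (int)bin.size(); ++r)
        for (int c = 0; c < (int)bin[r].size(); ++c)
            if (bin[r][c]) {
                minR = std::min(minR, r); maxR = std::max(maxR, r);
                minC = std::min(minC, c); maxC = std::max(maxC, c);
            }
    return {minR, minC, maxR - minR + 1, maxC - minC + 1};
}

// Step 8: thinning (Zhang-Suen). ON pixels are peeled from the contour
// in two alternating sub-passes until a one-pixel-wide skeleton remains.
void thinZhangSuen(Img& im) {
    int H = (int)im.size(), W = (int)im[0].size();
    auto P = [&](int r, int c) {
        return (r < 0 || c < 0 || r >= H || c >= W) ? 0 : im[r][c];
    };
    bool changed = true;
    while (changed) {
        changed = false;
        for (int pass = 0; pass < 2; ++pass) {
            std::vector<std::pair<int, int>> del;
            for (int r = 0; r < H; ++r)
                for (int c = 0; c < W; ++c) {
                    if (!im[r][c]) continue;
                    // Eight neighbors, clockwise from the top (P2..P9).
                    int p[8] = {P(r-1,c), P(r-1,c+1), P(r,c+1), P(r+1,c+1),
                                P(r+1,c), P(r+1,c-1), P(r,c-1), P(r-1,c-1)};
                    int B = 0, A = 0;  // B: ON neighbors, A: 0->1 transitions
                    for (int i = 0; i < 8; ++i) {
                        B += p[i];
                        if (!p[i] && p[(i + 1) % 8]) ++A;
                    }
                    bool cond = (pass == 0)
                        ? !(p[0] && p[2] && p[4]) && !(p[2] && p[4] && p[6])
                        : !(p[0] && p[2] && p[6]) && !(p[0] && p[4] && p[6]);
                    if (B >= 2 && B <= 6 && A == 1 && cond)
                        del.push_back({r, c});
                }
            for (auto& d : del) im[d.first][d.second] = 0;
            if (!del.empty()) changed = true;
        }
    }
}
```

In a real OpenCV build you would likely use `cv::threshold` for step 6 and, if the opencv_contrib modules are available, `cv::ximgproc::thinning` for step 8.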
9) Feature Extraction
Identify the curvature
This step is for extracting the discrete features of the character to train the support vector machine.
Below I describe the curvature-based feature extraction method, which gives me more than 90% accuracy in recognising Sinhala, English and Tamil letters.
You can implement the code any way you want after understanding the core of the algorithm.
In this method you consider each pixel's contribution to creating a predefined set of curvature patterns.
Figures 4, 5 and 6 illustrate how the possible and optimal curvature patterns are created from the eight neighbor pixels around the middle pixel.
First identify the pixel patterns that can create straight lines.
Then identify the pixel patterns that form arcs.
Besides the above-mentioned patterns, the following curvature groups can also be visible in a character. Here, groups of curvature patterns are created by merging very similar curvatures.
Break the Character Space into Tiles
Here you divide the effective space of the character into 25 tiles, which are considered separately when forming the feature vector matrix for SVM training.
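Assuming the 25 tiles form a 5x5 grid over the effective area (the text does not state the grid shape, so that is my reading), a pixel's tile index can be computed as:

```cpp
#include <algorithm>

// Map a pixel at (r, c), measured relative to the top-left of the
// character's effective area, to one of 25 tiles laid out as a 5x5 grid.
// height and width are the effective dimensions found in step 7.
int tileIndex(int r, int c, int height, int width) {
    int tileRow = std::min(r * 5 / height, 4);  // 0..4
    int tileCol = std::min(c * 5 / width, 4);   // 0..4
    return tileRow * 5 + tileCol;               // 0..24
}
```

The `std::min` clamp keeps border pixels from spilling past tile index 24 when the dimensions do not divide evenly by 5.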
Search for Curvature Patterns
To formulate the feature vector, use the cumulative count of each curvature pattern's occurrences within each tile. Every ON pixel (a pixel with value 1 in the binary image mat) inside the effective area of the character is checked against the curvature patterns defined in Figures 4, 5 and 6. Starting from the top-left corner, select each ON pixel as the middle pixel, then examine its eight adjacent neighbor pixels and record every curvature pattern this middle pixel contributes to.
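A pattern check of this kind can be sketched as follows. The real 12 patterns come from Figures 4, 5 and 6; the two masks below (a horizontal and a vertical line segment) are placeholders for illustration only:

```cpp
#include <utility>
#include <vector>

using Img = std::vector<std::vector<int>>;
// A curvature pattern, written as the 8-neighbor offsets (relative to
// the middle pixel) that must be ON for the pattern to match.
using Pattern = std::vector<std::pair<int, int>>;

// Placeholder patterns -- the actual 12 are defined in Figures 4-6.
const std::vector<Pattern> kPatterns = {
    {{0, -1}, {0, 1}},   // horizontal segment: left and right ON
    {{-1, 0}, {1, 0}},   // vertical segment: top and bottom ON
};

// Does the ON pixel at (r, c), taken as the middle pixel, contribute to
// this pattern? Every required neighbor must also be ON.
bool matches(const Img& bin, int r, int c, const Pattern& pat) {
    int H = (int)bin.size(), W = (int)bin[0].size();
    for (const auto& d : pat) {
        int nr = r + d.first, nc = c + d.second;
        if (nr < 0 || nc < 0 || nr >= H || nc >= W || !bin[nr][nc])
            return false;
    }
    return true;
}
```

Scanning every ON pixel against each pattern, and bumping a per-tile counter on each match, yields the cumulative counts used in the next section.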
Form the Training Matrix
First, keep a record of each pixel's contributed curvatures. Then calculate the cumulative count of each pattern's repeats in each tile.
Now a letter can be represented as a feature vector of 12 columns and 25 rows: the 12 columns represent the 12 different patterns, while the 25 rows represent the 25 tiles. This figure shows a horizontal segment of the feature vector:
For the same letter you'll need many copies (around 1,000) written by different writers for the training process. Then convert each and every letter into its feature vector by applying the steps listed above.
You can use 70% of the copies of a character as the training set, and the remaining 30% as the test set.
The target output for every training vector must be fed to the SVM as a label.
Both the training and the test sets can be saved to .yml files.
Finally, feed the training vector file to the support vector machine together with the corresponding labels to train it; then use the test set to evaluate recognition accuracy.