opencv practice -- credit card

Posted by coho75 on Wed, 02 Feb 2022 22:45:29 +0100

1, Basic function usage

1. Usage of argparse

1.1
Use the argparse module to create an ArgumentParser parsing object, which can be understood as a container that will contain all the information required to parse the command line into Python data types.

ap = argparse.ArgumentParser()  # the container that holds everything needed to parse the command line
ap.add_argument("-i", "--image", default='./images/credit_card_01.png',
                help="path to input image")
ap.add_argument("-t", "--template", default='./ocr_a_reference.png',
                help="path to template OCR image")
args = vars(ap.parse_args())

1.2

ap.add_argument("-i", "--image", default='./images/credit_card_01.png',
                help="path to input image")

We add parameters with the object's add_argument method. The parameters added here are image and template. '-i' and '--image' name the same parameter; default is the value used when the argument is not supplied on the command line, and help is the help text for the parameter. By default argparse also adds a help action (-h/--help) to the parser, which prints the complete help information for all options and exits.
1.3
Finally, the object's parse_args method returns the parsed arguments, and vars() turns the result into a dictionary. One thing to pay attention to: when both '-' and '--' forms are given, argparse uses the long ('--') name as the attribute name, not the short one, but either form works on the command line. The parameter information can then be printed.
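A minimal sketch of this flow (passing an explicit argument list so it runs without a real command line; the file paths are just the defaults from above):

```python
import argparse

ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", default="./images/credit_card_01.png",
                help="path to input image")
ap.add_argument("-t", "--template", default="./ocr_a_reference.png",
                help="path to template OCR image")

# Parse an explicit list instead of sys.argv; '-i' and '--image'
# both map to the dictionary key 'image' (the long name wins).
args = vars(ap.parse_args(["-i", "card.png"]))
print(args["image"])     # card.png
print(args["template"])  # ./ocr_a_reference.png (default, since -t was not given)
```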

2. sorted usage

Function definition (Python 3):
sorted(iterable, *, key=None, reverse=False) --> new sorted list
Parameter description:
iterable: any iterable type
key: a one-argument function applied to each element to extract the sort keyword; defaults to None (elements are compared directly)
reverse: the sort order; reverse=True for descending, reverse=False (the default) for ascending
Return value: a new sorted list
(The cmp comparison-function parameter existed only in Python 2; Python 3 removed it, and functools.cmp_to_key adapts an old-style comparison function into a key.)
The difference between sorted and sort is that list.sort rearranges the list in place, while sorted() returns a new list
1.1 basic sorting

>>> print(sorted([1,2,3,6,5,4]))
[1, 2, 3, 4, 5, 6]

1.2 cmp parameter sorting (Python 2 only; use functools.cmp_to_key in Python 3)

>>> from functools import cmp_to_key
>>> L = [('b',2),('a',5),('c',1),('d',4)]
>>> print(sorted(L, key=cmp_to_key(lambda x, y: x[1] - y[1])))
[('c', 1), ('b', 2), ('d', 4), ('a', 5)]
>>> print(sorted(L, key=lambda x: x[0]))
[('a', 5), ('b', 2), ('c', 1), ('d', 4)]

1.3 key parameter sorting

>>> print(sorted(L, key=lambda x:x[1]))
[('c', 1), ('b', 2), ('d', 4), ('a', 5)]

1.4 reverse parameter sorting

>>> print(sorted([1,2,3,6,5,4], reverse=True))
[6, 5, 4, 3, 2, 1]
>>> print(sorted([1,2,3,6,5,4], reverse=False))
[1, 2, 3, 4, 5, 6]

3. zip() and zip(*) usage

The zip() function takes iterable objects as parameters (intuitively, anything a for loop can iterate over is an iterable: strings, lists, tuples, dictionaries, sets, and so on), packs the corresponding elements of those objects into tuples, and returns an iterator over these tuples (in Python 3; Python 2 returned a list).

If the iterables have different numbers of elements, the result is as long as the shortest one. The * operator can be used to unzip: zip(*...) transposes the packed tuples back.

a = [(1, 2), (2, 3), (3, 4)]  
# Other iterables work the same way, e.g. a = ((1, 2), (2, 3), (3, 4)) or a = "abc"
b = [(5, 6), (7, 8), (9, 9)]
print(zip(a, b))  # <zip object at 0x000001B5EB0CA0C8> (an iterator in Python 3)
ret = list(zip(a, b))  
# Output: [((1, 2), (5, 6)), ((2, 3), (7, 8)), ((3, 4), (9, 9))]
ret1 = list(zip(*ret))  
# Or equivalently: ret1 = list(zip(*zip(a, b)))
# Output: [((1, 2), (2, 3), (3, 4)), ((5, 6), (7, 8), (9, 9))]

4. Usage of items()

The items() method packs each key/value pair of the dictionary into a tuple and returns a view of these tuples (dict_items), which can be converted to a list.

D = {'Google': 'www.google.com', 'Runoob': 'www.runoob.com', 'taobao': 'www.taobao.com'}
print(D.items())
print(list(D.items()))
# Traverse dictionary list
for key, value in D.items():
    print(key, value)
# dict_items([('Google', 'www.google.com'), ('Runoob', 'www.runoob.com'), ('taobao', 'www.taobao.com')])
# Result: [('Google', 'www.google.com'), ('Runoob', 'www.runoob.com'), ('taobao', 'www.taobao.com')]

5. format() usage

print("{0} {1}".format("Hello","World"))
print("{1} {0}".format("Hello","World"))
print("{0} {1} {0}".format("Hello","World"))
print("{1} {1} {0}".format("Hello","World"))

Hello World
World Hello
Hello World Hello
World World Hello

2, Important function usage in OpenCV

1. Thresholding (binarization) operation

type: there are five thresholding types: cv2.THRESH_BINARY; cv2.THRESH_BINARY_INV; cv2.THRESH_TRUNC; cv2.THRESH_TOZERO; cv2.THRESH_TOZERO_INV

cv2.THRESH_BINARY: pixels above the threshold are set to the maximum value, the rest to 0. Example: with threshold 127 and maxval 255, every pixel is compared with 127; values above 127 become 255 and values at or below 127 become 0, so the brighter pixels are the ones that end up white.

cv2.THRESH_BINARY_INV: the inverse of THRESH_BINARY

cv2.THRESH_TRUNC: pixels above the threshold are set to the threshold, the rest are unchanged. Example: with threshold 127, all values greater than 127 become 127; values of 127 or less are unchanged.

cv2.THRESH_TOZERO: pixels above the threshold are unchanged, the rest are set to 0

cv2.THRESH_TOZERO_INV: the inverse of cv2.THRESH_TOZERO
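The five rules above can be illustrated in pure NumPy (a sketch of what each flag computes on a row of pixel values, not the cv2 implementation itself):

```python
import numpy as np

src = np.array([0, 100, 127, 128, 200, 255], dtype=np.uint8)
thresh, maxval = 127, 255

binary     = np.where(src > thresh, maxval, 0).astype(np.uint8)   # THRESH_BINARY
binary_inv = np.where(src > thresh, 0, maxval).astype(np.uint8)   # THRESH_BINARY_INV
trunc      = np.where(src > thresh, thresh, src).astype(np.uint8) # THRESH_TRUNC
tozero     = np.where(src > thresh, src, 0).astype(np.uint8)      # THRESH_TOZERO
tozero_inv = np.where(src > thresh, 0, src).astype(np.uint8)      # THRESH_TOZERO_INV

print(binary)  # [  0   0   0 255 255 255]
print(trunc)   # [  0 100 127 127 127 127]
print(tozero)  # [  0   0   0 128 200 255]
```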


2. Closing operation

# Closing: dilate first, then erode
img = cv2.imread('dige.png')

kernel = np.ones((5,5),np.uint8)
closing = cv2.morphologyEx(img,cv2.MORPH_CLOSE,kernel)
cv_show('closing',closing)

3. Top hat

#Top hat = original input - open operation result 
img = cv2.imread('dige.png')
tophat = cv2.morphologyEx(img,cv2.MORPH_TOPHAT,kernel)
cv_show('tophat',tophat)

4. cv2.getStructuringElement() kernel shapes

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(11,11))
The first parameter of this function is the shape of the kernel; there are three shapes to choose from:
Rectangle: MORPH_RECT;
Cross: MORPH_CROSS;
Ellipse: MORPH_ELLIPSE;
The second and third parameters are the size of the kernel and the position of the anchor, respectively. The returned kernel (a Mat) is then passed to functions such as erode and dilate.

Return value of getStructuringElement:
The anchor position has a default value of Point(-1,-1), which means the anchor is at the center. Only for a cross-shaped element does the shape depend on the anchor position; in the other cases the anchor only affects the offset of the morphological operation's result.
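The three shapes are easy to picture; here is a hand-built NumPy illustration of the masks with a centered anchor (an approximation for intuition, not a call into OpenCV; the ellipse mask in particular is only an inscribed-ellipse approximation of what getStructuringElement returns):

```python
import numpy as np

size = 5
c = size // 2  # centered anchor, i.e. the default Point(-1, -1)

# Rectangle (MORPH_RECT): all ones
rect = np.ones((size, size), dtype=np.uint8)

# Cross (MORPH_CROSS): ones only on the anchor's row and column
cross = np.zeros((size, size), dtype=np.uint8)
cross[c, :] = 1
cross[:, c] = 1

# Ellipse (MORPH_ELLIPSE): ones inside the inscribed ellipse (approximation)
y, x = np.ogrid[:size, :size]
ellipse = ((((x - c) / c) ** 2 + ((y - c) / c) ** 2) <= 1).astype(np.uint8)

print(cross)
```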

5. Contour detection: cv2.findContours(img, mode, method)

cv2.findContours(img,mode,method)
mode:Contour retrieval mode

RETR_EXTERNAL : Retrieve only the outermost contour;
RETR_LIST: Retrieve all contours and save them in a linked list;
RETR_CCOMP: Retrieve all contours and organize them into two layers: the top layer is the outer boundary of each part, and the second layer is the boundary of the cavity;
RETR_TREE: Retrieve all contours and reconstruct the entire hierarchy of nested contours;
method:Contour approximation method

CHAIN_APPROX_NONE: stores all of the contour points; the contour is output as a polygon (a sequence of vertices).
CHAIN_APPROX_SIMPLE: compresses horizontal, vertical, and diagonal segments, keeping only their end points.

6. Template matching

#method: 
 (1)cv2.TM_SQDIFF: computes the squared difference; the smaller the value, the better the match
 (2)cv2.TM_CCORR: computes the correlation; the larger the value, the better the match
 (3)cv2.TM_CCOEFF: computes the correlation coefficient; the larger the value, the better the match
 (4)cv2.TM_SQDIFF_NORMED: computes the normalized squared difference; the closer the value is to 0, the better the match
 (5)cv2.TM_CCORR_NORMED: computes the normalized correlation; the closer the value is to 1, the better the match
 (6)cv2.TM_CCOEFF_NORMED: computes the normalized correlation coefficient; the closer the value is to 1, the better the match

7. min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)

 min_val: the minimum value in the result matrix
 max_val: the maximum value in the result matrix
 min_loc: the coordinates of the minimum value (combined with the template's h, w this gives a rectangle)
 max_loc: the coordinates of the maximum value
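The same computation can be mimicked in NumPy (a sketch for intuition; note that cv2.minMaxLoc reports locations as (x, y), i.e. column first, while NumPy indexes row first):

```python
import numpy as np

# A small stand-in for a matchTemplate result matrix
res = np.array([[0.2, 0.9, 0.1],
                [0.4, 0.0, 0.7]])

min_idx = np.unravel_index(np.argmin(res), res.shape)  # (row, col)
max_idx = np.unravel_index(np.argmax(res), res.shape)

min_val, max_val = res[min_idx], res[max_idx]
min_loc = (min_idx[1], min_idx[0])  # convert to (x, y) like cv2.minMaxLoc
max_loc = (max_idx[1], max_idx[0])

print(min_val, min_loc)  # 0.0 (1, 1)
print(max_val, max_loc)  # 0.9 (1, 0)
```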

3, Code implementation

# Import Toolkit
from imutils import contours
import numpy as np
import argparse
import cv2
import myutils

# [I. basic settings]
# Set parameters
ap = argparse.ArgumentParser()  # the container that holds everything needed to parse the command line
ap.add_argument("-i", "--image", default='./images/credit_card_01.png',
                help="path to input image")
ap.add_argument("-t", "--template", default='./ocr_a_reference.png',
                help="path to template OCR image")
args = vars(ap.parse_args())

# Specify credit card type
FIRST_NUMBER = {
    "3": "American Express",
    "4": "Visa",
    "5": "MasterCard",
    "6": "Discover Card"
}


# Drawing display
def cv_show(name, img):
    cv2.imshow(name, img)
    cv2.waitKey(0)
    cv2.destroyAllWindows()


# Read a template image
img = cv2.imread(args["template"])
# cv_show('template',img)
# Grayscale image
ref = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# cv_show('template_gray',ref)
# Binary image
ref = cv2.threshold(ref, 10, 255, cv2.THRESH_BINARY_INV)[1]
# This inversion gives white digits on a black background
cv_show('template_bi', ref)

# Two. Template processing flow: contour detection, external rectangle, template extraction, so that template corresponds to each value.
# 1. Calculate the profile
'''
cv2.findContours() expects a binary (black-and-white, not grayscale) image.
	cv2.RETR_EXTERNAL detects only the outer contours,
	cv2.CHAIN_APPROX_SIMPLE keeps only the end points of each segment.
	Each element of the returned list is one contour in the image.
'''
refCnts = cv2.findContours(ref.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[0]  # [0] assumes OpenCV 4.x, where findContours returns (contours, hierarchy); 3.x returned (image, contours, hierarchy)
img = cv2.drawContours(img, refCnts, -1, (0, 0, 255), 2)  # Contours were found on the binary image but are drawn on the original. -1: draw all contours; 2: line width
cv_show('template_Contours', img)
print(len(refCnts))  # 10 digit contours
refCnts = myutils.sort_contours(refCnts, method="left-to-right")[0]#Sort profiles
digits = {}
# 2. Traverse each contour and circumscribe the rectangle
for (i, c) in enumerate(refCnts):  # c is the end coordinate of each contour
    # Calculate the circumscribed rectangle and resize to the appropriate size
    (x, y, w, h) = cv2.boundingRect(c)
    # 3. Pull out the formwork
    roi = ref[y:y + h, x:x + w]  # Each roi corresponds to a number
    # print(roi.shape)
    roi = cv2.resize(roi, (57, 88))  # It's too small. Turn it up. This pulls out each number

    # 4. Each digit value i gets its own template: digits[i] is the roi for digit i
    digits[i] = roi
# cv2.imshow('roi_'+str(i),roi)
# cv2.waitKey(0)
# print(digits)

# [III. input image processing]
#Avoid background and other text interference
# Morphological operations: top hat + closing highlight the bright regions (other combinations can work too)

# 1. Initialize the convolution kernel and specify the size according to the actual task, not necessarily 3x3
rectKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 3))
#The first parameter represents the shape of the kernel. The second and third parameters are the size of the kernel and the location of the anchor, respectively.
sqKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
onekernel = np.ones((9, 9), np.uint8)

# 2. Read the input image and preprocess it
image = cv2.imread(args["image"])
# cv_show('Input_img',image)
image = myutils.resize(image, width=300)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# cv_show('Input_gray',gray)

# 3. Top hat operation to highlight brighter areas
tophat = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, rectKernel)
# cv_show('Input_tophat',tophat)
# 4. Sobel operator in the x direction; experiments show that adding the y direction does not improve the result
gradX = cv2.Sobel(tophat, cv2.CV_32F, 1, 0, ksize=3)  # ksize=-1 would use the 3x3 Scharr filter instead

gradX = np.absolute(gradX)  # Absolute: calculate absolute value
min_Val, max_val = np.min(gradX), np.max(gradX)
gradX = (255 * (gradX - min_Val) / (max_val - min_Val))
gradX = gradX.astype("uint8")

print(gradX.shape)
# cv_show('Input_Sobel_gradX',gradX)

# 5. Connect the digits with a closing operation (dilate first, then erode), so each group of four digits merges into a single block
gradX = cv2.morphologyEx(gradX, cv2.MORPH_CLOSE, rectKernel)
# cv_show('Input_CLOSE_gradX',gradX)

# 6. THRESH_OTSU: pass a threshold of 0 and let Otsu's method pick a suitable threshold automatically
thresh = cv2.threshold(gradX, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)[1]
# cv_show('Input_thresh',thresh)

# 7. Another closing operation to fill the hole
thresh = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, sqKernel)
# cv_show('Input_thresh_CLOSE',thresh)

# 8. Calculate contour
threshCnts = cv2.findContours(thresh.copy(),
                              cv2.RETR_EXTERNAL,
                              cv2.CHAIN_APPROX_SIMPLE)[0]
cur_img = image.copy()
cv2.drawContours(cur_img, threshCnts, -1, (0, 0, 255), 2)
#Draw the outline into the original image. At this time, all the contours are drawn, so it is necessary to filter
# cv_show('Input_Contours',cur_img)

# [IV. traversing contours and numbers]
# 1. Traverse the contour
locs = []  # Save qualified contour
for i, c in enumerate(threshCnts):
    # Calculation rectangle
    x, y, w, h = cv2.boundingRect(c)

    ar = w / float(h)
    print(ar,w,h)
    # Select the appropriate area. According to the actual task, here are basically a group of four numbers
    if ar > 2.5 and ar < 4.0:
        if (w > 40 and w < 55) and (h > 10 and h < 20):
            # Keep the ones that fit
            locs.append((x, y, w, h))

# Sort the matching contours from left to right
locs = sorted(locs, key=lambda x: x[0])#sorted from small to large

# 2. Traverse the numbers in each contour
output = []  # Save the correct number
for (i, (gx, gy, gw, gh)) in enumerate(locs):  # Traverse each set of large contours (including 4 numbers)
    # initialize the list of group digits
    groupOutput = []

    # Extract each group according to the coordinates (4 values)
    group = gray[gy - 5:gy + gh + 5, gx - 5:gx + gw + 5]  # Expand out a little
    cv_show('group_' + str(i), group)
    # 2.1 pretreatment
    group = cv2.threshold(group, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)[1]  # Binary group
    # cv_show('group_'+str(i),group)
    # Calculate the profile of each group, so it is divided into four small profiles
    digitCnts = cv2.findContours(group.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[0]
    # sort
    digitCnts = myutils.sort_contours(digitCnts, method="left-to-right")[0]

    # 2.2 calculate and match each value in each group
    z = 0  # digit index within the group (only used by the commented-out cv_show below)
    for c in digitCnts:  # c represents the end coordinates of each small contour
        # Find the outline of the current value and resize it to an appropriate size
        (x, y, w, h) = cv2.boundingRect(c)  # Circumscribed rectangle
        roi = group[y:y + h, x:x + w]  # Take out the coverage area of the small profile, i.e. the number, from the original drawing
        roi = cv2.resize(roi, (57, 88))
        # cv_show("roi_"+str(z),roi)

        # Compute the match score of this digit against each of the 10 templates
        scores = []  # In a single cycle, scores stores the maximum score of a value matching 10 template values

        # Calculate each score in the template
        # The keys of digits are exactly the values 0,1,...,9; digitROI is the template image for each value
        for (digit, digitROI) in digits.items():
            # For template matching, res is the result matrix
            res = cv2.matchTemplate(roi, digitROI, cv2.TM_CCOEFF)  # roi is the unknown digit; it is matched against each of the 10 digit templates in turn
            Max_score = cv2.minMaxLoc(res)[1]  # minMaxLoc returns 4 values; take the second one, the maximum
            scores.append(Max_score)  # 10 maximum values
        # print("scores: ",scores)
        # Get the most appropriate number
        groupOutput.append(str(np.argmax(scores)))  # Returns the position of the maximum value in the input list
        z = z + 1
    # 2.3 draw
    cv2.rectangle(image, (gx - 5, gy - 5), (gx + gw + 5, gy + gh + 5), (0, 0, 255), 1)  # Upper left corner, lower right corner
    # 2.4 putText parameters: picture, added text, upper left coordinate, font, font size, color, font thickness
    cv2.putText(image, "".join(groupOutput), (gx, gy - 15),
                cv2.FONT_HERSHEY_SIMPLEX, 0.65, (0, 0, 255), 2)

    # 2.5 results obtained
    output.extend(groupOutput)
    print("groupOutput:", groupOutput)
# cv2.imshow("Output_image_"+str(i), image)
# cv2.waitKey(0)
# 3. Print results
print("Credit Card Type: {}".format(FIRST_NUMBER[output[0]]))
print("Credit Card #: {}".format("".join(output)))
cv2.imshow("Output_image", image)
cv2.waitKey(0)

Topics: OpenCV