Python3 + OpenCV: Walking Through a Simple Traffic Sign Recognition Pipeline
Since this project was built for a competition aimed at primary and secondary school students, and the competition was being held for the first time, only four kinds of traffic signs need to be recognized: go straight, turn right, turn left, and stop to yield.
Dataset:
Link: https://pan.baidu.com/s/1SL0qE-qd4cuatmfZeNuK0Q  Extraction code: vuvi
Source code: https://github.com/ccxiao5/Traffic_sign_recognition
The overall workflow is as follows:
- Dataset collection (including splitting into training and test sets)
- Image preprocessing
- Image annotation
- Cropping out the target images according to the annotations
- HOG feature extraction
- Training the model
- Plugging the trained model into the recognition algorithm
My data directory tree is organized as follows: test_images/train_images hold the raw collected data; realTest/realTrain hold the preprocessed images; dataTest/dataTrain hold the images sorted by class; HogTest/HogTrain hold the images cropped according to the XML annotations; HogTest_affine/HogTrain_affine hold the test and training sets augmented by affine transformations; imgTest_hog.txt/imgTrain_hog.txt store the HOG features of the test and training sets.

1. Image Processing
Since the images in the dataset come in different sizes, we first crop a square region from the center of each image, resize it, and save the processed images into realTrain and realTest.

The mapping between the image-name prefixes and the sign labels is as follows:
img_label = {
"000":"Speed_limit_5",
"001":"Speed_limit_15",
"002":"Speed_limit_30",
"003":"Speed_limit_40",
"004":"Speed_limit_50",
"005":"Speed_limit_60",
"006":"Speed_limit_70",
"007":"Speed_limit_80",
"008":"No straight or right turn",
"009":"No straight or left turn",
"010":"No straight",
"011":"No left turn",
"012":"Do not turn left and right",
"013":"No right turn",
"014":"No Overhead",
"015":"No U-turn",
"016":"No Motor vehicle",
"017":"No whistle",
"018":"Unrestricted speed_40",
"019":"Unrestricted speed_50",
"020":"Straight or turn right",
"021":"Straight",
"022":"Turn left",
"023":"Turn left or turn right",
"024":"Turn right",
"025":"Drive on the left side of the road",
"026":"Drive on the right side of the road",
"027":"Driving around the island",
"028":"Motor vehicle driving",
"029":"Whistle",
"030":"Non-motorized",
"031":"U-turn",
"032":"Left-right detour",
"033":"traffic light",
"034":"Drive cautiously",
"035":"Caution Pedestrians",
"036":"Attention non-motor vehicle",
"037":"Mind the children",
"038":"Sharp turn to the right",
"039":"Sharp turn to the left",
"040":"Downhill steep slope",
"041":"Uphill steep slope",
"042":"Go slow",
"044":"Right T-shaped cross",
"043":"Left T-shaped cross",
"045":"village",
"046":"Reverse detour",
"047":"Railway crossing-1",
"048":"construction",
"049":"Continuous detour",
"050":"Railway crossing-2",
"051":"Accident-prone road section",
"052":"stop",
"053":"No passing",
"054":"No Parking",
"055":"No entry",
"056":"Deceleration and concession",
"057":"Stop For Check"
}
import cv2

def center_crop(img_array, crop_size=-1, resize=-1, write_path=None):
    ## Crop a square region from the image center and optionally resize it.
    rows = img_array.shape[0]
    cols = img_array.shape[1]
    if crop_size == -1 or crop_size > max(rows, cols):
        crop_size = min(rows, cols)
    row_s = max(int((rows - crop_size) / 2), 0)
    row_e = min(row_s + crop_size, rows)
    col_s = max(int((cols - crop_size) / 2), 0)
    col_e = min(col_s + crop_size, cols)
    img_crop = img_array[row_s:row_e, col_s:col_e]
    if resize > 0:
        img_crop = cv2.resize(img_crop, (resize, resize))
    if write_path is not None:
        cv2.imwrite(write_path, img_crop)
    return img_crop
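A minimal usage sketch for one directory (the paths follow the tree above and the 640x640 output size is an assumption based on the XML example below, not something stated in the source):

import os

src_dir = "data/train_images"           # raw images (assumed path)
dst_dir = "data/realTrain/PNGImages"    # preprocessed output (assumed path)
for name in os.listdir(src_dir):
    img = cv2.imread(os.path.join(src_dir, name))
    if img is None:
        continue
    center_crop(img, resize=640, write_path=os.path.join(dst_dir, name))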
Then, based on realTrain and realTest, XML files containing <size>, <width>, <height>, <depth>, and <filename> are generated automatically:
import os
import cv2
from xml.dom.minidom import Document

def write_img_to_xml(imgfile, xmlfile):
    ## Write the folder, file name and size of an image into a Pascal-VOC-style XML file.
    img = cv2.imread(imgfile)
    img_folder, img_name = os.path.split(imgfile)
    img_height, img_width, img_depth = img.shape
    doc = Document()
    annotation = doc.createElement("annotation")
    doc.appendChild(annotation)
    folder = doc.createElement("folder")
    folder.appendChild(doc.createTextNode(img_folder))
    annotation.appendChild(folder)
    filename = doc.createElement("filename")
    filename.appendChild(doc.createTextNode(img_name))
    annotation.appendChild(filename)
    size = doc.createElement("size")
    annotation.appendChild(size)
    width = doc.createElement("width")
    width.appendChild(doc.createTextNode(str(img_width)))
    size.appendChild(width)
    height = doc.createElement("height")
    height.appendChild(doc.createTextNode(str(img_height)))
    size.appendChild(height)
    depth = doc.createElement("depth")
    depth.appendChild(doc.createTextNode(str(img_depth)))
    size.appendChild(depth)
    with open(xmlfile, "w") as f:
        doc.writexml(f, indent="\t", addindent="\t", newl="\n", encoding="utf-8")
A generated XML file looks like this:

<annotation>
    <folder>/home/xiao5/Desktop/Test2/data/realTest/PNGImages</folder>
    <filename>000_1_0001_1_j.png</filename>
    <size>
        <width>640</width>
        <height>640</height>
        <depth>3</depth>
    </size>
</annotation>
The images in realTrain and realTest are then annotated, and the new information (the bounding rectangles) is added to the default XML:
<annotation>
    <folder>PNGImages</folder>
    <filename>021_1_0001_1_j.png</filename>
    <path>C:\Users\xiao5\Desktop\realTest\PNGImages\021_1_0001_1_j.png</path>
    <source>
        <database>Unknown</database>
    </source>
    <size>
        <width>640</width>
        <height>640</height>
        <depth>3</depth>
    </size>
    <segmented>0</segmented>
    <object>
        <name>Straight</name>
        <pose>Unspecified</pose>
        <truncated>0</truncated>
        <difficult>0</difficult>
        <bndbox>
            <xmin>13</xmin>
            <ymin>22</ymin>
            <xmax>573</xmax>
            <ymax>580</ymax>
        </bndbox>
    </object>
</annotation>
After annotation, the rectangles we added are used to crop the signs out of the images, which are then renamed and sorted into classes. The main idea is to parse the XML document and classify by the <name> tag: if the sign is straight, turn right, turn left, or stop, it is cropped from the original image and renamed; if there is no <object> element, the image is treated as a negative sample. When handling negative samples, I apply color detection and cut one negative image into several negative samples according to color (red and blue). The benefit is that the recognition stage also uses color detection to locate traffic signs, so negative samples with similar colors help the training of the negative class and improve model accuracy.
def produce_proposals(xml_dir, write_dir, square=False, min_size=30):
    ## Crop proposals for every annotated image and return the per-class proposal counts.
    proposal_num = {}
    for cls_name in classes_name:
        proposal_num[cls_name] = 0
    index = 0
    for xml_file in os.listdir(xml_dir):
        img_path, labels = parse_xml(os.path.join(xml_dir, xml_file))
        img = cv2.imread(img_path)
        ## If none of the target traffic signs appears in the image, treat it as a negative sample.
        if len(labels) == 0:
            neg_proposal_num = produce_neg_proposals(img_path, write_dir, min_size, square, proposal_num["background"])
            proposal_num["background"] = neg_proposal_num
        else:
            proposal_num = produce_pos_proposals(img_path, write_dir, labels, min_size, square=True, proposal_num=proposal_num)
        if index % 100 == 0:
            print("total xml file number = ", len(os.listdir(xml_dir)), "current xml file number = ", index)
            print("proposal num = ", proposal_num)
        index += 1
    return proposal_num
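produce_proposals relies on a parse_xml helper (as well as produce_neg_proposals/produce_pos_proposals) that is not shown here. Below is a minimal sketch of what parse_xml needs to return, assuming the Pascal-VOC-style annotations shown above; the exact return format in the original repository may differ:

import os
import xml.etree.ElementTree as ET

def parse_xml(xml_file):
    ## Parse one annotation file and return the image path plus a list of
    ## (xmin, ymin, xmax, ymax, class_name) tuples; an empty list marks a negative sample.
    tree = ET.parse(xml_file)
    root = tree.getroot()
    img_path = os.path.join(root.find("folder").text, root.find("filename").text)
    labels = []
    for obj in root.iter("object"):
        name = obj.find("name").text
        box = obj.find("bndbox")
        labels.append((int(box.find("xmin").text), int(box.find("ymin").text),
                       int(box.find("xmax").text), int(box.find("ymax").text), name))
    return img_path, labels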

To further improve accuracy, the target images (the four classes) are augmented with affine transformations to enlarge the training set.
import os
import cv2
import numpy as np

def affine(img, delta_pix):
    ## Apply an affine transform defined by randomly shifting three anchor points.
    rows, cols, _ = img.shape
    pts1 = np.float32([[0, 0], [rows, 0], [0, cols]])
    pts2 = pts1 + delta_pix
    M = cv2.getAffineTransform(pts1, pts2)
    res = cv2.warpAffine(img, M, (rows, cols))
    return res

def affine_dir(img_dir, write_dir, max_delta_pix):
    ## Apply a random affine transform to every PNG in a directory and save the result with an "f" suffix.
    img_names = os.listdir(img_dir)
    img_names = [img_name for img_name in img_names if img_name.split(".")[-1] == "png"]
    for index, img_name in enumerate(img_names):
        img = cv2.imread(os.path.join(img_dir, img_name))
        save_name = os.path.join(write_dir, img_name.split(".")[0] + "f.png")
        delta_pix = np.float32(np.random.randint(-max_delta_pix, max_delta_pix + 1, [3, 2]))
        img_a = affine(img, delta_pix)
        cv2.imwrite(save_name, img_a)
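A minimal usage sketch, assuming the directory names from the tree above and a maximum shift of 10 pixels (the value actually used in the project is not stated):

affine_dir("data/HogTrain", "data/HogTrain_affine", max_delta_pix=10)  # assumed paths and shift
affine_dir("data/HogTest", "data/HogTest_affine", max_delta_pix=10)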

2. HOG Feature Extraction
After the images are processed, HOG features are extracted from the training and test sets separately, producing imgTrain_HOG.txt and imgTest_HOG.txt.
import os
import cv2
from skimage import feature as ft

def hog_feature(img_array, resize=(64, 64)):
    ## Extract the HOG feature of a single image.
    img = cv2.cvtColor(img_array, cv2.COLOR_BGR2GRAY)
    img = cv2.resize(img, resize)
    bins = 9
    cell_size = (8, 8)
    cpb = (2, 2)
    norm = "L2"
    features = ft.hog(img, orientations=bins, pixels_per_cell=cell_size,
                      cells_per_block=cpb, block_norm=norm, transform_sqrt=True)
    return features

def extra_hog_features_dir(img_dir, write_txt, resize=(64, 64)):
    ## Extract the HOG features of every image in a directory and write them to a text file.
    img_names = os.listdir(img_dir)
    img_names = [os.path.join(img_dir, img_name) for img_name in img_names]
    if os.path.exists(write_txt):
        os.remove(write_txt)
    with open(write_txt, "a") as f:
        index = 0
        for img_name in img_names:
            img_array = cv2.imread(img_name)
            features = hog_feature(img_array, resize)
            label_name = img_name.split("/")[-1].split("_")[0]
            label_num = img_label[label_name]
            row_data = img_name + "\t" + str(label_num) + "\t"
            for element in features:
                row_data = row_data + str(round(element, 3)) + " "
            row_data = row_data + "\n"
            f.write(row_data)
            if index % 100 == 0:
                print("total image number = ", len(img_names), "current image number = ", index)
            index += 1
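A minimal usage sketch, assuming the augmented directories and feature-file names from the tree above:

extra_hog_features_dir("data/HogTrain_affine", "imgTrain_hog.txt")  # assumed paths
extra_hog_features_dir("data/HogTest_affine", "imgTest_hog.txt")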
3. Model Training
The extracted HOG features are used to train an SVM, which is saved as svm_model.pkl.
import numpy as np
import joblib  # on older scikit-learn versions: from sklearn.externals import joblib
from sklearn.svm import SVC

def load_hog_data(hog_txt):
    ## Load image names, labels and HOG features from the feature text file.
    img_names = []
    labels = []
    hog_features = []
    with open(hog_txt, "r") as f:
        data = f.readlines()
        for row_data in data:
            row_data = row_data.rstrip()
            img_path, label, hog_str = row_data.split("\t")
            img_name = img_path.split("/")[-1]
            hog_feature = hog_str.split(" ")
            hog_feature = [float(hog) for hog in hog_feature]
            # print("hog feature length = ", len(hog_feature))
            img_names.append(img_name)
            labels.append(label)
            hog_features.append(hog_feature)
    return img_names, np.array(labels), np.array(hog_features)

def svm_train(hog_features, labels, save_path="./svm_model.pkl"):
    ## Train an SVM with probability estimates and save it to disk.
    clf = SVC(C=10, tol=1e-3, probability=True)
    clf.fit(hog_features, labels)
    joblib.dump(clf, save_path)
    print("finished.")
4. Traffic Sign Recognition and Testing
The recognition pipeline is: color segmentation to obtain a binary image within the threshold range, then contour detection, then filtering out spurious rectangles.
def preprocess_img(imgBGR):
    ## Convert the image from the BGR color space to HSV.
    imgHSV = cv2.cvtColor(imgBGR, cv2.COLOR_BGR2HSV)
    Bmin = np.array([110, 43, 46])
    Bmax = np.array([124, 255, 255])
    ## Use inRange(HSV, lower, upper) to threshold out the background; this range keeps blue.
    img_Bbin = cv2.inRange(imgHSV, Bmin, Bmax)
    Rmin2 = np.array([165, 43, 46])
    Rmax2 = np.array([180, 255, 255])
    ## This range keeps red.
    img_Rbin = cv2.inRange(imgHSV, Rmin2, Rmax2)
    img_bin = np.maximum(img_Bbin, img_Rbin)
    return img_bin
'''
Detect contours and return their bounding rectangles.
'''
def contour_detect(img_bin, min_area=0, max_area=-1, wh_ratio=2.0):
    rects = []
    ## Detect contours: cv2.RETR_EXTERNAL keeps only the outer contours, and cv2.CHAIN_APPROX_NONE stores every boundary point.
    ## In OpenCV 4.x, findContours returns two values: the contour list and the hierarchy (OpenCV 3.x also returned the image).
    contours, hierarchy = cv2.findContours(img_bin.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if len(contours) == 0:
        return rects
    max_area = img_bin.shape[0] * img_bin.shape[1] if max_area < 0 else max_area
    for contour in contours:
        area = cv2.contourArea(contour)
        if area >= min_area and area <= max_area:
            x, y, w, h = cv2.boundingRect(contour)
            ## Keep only roughly square boxes (width/height ratio below wh_ratio).
            if 1.0 * w / h < wh_ratio and 1.0 * h / w < wh_ratio:
                rects.append([x, y, w, h])
    return rects
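A minimal sketch of running these two steps on a single image, using the sample file from the XML example above:

img = cv2.imread("data/realTest/PNGImages/021_1_0001_1_j.png")  # assumed path
img_bin = preprocess_img(img)
rects = contour_detect(img_bin, min_area=img.shape[0] * img.shape[1] / (25 * 25))
print(rects)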
Finally, the trained model is loaded and tested against a live camera feed:
if __name__ == "__main__":
    cap = cv2.VideoCapture(0)
    cv2.namedWindow('camera')
    cv2.resizeWindow("camera", 640, 480)
    cols = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    rows = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    clf = joblib.load("/home/xiao5/Desktop/Test2/svm_model.pkl")
    i = 0
    while True:
        i += 1
        ret, img = cap.read()
        img_bin = preprocess_img(img)
        min_area = img_bin.shape[0] * img.shape[1] / (25 * 25)
        rects = contour_detect(img_bin, min_area=min_area)
        if rects:
            Max_X = 0
            Max_Y = 0
            Max_W = 0
            Max_H = 0
            ## Keep only the largest candidate rectangle.
            for r in rects:
                if r[2] * r[3] >= Max_W * Max_H:
                    Max_X, Max_Y, Max_W, Max_H = r
            ## When indexing image pixels with NumPy, the row (y / height) coordinate comes first, then the column (x / width).
            proposal = img[Max_Y:(Max_Y + Max_H), Max_X:(Max_X + Max_W)]
            cv2.rectangle(img, (Max_X, Max_Y), (Max_X + Max_W, Max_Y + Max_H), (0, 255, 0), 2)
            cv2.imshow("proposal", proposal)
            cls_prop = hog_extra_and_svm_class(proposal, clf)
            cls_prop = np.round(cls_prop, 2)
            cls_num = np.argmax(cls_prop)  ## index of the most probable class
            if cls_names[cls_num] != "background":
                print(cls_names[cls_num])
            else:
                print("N/A")
        cv2.imshow('camera', img)
        cv2.waitKey(40)
    cv2.destroyAllWindows()
    cap.release()
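The loop above uses a hog_extra_and_svm_class helper and a cls_names list that are not shown here. Below is a minimal sketch, assuming the hog_feature function from section 2 and five classes (the four target signs plus a background class); the real class order should be taken from clf.classes_ after training:

cls_names = ["background", "Straight", "Turn left", "Turn right", "Stop For Check"]  # assumed order

def hog_extra_and_svm_class(proposal, clf, resize=(64, 64)):
    ## Extract the HOG feature of the cropped proposal and return the per-class probabilities.
    features = hog_feature(proposal, resize)
    features = np.reshape(features, (1, -1))
    cls_prop = clf.predict_proba(features)
    return cls_prop[0]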

This concludes this article on simple traffic sign recognition with Python 3 + OpenCV. For more on Python 3 and OpenCV traffic sign recognition, please search 腳本之家's earlier articles, and we hope you will continue to support 腳本之家!