
FaceNet face recognition with a Python neural network and its Keras implementation

Updated: 2022-05-10 18:36:07   Author: Bubbliiiing
This article introduces FaceNet face recognition and its Keras implementation in Python. Readers who need it are welcome to use it as a reference; I hope it helps.

What is FaceNet?

I recently studied my favorite MTCNN, but a detected face alone is not much use: we also need to know who it is. So let's start the journey of extracting features with FaceNet.

FaceNet is Google's face recognition algorithm, published at CVPR 2015. It exploits the fact that photos of the same face taken from different angles and in different poses stay tightly clustered, while photos of different faces stay well separated, and proposes a CNN + triplet mining approach that reaches 99.63% accuracy on the LFW dataset.

A CNN maps each face to a feature vector in Euclidean space. The network is trained with the prior knowledge that the distance between faces of the same person is always smaller than the distance between faces of different people, so embeddings of different people end up far apart.
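
To make this prior concrete, here is a minimal numpy sketch of the triplet loss used in the FaceNet paper; the margin alpha (0.2 in the paper) forces same-person pairs to be at least alpha closer than different-person pairs. This is only an illustration of the idea, not training code from this project:

import numpy as np

def triplet_loss(anchor, positive, negative, alpha=0.2):
    # anchor / positive: embeddings of the same person; negative: a different person
    pos_dist = np.sum(np.square(anchor - positive), axis=-1)
    neg_dist = np.sum(np.square(anchor - negative), axis=-1)
    # penalize triplets where the positive is not at least `alpha` closer than the negative
    return np.mean(np.maximum(pos_dist - neg_dist + alpha, 0.0))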

At test time you only need to compute the face embedding for each photo, measure the distance between the embeddings, and compare it against a threshold to decide whether the two photos show the same person.
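
For example, deciding whether two embeddings belong to the same person boils down to a single distance comparison. The threshold below is an assumption (values around 1.0 happen to work with the pretrained weights used later in this article) and should be tuned on your own data:

import numpy as np

def is_same_person(emb1, emb2, threshold=1.0):
    # emb1, emb2: L2-normalized 128-dimensional face embeddings
    distance = np.linalg.norm(emb1 - emb2)
    return distance < threshold, distance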

Simply put, at inference time FaceNet does the following (a compact sketch of these steps comes right after the list):

1. take a face image as input;

2. extract features with the deep network;

3. apply L2 normalization;

4. obtain a 128-dimensional feature vector.
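
Put together, the inference phase looks roughly like the sketch below. The helper name and preprocessing here are placeholders for illustration; the actual preprocessing and normalization functions used by this project appear in the demo script at the end of the article:

import numpy as np

def face_to_embedding(model, face_rgb):
    # face_rgb: an aligned 160x160x3 RGB face crop
    x = face_rgb[np.newaxis].astype('float32')
    x = (x - x.mean()) / max(x.std(), 1.0 / np.sqrt(x.size))  # per-image standardization
    emb = model.predict(x)[0]                                  # step 2: forward pass through the network
    emb = emb / np.linalg.norm(emb)                            # step 3: L2 normalization
    return emb                                                 # step 4: the 128-dimensional feature vector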

Code download link: https://pan.baidu.com/s/1T2b5u2mZ9yMtKt3TvLxTaw

Extraction code: xmg0

Inception-ResNetV1

Inception-ResNetV1 is the backbone network used by FaceNet.

Its structure is quite interesting!

As shown in the backbone diagram, the network is built from a few important parts (an outline of the overall forward flow follows this list):

1. Stem

2. Inception-resnet-A

3. Inception-resnet-B

4. Inception-resnet-C
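
For orientation, the forward pass of the full implementation given later in this article can be summarized as follows (the shapes follow the comments in that code):

# input: 160x160x3
# Stem: three convolutions -> max pooling -> three convolutions   -> 17x17x256
# 5 x Inception-resnet-A (Block35)                                 -> 17x17x256
# Reduction-A (Mixed_6a)                                           -> 8x8x896
# 10 x Inception-resnet-B (Block17)                                -> 8x8x896
# Reduction-B (Mixed_7a)                                           -> 3x3x1792
# 5 x Inception-resnet-C (Block8) plus one final Block8            -> 3x3x1792
# GlobalAveragePooling2D -> Dropout -> Dense(128) -> BatchNormalization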

1. Structure of the Stem:

In FaceNet the input size is 160x160x3. The input then goes through:

three convolutions -> one max pooling -> three convolutions

The Python implementation is as follows:

inputs = Input(shape=input_shape)
# 160,160,3 -> 77,77,64
x = conv2d_bn(inputs, 32, 3, strides=2, padding='valid', name='Conv2d_1a_3x3')
x = conv2d_bn(x, 32, 3, padding='valid', name='Conv2d_2a_3x3')
x = conv2d_bn(x, 64, 3, name='Conv2d_2b_3x3')
# 77,77,64 -> 38,38,64
x = MaxPooling2D(3, strides=2, name='MaxPool_3a_3x3')(x)
# 38,38,64 -> 17,17,256
x = conv2d_bn(x, 80, 1, padding='valid', name='Conv2d_3b_1x1')
x = conv2d_bn(x, 192, 3, padding='valid', name='Conv2d_4a_3x3')
x = conv2d_bn(x, 256, 3, strides=2, padding='valid', name='Conv2d_4b_3x3')

2. Structure of Inception-resnet-A:

The Inception-resnet-A block has four branches:

1. the input, passed through unchanged (the residual branch);

2. a 1x1 convolution with 32 channels;

3. a 1x1 convolution with 32 channels followed by a 3x3 convolution with 32 channels;

4. a 1x1 convolution with 32 channels followed by two 3x3 convolutions with 32 channels.

The outputs of branches 2, 3 and 4 are concatenated, passed through a 1x1 convolution, and then added to branch 1. This is essentially a residual (ResNet-style) structure.

The implementation is as follows:

branch_0 = conv2d_bn(x, 32, 1, name=name_fmt('Conv2d_1x1', 0))
branch_1 = conv2d_bn(x, 32, 1, name=name_fmt('Conv2d_0a_1x1', 1))
branch_1 = conv2d_bn(branch_1, 32, 3, name=name_fmt('Conv2d_0b_3x3', 1))
branch_2 = conv2d_bn(x, 32, 1, name=name_fmt('Conv2d_0a_1x1', 2))
branch_2 = conv2d_bn(branch_2, 32, 3, name=name_fmt('Conv2d_0b_3x3', 2))
branch_2 = conv2d_bn(branch_2, 32, 3, name=name_fmt('Conv2d_0c_3x3', 2))
branches = [branch_0, branch_1, branch_2]
mixed = Concatenate(axis=channel_axis, name=name_fmt('Concatenate'))(branches)
up = conv2d_bn(mixed,K.int_shape(x)[channel_axis],1,activation=None,use_bias=True,
                name=name_fmt('Conv2d_1x1'))
up = Lambda(scaling,
            output_shape=K.int_shape(up)[1:],
            arguments={'scale': scale})(up)
x = add([x, up])
if activation is not None:
    x = Activation(activation, name=name_fmt('Activation'))(x)

3. Structure of Inception-resnet-B:

The Inception-resnet-B block has three branches:

1. the input, passed through unchanged (the residual branch);

2. a 1x1 convolution with 128 channels;

3. a 1x1 convolution with 128 channels, followed by a 1x7 convolution with 128 channels and a 7x1 convolution with 128 channels.

The outputs of branches 2 and 3 are concatenated, passed through a 1x1 convolution, and then added to branch 1; again a residual structure.

The implementation is as follows:

branch_0 = conv2d_bn(x, 128, 1, name=name_fmt('Conv2d_1x1', 0))
branch_1 = conv2d_bn(x, 128, 1, name=name_fmt('Conv2d_0a_1x1', 1))
branch_1 = conv2d_bn(branch_1, 128, [1, 7], name=name_fmt('Conv2d_0b_1x7', 1))
branch_1 = conv2d_bn(branch_1, 128, [7, 1], name=name_fmt('Conv2d_0c_7x1', 1))
branches = [branch_0, branch_1]
mixed = Concatenate(axis=channel_axis, name=name_fmt('Concatenate'))(branches)
up = conv2d_bn(mixed,K.int_shape(x)[channel_axis],1,activation=None,use_bias=True,
                name=name_fmt('Conv2d_1x1'))
up = Lambda(scaling,
            output_shape=K.int_shape(up)[1:],
            arguments={'scale': scale})(up)
x = add([x, up])
if activation is not None:
    x = Activation(activation, name=name_fmt('Activation'))(x)

4. Structure of Inception-resnet-C:

The Inception-resnet-C block also has three branches:

1. the input, passed through unchanged (the residual branch);

2. a 1x1 convolution with 192 channels;

3. a 1x1 convolution with 192 channels, followed by a 1x3 convolution with 192 channels and a 3x1 convolution with 192 channels.

The outputs of branches 2 and 3 are concatenated, passed through a 1x1 convolution, and then added to branch 1; the same residual pattern as above.

The implementation is as follows:

branch_0 = conv2d_bn(x, 192, 1, name=name_fmt('Conv2d_1x1', 0))
branch_1 = conv2d_bn(x, 192, 1, name=name_fmt('Conv2d_0a_1x1', 1))
branch_1 = conv2d_bn(branch_1, 192, [1, 3], name=name_fmt('Conv2d_0b_1x3', 1))
branch_1 = conv2d_bn(branch_1, 192, [3, 1], name=name_fmt('Conv2d_0c_3x1', 1))
branches = [branch_0, branch_1]
mixed = Concatenate(axis=channel_axis, name=name_fmt('Concatenate'))(branches)
up = conv2d_bn(mixed,K.int_shape(x)[channel_axis],1,activation=None,use_bias=True,
                name=name_fmt('Conv2d_1x1'))
up = Lambda(scaling,
            output_shape=K.int_shape(up)[1:],
            arguments={'scale': scale})(up)
x = add([x, up])
if activation is not None:
    x = Activation(activation, name=name_fmt('Activation'))(x)

5. Full code

from functools import partial
from keras.models import Model
from keras.layers import Activation
from keras.layers import BatchNormalization
from keras.layers import Concatenate
from keras.layers import Conv2D
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers import GlobalAveragePooling2D
from keras.layers import Input
from keras.layers import Lambda
from keras.layers import MaxPooling2D
from keras.layers import add
from keras import backend as K
def scaling(x, scale):
    return x * scale
def _generate_layer_name(name, branch_idx=None, prefix=None):
    if prefix is None:
        return None
    if branch_idx is None:
        return '_'.join((prefix, name))
    return '_'.join((prefix, 'Branch', str(branch_idx), name))
def conv2d_bn(x,filters,kernel_size,strides=1,padding='same',activation='relu',use_bias=False,name=None):
    x = Conv2D(filters,
               kernel_size,
               strides=strides,
               padding=padding,
               use_bias=use_bias,
               name=name)(x)
    if not use_bias:
        x = BatchNormalization(axis=3, momentum=0.995, epsilon=0.001,
                               scale=False, name=_generate_layer_name('BatchNorm', prefix=name))(x)
    if activation is not None:
        x = Activation(activation, name=_generate_layer_name('Activation', prefix=name))(x)
    return x
def _inception_resnet_block(x, scale, block_type, block_idx, activation='relu'):
    channel_axis = 3
    if block_idx is None:
        prefix = None
    else:
        prefix = '_'.join((block_type, str(block_idx)))
    name_fmt = partial(_generate_layer_name, prefix=prefix)
    if block_type == 'Block35':
        branch_0 = conv2d_bn(x, 32, 1, name=name_fmt('Conv2d_1x1', 0))
        branch_1 = conv2d_bn(x, 32, 1, name=name_fmt('Conv2d_0a_1x1', 1))
        branch_1 = conv2d_bn(branch_1, 32, 3, name=name_fmt('Conv2d_0b_3x3', 1))
        branch_2 = conv2d_bn(x, 32, 1, name=name_fmt('Conv2d_0a_1x1', 2))
        branch_2 = conv2d_bn(branch_2, 32, 3, name=name_fmt('Conv2d_0b_3x3', 2))
        branch_2 = conv2d_bn(branch_2, 32, 3, name=name_fmt('Conv2d_0c_3x3', 2))
        branches = [branch_0, branch_1, branch_2]
    elif block_type == 'Block17':
        branch_0 = conv2d_bn(x, 128, 1, name=name_fmt('Conv2d_1x1', 0))
        branch_1 = conv2d_bn(x, 128, 1, name=name_fmt('Conv2d_0a_1x1', 1))
        branch_1 = conv2d_bn(branch_1, 128, [1, 7], name=name_fmt('Conv2d_0b_1x7', 1))
        branch_1 = conv2d_bn(branch_1, 128, [7, 1], name=name_fmt('Conv2d_0c_7x1', 1))
        branches = [branch_0, branch_1]
    elif block_type == 'Block8':
        branch_0 = conv2d_bn(x, 192, 1, name=name_fmt('Conv2d_1x1', 0))
        branch_1 = conv2d_bn(x, 192, 1, name=name_fmt('Conv2d_0a_1x1', 1))
        branch_1 = conv2d_bn(branch_1, 192, [1, 3], name=name_fmt('Conv2d_0b_1x3', 1))
        branch_1 = conv2d_bn(branch_1, 192, [3, 1], name=name_fmt('Conv2d_0c_3x1', 1))
        branches = [branch_0, branch_1]
    mixed = Concatenate(axis=channel_axis, name=name_fmt('Concatenate'))(branches)
    up = conv2d_bn(mixed,K.int_shape(x)[channel_axis],1,activation=None,use_bias=True,
                   name=name_fmt('Conv2d_1x1'))
    up = Lambda(scaling,
                output_shape=K.int_shape(up)[1:],
                arguments={'scale': scale})(up)
    x = add([x, up])
    if activation is not None:
        x = Activation(activation, name=name_fmt('Activation'))(x)
    return x
def InceptionResNetV1(input_shape=(160, 160, 3),
                      classes=128,
                      dropout_keep_prob=0.8):
    channel_axis = 3
    inputs = Input(shape=input_shape)
    # 160,160,3 -> 77,77,64
    x = conv2d_bn(inputs, 32, 3, strides=2, padding='valid', name='Conv2d_1a_3x3')
    x = conv2d_bn(x, 32, 3, padding='valid', name='Conv2d_2a_3x3')
    x = conv2d_bn(x, 64, 3, name='Conv2d_2b_3x3')
    # 77,77,64 -> 38,38,64
    x = MaxPooling2D(3, strides=2, name='MaxPool_3a_3x3')(x)
    # 38,38,64 -> 17,17,256
    x = conv2d_bn(x, 80, 1, padding='valid', name='Conv2d_3b_1x1')
    x = conv2d_bn(x, 192, 3, padding='valid', name='Conv2d_4a_3x3')
    x = conv2d_bn(x, 256, 3, strides=2, padding='valid', name='Conv2d_4b_3x3')
    # 5x Block35 (Inception-ResNet-A block):
    for block_idx in range(1, 6):
        x = _inception_resnet_block(x,scale=0.17,block_type='Block35',block_idx=block_idx)
    # Reduction-A block:
    # 17,17,256 -> 8,8,896
    name_fmt = partial(_generate_layer_name, prefix='Mixed_6a')
    branch_0 = conv2d_bn(x, 384, 3,strides=2,padding='valid',name=name_fmt('Conv2d_1a_3x3', 0))
    branch_1 = conv2d_bn(x, 192, 1, name=name_fmt('Conv2d_0a_1x1', 1))
    branch_1 = conv2d_bn(branch_1, 192, 3, name=name_fmt('Conv2d_0b_3x3', 1))
    branch_1 = conv2d_bn(branch_1,256,3,strides=2,padding='valid',name=name_fmt('Conv2d_1a_3x3', 1))
    branch_pool = MaxPooling2D(3,strides=2,padding='valid',name=name_fmt('MaxPool_1a_3x3', 2))(x)
    branches = [branch_0, branch_1, branch_pool]
    x = Concatenate(axis=channel_axis, name='Mixed_6a')(branches)
    # 10x Block17 (Inception-ResNet-B block):
    for block_idx in range(1, 11):
        x = _inception_resnet_block(x,
                                    scale=0.1,
                                    block_type='Block17',
                                    block_idx=block_idx)
    # Reduction-B block
    # 8,8,896 -> 3,3,1792
    name_fmt = partial(_generate_layer_name, prefix='Mixed_7a')
    branch_0 = conv2d_bn(x, 256, 1, name=name_fmt('Conv2d_0a_1x1', 0))
    branch_0 = conv2d_bn(branch_0,384,3,strides=2,padding='valid',name=name_fmt('Conv2d_1a_3x3', 0))
    branch_1 = conv2d_bn(x, 256, 1, name=name_fmt('Conv2d_0a_1x1', 1))
    branch_1 = conv2d_bn(branch_1,256,3,strides=2,padding='valid',name=name_fmt('Conv2d_1a_3x3', 1))
    branch_2 = conv2d_bn(x, 256, 1, name=name_fmt('Conv2d_0a_1x1', 2))
    branch_2 = conv2d_bn(branch_2, 256, 3, name=name_fmt('Conv2d_0b_3x3', 2))
    branch_2 = conv2d_bn(branch_2,256,3,strides=2,padding='valid',name=name_fmt('Conv2d_1a_3x3', 2))
    branch_pool = MaxPooling2D(3,strides=2,padding='valid',name=name_fmt('MaxPool_1a_3x3', 3))(x)
    branches = [branch_0, branch_1, branch_2, branch_pool]
    x = Concatenate(axis=channel_axis, name='Mixed_7a')(branches)
    # 5x Block8 (Inception-ResNet-C block):
    for block_idx in range(1, 6):
        x = _inception_resnet_block(x,
                                    scale=0.2,
                                    block_type='Block8',
                                    block_idx=block_idx)
    x = _inception_resnet_block(x,scale=1.,activation=None,block_type='Block8',block_idx=6)
    # global average pooling
    x = GlobalAveragePooling2D(name='AvgPool')(x)
    x = Dropout(1.0 - dropout_keep_prob, name='Dropout')(x)
    # fully-connected bottleneck down to 128 dimensions
    x = Dense(classes, use_bias=False, name='Bottleneck')(x)
    bn_name = _generate_layer_name('BatchNorm', prefix='Bottleneck')
    x = BatchNormalization(momentum=0.995, epsilon=0.001, scale=False,
                           name=bn_name)(x)
    # build the model
    model = Model(inputs, x, name='inception_resnet_v1')
    return model
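
A quick way to sanity-check the network is to build it and push a dummy image through it; the weight file referenced in the comment is the one the demo script below expects under ./model/:

import numpy as np

model = InceptionResNetV1(input_shape=(160, 160, 3), classes=128)
# model.load_weights('./model/facenet_keras.h5')  # optionally load the pretrained weights
dummy = np.random.rand(1, 160, 160, 3).astype('float32')
embedding = model.predict(dummy)
print(embedding.shape)  # (1, 128)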

Detecting faces and comparing them:

Here we use OpenCV's built-in cv2.CascadeClassifier to detect the faces and then compare them. Judging from the paths used in the demo below, the project root is laid out roughly like this: net/inception.py holds the model definition above, the model/ folder contains haarcascade_frontalface_alt2.xml and the pretrained facenet_keras.h5 weights, and the img/ folder holds the test photos.

The demo file is as follows:

import numpy as np
import cv2
from net.inception import InceptionResNetV1
from keras.models import load_model
#---------------------------------#
#   Image preprocessing:
#   per-image standardization
#---------------------------------#
def pre_process(x):
    if x.ndim == 4:
        axis = (1, 2, 3)
        size = x[0].size
    elif x.ndim == 3:
        axis = (0, 1, 2)
        size = x.size
    else:
        raise ValueError('Dimension should be 3 or 4')
    mean = np.mean(x, axis=axis, keepdims=True)
    std = np.std(x, axis=axis, keepdims=True)
    std_adj = np.maximum(std, 1.0/np.sqrt(size))
    y = (x - mean) / std_adj
    return y
#---------------------------------#
#   L2 normalization
#---------------------------------#
def l2_normalize(x, axis=-1, epsilon=1e-10):
    output = x / np.sqrt(np.maximum(np.sum(np.square(x), axis=axis, keepdims=True), epsilon))
    return output
#---------------------------------#
#   Compute the 128-d embedding
#---------------------------------#
def calc_128_vec(model,img):
    face_img = pre_process(img)
    pre = model.predict(face_img)
    pre = l2_normalize(np.concatenate(pre))
    pre = np.reshape(pre,[1,128])
    return pre
#---------------------------------#
#   Detect and crop the face
#---------------------------------#
def get_face_img(cascade,filepaths,margin):
    aligned_images = []
    img = cv2.imread(filepaths)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # cv2.imread returns BGR; convert to RGB
    faces = cascade.detectMultiScale(img,
                                        scaleFactor=1.1,
                                        minNeighbors=3)
    (x, y, w, h) = faces[0]
    print(x, y, w, h)
    cropped = img[y-margin//2:y+h+margin//2,
                    x-margin//2:x+w+margin//2, :]
    aligned = cv2.resize(cropped, (160, 160))
    aligned_images.append(aligned)
    return np.array(aligned_images)
#---------------------------------#
#   Distance between embeddings
#---------------------------------#
def face_distance(face_encodings, face_to_compare):
    if len(face_encodings) == 0:
        return np.empty((0))
    return np.linalg.norm(face_encodings - face_to_compare, axis=1)
if __name__ == "__main__":
    cascade_path = './model/haarcascade_frontalface_alt2.xml'
    cascade = cv2.CascadeClassifier(cascade_path)
    image_size = 160
    model = InceptionResNetV1()
    # model.summary()
    model_path = './model/facenet_keras.h5'
    model.load_weights(model_path)
    img1 = get_face_img(cascade,r"img/Larry_Page_0000.jpg",10)
    img2 = get_face_img(cascade,r"img/Larry_Page_0001.jpg",10)
    img3 = get_face_img(cascade,r"img/Mark_Zuckerberg_0000.jpg",10)
    print(face_distance(calc_128_vec(model,img1),calc_128_vec(model,img2)))
    print(face_distance(calc_128_vec(model,img2),calc_128_vec(model,img3)))

The output is as follows. The distance between the two Larry Page photos (about 0.65) is clearly smaller than the distance between the Larry Page and Mark Zuckerberg photos (about 1.35), so a threshold around 1.0 separates the same-person pair from the different-person pair here:

[0.6534328]
[1.3536944]

That concludes this detailed look at FaceNet face recognition and its Keras implementation in Python. For more material on implementing FaceNet with Keras, please see the other related articles on 腳本之家!
