ure, which consists of two components: a feature extraction network and a binary code mapping network. The former is used to extract an intermediate feature vector, and the latter is used to map the extracted feature vector into a binary code. This architecture is shown in Figure 3. In the next section, we introduce the two components of the biometrics mapping network.

[Figure 3 near here: the feature extraction network produces a feature vector, which the binary code mapping network converts into a binary code; the losses J1, J2, and J3 are combined into the full loss L.]

Figure 3. The framework of our proposed biometrics binary code mapping network based on a DNN for producing binary code. This architecture consists of a feature extraction network and a binary code mapping network.

Appl. Sci. 2021, 11, 8497
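The two-stage pipeline above can be sketched as follows. This is a minimal illustrative sketch, not the paper's trained networks: the dimensions, the fixed random projections standing in for the two learned networks, and the zero-thresholding step are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not taken from the paper): a 128-d intermediate
# feature vector mapped to a 64-bit binary code.
INPUT_DIM, FEAT_DIM, CODE_BITS = 512, 128, 64

# Stand-in for the feature extraction network: a fixed random linear
# projection producing the intermediate feature vector.
W_feat = rng.standard_normal((INPUT_DIM, FEAT_DIM)) / np.sqrt(INPUT_DIM)

# Stand-in for the binary code mapping network: a linear layer followed
# by thresholding at zero to obtain a binary code.
W_code = rng.standard_normal((FEAT_DIM, CODE_BITS)) / np.sqrt(FEAT_DIM)

x = rng.standard_normal(INPUT_DIM)            # raw biometric input (e.g. a flattened image)
feat = x @ W_feat                             # intermediate feature vector
code = (feat @ W_code > 0).astype(np.uint8)   # binary code

print(code.shape)  # (64,)
```

In the actual architecture both stages are trained jointly under the losses J1, J2, and J3 shown in Figure 3, rather than fixed projections as here.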
3.2.1. Feature Extraction Network

To address the first challenge, we adopt pointwise (PW) and depthwise (DW) convolutions instead of regular convolution to construct a lightweight feature extraction network, which reduces the amount of memory storage and computational power required while preserving accuracy [57]. On this basis, we improve the bottleneck architecture for a better intermediate feature representation. The architecture of the network is shown in Figure 4. Specifically, on the one hand, we first use PW to expand the input features into a higher-dimensional feature space for extracting rich feature maps, and then utilize DW to reduce the computation redundancy. On the other hand, we add an attention module named a squeeze-and-excitation network (SENet) [58] between two nodes in the bottleneck, which can selectively strengthen useful features and suppress useless or less useful ones, enhancing the ability of feature representation. Therefore, these key components can
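The two ideas in this section can be made concrete with a short sketch: first, why replacing a regular convolution with a DW + PW pair saves parameters, and second, how an SE block gates channels. The layer sizes (64 → 128 channels, 3 × 3 kernel) and the reduction ratio are illustrative assumptions, not values from the paper, and the SE block here is a plain-numpy approximation of the module described in [58].

```python
import numpy as np

def standard_conv_params(c_in, c_out, k):
    # A regular k x k convolution mixes all input channels for every output channel.
    return k * k * c_in * c_out

def separable_conv_params(c_in, c_out, k):
    # Depthwise: one k x k filter per input channel; pointwise: 1 x 1 channel mixing.
    return k * k * c_in + c_in * c_out

# Illustrative layer sizes (not the paper's): 64 -> 128 channels, 3 x 3 kernel.
std = standard_conv_params(64, 128, 3)   # 73,728 weights
sep = separable_conv_params(64, 128, 3)  # 8,768 weights, roughly 8x fewer

def se_block(feature_map, w1, w2):
    """Squeeze-and-excitation on a (C, H, W) map: global-average-pool each
    channel ('squeeze'), pass the result through a small two-layer MLP with
    a sigmoid ('excitation'), then rescale each channel by its gate."""
    squeezed = feature_map.mean(axis=(1, 2))                          # (C,)
    gate = 1.0 / (1.0 + np.exp(-(np.maximum(squeezed @ w1, 0.0) @ w2)))
    return feature_map * gate[:, None, None]

rng = np.random.default_rng(1)
c, r = 64, 4                          # channels and an assumed reduction ratio
fmap = rng.standard_normal((c, 8, 8))
out = se_block(fmap,
               rng.standard_normal((c, c // r)),
               rng.standard_normal((c // r, c)))
print(std, sep, out.shape)
```

Because each sigmoid gate lies in (0, 1), the SE block can only scale a channel down or leave it nearly unchanged, which is the "strengthen useful, suppress useless" behavior described above, applied per channel.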