
Sensors — Article

Monitoring of Assembly Process Using Deep Learning Technology

Chengjun Chen 1,2,*, Chunlin Zhang 1,2, Tiannuo Wang 1,2, Dongnian Li 1,2, Yang Guo 1,2, Zhengxu Zhao 1,2 and Jun Hong 3

1 School of Mechanical and Automotive Engineering, Qingdao University of Technology, Qingdao, China; (C.Z.); (T.W.); (D.L.); (Y.G.); (Z.Z.)
2 Key Lab of Industrial Fluid Energy Conservation and Pollution Control, Ministry of Education, Qingdao University of Technology, Qingdao, China
3 School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China
* Correspondence: Tel.:

Received: 18 June 2020; Accepted: 23 July 2020; Published: 29 July 2020

Abstract: Monitoring the assembly process is a challenge in the manual assembly of mass customization production, in which the operator needs to change the assembly process according to different products. If an assembly error is not immediately detected during the assembly of a product, it may lead to errors and a loss of time and money in the subsequent assembly process, and will affect product quality. To monitor the assembly process, this paper explored two methods: recognizing assembly actions and recognizing parts from complicated assembled products. For assembly action recognition, an improved three-dimensional convolutional neural network (3D CNN) model with batch normalization is proposed to detect a missing assembly action. For parts recognition, a fully convolutional network (FCN) is employed to segment and recognize different parts from complicated assembled products, in order to check the assembly sequence for missing or misaligned parts. An assembly action data set and an assembly segmentation data set are created. The experimental results on assembly action recognition show that the 3D CNN model with batch normalization reduces computational complexity, improves training speed and speeds up the convergence of the model, while maintaining accuracy. Experimental results on the FCN show that FCN-2S provides a higher pixel recognition accuracy than the other FCNs.
Keywords: monitoring of assembly process; assembly action recognition; segmentation of assembled products; 3D CNN; batch normalization; fully convolutional network

1. Introduction

During the assembly process of a product, if an assembly error is not immediately detected, it may lead to errors and a loss of time and money in the subsequent assembly process, and will affect product quality. Using computer vision technology to monitor the assembly process can reduce operating costs, shorten product production cycles, and reduce the rate of defective products. This is especially the case for mass customization production, when assembly lines are often restructured to produce different products. In a changing production environment, assembly quality can often be affected by missed operation steps or by the irregular operations of workers. This paper considers the use of computer vision-based monitoring of the assembly process, with the aim of quickly and accurately recognizing the assembly actions of workers, and recognizing different parts from complicated assembled products. In this way, the assembly efficiency and quality of products can be improved. Therefore, the research issues of this paper include assembly action recognition and parts recognition from complicated assembled products.

Sensors 2020, 20, 4208; doi: /s

As shown in Figure 1, assembly action recognition falls within the research field of human action recognition. Existing human action recognition methods are mainly divided into two categories according to different feature extraction methods: artificial feature extraction and deep learning. In artificial feature extraction, the action features of images or video frames are extracted, and then classified using different kinds of classifiers. For example, Bobick et al. [1] presented a view-based approach to the representation of action. Firstly, motion energy maps of the motion features are calculated, and then motion classification is performed by matching with stored action templates. Weinland et al. [2] introduced the Motion History Volume (MHV) as an action feature for action recognition. This feature can avoid the influence of different human body features on action recognition. Dalal et al. [3] used HOG features to describe human features and classify them with SVM. Chaudhry et al. [4] represented each frame of a video using a Histogram of Oriented Optical Flow (HOOF) and recognized human actions by classifying HOOF time-series. Schuldt et al. [5] constructed video representations in terms of local space-time features and integrated such representations with SVM classification schemes for recognition. Wang et al. [6] introduced the IDT algorithm. The IDT algorithm is currently a widely used recognition algorithm based on artificial design features. The algorithm uses dense trajectories and motion boundary descriptors to represent video features for action recognition. Artificial feature extraction-based action recognition methods usually require a complex data preprocessing stage, and recognition accuracy and efficiency can be significantly influenced by feature selection. Deep learning-based methods enable adaptive feature learning with simple data preprocessing and have recently been developed and used in the area of computer vision. Chen et al. [7] studied the recognition of repetitive assembly actions to monitor the assembly process of workers and prevent assembly quality problems caused by the irregular operation of workers. The YOLOv3 algorithm [8] is applied to judge assembly tools and recognize the workers' assembly actions. The pose estimation algorithm CPM [9] is employed to recognize human joints, which are subsequently used to judge the operating times of repetitive assembly actions. This paper addresses the problem of assembly action recognition based on deep learning, and proposes a neural network model for assembly action recognition and monitoring.

Figure 1. Assembly action.

In mass customization, a wide range of personalized products, with large differences in assembly process, are produced. Therefore, an assembly monitoring method that detects deviations from the assembly sequence, missing parts, and misaligned parts is needed. Therefore, parts recognition is necessary for monitoring the assembly process. Computer vision-based assembly monitoring is key to improving the efficiency and quality of manual assembly. Kim et al. proposed a vision-based system for monitoring block assembly in ship building. Their system can extract the areas of blocks; the extracted blocks are then identified and compared with CAD data in an effort to estimate assembly progress [10]. Although the abovementioned research realizes the monitoring of assembly progress, it does not provide part recognition, part positioning and assembly recognition in the overall process. Židek et al. [11] conducted experiments regarding the use of convolutional neural networks (CNN) to achieve a robust identification of standard assembly parts (such as screws, nuts) and features. However, the experiments show that the approach fails to detect parts which are located within a group of overlapping parts and for shiny surfaces which show reflections. As shown in Figure 2, different from the identification of scattered parts before assembling, the assembly of a product usually

contains multiple parts which overlap each other. Some parts are only partly exposed due to occlusion, which brings difficulties in detecting a whole part from a complex assembly.

Figure 2. Parts recognition from assembled products.

The main motivation of this paper is to monitor the assembly process by recognizing assembly actions and recognizing parts from complicated assembled products. The main innovations and contributions of the present study are as follows:

(1) We propose a three-dimensional convolutional neural network (3D CNN) model with batch normalization to recognize assembly actions. The proposed 3D CNN model with batch normalization can effectively reduce the number of training parameters and improve the convergence speed.

(2) The fully convolutional network (FCN) is employed for segmenting different parts from a complicated assembled product. After parts segmentation, the recognition of different parts from complicated assembled products is conducted to check the assembly sequence for missing or misaligned parts. As far as we know, we are the first to apply depth image segmentation technology to the application of monitoring the assembly process.

This paper is organized as follows: Section 2 summarizes the state of the art. Section 3 outlines a neural network model for assembly action recognition. Section 4 describes the FCN employed for semantic segmentation of assembled products. Section 5 explains the process of creating the data sets. Experiments and analyses that demonstrate the effectiveness and efficiency of our method are provided in Section 6. Section 7 contains our conclusions and future work.

2. Related Work

Within the research field of human action recognition based on deep learning, Feichtenhofer et al. [12] proposed a convolutional two-stream network fusion for video action recognition, fusing ConvNet towers both spatially and temporally. A single frame is used as input to the spatial stream ConvNet, while multi-frame optical flow is input to the temporal stream. The two streams are fused by a 3D filter that is able to learn correspondences between highly abstract features of the spatial stream and the temporal stream. This method has a high accuracy, but because of the need to extract the optical flow characteristics of the video in advance, training is slow, and the method is not suitable for long-time video frames. Wang et al. [13] proposed a temporal segment network (TSN) for video-based action recognition. The TSN network is established on the base of the two-stream convolutional neural network. In addition to using the optical flow graph as input, TSN uses RGB difference and warped optical flow graphs as input. Tran et al. [14] proposed a C3D (Convolutional 3D) approach for spatiotemporal feature learning using deep three-dimensional convolutional neural networks (3D CNN) [15]. This method is simple, and easy to train and use. Du et al. [16] proposed a recurrent pose-attention network (RPAN). RPAN is an end-to-end recurrent network. This method uses a postural attention mechanism and can learn some human features by sharing parameters on human joints. Then these features are fed into the aggregation layer to construct a posture correlation representation for temporal motion modeling. Donahue [17] proposed a long-term recursive convolution network (LRCN). In this model,

the CNN features extracted in time sequence are used as the input of an LSTM network, which can process time information better. Xu et al. [18] presented a region convolutional 3D network (R-C3D) model. R-C3D firstly extracts features from the network by using the features of the C3D network, then obtains time regions that may contain activities according to the C3D features, and finally obtains the actual activity types in the regions according to the C3D features and recommended areas. R-C3D can handle video input of any length.

Human action recognition is widely studied, but there are few existing studies relating to assembly action recognition in industry, and there is no public data set for industrial assembly actions. Assembly action recognition therefore requires higher recognition accuracy, higher recognition speed, and good adaptability to the working environment, such as changes in light or texture. Most traditional artificial feature extraction methods are problematic because of complicated preprocessing, low speed, and poor stability, so they are unsuitable for industrial applications. Common action recognition models based on deep learning include the LSTM-based LRCN model [17], the two-stream convolutional model [10], and the C3D model [14]. The recognition accuracy of these three models on the public data set UCF-101 [19] is similar, but the C3D model is the fastest, reaching 313 fps [14], while the two-stream convolutional model reaches 1.2 fps [12]. This result is due to the fact that the C3D model is relatively simple in its data preprocessing and network structure; the two-stream convolutional model not only needs to process the image sequence but also needs to extract optical flow information, slowing it down. Due to the difficulty of the parallelism required for the RNN network, the LRCN model is also slow. Using a 3D CNN for assembly action recognition has the advantages of simple data preprocessing, fast training speed and high recognition accuracy, making it more suitable for industrial field applications.
It is difficult to avoid the influence of changes in illumination intensity on recognition accuracy when simply using a 3D CNN to process RGB video sequences. Under the complex production environment of a factory, it is important to lessen the effect of the environment and improve recognition speed. This paper therefore considers the effects of the depth image, binary image and gray image on training speed and accuracy. A 3D CNN model based on the dimensional transformation of single-channel gray video sequences is designed. In addition, the 3D CNN model is improved by introducing a batch normalization layer [20] into the model, which improves the performance of the neural network. Neither single-channel gray images nor three-channel RGB images affect the understanding of motion, and gray images can reduce the sensitivity of the model to different illumination conditions. The improved model is shown to effectively reduce the number of training data parameters and to accelerate the convergence and training speeds of the model, while maintaining accuracy.

Semantic segmentation or image segmentation is a computer vision method judging to which object each pixel in an image belongs. Shotton et al. [21] mapped the difficult pose estimation problem into a simpler per-pixel classification problem, and a depth comparison feature is presented and used to represent the features of each pixel in the depth image. Joo et al. [22] proposed a method to detect the hand region in real-time using the feature of depth difference. Long et al. [23] presented fully convolutional networks (FCN) for end-to-end semantic segmentation. FCN has become the cornerstone of deep learning to solve the segmentation problem. Ronneberger et al. [24] presented a U-Net network, which includes an encoding network that extracts context information and a decoding network that accurately locates its symmetric recovery target. The U-Net can achieve end-to-end training using a small amount of data. In addition, it has achieved good results in biomedical image segmentation. Zhao et al.
[25] proposed a pyramid scene parsing network (PSPNet), which implemented the function of capturing global context information by fusing up and down different regions through the pyramid pool module. The PSPNet network has a good performance in scene analysis tasks. Peng et al. [26] explored the role of large convolution kernels (and effective acceptance domains) in facing simultaneous classification and localization tasks, and proposed a global convolutional network in which atrous convolution can expand the receptive field without reducing resolution. Chen et al. [27] combined

atrous convolution with the pyramid pool module to propose a new spatial pyramid aggregation algorithm (ASPP). The ASPP can segment the target object at multiple scales. Li et al. [28] used graph convolution in semantic segmentation, and improved the Laplace algorithm to be suitable for semantic segmentation tasks. Zhong et al. [29] presented a model that uses a novel squeeze and attention module composition (SANet). In order to make full use of the interdependence of spatial channels, pixel group attention is introduced into the attention convolution channel through the SA module and imposed on conventional convolution. The outputs of SANet's four stratification stages are combined. Huang et al. [30] presented the Criss-Cross network (CCNet). The network proposes a criss-cross attention module to obtain the context information of all pixels on different paths. Fu et al. [31] presented a stacked deconvolutional network. To fuse context information and restore location information, the network superimposes multiple modular shallow deconvolution networks (called SDN units) one by one. Artacho et al. [32] presented a network based on waterfall atrous space pooling, which not only achieves improved accuracy, but also reduces network parameters and memory footprint. Sharma et al. [33] presented a method which used DeconvNet as a pre-training network in order to solve the problem of differences between networks in the process of transfer learning.

From the abovementioned research, we can see that image segmentation technology is mainly used in pose estimation, biomedical image segmentation, scene analysis, face classification and so on. As far as we know, there are few applications of image segmentation in the monitoring of assembled products. In contrast to the identification of scattered parts before assembling, an assembled product usually contains multiple parts which overlap each other. Thus, some parts are only partly visible due to occlusions.
For this reason, it is difficult to detect complete parts within a complex assembled product. As shown in Figure 2, compared with a color image, a depth image is less affected by light and texture conditions. Therefore, it is more suitable for the recognition of metal parts. To monitor the assembly process, this paper performs semantic segmentation, which is also known as pixel-wise classification, on the depth image of the assembled product, to determine to which part each pixel belongs. We propose a depth image segmentation method employing the FCN [23] to recognize parts from complicated assembled products.

3. Three-Dimensional CNN Model with Batch Normalization

The 3D CNN is an extension of the 2D CNN, which adds a time dimension to the base 2D CNN. Since there is no need for complex processing of the input sample data, the processing speed of the 3D CNN is faster, making it more suitable for the application of assembly operations. The conventional 3D CNN model [15] consists of an input layer, three-dimensional convolutional layers, pooling layers, fully connected layers, and an output layer. The input is usually the original RGB video frame or optical flow. Due to the large sample size, the training time is long and the training result is unstable. In this paper, based on the 3D CNN [15] and batch normalization [20], a batch normalization layer is added between the three-dimensional convolutional layer and the activation function on the base of the 3D CNN. The batch normalization layer preprocesses the output of the 3D convolutional layer so that its mean value is 0 and its variance is 1, which speeds up the training speed and convergence speed, and improves the generalization of the model. The structure of the improved 3D CNN is shown in Figure 3. Firstly, continuous video frames are transferred to the three-dimensional convolutional layer, and then the inactivated features obtained from the convolutional layer are transferred to the batch normalization layer. Finally, the features are activated by the ReLU function [34] and transferred to the three-dimensional pooling layer.
The features obtained by the last pooling layer are transferred to the softmax function through the fully connected layer for classification and output. The improved 3D CNN model differs from the 3D CNN model [15] by inserting a batch normalization layer after Conv1, Conv2 and Conv3. In addition, this paper investigates the effects of the gray image, binary image and depth image on the training results, in addition to the RGB image.
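The single-channel gray input discussed above amounts to a small preprocessing step before the 3D CNN. The following is a minimal illustrative sketch (our own illustration, not the authors' code); the BT.601 luma weights and the 16-frame 112×112 clip size are assumptions rather than details given in the paper:

```python
import numpy as np

def rgb_video_to_gray_volume(frames):
    """Convert an RGB frame sequence (T, H, W, 3) into the single-channel
    volume layout (1, 1, T, H, W) expected by a 3D CNN: (batch, channel,
    depth, height, width). Luma weights follow ITU-R BT.601 (assumption)."""
    frames = np.asarray(frames, dtype=np.float32)
    # Weighted sum over the color axis gives a (T, H, W) gray sequence.
    gray = frames @ np.array([0.299, 0.587, 0.114], dtype=np.float32)
    gray /= 255.0                             # scale pixel values to [0, 1]
    return gray[np.newaxis, np.newaxis, ...]  # add batch and channel axes

# A hypothetical 16-frame 112x112 RGB clip, as in C3D-style inputs.
clip = np.random.randint(0, 256, size=(16, 112, 112, 3))
volume = rgb_video_to_gray_volume(clip)
print(volume.shape)  # (1, 1, 16, 112, 112)
```

The dimensional transformation is just the insertion of the batch and channel axes, so the gray sequence conforms to the five-dimensional input the 3D convolutional layer expects.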

Experiments show that the RGB video frame can be transformed into a single-channel gray image by image processing, and its array can be dimensionally transformed to conform to the input requirements of the 3D CNN. Under the 3D CNN model with batch normalization proposed in this paper, the training speed can be improved and the convergence time of the network can be reduced while accuracy is guaranteed. The detailed network structure is shown in Figure 3.

Figure 3. 3D CNN model with batch normalization.

The three-dimensional convolutional layer is shown in the blue part of Figure 3. The video frame sequence is used as the input of the three-dimensional convolutional layer, and the three-dimensional convolutional kernel (as shown in the green part of Figure 3) is used to convolute the input video frame. The size of the data inputted into the three-dimensional convolutional layer is a1 × a2 × a3 (length, height, width), the number of channels is c, the size of the three-dimensional convolutional kernel is f × f × f, and the size of the convolution kernel is f × f × f × c. If the number of 3D convolutional kernels is n, then the dimension N of the output after the convolutional operation can be expressed as shown in Equation (1).

N = (a1 − f + 1) × (a2 − f + 1) × (a3 − f + 1) × n    (1)
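Equation (1) is easy to check programmatically. A short sketch, where the clip size and kernel count below are illustrative assumptions, not the paper's exact configuration:

```python
def conv3d_output_size(a1, a2, a3, f, n):
    """Equation (1): number of output features after a valid (stride-1,
    no-padding) 3D convolution of an a1 x a2 x a3 volume with n kernels
    of size f x f x f."""
    return (a1 - f + 1) * (a2 - f + 1) * (a3 - f + 1) * n

# A hypothetical 16-frame 112x112 clip convolved with 32 kernels of size 3x3x3.
N = conv3d_output_size(16, 112, 112, f=3, n=32)
print(N)  # 14 * 110 * 110 * 32 = 5420800
```

Each spatial or temporal dimension shrinks by f − 1 under valid convolution, and the kernel count n multiplies the result, exactly as Equation (1) states.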
The batch normalization layer is shown in the red part of Figure 3. Like the convolutional layer, pooling layer and fully connected layer, batch normalization can also be used as a neural network layer. When each layer of the network receives its input, a normalization layer is inserted, which is equivalent to preprocessing the data obtained from each convolutional layer before entering the next layer of the network, so as to maintain the data between 0 and 1. The transformation and reconstruction methods that are used will not destroy the distribution features learned by the convolutional layer. The formula for batch normalization is shown in Equation (2).

x̂^(k) = (x^(k) − E[x^(k)]) / √(Var[x^(k)])    (2)

where x^(k) represents the neuron parameter, E[x^(k)] represents the mean, and Var[x^(k)] represents the variance. The formula for transforming and reconstructing the batch normalized parameters is shown in Equation (3), where γ^(k) and β^(k) are learnable transformation and reconstruction parameters.

y^(k) = γ^(k) x̂^(k) + β^(k)    (3)

Three-dimensional pooling operations are also included. Usually, pooling operations include maximum pooling (taking the local maximum) and mean pooling (taking the local mean).
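The normalization of Equation (2) and the scale-and-shift of Equation (3) can be sketched numerically as follows. This is a training-time illustration only; the running statistics used at inference time and the per-channel layout of a 3D CNN are omitted:

```python
import numpy as np

def batch_normalize(x, gamma, beta, eps=1e-5):
    """Equations (2)-(3): normalize activations x to zero mean and unit
    variance over the batch axis, then apply the learnable scale (gamma)
    and shift (beta). eps guards against division by zero."""
    mean = x.mean(axis=0)                    # E[x^(k)] per feature
    var = x.var(axis=0)                      # Var[x^(k)] per feature
    x_hat = (x - mean) / np.sqrt(var + eps)  # Equation (2)
    return gamma * x_hat + beta              # Equation (3)

rng = np.random.default_rng(0)
x = rng.normal(5.0, 3.0, size=(64, 10))      # batch of 64 samples, 10 features
y = batch_normalize(x, gamma=2.0, beta=1.0)
print(y.mean(axis=0).round(3))  # ~1.0 per feature (the shift beta)
```

Whatever the input distribution, the normalized output has mean β and standard deviation γ per feature, which is what stabilizes and speeds up training.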
The pooling operation can effectively reduce the number of features and the amount of calculation, while also retaining local features. The maximum pooling operation is adopted in this model. The pooling operations were carried out after the first, second and fourth batch-normalized layers and after the third and fifth convolutional layers.

The fully connected layer is shown in the grey part of Figure 3. The main function of the fully connected layer is to act as a bridge between the hidden layer and the output layer (it can flatten the characteristic values of the convolutional layer and pooling layer) and then to transmit the results to the output layer for classification. Dropout processing is often carried out in the fully connected layer, and some nodes are randomly hidden to prevent over-fitting. Another method to prevent over-fitting is L2 regularization, which is shown in Equation (4).

J(θ) = (1 / 2m) [ Σ_{i=1}^{m} (h_θ(x_i) − y_i)² + λ Σ_{j=1}^{n} θ_j² ]    (4)

where Σ_{i=1}^{m} (h_θ(x_i) − y_i)² is the loss function, θ represents the parameters of the CNN model, λ Σ_{j=1}^{n} θ_j² is the regular term, and λ is the regularization coefficient. The output layer is classified by the softmax function.

4. FCN for Semantic Segmentation of Assembled Product

As can be seen in Figure 4, we use the FCN for semantic segmentation of a depth image of an assembled product. The FCN can be divided into two stages: the feature learning stage and the semantic segmentation stage. In the feature learning stage, the VGG classification nets [35] were reinterpreted as fully convolutional nets.
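Equation (4) can be illustrated with a short sketch; the predictions, targets and parameters below are hypothetical values chosen for the example, not data from the paper:

```python
import numpy as np

def l2_regularized_loss(h, y, theta, lam):
    """Equation (4): squared-error loss over m samples plus an L2 penalty
    on the model parameters theta, weighted by the regularization
    coefficient lambda (lam), all scaled by 1/(2m)."""
    m = len(y)
    loss = np.sum((h - y) ** 2)         # sum of squared prediction errors
    penalty = lam * np.sum(theta ** 2)  # regular term on the weights
    return (loss + penalty) / (2 * m)

h = np.array([1.0, 2.0, 3.0])       # hypothetical predictions h_theta(x_i)
y = np.array([1.0, 2.5, 2.0])       # hypothetical targets y_i
theta = np.array([0.5, -1.0, 2.0])  # hypothetical model parameters
print(l2_regularized_loss(h, y, theta, lam=0.1))
```

With λ = 0 the expression reduces to the plain squared-error loss, while larger λ values penalize large weights more strongly, which is what discourages over-fitting.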
We further use a transfer learning [36] approach to retrain the parameters of the convolution layers of VGG with depth images and pixel-labeled images of assembled products. The semantic segmentation stage is composed of a skip architecture, which combines coarse, high-layer information with fine, low-layer information. The combined semantic information is up-sampled to the dimension of the input image using a deconvolution layer. Therefore, a label prediction for each pixel is generated while preserving the spatial resolution of the input image. Using the predictions of all pixels, a semantic segmentation of the depth image of an assembled product is obtained.

Figure 4. The FCN structure (the VGG feature learning module followed by the skip-architecture semantic segmentation module).

The lower the selected layer is, the more refined the obtained semantics are. Therefore, on the basis of the FCN-8S nets, lower layers were used for up-sampling to generate more refined semantic segmentations.
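The skip fusion just described (up-sample a coarse score map, then add the scores predicted from a finer pooling layer) can be sketched with NumPy arrays. Nearest-neighbour repetition stands in here for the learned 2× deconvolution, and the 7 × 7 × 15 heat-map shape (15 output classes) is taken from Figure 4; treat the shapes as illustrative.

```python
import numpy as np

def upsample2x(score):
    # Nearest-neighbour stand-in for the learned 2x deconvolution layer.
    return score.repeat(2, axis=0).repeat(2, axis=1)

def skip_fuse(coarse, finer):
    # Up-sample the coarse, high-layer scores and add the finer, low-layer scores.
    return upsample2x(coarse) + finer

heat_map = np.ones((7, 7, 15))         # coarse per-class scores
pool4_scores = np.zeros((14, 14, 15))  # scores predicted from a finer pooling layer
fused = skip_fuse(heat_map, pool4_scores)  # 14 x 14 x 15
```

Repeating this fusion with ever lower pooling layers yields FCN-8S, FCN-4S and FCN-2S before the final up-sampling to the 224 × 224 input resolution.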
We further defined FCN-4S and FCN-2S nets, as well as FCN-16S nets, to compare the effects of semantic segmentation. All of the above-mentioned nets are shown in Figure 4.

5. Creating Data Sets

5.1. Creating a Data Set for Assembly Action Recognition

Training a 3D CNN requires a data set with enough samples, followed by creation and training of the model. The network structure is then continuously adjusted through analysis of the training results to obtain an appropriate 3D CNN model with batch normalization. The process is shown in Figure 5.

Figure 5. Flowchart for creating the data set for assembly action recognition.

The creation of an assembly action data set is required before the neural network can be trained. There is currently no assembly action data set, so this research includes the creation of such a set. The RGB video and depth video were simultaneously recorded by a Kinect depth camera, and video frames were separately extracted from the two videos. Assembly action is different from common human actions (e.g., running, jumping, squatting, etc.).
It is mainly upper-body movement, which is usually repeated, using appropriate assembly tools. Many assembly actions are similar, but the tools used may be different, so tool information is also useful for recognizing an assembly action. The data set of assembly actions created for this research includes nine kinds of common assembly actions (screw twisting, nut twisting, hammering, tape wrapping, spray painting, brushing, clamping, sawing and filing), each of which is operated by 12 people ("participants"). To ensure data set diversity and to enhance the generalization characteristics of the assembly actions, two or three tools are provided to finish each assembly action, chosen by the participants.
When recording a video, the participant performs the corresponding assembly action with respect to his own understanding of the action. The assembly tools are shown in Figure 6.

Figure 6. Assembly tools.

The video for each type of action for each participant was edited and divided into three or four video data samples, each of which was associated with one of the nine assembly action classification labels. Each action category contained 36 segments of video data samples, each of which ranged between 3 and 6 s in duration. Both the depth video and the RGB video adopted the same processing method. The RGB images were converted into gray images and binary images, respectively. Accordingly, four

types of data set were obtained: an RGB video sequence, a depth image video sequence, a gray image video sequence, and a binary image video sequence. Figure 7 shows the four different types of data set images corresponding to the same assembly action.

Figure 7. Comparison of the four image types in the data set. (a) RGB image; (b) Depth image; (c) Binary image; (d) Gray image.

The RGB image is a color image consisting of the three primary colors red, green and blue. Each picture contains three channels of information, and the values of each channel range from 0 to 255. The RGB image is rich in content, but it has three color channels and is sensitive to changes in light intensity. The depth image with depth information is obtained using the Kinect depth sensor; the position information contained in each pixel reflects the distance from the sensor. The binary image is obtained by binarizing the RGB image, using only the values 0 and 1 for each pixel. It loses image information to varying degrees, and it is difficult to distinguish what the experimenter is doing with a single-frame image.
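The gray and binary variants can be derived from each RGB frame; a minimal NumPy sketch follows (the luminance weights and the threshold of 128 are conventional choices, not values given in the paper):

```python
import numpy as np

def to_gray(rgb):
    # Weighted sum of the R, G, B channels (ITU-R BT.601 luminance weights).
    return (rgb.astype(np.float32) @ np.array([0.299, 0.587, 0.114])).astype(np.uint8)

def to_binary(gray, thresh=128):
    # Each pixel becomes 0 or 1 depending on the threshold.
    return (gray >= thresh).astype(np.uint8)
```

Applying `to_gray` and then `to_binary` to every extracted frame produces the gray and binary video sequences from the recorded RGB sequence.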
The gray image is a single-channel image, which no longer contains color information; the value of each pixel ranges from 0 to 255. The gray image can reduce the amount of data while ensuring the integrity of the image information.

5.2. Creating a Data Set for Image Segmentation of Assembled Products

As shown in Figure 8, we designed a flowchart to create the sample set for training the FCN model to recognize parts from complicated assembled products. The process of computer-generating the depth images and labelling the RGB images is as follows:

(1) Commercial CAD software such as SolidWorks is selected to build the CAD model of the product, and the CAD model of the product is saved in obj format.

(2) Multigen Creator modeling software is used to load the assembly model in obj format. Each part in the assembly model is labeled with one unique color; therefore, different parts correspond to different RGB values. The assembly models for the different assembly stages are saved in OpenFlight format.

(3) The Open Scene Graph (OSG) 3D rendering engine is used to design assembly labeling software, which can load and render the assembly model in OpenFlight format, and establish a depth camera imaging model and an RGB camera imaging model. By changing the viewpoint orientation of the depth camera imaging model and the RGB camera imaging model, depth images and RGB images of the product at different assembly stages and from different perspectives can be synthesized by a computer.

Using the above process, the data set for image segmentation of assembled products can be synthesized by computer without using a physical assembly. Therefore, it is suitable for training the FCN model to recognize parts from a personalized product, which is usually not produced before monitoring.
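Because every part is rendered with one unique color, the labeled RGB render can be turned into a per-pixel class map by a simple lookup. The palette below is hypothetical (the actual part colors are defined in the Creator model); the sketch only illustrates the mapping step.

```python
import numpy as np

# Hypothetical palette: one unique RGB color per part, 0 reserved for background.
PALETTE = {(255, 0, 0): 1, (0, 255, 0): 2, (0, 0, 255): 3}

def color_to_label(rgb):
    # Map each rendered color to its part index to obtain the pixel-label image.
    labels = np.zeros(rgb.shape[:2], dtype=np.int32)
    for color, part_id in PALETTE.items():
        labels[np.all(rgb == np.array(color), axis=-1)] = part_id
    return labels
```

The resulting integer label maps, paired with the synthesized depth images, form the training samples for the FCN.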

Figure 8. Flowchart of creating the image segmentation data set.

6. Experiments and Results Analysis

The system used in this experiment is Ubuntu (64 bits); the graphics card is an NVIDIA Quadro M4000, the CPU is an Intel E5 series processor, and the machine has 64 GB of RAM. Experiments for both recognizing assembly actions and recognizing parts from complicated assembled products are conducted.

6.1. Assembly Action Recognition Experiments and Results Analysis

6.1.1. Assembly Action Recognition Experiments

The Adam optimization algorithm [37] is used. The basic network structure in the experiment was first determined based on the training results of the RGB data set of assembly actions. Subsequently, the batch normalization layer was introduced and tested on the different data set images to adjust the network structure. Ultimately, the four data sets were compared and evaluated. The sample size of all data sets is identical, each sample comprising a 16-frame sequence of images in a sub-folder under each action classification. That is, the input is 16 × 112 × 112 × 3 or 16 × 112 × 112 × 1, where 3 and 1 are the numbers of channels. Table 1 shows the settings of the parameters and functions used in the CNN model. Three quarters of each data set were randomly selected as the training set, with 20% of them being the validation set. The remaining quarter of each data set is the test set.
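The split described above (three quarters training, of which 20% is held out for validation, and one quarter testing) can be sketched as follows. This is a sketch only; the paper does not give the splitting code or a random seed.

```python
import numpy as np

def split_dataset(n_samples, seed=0):
    # 3/4 of the samples go to training; 20% of those form the validation set;
    # the remaining 1/4 is the test set.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_train = int(n_samples * 0.75)
    train, test = idx[:n_train], idx[n_train:]
    n_val = int(n_train * 0.2)
    return train[n_val:], train[:n_val], test
```

For the 9 × 36 = 324 video samples in the action data set, this gives 195 training, 48 validation and 81 test samples.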
Table 1. 3D CNN parameter configuration. Method: 3D CNN; crop size: 112 × 112; loss function: cross-entropy; optimizer: Adam.

First, the 3D CNN model is built based on the RGB data set. The model structure adopts the structure of the C3D model [14], comprising only a stack of three-dimensional convolutional layers and three-dimensional pooling layers. Figure 9 shows the comparison between the training results of different numbers of convolutional layers. When the number of three-dimensional convolutional layers is four or five, the accuracy of the test set deviates greatly from that of the training set, and the training result is in an under-fitting state. When the number of three-dimensional convolutional layers is seven or eight, the deviation between test set accuracy and training set accuracy gradually increases, and the phenomenon of over-fitting appears. When the depth of the convolutional stack is six, the 3D CNN model achieves better results. In the absence of any preprocessing of the data set, the accuracy on the test set is 82.85%. Next, the structure of the 3D CNN model is finally determined by introducing the batch normalization layer, adjusting the model, and testing and optimizing on the different types of data set. As shown in Figure 3, the 3D CNN model with batch normalization consists of five three-dimensional

convolutional layers, five three-dimensional pooling layers, three batch normalization layers and two fully connected layers.

Figure 9. Comparison of network depth.

The dimensions of the single-channel data sets are then transformed to conform to the input requirements of the 3D CNN. For example, the size of a gray image is 112 × 112, a two-dimensional matrix which cannot be used as input to the 3D CNN. The dimension of the gray image is thus transformed into 112 × 112 × 1. Finally, the four types of data set of assembly actions are used as input to the improved 3D CNN model.
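The dimension transform just described amounts to appending an explicit channel axis to each 16-frame clip; in NumPy it is a one-line reshape:

```python
import numpy as np

# A clip of 16 grayscale frames of 112 x 112 pixels: the 2-D frames lack the
# channel axis that the 3D CNN input layer expects.
clip = np.zeros((16, 112, 112), dtype=np.float32)
clip_4d = clip[..., np.newaxis]  # shape (16, 112, 112, 1)
```

RGB clips already carry their channel axis (16 × 112 × 112 × 3) and need no such transform.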
The training results are compared and analyzed with respect to four criteria: stability, training time, convergence speed and accuracy.

6.1.2. Analysis of Experimental Results

Comparison of Stability and Convergence Speed

Figures 10-13 show the results obtained from training using the four data sets of assembly actions. Part (a) of each figure shows the accuracy comparison, in which the ordinate is accuracy and the abscissa is the number of training samples. Part (b) shows the loss comparison, in which the ordinate is the loss value and the abscissa is the number of training steps. The "With BN" and "Without BN" curves show the difference between the results with and without the batch normalization layer, respectively.

Figure 10. Comparison curves for the RGB image.

Figure 11. Comparison curves for the binary image.

Figure 12. Comparison curves for the gray image.

Figure 13. Comparison curves for the depth image.

From Figures 10-13, we can see that the introduction of the batch normalization layer improves the convergence speed for the RGB video sequence, the binary image video sequence, the gray image video sequence and the depth image video sequence, and that the improved 3D CNN model provides better stability on the training set. The convergence speeds of training using the binary image and depth image video sequences are slightly slower than those of the RGB image and the gray image, and the effect of the binary image is the worst. Figure 14 shows a comparison of the training results of the batch normalization layer model on the four data sets.

Figure 14. Comparison of training for the four data sets.

Comparison of Accuracy and Training Time

The training results of the common 3D CNN model and the improved 3D CNN model on the four different types of data sets are compared and tested with the test set. The introduction of batch normalization into the 3D CNN model [15] should allow a higher initial learning rate and improve training speed. For the sake of fairness, the initial learning rates of the common 3D CNN and the improved 3D CNN are set to be the same, to avoid the impact of the learning rate on training speed. Table 2 shows the comparison of accuracy and training time for the different test sets. The accuracy for each data set is the average of 10 tests.

Table 2. Comparison of results for the four data sets.

Data Set Type: RGB Image / Binary Image / Gray Image / Depth Image
Accuracy (without BN): 82.85% / 79.78% / 80.86% / 70%
Accuracy (with BN): 83.70% / 79.88% / 81.89% / 68.75%
Training Time (without BN): 50 m 37 s / 54 m 34 s / 46 m 45 s / 55 m 9 s
Training Time (with BN): 51 m 10 s / 54 m 35 s / 48 m 3 s / 55 m 50 s

In comparing the four data sets, the training times for the binary image and the depth image are longer, confirming the result shown in Figure 14; that is, the convergence speeds of the binary image and the depth image are slower than those of the other two types. The training speed can be improved significantly by transforming the RGB image into a gray image through image processing.
In addition, the introduction of the batch normalization layer does not directly improve training speed, but since the batch normalization layer can improve convergence speed, training time can be reduced by decreasing the number of training iterations. The accuracy of the RGB video sequences is the highest owing to the abundant picture information, followed by the gray video sequence, but there is little difference between these two video sequences. In the gray image data set, the identification accuracy for screw twisting, nut twisting, hammering, tape wrapping, spraying, brushing, clamping, sawing and filing is 75%, 80%, 78%, 80%, 87.5%, 78%, 88%, 75% and 90%, respectively. The average speed for recognition of an action is about 18.8 fps. However, both the depth image and the binary image lose image information to varying degrees, resulting in lower test results. This is particularly the case for the depth image, where there may be serious misjudgments. When the depth image is acquired, the true depth value can be recorded. Representing the depth value by a gray scale brings a depth error of about 15 mm. Figures 15 and 16 show depth video frames of the assembly actions of screw twisting and brushing, respectively. It is difficult to see from the pictures which tool is in hand and what the participant is doing.
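The roughly 15 mm error quoted above comes from quantizing the sensor's depth range into 256 gray levels. Assuming a 4 m working range purely for illustration (an assumption, not a value from the paper):

```python
# Assumed values for illustration: an 8-bit gray encoding of a 4 m depth range.
depth_range_mm = 4000      # assumed sensor working range in millimetres
gray_levels = 256          # 8-bit grayscale image
step_mm = depth_range_mm / gray_levels  # depth resolution lost per gray level
```

One gray level then spans about 15.6 mm, consistent with the reported error of about 15 mm.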

Figure 15. Screw twisting.

Figure 16. Brushing.

Analysis of the experimental results has shown that the 3D CNN with fused batch normalization can effectively reduce the number of training parameters and improve convergence speed. The single-channel grayscale image video sequence can preserve the image content well, and training speed is improved while ensuring training precision.

6.2. Parts Recognition Experiments and Results Analysis

As shown in Figure 2, a gear reducer consisting of 14 parts and a worm gear reducer consisting of seven parts are used as the assembled products. The gear reducer and the worm gear reducer were modeled using 3D modeling software. Each part of the assembled product is marked with a unique color. 3D models of the different assembly phases were rendered using the Open Scene Graph (OSG) rendering engine. Using depth buffer technology, depth images with different viewpoints for each assembly phase can be generated. For each product, in total 180 computer-generated depth images were obtained; 120 computer-generated depth images contribute to the training set and 60 computer-generated depth images contribute to the validation set. In addition, 10 depth images of a physical assembled object are used as the test set.
In training the FCN, the training set and validation set were increased to 405 and 135 images, respectively, by a data augmentation method. A transfer learning strategy was employed to initialize the FCN. In order to evaluate the performance of the proposed methods, pixel classification accuracy (as shown in Equation (5)) is used as one of the evaluation criteria. The pixel classification accuracy PA is defined as follows:

PA = P_Y / P_N (5)

where P_Y is the number of correctly predicted pixels and P_N is the total number of pixels. Table 3 shows the FCN network parameter configuration and the number of parameters. FCN-2S has only a slight increase compared to FCN-8S in terms of the number of network parameters.
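Equation (5) is straightforward to compute from a predicted label map and its ground truth; a NumPy sketch:

```python
import numpy as np

def pixel_accuracy(pred, truth):
    # Eq. (5): PA = P_Y / P_N, correctly predicted pixels over all pixels.
    return np.sum(pred == truth) / truth.size

pred = np.array([[1, 2], [0, 0]])
truth = np.array([[1, 2], [0, 3]])  # one of the four pixels is mispredicted
```

Here `pixel_accuracy(pred, truth)` is 0.75, since three of the four pixels are labeled correctly.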

Table 3. Network parameter configuration and number of parameters. All four nets (FCN-16S, FCN-8S, FCN-4S and FCN-2S) use a 224 × 224 input image, cross-entropy loss and the Adam optimizer.

Figure 17 shows the pixel classification accuracy of the FCN-8S network over the training iterations. With an increase in iteration time, the pixel classification accuracy of both the training set and the validation set improves. After training, the final pixel classification accuracy of the validation set is as high as 96.1%. The above results show that the proposed method based on the FCN for assembly monitoring achieves a good performance.

Figure 17. Training accuracy of the FCN.

Table 4 shows the comparison of pixel classification accuracy for the test set and the validation set. From the comparison of pixel classification accuracy and network training time between FCNs with different output structures, it can be seen that FCN-2S has the highest accuracy regarding pixel classification on the test set.
Table 4. Comparison of pixel classification accuracy.

Method / Data Set / PA (Gear Reducer) / PA (Worm Gear Reducer)
FCN-16S / validation set / 93.84% / 97.64%
FCN-8S / validation set / 96.10% / 97.83%
FCN-4S / validation set / 97.72% / 98.59%
FCN-2S / validation set / 98.80% / 99.53%
FCN-2S / test set / 94.95% / 96.52%

As shown in Table 4, the experimental results for the gear reducer show that FCN-4S and FCN-2S are 1.62% and 2.7% higher than FCN-8S in pixel accuracy. The pixel accuracy of FCN-16S is 2.26% lower than that of FCN-8S. For feature learning, the FCN uses a spatially invariant transformation, which also limits the spatial accuracy of the object. The lower-level convolutional layers have accurate location information. By fusing the lower-level convolutional layers, the FCN network can learn

16 Sensors 2020, 20, Sensors 2020, 20, x FOR PEER REVIEW Sensors 2020, 20, x FOR PEER REVIEW accurate has reached location 98.80% information, pixel accuracy, reby and improving test pixel network accuracy performance. has reached The 94.95%. FCN-2S The experimental network has has reached results reached 98.80% 98.80% worm pixel pixel accuracy, gear accuracy, reducer and and also test show pixel test pixel accuracy that FCN-2S accuracy has reached has achieved reached 94.95% %. Thebest experimental The results, experimental with results an results accuracy worm rate gear worm reducer 99.53%, gear also which reducer showis also that 1.7% FCN-2S show higher that has than FCN-2S achieved FCN-8S. has The achieved besttest results, pixel with best accuracy anresults, accuracy has with reached ratean accuracy 99.53%, 96.52%. which In rate summary, is 99.53%, 1.7% higher which FCN-2S than is FCN-8S. 1.7% network higher The has test than achieved pixel FCN-8S. accuracy best The has results test reached pixel in accuracy 96.52%. data set has In summary, for reached image 96.52%. segmentation FCN-2S In summary, network mechanical has achieved FCN-2S assembly. network best s results has achieved 18 in and data 19 show setbest for image results segmentation in data set depth mechanical for images. segmentation assembly. The left figure s shows mechanical 18 and depth 19 show assembly. image segmentation inputting s 18 into and depth 19 FCN-2S show images. model. segmentation The The leftmiddle figure shows figure depth shows images. depth The image output left inputting figure FCN-2S. shows into The right depth FCN-2S figure image model. is inputting ground The middle into truth figure FCN-2S shows left model. depth The output image. middle FCN-2S. figure shows The right output figure is FCN-2S. groundthe truth right figure leftis depth ground image. truth left depth image. 18. Depth image segmentation gear reducer. 18. Depth image segmentation gear reducer. 18. 
Depth image segmentation gear reducer. 19. Depth image segmentation worm gear reducer. 7. Conclusions and Future Work 19. Depth image segmentation worm gear reducer. 7. Conclusions and Future Work 7. Conclusions In this paper, and afuture 3D CNN Work model with batch normalization is proposed. An assembly actions In this paper, 3D CNN model with batch normalization is proposed. An assembly actions data data set including a gray, binary, depth and RGB image is created. The 3D CNN model with batch set including In this paper, a gray, a 3D binary, CNN model depth with and batch RGB normalization image is created. is proposed. The 3D An CNN assembly model actions with batch normalization is tested on four types data set images. The experimental results show that data set normalization including a is gray, tested binary, on depth four types and RGB data image set images. is created. The experimental The 3D CNN results model show with that batch improved 3D CNN model with batch normalization can effectively reduce number training normalization improved 3D is CNN tested model on with four batch types normalization data set images. can effectively The experimental reduce results number show that training parameters, reduce computational complexity, improve training speed and convergence speed, improved parameters, 3D reduce CNN model computational with batch normalization complexity, can improve effectively training reduce speed number and convergence while maintaining accuracy. These results are a significant contribution to research on recognition training parameters, speed, while reduce maintaining computational accuracy. These complexity, results are improve a significant training contribution speed to and convergence research on and monitoring assembly action and assembly quality in mass customization production. speed, recognition while and maintaining monitoring accuracy. 
The FCN-based semantic segmentation method is employed to segment parts from complicated assembled products. The experimental results demonstrate that the FCN-2S network provides the highest pixel classification accuracy and the fastest run time. Both the 3D CNN model with batch normalization and the FCN-based semantic segmentation method can thus serve the purpose of online monitoring of the assembly process in mass customization production. Future work may include research on pose estimation of each part in the product, and on judging whether each part is placed in its right place.
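In the FCN family, the suffix denotes the stride of the final fused prediction: coarse score maps from deep layers are repeatedly upsampled 2x and added to score maps from shallower layers, and FCN-2S carries this skip-connection fusion further down than FCN-8S. The paper's network is not reproduced here; this toy sketch shows one fusion step, using nearest-neighbor upsampling in place of the learned deconvolution:

```python
def upsample2x(grid):
    """Nearest-neighbor 2x upsampling of a 2D score map
    (FCNs use a learned deconvolution; nearest-neighbor keeps the sketch simple)."""
    out = []
    for row in grid:
        wide = [v for v in row for _ in (0, 1)]
        out.append(wide)
        out.append(list(wide))
    return out

def fuse(coarse, skip):
    """Upsample the coarse prediction and add the skip-layer scores elementwise."""
    up = upsample2x(coarse)
    return [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(up, skip)]

coarse = [[1.0]]                  # 1x1 score map from a deep layer
skip = [[0.5, -0.5], [0.0, 1.0]]  # 2x2 score map from a shallower layer
print(fuse(coarse, skip))         # [[1.5, 0.5], [1.0, 2.0]]
```

Fusing with shallower, higher-resolution layers is what restores the fine part boundaries, consistent with the higher pixel accuracy reported for FCN-2S.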
It is also possible to combine the four types of images in the same decision system, and even to combine assembly action monitoring and semantic segmentation to improve the results in both tasks.

Author Contributions: Conceptualization, C.C. and J.H.; methodology, C.C. and Z.Z.; software, C.Z., T.W. and C.C.; validation, T.W., Y.G. and C.C.; formal analysis, T.W., D.L. and C.C.; writing-original draft preparation, C.C.; writing-review and editing, C.C. All authors have read and agreed to the published version of the manuscript.

Funding: This research work was co-supported by the National Natural Science Foundation of China (Grant No. , ) and the Key Research & Development Programs of Shandong Province (Grant No. 2017GGX203003).

Acknowledgments: We thank the operators (whose names were asked to be kept secret) who participated in the experiment to establish the dataset.

Conflicts of Interest: The authors declare no conflict of interest.
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.

More information

第一章

第一章 樹 德 科 技 大 學 應 用 設 計 研 究 所 碩 士 論 文 人 工 美 甲 藝 術 應 用 於 飾 品 創 作 The Application of Artificial Nail to the Creation of Accessories 研 究 生 : 周 嘉 政 Chou-Chia Cheng 指 導 教 授 : 王 安 黎 Wang-An Lee 中 華 民 國 一 百 年 六 月

More information

Microsoft PowerPoint - Performance Analysis of Video Streaming over LTE using.pptx

Microsoft PowerPoint - Performance Analysis of Video Streaming over LTE using.pptx ENSC 427 Communication Networks Spring 2016 Group #2 Project URL: http://www.sfu.ca/~rkieu/ensc427_project.html Amer, Zargham 301149920 Kieu, Ritchie 301149668 Xiao, Lei 301133381 1 Roadmap Introduction

More information

南華大學數位論文

南華大學數位論文 南華大學 碩士論文 中華民國九十五年六月十四日 Elfin Excel I II III ABSTRACT Since Ming Hwa Yuan Taiwanese Opera Company started to cooperate with the Chinese orchestra, the problem of how the participation of Chinese music

More information

JOURNAL OF EARTHQUAKE ENGINEERING AND ENGINEERING VIBRATION Vol. 31 No. 5 Oct /35 TU3521 P315.

JOURNAL OF EARTHQUAKE ENGINEERING AND ENGINEERING VIBRATION Vol. 31 No. 5 Oct /35 TU3521 P315. 31 5 2011 10 JOURNAL OF EARTHQUAKE ENGINEERING AND ENGINEERING VIBRATION Vol. 31 No. 5 Oct. 2011 1000-1301 2011 05-0075 - 09 510405 1 /35 TU3521 P315. 8 A Earthquake simulation shaking table test and analysis

More information

中 國 學 研 究 期 刊 泰 國 農 業 大 學 บ นทอนเช นก น และส งผลก บการด ดแปลงจากวรรณกรรมมาเป นบทภาพยนตร และบทละคร โทรท ศน ด วยเช นก น จากการเคารพวรรณกรรมต นฉบ บเป นหล

中 國 學 研 究 期 刊 泰 國 農 業 大 學 บ นทอนเช นก น และส งผลก บการด ดแปลงจากวรรณกรรมมาเป นบทภาพยนตร และบทละคร โทรท ศน ด วยเช นก น จากการเคารพวรรณกรรมต นฉบ บเป นหล วารสารจ นศ กษา มหาว ทยาล ยเกษตรศาสตร การเล อกสรรของย คสม ยท แตกต างก น โดยว เคราะห การด ดแปลง บทละครโทรท ศน หร อบทภาพยนต จากผลงานคลาสส กวรรณกรรม สม ยใหม ของจ น The Choice of Times Film Adaptation of Chinese

More information

致 谢 本 人 自 2008 年 6 月 从 上 海 外 国 语 大 学 毕 业 之 后, 于 2010 年 3 月 再 次 进 入 上 外, 非 常 有 幸 成 为 汉 语 国 际 教 育 专 业 的 研 究 生 回 顾 三 年 以 来 的 学 习 和 生 活, 顿 时 感 觉 这 段 时 间 也

致 谢 本 人 自 2008 年 6 月 从 上 海 外 国 语 大 学 毕 业 之 后, 于 2010 年 3 月 再 次 进 入 上 外, 非 常 有 幸 成 为 汉 语 国 际 教 育 专 业 的 研 究 生 回 顾 三 年 以 来 的 学 习 和 生 活, 顿 时 感 觉 这 段 时 间 也 精 英 汉 语 和 新 实 用 汉 语 课 本 的 对 比 研 究 The Comparative Study of Jing Ying Chinese and The New Practical Chinese Textbook 专 业 : 届 别 : 姓 名 : 导 师 : 汉 语 国 际 教 育 2013 届 王 泉 玲 杨 金 华 1 致 谢 本 人 自 2008 年 6 月 从 上 海 外

More information

國 立 屏 東 教 育 大 學 中 國 語 文 學 系 碩 士 班 碩 士 論 文 國 小 國 語 教 科 書 修 辭 格 分 析 以 南 一 版 為 例 指 導 教 授 : 柯 明 傑 博 士 研 究 生 : 鄺 綺 暖 撰 中 華 民 國 一 百 零 二 年 七 月 謝 辭 寫 作 論 文 的 日 子 終 於 畫 下 了 句 點, 三 年 前 懷 著 對 文 學 的 熱 愛, 報 考 了 中

More information

Microsoft Word - A200810-897.doc

Microsoft Word - A200810-897.doc 基 于 胜 任 特 征 模 型 的 结 构 化 面 试 信 度 和 效 度 验 证 张 玮 北 京 邮 电 大 学 经 济 管 理 学 院, 北 京 (100876) E-mail: weeo1984@sina.com 摘 要 : 提 高 结 构 化 面 试 信 度 和 效 度 是 面 试 技 术 研 究 的 核 心 内 容 近 年 来 国 内 有 少 数 学 者 探 讨 过 基 于 胜 任 特 征

More information

a b

a b 38 3 2014 5 Vol. 38 No. 3 May 2014 55 Population Research + + 3 100038 A Study on Implementation of Residence Permit System Based on Three Local Cases of Shanghai Chengdu and Zhengzhou Wang Yang Abstract

More information

國立中山大學學位論文典藏

國立中山大學學位論文典藏 I II III IV The theories of leadership seldom explain the difference of male leaders and female leaders. Instead of the assumption that the leaders leading traits and leading styles of two sexes are the

More information

東莞工商總會劉百樂中學

東莞工商總會劉百樂中學 /2015/ 頁 (2015 年 版 ) 目 錄 : 中 文 1 English Language 2-3 數 學 4-5 通 識 教 育 6 物 理 7 化 學 8 生 物 9 組 合 科 學 ( 化 學 ) 10 組 合 科 學 ( 生 物 ) 11 企 業 會 計 及 財 務 概 論 12 中 國 歷 史 13 歷 史 14 地 理 15 經 濟 16 資 訊 及 通 訊 科 技 17 視 覺

More information

PowerPoint Presentation

PowerPoint Presentation Decision analysis 量化決策分析方法專論 2011/5/26 1 Problem formulation- states of nature In the decision analysis, decision alternatives are referred to as chance events. The possible outcomes for a chance event

More information

85% NCEP CFS 10 CFS CFS BP BP BP ~ 15 d CFS BP r - 1 r CFS 2. 1 CFS 10% 50% 3 d CFS Cli

85% NCEP CFS 10 CFS CFS BP BP BP ~ 15 d CFS BP r - 1 r CFS 2. 1 CFS 10% 50% 3 d CFS Cli 1 2 3 1. 310030 2. 100054 3. 116000 CFS BP doi 10. 13928 /j. cnki. wrahe. 2016. 04. 020 TV697. 1 A 1000-0860 2016 04-0088-05 Abandoned water risk ratio control-based reservoir pre-discharge control method

More information

University of Science and Technology of China A dissertation for master s degree Research of e-learning style for public servants under the context of

University of Science and Technology of China A dissertation for master s degree Research of e-learning style for public servants under the context of 中 国 科 学 技 术 大 学 硕 士 学 位 论 文 新 媒 体 环 境 下 公 务 员 在 线 培 训 模 式 研 究 作 者 姓 名 : 学 科 专 业 : 导 师 姓 名 : 完 成 时 间 : 潘 琳 数 字 媒 体 周 荣 庭 教 授 二 一 二 年 五 月 University of Science and Technology of China A dissertation for

More information

Thesis for the Master degree in Engineering Research on Negative Pressure Wave Simulation and Signal Processing of Fluid-Conveying Pipeline Leak Candi

Thesis for the Master degree in Engineering Research on Negative Pressure Wave Simulation and Signal Processing of Fluid-Conveying Pipeline Leak Candi U17 10220 UDC624 Thesis for the Master degree in Engineering Research on Negative Pressure Wave Simulation and Signal Processing of Fluid-Conveying Pipeline Leak Candidate:Chen Hao Tutor: Xue Jinghong

More information

東方設計學院文化創意設計研究所

東方設計學院文化創意設計研究所 東 方 設 計 學 院 文 化 創 意 設 計 研 究 所 碩 士 學 位 論 文 應 用 德 爾 菲 法 建 立 社 區 業 餘 油 畫 課 程 之 探 討 - 以 高 雄 市 湖 內 區 為 例 指 導 教 授 : 薛 淞 林 教 授 研 究 生 : 賴 秀 紅 中 華 民 國 一 o 四 年 一 月 東 方 設 計 學 院 文 化 創 意 設 計 研 究 所 碩 士 學 位 論 文 Graduate

More information

硕 士 学 位 论 文 论 文 题 目 : 北 岛 诗 歌 创 作 的 双 重 困 境 专 业 名 称 : 中 国 现 当 代 文 学 研 究 方 向 : 中 国 新 诗 研 究 论 文 作 者 : 奚 荣 荣 指 导 老 师 : 姜 玉 琴 2014 年 12 月

硕 士 学 位 论 文 论 文 题 目 : 北 岛 诗 歌 创 作 的 双 重 困 境 专 业 名 称 : 中 国 现 当 代 文 学 研 究 方 向 : 中 国 新 诗 研 究 论 文 作 者 : 奚 荣 荣 指 导 老 师 : 姜 玉 琴 2014 年 12 月 硕 士 学 位 论 文 论 文 题 目 : 北 岛 诗 歌 创 作 的 双 重 困 境 专 业 名 称 : 中 国 现 当 代 文 学 研 究 方 向 : 中 国 新 诗 研 究 论 文 作 者 : 奚 荣 荣 指 导 老 师 : 姜 玉 琴 2014 年 12 月 致 谢 文 学 是 我 们 人 类 宝 贵 的 精 神 财 富 两 年 半 的 硕 士 学 习 让 我 进 一 步 接 近 文 学,

More information

:1949, 1936, 1713 %, 63 % (, 1957, 5 ), :?,,,,,, (,1999, 329 ),,,,,,,,,, ( ) ; ( ), 1945,,,,,,,,, 100, 1952,,,,,, ,, :,,, 1928,,,,, (,1984, 109

:1949, 1936, 1713 %, 63 % (, 1957, 5 ), :?,,,,,, (,1999, 329 ),,,,,,,,,, ( ) ; ( ), 1945,,,,,,,,, 100, 1952,,,,,, ,, :,,, 1928,,,,, (,1984, 109 2006 9 1949 3 : 1949 2005, : 1949 1978, ; 1979 1997, ; 1998 2005,,, :,,, 1949, :, ;,,,, 50, 1952 1957 ; ; 60 ; 1978 ; 2003,,,,,,, 1953 1978 1953 1978,,,, 100,,,,, 3,, :100836, :wulijjs @263. net ;,, :

More information

PCA+LDA 14 1 PEN mL mL mL 16 DJX-AB DJ X AB DJ2 -YS % PEN

PCA+LDA 14 1 PEN mL mL mL 16 DJX-AB DJ X AB DJ2 -YS % PEN 21 11 2011 11 COMPUTER TECHNOLOGY AND DEVELOPMENT Vol. 21 No. 11 Nov. 2011 510006 PEN3 5 PCA + PCA+LDA 5 5 100% TP301 A 1673-629X 2011 11-0177-05 Application of Electronic Nose in Discrimination of Different

More information

本 論 文 獲 行 政 院 客 家 委 員 會 一 年 客 家 研 究 優 良 博 碩 士 論 文 獎 助

本 論 文 獲 行 政 院 客 家 委 員 會 一 年 客 家 研 究 優 良 博 碩 士 論 文 獎 助 國 立 中 央 大 學 客 家 社 會 文 化 研 究 所 碩 士 論 文 遠 渡 重 洋 的 美 食 臺 灣 客 家 擂 茶 的 流 變 研 究 生 : 黃 智 絹 指 導 教 授 : 周 錦 宏 博 士 中 華 民 國 一 百 年 六 月 本 論 文 獲 行 政 院 客 家 委 員 會 一 年 客 家 研 究 優 良 博 碩 士 論 文 獎 助 國 立 中 央 大 學 圖 書 館 碩 博 士 論

More information

E I

E I Research on Using Art-play to Construct Elementary School Students' Visual Art Aesthetic Sensibility ~Case of Da-Yuan Elementary School E I E II Abstract Research on Using Art-play to Construct Elementary

More information

穨control.PDF

穨control.PDF TCP congestion control yhmiu Outline Congestion control algorithms Purpose of RFC2581 Purpose of RFC2582 TCP SS-DR 1998 TCP Extensions RFC1072 1988 SACK RFC2018 1996 FACK 1996 Rate-Halving 1997 OldTahoe

More information

UDC Empirical Researches on Pricing of Corporate Bonds with Macro Factors 厦门大学博硕士论文摘要库

UDC Empirical Researches on Pricing of Corporate Bonds with Macro Factors 厦门大学博硕士论文摘要库 10384 15620071151397 UDC Empirical Researches on Pricing of Corporate Bonds with Macro Factors 2010 4 Duffee 1999 AAA Vasicek RMSE RMSE Abstract In order to investigate whether adding macro factors

More information

1.0 % 0.25 % 85μm % U416 Sulfate expansion deformation law and mechanism of cement stabilized macadam base of saline areas in Xinjiang Song

1.0 % 0.25 % 85μm % U416 Sulfate expansion deformation law and mechanism of cement stabilized macadam base of saline areas in Xinjiang Song 1.0 % 0.25 % 85μm 0.97 0.136 % U416 Sulfate expansion deformation law and mechanism of cement stabilized macadam base of saline areas in Xinjiang Song Liang 1,2 Wang Xuan-cang 1 1 School of Highway, Chang

More information

课题调查对象:

课题调查对象: 1 大 陆 地 方 政 府 大 文 化 管 理 职 能 与 机 构 整 合 模 式 比 较 研 究 武 汉 大 学 陈 世 香 [ 内 容 摘 要 ] 迄 今 为 止, 大 陆 地 方 政 府 文 化 管 理 体 制 改 革 已 经 由 试 点 改 革 进 入 到 全 面 推 行 阶 段 本 文 主 要 通 过 结 合 典 型 调 查 法 与 比 较 研 究 方 法, 对 已 经 进 行 了 政 府

More information

10384 199928010 UDC 2002 4 2002 6 2002 2002 4 DICOM DICOM 1. 2. 3. Canny 4. 5. DICOM DICOM DICOM DICOM I Abstract Eyes are very important to our lives. Biologic parameters of anterior segment are criterions

More information

BC04 Module_antenna__ doc

BC04 Module_antenna__ doc http://www.infobluetooth.com TEL:+86-23-68798999 Fax: +86-23-68889515 Page 1 of 10 http://www.infobluetooth.com TEL:+86-23-68798999 Fax: +86-23-68889515 Page 2 of 10 http://www.infobluetooth.com TEL:+86-23-68798999

More information

2008 Nankai Business Review 61

2008 Nankai Business Review 61 150 5 * 71272026 60 2008 Nankai Business Review 61 / 62 Nankai Business Review 63 64 Nankai Business Review 65 66 Nankai Business Review 67 68 Nankai Business Review 69 Mechanism of Luxury Brands Formation

More information

[1-3] (Smile) [4] 808 nm (CW) W 1 50% 1 W 1 W Fig.1 Thermal design of semiconductor laser vertical stack ; Ansys 20 bar ; bar 2 25 Fig

[1-3] (Smile) [4] 808 nm (CW) W 1 50% 1 W 1 W Fig.1 Thermal design of semiconductor laser vertical stack ; Ansys 20 bar ; bar 2 25 Fig 40 6 2011 6 Vol.40 No.6 Infrared and Laser Engineering Jun. 2011 808 nm 2000 W 1 1 1 1 2 2 2 2 2 12 (1. 710119 2. 710119) : bar 808 nm bar 100 W 808 nm 20 bar 2 000 W bar LIV bar 808 nm : : TN248.4 TN365

More information

Fig. 1 1 The sketch for forced lead shear damper mm 45 mm 4 mm 200 mm 25 mm 2 mm mm Table 2 The energy dissip

Fig. 1 1 The sketch for forced lead shear damper mm 45 mm 4 mm 200 mm 25 mm 2 mm mm Table 2 The energy dissip * - 1 1 2 3 1. 100124 2. 100124 3. 210018 - ABAQUS - DOI 10. 13204 /j. gyjz201511033 EXPERIMENTAL STUDY AND THEORETICAL MODEL OF A NEW TYPE OF STEEL-LEAD DAMPING Shen Fei 1 Xue Suduo 1 Peng Lingyun 2 Ye

More information

1 * 1 *

1 * 1 * 1 * 1 * taka@unii.ac.jp 1992, p. 233 2013, p. 78 2. 1. 2014 1992, p. 233 1995, p. 134 2. 2. 3. 1. 2014 2011, 118 3. 2. Psathas 1995, p. 12 seen but unnoticed B B Psathas 1995, p. 23 2004 2006 2004 4 ah

More information

東吳大學

東吳大學 律 律 論 論 療 行 The Study on Medical Practice and Coercion 林 年 律 律 論 論 療 行 The Study on Medical Practice and Coercion 林 年 i 讀 臨 療 留 館 讀 臨 律 六 礪 讀 不 冷 療 臨 年 裡 歷 練 禮 更 老 林 了 更 臨 不 吝 麗 老 劉 老 論 諸 見 了 年 金 歷 了 年

More information

(Microsoft Word - 11-\261i\256m\253i.doc)

(Microsoft Word - 11-\261i\256m\253i.doc) 不 同 接 棒 方 法 對 國 小 學 童 大 隊 接 力 成 績 影 響 之 研 究 不 同 接 棒 方 法 對 國 小 學 童 大 隊 接 力 成 績 影 響 之 研 究 張 峻 詠 林 瑞 興 林 耀 豐 國 立 屏 東 教 育 大 學 摘 要 本 研 究 主 要 目 的 在 探 討 不 同 接 棒 方 法 對 國 小 學 童 大 隊 接 力 成 績 影 響 之 研 究, 以 高 雄 市 楠

More information

1 VLBI VLBI 2 32 MHz 2 Gbps X J VLBI [3] CDAS IVS [4,5] CDAS MHz, 16 MHz, 8 MHz, 4 MHz, 2 MHz [6] CDAS VLBI CDAS 2 CDAS CDAS 5 2

1 VLBI VLBI 2 32 MHz 2 Gbps X J VLBI [3] CDAS IVS [4,5] CDAS MHz, 16 MHz, 8 MHz, 4 MHz, 2 MHz [6] CDAS VLBI CDAS 2 CDAS CDAS 5 2 32 1 Vol. 32, No. 1 2014 2 PROGRESS IN ASTRONOMY Feb., 2014 doi: 10.3969/j.issn.1000-8349.2014.01.07 VLBI 1,2 1,2 (1. 200030 2. 200030) VLBI (Digital Baseband Convertor DBBC) CDAS (Chinese VLBI Data Acquisition

More information

國立中山大學學位論文典藏.PDF

國立中山大學學位論文典藏.PDF ( ) 2-1 p33 3-1 p78 3-2 p79 3-3 p80 3-4 p90 4-1 p95 4-2 p97 4-3 p100 4-4 p103 4-5 p105 4-6 p107 4-7 p108 4-8 p108 4-9 p112 4-10 p114 4-11 p117 4-12 p119 4-13 p121 4-14 p123 4-15 p124 4-16 p131 4-17 p133

More information