Self-Instantiated Recurrent Units with Dynamic Soft Recursion

Aston Zhang, Yi Tay, Yikang Shen, Alvin Chan, Shuai Zhang
Amazon Web Services AI, Google Research, Mila, Université de Montréal, NTU, Singapore, ETH Zürich


Abstract

While standard recurrent neural networks explicitly impose a chain structure on different forms of data, they do not have an explicit bias towards recursive self-instantiation where the extent of recursion is dynamic. Given diverse and even growing data modalities (e.g., logic, algorithmic input and output, music, code, images, and language) that can be expressed in sequences and may benefit from more architectural flexibility, we propose the self-instantiated recurrent unit (Self-IRU) with a novel inductive bias towards dynamic soft recursion. On one hand, the Self-IRU is characterized by recursive self-instantiation via its gating functions: gating mechanisms of the Self-IRU are controlled by instances of the Self-IRU itself, which are repeatedly invoked in a recursive fashion. On the other hand, the extent of the recursion is controlled by gates whose values are between 0 and 1 and may vary across the temporal dimension of sequences, enabling a dynamic soft recursion depth at each time step. The architectural flexibility and effectiveness of our proposed approach are demonstrated across multiple data modalities. For example, the Self-IRU achieves state-of-the-art performance on the logical inference dataset [Bowman et al., 2014] even when compared with competitive models that have access to ground-truth syntactic information.

1 Introduction

Models based on the notion of recurrence have enjoyed pervasive impact across various applications. In particular, most effective recurrent neural networks (RNNs) operate with gating functions.
Such gating functions not only ameliorate vanishing-gradient issues when modeling and capturing long-range dependencies, but also enable fine-grained control over temporal composition for sequences [Hochreiter and Schmidhuber, 1997, Cho et al., 2014]. With diverse and even growing data modalities (e.g., logic, algorithmic input and output, music, code, images, and language) that can be expressed in sequences and may benefit from more architectural flexibility, recurrent neural networks that only impose a chain structure on such data, but lack an explicit bias towards recursive self-instantiation, may be limiting. For example, their gating functions are typically static across the temporal dimension of sequences. In view of this, this paper studies an inductive bias towards recursive self-instantiation where the extent of recursion is dynamic at different time steps. We propose a novel recurrent unit whose gating functions are repeatedly controlled by instances of the recurrent unit itself. Our proposed model is called the self-instantiated recurrent unit (Self-IRU), where self-instantiation indicates modeling via instances of the model itself in a recursive fashion. Specifically, two gates of the Self-IRU are controlled by Self-IRU instances.

Work was done at NTU.

35th Conference on Neural Information Processing Systems (NeurIPS 2021).

Biologically, this design is motivated by the prefrontal cortex/basal ganglia working memory indirection [Kriete et al., 2013]. For example, a child Self-IRU instance drives the gating for outputting from its parent Self-IRU instance. Our proposed Self-IRU is also characterized by dynamically controlled recursion depths. Specifically, we design a dynamic soft recursion mechanism, which softly learns the depth of recursive self-instantiation on a per-time-step basis. More concretely, certain gates are reserved to control the extent of the recursion. Since the values of these gates are between 0 and 1 and may vary across the temporal dimension, they make a dynamic soft recursion depth at each time step possible, which could lead to more architectural flexibility across diverse data modalities.

This design of the Self-IRU is mainly inspired by adaptive computation time (ACT) [Graves, 2016], which learns the number of computational steps between an input and an output, and by recursive neural networks that operate on directed acyclic graphs. On one hand, the Self-IRU is reminiscent of ACT, albeit operated at the parameter level. While seemingly similar, the Self-IRU and ACT differ in their objectives: the goal of the Self-IRU is to dynamically expand the parameters of the model, not to dynamically decide how long to deliberate on input tokens in a sequence. On the other hand, the Self-IRU marries the benefit of recursive reasoning with recurrent models. However, in contrast to recursive neural networks, the Self-IRU is neither concerned with syntax-guided composition [Tai et al., 2015, Socher et al., 2013, Dyer et al., 2016, Wang and Pan, 2020] nor with unsupervised grammar induction [Shen et al., 2017, Choi et al., 2018, Yogatama et al., 2016, Havrylov et al., 2019].

Our Contributions All in all, sequences are fundamentally native to the world, so the design of effective inductive biases for data in this form has far-reaching benefits across a diverse range of real-world applications.
Our main contributions are outlined below:

- We propose the self-instantiated recurrent unit (Self-IRU). It is distinctly characterized by a novel inductive bias towards modeling via instances of the unit itself in a recursive fashion, where the extent of recursion is dynamically learned across the temporal dimension of sequences.
- We evaluate the Self-IRU on a wide spectrum of sequence modeling tasks across multiple modalities: logical inference, sorting, tree traversal, music modeling, semantic parsing, code generation, and pixel-wise sequential image classification. Overall, the empirical results demonstrate the architectural flexibility and effectiveness of the Self-IRU. For example, the Self-IRU achieves state-of-the-art performance on the logical inference dataset [Bowman et al., 2014] even when compared with competitive models that have access to ground-truth syntactic information.

Notation For readability, all vectors and matrices are denoted by lowercase and uppercase bold letters such as x and X, respectively. When a scalar is added to a vector, the addition is applied element-wise [Zhang et al., 2021].

2 Method

This section introduces the proposed Self-IRU. The Self-IRU is fundamentally a recurrent model, but distinguishes itself in that the gating functions that control compositions over time are recursively modeled by instances of the Self-IRU itself, where the extent of recursion is dynamic. In the following, we begin with the model architecture that can recursively self-instantiate. Then we detail its key components, such as how dynamic soft recursion is enabled.

2.1 Self-Instantiation

Given an input sequence of tokens x_1, ..., x_T, the Self-IRU transforms them into hidden states throughout all the time steps: h_1, ..., h_T. Denoting by L the user-specified maximum recursion depth, the hidden state at time step t is

h_t = Self-IRU^{(L)}(x_t, h_{t-1}^{(L)}),

Figure 1: The self-instantiated recurrent unit (Self-IRU) model architecture. Circles represent gates that control information flow from dotted lines, and squares represent transformations or operators.

where Self-IRU^{(L)} is an instance of the Self-IRU model at recursion depth L and h_{t-1}^{(L)} is the hidden state at time step t-1 and recursion depth L. In general, a Self-IRU instance at any recursion depth 0 ≤ l ≤ L returns a hidden state for that depth: Self-IRU^{(l)}(x_t, h_{t-1}^{(l)}) = h_t^{(l)}, which involves the following computation:

f_t = σ( α_t^{(n)} · Self-IRU^{(l-1)}(x_t, h_{t-1}^{(l-1)}) + (1 - α_t^{(n)}) · F_f^{(l)}(x_t, h_{t-1}^{(l)}) )    (2.1)
o_t = σ( β_t^{(n)} · Self-IRU^{(l-1)}(x_t, h_{t-1}^{(l-1)}) + (1 - β_t^{(n)}) · F_o^{(l)}(x_t, h_{t-1}^{(l)}) )    (2.2)
z_t = tanh( F_z^{(l)}(x_t, h_{t-1}^{(l)}) )    (2.3)
c_t = f_t ⊙ c_{t-1} + (1 - f_t) ⊙ z_t    (2.4)
h_t = o_t ⊙ c_t + x_t,    (2.5)

where ⊙ denotes element-wise multiplication, σ denotes the sigmoid function, the scalars α_t^{(n)} and β_t^{(n)} are soft depth gates at time step t and recursion node n in the unrolled recursion paths, and F_f^{(l)}, F_o^{(l)}, and F_z^{(l)} are base transformations at recursion depth l. Without losing sight of the big picture, we will provide more details of such soft depth gates and base transformations later.

On a high level, Figure 1 depicts the model architecture. We highlight that two gating functions of a Self-IRU, the forget gate f_t in (2.1) and the output gate o_t in (2.2), are recursively controlled by instances of the Self-IRU itself. Therefore, we call both the forget and output gates the self-instantiation gates. The base case (l = 0) for the self-instantiation gates is f_t = σ(F_f^{(0)}(x_t, h_{t-1}^{(0)})) and o_t = σ(F_o^{(0)}(x_t, h_{t-1}^{(0)})). At each recursion depth l, the candidate memory cell z_t at time step t is computed in (2.3). Then in (2.4), the forget gate f_t controls the information flow from z_t and the memory cell c_{t-1} at the previous time step to produce the memory cell c_t at the current time step t.
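The recursive gating in (2.1)-(2.5) can be sketched in code. The following is a minimal NumPy illustration (not the released implementation) that uses linear base transformations and, as a simplification of the notation above, lets every recursion depth share the same previous hidden and memory states:

```python
import numpy as np

rng = np.random.default_rng(0)


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


class SelfIRUCell:
    """Sketch of one Self-IRU step (linear base transformations;
    all depths share h_{t-1} and c_{t-1} for brevity)."""

    def __init__(self, dim, depth):
        self.depth = depth
        # Base transformations F_f, F_o, F_z act on [x_t; h_{t-1}].
        self.W = {k: rng.normal(0.0, 0.1, (dim, 2 * dim)) for k in "foz"}
        self.b = {k: np.zeros(dim) for k in "foz"}
        # Soft depth gates alpha_t, beta_t are scalar functions of x_t.
        self.w_alpha = rng.normal(0.0, 0.1, dim)
        self.w_beta = rng.normal(0.0, 0.1, dim)
        # Child instance used for recursive self-instantiation.
        self.child = SelfIRUCell(dim, depth - 1) if depth > 0 else None

    def step(self, x_t, h_prev, c_prev):
        xh = np.concatenate([x_t, h_prev])
        base_f = self.W["f"] @ xh + self.b["f"]
        base_o = self.W["o"] @ xh + self.b["o"]
        if self.child is None:
            # Base case (l = 0): plain gated update.
            f_t = sigmoid(base_f)
            o_t = sigmoid(base_o)
        else:
            # Soft depth gates blend the child instance's output
            # with the base transformations, as in (2.1) and (2.2).
            alpha = sigmoid(self.w_alpha @ x_t)
            beta = sigmoid(self.w_beta @ x_t)
            child_h, _ = self.child.step(x_t, h_prev, c_prev)
            f_t = sigmoid(alpha * child_h + (1 - alpha) * base_f)  # (2.1)
            o_t = sigmoid(beta * child_h + (1 - beta) * base_o)    # (2.2)
        z_t = np.tanh(self.W["z"] @ xh + self.b["z"])              # (2.3)
        c_t = f_t * c_prev + (1 - f_t) * z_t                       # (2.4)
        h_t = o_t * c_t + x_t                                      # (2.5), skip connection
        return h_t, c_t
```

Constructing the cell with depth L builds the full chain of self-instantiated children down to the base case, so one `step` call recursively invokes L child instances.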
As illustrated by the bottom arrow starting from x_t in Figure 1, the output gating in (2.5) also adds a skip connection, as in residual networks, to facilitate gradient flow throughout the recursive self-instantiation of the Self-IRU [He et al., 2016].

2.2 Dynamic Soft Recursion

Now let us detail the soft depth gates α_t^{(n)} and β_t^{(n)} in (2.1) and (2.2) for time step t and recursion node n in the unrolled recursion paths. The index n is used to distinguish nodes at different positions

Figure 2: Soft depth gates α_t^{(n)} and β_t^{(n)} for time step t and recursion node n (denoting F and R as the left child and the right child, respectively) control the extent of the recursion. The extent is indicated by the greyscale of any node at the beginning of an arrow along an unrolled recursion path. These gates are between 0 and 1 and may vary across the temporal dimension, enabling dynamic soft recursion depth at each time step (here the maximum depth is L = 2).

in the recursion tree (e.g., in Figure 2) that is determined by the maximum recursion depth L. We propose learning them in a data-driven fashion. Specifically, we parameterize the gates with

α_t^{(n)} = σ(F_α^{(n)}(x_t)) and β_t^{(n)} = σ(F_β^{(n)}(x_t)),

where F_*^{(n)}(x_t) = W_*^{(n)} x_t + b_*^{(n)} (* ∈ {α, β}), with weight parameters W_*^{(n)} and bias parameters b_*^{(n)} both learned from data. Together with the sigmoid function σ, these simple linear transformations of the input token x_t are applied dynamically at each time step t across the input sequence. Moreover, as shown in (2.1) and (2.2), 0 < α_t^{(n)}, β_t^{(n)} < 1 control the extent of recursion at each recursion node n, enabling soft depth along any unrolled recursive self-instantiation path. Thus, α_t^{(n)} and β_t^{(n)} are called the soft depth gates.

Putting these together, Figure 2 unrolls the recursive self-instantiation paths at two consecutive time steps to illustrate dynamic soft recursion depth. Specifically, the softness is indicated by the greyscale of any node at the beginning of an arrow along an unrolled recursion path. In sharp contrast to multi-layer RNNs, Self-IRUs enable tree structures of self-instantiation, where the extent of recursion is dynamic (to be visualized in Section 3.7).

2.3 Base Transformations

At any recursion depth l, F_f^{(l)} in (2.1), F_o^{(l)} in (2.2), and F_z^{(l)} in (2.3) are base transformations of the input x_t and the hidden state h_{t-1}^{(l)}.
For example, we can model base transformations using RNN units (e.g., an LSTM): at recursion depth l, for * ∈ {f, o, z} we have F_*^{(l)}(x_t, h_{t-1}^{(l)}) = RNN_*^{(l)}(x_t, h_{t-1}^{(l)}). Alternatively, we may also model base transformations with linear layers that only transform the input x_t, using learnable weight parameters W_*^{(l)} and bias parameters b_*^{(l)} for * ∈ {f, o, z}: F_*^{(l)}(x_t) = W_*^{(l)} x_t + b_*^{(l)}. The Self-IRU is agnostic to the choice of base transformations, and we will evaluate different choices in the experiments. We discuss how the Self-IRU can be useful as a (parallel) non-autoregressive model and how it connects to other recurrent models in the supplementary material.

3 Experiments

To demonstrate the architectural flexibility and effectiveness, we evaluate Self-IRUs on a wide range of publicly available benchmarks, perform ablation studies on the maximum recursion depth and base transformations, and analyze the dynamics of the soft depth gates.

3.1 Pixel-wise Sequential Image Classification

The sequential pixel-wise image classification problem treats the pixels in images as sequences. We use the well-established pixel-wise MNIST and CIFAR-10 datasets.

Table 1: Experimental results (accuracy) on the pixel-wise sequential image classification task.

Model                                               #Params   MNIST   CIFAR-10
Independently R-RNN [Li et al., 2018a]
r-LSTM with Aux Loss [Trinh et al., 2018]
Transformer (self-attention) [Trinh et al., 2018]
TrellisNet [Bai et al., 2018b] (reported)           8.0M
TrellisNet [Bai et al., 2018b] (our run)            8.0M
Self-IRU

Table 1 reports the results of Self-IRUs against independently recurrent RNNs [Li et al., 2018a], r-LSTMs with auxiliary loss [Trinh et al., 2018], Transformers (self-attention) [Trinh et al., 2018], and TrellisNets [Bai et al., 2018b]. On both the MNIST and CIFAR-10 datasets, the Self-IRU outperforms most of the other investigated baseline models. For the only exception, the Self-IRU has only about 1/8 of the parameters of the TrellisNet [Bai et al., 2018b] while still achieving comparable performance. This supports that the Self-IRU is a reasonably competitive sequence encoder.

3.2 Logical Inference

We experiment on the logical inference task with the standard dataset proposed by Bowman et al. [2014]. This classification task is to determine the semantic equivalence of two statements expressed with logic operators such as not, and, and or. As per prior work [Shen et al., 2018], the model is trained on sequences with 6 or fewer operations and evaluated on sequences of 6 to 12 operations.

Table 2: Experimental results (accuracy) on the logical inference task (the symbol denotes models with access to ground-truth syntax). The baseline results are reported from Shen et al. [2018]. The Self-IRU achieves state-of-the-art performance.
                                #Operations
Model                           = 7   = 8   = 9   = 10   = 11   = 12
Tree-LSTM [Tai et al., 2015]
LSTM [Bowman et al., 2014]
RRNet [Jacob et al., 2018]
ON-LSTM [Shen et al., 2018]
Self-IRU

We compare Self-IRUs with Tree-LSTMs [Tai et al., 2015], LSTMs [Bowman et al., 2014], RRNets [Jacob et al., 2018], and ordered-neuron (ON-) LSTMs [Shen et al., 2018], based on the common experimental setting in these works. Table 2 reports our results on the logical inference task. The Self-IRU is a strong and competitive model on this task, outperforming the ON-LSTM by a wide margin (+12% on the longest number of operations). Notably, the Self-IRU achieves state-of-the-art performance on this dataset even when compared with Tree-LSTMs, which have access to ground-truth syntactic information.

3.3 Sorting and Tree Traversal

We also evaluate Self-IRUs on two algorithmic tasks that are solvable by recursion: sorting and tree traversal. In sorting, the input to the model is a sequence of integers, and the correct output is the sorted sequence of those integers. Since mapping sorted inputs to outputs can be implemented in a recursive fashion, we evaluate the Self-IRU's ability to model recursively structured sequence data. An example input-output pair would be 9, 1, 10, 5, 3 → 1, 3, 5, 9, 10. We evaluate on sequence lengths m ∈ {5, 10}.
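The sorting target is indeed computable by recursion; for instance, merge sort (a standard algorithm, shown here only to make explicit the recursive structure the task probes, not part of the model) maps the example input to its output:

```python
def merge_sort(seq):
    """Sorting expressed recursively: split the sequence, sort each
    half by recursion, then merge the two sorted halves."""
    if len(seq) <= 1:
        return list(seq)
    mid = len(seq) // 2
    left, right = merge_sort(seq[:mid]), merge_sort(seq[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    # One of the two tails is empty at this point.
    return merged + left[i:] + right[j:]


# The example pair from the task description:
# merge_sort([9, 1, 10, 5, 3]) -> [1, 3, 5, 9, 10]
```

A model that captures this divide-recurse-merge structure can in principle generalize across sequence lengths, which is what varying m tests.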

In the tree traversal problem, we construct a binary tree of maximum depth n. The goal is to generate the postorder tree traversal given the inorder and preorder traversals of the tree; this is known to have exactly one solution. The constructed trees have random sparsity, so trees can grow up to maximum depth n; hence, the trees can have varying depths (models can solve the task entirely when trees are fixed and full). We concatenate the inorder and preorder sequences, delimited by a special token. We evaluate on maximum depths n ∈ {3, 4, 5, 8, 10}. For n ∈ {5, 8}, we ensure that each tree traversal has at least 10 tokens. For n = 10, we ensure that each path has at least 15 tokens. An example input-output pair would be 13, 15, 4, 7, 5, X, 13, 4, 15, 5, 7 → 7, 15, 13, 4, 5.

We frame sorting and tree traversal as sequence-to-sequence tasks [Sutskever et al., 2014] and evaluate models with exact match (EM) accuracy and perplexity (PPL). We use a standard encoder-decoder architecture with attention [Bahdanau et al., 2014] and vary the encoder module among BiLSTMs, stacked BiLSTMs, and ordered-neuron (ON-) LSTMs [Shen et al., 2018].

Table 3: Experimental results (EM and PPL per setting) on the sorting (m ∈ {5, 10}) and tree traversal (n ∈ {3, 4, 5, 8, 10}) tasks.

                   Sorting          Tree Traversal
Model              m = 5   m = 10   n = 3   n = 4   n = 5   n = 8   n = 10
BiLSTM
Stacked BiLSTM
ON-LSTM
Self-IRU

Table 3 reports our results on the sorting and tree traversal tasks. All the models solve the tree traversal task with n = 3; however, the task gets increasingly harder with a greater maximum possible depth and largely remains a challenge for neural models today. On one hand, stacked BiLSTMs always perform better than BiLSTMs, and ON-LSTMs occasionally perform worse than standard BiLSTMs on tree traversal, while on the sorting task ON-LSTMs perform much better than standard BiLSTMs.
On the other hand, the relative performance of the Self-IRU is generally better than that of any of these baselines, especially pertaining to perplexity.

3.4 Music Modeling

Moreover, we evaluate the Self-IRU on the polyphonic music modeling task, i.e., generative modeling of musical sequences. We use three well-established datasets: Nottingham, JSB Chorales, and Piano Midi [Boulanger-Lewandowski et al., 2012]. The inputs are 88-bit (88 piano keys) sequences.

Table 4: Experimental results (negative log-likelihood) on the music modeling task.

Model                                Nottingham   JSB   Piano Midi
GRU [Chung et al., 2014]
LSTM [Song et al., 2019]
G2-LSTM [Li et al., 2018b]
B-LSTM [Song et al., 2019]
TCN [Bai et al., 2018a] (reported)
TCN [Bai et al., 2018a] (our run)
Self-IRU

Table 4 compares the Self-IRU with a wide range of published works: GRU [Chung et al., 2014], LSTM [Song et al., 2019], G2-LSTM [Li et al., 2018b], B-LSTM [Song et al., 2019], and TCN [Bai et al., 2018a]. The Self-IRU achieves the best performance on the Nottingham and Piano Midi datasets. It also achieves competitive performance on the JSB Chorales dataset, underperforming the state of the art by only 0.02 in negative log-likelihood.

3.5 Semantic Parsing and Code Generation

We further evaluate Self-IRUs on the semantic parsing (the Geo, Atis, and Jobs datasets) and code generation (the Django dataset) tasks. These tasks are mainly concerned with learning to parse and generate structured data. We run our experiments on the publicly released source code of [Yin and Neubig,

2018], replacing the recurrent decoder with our Self-IRU decoder (TranX + Self-IRU). We only replace the recurrent decoder, since our early experiments showed that varying the encoder did not yield any benefit in performance. Overall, our hyperparameter details strictly follow the codebase of Yin and Neubig [2018], i.e., we run every model from their codebase as it is.

Table 5: Experimental results (accuracy) on the semantic parsing (the Geo, Atis, and Jobs datasets) and code generation (the Django dataset) tasks.

Model                                              Geo   Atis   Jobs   Django
Seq2Tree [Dong and Lapata, 2016]
LPN [Ling et al., 2016]
NMT [Neubig, 2015]
YN17 [Yin and Neubig, 2017]
ASN [Rabinovich et al., 2017]
ASN + Supv. Attn. [Rabinovich et al., 2017]
TranX [Yin and Neubig, 2018] (reported in code)
TranX [Yin and Neubig, 2018] (our run)
TranX + Self-IRU

Table 5 reports the experimental results in comparison with other competitive baselines such as Seq2Tree [Dong and Lapata, 2016], LPN [Ling et al., 2016], NMT [Neubig, 2015], YN17 [Yin and Neubig, 2017], ASN (with and without supervised attention) [Rabinovich et al., 2017], and TranX [Yin and Neubig, 2018]. We observe that TranX + Self-IRU outperforms all the other approaches, achieving state-of-the-art performance: it outperforms TranX by +1.6% on the code generation task and by +1% on all the semantic parsing tasks. More importantly, the performance gain over the base TranX method allows us to observe the ablative benefit of the Self-IRU, achieved by only varying the recurrent decoder.

3.6 Ablation Studies of the Maximum Recursion Depth and Base Transformations

Table 6: Ablation studies of the maximum recursion depth and base transformations on the semantic parsing (SP) and code generation (CG) tasks.

Table 6 presents ablation studies of the maximum recursion depth (Section 2.1) and base transformations (Section 2.3) of Self-IRUs.
The results are based on the semantic parsing (Atis) and code generation (Django) tasks. We can see that the optimal choice is task-dependent: (i) on the semantic parsing task, using the linear layer performs better than the LSTM for base transformations; (ii) conversely, the linear transformation performs worse than the LSTM on the code generation task. On the whole, we also observe this across the other tasks in the experiments. Table 7 reports the optimal combinations for the diverse tasks in the experiments, where the maximum recursion depth is evaluated over L ∈ {0, 1, 2, 3}. As the different optimal combinations in Table 7 indicate, the best choices of the maximum recursion depth and base transformations of Self-IRUs depend on the task.

Table 7: The optimal maximum recursion depth and base transformations for different tasks in the experiments.

Task                   Max Depth   Base Transformations
Image classification   1           LSTM
Logical inference      2           LSTM
Tree traversal         1           LSTM
Sorting                1           LSTM
Music modeling         2           Linear
Semantic parsing       1           Linear
Code generation        1           LSTM

3.7 Analysis of Soft Depth Gates

Besides the task-specific maximum recursion depth and base transformations, the empirical effectiveness of Self-IRUs may also be explained by the modeling flexibility of the inductive bias towards dynamic soft recursion (Section 2.2). We analyze two aspects below.

Figure 3: Soft depth gate values at initialization and at training epoch 10 on the CIFAR-10 and MNIST datasets. Panels: (a) initial (CIFAR-10), (b) epoch 10 (CIFAR-10), (c) initial (MNIST), (d) epoch 10 (MNIST).

First, during training, the Self-IRU has the flexibility of building data-dependent recursive patterns of self-instantiation. Figure 3 displays the values of all the soft depth gates at all three recursion depths on the CIFAR-10 and MNIST datasets, depicting how the recursive pattern of the Self-IRU is updated during training. For different datasets, the Self-IRU flexibly learns to construct different soft recursive patterns (via soft depth gates with values between 0 and 1).

Second, we want to find out whether the Self-IRU has the flexibility of softly learning the recursion depth on a per-time-step basis via the inductive bias towards dynamic soft recursion. Figure 4 visualizes such patterns (i) for pixel-wise sequential image classification on the CIFAR-10 and MNIST datasets and (ii) for music modeling on the Nottingham dataset. Notably, all the datasets have very diverse temporal compositions of recursive patterns. More concretely, the soft depth gate values fluctuate aggressively on the CIFAR-10 dataset (consisting of color images) in Figure 4a, while remaining more stable for music modeling in Figure 4c. Moreover, these soft depth gate values remain entirely constant on the MNIST dataset (consisting of much simpler grayscale images) in Figure 4b. These observations provide compelling empirical evidence for the architectural flexibility of Self-IRUs: they can adjust the dynamic construction adaptively and can even revert to static recursion over time if necessary (such as for simpler tasks).
Figure 4: Soft depth gate values across the temporal dimension for (a) image classification (CIFAR-10), (b) image classification (MNIST), and (c) music modeling (Nottingham). L and R denote α_t and β_t, respectively (e.g., LLR denotes the node at the end of the unrolled recursive path α_t α_t β_t). The legend lists the nodes L, R, LL, LR, RL, RR, LLL, LLR, RRL, RRR.

The dynamic soft recursion pattern is made more intriguing by observing how the softness changes on the CIFAR-10 and Nottingham datasets. From Figure 4c we observe that the soft recursion pattern of the model changes in a rhythmic fashion, in line with our intuition about musical data. When dealing with pixel information, the recursive pattern in Figure 4a changes adaptively according to the more complex color-image information. Though these empirical results are intuitive, a better understanding of such behaviors may benefit from theoretical or biological perspectives in the future.

4 Related Work

The study of effective inductive biases for sequential representation learning has been a prosperous research direction. This has spurred research across multiple fronts, from gated recurrent models [Hochreiter and Schmidhuber, 1997, Cho et al., 2014] and convolution [Kim, 2014] to self-attention-based models [Vaswani et al., 2017]. The intrinsic hierarchical structure native to many forms of sequences has long fascinated and inspired researchers [Socher et al., 2013, Bowman et al., 2014, 2016, Dyer et al., 2016]. Nested LSTMs use hierarchical memories [Moniz and Krueger, 2017]. The study of recursive networks, popularized by Socher et al. [2013], has provided a foundation for learning syntax-guided composition. Along the same vein, Tai et al. [2015] proposed Tree-LSTMs, which guide LSTM composition with grammar. Recent attempts have been made to learn this process without guidance or syntax-based supervision [Yogatama et al., 2016, Shen et al., 2017, Choi et al., 2018, Havrylov et al., 2019, Kim et al., 2019]. Specifically, ordered-neuron LSTMs [Shen et al., 2018] propose structured gating mechanisms, imbuing the recurrent unit with a tree-structured inductive bias. Besides, Tran et al. [2018] showed that recurrence is important for modeling hierarchical structure.
Notably, learning hierarchical representations across multiple time scales [El Hihi and Bengio, 1996, Schmidhuber, 1992, Koutnik et al., 2014, Chung et al., 2016, Hafner et al., 2017] has also demonstrated effectiveness.

Learning an abstraction and controller over a base recurrent unit is another compelling direction. First proposed in fast weights by Schmidhuber [1992], this notion has been explored in several recent works. HyperNetworks [Ha et al., 2016] learn to generate weights for another recurrent unit, i.e., a form of relaxed weight sharing. On the other hand, the RCRN [Tay et al., 2018] explicitly parameterizes the gates of an RNN unit with other RNN units. Recent studies of the recurrent unit are also reminiscent of this particular notion [Bradbury et al., 2016, Lei et al., 2018].

The fusion of recursive and recurrent architectures is also notable. This direction is probably the most closely related to our proposed method, although with vast differences. Liu et al. [2014] proposed recursive recurrent networks for machine translation that are concerned with the more traditional syntactic-supervision concept of vanilla recursive networks. Jacob et al. [2018] proposed the RRNet, which learns hierarchical structures on the fly by learning to split or merge nodes at each time step, making it reminiscent of other works [Choi et al., 2018, Shen et al., 2018]. Lee and Osindero [2016] and Aydin and Güngör [2020] proposed feeding recursive neural network output into recurrent models. Alvarez-Melis and Jaakkola [2016] proposed doubly recurrent decoders for tree-structured decoding. The core of their method is a depth- and breadth-wise recurrence, which is similar to our model. However, our Self-IRU is concerned with learning recursive self-instantiation, which is in sharp contrast to their objective of decoding trees. Last, our work combines the idea of external meta-controllers [Schmidhuber, 1992, Ha et al., 2016, Tay et al., 2018] with recursive architectures.
Specifically, our recursive parameterization is also a form of dynamic memory that offers improved expressiveness, in a similar spirit to memory-augmented recurrent models [Santoro et al., 2018, Graves et al., 2014, Tran et al., 2016].

5 Summary and Discussions

We proposed the Self-IRU, which is characterized by recursive instantiation of the model itself, where the extent of the recursion may vary temporally. The experiments across multiple modalities demonstrated the architectural flexibility and effectiveness of the Self-IRU. While there is a risk of abusing the Self-IRU, such as for generating fake content, we believe that our model is overall beneficial through effective understanding of our digitized world across diverse modalities.

Acknowledgements. We thank the anonymous reviewers for their insightful comments on this paper.

References

David Alvarez-Melis and Tommi S. Jaakkola. Tree-structured decoding with doubly-recurrent neural networks. 2016.

Cem Rifki Aydin and Tunga Güngör. Combination of recursive and recurrent neural networks for aspect-based sentiment analysis using inter-aspect relations. IEEE Access, 8, 2020.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint, 2014.

Shaojie Bai, J. Zico Kolter, and Vladlen Koltun. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. arXiv preprint, 2018a.

Shaojie Bai, J. Zico Kolter, and Vladlen Koltun. Trellis networks for sequence modeling. arXiv preprint, 2018b.

Nicolas Boulanger-Lewandowski, Yoshua Bengio, and Pascal Vincent. Modeling temporal dependencies in high-dimensional sequences: Application to polyphonic music generation and transcription. arXiv preprint, 2012.

Samuel R. Bowman, Christopher Potts, and Christopher D. Manning. Recursive neural networks can learn logical semantics. arXiv preprint, 2014.

Samuel R. Bowman, Jon Gauthier, Abhinav Rastogi, Raghav Gupta, Christopher D. Manning, and Christopher Potts. A fast unified model for parsing and sentence understanding. arXiv preprint, 2016.

James Bradbury, Stephen Merity, Caiming Xiong, and Richard Socher. Quasi-recurrent neural networks. arXiv preprint, 2016.

Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint, 2014.

Jihun Choi, Kang Min Yoo, and Sang-goo Lee. Learning to compose task-specific tree structures. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.

Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling.
arXiv preprint, 2014.

Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. Hierarchical multiscale recurrent neural networks. arXiv preprint, 2016.

Li Dong and Mirella Lapata. Language to logical form with neural attention. arXiv preprint, 2016.

Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. Recurrent neural network grammars. arXiv preprint, 2016.

Salah El Hihi and Yoshua Bengio. Hierarchical recurrent neural networks for long-term dependencies. In Advances in Neural Information Processing Systems, 1996.

Alex Graves. Adaptive computation time for recurrent neural networks. arXiv preprint, 2016.

Alex Graves, Greg Wayne, and Ivo Danihelka. Neural Turing machines. arXiv preprint, 2014.

David Ha, Andrew Dai, and Quoc V. Le. HyperNetworks. arXiv preprint, 2016.

Danijar Hafner, Alexander Irpan, James Davidson, and Nicolas Heess. Learning hierarchical information flow with recurrent neural modules. In Advances in Neural Information Processing Systems, 2017.

Serhii Havrylov, Germán Kruszewski, and Armand Joulin. Cooperative learning of disjoint syntax and semantics. arXiv preprint, 2019.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8), 1997.

Athul Paul Jacob, Zhouhan Lin, Alessandro Sordoni, and Yoshua Bengio. Learning hierarchical structures on-the-fly with a recurrent-recursive model for sequences. In Proceedings of the Third Workshop on Representation Learning for NLP, 2018.

Yoon Kim. Convolutional neural networks for sentence classification, 2014.

Yoon Kim, Chris Dyer, and Alexander M. Rush. Compound probabilistic context-free grammars for grammar induction. arXiv preprint, 2019.

Jan Koutnik, Klaus Greff, Faustino Gomez, and Juergen Schmidhuber. A clockwork RNN. arXiv preprint, 2014.

Trenton Kriete, David C. Noelle, Jonathan D. Cohen, and Randall C. O'Reilly. Indirection and symbol-like processing in the prefrontal cortex and basal ganglia. Proceedings of the National Academy of Sciences, 110(41), 2013.

Chen-Yu Lee and Simon Osindero. Recursive recurrent nets with attention modeling for OCR in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.

Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, and Yoav Artzi. Simple recurrent units for highly parallelizable recurrence. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, 2018.

Shuai Li, Wanqing Li, Chris Cook, Ce Zhu, and Yanbo Gao. Independently recurrent neural network (IndRNN): Building a longer and deeper RNN. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018a.

Zhuohan Li, Di He, Fei Tian, Wei Chen, Tao Qin, Liwei Wang, and Tie-Yan Liu. Towards binary-valued gates for robust LSTM training. arXiv preprint, 2018b.
Wang Ling, Edward Grefenstette, Karl Moritz Hermann, Tomáš Kočiskỳ, Andrew Senior, Fumin Wang, and Phil Blunsom. Latent predictor networks for code generation. arxiv preprint arxiv: , Shujie Liu, Nan Yang, Mu Li, and Ming Zhou. A recursive recurrent neural network for statistical machine translation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages , Joel Ruben Antony Moniz and David Krueger. Nested lstms. In Asian Conference on Machine Learning, pages PMLR, Graham Neubig. lamtram: A toolkit for language and translation modeling using neural networks, Maxim Rabinovich, Mitchell Stern, and Dan Klein. Abstract syntax networks for code generation and semantic parsing. arxiv preprint arxiv: , Adam Santoro, Ryan Faulkner, David Raposo, Jack Rae, Mike Chrzanowski, Theophane Weber, Daan Wierstra, Oriol Vinyals, Razvan Pascanu, and Timothy Lillicrap. Relational recurrent neural networks. In Advances in Neural Information Processing Systems, pages , Jürgen Schmidhuber. Learning complex, extended sequences using the principle of history compression. Neural Computation, 4(2): ,

12 Yikang Shen, Zhouhan Lin, Chin-Wei Huang, and Aaron Courville. Neural language modeling by jointly learning syntax and lexicon. arxiv preprint arxiv: , Yikang Shen, Shawn Tan, Alessandro Sordoni, and Aaron Courville. Ordered neurons: Integrating tree structures into recurrent neural networks. arxiv preprint arxiv: , Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages , Kyungwoo Song, JoonHo Jang, Il-Chul Moon, et al. Bivariate beta lstm. arxiv preprint arxiv: , Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. Highway networks. arxiv preprint arxiv: , Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages , Kai Sheng Tai, Richard Socher, and Christopher D Manning. Improved semantic representations from tree-structured long short-term memory networks. arxiv preprint arxiv: , Yi Tay, Anh Tuan Luu, and Siu Cheung Hui. Recurrently controlled recurrent networks. In Advances in Neural Information Processing Systems, pages , Ke Tran, Arianna Bisazza, and Christof Monz. Recurrent memory networks for language modeling. arxiv preprint arxiv: , Ke Tran, Arianna Bisazza, and Christof Monz. The importance of being recurrent for modeling hierarchical structure. arxiv preprint arxiv: , Trieu H Trinh, Andrew M Dai, Minh-Thang Luong, and Quoc V Le. Learning longer-term dependencies in rnns with auxiliary losses. arxiv preprint arxiv: , Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, pages , Ashish Vaswani, Samy Bengio, Eugene Brevdo, Francois Chollet, Aidan N. 
Gomez, Stephan Gouws, Llion Jones, Łukasz Kaiser, Nal Kalchbrenner, Niki Parmar, Ryan Sepassi, Noam Shazeer, and Jakob Uszkoreit. Tensor2tensor for neural machine translation. CoRR, abs/ , URL Wenya Wang and Sinno Jialin Pan. Syntactically meaningful and transferable recursive neural networks for aspect and opinion extraction. Computational Linguistics, 45(4): , Pengcheng Yin and Graham Neubig. A syntactic neural model for general-purpose code generation. arxiv preprint arxiv: , Pengcheng Yin and Graham Neubig. Tranx: A transition-based neural abstract syntax parser for semantic parsing and code generation. arxiv preprint arxiv: , Dani Yogatama, Phil Blunsom, Chris Dyer, Edward Grefenstette, and Wang Ling. Learning to compose words into sentences with reinforcement learning. arxiv preprint arxiv: , Aston Zhang, Zachary C Lipton, Mu Li, and Alexander J Smola. Dive into deep learning. arxiv preprint arxiv: ,


*王心齋說得好:「天理者,」

*王心齋說得好:「天理者,」 樂 是 樂 此 學 學 是 學 此 樂 - 梁 漱 溟 對 泰 州 學 派 的 現 代 繼 承 與 改 造 王 汝 華 摘 要 以 發 皇 新 孔 學 為 畢 生 志 業 的 民 初 大 儒 梁 漱 溟, 其 由 佛 歸 儒 的 主 要 思 想 進 路 即 是 泰 州 學 派 本 文 乃 扣 緊 梁 漱 溟 與 泰 州 學 派 的 關 係 而 發, 參 稽 梁 漱 溟 的 系 列 著 作 ; 檢 視

More information

< F5FB77CB6BCBD672028B0B6A46AABE4B751A874A643295F5FB8D5C5AA28A668ADB6292E706466>

< F5FB77CB6BCBD672028B0B6A46AABE4B751A874A643295F5FB8D5C5AA28A668ADB6292E706466> A A A A A i A A A A A A A ii Introduction to the Chinese Editions of Great Ideas Penguin s Great Ideas series began publication in 2004. A somewhat smaller list is published in the USA and a related, even

More information

國立高雄大學○○○○○○學系(研究所)(標楷體18號字

國立高雄大學○○○○○○學系(研究所)(標楷體18號字 國 立 高 雄 大 學 都 市 發 展 與 建 築 研 究 所 碩 士 論 文 高 雄 後 勁 地 區 傳 統 民 居 特 徵 之 研 究 The Study of The Characteristic of Traditional Residential Buildings of Houjing District in Kaohsiung 研 究 生 : 許 輝 隆 撰 指 導 教 授 : 陳 啟

More information

豐佳燕.PDF

豐佳燕.PDF Application of Information Literacy to chiayen@estmtc.tp.edu.tw information literacy Theme-oriented teaching. Abstract Based on the definition of Information Literacy and Six core concepts of the problem

More information

:1949, 1936, 1713 %, 63 % (, 1957, 5 ), :?,,,,,, (,1999, 329 ),,,,,,,,,, ( ) ; ( ), 1945,,,,,,,,, 100, 1952,,,,,, ,, :,,, 1928,,,,, (,1984, 109

:1949, 1936, 1713 %, 63 % (, 1957, 5 ), :?,,,,,, (,1999, 329 ),,,,,,,,,, ( ) ; ( ), 1945,,,,,,,,, 100, 1952,,,,,, ,, :,,, 1928,,,,, (,1984, 109 2006 9 1949 3 : 1949 2005, : 1949 1978, ; 1979 1997, ; 1998 2005,,, :,,, 1949, :, ;,,,, 50, 1952 1957 ; ; 60 ; 1978 ; 2003,,,,,,, 1953 1978 1953 1978,,,, 100,,,,, 3,, :100836, :wulijjs @263. net ;,, :

More information