Entropy Article

Multisensor Estimation Fusion with Gaussian Process for Nonlinear Dynamic Systems

Yiwei Liao 1, Jiangqiong Xie 1, Zhiguo Wang 2,3 and Xiaojing Shen 1,*

1 School of Mathematics, Sichuan University, Chengdu, China; (Y.L.); (J.X.)
2 School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen, China
3 Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei, China
* Correspondence:

Received: 9 October 2019; Accepted: 11 November 2019; Published: 16 November 2019

Abstract: The Gaussian process is gaining increasing importance in different areas such as signal processing, machine learning, robotics, control, and aerospace and electronic systems, since it can represent unknown system functions by posterior probability. This paper investigates multisensor fusion in the setting of Gaussian process estimation for nonlinear dynamic systems. To overcome the difficulty caused by the unknown nonlinear system models, we associate the transition and measurement functions with Gaussian process regression models, so that the advantages of the nonparametric feature of the Gaussian process can be fully exploited for state estimation. Next, based on the Gaussian process filters, we propose two different fusion methods, centralized estimation fusion and distributed estimation fusion, to utilize the multisensor measurement information. Furthermore, the equivalence of the two proposed fusion methods is established by rigorous analysis. Finally, numerical examples for nonlinear target tracking systems demonstrate the equivalence and show that multisensor estimation fusion performs better than the single sensor. Meanwhile, the proposed fusion methods outperform the convex combination method and the relaxed Chebyshev center covariance intersection fusion algorithm.
Keywords: multisensor estimation fusion; Gaussian process; nonlinear dynamic systems; data-driven modeling; target tracking; information fusion

Entropy 2019, 21, 1126; doi: /e

1. Introduction

Estimation in nonlinear systems is extremely important because almost all practical systems involve nonlinearities of one kind or another [1,2], such as target tracking, vehicle navigation, automatic control and robotics [3,4]. In the presence of nonlinearities, estimates cannot be obtained analytically in general. Some methods based on exact parametric models and the ideas of the Kalman filter (KF) have been developed. The extended Kalman filter (EKF) [5] was the most common application to nonlinear systems; it simply linearizes all nonlinear functions via Taylor-series expansion and substitutes Jacobian matrices for the linear transformations in the KF equations. The unscented transformation was introduced to address the deficiencies of linearization, leading to the unscented Kalman filter (UKF) [1], which provides a more direct and explicit mechanism for transforming means and covariance matrices. Under the Bayesian framework, the particle filter (PF) was presented in Reference [6], constructing the posterior probability density function of the state from all available information. However, for nonlinear systems, these methods usually assume that the nonlinear relationships are known. Lack of modeling accuracy, including the identification of the noise and the model
parameters, was inevitable [7]. In many applications, the exact models and system noises of real dynamic systems are difficult to obtain due to the complexity of the target motion environment; therefore, parameterized estimation methods may be invalid.

To overcome the limitations of parametric models, researchers have recently employed so-called nonparametric methods such as the Gaussian process [8] to learn models for dynamic systems. More specifically, the functional representation needs to be learned from the training data before the filtering prediction and update steps [9]. Gaussian processes have been attracting increasing interest in machine learning, signal processing, robotics, control, and aerospace and electronic systems [10–12]. For example, Gaussian process models are used as surrogate models for complex physics models in Reference [13]. Their advantages stem from the fact that Gaussian processes take both the noise in the system and the uncertainty in the model into consideration [14]. In the context of modeling dynamic systems, Gaussian processes can be used as priors over the transition function and the measurement function. By analyzing the correlation among given training data, Gaussian process models can provide posterior distributions over functions through the combination of the prior and the data. For cases in which ground truth data are unavailable or can only be determined approximately, Gaussian process latent variable models were developed in References [15,16] and extended to the setting of dynamical robotics systems in Reference [17]. In order to reduce the cubic complexity of Gaussian process training in the number of training points, sparse Gaussian processes were developed (see, e.g., [18–23]). K-optimality was used to improve the stability of the Gaussian process prediction in Reference [24].
So far, Gaussian process models have been applied successfully to massive nonlinear dynamic systems. Motivated by modeling human motion, learning nonlinear dynamical models with the Gaussian process was investigated in Reference [16]. Many filtering methods, such as the Gaussian process extended Kalman filter (GPEKF) [25], the Gaussian process unscented Kalman filter (GPUKF) [26], the Gaussian process particle filter (GPPF) [27], GP-BayesFilters [14] and the Gaussian process assumed density filter (GPADF) [7,10], were derived by incorporating the Gaussian process transition model and Gaussian process measurement model into the classic filters. GPADF is an efficient form of the assumed density filtering (ADF) introduced in References [28–30] and propagates the full Gaussian density [7]. Although Gaussian processes have been around for decades, these methods mainly focus on single sensor systems.

With the rapid development of sensor technology, computer science, communication technology and information technology, multisensor systems are widely used in military and civil fields [31–35]. Benefiting from the application of multiple sensors, multisensor data fusion makes more comprehensive and accurate decisions by integrating the available information from multiple sensors, and it has attracted considerable research interest. Generally speaking, multisensor estimation fusion mainly comprises centralized estimation fusion and distributed estimation fusion. In centralized estimation fusion, a central processor receives all measurement data from the sensors without preprocessing and uses them to estimate the state [36,37]. In general, many filtering algorithms for single sensor systems can be applied to multisensor systems, since the measurement values can be stacked and regarded as one measurement.
On the other hand, distributed estimation fusion has several advantages in terms of reliability, survivability, communication bandwidth and computational burden [38–42], which make it desirable in real applications such as surveillance, tracking and battle management. In the distributed setting, each local sensor processes its own measurement data and transmits the local state estimate to the fusion center for the purpose of more accurate estimation fusion. So far, a variety of distributed fusion methods have been investigated for different occasions, such as References [37,40,43–46]. A convex combination method was given to fuse the local estimates in Reference [43]. For the problem of unavailable cross-correlation matrices, a relaxed Chebyshev center covariance intersection (RCCCI) algorithm was provided to fuse the local estimates in Reference [37]. A current review of distributed multisensor data fusion under unknown correlation can be found in Reference [47]. However, these multisensor estimation fusion methods are mainly based on
the exact dynamic models or known nonlinear functions. For cases in which accurate parametric models are difficult to obtain, it is worth integrating Gaussian processes with multisensor estimation fusion to improve the system's performance.

In this paper, we focus on multisensor estimation fusion, including centralized and distributed fusion methods, with the Gaussian process for nonlinear dynamic systems. Firstly, given the training data, we learn the dynamic models with the Gaussian process and derive multisensor estimation fusion methods based on the Gaussian process models for nonlinear dynamic systems, which can avoid inappropriate parametric models and improve predictive ability. Combining with the single sensor GPADF and GPUKF, respectively, the prediction step and update step of the multisensor estimation fusion are provided. In general, it is hard to analyze the performance of nonparametric fusion methods. Since the Gaussian process fusion methods have analytic mean and covariance, we show that the distributed estimation fusion is equivalent to the centralized estimation fusion when the single sensor cross terms have full column rank. Numerical examples show that the equivalence holds under this condition and that the multisensor estimation fusion performs better than the single sensor. We also compare the proposed fusion methods with the RCCCI algorithm [37] and the convex combination method [43]. In the Gaussian process model setting, the simulation results show that the multisensor estimation fusion methods outperform the RCCCI algorithm and the convex combination method as far as estimation accuracy is concerned.

This article also extends our earlier work [48]. Compared with the conference paper, the main differences are as follows: detailed proofs are provided; the enhancement of the multisensor fusion algorithm with GPUKF is presented; an additional set of extensive experiments is carried out.
The equivalence condition of Proposition 1 is analyzed; a comparison between the GPADF fusion and the GPUKF fusion is given.

The rest of the paper is organized as follows. In Section 2, the problem formulation and the Gaussian process are introduced. In Section 3, the centralized estimation fusion and the distributed estimation fusion methods are presented. In Section 4, simulations are provided to confirm our analysis. Some conclusions are drawn in Section 5.

2. Preliminaries

2.1. Problem Formulation

In this paper, we consider the state estimation problem of a nonlinear dynamic system with additive noise and N (N ≥ 2) sensor measurements. The multisensor nonlinear dynamic system with one state equation and N measurement equations (see Figure 1) is described as follows:

x_k = h(x_{k−1}) + w_k,  k = 0, 1, . . . , (1)

y_k^m = g^m(x_k) + v_k^m,  m = 1, . . . , N, (2)

where x_k ∈ R^D is the state of the dynamic system at time k and y_k^m ∈ R^{l_m} is the measurement of the m-th sensor at time k, m = 1, . . . , N. h(x_{k−1}) is the transition function of the state x_{k−1}, and g^m(x_k) is the nonlinear measurement function of x_k at the m-th sensor. w_k ∼ N(0, Q_k) is Gaussian system noise and v_k^m ∼ N(0, R_k^m) is Gaussian measurement noise; the noises are mutually independent. Our goal is to estimate the state from all the available sensor measurement information. Gaussian processes are used as priors for the transition function h(x_{k−1}) and the sensor measurement functions g^m(x_k), m = 1, . . . , N. We then make inference about the posterior distribution of the transition function and the sensor measurement functions. The Gaussian process model represents a powerful tool for Bayesian inference about functions [49].
Figure 1. Graphical structure for multisensor nonlinear dynamic systems.

2.2. Gaussian Processes

A Gaussian process is defined over functions; it is a generalization of the Gaussian probability distribution [8]. This means that we consider inference directly in function space. Formally, a Gaussian process is defined as "a collection of random variables, any finite number of which have a joint Gaussian distribution" [8]. As with a Gaussian distribution, knowledge of the mean and covariance functions specifies a Gaussian process. The mean function m(x) and covariance function (also called a kernel) k(x, x′) of a Gaussian process f(x) are defined as follows:

m(x) = E[f(x)],
k(x, x′) = E[(f(x) − m(x))(f(x′) − m(x′))],

where E[·] denotes the expectation. Thus, we write the Gaussian process as f(x) ∼ GP(m(x), k(x, x′)). Unless stated otherwise, the prior mean function is assumed to be 0. The choice of covariance function depends on the application [14]. In this paper, we employ the squared exponential (SE) kernel, the kernel most commonly used in machine learning, which is the prototypical stationary covariance function and is useful for modeling particularly smooth functions. It is defined as

Cov(f(x), f(x′)) = k(x, x′) = α² exp{−(1/2)(x − x′)ᵀ Λ⁻¹ (x − x′)}, (3)

where the parameter α² represents the variance of the function f, which controls the uncertainty of predictions in areas of low training sample density, and the parameter Λ is a diagonal matrix of the characteristic length-scales of the SE kernel. Other commonly employed kernel functions can be found in Reference [8].

A Gaussian process implies a distribution over functions based upon the obtained training data. Assume that we have obtained a set of training data X = {X, y}. X and y are made up of multiple samples drawn from the following standard Gaussian process regression model:

y = f(x) + ε,  ε ∼ N(0, σ_ε²), (4)
where f : R^D → R and f(x) ∼ GP; N denotes the normalized Gaussian probability density function. Note that Gaussian process regression uses the fact that any finite set of training data and testing data of a Gaussian process is jointly Gaussian distributed. Let θ = {α², Λ, σ_ε²} denote the hyperparameters of the Gaussian process. Using the evidence maximization method [8,50], we obtain a point estimate

θ̂ = arg max_θ log p(y | X, θ)

from the known training data [51]. We can solve this optimization problem with numerical optimization techniques such as conjugate gradient ascent [8,14]. Next, we infer the posterior distribution over the function value f(x*) from the training data for each input x* ∈ R^D. The test input x* can be categorized into two classes, depending on whether it is uncertain or not. The corresponding predictive distributions over f(x*) are presented as follows.

(1) Predictive Distribution over a Univariate Function

Suppose that the training data are X = (X, y) = {(x_i, y_i) : i = 1, . . . , n}, where x_i is a D-dimensional input vector, y_i is a scalar output and n is the number of training data. Two cases, deterministic inputs and uncertain inputs, are considered.

Deterministic Inputs

Conditioned on the training data and the deterministic test input x*, the predictive distribution over f(x*) is Gaussian with mean

m_f(x*) = E[f*] = k*ᵀ (K + σ_ε² I)⁻¹ y = k*ᵀ β, (5)

and variance

σ_f²(x*) = Var_f[f*] = k** − k*ᵀ (K + σ_ε² I)⁻¹ k*, (6)

where Var_f[·] represents the variance with respect to f,

k* := k(X, x*),  k** := k(x*, x*),  β := (K + σ_ε² I)⁻¹ y,

and K is the kernel matrix with elements K_ij = k(x_i, x_j). The uncertainty of the prediction is characterized by the variance σ_f²(x*). More details can be found in Reference [8].

Uncertain Inputs

When the test input is uncertain, namely when x* has a probability distribution, the prediction problem is relatively difficult.
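The deterministic-input prediction in Equations (5) and (6) can be sketched in a few lines of NumPy. The function names and hyperparameter values below are illustrative, not part of the paper, and the sketch assumes the SE kernel (3) with Λ = λI:

```python
import numpy as np

def se_kernel(X1, X2, alpha2, lam):
    """SE kernel (3): k(x, x') = alpha2 * exp(-0.5 (x - x')^T Lambda^{-1} (x - x')),
    with Lambda = lam * I for simplicity."""
    X1, X2 = np.atleast_2d(X1), np.atleast_2d(X2)
    d = X1[:, None, :] - X2[None, :, :]
    return alpha2 * np.exp(-0.5 * np.sum(d * d / lam, axis=-1))

def gp_predict(X, y, x_star, alpha2, lam, sigma2_eps):
    """Predictive mean (5) and variance (6) at a deterministic test input x_star."""
    K = se_kernel(X, X, alpha2, lam)                      # K_ij = k(x_i, x_j)
    A = K + sigma2_eps * np.eye(len(X))                   # K + sigma_eps^2 I
    beta = np.linalg.solve(A, y)                          # beta = (K + sigma_eps^2 I)^{-1} y
    k_star = se_kernel(X, x_star, alpha2, lam).ravel()    # k_* = k(X, x_star)
    k_ss = se_kernel(x_star, x_star, alpha2, lam).item()  # k_** = k(x_star, x_star)
    mean = k_star @ beta                                  # m_f(x_*) = k_*^T beta   (5)
    var = k_ss - k_star @ np.linalg.solve(A, k_star)      # sigma_f^2(x_*)          (6)
    return mean, var
```

In practice the hyperparameters θ = {α², Λ, σ_ε²} would first be fitted by evidence maximization rather than fixed by hand as here.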
Let us consider the predictive distribution of f(x*) for the uncertain input x* ∼ N(µ, Σ). According to the results in Reference [7] (or see References [52,53]), the predictive distribution over the function value is

p(f(x*) | µ, Σ) = ∫ p(f(x*) | x*) p(x* | µ, Σ) dx*, (7)
where the mean and variance of the distribution p(f(x*) | x*) are given by Equations (5) and (6), respectively. Based on the conditional expectation and variance formulae, the mean µ* and variance σ*² of the distribution p(f(x*) | µ, Σ) are obtained in closed form. In particular, we have

µ* = E_{x*}[E_f(f(x*) | x*) | µ, Σ] = E_{x*}[m_f(x*) | µ, Σ] = ∫ m_f(x*) N(x* | µ, Σ) dx* = βᵀ l, (8)

and

σ*² = E_{x*}[m_f(x*)² | µ, Σ] + E_{x*}[σ_f²(x*) | µ, Σ] − E_{x*}[m_f(x*) | µ, Σ]²
    = βᵀ L β + α² − tr((K + σ_ε² I)⁻¹ L) − (µ*)². (9)

The vector l = [l_1, . . . , l_n]ᵀ in Equation (8) has entries

l_j = ∫ k_f(x_j, x*) p(x*) dx* = α² |ΣΛ⁻¹ + I|^{−1/2} exp{−(1/2)(x_j − µ)ᵀ (Σ + Λ)⁻¹ (x_j − µ)}.

In Equation (9), tr(·) represents the trace, and

L_ij = k_f(x_i, µ) k_f(x_j, µ) |2ΣΛ⁻¹ + I|^{−1/2} exp{(z̄_ij − µ)ᵀ (Σ + (1/2)Λ)⁻¹ ΣΛ⁻¹ (z̄_ij − µ)},  z̄_ij := (1/2)(x_i + x_j).

In general, the predictive distribution in Equation (7) cannot be calculated analytically, since a Gaussian distribution mapped through a nonlinear function leads to a non-Gaussian distribution. Using the moment-matching method, the distribution p(f(x*) | µ, Σ) can be approximated by the Gaussian distribution N(µ*, σ*²).

(2) Predictive Distribution over a Multivariate Function

For the model (4), let us turn to the multiple-output case, f : R^D → R^E, f ∼ GP. Following the results in References [7,51], we train E Gaussian process regression models independently. The a-th model is learned from the training data [X, y^a], y^a = [y_1^a, . . . , y_n^a]ᵀ, a = 1, . . . , E, where y_j^a is the a-th element of y_j. This implies that any two target dimensions are conditionally independent given the input. For any deterministic input x*, the mean and variance of each target dimension are obtained by Equations (5) and (6). The predictive mean of f(x*) is the vector formed by stacking the E target dimension means, and the corresponding covariance is a diagonal matrix whose diagonal elements are the variances of the individual target dimensions.
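The closed-form mean in Equation (8) can likewise be sketched. The function `mm_mean` and its argument names are hypothetical; the sketch assumes a zero prior mean and the SE kernel (3), with β precomputed as in Equation (5):

```python
import numpy as np

def mm_mean(X, beta, alpha2, Lam, mu, Sigma):
    """Predictive mean (8), mu_* = beta^T l, for an uncertain input x* ~ N(mu, Sigma).

    X:   (n, D) training inputs; beta = (K + sigma_eps^2 I)^{-1} y as in (5).
    Lam: (D, D) diagonal matrix of SE length-scales; Sigma: (D, D) input covariance.
    """
    D = X.shape[1]
    # |Sigma Lambda^{-1} + I|^{-1/2} factor common to all l_j
    c = alpha2 / np.sqrt(np.linalg.det(Sigma @ np.linalg.inv(Lam) + np.eye(D)))
    S_inv = np.linalg.inv(Sigma + Lam)                 # (Sigma + Lambda)^{-1}
    diffs = X - mu                                     # rows are x_j - mu
    quad = np.einsum('ij,jk,ik->i', diffs, S_inv, diffs)
    l = c * np.exp(-0.5 * quad)                        # entries l_j
    return beta @ l
```

As Σ → 0, each l_j collapses to k(x_j, µ) and the result reduces to the deterministic mean (5) evaluated at x* = µ, which gives a quick sanity check.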
With the uncertain input x* ∼ N(µ, Σ), the predictive mean µ* of f(x*) is likewise the stacked vector of the E predictive means µ_a* given by Equation (8), a = 1, . . . , E. Unlike predicting at deterministic
inputs, the target dimensions are dependent due to the uncertain input. Denote f_a* = f_a(x*); the corresponding predictive covariance matrix Σ* is given by

Σ* = [ Var[f_1* | µ, Σ]        · · ·   Cov[f_1*, f_E* | µ, Σ]
            ⋮                                 ⋮
       Cov[f_E*, f_1* | µ, Σ]  · · ·   Var[f_E* | µ, Σ] ]. (10)

It is obvious that the predictive covariance is no longer a diagonal matrix. Using Equation (9), we can obtain the diagonal elements of the covariance matrix (10). For a, b ∈ {1, . . . , E}, the off-diagonal elements satisfy

Cov[f_a*, f_b* | µ, Σ] = E_{f,x*}[f_a(x*) f_b(x*) | µ, Σ] − µ_a* µ_b*.

Given x*, f_a(x*) and f_b(x*) are independent, so we have

E_{f,x*}[f_a* f_b* | µ, Σ] = ∫∫ f_a* f_b* p(f_a, f_b | x*) p(x* | µ, Σ) df dx*
= β_aᵀ ( ∫ k_f^a(X, x*) k_f^b(x*, X) p(x* | µ, Σ) dx* ) β_b, (11)

where β_a and β_b are obtained as in Equation (5). For notational simplicity, define

L := ∫ k_f^a(X, x*) k_f^b(x*, X) p(x* | µ, Σ) dx*,

where

L_ij = α_a² α_b² |(Λ_a⁻¹ + Λ_b⁻¹)Σ + I|^{−1/2} exp{−(1/2)(x_i − x_j)ᵀ (Λ_a + Λ_b)⁻¹ (x_i − x_j)} exp{−(1/2)(z_ij − µ)ᵀ R⁻¹ (z_ij − µ)},

R := (Λ_a⁻¹ + Λ_b⁻¹)⁻¹ + Σ,
z_ij := Λ_b(Λ_a + Λ_b)⁻¹ x_i + Λ_a(Λ_a + Λ_b)⁻¹ x_j.

Thus, we again approximate the distribution p(f(x*) | µ, Σ) with the Gaussian distribution N(µ*, Σ*). More details can be found in Reference [51].

3. Multisensor Estimation Fusion

In this section, the essential premise is that the transition function in (1) and the measurement functions in (2) are either unknown or no longer accessible. Thus, we model the latent functions with Gaussian processes. For the N-sensor dynamic system (1) and (2), suppose that we have obtained training data of the target state and sensor measurements. In the following, we discuss the centralized and distributed estimation fusion methods for estimating the state from a sequence of noisy sensor measurements.
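A minimal sketch of the multivariate deterministic-input prediction described above (E independently trained models, stacked means, diagonal covariance). For brevity it assumes all E dimensions share the same SE hyperparameters, unlike the per-dimension training in the text; the function name is illustrative:

```python
import numpy as np

def multi_gp_predict(X, Y, x_star, alpha2, lam, sigma2_eps):
    """Deterministic-input prediction for f: R^D -> R^E modeled as E independent GPs.

    Y is (n, E), one column of targets per output dimension. Returns the stacked
    mean vector and the diagonal predictive covariance (shared variance here,
    since the hyperparameters are shared across dimensions in this sketch)."""
    n, E = Y.shape
    d = X[:, None, :] - X[None, :, :]
    K = alpha2 * np.exp(-0.5 * np.sum(d * d / lam, axis=-1))    # kernel matrix
    A = K + sigma2_eps * np.eye(n)
    ks = alpha2 * np.exp(-0.5 * np.sum((X - x_star) ** 2 / lam, axis=-1))
    mean = ks @ np.linalg.solve(A, Y)            # E stacked predictive means (5)
    var = alpha2 - ks @ np.linalg.solve(A, ks)   # per-dimension variance (6)
    return mean, np.eye(E) * var
```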
3.1. Centralized Estimation Fusion

First, we consider the centralized estimation fusion method. To facilitate fusion, we stack the multisensor measurement information as follows:

y_k = [(y_k^1)ᵀ, . . . , (y_k^N)ᵀ]ᵀ,
g(x_k) = [(g^1(x_k))ᵀ, . . . , (g^N(x_k))ᵀ]ᵀ, (12)
v_k = [(v_k^1)ᵀ, . . . , (v_k^N)ᵀ]ᵀ.

Thus, the dynamic system (1) and (2) can be rewritten as

x_k = h(x_{k−1}) + w_k, (13)
y_k = g(x_k) + v_k, (14)

where the covariance of the noise v_k is given by

Cov(v_k) = Σ = diag(Σ^1, . . . , Σ^N),  Cov(v_k^m) = Σ^m,  m = 1, . . . , N.

In the Gaussian process dynamic system setup, two Gaussian process models can be trained using evidence maximization: GP_h models the mapping x_{k−1} → x_k, R^D → R^D, and GP_g models the mapping x_k → y_k, R^D → R^{NE}. As we know, the key to the estimation is to recursively infer the posterior distribution over the state x_k of the dynamic system (13)–(14) based on all the sensor measurements y_{1:k} = {y_τ}_{τ=1}^k, through the prediction step and the update step. In particular, the prediction step uses the posterior distribution from the previous time step to produce a predictive distribution of the state at the current time step. In the update step, the current predictive distribution is combined with the current measurement information to refine the posterior distribution at the current time step. Next, the centralized estimation fusion methods with GPADF and GPUKF are presented in turn.

(1) Fusion with GPADF

Note that some approximation is required to obtain an analytic posterior distribution for nonlinear dynamic systems. For the Gaussian process dynamic system in particular, it is straightforward to make a Gaussian approximation to the posterior distribution. GPADF exploits the fact that the true moments of the Gaussian process predictive distribution can be computed in closed form. The predictive distribution is approximated by a Gaussian with the exact predictive mean and covariance [7].
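The stacking in Equation (12) and the block-diagonal noise covariance diag(Σ^1, . . . , Σ^N) can be illustrated as follows; the two-sensor measurement values and noise covariances are hypothetical:

```python
import numpy as np

# Hypothetical per-sensor measurements and noise covariances for N = 2 sensors,
# with l_1 = 2 and l_2 = 1.
y1, y2 = np.array([1.0, 2.0]), np.array([3.0])
S1, S2 = np.eye(2) * 0.1, np.eye(1) * 0.2

# Stacked measurement vector y_k of (12) and block-diagonal Cov(v_k).
y = np.concatenate([y1, y2])
Sigma = np.block([[S1, np.zeros((2, 1))],
                  [np.zeros((1, 2)), S2]])
```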
Based on the following lemma, we present the centralized estimation fusion method with GPADF.

Lemma 1. For the Gaussian process dynamic system (13)–(14), the posterior distribution of the state x_k can be approximated by the Gaussian distribution N(µ_k^e, C_k^e), in which the mean µ_k^e and covariance C_k^e are given by [7]:

µ_k^e = µ_k^p + C_{x_k y_k} (C_{y_k})⁻¹ (y_k − µ_k^y), (15)

C_k^e = C_k^p − C_{x_k y_k} (C_{y_k})⁻¹ C_{x_k y_k}ᵀ, (16)
where

µ_k^p = E[x_k | y_{1:k−1}],  C_k^p = Cov(x_k | y_{1:k−1}),
µ_k^y = E[y_k | y_{1:k−1}],  C_{y_k} = Cov(y_k | y_{1:k−1}),
C_{x_k y_k} = Cov(x_k, y_k | y_{1:k−1}). (17)

Remark 1. Since the GPADF algorithm [7] is a state estimation method for single sensor Gaussian process dynamic systems, it is applicable to the multisensor system once all the sensor measurements are stacked into a vector. Therefore, Equations (15) and (16) are called the centralized estimation fusion method with GPADF here.

Next, we present the prediction step and update step of the above fusion method in detail.

Prediction Step

First, this step uses the previous posterior distribution p(x_{k−1} | y_{1:k−1}) ≈ N(µ_{k−1}^e, C_{k−1}^e) to produce the current predictive distribution p(x_k | y_{1:k−1}). We write the predictive distribution as

p(x_k | y_{1:k−1}) = ∫ p(x_k | x_{k−1}) p(x_{k−1} | y_{1:k−1}) dx_{k−1},

and p(x_k | x_{k−1}) is a Gaussian distribution because h ∼ GP and w_k ∼ N(0, Σ_w). From this, an analogy with Equation (7) yields the mean µ_k^p and covariance C_k^p of the state x_k conditioned on the measurements y_{1:k−1}. In particular,

µ_k^p = E(x_k | y_{1:k−1}) = E[h(x_{k−1}) | y_{1:k−1}] = E_{x_{k−1}}[E_h(h(x_{k−1}) | x_{k−1}) | y_{1:k−1}]. (18)

µ_k^p is a D-dimensional mean vector, and the computation of each target dimension is given by Equation (8). The corresponding covariance matrix is

C_k^p = Cov(h(x_{k−1}) | y_{1:k−1}) + Cov(w_k) = Σ_h + Σ_w, (19)

where Σ_w is the covariance matrix of the transition noise obtained by the evidence maximization method, and the computation of Σ_h is given by Equation (10).

Update Step

Next, we approximate the joint distribution p(x_k, y_k | y_{1:k−1}) with a joint Gaussian N(µ_k, C_k), where

µ_k = [µ_k^p; µ_k^y],  C_k = [C_k^p, C_{x_k y_k}; C_{x_k y_k}ᵀ, C_{y_k}].

Note that we are not aiming to approximate the distribution p(x_k | y_{1:k}) directly. To obtain the joint Gaussian approximation, we use the Gaussian distribution N(µ_k^y, C_{y_k}) to approximate the distribution p(y_k | y_{1:k−1}), which can be done in the same way as the prediction step, since

p(y_k | y_{1:k−1}) = ∫ p(y_k | x_k) p(x_k | y_{1:k−1}) dx_k.
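The update in Equations (15) and (16) is standard Gaussian conditioning on the joint approximation above. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def gaussian_update(mu_p, C_p, mu_y, C_y, C_xy, y):
    """Condition the joint Gaussian of (x_k, y_k) on the measurement y_k,
    i.e. Equations (15)-(16)."""
    G = C_xy @ np.linalg.inv(C_y)    # gain C_{x_k y_k} (C_{y_k})^{-1}
    mu_e = mu_p + G @ (y - mu_y)     # (15)
    C_e = C_p - G @ C_xy.T           # (16)
    return mu_e, C_e
```

For a scalar linear-Gaussian case (x ∼ N(0, 1), y = x + unit-variance noise, C_xy = 1, C_y = 2), observing y = 1 gives the familiar posterior mean 0.5 and variance 0.5, which serves as a sanity check.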
On the other hand, the cross term satisfies

C_{x_k y_k} = E_{x_k,g}[x_k (y_k)ᵀ | y_{1:k−1}] − µ_k^p (µ_k^y)ᵀ.

For the unknown term E_{x_k,g}[x_k (y_k)ᵀ | y_{1:k−1}], we have, for each component ã = 1, 2, . . . , NE,

E_{x_k,g_ã}[x_k y_k^ã | y_{1:k−1}] = E_{x_k,g_ã}[x_k g_ã(x_k) | y_{1:k−1}]
= ∫ x_k [ ∑_{i=1}^n β_i^ã k_{g_ã}(x_k, x_i) ] p(x_k) dx_k
= ∑_{i=1}^n β_i^ã ∫ x_k c_1 N(x_k | x_i, Λ_ã) N(x_k | µ_k^p, C_k^p) dx_k
= c_1 (c_2)⁻¹ ∑_{i=1}^n β_i^ã ψ(x_i, µ_k^p), (20)

where c_1⁻¹ is the normalization constant of the unnormalized SE kernel, ψ(x_i, µ_k^p) is the mean of the new Gaussian distribution given by the product of the two Gaussian density functions, and c_2⁻¹ is the normalization constant of that new Gaussian distribution. Consequently, from the joint distribution p(x_k, y_k | y_{1:k−1}), we obtain the posterior distribution p(x_k | y_{1:k}) ≈ N(µ_k^e, C_k^e) with mean and covariance given by Equations (15) and (16), respectively.

Remark 2. From Equations (15) and (16), we can see that the centralized fusion method with GPADF is similar to the unified optimal linear estimation fusion [40,54]. The difference lies in the way µ_k^p, C_k^p, µ_k^y, C_{y_k} and C_{x_k y_k} are computed: they are based mainly on the training data, owing to the nonparametric dynamic system model.

(2) Fusion with GPUKF

Following the single sensor GPUKF (see, e.g., References [14,26]), we present the prediction step and update step of the GPUKF fusion.

Prediction Step

At time k − 1, based on the unscented transform [1], we obtain a set X_{k−1} containing 2D + 1 sigma points from the mean µ_{k−1}^e and covariance C_{k−1}^e. Using the GP prediction model, the transformed set is given by

X̄_k^[i] = GP_h^µ(X_{k−1}^[i]),  for i = 1, . . . , 2D + 1, (21)

and the process noise is computed as

Q_k = GP_h^Σ(µ_{k−1}^e), (22)

where GP_h^µ and GP_h^Σ are the mean and covariance models with respect to the Gaussian process GP_h. The predictive mean and covariance are

µ_k^p = ∑_{i=1}^{2D+1} W^[i] X̄_k^[i], (23)

C_k^p = ∑_{i=1}^{2D+1} W^[i] (X̄_k^[i] − µ_k^p)(X̄_k^[i] − µ_k^p)ᵀ + Q_k, (24)
where W^[i] is the weight generated in the unscented transform. Using the predictive mean µ_k^p and covariance C_k^p, we obtain the sigma points X̂_k^[i] through the unscented transform. Thus, based on the GP observation model,

Ŷ_k^[i] = GP_g^µ(X̂_k^[i]),  for i = 1, . . . , 2D + 1, (25)

and the observation noise matrix is determined by

R_k = GP_g^Σ(µ_k^p). (26)

Then the predicted observation and the innovation covariance are calculated as

ŷ_k = ∑_{i=1}^{2D+1} W^[i] Ŷ_k^[i], (27)

S_k = ∑_{i=1}^{2D+1} W^[i] (Ŷ_k^[i] − ŷ_k)(Ŷ_k^[i] − ŷ_k)ᵀ + R_k. (28)

Meanwhile, the cross covariance is given by

C_{x_k y_k} = ∑_{i=1}^{2D+1} W^[i] (X̂_k^[i] − µ_k^p)(Ŷ_k^[i] − ŷ_k)ᵀ. (29)

Update Step

Finally, the update is obtained by the following equations:

µ_k^e = µ_k^p + C_{x_k y_k} S_k⁻¹ (y_k − ŷ_k), (30)

C_k^e = C_k^p − C_{x_k y_k} S_k⁻¹ C_{x_k y_k}ᵀ. (31)

Remark 3. Note that GPADF propagates the full Gaussian density by exploiting specific properties of Gaussian process models [7]. GPUKF maps the sigma points through the Gaussian process models instead of the parametric functions, and the distributions are described by finite-sample approximations. In contrast to GPUKF, GPADF is consistent and moment preserving [7]. In addition, GPEKF requires linearization of the nonlinear prediction and observation models in order to propagate the state and observation, respectively [14]. GPPF needs to perform one Gaussian process mean and variance computation per particle and has a very high computational cost. For more details about these single sensor filtering methods, see [7,14], and so forth. Due to the linearization loss of GPEKF and the high computational cost of GPPF, we do not consider them and only introduce the multisensor estimation fusion with GPADF and GPUKF in this paper.

3.2. Distributed Estimation Fusion

In this subsection, we mainly present the distributed estimation fusion with GPADF. For the dynamic systems (1) and (2), assume that there is a prior on the initial state x_0 and that each local sensor sends its estimate to the fusion center.
Thus, the posterior distribution of the m-th local state estimate is a Gaussian distribution N(µ_k^{em}, C_k^{em}), where µ_k^{em} and C_k^{em} are calculated as follows:

µ_k^{em} = µ_k^{pm} + C_{x_k y_k^m}(C_{y_k^m})⁻¹ (y_k^m − µ_k^{ym}), (32)

C_k^{em} = C_k^{pm} − C_{x_k y_k^m}(C_{y_k^m})⁻¹ C_{x_k y_k^m}ᵀ, (33)
with

µ_k^{pm} = E[x_k | y_{1:k−1}^m],  C_k^{pm} = Cov(x_k | y_{1:k−1}^m),
µ_k^{ym} = E[y_k^m | y_{1:k−1}^m],  C_{y_k^m} = Cov(y_k^m | y_{1:k−1}^m),
C_{x_k y_k^m} = Cov(x_k, y_k^m | y_{1:k−1}^m). (34)

Then, the local posterior means and covariances are transmitted to the fusion center to yield the posterior distribution of the global state estimate. In general, the centralized estimation fusion method, which fuses the raw data directly, has better fusion performance than the distributed estimation fusion method based on processed data. However, in some cases, the distributed estimation fusion is equivalent to the centralized estimation fusion. In particular, in what follows, we propose a distributed estimation fusion method and prove that it is equivalent to the above centralized estimation fusion method when all cross terms C_{x_k y_k^m}, m = 1, . . . , N, have full column rank.

Proposition 1. Assume that all cross terms C_{x_k y_k^m}, m = 1, . . . , N, have full column rank. Then the distributed estimation fusion is equivalent to the centralized estimation fusion as follows:

µ_k^e = µ_k^p + ∑_{m=1}^N C_{x_k y_k}(C_{y_k})⁻¹(m) (µ_k^{ym} − µ_k^y(m)) + ∑_{m=1}^N C_{x_k y_k}(C_{y_k})⁻¹(m) C_{y_k^m} C_{x_k y_k^m}⁺ (µ_k^{em} − µ_k^{pm}), (35)

where

µ_k^y = (µ_k^y(1), . . . , µ_k^y(N)),  µ_k^y(m) = E(y_k^m | y_{1:k−1}),

and (C_{y_k})⁻¹ = [(C_{y_k})⁻¹(1), (C_{y_k})⁻¹(2), . . . , (C_{y_k})⁻¹(N)] is an appropriate partition of the matrix (C_{y_k})⁻¹ such that Equation (35) holds. The superscript "+" stands for the pseudoinverse.

Proof. According to Equation (15), the mean of the centralized fused state is given by

µ_k^e = µ_k^p + C_{x_k y_k}(C_{y_k})⁻¹ (y_k − µ_k^y) (36)
= µ_k^p − C_{x_k y_k}(C_{y_k})⁻¹ µ_k^y + C_{x_k y_k}(C_{y_k})⁻¹ y_k (37)
= µ_k^p − C_{x_k y_k}(C_{y_k})⁻¹ µ_k^y + C_{x_k y_k} ∑_{m=1}^N (C_{y_k})⁻¹(m) y_k^m. (38)

Based on the assumption that all C_{x_k y_k^m} have full column rank, we obtain

C_{x_k y_k^m}⁺ C_{x_k y_k^m} = I,  m = 1, . . . , N. (39)
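The left-inverse property in Equation (39), on which the proof rests, can be checked numerically; a small sketch with hypothetical dimensions and randomly generated entries:

```python
import numpy as np

# Numerical check of (39): for a full-column-rank matrix C (here 4 x 2), the
# Moore-Penrose pseudoinverse is a left inverse, C^+ C = I.
rng = np.random.default_rng(0)
C = rng.standard_normal((4, 2))        # full column rank with probability 1
left = np.linalg.pinv(C) @ C
print(np.allclose(left, np.eye(2)))    # True
```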
Thus, combining Equations (37) and (38) with Equation (39) yields

C_{x_k y_k}(C_{y_k})⁻¹ y_k = C_{x_k y_k} ∑_{m=1}^N (C_{y_k})⁻¹(m) y_k^m
= ∑_{m=1}^N C_{x_k y_k}(C_{y_k})⁻¹(m) C_{y_k^m} C_{x_k y_k^m}⁺ C_{x_k y_k^m} (C_{y_k^m})⁻¹ y_k^m. (40)

In order to obtain the centralized estimation mean, that is, to fuse the means of the local sensor state estimates at the fusion center, we use Equations (32) and (40) to eliminate y_k from Equation (36). From Equation (32), we have

C_{x_k y_k^m}(C_{y_k^m})⁻¹ y_k^m = µ_k^{em} − µ_k^{pm} + C_{x_k y_k^m}(C_{y_k^m})⁻¹ µ_k^{ym}. (41)

Thus, substituting Equation (41) into Equation (40) and recalling Equations (36)–(38), we have

µ_k^e = µ_k^p − C_{x_k y_k}(C_{y_k})⁻¹ µ_k^y + ∑_{m=1}^N C_{x_k y_k}(C_{y_k})⁻¹(m) C_{y_k^m} C_{x_k y_k^m}⁺ (µ_k^{em} − µ_k^{pm} + C_{x_k y_k^m}(C_{y_k^m})⁻¹ µ_k^{ym})
= µ_k^p + ∑_{m=1}^N C_{x_k y_k}(C_{y_k})⁻¹(m) (µ_k^{ym} − µ_k^y(m)) + ∑_{m=1}^N C_{x_k y_k}(C_{y_k})⁻¹(m) C_{y_k^m} C_{x_k y_k^m}⁺ (µ_k^{em} − µ_k^{pm}).

Therefore, we obtain the state mean of the distributed estimation fusion.

Remark 4. The proposed distributed estimation fusion formula is equivalent to the centralized estimation fusion; therefore, the two fusion methods have the same fusion performance. However, distributed estimation fusion is more desirable in real applications in terms of reliability, survivability, communication bandwidth and computational burden. From Equation (35), we can see that µ_k^p, C_k^p, µ_k^y, C_{y_k} and C_{x_k y_k} can be calculated in advance, while the terms µ_k^{pm}, µ_k^{ym}, µ_k^{em}, C_{y_k^m} and C_{x_k y_k^m} need to be transmitted by the local sensors in real time.

Remark 5. From the update step (30) of the centralized estimation fusion with GPUKF, the distributed estimation fusion method (35) is also suitable for the GPUKF. Since the detailed description and proof for the distributed GPUKF fusion are similar to Proposition 1, we omit them. Note that the distributed GPUKF fusion differs from the distributed GPADF fusion in the prediction step.

4. Numerical Examples

In this section, we assess the performance of our fusion methods with two different examples.

4.1. 1-D Example

We consider the nonlinear dynamic system with two sensor measurements as follows:

x_{k+1} = 0.5 x_k + 25 x_k / (1 + x_k²) + w_k,  w_k ∼ N(0, σ²), (42)

y_k^m = 5 sin(2 x_k + z^m) + v_k^m,  v_k^m ∼ N(0, 0.1²),  m = 1, 2, (43)
where $z^m$, $m = 1, 2$, are given by $z^1 = 0$ and $z^2 = \pi/4$, respectively. This nonlinear system is widely used (see References [7,10]) to measure the performance of nonlinear filters. We regard the first 200 samples as the training data and use the remaining 50 samples, called the testing data, to test the fusion methods. Assume that $x_{200}$ is Gaussian with prior mean $\mu_{200} = 0$ and a given prior variance $\sigma_{200}^2$. The estimation performance of the single sensor and of the multisensor fusion is evaluated by the average Mahalanobis distances, as also used in Reference [7]. The Mahalanobis distance, defined between the ground truth and the filtered mean, is as follows:
$$\text{Maha}_k = \sqrt{(x_k - \hat{x}_k)^T (C_k^e)^{-1} (x_k - \hat{x}_k)}, \qquad (44)$$
where $C_k^e$ is the estimated covariance. For the Mahalanobis distance, lower values indicate better performance [7]. Figures 2 and 3 show the average Mahalanobis distances of the single filters with GP-ADF and GP-UKF, and of the fusion methods, after 1000 independent runs. Figure 4 shows the average Mahalanobis distances for different system noises. According to the average Mahalanobis distances in Figures 2 and 3, we can see that the estimation performance of Sensor 2 is better than that of Sensor 1. Furthermore, the multisensor estimation fusion methods perform better than the single-sensor filters with GP-ADF and GP-UKF, respectively, which demonstrates the effectiveness and advantages of multisensor estimation fusion with Gaussian processes. In addition, GP-ADF fusion performs better than GP-UKF fusion for the system noise $\sigma^2 = 1$, and GP-UKF fusion performs better than GP-ADF fusion for the system noise $\sigma^2 = 2$; that is, GP-ADF fusion and GP-UKF fusion have comparative advantages for different system noises. This suggests that GP-ADF fusion may be the better choice for small system noise, while GP-UKF fusion may be worth considering for relatively large system noise.
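As a concrete reference, the training and testing data for the system (42) and (43) can be generated as follows. This is a sketch under assumed settings (random seed, fixed zero initial state), not the authors' simulation code:

```python
import numpy as np

def simulate(T=250, sigma2=1.0, seed=0):
    """Simulate the 1-D benchmark (42)-(43) with two sinusoidal sensors.

    x_{k+1} = 0.5 x_k + 25 x_k / (1 + x_k^2) + w_k,  w_k ~ N(0, sigma2)
    y_k^m   = 5 sin(2 x_k + z^m) + v_k^m,            v_k^m ~ N(0, 0.1^2)
    """
    rng = np.random.default_rng(seed)
    z = [0.0, np.pi / 4]                      # sensor phase offsets z^1, z^2
    x = np.zeros(T)                           # assumed initial state x_0 = 0
    y = np.zeros((T, 2))
    for k in range(T - 1):
        x[k + 1] = 0.5 * x[k] + 25 * x[k] / (1 + x[k] ** 2) \
                   + rng.normal(0, np.sqrt(sigma2))
    for m in range(2):
        y[:, m] = 5 * np.sin(2 * x + z[m]) + rng.normal(0, 0.1, size=T)
    return x, y

x, y = simulate()
x_train, y_train = x[:200], y[:200]   # first 200 samples: training data
x_test, y_test = x[200:], y[200:]     # remaining 50 samples: testing data
```

Note that the measurement map bounds every noiseless observation to $[-5, 5]$, which is what makes the filtering problem strongly nonlinear and multimodal.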
From Figure 4, we can see that the proposed fusion methods enjoy better performance for different system noises, which demonstrates the consistency of our fusion methods across system noises.

Figure 2. The average Mahalanobis distances of the single sensor and multisensor fusion with $\sigma^2 = 1$.
Figure 3. The average Mahalanobis distances of the single sensor and multisensor fusion with $\sigma^2 = 2$.

Figure 4. The average Mahalanobis distances for different system noises $\sigma^2$.

4.2. 2-D Nonlinear Dynamic System

In this subsection, we elaborate the performance of the fusion methods with GP-ADF and GP-UKF on an example with a constant turn motion model in target tracking. The root mean square error (RMSE) is used as the estimation performance measure. It is defined as follows:
$$\text{RMSE}_k = \sqrt{\frac{1}{M} \sum_{j=1}^{M} \left\| x_k^j - \hat{x}_k^j \right\|_2^2}, \qquad (45)$$
where $M$ is the total number of simulation runs, $x_k^j$ is the true simulated state and $\hat{x}_k^j$ is the estimated state at time $k$ in the $j$th simulation, $j = 1, \ldots, M$. The lower the RMSE, the better the performance of the corresponding method.
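Both performance measures are straightforward to compute. The following NumPy sketch (the array layout is our assumption, not the paper's code) implements the Mahalanobis distance of Equation (44) and the per-step RMSE of Equation (45):

```python
import numpy as np

def mahalanobis(x_true, x_est, C_est):
    """Mahalanobis distance (44) between ground truth and filtered mean."""
    d = np.atleast_1d(x_true - x_est)
    # solve C d' = d instead of forming the explicit inverse of C
    return float(np.sqrt(d @ np.linalg.solve(np.atleast_2d(C_est), d)))

def rmse_per_step(x_true, x_hat):
    """Per-step RMSE of Equation (45); inputs have shape (M, T, d)."""
    err2 = np.sum((x_true - x_hat) ** 2, axis=-1)  # ||x_k^j - xhat_k^j||_2^2
    return np.sqrt(np.mean(err2, axis=0))          # average over the M runs

# scalar sanity check: error 1 with estimated variance 4 gives distance 1/2
assert abs(mahalanobis(1.0, 0.0, 4.0) - 0.5) < 1e-12
```

The Mahalanobis distance rewards both a small error and an honest covariance: an overconfident filter (too small $C_k^e$) is penalized even when its point estimate is accurate, whereas the RMSE scores the point estimate alone.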
4.2.1. Experimental Setup

We consider the constant turn motion model with two sensors as follows:
$$x_{k+1} = F x_k + w_k, \quad w_k \sim \mathcal{N}(0, Q_k), \qquad (46)$$
$$y_k^m = \begin{bmatrix} \sqrt{(x_k(1) - z^m(1))^2 + (x_k(2) - z^m(2))^2} \\[4pt] \arctan \dfrac{x_k(2) - z^m(2)}{x_k(1) - z^m(1)} \end{bmatrix} + v_k^m, \quad v_k^m \sim \mathcal{N}(0, R_k), \quad m = 1, 2, \qquad (47)$$
where the state transition matrix $F$ is given by
$$F = \begin{bmatrix} \cos\Omega & \sin\Omega \\ -\sin\Omega & \cos\Omega \end{bmatrix}$$
with angular velocity $\Omega$, the covariance matrix of the transition noise satisfies
$$Q_k = \begin{bmatrix} 5 & 0 \\ 0 & 5 \end{bmatrix},$$
and the covariance matrix of the measurement noise is
$$R_k = \begin{bmatrix} 5 & 0 \\ 0 & (0.1\pi/180)^2 \end{bmatrix}.$$
The sensor positions are given by $z^1 = [-200, 200]^T$ and $z^2 = [200, 200]^T$, respectively. We use the transition function (46) and the measurement functions (47) to simulate a group of data. The values of the angular velocity are $\Omega = 2\pi/90$ rad/s and $\Omega = 0.5$ rad/s, respectively, which correspond to the small turn motion model and the large turn motion model. The first $\kappa$ samples are regarded as the training data and the remaining 50 samples, called the testing data, are used to test the fusion methods. Here $\{x_\tau, y_\tau^1, y_\tau^2\}_{\tau=-\kappa}^{-1}$ are the true simulated states and two-sensor observations used to train the models. Assume that $x_{-\kappa}$ is Gaussian with prior mean $\mu_{-\kappa} = [0, 0]^T$ and identity covariance $S_{-\kappa} = I$. The initial value of filtering for each sensor is set as the Cartesian-coordinate transformation of the polar-coordinate observation with a random perturbation in each dimension. We use the random perturbation $\mathcal{N}(10, 1)$ for the $\kappa = 300$ training-data case and $\mathcal{N}(20, 1)$ for the $\kappa = 30, 60$ training-data cases, respectively. The initial value of the fusion filtering is the mean of the initial values of all sensors. Based on the available observation information, the outputs of GP-ADF and GP-UKF for each single sensor are the mean and covariance terms $\mu_k^{p,m}$, $C_k^{p,m}$, $\mu_k^{e,m}$, $C_k^{e,m}$, $\mu_k^{y,m}$, $C_k^{y,m}$, $C_{x_k y_k^m}$.
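A minimal simulation of the setup above can be sketched as follows. The initial state, the random seed, and the use of the four-quadrant `arctan2` in place of the arctan ratio of Equation (47) are our assumptions, not the paper's choices:

```python
import numpy as np

def simulate_ct(T=50, omega=2 * np.pi / 90, seed=0):
    """Simulate the constant-turn model (46) with range-bearing sensors (47)."""
    rng = np.random.default_rng(seed)
    F = np.array([[np.cos(omega), np.sin(omega)],
                  [-np.sin(omega), np.cos(omega)]])
    Q = 5.0 * np.eye(2)
    R = np.diag([5.0, (0.1 * np.pi / 180) ** 2])   # range var 5, bearing std 0.1 deg
    sensors = [np.array([-200.0, 200.0]), np.array([200.0, 200.0])]
    x = np.zeros((T, 2))
    x[0] = [100.0, 100.0]                  # assumed initial state
    y = np.zeros((T, 2, 2))                # (time, sensor, [range, bearing])
    for k in range(T):
        if k > 0:
            x[k] = F @ x[k - 1] + rng.multivariate_normal(np.zeros(2), Q)
        for m, z in enumerate(sensors):
            d = x[k] - z
            # arctan2 is the numerically robust form of the bearing in (47)
            y[k, m] = [np.hypot(d[0], d[1]), np.arctan2(d[1], d[0])]
            y[k, m] += rng.multivariate_normal(np.zeros(2), R)
    return x, y
```

Because $F$ is a rotation, the noiseless state circles the origin at constant radius; the process noise $Q_k$ then perturbs that circle, which is exactly the regime in which the learned GP transition model is exercised.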
The distributed fusion methods, the RCC-CI algorithm [37] and the convex combination method [43], are used for comparison with our distributed fusion method. The RCC-CI algorithm and the convex combination method directly use the estimated mean and covariance matrix to fuse the estimated results. Our methods take full advantage of the outputs of the single-sensor Gaussian process filters, such as the cross term $C_{x_k y_k^m}$, which is easily obtained. Note that all of the fusion methods are based on the local estimates that are distilled from the measurements; thus, the comparison is fair with respect to the available observation information. The simulation results of the RCC-CI algorithm are based on the YALMIP toolbox [55]. We compare the fusion methods with the RCC-CI algorithm and the convex combination method by RMSE after $M = 1000$ independent simulation runs. Meanwhile, we also compare the computation time of the multisensor estimation fusion methods with that of the RCC-CI algorithm and the convex combination method. The ratio of full rank, defined as the percentage of runs in which the cross term $C_{x_k y_k^m}$ of the single-sensor filtering has full column rank at every time step after $M = 1000$ independent runs, is used to test the condition of equivalence. If the
ratio of full rank is less than 100%, the condition is broken. The simulations are done in Matlab (MathWorks, Inc., Natick, MA, USA) R2018b on a ThinkPad W540. Figure 5 depicts the ratios of full column rank in the single sensor case for testing data with $\kappa = 300$ training data and $\Omega = 2\pi/90$ and $0.5$ rad/s. The RMSEs for $\Omega = 2\pi/90$ and $0.5$ rad/s in the case of $\kappa = 300$ training data are depicted in Figures 6 and 7, respectively. The average computation times of these fusion methods for $\Omega = 2\pi/90$ and $0.5$ rad/s in the case of $\kappa = 300$ training data after 1000 independent simulation runs are depicted in Figures 8 and 9, respectively. The diamond line represents the RMSE of the centralized estimation fusion with GP-ADF (CMF-GPADF) and the star line represents the distributed estimation fusion with GP-ADF (DMF-GPADF). The circle line represents the RMSE of the centralized estimation fusion with GP-UKF (CMF-GPUKF) and the x line represents the distributed estimation fusion with GP-UKF (DMF-GPUKF). The upper-triangle line represents the RMSE of the RCC-CI fusion algorithm with GP-ADF (RCCCI-GPADF) and the lower-triangle line represents the RMSE of the RCC-CI fusion algorithm with GP-UKF (RCCCI-GPUKF). The square line represents the RMSE of the convex combination method, that is, covariance weighting, with GP-ADF (CW-GPADF) and the + line represents the RMSE of the convex combination method with GP-UKF (CW-GPUKF). A solid line is used for the single-sensor GP-ADF filter and a dash-dotted line for the single-sensor GP-UKF filter. Similarly, the ratios of full column rank in the single sensor case for testing data with $\kappa = 30, 60$ training data and $\Omega = 2\pi/90$ rad/s are depicted in Figures 10 and 11, respectively. The RMSEs for $\Omega = 2\pi/90$ rad/s in the case of $\kappa = 30, 60$ training data are depicted in Figures 12 and 13, respectively.
The average computation times of these fusion methods for testing data with $\kappa = 30, 60$ training data and $\Omega = 2\pi/90$ rad/s after 1000 independent simulation runs are depicted in Figures 14 and 15, respectively.

4.2.2. Experimental Analysis

We divide the experimental analysis into two cases, according to whether the equivalence condition is satisfied. The observations and analysis are as follows.

Case 1: the equivalence condition is satisfied. From Figure 5, we can see that the ratios of full rank are all equal to 100%. This means that the cross terms of the single-sensor filters all have full column rank in the $\kappa = 300$ training-data case, and thus the equivalence condition of centralized fusion and distributed fusion in Proposition 1 is satisfied. Meanwhile, from Figures 6 and 7, the RMSE of the distributed estimation fusion is the same as that of the centralized estimation fusion based on GP-ADF and GP-UKF, respectively. This demonstrates the equivalence between the centralized and distributed estimation fusion of Proposition 1 under the full-column-rank condition. From Figures 6 and 7, it can also be seen that the RMSE of the multisensor estimation fusion methods is lower than that of the single-sensor filtering, which shows that multisensor fusion improves the estimation accuracy. In addition, the RMSE of the multisensor estimation fusion methods is lower than that of the RCC-CI algorithm and the convex combination method, which implies the effectiveness of the fusion methods based on Gaussian processes. A possible reason is that our estimation fusion methods extract extra correlation information, whereas the RCC-CI algorithm and the convex combination method only use the local estimates with mean and covariance. Comparing Figure 6 with Figure 7, we find that our methods do well for both angular velocities, that is, for the small turn motion model and the large turn motion model alike. This confirms that our fusion methods are broadly applicable with good performance.
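The equivalence demonstrated above can also be checked numerically on a one-step, jointly Gaussian toy problem in which the local and global priors coincide (so the first sum in Equation (35) vanishes) and each sensor delivers a scalar measurement, making every cross term trivially of full column rank. This is an illustration of Proposition 1, not the paper's simulation:

```python
import numpy as np

rng = np.random.default_rng(1)
nx, dims = 3, [1, 1]                       # state dim; per-sensor measurement dims
n = nx + sum(dims)
A = rng.normal(size=(n, n))
S = A @ A.T + n * np.eye(n)                # joint covariance of (x, y^1, y^2)
mu = rng.normal(size=n)
Sxy = S[:nx, nx:]                          # C_{x y}, here 3x2
Syy = S[nx:, nx:]                          # C_y
y = rng.normal(size=n - nx)                # an arbitrary stacked measurement

# centralized update, Equation (36)
mu_c = mu[:nx] + Sxy @ np.linalg.solve(Syy, y - mu[nx:])

# local updates per sensor, then the distributed combination (35)
Kinv = np.linalg.inv(Syy)                  # columns partitioned sensor by sensor
mu_d = mu[:nx].copy()
off = 0
for m, p in enumerate(dims):
    idx = slice(nx + off, nx + off + p)
    Sxm = S[:nx, idx]                      # C_{x y^m}, full column rank (3x1)
    Sym = S[idx, idx]                      # C_{y^m}
    mu_em = mu[:nx] + Sxm @ np.linalg.solve(Sym, y[off:off + p] - mu[idx])
    mu_d += Sxy @ Kinv[:, off:off + p] @ Sym @ np.linalg.pinv(Sxm) @ (mu_em - mu[:nx])
    off += p

assert np.allclose(mu_c, mu_d)             # distributed == centralized
```

In the paper's dynamic setting, the local priors are conditioned on each sensor's own history, so the first sum in Equation (35) does not vanish in general; the toy above isolates only the pseudoinverse mechanism of the proof.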
From Figures 8 and 9, we find that the computation time of the distributed estimation fusion is less than that of the centralized estimation fusion, which demonstrates the superiority of the distributed estimation fusion given the same fusion performance. The computation time of the proposed fusion methods is much less than that of the RCC-CI algorithm; a likely reason is that the RCC-CI algorithm has to solve an optimization problem. The convex combination method takes the least computation time, since it directly uses the covariance-weighted combination.

Figure 5. The ratios of full column rank of the cross term $C_{x_k y_k^m}$ in the single sensor case for testing data with $\kappa = 300$ training data and $\Omega = 2\pi/90$ rad/s and $0.5$ rad/s.

Figure 6. The RMSE for testing data with $\kappa = 300$ training data and $\Omega = 2\pi/90$ rad/s.
Figure 7. The RMSE for testing data with $\kappa = 300$ training data and $\Omega = 0.5$ rad/s.

Figure 8. Average computation time of the three fusion methods for testing data with $\kappa = 300$ training data and $\Omega = 2\pi/90$ rad/s.
Figure 9. Average computation time of the three fusion methods for testing data with $\kappa = 300$ training data and $\Omega = 0.5$ rad/s.

Case 2: the equivalence condition is not satisfied. From Figures 10 and 11, it can be seen that the ratios of full rank are both less than 100% for the single-sensor GP-UKF case with $\kappa = 30$ and $\kappa = 60$ training data. Thus, the equivalence between the centralized and distributed estimation fusion with GP-UKF is broken in Figures 12 and 13, respectively. The reason may be that the Gaussian process models are relatively inaccurate with less training data, which can be seen by comparing Figures 6, 12 and 13 for the same methods. At the same time, the finite-sample approximation of GP-UKF depends strongly on the Gaussian process models, and the cross terms of GP-UKF are computed as sums of rank-one matrices. However, the equivalence is still satisfied for the GP-ADF fusion, which implies that GP-ADF is more stable than GP-UKF, as also noted in Reference [7]. We can also see that the performance of GP-UKF fusion is better than that of GP-ADF fusion with $\kappa = 300$ training data (Figure 6) and slightly worse with $\kappa = 30, 60$ training data (Figures 12 and 13). Meanwhile, from Figures 8, 14 and 15, the average computation time of GP-UKF fusion is less than that of GP-ADF fusion with $\kappa = 300$ training data and greater with $\kappa = 30, 60$ training data. This suggests that GP-UKF fusion is suitable when enough training data are available, while GP-ADF fusion does well with a small number of training data for the turn motion systems. In a word, distributed estimation fusion with GP-ADF is more stable and performs better with a small number of training data; if enough training data are available, GP-UKF fusion may be the better choice.
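The rank-one structure mentioned above can be illustrated with a small sketch (purely illustrative, not the GP-UKF implementation): a cross term assembled as a weighted sum of sigma-point outer products loses column rank as soon as the propagated measurement deviations become collinear, e.g. under a degenerate learned model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sigma, nx, ny = 5, 2, 2
w = np.full(n_sigma, 1.0 / n_sigma)          # sigma-point weights (assumed equal)
dx = rng.normal(size=(n_sigma, nx))          # state deviations X_i - mu_x
dy_ok = rng.normal(size=(n_sigma, ny))       # generic measurement deviations
dy_bad = np.outer(rng.normal(size=n_sigma), [1.0, 2.0])  # collinear deviations

def cross_term(dx, dy, w):
    """Cross covariance as a weighted sum of rank-one outer products."""
    return sum(wi * np.outer(di, ei) for wi, di, ei in zip(w, dx, dy))

C_ok = cross_term(dx, dy_ok, w)
C_bad = cross_term(dx, dy_bad, w)
assert np.linalg.matrix_rank(C_ok) == 2      # generic case: full column rank
assert np.linalg.matrix_rank(C_bad) == 1     # collinear case: rank deficient
```

GP-ADF, by contrast, computes its moments analytically rather than from a finite sigma-point set, which is consistent with its observed stability here.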
Figure 10. The ratios of full column rank of the cross term $C_{x_k y_k^m}$ in the single sensor case for testing data with $\kappa = 30$ training data and $\Omega = 2\pi/90$ rad/s.

Figure 11. The ratios of full column rank of the cross term $C_{x_k y_k^m}$ in the single sensor case for testing data with $\kappa = 60$ training data and $\Omega = 2\pi/90$ rad/s.
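The ratio-of-full-rank diagnostic reported in Figures 5, 10 and 11 amounts to checking the column rank of each recorded cross term. A small sketch (the function name is ours):

```python
import numpy as np

def full_rank_ratio(cross_terms):
    """Fraction of cross terms C_{x_k y_k^m} that have full column rank."""
    full = [np.linalg.matrix_rank(C) == C.shape[1] for C in cross_terms]
    return sum(full) / len(full)

full_col = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])   # rank 2 = #columns
deficient = np.array([[1.0, 2.0], [2.0, 4.0], [0.0, 0.0]])  # rank 1 < #columns
assert full_rank_ratio([full_col, deficient]) == 0.5
```

In practice the numerical tolerance of `matrix_rank` matters: a cross term can be full rank in exact arithmetic yet effectively deficient, so a tolerance matched to the filter's noise floor gives a more honest ratio.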
Figure 12. The RMSE for testing data with $\kappa = 30$ training data and $\Omega = 2\pi/90$ rad/s.

Figure 13. The RMSE for testing data with $\kappa = 60$ training data and $\Omega = 2\pi/90$ rad/s.
Figure 14. Average computation time of the three fusion methods for testing data with $\kappa = 30$ training data and $\Omega = 2\pi/90$ rad/s.

Figure 15. Average computation time of the three fusion methods for testing data with $\kappa = 60$ training data and $\Omega = 2\pi/90$ rad/s.

5. Conclusions

In the context of this paper, this property matters when the real system does not closely follow idealized models or when a parametric model cannot easily be determined for nonlinear systems. A nonparametric method, the Gaussian process, is introduced to learn models from training data, since it takes both model uncertainty and sensor measurement noise into account. In order to estimate the state of multisensor nonlinear dynamic systems, we have used Gaussian process models as the priors of the transition and measurement functions of the dynamic system. The transition function and the measurement function have then been trained with Gaussian processes, respectively. Based on
the Gaussian process models for all available local sensor measurements, we have developed two fusion methods, centralized estimation fusion and distributed estimation fusion, with GP-ADF and GP-UKF, respectively. Taking full advantage of the nature of the Gaussian process, the equivalence between centralized estimation fusion and distributed estimation fusion has been derived under mild conditions. Simulations show that the equivalence holds under the given conditions and that the multisensor estimation fusion performs better than the single-sensor filters. Compared with the RCC-CI algorithm, the multisensor estimation fusion methods not only have higher accuracy, but also require less computation time. The estimation performance of the multisensor estimation fusion methods is also better than that of the convex combination method. Future work may involve state initialization, Gaussian process latent variable models without ground-truth states, multiple-model fusion with different Gaussian processes for maneuvering target tracking, and validating the methods with real sensor data.

Author Contributions: Conceptualization, Y.L. and J.X.; Gaussian process methodology, Z.W. and Y.L.; fusion methodology, X.S. and Y.L.; data curation, J.X. and Y.L.; software, Y.L. and J.X.; validation, Y.L. and Z.W.; formal analysis, Y.L.; writing, original draft preparation, J.X. and Y.L.; writing, review and editing, Y.L.; supervision, X.S.; project administration, X.S.; funding acquisition, X.S.

Funding: This work was supported in part by the NSFC under Grant, the PCSIRT under Grant PCSIRT16R53 and the Fundamental Research Funds for the Central Universities under Grant No. yjsy140.

Acknowledgments: We would like to thank Yunmin Zhu for his insightful comments and helpful suggestions that greatly improved the quality of this paper.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Julier, S.J.; Uhlmann, J.K.
Unscented filtering and nonlinear estimation. Proc. IEEE 2004, 92. [CrossRef]
2. Lan, J.; Li, X.R. Multiple conversions of measurements for nonlinear estimation. IEEE Trans. Signal Process. 2017, 65. [CrossRef]
3. Bar-Shalom, Y.; Li, X.R.; Kirubarajan, T. Estimation with Applications to Tracking and Navigation: Theory, Algorithms and Software; Wiley: New York, NY, USA.
4. Thrun, S.; Burgard, W.; Fox, D. Probabilistic Robotics; MIT Press: Cambridge, MA, USA.
5. Simon, D. Optimal State Estimation: Kalman, H-infinity, and Nonlinear Approaches; Wiley-Interscience: New York, NY, USA.
6. Arulampalam, M.S.; Maskell, S.; Gordon, N.; Clapp, T. A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Trans. Signal Process. 2002, 50. [CrossRef]
7. Deisenroth, M.P.; Huber, M.F.; Hanebeck, U.D. Analytic moment-based Gaussian process filtering. In Proceedings of the International Conference on Machine Learning, Montreal, QC, Canada, June 2009.
8. Rasmussen, C.E.; Williams, C.K.I. Gaussian Processes for Machine Learning; MIT Press: Cambridge, MA, USA.
9. Huber, M.F. Nonlinear Gaussian Filtering: Theory, Algorithms, and Applications. Ph.D. Thesis, Karlsruhe Institute of Technology, Karlsruhe, Germany.
10. Deisenroth, M.P.; Turner, R.D.; Huber, M.F.; Hanebeck, U.D.; Rasmussen, C.E. Robust filtering and smoothing with Gaussian processes. IEEE Trans. Autom. Control 2012, 57. [CrossRef]
11. Jacobs, M.A.; DeLaurentis, D. Distributed Kalman filter with a Gaussian process for machine learning. In Proceedings of the 2018 IEEE Aerospace Conference, Big Sky, MT, USA, 3-10 March 2018.
12. Guo, Y.; Li, Y.; Tharmarasa, R.; Kirubarajan, T.; Efe, M.; Sarikaya, B. GP-PDA filter for extended target tracking with measurement origin uncertainty. IEEE Trans. Aerosp. Electron. Syst. 2019, 55. [CrossRef]
13. Preuss, R.; Von Toussaint, U. Global optimization employing Gaussian process-based Bayesian surrogates. Entropy 2018, 20, 201. [CrossRef]
14. Ko, J.; Fox, D.
GP-BayesFilters: Bayesian filtering using Gaussian process prediction and observation models. Auton. Robots 2009, 27. [CrossRef]
15. Lawrence, N.D. Gaussian process latent variable models for visualisation of high dimensional data. In Proceedings of the 16th International Conference on Neural Information Processing Systems, Whistler, BC, Canada, 9-11 December 2003.
16. Wang, J.M.; Fleet, D.J.; Hertzmann, A. Gaussian process dynamical models. In Proceedings of the Advances in Neural Information Processing Systems, Vancouver, BC, Canada, 5-8 December 2005.
17. Ko, J.; Fox, D. Learning GP-BayesFilters via Gaussian process latent variable models. Auton. Robots 2011, 30. [CrossRef]
18. Csató, L.; Opper, M. Sparse online Gaussian processes. Neural Comput. 2002, 14. [CrossRef]
19. Snelson, E.; Ghahramani, Z. Sparse Gaussian processes using pseudo-inputs. In Proceedings of the 18th International Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 5-8 December 2005.
20. Seeger, M.; Williams, C.K.I.; Lawrence, N.D. Fast forward selection to speed up sparse Gaussian process regression. In Proceedings of the Workshop on AI and Statistics 9, Key West, FL, USA, 3-6 January.
21. Smola, A.J.; Bartlett, P. Sparse greedy Gaussian process regression. In Proceedings of the Advances in Neural Information Processing Systems, Denver, CO, USA, 27 November-2 December.
22. Quiñonero-Candela, J.; Rasmussen, C. A unifying view of sparse approximate Gaussian process regression. J. Mach. Learn. Res. 2005, 6.
23. Velychko, D.; Knopp, B.; Endres, D. Making the coupled Gaussian process dynamical model modular and scalable with variational approximations. Entropy 2018, 20, 724. [CrossRef]
24. Yan, L.; Duan, X.; Liu, B.; Xu, J. Bayesian optimization based on K-optimality. Entropy 2018, 20, 594. [CrossRef]
25. Ko, J.; Klein, D.J.; Fox, D.; Hähnel, D. Gaussian processes and reinforcement learning for identification and control of an autonomous blimp. In Proceedings of the IEEE International Conference on Robotics and Automation, Roma, Italy, April 2007.
26. Ko, J.; Klein, D.J.; Fox, D.; Haehnel, D.
GP-UKF: Unscented Kalman filters with Gaussian process prediction and observation models. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, CA, USA, 29 October-2 November 2007.
27. Ferris, B.; Hähnel, D.; Fox, D. Gaussian processes for signal strength-based location estimation. In Proceedings of Robotics: Science and Systems, Philadelphia, PA, USA, August 2006.
28. Maybeck, P.S. Stochastic Models, Estimation, and Control; Academic Press, Inc.: New York, NY, USA.
29. Boyen, X.; Koller, D. Tractable inference for complex stochastic processes. In Proceedings of the 14th Conference on Uncertainty in Artificial Intelligence, Madison, WI, USA, July 1998.
30. Opper, M. A Bayesian approach to online learning. In On-Line Learning in Neural Networks; Cambridge University Press: Cambridge, UK, 1999.
31. Liggins, M.E.; Hall, D.L.; Llinas, J. Handbook of Multisensor Data Fusion: Theory and Practice; CRC Press: Boca Raton, FL, USA.
32. Bar-Shalom, Y.; Willett, P.K.; Tian, X. Tracking and Data Fusion: A Handbook of Algorithms; YBS Publishing: Storrs, CT, USA.
33. Shen, X.; Zhu, Y.; Song, E.; Luo, Y. Optimal centralized update with multiple local out-of-sequence measurements. IEEE Trans. Signal Process. 2009, 57. [CrossRef]
34. Wu, D.; Zhou, J.; Hu, A. A new approximate algorithm for the Chebyshev center. Automatica 2013, 49. [CrossRef]
35. Li, M.; Zhang, X. Information fusion in a multi-source incomplete information system based on information entropy. Entropy 2017, 19, 570. [CrossRef]
36. Gao, X.; Chen, J.; Tao, D.; Liu, W. Multi-sensor centralized fusion without measurement noise covariance by variational Bayesian approximation. IEEE Trans. Aerosp. Electron. Syst. 2011, 47. [CrossRef]
37. Wang, Y.; Li, X.R. Distributed estimation fusion with unavailable cross-correlation. IEEE Trans. Aerosp. Electron. Syst. 2012, 48. [CrossRef]
38. Chong, C.Y.; Chang, K.; Mori, S. Distributed tracking in distributed sensor networks.
In Proceedings of the American Control Conference, Seattle, WA, USA, June 1986.
39. Zhu, Y.; You, Z.; Zhao, J.; Zhang, K.; Li, X.R. The optimality for the distributed Kalman filtering fusion with feedback. Automatica 2001, 37. [CrossRef]
40. Li, X.R.; Zhu, Y.; Jie, W.; Han, C. Optimal linear estimation fusion, Part I: Unified fusion rules. IEEE Trans. Inf. Theory 2003, 49. [CrossRef]
41. Duan, Z.; Li, X.R. Lossless linear transformation of sensor data for distributed estimation fusion. IEEE Trans. Signal Process. 2010, 59. [CrossRef]
42. Shen, X.J.; Luo, Y.T.; Zhu, Y.M.; Song, E.B. Globally optimal distributed Kalman filtering fusion. Sci. China Inf. Sci. 2012, 55. [CrossRef]
43. Chong, C.Y.; Mori, S. Convex combination and covariance intersection algorithms in distributed fusion. In Proceedings of the 4th International Conference on Information Fusion, Montreal, QC, Canada, 7-10 August.
44. Sun, S.; Deng, Z.L. Multi-sensor optimal information fusion Kalman filter. Automatica 2004, 40. [CrossRef]
45. Song, E.; Zhu, Y.; Zhou, J.; You, Z. Optimal Kalman filtering fusion with cross-correlated sensor noises. Automatica 2007, 43. [CrossRef]
46. Hu, C.; Lin, H.; Li, Z.; He, B.; Liu, G. Kullback-Leibler divergence based distributed cubature Kalman filter and its application in cooperative space object tracking. Entropy 2018, 20, 116. [CrossRef]
47. Bakr, M.A.; Lee, S. Distributed multisensor data fusion under unknown correlation and data inconsistency. Sensors 2017, 17. [CrossRef]
48. Xie, J.; Shen, X.; Wang, Z.; Zhu, Y. Gaussian process fusion for multisensor nonlinear dynamic systems. In Proceedings of the 37th Chinese Control Conference (CCC), Wuhan, China, July 2018.
49. Osborne, M. Bayesian Gaussian Processes for Sequential Prediction, Optimisation and Quadrature. Ph.D. Thesis, University of Oxford, Oxford, UK.
50. Nguyen-Tuong, D.; Seeger, M.; Peters, J. Local Gaussian process regression for real time online model learning. In Proceedings of the International Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 8-10 December.
51. Deisenroth, M. Efficient Reinforcement Learning Using Gaussian Processes; KIT Scientific Publishing: Karlsruhe, Germany.
52. Ghahramani, Z.; Rasmussen, C.E.
Bayesian Monte Carlo. Adv. Neural Inf. Process. Syst. 2003, 15.
53. Candela, J.Q.; Girard, A.; Larsen, J.; Rasmussen, C.E. Propagation of uncertainty in Bayesian kernel models: application to multiple-step ahead forecasting. IEEE Int. Conf. Acoust. Speech Signal Process. 2003, 2.
54. Zhu, Y. Multisensor Decision and Estimation Fusion; Kluwer Academic Publishers: Boston, MA, USA.
55. Löfberg, J. YALMIP: A toolbox for modeling and optimization in MATLAB. In Proceedings of the CACSD Conference, Taipei, Taiwan, 2-4 September.

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
More information東吳大學
律 律 論 論 療 行 The Study on Medical Practice and Coercion 林 年 律 律 論 論 療 行 The Study on Medical Practice and Coercion 林 年 i 讀 臨 療 留 館 讀 臨 律 六 礪 讀 不 冷 療 臨 年 裡 歷 練 禮 更 老 林 了 更 臨 不 吝 麗 老 劉 老 論 諸 見 了 年 金 歷 了 年
More information附件1：
附 件 1: 全 国 优 秀 教 育 硕 士 专 业 学 位 论 文 推 荐 表 单 位 名 称 : 西 南 大 学 论 文 题 目 填 表 日 期 :2014 年 4 月 30 日 数 学 小 组 合 作 学 习 的 课 堂 管 理 攻 硕 期 间 及 获 得 硕 士 学 位 后 一 年 内 获 得 与 硕 士 学 位 论 文 有 关 的 成 果 作 者 姓 名 论 文 答 辩 日 期 学 科 专
More informationShanghai International Studies University THE STUDY AND PRACTICE OF SITUATIONAL LANGUAGE TEACHING OF ADVERB AT BEGINNING AND INTERMEDIATE LEVEL A Thes
上 海 外 国 语 大 学 硕 士 学 位 论 文 对 外 汉 语 初 中 级 副 词 情 境 教 学 研 究 与 实 践 院 系 : 国 际 文 化 交 流 学 院 学 科 专 业 : 汉 语 国 际 教 育 姓 名 : 顾 妍 指 导 教 师 : 缪 俊 2016 年 5 月 Shanghai International Studies University THE STUDY AND PRACTICE
More informationMicrosoft Word  KSAE06S0262.doc
Stereo Vision based Forward Collision Warning and Avoidance System Yunhee LeeByungjoo KimHogi JungPaljoo Yoon Central R&D Center, MANDO Corporation, 4135, GomaeRi, GibeungEub, YounginSi, KyonggiDo,
More informationUniversity of Science and Technology of China A dissertation for master s degree Research of elearning style for public servants under the context of
中 国 科 学 技 术 大 学 硕 士 学 位 论 文 新 媒 体 环 境 下 公 务 员 在 线 培 训 模 式 研 究 作 者 姓 名 : 学 科 专 业 : 导 师 姓 名 : 完 成 时 间 : 潘 琳 数 字 媒 体 周 荣 庭 教 授 二 一 二 年 五 月 University of Science and Technology of China A dissertation for
More informationSTEAM STEAM STEAM ( ) STEAM STEAM ( ) 1977 [13] [10] STEM STEM 2. [11] [14] ( )STEAM [15] [16] STEAM [12] ( ) STEAM STEAM [17] STEAM STEAM STEA
2017 8 ( 292 ) DOI:10.13811/j.cnki.eer.2017.08.017 STEAM 1 1 2 3 4 (1. 130117; 2. + 130117; 3. 130022;4. 518100) [ ] 21 STEAM STEAM STEAM STEAM STEAM STEAM [ ] STEAM ; ; [ ] G434 [ ] A [ ] (1970 ) Email:ddzhou@nenu.edu.cn
More informationWTO
10384 200015128 UDC Exploration on Design of CIB s Human Resources System in the New Stage (MBA) 2004 2004 2 3 2004 3 2 0 0 4 2 WTO Abstract Abstract With the rapid development of the high and new technique
More informationMicrosoft Word  24.doc
水 陸 畢 陳 晚 明 飲 食 風 尚 初 探 蕭 慧 媛 桃 園 創 新 技 術 學 院 觀 光 與 休 閒 事 業 管 理 系 摘 要 飲 食 是 人 類 維 持 與 發 展 生 命 的 基 礎 之 一, 飲 食 風 尚 會 隨 著 社 會 地 位 物 質 條 件 以 及 人 為 因 素 轉 移, 不 同 階 層 的 飲 食 方 式, 往 往 標 誌 著 他 們 的 社 會 身 分, 甚 至 反
More information致 谢 本 论 文 能 得 以 完 成, 首 先 要 感 谢 我 的 导 师 胡 曙 中 教 授 正 是 他 的 悉 心 指 导 和 关 怀 下, 我 才 能 够 最 终 选 定 了 研 究 方 向, 确 定 了 论 文 题 目, 并 逐 步 深 化 了 对 研 究 课 题 的 认 识, 从 而 一
中 美 国 际 新 闻 的 叙 事 学 比 较 分 析 以 英 伊 水 兵 事 件 为 例 A Comparative Analysis on Narration of SinoUS International News Case Study:UKIran Marine Issue 姓 名 : 李 英 专 业 : 新 闻 学 学 号 : 05390 指 导 老 师 : 胡 曙 中 教 授 上 海
More informationMicrosoft Word  p11.doc
() 111 ()Classification Analysis( ) m() p.d.f prior (decision) (loss function) Bayes Risk for any decision d( ) posterior risk posterior risk Posterior prob. j (uniform prior) where Mahalanobis Distance(Mdistance)
More informationMcGrawHill School Education Group Physics : Principles and Problems G S 24
2017 4 357 GLOBAL EDUCATION Vol. 46 No4, 2017 * 1 / 400715 / 400715 / 400715 1 20102020 2 * mjzxzd1401 2012 AHA120008 1 23 3 47 8 9 McGrawHill School Education Group Physics : Principles and Problems
More informationAbstract Since 1980 s, the CocaCola came into China and developed rapidly. From 1985 to now, the numbers of bottlers has increased from 3 to 23, and
Abstract Since 1980 s, the CocaCola came into China and developed rapidly. From 1985 to now, the numbers of bottlers has increased from 3 to 23, and increases ulteriorly. When the CocaCola company came
More informationOutline Speech Signals Processing DualTone Multifrequency Signal Detection 云南大学滇池学院课程 : 数字信号处理 Applications of Digital Signal Processing 2
CHAPTER 10 Applications of Digital Signal Processing Wang Weilian wlwang@ynu.edu.cn School of Information Science and Technology Yunnan University Outline Speech Signals Processing DualTone Multifrequency
More information國立中山大學學位論文典藏.PDF
國 立 中 山 大 學 企 業 管 理 學 系 碩 士 論 文 以 系 統 動 力 學 建 構 美 食 餐 廳 異 國 麵 坊 之 管 理 飛 行 模 擬 器 研 究 生 : 簡 蓮 因 撰 指 導 教 授 : 楊 碩 英 博 士 中 華 民 國 九 十 七 年 七 月 致 謝 詞 寫 作 論 文 的 過 程 是 一 段 充 滿 艱 辛 與 淚 水 感 動 與 窩 心 的 歷 程, 感 謝 這 一
More information我国原奶及乳制品安全生产和质量安全管理研究
密 级 论 文 编 号 中 国 农 业 科 学 院 硕 士 学 位 论 文 我 国 原 奶 及 乳 制 品 质 量 安 全 管 理 研 究 Study on Quality and Safety Management of Raw Milk and Dairy Products in China 申 请 人 : 段 成 立 指 导 教 师 : 叶 志 华 研 究 员 张 蕙 杰 研 究 员 申 请
More informationAbstract Today, the structures of domestic bus industry have been changed greatly. Many manufacturers enter into the field because of its lower thresh
SWOT 5 Abstract Today, the structures of domestic bus industry have been changed greatly. Many manufacturers enter into the field because of its lower threshold. All of these lead to aggravate drastically
More information% % % % % % ~
10015558 2015 03002116 2010 C91 A 2014 5 2010 N. W. Journal of Ethnology 2015 3 86 2015.No.3 Total No.86 2010 2010 2181.58 882.99 40.47% 1298.59 59.53% 2013 2232.78 847.29 37.95% 1385.49 62.05% 1990
More informationMicrosoft PowerPoint  ATF2015.ppt [相容模式]
Improving the Video Totalized Method of Stopwatch Calibration Samuel C.K. Ko, Aaron Y.K. Yan and Henry C.K. Ma The Government of Hong Kong Special Administrative Region (SCL) 31 Oct 2015 1 Contents Introduction
More information[ ],,,,,,,,,,,,,,, [ ] ; ; ; [ ]F120 [ ]A [ ] X(2018) , :,,,, ( ),,,,,,, [ ] [ ],, 5
2018 4 [ ] [ ] ; ; ; [ ]F120 [ ]A [ ]1006480X(2018)04000514 : ( ) [ ] 20180302 [ ] :jinpei8859@163.com 5 : ( ) ( ) : ( ) ( ) (1983) : 6 2018 4 ; (1983) : : ; : ; : ; ; : W G W G W G 7 :?! : ( ) (
More information5 1 linear 5 circular 2 6 2003 10 3000 2 400 ~ 500 4500 7 2013 3500 400 ~ 500 8 3 1900 1. 2 9 1 65
2014 2 43 EXAMINATIONS RESEARCH No. 2 2014 General No. 43 李 迅 辉 随 着 新 课 程 改 革 的 不 断 深 入, 教 学 理 念 逐 步 更 新, 学 生 的 英 语 水 平 也 在 逐 渐 提 高, 但 沿 用 多 年 的 高 考 英 语 书 面 表 达 的 评 分 标 准 并 没 有 与 时 俱 进, 已 经 不 能 完 全 适 应
More information(baking powder) 1 ( ) ( ) 1 10g g (two level design, Doptimal) 32 1/2 fraction Two Level Fractional Factorial Design DOptimal D
( ) 4 1 1 1 145 1 110 1 (baking powder) 1 ( ) ( ) 1 10g 1 1 2.5g 1 1 1 1 60 10 (two level design, Doptimal) 32 1/2 fraction Two Level Fractional Factorial Design DOptimal Design 1. 60 120 2. 3. 40 10
More informationMonetary Policy Regime Shifts under the Zero Lower Bound: An Application of a Stochastic Rational Expectations Equilibrium to a Markov Switching DSGE
Procedure of Calculating Policy Functions 1 Motivation Previous Works 2 Advantages and Summary 3 Model NK Model with MS Taylor Rule under ZLB Expectations Function Static OnePeriod Problem of a MSDSGE
More informationMicrosoft PowerPoint  STU_EC_Ch08.ppt
樹德科技大學資訊工程系 Chapter 8: Counters ShiHuang Chen Fall 2010 1 Outline Asynchronous Counter Operation Synchronous Counter Operation Up/Down Synchronous Counters Design of Synchronous Counters Cascaded Counters
More information<4D6963726F736F667420576F7264202D203033BDD7A16DA576B04FA145A4ADABD2A5BBACF6A16EADBAB6C0ABD2A4A7B74EB8712E646F63>
論 史 記 五 帝 本 紀 首 黃 帝 之 意 義 林 立 仁 明 志 科 技 大 學 通 識 教 育 中 心 副 教 授 摘 要 太 史 公 司 馬 遷 承 父 著 史 遺 志, 並 以 身 膺 五 百 年 大 運, 上 繼 孔 子 春 秋 之 史 學 文 化 道 統 為 其 職 志, 著 史 記 欲 達 究 天 人 之 際, 通 古 今 之 變, 成 一 家 之 言 之 境 界 然 史 記 百
More information: (2012) Control Theory & Applications Vol. 29 No. 1 Jan DezertSmarandache 1,2, 2,3, 2 (1., ; 2., ;
29 1 2012 1 : 1000 8152(2012)01 0079 06 Control Theory & Applications Vol. 29 No. 1 Jan. 2012 DezertSmarandache 1,2, 2,3, 2 (1., 102249; 2., 264001; 3., 410073) :, DezertSmarandache (DSmT),. DSmT 3 :.,,
More information1 * 1 *
1 * 1 * taka@unii.ac.jp 1992, p. 233 2013, p. 78 2. 1. 2014 1992, p. 233 1995, p. 134 2. 2. 3. 1. 2014 2011, 118 3. 2. Psathas 1995, p. 12 seen but unnoticed B B Psathas 1995, p. 23 2004 2006 2004 4 ah
More informationBC04 Module_antenna__ doc
http://www.infobluetooth.com TEL:+862368798999 Fax: +862368889515 Page 1 of 10 http://www.infobluetooth.com TEL:+862368798999 Fax: +862368889515 Page 2 of 10 http://www.infobluetooth.com TEL:+862368798999
More informationMicrosoft Word doc
中 考 英 语 科 考 试 标 准 及 试 卷 结 构 技 术 指 标 构 想 1 王 后 雄 童 祥 林 ( 华 中 师 范 大 学 考 试 研 究 院, 武 汉,430079, 湖 北 ) 提 要 : 本 文 从 结 构 模 式 内 容 要 素 能 力 要 素 题 型 要 素 难 度 要 素 分 数 要 素 时 限 要 素 等 方 面 细 致 分 析 了 中 考 英 语 科 试 卷 结 构 的
More information國立中山大學學位論文典藏.PDF
93 2 () ()A Study of Virtual Project Team's Knowledge Integration and Effectiveness  A Case Study of ERP Implementation N924020024 () () ()Yu ()YuanHang () ()Ho,ChinFu () ()Virtual Team,Knowledge Integration,Project
More information134Cover1
106 13 4 301323 302 2009 2007 2009 2007 Dewey 1960 1970 1964 1967 303 1994 2008 2007 2008 2001 2003 2006 2007 2007 7 2013 2007 2009 2009 2007 2009 2012 Kendall 1990 Jacoby 1996 Sigmon 1996 1 2 3 20062000
More information报 告 1: 郑 斌 教 授, 美 国 俄 克 拉 荷 马 大 学 医 学 图 像 特 征 分 析 与 癌 症 风 险 评 估 方 法 摘 要 : 准 确 的 评 估 癌 症 近 期 发 病 风 险 和 预 后 或 者 治 疗 效 果 是 发 展 和 建 立 精 准 医 学 的 一 个 重 要 前
东 北 大 学 中 荷 生 物 医 学 与 信 息 工 程 学 院 2016 年 度 生 物 医 学 与 信 息 工 程 论 坛 会 议 时 间 2016 年 6 月 8 日, 星 期 三,9:30 至 16:00 会 议 地 址 会 议 网 址 主 办 单 位 东 北 大 学 浑 南 校 区 沈 阳 市 浑 南 区 创 新 路 195 号 生 命 科 学 大 楼 B 座 619 报 告 厅 http://www.bmie.neu.edu.cn
More informationuntitled
數 Quadratic Equations 數 Contents 錄 : Quadratic Equations Distinction between identities and equations. Linear equation in one unknown 3 ways to solve quadratic equations 3 Equations transformed to quadratic
More informationMicrosoft Word  ED774.docx
journal.newcenturyscience.com/index.php/gjanp Global Journal of Advanced Nursing Practice,214,Vol.1,No.1 The practicality of an improved method of intravenous infusion exhaust specialized in operating
More informationuntitled
LBS Research and Application of Location Information Management Technology in LBS TP319 10290 UDC LBS Research and Application of Location Information Management Technology in LBS , LBS PDA LBS
More information國立中山大學學位論文典藏.PDF
I II III The Study of Factors to the Failure or Success of Applying to Holding International Sport Games Abstract For years, holding international sport games has been Taiwan s goal and we are on the way
More informationMicrosoft PowerPoint  NCBA_Cattlemens_College_Darrh_B
Introduction to Genetics Darrh Bullock University of Kentucky The Model Trait = Genetics + Environment Genetics Additive Predictable effects that get passed from generation to generation NonAdditive Primarily
More information85% NCEP CFS 10 CFS CFS BP BP BP ~ 15 d CFS BP r  1 r CFS 2. 1 CFS 10% 50% 3 d CFS Cli
1 2 3 1. 310030 2. 100054 3. 116000 CFS BP doi 10. 13928 /j. cnki. wrahe. 2016. 04. 020 TV697. 1 A 10000860 2016 04008805 Abandoned water risk ratio controlbased reservoir predischarge control method
More informationIP TCP/IP PC OS µclinux MPEG4 Blackfin DSP MPEG4 IP UDP Winsock I/O DirectShow Filter DirectShow MPEG4 µclinux TCP/IP IP COM, DirectShow I
2004 5 IP TCP/IP PC OS µclinux MPEG4 Blackfin DSP MPEG4 IP UDP Winsock I/O DirectShow Filter DirectShow MPEG4 µclinux TCP/IP IP COM, DirectShow I Abstract The techniques of digital video processing, transferring
More informationVol. 36 ( 2016 ) No. 6 J. of Math. (PRC) HS, (, ) :. HS,. HS. : ; HS ; ; Nesterov MR(2010) : 90C05; 65K05 : O221.1 : A : (2016)
Vol. 36 ( 6 ) No. 6 J. of Math. (PRC) HS, (, 454) :. HS,. HS. : ; HS ; ; Nesterov MR() : 9C5; 65K5 : O. : A : 557797(6)698 ū R n, A R m n (m n), b R m, b = Aū. ū,,., ( ), l ū min u s.t. Au = b, (.)
More information2008 Nankai Business Review 61
150 5 * 71272026 60 2008 Nankai Business Review 61 / 62 Nankai Business Review 63 64 Nankai Business Review 65 66 Nankai Business Review 67 68 Nankai Business Review 69 Mechanism of Luxury Brands Formation
More information穨control.PDF
TCP congestion control yhmiu Outline Congestion control algorithms Purpose of RFC2581 Purpose of RFC2582 TCP SSDR 1998 TCP Extensions RFC1072 1988 SACK RFC2018 1996 FACK 1996 RateHalving 1997 OldTahoe
More informationθ 1 = φ n n 2 2 n AR n φ i = 0 1 = a t  θ θ m a tm 3 3 m MA m 1. 2 ρ k = R k /R 0 5 Akaike ρ k 1 AIC = n ln δ 2
35 2 2012 2 GEOMATICS & SPATIAL INFORMATION TECHNOLOGY Vol. 35 No. 2 Feb. 2012 1 2 3 4 1. 450008 2. 450005 3. 450008 4. 572000 20 J 101 20 ARMA TU196 B 16725867 2012 020213  04 Application of Time Series
More information國 史 館 館 刊 第 23 期 Chiang Chingkuo s Educational Innovation in Southern Jiangxi and Its Effects (19411943) Abstract Wenyuan Chu * Chiang Chingkuo wa
國 史 館 館 刊 第 二 十 三 期 (2010 年 3 月 ) 119164 國 史 館 19411943 朱 文 原 摘 要 1 關 鍵 詞 : 蔣 經 國 贛 南 學 校 教 育 社 會 教 育 掃 盲 運 動 119 國 史 館 館 刊 第 23 期 Chiang Chingkuo s Educational Innovation in Southern Jiangxi and
More informationhks298cover&back
2957 6364 2377 3300 2302 1087 www.scout.org.hk scoutcraft@scout.org.hk 2675 0011 5,500 Service and Scouting Recently, I had an opportunity to learn more about current state of service in Hong Kong
More information第二十四屆全國學術研討會論文中文格式摘要
以 田 口 動 態 法 設 計 物 理 治 療 用 牽 引 機 與 機 構 改 善 1, 2 簡 志 達 馮 榮 豐 1 國 立 高 雄 第 一 科 技 大 學 機 械 與 自 動 化 工 程 系 2 傑 邁 電 子 股 份 有 限 公 司 1 摘 要 物 理 治 療 用 牽 引 機 的 主 要 功 能 是 將 兩 脊 椎 骨 之 距 離 拉 開, 使 神 經 根 不 致 受 到 壓 迫 該 類 牽
More information國立中山大學學位論文典藏
i Examinations have long been adopting for the selection of the public officials and become an essential tradition in our country. For centuries, the examination system, incorporated with fairness, has
More informationChinese Journal of Applied Probability and Statistics Vol.25 No.4 Aug (,, ;,, ) (,, ) 应用概率统计 版权所有, Zhang (2002). λ q(t)
2009 8 Chinese Journal of Applied Probability and Statistics Vol.25 No.4 Aug. 2009,, 541004;,, 100124),, 100190), Zhang 2002). λ qt), KolmogorovSmirov, Berk and Jones 1979). λ qt).,,, λ qt),. λ qt) 1,.
More informationMicrosoft Word  刘藤升答辩修改论文.doc
武 汉 体 育 学 院 硕 士 学 位 论 文 ( 全 日 制 硕 士 ) 社 会 需 求 视 角 下 武 汉 体 院 乒 乓 球 硕 士 研 究 生 就 业 状 况 研 究 研 究 生 : 刘 藤 升 导 师 : 滕 守 刚 副 教 授 专 业 : 体 育 教 育 训 练 学 研 究 方 向 : 乒 乓 球 教 学 训 练 理 论 与 方 法 2015 年 5 月 分 类 号 : G8 学 校 代
More informationThe Development of Color Constancy and Calibration System
The Development of Color Constancy and Calibration System The Development of Color Constancy and Calibration System LabVIEW CCD BMP ii Abstract The modern technologies develop more and more faster, and
More informationMicrosoft PowerPoint  talk8.ppt
Adaptive Playout Scheduling Using Timescale Modification Yi Liang, Nikolaus Färber Bernd Girod, Balaji Prabhakar Outline QoS concerns and tradeoffs Jitter adaptation as a playout scheduling scheme Packet
More informationAbstract There arouses a fever pursuing the position of being a civil servant in China recently and the phenomenon of thousands of people running to a
Abstract There arouses a fever pursuing the position of being a civil servant in China recently and the phenomenon of thousands of people running to attend the entrance examination of civil servant is
More information10384 X0015101 UDC The Preliminary Survey of the Development Patterns of Security Analysts in China (MBA) 2004 2 2004 3 2004 3 2 0 0 4 2 14 Abstract Abstract The security analysts are respectable oversea,
More information% GIS / / Fig. 1 Characteristics of flood disaster variation in suburbs of Shang
20 6 2011 12 JOURNAL OF NATURAL DISASTERS Vol. 20 No. 6 Dec. 2011 10044574 2011 060094  05 200062 19491990 1949 1977 0. 8 0. 03345 0. 01243 30 100 P426. 616 A Risk analysis of flood disaster in Shanghai
More informationSettlement Equation " H = CrH 1+ e o log p' o + ( p' p' c o! p' o ) CcH + 1+ e o log p' c + p' f! ( p' p' c c! p' o ) where ΔH = consolidation settlem
Prediction of Compression and Recompression Indices of Texas Overconsolidated Clays Presented By: Sayeed Javed, Ph.D., P.E. Settlement Equation " H = CrH 1+ e o log p' o + ( p' p' c o! p' o ) CcH + 1+
More informationa b
38 3 2014 5 Vol. 38 No. 3 May 2014 55 Population Research + + 3 100038 A Study on Implementation of Residence Permit System Based on Three Local Cases of Shanghai Chengdu and Zhengzhou Wang Yang Abstract
More informationMicrosoft Word  专论综述1.doc
2016 年 第 25 卷 第 期 http://www.csa.org.cn 计 算 机 系 统 应 用 1 基 于 节 点 融 合 分 层 法 的 电 网 并 行 拓 扑 分 析 王 惠 中 1,2, 赵 燕 魏 1,2, 詹 克 非 1, 朱 宏 毅 1 ( 兰 州 理 工 大 学 电 气 工 程 与 信 息 工 程 学 院, 兰 州 730050) 2 ( 甘 肃 省 工 业 过 程 先
More information广 州 市 花 都 区 公 务 员 培 训 需 求 分 析 的 研 究 A STUDY OF TRAINING NEEDS ANALYSIS ON CIVIL SERVANTS OF HUADU DISTRICT IN GUANGZHOU 作 者 姓 名 : 黄 宁 宁 领 域 ( 方 向 ): 公
广 州 市 花 都 区 公 务 员 培 训 需 求 分 析 的 研 究 分 类 号 :C93 单 位 代 码 :10183 研 究 生 学 号 :201223A022 密 级 : 公 开 研 吉 林 大 学 硕 士 学 位 论 文 ( 专 业 学 位 ) 黄 宁 宁 广 州 市 花 都 区 公 务 员 培 训 需 求 分 析 的 研 究 A STUDY OF TRAINING NEEDS ANALYSIS
More informationMicrosoft PowerPoint  TTCNIntroductionv5.ppt
Conformance Testing and TTCN 工研院無線通訊技術部林牧台 / Morton Lin 035912360 mtlin@itri.org.tw 1 Outline Introduction and Terminology Conformance Testing Process 3GPP conformance testing and test cases A real world
More informationMicrosoft PowerPoint  CH 04 Techniques of Circuit Analysis
Chap. 4 Techniques of Circuit Analysis Contents 4.1 Terminology 4.2 Introduction to the NodeVoltage Method 4.3 The NodeVoltage Method and Dependent Sources 4.4 The NodeVoltage Method: Some Special Cases
More informationMicrosoft PowerPoint SSBSE .ppt [Modo de Compatibilidade]
SSBSE 2015, Bergamo Transformed Search Based Software Engineering: A New Paradigm of SBSE He JIANG, Zhilei Ren, Xiaochen Li, Xiaochen Lai jianghe@dlut.edu.cn School of Software, Dalian Univ. of Tech. Outline
More informationA Study on the Relationships of the Coconstruction Contract A Study on the Relationships of the CoConstruction Contract ( ) ABSTRACT Coconstructio in the real estate development, holds the quite
More informationShanghai International Studies University A STUDY ON SYNERGY BUYING PRACTICE IN ABC COMPANY A Thesis Submitted to the Graduate School and MBA Center I
上 海 外 国 语 大 学 工 商 管 理 硕 士 学 位 论 文 ABC 中 国 食 品 公 司 的 整 合 采 购 研 究 学 科 专 业 : 工 商 管 理 硕 士 (MBA) 作 者 姓 名 :0113700719 指 导 教 师 : 答 辩 日 期 : 2013 年 12 月 上 海 外 国 语 大 学 二 一 四 年 一 月 Shanghai International Studies
More informationMicrosoft Word  林文晟3.doc
台 灣 管 理 學 刊 第 8 卷 第 期,008 年 8 月 pp. 3346 建 構 農 產 運 銷 物 流 中 心 評 選 模 式 決 策 之 研 究 林 文 晟 清 雲 科 技 大 學 企 業 管 理 系 助 理 教 授 梁 榮 輝 崇 右 技 術 學 院 企 業 管 理 系 教 授 崇 右 技 術 學 院 校 長 摘 要 台 灣 乃 以 農 立 國, 農 業 經 濟 在 台 灣 經 濟
More information1 引言
P P 第 40 卷 Vol.40 第 7 期 No.7 计 算 机 工 程 Computer Engineering 014 年 7 月 July 014 开 发 研 究 与 工 程 应 用 文 章 编 号 :1000348(014)0708105 文 献 标 识 码 :A 中 图 分 类 号 :TP391.41 摘 基 于 图 像 识 别 的 震 象 云 地 震 预 测 方 法 谢 庭,
More informationGassama Abdoul Gadiri University of Science and Technology of China A dissertation for master degree Ordinal Probit Regression Model and Application in Credit Rating for Users of Credit Card Author :
More informationPublic Projects A Thesis Submitted to Department of Construction Engineering National Kaohsiung First University of Science and Technology In Partial
Public Projects A Thesis Submitted to Department of Construction Engineering National Kaohsiung First University of Science and Technology In Partial Fulfillment of the Requirements For the Degree of Master
More information世新稿件end.doc
Research Center For Taiwan Economic Development (RCTED) 2003 8 1 2 Study of Operational Strategies on Biotechnology Pharmaceutical Products Industry in Taiwan  Case Study on Sinphar Pharmaceutical Company
More informationImproved Preimage Attacks on AESlike Hash Functions: Applications to Whirlpool and Grøstl
SKLOIS (Pseudo) Preimage Attack on ReducedRound Grøstl Hash Function and Others Shuang Wu, Dengguo Feng, Wenling Wu, Jian Guo, Le Dong, Jian Zou March 20, 2012 Institute. of Software, Chinese Academy
More information课题调查对象：
1 大 陆 地 方 政 府 大 文 化 管 理 职 能 与 机 构 整 合 模 式 比 较 研 究 武 汉 大 学 陈 世 香 [ 内 容 摘 要 ] 迄 今 为 止, 大 陆 地 方 政 府 文 化 管 理 体 制 改 革 已 经 由 试 点 改 革 进 入 到 全 面 推 行 阶 段 本 文 主 要 通 过 结 合 典 型 调 查 法 与 比 较 研 究 方 法, 对 已 经 进 行 了 政 府
More information论成都报业群体的生存环境与体制创新
中 华 传 播 会 议 征 稿 : 论 成 都 报 业 群 体 的 生 存 环 境 与 体 制 创 新 On the Ecological Environment and Structural innovations of Press Groups in Chengdu 作 者 : 李 苓 Li Ling 单 位 : 四 川 大 学 新 闻 学 院 College of Journalism, Sichuan
More information% % 99% Sautman B. Preferential Policies for Ethnic Minorities in China The Case
10015558 2015 03003711 2000 2010 C95 DOI:10.16486/j.cnki.621035/d.2015.03.005 A 1 2014 14CRK014 2013 13SHC012 1 47 2181 N. W. Journal of Ethnology 2015 3 86 2015.No.3 Total No.86 20 70 122000 2007
More information* CO3 A 16742486 2011 040005  18 P. 253 * 5 1. 1949 1991 1949 1991 6 2. 7 1 2001 2 2008 8 1 2 2008 11 http / /www. rnd. ncnu. edu. tw /hdcheng /method /ways. doc 2008 / 9 disciplinary matrix 1 1. 2001
More informationC doc
No. C2004010 20047 No. C2004010 2004 7 16 1 No. C2004010 2004 7 16 2000 1990 1990 2000 ( ),, 2 / 2000 1990 1990 2000 ( ),, (1952) 100871 It is not Appropriate to Remove the Birth Spacing Policy Now,
More information08陈会广
第 34 卷 第 10 期 2012 年 10 月 2012,34(10):18711880 Resources Science Vol.34,No.10 Oct.,2012 文 章 编 号 :10077588(2012)10187110 房 地 产 市 场 及 其 细 分 的 调 控 重 点 区 域 划 分 理 论 与 实 证 以 中 国 35 个 大 中 城 市 为 例 陈 会 广 1,
More information