Terrypape8419

In this article, we aim to solve the consensus problem for linear multiagent systems (MASs) with input saturation under directed interaction graphs, where each agent has access only to the local output information of its neighbors. By introducing a multilevel saturation feedback control approach, we propose a fully distributed adaptive anti-windup protocol in which a local observer, a distributed observer, and an anti-windup observer are constructed separately for each agent to estimate the consensus error, achieve consensus on a certain internal state, and provide anti-windup compensation, respectively. A dual protocol is further presented in which the distributed observer is designed based on the input matrix; this clarifies the connection between the distributed observer and the anti-windup observer and opens the opportunity to reduce the order of the controller by designing an integrated distributed anti-windup observer. Three types of distributed anti-windup protocols are then proposed based on the integrated distributed anti-windup observer, each requiring different assumptions. Specifically, the first protocol needs two-hop relay information to generate a local observer that estimates the consensus error; the second designs the local observer with absolute output information to estimate the state instead; and the last introduces an assumption on the transmission zeros of the agents' dynamics to design an unknown-input observer that estimates the consensus error. All of the protocols are validated by rigorous theoretical proofs and illustrated through simulation examples.

Time-triggered impulsive control of complex homogeneous dynamical networks has received wide attention because it occupies the communication channels only occasionally. This article is devoted to the quasisynchronization of heterogeneous dynamical networks via event-triggered impulsive control with even less channel occupation. Two triggering mechanisms are proposed: a centralized event-triggered mechanism, in which the control is updated based on the state information of all nodes, and a distributed event-triggered mechanism, in which the control is updated according to the state information of each node and its neighboring nodes. Under both mechanisms, the synchronization error between the heterogeneous dynamical networks and a virtual target is kept within a nonzero bound. Moreover, Zeno behavior is shown to be excluded. It is found that combining event-triggered control with impulsive control, that is, distributed event-triggered impulsive control, has the advantage of low energy consumption and occupies far fewer communication channels than time-triggered impulsive control. Two numerical examples illustrate the effectiveness of the proposed event-triggered impulsive controls.
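The observer stacks of the first abstract are beyond a short example, but the core ingredient, a consensus update driven through an input-saturation element over a directed graph, can be sketched minimally. Everything here (the single-integrator dynamics, the cycle graph A, the gain and bound) is an illustrative assumption, not the article's construction:

```python
import numpy as np

# Hypothetical directed interaction graph: row i lists the in-neighbors of
# agent i (here a directed cycle, which contains a spanning tree).
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)

def sat(u, bound=1.0):
    """Component-wise input saturation."""
    return np.clip(u, -bound, bound)

def step(x, k=0.5, dt=0.01):
    """One Euler step of a saturated consensus update for single integrators."""
    # Relative consensus error: e_i = sum_j a_ij * (x_j - x_i)
    e = A @ x - A.sum(axis=1) * x
    return x + dt * sat(k * e)

x = np.array([2.0, -1.0, 0.5, 3.0])
for _ in range(5000):
    x = step(x)
print(x)  # the states converge toward a common value despite saturation
```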
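As a minimal sketch of the second abstract's centralized event-triggered impulsive idea: between impulses each node evolves freely, and an impulsive correction toward the virtual target fires only when the measured error crosses a threshold, so the error stays within a nonzero bound and no fixed update schedule is needed. The dynamics f, the impulse gain mu, and the threshold below are assumptions for illustration:

```python
import numpy as np

def f(x):
    """Assumed node drift; both the node and the virtual target follow it."""
    return -0.5 * x + np.tanh(x)

def simulate(T=20.0, dt=1e-3, mu=0.6, threshold=0.3):
    x, s = np.array([2.0, -1.5]), np.zeros(2)   # node state, virtual target
    impulses = 0
    for _ in range(int(T / dt)):
        x += dt * f(x)                          # free evolution
        s += dt * f(s)
        if np.linalg.norm(x - s) > threshold:   # centralized triggering rule
            x += mu * (s - x)                   # impulsive correction
            impulses += 1
    return np.linalg.norm(x - s), impulses

final_err, n = simulate()
# Quasisynchronization: the error is bounded but nonzero, and the error must
# regrow continuously after each impulse, which rules out Zeno behavior.
print(f"final error {final_err:.3f} after {n} impulses")
```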
Deep neural networks (DNNs), characterized by sophisticated architectures capable of learning a hierarchy of feature representations, have achieved remarkable success in various applications. Learning a DNN's parameters is a crucial but challenging task that is commonly tackled with gradient-based backpropagation (BP) methods. However, BP-based methods suffer from severe initialization sensitivity and are prone to getting trapped in inferior local optima. To address these issues, we propose a DNN learning framework, called BPCC, that hybridizes cooperative coevolution (CC)-based optimization with BP-based gradient descent, and we implement it by devising a computationally efficient CC-based optimization technique dedicated to DNN parameter learning. In BPCC, BP executes intermittently for multiple training epochs. Whenever a BP training epoch fails to sufficiently decrease the training objective, CC kicks in, using the parameter values derived by BP as its starting point; the best parameter values obtained by CC then serve as the starting point for BP in its next training epoch. In CC-based optimization, the overall parameter-learning task is decomposed into many subtasks, each learning a small portion of the parameters, and these subtasks are addressed individually in a cooperative manner. In this article, we treat neurons as the basic decomposition units. Furthermore, to reduce the computational cost, we devise a maturity-based subtask selection strategy that selectively solves subtasks of higher priority. Experimental results demonstrate the superiority of the proposed method over common-practice DNN parameter-learning techniques.

Recently, many convolutional neural network (CNN) methods have been designed for hyperspectral image (HSI) classification, since CNNs produce good representations of data, a capability that benefits greatly from their huge number of parameters. However, solving such a high-dimensional optimization problem often requires a large number of training samples to avoid overfitting; in addition, it is a typical nonconvex problem plagued by many local minima and flat regions. To address these problems, in this article we introduce naive Gabor networks, or Gabor-Nets, which, for the first time in the literature, design and learn CNN kernels strictly in the form of Gabor filters, aiming to reduce the number of parameters, constrain the solution space, and hence improve the performance of CNNs. Specifically, we develop an innovative phase-induced Gabor kernel, designed so that Gabor feature learning is performed via a linear combination of local low-frequency and high-frequency components of the data controlled by the kernel phase. With the phase-induced Gabor kernel, the proposed Gabor-Nets gain the ability to adapt automatically to the local harmonic characteristics of the HSI data and thus yield more representative harmonic features. Moreover, this kernel carries out traditional complex-valued Gabor filtering in a real-valued manner, allowing Gabor-Nets to run easily in a standard CNN pipeline. We evaluated the newly developed Gabor-Nets on three well-known HSIs; the results suggest that Gabor-Nets can significantly improve the performance of CNNs, particularly with small training sets.
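The BPCC alternation described in the third abstract can be sketched on a toy network. The stall test, the simple random-perturbation hill climber standing in for the CC optimizer, and the omission of the maturity-based selection strategy are simplifications of this sketch, not the paper's method; neurons as decomposition units follow the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task and a one-hidden-layer tanh network; the rows of W1
# (one per hidden neuron) serve as the CC decomposition units.
X = rng.normal(size=(64, 3))
y = np.sin(X).sum(axis=1, keepdims=True)
W1 = 0.5 * rng.normal(size=(8, 3))   # hidden weights: 8 neurons x 3 inputs
W2 = 0.5 * rng.normal(size=(1, 8))   # output weights

def loss(W1, W2):
    return float(np.mean((np.tanh(X @ W1.T) @ W2.T - y) ** 2))

def bp_epoch(W1, W2, lr=0.1):
    """One epoch of plain gradient descent on the toy network."""
    h = np.tanh(X @ W1.T)
    g = 2.0 * (h @ W2.T - y) / len(X)          # dLoss/dPrediction
    dW2 = g.T @ h
    dW1 = ((g @ W2) * (1.0 - h ** 2)).T @ X
    return W1 - lr * dW1, W2 - lr * dW2

def cc_round(W1, W2, sigma=0.1, trials=20):
    """Cooperative step: perturb one neuron (subtask) at a time while the
    rest of the network stays fixed, keeping only improvements."""
    for i in range(W1.shape[0]):               # neurons as decomposition units
        for _ in range(trials):
            cand = W1.copy()
            cand[i] += sigma * rng.normal(size=W1.shape[1])
            if loss(cand, W2) < loss(W1, W2):
                W1 = cand
    return W1, W2

prev = loss(W1, W2)
for epoch in range(200):
    W1, W2 = bp_epoch(W1, W2)
    cur = loss(W1, W2)
    if prev - cur < 1e-4:                      # BP stalled: let CC kick in
        W1, W2 = cc_round(W1, W2)              # starts from BP's parameters
        cur = loss(W1, W2)
    prev = cur                                 # CC's best seeds the next epoch
print(f"final loss: {prev:.4f}")
```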
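To make the Gabor-kernel idea of the last abstract concrete, a real-valued Gabor kernel can be generated from a handful of parameters (frequency, orientation, phase, envelope width); since cos(a + phi) = cos(phi)cos(a) - sin(phi)sin(a), the phase linearly mixes the even and odd harmonic components, loosely mirroring the phase-induced kernel described above. The parameterization below is the standard textbook Gabor form, not the paper's exact construction:

```python
import numpy as np

def gabor_kernel(size, freq, theta, phase, sigma):
    """Real-valued Gabor kernel: a Gaussian envelope times a plane wave.
    The phase blends the cosine (even) and sine (odd) components."""
    half = size // 2
    yy, xx = np.mgrid[-half:half + 1, -half:half + 1]
    rot = xx * np.cos(theta) + yy * np.sin(theta)   # rotated coordinate
    envelope = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return envelope * np.cos(2.0 * np.pi * freq * rot + phase)

# In a Gabor-constrained convolutional layer, only (freq, theta, phase,
# sigma) would be learned per kernel instead of every weight, giving far
# fewer free parameters and a constrained solution space.
k = gabor_kernel(size=5, freq=0.25, theta=np.pi / 4, phase=0.0, sigma=1.5)
print(k.shape)
print(k.round(2))
```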