Updated on 2021/06/22

NITTA Tohru
*Items subject to periodic update by Rikkyo University (The rest are reprinted from information registered on researchmap.)
Affiliation*
Graduate School of Artificial Intelligence and Science, Artificial Intelligence and Science
Title*
Specially Appointed Professor
Degree
Ph.D. in Engineering (University of Tsukuba)
Research Theme*
  • I study neural networks that extend the conventional real-valued framework to higher-dimensional algebras such as complex numbers and quaternions, exploring interesting properties that real-valued neural networks lack and the creation of new models. Recently I have also become interested in the singular points of deep neural networks. (A minimal illustrative sketch follows below.)
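
As a rough, illustrative sketch of this extension (not code from any particular paper): a complex-valued neuron performs the same weighted-sum-plus-activation computation as a real-valued one, but with complex weights, inputs, and threshold. The split-type activation below, applied to the real and imaginary parts separately, is one common choice in this literature.

```python
import numpy as np

def complex_neuron(x, w, theta):
    """Split-type complex-valued neuron: tanh is applied to the real and
    imaginary parts of the complex net input separately."""
    u = np.dot(w, x) + theta                     # complex net input
    return np.tanh(u.real) + 1j * np.tanh(u.imag)

x = np.array([0.3 + 0.5j, -0.2 + 1.0j])          # complex-valued inputs
w = np.array([1.0 - 0.5j, 0.8 + 0.2j])           # complex-valued weights
print(complex_neuron(x, w, theta=0.1 - 0.3j))
```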

Research Interests

  • Neural network
  • Clifford algebra
  • Singular point
  • Complex number
  • Quaternion

Campus Career*

    • Apr 2020 - Present 
      Graduate School of Artificial Intelligence and Science   Artificial Intelligence and Science   Specially Appointed Professor

    Research Areas

    • Informatics / Soft computing

    Research History

• Apr 2020 - Present 
      Rikkyo University

      More details

• Apr 2001 - Mar 2021 
      National Institute of Advanced Industrial Science and Technology

      More details

• Apr 2006 - Mar 2008 
      Osaka University

      More details

• May 2000 - Mar 2006 
  Osaka University   Graduate School of Science, Department of Mathematics   Associate Professor (concurrent)

      More details

• Apr 1990 - Mar 2001 
      Electrotechnical Laboratory

      More details

• Apr 1985 - Mar 1990 
      NEC Corporation

      More details


    Education

• Apr 1983 - Mar 1985 
      University of Tsukuba

      More details

      Country: Japan

      researchmap

• Apr 1979 - Mar 1983 
  University of Tsukuba   First Cluster of Colleges, College of Natural Sciences

      More details

      Country: Japan

      researchmap

    Papers

    • Fundamental Structure of Orthogonal Variable Commutative Quaternion Neurons. Peer-reviewed

      Tohru Nitta, Hui Hu Gan

Proceedings of Joint 11th International Conference on Soft Computing and Intelligent Systems and 21st International Symposium on Advanced Intelligent Systems, SCIS & ISIS2020   1 - 3   Dec 2020

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (international conference proceedings)  

      DOI: 10.1109/SCISISIS50064.2020.9322711

      researchmap

      Other Link: https://dblp.uni-trier.de/db/conf/scisisis/scisisis2020.html#NittaG20

    • No Bad Local Minima in Wide and Deep Nonlinear Neural Networks

      Tohru Nitta

arXiv preprint arXiv:1806.04884v2   2020

      More details

      Authorship:Lead author, Corresponding author   Publishing type:Research paper (other academic)  

      researchmap

    • Hypercomplex Widely Linear Estimation Through the Lens of Underpinning Geometry Peer-reviewed

      Tohru Nitta, Masaki Kobayashi, Danilo P. Mandic

IEEE TRANSACTIONS ON SIGNAL PROCESSING   67 ( 15 ) 3985 - 3994   Aug 2019

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (scientific journal)   Publisher:IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC  

      We provide a rigorous account of the equivalence between the complex-valued widely linear estimation method and the quaternion involution widely linear estimation method with their vector-valued real linear estimation counterparts. This is achieved by an account of degrees of freedom and by providing matrix mappings between a complex variable and an isomorphic bivariate real vector, and a quaternion variable versus a quadri-variate real vector. Furthermore, we show that the parameters in the complex-valued linear estimation method, the complex-valued widely linear estimation method, the quaternion linear estimation method, the quaternion semi-widely linear estimation method, and the quaternion involution widely linear estimation method include distinct geometric structures imposed on complex numbers and quaternions, respectively, whereas the real-valued linear estimation methods do not exhibit any structure. This key difference explains, both in theoretical and practical terms, the advantage of estimation in division algebras (complex, quaternion) over their multivariate real vector counterparts. In addition, we discuss the computational complexities of the estimators of the hypercomplex widely linear estimation methods.

      DOI: 10.1109/TSP.2019.2922151

      researchmap
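
As a numerical illustration of the advantage of widely linear over strictly linear estimation discussed in the abstract above (a toy sketch; the signal model, target, and sample size are invented for demonstration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # complex regressor
y = x.real   # the real part is a widely linear, not strictly linear, function of x

# Strictly linear estimate y_hat = h * x (least squares over h)
h = np.vdot(x, y) / np.vdot(x, x)
mse_sl = np.mean(np.abs(y - h * x) ** 2)

# Widely linear estimate y_hat = h1 * x + h2 * conj(x)
A = np.column_stack([x, np.conj(x)])
coef, *_ = np.linalg.lstsq(A, y.astype(complex), rcond=None)
mse_wl = np.mean(np.abs(y - A @ coef) ** 2)

print(f"strictly linear MSE: {mse_sl:.3f}")   # about 0.5
print(f"widely linear MSE:   {mse_wl:.2e}")   # essentially 0: Re(x) = (x + conj(x))/2
```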

    • Resolution of Singularities via Deep Complex-Valued Neural Networks Peer-reviewed

      Tohru Nitta

MATHEMATICAL METHODS IN THE APPLIED SCIENCES   41 ( 11 ) 4170 - 4178   Jul 2018

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (scientific journal)   Publisher:WILEY  

It has been reported that training deep neural networks is more difficult than training shallow neural networks. Hinton et al. proposed deep belief networks with a learning algorithm that trains one layer at a time. A much better generalization can be achieved when pre-training each layer with an unsupervised learning algorithm. Since then, deep neural networks have been extensively studied. On the other hand, it has been revealed that singular points affect the training dynamics of learning models such as neural networks and cause a standstill of training. Naturally, training deep neural networks suffers from singular points. In this paper, we present a deep neural network model that has fewer singular points than the usual one. First, we demonstrate that some singular points in the deep real-valued neural network, which is equivalent to a deep complex-valued neural network, have been resolved as its inherent property. Such deep neural networks are less likely to become trapped in local minima or plateaus caused by critical points. Results of experiments on the two spirals problem, which has an extreme nonlinearity, support our theory. Copyright (c) 2017 John Wiley & Sons, Ltd.

      DOI: 10.1002/mma.4434

      researchmap

    • Hyperbolic Gradient Operator and Hyperbolic Back-Propagation Learning Algorithms Peer-reviewed

      Tohru Nitta, Yasuaki Kuroe

IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS   29 ( 5 ) 1689 - 1702   May 2018

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (scientific journal)   Publisher:IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC  

In this paper, we first extend the Wirtinger derivative, which is defined for complex functions, to hyperbolic functions, and derive the hyperbolic gradient operator yielding the steepest descent direction by using it. Next, we derive the hyperbolic back-propagation learning algorithms for some multilayered hyperbolic neural networks (NNs) using the hyperbolic gradient operator. It is shown that the use of the Wirtinger derivative reduces the effort necessary for the derivation of the learning algorithms by half, simplifies the representation of the learning algorithms, and makes their computer programs easier to code. In addition, we discuss the differences between the derived Hyperbolic-BP rules and the complex-valued back-propagation learning rule (Complex-BP). Finally, we perform some experiments with the derived learning algorithms. As a result, we find that the convergence rates of the Hyperbolic-BP learning algorithms are high even if fully hyperbolic activation functions are used, and discover that the Hyperbolic-BP learning algorithm for the hyperbolic NN with the split-type hyperbolic activation function has an ability to learn hyperbolic rotation as its inherent property.

      DOI: 10.1109/TNNLS.2017.2677446

      researchmap
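
For context, the steepest-descent role of the (complex) Wirtinger gradient that the paper extends to hyperbolic numbers can be seen in a two-line example: for the real-valued cost f(z) = |z − c|², the conjugate Wirtinger derivative is ∂f/∂z̄ = z − c, and descending along its negative converges to the minimizer (a minimal sketch with an arbitrary target and step size):

```python
import numpy as np

c = 2.0 - 1.0j                      # minimizer of f(z) = |z - c|^2
z, lr = 0.0 + 0.0j, 0.1

for _ in range(200):
    grad_conj = z - c               # conjugate Wirtinger derivative df/dz_bar
    z -= lr * grad_conj             # steepest descent direction is -df/dz_bar

print(z)                            # converges to (2-1j)
```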

    • Weight Initialization without Local Minima in Deep Nonlinear Neural Networks.

Tohru Nitta

arXiv preprint arXiv:1806.04884   2018

      More details

      Authorship:Lead author, Corresponding author  

      researchmap

      Other Link: https://dblp.uni-trier.de/db/journals/corr/corr1806.html#abs-1806-04884

    • Resolution of Singularities Introduced by Hierarchical Structure in Deep Neural Networks Peer-reviewed

      Tohru Nitta

IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS   28 ( 10 ) 2282 - 2293   Oct 2017

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (scientific journal)   Publisher:IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC  

We present a theoretical analysis of singular points of artificial deep neural networks, resulting in deep neural network models having no critical points introduced by a hierarchical structure. Such deep neural network models are considered to be well suited to gradient-based optimization. First, we show that there exists a large number of critical points introduced by a hierarchical structure in deep neural networks, which form straight lines in parameter space, depending on the number of hidden layers and the number of hidden neurons. Second, we derive a sufficient condition for deep neural networks having no critical points introduced by a hierarchical structure, which can be applied to general deep neural networks. It is also shown that the existence of critical points introduced by a hierarchical structure is determined by the rank and the regularity of weight matrices for a specific class of deep neural networks. Finally, two kinds of implementation methods of the sufficient conditions to have no critical points are provided. One is a learning algorithm that can avoid critical points introduced by the hierarchical structure during learning (called the avoidant learning algorithm). The other is a neural network that does not have some critical points introduced by the hierarchical structure as an inherent property (called the avoidant neural network).

      DOI: 10.1109/TNNLS.2016.2580741

      researchmap
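
One way to see the "critical points as straight lines" phenomenon analyzed above: splitting a hidden unit of a network with H−1 hidden units into two copies whose outgoing weights sum to the original gives a one-parameter line in the H-unit parameter space along which the network's output, and hence the loss, is constant; if the smaller network sits at a critical point, the whole line consists of critical points. A minimal sketch under these assumptions (tanh network, arbitrary data):

```python
import numpy as np

def net(x, W1, b1, w2):
    """Two-layer tanh network with scalar output."""
    return np.tanh(x @ W1 + b1) @ w2

rng = np.random.default_rng(1)
x = rng.standard_normal((5, 3))        # arbitrary inputs
W1 = rng.standard_normal((3, 2))       # H-1 = 2 hidden units
b1 = rng.standard_normal(2)
w2 = rng.standard_normal(2)
base = net(x, W1, b1, w2)

# Split hidden unit 0 into two copies whose output weights sum to w2[0]:
for lam in [0.0, 0.3, 1.7]:            # any lambda gives the same output
    W1s = np.column_stack([W1[:, 0], W1])       # duplicate incoming weights
    b1s = np.concatenate([[b1[0]], b1])
    w2s = np.concatenate([[lam * w2[0]], [(1 - lam) * w2[0]], w2[1:]])
    assert np.allclose(net(x, W1s, b1s, w2s), base)
print("output is constant along the line parameterized by lambda")
```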

    • On the Singularity in Deep Neural Networks Peer-reviewed

      Tohru Nitta

Proceedings of the 23rd International Conference on Neural Information Processing, ICONIP2016-Kyoto, Part IV, LNCS 9950   389 - 396   2016

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:SPRINGER INT PUBLISHING AG  

      In this paper, we analyze a deep neural network model from the viewpoint of singularities. First, we show that there exist a large number of critical points introduced by a hierarchical structure in the deep neural network as straight lines. Next, we derive sufficient conditions for the deep neural network having no critical points introduced by a hierarchical structure.

      DOI: 10.1007/978-3-319-46681-1_47

      researchmap

    • Performance Bounds of Quaternion Estimators Peer-reviewed

      Yili Xia, Cyrus Jahanchahi, Tohru Nitta, Danilo P. Mandic

IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS   26 ( 12 ) 3287 - 3292   Dec 2015

      More details

      Language:English   Publishing type:Research paper (scientific journal)   Publisher:IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC  

      The quaternion widely linear (WL) estimator has been recently introduced for optimal second-order modeling of the generality of quaternion data, both second-order circular (proper) and second-order noncircular (improper). Experimental evidence exists of its performance advantage over the conventional strictly linear (SL) as well as the semi-WL (SWL) estimators for improper data. However, rigorous theoretical and practical performance bounds are still missing in the literature, yet this is crucial for the development of quaternion valued learning systems for 3-D and 4-D data. To this end, based on the orthogonality principle, we introduce a rigorous closed-form solution to quantify the degree of performance benefits, in terms of the mean square error, obtained when using the WL models. The cases when the optimal WL estimation can simplify into the SWL or the SL estimation are also discussed.

      DOI: 10.1109/TNNLS.2015.2388782

      researchmap

    • Complex-Valued Neurocomputing and Singular Points Invited Peer-reviewed

      Tohru Nitta

ARCHIVES OF NEUROSCIENCE   2 ( 4 )   Oct 2015

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publisher:KOWSAR CORP  

Context: Recently, the singular points of neural networks have attracted attention from the artificial intelligence community, and their interesting properties have been demonstrated. The objective of this study is to provide an overview of studies on the singularities of complex-valued neural networks. Evidence Acquisition: This review is based on the relevant literature on complex-valued neural networks and singular points. Results: A review of the studies and available literature on the subject area shows that the singular points of complex-valued neural networks have negative effects on learning, as do those of real-valued neural networks. However, the nature of the singular points in complex-valued neural networks is superior in quality, and methods for improving the learning performance have been proposed. Conclusions: A complex-valued neural network could be a promising learning method from the viewpoint of a singularity.

      DOI: 10.5812/archneurosci.27461

      researchmap

    • Learning Dynamics of a Single Polar Variable Complex-Valued Neuron Peer-reviewed

      Tohru Nitta

NEURAL COMPUTATION   27 ( 5 ) 1120 - 1141   May 2015

      More details

      Authorship:Lead author   Language:English   Publishing type:Research paper (scientific journal)   Publisher:MIT PRESS  

      This letter investigates the characteristics of the complex-valued neuron model with parameters represented by polar coordinates (called polar variable complex-valued neuron). The parameters of the polar variable complex-valued neuron are unidentifiable. The plateau phenomenon can occur during learning of the polar variable complex-valued neuron. Furthermore, computer simulations suggest that a single polar variable complex-valued neuron has the following characteristics in the case of using the steepest gradient-descent method with square error: (1) unidentifiable parameters (singular points) degrade the learning speed and (2) a plateau can occur during learning. When the weight is attracted to the singular point, the learning tends to become stuck. However, computer simulations also show that the steepest gradient-descent method with amplitude-phase error and the complex-valued natural gradient method could reduce the effects of the singular points. The learning dynamics near singular points depends on the error functions and the training algorithms used.

      DOI: 10.1162/NECO_a_00729

      researchmap
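
To make the unidentifiability described above concrete: with a weight in polar form w = r·e^{iθ}, the phase θ has no effect on the output when r = 0, and the gradient with respect to θ shrinks with r, which is the mechanism behind the plateau. A toy numerical check (the input, target, and loss below are invented for illustration):

```python
import numpy as np

x, t = 1.0 + 0.5j, 0.8 - 0.2j          # toy input / target

def loss(r, theta):
    w = r * np.exp(1j * theta)          # polar-parameterized weight
    return 0.5 * np.abs(w * x - t) ** 2

eps = 1e-6
for r in [1.0, 0.1, 0.0]:
    # numerical dL/dtheta: it vanishes as r -> 0, where theta is unidentifiable
    g = (loss(r, 0.5 + eps) - loss(r, 0.5 - eps)) / (2 * eps)
    print(f"r = {r:>4}: dL/dtheta = {g:+.4f}")
```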

    • Learning Dynamics of the Complex-Valued Neural Network in the Neighborhood of Singular Points Peer-reviewed

      Tohru Nitta

Journal of Computer and Communications   2 ( 1 ) 27 - 32   2014

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (scientific journal)  

      researchmap

    • Natural Gradient Descent for Training Stochastic Complex-Valued Neural Networks Peer-reviewed

      Tohru Nitta

International Journal of Advanced Computer Science and Applications   5 ( 7 ) 193 - 198   2014

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (scientific journal)  

      researchmap

    • Plateau in a Polar Variable Complex-Valued Neuron Peer-reviewed

      Tohru Nitta

Proceedings of the 6th International Conference on Agents and Artificial Intelligence, ICAART2014-Angers   1   526 - 531   2014

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (international conference proceedings)  

      In this paper, the characteristics of the complex-valued neuron model with parameters represented by polar coordinates (called polar variable complex-valued neuron) are investigated. The main results are as reported below. The polar variable complex-valued neuron is unidentifiable: there exists a parameter that does not affect the output value of the neuron and one cannot identify its value. The plateau phenomenon can occur during learning of the polar variable complex-valued neuron: the learning error does not decrease in a period. Furthermore, it is suggested by computer simulations that a single polar variable complex-valued neuron has the following characteristics: (a) Unidentifiable parameters (singular points) degrade the learning speed. (b) A plateau can occur during learning. When the weight is attracted to the singular point, the learning tends to be stuck.

      DOI: 10.5220/0004887905260531

      Scopus

      researchmap

      Other Link: https://dblp.uni-trier.de/db/conf/icaart/icaart2014-1.html#Nitta14

    • Local Minima in Hierarchical Structures of Complex-Valued Neural Networks Peer-reviewed

      Tohru Nitta

NEURAL NETWORKS   43   1 - 7   Jul 2013

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (scientific journal)   Publisher:PERGAMON-ELSEVIER SCIENCE LTD  

Most of the local minima caused by the hierarchical structure can be resolved by extending the real-valued neural network to complex numbers. It was proved in 2000 that a critical point of the real-valued neural network with H - 1 hidden neurons always gives many critical points of the real-valued neural network with H hidden neurons. These critical points consist of many lines in the parameter space, which could be local minima or saddle points. Local minima cause plateaus, which have a strong negative influence on learning. However, most of the critical points of the complex-valued neural network are saddle points, unlike those of the real-valued neural network. This is a prominent property of the complex-valued neural network. (c) 2013 Elsevier Ltd. All rights reserved.

      DOI: 10.1016/j.neunet.2013.02.002

      researchmap

    • Applications of Clifford's Geometric Algebra Peer-reviewed

      Eckhard Hitzer, Tohru Nitta, Yasuaki Kuroe

Advances in Applied Clifford Algebras   23 ( 2 ) 377 - 404   24 May 2013

      More details

      Language:English   Publishing type:Research paper (scientific journal)   Publisher:SPRINGER BASEL AG  

We survey the development of Clifford's geometric algebra and some of its engineering applications during the last 15 years. Several recently developed applications and their merits are discussed in some detail. We thus hope to clearly demonstrate the benefit of developing problem solutions in a unified framework for algebra and geometry with the widest possible scope: from quantum computing and electromagnetism to satellite navigation, from neural computing to camera geometry, image processing, robotics and beyond.

      DOI: 10.1007/s00006-013-0378-4

      researchmap

      Other Link: http://arxiv.org/pdf/1305.5663v1

    • A Theoretical Foundation for the Widely Linear Processing of Quaternion-Valued Data Peer-reviewed

      Tohru Nitta

Applied Mathematics   4 ( 12 ) 1616 - 1620   2013

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (scientific journal)  

      researchmap

    • Construction of Neural Networks that Do Not Have Critical Points Based on Hierarchical Structures Peer-reviewed

      Tohru Nitta

International Journal of Advanced Computer Science and Applications   4 ( 9 ) 68 - 73   2013

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (scientific journal)  

      researchmap

    • Neural Networks with Comparatively Few Critical Points Peer-reviewed

      Tohru Nitta

      Proceedings of 17th International Conference on Knowledge-Based and Intelligent Information & Engineering Systems, KES2013-Kitakyushu   269 - 275   2013

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:ELSEVIER SCIENCE BV  

A critical point is a point at which the derivatives of an error function are all zero. It has been shown in the literature that the critical points caused by the hierarchical structure of the real-valued neural network could be local minima or saddle points, whereas most of the critical points caused by the hierarchical structure are saddle points in the case of complex-valued neural networks. Several studies have demonstrated that this kind of singularity has a negative effect on learning dynamics in neural networks. In this paper, we demonstrate via some examples that the decomposition of high-dimensional NNs into real-valued NNs equivalent to the original NNs yields NNs that do not have critical points based on the hierarchical structure. (C) 2013 The Authors. Published by Elsevier B.V.

      DOI: 10.1016/j.procs.2013.09.103

      researchmap

    • Learning Transformations with Complex-Valued Neurocomputing. Peer-reviewed

      Tohru Nitta

International Journal of Organizational and Collective Intelligence   3 ( 2 ) 81 - 116   2012

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (scientific journal)  

      DOI: 10.4018/joci.2012040103

      researchmap

    • Widely Linear Processing of Hypercomplex Signals. Peer-reviewed

      Tohru Nitta

      Proceedings of International Conference on Neural Information Processing (Lecture Notes in Computer Science), ICONIP2011-Shanghai   519 - 525   2011

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:SPRINGER-VERLAG BERLIN  

In this paper, we formulate a Clifford-valued widely linear estimation framework. A Clifford number is a hypercomplex number that generalizes real numbers, complex numbers, quaternions, and higher-dimensional numbers. As a first step, we also give a theoretical foundation for a quaternion-valued widely linear estimation framework. The estimation error obtained with the quaternion-valued widely linear estimation method is proven to be smaller than that obtained using the usual quaternion-valued linear estimation method.

      DOI: 10.1007/978-3-642-24955-6_62

      researchmap

      Other Link: https://dblp.uni-trier.de/db/conf/iconip/iconip2011-1.html#Nitta11

• On the Singularity of Complex-Valued Neurons (in Japanese) Peer-reviewed

      Tohru Nitta

      IEICE Transactions on Information and Systems (Japanese Edition)   J93-D ( 8 ) 1614 - 1621   2010

      More details

      Authorship:Lead author, Corresponding author   Language:Japanese   Publishing type:Research paper (scientific journal)  

      researchmap

    • The Uniqueness Theorem for Complex-Valued Neural Networks with Threshold Parameters and the Redundancy of the Parameters Peer-reviewed

      Tohru Nitta

International Journal of Neural Systems   18 ( 2 ) 123 - 134   Apr 2008

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (scientific journal)   Publisher:WORLD SCIENTIFIC PUBL CO PTE LTD  

This paper proves the uniqueness theorem for 3-layered complex-valued neural networks where the threshold parameters of the hidden neurons can take nonzero values. That is, if a 3-layered complex-valued neural network is irreducible, the 3-layered complex-valued neural network that approximates a given complex-valued function is uniquely determined up to a finite group of transformations of the learnable parameters of the complex-valued neural network.

      DOI: 10.1142/S0129065708001439

      researchmap

    • On the Decision Boundaries of Hyperbolic Neurons Peer-reviewed

      Tohru Nitta, Sven Buchholz

      Proceedings of International Joint Conference on Neural Networks, IJCNN'08-HongKong   2973 - 2979   2008

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (international conference proceedings)   Publisher:IEEE  

In this paper, the basic properties, especially the decision boundaries, of the hyperbolic neurons used in hyperbolic neural networks are investigated. A non-split hyperbolic sigmoid activation function is also proposed.

      DOI: 10.1109/IJCNN.2008.4634216

      researchmap

    • N-dimensional Vector Neuron Peer-reviewed

      Tohru Nitta

      Proceedings of the IJCAI-2007 Workshop on Complex-Valued Neural Networks and Neuro-Computing: Novel Methods, Applications and Implementations   2 - 7   2007

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (international conference proceedings)  

      researchmap

    • Three-Dimensional Vector Valued Neural Network and its Generalization Ability Peer-reviewed

      Tohru Nitta

Neural Information Processing - Letters and Reviews   10 ( 10 ) 237 - 242   2006

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (scientific journal)  

      researchmap

    • Orthogonality of Decision Boundaries in Complex-Valued Neural Networks. Peer-reviewed International journal

      Tohru Nitta

Neural Computation   16 ( 1 ) 73 - 97   Jan 2004

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (scientific journal)   Publisher:MIT PRESS  

This letter presents some results of an analysis on the decision boundaries of complex-valued neural networks whose weights, threshold values, input and output signals are all complex numbers. The main results may be summarized as follows. (1) A decision boundary of a single complex-valued neuron consists of two hypersurfaces that intersect orthogonally, and divides a decision region into four equal sections. The XOR problem and the detection of symmetry problem, which cannot be solved with two-layered real-valued neural networks, can be solved by two-layered complex-valued neural networks with the orthogonal decision boundaries, which reveals a potent computational power of complex-valued neural nets. Furthermore, the fading equalization problem can be successfully solved by the two-layered complex-valued neural network with the highest generalization ability. (2) A decision boundary of a three-layered complex-valued neural network has the orthogonal property as a basic structure, and its two hypersurfaces approach orthogonality as all the net inputs to each hidden neuron grow. In particular, most of the decision boundaries in the three-layered complex-valued neural network intersect orthogonally when the network is trained using the Complex-BP algorithm. As a result, the orthogonality of the decision boundaries improves its generalization ability. (3) The average learning speed of the Complex-BP is several times faster than that of the Real-BP. The standard deviation of the learning speed of the Complex-BP is smaller than that of the Real-BP. It seems that the complex-valued neural network and the related algorithm are natural for learning complex-valued patterns for the above reasons.

      DOI: 10.1162/08997660460734001

      PubMed

      CiNii Article

      researchmap

      Other Link: https://dblp.uni-trier.de/db/journals/neco/neco16.html#Nitta04

    • A Solution to the 4-bit Parity Problem with a Single Quaternary Neuron Peer-reviewed

      Tohru Nitta

Neural Information Processing - Letters and Reviews   5 ( 2 ) 33 - 39   2004

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (scientific journal)  

      researchmap

    • Reducibility of the Complex-Valued Neural Network Peer-reviewed

Neural Information Processing - Letters and Reviews   2 ( 3 ) 53 - 56   2004

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (scientific journal)  

      researchmap

    • On the Inherent Property of the Decision Boundary in Complex-Valued Neural Networks. Peer-reviewed

      Tohru Nitta

Neurocomputing   50 ( C ) 291 - 303   2003

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (scientific journal)   Publisher:ELSEVIER SCIENCE BV  

This paper shows the differences between the real-valued neural network and the complex-valued neural network by analyzing their fundamental properties from the viewpoint of architecture. The main results may be summarized as follows: (a) A single complex-valued neuron with n inputs is equivalent to two real-valued neurons with 2n inputs that have a restriction on a set of weight parameters. (b) The decision boundary of a single complex-valued neuron consists of two hypersurfaces which intersect orthogonally. (c) The decision boundary of a three-layered complex-valued neural network has the orthogonal structure. (d) The orthogonality of the decision boundary in the three-layered Complex-BP network can improve its generalization ability. (e) The average learning speed of the Complex-BP is several times faster than that of the Real-BP, and the standard deviation of the learning speed of the Complex-BP is smaller than that of the Real-BP. (C) 2002 Elsevier Science B.V. All rights reserved.

      DOI: 10.1016/S0925-2312(02)00568-4

      researchmap

    • Solving the XOR Problem and the Detection of Symmetry Using a Single Complex-Valued Neuron. Peer-reviewed

      Tohru Nitta

Neural Networks   16 ( 8 ) 1101 - 1105   2003

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (scientific journal)   Publisher:PERGAMON-ELSEVIER SCIENCE LTD  

      This letter presents some results on the computational power of complex-valued neurons. The main results may be summarized as follows. The XOR problem and the detection of symmetry problem which cannot be solved with a single real-valued neuron (i.e. a two-layered real-valued neural network), can be solved with a single complex-valued neuron (i.e. a two-layered complex-valued neural network) with the orthogonal decision boundaries, which reveals the potent computational power of complex-valued neurons. Furthermore, the fading equalization problem can be successfully solved with a single complex-valued neuron with the highest generalization ability. (C) 2003 Elsevier Ltd. All rights reserved.

      DOI: 10.1016/S0893-6080(03)00168-0

      PubMed

      CiNii Article

      researchmap

      Other Link: https://dblp.uni-trier.de/db/journals/nn/nn16.html#Nitta03
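
A concrete illustration of the claim above: the real and imaginary parts of a complex-valued neuron's net input give two orthogonal decision lines that split the plane into four sections, and labeling diagonal sections identically solves XOR. The weight and threshold below are hand-picked for illustration, not taken from the paper:

```python
import numpy as np

w, theta = 1.0 + 0.0j, -0.5 - 0.5j      # hand-picked parameters

def xor_complex_neuron(x1, x2):
    u = w * (x1 + 1j * x2) + theta      # complex net input
    # Re(u) = 0 and Im(u) = 0 are two orthogonal boundary lines; diagonal
    # quadrants of the (Re, Im) plane share a class label.
    return int(u.real * u.imag < 0)

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print((x1, x2), "->", xor_complex_neuron(x1, x2))
# (0,0)->0, (0,1)->1, (1,0)->1, (1,1)->0
```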

    • The Computational Power of Complex-Valued Neuron. Peer-reviewed

      Tohru Nitta

Artificial Neural Networks and Neural Information Processing, Lecture Notes in Computer Science (Proceedings of International Conference on Artificial Neural Networks/International Conference on Neural Information Processing, ICANN/ICONIP'03-Istanbul)   2714   993 - 1000   2003

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (international conference proceedings)  

      DOI: 10.1007/3-540-44989-2_118

      researchmap

      Other Link: https://dblp.uni-trier.de/db/conf/icann/icann2003.html#Nitta03

    • The Uniqueness Theorem for Complex-Valued Neural Networks and the Redundancy of the Parameters. Peer-reviewed

      Tohru Nitta

Systems and Computers in Japan   34 ( 14 ) 54 - 62   2003

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (scientific journal)  

A complex neural network is obtained from an ordinary network by extending the (real-valued) parameters, such as the weights and the thresholds, to complex values. Applications to problems involving complex numbers, such as communications systems, are expected. This paper presents the following uniqueness theorem: when a complex function is given, the three-layered neural network that approximates the function is uniquely determined, up to a certain finite group, if it is irreducible. The above finite group specifies the redundancy of the parameters in the complex neural network, but has a structure different from that of the real-valued neural network. The order of the finite group is examined, and it is shown that the redundancy of the complex-valued neural network is an exponential multiple of the redundancy of the real-valued neural network. Analysis of the redundancy is important in the theoretical investigation of the basic characteristics of complex-valued neural networks, such as the local minimum property. A sufficient condition is derived for a given three-layered complex-valued neural network to be minimal. The above results are shown, in essence, by extending the approach of Sussmann for real-valued neural networks.

      DOI: 10.1002/scj.10363

      Scopus

      researchmap

• The Uniqueness Theorem for Complex-Valued Neural Networks and the Redundancy of the Parameters (in Japanese) Peer-reviewed

      Tohru Nitta

      IEICE Transactions on Information and Systems (Japanese Edition)   J85-D-II ( 5 ) 796 - 804   2002

      More details

      Authorship:Lead author, Corresponding author   Language:Japanese   Publishing type:Research paper (scientific journal)  

      researchmap

    • Generalization of the Complex-Valued Neural Networks with the Orthogonal Decision Boundary Invited Peer-reviewed

      Tohru Nitta

Proceedings of 6th International Conference on Knowledge-based Intelligent Information Engineering Systems & Allied Technologies, KES2002, Part I   628 - 632   2002

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (international conference proceedings)  

      researchmap

    • On the Critical Points of the Complex-Valued Neural Network Peer-reviewed

      Tohru Nitta

Proceedings of International Conference on Neural Information Processing, ICONIP'02-Singapore   3   1099 - 1103   2002

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (international conference proceedings)  

      researchmap

    • Redundancy of the Parameters of the Complex-Valued Neural Network. Peer-reviewed

      Tohru Nitta

Neurocomputing   49 ( 1-4 ) 423 - 428   2002

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (scientific journal)   Publisher:ELSEVIER SCIENCE BV  

In this letter, we clarify the redundancy of the parameters of the complex-valued neural network. The results may be summarized as follows: there exist four transformations that can cause the redundancy of the parameters of the complex-valued neural network, including the two transformations that can cause the redundancy of the parameters of the real-valued neural network (i.e., the interchange of two hidden neurons and the sign flip of the parameters on the hidden neurons). (C) 2002 Elsevier Science B.V. All rights reserved.

      DOI: 10.1016/S0925-2312(02)00669-0

      researchmap

    • Uniqueness of Feedforward Complex-Valued Neural Network with a Given Complex-valued Function Invited Peer-reviewed

      Tohru Nitta

      Proceedings of 5th International Conference on Knowledge-based Intelligent Information Engineering Systems & Allied Technologies, KES2001   550 - 554   2001

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (international conference proceedings)  

      researchmap

    • An Analysis of the Fundamental Structure of Complex-Valued Neurons. Peer-reviewed

      Tohru Nitta

Neural Processing Letters   12 ( 3 ) 239 - 246   2000

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (scientific journal)  

      DOI: 10.1023/A:1026582217675

      researchmap

    • Concept Forming in Emotion-Memory Model Peer-reviewed

      Kenji Nishida, Tohru Nitta, Toshio Tanaka, Hiroaki Inayoshi

Proceedings of the Joint Conference on Information Sciences   1   815 - 818   2000

      More details

      Language:English   Publishing type:Research paper (international conference proceedings)  

      In human memory, impressive objects (those with attachments to strong emotions, such as happiness and sadness) are retained easily, while non-impressive objects (those without attachments to strong emotions) are not. Furthermore, one can easily acquire systematic knowledge about a favorite field, while it is difficult to acquire such knowledge about a non-favorite field. Emotions thus seem to play an important role in retaining the memory of an object and in forming a conceptual memory of objects. We have therefore developed an emotion-memory model, and in this paper, we present a simulation result on concept forming.

      Scopus

      researchmap

    • 'Neuronoid', A New Model of Neuron, Detects the 'Coincidence with Delays' Peer-reviewed

      Hiroaki Inayoshi, Toshio Tanaka, Kenji Nishida, Tohru Nitta

Proceedings of the IEEE International Conference on Systems, Man and Cybernetics   3   217 - 222   1999

      More details

      Language:English   Publishing type:Research paper (international conference proceedings)  

This paper describes a new model of neuron, named neuronoid, that can detect the 'coincidence with delays'. A neuronoid is modeled as a collection of chemicals. Each chemical belongs to either the 'rover' category or the 'borderer' category. The interaction of chemicals between these two categories leads to the decision of a neuronoid to fire or not.

      Scopus

      researchmap

    • A Computational Model of Personality Peer-reviewed

      Proceedings of Toward a Science of Consciousness   29 - 30   1999

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (international conference proceedings)  

      researchmap

    • A Sufficient Condition for Decision Boundaries in Neural Networks with Complex Numbered Weights to Intersect Orthogonally Peer-reviewed

      Tohru Nitta

      Proceedings of International Conference on Neural Information Processing, ICONIP'99-Perth   95 - 100   1999

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (international conference proceedings)  

      researchmap

    • Emotion Memory Model. Peer-reviewed

      Kenji Nishida, Tohru Nitta, Toshio Tanaka

      Proceedings of the 17th IASTED International Conference APPLIED INFORMATICS   222 - 225   1999

      More details

      Language:English   Publishing type:Research paper (international conference proceedings)  

      researchmap

      Other Link: https://dblp.uni-trier.de/db/conf/appinf/appinf1999.html#NishidaNT99

    • Modeling Human Mind Peer-reviewed

      Tohru Nitta, Toshio Tanaka, Kenji Nishida, Hiroaki Inayoshi

Proceedings of the IEEE International Conference on Systems, Man and Cybernetics   2   342 - 347   1999

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (international conference proceedings)  

In this paper, we propose a computational model of personality (called the personality model) for the purpose of implementing non-intellectual functions of the human mind on computer systems. The personality model will be formulated based on psychoanalysis, assuming that the defense mechanism plays an essential role in personality. Inductive probability will be employed for modeling the defense mechanism. The personality model is useful for the expression of feelings, and will be used in virtual reality, computer game characters, agent secretaries, and robotics.

      Scopus

      researchmap

• Neuronoid as the Coincidence Detector: A New Model of Neuron Which Can Provide Neuronal Synchronization Peer-reviewed

      H. Inayoshi, T. Tanaka, K. Nishida and T. Nitta

      Proceedings of Toward a Science of Consciousness   48 - 49   1999

      More details

      Language:English   Publishing type:Research paper (international conference proceedings)  

      researchmap

    • The Stability of the Solution in Fully Connected Neural Networks with the Encouragement Factor. Peer-reviewed

      Tohru Nitta

Proceedings of International Conference on Neural Information Processing, ICONIP'98-Kitakyushu   1   518 - 521   1998

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (international conference proceedings)  

      researchmap

      Other Link: https://dblp.uni-trier.de/db/conf/iconip/iconip1998.html#Nitta98

    • An Extension of the Back-Propagation Algorithm to Complex Numbers. Peer-reviewed

      Tohru Nitta

Neural Networks   10 ( 8 ) 1391 - 1415   1997

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (scientific journal)  

      DOI: 10.1016/S0893-6080(97)00036-1

      researchmap

      Other Link: https://dblp.uni-trier.de/db/journals/nn/nn10.html#Nitta97

    • The Supremum and the Infimum of the Encouragement Factor in the Gaussian Machine. Peer-reviewed

      Tohru Nitta

Proceedings of International Conference on Neural Information Processing, ICONIP'97-Dunedin   1   482 - 485   1997

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (international conference proceedings)  

      researchmap

      Other Link: https://dblp.uni-trier.de/db/conf/iconip/iconip1997-1.html#Nitta97

    • An Extension of the Back-propagation Algorithm to Quaternions Peer-reviewed

      Tohru Nitta

Proceedings of International Conference on Neural Information Processing, ICONIP'96-HongKong   1   247 - 250   1996

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (international conference proceedings)  

      researchmap

    • Ability of the Complex Back-Propagation Algorithm to Learn Similar Transformation Peer-reviewed

      Tohru Nitta

Proceedings of IEEE International Conference on Neural Networks, ICNN'95-Perth   3   1513 - 1516   Nov 1995

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (international conference proceedings)  

It has been discovered through computational experiments that the 'Complex-BP' algorithm can transform geometrical figures (e.g., rotation, similar transformation and parallel displacement), and it has been reported that this ability can be successfully applied to computer vision. In this paper, the ability of the 'Complex-BP' algorithm to learn similar transformation of geometrical figures is analyzed. A Complex-BP network that has learned a similar transformation has the ability to generalize the similitude ratio, with a distance error represented by the sine of the difference between the argument of the test pattern and that of the training pattern.

      DOI: 10.1109/ICNN.1995.487386

      Scopus

      researchmap

      Other Link: https://dblp.uni-trier.de/db/conf/icnn/icnn1995.html#Nitta95
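
The geometric ability discussed above rests on the fact that multiplying by a complex weight w = s·e^{iφ} rotates a 2D point (encoded as a complex number) by φ and scales it by s, i.e., it applies a similar transformation. A minimal sketch with an arbitrary scale and angle:

```python
import numpy as np

points = np.array([0 + 0j, 1 + 0j, 1 + 1j, 0 + 1j])   # unit square as complex numbers
w = 2.0 * np.exp(1j * np.pi / 6)                       # scale by 2, rotate by 30 degrees

transformed = w * points                               # similar transformation
for p, q in zip(points, transformed):
    print(f"{p} -> {q:.3f}")
```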

    • A Quaternary Version of the Back-Propagation Algorithm Peer-reviewed

      Tohru Nitta

Proceedings of IEEE International Conference on Neural Networks, ICNN'95-Perth   5   2753 - 2756   1995

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (international conference proceedings)  

      DOI: 10.1109/ICNN.1995.488166

      Scopus

      J-GLOBAL

      researchmap

      Other Link: https://dblp.uni-trier.de/db/conf/icnn/icnn1995.html#Nitta95a

• Extension of Neural Networks to Three Dimensions (in Japanese) Peer-reviewed

      Tohru Nitta

      IPSJ Journal   35 ( 7 ) 1300 - 1310   1994

      More details

      Authorship:Lead author, Corresponding author   Language:Japanese   Publishing type:Research paper (scientific journal)  

      researchmap

    • Ability of the 3D Vector Version of the Back-Propagation to Learn 3D Motion Peer-reviewed

      Tohru Nitta

      Proceedings of INNS World Congress on Neural Networks, WCNN'94-SanDiego   262 - 267   1994

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (international conference proceedings)  

      researchmap

    • An Analysis on Decision Boundaries in the Complex Back-Propagation Network Peer-reviewed

      Tohru Nitta

Proceedings of IEEE International Conference on Neural Networks, ICNN'94-Orlando   2   934 - 939   1994

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (international conference proceedings)  

      Scopus

      J-GLOBAL

      researchmap

    • An Analysis on the Learning Rule in the Complex Back-Propagation Algorithm Peer-reviewed

      Tohru Nitta

Proceedings of INNS World Congress on Neural Networks, WCNN'94-SanDiego   3   702 - 707   1994

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (international conference proceedings)  

      researchmap

    • Behavior of the Complex Numbered Back-Propagation Network which has Learned Similar Transformation Peer-reviewed

Tohru Nitta

      Proceedings of INNS World Congress on Neural Networks, WCNN'94-SanDiego   4   765 - 770   1994

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (international conference proceedings)  

      researchmap

    • Decision Boundaries of the Complex Valued Neural Networks Peer-reviewed

      Tohru Nitta

Proceedings of INNS World Congress on Neural Networks, WCNN'94-SanDiego   4   727 - 732   1994

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (international conference proceedings)  

      researchmap

    • Generalization Ability of the Three-Dimensional Back-Propagation Network Peer-reviewed

      Tohru Nitta

Proceedings of IEEE International Conference on Neural Networks, ICNN'94-Orlando   5   2895 - 2900   1994

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (international conference proceedings)  

      The 3D vector version of the back-propagation algorithm (called '3DV-BP') is a natural extension of the complex-valued version of the back-propagation algorithm (called 'Complex-BP'). The Complex-BP can be applied to multi-layered neural networks whose weights, threshold values, input and output signals are all complex numbers, and the 3DV-BP can be applied to multi-layered neural networks whose threshold values, input and output signals are all 3D real valued vectors, and whose weights are all 3D orthogonal matrices. It has already been reported that an inherent property of the Complex-BP is its ability to learn '2D motion'. This paper shows in computational experiments that the 3DV-BP has the ability to learn '3D motion', which corresponds to the ability of the Complex-BP to learn '2D motion'.

      Scopus

      researchmap
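
Analogously to the 2D case, the 3DV-BP model described above takes 3D orthogonal matrices as weights, so each weight acts on its 3D input vector as a rotation (a norm-preserving map). A minimal sketch of that weight action, with an arbitrary axis and angle:

```python
import numpy as np

def rotation_z(angle):
    """3D rotation about the z-axis: an orthogonal weight matrix."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

W = rotation_z(np.pi / 4)          # weight = 3D orthogonal matrix
x = np.array([1.0, 0.0, 2.0])      # 3D input signal
print(W @ x)                       # rotated 3D vector
print(np.linalg.norm(x), np.linalg.norm(W @ x))   # norm is preserved
```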

    • Structure of Learning in the Complex Numbered Back-Propagation Network Peer-reviewed

      Tohru Nitta

Proceedings of IEEE International Conference on Neural Networks, ICNN'94-Orlando   1   269 - 274   1994

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (international conference proceedings)  

      DOI: 10.1109/icnn.1994.374173

      Scopus

      J-GLOBAL

      researchmap

• Behavior of the Complex-BP Network That Has Learned Rotation (in Japanese) Peer-reviewed

      Tohru Nitta

      IPSJ Journal   35 ( 7 ) 39 - 51   1993

      More details

      Authorship:Lead author, Corresponding author   Language:Japanese   Publishing type:Research paper (scientific journal)  

      researchmap

• Learning Characteristics of the Complex Back-Propagation Learning Algorithm (in Japanese) Peer-reviewed

      Tohru Nitta, Tatsumi Furuya

      IPSJ Journal   34 ( 1 ) 29 - 38   1993

      More details

      Authorship:Lead author, Corresponding author   Language:Japanese   Publishing type:Research paper (scientific journal)  

      researchmap

    • A Back-Propagation Algorithm for Complex Numbered Neural Networks Peer-reviewed

      Tohru Nitta

Proceedings of IEEE/INNS International Joint Conference on Neural Networks, IJCNN'93-Nagoya   2   1649 - 1652   1993

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (international conference proceedings)  

This paper introduces a complex numbered version of the back-propagation algorithm, which can be applied to neural networks whose weights, threshold values, input and output signals are all complex numbers. This new algorithm can be used to learn complex numbered patterns in a natural way. We show that "Complex-BP" can transform geometrical figures.

      Scopus

      researchmap

    • A Back-Propagation Algorithm for Neural Networks Based on 3D Vector Product Peer-reviewed

      Tohru Nitta

Proceedings of IEEE/INNS International Joint Conference on Neural Networks, IJCNN'93-Nagoya   1   589 - 592   1993

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (international conference proceedings)  

A 3D vector version of the back-propagation algorithm is proposed for multi-layered neural networks in which vector product operation is performed, and whose weights, threshold values, input and output signals are all 3D real numbered vectors. This new algorithm can be used to learn patterns consisting of 3D vectors in a natural way. The XOR problem was used to successfully test the new formulation.

      Scopus

      researchmap

    • A Complex Numbered Version of the Back-propagation Algorithm Peer-reviewed

      Tohru Nitta

Proceedings of INNS World Congress on Neural Networks, WCNN'93-Portland   3   576 - 579   1993

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (international conference proceedings)  

      researchmap

    • A Supervised Learning Algorithm for Neural Networks with Two-Dimensional Weights Peer-reviewed

      Tohru Nitta

Proceedings of CIE/IEEE International Conference on Neural Networks and Signal Processing, ICNNSP'93-Guangzhou   425 - 430   1993

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (international conference proceedings)  

      researchmap

    • A Three-dimensional Back-propagation Peer-reviewed

      Tohru Nitta

Proceedings of INNS World Congress on Neural Networks, WCNN'93-Portland   3   572 - 575   1993

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (international conference proceedings)  

      researchmap

    • Extension of the Back-Propagation Algorithm to Three Dimensions by Vector Product Peer-reviewed

      Tohru Nitta

      Proceedings of 5th IEEE International Conference on Tools with Artificial Intelligence, TAI'93-Boston   460 - 461   1993

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (international conference proceedings)  

A 3D vector version of the back-propagation algorithm is proposed for multi-layered neural networks in which vector product operation is performed, and whose weights, threshold values, input and output signals are all 3D real numbered vectors. This new algorithm can be used to learn patterns consisting of 3D vectors in a natural way. A 3D example was used to successfully test the new formulation.

      DOI: 10.1109/TAI.1993.634002

      Scopus

      researchmap

      Other Link: https://dblp.uni-trier.de/db/conf/ictai/ictai1993.html#Nitta93

    • Fundamental Structures of the Complex Back-Propagation Algorithm Peer-reviewed

      Proceedings of NCTU/IEEE International Symposium on Artificial Neural Networks, ISANN'93-Taiwan   30 - 38   1993

      More details

      Language:English   Publishing type:Research paper (international conference proceedings)  

      researchmap

    • Proposal of Neural Networks Based on Vector Product Peer-reviewed

      Tohru Nitta

Proceedings of CIE/IEEE International Conference on Neural Networks and Signal Processing, ICNNSP'93-Guangzhou   397 - 402   1993

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (international conference proceedings)  

      researchmap

• Structure of Weight Parameters and Decision Boundary in the Complex Back-Propagation Network Peer-reviewed

      NITTA Tohru, AKAHO Shotaro, AKIYAMA Yutaka, FURUYA Tatsumi

IPSJ Journal   33 ( 11 ) 1306 - 1313   15 Nov 1992

      More details

      Authorship:Lead author, Corresponding author   Language:Japanese   Publishing type:Research paper (scientific journal)   Publisher:Information Processing Society of Japan (IPSJ)  

In recent years, the back-propagation learning algorithm (BP), proposed as a learning method for multilayer neural networks, has attracted attention and been applied in various fields. The Complex-BP, proposed as a learning algorithm for complex-valued patterns, has all of its learning parameters extended to complex numbers, and can be regarded as an extension of the ordinary real-valued BP to complex numbers. We have already reported that the Complex-BP possesses an ability to learn two-dimensional motion that the conventional BP lacks. In this paper, we analyze the fundamental properties of the Complex-BP analytically and experimentally from the viewpoint of network architecture, and clarify the differences from the conventional BP. The main results are as follows. (1) Unlike those of the conventional BP network, the weights of a Complex-BP network carry a constraint related to two-dimensional motion, and learning essentially proceeds while preserving this constraint. (2) The decision surface of the real part and that of the imaginary part of a complex-valued neuron intersect orthogonally and divide the decision region into four equal sections; the decision surfaces of a three-layer Complex-BP network basically contain this structure and become orthogonal when the total input to a hidden neuron is sufficiently large. In this sense, the Complex-BP appears to be a natural learning algorithm for complex-valued patterns. Note that these properties were not designed in; they emerged naturally as a consequence of the extension to complex numbers.

      CiNii Article

      researchmap

• An Analysis of a Resource Allocation Model with Nonnegative Production Functions (in Japanese) Peer-reviewed

      Tohru Nitta, Tatsumi Furuya

      Transactions of the Operations Research Society of Japan   35 ( 1 ) 15 - 30   1992

      More details

      Authorship:Lead author, Corresponding author   Language:Japanese   Publishing type:Research paper (scientific journal)  

      researchmap

    • A 3D Vector Version of the Back-Propagation Algorithm Peer-reviewed

      T. Nitta, H. D. Garis

Proceedings of IEEE/INNS International Joint Conference on Neural Networks, IJCNN'92-Beijing   2   511 - 516   1992

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (international conference proceedings)  

      researchmap

    • Probabilistic Distributed Memory - A Memory-Based Parallel Implementation Model - Peer-reviewed

      T. Furuya, H. Ito, T. Tanaka, T. Higuchi, T. Nitta, T. Niwa, H. Inayoshi, H. D. Garis

      Proceedings of IEEE/INNS International Joint Conference on Neural Networks, IJCNN'92-Beijing   607 - 612   1992

      More details

      Language:English   Publishing type:Research paper (international conference proceedings)  

      researchmap

• Multilayer Neural Networks with Feedback (in Japanese) Peer-reviewed

      Tatsumi Furuya, Yutaka Akiyama, Toshio Tanaka, Tohru Nitta

      IPSJ Journal   32 ( 9 ) 1210 - 1213   1991

      More details

      Language:Japanese   Publishing type:Research paper (scientific journal)  

      researchmap

• Complex Back-Propagation Learning (in Japanese) Peer-reviewed

      Tohru Nitta, Tatsumi Furuya

      IPSJ Journal   32 ( 10 ) 1319 - 1329   1991

      More details

      Authorship:Lead author, Corresponding author   Language:Japanese   Publishing type:Research paper (scientific journal)  

      researchmap

    • An Analysis of a Stochastic Resource Allocation Model with Various Utility Functions Peer-reviewed

      Tohru Nitta

Journal of the Operations Research Society of Japan   32 ( 1 ) 1 - 15   1989

      More details

      Authorship:Lead author, Corresponding author   Language:English   Publishing type:Research paper (scientific journal)  

      researchmap


    Books and Other Publications

• Complex-Valued Neural Networks: Advances and Applications in The IEEE Press Series on Computational Intelligence, ed. Akira Hirose

      T. Nitta( Role: Contributor)

      Wiley-IEEE Press  2013 

      More details

    • Efficiency and Scalability Methods for Computational Intellect, eds. Boris Igelnik and Jacek M. Zurada

      T. Nitta( Role: Contributor)

      Pennsylvania, Information Science Reference, USA  2013 

      More details

    • Computational Modeling and Simulation of Intellect: Current State and Future Perspectives, ed. Boris Igelnik

      T. Nitta( Role: Contributor)

      Pennsylvania, Information Science Reference, USA  2011 

      More details

    • Complex-Valued Neural Networks : Utilizing High-Dimensional Parameters

      Tohru Nitta( Role: Edit)

Information Science Reference, Pennsylvania, USA   Feb 2009   (ISBN: 9781605662145)

      More details

      Total pages:xxiii, 479 p.   Language:English

      CiNii Books

      researchmap

    • Advances in Imaging and Electron Physics, Vol.152, eds. Peter W. Hawkes

      T. Nitta( Role: Contributor)

      Elsevier, Amsterdam, The Netherlands  2008 

      More details

    • Encyclopedia of Artificial Intelligence, eds. Juan R. Rabunal, Julian Dorado & Alejandro Pazos, Pennsylvania

      T. Nitta( Role: Contributor)

      Pennsylvania, IGI Global, USA  2008 

      More details

    • Complex-Valued Neural Networks: Theories and Applications, A. Hirose(ed)

      T. Nitta( Role: Contributor)

      World Scientific Publishing Co. Pte. Ltd.  2003 

      More details

    • No Matter, Never Mind, eds. K. Yasue et al.

      H.Inayoshi, T.Tanaka, K.Nishida, T.Nitta( Role: Contributor)

      John Benjamins Publishing Co., The Netherlands  2002 

      More details

    • No Matter, Never Mind, eds. K. Yasue et al.

      T. Nitta( Role: Contributor)

      John Benjamins Publishing Co., The Netherlands  2002 

      More details

• What Is the Mind? Dialogues between Psychology and Other Sciences (in Japanese), eds. Adachi et al.

      ( Role: Contributor)

      Kitaohji Shobo  2001 

      More details

• Genetic Programming

      ( Role: Joint translator)

2001   (ISBN: 487653330X)

      More details

      Total pages:xxi, 423p   Language:Japanese

      CiNii Books

      researchmap


    Professional Memberships

    Research Projects

• Resolution of Singularities in Neural Networks

Japan Society for the Promotion of Science   KAKENHI Grant-in-Aid for Scientific Research (C) 

Tohru Nitta

      More details

Apr 2016 - Mar 2021

      Grant number:16K00347

Grant amount: ¥4,550,000 (Direct Cost: ¥3,500,000; Indirect Cost: ¥1,050,000)

Starting this fiscal year, in view of worldwide research trends, we shifted our emphasis from the identification and resolution of individual singular points to the qualitative properties of singular points.
      It was proved in 2016 that linear deep neural networks have no bad local minima (local minima with large training error, which adversely affect learning performance). In contrast, for nonlinear deep neural networks (with the ReLU activation function), the absence of bad local minima had been proved only under two assumptions. This fiscal year, we explored sufficient conditions for these two previously unresolved assumptions to hold. As a result, we proved mathematically that the two assumptions are satisfied if the probability density function of the distribution followed by parameters such as the weights is an even function (symmetric about the vertical axis). This appears to be the first result that succeeds in removing the two assumptions. We then proposed an initialization method, called even initialization, that yields parameters satisfying this property (see the sketch below). With steepest descent, this makes it possible to create a state in which no bad local minima exist, at least immediately after the start of training. We also showed that the parameter values produced by the proposed even initialization are contained in part of the interval produced by the heuristic method based on a uniform distribution and by the He initialization method (based on a normal distribution) proposed by Kaiming He et al.

      researchmap
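
The even initialization summarized above only requires that the density of the weight distribution be an even function; a zero-symmetric uniform distribution is one such choice. A sketch under that assumption (the layer sizes and scale below are placeholders, not the paper's prescription):

```python
import numpy as np

rng = np.random.default_rng(0)

def even_init(fan_in, fan_out, scale=0.1):
    """Draw weights from Uniform(-scale, scale); its density is an even
    function of w, the symmetry condition discussed above."""
    return rng.uniform(-scale, scale, size=(fan_in, fan_out))

W = even_init(256, 128)
print(W.mean(), W.min(), W.max())   # approximately symmetric about 0
```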

• Learning Theory for Constructing Probabilistic Models with Combinatorial Structure

Japan Society for the Promotion of Science   KAKENHI Grant-in-Aid for Exploratory Research 

Kazuya Takabatake, Shotaro Akaho, Tohru Nitta, Toshihiro Kamishima, Yasunori Nishimori, Jun Fujiki

      More details

      2002 - 2004

      Grant number:14658106

Grant amount: ¥3,100,000 (Direct Cost: ¥3,100,000)

1. We proposed a method for accelerating the convergence of Monte Carlo integration in the MCMC (Markov chain Monte Carlo) method. The method designs the transition matrix so that the system avoids revisiting the same state during the Markov chain's transitions. We demonstrated its advantage both theoretically and experimentally.
      2. While the Gibbs sampler, a variant of MCMC, updates only one variable per state transition, we proposed a method that updates several variables simultaneously. Compared with the ordinary single-variable Gibbs sampler, the method is superior in the convergence speed of the distribution and in the independence of the resulting random sequence, with the same order of computational cost. We further demonstrated its advantage through numerical experiments. (A toy comparison follows below.)
      3. Given several probability distributions, we proposed a method for finding a low-dimensional parameter manifold that best approximates them. The manifold can be taken to be e-flat (e-PCA) or m-flat (m-PCA). The method deforms the manifold iteratively to find an approximating manifold. Although convergence to the global optimum is not guaranteed, we gave a way to choose a good initial manifold and obtained good results experimentally.
      4. We clarified a sufficient condition for reducing the size of a complex-valued neural network without degrading its computational power.

      researchmap
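
As a toy illustration of the multi-variable (blocked) Gibbs update mentioned in item 2 above, one can compare single-coordinate Gibbs updates with a joint draw for a strongly correlated bivariate Gaussian; the joint update yields a far less autocorrelated sample (the parameters are invented for demonstration; this is not the authors' algorithm):

```python
import numpy as np

rho, n = 0.95, 5000
rng = np.random.default_rng(0)
cond_sd = np.sqrt(1 - rho ** 2)

# Single-variable Gibbs: update x1 | x2, then x2 | x1.
x1 = x2 = 0.0
single = []
for _ in range(n):
    x1 = rho * x2 + cond_sd * rng.standard_normal()
    x2 = rho * x1 + cond_sd * rng.standard_normal()
    single.append(x1)

# Blocked update: draw (x1, x2) jointly from the target Gaussian.
L = np.linalg.cholesky([[1.0, rho], [rho, 1.0]])
blocked = (L @ rng.standard_normal((2, n)))[0]

def lag1_autocorr(s):
    s = np.asarray(s) - np.mean(s)
    return np.dot(s[:-1], s[1:]) / np.dot(s, s)

print("single-site lag-1 autocorrelation:", round(lag1_autocorr(single), 3))  # high (~rho^2)
print("blocked     lag-1 autocorrelation:", round(lag1_autocorr(blocked), 3)) # near 0
```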