
Recent Progress in Deep Learning and Research at Noah's Ark Lab

By Gilbert Shaw, 2015-02-06 07:47


Deep learning is a field of machine learning that studies the algorithms, theory, and applications of complex artificial neural networks. Since Hinton et al. proposed it in 2006 [1], deep learning has developed rapidly. It has been successfully applied to image processing, speech processing, natural language processing, and other fields, achieving great success and receiving widespread attention, and it has become representative of today's advanced IT technology.

Figure 1: The historical relationship between deep learning and other machine learning techniques

Deep learning is essentially the learning of complex nonlinear models. From the perspective of the history of machine learning, the rise of deep learning represents the natural evolution of machine learning technology. The Perceptron, proposed by Rosenblatt in 1957, is a linear model and can be regarded as a two-layer neural network. In 1986, Rumelhart et al. developed the Back Propagation algorithm for three-layer neural networks, which represent simple nonlinear models. In 1995, Vapnik invented the Support Vector Machine (SVM); an SVM with an RBF kernel is equivalent to a three-layer neural network and is likewise a simple nonlinear model. After 2006, deep learning put neural networks with more than three layers, also known as deep neural networks, into practical use; these are complex nonlinear models (see Figure 1). Deep neural networks have a number of variants, such as the Convolutional Neural Network (CNN) and the Recurrent Neural Network (RNN).

This article first answers some common questions about deep learning, then introduces the latest progress in deep learning research, in particular some representative works. It also gives an overview of our own work on deep learning and natural language processing, and finally discusses the future development trends of deep learning.

    A few common questions about deep learning

Here we try to answer three common questions about deep learning. Why is deep learning so powerful? Is deep learning a panacea? What is the relationship between deep learning and the human brain?

Why is deep learning so powerful?

A deep neural network is a complex nonlinear model. With its complex structure and large number of parameters, it has very strong representational power and is especially suitable for complicated pattern recognition problems.

Figure 2 shows a simple example: a neural network that represents the Boolean function XNOR and can therefore perform simple nonlinear classification. It is a famous illustration of the nonlinear classification ability of three-layer neural networks. In general, as the number of layers and the number of neurons increase, the network's ability to handle complex nonlinear problems also increases.

Figure 2: A neural network that computes XNOR
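As a concrete illustration (a sketch added here, not code from the original article), the XNOR network of Figure 2 can be written down directly with hand-chosen weights; the specific weight values below are illustrative assumptions, not the unique solution.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def xnor_network(x1, x2):
    # Hidden layer: two neurons acting as AND and NOR, with hand-chosen weights.
    a_and = sigmoid(-30 + 20 * x1 + 20 * x2)   # close to 1 only when x1 = x2 = 1
    a_nor = sigmoid(10 - 20 * x1 - 20 * x2)    # close to 1 only when x1 = x2 = 0
    # Output layer: OR of the two hidden activations gives XNOR.
    return sigmoid(-10 + 20 * a_and + 20 * a_nor)

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, round(xnor_network(x1, x2)))   # prints 1, 0, 0, 1
```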

Figure 3: The AlexNet neural network

Figure 3 shows the neural network known as AlexNet [2]. It is a convolutional neural network with 11 layers, 650,000 neurons, and 60 million parameters. This model took first place in the 2012 ImageNet competition, with a top-five accuracy of about 85%, far ahead of the runner-up. It is another famous instance demonstrating how effective deep learning can be. The task is to classify 1.2 million images into 1,000 categories, which is challenging even for people; as this result shows, deep neural networks can achieve very strong image recognition ability.

Figure 4: An AND-OR neural network

A defining characteristic of deep learning is that the neural networks it learns are deep. Having many layers matters, and this is reflected in better statistical efficiency.

Figure 4 shows a four-layer AND-OR neural network in which each neuron computes either a logical AND or a logical OR, so the whole network corresponds to a logical expression. The network can be "flattened" into a three-layer neural network that is equivalent to the same logical expression. The two networks have the same expressive power, but the shallow one has more neurons and more parameters. We know that a model with more parameters usually needs more training data. A deep neural network therefore needs less training data to learn well; in other words, it has better statistical efficiency. Note that when a deep neural network is flattened, the number of parameters of the resulting shallow network grows exponentially; although the expressive power is the same, in practice such a network is impossible to learn. This conclusion also applies to general neural networks.
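A small, purely illustrative sketch (not from the article) of this exponential blow-up: flattening a deep AND-of-ORs expression into a two-level OR-of-ANDs multiplies the number of terms.

```python
from itertools import product

# Flattening (x1 or y1) and (x2 or y2) and ... and (xn or yn) into a two-level
# OR-of-ANDs needs one AND term for every choice of one literal per clause,
# i.e. 2**n terms -- the exponential growth described above.
def dnf_terms(n):
    clauses = [(f"x{i}", f"y{i}") for i in range(1, n + 1)]
    return list(product(*clauses))

for n in (2, 4, 8):
    print(n, "clauses ->", len(dnf_terms(n)), "AND terms after flattening")
```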

Interestingly, the human brain also has a multilayer, cascaded structure, which matches the deep structure of neural networks. The neuroscience findings of Hubel and Wiesel suggest that people's ability to perform complex information processing has a great deal to do with this structure.

Is deep learning a panacea?

Deep learning is not a panacea. First of all, it is not suitable for every problem. If the problem is simple, for example linear or only mildly nonlinear, deep learning will at best match the accuracy of a support vector machine (SVM); if learning falls into a local optimum, it may even do worse than other methods. In essence, this amounts to using a sledgehammer to crack a nut.

In addition, if the amount of training data is not large enough, a deep neural network cannot be fully trained and will not perform well. Deep learning is like the legendary thousand-li horse: if it is not fed enough, its strength is insufficient and its talent never shows.

Furthermore, deep learning is not a panacea in theory either. The famous "no free lunch" theorem illustrates this point. According to the theorem, for any two machine learning methods, method 1 and method 2, if there is a problem on which the model learned by method 1 predicts more accurately, then there must be another problem on which the model learned by method 2 predicts more accurately. In other words, no single method fits all problems. Note that the theorem only guarantees that the latter situation exists; it says nothing about how likely it is. So it is only in an averaged sense that all learning methods are equal; empirically, at least, different methods have their own strengths and weaknesses.

A corollary of the theorem is that deep learning is not a panacea: at least in theory there are problems on which other methods do better than deep learning, even though the probability of encountering such a situation in practice may not be high.

What is the relationship between deep learning and the human brain?

Historically, the invention of artificial neural networks was to some extent inspired by the information processing mechanism of the human brain. However, artificial neural networks, including deep neural networks, are in essence machine learning models.

First of all, our understanding of the human brain is very limited. Setting aside the difference at the material level (the human brain is a biological system, the computer an electronic system) and viewing both as information processing systems (von Neumann, for example, regarded the computer and the brain as different kinds of automata), we can still see many similarities and differences between them.

The similarities are as follows. The nodes and links of an artificial neural network correspond to the neurons and synapses of the brain; indeed, we often call them neurons and synapses directly. Some artificial neural network architectures, such as the convolutional neural network, borrow from the brain's information processing mechanisms, including the cascaded structure and the local receptive field. In an artificial neural network, analog and digital signals interact (as in the XNOR network), which is also similar to the brain's neural networks.

The differences are just as obvious. The learning algorithm of artificial neural networks is usually back propagation, an optimization algorithm that reduces the training error over many iterations to learn the network parameters; this learning mechanism may be fundamentally different from the brain's. Deep neural networks are essentially mathematical models. For example, convolutional neural networks use convolution and max pooling operations to make image recognition insensitive to translation and rotation of the image; these operations are essentially mathematical functions, and their relationship to the brain's processing is unclear. Most important of all, the purpose of deep learning is to improve prediction accuracy on specific tasks, not to simulate the functions of the human brain.
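To make the convolution and max pooling operations mentioned above concrete, here is a minimal NumPy sketch added for illustration (not taken from any CNN library); the edge filter and image size are arbitrary assumptions.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (strictly, cross-correlation, as in most CNN libraries)."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(feature_map, size=2):
    """Non-overlapping max pooling: keep the strongest response in each block,
    which makes the output insensitive to small translations of the input."""
    h, w = feature_map.shape
    h, w = h - h % size, w - w % size
    blocks = feature_map[:h, :w].reshape(h // size, size, w // size, size)
    return blocks.max(axis=(1, 3))

image = np.random.rand(6, 6)
edge_kernel = np.array([[1.0, -1.0], [1.0, -1.0]])  # a simple vertical-edge filter
print(max_pool(conv2d(image, edge_kernel)))
```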

Recent progress in deep learning

Deep learning was born in 2006, but its real rise, that is, the work with major impact, came after 2012. For example, Krizhevsky et al. used deep learning to greatly improve the accuracy of image classification (the AlexNet work [2]), and Dahl et al. greatly improved the accuracy of speech recognition [3].

Much of the work in deep learning amounts to learning and applying powerful classification and regression models, and can be seen as a development and extension of the traditional support vector machine (SVM). Here we introduce four pieces of deep learning work that are conceptually innovative.

Deep learning methods are usually supervised. Le et al. proposed a deep unsupervised learning method [4] that can learn, from a large amount of unlabeled image data, neurons that recognize concepts in images; for example, it can learn a neuron that detects the concept of a cat. The whole network has 9 layers, repeating the same processing three times, with each round of processing consisting of filtering, pooling, and normalization implemented by a three-layer sub-network. Learning is carried out by automatic encoding and decoding; through this process the network automatically learns the patterns (concepts) present in the data. Another characteristic of this work is its massive parallelism: the model has about 1 billion parameters and was trained on 10 million images using 1,000 machines for three days. Supervised learning requires labeled data, which is often very costly to obtain, and sometimes it is hard to get a large amount of training data; on the other hand, much of human learning appears to be unsupervised. This work therefore showed people a new direction for the future development of deep learning.
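The encode-and-decode idea can be sketched with a tiny autoencoder. This is purely illustrative: the network in [4] is vastly larger and uses filtering, pooling, and normalization layers, while the data, layer sizes, and learning rate below are arbitrary assumptions.

```python
import numpy as np

# Toy autoencoder: learn to reconstruct unlabeled inputs so that the hidden
# units come to encode the patterns (here, a 4-dimensional subspace) in the data.
rng = np.random.default_rng(0)
X = rng.random((200, 4)) @ rng.random((4, 16))   # unlabeled data lying in a 4-d subspace
n_hidden, lr = 4, 0.1
W1 = rng.normal(scale=0.1, size=(16, n_hidden))  # encoder weights
W2 = rng.normal(scale=0.1, size=(n_hidden, 16))  # decoder weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(2000):
    H = sigmoid(X @ W1)                          # encode
    X_hat = H @ W2                               # decode (linear output)
    err = X_hat - X                              # reconstruction error
    grad_W2 = H.T @ err / len(X)                 # gradients of the mean squared error
    grad_W1 = X.T @ ((err @ W2.T) * H * (1 - H)) / len(X)
    W1 -= lr * grad_W1
    W2 -= lr * grad_W2

print("reconstruction MSE:", np.mean((sigmoid(X @ W1) @ W2 - X) ** 2))
```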

Mnih et al. applied deep learning to reinforcement learning [5]. Reinforcement learning is suited to settings where an agent, while interacting with its environment, automatically learns to choose the best strategy and the best actions. Mnih et al. used reinforcement learning to build a system that automatically learns to play computer games, with deep learning at its core. On Atari games, the system can learn faster than a human player and play better. Specifically, the reinforcement learning used is Q-learning, with the Q function represented by a convolutional neural network; the state represents the environment, such as the screen of the computer game, the action is an operation in the game, and the reward is the game score. The core idea is to represent the Q function with a parameterized deep neural network; compared with the traditional approach of using a linear model, the accuracy increases significantly. This work extends the application of deep learning to a new field.
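To show what "a parameterized Q function" means, here is a minimal Q-learning sketch on a made-up 4-state chain environment. It is an assumption-laden toy, not the DQN system of [5]: it uses a linear Q function over one-hot features where the real work uses a convolutional network over game frames.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma, lr = 4, 2, 0.9, 0.1
W = np.zeros((n_states, n_actions))               # parameters of Q(s, a)

def features(s):
    phi = np.zeros(n_states)
    phi[s] = 1.0                                  # one-hot state features
    return phi

def q_values(s):
    return features(s) @ W                        # Q(s, .) for all actions

def step(s, a):
    # hypothetical environment: action 1 moves right, action 0 moves left
    s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    reward = 1.0 if s_next == n_states - 1 else 0.0   # reward at the goal state
    return s_next, reward

for episode in range(300):
    s = 0
    for t in range(20):
        a = int(rng.integers(n_actions))          # random exploration (Q-learning is off-policy)
        s_next, r = step(s, a)
        target = r + gamma * np.max(q_values(s_next))   # TD target
        td_error = target - q_values(s)[a]
        W[:, a] += lr * td_error * features(s)    # gradient step on the squared TD error
        s = s_next

print(np.argmax(W, axis=1))                       # greedy policy; should prefer action 1 everywhere
```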

Another piece of work is the Neural Turing Machine (NTM) proposed by Graves et al. [6], a new computer architecture based on deep learning. Deep learning is typically used for prediction and analysis; here the authors propose using it for computer control. One of the most important capabilities of a computer is reading from and writing to external memory, which gives it great information processing power. The NTM is such a computer with external memory; its distinguishing feature is that the controller of the external memory is a multilayer neural network, and the reads and writes to the external memory are not hard, deterministic operations but soft ones that depend on the input and output. Graves et al. show that the NTM can learn from data how to control the external memory and carry out operations such as copying and sorting. Using deep learning to control computer storage gives one a refreshing sense of novelty.
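The "soft" memory access can be illustrated with a few lines of NumPy. This is only a sketch of the read and write arithmetic under assumed random weights, not the NTM architecture of [6], in which a neural-network controller produces the weights.

```python
import numpy as np

rng = np.random.default_rng(0)
memory = rng.random((5, 8))                     # 5 memory slots, each an 8-dim vector

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

read_weights = softmax(rng.normal(size=5))      # attention weights over slots (controller output in a real NTM)
read_vector = read_weights @ memory             # soft read: a mixture of all slots

write_weights = softmax(rng.normal(size=5))
erase = rng.random(8)                           # how much of each slot's content to erase
add = rng.random(8)                             # new content to blend in
memory = memory * (1 - np.outer(write_weights, erase)) + np.outer(write_weights, add)

print(read_vector.shape, memory.shape)          # (8,) (5, 8)
```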

Weston et al. proposed the Memory Network (MemNN) model, which can do simple question answering, as shown in Figure 5 [7]. Although MemNN's accuracy on answers that require relatively complex reasoning is not ideal, this work extends deep learning to traditional artificial intelligence problems such as question answering and reasoning, and has attracted wide attention. The MemNN model works as follows: there is a Long Term Memory that stores a series of intermediate semantic representations; given an input sentence, the system converts it into an intermediate representation, updates the state of the long-term memory (for example, by adding the new intermediate representation), creates a new representation, and finally produces an answer as output.

Figure 5: A Memory Network answering simple questions
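As a rough, hypothetical illustration of the store-then-match idea (deliberately much simpler than the MemNN model of [7]), one can keep sentences in a "memory" of bag-of-words vectors and answer a question by returning the best-matching stored sentence:

```python
import numpy as np

def bow(sentence, vocab):
    v = np.zeros(len(vocab))
    for w in sentence.lower().split():
        if w in vocab:
            v[vocab[w]] += 1.0
    return v

facts = ["John went to the kitchen", "Mary picked up the ball", "John dropped the ball"]
vocab = {w: i for i, w in enumerate(sorted({w for f in facts for w in f.lower().split()}))}
memory = np.stack([bow(f, vocab) for f in facts])     # the "long-term memory"

question = "who picked up the ball"
scores = memory @ bow(question, vocab)                # match the question against memory
print(facts[int(np.argmax(scores))])                  # -> "Mary picked up the ball"
```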

Deep learning research at Noah's Ark Lab

Noah's Ark Lab conducts research on natural language processing and deep learning, with the goal of building better machine translation and natural language dialogue systems. We have recently obtained some notable results in this research and received recognition from industry peers. Here we introduce a few representative pieces of work and summarize the main conclusions.

Natural language dialogue is one of the most challenging problems in artificial intelligence. There are now many practical dialogue systems, such as Apple's Siri, that can hold simple conversations with people and perform simple tasks such as checking the weather or stock quotes. But today's dialogue systems are still a far cry from the ideal of free-flowing conversation, and more advanced technologies will need to be developed continuously in the future. Current technology mainly handles single-turn dialogue; multi-turn dialogue, where it is attempted, is built by adding some simple processing on top of single-turn dialogue. Single-turn dialogue technology can be divided into rule-based and data-driven approaches, and as far as we know existing systems are based on these two approaches. The main contribution of Noah's Ark Lab is to have systematically studied data-driven single-turn dialogue systems and, using deep learning, developed the most advanced technology in the industry. We have proposed several deep learning models [8, 9, 10]; among them, the Neural Responding Machine is the industry's first deep-learning-based generation model for single-turn dialogue [10]. Given an utterance, the system automatically generates a response; it is built entirely automatically from large-scale dialogue data, with a recurrent neural network model at its core. The system can generate surprisingly good responses, and its success rate is a great improvement over existing systems based on translation models. The focus of our dialogue research has now shifted to multi-turn dialogue and to aspects such as the use of knowledge and reasoning.

Machine translation can help humanity overcome the language barrier and is a major application of natural language processing. The current mainstream of machine translation is still Statistical Machine Translation (SMT), especially phrase-based translation methods. In recent years many researchers have tried to combine deep learning with SMT. For example, researchers at BBN found that a deep model can be used to implement a joint model of the source and target languages, and that using this model as a feature of the SMT model can improve SMT accuracy. Along this line of thinking, we proposed two convolutional neural network models [11], one serving as a joint model of the source and target languages and the other as a language model of the target language; used as features of the SMT model, they raised the overall BLEU score by two points. Another bolder direction, on which high hopes are also placed, is to build the machine translation system entirely with deep learning, known as Neural Machine Translation (NMT); it has achieved preliminary results on a par with SMT. For example, researchers at the University of Montreal proposed RNNSearch, a system based on recurrent neural networks. Its basic idea is to use one RNN to convert the source-language sentence into an intermediate representation and another RNN to convert that intermediate representation into the target-language sentence; in addition, they introduced an attention mechanism to further improve translation accuracy. We are also doing research on NMT and have proposed the Deep Memory (DM) model [14]. DM converts the source-language sentence into intermediate representations through a series of nonlinear transformations and then converts them into the target-language sentence. Inspired by the Neural Turing Machine, DM stores the intermediate representations in different memories and controls the reads and writes to these memories with neural networks, thereby realizing various complex intermediate transformations, such as reordering, which suits translation between distant language pairs. On Chinese-to-English translation, DM as a single model reaches the level of the Moses SMT baseline system.
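The attention mechanism mentioned above can be sketched in a few lines. This is an illustrative toy with random vectors and a simple dot-product score (RNNSearch itself scores alignments with a small network), not code from any of the systems cited.

```python
import numpy as np

# Given the encoder's hidden states (one per source word) and the decoder's
# current state, attention computes alignment weights and a context vector
# used when generating the next target word.
rng = np.random.default_rng(0)
src_len, hidden = 6, 16
encoder_states = rng.normal(size=(src_len, hidden))   # one vector per source word
decoder_state = rng.normal(size=hidden)

scores = encoder_states @ decoder_state               # dot-product alignment scores
weights = np.exp(scores - scores.max())
weights /= weights.sum()                              # softmax: attention weights
context = weights @ encoder_states                    # weighted sum of source states

print(weights.round(2), context.shape)                # attention over 6 source words, (16,)
```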

The above concerns applying deep learning as a basic technology to natural language processing; we have also carried out some more fundamental studies and taken the lead in several areas. The main contributions of Noah's Ark Lab include proposing convolutional neural network (CNN) models for representing sentence semantics and applying them to automatic question answering, single-turn dialogue, image retrieval, machine translation, and sentiment analysis, greatly improving accuracy on all of these tasks [8, 11, 12, 13, 15, 16]; and systematically comparing CNNs and RNNs (recurrent neural networks) on several tasks, concluding that CNNs are better suited to language matching while RNNs are better suited to language conversion (translation). In fact, the CNN is a powerful tool for natural language processing. It scans a sentence, extracts features, selects features, and finally forms a semantic representation of the sentence. Its strength is that no syntactic analysis is needed, which makes feature extraction and selection more robust; and because features are extracted and selected from the sentence as a whole, it is especially suitable for matching whole sentences (where no sentence needs to be generated), such as matching questions to answers in question answering.
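Here is a minimal sketch of such a sentence-representation CNN, with made-up vocabulary, dimensions, and random weights; it illustrates the general scheme described above, not the lab's actual models.

```python
import numpy as np

# Convolve filters over word-embedding windows, then max-pool over positions
# to obtain a fixed-length sentence vector.
rng = np.random.default_rng(0)
vocab = {"how": 0, "are": 1, "you": 2, "today": 3}
emb_dim, n_filters, window = 8, 6, 2
E = rng.normal(size=(len(vocab), emb_dim))              # word embeddings
W = rng.normal(size=(n_filters, window * emb_dim))      # convolution filters
b = np.zeros(n_filters)

def encode(sentence):
    x = E[[vocab[w] for w in sentence.split()]]          # (length, emb_dim)
    windows = [x[i:i + window].ravel() for i in range(len(x) - window + 1)]
    feature_maps = np.maximum(0.0, np.stack(windows) @ W.T + b)   # ReLU convolution
    return feature_maps.max(axis=0)                      # max-pool over positions

print(encode("how are you today"))                       # fixed-length sentence vector
```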

Future trends in deep learning

Deep learning leaders LeCun, Bengio, and Hinton, writing in the journal Nature, foresee the future development of deep learning as follows [17]. First, although unsupervised learning has recently been overshadowed by supervised learning, in the long run it remains the more important problem. In computer vision, combining multiple models, such as deep learning and reinforcement learning, to build end-to-end systems may yield recognition mechanisms closer to those of humans. In natural language processing, deep learning will play a prominent role; major breakthroughs will be made in the field, and systems better able to "understand" sentences and textual semantics will appear. Finally, the combination of deep learning with other artificial intelligence techniques will bring revolutionary change to the field of artificial intelligence.

Deep learning has indeed opened up new territory for artificial intelligence and computer science. Looking to the future is genuinely exciting: building on deep learning and related technologies, we may well make computers ever closer to people and turn some of the scenes of science fiction films and novels into reality.
