question | provided_answer | reference_answer | grade | data_source | normalized_grade | weight | index |
---|---|---|---|---|---|---|---|
null | Backpropogation is used for training multi layer networks. It constitutes of forward pass and backward pass. In forward pass network computes the output. Based on this the errors are calculated based on difference between network output and desired output. These errors are the backpropogated to network during backward pass and used for adjusting the synaptic weights. | Backpropagation lowers the error of a MLP level by level recursively backwards. It back propagates an error from the last layer to the first layer by updating the weights. The updates are determined by the local gradient at each level which is computed by partial derivatives of the error and chain rule. | 1 | DigiKlausur | 0.5 | 0.001522 | 358 |
null | HERE: Backpropagation is used for Multilayer perceptron network. It consists of two passes. - Forward pass: The outputs are calculated at every computational node and passed till the output node where the error is calculated by difference of desired output and the actual output. In this pass, the weights of the synaptic links are not changed. - Backward pass: The error generated at the output neuron is passed in the backward direction i.e., against the direction of the synapses and the local gradient of the error is calculated at every neuron. | Backpropagation lowers the error of a MLP level by level recursively backwards. It back propagates an error from the last layer to the first layer by updating the weights. The updates are determined by the local gradient at each level which is computed by partial derivatives of the error and chain rule. | 2 | DigiKlausur | 1 | 0.001001 | 359 |
null | Back prop is a way of training a neural network by adapting the weights using error produced. It consists of two phases, forward and backward. Forward phase computes the output along the network using the function signal. In the backward phase, the error of thr outpur fromthe derired output is computed and a local gradient of the error is used to update the weights iof the network. The local gradient considers the credit or blame of the corresponding weights of neuron in producing the output. | Backpropagation lowers the error of a MLP level by level recursively backwards. It back propagates an error from the last layer to the first layer by updating the weights. The updates are determined by the local gradient at each level which is computed by partial derivatives of the error and chain rule. | 2 | DigiKlausur | 1 | 0.001001 | 360 |
null | The back propagation algorithm is based on the error correction learning rule and consists of two passes: 1. Forward pass : The input signal applied to the source nodes of the network is propagated forwards through the different layers of the network, and the output is computed at the output layer of the network. 2. Backward pass : The error signal computed at the output is propagated backwards, with a local gradient computed at each of the hidden layer neurons, in order to adjust the synaptic weightsof the neuron in the network. | Backpropagation lowers the error of a MLP level by level recursively backwards. It back propagates an error from the last layer to the first layer by updating the weights. The updates are determined by the local gradient at each level which is computed by partial derivatives of the error and chain rule. | 2 | DigiKlausur | 1 | 0.001001 | 361 |
null | Back propagation is moving the error backwards recursively through the network by calculating the local field of every neuron to update the weights. It is based on the chaining rule of derivatives. | Backpropagation lowers the error of a MLP level by level recursively backwards. It back propagates an error from the last layer to the first layer by updating the weights. The updates are determined by the local gradient at each level which is computed by partial derivatives of the error and chain rule. | 1 | DigiKlausur | 0.5 | 0.001522 | 362 |
null | Backpropagation is a neural network which has two stages: -Forward pass: In forward pass the error is calculated in the output layer with the help of the desired output and the given output. e = d - y - Backward pass: It begins in the output layer , in this case the error is passed backwards with the calculation of gradients at each layer of the neural network So in back propagation the adjustment to weights is made based on the local gradients which is calculated at each layer. | Backpropagation lowers the error of a MLP level by level recursively backwards. It back propagates an error from the last layer to the first layer by updating the weights. The updates are determined by the local gradient at each level which is computed by partial derivatives of the error and chain rule. | 2 | DigiKlausur | 1 | 0.001001 | 363 |
null | It contains forward pass and backward pass. In the forward pass, input is applied to the network and propagate it forward through the network, then compute the output of neurons in output layer and errors for output neurons. In the backward pass, compute local gradients and update the synaptic weights according to error correction rule for each neuron layer by layer in a backward direction. | Backpropagation lowers the error of a MLP level by level recursively backwards. It back propagates an error from the last layer to the first layer by updating the weights. The updates are determined by the local gradient at each level which is computed by partial derivatives of the error and chain rule. | 2 | DigiKlausur | 1 | 0.001001 | 364 |
null | Back-propagation algorithm consists of two passes:<br> 1. Forward pass: the input vector is applied to the network layer by layer 2. Backward pass: the weight is adjusted based on error correction learning rule. <br> <br> Back propagation uses error correction learning rule and the objective is to minimize the average of squared error. | Backpropagation lowers the error of a MLP level by level recursively backwards. It back propagates an error from the last layer to the first layer by updating the weights. The updates are determined by the local gradient at each level which is computed by partial derivatives of the error and chain rule. | 1 | DigiKlausur | 0.5 | 0.001522 | 365 |
null | * Backpropagation is a steepest decent method that calcualtes the error at the output neurons and backpropagates those errors backwards to update the weights of each neuron. * The synaptic weight updated is directly proportional to **partial derivatives** * Local gradient is calculated at ouput neurons and hidden neurons. * Local gradient at output neurons are calculated using the observed error. * But the error function is missing in the hidden neurons, so the local gradient of hidden neuron j is calculated recursively from the local gradients of all neurons which are connected directly to the hidden neuron j. | Backpropagation lowers the error of a MLP level by level recursively backwards. It back propagates an error from the last layer to the first layer by updating the weights. The updates are determined by the local gradient at each level which is computed by partial derivatives of the error and chain rule. | 2 | DigiKlausur | 1 | 0.001001 | 366 |
null | Backpropogation has 2 steps. Forward pass: In forward pass the data is run through the network and the error is calculated. Backward pass: In Backward pass the weight is adjusted using local gradient of error such that the error is minimized. There are many ways for weight adjustment like, steepest descent, Newtons method, Gauss newton method. | Backpropagation lowers the error of a MLP level by level recursively backwards. It back propagates an error from the last layer to the first layer by updating the weights. The updates are determined by the local gradient at each level which is computed by partial derivatives of the error and chain rule. | 1 | DigiKlausur | 0.5 | 0.001522 | 367 |
null | The back propagation is a learning method in neural networks. Back propagation enables the feed forward netwowrk to represent XOR gate. It has two phases: forward pass: the initial weights are used to calculate the value of the output neuron backward pass: starts from the output layer and travels backward. During this phase the weights are changed based on the local gradients of each neuron | Backpropagation lowers the error of a MLP level by level recursively backwards. It back propagates an error from the last layer to the first layer by updating the weights. The updates are determined by the local gradient at each level which is computed by partial derivatives of the error and chain rule. | 1 | DigiKlausur | 0.5 | 0.001522 | 368 |
null | Back propagation is used to learn weights in a multi-layer feed forward network. It is divided into two steps: forward and backward. In the forward step one input is passed through the network to calculate the output of the network. This output is used to calculate the error of each output neuron given the desired output. After this forward step, in the backward step the weights are changed beginning in the end of the network. Each weight is changed by taking the derivative of the activation function of the neuron times either the error, if the following neuron is an output neuron, or all local gradients of connected neurons times the corresponding weights. The weight changes are the local fields. | Backpropagation lowers the error of a MLP level by level recursively backwards. It back propagates an error from the last layer to the first layer by updating the weights. The updates are determined by the local gradient at each level which is computed by partial derivatives of the error and chain rule. | 2 | DigiKlausur | 1 | 0.001001 | 369 |
null | Backpropagation is used in Multilayer Perceptrons to give a method of adapting the weights. First the forward phase is run like in a regular feedforward network. Then after the output and thus the error is determined the error is backpropagated from ouput layer through the network. Since we have multiple layers, there is only a desired output of the network for the last layer. To counteract this problem a gradient is calculated for every neuron during the backward pass. The gradient is giving a measure of the contribution of this neuron to the final error. The gradient is then used to update the neurons weights. If the neuron is not part of the output layer, the previous gradients are used to calculate the new gradient instead of using the error. | Backpropagation lowers the error of a MLP level by level recursively backwards. It back propagates an error from the last layer to the first layer by updating the weights. The updates are determined by the local gradient at each level which is computed by partial derivatives of the error and chain rule. | 1 | DigiKlausur | 0.5 | 0.001522 | 370 |
null | Back propagation consists of two steps: 1. step - Forward pass: Here the input data is fed into the network and the output is calculated at the output nodes. The usual calculations of the induced local field are done by using this formula $v = \sum wx + b$. The output is then calculated using this formula $y = f(v)$, where f() is the activation function. 2. step - Backward pass: Here the error is backpropagated through the network from the output layer to the input layer. In the output layer the error is calculated using this formula $\delta = d - y$, using the desired output d and the actual output y. In the layers before the output layer the local gradient is used to calculate the error using the error from the output layer $\delta = w\delta x$. Additionally the weights are updated using $w{new} = w{old} - learning\rate \cdot \delta x$ | Backpropagation lowers the error of a MLP level by level recursively backwards. It back propagates an error from the last layer to the first layer by updating the weights. The updates are determined by the local gradient at each level which is computed by partial derivatives of the error and chain rule. | 2 | DigiKlausur | 1 | 0.001001 | 371 |
null | Back propagation is a learning algorithm for multilayer neural networks. At first, the input is propagated through the network until the end is reached. Here the error is calculated with the desired result. Then the error is used to update the weights from the back to the front. For the output layer the weights can be updated directly with the calculated error. The following layers have to use the local gradient of the previous error, which is calculated with the derivative of the activation function and its error. This is then used to update the weights and repeated until the front is reached. | Backpropagation lowers the error of a MLP level by level recursively backwards. It back propagates an error from the last layer to the first layer by updating the weights. The updates are determined by the local gradient at each level which is computed by partial derivatives of the error and chain rule. | 2 | DigiKlausur | 1 | 0.001001 | 372 |
null | back propagation is used in multilayer feedforward networks. first the forward pass is computed. The given error at the output nodes is used to compute the weight changes using widrow-hoff learning rule. then the error is given back layer by layer in the backward pass to compute the error and weight changing for each layer recursivly. The learning can be done in sequential (online) or batch mode (offline) | Backpropagation lowers the error of a MLP level by level recursively backwards. It back propagates an error from the last layer to the first layer by updating the weights. The updates are determined by the local gradient at each level which is computed by partial derivatives of the error and chain rule. | 1 | DigiKlausur | 0.5 | 0.001522 | 373 |
null | In multi layer ff networks the error is only available in the last layer. Therefore the error is propagated back through the network using the backpropagtion algorithm. In order to do so the local gradient has to be calculated. Update of the weight: w+1 = w + n * x * gradient where the iput x is the output of the previous layer. The local gradient is calculated diffrently depending if the neuron is in the output layer or in the hidden layer. Output layer: $ gradient = phi`(x) * (y -d) $ Hidden Layer: $ gradient = phij`(x) * SUM(wi * local gradienti) $ | Backpropagation lowers the error of a MLP level by level recursively backwards. It back propagates an error from the last layer to the first layer by updating the weights. The updates are determined by the local gradient at each level which is computed by partial derivatives of the error and chain rule. | 2 | DigiKlausur | 1 | 0.001001 | 374 |
null | In steepest gradient weights are adjusted in decreasing direction of error function. But for hidden neurons there is no labels available to calculate the error. Hence final ouput error is backpropogated through the layers inside the hidden layers of NN. This is possible with continuous activation function and chain rule on its derivatives. Final error is differentiated with respect to hidden weights. Chain rule is applied to find local error on hidden neurons. | Backpropagation lowers the error of a MLP level by level recursively backwards. It back propagates an error from the last layer to the first layer by updating the weights. The updates are determined by the local gradient at each level which is computed by partial derivatives of the error and chain rule. | 1 | DigiKlausur | 0.5 | 0.001522 | 375 |
null | the back propagation algorithm it consist of forward pass and backward pass computes the output of the neuron then it propagates in backward direction while recursively compute local gradient of the neuron weights are adjusted accordingle. | Backpropagation lowers the error of a MLP level by level recursively backwards. It back propagates an error from the last layer to the first layer by updating the weights. The updates are determined by the local gradient at each level which is computed by partial derivatives of the error and chain rule. | 1 | DigiKlausur | 0.5 | 0.001522 | 376 |
null | Back Propagation is the process of learning in Multi Layer Perceptron in which the error from, the output of the network is fed back into the network to adjust the weights in the hidden layer. That is the error back prpagates into the network to enable the network to learn by adjusting the synaptic weights based on it. | Backpropagation lowers the error of a MLP level by level recursively backwards. It back propagates an error from the last layer to the first layer by updating the weights. The updates are determined by the local gradient at each level which is computed by partial derivatives of the error and chain rule. | 1 | DigiKlausur | 0.5 | 0.001522 | 377 |
null | * Back propagation is a process to make adjustment to the weights of a neural network in a way that minimizes the average squared error of the training data. * It uses steepest decent method. In each step it moves towards the direction that gives maximum decrease of the error. * In back propagation the error is propaged backward from the last layer towards the earlier layers. The adjustments made to the weights is proportional to the partial derivative of the error with respect to the weight. * The partial derivative is calculated using repeated application of the chain rule. | Backpropagation lowers the error of a MLP level by level recursively backwards. It back propagates an error from the last layer to the first layer by updating the weights. The updates are determined by the local gradient at each level which is computed by partial derivatives of the error and chain rule. | 1 | DigiKlausur | 0.5 | 0.001522 | 378 |
null | The idea of back propagation method is to propagate error from ouput (final) layer backward to hidden layers, and adjust the weighs of neurond in hidden layer, based of this error. This is required because we do not have error information for hidden layers, only for output neurons. The error from output layer is propagated to hidden layers using idea from steepest descent method. Namely, local gradients are computed for each neuron in backpropagation, and these local gradients define how error changes, in terms of weights. Local gradients are derived from chain rule for each layer. The fact that local gradient for each hidden layer is derived based on local gradient of a previos layer, defines that as we propagate more and more in hidden layers of NN, the gradient of a error function vanishes, which means that as we go deeply back in NN, the change in weights is becominng smaller and smaller. This is a drawback of back propagation method. | Backpropagation lowers the error of a MLP level by level recursively backwards. It back propagates an error from the last layer to the first layer by updating the weights. The updates are determined by the local gradient at each level which is computed by partial derivatives of the error and chain rule. | 2 | DigiKlausur | 1 | 0.001001 | 379 |
null | Learning rate controls the speed of the descent. When learning rate is low, the weight updation is overdamped and convergence is slow. When the learning rate is high, the weight updation is underdamped and a zigzagging behaviour is exhibited in the weight space. When the learning rate is too large, learning becomes unstable. | Learning rate controls the speed of the convergence. When the learning rate is low, the convergence is overdamped and slow. When the learning rate is high, the convergence is underdamped and follows a zigzagging path. When the learning rate exceeds a critical value learning becomes unstable. | 2 | DigiKlausur | 1 | 0.001001 | 380 |
null | If learning rate is very smaller, then transition are over-damping, trajectory of weight vector follows the smooth path. If learning rate is large, then transition are under-damping, trajectory of weight vector exhibits the zigzagging(or oscillatory) behavior If learning gets higher than some threshold, then learning algorithm gets unstable or diverges | Learning rate controls the speed of the convergence. When the learning rate is low, the convergence is overdamped and slow. When the learning rate is high, the convergence is underdamped and follows a zigzagging path. When the learning rate exceeds a critical value learning becomes unstable. | 1 | DigiKlausur | 0.5 | 0.001522 | 381 |
null | Learning rate n determines stride of delta of weight. If learning rate is too large weights starts to ziggerate. | Learning rate controls the speed of the convergence. When the learning rate is low, the convergence is overdamped and slow. When the learning rate is high, the convergence is underdamped and follows a zigzagging path. When the learning rate exceeds a critical value learning becomes unstable. | 1 | DigiKlausur | 0.5 | 0.001522 | 382 |
null | When training with SD, the learning rate determines the step size we take towards the negative gradient. When the learning rate is too small, the weights may be overdamped and reach the error function minimum slowly, eventually getting stuck in local minima. When step size is too big, the weights may be underdampened, bouncing between ridges of the error surface and never find the minimum (especially when the minimum is in a steep ravine of the error surface) | Learning rate controls the speed of the convergence. When the learning rate is low, the convergence is overdamped and slow. When the learning rate is high, the convergence is underdamped and follows a zigzagging path. When the learning rate exceeds a critical value learning becomes unstable. | 2 | DigiKlausur | 1 | 0.001001 | 383 |
null | - Learning rate is used to control how much the wright update is affected by the error correction or so on. - Learning rate too low: Learning is slow and takes more time - Learning rate too high: Learning is fast, but causes zigzagging behaviour in convergence. - If the learning rate is too high, it may result in situations where the zigzagging behaviour will cause it to overshoot, and may never finally converge. | Learning rate controls the speed of the convergence. When the learning rate is low, the convergence is overdamped and slow. When the learning rate is high, the convergence is underdamped and follows a zigzagging path. When the learning rate exceeds a critical value learning becomes unstable. | 2 | DigiKlausur | 1 | 0.001001 | 384 |
null | The learning rate defines the speed of the weight change. A learning rate too high can lead to oscillation around the optimal weight such that its never reached. A learning rate to low results in very slow learning and slow convergence. | Learning rate controls the speed of the convergence. When the learning rate is low, the convergence is overdamped and slow. When the learning rate is high, the convergence is underdamped and follows a zigzagging path. When the learning rate exceeds a critical value learning becomes unstable. | 1 | DigiKlausur | 0.5 | 0.001522 | 385 |
null | The learning rate is needet to make the algorithm more stable. A high learning rate makes the weightchanges zickzacking and the algorithm might not converge A low learning rate makes the path in the W-plane more smooth. If the learning rate gets to a certan critical value the algorithm might not converge at all | Learning rate controls the speed of the convergence. When the learning rate is low, the convergence is overdamped and slow. When the learning rate is high, the convergence is underdamped and follows a zigzagging path. When the learning rate exceeds a critical value learning becomes unstable. | 2 | DigiKlausur | 1 | 0.001001 | 386 |
null | The learning reate is a factor of how much we trust the datapoint. Normally it is in the range of [0,1]. A high learning rate normally results in a faster convergence while a lower rate in a slower conversion. If the rate is choosen to high, it is possible that the cost function diverges. If the rate is to slow it is possible that the rate so conversion is so slow that we never reach a local minimum. | Learning rate controls the speed of the convergence. When the learning rate is low, the convergence is overdamped and slow. When the learning rate is high, the convergence is underdamped and follows a zigzagging path. When the learning rate exceeds a critical value learning becomes unstable. | 2 | DigiKlausur | 1 | 0.001001 | 387 |
null | The learning rate is a parameter using on updating the weights in a given iteration. This parameter represents the importance that is given to the adaptation of the weights. So when setting the learning rate small, the learning machine will learn slower but also in a more stable way. On the other hand, when setting the learning rate with a large value, the learning machine will learn faster but in an unstable way. The danger here, is that depending on the learning rate's value, the algorithm may never come into the perfect value. If the learning rate is too small, it may land into a local minimum and never approach the global minimum of the function. If the learning rate is too big, the learning progression will have a zig-zagging behaviour and never approach the ideal value. | Learning rate controls the speed of the convergence. When the learning rate is low, the convergence is overdamped and slow. When the learning rate is high, the convergence is underdamped and follows a zigzagging path. When the learning rate exceeds a critical value learning becomes unstable. | 2 | DigiKlausur | 1 | 0.001001 | 388 |
null | - The learning rate defines the size of steps that the method moves in the search space. - If the learning rate is too small the method needs to take huge number of steps and maybe it stuck in a local minima - If the learning rate is too big the method will converge very fast toward the global minima but there is a probability that it oscilates around the global minima and never reachs it | Learning rate controls the speed of the convergence. When the learning rate is low, the convergence is overdamped and slow. When the learning rate is high, the convergence is underdamped and follows a zigzagging path. When the learning rate exceeds a critical value learning becomes unstable. | 2 | DigiKlausur | 1 | 0.001001 | 389 |
null | If learning rate is to large, then proccess will oscillate a lot and might not converge. If learning rate is to small, then convergance will happen very slowly | Learning rate controls the speed of the convergence. When the learning rate is low, the convergence is overdamped and slow. When the learning rate is high, the convergence is underdamped and follows a zigzagging path. When the learning rate exceeds a critical value learning becomes unstable. | 1 | DigiKlausur | 0.5 | 0.001522 | 390 |
null | The learning rate tells us how confident we are of the error, and it affect the convergence rate. A low learning rate will slow the convergence, making the system overdamped. A high learning rate will speed the convergence but the value oscilates, making the system underdamped. The system can become unstable if the learning rate is above a threshold value. | Learning rate controls the speed of the convergence. When the learning rate is low, the convergence is overdamped and slow. When the learning rate is high, the convergence is underdamped and follows a zigzagging path. When the learning rate exceeds a critical value learning becomes unstable. | 2 | DigiKlausur | 1 | 0.001001 | 391 |
null | + If the learning rate is too small, then the system is overdamped and the algorithm takes a long time to converge. + If the learning rate is too large, then the system is underdamped and the algorithm oscillates around and optimal solution or could potentially make the system unstable. | Learning rate controls the speed of the convergence. When the learning rate is low, the convergence is overdamped and slow. When the learning rate is high, the convergence is underdamped and follows a zigzagging path. When the learning rate exceeds a critical value learning becomes unstable. | 2 | DigiKlausur | 1 | 0.001001 | 392 |
null | The steepest descent method is an algorithm for finding the nearest local minimum of a function which presupposes that the gradient of the function can be computed. The method of steepest descent starts at a point P0 and as many times needed moves from Pi to P(i+1) by minimizing along the line extending from Pi in the direction of gradient f(Pi) the local downhill gradient. The danger of the algorithm is, that it can get stuck in a local minima. | Learning rate controls the speed of the convergence. When the learning rate is low, the convergence is overdamped and slow. When the learning rate is high, the convergence is underdamped and follows a zigzagging path. When the learning rate exceeds a critical value learning becomes unstable. | 2 | DigiKlausur | 1 | 0.001001 | 393 |
null | The learning rate determines the rate of learning: the smaller the learing rate is, the slower the learning process is, but the path of weight adjustment is smoother. The larger the value is, the faster the learing process is, but it can result in oscillation and instability. | Learning rate controls the speed of the convergence. When the learning rate is low, the convergence is overdamped and slow. When the learning rate is high, the convergence is underdamped and follows a zigzagging path. When the learning rate exceeds a critical value learning becomes unstable. | 2 | DigiKlausur | 1 | 0.001001 | 394 |
null | The learning rate is $\eta$ So based on the learning rate, it undergoes various oscillation. We could see zigzagging behaviours. 1. When the learning rate is large, the system is said to be under damped. 2. When the learning rate is small, the system is said to be over damped. Here we can see a zigzagging behaviour towards the convergence phase. 3. After the learning rate crosses a certain value it becomes unstable. It may stuck in a local minima which is considered to be another danger | Learning rate controls the speed of the convergence. When the learning rate is low, the convergence is overdamped and slow. When the learning rate is high, the convergence is underdamped and follows a zigzagging path. When the learning rate exceeds a critical value learning becomes unstable. | 2 | DigiKlausur | 1 | 0.001001 | 395 |
null | Learning rate in steepest descent can directly affect the convergence of the algorithm. If the learning rate is very small then algorithm can take long time to converge i.e response is ovderdamped. But if the learning rate is amde very high then we may observe zig-zagging (oscillatory) behaviour and sometimes algorithm may fail to converge (underdamped response). | Learning rate controls the speed of the convergence. When the learning rate is low, the convergence is overdamped and slow. When the learning rate is high, the convergence is underdamped and follows a zigzagging path. When the learning rate exceeds a critical value learning becomes unstable. | 2 | DigiKlausur | 1 | 0.001001 | 396 |
null | HERE: - When the learning rate is small, the learning is very slow. - When the learning rate is large, the learning is unstable and can exhibit zigzag behavior. - When the learning rate is too large, the learning never converges. | Learning rate controls the speed of the convergence. When the learning rate is low, the convergence is overdamped and slow. When the learning rate is high, the convergence is underdamped and follows a zigzagging path. When the learning rate exceeds a critical value learning becomes unstable. | 2 | DigiKlausur | 1 | 0.001001 | 397 |
null | The learning rate defines the efficiency of learning machine. If it is small, the system response may be overdamped, if large , the response may be underdamped and if it exceeds a critical value, the response may diverge. The danger is the possibility of the system output to not converge. This should be ensured by scaling the learning rate using the largest eigen value of the correation matrix of the input. | Learning rate controls the speed of the convergence. When the learning rate is low, the convergence is overdamped and slow. When the learning rate is high, the convergence is underdamped and follows a zigzagging path. When the learning rate exceeds a critical value learning becomes unstable. | 2 | DigiKlausur | 1 | 0.001001 | 398 |
null | The value of the learning rate parameter $\eta$ controls the speed of descent and convergence towards the optimal weight vector. For small values of $\eta$, the transient response of the algorithm is overdamped and the weight trajectory follows a smooth path. On the other hand if the value of $\eta$ is large, the transient repsonse of the algorithm is underdamped, and the weight trajectory follows an oscillatory path in the W-plane. | Learning rate controls the speed of the convergence. When the learning rate is low, the convergence is overdamped and slow. When the learning rate is high, the convergence is underdamped and follows a zigzagging path. When the learning rate exceeds a critical value learning becomes unstable. | 2 | DigiKlausur | 1 | 0.001001 | 399 |
null | Learning rate $\eta$ has a profound impact on the learning in steepest descent. 1. If $\eta$ is too small, the system is underdamped and convergence is slow. 2. For larger $\eta$, the system is overdamped and tends to oscillate. 3. If $\eta$ exceeds a certain critical value, steepest descent may even diverge! | Learning rate controls the speed of the convergence. When the learning rate is low, the convergence is overdamped and slow. When the learning rate is high, the convergence is underdamped and follows a zigzagging path. When the learning rate exceeds a critical value learning becomes unstable. | 2 | DigiKlausur | 1 | 0.001001 | 400 |
null | Learning rate has huge impact on convergence of the network. If the learning rate is low then the transient response of the algorithm is overdamped and the trajectory of w(n) is smooth. If the learning rate is high then the transient response of the algorithm is underdamped and trajectory of the w(n) is zigzag. If we choose the wrong learning rate then the network might not converge. | Learning rate controls the speed of the convergence. When the learning rate is low, the convergence is overdamped and slow. When the learning rate is high, the convergence is underdamped and follows a zigzagging path. When the learning rate exceeds a critical value learning becomes unstable. | 2 | DigiKlausur | 1 | 0.001001 | 401 |
null | learning rate controls the speed and convergence of steepest descent method. 1. if it is small, the trajectory of weight vector follows a smooth path in W plane; 2. if it is large, the trajectory of weight vector follows a zigzaging path; 3. if it exceeds a critical value, then the algorithm is unstable. | Learning rate controls the speed of the convergence. When the learning rate is low, the convergence is overdamped and slow. When the learning rate is high, the convergence is underdamped and follows a zigzagging path. When the learning rate exceeds a critical value learning becomes unstable. | 2 | DigiKlausur | 1 | 0.001001 | 402 |
null | 1. Large learning rate $\eta$ results in a zigzagging behavior but it can converge quickly. 2. Small learning rate $\eta$ results in a smooth behavior but it is slow to converge | Learning rate controls the speed of the convergence. When the learning rate is low, the convergence is overdamped and slow. When the learning rate is high, the convergence is underdamped and follows a zigzagging path. When the learning rate exceeds a critical value learning becomes unstable. | 2 | DigiKlausur | 1 | 0.001001 | 403 |
null | * Learning rate tells the network that how much steps it should move towards direction opposite to the gradient vector. * If the learning rate is too large, the weight updation will be high. * So the danger is, learning may oscillate or the network overfit the data. | Learning rate controls the speed of the convergence. When the learning rate is low, the convergence is overdamped and slow. When the learning rate is high, the convergence is underdamped and follows a zigzagging path. When the learning rate exceeds a critical value learning becomes unstable. | 2 | DigiKlausur | 1 | 0.001001 | 404 |
null | Learning rate is used to regulate the speed of learning. If the learning rate is small then the learning is slow and if the learning rate is high then it oscillates. If it exceeds the critical value then the algorithm is unstable. | Learning rate controls the speed of the convergence. When the learning rate is low, the convergence is overdamped and slow. When the learning rate is high, the convergence is underdamped and follows a zigzagging path. When the learning rate exceeds a critical value learning becomes unstable. | 2 | DigiKlausur | 1 | 0.001001 | 405 |
null | Learning rate is used to decide how fast the network should converge during the training phase If the learning rate is too high - the system oscillates and becomes overdamped too low - the system becomes underdamped and learns very slow | Learning rate controls the speed of the convergence. When the learning rate is low, the convergence is overdamped and slow. When the learning rate is high, the convergence is underdamped and follows a zigzagging path. When the learning rate exceeds a critical value learning becomes unstable. | 2 | DigiKlausur | 1 | 0.001001 | 406 |
null | The learning rate tells how long one step in the method of steepest descent is. If the learning rate is too high, the learning will oscillate and may not converge. If the learning rate is too small the convergence will take many iterations. | Learning rate controls the speed of the convergence. When the learning rate is low, the convergence is overdamped and slow. When the learning rate is high, the convergence is underdamped and follows a zigzagging path. When the learning rate exceeds a critical value learning becomes unstable. | 2 | DigiKlausur | 1 | 0.001001 | 407 |
null | If we use steepest descent we use the learning rate to adjust the speed of the convergence to a minimum error. If the learning rate is too small, the learning is going on rather slow. If the rate is high, the error is zigzagging on the error surface towards the minimum. If the learning rate is to high, it might not converge but diverge. | Learning rate controls the speed of the convergence. When the learning rate is low, the convergence is overdamped and slow. When the learning rate is high, the convergence is underdamped and follows a zigzagging path. When the learning rate exceeds a critical value learning becomes unstable. | 2 | DigiKlausur | 1 | 0.001001 | 408 |
null | The learning rate is a value between 0 and 1, which determines how fast the network learns. When using small values for the learning rate, the network converges slowly and needs alot of processing. When choosing big values the learning oscillates and becomes unstable. The goal is to choose the learning rate in a way that it does not learn to slow, which needs more input data for convergence, and that it does not become unstable. | Learning rate controls the speed of the convergence. When the learning rate is low, the convergence is overdamped and slow. When the learning rate is high, the convergence is underdamped and follows a zigzagging path. When the learning rate exceeds a critical value learning becomes unstable. | 2 | DigiKlausur | 1 | 0.001001 | 409 |
null | The learning rate defines the speed of the learning convergence. High values lead to faster learning und low values to slower learning. However, high values can lead to oscillations in the learning space and may overshoot the desired result and never reach it. | Learning rate controls the speed of the convergence. When the learning rate is low, the convergence is overdamped and slow. When the learning rate is high, the convergence is underdamped and follows a zigzagging path. When the learning rate exceeds a critical value learning becomes unstable. | 2 | DigiKlausur | 1 | 0.001001 | 410 |
null | The learning rate gives the speed of learning. It defines the stepwidth in direction of steepest descent. If the learning rate is small, the learning is more stable but slower. When it is high, the learning is more unstable but faster. The danger is to overcome a minimum and result in oscillating behaviour | Learning rate controls the speed of the convergence. When the learning rate is low, the convergence is overdamped and slow. When the learning rate is high, the convergence is underdamped and follows a zigzagging path. When the learning rate exceeds a critical value learning becomes unstable. | 2 | DigiKlausur | 1 | 0.001001 | 411 |
null | A too small learing rate can lead to a very slow convergence or to no convergence at all if the time learn becomes too long. A high learning rate can lead to an oscillating behavior and prevent convergence. | Learning rate controls the speed of the convergence. When the learning rate is low, the convergence is overdamped and slow. When the learning rate is high, the convergence is underdamped and follows a zigzagging path. When the learning rate exceeds a critical value learning becomes unstable. | 2 | DigiKlausur | 1 | 0.001001 | 412 |
null | Learning rate is a scalar multiplied with adjustment term to adjsut the weights. It ensures the rate of learning. It is typical greater than 0 and less than equal to 1. It govers the rate of sliding alond the curve towards the minima. 1. Lower learning rate will result in slow learning but chances of finding optimal minima are greater. 2. Higher learning will result in hopping on either side of minima hence zigzag behaviour. 3. Very high learning may not converge. | Learning rate controls the speed of the convergence. When the learning rate is low, the convergence is overdamped and slow. When the learning rate is high, the convergence is underdamped and follows a zigzagging path. When the learning rate exceeds a critical value learning becomes unstable. | 2 | DigiKlausur | 1 | 0.001001 | 413 |
null | if the learning rate is large then the it follows the zizag motion. if the learning rate is too low then it takes time for converging . if the learning rate is very large or critcal then it becomes unstable. while processing there is possiblity that it will get stuck in local minima. | Learning rate controls the speed of the convergence. When the learning rate is low, the convergence is overdamped and slow. When the learning rate is high, the convergence is underdamped and follows a zigzagging path. When the learning rate exceeds a critical value learning becomes unstable. | 2 | DigiKlausur | 1 | 0.001001 | 414 |
null | When using steepest descent the learning rate($\eta$) determines the speed at which the weihts are adjusted in the NN. There can be two possible danger related to leraning rate depending on its magnitude: 1. Low learning rate(eg, $\eta = 0.01$) results in smooth variation of the weights but makes the process becomes slow. 2. Hight learning rate (eg, $\eta = 0.01$) results in faster weight adjustment but it leads to an oscillatory nature in the learning which is unwanted. | Learning rate controls the speed of the convergence. When the learning rate is low, the convergence is overdamped and slow. When the learning rate is high, the convergence is underdamped and follows a zigzagging path. When the learning rate exceeds a critical value learning becomes unstable. | 2 | DigiKlausur | 1 | 0.001001 | 415 |
null | * With a small learning rate the network will converge very slowly towards the optimal weight of the network but it will give better perfomance in generalization. * With a high learning rate there can be zigzag effect. because of the large rate the network may miss a local minima and jump to a higher point. * With a very high learning rate the network may become unstable. | Learning rate controls the speed of the convergence. When the learning rate is low, the convergence is overdamped and slow. When the learning rate is high, the convergence is underdamped and follows a zigzagging path. When the learning rate exceeds a critical value learning becomes unstable. | 2 | DigiKlausur | 1 | 0.001001 | 416 |
null | The learning rate defines the speed of steepest descent search for min of a error funtion. In other words, it defines how strong the change in weights will be, throughout optimization procedure. Higher learning rate, faster learning, but then learning is characterized by oscilations in searhc for min. This is dangerous because if learning rate, becomes bigger that a certain value, it can make search with steepest descent unstable. IN this case steepest descent will start to diverge, istead of converging to min. In other case, when learing rate is small that lerning is slower but safer, and the learining path is not oscilatory. | Learning rate controls the speed of the convergence. When the learning rate is low, the convergence is overdamped and slow. When the learning rate is high, the convergence is underdamped and follows a zigzagging path. When the learning rate exceeds a critical value learning becomes unstable. | 2 | DigiKlausur | 1 | 0.001001 | 417 |
null | The reduced blotzman machine works by flipping the states of binary neurons based on a probability determined by the activation produced at the neuron. Neurons are arranged in a visible and a hidden layer in a recurrent fashion. There are two states involved called the clamped state in which the visible neuron is connected to the input and a free running state in which both layers run free. | The reduced Boltzmann machine is a bi-parted graph which works by flipping the states of binary neurons based on a probability determined by the activation produced at the neuron. Neurons are arranged in a visible and a hidden layer in a recurrent fashion. There are two states involved called the clamped state in which the visible neuron is connected to the input and a free running state in which both layers run free. | 2 | DigiKlausur | 1 | 0.001001 | 418 |
null | It is a recurrent network. It opreates by flipping. It has two groups of neurons: Visible neurons and hidden neurons. Visible neurons provides interaction between environment and network. Hidden neurons are running freely. It has two modes of operation: . Clamped State: states of the neurons are clamped. . Free running state: Neurons are running in free condition | The reduced Boltzmann machine is a bi-parted graph which works by flipping the states of binary neurons based on a probability determined by the activation produced at the neuron. Neurons are arranged in a visible and a hidden layer in a recurrent fashion. There are two states involved called the clamped state in which the visible neuron is connected to the input and a free running state in which both layers run free. | 2 | DigiKlausur | 1 | 0.001001 | 419 |
null | null | The reduced Boltzmann machine is a bi-parted graph which works by flipping the states of binary neurons based on a probability determined by the activation produced at the neuron. Neurons are arranged in a visible and a hidden layer in a recurrent fashion. There are two states involved called the clamped state in which the visible neuron is connected to the input and a free running state in which both layers run free. | 0 | DigiKlausur | 0 | 0.003546 | 420 |
null | RBM implement a combination of graphical and probabalistic ideas, using probabilites of activations inspired from energy based networks. We present a training input to the RBM, and determine the hidden activations based on a probability of net input and edge weights. Then, when unclamping the training data from the network, sample from the distribution of the hidden layer, where the RBM tries to rebuild the distribution of the input data. RBM may be used for data completion or denoising, where e.g. incomplete images are complted based on the learned probability distribution. | The reduced Boltzmann machine is a bi-parted graph which works by flipping the states of binary neurons based on a probability determined by the activation produced at the neuron. Neurons are arranged in a visible and a hidden layer in a recurrent fashion. There are two states involved called the clamped state in which the visible neuron is connected to the input and a free running state in which both layers run free. | 2 | DigiKlausur | 1 | 0.001001 | 421 |
null | RBM has two layers and are interconnected (recurrent) operates by flipping the internal states (+/- 1)> Unlike the boltzmann machine, reduced boltzmann machine does not contain interconnections among the same layer. The weight update is done by the differnce in correleation in clamped and free running mode. | The reduced Boltzmann machine is a bi-parted graph which works by flipping the states of binary neurons based on a probability determined by the activation produced at the neuron. Neurons are arranged in a visible and a hidden layer in a recurrent fashion. There are two states involved called the clamped state in which the visible neuron is connected to the input and a free running state in which both layers run free. | 2 | DigiKlausur | 1 | 0.001001 | 422 |
null | It consists of only two layers: input and hidden layer. During training data is presented to the input. The hidden layer starts oscillating. | The reduced Boltzmann machine is a bi-parted graph which works by flipping the states of binary neurons based on a probability determined by the activation produced at the neuron. Neurons are arranged in a visible and a hidden layer in a recurrent fashion. There are two states involved called the clamped state in which the visible neuron is connected to the input and a free running state in which both layers run free. | 0 | DigiKlausur | 0 | 0.003546 | 423 |
null | The Reduced Boltzman Machine is an stochastical recurrent ANN, that operates with two classes of neurons : hidden and visible. It operates by neuron-flipping with a probability impacted by the neurons arount. So it uses the hebbian rule. An Reduced Boltzman Machine can learn the classify data and can repoduce the learned patterns. | The reduced Boltzmann machine is a bi-parted graph which works by flipping the states of binary neurons based on a probability determined by the activation produced at the neuron. Neurons are arranged in a visible and a hidden layer in a recurrent fashion. There are two states involved called the clamped state in which the visible neuron is connected to the input and a free running state in which both layers run free. | 2 | DigiKlausur | 1 | 0.001001 | 424 |
null | The structed of RBM is a bitpartied graph. It uses hebbian learning for training and the neurons used are binary stoachastic neurons, which have a binary state, which fire based on a probability. The training is achived by passing the information a many times between the hidden layer and the input layer. There weightsare updated on the pass into the hidden layer. Weigths between input and activations in the hidden layer are increased, weights between gernerated inputs of the rbm and the hidden layer are decreased. | The reduced Boltzmann machine is a bi-parted graph which works by flipping the states of binary neurons based on a probability determined by the activation produced at the neuron. Neurons are arranged in a visible and a hidden layer in a recurrent fashion. There are two states involved called the clamped state in which the visible neuron is connected to the input and a free running state in which both layers run free. | 2 | DigiKlausur | 1 | 0.001001 | 425 |
null | The main idea of an RBM can be defined as follows: - Two layers will be defined, where each neuron will be connected to every neuron of the other layer. - The input will be passed from the first layer to the second one, and the state of each neuron of the second layer will be calculated. - The neurons with active states will pass again its values to the input layer. - The values given from the second layer will be compared with the input values, and with the two states, the weights will be adjusted. | The reduced Boltzmann machine is a bi-parted graph which works by flipping the states of binary neurons based on a probability determined by the activation produced at the neuron. Neurons are arranged in a visible and a hidden layer in a recurrent fashion. There are two states involved called the clamped state in which the visible neuron is connected to the input and a free running state in which both layers run free. | 1 | DigiKlausur | 0.5 | 0.001522 | 426 |
null | null | The reduced Boltzmann machine is a bi-parted graph which works by flipping the states of binary neurons based on a probability determined by the activation produced at the neuron. Neurons are arranged in a visible and a hidden layer in a recurrent fashion. There are two states involved called the clamped state in which the visible neuron is connected to the input and a free running state in which both layers run free. | 0 | DigiKlausur | 0 | 0.003546 | 427 |
null | They are neural network with only one hidden layer, neurons from input to hidden layer are fully connected, neurons from hidden layer to output layer are fully connected as well. | The reduced Boltzmann machine is a bi-parted graph which works by flipping the states of binary neurons based on a probability determined by the activation produced at the neuron. Neurons are arranged in a visible and a hidden layer in a recurrent fashion. There are two states involved called the clamped state in which the visible neuron is connected to the input and a free running state in which both layers run free. | 0 | DigiKlausur | 0 | 0.003546 | 428 |
null | The Reduced boltzman machine works by flipping neurons. It can operate in clamped or free running state. - If two connected neurons are activated at the same time, the weight is increased. - If any of the two neurons are fired asynchronously, then the weight is reduced or removed. | The reduced Boltzmann machine is a bi-parted graph which works by flipping the states of binary neurons based on a probability determined by the activation produced at the neuron. Neurons are arranged in a visible and a hidden layer in a recurrent fashion. There are two states involved called the clamped state in which the visible neuron is connected to the input and a free running state in which both layers run free. | 1 | DigiKlausur | 0.5 | 0.001522 | 429 |
null | + Reduced Boltzman Machines (reduced because inputs do not share information via synapses) are one of the initial NNs which consists of input layer and hidden layer. The system adapts its internal weights and tries to reproduce the inputs. | The reduced Boltzmann machine is a bi-parted graph which works by flipping the states of binary neurons based on a probability determined by the activation produced at the neuron. Neurons are arranged in a visible and a hidden layer in a recurrent fashion. There are two states involved called the clamped state in which the visible neuron is connected to the input and a free running state in which both layers run free. | 1 | DigiKlausur | 0.5 | 0.001522 | 430 |
null | A RBM is a shallow two layer network containing a visible and a hidden layer. Each noden in the visible layer is connected to each node of the hidden layer. It is considered as restricted, because no two nodes of one layer share a connection. A RBM is the mathematical equivalent of a two way translator. In the forward pass a RBM takes the inputs and translates them to a set of numbers that encode the inputs. In the backward pass it takes the set of numbers and translates them back to form the reconstructed inputs. A well trained RBM will be able to perform the backward translation with a higher degree of accuracy. Three steps are repeated over and over through the training process: 1) Forward pass. 2) Backward pass. 3) Evaluate quality of reconstruction as visible layer (often solved with KL divergece) | The reduced Boltzmann machine is a bi-parted graph which works by flipping the states of binary neurons based on a probability determined by the activation produced at the neuron. Neurons are arranged in a visible and a hidden layer in a recurrent fashion. There are two states involved called the clamped state in which the visible neuron is connected to the input and a free running state in which both layers run free. | 1 | DigiKlausur | 0.5 | 0.001522 | 431 |
null | null | The reduced Boltzmann machine is a bi-parted graph which works by flipping the states of binary neurons based on a probability determined by the activation produced at the neuron. Neurons are arranged in a visible and a hidden layer in a recurrent fashion. There are two states involved called the clamped state in which the visible neuron is connected to the input and a free running state in which both layers run free. | 0 | DigiKlausur | 0 | 0.003546 | 432 |
null | null | The reduced Boltzmann machine is a bi-parted graph which works by flipping the states of binary neurons based on a probability determined by the activation produced at the neuron. Neurons are arranged in a visible and a hidden layer in a recurrent fashion. There are two states involved called the clamped state in which the visible neuron is connected to the input and a free running state in which both layers run free. | 0 | DigiKlausur | 0 | 0.003546 | 433 |
null | RBM is an unsupervised learning technique. It has visible neurons and hidden neurons. Neurons are in either +1 or -1 states. It uses the idea of simuilated annealing to flip the neuron states based on energy function and pseudo temperature. It operates in 2 states - clamped state and free flowing state. In clamped state only hidden neurons are flipped and in free flowing state both visible and hidden neurons are flipped. Weights are adjusted based on avergage correlation difference between all the neurons in clamped and free flowing state. | The reduced Boltzmann machine is a bi-parted graph which works by flipping the states of binary neurons based on a probability determined by the activation produced at the neuron. Neurons are arranged in a visible and a hidden layer in a recurrent fashion. There are two states involved called the clamped state in which the visible neuron is connected to the input and a free running state in which both layers run free. | 2 | DigiKlausur | 1 | 0.001001 | 434 |
null | null | The reduced Boltzmann machine is a bi-parted graph which works by flipping the states of binary neurons based on a probability determined by the activation produced at the neuron. Neurons are arranged in a visible and a hidden layer in a recurrent fashion. There are two states involved called the clamped state in which the visible neuron is connected to the input and a free running state in which both layers run free. | 0 | DigiKlausur | 0 | 0.003546 | 435 |
null | RBMs work on the principle of binary states, free-running or clamped. The weight update is done based on the Botlzmann's formula using the pseudotemperature, which gives the proobability of error. | The reduced Boltzmann machine is a bi-parted graph which works by flipping the states of binary neurons based on a probability determined by the activation produced at the neuron. Neurons are arranged in a visible and a hidden layer in a recurrent fashion. There are two states involved called the clamped state in which the visible neuron is connected to the input and a free running state in which both layers run free. | 1 | DigiKlausur | 0.5 | 0.001522 | 436 |
null | The Reduced Boltzman Machines function by using two types of neurons : visible neurons that provide an interface between the environment and he network, an hidden neurons that operate freely. The learning can proceed under two conditions, namely: 1. Clamped state : where the visible neurons are clamped to a particular state of the environment 2. Free running state : where both visible and hidden neurons operate freely. If $\rho^+_{ij}$ indicates the probability of correlation between the states of neurons i and j in clamped state, $\rho^{-}_{ij}$ indicates the probability of correlation between the states of neurons i and j in free running state, then the weight adjustment $\Delta w_{ij} = \eta (\rho^+_{ij} - \rho^{-}_{ij})$ | The reduced Boltzmann machine is a bi-parted graph which works by flipping the states of binary neurons based on a probability determined by the activation produced at the neuron. Neurons are arranged in a visible and a hidden layer in a recurrent fashion. There are two states involved called the clamped state in which the visible neuron is connected to the input and a free running state in which both layers run free. | 2 | DigiKlausur | 1 | 0.001001 | 437
null | A Reduced Boltzmann machine (RBM) consists of two layers of neurons: visible and hidden. The neurons may only have two states i.e. activated or not and they flip according to a certain probability based on the weights and states of other neurons. The RBM has two modes: 1. Clamped: The visible layer is clamped to a certain input while the hidden neurons are allowed to change state until the network settles. The correlation in this state is given by $\rho_{ij}^+$ 2. Free-running: In this state, the network is allowed to flip all neurons until it settles. The correlation is $\rho_{ij}^-$ The weight update rule is given by $$\Delta w_{ij} = \eta (\rho_{ij}^+ - \rho_{ij}^-)$$ | The reduced Boltzmann machine is a bi-parted graph which works by flipping the states of binary neurons based on a probability determined by the activation produced at the neuron. Neurons are arranged in a visible and a hidden layer in a recurrent fashion. There are two states involved called the clamped state in which the visible neuron is connected to the input and a free running state in which both layers run free. | 2 | DigiKlausur | 1 | 0.001001 | 438
null | Boltzmann machines is a neural network having recurrent structure.It is in two states either on which is +1 or off which is -1.The energy function is given by $E = 1/(1+exp(-delta E/Temperature))$ The state of the input x is turned from +1 to -1 based on the change of the energy deltaE and the pseudo temeperature T. | The reduced Boltzmann machine is a bi-parted graph which works by flipping the states of binary neurons based on a probability determined by the activation produced at the neuron. Neurons are arranged in a visible and a hidden layer in a recurrent fashion. There are two states involved called the clamped state in which the visible neuron is connected to the input and a free running state in which both layers run free. | 1 | DigiKlausur | 0.5 | 0.001522 | 439 |
null | The neurons operate in a binary states, "on" or "off". In clamped condition, all visible neurons are clamped into specific states by the environment; in free running condition, all neurons including visible and hidden neurons operate freely. | The reduced Boltzmann machine is a bi-parted graph which works by flipping the states of binary neurons based on a probability determined by the activation produced at the neuron. Neurons are arranged in a visible and a hidden layer in a recurrent fashion. There are two states involved called the clamped state in which the visible neuron is connected to the input and a free running state in which both layers run free. | 1 | DigiKlausur | 0.5 | 0.001522 | 440 |
null | It uses an energy function to oversee the learning process | The reduced Boltzmann machine is a bi-parted graph which works by flipping the states of binary neurons based on a probability determined by the activation produced at the neuron. Neurons are arranged in a visible and a hidden layer in a recurrent fashion. There are two states involved called the clamped state in which the visible neuron is connected to the input and a free running state in which both layers run free. | 0 | DigiKlausur | 0 | 0.003546 | 441 |
null | Reduced boltzman machine work based on **flipping operation** and calculating the probability invariances of clamped state and freely running state. | The reduced Boltzmann machine is a bi-parted graph which works by flipping the states of binary neurons based on a probability determined by the activation produced at the neuron. Neurons are arranged in a visible and a hidden layer in a recurrent fashion. There are two states involved called the clamped state in which the visible neuron is connected to the input and a free running state in which both layers run free. | 1 | DigiKlausur | 0.5 | 0.001522 | 442 |
null | RBMs run on the Boltzmann learning rule. The neurons have two modes of operation, clamped and free-running. All neurons are binary units; their state can be changed by flipping. All the neurons that are in the on position are clamped together. | The reduced Boltzmann machine is a bi-parted graph which works by flipping the states of binary neurons based on a probability determined by the activation produced at the neuron. Neurons are arranged in a visible and a hidden layer in a recurrent fashion. There are two states involved called the clamped state in which the visible neuron is connected to the input and a free running state in which both layers run free. | 1 | DigiKlausur | 0.5 | 0.001522 | 443
null | It has the structure of a recurrent neural network with two layers of neurons, visible and hidden. The neurons can store only binary values and work based on flipping. There are two modes, free-running and clamped, and the weights are changed based on the correlation of the neurons in the free-running mode and the clamped mode. | The reduced Boltzmann machine is a bi-parted graph which works by flipping the states of binary neurons based on a probability determined by the activation produced at the neuron. Neurons are arranged in a visible and a hidden layer in a recurrent fashion. There are two states involved called the clamped state in which the visible neuron is connected to the input and a free running state in which both layers run free. | 1 | DigiKlausur | 0.5 | 0.001522 | 444
null | In a Reduced Boltzman Machine there are one visible and at least one hidden layer. The visible layer is the input and acts as output at the same time. For each input the neurons of the visible layer will be assigned with a value. With their weights, hidden neurons may either be activated or not. Once the input has been passed through the hidden layers, the values are passed all the way back to the visible layer. For this, different weights are used since the values move in the opposite direction. | The reduced Boltzmann machine is a bi-parted graph which works by flipping the states of binary neurons based on a probability determined by the activation produced at the neuron. Neurons are arranged in a visible and a hidden layer in a recurrent fashion. There are two states involved called the clamped state in which the visible neuron is connected to the input and a free running state in which both layers run free. | 1 | DigiKlausur | 0.5 | 0.001522 | 445 |
null | In RBMs there are two states, the free running and the clamped state. During the clamped state, the input neurons are clamped to the output neurons. While the network is clamped the probabilities of the Hidden states to be in a certain state are calculated to determine a probability of the output to be correct. | The reduced Boltzmann machine is a bi-parted graph which works by flipping the states of binary neurons based on a probability determined by the activation produced at the neuron. Neurons are arranged in a visible and a hidden layer in a recurrent fashion. There are two states involved called the clamped state in which the visible neuron is connected to the input and a free running state in which both layers run free. | 1 | DigiKlausur | 0.5 | 0.001522 | 446 |
null | The Reduced Boltzmann Machine has an input layer and a hidden layer. Each neuron has a state and a probability of turning on. If the neuron turns on, the data passes through it and the weights are updated. The probability of turning on is calculated by the network. | The reduced Boltzmann machine is a bi-parted graph which works by flipping the states of binary neurons based on a probability determined by the activation produced at the neuron. Neurons are arranged in a visible and a hidden layer in a recurrent fashion. There are two states involved called the clamped state in which the visible neuron is connected to the input and a free running state in which both layers run free. | 1 | DigiKlausur | 0.5 | 0.001522 | 447
null | Two fully connected layers, one input and one hidden layer are used. The input layer is the only connection to the environment. The RBM has a specified energy level which can not be changed. However the distribution of this energy to the nodes can be changed. Based on the data input every node has a chance to flip based on its input connections. | The reduced Boltzmann machine is a bi-parted graph which works by flipping the states of binary neurons based on a probability determined by the activation produced at the neuron. Neurons are arranged in a visible and a hidden layer in a recurrent fashion. There are two states involved called the clamped state in which the visible neuron is connected to the input and a free running state in which both layers run free. | 1 | DigiKlausur | 0.5 | 0.001522 | 448 |
null | The binary state of each neuron is flipped with a given probability. Stochastic learning. | The reduced Boltzmann machine is a bi-parted graph which works by flipping the states of binary neurons based on a probability determined by the activation produced at the neuron. Neurons are arranged in a visible and a hidden layer in a recurrent fashion. There are two states involved called the clamped state in which the visible neuron is connected to the input and a free running state in which both layers run free. | 0 | DigiKlausur | 0 | 0.003546 | 449
null | Neurons have two states, e.g. on or off. Each neuron has a probability to flip from one state to the other. | The reduced Boltzmann machine is a bi-parted graph which works by flipping the states of binary neurons based on a probability determined by the activation produced at the neuron. Neurons are arranged in a visible and a hidden layer in a recurrent fashion. There are two states involved called the clamped state in which the visible neuron is connected to the input and a free running state in which both layers run free. | 0 | DigiKlausur | 0 | 0.003546 | 450
null | null | The reduced Boltzmann machine is a bi-parted graph which works by flipping the states of binary neurons based on a probability determined by the activation produced at the neuron. Neurons are arranged in a visible and a hidden layer in a recurrent fashion. There are two states involved called the clamped state in which the visible neuron is connected to the input and a free running state in which both layers run free. | 0 | DigiKlausur | 0 | 0.003546 | 451 |
null | The main idea of the RBM is to compute the least-mean-square error of the difference between the expected output and the real output. | The reduced Boltzmann machine is a bi-parted graph which works by flipping the states of binary neurons based on a probability determined by the activation produced at the neuron. Neurons are arranged in a visible and a hidden layer in a recurrent fashion. There are two states involved called the clamped state in which the visible neuron is connected to the input and a free running state in which both layers run free. | 0 | DigiKlausur | 0 | 0.003546 | 452
null | null | The reduced Boltzmann machine is a bi-parted graph which works by flipping the states of binary neurons based on a probability determined by the activation produced at the neuron. Neurons are arranged in a visible and a hidden layer in a recurrent fashion. There are two states involved called the clamped state in which the visible neuron is connected to the input and a free running state in which both layers run free. | 0 | DigiKlausur | 0 | 0.003546 | 453 |
null | * It is a recurrent neural network * It uses two groups of neurons, hidden and visible * It processes the training data by flipping the neurons | The reduced Boltzmann machine is a bi-parted graph which works by flipping the states of binary neurons based on a probability determined by the activation produced at the neuron. Neurons are arranged in a visible and a hidden layer in a recurrent fashion. There are two states involved called the clamped state in which the visible neuron is connected to the input and a free running state in which both layers run free. | 1 | DigiKlausur | 0.5 | 0.001522 | 454
null | The Reduced Boltzmann Machine is a bi-parted (two-part) recurrent NN that has two layers, a visible and a hidden layer. In the Reduced Boltzmann Machine, neurons can have one of two states, namely +1 or -1, depending on the current time step. At each time step, the states of neurons are flipped. Here the visible layer represents the interface between the environment and the hidden layer, and it operates in clamped mode (values limited by the environment), while the hidden layer operates in free mode. | The reduced Boltzmann machine is a bi-parted graph which works by flipping the states of binary neurons based on a probability determined by the activation produced at the neuron. Neurons are arranged in a visible and a hidden layer in a recurrent fashion. There are two states involved called the clamped state in which the visible neuron is connected to the input and a free running state in which both layers run free. | 2 | DigiKlausur | 1 | 0.001001 | 455
null | Echo State Network is a type of Recurrent Neural Network and has at least one cyclic (feedback) connection. An ESN consists of a dynamic reservoir and an output layer of neurons. The dynamic reservoir consists of randomly initialized neurons with random structure and connections (with at least one feedback connection). The output layer combines the dynamic behaviours of the reservoir in the required fashion. Only the weights of the output neurons are updated during learning. An ESN contains feedback connections while a FF NN does not. An ESN can have persisting activations even when there is no input, which is not the case in a FF NN. An ESN can approximate dynamic systems while a FF NN cannot. | Echo State Network is a type of Recurrent Neural Network and has at least one cyclic (feedback) connection. Only the weights of the output layers are updated while learning. ESN consists of feedback connections while a FF NN does not. ESN can approximate dynamic systems while FF NN cannot. | 2 | DigiKlausur | 1 | 0.001001 | 456
null | Echo State Networks are recurrent neural networks, which means these networks have feedback, while feedforward neural networks have no feedback. In feedforward networks, training data or inputs are not dependent on each other, and the networks have no system memory. In ESNs, training inputs are dependent on each other and the network has system memory. In an Echo State Network, the reservoir weights are fixed and randomly generated; these weights are not trained, and only the output weights are trained. | Echo State Network is a type of Recurrent Neural Network and has at least one cyclic (feedback) connection. Only the weights of the output layers are updated while learning. ESN consists of feedback connections while a FF NN does not. ESN can approximate dynamic systems while FF NN cannot. | 2 | DigiKlausur | 1 | 0.001001 | 457
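Several of the Boltzmann-machine answers above cite the stochastic flip rule $P = 1/(1+\exp(\Delta E/T))$ over binary $\pm 1$ neurons. A minimal sketch of that rule follows; the function names, the toy weights, and the sign convention for $\Delta E$ (it varies across textbooks) are all assumptions, not part of the graded answers.

```python
import math
import random

def flip_probability(delta_e, temperature):
    """Probability of accepting a flip given energy change delta_e and pseudo-temperature.

    With this convention, energy-lowering flips (delta_e < 0) are accepted
    with high probability; some texts write the exponent with the opposite sign.
    """
    return 1.0 / (1.0 + math.exp(delta_e / temperature))

def gibbs_sweep(states, weights, temperature, rng=random.random):
    """One sweep over binary (+1/-1) neurons, flipping each with the Boltzmann probability.

    For energy E = -1/2 * sum_ij w_ij s_i s_j, flipping neuron i changes the
    energy by delta_e = 2 * s_i * sum_j w_ij s_j.
    """
    n = len(states)
    for i in range(n):
        local_field = sum(weights[i][j] * states[j] for j in range(n) if j != i)
        delta_e = 2.0 * states[i] * local_field
        if rng() < flip_probability(delta_e, temperature):
            states[i] = -states[i]
    return states
```

At $\Delta E = 0$ the flip probability is exactly 0.5, and a high pseudo-temperature $T$ pushes all probabilities toward 0.5, which is the annealing behaviour the answers allude to.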