question (stringclasses, 283 values) | provided_answer (stringlengths 1–3.56k, ⌀ nullable) | reference_answer (stringclasses, 486 values) | grade (float64, 0–100) | data_source (stringclasses, 7 values) | normalized_grade (float64, 0–1) | weight (float64, 0–0.04) | index (int64, 0–4.97k) |
---|---|---|---|---|---|---|---|
null | Steepest descent is the basic learning algorithm from which others are derived. The goal when learning a network is to minimize the error. This is achieved by starting at a random position and moving in the opposite direction of the gradient vector, i.e., the direction of steepest descent. | Steepest descent is used to update the weights in an NN during the learning phase. It helps to navigate the cost function and find the parameters for which the cost is minimum. The weights are updated in the direction of steepest descent, which is opposite to the gradient vector. This method could suffer from local minima and may become unstable. | 2 | DigiKlausur | 1 | 0.001001 | 258 |
null | The error function is computed. To adapt the weights (learn the network), the error function is followed in small steps in the direction of steepest descent to decrease the error. Using iterations, the error is decreased in each step, ending in a (local) minimum. Used in back-propagation. | Steepest descent is used to update the weights in an NN during the learning phase. It helps to navigate the cost function and find the parameters for which the cost is minimum. The weights are updated in the direction of steepest descent, which is opposite to the gradient vector. This method could suffer from local minima and may become unstable. | 1 | DigiKlausur | 0.5 | 0.001522 | 259 |
null | In error-correction learning the weights of a network are learned in a way that e(x) is minimized, where e(x) is some error function. In order to minimize the error function, the method of steepest descent is used. The negative gradient of e(x) points in the direction of steepest descent. Doing steepest descent in a single-layer feed-forward network leads to the delta rule. | Steepest descent is used to update the weights in an NN during the learning phase. It helps to navigate the cost function and find the parameters for which the cost is minimum. The weights are updated in the direction of steepest descent, which is opposite to the gradient vector. This method could suffer from local minima and may become unstable. | 2 | DigiKlausur | 1 | 0.001001 | 260 |
null | Steepest descent adjusts the parameters (weights and bias) of the NN to minimize the error. It does so by adjusting the weights in the direction of steepest descent of the error function. | Steepest descent is used to update the weights in an NN during the learning phase. It helps to navigate the cost function and find the parameters for which the cost is minimum. The weights are updated in the direction of steepest descent, which is opposite to the gradient vector. This method could suffer from local minima and may become unstable. | 1 | DigiKlausur | 0.5 | 0.001522 | 261 |
null | Steepest descent moves in the direction of maximum improvement (in terms of decrease) of the cost function or error. If the learning rate is large, it follows a zigzag motion; if the learning rate is too low, it takes a long time to converge. | Steepest descent is used to update the weights in an NN during the learning phase. It helps to navigate the cost function and find the parameters for which the cost is minimum. The weights are updated in the direction of steepest descent, which is opposite to the gradient vector. This method could suffer from local minima and may become unstable. | 1 | DigiKlausur | 0.5 | 0.001522 | 262 |
null | The method of steepest descent is responsible for the weight adjustments in the network. The weights are adjusted in the direction of steepest descent, which is equal to the negative gradient of the error. It ensures that the weights are decreased in every iteration step. | Steepest descent is used to update the weights in an NN during the learning phase. It helps to navigate the cost function and find the parameters for which the cost is minimum. The weights are updated in the direction of steepest descent, which is opposite to the gradient vector. This method could suffer from local minima and may become unstable. | 2 | DigiKlausur | 1 | 0.001001 | 263 |
null | The steepest descent method helps in making the adjustments of the weights in a neural network in a way that minimizes the average squared error. In each step it gives the direction in which the maximum decrease of the average squared error can be achieved. | Steepest descent is used to update the weights in an NN during the learning phase. It helps to navigate the cost function and find the parameters for which the cost is minimum. The weights are updated in the direction of steepest descent, which is opposite to the gradient vector. This method could suffer from local minima and may become unstable. | 1 | DigiKlausur | 0.5 | 0.001522 | 264 |
null | The method of steepest descent is used for finding the minimum of a cost (error) function. Steepest descent iterates over possible values of the weight vector to optimize the function. It is used for deriving the error function in the ADALINE (adaptive linear element) algorithm, and it is also used in the backpropagation method for training multi-layer NNs. | Steepest descent is used to update the weights in an NN during the learning phase. It helps to navigate the cost function and find the parameters for which the cost is minimum. The weights are updated in the direction of steepest descent, which is opposite to the gradient vector. This method could suffer from local minima and may become unstable. | 2 | DigiKlausur | 1 | 0.001001 | 265 |
null | A dataset $A \subseteq X$ with N data points has $2^N$ binary maps. If, for each of these binary maps, a hypothesis $h \in H$ splits the positive data from the negative data such that there is no training error, then h is said to shatter the dataset A. | For all 2^N possible binary labelings of the data, if a hypothesis h splits the positive data from the negative data with no error, then the hypothesis h shatters the dataset A. | 2 | DigiKlausur | 1 | 0.001001 | 266 |
null | null | For all 2^N possible binary labelings of the data, if a hypothesis h splits the positive data from the negative data with no error, then the hypothesis h shatters the dataset A. | 0 | DigiKlausur | 0 | 0.003546 | 267 |
null | A hypothesis $h \in H$ shatters a dataset $A \subseteq X \Leftrightarrow$ there exists an $\alpha$ for every training set with zero training error. | For all 2^N possible binary labelings of the data, if a hypothesis h splits the positive data from the negative data with no error, then the hypothesis h shatters the dataset A. | 1 | DigiKlausur | 0.5 | 0.001522 | 268 |
null | ... when our learned machine achieves zero training error on every classification problem of the dataset A. Since we have a selection of $n$ points in the dataset A, the number of binary classification problems is 2 to the power of $n$ (I didn't find the caret symbol on the English keyboard :) ). | For all 2^N possible binary labelings of the data, if a hypothesis h splits the positive data from the negative data with no error, then the hypothesis h shatters the dataset A. | 2 | DigiKlausur | 1 | 0.001001 | 269 |
null | Considering a dataset $A \subseteq X$, where X is the instance space and A contains N elements: there are $2^N$ binary maps or learning problems when we want to separate two classes. If each of these problems can be separated completely by a hypothesis $h \in H$, then h is said to shatter A. I.e., a hypothesis shatters a dataset if it can completely separate the classes with zero error for all possible combinations of labels in the dataset. | For all 2^N possible binary labelings of the data, if a hypothesis h splits the positive data from the negative data with no error, then the hypothesis h shatters the dataset A. | 2 | DigiKlausur | 1 | 0.001001 | 270 |
null | when every possible combination of input and desired output can be classified using $h$ | For all 2^N possible binary labelings of the data, if a hypothesis h splits the positive data from the negative data with no error, then the hypothesis h shatters the dataset A. | 1 | DigiKlausur | 0.5 | 0.001522 | 271 |
null | A hypothesis $h \in H$ shatters a dataset $A \subseteq X$ when, for every point $x_i \in A$ with a label $y_i \in \{1,-1\}$, $H$ can separate these two classes using $h$ with no training error. | For all 2^N possible binary labelings of the data, if a hypothesis h splits the positive data from the negative data with no error, then the hypothesis h shatters the dataset A. | 2 | DigiKlausur | 1 | 0.001001 | 272 |
null | there exists an arrangement of these points in A such that, for each possible combination of labels for these points, the hypothesis h has zero training error | For all 2^N possible binary labelings of the data, if a hypothesis h splits the positive data from the negative data with no error, then the hypothesis h shatters the dataset A. | 0 | DigiKlausur | 0 | 0.003546 | 273 |
null | A hypothesis *h* shatters a dataset A if, for the given dataset, h is able to distinguish (or separate) the different classes of this dataset. | For all 2^N possible binary labelings of the data, if a hypothesis h splits the positive data from the negative data with no error, then the hypothesis h shatters the dataset A. | 2 | DigiKlausur | 1 | 0.001001 | 274 |
null | "h" shatters A if for any set of input data points in A there exists at least one training error of zero. | For all 2^N possible binary labelings of the data, if a hypothesis h splits the positive data from the negative data with no error, then the hypothesis h shatters the dataset A. | 2 | DigiKlausur | 1 | 0.001001 | 275 |
null | H shatters A when, for example, for a given dataset $(x_1, x_2, \ldots, x_r)$ with outputs of the form $(x_1, y_1), (x_2, y_2), \ldots, (x_r, y_r)$, an error of 0 has been found. | For all 2^N possible binary labelings of the data, if a hypothesis h splits the positive data from the negative data with no error, then the hypothesis h shatters the dataset A. | 2 | DigiKlausur | 1 | 0.001001 | 276 |
null | A machine F can shatter a set of points $x_1, x_2, x_3, \ldots, x_n$ if and only if, for every training set, there is a weight vector $\alpha$ that produces zero training error. | For all 2^N possible binary labelings of the data, if a hypothesis h splits the positive data from the negative data with no error, then the hypothesis h shatters the dataset A. | 2 | DigiKlausur | 1 | 0.001001 | 277 |
null | A hypothesis $h \in H$ shatters a dataset $A \subset X \Leftrightarrow$ for each assignable configuration of $(x_i, y_i) \in A$, $h$ perfectly classifies all elements of the set $A$. | For all 2^N possible binary labelings of the data, if a hypothesis h splits the positive data from the negative data with no error, then the hypothesis h shatters the dataset A. | 2 | DigiKlausur | 1 | 0.001001 | 278 |
null | null | For all 2^N possible binary labelings of the data, if a hypothesis h splits the positive data from the negative data with no error, then the hypothesis h shatters the dataset A. | 0 | DigiKlausur | 0 | 0.003546 | 279 |
null | A hypothesis $h \in H$ shatters a dataset $A \subseteq X \Leftrightarrow$ at least one possible combination of the dataset $A$ can be classified by the hypothesis $h \in H$ with zero training error. | For all 2^N possible binary labelings of the data, if a hypothesis h splits the positive data from the negative data with no error, then the hypothesis h shatters the dataset A. | 0 | DigiKlausur | 0 | 0.003546 | 280 |
null | Given a dataset $A \subseteq X$ where X is the instance space: for a given problem with the dataset A, if a learning machine is able to successfully split the positive and the negative data, then we say that A is shattered by the learning machine. | For all 2^N possible binary labelings of the data, if a hypothesis h splits the positive data from the negative data with no error, then the hypothesis h shatters the dataset A. | 2 | DigiKlausur | 1 | 0.001001 | 281 |
null | Suppose X is a training dataset and A is a subset of the training dataset; then a hypothesis h is said to shatter A if it can correctly classify all the points in A, i.e., zero training error. | For all 2^N possible binary labelings of the data, if a hypothesis h splits the positive data from the negative data with no error, then the hypothesis h shatters the dataset A. | 2 | DigiKlausur | 1 | 0.001001 | 282 |
null | A hypothesis $h \in H$ shatters a dataset $A \subseteq X \Leftrightarrow$ the hypothesis can clearly distinguish the positive examples from the negative examples in A. | For all 2^N possible binary labelings of the data, if a hypothesis h splits the positive data from the negative data with no error, then the hypothesis h shatters the dataset A. | 1 | DigiKlausur | 0.5 | 0.001522 | 283 |
null | A hypothesis h is a model that separates a dataset consisting of $\{(x_i, y_i)\}$ samples into positive and negative samples. h is said to shatter a given subset of a dataset if it can successfully separate at least one configuration of the subset of the dataset. | For all 2^N possible binary labelings of the data, if a hypothesis h splits the positive data from the negative data with no error, then the hypothesis h shatters the dataset A. | 0 | DigiKlausur | 0 | 0.003546 | 284 |
null | A hypothesis $h \in H$ shatters a dataset $A \subseteq X$ if there exists an $\alpha$ for which there is zero training error. | For all 2^N possible binary labelings of the data, if a hypothesis h splits the positive data from the negative data with no error, then the hypothesis h shatters the dataset A. | 2 | DigiKlausur | 1 | 0.001001 | 285 |
null | for each of the $2^N$ (where N is the size of A) combinations of input-output mappings of the form $(x_i, y_i)$, h is able to classify the data correctly, that is, with zero error. | For all 2^N possible binary labelings of the data, if a hypothesis h splits the positive data from the negative data with no error, then the hypothesis h shatters the dataset A. | 2 | DigiKlausur | 1 | 0.001001 | 286 |
null | null | For all 2^N possible binary labelings of the data, if a hypothesis h splits the positive data from the negative data with no error, then the hypothesis h shatters the dataset A. | 0 | DigiKlausur | 0 | 0.003546 | 287 |
null | If, for all possible binary labelings of dataset A, we can find a hypothesis h that can separate the positive examples from the negative examples, then H shatters A. | For all 2^N possible binary labelings of the data, if a hypothesis h splits the positive data from the negative data with no error, then the hypothesis h shatters the dataset A. | 2 | DigiKlausur | 1 | 0.001001 | 288 |
null | The hypothesis $h$ can shatter points $x_1$, $x_2$, ..., $x_n$ if and only if for every possible training set of the form $(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)$ there exists some value of $\alpha$ that gets zero training error. | For all 2^N possible binary labelings of the data, if a hypothesis h splits the positive data from the negative data with no error, then the hypothesis h shatters the dataset A. | 2 | DigiKlausur | 1 | 0.001001 | 289 |
null | A hypothesis space H shatters a dataset if and only if there is a **possible $\alpha$ (weight vector)** in the hypothesis space that **separates all the positive data from the negative data**. | For all 2^N possible binary labelings of the data, if a hypothesis h splits the positive data from the negative data with no error, then the hypothesis h shatters the dataset A. | 1 | DigiKlausur | 0.5 | 0.001522 | 290 |
null | a hypothesis $h \in H$ shatters $A \subseteq X$ if and only if there exists a value of $\alpha$ for which the training error is zero | For all 2^N possible binary labelings of the data, if a hypothesis h splits the positive data from the negative data with no error, then the hypothesis h shatters the dataset A. | 1 | DigiKlausur | 0.5 | 0.001522 | 291 |
null | H is the VC dimension of a learning machine that can shatter h points. The VC dimension of a learning machine is the maximum number of points that can be arranged so that the learning machine can shatter them. Shattering: the learning machine is said to shatter points $(x_1, \ldots, x_r)$ if and only if all the possible training sets $((x_1, y_1), \ldots, (x_r, y_r))$ can be classified with zero training error. | For all 2^N possible binary labelings of the data, if a hypothesis h splits the positive data from the negative data with no error, then the hypothesis h shatters the dataset A. | 2 | DigiKlausur | 1 | 0.001001 | 292 |
null | A hypothesis shatters a dataset if it can correctly classify all combinations of labellings of the points in the dataset. | For all 2^N possible binary labelings of the data, if a hypothesis h splits the positive data from the negative data with no error, then the hypothesis h shatters the dataset A. | 2 | DigiKlausur | 1 | 0.001001 | 293 |
null | , if there exists a configuration of $X$ such that $h$ gets zero training error on any dichotomy of the data points. | For all 2^N possible binary labelings of the data, if a hypothesis h splits the positive data from the negative data with no error, then the hypothesis h shatters the dataset A. | 0 | DigiKlausur | 0 | 0.003546 | 294 |
null | there exist weights w which produce a perfect classification. | For all 2^N possible binary labelings of the data, if a hypothesis h splits the positive data from the negative data with no error, then the hypothesis h shatters the dataset A. | 0 | DigiKlausur | 0 | 0.003546 | 295 |
null | for all possible classified subsets of dataset A, the hypothesis h can separate them | For all 2^N possible binary labelings of the data, if a hypothesis h splits the positive data from the negative data with no error, then the hypothesis h shatters the dataset A. | 2 | DigiKlausur | 1 | 0.001001 | 296 |
null | when all combinations of position and labeling of the data can be separated into the given classes by the hypothesis | For all 2^N possible binary labelings of the data, if a hypothesis h splits the positive data from the negative data with no error, then the hypothesis h shatters the dataset A. | 2 | DigiKlausur | 1 | 0.001001 | 297 |
null | h shatters A when and only when, for all possibilities of $(a_1, y_1), (a_2, y_2), \ldots, (a_n, y_n)$, where y is the class label (1 or -1), there exists some $\alpha$ for a learning machine f that produces 0 training error. | For all 2^N possible binary labelings of the data, if a hypothesis h splits the positive data from the negative data with no error, then the hypothesis h shatters the dataset A. | 2 | DigiKlausur | 1 | 0.001001 | 298 |
null | If there exists at least one configuration of A for which the training error of h is zero, i.e. it successfully classifies all points in A. | For all 2^N possible binary labelings of the data, if a hypothesis h splits the positive data from the negative data with no error, then the hypothesis h shatters the dataset A. | 0 | DigiKlausur | 0 | 0.003546 | 299 |
null | and there exists a linear separator which separates the positive examples from the negative examples correctly; then we say that A can be shattered by h. | For all 2^N possible binary labelings of the data, if a hypothesis h splits the positive data from the negative data with no error, then the hypothesis h shatters the dataset A. | 2 | DigiKlausur | 1 | 0.001001 | 300 |
null | Given a dataset A, if it is possible to find a hypothesis H which separates the dataset into binary form without any error, we can say that the hypothesis $h \in H$ shatters the dataset $A \subseteq X$. | For all 2^N possible binary labelings of the data, if a hypothesis h splits the positive data from the negative data with no error, then the hypothesis h shatters the dataset A. | 2 | DigiKlausur | 1 | 0.001001 | 301 |
null | It means that for all the points in A with input-output pairs (x,y), for any combination of $(x_i, y_i)$ there exists a parameter $\alpha$ of h that enables h to classify the points with zero error | For all 2^N possible binary labelings of the data, if a hypothesis h splits the positive data from the negative data with no error, then the hypothesis h shatters the dataset A. | 2 | DigiKlausur | 1 | 0.001001 | 302 |
null | We say that a hypothesis h shatters a dataset A iff h produces zero training error for the given dataset A. In other words, we say that a hypothesis h shatters a dataset A when h separates the data A into two classes without error. | For all 2^N possible binary labelings of the data, if a hypothesis h splits the positive data from the negative data with no error, then the hypothesis h shatters the dataset A. | 0 | DigiKlausur | 0 | 0.003546 | 303 |
null | $\Delta w = \eta e(n)x(n)$, where $\eta$ is the learning rate. The Widrow-Hoff rule states that the change in weights is proportional to the product of the error and the input in the corresponding synapse. | The adjustment made to a synaptic weight of a neuron is proportional to the product of the error signal and the input signal of the synapse in question. The rule is derived from the steepest descent method. | 2 | DigiKlausur | 1 | 0.001001 | 304 |
null | The weight adjustments are proportional to the product of the error signal and the input vector: $w(n+1) = w(n) + \eta(d-y)x(n)$, where $\eta$ is the learning rate, d is the desired output, y is the current output, and x(n) is the input vector. | The adjustment made to a synaptic weight of a neuron is proportional to the product of the error signal and the input signal of the synapse in question. The rule is derived from the steepest descent method. | 2 | DigiKlausur | 1 | 0.001001 | 305 |
null | Adaptation of a weight is proportional to the product of input and error: $w_{new} = w_{old} + x \cdot e$ | The adjustment made to a synaptic weight of a neuron is proportional to the product of the error signal and the input signal of the synapse in question. The rule is derived from the steepest descent method. | 1 | DigiKlausur | 0.5 | 0.001522 | 306 |
null | For neurons with a linear activation function (ADALINE): $w(t+1) = w(t) + \alpha (d-y)x$, where x is the input pattern, d is the true value, and y is the net output. Notice that the delta rule looks similar to the perceptron learning rule, but was derived from SD, whereas the perceptron works with a step function which is not fully differentiable. | The adjustment made to a synaptic weight of a neuron is proportional to the product of the error signal and the input signal of the synapse in question. The rule is derived from the steepest descent method. | 2 | DigiKlausur | 1 | 0.001001 | 307 |
null | The Widrow-Hoff learning rule, also known as the error-correction rule, is used to update the weights as $\Delta w = \eta (d_i - y_i)x_i$, where d is the desired output, y is the output the network generates, and x is the input. | The adjustment made to a synaptic weight of a neuron is proportional to the product of the error signal and the input signal of the synapse in question. The rule is derived from the steepest descent method. | 1 | DigiKlausur | 0.5 | 0.001522 | 308 |
null | $w(n+1) = w(n) + \mu (d(n) - y(n))x(n)$. The change of the weights is determined using the error ($d(n) - y(n)$) and the input that was given to the network. The learning rate can improve learning speed. The new weights depend on the old ones and the calculated change. | The adjustment made to a synaptic weight of a neuron is proportional to the product of the error signal and the input signal of the synapse in question. The rule is derived from the steepest descent method. | 1 | DigiKlausur | 0.5 | 0.001522 | 309 |
null | $w_{ij}(n) = w_{ij}(n-1) + \eta (d_j - y_j) x_i$: we change the weights by computing the error $e_j = (d_j - y_j)$ for the input, multiplying it by the learning rate $\eta$ and by $x_i$, and adding it to the old weight. This minimises the squared error function (our cost function) and is the online variant of the steepest descent method. | The adjustment made to a synaptic weight of a neuron is proportional to the product of the error signal and the input signal of the synapse in question. The rule is derived from the steepest descent method. | 2 | DigiKlausur | 1 | 0.001001 | 310 |
null | $$ \Delta w(n) = \eta \cdot e(n) \cdot w(n) $$ $$ e(n) = (y-d) $$ The Widrow-Hoff learning rule is error-correction learning. It is used to train a network in a supervised manner. The Widrow-Hoff learning rule can be derived from gradient descent. The rule consists of the error e(n) the neuron has, multiplied with the weight so that the impact of the weight on the error is incorporated into the update. A learning rate is used as an adjustment of how much we trust the weight change. The error is calculated as the difference between the current and expected output. | The adjustment made to a synaptic weight of a neuron is proportional to the product of the error signal and the input signal of the synapse in question. The rule is derived from the steepest descent method. | 2 | DigiKlausur | 1 | 0.001001 | 311 |
null | The Widrow-Hoff learning rule is defined as $w(n + 1) = w(n) + \eta \cdot x(n) \cdot e(n)$. It is a rule for adjusting the weights of an NN for an error-correction learning task. This learning rule is derived from the steepest descent method, where the direction for the minimization of the error is defined as the opposite direction of the cost function's gradient. This gradient can be simplified as $x(n) \cdot e(n)$, where e(n) is defined as the difference between the desired response and the actual response of the learning machine (NN): $e(n) = d(n) - y(n)$. $\eta$ defines the learning rate used. | The adjustment made to a synaptic weight of a neuron is proportional to the product of the error signal and the input signal of the synapse in question. The rule is derived from the steepest descent method. | 2 | DigiKlausur | 1 | 0.001001 | 312 |
null | null | The adjustment made to a synaptic weight of a neuron is proportional to the product of the error signal and the input signal of the synapse in question. The rule is derived from the steepest descent method. | 0 | DigiKlausur | 0 | 0.003546 | 313 |
null | The Widrow-Hoff rule is $$W_{new} = x_{input} \cdot W_{old} \cdot (d_{output} - y_{output}) \cdot \eta \cdot a$$ where $W_{new}$ is the new weight, $W_{old}$ the old weight, $d_{output}$ the desired output, $y_{output}$ the actual output, $x_{input}$ the input, $\eta$ the learning rate, and $a$ a learning constant. | The adjustment made to a synaptic weight of a neuron is proportional to the product of the error signal and the input signal of the synapse in question. The rule is derived from the steepest descent method. | 0 | DigiKlausur | 0 | 0.003546 | 314 |
null | The Widrow-Hoff or delta rule is a gradient descent learning rule used to adapt the weights in a perceptron. $\Delta w(n) = - \eta(d(n) - y(n))x(n)$ $\Delta w(n) = - \eta e(n)x(n)$ | The adjustment made to a synaptic weight of a neuron is proportional to the product of the error signal and the input signal of the synapse in question. The rule is derived from the steepest descent method. | 1 | DigiKlausur | 0.5 | 0.001522 | 315 |
null | The Widrow-Hoff (delta) learning rule is given by $$ w(n+1) = w(n) - \eta x(n) e(n)$$ where $e(n)$ is the error vector, $\eta$ is the learning parameter, and $x(n)$ is the input vector. | The adjustment made to a synaptic weight of a neuron is proportional to the product of the error signal and the input signal of the synapse in question. The rule is derived from the steepest descent method. | 1 | DigiKlausur | 0.5 | 0.001522 | 316 |
null | The Widrow-Hoff learning rule is also referred to as the Delta, or Least Mean Square (LMS), rule. It is used to minimize the cost function and is defined as follows: $\Delta w_{ji}(n) = \eta\, \partial \xi(n) / \partial w_{ji}(n)$, where $\eta$ is the learning-rate parameter, $\xi(n)$ is the total instantaneous error energy, and $w$ are the weights. | The adjustment made to a synaptic weight of a neuron is proportional to the product of the error signal and the input signal of the synapse in question. The rule is derived from the steepest descent method. | 1 | DigiKlausur | 0.5 | 0.001522 | 317 |
null | The Widrow-Hoff learning rule, also called the delta rule, is used for learning a network by adjusting the synaptic weights of the network with the error signals: $$ w(n+1) = w(n) + \eta (d(n) - y(n)) x(n) $$ where $n$ is the iteration number, $\eta$ is the learning rate, $d(n)$ is the desired output signal, $y(n)$ is the actual output signal, and $x(n)$ is the input signal. $(d(n) - y(n))$ is the error signal. | The adjustment made to a synaptic weight of a neuron is proportional to the product of the error signal and the input signal of the synapse in question. The rule is derived from the steepest descent method. | 2 | DigiKlausur | 1 | 0.001001 | 318 |
null | The Widrow-Hoff learning rule states that the adjustment of the weight of a synapse is proportional to the product of the error function and the input given by the synapse, based on the problem. | The adjustment made to a synaptic weight of a neuron is proportional to the product of the error signal and the input signal of the synapse in question. The rule is derived from the steepest descent method. | 1 | DigiKlausur | 0.5 | 0.001522 | 319 |
null | The Widrow-Hoff rule is based on minimising the mean square error using the gradient descent algorithm. Weights are adjusted in the following manner: $w(n+1) = w(n) - \eta \cdot (\text{gradient of the mean square error})$. It takes the gradient of the mean square error $0.5\, e^{2}(n)$: $\frac{\partial}{\partial w}\left(0.5\, e^{2}(n)\right) = e(n) \frac{\partial e(n)}{\partial w} = e(n)\, x(n)$. | The adjustment made to a synaptic weight of a neuron is proportional to the product of the error signal and the input signal of the synapse in question. The rule is derived from the steepest descent method. | 2 | DigiKlausur | 1 | 0.001001 | 320 |
null | Widrow-Hoff rule: - $\Delta w = \eta e(n) x(n)$ - The Widrow-Hoff rule states that when an input x(n) produces an error e(n), the change in the weight is directly proportional to the error signal and the input signal. | The adjustment made to a synaptic weight of a neuron is proportional to the product of the error signal and the input signal of the synapse in question. The rule is derived from the steepest descent method. | 1 | DigiKlausur | 0.5 | 0.001522 | 321 |
null | The Widrow-Hoff learning rule is also called the error-correction learning rule. The error is defined as the difference between the desired and the actual output of the learning machine. Assuming the desired signal is available, the error is computed and the weights of the neural network are updated in the direction of reduction of errors. The error for each input sample for a neuron k is computed using $e_k(i) = d_k(i) - y_k(i)$. The weight change $\Delta W = W \cdot e$, that is, the dot product of the error and the weights, is computed. | The adjustment made to a synaptic weight of a neuron is proportional to the product of the error signal and the input signal of the synapse in question. The rule is derived from the steepest descent method. | 2 | DigiKlausur | 1 | 0.001001 | 322 |
null | Given a neuron k excited by an input signal $x_i$, if $w_{ki}$ is the synaptic weight of the neuron, then the Widrow-Hoff learning rule gives the weight adjustment $\Delta w_{ki}$ applied to the neuron k in mathematical terms as follows: $\Delta w_{ki} = \eta x_i(n)e(n)$, where e(n) is the instantaneous value of the error signal. Thus the Widrow-Hoff rule states that the synaptic adjustment applied to the weights of a neuron is proportional to the product of the input signal to the neuron and the instantaneous value of the error signal. This rule assumes that the neuron has an external supply of the desired response so that the error can be computed. | The adjustment made to a synaptic weight of a neuron is proportional to the product of the error signal and the input signal of the synapse in question. The rule is derived from the steepest descent method. | 2 | DigiKlausur | 1 | 0.001001 | 323 |
null | The Widrow-Hoff learning rule is given by $$w(n + 1) = w(n) + \eta e(n) x(n)$$ where $w(n)$ is the weight in iteration n, $e(n) = d(n) - y(n)$ is the error, $d(n)$ the desired output, $y(n)$ the actual output, $x(n)$ the input, and $\eta$ the learning rate. | The adjustment made to a synaptic weight of a neuron is proportional to the product of the error signal and the input signal of the synapse in question. The rule is derived from the steepest descent method. | 1 | DigiKlausur | 0.5 | 0.001522 | 324 |
null | The Widrow-Hoff learning rule states that the adaptation made to the synaptic weights is proportional to the product of the input and the error function. It basically states that if the error is high then the product of input and error will also be high, and thus the adjustment made to the weight will be larger. $w_j(n+1) = w_j(n) + \eta \cdot \text{error} \cdot \text{input}$ | The adjustment made to a synaptic weight of a neuron is proportional to the product of the error signal and the input signal of the synapse in question. The rule is derived from the steepest descent method. | 2 | DigiKlausur | 1 | 0.001001 | 325 |
null | it is based on minimization of the error cost function $\xi(w) = 0.5\, e_k^2(n)$, so the synaptic weight from neuron k to input j is updated in a direction opposite to the gradient vector of $\xi(w)$, that is $w_{kj}(n+1) = w_{kj}(n) - \eta \nabla \xi(w) = w_{kj}(n) - \eta e_k(n) x_j(n)$, where $\eta$ is the learning rate, $e_k(n)$ is neuron k's error signal, and $x_j(n)$ is the input data. | The adjustment made to a synaptic weight of a neuron is proportional to the product of the error signal and the input signal of the synapse in question. The rule is derived from the steepest descent method. | 2 | DigiKlausur | 1 | 0.001001 | 326 |
null | The Widrow-Hoff or error-correction learning rule says that the adjustment of a weight is proportional to the product of the error signal and the input signal of the weight. | The adjustment made to a synaptic weight of a neuron is proportional to the product of the error signal and the input signal of the synapse in question. The rule is derived from the steepest descent method. | 1 | DigiKlausur | 0.5 | 0.001522 | 327 |
null | $$\Delta \omega_{ji} = e_j \cdot x_i$$ $$\omega(n+1) = \omega(n) + \eta\, \Delta \omega_{ji}$$ The Widrow-Hoff learning rule says that the synaptic weight update is directly proportional to the product of the error and the input. | The adjustment made to a synaptic weight of a neuron is proportional to the product of the error signal and the input signal of the synapse in question. The rule is derived from the steepest descent method. | 2 | DigiKlausur | 1 | 0.001001 | 328 |
null | Widrow-Hoff learning rule: the rule states that the weight update is directly proportional to the product of the input to the neuron and the error. $\Delta w_{ij} = \eta e(n) \sum x_i(n)$ | The adjustment made to a synaptic weight of a neuron is proportional to the product of the error signal and the input signal of the synapse in question. The rule is derived from the steepest descent method. | 1 | DigiKlausur | 0.5 | 0.001522 | 329 |
null | $\Delta w_{kj} = \eta e_k \cdot x_j$. The Widrow-Hoff rule states that the change in synaptic weight is proportional to the product of the error signal and the input signal. | The adjustment made to a synaptic weight of a neuron is proportional to the product of the error signal and the input signal of the synapse in question. The rule is derived from the steepest descent method. | 1 | DigiKlausur | 0.5 | 0.001522 | 330 |
null | $\Delta w(n) = \mu \cdot x(n) \cdot e(n)$, where $\mu$ is the learning rate, $x(n)$ the input at timestep n, $e(n) = d(n) - y(n)$, $d(n)$ the desired signal at timestep n, and $y(n)$ the output of the network at timestep n. The Widrow-Hoff (or delta) rule changes the weights depending on the input and the error, which is the difference between the output of the network and the desired output. This weight change can be scaled by a learning rate. | The adjustment made to a synaptic weight of a neuron is proportional to the product of the error signal and the input signal of the synapse in question. The rule is derived from the steepest descent method. | 2 | DigiKlausur | 1 | 0.001001 | 331 |
null | The Widrow-Hoff rule is used in error-correction learning and uses the current error and output of the system to determine the new weights. $w(n+1) = w(n)+\eta \cdot e(n) \cdot y(n) $ | The adjustment made to a synaptic weight of a neuron is proportional to the product of the error signal and the input signal of the synapse in question. The rule is derived from the steepest descent method. | 1 | DigiKlausur | 0.5 | 0.001522 | 332 |
null | $\Delta w(n) = \eta \cdot x(n) \cdot e(n)$, where x is the input data, $e = d - y$ is the error between the desired output and the actual output, and the learning rate $\eta$ is a parameter chosen as necessary to change the speed of learning. $w_{new} = w_{old} + \eta \cdot x \cdot e$ is the formula to update the weights and to learn the input data. | The adjustment made to a synaptic weight of a neuron is proportional to the product of the error signal and the input signal of the synapse in question. The rule is derived from the steepest descent method. | 2 | DigiKlausur | 1 | 0.001001 | 333 |
null | weights(t) = weights(t-1) * learningrate * (desired(t) - output(t)). The Widrow-Hoff rule, also the delta rule, is used to update the weights of neural networks in a learning algorithm. It uses the previous weights' result and compares it to the desired result. This discrepancy is then applied to update the weights based on a learning rate. | The adjustment made to a synaptic weight of a neuron is proportional to the product of the error signal and the input signal of the synapse in question. The rule is derived from the steepest descent method. | 2 | DigiKlausur | 1 | 0.001001 | 334 |
null | $w_{new} = w_{old} + \eta \cdot e(n) \cdot x(n)$, where the error $e$ is the desired input minus the current output. The new value for the synaptic weight is computed from the old value plus a learning rate times the current error and the input. The output error is decreased in each step until the change is too small or the generalization is sufficient. | The adjustment made to a synaptic weight of a neuron is proportional to the product of the error signal and the input signal of the synapse in question. The rule is derived from the steepest descent method. | 2 | DigiKlausur | 1 | 0.001001 | 335 |
null | Rule: $w_{t+1} = w_t + \eta \cdot x \cdot (y - d)$, where $\eta$ is the learning rate, x is the input, y is the output of the network, and d is the desired output. The Widrow-Hoff rule minimizes the error (y - d). The weight change is proportional to the input x and the error. It can be derived from steepest descent. | The adjustment made to a synaptic weight of a neuron is proportional to the product of the error signal and the input signal of the synapse in question. The rule is derived from the steepest descent method. | 2 | DigiKlausur | 1 | 0.001001 | 336 |
null | null | The adjustment made to a synaptic weight of a neuron is proportional to the product of the error signal and the input signal of the synapse in question. The rule is derived from the steepest descent method. | 0 | DigiKlausur | 0 | 0.003546 | 337 |
null | This is basically calculating the mean squared error (MSE) from the expected output and the real output, and modifying the weights to minimize the MSE. | The adjustment made to a synaptic weight of a neuron is proportional to the product of the error signal and the input signal of the synapse in question. The rule is derived from the steepest descent method. | 0 | DigiKlausur | 0 | 0.003546 | 338 |
null | The Widrow-Hoff rule states that the weight adjustment is proportional to the product of the input and the error in the output. It is also called the delta rule. $$\Delta w_{ji} = \eta x_i e_{ji}$$ where $\eta$ is the proportionality constant, also called the learning constant. $$W(i) = W(i-1) + \Delta W_{ji}$$ | The adjustment made to a synaptic weight of a neuron is proportional to the product of the error signal and the input signal of the synapse in question. The rule is derived from the steepest descent method. | 2 | DigiKlausur | 1 | 0.001001 | 339 |
null | $\Delta W_{ji} = \eta e_j x_i$. The adjustment made to the weight of a neuron is proportional to the product of the error in that neuron and the input applied to the neuron. | The adjustment made to a synaptic weight of a neuron is proportional to the product of the error signal and the input signal of the synapse in question. The rule is derived from the steepest descent method. | 2 | DigiKlausur | 1 | 0.001001 | 340 |
null | The Widrow-Hoff learning rule is derived from the LMS error method, and it is defined as $W_{t+1} = W_{t} + \mu \cdot \Delta W$, where $\mu$ represents the learning rate and $\Delta W = -(\text{gradient of instantaneous error}) = -(d - y)X$. Here $d$ represents the desired signal, $y$ the output signal of a neuron, and $X$ the input of a neuron. | The adjustment made to a synaptic weight of a neuron is proportional to the product of the error signal and the input signal of the synapse in question. The rule is derived from the steepest descent method. | 2 | DigiKlausur | 1 | 0.001001 | 341 |
null | In backpropagation, the gradient of the error produced at the output layer (obtained by partially differentiating the cost function with respect to the weights) is propagated backwards one layer at a time, back to the input layer. This propagated gradient is used to update the weights in the corresponding layer. Backpropagation is necessary because the desired output at every layer is not known and it is only possible to formulate the cost function at the output layer. | Backpropagation lowers the error of an MLP level by level, recursively backwards. It backpropagates the error from the last layer to the first layer by updating the weights. The updates are determined by the local gradient at each level, which is computed via partial derivatives of the error and the chain rule. | 1 | DigiKlausur | 0.5 | 0.001522 | 342 |
null | In backpropagation, there are two phases: 1. Forward phase: first we apply an input to the network and compute the current output. 2. Backward phase: we compute the error between the current and desired output. The error is minimized by computing the gradient of the error with respect to the weights; in return, the weights are adjusted. After adjusting the weights in the backward phase, we again go to the forward phase and compute the current output, checking whether the error is minimized or not. | Backpropagation lowers the error of an MLP level by level, recursively backwards. It backpropagates the error from the last layer to the first layer by updating the weights. The updates are determined by the local gradient at each level, which is computed via partial derivatives of the error and the chain rule. | 1 | DigiKlausur | 0.5 | 0.001522 | 343 |
null | Backpropagation wants to minimize the error function E, which is given by $E = \frac{1}{2}\sum e(n)^{2}$. The error function can be minimized by calculating the gradient starting from the output. The term for calculating the gradient differs: it depends on whether the neuron for which the gradient is to be calculated is an output neuron or a hidden neuron. | Backpropagation lowers the error of an MLP level by level, recursively backwards. It backpropagates the error from the last layer to the first layer by updating the weights. The updates are determined by the local gradient at each level, which is computed via partial derivatives of the error and the chain rule. | 1 | DigiKlausur | 0.5 | 0.001522 | 344 |
null | Backpropagation is the general form of the delta rule, formulated for networks with multiple hidden layers. Here, we propagate the error of the network back to the input layer to determine the change of weights, using the error signal in the output layer and subsequently the local gradients in the hidden layers. In the forward pass, we compute the net output forwards. In the backward pass, we propagate the error backwards. The BP rule was derived from the error gradient w.r.t. the weights and application of the chain rule. | Backpropagation lowers the error of an MLP level by level, recursively backwards. It backpropagates the error from the last layer to the first layer by updating the weights. The updates are determined by the local gradient at each level, which is computed via partial derivatives of the error and the chain rule. | 2 | DigiKlausur | 1 | 0.001001 | 345 |
null | Backpropagation is the propagation of the error from the output layer to the hidden layers in networks with multiple layers. This is done by calculating the local gradient of each node and then using this (along with the weight) to determine how much of the error is to be propagated to a particular node. | Backpropagation lowers the error of an MLP level by level, recursively backwards. It backpropagates the error from the last layer to the first layer by updating the weights. The updates are determined by the local gradient at each level, which is computed via partial derivatives of the error and the chain rule. | 1 | DigiKlausur | 0.5 | 0.001522 | 346 |
null | Backpropagation is used in multi-layer networks. It consists of two phases: forward and backward. In the forward phase we give an input to the network and calculate its outputs; we also memorize the local field of each node. The local gradient (delta) is used to adapt the weights of the layers; it is different for the output layer and the remaining layers. For node i in an output layer: $\delta_i(v_i) = \varphi'(v_i)(d_i - y_i)$. For node i in other layers: $\delta_i(v_i) = \varphi'(v_i)\sum_{j\in C} w_{ji} \delta_j(v_j)$, where $C$ are all the nodes that use node i's output as an input. Repeat this process for all input data until the error is small enough. | Backpropagation lowers the error of an MLP level by level, recursively backwards. It backpropagates the error from the last layer to the first layer by updating the weights. The updates are determined by the local gradient at each level, which is computed via partial derivatives of the error and the chain rule. | 2 | DigiKlausur | 1 | 0.001001 | 347 |
null | The backpropagation algorithm is there to train a multilayer feedforward ANN. We change the weights by computing the local gradient at each neuron using the neurons in the layer before. The local gradient of the output neurons can be computed easily. The activation function has to be differentiable for the backpropagation algorithm. In the forward pass we compute the output y at the output layer. In the backward pass we use the output y and our desired output d to compute the local gradients at the output layer. Then we go back layer by layer and use the local gradients from before to compute the new local gradients. By that we minimize the average squared error function. | Backpropagation lowers the error of an MLP level by level, recursively backwards. It backpropagates the error from the last layer to the first layer by updating the weights. The updates are determined by the local gradient at each level, which is computed via partial derivatives of the error and the chain rule. | 2 | DigiKlausur | 1 | 0.001001 | 348 |
null | Backpropagation is a learning algorithm for multilayer FF NNs. It is supervised error-correction learning. The weights are initialised randomly. The algorithm has two steps: in the forward pass the output is calculated by using the current weights. In the backward pass the weight update for the output layer is as in a single-layer FF network: the error is used to update the weights. BP also allows us to calculate the error of hidden layers. For each hidden layer we use a local gradient as the error. The local gradient is the sum of the weighted errors of the following layer, passed through the derivative of the activation function. So it is possible to backpropagate the error from the output layer to the first layer. A common problem in BP is the vanishing gradient problem: depending on the activation function used, the local gradient gets smaller in each layer until it is eventually less than the floating-point precision used. This limits the number of layers that can be stacked. | Backpropagation lowers the error of an MLP level by level, recursively backwards. It backpropagates the error from the last layer to the first layer by updating the weights. The updates are determined by the local gradient at each level, which is computed via partial derivatives of the error and the chain rule. | 2 | DigiKlausur | 1 | 0.001001 | 349 |
null | The backpropagation algorithm is a learning algorithm for updating the weights in a multi-layer neural network. For updating the weights of all the layers, the error of each neuron must be calculated. In the backpropagation algorithm, two phases are defined: - Forward phase: the output of the neural network is calculated, as well as the error of the neurons in the output layer. - Backward phase: the gradient of each neuron is calculated, using the calculated error on the output layer and the defined connections between the hidden layer and the output layer. If multiple hidden layers are defined, the error is iteratively passed backwards and the weights at each neuron are updated. | Backpropagation lowers the error of an MLP level by level, recursively backwards. It backpropagates the error from the last layer to the first layer by updating the weights. The updates are determined by the local gradient at each level, which is computed via partial derivatives of the error and the chain rule. | 1 | DigiKlausur | 0.5 | 0.001522 | 350 |
null | Backpropagation is a steepest descent method that uses the final produced error and the local gradient to define the amount of change needed for each synaptic weight. In this method we have two phases: - forward phase: in this phase we feed the input to the network and the network calculates the output - backward phase: in this phase we first calculate the error and then use the local gradient to propagate the error through the network from the last layer to the first and adjust the synaptic weights | Backpropagation lowers the error of an MLP level by level, recursively backwards. It backpropagates the error from the last layer to the first layer by updating the weights. The updates are determined by the local gradient at each level, which is computed via partial derivatives of the error and the chain rule. | 2 | DigiKlausur | 1 | 0.001001 | 351 |
null | Backpropagation consists of two steps: 1. forward pass - data is passed through the network and the weights are adapted 2. backward pass - the error signal is propagated backward, using the local field of each neuron from end to beginning and stacking them up. The local field is the partial derivative of the output signal of a neuron; for an output neuron it is simplest to calculate, as it has only the desired output and the actual output to deal with. | Backpropagation lowers the error of an MLP level by level, recursively backwards. It backpropagates the error from the last layer to the first layer by updating the weights. The updates are determined by the local gradient at each level, which is computed via partial derivatives of the error and the chain rule. | 1 | DigiKlausur | 0.5 | 0.001522 | 352 |
null | Backpropagation is a learning algorithm in multi-layer networks that consists of two phases, a forward pass and a backward pass. In the forward pass, the output is calculated by passing activations layer by layer, starting from the input, then through the hidden layers and finally the output. Then the error is calculated in the output layer and propagated backward through the network. In the forward pass, the weights do not change. In the backward pass, the weights change in proportion to the local gradient. | Backpropagation lowers the error of an MLP level by level, recursively backwards. It backpropagates the error from the last layer to the first layer by updating the weights. The updates are determined by the local gradient at each level, which is computed via partial derivatives of the error and the chain rule. | 1 | DigiKlausur | 0.5 | 0.001522 | 353 |
null | Backpropagation is a neural-network learning algorithm where the network learns by propagating the error through the network. BP consists of two stages: + Forward pass: the error is computed by feeding the input to the network. + Backward pass: the error is propagated through the network for doing the weight updates locally. Since BP has the vanishing gradient problem, it is useful to use activation functions which are infinitely differentiable, such as the sigmoid function. | Backpropagation lowers the error of an MLP level by level, recursively backwards. It backpropagates the error from the last layer to the first layer by updating the weights. The updates are determined by the local gradient at each level, which is computed via partial derivatives of the error and the chain rule. | 1 | DigiKlausur | 0.5 | 0.001522 | 354 |
null | The backpropagation algorithm is used to calculate the error contribution of each neuron after a batch of data is processed. A known desired output for each input value is required; thus the backpropagation algorithm is a supervised method. The algorithm can be subdivided into two phases: 1) Propagation: * Propagation forward through the network to generate the output value(s). * Calculation of the cost error term. * Propagation of the output activation back through the network using the training pattern target in order to generate the deltas (differences between desired and actual output) of all output and hidden neurons / by recursively computing the local gradient of each neuron. 2) Weight update: for each weight the following steps need to be applied: * The weight's output delta and input activation are multiplied to find the gradient of the weight. * A ratio (percentage) of the weight's gradient is subtracted from the weight. This ratio is also referred to as the learning rate and influences the speed and quality of the learning. Learning is repeated for every new batch until the network performs adequately. | Backpropagation lowers the error of an MLP level by level, recursively backwards. It backpropagates the error from the last layer to the first layer by updating the weights. The updates are determined by the local gradient at each level, which is computed via partial derivatives of the error and the chain rule. | 2 | DigiKlausur | 1 | 0.001001 | 355 |
null | Backpropagation is an algorithm for training a neural network, and it consists of two main stages. The first stage is to compute the actual output given the input; in this stage, the signal flows forward from the input layer to the output layer, and the synaptic weights are fixed. The second stage is to update the synaptic weights by propagating the error signals backward from the output layer in a layer-by-layer manner; for each neuron, the local gradient, the partial derivative of the cost function with respect to the local field, is computed. | Backpropagation lowers the error of an MLP level by level, recursively backwards. It backpropagates the error from the last layer to the first layer by updating the weights. The updates are determined by the local gradient at each level, which is computed via partial derivatives of the error and the chain rule. | 2 | DigiKlausur | 1 | 0.001001 | 356 |
null | Backpropagation usually occurs in a multi-layer perceptron. It uses a non-linear activation function. Basic elements: 1. Functional signals: these are the input signals, which pass through the network from left to right. The name denotes that they perform a useful function at the output of the neuron; another reason for the name is that the functional signals are calculated based on the parameters and the activation function. 2. Error signals: error signals usually propagate in the reverse direction and contain the error based on the desired output. It consists of 2 phases: 1. Forward phase: in the forward phase the signals propagate from left to right. The weights are fixed and the signals pass through all the layers of the network, that is, undergo all the activations. 2. Reverse phase: in the reverse phase, the local gradients are calculated and propagated in the backward direction. Here the weights change. | Backpropagation lowers the error of an MLP level by level, recursively backwards. It backpropagates the error from the last layer to the first layer by updating the weights. The updates are determined by the local gradient at each level, which is computed via partial derivatives of the error and the chain rule. | 1 | DigiKlausur | 0.5 | 0.001522 | 357 |
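
The steepest-descent answers above all describe the same update, $w \leftarrow w - \eta \nabla E(w)$. The following sketch makes that concrete on a toy least-squares cost; the data, the quadratic cost, and the learning rate are illustrative assumptions, not part of the dataset:

```python
import numpy as np

# Toy least-squares cost E(w) = 0.5 * ||X @ w - d||^2 (illustrative data).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))         # input patterns
w_true = np.array([1.0, -2.0, 0.5])  # weights that generated the data
d = X @ w_true                       # desired responses

w = np.zeros(3)   # start from an arbitrary position
eta = 0.01        # learning rate (assumed)

for _ in range(500):
    e = X @ w - d        # residual error
    grad = X.T @ e       # gradient of E with respect to w
    w -= eta * grad      # step opposite the gradient: steepest descent

print(w)  # converges toward w_true as the error approaches its minimum
```

With a learning rate this small the error decreases monotonically; too large a rate produces the zigzag motion and instability that several answers and the reference answer mention.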
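The shattering answers can be checked mechanically: enumerate all $2^N$ labelings of a point set and ask whether some hypothesis attains zero training error on each. A brute-force sketch for 1-D signed-threshold classifiers, a hypothesis class chosen here purely for illustration:

```python
from itertools import product

def threshold_hypotheses(points):
    """Signed 1-D thresholds h(x) = sign(s * (x - t)) -- an illustrative class."""
    ts = sorted(points)
    # candidate thresholds between and around the points
    cands = [ts[0] - 1.0] + [(a + b) / 2 for a, b in zip(ts, ts[1:])] + [ts[-1] + 1.0]
    for t in cands:
        for s in (1, -1):
            yield lambda x, t=t, s=s: 1 if s * (x - t) > 0 else -1

def shatters(points):
    """True iff every one of the 2^N labelings is realized with zero training error."""
    for labels in product((1, -1), repeat=len(points)):
        if not any(all(h(x) == y for x, y in zip(points, labels))
                   for h in threshold_hypotheses(points)):
            return False
    return True

print(shatters([0.0, 1.0]))       # True: all 4 labelings are separable
print(shatters([0.0, 1.0, 2.0]))  # False: (+1, -1, +1) cannot be realized
```

The second call fails on the labeling (+1, -1, +1), so this class shatters at most two points; that maximum is exactly the VC dimension described in the answer above that defines it.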
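Most Widrow-Hoff answers state the update $\Delta w = \eta\, e(n)\, x(n)$ with $e(n) = d(n) - y(n)$. A minimal online (sample-by-sample) sketch, assuming toy linear data and an illustrative learning rate:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))   # inputs x(n), illustrative
w_star = np.array([0.7, -1.3])  # unknown weights behind the desired signal
d = X @ w_star                  # desired responses d(n)

w = np.zeros(2)   # initial weights
eta = 0.05        # learning rate (assumed)

for x, d_n in zip(X, d):
    y = w @ x          # actual response y(n)
    e = d_n - y        # error signal e(n) = d(n) - y(n)
    w += eta * e * x   # Widrow-Hoff / delta rule: dw = eta * e(n) * x(n)

print(w)  # approaches w_star after a single pass over the samples
```

Because each step follows the gradient of the instantaneous squared error, this is the online variant of steepest descent, as several of the answers note.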
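The backpropagation answers describe a forward pass followed by a backward pass with local gradients $\delta_i = \varphi'(v_i)(d_i - y_i)$ at the output and $\delta_i = \varphi'(v_i) \sum_j w_{ji}\, \delta_j$ in hidden layers. A compact sketch of both passes for a 2-2-1 sigmoid network; the XOR data, network size, and learning rate are assumptions for illustration:

```python
import numpy as np

def phi(v):                      # sigmoid activation
    return 1.0 / (1.0 + np.exp(-v))

def phi_prime(v):                # sigmoid derivative: phi'(v) = y * (1 - y)
    y = phi(v)
    return y * (1.0 - y)

rng = np.random.default_rng(42)
W1, b1 = rng.normal(size=(2, 2)), np.zeros(2)  # hidden layer (2 neurons)
W2, b2 = rng.normal(size=(1, 2)), np.zeros(1)  # output layer (1 neuron)
eta = 0.5                                      # learning rate (assumed)

# Toy XOR training pairs, purely illustrative
data = [(np.array([0., 0.]), np.array([0.])),
        (np.array([0., 1.]), np.array([1.])),
        (np.array([1., 0.]), np.array([1.])),
        (np.array([1., 1.]), np.array([0.]))]

for _ in range(10000):
    for x, d in data:
        # forward pass: local fields v and outputs y, layer by layer
        v1 = W1 @ x + b1;  y1 = phi(v1)
        v2 = W2 @ y1 + b2; y2 = phi(v2)

        # backward pass: local gradients via the chain rule
        delta2 = phi_prime(v2) * (d - y2)         # output: phi'(v) * (d - y)
        delta1 = phi_prime(v1) * (W2.T @ delta2)  # hidden: phi'(v) * sum(w * delta)

        # weight updates: eta * (local gradient) * (input to that layer)
        W2 += eta * np.outer(delta2, y1); b2 += eta * delta2
        W1 += eta * np.outer(delta1, x);  b1 += eta * delta1

for x, d in data:
    print(x, d, phi(W2 @ phi(W1 @ x + b1) + b2))  # typically approaches 0, 1, 1, 0
```

The hidden-layer line is where the error is passed backward through $W_2^\top$; with deeper stacks the repeated $\varphi'$ factors shrink the local gradients, which is the vanishing-gradient problem mentioned in the answers above. (Training may occasionally stall in a local minimum, another failure mode the reference answers acknowledge.)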