question
stringclasses
283 values
provided_answer
stringlengths
1
3.56k
reference_answer
stringclasses
486 values
grade
float64
0
100
data_source
stringclasses
7 values
normalized_grade
float64
0
1
weight
float64
0
0.04
index
int64
0
4.97k
null
Given a neighbourhood function $h_{ij}(n)$ and a learning rate over time: randomly assign different weights from the input layer to the neurons in the second layer. For each training point $x_i$ do: - find the winner-takes-all neuron $k$ with $\min_i ||x_i - w_i||$ - find the neighbours of $k$ with the neighbourhood function - compute the new weights for those neurons using the neighbourhood function and the learning rate - update (decrease) the neighbourhood function and the learning rate - end
Arrange the weights in the required topology according to the problem. Initialize the weights randomly such that all the weights are different. Sample the input from the input space. Similarity matching: match the input to a neuron in the topological lattice which becomes the winning neuron. Update the weights of the winning neuron and its neighbours determined by the neighbourhood function. Reduce the neighbourhood and decay the learning rate and share radius. If ordering and convergence are complete, stop. Else continue sampling from the input space.
2
DigiKlausur
1
0.001001
158
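The SOM training loop described in the answers in this section can be sketched in plain Python. This is a minimal, hedged sketch of a 1-D map; the function name `train_som`, the exponential decay schedules, and all parameter values are illustrative assumptions, not taken from the graded answers:

```python
import math
import random

def train_som(data, n_neurons=10, n_iters=200, eta0=0.5, sigma0=3.0):
    """Minimal 1-D SOM sketch: data is a list of floats; returns trained weights."""
    random.seed(0)
    # 1. Initialization: small, distinct random weights
    w = [random.random() for _ in range(n_neurons)]
    for n in range(n_iters):
        # decay the learning rate and neighbourhood width over time
        eta = eta0 * math.exp(-n / n_iters)
        sigma = sigma0 * math.exp(-n / n_iters)
        # 2. Sampling: draw a random input
        x = random.choice(data)
        # 3. Similarity matching: winner = argmin_i |x - w_i|
        i_win = min(range(n_neurons), key=lambda i: abs(x - w[i]))
        # 4. Adaptation: update winner and neighbours with a Gaussian neighbourhood
        for i in range(n_neurons):
            h = math.exp(-((i - i_win) ** 2) / (2 * sigma ** 2))
            w[i] += eta * h * (x - w[i])
    return w

weights = train_som([0.1, 0.2, 0.8, 0.9])
```

Since the update moves each weight a fraction `eta * h < 1` towards the sampled input, the weights stay within the convex hull of their initial values and the data.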
null
null
Arrange the weights in the required topology according to the problem. Initialize the weights randomly such that all the weights are different. Sample the input from the input space. Similarity matching: match the input to a neuron in the topological lattice which becomes the winning neuron. Update the weights of the winning neuron and its neighbours determined by the neighbourhood function. Reduce the neighbourhood and decay the learning rate and share radius. If ordering and convergence are complete, stop. Else continue sampling from the input space.
0
DigiKlausur
0
0.003546
159
null
1: w = init_weights() // zero or randomly initialized 2: n = 0 3: WHILE !stop_criteria() 4: winner_neuron, y = find_winner(x, w) // find which neuron on the map layer is closest to the input (Euclidean distance) 5: neighborhood = define_neighborhood(winner_neuron, n) // define the neighborhood size (big in the first iterations, then reduced) 6: eta = define_learning_rate(n) // define the learning rate (large in the first iterations, then reduced) 7: diff_w = adapt_weights(neighborhood, eta) // adapt the weights only for the winner neuron and its neighborhood 8: w = w + diff_w // update the weights 9: stop_criteria = must_stop(y, x) // check whether the distance between input and winner neuron is 0 (or very close to 0) 10: END
Arrange the weights in the required topology according to the problem. Initialize the weights randomly such that all the weights are different. Sample the input from the input space. Similarity matching: match the input to a neuron in the topological lattice which becomes the winning neuron. Update the weights of the winning neuron and its neighbours determined by the neighbourhood function. Reduce the neighbourhood and decay the learning rate and share radius. If ordering and convergence are complete, stop. Else continue sampling from the input space.
2
DigiKlausur
1
0.001001
160
null
- Randomly define some values for the synaptic connections in the network - send the first input to the network - in the output layer (map layer), select the neuron that has the lowest error (competition phase) - based on a predefined method, define the neighborhood of the selected neuron (cooperation phase) - change the weights of the selected neuron and the neurons located in its neighborhood (adaptation phase) - if the stop condition is satisfied, stop the process
Arrange the weights in the required topology according to the problem. Initialize the weights randomly such that all the weights are different. Sample the input from the input space. Similarity matching: match the input to a neuron in the topological lattice which becomes the winning neuron. Update the weights of the winning neuron and its neighbours determined by the neighbourhood function. Reduce the neighbourhood and decay the learning rate and share radius. If ordering and convergence are complete, stop. Else continue sampling from the input space.
2
DigiKlausur
1
0.001001
161
null
Has three parts in it - Competition, Cooperation, Adaptation. Get the input variables and choose the number of neurons to be larger than the number of variables, then run competition, where the neurons compete with each other over which fits the input best. After finding the winning neuron, change the weights of the neighbouring neurons only. In cooperation, the weights of neighbouring neurons are adjusted towards clusters; in adaptation, neurons are pulled towards the input variables to establish the classification.
Arrange the weights in the required topology according to the problem. Initialize the weights randomly such that all the weights are different. Sample the input from the input space. Similarity matching: match the input to a neuron in the topological lattice which becomes the winning neuron. Update the weights of the winning neuron and its neighbours determined by the neighbourhood function. Reduce the neighbourhood and decay the learning rate and share radius. If ordering and convergence are complete, stop. Else continue sampling from the input space.
0
DigiKlausur
0
0.003546
162
null
- Find the winning neuron - Find the neighbors of the winning neuron.
Arrange the weights in the required topology according to the problem. Initialize the weights randomly such that all the weights are different. Sample the input from the input space. Similarity matching: match the input to a neuron in the topological lattice which becomes the winning neuron. Update the weights of the winning neuron and its neighbours determined by the neighbourhood function. Reduce the neighbourhood and decay the learning rate and share radius. If ordering and convergence are complete, stop. Else continue sampling from the input space.
0
DigiKlausur
0
0.003546
163
null
**Pseudo code** + 1. Initialize map neurons; based on the topology this could be on a lattice, on a circle, etc. + 2. Competition: find the map neuron that is closest to an input neuron by computing distances $d$. + 3. Update the position of the closest map neuron with the update rule. + 4. Do 2 and 3 until all input neurons are assigned a map neuron. + Do 2, 3 and 4 for a specified number of iterations, or until the net cumulative distance goes below some specified value or becomes zero.
Arrange the weights in the required topology according to the problem. Initialize the weights randomly such that all the weights are different. Sample the input from the input space. Similarity matching: match the input to a neuron in the topological lattice which becomes the winning neuron. Update the weights of the winning neuron and its neighbours determined by the neighbourhood function. Reduce the neighbourhood and decay the learning rate and share radius. If ordering and convergence are complete, stop. Else continue sampling from the input space.
2
DigiKlausur
1
0.001001
164
null
Procedure trainSOM; begin: randomize weights for all neurons; for (i = 1 to iteration_number) do: begin: take a random input pattern; find the winning neuron; find the neighbors of the winner; modify the synaptic weights of these neurons; reduce the learning rate and lambda; end; end;
Arrange the weights in the required topology according to the problem. Initialize the weights randomly such that all the weights are different. Sample the input from the input space. Similarity matching: match the input to a neuron in the topological lattice which becomes the winning neuron. Update the weights of the winning neuron and its neighbours determined by the neighbourhood function. Reduce the neighbourhood and decay the learning rate and share radius. If ordering and convergence are complete, stop. Else continue sampling from the input space.
2
DigiKlausur
1
0.001001
165
null
Initialize the network with small and random weights. Sample the data set by picking an input randomly. > Determine the winning neuron based on the output value. > Determine the cooperating neurons using the neighborhood function. > Update the weights of the cooperating neurons. > Adjust the learning rate. > Stop if the network converges.
Arrange the weights in the required topology according to the problem. Initialize the weights randomly such that all the weights are different. Sample the input from the input space. Similarity matching: match the input to a neuron in the topological lattice which becomes the winning neuron. Update the weights of the winning neuron and its neighbours determined by the neighbourhood function. Reduce the neighbourhood and decay the learning rate and share radius. If ordering and convergence are complete, stop. Else continue sampling from the input space.
2
DigiKlausur
1
0.001001
166
null
Begin: n = range of data set. Initialise the weights. # We give small random weights. For the range of n: select an input signal, find the winning neuron based on the similarity between the weights, update the weights of the neighbouring neurons. Repeat until convergence. 1. Initialising 2. Sampling 3. Similarity matching 4. Updating the weights 5. Continuation.
Arrange the weights in the required topology according to the problem. Initialize the weights randomly such that all the weights are different. Sample the input from the input space. Similarity matching: match the input to a neuron in the topological lattice which becomes the winning neuron. Update the weights of the winning neuron and its neighbours determined by the neighbourhood function. Reduce the neighbourhood and decay the learning rate and share radius. If ordering and convergence are complete, stop. Else continue sampling from the input space.
2
DigiKlausur
1
0.001001
167
null
initialise weights <br> while (significant change is observed in the topographic pattern) {<br> take a random input (sampling) <br> find the winning output neuron (competition) <br> adjust the weights of the winning neuron and its neighbourhood neurons (cooperation) <br> continue <br> }
Arrange the weights in the required topology according to the problem. Initialize the weights randomly such that all the weights are different. Sample the input from the input space. Similarity matching: match the input to a neuron in the topological lattice which becomes the winning neuron. Update the weights of the winning neuron and its neighbours determined by the neighbourhood function. Reduce the neighbourhood and decay the learning rate and share radius. If ordering and convergence are complete, stop. Else continue sampling from the input space.
2
DigiKlausur
1
0.001001
168
null
SOM learning: - Initialization with small random weights. - Sampling: picking an input pattern with a certain probability. - Similarity matching: finding the best-matching neuron, i.e., the winning neuron. - Synaptic update: updating the weights of the neuron and also the neurons in its neighbourhood. - Continuation: repeat steps 2 to 4 till there is no considerable change in the map.
Arrange the weights in the required topology according to the problem. Initialize the weights randomly such that all the weights are different. Sample the input from the input space. Similarity matching: match the input to a neuron in the topological lattice which becomes the winning neuron. Update the weights of the winning neuron and its neighbours determined by the neighbourhood function. Reduce the neighbourhood and decay the learning rate and share radius. If ordering and convergence are complete, stop. Else continue sampling from the input space.
2
DigiKlausur
1
0.001001
169
null
Parameters: $X$ data vectors, $W$ weight vectors in the lattice, $\eta(n)$ learning rate, $\sigma(n)$ neighbourhood width, $h_{j,i(x)}$ neighbourhood function. Algorithm: 1) Initialize the weights to small, random, non-repeating values. 2) Sample a data vector with a probability. 3) Compute the Euclidean distance from the data point to the weight vectors and find the winning neuron with minimum distance. 4) Update the weights of the winning neuron and its neighbourhood towards the input direction using the neighbourhood function. 5) Reduce the learning rate and the neighbourhood width and iterate from step 2 until no significant changes between weight vectors and inputs are seen.
Arrange the weights in the required topology according to the problem. Initialize the weights randomly such that all the weights are different. Sample the input from the input space. Similarity matching: match the input to a neuron in the topological lattice which becomes the winning neuron. Update the weights of the winning neuron and its neighbours determined by the neighbourhood function. Reduce the neighbourhood and decay the learning rate and share radius. If ordering and convergence are complete, stop. Else continue sampling from the input space.
2
DigiKlausur
1
0.001001
170
null
1. Initialization: initialize the weight vectors with random values such that $w_j(0)$ is different for all weights. 2. Sampling: draw a sample example $x$ from the input space. 3. Similarity matching: find the best-matching weight vector for the input vector: $i(x) = \arg\min_i ||x - w_i(n)||$ 4. Adjust the weight vectors of neurons in the neighbourhood of the winning neuron. 5. Go to the sampling step and repeat until no more changes are observed in the local neighbourhood of the winning neuron.
Arrange the weights in the required topology according to the problem. Initialize the weights randomly such that all the weights are different. Sample the input from the input space. Similarity matching: match the input to a neuron in the topological lattice which becomes the winning neuron. Update the weights of the winning neuron and its neighbours determined by the neighbourhood function. Reduce the neighbourhood and decay the learning rate and share radius. If ordering and convergence are complete, stop. Else continue sampling from the input space.
2
DigiKlausur
1
0.001001
171
null
1. Initialization: initialize the weights of each neuron to small random values such that the weight of each neuron is different. 2. Sampling: sample an input from the input set. 3. Similarity matching: determine the neuron nearest to the sampled input based on its distance. 4. Weight update: update the weights of the neighbouring neurons chosen by the neighbourhood function $h_{ij}(n)$. 5. Continuation: continue from sampling until there is no more change in the weights.
Arrange the weights in the required topology according to the problem. Initialize the weights randomly such that all the weights are different. Sample the input from the input space. Similarity matching: match the input to a neuron in the topological lattice which becomes the winning neuron. Update the weights of the winning neuron and its neighbours determined by the neighbourhood function. Reduce the neighbourhood and decay the learning rate and share radius. If ordering and convergence are complete, stop. Else continue sampling from the input space.
2
DigiKlausur
1
0.001001
172
null
SOM refers to self-organizing maps, an unsupervised training algorithm for finding spatial patterns in data without using any external help. The process in SOM is explained below: - Initialization: initialize random weights $w_j$ for the input patterns. - Sampling: take the nth random sample from the input (say $x$). - Similarity matching: for the input $x$, find the best match in the weight vectors: $i(x) = \arg\min_j ||x - w_j||$. - Update: the next step is to update the weights: $w(n+1) = w(n) + \eta\, h_{j,i(x)}\, (x - w(n))$. - Continuation: continue from sampling until there is no significant change in the feature map.
Arrange the weights in the required topology according to the problem. Initialize the weights randomly such that all the weights are different. Sample the input from the input space. Similarity matching: match the input to a neuron in the topological lattice which becomes the winning neuron. Update the weights of the winning neuron and its neighbours determined by the neighbourhood function. Reduce the neighbourhood and decay the learning rate and share radius. If ordering and convergence are complete, stop. Else continue sampling from the input space.
2
DigiKlausur
1
0.001001
173
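The weight-update rule that several answers in this section write as $w(n+1) = w(n) + \eta\, h\, (x - w(n))$ can be checked with one concrete step; the numbers below are made up purely for illustration:

```python
eta = 0.5   # made-up learning rate
h = 1.0     # neighbourhood value for the winning neuron itself
w = 0.2     # made-up current weight
x = 1.0     # made-up sampled input

# one SOM update step: move the weight a fraction eta*h of the way towards the input
w_new = w + eta * h * (x - w)
print(w_new)  # approximately 0.6
```

With `eta * h = 0.5`, the weight moves exactly half the distance from 0.2 towards 1.0, which matches the intuition that early (large) learning rates pull neurons strongly towards the data.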
null
Initialization: set small random values for the weights such that $w_j$ is different for each neuron. Sampling: draw the n-th sample $x$ from the input space. Competition: identify the winning neuron $i$ using $\arg\min_i ||x - w_i||$, which means the weight vector of $i$ is most similar to the input. Cooperation: identify the neighbors of the winning neuron $i$ using the neighborhood function $h_{j,i(x)}(n)$, which shrinks with time. Weight adaptation: adjustments are made to the synaptic weights of the winning neuron and its neighbors. Go to sampling until there are no large changes in the feature map.
Arrange the weights in the required topology according to the problem. Initialize the weights randomly such that all the weights are different. Sample the input from the input space. Similarity matching: match the input to a neuron in the topological lattice which becomes the winning neuron. Update the weights of the winning neuron and its neighbours determined by the neighbourhood function. Reduce the neighbourhood and decay the learning rate and share radius. If ordering and convergence are complete, stop. Else continue sampling from the input space.
2
DigiKlausur
1
0.001001
174
null
GENERATE random weights for all neurons<br> FOR i to maxiteration DO<br> ------TAKE random input pattern<br> ------FIND the winning neuron<br> ------FIND the neighbors of the winning neuron<br> ------COMPUTE weights of these neurons<br> ------REDUCE $\eta$ and $\lambda$<br> END FOR
Arrange the weights in the required topology according to the problem. Initialize the weights randomly such that all the weights are different. Sample the input from the input space. Similarity matching: match the input to a neuron in the topological lattice which becomes the winning neuron. Update the weights of the winning neuron and its neighbours determined by the neighbourhood function. Reduce the neighbourhood and decay the learning rate and share radius. If ordering and convergence are complete, stop. Else continue sampling from the input space.
2
DigiKlausur
1
0.001001
175
null
* Initialize the neuron weights randomly in a way that all neurons have different weights. * Generate random samples x from the input space. * Iterate over the samples and **compare the distance between the current input and all neurons** in the weight space. * **Find the winning neuron** with the shortest distance from the current input. * The distance is calculated using **Euclidean or Manhattan distance**. * Find the neurons within the neighborhood boundary of the winning neuron. * **Update the weights** of the neighborhood neurons using the delta rule. * Adapt the size of the neighborhood $(\lambda)$ and the learning rate $(\eta)$ at each iteration. * Repeat the process until there are no neurons in the neighborhood boundary or all the inputs have moved to some neuron.
Arrange the weights in the required topology according to the problem. Initialize the weights randomly such that all the weights are different. Sample the input from the input space. Similarity matching: match the input to a neuron in the topological lattice which becomes the winning neuron. Update the weights of the winning neuron and its neighbours determined by the neighbourhood function. Reduce the neighbourhood and decay the learning rate and share radius. If ordering and convergence are complete, stop. Else continue sampling from the input space.
2
DigiKlausur
1
0.001001
176
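The Euclidean winner selection described in these answers is a one-liner in Python; the weight vectors and input below are made-up illustration values:

```python
import math

neurons = [(0.0, 0.0), (1.0, 1.0), (0.5, 0.2)]  # made-up map-layer weight vectors
x = (0.6, 0.1)                                   # made-up sampled input

# competition: the winner is the index of the neuron whose weight vector is closest to x
winner = min(range(len(neurons)), key=lambda i: math.dist(x, neurons[i]))
print(winner)  # -> 2 (the neuron at (0.5, 0.2))
```

`math.dist` computes the Euclidean distance; swapping it for a Manhattan distance (`sum(abs(a - b) for a, b in zip(x, n))`) gives the alternative metric mentioned above.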
null
Step1: It selects a datapoint in random through sampling. Step2: Finds the nearest neuron through competitive learning. Step3: Updates the weight of the winner neuron and updates the weight of neighbouring neurons by a fraction. Step4: Continues steps 1, 2, 3 until there is no change in the weights or some stopping criteria is met.
Arrange the weights in the required topology according to the problem. Initialize the weights randomly such that all the weights are different. Sample the input from the input space. Similarity matching: match the input to a neuron in the topological lattice which becomes the winning neuron. Update the weights of the winning neuron and its neighbours determined by the neighbourhood function. Reduce the neighbourhood and decay the learning rate and share radius. If ordering and convergence are complete, stop. Else continue sampling from the input space.
2
DigiKlausur
1
0.001001
177
null
{ take a random point from the training data. Competitive phase: find the winning neuron - the neuron with the most similar features, using the Euclidean distance formula. Cooperative phase: find the neighbors of the winning neuron based on the neighborhood function (e.g. a Gaussian function). Adaptation phase: change the weights of all the neighboring neurons of the winning node using the formula $\Delta w = \eta (x_j - w_j)$ }
Arrange the weights in the required topology according to the problem. Initialize the weights randomly such that all the weights are different. Sample the input from the input space. Similarity matching: match the input to a neuron in the topological lattice which becomes the winning neuron. Update the weights of the winning neuron and its neighbours determined by the neighbourhood function. Reduce the neighbourhood and decay the learning rate and share radius. If ordering and convergence are complete, stop. Else continue sampling from the input space.
2
DigiKlausur
1
0.001001
178
null
input: distance function d(x, y), learning rate mu, neighborhood distance n Initialize the map layer with random weights for each input: find the weight which is closest to the input (minimum d(x, y)) change the weight in the direction of the input depending on the learning rate change all weights which are within the neighborhood distance n depending on their distance and the learning rate reduce learning rate and neighborhood distance
Arrange the weights in the required topology according to the problem. Initialize the weights randomly such that all the weights are different. Sample the input from the input space. Similarity matching: match the input to a neuron in the topological lattice which becomes the winning neuron. Update the weights of the winning neuron and its neighbours determined by the neighbourhood function. Reduce the neighbourhood and decay the learning rate and share radius. If ordering and convergence are complete, stop. Else continue sampling from the input space.
2
DigiKlausur
1
0.001001
179
null
1. Initialize small random weights. 2. Draw the nth sample from the input space. 3. Similarity matching: determine the winning neuron. 4. Update the weights of the neuron and the topological neighborhood. 5. Repeat steps 2-4. w = random x = example.draw() w_win = get_nearest(w, x) hn = get_neighborhood(w_win) for wi in hn: wi = wi + $\eta$*h*(x - wi)
Arrange the weights in the required topology according to the problem. Initialize the weights randomly such that all the weights are different. Sample the input from the input space. Similarity matching: match the input to a neuron in the topological lattice which becomes the winning neuron. Update the weights of the winning neuron and its neighbours determined by the neighbourhood function. Reduce the neighbourhood and decay the learning rate and share radius. If ordering and convergence are complete, stop. Else continue sampling from the input space.
2
DigiKlausur
1
0.001001
180
null
Initialize the weights randomly. Create terms T1 and T2, which decrease the learning rate and the neighbourhood function respectively. Calculate $i(x) = \arg\min_j ||w_j - x||$, the weight which is closest to the received input data. $i(x)$ is the neuron which wins the competitive process; this neuron's and its neighbours' weights are updated using $w_{new} = w_{old} - \eta \cdot h \cdot (w_{old} - x)$. $h$ is the neighbourhood function which determines which neurons are updated and how strongly they are changed by the update. It is defined using the distance between the neurons. The learning rate is updated using $\eta / T1$; the neighbourhood function is updated in the same way using T2. The learning rate cannot get lower than 0.01, while the neighbourhood can shrink until it contains only the winning neuron. So in the beginning almost every neuron is updated, and at the end only a small neighbourhood or the neuron itself is updated.
Arrange the weights in the required topology according to the problem. Initialize the weights randomly such that all the weights are different. Sample the input from the input space. Similarity matching: match the input to a neuron in the topological lattice which becomes the winning neuron. Update the weights of the winning neuron and its neighbours determined by the neighbourhood function. Reduce the neighbourhood and decay the learning rate and share radius. If ordering and convergence are complete, stop. Else continue sampling from the input space.
2
DigiKlausur
1
0.001001
181
null
for n iterations winner = competitionbetweenneurons() neighbourhood = cooperationwithneighbourhoodfunction(winner) updateweights(neighbourhood)
Arrange the weights in the required topology according to the problem. Initialize the weights randomly such that all the weights are different. Sample the input from the input space. Similarity matching: match the input to a neuron in the topological lattice which becomes the winning neuron. Update the weights of the winning neuron and its neighbours determined by the neighbourhood function. Reduce the neighbourhood and decay the learning rate and share radius. If ordering and convergence are complete, stop. Else continue sampling from the input space.
1
DigiKlausur
0.5
0.001522
182
null
Given a map layer, set random small values for the weights from the input to the map layer. Repeat while not converged: find the best match of the input value and the weights of the neurons (competitive process); adapt (increase) the weights of the winning neuron and its neighbourhood (with a Gaussian function and neighbourhood size) (cooperative process and weight adjustment); decrease the neighbourhood size.
Arrange the weights in the required topology according to the problem. Initialize the weights randomly such that all the weights are different. Sample the input from the input space. Similarity matching: match the input to a neuron in the topological lattice which becomes the winning neuron. Update the weights of the winning neuron and its neighbours determined by the neighbourhood function. Reduce the neighbourhood and decay the learning rate and share radius. If ordering and convergence are complete, stop. Else continue sampling from the input space.
2
DigiKlausur
1
0.001001
183
null
null
Arrange the weights in the required topology according to the problem. Initialize the weights randomly such that all the weights are different. Sample the input from the input space. Similarity matching: match the input to a neuron in the topological lattice which becomes the winning neuron. Update the weights of the winning neuron and its neighbours determined by the neighbourhood function. Reduce the neighbourhood and decay the learning rate and share radius. If ordering and convergence are complete, stop. Else continue sampling from the input space.
0
DigiKlausur
0
0.003546
184
null
Initialize weight vectors of the hidden neurons with the same dimension as the data. The number of hidden neurons should be significantly greater than the number of data points. Initialize the learning rate n and the neighbouring function h. while (rate of change in weights is significant): for every datapoint: calculate the distance of each neuron from the data. Select the winner neuron w with minimum distance (maximum similarity). error = distance of winner from datapoint. Adjust the weights of the neurons with the rule w = w + n*h*error
Arrange the weights in the required topology according to the problem. Initialize the weights randomly such that all the weights are different. Sample the input from the input space. Similarity matching: match the input to a neuron in the topological lattice which becomes the winning neuron. Update the weights of the winning neuron and its neighbours determined by the neighbourhood function. Reduce the neighbourhood and decay the learning rate and share radius. If ordering and convergence are complete, stop. Else continue sampling from the input space.
2
DigiKlausur
1
0.001001
185
null
Randomly initialize the weights. Draw a sample of inputs. Increase the weights of the local neighbourhood of the winning neuron. Repeat the above process till there is only one winning neuron.
Arrange the weights in the required topology according to the problem. Initialize the weights randomly such that all the weights are different. Sample the input from the input space. Similarity matching: match the input to a neuron in the topological lattice which becomes the winning neuron. Update the weights of the winning neuron and its neighbours determined by the neighbourhood function. Reduce the neighbourhood and decay the learning rate and share radius. If ordering and convergence are complete, stop. Else continue sampling from the input space.
1
DigiKlausur
0.5
0.001522
186
null
null
Arrange the weights in the required topology according to the problem. Initialize the weights randomly such that all the weights are different. Sample the input from the input space. Similarity matching: match the input to a neuron in the topological lattice which becomes the winning neuron. Update the weights of the winning neuron and its neighbours determined by the neighbourhood function. Reduce the neighbourhood and decay the learning rate and share radius. If ordering and convergence are complete, stop. Else continue sampling from the input space.
0
DigiKlausur
0
0.003546
187
null
for i in num_of_epochs: for p in input_points: find the winning neuron; find the neighbours of the winning neuron within distance sigma; update the winning neuron's and neighbours' weights; update sigma and the learning rate so that both reduce over time
Arrange the weights in the required topology according to the problem. Initialize the weights randomly such that all the weights are different. Sample the input from the input space. Similarity matching: match the input to a neuron in the topological lattice which becomes the winning neuron. Update the weights of the winning neuron and its neighbours determined by the neighbourhood function. Reduce the neighbourhood and decay the learning rate and share radius. If ordering and convergence are complete, stop. Else continue sampling from the input space.
2
DigiKlausur
1
0.001001
188
null
null
Arrange the weights in the required topology according to the problem. Initialize the weights randomly such that all the weights are different. Sample the input from the input space. Similarity matching: match the input to a neuron in the topological lattice which becomes the winning neuron. Update the weights of the winning neuron and its neighbours determined by the neighbourhood function. Reduce the neighbourhood and decay the learning rate and share radius. If ordering and convergence are complete, stop. Else continue sampling from the input space.
0
DigiKlausur
0
0.003546
189
null
A support vector machine is a maximum-margin classifier in which the width of the boundary of separation is maximized. A margin is defined as the width of the boundary before hitting a point. This maximum margin intuitively feels safe, and works well experimentally.
SVMs are linear learnable machines in the simplest case. It uses a decision boundary with maximum margin to classify the data into different classes. The data points which are near the decision boundary are called support vectors and the margin is determined based on these points. Kernels are used to separate non-linearly separable data and the algorithm is solved by using Quadratic Programming.
1
DigiKlausur
0.5
0.001522
190
null
The basic idea of an SVM is to best segregate the data into two classes with the help of a decision boundary. Around this decision boundary is a margin; we always try to maximize the margin to make sure the data is classified correctly.
SVMs are linear learnable machines in the simplest case. It uses a decision boundary with maximum margin to classify the data into different classes. The data points which are near the decision boundary are called support vectors and the margin is determined based on these points. Kernels are used to separate non-linearly separable data and the algorithm is solved by using Quadratic Programming.
1
DigiKlausur
0.5
0.001522
191
null
The goal of Support Vector Machines is to maximize the margin between the closest data points and the separating hyperplane. The separating hyperplane is given by: 0 = w(n)*x(n) + b. By maximizing the margin, the probability of classification errors is reduced.
SVMs are linear learnable machines in the simplest case. It uses a decision boundary with maximum margin to classify the data into different classes. The data points which are near the decision boundary are called support vectors and the margin is determined based on these points. Kernels are used to separate non-linearly separable data and the algorithm is solved by using Quadratic Programming.
1
DigiKlausur
0.5
0.001522
192
null
An SVM is a binary, linear classifier spanning a separating hyperplane between two classes of datapoints. The hyperplane is spanned between both the positive and negative decision boundaries, and supported by a number of support vectors. Support vectors are the outermost datapoints which span the hyperplane. During training, the distance of falsely classified data points to their correct side of the hyperplane is minimized, utilizing a quadratic programming formulation.
SVMs are linear learnable machines in the simplest case. It uses a decision boundary with maximum margin to classify the data into different classes. The data points which are near the decision boundary are called support vectors and the margin is determined based on these points. Kernels are used to separate non-linearly separable data and the algorithm is solved by using Quadratic Programming.
2
DigiKlausur
1
0.001001
193
null
A SVM is a binary classifier with a maximum width boundary separating the two classes. This uses support vectors (vectors that push against the boundaries). The equations of the lines in an SVM are: - $wx+b>=1$: for class 1 - $wx+b<=-1$: for class -1 - M is the width between these boundaries.
SVMs are linear learnable machines in the simplest case. It uses a decision boundary with maximum margin to classify the data into different classes. The data points which are near the decision boundary are called support vectors and the margin is determined based on these points. Kernels are used to separate non-linearly separable data and the algorithm is solved by using Quadratic Programming.
1
DigiKlausur
0.5
0.001522
194
null
Because SVMs are binary classifiers we can use a border to separate the data. The border is typically placed where it has the largest possible distance to both classes. The vectors the border touches on both sides with its margin are the support vectors.
SVMs are linear learnable machines in the simplest case. It uses a decision boundary with maximum margin to classify the data into different classes. The data points which are near the decision boundary are called support vectors and the margin is determined based on these points. Kernels are used to separate non-linearly separable data and the algorithm is solved by using Quadratic Programming.
1
DigiKlausur
0.5
0.001522
195
null
An SVM is an ANN for supervised learning which is able to separate two classes of data points by using a hyperplane found by quadratic programming, by finding the biggest margin. The goal is to classify future data into these two classes.
SVMs are linear learnable machines in the simplest case. It uses a decision boundary with maximum margin to classify the data into different classes. The data points which are near the decision boundary are called support vectors and the margin is determined based on these points. Kernels are used to separate non-linearly separable data and the algorithm is solved by using Quadratic Programming.
1
DigiKlausur
0.5
0.001522
196
null
The SVM is a maximum margin classifier. It is used to binary-classify datapoints in a dichotomy. The idea is to find a line which linearly separates both classes. The perfect position of this line is right in the middle of these classes. To find this line (decision boundary) we define a positive and a negative boundary which are parallel to this line. The boundaries define the margin between both classes. The idea of SVM is that the datapoints which are next to the boundary can be used to define the margin. They are called support vectors. Additionally, not every problem is linearly separable, so the idea was to transform the input into many higher dimensions using some kernel functions. We discussed the kernel function of polynomial terms and found out that it is easy to compute.
SVMs are linear learnable machines in the simplest case. It uses a decision boundary with maximum margin to classify the data into different classes. The data points which are near the decision boundary are called support vectors and the margin is determined based on these points. Kernels are used to separate non-linearly separable data and the algorithm is solved by using Quadratic Programming.
2
DigiKlausur
1
0.001001
197
null
Support Vector Machines are a type of learning machine that tries to classify different classes of an input space. For linearly separable classes, the SVM tries to calculate the line that separates these two classes with maximum margin. The support vectors will be the points closest to this margin. When the input data is noisy, we have an optimization problem with two aspects (maximum margin, proper classification). So, a trade-off (C) will be defined. The trade-off will be calculated by the sum of the distances of misclassified points. For non-linearly separable classes, a kernel will be defined that will transform the input data into a higher-dimensional space.
SVMs are linear learnable machines in the simplest case. It uses a decision boundary with maximum margin to classify the data into different classes. The data points which are near the decision boundary are called support vectors and the margin is determined based on these points. Kernels are used to separate non-linearly separable data and the algorithm is solved by using Quadratic Programming.
2
DigiKlausur
1
0.001001
198
null
In linear SVM we have a linear borderline classifier that separates two different classes (positive and negative planes), and we calculate the distance of the data points from this borderline classifier. Also a margin will be defined, and this margin will be maximized until it touches some data points in the plane. The data points that the margin is pushed against will be our support vectors. The error for the wrongly classified data points will be calculated by calculating the distance of the data point from its correct plane. The SVM tries to learn the classifier and the margin from the training data.
SVMs are linear learnable machines in the simplest case. It uses a decision boundary with maximum margin to classify the data into different classes. The data points which are near the decision boundary are called support vectors and the margin is determined based on these points. Kernels are used to separate non-linearly separable data and the algorithm is solved by using Quadratic Programming.
2
DigiKlausur
1
0.001001
199
null
Support vector machines are classifiers that use support vectors, which are variables of the dataset. These variables are chosen during the learning algorithm. The main advantage of SVMs is that they will not overfit when the correct margin is chosen. Activation functions can be both linear and nonlinear. The output of an SVM is always TRUE or FALSE for a given variable.
SVMs are linear learnable machines in the simplest case. It uses a decision boundary with maximum margin to classify the data into different classes. The data points which are near the decision boundary are called support vectors and the margin is determined based on these points. Kernels are used to separate non-linearly separable data and the algorithm is solved by using Quadratic Programming.
1
DigiKlausur
0.5
0.001522
200
null
Support vector machines are a type of neural network that build a decision boundary around classes such that the margin of separation between classes is maximized.
SVMs are linear learnable machines in the simplest case. It uses a decision boundary with maximum margin to classify the data into different classes. The data points which are near the decision boundary are called support vectors and the margin is determined based on these points. Kernels are used to separate non-linearly separable data and the algorithm is solved by using Quadratic Programming.
0
DigiKlausur
0
0.003546
201
null
SVMs are binary classifiers. They learn the classification by memorizing the marginal data points (called support vectors) that make up the two decision boundaries (positive and negative).
SVMs are linear learnable machines in the simplest case. It uses a decision boundary with maximum margin to classify the data into different classes. The data points which are near the decision boundary are called support vectors and the margin is determined based on these points. Kernels are used to separate non-linearly separable data and the algorithm is solved by using Quadratic Programming.
1
DigiKlausur
0.5
0.001522
202
null
The abbreviation SVM stands for Support Vector Machine. SVMs represent a feedforward category of NN. SVMs are binary learning machines whose functionality can be summarized for a classification problem as follows: given a training sample, the SVM constructs a hyperplane as the decision surface in such a way that the margin of separation between positive and negative examples is maximized. One key innovation associated with SVMs is the kernel trick. The kernel trick consists of observing that many machine learning algorithms can be written exclusively in terms of dot products between examples. It allows us to learn models that are nonlinear as a function of x using convex optimization techniques that are guaranteed to converge efficiently. Besides, the kernel function k often admits an implementation that is significantly more computationally efficient than naively constructing two vectors and explicitly taking their dot product.
SVMs are linear learnable machines in the simplest case. It uses a decision boundary with maximum margin to classify the data into different classes. The data points which are near the decision boundary are called support vectors and the margin is determined based on these points. Kernels are used to separate non-linearly separable data and the algorithm is solved by using Quadratic Programming.
2
DigiKlausur
1
0.001001
203
null
An SVM, or support vector machine, is a feedforward network with a hidden layer to learn a task in a supervised learning manner. The network tries to construct a hyperplane that separates the data points of two different classes by maximizing the margin of separation, which is the distance from the hyperplane to the closest data points called support vectors.
SVMs are linear learnable machines in the simplest case. It uses a decision boundary with maximum margin to classify the data into different classes. The data points which are near the decision boundary are called support vectors and the margin is determined based on these points. Kernels are used to separate non-linearly separable data and the algorithm is solved by using Quadratic Programming.
1
DigiKlausur
0.5
0.001522
204
null
Given a dataset, support vector machines build a hyperplane in such a way that positive and negative samples are separated to the maximum distance; the width of the margin should be maximum. The vectors against which the margins (for positive and negative samples) are pushed are called support vectors.
SVMs are linear learnable machines in the simplest case. It uses a decision boundary with maximum margin to classify the data into different classes. The data points which are near the decision boundary are called support vectors and the margin is determined based on these points. Kernels are used to separate non-linearly separable data and the algorithm is solved by using Quadratic Programming.
1
DigiKlausur
0.5
0.001522
205
null
SVM stands for Support Vector Machine. It creates a hyperplane such that margin of separation between positive and negative classes is maximised.
SVMs are linear learnable machines in the simplest case. It uses a decision boundary with maximum margin to classify the data into different classes. The data points which are near the decision boundary are called support vectors and the margin is determined based on these points. Kernels are used to separate non-linearly separable data and the algorithm is solved by using Quadratic Programming.
1
DigiKlausur
0.5
0.001522
206
null
SVM is a linear machine whose goal is to construct an optimal hyperplane such that the marginal separation between the decision boundaries is maximal. The decision boundaries are drawn parallel to the hyperplane and just push against the datapoints closest to the hyperplane. The datapoints closest to the hyperplane are called support vectors.
SVMs are linear learnable machines in the simplest case. It uses a decision boundary with maximum margin to classify the data into different classes. The data points which are near the decision boundary are called support vectors and the margin is determined based on these points. Kernels are used to separate non-linearly separable data and the algorithm is solved by using Quadratic Programming.
1
DigiKlausur
0.5
0.001522
207
null
The idea of SVM is to fit a supervised model onto the training data allowing maximum generalization ability. This is done by computing the maximum margin between different classes of data using the support vectors. The margin can be computed using different kernels for higher-dimensional data.
SVMs are linear learnable machines in the simplest case. It uses a decision boundary with maximum margin to classify the data into different classes. The data points which are near the decision boundary are called support vectors and the margin is determined based on these points. Kernels are used to separate non-linearly separable data and the algorithm is solved by using Quadratic Programming.
2
DigiKlausur
1
0.001001
208
null
A SVM is a linear machine which is used in pattern classification problems to find a decision surface in the form of a hyperplane for linearly separable classes such that the margin of separation between the classes is as large as possible. SVMs are an approximate implementation of the induction principle of structural risk minimization, which is based on the fact that the error rate in testing is bounded by a term that is dependent upon the sum of the training error rate and the VC dimension of h.
SVMs are linear learnable machines in the simplest case. It uses a decision boundary with maximum margin to classify the data into different classes. The data points which are near the decision boundary are called support vectors and the margin is determined based on these points. Kernels are used to separate non-linearly separable data and the algorithm is solved by using Quadratic Programming.
1
DigiKlausur
0.5
0.001522
209
null
The basic idea of SVM is to determine the best decision boundary, i.e. the one which provides maximum margin so that the boundary can be widened most before it touches any datapoint. It is done using support vectors, which are the datapoints the margin pushes against.
SVMs are linear learnable machines in the simplest case. It uses a decision boundary with maximum margin to classify the data into different classes. The data points which are near the decision boundary are called support vectors and the margin is determined based on these points. Kernels are used to separate non-linearly separable data and the algorithm is solved by using Quadratic Programming.
2
DigiKlausur
1
0.001001
210
null
SVM refers to support vector machines. In terms of a linear classification problem, SVM can be defined as creating a hyperplane, which is a decision surface, and maximizing the width of the decision boundary. In cases where the problem is complex, SVM can be used as it classifies the data by projecting the data into a higher dimension. If the data is to be separated into 3 classes, one can use 3 SVMs for the three different classes.
SVMs are linear learnable machines in the simplest case. It uses a decision boundary with maximum margin to classify the data into different classes. The data points which are near the decision boundary are called support vectors and the margin is determined based on these points. Kernels are used to separate non-linearly separable data and the algorithm is solved by using Quadratic Programming.
1
DigiKlausur
0.5
0.001522
211
null
The basic idea of SVM is to construct a hyperplane as the decision surface in such a way that the margin of separation between negative examples and positive examples is maximized.
SVMs are linear learnable machines in the simplest case. It uses a decision boundary with maximum margin to classify the data into different classes. The data points which are near the decision boundary are called support vectors and the margin is determined based on these points. Kernels are used to separate non-linearly separable data and the algorithm is solved by using Quadratic Programming.
1
DigiKlausur
0.5
0.001522
212
null
The idea of SVM is to construct a hyperplane as a decision surface such that the margin separation between positive and negative examples is maximized.
SVMs are linear learnable machines in the simplest case. It uses a decision boundary with maximum margin to classify the data into different classes. The data points which are near the decision boundary are called support vectors and the margin is determined based on these points. Kernels are used to separate non-linearly separable data and the algorithm is solved by using Quadratic Programming.
1
DigiKlausur
0.5
0.001522
213
null
SVM tries to find the **best hyperplane with the widest margin with the help of support vectors** such that all the data points are classified correctly.
SVMs are linear learnable machines in the simplest case. It uses a decision boundary with maximum margin to classify the data into different classes. The data points which are near the decision boundary are called support vectors and the margin is determined based on these points. Kernels are used to separate non-linearly separable data and the algorithm is solved by using Quadratic Programming.
2
DigiKlausur
1
0.001001
214
null
SVM is used for linearly separable data. A hyperplane is used to separate the data, but there could be many hyperplanes that separate the data. The best hyperplane is chosen which separates the data with the biggest margin. So in SVM we find the hyperplane which has the biggest margin between the hyperplane and both the positive and negative data lines.
SVMs are linear learnable machines in the simplest case. It uses a decision boundary with maximum margin to classify the data into different classes. The data points which are near the decision boundary are called support vectors and the margin is determined based on these points. Kernels are used to separate non-linearly separable data and the algorithm is solved by using Quadratic Programming.
1
DigiKlausur
0.5
0.001522
215
null
Given a training set for classification, the basic idea of SVM is to construct a hyperplane as the decision boundary in such a way that the margin between the positive and negative points is maximum. A support vector is a small subset of the training data against which the boundary is pushed.
SVMs are linear learnable machines in the simplest case. It uses a decision boundary with maximum margin to classify the data into different classes. The data points which are near the decision boundary are called support vectors and the margin is determined based on these points. Kernels are used to separate non-linearly separable data and the algorithm is solved by using Quadratic Programming.
2
DigiKlausur
1
0.001001
216
null
A support vector machine classifies given data using a decision boundary. The width of this decision boundary (margin) is maximized to ensure good results, because a maximized width is as robust as possible. The margin width is $\frac{2}{\sqrt{w * w}}$. To maximize it, quadratic programming is used. In order to handle noisy data, slack variables are introduced. To eliminate them, duality is used.
SVMs are linear learnable machines in the simplest case. It uses a decision boundary with maximum margin to classify the data into different classes. The data points which are near the decision boundary are called support vectors and the margin is determined based on these points. Kernels are used to separate non-linearly separable data and the algorithm is solved by using Quadratic Programming.
2
DigiKlausur
1
0.001001
217
null
An SVM is a linear classifier that divides a binary pattern, by a line that maximizes the margin between its line and the respective support vectors.
SVMs are linear learnable machines in the simplest case. It uses a decision boundary with maximum margin to classify the data into different classes. The data points which are near the decision boundary are called support vectors and the margin is determined based on these points. Kernels are used to separate non-linearly separable data and the algorithm is solved by using Quadratic Programming.
2
DigiKlausur
1
0.001001
218
null
A SVM learns a decision boundary from the input data. Additionally it learns two margins, which are parallel to the decision boundary and lie as close as possible to the data points, the support vectors. The decision boundary is chosen so that the margins are maximized. Using kernel functions, higher-dimensional data and non-linearly separable data can be learned as well.
SVMs are linear learnable machines in the simplest case. It uses a decision boundary with maximum margin to classify the data into different classes. The data points which are near the decision boundary are called support vectors and the margin is determined based on these points. Kernels are used to separate non-linearly separable data and the algorithm is solved by using Quadratic Programming.
2
DigiKlausur
1
0.001001
219
null
An SVM is a learning machine that tries to learn the support vectors of a two-class data set to get the maximum margin, the optimal separating hyperplane, between the two classes.
SVMs are linear learnable machines in the simplest case. It uses a decision boundary with maximum margin to classify the data into different classes. The data points which are near the decision boundary are called support vectors and the margin is determined based on these points. Kernels are used to separate non-linearly separable data and the algorithm is solved by using Quadratic Programming.
2
DigiKlausur
1
0.001001
220
null
A SVM uses a few of the data points as support vectors to build the maximum margin classifier. It searches for the separating line which has the maximum margin to the datapoints. In case of noise, the separating line is searched which minimizes the distance to the points in the wrong category. The data is cast to a higher-dimensional space to use Cover's theorem while using kernels. The data is more likely linearly separable in the higher-dimensional feature space. Using structural risk minimization the dimensionality is reduced.
SVMs are linear learnable machines in the simplest case. It uses a decision boundary with maximum margin to classify the data into different classes. The data points which are near the decision boundary are called support vectors and the margin is determined based on these points. Kernels are used to separate non-linearly separable data and the algorithm is solved by using Quadratic Programming.
2
DigiKlausur
1
0.001001
221
null
SVMs are used to linearly separate data points. The decision boundary is a line, or a hyperplane in higher dimensions, that defines the label of a data point. The decision boundary is chosen in a way that the margin is maximized. Data points on the decision boundary are called support vectors and define the hyperplane. In 2 dimensions, if the data is linearly separable, the margin is equal to 2/sqrt(w.w) where w is the weight vector. If the data is not linearly separable, the input can be projected into a higher-dimensional space. This increases the chance of linear separability.
SVMs are linear learnable machines in the simplest case. It uses a decision boundary with maximum margin to classify the data into different classes. The data points which are near the decision boundary are called support vectors and the margin is determined based on these points. Kernels are used to separate non-linearly separable data and the algorithm is solved by using Quadratic Programming.
2
DigiKlausur
1
0.001001
222
null
A support vector machine is a classifier which maximizes the margin between the boundaries learned from two classes. The margin is the minimum distance by which the boundaries can be increased before hitting datapoints. Support vectors are the datapoints against which the margin pushes up the boundary.
SVMs are linear learnable machines in the simplest case. It uses a decision boundary with maximum margin to classify the data into different classes. The data points which are near the decision boundary are called support vectors and the margin is determined based on these points. Kernels are used to separate non-linearly separable data and the algorithm is solved by using Quadratic Programming.
2
DigiKlausur
1
0.001001
223
null
Support vector machines find classifiers by drawing the decision boundary which pushes against the support vectors.
SVMs are linear learnable machines in the simplest case. It uses a decision boundary with maximum margin to classify the data into different classes. The data points which are near the decision boundary are called support vectors and the margin is determined based on these points. Kernels are used to separate non-linearly separable data and the algorithm is solved by using Quadratic Programming.
1
DigiKlausur
0.5
0.001522
224
null
The basic idea of Support Vector Machine (SVM) is to find the width of a line or hyperplane which divides the input data into two classes. The points lying on the edge of the defined width are called support vectors.
SVMs are linear learnable machines in the simplest case. It uses a decision boundary with maximum margin to classify the data into different classes. The data points which are near the decision boundary are called support vectors and the margin is determined based on these points. Kernels are used to separate non-linearly separable data and the algorithm is solved by using Quadratic Programming.
1
DigiKlausur
0.5
0.001522
225
null
SVM is a classifier that classifies a set of points in a way that maximizes the margin between the points of two classes. The classification can be linear or non linear.
SVMs are linear learnable machines in the simplest case. It uses a decision boundary with maximum margin to classify the data into different classes. The data points which are near the decision boundary are called support vectors and the margin is determined based on these points. Kernels are used to separate non-linearly separable data and the algorithm is solved by using Quadratic Programming.
1
DigiKlausur
0.5
0.001522
226
null
The idea behind SVM is to find a hyperplane which separates data into classes. First it is required to find the data points which are closest to the hyperplane; these data points are called support vectors. The next task is to find the maximum possible width of the hyperplane such that the support vectors are on the edge of that hyperplane. This problem is formulated as a min-max constrained optimization problem. In order to find the optimum width of the hyperplane (the optimum of a function), the idea is to use the method of Lagrange multipliers. Additionally, when the data is not linearly separable, an approach is to project the data into a higher dimension and then find a hyperplane that separates the data in that dimension.
SVMs are linear learnable machines in the simplest case. It uses a decision boundary with maximum margin to classify the data into different classes. The data points which are near the decision boundary are called support vectors and the margin is determined based on these points. Kernels are used to separate non-linearly separable data and the algorithm is solved by using Quadratic Programming.
2
DigiKlausur
1
0.001001
227
null
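As several of the SVM answers above note, the constraints $wx+b \ge 1$ and $wx+b \le -1$ put the margin width at $2/\sqrt{w \cdot w}$. A minimal sketch of that computation (hypothetical weights and helper names; not the full quadratic-programming solver the answers mention):

```python
import numpy as np

def margin_width(w):
    # Width of the SVM margin between the boundaries w.x+b = +1 and -1.
    return 2.0 / np.sqrt(np.dot(w, w))

def classify(w, b, X):
    # Sign of the decision function w.x + b gives the predicted class.
    return np.where(X @ w + b >= 0, 1, -1)

# Hypothetical separating hyperplane 2*x0 - 2 = 0 (boundary at x0 = 1).
w = np.array([2.0, 0.0])
b = -2.0
print(margin_width(w))                                     # 1.0
print(classify(w, b, np.array([[2.0, 0.0], [0.0, 0.0]])))  # [ 1 -1]
```

The support vectors would be the points lying exactly on $wx+b = \pm 1$; a larger $\|w\|$ means a narrower margin, which is why the optimization minimizes $w \cdot w$.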
In steepest descent, the gradient of the cost function is found by partially differentiating it with respect to the weights. The weights are then updated in the opposite direction of the gradient. This ensures that the weights move in the direction of steepest descent and the error is reduced. It can also be proven that the cost always decreases. Hence, steepest descent can be used to minimize the cost function.
Steepest descent is used to update the weights in a NN during the learning phase. It helps to navigate the cost function and find the parameters for which the cost is minimum. The weights are updated in the direction of the steepest descent which is in a direction opposite to the gradient vector. This method could suffer from local minima and may become unstable.
2
DigiKlausur
1
0.001001
228
null
Steepest descent is a method of optimizing the algorithm by minimizing the error. Weights are adjusted in the direction of steepest descent, opposite to the direction of the gradient.
Steepest descent is used to update the weights in a NN during the learning phase. It helps to navigate the cost function and find the parameters for which the cost is minimum. The weights are updated in the direction of the steepest descent which is in a direction opposite to the gradient vector. This method could suffer from local minima and may become unstable.
1
DigiKlausur
0.5
0.001522
229
null
Steepest descent moves the error within the error surface a small step in the opposite direction of the gradient. With the help of steepest descent we want to minimize the error. Steepest descent stops when the gradient = 0.
Steepest descent is used to update the weights in a NN during the learning phase. It helps to navigate the cost function and find the parameters for which the cost is minimum. The weights are updated in the direction of the steepest descent which is in a direction opposite to the gradient vector. This method could suffer from local minima and may become unstable.
2
DigiKlausur
1
0.001001
230
null
When learning weights with a SD method, we try to reduce the error based on following the gradient of an error function in the opposite direction, effectively trailing the error surface towards the minimum. Here, the error function (typically some form of mean squared error) is differentiated w.r.t. the individual weights, expressing how much a weight contributes to the network error and must thus be corrected. Due to the gradient pointing in the direction of steepest ascent, we must thus step in the negative direction.
Steepest descent is used to update the weights in a NN during the learning phase. It helps to navigate the cost function and find the parameters for which the cost is minimum. The weights are updated in the direction of the steepest descent which is in a direction opposite to the gradient vector. This method could suffer from local minima and may become unstable.
2
DigiKlausur
1
0.001001
231
null
- Steepest descent is used for error minimization when updating weights. - According to this, we update the weights along a direction which minimizes the error, which is calculated by finding the slope at the point.
Steepest descent is used to update the weights in a NN during the learning phase. It helps to navigate the cost function and find the parameters for which the cost is minimum. The weights are updated in the direction of the steepest descent which is in a direction opposite to the gradient vector. This method could suffer from local minima and may become unstable.
1
DigiKlausur
0.5
0.001522
232
null
Steepest descent is used to minimize the training error of a network given sample inputs and desired outputs. It uses the gradient of the error function to move the weights closer to an optimal weight vector with the lowest output error. Using a learning rate we can influence the speed and stability of this algorithm.
Steepest descent is used to update the weights in a NN during the learning phase. It helps to navigate the cost function and find the parameters for which the cost is minimum. The weights are updated in the direction of the steepest descent which is in a direction opposite to the gradient vector. This method could suffer from local minima and may become unstable.
2
DigiKlausur
1
0.001001
233
null
Steepest descent is the direction in which the error function falls the most. We want to change the weights in the direction of steepest descent (the direction opposite to the gradient) to have a smaller error in the next iteration and to optimize the ANN.
Steepest descent is used to update the weights in a NN during the learning phase. It helps to navigate the cost function and find the parameters for which the cost is minimum. The weights are updated in the direction of the steepest descent which is in a direction opposite to the gradient vector. This method could suffer from local minima and may become unstable.
2
DigiKlausur
1
0.001001
234
null
The idea of learning a network is to minimize a certain cost function. We can use steepest descent to minimize this cost function. While there are other optimization techniques which can be used, steepest descent is a widely used one. To optimize a network we calculate the partial derivatives (the gradient) and use them to update our weights. It is also used in BP.
Steepest descent is used to update the weights in a NN during the learning phase. It helps to navigate the cost function and find the parameters for which the cost is minimum. The weights are updated in the direction of the steepest descent which is in a direction opposite to the gradient vector. This method could suffer from local minima and may become unstable.
2
DigiKlausur
1
0.001001
235
null
The approach of the method of steepest descent is to find the direction for the minimization of the error in an approximation problem. The cost function $e$, dependent on the weights $w$, is differentiated (partial derivatives) with respect to all defined weights. This gradient is used for updating the weights in the next iteration. The direction of the minimization of the error is the direction opposite to the gradient: $-g$.
Steepest descent is used to update the weights in a NN during the learning phase. It helps to navigate the cost function and find the parameters for which the cost is minimum. The weights are updated in the direction of the steepest descent which is in a direction opposite to the gradient vector. This method could suffer from local minima and may become unstable.
2
DigiKlausur
1
0.001001
236
null
When the inputs are sent into a network and we calculate the error, we need a mechanism to learn and manipulate the free parameters of the network. The learning uses the error, but we must know in which direction in the search (optimization) space we should move so that we can reach the global minimum of the error; for this we use steepest descent. This method tells us in which direction we need to move by computing the gradient of the error.
Steepest descent is used to update the weights in a NN during the learning phase. It helps to navigate the cost function and find the parameters for which the cost is minimum. The weights are updated in the direction of the steepest descent which is in a direction opposite to the gradient vector. This method could suffer from local minima and may become unstable.
2
DigiKlausur
1
0.001001
237
null
Steepest descent is a method of weight adaptation. It uses the first-order derivative to approximate the function, and is therefore rather slow.
Steepest descent is used to update the weights in a NN during the learning phase. It helps to navigate the cost function and find the parameters for which the cost is minimum. The weights are updated in the direction of the steepest descent which is in a direction opposite to the gradient vector. This method could suffer from local minima and may become unstable.
1
DigiKlausur
0.5
0.001522
238
null
Steepest descent is an unconstrained optimization method that seeks to minimize an error function. The weights are iteratively changed in the direction opposite to the gradient vector.
Steepest descent is used to update the weights in a NN during the learning phase. It helps to navigate the cost function and find the parameters for which the cost is minimum. The weights are updated in the direction of the steepest descent which is in a direction opposite to the gradient vector. This method could suffer from local minima and may become unstable.
1
DigiKlausur
0.5
0.001522
239
null
The method of steepest descent updates the weights in the direction in which the error decreases most rapidly.
Steepest descent is used to update the weights in a NN during the learning phase. It helps to navigate the cost function and find the parameters for which the cost is minimum. The weights are updated in the direction of the steepest descent which is in a direction opposite to the gradient vector. This method could suffer from local minima and may become unstable.
1
DigiKlausur
0.5
0.001522
240
null
The steepest descent method is an algorithm for finding the nearest local minimum of a function which presupposes that the gradient of the function can be computed. This property is used to determine the optimal weights of the NN.
Steepest descent is used to update the weights in a NN during the learning phase. It helps to navigate the cost function and find the parameters for which the cost is minimum. The weights are updated in the direction of the steepest descent which is in a direction opposite to the gradient vector. This method could suffer from local minima and may become unstable.
2
DigiKlausur
1
0.001001
241
null
Steepest descent is used to update the synaptic weights of a network based on a cost function expressed by the errors of the output. The weights are adjusted in the direction opposite to the gradient of the cost function.
Steepest descent is used to update the weights in a NN during the learning phase. It helps to navigate the cost function and find the parameters for which the cost is minimum. The weights are updated in the direction of the steepest descent which is in a direction opposite to the gradient vector. This method could suffer from local minima and may become unstable.
2
DigiKlausur
1
0.001001
242
null
In steepest descent the adjustments made to the weight vector are in the direction of steepest descent, which is the direction opposite to that of the gradient vector. In a learning problem, it is basically used to reduce the cost based on the weights. The main goal is to find an optimal weight vector.
Steepest descent is used to update the weights in a NN during the learning phase. It helps to navigate the cost function and find the parameters for which the cost is minimum. The weights are updated in the direction of the steepest descent which is in a direction opposite to the gradient vector. This method could suffer from local minima and may become unstable.
2
DigiKlausur
1
0.001001
243
null
The method of steepest descent is an unconstrained optimization technique used for learning in a network. It is used in an iterative manner to minimize the error in supervised learning. It finds the direction of maximum gradient, so we go in the opposite direction hoping to find the minimum. Convergence of the algorithm depends on the learning rate and also on the condition that it doesn't get stuck in a local minimum.
Steepest descent is used to update the weights in a NN during the learning phase. It helps to navigate the cost function and find the parameters for which the cost is minimum. The weights are updated in the direction of the steepest descent which is in a direction opposite to the gradient vector. This method could suffer from local minima and may become unstable.
2
DigiKlausur
1
0.001001
244
null
In the steepest descent method, the network moves in the direction opposite to the maximum gradient. Learning with the steepest descent method can be slow to converge and can exhibit zigzag behavior.
Steepest descent is used to update the weights in a NN during the learning phase. It helps to navigate the cost function and find the parameters for which the cost is minimum. The weights are updated in the direction of the steepest descent which is in a direction opposite to the gradient vector. This method could suffer from local minima and may become unstable.
2
DigiKlausur
1
0.001001
245
null
Steepest descent involves updating the weights in the direction of maximum decrease of the cost function, i.e. in the direction opposite to the gradient. The weight update is $\Delta w(n) = - \eta g(n)$, where $\eta$ is the learning rate which defines the magnitude of learning, and $g(n)$ is the gradient of the cost function of errors in the nth iteration. A higher $\eta$ will result in rapid learning but with oscillations in the responses.
Steepest descent is used to update the weights in a NN during the learning phase. It helps to navigate the cost function and find the parameters for which the cost is minimum. The weights are updated in the direction of the steepest descent which is in a direction opposite to the gradient vector. This method could suffer from local minima and may become unstable.
2
DigiKlausur
1
0.001001
246
null
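The update rule $\Delta w(n) = -\eta g(n)$ that recurs in the answers above can be sketched as a minimal gradient-descent loop. The quadratic cost, its target value, the learning rate, and the iteration count below are illustrative assumptions, not taken from the source:

```python
# Minimal steepest-descent sketch on a 1-D quadratic cost
# E(w) = 0.5 * (w - 3)^2, with gradient g(w) = (w - 3).
# The target value 3 and the learning rate eta are illustrative choices.

def gradient(w):
    # dE/dw for E(w) = 0.5 * (w - 3)**2
    return w - 3.0

eta = 0.1   # learning rate
w = 0.0     # initial weight

for n in range(200):
    g = gradient(w)
    w = w - eta * g   # delta_w(n) = -eta * g(n): step opposite the gradient

# The gradient shrinks toward 0, so w converges to the minimiser w = 3.
print(round(w, 6))
```

A larger $\eta$ speeds convergence but, past a stability threshold, makes the iterates oscillate or diverge, matching the "rapid learning but with oscillations" remark above.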
The method of steepest descent is used to find the direction in which the error function viewed as a function of weights is decreasing most rapidly and then take a small step in that direction. When learning a network, steepest descent enables to iteratively adjust the weight vectors until the optimal weight vector that minimises the cost function (i.e, the error function where error is computed as the difference between the desired and actual response of the network) is found.
Steepest descent is used to update the weights in a NN during the learning phase. It helps to navigate the cost function and find the parameters for which the cost is minimum. The weights are updated in the direction of the steepest descent which is in a direction opposite to the gradient vector. This method could suffer from local minima and may become unstable.
2
DigiKlausur
1
0.001001
247
null
When learning a network the steepest descent algorithm updates the weights in such a way that the error decreases in every iteration.
Steepest descent is used to update the weights in a NN during the learning phase. It helps to navigate the cost function and find the parameters for which the cost is minimum. The weights are updated in the direction of the steepest descent which is in a direction opposite to the gradient vector. This method could suffer from local minima and may become unstable.
1
DigiKlausur
0.5
0.001522
248
null
The method of steepest descent moves in the direction opposite to the gradient to minimize the cost function.
Steepest descent is used to update the weights in a NN during the learning phase. It helps to navigate the cost function and find the parameters for which the cost is minimum. The weights are updated in the direction of the steepest descent which is in a direction opposite to the gradient vector. This method could suffer from local minima and may become unstable.
2
DigiKlausur
1
0.001001
249
null
The steepest descent method is based on minimization of the error cost function $\xi(w) = \frac{1}{2} e_k^2(n)$, so the synaptic weights of the network are updated in the direction opposite to the gradient vector of $\xi(w)$, that is $w_k(n+1) = w_k(n) - \eta \nabla \xi(w) = w_k(n) + \eta e_k(n) x(n)$, where $\eta$ is the learning rate, $e_k(n)$ is the error signal of neuron $k$, and $x(n)$ is the input data.
Steepest descent is used to update the weights in a NN during the learning phase. It helps to navigate the cost function and find the parameters for which the cost is minimum. The weights are updated in the direction of the steepest descent which is in a direction opposite to the gradient vector. This method could suffer from local minima and may become unstable.
2
DigiKlausur
1
0.001001
250
null
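The single-neuron update $w_k(n+1) = w_k(n) + \eta e_k(n) x(n)$ (the plus sign arises because stepping opposite the gradient of $\xi(w) = \frac{1}{2}e_k^2(n)$ adds $\eta e x$) can be sketched as a small LMS-style training loop. The training samples, target weights, learning rate, and epoch count are illustrative assumptions:

```python
# Sketch of the LMS / delta-rule update derived from the cost
# xi(w) = 0.5 * e(n)^2 with e(n) = d(n) - w . x(n).
# Data, eta, and the epoch count are illustrative assumptions.

# Samples generated from a hypothetical target map d = 2*x1 - 1*x2
samples = [([1.0, 0.0], 2.0), ([0.0, 1.0], -1.0), ([1.0, 1.0], 1.0)]

eta = 0.1          # learning rate
w = [0.0, 0.0]     # initial synaptic weights

for epoch in range(200):
    for x, d in samples:
        y = w[0] * x[0] + w[1] * x[1]   # neuron output w . x
        e = d - y                       # error signal e(n)
        # w(n+1) = w(n) + eta * e(n) * x(n): opposite to grad of xi(w)
        w = [w[0] + eta * e * x[0], w[1] + eta * e * x[1]]

print([round(wi, 4) for wi in w])   # approaches the target map [2, -1]
```

Because the targets here are exactly realizable by a linear neuron, the error signal decays to zero and the weights settle at the generating map.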
Steepest descent is used to find the direction in which $E$ is decreasing most rapidly. The adjustments applied to the weights are in the direction of steepest descent.
Steepest descent is used to update the weights in a NN during the learning phase. It helps to navigate the cost function and find the parameters for which the cost is minimum. The weights are updated in the direction of the steepest descent which is in a direction opposite to the gradient vector. This method could suffer from local minima and may become unstable.
2
DigiKlausur
1
0.001001
251
null
Steepest descent helps to **minimize the value of the error function $E$** by finding the **right direction** in which to move the weight vector to reach the global minimum. The direction is always **opposite to the direction of the actual gradient vector**.
Steepest descent is used to update the weights in a NN during the learning phase. It helps to navigate the cost function and find the parameters for which the cost is minimum. The weights are updated in the direction of the steepest descent which is in a direction opposite to the gradient vector. This method could suffer from local minima and may become unstable.
2
DigiKlausur
1
0.001001
252
null
The method of steepest descent is used to reduce the error. In backpropagation, during the backward pass we need to know by how much the weights should be changed; this can be known if we use steepest descent: find the gradient of the error and use it to reduce the error.
Steepest descent is used to update the weights in a NN during the learning phase. It helps to navigate the cost function and find the parameters for which the cost is minimum. The weights are updated in the direction of the steepest descent which is in a direction opposite to the gradient vector. This method could suffer from local minima and may become unstable.
2
DigiKlausur
1
0.001001
253
null
Steepest descent finds the direction of steepest increase of the error function and tries to reduce the error by stepping in the opposite direction: $\Delta w = - \eta g(n)$, where $g(n)$ is the gradient of the cost function.
Steepest descent is used to update the weights in a NN during the learning phase. It helps to navigate the cost function and find the parameters for which the cost is minimum. The weights are updated in the direction of the steepest descent which is in a direction opposite to the gradient vector. This method could suffer from local minima and may become unstable.
2
DigiKlausur
1
0.001001
254
null
Steepest descent is used to find the right direction in which the weights should be changed while learning a network. The derivative of the error is used, and the weights are changed in the direction which makes the error smaller as fast as possible.
Steepest descent is used to update the weights in a NN during the learning phase. It helps to navigate the cost function and find the parameters for which the cost is minimum. The weights are updated in the direction of the steepest descent which is in a direction opposite to the gradient vector. This method could suffer from local minima and may become unstable.
2
DigiKlausur
1
0.001001
255
null
Steepest descent can be used to optimize the weights of a network. In steepest descent the error function is a function of the weights. So we determine the direction of steepest descent on the error surface and step in that direction to minimize the error and optimize the weights.
Steepest descent is used to update the weights in a NN during the learning phase. It helps to navigate the cost function and find the parameters for which the cost is minimum. The weights are updated in the direction of the steepest descent which is in a direction opposite to the gradient vector. This method could suffer from local minima and may become unstable.
1
DigiKlausur
0.5
0.001522
256
null
The method of steepest descent is used to minimize the error function, which is built from the error $e = d - y$, where $d$ is the desired output and $y$ is the actual output of the neuron.
Steepest descent is used to update the weights in a NN during the learning phase. It helps to navigate the cost function and find the parameters for which the cost is minimum. The weights are updated in the direction of the steepest descent which is in a direction opposite to the gradient vector. This method could suffer from local minima and may become unstable.
2
DigiKlausur
1
0.001001
257