question
stringclasses
283 values
provided_answer
stringlengths
1
3.56k
reference_answer
stringclasses
486 values
grade
float64
0
100
data_source
stringclasses
7 values
normalized_grade
float64
0
1
weight
float64
0
0.04
index
int64
0
4.97k
null
Let $x_1$, $x_2$, \ldots, $x_N$ be the inputs to the neuron, $w_i$ be the corresponding connection weights, $b$ be the bias and $\varphi(\cdot)$ be the activation function. Then, the induced local field $v$ is given by $$v = \sum_{i=1}^{N} w_i x_i + b$$ The output $y$ is given by $$y = \varphi(v)$$
Mathematical model of a neuron consists of a set of synapses or connecting links where each link is characterized by a weight, an adder function (linear combiner), which computes the weighted sum (local field) of the inputs plus some bias and an activation function (squashing function) for limiting the amplitude of a neuron’s output.
2
DigiKlausur
1
0.001001
58
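A minimal Python sketch of the neuron model described in the answers above: a weighted sum of the inputs plus a bias (the induced local field) passed through an activation function. The helper name, the use of NumPy, and the choice of tanh as the activation are illustrative assumptions, not part of the original answers.

```python
import numpy as np

def neuron_output(x, w, b, phi=np.tanh):
    """Single-neuron model: adder (linear combiner) followed by a squashing function.

    x   : input vector (x_1 ... x_N)
    w   : synaptic weights (w_1 ... w_N)
    b   : bias
    phi : activation function; tanh is an arbitrary choice for this sketch
    """
    v = np.dot(w, x) + b   # induced local field: weighted sum of inputs plus bias
    return phi(v)          # amplitude-limited output y = phi(v)

# Example with three inputs
x = np.array([0.5, -1.0, 2.0])
w = np.array([0.1, 0.4, -0.3])
print(neuron_output(x, w, b=0.2))
```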
null
The mathematical model of a neuron has three parts: - a set of synapses or connecting links, each characterized by a weight $w$ - an adder function that calculates the weighted sum of the inputs plus some bias - an activation function (squashing function) to limit the amplitude of the output
Mathematical model of a neuron consists of a set of synapses or connecting links where each link is characterized by a weight, an adder function (linear combiner), which computes the weighted sum (local field) of the inputs plus some bias and an activation function (squashing function) for limiting the amplitude of a neuron’s output.
2
DigiKlausur
1
0.001001
59
null
$v_k = \sum_{j=1}^{m} w_{kj} x_j + b_k$, $y_k = \phi(v_k)$, where $w_{kj}$ is the synaptic weight connecting neuron $k$ and input $j$, $x_j$ is the input data, $b_k$ is the bias, $v_k$ is the induced local field, and $y_k$ is the output of the neuron.
Mathematical model of a neuron consists of a set of synapses or connecting links where each link is characterized by a weight, an adder function (linear combiner), which computes the weighted sum (local field) of the inputs plus some bias and an activation function (squashing function) for limiting the amplitude of a neuron’s output.
2
DigiKlausur
1
0.001001
60
null
A neuron consists of synaptic connecting links, an adder function (linear combiner) and an activation function. $$v = \sum_i w_i \cdot x_i + b$$ where $x_i$ is the input, $w_i$ is the weight and $b$ is the bias.
Mathematical model of a neuron consists of a set of synapses or connecting links where each link is characterized by a weight, an adder function (linear combiner), which computes the weighted sum (local field) of the inputs plus some bias and an activation function (squashing function) for limiting the amplitude of a neuron’s output.
1
DigiKlausur
0.5
0.001522
61
null
A neuron is a basic information processing unit that has an adder function to compute the **weighted sum of inputs plus bias** and applies an activation function to the result. $$ \phi(v) = \phi\left(\sum\limits_{i=1}^n \omega_i x_i + b\right) $$
Mathematical model of a neuron consists of a set of synapses or connecting links where each link is characterized by a weight, an adder function (linear combiner), which computes the weighted sum (local field) of the inputs plus some bias and an activation function (squashing function) for limiting the amplitude of a neuron’s output.
2
DigiKlausur
1
0.001001
62
null
Each neuron has a set of inputs and their respective weights. The local field is $v = \sum_i (w_{ij} \cdot x_i)$. The local field is passed through an activation function, so the output of the neuron is $y = \phi(v) = \phi(\sum_i (w_{ij} \cdot x_i))$.
Mathematical model of a neuron consists of a set of synapses or connecting links where each link is characterized by a weight, an adder function (linear combiner), which computes the weighted sum (local field) of the inputs plus some bias and an activation function (squashing function) for limiting the amplitude of a neuron’s output.
2
DigiKlausur
1
0.001001
63
null
Neurons are the basic processing units in a neural network; the output of a neuron is $\phi(\sum_i w_i x_i)$. They consist of three parts. Synaptic weights: the connections between the neurons, characterised by weights. Adder function: calculates the weighted sum of the inputs of the neuron. Activation function: limits the amplitude of the output of the neuron ($\phi$).
Mathematical model of a neuron consists of a set of synapses or connecting links where each link is characterized by a weight, an adder function (linear combiner), which computes the weighted sum (local field) of the inputs plus some bias and an activation function (squashing function) for limiting the amplitude of a neuron’s output.
2
DigiKlausur
1
0.001001
64
null
The model of a neuron consists of synaptic weights which are applied to the input signals. The weighted inputs are then summed, which gives the local field. This local field is passed into an activation function, whose output is the output of the neuron.
Mathematical model of a neuron consists of a set of synapses or connecting links where each link is characterized by a weight, an adder function (linear combiner), which computes the weighted sum (local field) of the inputs plus some bias and an activation function (squashing function) for limiting the amplitude of a neuron’s output.
2
DigiKlausur
1
0.001001
65
null
$y = \Phi\left(\sum_{i=0}^{N} w_i x_i\right)$. A neuron consists of inputs $x$, synaptic weights $w$, an extra input $x_0$ fixed to 1 whose weight $w_0$ represents the bias, an adder function that creates the local field $v$, and a squashing function $\Phi$.
Mathematical model of a neuron consists of a set of synapses or connecting links where each link is characterized by a weight, an adder function (linear combiner), which computes the weighted sum (local field) of the inputs plus some bias and an activation function (squashing function) for limiting the amplitude of a neuron’s output.
2
DigiKlausur
1
0.001001
66
null
$y = f\left(\sum_i w_i x_i + b\right)$, where $w$ are the weights, which scale the input according to what has been learned, $x$ is the input from the environment, $b$ is the bias, which shifts the learned decision plane, and $f(\cdot)$ is the activation function, which limits the output to a desired region of values.
Mathematical model of a neuron consists of a set of synapses or connecting links where each link is characterized by a weight, an adder function (linear combiner), which computes the weighted sum (local field) of the inputs plus some bias and an activation function (squashing function) for limiting the amplitude of a neuron’s output.
2
DigiKlausur
1
0.001001
67
null
A neuron consists of one or multiple inputs which are gathered by a summation function. The resulting induced local field of the neuron is processed by a squashing function, which generates the output of the neuron.
Mathematical model of a neuron consists of a set of synapses or connecting links where each link is characterized by a weight, an adder function (linear combiner), which computes the weighted sum (local field) of the inputs plus some bias and an activation function (squashing function) for limiting the amplitude of a neuron’s output.
1
DigiKlausur
0.5
0.001522
68
null
A neuron consists of input connection links with synaptic weights, a bias, and an adder which adds the input signals and the bias and produces the local field. The local field is processed by the activation function and produces the output of the neuron.
Mathematical model of a neuron consists of a set of synapses or connecting links where each link is characterized by a weight, an adder function (linear combiner), which computes the weighted sum (local field) of the inputs plus some bias and an activation function (squashing function) for limiting the amplitude of a neuron’s output.
2
DigiKlausur
1
0.001001
69
null
A neuron consists of input nodes $x_1$ to $x_n$ and weights $w_1$ to $w_n$, and a linear combiner $v = \sum_i x_i w_i + b$, where $b$ is some bias. The result $v$ is called the local field and is used as input to an activation function $\phi(v)$.
Mathematical model of a neuron consists of a set of synapses or connecting links where each link is characterized by a weight, an adder function (linear combiner), which computes the weighted sum (local field) of the inputs plus some bias and an activation function (squashing function) for limiting the amplitude of a neuron’s output.
2
DigiKlausur
1
0.001001
70
null
A neuron consists of three units: 1. Synaptic links characterized by weights, which linearly weigh the input. 2. An adder, which adds the weighted inputs to generate the local field. 3. An activation function, a nonlinear function squashing the output of the neuron.
Mathematical model of a neuron consists of a set of synapses or connecting links where each link is characterized by a weight, an adder function (linear combiner), which computes the weighted sum (local field) of the inputs plus some bias and an activation function (squashing function) for limiting the amplitude of a neuron’s output.
2
DigiKlausur
1
0.001001
71
null
1) A neuron consists of synaptic links which are characterized by weights; the neuron is given inputs. 2) It has an adder function (combiner) which adds all the inputs multiplied by the weights; the bias is an extra input to the neuron as well. 3) It has an activation function which limits the amplitude of the output of the neuron.
Mathematical model of a neuron consists of a set of synapses or connecting links where each link is characterized by a weight, an adder function (linear combiner), which computes the weighted sum (local field) of the inputs plus some bias and an activation function (squashing function) for limiting the amplitude of a neuron’s output.
1
DigiKlausur
0.5
0.001522
72
null
A neuron is the basic information processing unit and the main component of a neural network. A neuron is characterized by its inputs ($x_i$), synaptic weights ($w_i$) and activation function $\phi(v)$. Mathematically it can be modelled as $\phi(\sum_i w_i x_i)$. The activation function bounds the output to a certain level.
Mathematical model of a neuron consists of a set of synapses or connecting links where each link is characterized by a weight, an adder function (linear combiner), which computes the weighted sum (local field) of the inputs plus some bias and an activation function (squashing function) for limiting the amplitude of a neuron’s output.
2
DigiKlausur
1
0.001001
73
null
A neuron has three components * Synaptic weights: $w$ * Adder function: multiplies the inputs $x$ by the weights and sums the results * Activation function: squashes the output of the adder function; examples are the sigmoid, hyperbolic tangent, rectified linear unit, etc.
Mathematical model of a neuron consists of a set of synapses or connecting links where each link is characterized by a weight, an adder function (linear combiner), which computes the weighted sum (local field) of the inputs plus some bias and an activation function (squashing function) for limiting the amplitude of a neuron’s output.
2
DigiKlausur
1
0.001001
74
null
A neuron consists of a set of inputs that take data from the environment. Each neuron contains synapses (connecting links) that are characterized by weights. All inputs are connected to the summing (adder) function, which computes the weighted sum of all input values. This weighted sum is called the local field of the neuron. The value of this local field ($V$) is limited (squashed) by an activation function $\theta(V)$. The result of this squashing function is the output of the neuron ($y = \theta(V)$). Additionally, a bias term ($b$) is added via an extra input whose value is fixed to 1, while its associated weight is changed over the training period. Finally, the output of the neuron is $y = \theta(V)$, where $V = \sum_j W_j X_j + b$.
Mathematical model of a neuron consists of a set of synapses or connecting links where each link is characterized by a weight, an adder function (linear combiner), which computes the weighted sum (local field) of the inputs plus some bias and an activation function (squashing function) for limiting the amplitude of a neuron’s output.
2
DigiKlausur
1
0.001001
75
null
1. Label one class as positive with label +1 and the other class as negative with -1. 2. Augment the data with an additional value for the bias term. 3. Invert the sign of the data in the negative class. 4. Randomly initialize the weights. 5. If $w^T \cdot x \leq 0$, update the weight by $w(n+1) = w(n) + \eta x(n)$, else leave the weight unchanged. 6. Continue with step 5. 7. Terminate when there is no longer a change in any weight.
Label the data with positive and negative (+/-) labels, initialize the weights randomly, apply (simplified) update rule: Dw = eta*x(n) if <w,x> <= 0, repeat on all epochs till the weights don’t change much. The algorithm will converge as the data is linearly separable.
2
DigiKlausur
1
0.001001
76
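A small Python sketch of the perceptron training procedure outlined in the answer above (augment with a bias input, flip the sign of the negative class, then add η·x whenever wᵀx ≤ 0, until no weight changes). The toy data, the zero initialization, and the learning rate are assumptions for illustration only.

```python
import numpy as np

def train_perceptron(X, y, eta=1.0, max_epochs=100):
    """Perceptron learning with the sign-flip trick.

    X : (n_samples, n_features) data, assumed linearly separable
    y : labels in {+1, -1}
    """
    Xa = np.hstack([np.ones((X.shape[0], 1)), X])  # 1. augment with constant 1 for the bias
    Xa = Xa * y[:, None]                           # 2. invert sign of negative-class samples
    w = np.zeros(Xa.shape[1])                      # 3. initialize weights (zeros in this sketch)
    for _ in range(max_epochs):
        changed = False
        for x in Xa:
            if w @ x <= 0:                         # misclassified (or on the boundary)
                w += eta * x                       # simplified update rule
                changed = True
        if not changed:                            # 4. stop when no weight changes in an epoch
            break
    return w

# Toy example: two separable clusters
X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -1.5], [-2.0, -0.5]])
y = np.array([1, 1, -1, -1])
print(train_perceptron(X, y))
```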
null
1. Initialization: n (time step or iteration) = 1 and weights are small but randomly initialized 2. Activation of perceptron: Apply a training pattern to activate the perceptron 3. Compute output: Apply the activation function to the local field (weighted sum of inputs plus bias) 4. Adjust weights: Adjust the weights if the current output (y) != desired output (d) 5. Continuation: We continue by increasing n during each iteration and repeat from step 2 until all input patterns have been applied to the network and the error is minimized
Label the data with positive and negative (+/-) labels, initialize the weights randomly, apply (simplified) update rule: Dw = eta*x(n) if <w,x> <= 0, repeat on all epochs till the weights don’t change much. The algorithm will converge as the data is linearly separable.
2
DigiKlausur
1
0.001001
77
null
$y$ denotes the actual result, $d$ denotes the desired result. Positive training error: $y = 0$, $d = 1$: $w_{new} = w_{old} + x$. Negative training error: $y = 1$, $d = 0$: $w_{new} = w_{old} - x$.
Label the data with positive and negative (+/-) labels, initialize the weights randomly, apply (simplified) update rule: Dw = eta*x(n) if <w,x> <= 0, repeat on all epochs till the weights don’t change much. The algorithm will converge as the data is linearly separable.
1
DigiKlausur
0.5
0.001522
78
null
initialize weights with zero or small values; sample data point, feed into network; compute net output, use the step activation function; compute error $e=(d-o)$, where d is the true label, o is the predicted label; correct weights based on $w(t+1)=w(t)+\alpha(d-o)x$, where alpha is the training rate and x is the input pattern; repeat for each pattern until convergence is reached;
Label the data with positive and negative (+/-) labels, initialize the weights randomly, apply (simplified) update rule: Dw = eta*x(n) if <w,x> <= 0, repeat on all epochs till the weights don’t change much. The algorithm will converge as the data is linearly separable.
2
DigiKlausur
1
0.001001
79
null
For this case, the parameters that need to be learned are the slope of the line and the intercept. These are the parameters of the weight vector. 1. Initialize the weight vector with small random values. 2. For each input $x_i$ in the training data: - Apply the input to the weight vector. - $e$ = the difference between the local field and the desired output $(d_i - y_i)$ - Update the weight: $w(n+1) = w(n) + \eta e x_i$
Label the data with positive and negative (+/-) labels, initialize the weights randomly, apply (simplified) update rule: Dw = eta*x(n) if <w,x> <= 0, repeat on all epochs till the weights don’t change much. The algorithm will converge as the data is linearly separable.
1
DigiKlausur
0.5
0.001522
80
null
$\varphi(v) = \tanh(v)$, single-node network, $\mu$ learning rate. Repeat as long as the error is too high: 1. Present a sample to the network and collect the output. 2. Compare the actual output with the desired output ($d$). 3. If not equal, adapt the weights: $w_i(n + 1) = w_i(n) + \mu(d-y)x_i$
Label the data with positive and negative (+/-) labels, initialize the weights randomly, apply (simplified) update rule: Dw = eta*x(n) if <w,x> <= 0, repeat on all epochs till the weights don’t change much. The algorithm will converge as the data is linearly separable.
2
DigiKlausur
1
0.001001
81
null
Given $k$ data points $(x_i, y_i)$ with $y_i \in \{1,-1\}$ and a learning rate: for each point $i$, add a bias 1 so that point $i$ becomes $(1, x_i)$; for each point $i$ where $y_i = -1$, set point = -1 * point; w = zero vector; b = 0; convergence = false; while (convergence == false): convergence = true; for each point $i$ in the training set: if (w*x <= 0): w = w + learning_rate * point_i; convergence = false;
Label the data with positive and negative (+/-) labels, initialize the weights randomly, apply (simplified) update rule: Dw = eta*x(n) if <w,x> <= 0, repeat on all epochs till the weights don’t change much. The algorithm will converge as the data is linearly separable.
2
DigiKlausur
1
0.001001
82
null
weights # a weight vector; phi = activation function; eta = learning rate; for each datapoint $(x_i, y_i)$ do: weights[i] = weights[i] + eta * (x_i[i] - y_i) * weights[i]
Label the data with positive and negative (+/-) labels, initialize the weights randomly, apply (simplified) update rule: Dw = eta*x(n) if <w,x> <= 0, repeat on all epochs till the weights don’t change much. The algorithm will converge as the data is linearly separable.
1
DigiKlausur
0.5
0.001522
83
null
1: w, b = initweightsbias() // the weights can be initialized to 0 or randomly initialized 2: n = 0 3: WHILE !stopcriteria() DO // iterate until the stop criterion is fulfilled 4: y = w(n) * x(n) + b // calculate output 5: IF x is in C1: e = 1 // if x belongs to class C1, the error is 1, otherwise it is -1 6: ELSE IF x is in C2: e = -1 7: w = w + e * x // update weights using the calculated error 8: n = n + 1 9: END The stop criterion can be: if the number of misclassified input data points is 0, then stop.
Label the data with positive and negative (+/-) labels, initialize the weights randomly, apply (simplified) update rule: Dw = eta*x(n) if <w,x> <= 0, repeat on all epochs till the weights don’t change much. The algorithm will converge as the data is linearly separable.
2
DigiKlausur
1
0.001001
84
null
The learning process consists of three main steps: 1- Positive error: - calculate the error over all the data points in the learning set - change the weight: w(n+1) = w(n) + positive error - separate the data points based on the new w 2- Negative error: - calculate the error over all the data points in the learning set - change the weight: w(n+1) = w(n) + negative error - separate the data points based on the new w 3- No error: - when we have no error, this is the end of the training
Label the data with positive and negative (+/-) labels, initialize the weights randomly, apply (simplified) update rule: Dw = eta*x(n) if <w,x> <= 0, repeat on all epochs till the weights don’t change much. The algorithm will converge as the data is linearly separable.
1
DigiKlausur
0.5
0.001522
85
null
Define a bias in order to be able to control to which class data points will be classified. Assign initially randomly chosen weights, use a squashing function, for example McCulloch-Pitts, start the training process and stop when the error between the output and the desired output has reached the desired percentage.
Label the data with positive and negative (+/-) labels, initialize the weights randomly, apply (simplified) update rule: Dw = eta*x(n) if <w,x> <= 0, repeat on all epochs till the weights don’t change much. The algorithm will converge as the data is linearly separable.
0
DigiKlausur
0
0.003546
86
null
Initialize the weight vector $\hat{w} = 0$ - do - for every training sample $(x, d)$: $v = \sum_i w_i x_i + b$, $y = \phi(v)$; if $d$ is not equal to $y$ then $e = d - y$, $w = w + \eta e x_i$ - until convergence
Label the data with positive and negative (+/-) labels, initialize the weights randomly, apply (simplified) update rule: Dw = eta*x(n) if <w,x> <= 0, repeat on all epochs till the weights don’t change much. The algorithm will converge as the data is linearly separable.
2
DigiKlausur
1
0.001001
87
null
**Pseudo code** + initialize weights and bias randomly. + compute the output for the given input data: $y' = \sum_i (w_i x_i) + b$. + compute the error between the computed $y'$ and the desired output $y$. + update the weights: $w(n+1) = w(n) + \eta (y-y') x$ + stop when the error is below some specified threshold or becomes zero in the case of data that is perfectly linearly separable.
Label the data with positive and negative (+/-) labels, initialize the weights randomly, apply (simplified) update rule: Dw = eta*x(n) if <w,x> <= 0, repeat on all epochs till the weights don’t change much. The algorithm will converge as the data is linearly separable.
2
DigiKlausur
1
0.001001
88
null
null
Label the data with positive and negative (+/-) labels, initialize the weights randomly, apply (simplified) update rule: Dw = eta*x(n) if <w,x> <= 0, repeat on all epochs till the weights don’t change much. The algorithm will converge as the data is linearly separable.
0
DigiKlausur
0
0.003546
89
null
Initialize the perceptron with each weight equal to 0: $w(0) = 0$. Present the labeled examples $(x_i, d_i)$ to the perceptron. > for each example $(x_i, d_i)$ >> Compute the actual output $y_i$ and the error signal >> Update the weights based on the delta rule: $w(n+1) = w(n) + \eta (d(n) - y(n)) x(n)$
Label the data with positive and negative (+/-) labels, initialize the weights randomly, apply (simplified) update rule: Dw = eta*x(n) if <w,x> <= 0, repeat on all epochs till the weights don’t change much. The algorithm will converge as the data is linearly separable.
1
DigiKlausur
0.5
0.001522
90
null
We use the threshold function as the activation function: if $w \cdot x + b \geq 1$, label as class 1; else label as class 0.
Label the data with positive and negative (+/-) labels, initialize the weights randomly, apply (simplified) update rule: Dw = eta*x(n) if <w,x> <= 0, repeat on all epochs till the weights don’t change much. The algorithm will converge as the data is linearly separable.
0
DigiKlausur
0
0.003546
91
null
e(n) = current error <br> eps = convergence criterion <br> eta = learning rate <br> while (change in e(n) not less than eps) {<br> calculate error e(n) <br> w(n+1) = w(n) + eta e(n) x(n) (Widrow-Hoff rule) <br> }
Label the data with positive and negative (+/-) labels, initialize the weights randomly, apply (simplified) update rule: Dw = eta*x(n) if <w,x> <= 0, repeat on all epochs till the weights don’t change much. The algorithm will converge as the data is linearly separable.
2
DigiKlausur
1
0.001001
92
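Several answers in this block use the error-correction (delta / Widrow-Hoff style) update $w(n+1) = w(n) + \eta\,e(n)\,x(n)$ rather than the simplified rule. A brief Python sketch of that variant with a step activation; the stopping criterion, random initialization, and toy AND data are illustrative assumptions.

```python
import numpy as np

def train_delta_rule(X, d, eta=0.1, max_epochs=100):
    """Error-correction learning: w <- w + eta * (d - y) * x.

    X : (n_samples, n_features) inputs
    d : desired outputs in {0, 1}
    """
    Xa = np.hstack([np.ones((X.shape[0], 1)), X])  # bias handled as a fixed input of 1
    w = 0.01 * np.random.randn(Xa.shape[1])        # small random initial weights
    for _ in range(max_epochs):
        errors = 0
        for x, target in zip(Xa, d):
            y = 1.0 if w @ x >= 0 else 0.0         # step activation on the local field
            e = target - y                         # error signal e = d - y
            w += eta * e * x                       # delta-rule weight update
            errors += abs(e)
        if errors == 0:                            # no misclassifications: stop
            break
    return w

X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
d = np.array([0, 0, 0, 1])                         # AND function (linearly separable)
print(train_delta_rule(X, d))
```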
null
Perceptron learning algorithm: - Initialize the network by assigning random weights to the synaptic links. - Calculate the error as the difference between the desired output and the actual output. - If the input is misclassified with positive error, $w(new) = w(current) + input$. - If the input is misclassified with negative error, $w(new) = w(current) - input$. - If the input is correctly classified, no changes are made to the weights. - Repeat from step 2 until the error is under some defined threshold value.
Label the data with positive and negative (+/-) labels, initialize the weights randomly, apply (simplified) update rule: Dw = eta*x(n) if <w,x> <= 0, repeat on all epochs till the weights don’t change much. The algorithm will converge as the data is linearly separable.
2
DigiKlausur
1
0.001001
93
null
The linearly binary-classifiable data consists of input vectors $X$ which, when multiplied with the weights and added to the bias, fall into class + or class - depending on the linear combination $WX + b$ being above 0 (+) or below 0 (-). Algorithm: Parameters: X, Y (desired output), W, b 1) The weight vector W is initialized with small random values. 2) An input vector is chosen with some probability and the output is computed using $WX + b$. If the class Y of vector X is + and the output is $<0$, or if the class of X is - and the output is $>0$, then the weights are updated accordingly. Otherwise the weights are left unchanged. 3) Iterate over the other input vectors until the output converges.
Label the data with positive and negative (+/-) labels, initialize the weights randomly, apply (simplified) update rule: Dw = eta*x(n) if <w,x> <= 0, repeat on all epochs till the weights don’t change much. The algorithm will converge as the data is linearly separable.
2
DigiKlausur
1
0.001001
94
null
1. Initialization: At time step $n = 0$, initialize the weight vectors with random values $w_j(0)$ 2. Activation: Apply the input example $(x_i(n), d_i(n))$ to activate the perceptron with the Heaviside step function as the activation function. 3. If the output of the perceptron $y(n) \neq d(n)$, adjust the weight vector using the rule: $w(n+1) = w(n) + \eta x(n)(d(n) - y(n))$ 4. Go to Activation and repeat until no more change in the weight vector is observed
Label the data with positive and negative (+/-) labels, initialize the weights randomly, apply (simplified) update rule: Dw = eta*x(n) if <w,x> <= 0, repeat on all epochs till the weights don’t change much. The algorithm will converge as the data is linearly separable.
2
DigiKlausur
1
0.001001
95
null
1. Inputs X: $x_1$, $x_2$, \ldots, $x_N$ 2. Desired outputs y: $y_1$, $y_2$, \ldots, $y_N$ 3. Initialize weight vector $w$ to small random values 4. For each data point $x_n$ in X: Calculate $\hat{y}_n$ from $w$ and $x_n$; Calculate error $e_n = y_n - \hat{y}_n$; Update $w$ according to the delta rule end
Label the data with positive and negative (+/-) labels, initialize the weights randomly, apply (simplified) update rule: Dw = eta*x(n) if <w,x> <= 0, repeat on all epochs till the weights don’t change much. The algorithm will converge as the data is linearly separable.
2
DigiKlausur
1
0.001001
96
null
null
Label the data with positive and negative (+/-) labels, initialize the weights randomly, apply (simplified) update rule: Dw = eta*x(n) if <w,x> <= 0, repeat on all epochs till the weights don’t change much. The algorithm will converge as the data is linearly separable.
0
DigiKlausur
0
0.003546
97
null
Apply the input data to the input layer and initialize the weights with small values; minimize the error according to the difference between the desired signal and the output signal; assign the test vector to the class that has the smallest error.
Label the data with positive and negative (+/-) labels, initialize the weights randomly, apply (simplified) update rule: Dw = eta*x(n) if <w,x> <= 0, repeat on all epochs till the weights don’t change much. The algorithm will converge as the data is linearly separable.
2
DigiKlausur
1
0.001001
98
null
1. Compute the initial weights for all input vectors 2. Apply matrix multiplication between the input and the weight vector 3. Apply the linear combiner 4. Apply the activation function to produce the output 5. Compute the error 6. Update the weights
Label the data with positive and negative (+/-) labels, initialize the weights randomly, apply (simplified) update rule: Dw = eta*x(n) if <w,x> <= 0, repeat on all epochs till the weights don’t change much. The algorithm will converge as the data is linearly separable.
0
DigiKlausur
0
0.003546
99
null
* Randomly assign values to the initial weights * Run the perceptron network and calculate the error ($e = y - d$), where $e$ is the error, $y$ is the output and $d$ is the desired response. * Update the weights based on the error. * If the error is positive, add the error times the input and update the weight. * If the error is negative, subtract the error times the input and update the weight. * If there is no error, don't update the weights. * Repeat the above process until the calculated error is approximately equal to zero.
Label the data with positive and negative (+/-) labels, initialize the weights randomly, apply (simplified) update rule: Dw = eta*x(n) if <w,x> <= 0, repeat on all epochs till the weights don’t change much. The algorithm will converge as the data is linearly separable.
2
DigiKlausur
1
0.001001
100
null
null
Label the data with positive and negative (+/-) labels, initialize the weights randomly, apply (simplified) update rule: Dw = eta*x(n) if <w,x> <= 0, repeat on all epochs till the weights don’t change much. The algorithm will converge as the data is linearly separable.
0
DigiKlausur
0
0.003546
101
null
w = [random number between -1 and 1] for every data point in the training set { In the first layer: calculate the weighted sum using the adder function; calculate the output of the activation function. In the output layer: calculate the output y; calculate the error e = d - y (d = desired output); change the weights using the formula $\Delta w = \eta x_j e_j$ } continue until the error converges
Label the data with positive and negative (+/-) labels, initialize the weights randomly, apply (simplified) update rule: Dw = eta*x(n) if <w,x> <= 0, repeat on all epochs till the weights don’t change much. The algorithm will converge as the data is linearly separable.
2
DigiKlausur
1
0.001001
102
null
Initialize as many random weights as the dimension of the data points For each data point: if the output matches the desired output do nothing else: change the weights in the direction of the datapoint so that the datapoint is classified correctly end if end for if some weight was changed: start again with the for loop end if
Label the data with positive and negative (+/-) labels, initialize the weights randomly, apply (simplified) update rule: Dw = eta*x(n) if <w,x> <= 0, repeat on all epochs till the weights don’t change much. The algorithm will converge as the data is linearly separable.
2
DigiKlausur
1
0.001001
103
null
1. Initialize the weights at random or as 0. 2. Activate the perceptron by giving it an example. 3. Compute the actual output of the neuron. 4. Adjust the parameters of the perceptron. 5. Continue until convergence is achieved. w = rand; y = $\sum \Phi(w \cdot x)$; for $w_i$ in w: $w_i = w_i + \eta \cdot e \cdot y$
Label the data with positive and negative (+/-) labels, initialize the weights randomly, apply (simplified) update rule: Dw = eta*x(n) if <w,x> <= 0, repeat on all epochs till the weights don’t change much. The algorithm will converge as the data is linearly separable.
1
DigiKlausur
0.5
0.001522
104
null
Initialize the weights randomly. $y = f\left(\sum_i w_i x_i + b\right)$: compute the output of the perceptron using the input $x$, the weights $w$, the bias $b$ and the activation function $f(\cdot)$. $e = d - y$: calculate the error by subtracting the actual output from the desired output. $w_{new} = w_{old} + \text{learning rate} \cdot x \cdot e$: update the weights with this formula. The learning rate is a parameter which changes how fast the perceptron learns.
Label the data with positive and negative (+/-) labels, initialize the weights randomly, apply (simplified) update rule: Dw = eta*x(n) if <w,x> <= 0, repeat on all epochs till the weights don’t change much. The algorithm will converge as the data is linearly separable.
2
DigiKlausur
1
0.001001
105
null
for n iterations for each datapoint d error = desired - output if error > 0 weights = weights - error if error < 0 weights = weights + error
Label the data with positive and negative (+/-) labels, initialize the weights randomly, apply (simplified) update rule: Dw = eta*x(n) if <w,x> <= 0, repeat on all epochs till the weights don’t change much. The algorithm will converge as the data is linearly separable.
2
DigiKlausur
1
0.001001
106
null
Pick a random decision boundary; while one of the data points is in the wrong class, turn the decision boundary using the vector of the wrongly classified data point (negative or positive rule).
Label the data with positive and negative (+/-) labels, initialize the weights randomly, apply (simplified) update rule: Dw = eta*x(n) if <w,x> <= 0, repeat on all epochs till the weights don’t change much. The algorithm will converge as the data is linearly separable.
1
DigiKlausur
0.5
0.001522
107
null
trainingset := set of labeled linearly separable data points; w := weight vector with the dimension of the input data; v := local field; phi(v) := activation function (threshold function); y := output; e := error (y - d), where d is the desired output from the labeled training data; n := learning rate (0.1); assign random values to w; for x in trainingset: v = sum(x_i * w_i); y = phi(v); e = y - d; w = w + n*x*e // delta rule end
Label the data with positive and negative (+/-) labels, initialize the weights randomly, apply (simplified) update rule: Dw = eta*x(n) if <w,x> <= 0, repeat on all epochs till the weights don’t change much. The algorithm will converge as the data is linearly separable.
2
DigiKlausur
1
0.001001
108
null
initialize weights w and bias b; set learning rate n; set errorthreshold (upper bound on error); while error < errorthreshold: for every datapoint x in the training dataset: y = [w, b] . [x, 1] (the bias is represented as the weight of a fixed input 1); if y is positive then x belongs to C1, otherwise to C2; store the predicted class; find the error in the predicted output with respect to the labels; store error e; esum = sum of all errors e over all data points; w = w + n * esum
Label the data with positive and negative (+/-) labels, initialize the weights randomly, apply (simplified) update rule: Dw = eta*x(n) if <w,x> <= 0, repeat on all epochs till the weights don’t change much. The algorithm will converge as the data is linearly separable.
2
DigiKlausur
1
0.001001
109
null
For a binary classifier we can use the threshold activation function. 1) Randomly initialize the weights. 2) Calculate the output of the neuron. 3) Find the error by subtracting the current output from the expected output. 4) Modify the weights related to that input with respect to the error. 5) Repeat steps 2-4 until you get a minimal error.
Label the data with positive and negative (+/-) labels, initialize the weights randomly, apply (simplified) update rule: Dw = eta*x(n) if <w,x> <= 0, repeat on all epochs till the weights don’t change much. The algorithm will converge as the data is linearly separable.
2
DigiKlausur
1
0.001001
110
null
null
Label the data with positive and negative (+/-) labels, initialize the weights randomly, apply (simplified) update rule: Dw = eta*x(n) if <w,x> <= 0, repeat on all epochs till the weights don’t change much. The algorithm will converge as the data is linearly separable.
0
DigiKlausur
0
0.003546
111
null
continueprocess = true w = randomlyinitialize() while continueprocess for x in list of points y = w.x diff = d-y // d is the desired output if(diff >= 0) w = w + x else w = w - x if all points are classified without error continueprocess = false
Label the data with positive and negative (+/-) labels, initialize the weights randomly, apply (simplified) update rule: Dw = eta*x(n) if <w,x> <= 0, repeat on all epochs till the weights don’t change much. The algorithm will converge as the data is linearly separable.
2
DigiKlausur
1
0.001001
112
null
n <- learning rate. Repeat until the MSE is small enough: t = t+1; for each point in the training set do: compute the local field of the perceptron: V = W*X; apply the linear activation function: $y = \theta(V) = V$; compute the current error: e = (d-y); apply the delta rule: W(t+1) = W(t) + n*e*X end
Label the data with positive and negative (+/-) labels, initialize the weights randomly, apply (simplified) update rule: Dw = eta*x(n) if <w,x> <= 0, repeat on all epochs till the weights don’t change much. The algorithm will converge as the data is linearly separable.
2
DigiKlausur
1
0.001001
113
null
Classification: In classification, the output produced by the NN is a discrete value which indicates which class the input belongs to. Regression: In regression, the output produced by the NN is a continuous variable. This could be used, for instance, to approximate a continuous function.
Classification is a task of mapping data to discrete labels while regression is a task which maps data to a continuous function or real values. Error in classification is the number of misclassifications while in regression is the summed distance between the true and predicted values.
1
DigiKlausur
0.5
0.001522
114
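A tiny Python sketch contrasting the two error notions mentioned in the reference answer for this block: counting misclassifications for classification versus summing the distances between true and predicted values for regression. The example arrays are made up for illustration.

```python
import numpy as np

# Classification error: number of misclassified labels (discrete outputs)
y_true_cls = np.array([1, 0, 1, 1, 0])
y_pred_cls = np.array([1, 1, 1, 0, 0])
misclassifications = np.sum(y_true_cls != y_pred_cls)      # -> 2 wrong labels

# Regression error: summed distance between true and predicted values (continuous outputs)
y_true_reg = np.array([1.2, 0.5, 2.0])
y_pred_reg = np.array([1.0, 0.7, 1.5])
summed_distance = np.sum(np.abs(y_true_reg - y_pred_reg))  # -> 0.9

print(misclassifications, summed_distance)
```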
null
In classification, output values are always discrete. In regression, output values are continuous
Classification is a task of mapping data to discrete labels while regression is a task which maps data to a continuous function or real values. Error in classification is the number of misclassifications while in regression is the summed distance between the true and predicted values.
1
DigiKlausur
0.5
0.001522
115
null
A hyperplane is given by y = w*x + b. Regression wants to determine w. Classification wants to assign a class to a set of observations. Regression wants to determine the separating hyperplane; classification wants to label data points with a class.
Classification is a task of mapping data to discrete labels while regression is a task which maps data to a continuous function or real values. Error in classification is the number of misclassifications while in regression is the summed distance between the true and predicted values.
1
DigiKlausur
0.5
0.001522
116
null
In classification tasks, we assign discrete labels to data points of our training dataset, each point either being assigned a specific label or not (binary). For supervised learning, these data points are labeled with a ground-truth label vector. In regression, we try to model a function which fits the data points of the training data, and thus model a function with continuous values.
Classification is a task of mapping data to discrete labels while regression is a task which maps data to a continuous function or real values. Error in classification is the number of misclassifications while in regression is the summed distance between the true and predicted values.
1
DigiKlausur
0.5
0.001522
117
null
Classification: - It refers to classifying given data into discrete classes. - The output consists of discrete values. - Used for activities like pattern recognition, etc. Regression: - It refers to estimating the value of some continuous function given an input. - The output is a continuous value. - Used for activities like motor control, etc.
Classification is a task of mapping data to discrete labels while regression is a task which maps data to a continuous function or real values. Error in classification is the number of misclassifications while in regression is the summed distance between the true and predicted values.
1
DigiKlausur
0.5
0.001522
118
null
In classification we try to assign classes to input data. In regression we want the network to behave like a given system/formula. This can also be a time series of input and output data.
Classification is a task of mapping data to discrete labels while regression is a task which maps data to a continuous function or real values. Error in classification is the number of misclassifications while in regression is the summed distance between the true and predicted values.
1
DigiKlausur
0.5
0.001522
119
null
In classification the goal is to separate points into different classes. The outcome is a class label. Regression tries to fit a hyperplane to a point cloud as well as possible, so that future data is represented by that hyperplane as well as possible (LMS). It tries to minimize the distance to all data points. The outcome is a continuous variable.
Classification is a task of mapping data to discrete labels while regression is a task which maps data to a continuous function or real values. Error in classification is the number of misclassifications while in regression is the summed distance between the true and predicted values.
1
DigiKlausur
0.5
0.001522
120
null
Both are learning tasks of an ANN. In classification the goal is to assign a class label to new data points. In regression the goal is to estimate an unknown function. The only difference between the two is that classification uses discrete class labels, while in regression a continuous output is used.
Classification is a task of mapping data to discrete labels while regression is a task which maps data to a continuous function or real values. Error in classification is the number of misclassifications while in regression is the summed distance between the true and predicted values.
1
DigiKlausur
0.5
0.001522
121
null
The approach of classification is to classify sets of input data into their correct classes (for example, used in pattern recognition). The approach of regression is to approximate a defined function f by calculating the error between this function and the result of an algorithm. The difference is that the classification approach is applied to discrete data (the samples are the different points of the input space), while regression is a continuous approach where the whole function must be approximated (for any given input).
Classification is a task of mapping data to discrete labels while regression is a task which maps data to a continuous function or real values. Error in classification is the number of misclassifications while in regression is the summed distance between the true and predicted values.
1
DigiKlausur
0.5
0.001522
122
null
- Classification: In classification problems we have different groups of data that have some common properties, and after training we want our model to detect the class of a new sample correctly - Regression: In regression we have a series of values and we want to use the previous values in this series to predict the next value
Classification is a task of mapping data to discrete labels while regression is a task which maps data to a continuous function or real values. Error in classification is the number of misclassifications while in regression is the summed distance between the true and predicted values.
1
DigiKlausur
0.5
0.001522
123
null
Classification is the problem of distinguishing to which discrete classes input variables are to be assigned; regression is estimation of the output by figuring out the continuous trend of the whole dataset.
Classification is a task of mapping data to discrete labels while regression is a task which maps data to a continuous function or real values. Error in classification is the number of misclassifications while in regression is the summed distance between the true and predicted values.
1
DigiKlausur
0.5
0.001522
124
null
Classification is to assign a class or category to the data, while regression is when you fit the data to a function.
Classification is a task of mapping data to discrete labels while regression is a task which maps data to a continuous function or real values. Error in classification is the number of misclassifications while in regression is the summed distance between the true and predicted values.
1
DigiKlausur
0.5
0.001522
125
null
+ Regression: learns a model/function that can predict other unseen data well. The target/output is real-valued. + Classification: learns a model that classifies/maps input to a discrete target label. The target label/output is binary/discrete.
Classification is a task of mapping data to discrete labels while regression is a task which maps data to a continuous function or real values. Error in classification is the number of misclassifications while in regression is the summed distance between the true and predicted values.
1
DigiKlausur
0.5
0.001522
126
null
Classification describes the application in which a sample is assigned to one specific pattern of the problem; in comparison to regression the output is discrete and not continuous. In regression the output is continuous.
Classification is a task of mapping data to discrete labels while regression is a task which maps data to a continuous function or real values. Error in classification is the number of misclassifications while in regression is the summed distance between the true and predicted values.
1
DigiKlausur
0.5
0.001522
127
null
Classification is the task of classifying the input signals into a finite number of groups, so the output is a number that indicates a certain class. Regression is the task of approximating a function by estimating the values given the input signals, so the output can be any real number.
Classification is a task of mapping data to discrete labels while regression is a task which maps data to a continuous function or real values. Error in classification is the number of misclassifications while in regression is the summed distance between the true and predicted values.
1
DigiKlausur
0.5
0.001522
128
null
Classification: We need to predict the output data discretely, that is, the output space is a discrete space. Regression: We need to predict the output data continuously, that is, the output space is a continuous space. The main difference is discreteness versus continuity.
Classification is a task of mapping data to discrete labels while regression is a task which maps data to a continuous function or real values. Error in classification is the number of misclassifications while in regression is the summed distance between the true and predicted values.
1
DigiKlausur
0.5
0.001522
129
null
Classification is a problem of assigning a particular class to each data point in a given dataset. <br> Regression is a problem of fitting the given dataset on a particular hyperplane which can be used for representing the given data. It finds the hyperplane which minimises the mean square error.
Classification is a task of mapping data to discrete labels while regression is a task which maps data to a continuous function or real values. Error in classification is the number of misclassifications while in regression is the summed distance between the true and predicted values.
1
DigiKlausur
0.5
0.001522
130
null
- Classification is a problem of assigning labels or classes to the input. The output is a discrete variable. - Regression is a problem of assigning a continuous variable to the input.
Classification is a task of mapping data to discrete labels while regression is a task which maps data to a continuous function or real values. Error in classification is the number of misclassifications while in regression is the summed distance between the true and predicted values.
1
DigiKlausur
0.5
0.001522
131
null
Classification is a problem of categorization into discrete classes, whereas regression is a problem in a continuous space where the goal is to either minimize or maximize a cost function. Classification is the process of dividing a set of discrete inputs into classes corresponding to similar patterns, such as clustering. Regression could be finding a pattern of the distribution of the data, such as fitting a line.
Classification is a task of mapping data to discrete labels while regression is a task which maps data to a continuous function or real values. Error in classification is the number of misclassifications while in regression is the summed distance between the true and predicted values.
1
DigiKlausur
0.5
0.001522
132
null
Classification in machine learning is used to find a decision surface in the form of a hyperplane that can separate a set of input examples (or set of patterns) into their respective classes. Regression, on the other hand, is used to find the parameters (i.e., the weight vector $w$ and the bias $b$) for the function that can best fit the given data points $\{x_i, d_i\}$. Thus classification deals with predicting the class label for discrete data points whereas regression deals with fitting a continuous real-valued function.
Classification is a task of mapping data to discrete labels while regression is a task which maps data to a continuous function or real values. Error in classification is the number of misclassifications while in regression is the summed distance between the true and predicted values.
1
DigiKlausur
0.5
0.001522
133
null
Classification is separating the data into classes and the output is a discontinuous variable. Regression is fitting a model and the output is a continuous variable.
Classification is a task of mapping data to discrete labels while regression is a task which maps data to a continuous function or real values. Error in classification is the number of misclassifications while in regression is the summed distance between the true and predicted values.
1
DigiKlausur
0.5
0.001522
134
null
Classification is about classifying the given data into different classes, whereas regression is about finding the local/global minima. We use perceptrons to classify the data and we use unconstrained optimization techniques like Newton's method for regression.
Classification is a task of mapping data to discrete labels while regression is a task which maps data to a continuous function or real values. Error in classification is the number of misclassifications while in regression is the summed distance between the true and predicted values.
1
DigiKlausur
0.5
0.001522
135
null
Classification: assign test data to a prescribed class. Regression: approximate an unknown function by minimizing the error of the input-output mapping.
Classification is a task of mapping data to discrete labels while regression is a task which maps data to a continuous function or real values. Error in classification is the number of misclassifications while in regression is the summed distance between the true and predicted values.
1
DigiKlausur
0.5
0.001522
136
null
Classification: In classification, the output variable takes class labels, identifying group membership<br> Regression: In regression, the output variable takes continuous values, predicting a response
Classification is a task of mapping data to discrete labels while regression is a task which maps data to a continuous function or real values. Error in classification is the number of misclassifications while in regression is the summed distance between the true and predicted values.
1
DigiKlausur
0.5
0.001522
137
null
A classification problem is used to classify a set of data points into specific groups. Regression is used to predict time-series data. Classification works on a discrete set of values and regression works on continuous values.
Classification is a task of mapping data to discrete labels while regression is a task which maps data to a continuous function or real values. Error in classification is the number of misclassifications while in regression is the summed distance between the true and predicted values.
1
DigiKlausur
0.5
0.001522
138
null
Classification: Classification is done between classes. The machine determines to what class the data belongs. Regression: Regression is expecting an output for an input. The machine learns from the given data and models a function, and when a new input is given it predicts the output. Difference: Classification has a discrete output whereas regression has a continuous output.
Classification is a task of mapping data to discrete labels while regression is a task which maps data to a continuous function or real values. Error in classification is the number of misclassifications while in regression is the summed distance between the true and predicted values.
1
DigiKlausur
0.5
0.001522
139
null
Regression: Tries to fit a line or curve among the given points. It has a continuous output; the output is a function. Classification: Tries to classify the given points into two or more classes. It has a discrete output; the output is a value representing the class.
Classification is a task of mapping data to discrete labels while regression is a task which maps data to a continuous function or real values. Error in classification is the number of misclassifications while in regression is the summed distance between the true and predicted values.
1
DigiKlausur
0.5
0.001522
140
null
Classification: Each datapoint is assigned with a class Regression: Each datapoint is assigned with a value In classification we assign classes or labels to datapoints. The error signal here can be only true or false. In regression we try to learn a function, the error for each prediction can be a number.
Classification is a task of mapping data to discrete labels while regression is a task which maps data to a continuous function or real values. Error in classification is the number of misclassifications while in regression is the summed distance between the true and predicted values.
2
DigiKlausur
1
0.001001
141
null
In classification a binary pattern has to be partitioned into the two classes. In regression a line has to be fitted as closely as possible to some data points. The difference is that in classification the output is a single class label, while in regression the output is continuous.
Classification is a task of mapping data to discrete labels while regression is a task which maps data to a continuous function or real values. Error in classification is the number of misclassifications while in regression is the summed distance between the true and predicted values.
1
DigiKlausur
0.5
0.001522
142
null
In classification the input data is split into 2 or more classes. The goal of the neural network is to learn the input data and then be able to classify new input data into the classes. Based on the learned information the network then maps input data into one of the classes, which is a discrete space. In regression the input data is learned as well, but here the network tries to predict feature values, which are in a continuous space. The network tries to predict as close as possible to new input data using only the learned model.
Classification is a task of mapping data to discrete labels while regression is a task which maps data to a continuous function or real values. Error in classification is the number of misclassifications while in regression is the summed distance between the true and predicted values.
1
DigiKlausur
0.5
0.001522
143
null
Classification tries to label discrete data points with distinct classes, while regression tries to approximate a continuous function from discrete data points. Results of these methods are respectively a labeled data set or a continuous function.
Classification is a task of mapping data to discrete labels while regression is a task which maps data to a continuous function or real values. Error in classification is the number of misclassifications while in regression is the summed distance between the true and predicted values.
1
DigiKlausur
0.5
0.001522
144
null
In classification the task is to give a discrete output value to an input. It assigns one of the defined classes to the current input. Regression tries to approximate a function while minimizing the error and produces a continuous output value.
Classification is a task of mapping data to discrete labels while regression is a task which maps data to a continuous function or real values. Error in classification is the number of misclassifications while in regression is the summed distance between the true and predicted values.
1
DigiKlausur
0.5
0.001522
145
null
Classification means mapping input data to a class label, for example 1 and -1. In regression, on the other hand, a continuous function is learned in such a way that $f(x) - F(x)$ is minimized, where $f(x)$ is the function learned by a learning machine and $F(x)$ is the original function.
Classification is a task of mapping data to discrete labels while regression is a task which maps data to a continuous function or real values. Error in classification is the number of misclassifications while in regression is the summed distance between the true and predicted values.
1
DigiKlausur
0.5
0.001522
146
null
Classification is supervised learning where the underlying function representing the training data is learned from the training data to predict the classes of data points or patterns drawn from a distribution similar to that of the training data. The weights of the neural network are learned to minimize the classification error. Regression is a supervised learning algorithm where the underlying function representing the training data is learned from the training data to predict the value of a label or the output of some system for a new data point or pattern of a similar type. The weights of the neural network are learned to minimize the error in the prediction of the function. Differences: 1. The output of classification is discrete (class 1, 2, 3) whereas the output of regression is continuous 2. The error in classification is the number of wrong classifications whereas the error in regression is the distance between the label value and the predicted value
Classification is a task of mapping data to discrete labels while regression is a task which maps data to a continuous function or real values. Error in classification is the number of misclassifications while in regression is the summed distance between the true and predicted values.
2
DigiKlausur
1
0.001001
147
null
Classification is a type of problem where the algorithm needs to separate one data class from another. If there are 2 classes C1 and C2, the algorithm classifies the given data into these two classes; it is a discrete process. Regression is predicting the next point depending on the previous points; it is a continuous process.
Classification is a task of mapping data to discrete labels while regression is a task which maps data to a continuous function or real values. Error in classification is the number of misclassifications while in regression is the summed distance between the true and predicted values.
1
DigiKlausur
0.5
0.001522
148
null
Classification is the problem where the input data has to be put into two or more classes distinctively different from each other. For example, in the case of binary classification one class can be -1 and the other +1. Regression, on the other hand, is data fitting. The main aim is to find a hyperplane which can fit a given input pattern.
Classification is a task of mapping data to discrete labels while regression is a task which maps data to a continuous function or real values. Error in classification is the number of misclassifications while in regression it is the summed distance between the true and predicted values.
1
DigiKlausur
0.5
0.001522
149
null
Classification: a task to assign the given input to one of several classes; the classes are discrete values. Regression: the task of predicting an output in a continuous range; the prediction can be any value within that range.
Classification is a task of mapping data to discrete labels while regression is a task which maps data to a continuous function or real values. Error in classification is the number of misclassifications while in regression it is the summed distance between the true and predicted values.
1
DigiKlausur
0.5
0.001522
150
null
In a classification task the aim is to separate the data into different classes, such that the output of the NN gives the class index for each input point. E.g. if the task is to classify binary data, then the output of the NN will be 0 or 1, and each value represents one class. In the case of a regression task, the aim is to fit the data, namely a function that performs an input-output mapping. The output of the NN in this case will be an error value, such that we know how closely our function is fitted to the data points.
Classification is a task of mapping data to discrete labels while regression is a task which maps data to a continuous function or real values. Error in classification is the number of misclassifications while in regression it is the summed distance between the true and predicted values.
1
DigiKlausur
0.5
0.001522
151
null
1. Arrange the weights in the required topology according to the problem. 2. Initialize the weights randomly such that all the weights are different. 3. Sample an input from the input space. 4. Similarity matching: match the input to a neuron in the topological lattice, which becomes the winning neuron. 5. Update the weights of the winning neuron and its neighbours as determined by the neighbourhood function. Reduce the neighbourhood, reduce the learning rate, and make sure the learning rate stays above zero. 6. If ordering and convergence are complete, stop; else continue from step 3 (a code sketch of this loop follows this record).
Arrange the weights in the required topology according to the problem. Initialize the weights randomly such that all the weights are different. Sample the input from the input space. Similarity matching: match the input to a neuron in the topological lattice which becomes the winning neuron. Update the weights of the winning neuron and its neighbours determined by the neighbourhood function. Reduce the neighbourhood and decay the learning rate and share radius. If ordering and convergence are complete, stop. Else continue sampling from the input space.
2
DigiKlausur
1
0.001001
152
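As a hedged sketch of the SOM training loop described in the answer above (a 2-D lattice, Gaussian neighbourhood and exponential decays are assumed details; all names and constants are illustrative, not taken from the original):

```python
import numpy as np

def train_som(data, grid=(10, 10), n_steps=1000, eta0=0.1, sigma0=3.0, tau=1000.0, seed=0):
    rng = np.random.default_rng(seed)
    # Steps 1-2: arrange neurons on a lattice and initialise their weights randomly.
    coords = np.array([[i, j] for i in range(grid[0]) for j in range(grid[1])], dtype=float)
    weights = rng.random((len(coords), data.shape[1]))
    for n in range(n_steps):
        # Step 3: sample an input from the input space.
        x = data[rng.integers(len(data))]
        # Step 4: similarity matching -- the neuron with the closest weight vector wins.
        winner = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
        # Step 5: update winner and neighbours, weighted by a Gaussian neighbourhood,
        # while the learning rate and neighbourhood radius decay over time.
        eta = eta0 * np.exp(-n / tau)
        sigma = sigma0 * np.exp(-n / tau)
        lattice_dist = np.linalg.norm(coords - coords[winner], axis=1)
        h = np.exp(-lattice_dist ** 2 / (2.0 * sigma ** 2))
        weights += eta * h[:, None] * (x - weights)
        # Step 6: stop after the step budget; otherwise continue sampling.
    return weights

# usage (assumed 2-D toy data): trained = train_som(np.random.rand(500, 2))
```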
null
i. First we initialize random weights for the neurons. ii. Then we choose a random input from the input space. iii. We compute the distance between the input vector and each weight vector. iv. The neuron whose weight vector has the minimum Euclidean distance to the input vector is considered the winning neuron. v. Then we find the neighbourhood neurons of the winning neuron. vi. We adjust the weights of all neighbourhood neurons. vii. We reduce the learning parameter and the neighbourhood size. viii. Continue until the map converges.
Arrange the weights in the required topology according to the problem. Initialize the weights randomly such that all the weights are different. Sample the input from the input space. Similarity matching: match the input to a neuron in the topological lattice which becomes the winning neuron. Update the weights of the winning neuron and its neighbours determined by the neighbourhood function. Reduce the neighbourhood and decay the learning rate and share radius. If ordering and convergence are complete, stop. Else continue sampling from the input space.
1
DigiKlausur
0.5
0.001522
153
null
Let $w$ denote the weights, $t$ a threshold, $\eta$ the learning rate and $h$ the neighbourhood function, which decreases with the distance from the winning neuron: $h(x, x_{win}) = \exp(-2/||x - x_{win}||)$. Pseudocode: w = rand(); // initialize the weights with random values. while (w_delta > t) { // proceed until there are no noticeable changes. x_{win} = \arg\min ||x - w||^2; // determine the winning neuron, whose weight vector is closest to the input x (competitive learning). w_{new} = w_{old} + \eta \cdot h(x, x_{win}) \cdot (x - w); // update the weights of neurons in the neighbourhood of the winning neuron; weights of neurons outside the neighbourhood are not updated. }
Arrange the weights in the required topology according to the problem. Initialize the weights randomly such that all the weights are different. Sample the input from the input space. Similarity matching: match the input to a neuron in the topological lattice which becomes the winning neuron. Update the weights of the winning neuron and its neighbours determined by the neighbourhood function. Reduce the neighbourhood and decay the learning rate and share radius. If ordering and convergence are complete, stop. Else continue sampling from the input space.
2
DigiKlausur
1
0.001001
154
null
Initialize the weights with small values (such that all of the weight vectors are different); sample a datapoint and feed it into the network; determine the winning neuron on the lattice by picking the neuron with the least Euclidean distance of its weight vector to the input vector; determine the neighbourhood of the winning neuron through the neighbourhood function; change the weights of the neurons, namely spatially 'pulling' the weight vectors of the neighbourhood neurons towards the input vector; depending on the timestep, reduce the learning rate and neighbourhood size based on whether we are in the self-organizing or fine-tuning phase (a sketch of such a two-phase schedule follows this record); repeat until the maximum number of steps is reached.
Arrange the weights in the required topology according to the problem. Initialize the weights randomly such that all the weights are different. Sample the input from the input space. Similarity matching: match the input to a neuron in the topological lattice which becomes the winning neuron. Update the weights of the winning neuron and its neighbours determined by the neighbourhood function. Reduce the neighbourhood and decay the learning rate and share radius. If ordering and convergence are complete, stop. Else continue sampling from the input space.
2
DigiKlausur
1
0.001001
155
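The answer above mentions reducing the learning rate and neighbourhood size differently in the self-organizing and fine-tuning phases. A minimal sketch of one such two-phase schedule, assuming exponential decay during ordering and small, roughly constant values afterwards (phase length and all constants are purely illustrative assumptions):

```python
import numpy as np

def som_schedule(n, n_ordering=1000, eta0=0.1, sigma0=5.0, tau=1000.0):
    """Return (learning rate, neighbourhood radius) for training step n.

    Ordering phase: both quantities decay exponentially from their initial values.
    Fine-tuning (convergence) phase: a small constant learning rate and a
    neighbourhood shrunk to roughly the nearest lattice neighbours.
    """
    if n < n_ordering:
        return eta0 * np.exp(-n / tau), sigma0 * np.exp(-n / tau)
    return 0.01, 1.0

# usage: eta, sigma = som_schedule(n=1500)
```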
null
1. Randomly initialize the weights. 2. Randomly select an input from the training data. 3. Find the nearest neighbour of this input among the weights: compute the Euclidean distance of the input from each weight and select the weight with the least distance. 4. Update the weights of all the neurons within the neighbourhood $h(n)$ (a Gaussian function with an exponentially decaying $\sigma(n)$) of the winning neuron with some learning rate $\eta(n)$: $$\Delta w_{ij}=\eta(n)h(n)(||x_i-x_j||)$$ where $$\eta(n)= \eta_0 e^{-n/T_1}$$ and $$\sigma(n)= \sigma_0 e^{-n/T_1}$$ (these schedules are sketched in code after this record).
Arrange the weights in the required topology according to the problem. Initialize the weights randomly such that all the weights are different. Sample the input from the input space. Similarity matching: match the input to a neuron in the topological lattice which becomes the winning neuron. Update the weights of the winning neuron and its neighbours determined by the neighbourhood function. Reduce the neighbourhood and decay the learning rate and share radius. If ordering and convergence are complete, stop. Else continue sampling from the input space.
2
DigiKlausur
1
0.001001
156
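The decay schedules and Gaussian neighbourhood written in the answer above translate almost directly into code; the following is a hedged sketch with illustrative defaults for $\eta_0$, $\sigma_0$ and $T_1$ (none of these constants come from the original answer):

```python
import numpy as np

def eta(n, eta0=0.1, T1=1000.0):
    """Learning rate eta(n) = eta0 * exp(-n / T1)."""
    return eta0 * np.exp(-n / T1)

def sigma(n, sigma0=3.0, T1=1000.0):
    """Neighbourhood width sigma(n) = sigma0 * exp(-n / T1)."""
    return sigma0 * np.exp(-n / T1)

def h(d, n):
    """Gaussian neighbourhood value for a lattice distance d at step n."""
    return np.exp(-d ** 2 / (2.0 * sigma(n) ** 2))

# usage: weight_change_scale = eta(n=10) * h(d=1.0, n=10)  # scales the pull towards the input
```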
null
In SOM we start with randomized weights, with $\mu$ the learning rate, $d_{ji}$ the distance between neurons j and i, and $h$ the neighbourhood function. Repeat as long as the error is too high / the maximum number of iterations has not been reached: 1. take an input sample 2. find the closest node/weight 3. find all its neighbours 4. move that weight and its neighbours closer to the given input, using the neighbourhood function (e.g. Gaussian) to reduce the effect on far-away neighbours 5. (optional) adapt the learning rate and the neighbourhood function.
Arrange the weights in the required topology according to the problem. Initialize the weights randomly such that all the weights are different. Sample the input from the input space. Similarity matching: match the input to a neuron in the topological lattice which becomes the winning neuron. Update the weights of the winning neuron and its neighbours determined by the neighbourhood function. Reduce the neighbourhood and decay the learning rate and share radius. If ordering and convergence are complete, stop. Else continue sampling from the input space.
2
DigiKlausur
1
0.001001
157