Rest of questions


Neural Network Training Quiz

Test your knowledge on neural networks and the backpropagation algorithm with this comprehensive quiz! Whether you are a student, a professional, or just a curious learner, this quiz covers essential concepts and technical details.

Key Features:

  • 17 carefully crafted questions
  • Multiple choice format for easy answering
  • Covers a wide range of topics in neural networks
17 Questions · 4 Minutes · Created by SyncingCloud257
A training pattern, consisting of an input vector x = [x1, x2, x3]T and desired outputs t = [t1, t2, t3]T, is presented to the following neural network. What is the usual sequence of events for training the network using the backpropagation algorithm?
A. (1) calculate yj = f(Hj), (2) calculate zk = f(Ik), (3) update wkj, (4) update vji.
B. (1) calculate yj = f(Hj), (2) calculate zk = f(Ik), (3) update vji, (4) update wkj.
C. (1) calculate yj = f(Hj), (2) update vji, (3) calculate zk = f(Ik), (4) update wkj.
D. (1) calculate zk = f(Ik), (2) update wkj, (3) calculate yj = f(Hj), (4) update vji.
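The standard ordering can be sketched in plain Python. This is a minimal illustration of one backpropagation step for a generic one-hidden-layer network with zero biases, not the specific network in the figure; the demo weights, inputs, and learning rate at the bottom are arbitrary:

```python
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def train_step(x, t, V, W, eta=0.5):
    """One backpropagation step (biases = 0).
    V: list of hidden weight vectors v_j; W: list of output weight vectors w_k.
    Order: (1) yj = f(Hj), (2) zk = f(Ik), (3) update wkj, (4) update vji."""
    # (1) forward pass through the hidden layer
    y = [sigmoid(sum(v[i] * x[i] for i in range(len(x)))) for v in V]
    # (2) forward pass through the output layer
    z = [sigmoid(sum(W[k][j] * y[j] for j in range(len(y)))) for k in range(len(W))]
    # backward pass: output errors, then hidden errors (using the OLD W)
    d_out = [z[k] * (1 - z[k]) * (t[k] - z[k]) for k in range(len(z))]
    d_hid = [y[j] * (1 - y[j]) * sum(d_out[k] * W[k][j] for k in range(len(W)))
             for j in range(len(y))]
    # (3) update output weights wkj, then (4) hidden weights vji
    W = [[W[k][j] + eta * d_out[k] * y[j] for j in range(len(y))] for k in range(len(W))]
    V = [[V[j][i] + eta * d_hid[j] * x[i] for i in range(len(x))] for j in range(len(V))]
    return V, W, y, z

# Arbitrary demo values, for illustration only.
V = [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]
W = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]
V, W, y, z = train_step([1.0, 2.0, 3.0], [1.0, 0.0, 1.0], V, W)
```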
After some training, the units in the neural network of question 22 have the following weight vectors: v1 = [−0.7, 1.8, 2.3]T, v2 = [−1.2, −0.6, 2.1]T, w1 = [1.0, −3.5], w2 = [0.5, −1.2] and w3 = [0.3, 0.6]. Assume that all units have sigmoid activation functions given by f(x) = 1/(1 + exp(−x)) and that each unit has a bias θ = 0 (zero). If the network is tested with an input vector x = [2.0, 3.0, 1.0]T, then the output of the first hidden neuron y1 will be (Hint: on some calculators, exp(x) = e^x where e = 2.7182818)
A. -2.1000
B. 0.1091
C. 0.5000
D. 0.9982
E. 6.3000
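A quick check with Python's math module, using the weights and input from the question (H1 is the dot product v1 · x with zero bias):

```python
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

v1 = [-0.7, 1.8, 2.3]
x = [2.0, 3.0, 1.0]
H1 = sum(vi * xi for vi, xi in zip(v1, x))  # -1.4 + 5.4 + 2.3 = 6.3
y1 = sigmoid(H1)
print(f"{y1:.4f}")  # 0.9982
```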
For the same neural network described in questions 22 and 23, the output of the second hidden neuron y2 will be (Assume exactly the same weights, activation functions, bias values and input vector as described in the previous question.)
A. -2.1000
B. 0.1091
C. 0.5000
D. 0.9982
E. 6.3000
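The same check for the second hidden neuron, using v2 from question 23:

```python
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

v2 = [-1.2, -0.6, 2.1]
x = [2.0, 3.0, 1.0]
H2 = sum(vi * xi for vi, xi in zip(v2, x))  # -2.4 - 1.8 + 2.1 = -2.1
y2 = sigmoid(H2)
print(f"{y2:.4f}")  # 0.1091
```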
For the same neural network described in questions 22 and 23, the output of the first output neuron z1 will be (Assume exactly the same weights, activation functions, bias values and input vector as in question 23.)
A. 0.0570
B. 0.2093
C. 0.5902
D. 0.5910
E. 0.6494
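A worked check: the output neuron takes the hidden outputs y1, y2 from the two previous questions as its inputs, weighted by w1:

```python
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

# Hidden outputs from the previous two questions.
y1 = sigmoid(-0.7 * 2.0 + 1.8 * 3.0 + 2.3 * 1.0)   # sigmoid(6.3)
y2 = sigmoid(-1.2 * 2.0 - 0.6 * 3.0 + 2.1 * 1.0)   # sigmoid(-2.1)
w1 = [1.0, -3.5]
I1 = w1[0] * y1 + w1[1] * y2
z1 = sigmoid(I1)
print(f"{z1:.4f}")  # 0.6494
```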
For the same neural network described in questions 22 and 23, the output of the third output neuron z3 will be (Assume exactly the same weights, activation functions, bias values and input vector as in question 23.)
A. 0.0570
B. 0.2093
C. 0.5902
D. 0.5910
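The same computation with w3 instead of w1 gives the third output:

```python
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

y1 = sigmoid(-0.7 * 2.0 + 1.8 * 3.0 + 2.3 * 1.0)   # sigmoid(6.3)
y2 = sigmoid(-1.2 * 2.0 - 0.6 * 3.0 + 2.1 * 1.0)   # sigmoid(-2.1)
w3 = [0.3, 0.6]
I3 = w3[0] * y1 + w3[1] * y2
z3 = sigmoid(I3)
print(f"{z3:.4f}")  # 0.5902
```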
The following figure shows part of the neural network described in questions 22 and 23. In this question, a new input pattern is presented to the network and training continues as follows. The actual outputs of the network are given by z = [0.35, 0.88, 0.57]T and the corresponding target outputs are given by t = [1.00, 0.00, 1.00]T. The weights w12, w22 and w32 are also shown below. For the output units, the Generalized Delta Rule can be written as Δwkj = ηδkyj where δk = f′(Ik)(tk − zk), where Δwkj is the change to the weight from unit j to unit k, η is the learning rate, δk is the error for unit k, and f′(net) is the derivative of the activation function f(net). For the sigmoid activation function given in question 23, the derivative can be rewritten as f′(Ik) = f(Ik)[1 − f(Ik)]. What is the error for each of the output units?
A. δoutput 1 = 0.4225, δoutput 2 = −0.1056, and δoutput 3 = 0.1849.
B. δoutput 1 = 0.1479, δoutput 2 = −0.0929, and δoutput 3 = 0.1054.
C. δoutput 1 = −0.4225, δoutput 2 = 0.1056, and δoutput 3 = −0.1849.
D. δoutput 1 = −0.1479, δoutput 2 = 0.0929, and δoutput 3 = −0.1054.
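With the sigmoid derivative, each output error is δk = zk(1 − zk)(tk − zk), which can be checked directly:

```python
z = [0.35, 0.88, 0.57]
t = [1.00, 0.00, 1.00]
# delta_k = f'(Ik) * (tk - zk) = zk * (1 - zk) * (tk - zk)
deltas = [zk * (1 - zk) * (tk - zk) for zk, tk in zip(z, t)]
print([f"{d:.4f}" for d in deltas])  # ['0.1479', '-0.0929', '0.1054']
```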
For the hidden units of the same network, the Generalized Delta Rule can be written as Δvji = ηδjxi where δj = f′(Hj) Σk δkwkj, where Δvji is the change to the weight from unit i to unit j, η is the learning rate, δj is the error for unit j, and f′(net) is the derivative of the activation function f(net). For the sigmoid activation function given in question 23, the derivative can be rewritten as f′(Hj) = f(Hj)[1 − f(Hj)]. What is the error for hidden unit 2, given that its activation for the pattern being processed is currently y2 = 0.74?
A. δhidden 2 = −0.2388
B. δhidden 2 = −0.0660
C. δhidden 2 = 0.0000
D. δhidden 2 = 0.0660
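The weights w12, w22 and w32 come from a figure that is not reproduced here; assuming they are the second components of w1, w2 and w3 from question 23 (−3.5, −1.2 and 0.6 — an assumption, though it is consistent with the answer options), the hidden error can be checked as:

```python
# Output errors from the previous question.
delta_out = [0.1479, -0.0929, 0.1054]
w_k2 = [-3.5, -1.2, 0.6]   # assumed from the (missing) figure
y2 = 0.74
# delta_j = f'(Hj) * sum_k delta_k * w_kj = yj * (1 - yj) * sum_k delta_k * w_kj
delta_h2 = y2 * (1 - y2) * sum(d * w for d, w in zip(delta_out, w_k2))
print(f"{delta_h2:.4f}")  # -0.0660 with these assumed weights
```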
Which of the following techniques is NOT a strategy for dealing with local minima in the backpropagation algorithm?
A. Add random noise to the weights or input vectors during training.
B. Train using the Generalized Delta Rule with momentum.
C. Train and test using the hold-one-out strategy.
D. Test with a committee of networks.
Training with the “1-of-M” coding is best explained as follows:
A. Set the actual output to 1 for the correct class, and set all of the other actual outputs to 0.
B. Set the actual outputs to the posterior probabilities for the different classes.
C. Set the target output to 1 for the correct class, and set all of the other target outputs to 0.
D. Set the target outputs to the posterior probabilities for the different classes.
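"1-of-M" (one-hot) coding of the target vector can be illustrated with a small helper (the function name is just for this sketch):

```python
def one_of_m(correct_class, m):
    """Target vector: 1 for the correct class, 0 for all other classes."""
    return [1 if k == correct_class else 0 for k in range(m)]

print(one_of_m(2, 4))  # [0, 0, 1, 0]
```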
Consider the following feedforward network with one hidden layer of units: The input vector to the network is x = [x1, x2, x3]T, the vector of hidden layer outputs is y = [y1, y2]T, the vector of actual outputs is z = [z1, z2, z3]T, and the vector of desired outputs is t = [t1, t2, t3]T. The network has the following weight vectors: v1 = [0.4, −0.6, 1.9]T, v2 = [−1.2, 0.5, −0.7]T, w1 = [1.0, −3.5], w2 = [0.5, −1.2] and w3 = [0.3, 0.6]. Assume that all units have sigmoid activation functions given by f(x) = 1/(1 + exp(−x)) and that each unit has a bias θ = 0 (zero). If the network
A. -2.300
B. 0.091
C. 0.644
D. 0.993
E. 4.900
Assuming exactly the same neural network and the same input vector as in the previous question, what is the activation I2 of the second output neuron?
A. 0.353
B. 0.387
C. 0.596
D. 0.662
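The text of the previous question is cut off, so the input vector is not stated; its answer options are, however, consistent with x = [1.0, 2.0, 3.0]T, and assuming that input, I2 (the weighted sum into output unit 2, before the sigmoid) can be computed as:

```python
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

x = [1.0, 2.0, 3.0]        # assumed: the question text is truncated
v1 = [0.4, -0.6, 1.9]
v2 = [-1.2, 0.5, -0.7]
w2 = [0.5, -1.2]

# Hidden outputs, then the activation (net input) of output unit 2.
y = [sigmoid(sum(vi * xi for vi, xi in zip(v, x))) for v in (v1, v2)]
I2 = sum(wi * yi for wi, yi in zip(w2, y))
print(f"{I2:.3f}")  # 0.387
```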
For the hidden units of the network in question 31, the generalized Delta rule can be written as Δvji = ηδjxi, where Δvji is the change to the weights from unit i to unit j, η is the learning rate, δj is the error term for unit j, and xi is the ith input to unit j. In the backpropagation algorithm, what is the error term δj?
A. δj = f′(Hj)(tk − zk).
B. δj = f′(Ik)(tk − zk).
C. δj = f′(Hj) Σk δkwkj.
D. δj = f′(Ik) Σk δkwkj.
For the output units of the network in question 31, the generalized Delta rule can be written as Δwkj = ηδkyj, where Δwkj is the change to the weights from unit j to unit k, η is the learning rate, δk is the error term for unit k, and yj is the jth input to unit k. In the backpropagation algorithm, what is the error term δk?
A. δk = f′(Hj)(tk − zk).
B. δk = f′(Ik)(tk − zk).
C. δk = f′(Hj) Σk δkwkj.
D. δk = f′(Ik) Σk δkwkj.
Which of the following equations best describes the generalized Delta rule with momentum?
A. Δwkj(t + 1) = ηδkyj + αf(Hj)yj(t)
B. Δwkj(t + 1) = αδkyj(t)
C. Δwkj(t + 1) = ηδkyj + αΔwkj(t)
D. Δwkj(t + 1) = ηδkyj(t)
E. Δwkj(t + 1) = ηδkyj + αδkyj(t)
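A momentum term carries over a fraction α of the previous weight change. A minimal sketch (the values of η, α, δk and yj below are arbitrary, for illustration only):

```python
def momentum_update(prev_dw, delta_k, y_j, eta=0.5, alpha=0.9):
    """Generalized Delta rule with momentum:
    dw_kj(t+1) = eta * delta_k * y_j + alpha * dw_kj(t)."""
    return eta * delta_k * y_j + alpha * prev_dw

# Repeating the same gradient shows the step growing toward its steady state.
dw = 0.0
for _ in range(3):
    dw = momentum_update(dw, delta_k=0.1, y_j=0.8)
print(f"{dw:.4f}")  # 0.1084
```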
The following figure shows part of the neural network described in question 31. A new input pattern is presented to the network and training proceeds as follows. The actual outputs of the network are given by z = [0.32, 0.05, 0.67]T and the corresponding target outputs are given by t = [1.00, 1.00, 1.00]T. The weights w12, w22 and w32 are also shown below. For the output units, the derivative of the sigmoid function can be rewritten as f′(Ik) = f(Ik)[1 − f(Ik)]. What is the error for each of the output units?
A. δoutput 1 = −0.2304, δoutput 2 = 0.3402, and δoutput 3 = −0.8476.
B. δoutput 1 = 0.1084, δoutput 2 = 0.1475, and δoutput 3 = 0.1054.
C. δoutput 1 = 0.1480, δoutput 2 = 0.0451, and δoutput 3 = 0.0730.
D. δoutput 1 = 0.4225, δoutput 2 = −0.1056, and δoutput 3 = 0.1849.
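As in the earlier question, each output error is δk = zk(1 − zk)(tk − zk):

```python
z = [0.32, 0.05, 0.67]
t = [1.00, 1.00, 1.00]
# delta_k = f'(Ik) * (tk - zk) = zk * (1 - zk) * (tk - zk)
deltas = [zk * (1 - zk) * (tk - zk) for zk, tk in zip(z, t)]
print([f"{d:.4f}" for d in deltas])  # ['0.1480', '0.0451', '0.0730']
```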
For the hidden units, the derivative of the sigmoid function can be rewritten as f′(Hj) = f(Hj)[1 − f(Hj)]. What is the error for hidden unit 2, given that its activation for the pattern being processed is currently y2 = 0.50?
A. δhidden 2 = −0.4219
B. δhidden 2 = −0.1321
C. δhidden 2 = −0.0677
D. δhidden 2 = 0.0481
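Again the weights w12, w22 and w32 are in a figure that is not reproduced here; assuming they are the second components of w1, w2 and w3 from question 31 (−3.5, −1.2 and 0.6 — an assumption, consistent with the answer options), the hidden error can be checked as:

```python
# Output errors delta_k = zk * (1 - zk) * (tk - zk) from the previous question.
z = [0.32, 0.05, 0.67]
t = [1.00, 1.00, 1.00]
delta_out = [zk * (1 - zk) * (tk - zk) for zk, tk in zip(z, t)]

w_k2 = [-3.5, -1.2, 0.6]   # assumed from the (missing) figure
y2 = 0.50
delta_h2 = y2 * (1 - y2) * sum(d * w for d, w in zip(delta_out, w_k2))
print(f"{delta_h2:.4f}")  # -0.1321 with these assumed weights
```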
What is the biggest difference between Widrow & Hoff’s Delta Rule and the Perceptron Learning Rule for learning in a single-layer feed forward network?
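The contrast can be sketched side by side: the Perceptron Learning Rule computes its error from the thresholded output (so it changes nothing once a pattern is classified correctly), while Widrow & Hoff's Delta Rule (LMS) does gradient descent on the squared error of the raw linear output, adjusting weights even for correctly classified patterns. A minimal single-unit illustration (function names are just for this sketch):

```python
def perceptron_update(w, x, t, eta=0.1):
    """Perceptron rule: the error (t - o) uses the THRESHOLDED output o."""
    o = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else 0
    return [wi + eta * (t - o) * xi for wi, xi in zip(w, x)]

def delta_update(w, x, t, eta=0.1):
    """Widrow-Hoff Delta Rule (LMS): the error (t - net) uses the raw
    LINEAR output, i.e. gradient descent on the squared error."""
    net = sum(wi * xi for wi, xi in zip(w, x))
    return [wi + eta * (t - net) * xi for wi, xi in zip(w, x)]

# A pattern that is already classified correctly (net = 0.4 >= 0, t = 1):
# the perceptron rule leaves w unchanged, the delta rule still moves it.
print(perceptron_update([0.2, 0.2], [1, 1], 1))  # [0.2, 0.2]
print(delta_update([0.2, 0.2], [1, 1], 1))
```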