Question 25 (1 point)
In the original k-means clustering algorithm, the parameter k is
Question 25 options:
- determined solely by the user
- initially set by the user, but then the algorithm converges on a value for k
- determined solely by the algorithm
(The following two questions are based on this description)
Suppose we are clustering the following set of instances using k-means clustering with k = 2: {(1,1), (1,2), (1,3), (1,4), (3,1), (4,1), (5,1)}. Assume that the initial centers are C1 at (1,4) and C2 at (3,1). Note: each pair gives the values (x, y), where x and y are the attributes of an instance.
Question 26 (1 point)
Which of the following shows the initial clusters?
Question 26 options:
- Cluster 1: {(1,3), (1,4)}, Cluster 2: {(1,1), (1,2), (3,1), (4,1), (5,1)}
- Cluster 1: {(1,1), (1,2), (1,3), (1,4)}, Cluster 2: {(3,1), (4,1), (5,1)}
- Cluster 1: {(1,2), (1,3), (1,4)}, Cluster 2: {(1,1), (3,1), (4,1), (5,1)}
- Cluster 1: {(1,1), (1,2), (1,3), (1,4), (3,1)}, Cluster 2: {(4,1), (5,1)}
Question 27 (1 point)
If we continue the clustering process until convergence, which of the following shows the final clusters?
Question 27 options:
- Cluster 1: {(1,3), (1,4)}, Cluster 2: {(1,1), (1,2), (3,1), (4,1), (5,1)}
- Cluster 1: {(1,1), (1,2), (1,3), (1,4)}, Cluster 2: {(3,1), (4,1), (5,1)}
- Cluster 1: {(1,1), (1,2), (1,3), (1,4), (3,1)}, Cluster 2: {(4,1), (5,1)}
- Cluster 1: {(1,2), (1,3), (1,4)}, Cluster 2: {(1,1), (3,1), (4,1), (5,1)}
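The assignment and update steps behind Questions 26 and 27 can be traced by hand, or checked with a short sketch of Lloyd's k-means (a minimal version, assuming Euclidean distance for assignment and mean-of-cluster centroid updates):

```python
# Minimal k-means (Lloyd's algorithm) on the quiz data, k = 2.
# Assumes Euclidean distance; squared distances suffice for assignment.
points = [(1, 1), (1, 2), (1, 3), (1, 4), (3, 1), (4, 1), (5, 1)]
centers = [(1, 4), (3, 1)]  # initial centers C1 and C2 from the question

def dist2(p, q):
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

while True:
    # Assignment step: each point joins its nearest center's cluster.
    clusters = [[], []]
    for p in points:
        i = min(range(2), key=lambda i: dist2(p, centers[i]))
        clusters[i].append(p)
    # Update step: each center moves to its cluster's mean.
    new_centers = [
        (sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
        for c in clusters
    ]
    if new_centers == centers:  # converged: centers stopped moving
        break
    centers = new_centers

print(clusters)
# -> [[(1, 1), (1, 2), (1, 3), (1, 4)], [(3, 1), (4, 1), (5, 1)]]
```

Note that on the first pass (1,1) is closer to C2 at (3,1) than to C1 at (1,4), which is why the initial clusters differ from the final ones.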
Question 28 (1 point)
Consider the incremental clustering algorithm. At a certain step we have formed a tree as shown below. When a new instance f arrives, where might we insert it? Suppose that among the root and the five leaf nodes (a-e), a has the highest category utility (as a host) and b is the runner-up.
Question 28 options:
- as a new leaf of the root
- we create a new internal node with a and f as leaf nodes
- we create a new internal node with a, b and f as leaf nodes
- b, c
- a, b, c
Question 29 (1 point)
In the hierarchical agglomerative clustering algorithm, the similarity (or distance) between two clusters can be defined as
Question 29 options:
- the similarity (or distance) between the two closest members of the clusters
- the similarity (or distance) between the two farthest members of the clusters
- the similarity (or distance) between the centroids of the two clusters
- the average of the similarity (or distance) between all pairs of members of the clusters
- all of the above
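The four linkage criteria listed above (single, complete, centroid, and average linkage) can be illustrated on two toy clusters (hypothetical data, assuming Euclidean distance):

```python
from itertools import product
from math import dist  # Euclidean distance, Python 3.8+

# Two hypothetical clusters used only to illustrate the linkage options.
A = [(0, 0), (0, 1)]
B = [(3, 0), (4, 0)]

pair_d = [dist(a, b) for a, b in product(A, B)]
single   = min(pair_d)               # closest members of the two clusters
complete = max(pair_d)               # farthest members of the two clusters
average  = sum(pair_d) / len(pair_d) # mean over all cross-cluster pairs

# Centroid linkage: distance between the clusters' mean points.
cent_a = (sum(x for x, _ in A) / len(A), sum(y for _, y in A) / len(A))
cent_b = (sum(x for x, _ in B) / len(B), sum(y for _, y in B) / len(B))
centroid = dist(cent_a, cent_b)

print(single, complete, average, centroid)
```

Each criterion generally yields a different inter-cluster distance here (single gives 3.0, complete gives sqrt(17)), which is why the choice of linkage changes the resulting dendrogram.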
Question 30 (1 point)
In a multi-instance learning problem (where a single example is a bag of instances), the training set would have
Question 30 options:
- a class associated with each bag
- a class associated with each instance in each bag (so a bag might have several classes in it)
- no class associated with either bags or instances
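The shape of a multi-instance training set can be sketched as a list of labeled bags (hypothetical toy data; under the common "standard MIL assumption", a bag is positive iff at least one of its instances is positive):

```python
# Hypothetical multi-instance training set: one class label per bag,
# never per individual instance.
train = [
    {"bag": [(0.1, 0.2), (0.4, 0.3), (0.2, 0.1)], "label": 0},
    {"bag": [(0.9, 0.8), (0.2, 0.2)],             "label": 1},
]

def bag_label(bag, is_pos):
    """Standard MIL assumption: bag is positive iff any instance is.

    is_pos is a hypothetical instance-level predicate.
    """
    return int(any(is_pos(x) for x in bag))

# Example with a toy predicate: positive if the first attribute > 0.5.
print(bag_label([(0.9, 0.8), (0.2, 0.2)], lambda p: p[0] > 0.5))  # -> 1
print(bag_label([(0.1, 0.2), (0.4, 0.3)], lambda p: p[0] > 0.5))  # -> 0
```

The key point for the question above is that the label attaches to the bag as a whole, so the instances inside a bag carry no individual class labels.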