Clustering: K-Means Algorithm
Which statement best explains why K-Means uses squared Euclidean distance instead of simple Euclidean distance?
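Before answering, it helps to recall that the arithmetic mean is the unique point minimizing the sum of *squared* Euclidean distances, which is exactly why K-Means updates each centroid to the cluster mean. A minimal sketch (the data values are hypothetical):

```python
import numpy as np

# Hypothetical 1-D cluster: the mean minimizes the sum of SQUARED
# distances (SSE), while other candidate centers give a larger SSE.
points = np.array([1.0, 2.0, 6.0])

def sse(center):
    # Sum of squared distances from every point to the candidate center.
    return ((points - center) ** 2).sum()

mean = points.mean()      # 3.0
median = np.median(points)  # 2.0

# The mean beats the median (and any other point) on squared error,
# even though the median would win on plain (absolute) distance.
assert sse(mean) < sse(median)
```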
What is the main reason K-Means may produce different results on different runs?
Which condition ensures that K-Means has converged?
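As background for this question, a minimal K-Means sketch (illustrative only, not production code) showing the usual convergence test: stop when no point changes cluster between iterations, which also means the centroids stop moving:

```python
import numpy as np

def kmeans(X, k, max_iter=100, seed=0):
    """Toy K-Means; converges when assignments stop changing."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].copy()
    labels = np.zeros(len(X), dtype=int)
    for it in range(max_iter):
        # Squared Euclidean distance from every point to every centroid.
        dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        new_labels = dists.argmin(axis=1)
        if it > 0 and np.array_equal(new_labels, labels):
            break  # converged: no assignment changed this iteration
        labels = new_labels
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids
```

On well-separated data this recovers the obvious grouping regardless of which points are drawn as initial centroids.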
The distance between a point and another point in the same cluster is generally
Why is feature scaling important in K-Means?
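A small sketch of the issue behind this question (the feature values are hypothetical): when one feature has a much larger range, it dominates the squared Euclidean distance, so K-Means effectively clusters on that feature alone; scaling puts the axes on a comparable footing.

```python
import numpy as np

# Hypothetical rows: [small-range feature, large-range feature].
a = np.array([0.0, 1000.0])
b = np.array([1.0, 1000.0])  # differs from a only in the small feature
c = np.array([0.0, 1100.0])  # differs from a only in the large feature

d_ab = ((a - b) ** 2).sum()  # 1
d_ac = ((a - c) ** 2).sum()  # 10000 -> raw units dominate the distance
assert d_ab < d_ac

# Min-max scale each feature to [0, 1]; both axes now contribute equally.
X = np.stack([a, b, c])
rng_ = X.max(axis=0) - X.min(axis=0)
Xs = (X - X.min(axis=0)) / rng_
d_ab_s = ((Xs[0] - Xs[1]) ** 2).sum()  # 1
d_ac_s = ((Xs[0] - Xs[2]) ** 2).sum()  # 1 -> the two differences are now comparable
```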
Which of the following best describes the role of centroids?
If clusters overlap heavily, this indicates
Why does K-Means prefer compact clusters?
Which property of K-Means makes it unsuitable for categorical data?
Why is K-Means not suitable for clusters with complex shapes?
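A sketch of the geometry behind this last question: for a non-convex, ring-shaped cluster, the centroid (the mean) falls at the center of the ring, far from every point in the cluster; since K-Means assigns points to the nearest centroid, its clusters are always convex regions and cannot trace such shapes.

```python
import numpy as np

# 100 points evenly spaced on a circle of radius 5: a ring-shaped "cluster".
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
ring = np.stack([5 * np.cos(theta), 5 * np.sin(theta)], axis=1)

centroid = ring.mean(axis=0)  # ~ (0, 0), the center of the ring
dists = np.linalg.norm(ring - centroid, axis=1)

# The centroid is ~5 units away from EVERY point it is supposed to represent,
# so any point near the center (e.g. an inner cluster) would be captured instead.
```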