Problem Description
Please refer to the following code:
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn import metrics
from sklearn.datasets import make_blobs
##############################################################################
# Generate sample data
centers = [[1, 1], [-1, -1], [1, -1]]
X, labels_true = make_blobs(n_samples=300, centers=centers, cluster_std=0.5)
# Compute similarities
X_norms = np.sum(X ** 2, axis=1)
S = - X_norms[:, np.newaxis] - X_norms[np.newaxis, :] + 2 * np.dot(X, X.T)
p = [10 * np.median(S), np.mean(S, axis=1), np.mean(S, axis=0), 100000, -100000]
##############################################################################
# Compute Affinity Propagation
for preference in p:
    af = AffinityPropagation().fit(S, preference)
    cluster_centers_indices = af.cluster_centers_indices_
    labels = af.labels_
    n_clusters_ = len(cluster_centers_indices)
    print('Estimated number of clusters: %d' % n_clusters_)
    print("Homogeneity: %0.3f" % metrics.homogeneity_score(labels_true, labels))
    print("Completeness: %0.3f" % metrics.completeness_score(labels_true, labels))
    print("V-measure: %0.3f" % metrics.v_measure_score(labels_true, labels))
    print("Adjusted Rand Index: %0.3f"
          % metrics.adjusted_rand_score(labels_true, labels))
    print("Adjusted Mutual Information: %0.3f"
          % metrics.adjusted_mutual_info_score(labels_true, labels))
    D = (S / np.min(S))
    print("Silhouette Coefficient: %0.3f"
          % metrics.silhouette_score(D, labels, metric='precomputed'))
##############################################################################
# Plot result
import pylab as pl
from itertools import cycle
pl.close('all')
pl.figure(1)
pl.clf()
colors = cycle('bgrcmykbgrcmykbgrcmykbgrcmyk')
for k, col in zip(range(n_clusters_), colors):
    class_members = labels == k
    cluster_center = X[cluster_centers_indices[k]]
    pl.plot(X[class_members, 0], X[class_members, 1], col + '.')
    pl.plot(cluster_center[0], cluster_center[1], 'o', markerfacecolor=col,
            markeredgecolor='k', markersize=14)
    for x in X[class_members]:
        pl.plot([cluster_center[0], x[0]], [cluster_center[1], x[1]], col)
pl.title('Estimated number of clusters: %d' % n_clusters_)
pl.show()
Although I am changing the preference value in the loop, I still get the same clusters. Why does changing the preference value not affect the clustering result?
Update
When I tried the following code, the outcome was as below.
When I tried the suggestion recommended by Agost (passing the preference in the constructor), I got the following output.
Recommended Answer
The sklearn implementation of AP appears to be quite fragile.
My suggestions for using it:
- use verbose=True to see when it fails to converge
- increase the maximum number of iterations to at least 1000
- increase the damping by choosing 0.9 instead of the default 0.5
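A minimal sketch combining these settings, using the `verbose`, `max_iter`, and `damping` parameters of sklearn's AffinityPropagation (the data is the same three-blob setup as in the question):

```python
from sklearn.cluster import AffinityPropagation
from sklearn.datasets import make_blobs

# Same three-blob data as in the question, with a fixed seed.
X, _ = make_blobs(n_samples=300, centers=[[1, 1], [-1, -1], [1, -1]],
                  cluster_std=0.5, random_state=0)

# verbose=True reports whether the algorithm converged; a higher damping
# (0.9 instead of the default 0.5) and more iterations help it settle.
af = AffinityPropagation(damping=0.9, max_iter=1000, verbose=True).fit(X)
print('clusters:', len(af.cluster_centers_indices_))
```

If the run still prints that it did not converge, raising `max_iter` further (or adjusting `convergence_iter`) is the next knob to try.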
The reason is that with default parameters, sklearn's AP usually does not converge...
As mentioned by @AgostBiro before, preference is not a parameter of the fit function (but of the constructor), so your original code ignores the preference: fit(X, y) ignores y. (It is an unfortunate API to have the dead y parameter, but sklearn keeps it so that this looks like the classification API.)
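Applying this to the question's loop means passing each preference to the constructor rather than to fit(). A sketch under that fix (the preference list shortened to a few illustrative scalar values; -50 is the value used in sklearn's own Affinity Propagation example):

```python
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300,
                  centers=[[1, 1], [-1, -1], [1, -1]],
                  cluster_std=0.5, random_state=0)

# Precomputed similarities: negative squared Euclidean distances,
# as in the question.
X_norms = np.sum(X ** 2, axis=1)
S = -X_norms[:, np.newaxis] - X_norms[np.newaxis, :] + 2 * np.dot(X, X.T)

for preference in [10 * np.median(S), -1000, -50]:
    # The preference belongs in the constructor; fit() would ignore it.
    af = AffinityPropagation(affinity='precomputed', damping=0.9,
                             max_iter=1000, preference=preference).fit(S)
    print('preference %.1f -> %d clusters'
          % (preference, len(af.cluster_centers_indices_)))
```

With the preference actually reaching the estimator, the cluster count now responds to it: more negative preferences select fewer exemplars.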