Clustering (Part 2): The k-Means Clustering Algorithm
The previous chapter covered performance measures for clustering and common distance calculations. This chapter turns to concrete clustering algorithms, beginning with prototype clustering.
Prototype clustering, also known as "prototype-based clustering", assumes that the cluster structure can be characterized by a set of prototypes; it is extremely common in practical clustering tasks. Typically, such an algorithm first initializes the prototypes and then refines them through iterative updates. Common prototype clustering algorithms include the k-means algorithm, Learning Vector Quantization (LVQ), and Mixture-of-Gaussians clustering.
The k-means algorithm: given a sample set $D = \{x_1, x_2, \dots, x_m\}$, k-means seeks a cluster partition $C = \{C_1, C_2, \dots, C_k\}$ that minimizes the squared error

$$E = \sum_{i=1}^{k} \sum_{x \in C_i} \lVert x - \mu_i \rVert_2^2,$$

where $\mu_i = \frac{1}{|C_i|}\sum_{x \in C_i} x$ is the mean vector of cluster $C_i$. To some extent this objective captures how tightly the samples in each cluster gather around the cluster mean: the smaller $E$ is, the higher the intra-cluster similarity. Minimizing $E$ exactly is NP-hard, so k-means minimizes it greedily, alternating between assigning every sample to the cluster with the nearest mean vector and recomputing each cluster's mean, until the means (almost) stop moving.
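As a quick check on this objective, here is a minimal NumPy sketch (the point values are made up purely for illustration) that evaluates $E$ for a fixed two-cluster partition:

import numpy as np

# Toy data: two clusters of 2-D points (values are illustrative only)
clusters = [
    np.array([[1.0, 1.0], [1.2, 0.8], [0.9, 1.1]]),  # cluster C_1
    np.array([[4.0, 4.2], [3.8, 4.1]]),              # cluster C_2
]

# E = sum over clusters of squared distances to each cluster's mean
E = sum(((C - C.mean(axis=0)) ** 2).sum() for C in clusters)
print(E)  # squared error of this partition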
For the initial centroids we use the k-means++ method; k-means++ is far less sensitive to the initial choice and in practice usually outperforms plain k-means:

1. Pick one sample uniformly at random as the first centroid;
2. For every non-centroid sample x, compute D(x), the distance from x to its nearest already-chosen centroid;
3. Pick the next centroid at random, choosing sample x with probability proportional to D(x)^2, so samples far from the existing centroids are more likely to be chosen (a minimal sketch of this sampling step follows the list);
4. Repeat steps 2 and 3 until k centroids have been selected.
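The class implementation below replaces the random D(x)^2 draw with a deterministic farthest-point rule (always take the sample with the largest D(x)); for reference, the probabilistic selection step itself could be sketched as follows (the helper name kmeans_pp_init is my own):

import numpy as np

def kmeans_pp_init(X, k, seed=0):
    """Illustrative k-means++ seeding with D(x)^2 probability sampling."""
    rng = np.random.default_rng(seed)
    centers = [X[rng.integers(len(X))]]  # step 1: uniform random first centroid
    while len(centers) < k:
        # D(x)^2: squared distance from each sample to its nearest chosen centroid
        d2 = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])  # steps 2-3
    return np.array(centers)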
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn.datasets import make_blobs
class KmeansCluster:
    """
    Prototype-based clustering: the k-means algorithm.
    """
    def __init__(self, data, k=3, max_epochs=100, tol=1e-5, dist_method='euclidean'):
        self.X = data
        self.m = data.shape[0]          # number of samples
        self.k = k                      # number of clusters
        self.max_epochs = max_epochs    # maximum number of iterations
        self.tol = tol                  # convergence tolerance
        self.dist_method = dist_method
        self.distance_fun = self.distance_function()  # distance metric function
        self.cluster_centers = dict()   # cluster center vectors, keyed by cluster index

    def distance_function(self):
        # Distance metrics: euclidean, manhattan, VDM, cosine, mahalanobis
        if self.dist_method == 'euclidean':
            return lambda x, y: np.sqrt(((x - y) ** 2).sum())
        else:
            raise ValueError("unsupported distance metric: %s" % self.dist_method)

    def select_cluster_center(self):
        # Initialize cluster centers in the spirit of k-means++
        # (deterministic farthest-point variant: after a random first center,
        # repeatedly take the sample farthest from its nearest chosen center)
        sample_j = np.random.choice(self.m)  # index of a randomly chosen first center
        self.cluster_centers[0] = self.X[sample_j]
        select_inx = [sample_j]
        while len(self.cluster_centers) < self.k:
            sample_j, max_dist = None, 0
            for j in range(self.m):
                if j in select_inx:
                    continue
                # D(x): distance from sample j to its nearest chosen center
                min_dist = min(self.distance_fun(center, self.X[j])
                               for center in self.cluster_centers.values())
                if min_dist > max_dist:
                    sample_j, max_dist = j, min_dist
            select_inx.append(sample_j)
            self.cluster_centers[len(self.cluster_centers)] = self.X[sample_j]
        print("k-means++ initial cluster centers:")
        for key in self.cluster_centers.keys():
            print("cluster " + str(key + 1), self.cluster_centers[key])
        print("-" * 100)

    def fit_kmeans(self):
        # Core of the k-means algorithm: iteratively update the cluster centers
        for epochs in range(self.max_epochs):
            cluster = dict()  # sample indices of each cluster, keyed by cluster index
            for idx in range(self.k):
                cluster[idx] = []
            for j in range(self.m):
                best_cluster_idx, min_dist = None, np.inf
                for c_idx in self.cluster_centers.keys():
                    # distance from sample j to each cluster center
                    dist = self.distance_fun(self.cluster_centers[c_idx], self.X[j])
                    if dist < min_dist:
                        best_cluster_idx, min_dist = c_idx, dist
                cluster[best_cluster_idx].append(j)
            # Update each cluster center to the mean of its samples
            eps = 0  # accumulated distance between old and new centers
            for c_idx in self.cluster_centers.keys():
                vec_center = np.mean(self.X[cluster[c_idx]], axis=0)  # new cluster mean
                eps += self.distance_fun(vec_center, self.cluster_centers[c_idx])
                self.cluster_centers[c_idx] = vec_center  # update
            # Optional trace of the update process:
            # print("iter", epochs + 1, ":", "cluster centers and sample indices:")
            # for key in cluster.keys():
            #     print("cluster " + str(key + 1), ", center", self.cluster_centers[key], "samples", cluster[key])
            # print("-" * 100)
            # Stop once the centers have (almost) stopped moving
            if eps < self.tol:
                break

    def predict(self, X):
        # Assign each sample to the cluster whose center is nearest
        cluster_labels = []
        for i in range(X.shape[0]):
            best_j, min_dist = None, np.inf
            for idx in range(self.k):
                dist = self.distance_fun(self.cluster_centers[idx], X[i])
                if dist < min_dist:
                    min_dist, best_j = dist, idx
            cluster_labels.append(best_j)
        return np.asarray(cluster_labels)

    def plt_classify(self):
        # Plot the classification boundary (2-D data only)
        x1_min, x2_min = self.X.min(axis=0)
        x1_max, x2_max = self.X.max(axis=0)
        t1 = np.linspace(x1_min, x1_max, 50)
        t2 = np.linspace(x2_min, x2_max, 50)
        x1, x2 = np.meshgrid(t1, t2)
        x_show = np.stack((x1.flat, x2.flat), axis=1)
        # cm_light = ListedColormap(["#A0FFA0", '#FFA0A0', '#A0A0FF'])
        # cm_dark = ListedColormap(['darkgreen', 'darkred', 'darkblue'])
        cm_light = ListedColormap(['g', 'r', 'b', 'm', 'c'])
        cm_dark = ListedColormap(['g', 'r', 'b', 'm', 'c'])
        y_show_hat = self.predict(x_show)
        y_show_hat = y_show_hat.reshape(x1.shape)
        plt.figure(figsize=(9, 6), facecolor='w')
        plt.pcolormesh(x1, x2, y_show_hat, shading='auto', cmap=cm_light, alpha=0.3)
        plt.scatter(self.X[:, 0], self.X[:, 1], c=self.predict(self.X).ravel(), edgecolors='k', s=20, cmap=cm_dark)
        for key in self.cluster_centers.keys():
            center = self.cluster_centers[key]
            plt.scatter(center[0], center[1], c='orange', marker='p', s=100)
        plt.xlabel('X1', fontsize=11)
        plt.ylabel('X2', fontsize=11)
        plt.xlim(x1_min, x1_max)
        plt.ylim(x2_min, x2_max)
        plt.grid(True, ls=':', color='#606060')
        plt.title("K-means classification boundary and cluster center vectors", fontsize=12)
        plt.show()
X = pd.read_csv(r'../datasets/watermelon4.0.csv').values
kmc = KmeansCluster(X)
kmc.select_cluster_center()
kmc.fit_kmeans()
print("k-means++算法,最终簇中心向量为:")
for key in kmc.cluster_centers.keys():
print("簇"+ str(key + 1), kmc.cluster_centers[key])
print("-" * 100)
kmc.plt_classify()
centers = np.array([[0.2, 2.3], [-1.5, 2.3], [-2.8, 1.8], [-2.8, 2.8], [-2.8, 1.3]])
std = np.array([0.4, 0.3, 0.1, 0.1, 0.1])
X, y = make_blobs(n_samples=2000, n_features=2, centers=centers, cluster_std=std, random_state=0)
kmc = KmeansCluster(X, k = 5, tol = 1e-8)
kmc.select_cluster_center()
kmc.fit_kmeans()
print("k-means++算法,最终簇中心向量为:")
for key in kmc.cluster_centers.keys():
print("簇"+ str(key + 1), kmc.cluster_centers[key])
print("-" * 100)
kmc.plt_classify()
Next we use a dataset of dining customers' consumption behavior, with 940 samples in total. Indicator R is the recency (the interval since the most recent purchase), F is the purchase frequency, and M is the total amount spent. By building a reasonable customer-value model we can segment the customers into groups, compare the value of the different groups, design targeted marketing strategies, and provide personalized service to each group.
from sklearn.preprocessing import StandardScaler
import seaborn as sns

X = pd.read_csv(r'../datasets/consumption_data.csv').values
# X = StandardScaler().fit_transform(X)
cluster_k = 3
kmc = KmeansCluster(X, k=cluster_k, tol=1e-8)
kmc.select_cluster_center()
kmc.fit_kmeans()
labels = kmc.predict(X)
print("k-means++, final cluster centers:")
for key in kmc.cluster_centers.keys():
    print("cluster " + str(key + 1), kmc.cluster_centers[key])
title = ["R index", "F index", "M index"]
plt.figure(figsize=(7, 10))
for f in range(X.shape[1]):      # feature index
    plt.subplot(311 + f)
    for c in range(cluster_k):   # cluster index
        sns.kdeplot(X[labels == c][:, f])
    plt.title(title[f])
plt.show()
The same results can be obtained with sklearn.cluster.KMeans, which defaults to k-means++ initialization:
from sklearn.cluster import KMeans
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

X = pd.read_csv(r'../datasets/consumption_data.csv').values
cluster_k = 3
skm = KMeans(n_clusters=cluster_k).fit(X)
print(skm.cluster_centers_)
title = ["SR index", "SF index", "SM index"]
plt.figure(figsize=(7, 10))
for f in range(X.shape[1]):      # feature index
    plt.subplot(311 + f)
    for c in range(cluster_k):   # cluster index
        sns.kdeplot(X[skm.labels_ == c][:, f])
    plt.title(title[f])
plt.show()
Silhouette analysis can be used to study the separation between the resulting clusters. A silhouette plot shows, for each point, a measure of how close it is to points in the neighboring clusters, which provides a visual way to assess parameters such as the number of clusters. The measure ranges over [-1, 1]. A silhouette coefficient near +1 indicates that the sample is far away from the neighboring clusters; a value of 0 indicates that the sample is on or very close to the decision boundary between two neighboring clusters; negative values suggest that those samples may have been assigned to the wrong cluster.
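Concretely, let $a(i)$ be the mean distance from sample $i$ to the other samples in its own cluster, and $b(i)$ the smallest mean distance from $i$ to the samples of any other single cluster; the silhouette coefficient of $i$ is then

$$s(i) = \frac{b(i) - a(i)}{\max\{a(i),\, b(i)\}},$$

which is what sklearn.metrics.silhouette_samples computes per sample, while silhouette_score reports the mean over all samples.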
In the example below, the silhouette plots show that n_clusters values of 3, 5 and 6 are bad choices for the given data, because some clusters fall below the average silhouette score and the silhouette-plot sizes fluctuate widely. Silhouette analysis is more ambivalent in deciding between 2 and 4.
The thickness of each silhouette also reflects the cluster size. When the number of clusters is 2, the silhouette of cluster 0 is large because three of the underlying blobs are grouped into one big cluster. When the number of clusters is 4, however, all silhouettes are of roughly similar thickness and hence of similar sizes.
import numpy as np
from matplotlib import cm
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score, silhouette_samples

X, y = make_blobs(n_samples=500, n_features=2, centers=4, cluster_std=1, center_box=(-10.0, 10.0), shuffle=True, random_state=1)
range_n_clusters = [2, 3, 4, 5, 6]  # candidate numbers of clusters
for n_clusters in range_n_clusters:
    fig, (ax1, ax2) = plt.subplots(1, 2)
    fig.set_size_inches(12, 5)
    # The first subplot is the silhouette plot
    ax1.set_xlim([-0.1, 1])
    ax1.set_ylim([0, len(X) + (n_clusters + 1) * 10])
    # Set the number of clusters and the random seed
    clusterer = KMeans(n_clusters=n_clusters, random_state=0)
    cluster_labels = clusterer.fit_predict(X)  # predicted cluster label per sample
    silhouette_avg = silhouette_score(X, cluster_labels)  # mean silhouette score over all samples
    print("For n_clusters =", n_clusters, "The average silhouette_score is :", silhouette_avg)
    # Silhouette score of each individual sample
    sample_silhouette_values = silhouette_samples(X, cluster_labels)
    # First subplot: the silhouette plot
    y_lower = 10  # lower bound of the current silhouette band
    for i in range(n_clusters):
        # Collect and sort the silhouette scores of the samples in cluster i
        ith_cluster_silhouette_values = sample_silhouette_values[cluster_labels == i]
        ith_cluster_silhouette_values.sort()
        size_cluster_i = ith_cluster_silhouette_values.shape[0]  # cluster size
        y_upper = y_lower + size_cluster_i  # upper bound of the band
        cmap = cm.get_cmap("Spectral")
        color = cmap(float(i) / n_clusters)
        ax1.fill_betweenx(np.arange(y_lower, y_upper), 0, ith_cluster_silhouette_values, facecolor=color, edgecolor=color)
        # Label each silhouette band with its cluster number at the middle
        ax1.text(-0.05, y_lower + 0.5 * size_cluster_i, str(i))
        y_lower = y_upper + 10  # lower bound of the next band
    ax1.set_title("The silhouette plot for the various clusters.")
    ax1.set_xlabel("The silhouette coefficient values")
    ax1.set_ylabel("Cluster label")
    # Vertical red dashed line at the average silhouette score
    ax1.axvline(x=silhouette_avg, color='red', linestyle='--')
    ax1.set_yticks([])
    ax1.set_xticks([-0.1, 0, 0.2, 0.4, 0.6, 0.8, 1])
    # Second subplot: scatter plot of the samples and the cluster centers
    cmap = cm.get_cmap("Spectral")
    colors = cmap(cluster_labels.astype(float) / n_clusters)
    ax2.scatter(X[:, 0], X[:, 1], marker='.', s=30, lw=0, c=colors)
    centers = clusterer.cluster_centers_
    # Draw the cluster centers as large black circles
    ax2.scatter(centers[:, 0], centers[:, 1], marker='o', c='k', alpha=1, s=200)
    # Fill each circle with the cluster number, hence s=50
    for i, c in enumerate(centers):
        ax2.scatter(c[0], c[1], marker='$%d$' % i, alpha=1, s=50, c='w')
    ax2.set_title("The visualization of the clustered data.")
    ax2.set_xlabel("Feature space for the 1st feature")
    ax2.set_ylabel("Feature space for the 2nd feature")
    plt.suptitle("Silhouette analysis for KMeans clustering on sample data with n_clusters = %d" % n_clusters,
                 fontsize=14, fontweight='bold')
    plt.show()
K-means is good at handling data with spherical (blob-like) distributions; it performs best when the resulting clusters are dense and clearly separated from one another. It is also relatively scalable and efficient for large datasets: its complexity is O(nkt), where n is the number of samples, k the number of clusters, and t the number of iterations, with usually k ≪ n and t ≪ n.
The k-means algorithm also has some drawbacks:
The choice of initial centroids is closely tied to both efficiency and results: randomly chosen centers may require many iterations or leave the algorithm stuck in a poor local optimum. A common remedy is to run the algorithm several times from different initializations (or seed it with k-means++) and keep the best result.
The biggest problem with k-means is that the user must specify k in advance. The choice of k is generally based on rules of thumb and repeated trials, and a value that works for one dataset does not transfer to another (the elbow-method sketch after this list is one common heuristic).
Sensitivity to outliers: k-means is sensitive to noise and isolated points, since even a small number of such samples can drag a cluster mean substantially.
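As referenced above, one common heuristic for choosing k is the elbow method: run k-means over a range of k, plot the total within-cluster squared error (KMeans exposes it as inertia_), and look for the "elbow" where the curve stops dropping sharply. A minimal sketch, with make_blobs data standing in for a real dataset:

import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=500, centers=4, random_state=1)  # illustrative data

# Within-cluster sum of squared errors (inertia) for each candidate k
ks = range(1, 10)
sse = [KMeans(n_clusters=k, random_state=0).fit(X).inertia_ for k in ks]

plt.plot(ks, sse, 'o-')
plt.xlabel('k')
plt.ylabel('SSE (inertia)')
plt.title('Elbow method for choosing k')
plt.show()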
Bisecting k-means is a variant designed to mitigate k-means' tendency to get stuck in local optima. (See also the k-means API in SciPy: https://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.vq.kmeans.html)
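A minimal sketch of the bisecting idea, built on sklearn.cluster.KMeans (the helper name bisecting_kmeans and the split-the-largest-SSE-cluster policy are my own choices here; recent scikit-learn releases also ship a ready-made sklearn.cluster.BisectingKMeans):

import numpy as np
from sklearn.cluster import KMeans

def bisecting_kmeans(X, k, random_state=0):
    """Repeatedly split the cluster with the largest SSE using 2-means."""
    clusters = [np.arange(len(X))]  # start with all samples in one cluster
    while len(clusters) < k:
        # Pick the cluster with the largest within-cluster SSE to split
        sses = [((X[idx] - X[idx].mean(axis=0)) ** 2).sum() for idx in clusters]
        idx = clusters.pop(int(np.argmax(sses)))
        # Bisect it with a 2-means run
        labels = KMeans(n_clusters=2, random_state=random_state).fit_predict(X[idx])
        clusters.append(idx[labels == 0])
        clusters.append(idx[labels == 1])
    # Assemble a flat label array
    y = np.empty(len(X), dtype=int)
    for c, idx in enumerate(clusters):
        y[idx] = c
    return y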