[SVM Prediction] Data prediction with a squirrel-search-algorithm-optimized support vector machine (SVM), with MATLAB code
1 Introduction
This post presents a stock-price prediction method based on the squirrel search algorithm (SSA) and the support vector machine (SVM). Because the SVM model's parameters are hard to choose by hand, SSA is used to optimize the penalty factor and the kernel parameter, yielding an SSA-SVM stock-price prediction model. (Note that the partial code shared below applies SSA to feature selection and then compares SVM, KNN, and ensemble classifiers, rather than tuning the SVM parameters directly.)
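The parameter-tuning step described above (SSA searching over the penalty factor C and the kernel parameter g) is not shown in the partial code, so here is a minimal, hedged sketch of the idea. It is not the author's paid implementation: the search loop is a heavily simplified SSA-style population update, `X`/`y` are assumed training data and labels, and the objective is 5-fold cross-validation loss.

```matlab
% Sketch: optimize SVM BoxConstraint (penalty C) and KernelScale (RBF g)
% with a simplified squirrel-search-style population search.
lb = [1e-2 1e-2]; ub = [1e2 1e2];        % search bounds for [C, g] (assumed)
n = 20;                                   % population size
pos = lb + rand(n,2) .* (ub - lb);        % initial squirrel positions
bestLoss = inf; bestP = pos(1,:);
for iter = 1:30
    for i = 1:n
        mdl = fitcsvm(X, y, 'KernelFunction', 'rbf', ...
                      'BoxConstraint', pos(i,1), 'KernelScale', pos(i,2));
        loss_i = kfoldLoss(crossval(mdl, 'KFold', 5));
        if loss_i < bestLoss
            bestLoss = loss_i; bestP = pos(i,:);
        end
    end
    % Gliding move toward the current best position (simplified SSA update)
    pos = pos + 0.8 * rand(n,2) .* (bestP - pos);
    pos = min(max(pos, lb), ub);          % clamp to the search bounds
end
finalModel = fitcsvm(X, y, 'KernelFunction', 'rbf', ...
                     'BoxConstraint', bestP(1), 'KernelScale', bestP(2));
```

The full SSA update also distinguishes hickory/acorn/normal trees and a seasonal Levy-flight relocation, as the feature-selection code below does; this sketch keeps only the move-toward-best step.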
2 Partial Code

clc; clear; close all;

%% Load data and impute missing values
[f, p] = uigetfile('*');
X = importdata([p f]);
data = X.data;
data2 = data(:, 1:end-1);
class = data(:, end);
data1 = knnimpute(data2);        % fill missing entries via nearest neighbours

%% Feature selection with the squirrel search algorithm (SSA)
FSL = 0; FSU = 1;                % bounds of the binary feature mask
D = size(data1, 2);
for i = 1:10
    FS(i,:) = FSL + randi([0 1], [1 D]) * (FSU - FSL);
    try
        fit(i) = fitness(data1, class, FS(i,:));
    catch
        fit(i) = 1;
        continue;
    end
end
ind = find(fit == min(fit));
FSnew = FS(ind,:);

% Gliding-distance constants from the SSA paper
pdp = 0.1;                       % predator presence probability
row = 1.204; V = 5.25; S = 0.0154; cd = 0.6; CL = 0.7; hg = 1; sf = 18; Gc = 1.9;
D1 = 1/(2*row*V.^2*S*cd);
L  = 1/(2*row*V.^2*S*CL);
tanpi = D1/L;
dg = hg/(tanpi*sf);
aa = randi([1 length(ind)]);

iter = 1; maxiter = 2;
while (iter < maxiter)
    for i = 1:10
        % move towards the best squirrel (hickory tree)
        if (rand >= pdp)
            FS(i,:) = round(FS(i,:) + dg*Gc*abs(FSnew(1,:) - FS(i,:)));
        else
            FS(i,:) = FSL + randi([0 1], [1 D]) * (FSU - FSL);
        end
        Fh = FS;
        fit1(i) = fitness(data1, class, FS(i,:));
        ind1 = find(fit1 == min(fit1));
        FSnew1 = FS(ind1,:);
        % move towards a randomly chosen acorn tree
        if (rand > pdp)
            FS(i,:) = round(FS(i,:) + dg*Gc*abs(FSnew(aa,:) - FS(i,:)));
        else
            FS(i,:) = FSL + randi([0 1], [1 D]) * (FSU - FSL);
        end
        Fa = FS;
        fit2(i) = fitness(data1, class, FS(i,:));
        ind2 = find(fit2 == min(fit2));
        FSnew2 = FS(ind2,:);
    end
    % Seasonal monitoring condition
    Sc = sqrt(sum((Fh(:) - Fa(:)).^2));            % was sqrt(sum(abs(Fh-Fa)).^2)
    Smin = (10*10^-6)/(365)^(iter/(maxiter/2.5));  % was 10*exp(-6); SSA uses 10e-6
    if (Sc < Smin)
        season = 'summer';                         % was the undefined variable summer
        for i = 1:10
            FS(i,:) = FSL + levy(1, D, 1.5)*(FSU - FSL);  % Levy-flight relocation
        end
    else
        season = 'winter';                         % was the undefined variable winter
        break;
    end
    % Searching method: evaluate the relocated squirrels (in the original this
    % was outside any loop, so fit3 only used the stale index i = 10)
    for i = 1:10
        fit3(i) = fitness(data1, class, FS(i,:));
    end
    ind3 = find(fit3 == min(fit3));
    final = abs(round([Fh(ind1,:); Fa(ind2,:); FS(ind3,:)]));
    for i = 1:size(final, 1)
        fitt(i) = fitness(data1, class, final(i,:));
    end
    best(iter) = min(fitt);
    [ff, inn] = min(fitt);
    bestfeat(iter,:) = final(inn,:);
    pdp = best(iter);
    iter = iter + 1;
end
sel = find(bestfeat(end,:));
disp('Selected Features'); disp(sel)

%% Train/test split (70/30)
dataA = data2(:, sel);
p = .7;                          % proportion of rows used for training
N = size(dataA, 1);              % total number of rows
tf = false(N, 1);                % logical index vector
tf(1:round(p*N)) = true;
tf = tf(randperm(N));            % randomise order
dataTraining = dataA(tf,:);   labeltraining = class(tf);
dataTesting  = dataA(~tf,:);  labeltesting  = class(~tf);
disp('Training feature size'); disp(size(dataTraining, 1))
disp('Testing feature size');  disp(size(dataTesting, 1))

%% Classifiers: SVM, KNN, and a boosted ensemble
% svmtrain/svmclassify were removed in R2018a; fitcsvm/predict replace them
svt = fitcsvm(dataTraining, labeltraining);
out1 = predict(svt, dataTesting);
mdl = fitcknn(dataTraining, labeltraining);
out2 = predict(mdl, dataTesting);
% fitcensemble trains a boosted ensemble (mislabelled "NB" in the original)
mdl = fitcensemble(dataTraining, labeltraining);
out3 = predict(mdl, dataTesting);
tp = length(find(out3 == labeltesting));
msgbox([{['Out of ', num2str(length(out3))]}, ...
        {[num2str(tp), ' are correctly classified']}])
delete(gcp('nocreate'))          % shut down any open parallel pool

%% Report metrics
disp('%%%%%%%% KNN %%%%%%%%%%%%%%')
[EVAL, CF] = Evaluate(out2, labeltesting);
disp('Accuracy (%)');  disp(EVAL(1)*100);
disp('Precision (%)'); disp(EVAL(4)*100);
disp('Recall (%)');    disp(EVAL(5)*100);
disp('Fmeasure (%)');  disp(EVAL(6)*100);
disp('True Positive');  disp(CF(1))
disp('True Negative');  disp(CF(2))
disp('False Positive'); disp(CF(3))
disp('False Negative'); disp(CF(4))
disp('%%%%%%%% SVM %%%%%%%%%%%%%%')
[EVAL3, CF] = Evaluate(out1, labeltesting);
disp('Accuracy (%)');  disp(EVAL3(1)*100);
disp('Precision (%)'); disp(EVAL3(4)*100);
disp('Recall (%)');    disp(EVAL3(5)*100);
disp('Fmeasure (%)');  disp(EVAL3(6)*100);
disp('True Positive');  disp(CF(1))
disp('True Negative');  disp(CF(2))
disp('False Positive'); disp(CF(3))
disp('False Negative'); disp(CF(4))
disp('%%%%%%%% Ensemble %%%%%%%%%%%%%%')
[EVAL2, CF] = Evaluate(out3, labeltesting);
disp('Accuracy (%)');  disp(EVAL2(1)*100);
disp('Precision (%)'); disp(EVAL2(4)*100);
disp('Recall (%)');    disp(EVAL2(5)*100);
disp('Fmeasure (%)');  disp(EVAL2(6)*100);
disp('True Positive');  disp(CF(1))
disp('True Negative');  disp(CF(2))
disp('False Positive'); disp(CF(3))
disp('False Negative'); disp(CF(4))
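The script above calls three helper functions that are not included in the shared snippet: `fitness`, `Evaluate`, and `levy`. The sketches below are assumptions about what `fitness` and `Evaluate` are meant to do, inferred from how the script uses them (a feature-subset objective to minimise, and a metrics function indexed as EVAL(1)=accuracy, EVAL(4)=precision, EVAL(5)=recall, EVAL(6)=F-measure with CF=[TP TN FP FN]); they are not the original implementations, and the binary labels are assumed to be 0/1.

```matlab
function err = fitness(data, class, mask)
% Cross-validated error of a KNN classifier on the feature subset 'mask'
% (assumed objective: lower is better, so SSA minimises it).
sel = find(round(mask));
if isempty(sel)
    err = 1;                     % empty subset: worst possible score
    return;
end
mdl = fitcknn(data(:, sel), class);
err = kfoldLoss(crossval(mdl, 'KFold', 5));
end

function [EVAL, CF] = Evaluate(pred, actual)
% Binary-classification metrics in the order the main script indexes them.
% Slots 2 and 3 are filled with sensitivity and specificity (an assumption).
tp = sum(pred == 1 & actual == 1);
tn = sum(pred == 0 & actual == 0);
fp = sum(pred == 1 & actual == 0);
fn = sum(pred == 0 & actual == 1);
acc  = (tp + tn) / numel(actual);
sens = tp / (tp + fn);           % sensitivity = recall
spec = tn / (tn + fp);           % specificity
prec = tp / (tp + fp);
f    = 2 * prec * sens / (prec + sens);
EVAL = [acc sens spec prec sens f];
CF   = [tp tn fp fn];
end
```

`levy` (the Levy-flight step used for the summer relocation) is likewise assumed to be a user-supplied generator of Levy-distributed random numbers with the given shape and stability index.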
3 Simulation Results
