2022 Intelligent Optimization Algorithms: Source Code (2022年智能优化算法源代码文件)

1 Artificial Ant Algorithm (人工蚂蚁算法)

function [x, y, minvalue] = AA(func)
% Example: [x, y, minvalue] = AA('Foxhole')
clc; tic;
subplot(2,2,1);                            % plot 1
draw(func); title([func, ' Function']);
% initialise the parameters
Ant  = 100;                                % colony size
ECHO = 200;                                % number of iterations
step = 0.01*rand(1);                       % step length for the local search
temp = [0, 0];
% length of each sub-interval
start1 = -100; end1 = 100;
start2 = -100; end2 = 100;
Len1 = (end1-start1)/Ant;
Len2 = (end2-start2)/Ant;
% P = 0.2;
% initialise the ant positions
for i = 1:Ant
    X(i,1) = start1 + (end1-start1)*rand(1);
    X(i,2) = start2 + (end2-start2)*rand(1);
    % func = AA_Foxhole_Func(X(i,1),X(i,2));
    val = feval(func, X(i,1), X(i,2));
    T0(i) = exp(-val);                     % initial pheromone: the larger the function value, the lower the concentration, and vice versa
end
% initialisation complete
for Echo = 1:ECHO                          % start the search
    % definition of P0, the global transfer selection factor
    a1 = 0.9;   b1 = (1/ECHO)*2*log(1/2); f1 = a1*exp(b1*Echo);
    a2 = 0.225; b2 = (1/ECHO)*2*log(2);   f2 = a2*exp(b2*Echo);
    % (chunk lost in the source here: the choice of P0 from f1/f2 and the definitions of Wmax and
    %  Wmin; the scan for the ant with the most pheromone is restored from the surviving fragment)
    if Echo <= (ECHO/2), P0 = f1; else P0 = f2; end
    T_Best = T0(1); BestIndex = 1;
    for j = 1:Ant
        if T0(j) >= T_Best
            T_Best = T0(j); BestIndex = j;
        end
    end
    W = Wmax - (Wmax-Wmin)*(Echo/ECHO);    % update parameter for the local-search step length
    for j_g = 1:Ant                        % global transfer probability, for ants that are not at BestIndex
        if j_g ~= BestIndex
            r = T0(BestIndex) - T0(j_g);
            Prob(j_g) = exp(r)/exp(T0(BestIndex));
        else                               % when j_g == BestIndex, perform a local search
            if rand(1) ... exp(-F1_B)      % (middle of this local-search test, including the trial move into temp, lost in the source)
                X(BestIndex,1) = temp(1,1);
                X(BestIndex,2) = temp(1,2);
            end
        end
    end
    for j_g_tr = 1:Ant
        % (large chunk lost in the source: the global transfer of ant j_g_tr when Prob(j_g_tr) is
        %  below P0, the pheromone update, the per-iteration minimum min_local(Echo)/minvalue_iter,
        %  and the end of this loop)
    if Echo >= 2
        if min_local(Echo) < min_global(Echo-1)
            min_global(Echo) = min_local(Echo);
        else
            min_global(Echo) = min_global(Echo-1);
        end
    else
        min_global(Echo) = minvalue_iter;
    end
    subplot(2,2,4);                        % Plot 3
    min_global = min_global';
    index(:,1) = 1:ECHO;
    plot(Echo, min_global(Echo), 'y*')
    % axis([0 ECHO 0 10]);
    hold on;
    title([func, '(X) = ', num2str(minvalue_iter)], 'Color', 'r');
    xlabel('iteration'); ylabel('f(x)'); grid on;
end                                        % end of the ECHO loop
[c_max, i_max] = max(T0);
minpoint = [X(i_max,1), X(i_max,2)];
% func3 = AA_Foxhole_Func(X(i_max,1),X(i_max,2));   %*
% minvalue = func3;
minvalue = feval(func, X(i_max,1), X(i_max,2));
x = X(BestIndex,1);
y = X(BestIndex,2);
runtime = toc
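The callers above pass the objective to the optimizer by name (e.g. 'Foxhole') and evaluate it with feval(func, X(i,1), X(i,2)); neither the benchmark functions nor the plotting helper draw() are part of this excerpt. A minimal sketch of a compatible two-argument test objective, under the assumption of that interface (the name Sphere2D and its body are illustrative and not part of the original file):

function val = Sphere2D(x, y)
% Sphere2D  Illustrative 2-D test objective, f(x,y) = x^2 + y^2 (minimum 0 at the origin).
% Matches the two-argument scalar interface that AA and AI expect; it is NOT the Foxhole
% benchmark referenced in the examples, which is not included in this excerpt.
val = x.^2 + y.^2;
end

With this on the path, [x, y, minvalue] = AA('Sphere2D') would search the square [-100,100]^2 hard-coded in AA, provided the plotting helper draw() also exists (or the draw/subplot lines are commented out).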
Artificial Immune Algorithm (人工免疫算法)

function [x,y,fx,vfx,vmfit,P,vpm] = AI(func,gen,n,pm,per);
% Example: [x,y,fx] = AI('Foxhole')
subplot(2,2,1);
draw(func); title([func, ' Function']);
if nargin == 1,
   % gen = 200; n = round(size(P,1)/2); pm = 0.0005; per = 0.0; fat = 10;
   % gen = 250; n = size(P,1); pm = 0.01; per = 0.0; fat = .1;
   P = cadeia(200,44,0,0,0);
   gen = 40; n = size(P,1); pm = 0.2; per = 0.0; fat = 0.1;
end;
while n <= 0,
   n = input('n has to be at least one. Type a new value for n: ');
end;
xmin = -100; xmax = 100; ymin = -100; ymax = 100;
x = decode(P(:,1:22),xmin,xmax); y = decode(P(:,23:end),ymin,ymax);
% fit = eval(f);
% fit = AI_Foxhole_Func(x,y);
% fit = feval(func,x,y);
% imprime(1,vxp,vyp,vzp,x,y,fit,1,1);
% Hypermutation controlling parameters
pma = pm; itpm = gen; pmr = 0.8;
% General definitions
vpm = []; vfx = []; vmfit = []; valfx = 1;
[N,L] = size(P); it = 0; PRINT = 1;
% Generations
while it <= gen & valfx <= 100,            % (comparison operators partly lost in the source)
   x = decode(P(:,1:22),xmin,xmax); y = decode(P(:,23:end),ymin,ymax);
   T = []; cs = [];
   % fit = eval(f);
   % fit = AI_Foxhole_Func(x,y);
   fit = feval(func,x,y);
   [a,ind] = sort(fit);
   valx = x(ind(end-n+1:end));
   valy = y(ind(end-n+1:end));
   fx = a(end-n+1:end);                    % n best individuals (maximization)
   % Reproduction
   [T,pcs] = reprod(n,fat,N,ind,P,T);
   % Hypermutation
   M = rand(size(T,1),L) <= pm;            % (end of this statement reconstructed; pm is the mutation rate)
   % (large chunk lost in the source: the mutation step, the re-selection of clones, the end of the
   %  generations loop, the final assignment of x, y and fx, and the header of the plotting helper
   %  imprime; only the body of imprime survives below)
% x,y,fx -> actual values
% vxplot, vplot - original (base) function
if PRINT == 1,
   if rem(it,mit) == 0,
      mesh(vx,vy,vz); hold on; axis([-100 100 -100 100 0 500]);
      xlabel('x'); ylabel('y'); zlabel('f(x,y)');
      plot3(x,y,fx,'k*'); drawnow; hold off;
   end;
end;

% Reproduction
function [T,pcs] = reprod(n,fat,N,ind,P,T);
% n   - number of clones
% fat - multiplying factor
% ind - best individuals
% T   - temporary population
% pcs - final position of each clone
if n == 1,
   cs = N;
   T = ones(N,1) * P(ind(1),:);
else,
   for i = 1:n,
      % cs(i) = round(fat*N/i);
      cs(i) = round(fat*N);
      pcs(i) = sum(cs);
      T = [T; ones(cs(i),1) * P(ind(end-i+1),:)];
   end;
end;

% Control of pm
function pm = pmcont(pm,pma,pmr,it,itpm);
% pma  - initial value
% pmr  - control rate
% itpm - iterations for restoring
if rem(it,itpm) == 0,
   pm = pm * pmr;
   if rem(it,10*itpm) == 0,
      pm = pma;
   end;
end;

% Decodify bitstrings
function x = decode(v,min,max);
% x - real value (precision: 6)
% v - binary string (length: 22)
v = fliplr(v); s = size(v);
aux = 0:1:21; aux = ones(s(1),1)*aux;
x1 = sum((v.*2.^aux)');
x = min + (max-min)*x1 ./ 4194303;

function [ab,ag] = cadeia(n1,s1,n2,s2,bip)
% default parameter value setting
if nargin == 2,
   n2 = n1; s2 = s1; bip = 1;
elseif nargin == 4,
   bip = 1;
end;
% Antibody (Ab) chains
ab = 2 .* rand(n1,s1) - 1;                 % create an n1-row, s1-column array with values between -1 and 1
if bip == 1,
   ab = hardlims(ab);
else,
   ab = hardlim(ab);
end;
% Antigen (Ag) chains
ag = 2 .* rand(n2,s2) - 1;
if bip == 1,
   ag = hardlims(ag);
else,
   ag = hardlim(ag);
end;
% End Function CADEIA
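For reference, decode maps a 22-bit string to a real value through x = min + (max-min)*x1/4194303, where 4194303 = 2^22 - 1 is the largest integer a 22-bit string can encode, so an all-zeros string decodes to the lower bound and an all-ones string to the upper bound. A small standalone check of that mapping (not part of the original file):

% Worked check of the decode() mapping on [-100, 100].
v    = ones(1, 22);                          % all-ones 22-bit string
x1   = sum(v .* 2.^(0:21));                  % integer value: 2^22 - 1 = 4194303
xmin = -100; xmax = 100;
x    = xmin + (xmax - xmin) * x1 / 4194303   % prints 100, i.e. the upper bound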
Artificial Immune - Particle Swarm Optimization (免疫粒子群优化算法)

function [x,y,Result] = PSO_AI(func)
% Example: [x, y, minvalue] = PSO_AI('Foxhole')
clc;
subplot(2,2,1);                            % plot 1
draw(func); title([func, ' Function']);
tic
format long;
% ----- initialisation conditions -----
c1 = 1.4962;                               % learning factor 1
c2 = 1.4962;                               % learning factor 2
w  = 0.7298;                               % inertia weight
MaxDT = 200;                               % maximum number of iterations
D = 2;                                     % dimension of the search space (number of unknowns)
N = 100;                                   % population size
eps = 10^(-20);                            % precision (used when the true minimum is known)
DS = 10;                                   % every DS iterations, check whether the best individual has improved
replaceP = 0.6;                            % particles whose probability exceeds replaceP are replaced by the immune operator
minD = 1e-015;                             % minimum distance between particles
Psum = 0;                                  % sum of the personal bests
range = 100;
count = 0;
% ----- initialise the individuals of the swarm -----
for i = 1:N
    for j = 1:D
        x(i,j) = -range + 2*range*rand;    % random initial position
        v(i,j) = randn;                    % random initial velocity
    end
end
% ----- compute the fitness of each particle and initialise Pi and Pg -----
for i = 1:N
    % p(i) = Foxhole(x(i,:),D);            % fitness computes each particle's fitness, see fitness.m
    p(i) = feval(func, x(i,:));            %*
    y(i,:) = x(i,:);
end
pg = x(1,:);                               % Pg is the global best
for i = 2:N
    if feval(func, x(i,:)) < feval(func, pg)    %*
        pg = x(i,:);
    end
end
% ----- main loop: iterate the update equations until the precision requirement is met -----
for t = 1:MaxDT
    for i = 1:N
        v(i,:) = w*v(i,:) + c1*rand*(y(i,:)-x(i,:)) + c2*rand*(pg-x(i,:));
        x(i,:) = x(i,:) + v(i,:);
        if feval(func, x(i,:)) < p(i)      %*
            p(i) = feval(func, x(i,:));    %*
            y(i,:) = x(i,:);
        end
        if p(i) < feval(func, pg)          %*
            pg = y(i,:);
            subplot(2,2,2);                % Plot 1
            bar(pg, 0.25);
            axis([0 3 -40 40]);
            title(['Iteration ', num2str(t)]);
            pause(0.1);
            subplot(2,2,3);                % Plot 2
            plot(pg(1,1), pg(1,2), 'rs', 'MarkerFaceColor', 'r', 'MarkerSize', 8)
            hold on;
            plot(x(:,1), x(:,2), 'k.');
            set(gca, 'Color', 'g')
            hold off;
            grid on;
            axis([-100 100 -100 100]);
            title(['Global Min = ', num2str(p(i))]);
            xlabel(['Min_x = ', num2str(pg(1,1)), '  Min_y = ', num2str(pg(1,2))]);
        end
    end
    Pbest(t) = feval(func, pg);            %*
    % if Foxhole(pg,D) < ...               % (chunk lost in the source: a commented-out early-exit test against eps)
    if t > DS
        if mod(t,DS) == 0 & (Pbest(t-DS+1)-Pbest(t)) < 1e-020
            % If the best of the swarm has not clearly improved for DS consecutive generations,
            % apply the immune operator. During testing it was found that after some generations
            % the personal bests are not exactly equal but change only very, very slightly; I
            % consider that immunity should be applied in this case as well, so I did not use
            % "Pbest(t-DS+1) == Pbest(t)" as the test. Whether "(Pbest(t-DS+1)-Pbest(t)) < 1e-020"
            % is reasonable is itself open to discussion.
            for i = 1:N                    % first compute the sum of the personal bests
                Psum = Psum + p(i);
            end
            for i = 1:N                    % immune procedure
                for j = 1:N                % distance between each individual and individual i
                    distance(j) = abs(p(j)-p(i));
                end
                num = 0;
                for j = 1:N                % count individuals whose distance to i is below minD
                    if distance(j) < minD
                        num = num + 1;
                    end
                end
                % (chunk lost in the source: the fitness probability, the concentration num/N and
                %  the combined replacement probability of particle i are computed here, cf. the
                %  helper probabolity below; only the test against replaceP survives)
                if PR > replaceP           % PR is the replacement probability from the lost chunk
                    x(i,:) = -range + 2*range*rand(1,D);
                    subplot(2,2,4);        % Plot 4
                    plot(x(i,1), x(i,2), 'k*');
                    grid on;
                    axis([-100 100 -100 100]);
                    title(['New Min = ', num2str(feval(func,x(i,:)))]);   %*
                    xlabel(['Immune ', num2str(count)]);
                    pause(0.2);
                    count = count + 1;
                end
            end
        end
    end
end
% ----- finally output the results -----
x = pg(1,1);
y = pg(1,2);
Result = feval(func, pg);                  %*
toc
% ----- end of the algorithm -----

function probabolity(N,i)
PF = p(N-i)/Psum;                          % fitness probability
disp(PF);
for jj = 1:N
    distance(jj) = abs(P(jj)-P(i));
end
num = 0;
for ii = 1:N
    if distance(ii) < minD
        num = num + 1;
    end
end
PD = num/N;                                % individual concentration
PR = a*PF + (1-a)*PD;                      % replacement probability
% result = PR;
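The immune step above re-seeds particles whose replacement probability is too high, combining a fitness share PF with a concentration term PD as PR = a*PF + (1-a)*PD, in the spirit of the probabolity helper. A self-contained sketch of that calculation for one particle; the personal-best values p below are random placeholders and the weight a = 0.5 is an assumption, since its value does not appear in this excerpt:

% Concentration-based replacement probability for one particle (illustrative values only).
p    = rand(1, 100);                         % placeholder personal-best fitness values
i    = 7;                                    % particle under consideration
minD = 1e-015;                               % minimum distance between particles, as in PSO_AI
a    = 0.5;                                  % assumed weight between fitness and concentration terms
Psum = sum(p);                               % sum of personal bests, accumulated in the main loop
PF   = p(i) / Psum;                          % fitness probability (share of the total)
PD   = sum(abs(p - p(i)) < minD) / numel(p); % concentration: fraction of near-identical values
PR   = a*PF + (1 - a)*PD                     % replacement probability, compared against replaceP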
Differential Evolution (差分进化算法)

function sol = DE(func)
% Example: sol = DE('Foxhole')
tic
popsize = 100;
lb = [-100 -100];
ub = [100 100];
sol = diffevolve(func, popsize, lb, ub);
toc
end

function [sol, fval, evals] = diffevolve(varargin)
%DIFFEVOLVE   Differential Evolution Optimization
%
% Usage:
%   sol = DIFFEVOLVE(PROBLEM)
%   sol = DIFFEVOLVE(func, popsize, lb, ub)
%   sol = DIFFEVOLVE(func, popsize, lb, ub, option1, value1, ...)
%   [sol, fval] = DIFFEVOLVE(...)
%   [sol, fval, evals] = DIFFEVOLVE(...)
%
% DIFFEVOLVE(func, popsize, lb, ub) tries to find the global optimum of
% the fitness-function func, using a transversal differential evolution
% strategy. The population size is set by popsize, and the boundaries for
% each dimension are set by the vectors lb and ub, respectively.
%
% [sol, fval, evals] = DIFFEVOLVE(...) returns the trial vector found to
% yield the global minimum in sol, and the corresponding function value
% in fval. The total amount of function evaluations that the algorithm
% performed is returned in evals.
%
% The function func must accept a vector argument of length N, equal to
% the number of elements in the vectors lb or ub. Also, the function
% must be vectorized so that inserting a matrix of popsize x N will return
% a vector of size popsize x 1 containing the corresponding function values
% for the N trial vectors.
%
% The default control parameters DIFFEVOLVE uses are
% -1 < F < ...
% (chunk lost in the source: the rest of the help text and the start of the input parsing)

% parse the basic input arguments
if (nargin >= 4)
    func    = varargin{1};
    popsize = varargin{2};
    lb      = varargin{3};
    ub      = varargin{4};
end
% with additional options
if (nargin >= 5)
    if ~isstruct(varargin{5})
        options = heurset(varargin{5:end});
    else
        options = varargin{5};
    end
    [dum1, dum2, dum3, dum4, dum5, grace, display, maxfevals, convmethod, ...
        convvalue, crossconst, Flb, Fub, n] = parseprob(options);
end
% if called from GODLIKE
if (nargin == 2)
    problem = varargin{2};
    % errortrap
    if ~isstruct(problem)
        error('PROBLEM should be a structure. Type help heurset for details.')
    end
    [pop, func, popsize, lb, ub, grace, display, maxfevals, convmethod, ...
        convvalue, crossconst, Flb, Fub, n] = parseprob(problem);
    % make sure the options are correct
    convmethod = 'maxiters';
    grace   = 0;
    display = false;
    skippop = true;
end
% if given a full problem structure
if (nargin == 1)
    problem = varargin{1};
    % errortrap
    if ~isstruct(problem)
        error('PROBLEM should be a structure. Type help heurset for details.')
    end
    [pop, func, popsize, lb, ub, grace, display, maxfevals, convmethod, ...
        convvalue, crossconst, Flb, Fub, n] = parseprob(problem);
end

% initialize convergence method
if strcmpi(convmethod, 'exhaustive')
    convergence  = 1;
    maxiterinpop = convvalue;
    maxiters     = inf;
elseif strcmpi(convmethod, 'maxiters')
    convergence  = 2;
    maxiterinpop = inf;
    maxiters     = convvalue;
elseif strcmpi(convmethod, 'achieveFval')
    convergence  = 3;
    % errortrap
    if isempty(convvalue)
        error('Please define function value to achieve.')
    end
    maxiterinpop = inf;
    maxiters     = inf;
else
    convergence  = 1;
    maxiterinpop = convvalue;
end

% problem dimensions
dims = size(lb, 2);

% errortraps
if ( (size(lb, 2) ~= 1 | size(ub, 2) ~= 1) & ...
     (size(lb, 2) ~= dims | size(ub, 2) ~= dims) )
    error('Upper- and lower boundaries must be the same size.')
end
% (large chunk lost in the source, beginning inside an errortrap on popsize: the remaining error
%  traps, the population initialisation and the display strings; the main loop below runs while
%  improvement is still possible and iters <= maxiters, its head partly reconstructed)
while (improvement > 0 & iters <= maxiters)
    % evaluate fitnesses and adjust population
    fitold = fitnew;
    try
        fitnew = feval(func, newpop);
    catch
        error('diffevolve:fevalerror', ...
            ['Diffevolve cannot continue because the supplied cost function ', ...
             'gave the following error:\n %s'], lasterr);
    end
    if (numel(fitnew) ~= popsize)
        error('diffevolve:function_not_vectorized_or_incorrect_dimensions', ...
            ['The user-supplied cost function does not appear to get enough arguments,\n', ...
             'or is not vectorized properly. Make sure the dimensions of the limits Lb\n', ...
             'and Ub are the same as the required input vector for the function, or that\n', ...
             'the function is vectorized properly.'])
    end
    prevpop = newpop;
    pop(fitnew < fitold, :) = newpop(fitnew < fitold, :);
    % increase number of function evaluations
    evals = evals + numel(fitnew);

    % improving solutions
    [bestindfit, ind] = min(fitnew);
    bestind = newpop(ind, :);
    if (bestindfit < oldbestfit)
        % new best individual
        oldbestfit = bestindfit;
        oldbestind = bestind;
        improvement = maxiterinpop;
        % assign also the globals
        DIFFEVOLVE_bestind  = bestind;
        DIFFEVOLVE_bestfval = bestindfit;
        % display improving solution
        if display
            if converging1
                fprintf(1, converge1bck); pause(0.05)
            end
            if converging2
                fprintf(1, converge2bck); pause(0.05)
            end
            % (chunk lost in the source: printing of the improved best value oldbestfit and the
            %  end of this display block and of the improvement test)
        end
    end

    % maximum allowed function evaluations has been superseded
    if (evals > maxfevals)
        fprintf(overfevals1);
        fprintf(overfevals2);
        break
    end

    switch convergence
        % exhaustive
        case 1
            if (oldbestfit == bestindfit) & (oldbestfit ~= inf)
                if (improvement <= 100) & display
                    if ~converging1
                        fprintf(1, convergestr);
                        fprintf(1, '%3.0f', improvement-1);
                        converging1 = true; pause(0.05)
                    else
                        fprintf(1, '\b\b\b%3.0f', improvement-1); pause(0.05)
                    end
                end
                improvement = improvement - 1;
            end
        % max iterations
        case 2
            if display & (maxiters-iters < 100)
                if firsttime
                    fprintf(1, itersstr);
                    fprintf(1, '%3.0f', maxiters-iters); pause(0.05)
                else
                    fprintf(1, '\b\b\b%3.0f', maxiters-iters); pause(0.05)
                end
                firsttime = false;
                converging2 = true;
            end
            iters = iters + 1;
        % goal-attain
        case 3
            if (oldbestfit <= convvalue)
                break;
            end
    end

    % Transversal Differential Evolution
    for i = 1:popsize
        for j = 1:n
            % pick three mutually different indices
            base = 1; d1 = 1; d2 = 1;
            while base == d1 | base == d2 | d1 == d2
                base = round(rand*(popsize-1)) + 1;
                d1   = round(rand*(popsize-1)) + 1;
                d2   = round(rand*(popsize-1)) + 1;
            end
            replace = rand < crossconst | rand == i;   % (second comparison partly lost in the source)
            if replace
                F = (Fub-Flb)*rand + Flb;
                newpop(i, :) = pop(base, :) + F*(pop(d1, :) - pop(d2, :));
            else
                newpop(i, :) = pop(i, :);
            end
        end
    end

    % enforce boundaries
    outsiders1 = (newpop < mins);
    outsiders2 = (newpop > maxs);
    if any(outsiders1(:)) | any(outsiders2(:))
        reinit = rand(sizepop).*range + mins;
        newpop(outsiders1) = reinit(outsiders1);
        newpop(outsiders2) = reinit(outsiders2);
    end
end

% (pre-) end values
% if solution has been found
if isfinite(oldbestfit)
    % when called normally
    if (skippop)
        fval = oldbestfit;
        sol  = oldbestind;
    % when called from GODLIKE
    else
        fval = fitnew;
        sol  = prevpop;
    end
    % display final message
    if display
        fprintf(1, '\nDifferential Evolution optimization algorithm has converged.\n');
        pause(0.05)
    end
% all trials might be infeasible
else
    fprintf(1, '\n');
    warning('diffevolve:no_solution', ...
        'DIFFEVOLVE was unable to find any solution that gave finite values.')
    fval = oldbestfit;
    sol  = NaN;
end

% Grace function evaluations
if (grace > 0)
    % display progress
    if display
        fprintf(1, 'Performing direct-search...');
        pause(0.05)
    end
    % perform direct search
    options = optimset('TolX', eps, 'MaxFunEvals', grace, 'TolFun', ...
        eps, 'MaxIter', 1e4, 'Display', 'off');
    [soltry, fvaltry] = fminsearch(func, sol, options);
    % enforce boundaries
    if ~any(soltry > ub | soltry < lb)     % (comparisons partly lost in the source)
        sol  = soltry;
        fval = fvaltry;
    end
    evals = evals + grace;
end

% finalize
% display progress
if display
    fprintf(1, 'All done.\n\n');
    pause(0.05)
end
% clear the temp globals
clear global DIFFEVOLVE_bestind DIFFEVOLVE_bestfval
end

% parser function to easily parse the input arguments
function [pop, func, popsize, lb, ub, grace, display, maxfevals, convmethod, ...
          convvalue, crossconst, Flb, Fub, n] = parseprob(problem)
func    = problem.costfun;
popsize = problem.popsize;
lb      = problem.lb;
ub      = problem.ub;
grace   = problem.grace;
display = pr
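The listing above breaks off inside parseprob (the preview covers only part of the 40-page file), but the vectorization requirement stated in the DIFFEVOLVE help text is self-contained: given a popsize-by-N matrix of trial vectors, the cost function must return a popsize-by-1 vector of values, otherwise the function_not_vectorized error in the main loop is raised. A minimal sketch of such a cost function, assuming the lost parts of diffevolve are restored; the name sumOfSquares is illustrative and not part of the original file:

function f = sumOfSquares(X)
% sumOfSquares  Vectorized 2-D test objective for diffevolve.
% X is a popsize-by-2 matrix of trial vectors; f is the popsize-by-1 vector of values.
f = sum(X.^2, 2);
end

% Example call, mirroring the DE wrapper above:
% sol = diffevolve('sumOfSquares', 100, [-100 -100], [100 100]);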
