Model NOx emissions by least squares support vector machine with tuning based on ameliorated teaching-learning-based optimization

Guoqiang Li a,b, Peifeng Niu a,b, Weiping Zhang a,b, Yongchao Liu a,b
a Key Lab of Industrial Computer Control Engineering of Hebei Province, Yanshan University, Qinhuangdao 066004, China
b National Engineering Research Center for Equipment and Technology of Cold Strip Rolling, Qinhuangdao 066004, China

Article history: Received 14 November 2012; Received in revised form 19 April 2013; Accepted 24 April 2013; Available online 2 May 2013

Keywords: Teaching-learning-based optimization; Least squares support vector machine; NOx emissions; Coal-fired boiler

Abstract: Teaching-learning-based optimization (TLBO) is a new and efficient optimization algorithm. To improve the solution quality and to reduce the convergence and running time of TLBO, this paper proposes an ameliorated TLBO, called A-TLBO, and tests it on classical numerical function optimization problems. Compared with several other optimization methods, A-TLBO shows better search performance. In addition, A-TLBO is adopted to adjust the hyper-parameters of the least squares support vector machine (LS-SVM) in order to build a NOx emissions model of a 330 MW coal-fired boiler and obtain a well-generalized model. Experimental results show that the LS-SVM model tuned by A-TLBO has good regression precision and generalization ability.

© 2013 Elsevier B.V. All rights reserved.

1. Introduction

With the worldwide increase of energy consumption and the improved awareness of environmental protection, the boiler combustion optimization problem of power plants has attracted the attention of technical staff and managers. Boiler combustion optimization technology is used to ensure boiler efficiency and simultaneously to reduce pollutant emissions, of which NOx emissions are the main component, so the core task is to cut down NOx emissions. However, the first step in controlling NOx emissions is to set up a high-precision prediction model. So building
an accurate system model is very important for monitoring and optimizing the operations of power plants. In the past ten years, many research works on how to model and forecast the NOx emissions of high-capacity coal-fired boilers have been published [1-5]. However, traditional statistical analysis and forecasting methods are always based on large sample data, and many prediction methods, such as artificial neural networks, have theoretical guarantees only in the large-sample setting. Due to various limits in actual circumstances, it is very difficult to gather large sample data. For this problem, the least squares support vector machine (LS-SVM) [6], which is suited to small sample data, is adopted to model and predict NOx emissions.

The LS-SVM is a reformulation of the standard support vector machine (SVM) [7-9], which simplifies the standard SVM model to a great extent by applying a linear least squares criterion to the loss function in place of the traditional quadratic programming method. This simplicity, together with the inherited advantages of SVM such as excellent generalization ability and a unique solution, has promoted the application of LS-SVM to many pattern recognition and regression problems.

The regression accuracy and generalization ability of LS-SVM depend strongly on two hyper-parameters: the regularization parameter and the kernel parameter [2]. Choosing appropriate hyper-parameters is therefore very important for obtaining excellent generalization ability. Parameter selection for the LS-SVM model can be regarded as an optimization task in its own right. This calls for the use of advanced meta-heuristic approaches, such as evolutionary or population-based methods.

The teaching-learning-based optimization algorithm [10,11] is a new and efficient meta-heuristic optimization method based on the philosophy of teaching and learning, proposed by Rao et al. Like other population-based optimization techniques such as particle swarm optimization (PSO) [12], differential evolution (DE) [13,14], artificial bee
colony (ABC) [15-19], the gravitational search algorithm (GSA) [20], and coupled simulated annealing (CSA) [21], the TLBO is a population-based optimization method and adopts a population of solutions to proceed towards the global solution. In some studies [22-24], the performance of TLBO has already been compared with that of other search optimization techniques such as the genetic algorithm (GA) [25,26], the bee algorithm (BA) [27], and the grenade explosion method (GEM) [28]. In addition, the TLBO has been applied to some complex computational problems, such as data clustering, mechanical design, electrochemical discharge machining, and the design of planar steel frames.

Chemometrics and Intelligent Laboratory Systems 126 (2013) 11-20

* Corresponding author at: Key Lab of Industrial Computer Control Engineering of Hebei Province, Yanshan University, Qinhuangdao 066004, China. Tel.: +86 13230355970; fax: +86 335 8072979. E-mail address: (P. Niu).
0169-7439/$ - see front matter © 2013 Elsevier B.V. All rights reserved.
http://dx.doi.org/10.1016/j.chemolab.2013.04.012

In this paper, in order to improve the solution quality and to speed up the convergence of TLBO, an ameliorated teaching-learning-based optimization algorithm called A-TLBO is proposed. In A-TLBO there are three major differences: the greedy selection mechanism is replaced by an elitist strategy, and an inertia weight function and an acceleration coefficient function are introduced to quicken the processes of Teaching and Learning. In order to test the validity of the proposed method, it is applied to many classical numerical optimization functions and compared with other methods. Experimental results show that A-TLBO finds better solutions and converges much faster. In addition, A-TLBO is also used to adjust two hyper-parameters of LS-SVM in order to obtain a well-generalized model of NOx emissions for a 330 MW coal-fired boiler. Results show that the LS-SVM model tuned by A-TLBO has good regression precision and
generalization ability.

The rest of the paper is arranged as follows. In the next section, a brief literature review is presented. The ameliorated teaching-learning-based optimization is proposed in Section 3. In Section 4, A-TLBO is applied to some classical numerical optimization functions and compared with GSA, ABC and TLBO. In Section 5, A-TLBO is employed to adjust the hyper-parameters of LS-SVM to model the NOx emissions of a 330 MW coal-fired boiler. Finally, Section 6 concludes the paper.

2. Review of related works

2.1. Teaching-learning-based optimization

The teaching-learning-based optimization (TLBO) algorithm proposed by Rao is inspired by the influence of a teacher on the output of learners in a class. In TLBO there are two vital components, the Teacher phase and the Learner phase, which represent two different learning modes.

2.1.1. Teacher phase

In this phase, learners learn from a teacher, who is considered the most knowledgeable person in the society and tries to bring the learners up to his or her level of knowledge. That is to say, the teacher puts effort into moving the mean of the class up towards his or her own level, depending on his or her ability. Suppose that, in the i-th iteration, M_i is the mean of the marks obtained by the learners in a class and T_i is the mark of the teacher. The best learner is taken as the teacher, namely:

T_i = X_{min f(X)}   (1)

The teacher puts effort into moving the mean value M_i towards itself; that is, the new mean M_new will be T_i. The learners learn and update their knowledge according to the following form:

X_{new,i} = X_{old,i} + r_i (M_new - T_F M_i)   (2)

where X_{old,i} and X_{new,i} respectively denote the i-th learner's mark before and after learning from the teacher; r_i is a random number between 0 and 1; and T_F is a teaching factor which controls the movement of the mean value. The value of T_F is either 1 or 2, a heuristic step decided randomly by:

T_F = round[1 + rand(0,1)(2 - 1)]   (3)

2.1.2. Learner phase

In this phase, the learners increase their knowledge by interactions among
themselves. A learner communicates randomly with other learners in order to improve his or her knowledge with the help of group discussions, formal communications, presentations, etc. A learner learns something new from the learners who have more knowledge than himself or herself. The learning process can be described as follows. In the i-th iteration, for a learner X_i, randomly select another learner X_j with i ≠ j:

X_{new,i} = X_{old,i} + r_i (X_i - X_j)   if f(X_i) < f(X_j)
X_{new,i} = X_{old,i} + r_i (X_j - X_i)   if f(X_i) ≥ f(X_j)   (4)

2.2. Least squares support vector machine

For a training data set {(x_i, y_i)}, i = 1, 2, ..., l, the standard support vector machine for regression solves:

min_{w,b,ξ,ξ*}  (1/2) w^T w + γ Σ_{i=1}^{l} (ξ_i + ξ_i*)
subject to:  y_i - w^T φ(x_i) - b ≤ ε + ξ_i
             w^T φ(x_i) + b - y_i ≤ ε + ξ_i*
             ξ_i, ξ_i* ≥ 0,  i = 1, 2, ..., l   (5)

where γ is the regularization parameter, and ξ_i and ξ_i* are the slack variables.

Compared with SVM, the least squares support vector machine (LS-SVM) applies a linear least squares criterion to the loss function, replacing the inequality constraints with equality constraints. The optimization problem is then defined as:

min_{w,e}  J(w, e) = (1/2) w^T w + (γ/2) Σ_{i=1}^{l} e_i^2
subject to:  y_i = w^T φ(x_i) + b + e_i,  i = 1, 2, ..., l   (6)

The Lagrangian function is set up by:

L(w, b, e, α) = J(w, e) - Σ_{i=1}^{l} α_i [w^T φ(x_i) + b + e_i - y_i]   (7)

where α_i is the i-th Lagrange multiplier. The Karush-Kuhn-Tucker (KKT) conditions for optimality are described as follows:

∂L/∂w = 0   →  w = Σ_{i=1}^{l} α_i φ(x_i)
∂L/∂b = 0   →  Σ_{i=1}^{l} α_i = 0
∂L/∂e_i = 0 →  α_i = γ e_i
∂L/∂α_i = 0 →  w^T φ(x_i) + b + e_i - y_i = 0   (8)

After eliminating the variables w and e_i, the optimization problem is transformed into the following linear system:

[ Ω + γ^{-1} I   1_l ] [ α ]   [ y ]
[ 1_l^T          0   ] [ b ] = [ 0 ]   (9)

where y = [y_1, ..., y_l]^T, 1_l = [1, ..., 1]^T, α = [α_1, ..., α_l]^T and Ω = [Ω_ij]_{l×l} with

Ω_ij = K(x_i, x_j) = φ(x_i)^T φ(x_j)   (10)

Then the explicit solution of Eq. (9) can be expressed as follows:

b = [1_l^T (Ω + γ^{-1} I)^{-1} y] / [1_l^T (Ω + γ^{-1} I)^{-1} 1_l]
α = (Ω + γ^{-1} I)^{-1} (y - 1_l b)   (11)

According to Mercer's theorem, the output result can be calculated by:

f(x) = Σ_{i=1}^{l} α_i K(x, x_i) + b   (12)

K(x, x_i) = exp(-||x - x_i||^2 / (2σ^2))   (13)

where σ is the Gaussian kernel width; the kernel function K(x, x_i) is selected as the radial basis function (RBF) in this paper.

3. Ameliorated teaching-learning-based optimization

In this section, an ameliorated TLBO algorithm called A-TLBO is proposed to speed up convergence and simultaneously improve the search accuracy of the original TLBO. In A-TLBO there are three major differences: the elitist strategy is introduced to replace the greedy selection mechanism, and the inertia weight and acceleration coefficients are introduced to quicken the processes of Teaching and Learning.
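Training an LS-SVM via Eqs. (9)-(13) amounts to a single linear solve. Below is a minimal sketch of the solution reviewed in Section 2.2, with hypothetical function and variable names, assuming the RBF kernel of Eq. (13):

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    """Gaussian (RBF) kernel matrix, Eq. (13): K(x, z) = exp(-||x - z||^2 / (2 sigma^2))."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit(X, y, gamma, sigma):
    """Solve the LS-SVM linear system of Eq. (9) for (alpha, b)."""
    l = len(y)
    Omega = rbf_kernel(X, X, sigma)            # Eq. (10)
    A = np.zeros((l + 1, l + 1))
    A[:l, :l] = Omega + np.eye(l) / gamma      # Omega + I / gamma
    A[:l, l] = 1.0                             # 1_l column
    A[l, :l] = 1.0                             # 1_l^T row (enforces sum(alpha) = 0)
    rhs = np.concatenate([y, [0.0]])
    sol = np.linalg.solve(A, rhs)
    return sol[:l], sol[l]                     # alpha, b

def lssvm_predict(X_train, alpha, b, X_new, sigma):
    """Eq. (12): f(x) = sum_i alpha_i K(x, x_i) + b."""
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b
```

Note that, unlike the standard SVM, no quadratic program is needed: the equality constraints of Eq. (6) reduce training to the dense linear system of Eq. (9).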
The A-TLBO is described in detail as follows.

3.1. Initial parameters

In A-TLBO, the parameters, namely the number of learners and the maximum iteration number, are set in this step as in TLBO.

3.2. Teacher phase

In this step, the new mark obtained by a learner depends mainly on two parts: the previous mark X_{old,i} and the difference (M_new - T_F M_i) between the existing mean and the new mean. In order to improve the quality of teaching and simultaneously quicken the teaching process, the solution search equation of Eq. (2) is given two important parameters which decide the influence degree of each part, and is modified as follows:

X_{new,i} = ω_i X_{old,i} + c_i (M_new - T_F M_i)   (14)

where ω_i is the inertia weight that controls the impact of the previous solution X_{old,i}, and c_i is the acceleration coefficient between 0 and 1, which decides the maximum step size. If the obtained mark is very low, there is a big gap between the learner and the teacher, so a big correction is needed to improve the learner's mark. Conversely, if the obtained mark is very high, only a small modification is needed. Therefore, to further improve the learning efficiency of the learner, the inertia weight and the acceleration coefficient are defined as functions of the fitness during the learning process:

c_i = [1 / (1 + exp(-fit(i)/ap))]^iter   (15)

ω_i = 1 / (1 + exp(-fit(i)/(ap · iter)))   (16)

where fit(i) is the fitness of the i-th learner, ap is the maximum fitness value among the learners in the first iteration, and iter is the current iteration.

Fig. 1. The flowchart of the A-TLBO algorithm.

In addition, after all the learners update their marks by Eq. (14), their fitness values are calculated together, in matrix form, by the objective function. The new marks and fitness values do not replace the previous marks and fitness values but are respectively merged with the previous ones into two new sets.
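A minimal sketch of the modified Teacher-phase move of Eqs. (14)-(16), assuming minimization, with hypothetical names (`ap`, `iter_no`):

```python
import numpy as np

def ateacher_phase(marks, fitness, ap, iter_no, rng=np.random.default_rng()):
    """One A-TLBO Teacher-phase move, Eqs. (14)-(16), for a minimization problem.

    marks:   (n_learners, dim) current positions
    fitness: (n_learners,) objective values of the current positions
    ap:      maximum fitness among learners in the first iteration
    iter_no: current iteration number (>= 1)
    """
    teacher = marks[np.argmin(fitness)]          # Eq. (1): the best learner teaches
    mean = marks.mean(axis=0)                    # class mean M_i
    TF = round(1 + rng.random())                 # Eq. (3): teaching factor, 1 or 2
    # Eq. (15): fitness-dependent acceleration coefficient, shrinking with iterations
    c = (1.0 / (1.0 + np.exp(-fitness / ap))) ** iter_no
    # Eq. (16): fitness-dependent inertia weight
    w = 1.0 / (1.0 + np.exp(-fitness / (ap * iter_no)))
    return w[:, None] * marks + c[:, None] * (teacher - TF * mean)  # Eq. (14)
```

Poor learners (large fitness relative to `ap`) receive a larger coefficient and hence a larger correction towards the teacher, while good learners are only slightly modified, as described above.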
The fitness set is then sorted in ascending order, and the mark set is reordered according to the order of the fitness set. Finally, the first half of the fitness set is assigned to the learners as their fitness values, and the corresponding marks are set as the marks of all learners. That is, during the mark updates the greedy selection mechanism is not adopted but an elitist strategy. This strategy saves much of the time that would otherwise be spent on pairwise comparisons, and simultaneously increases learning efficiency.

3.3. Learner phase

In this step, the learners increase their knowledge by communicating with each other. Here a learner learns something new from learners who have more knowledge than himself or herself. A learner updates his or her mark by the following form. In the i-th iteration, for a learner X_i, randomly select another learner X_j with i ≠ j:

X_{new,i} = X_{old,i} + α_i (X_j - X_i)       if f(X_i) > f(X_j)
X_{new,i} = X_{old,i} + β_i (X_best - X_i)    if f(X_i) ≤ f(X_j)   (17)

α_i = 1 - exp(fit(X_j) - fit(X_i))   (18)

β_i = 1 - exp(fit(X_best) - fit(X_i))   (19)

where X_best is the best learner in the class, and α_i and β_i are acceleration coefficients which decide the step size depending on the difference between two learners.

When all learners have updated their marks by Eq. (17), their fitness values are calculated together, and then the elitist strategy is adopted, as in the Teacher phase, to obtain the new marks and fitness values of all learners.

3.4. Main steps of A-TLBO

Based on the above explanation of the population initialization, the Teacher phase, and the Learner phase, the flowchart of the A-TLBO algorithm is shown in Fig. 1.

4. Experimental study and discussion

4.1. Experimental setup

In this subsection, in order to test the performance of A-TLBO, ten well-known benchmark optimization problems are adopted; they are described in Table 1, where n denotes the dimension, S denotes a subset of R^n, and the global optimum solutions and locations of the classical functions are also given. Moreover, A-TLBO is compared with other population-based optimization techniques: GSA, ABC and the original TLBO.
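The Learner-phase update of Eqs. (17)-(19) and the elitist merge used in both phases can be sketched as follows (hypothetical names; minimization assumed):

```python
import numpy as np

def elitist_merge(marks, fitness, new_marks, new_fitness):
    """Elitist strategy of Sections 3.2-3.3: pool old and new learners,
    sort by fitness in ascending order, keep the better half as the new class."""
    pool = np.vstack([marks, new_marks])
    fit = np.concatenate([fitness, new_fitness])
    order = np.argsort(fit)[: len(marks)]
    return pool[order], fit[order]

def alearner_phase(marks, fitness, rng=np.random.default_rng()):
    """One A-TLBO Learner-phase move, Eqs. (17)-(19), for a minimization problem."""
    n = len(marks)
    best = marks[np.argmin(fitness)]
    best_fit = fitness.min()
    new = marks.copy()
    for i in range(n):
        j = rng.choice([k for k in range(n) if k != i])  # random partner, i != j
        if fitness[i] > fitness[j]:                      # partner is better: learn from X_j
            a = 1.0 - np.exp(fitness[j] - fitness[i])    # Eq. (18), in (0, 1)
            new[i] = marks[i] + a * (marks[j] - marks[i])
        else:                                            # otherwise learn from the best learner
            b = 1.0 - np.exp(best_fit - fitness[i])      # Eq. (19)
            new[i] = marks[i] + b * (best - marks[i])
    return new
```

Because the merged pool is sorted once and halved, the per-learner greedy comparisons of the original TLBO are avoided, which is the time saving claimed for the elitist strategy.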
The parameter setting of each optimization algorithm is given in Table 2.

All the experiments are carried out under Windows 2000 and Matlab 2009 on an AMD Athlon(tm) 64 X2 Dual Core Processor 5000+ at 2.61 GHz with 2 GB RAM.

4.2. Comparison with other methods

In this subsection, a series of experiments is presented to show the superiority of A-TLBO in convergence speed and solution quality over GSA, ABC and the original TLBO. For each function we perform 30 independent runs, record the runtime and the convergence time, and evaluate the best solution found and the standard deviation. The results are shown in Tables 3-4 and Figs. 2, 3, 4, and 5.

Table 2. Parameter setting.

Method   Population size   Maximum iteration   Dimension of each object   Other
GSA      40                1000                10, 30, 50, 100            G_0 = 100, α = 20
ABC      40                1000                10, 30, 50, 100            Limit = 200
TLBO     40                1000                10, 30, 50, 100            -
A-TLBO   40                1000                10, 30, 50, 100            -

Table 1. Classical benchmark functions.

Test function                                    Global optimum   Optimum location   S
f1(x) = Σ_{i=1}^{n} x_i^2                        0                0^n                [-100, 100]^n
f2(x) = Σ_{i=1}^{n} |x_i| + Π_{i=1}^{n} |x_i|    0                0^n                [-10, 10]^n
f3(x) = Σ_{i=1}^{n} (Σ_{j=1}^{i} x_j)^2          0                0^n                [-100, 100]^n
f4
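The first three benchmark functions of Table 1 can be implemented directly as a check; a minimal sketch under their usual definitions (sphere and Schwefel-type functions):

```python
import numpy as np

def f1(x):
    """f1: sum of squares (sphere); global optimum 0 at x = 0^n."""
    return np.sum(x ** 2)

def f2(x):
    """f2: sum of |x_i| plus product of |x_i|; global optimum 0 at x = 0^n."""
    return np.sum(np.abs(x)) + np.prod(np.abs(x))

def f3(x):
    """f3: sum of squared partial sums; global optimum 0 at x = 0^n."""
    return np.sum(np.cumsum(x) ** 2)
```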