Fundamentals of Applied Information Theory (6)
TBSI 2017. All rights reserved.

Lecture 5: Maximum Entropy and Minimum KL Divergence

Outline
1. Ill-Posed Problems
2. Maximum Entropy
3. Minimum K-L Divergence
4. Connections Between Them
5. Intuition for the ME Principle
6. Applications of the ME Principle

1. Ill-Posed Problems

A simple example: sensor network localization.
[Figure: sensor nodes, each with sensing radius r]

Sensor Network Localization
- Quasi-distances are known between some pairs of nodes.
- Can we recover the coordinates of each node?
- GPS revisited: that problem is over-determined.
- Sensor network localization is typically under-determined.
Regime of Quantitative Research
- Categories of problems:
  - Direct problem: model the physical laws and determine the parameters of the model; map input to output.
  - Inverse problem: from observations, infer the system parameters and the input.
- Ill-posed problems:
  - Over-determined: too many clues.
  - Under-determined: too few clues.

A Simplified View
- System model: input $X$ passes through system $A$ to produce output $Y$.
- Direct problem: know $X$ and $A$, solve for $Y$.
- Inverse problem: know $Y$, solve for $X$ and $A$; or know $Y$ and $A$, solve for $X$.

For Over-Determined Problems
Given $AX = Y$, where $A$ is an $m \times n$ matrix, $X$ is an $n$-dimensional column vector, and $Y$ is an $m$-dimensional column vector. Because the problem is over-determined, $m > n$; assume $\mathrm{rank}(A) = n$ (full column rank). Solving the problem by least squares means finding the $X$ that minimizes
$$J = (AX - Y)^{\top}(AX - Y).$$
Setting the gradient of $J$ to zero yields the normal equations $A^{\top}A X = A^{\top}Y$, so $X = (A^{\top}A)^{-1}A^{\top}Y$.
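To make the least-squares recipe concrete, here is a minimal NumPy sketch (not from the slides; the matrix $A$, the true $X$, and the noise level are made-up example data) solving an over-determined system both via the normal equations and via numpy.linalg.lstsq:

```python
# Minimal sketch: least squares for an over-determined AX = Y.
# All data below are hypothetical illustration values.
import numpy as np

rng = np.random.default_rng(0)
m, n = 8, 3                       # m > n: over-determined
A = rng.normal(size=(m, n))       # assume rank(A) = n (full column rank)
x_true = np.array([1.0, -2.0, 0.5])
y = A @ x_true + 0.01 * rng.normal(size=m)   # noisy observations

# Normal equations: X = (A^T A)^{-1} A^T Y
x_ne = np.linalg.solve(A.T @ A, A.T @ y)

# Same minimizer via SVD-based lstsq (numerically better conditioned)
x_ls, residuals, rank, _ = np.linalg.lstsq(A, y, rcond=None)

print(x_ne, x_ls)                 # both close to x_true
```

lstsq solves the same minimization through an SVD, which avoids explicitly forming $A^{\top}A$ and squaring the condition number.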
Under-Determined Problems
- Now $A$ is $m \times n$ with $m < n$ and $\mathrm{rank}(A) \le m$: there is more than one solution.
- Which one is the most "reasonable"?
- E. T. Jaynes proposed the Maximum Entropy (ME) Principle in 1957.

2. Maximum Entropy Principle
Edwin Thompson Jaynes (July 5, 1922 - April 30, 1998)

Intuition
- Maximum entropy selects the most "uniform" distribution consistent with the constraints.
- The law of large numbers (LLN) likewise points to the uniform distribution when nothing else is known.

Formal Problem
- Suppose we have a discrete RV $X$ with unknown p.m.f. $p(x)$, $x \in \mathcal{X}$. Given the expectations of some functions,
$$\sum_{x \in \mathcal{X}} p(x) f_m(x) = C_m, \quad m = 1, 2, \ldots, M,$$
determine the p.m.f.
- Convert the problem into a constrained optimization problem:
  - Objective function: $H(X) = -\sum_{x \in \mathcal{X}} p(x) \log p(x)$
  - Constraints: $\sum_{x \in \mathcal{X}} p(x) = 1$ and $\sum_{x \in \mathcal{X}} p(x) f_m(x) = C_m$, $m = 1, 2, \ldots, M$
  - Solution: $p^*(x) = \arg\max_{p(x)} H(X)$

Maximum Entropy Theorem
- Theorem 5.1: The p.m.f. that achieves maximum entropy is
$$p(x) = \exp\Big(\lambda_0 + \sum_{m=1}^{M} \lambda_m f_m(x)\Big),$$
where $\lambda_0, \ldots, \lambda_M$ satisfy
$$\sum_{x \in \mathcal{X}} p(x) = 1, \qquad \sum_{x \in \mathcal{X}} p(x) f_m(x) = C_m, \quad m = 1, 2, \ldots, M.$$
- Proof: apply Lagrange multipliers.

Proof of Theorem 5.1
Let the auxiliary function be
$$F = H(X) + \eta\Big(\sum_{x \in \mathcal{X}} p(x) - 1\Big) + \sum_{m=1}^{M} \lambda_m\Big(\sum_{x \in \mathcal{X}} p(x) f_m(x) - C_m\Big),$$
and by taking
$$\frac{\partial F}{\partial p(x)} = -1 - \log p(x) + \eta + \sum_{m=1}^{M} \lambda_m f_m(x) = 0,$$
we have
$$p(x) = \exp\Big(\lambda_0 + \sum_{m=1}^{M} \lambda_m f_m(x)\Big),$$
where $\eta = \lambda_0 + 1$, and the $\lambda_m$ ($m = 0, 1, \ldots, M$) can be solved from the $M + 1$ constraints.
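As a sketch of how the multipliers in Theorem 5.1 are found in practice, the following assumes Jaynes' classic loaded-die setup (a standard illustration, not stated on these slides): support $\{1,\ldots,6\}$, a single feature $f_1(x) = x$, and the constraint $E[X] = 4.5$. Here $\lambda_1$ is obtained by one-dimensional root-finding and $\lambda_0$ is absorbed into the normalization:

```python
# Sketch (assumed example): maximum-entropy p.m.f. on {1,...,6}
# subject to E[X] = 4.5.  Theorem 5.1 gives p(x) = exp(l0 + l1*x).
import numpy as np
from scipy.optimize import brentq

xs = np.arange(1, 7)          # support {1,...,6}, feature f_1(x) = x
C1 = 4.5                      # given expectation

def mean_under(lam):
    w = np.exp(lam * xs)      # unnormalized exp(lambda_1 * x)
    p = w / w.sum()           # normalization absorbs lambda_0
    return p @ xs

# lambda_1 solves E_lambda[X] = C1; the mean is monotone in lambda
lam1 = brentq(lambda lam: mean_under(lam) - C1, -10, 10)
p = np.exp(lam1 * xs); p /= p.sum()

print("lambda_1 =", lam1)
print("p =", p, " E[X] =", p @ xs)        # E[X] close to 4.5
print("H(X) =", -(p * np.log(p)).sum())
```

With $C_1 = 3.5$ (an unloaded die) the root is $\lambda_1 = 0$ and the solution is exactly uniform, matching the intuition slide.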
Continuous R.V.s
- Substitute entropy with differential entropy.
- Maximum entropy distribution theorem: among all densities satisfying the constraints
$$p(x) \ge 0, \ \text{with } p(x) = 0 \text{ for } x \notin S; \qquad \int_S p(x)\,dx = 1; \qquad \int_S p(x) f_m(x)\,dx = C_m, \quad m = 1, 2, \ldots, M,$$
the one that maximizes the differential entropy is
$$p(x) = \exp\Big(\lambda_0 + \sum_{k=1}^{M} \lambda_k f_k(x)\Big).$$
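A standard worked instance of the continuous theorem (assumed here for illustration; the slide does not spell it out): take $S = \mathbb{R}$ with constraints on the mean and variance, i.e. $f_1(x) = x$ with $C_1 = \mu$ and $f_2(x) = (x-\mu)^2$ with $C_2 = \sigma^2$. The theorem's exponential form is then the exponential of a quadratic, which is exactly a Gaussian density:

```latex
% Maximum differential entropy on S = R under mean/variance constraints.
\[
  p(x) = \exp\!\big(\lambda_0 + \lambda_1 x + \lambda_2 (x-\mu)^2\big)
  \;\Longrightarrow\;
  p(x) = \frac{1}{\sqrt{2\pi\sigma^2}}
         \exp\!\Big(-\frac{(x-\mu)^2}{2\sigma^2}\Big),
\]
% with \lambda_1 = 0 (forced by E[X] = \mu),
% \lambda_2 = -1/(2\sigma^2), and \lambda_0 = -\tfrac12\log(2\pi\sigma^2)
% chosen so the constraints hold; the maximum differential entropy is
% h(X) = \tfrac12 \log(2\pi e \sigma^2).
```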
- ME principle: no prior knowledge of the p.m.f.
- What if there is a prior on the p.m.f.?
- K-L divergence measures the difference between two p.m.f.s.
- Minimum K-L divergence: under the given constraints, find a p.m.f. that is as close to the prior as possible.

3. Minimum K-L Divergence
S. Kullback (1903 - 1994)

Problem Statement of the Minimum Discrimination Information Principle
- Problem: for some random variable $X$, the distribution $q(x)$ is unknown; we know its prior density $p(x)$ and the expectations of several functions,
$$\int_S q(x) f_m(x)\,dx = C_m, \quad m = 1, 2, \ldots, M.$$
Find the best estimate of $q(x)$ under these conditions.
- Following the minimum discrimination information principle, this can be stated as a constrained optimization problem: take the discrimination information between the prior and the target distribution,
$$D(q \| p) = \int_S q(x) \log \frac{q(x)}{p(x)}\,dx,$$
as the objective function, and solve, under the constraints
$$\int_S q(x) f_m(x)\,dx = C_m, \quad m = 1, 2, \ldots, M, \qquad \int_S q(x)\,dx = 1,$$
for
$$q^*(x) = \arg\min_{q(x)} D(q \| p).$$

Minimum K-L Divergence Principle
- Theorem 5.2: Given a prior p.m.f. $p(x)$ and the constraints, the minimum K-L divergence p.m.f. is
$$q(x) = p(x) \exp\Big(\lambda_0 + \sum_{m=1}^{M} \lambda_m f_m(x)\Big),$$
where $\lambda_0, \ldots, \lambda_M$ are taken such that $q(x)$ satisfies
$$\sum_{x \in \mathcal{X}} q(x) = 1, \qquad \sum_{x \in \mathcal{X}} q(x) f_m(x) = C_m, \quad m = 1, 2, \ldots, M.$$
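A minimal sketch of Theorem 5.2 in action, with made-up numbers: a non-uniform prior $p(x)$ on $\{1,\ldots,6\}$ and the single constraint $E[X] = 4.5$. The solution is an exponentially tilted version of the prior; as before, $\lambda_1$ comes from one-dimensional root-finding and $\lambda_0$ from normalization:

```python
# Sketch (hypothetical prior): minimum K-L divergence q(x) given
# prior p(x) and one moment constraint.  Theorem 5.2 form:
# q(x) = p(x) * exp(lambda_0 + lambda_1 * x).
import numpy as np
from scipy.optimize import brentq

xs = np.arange(1, 7)
prior = np.array([0.3, 0.2, 0.2, 0.1, 0.1, 0.1])   # assumed prior p(x)
C1 = 4.5

def mean_tilted(lam):
    w = prior * np.exp(lam * xs)   # tilt the prior by exp(lambda_1 * x)
    q = w / w.sum()                # normalization absorbs lambda_0
    return q @ xs

lam1 = brentq(lambda lam: mean_tilted(lam) - C1, -20, 20)
q = prior * np.exp(lam1 * xs); q /= q.sum()

kl = (q * np.log(q / prior)).sum()
print("q =", q, " E[X] =", q @ xs, " D(q||p) =", kl)
```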
4. Connections
The minimum K-L divergence principle is a generalization of the ME principle. Assume the prior is uniform over $K$ points, $p(x) = \frac{1}{K}$. Then
$$D(q \| p) = \sum_x q(x) \log \frac{q(x)}{1/K} = \sum_x q(x) \log q(x) + \log K = \log K - H(X),$$
so minimizing $D(q \| p)$ under the constraints is equivalent to maximizing $H(X)$: with a uniform prior, the minimum K-L solution coincides with the maximum entropy solution.
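A quick numerical sanity check of the identity above, with an arbitrary made-up $q$ on $K = 4$ points:

```python
# Check: with a uniform prior u on K points, D(q||u) = log K - H(q).
import numpy as np

q = np.array([0.4, 0.3, 0.2, 0.1])    # arbitrary example p.m.f.
K = len(q)
u = np.full(K, 1.0 / K)               # uniform prior

D = (q * np.log(q / u)).sum()
H = -(q * np.log(q)).sum()
print(D, np.log(K) - H)               # the two values agree
```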