Hidden Markov Model (HMM) — Lecture Slides (隐马尔科夫模型-英文课件.ppt)
Hidden Markov Model (HMM)

Outline
- The problems with the template method
- HMM is a popular statistical tool
- Discrete-time Markov processes
- Theory of HMM: the three basic problems

Review: the template method
- Key idea: derive typical sequences of speech frames for a pattern via some averaging procedure
- Rely on local spectral distance measures to compare patterns
- Dynamic programming to temporally align patterns

Problems of the template method
- Speech is a stochastic signal
- The template method is not a statistical method in the strict sense
- Statistical techniques have been widely used in clustering to create reference patterns
- The statistical signal characterization inherent in the template representation is only implicit and often inadequate: it neglects second-order statistics
- Lack of robustness

HMM: a popular tool
- The basic theory of the HMM was published in a series of classic papers by Baum and his colleagues in the late 1960s and early 1970s
- HMMs were implemented for speech-processing applications by Baker at CMU and by Jelinek and his colleagues at IBM in the 1970s
- HMMs provide a natural and highly reliable way of recognizing speech for a wide range of applications

The underlying assumptions of the HMM
- The speech signal can be well characterized as a parametric random process
- The parameters of the stochastic process can be determined in a precise, well-defined manner

Discrete-time Markov process
- A system with N discrete states indexed by $1, 2, \ldots, N$; $q_t$ denotes the state at time t
- A time-invariant system? If so, the transition probabilities $a_{ij} = P(q_t = j \mid q_{t-1} = i)$ do not depend on t

Observable Markov model
- Each state corresponds to an observable event
- Example: weather, observed once a day
  - State 1: rain or snow
  - State 2: cloudy
  - State 3: sunny
- For what cases could such a model be used?
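As a minimal sketch of the observable weather model, the snippet below computes the probability of a state sequence under a first-order Markov chain. The transition values and the uniform initial distribution are illustrative assumptions; the slides give no numbers.

```python
import numpy as np

# Hypothetical transition probabilities for the 3-state weather model
# (rain/snow, cloudy, sunny); values are illustrative only.
A = np.array([
    [0.4, 0.3, 0.3],   # from rain/snow
    [0.2, 0.6, 0.2],   # from cloudy
    [0.1, 0.1, 0.8],   # from sunny
])
pi = np.array([1/3, 1/3, 1/3])  # assumed uniform initial distribution

def sequence_probability(states, A, pi):
    """P(q_1, ..., q_T) for an observable Markov chain."""
    p = pi[states[0]]
    for prev, cur in zip(states, states[1:]):
        p *= A[prev, cur]
    return p

# Probability of observing sunny, sunny, cloudy, rain (states 2, 2, 1, 0)
print(sequence_probability([2, 2, 1, 0], A, pi))
```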
Extension to hidden Markov models: the urn-and-ball model
- N glass urns, each containing M balls of distinct colors
- An urn is first selected at random, then a ball is chosen at random from it and its color is recorded as the observation
- The ball is replaced in the urn from which it was selected
- The procedure is repeated

HMM for weather forecasting
- What operations would you design to carry out the ball selection?
- How would you extend the Markov process to an HMM to give a more precise weather forecast? (A simulation sketch follows.)
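The sketch below simulates the urn-and-ball generative process. The slides pose the design as a question, so the urn-to-urn Markov transition dynamics and all probability values here are assumptions for illustration; only the ball colors are emitted, while the urn sequence stays hidden.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters: N urns, M colors, assumed numbers.
N, M = 2, 3
A = np.array([[0.7, 0.3],        # urn-to-urn transition probabilities
              [0.4, 0.6]])
B = np.array([[0.6, 0.3, 0.1],   # color distribution in urn 0
              [0.1, 0.3, 0.6]])  # color distribution in urn 1
pi = np.array([0.5, 0.5])

def generate(T):
    """Generate T observations from the urn-and-ball process.
    Only the colors are returned; the urn sequence stays hidden."""
    urn = rng.choice(N, p=pi)
    colors = []
    for _ in range(T):
        colors.append(rng.choice(M, p=B[urn]))
        urn = rng.choice(N, p=A[urn])
    return colors

print(generate(10))
```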
Theory of HMM
- Topology
- Elements
- Bi-hidden processes
- The three basic problems

HMM topologies
- Ergodic (fully connected)
- Left-right
- Parallel-path left-right

Elements of an HMM
1. N: the number of states in the model
2. M: the number of distinct observation symbols per state
3. The state-transition probability distribution $A = \{a_{ij}\}$, where $a_{ij} = P(q_{t+1} = j \mid q_t = i)$
4. The observation symbol probability distribution $B = \{b_j(k)\}$, where $b_j(k) = P(o_t = v_k \mid q_t = j)$
5. The initial state distribution $\pi = \{\pi_i\}$, where $\pi_i = P(q_1 = i)$

We use the compact notation $\lambda = (A, B, \pi)$ to indicate the complete parameter set of the model. This parameter set defines a probability measure for O, which we discuss later; we use the terminology HMM to indicate the parameter set and the associated probability measure interchangeably, without ambiguity.
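As a sketch, the elements above can be collected into a small container; the `HMM` class name and validation checks are illustrative conveniences, not part of the original slides. Later sketches reuse it.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class HMM:
    """Container for the complete parameter set lambda = (A, B, pi).
    A:  (N, N) state-transition matrix, A[i, j] = P(q_{t+1}=j | q_t=i)
    B:  (N, M) observation matrix,      B[j, k] = P(o_t=v_k | q_t=j)
    pi: (N,)   initial state distribution."""
    A: np.ndarray
    B: np.ndarray
    pi: np.ndarray

    def __post_init__(self):
        # Every row of A and B, and pi itself, must sum to 1.
        assert np.allclose(self.A.sum(axis=1), 1.0)
        assert np.allclose(self.B.sum(axis=1), 1.0)
        assert np.isclose(self.pi.sum(), 1.0)
```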
Bi-hidden processes
- The states
- The observations

The three basic problems
- Evaluation: the forward procedure
- Optimal path: the Viterbi algorithm
- Training: the Baum-Welch algorithm

Problem 1 (evaluation)
Given the observation sequence $O = (o_1, o_2, \ldots, o_T)$ and a model $\lambda = (A, B, \pi)$, how do we efficiently compute $P(O \mid \lambda)$, the probability of the observation sequence given the model? We can also view this as scoring how well a given model matches a given observation sequence. Solving this problem allows us to choose the model that best matches the observations.

Problem 2 (optimal state sequence)
Given the observation sequence $O = (o_1, o_2, \ldots, o_T)$ and the model $\lambda$, how do we choose a corresponding state sequence $q = (q_1, q_2, \ldots, q_T)$ that is optimal in some sense? There is no single "correct" state sequence to be found; instead, we use an optimality criterion and solve the problem as well as possible under it.

Problem 3 (training)
How do we adjust the model parameters $\lambda = (A, B, \pi)$ to maximize $P(O \mid \lambda)$? Here we attempt to optimize the model parameters to best describe how a given observation sequence comes about. The observation sequence used to adjust the parameters is called a training sequence, because it is used to train the HMM.
14、bability of the observation sequence.Consider one such fixed-state sequencenWhere q1 is the initial state.The probability of the observation sequence O given the state sequence of q is nWhere we have assumed statistical independence of observation.Thus we get 24Probability EvaluationnThe probability
15、 of such a state sequence q can be written as nThe joint probability of O and q occur simultaneously,is simply the product of the above two terms25Probability EvaluationnThe probability of O is obtained by summing this joint probability over all possible state sequence q,giving26A.The Forward Proced
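A brute-force sketch of the direct summation, reusing the hypothetical `HMM` container above. It enumerates all $N^T$ state sequences, so it is usable only for tiny T; it exists purely to make the formula concrete.

```python
import numpy as np
from itertools import product

def evaluate_brute_force(hmm, obs):
    """P(O | lambda) by direct summation over all N**T state sequences.
    Exponential in T; for illustrating the formula only."""
    N = hmm.A.shape[0]
    T = len(obs)
    total = 0.0
    for q in product(range(N), repeat=T):
        # pi_{q1} * b_{q1}(o1) * a_{q1 q2} * b_{q2}(o2) * ...
        p = hmm.pi[q[0]] * hmm.B[q[0], obs[0]]
        for t in range(1, T):
            p *= hmm.A[q[t-1], q[t]] * hmm.B[q[t], obs[t]]
        total += p
    return total
```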
A. The forward procedure
Consider the forward variable $\alpha_t(i)$ defined as

$$\alpha_t(i) = P(o_1 o_2 \cdots o_t,\; q_t = i \mid \lambda),$$

that is, the probability of the partial observation sequence $o_1 o_2 \cdots o_t$ (until time t) and state i at time t, given the model $\lambda$. We can solve for $\alpha_t(i)$ inductively, as follows:

1. Initialization: $\alpha_1(i) = \pi_i\, b_i(o_1), \quad 1 \le i \le N$
2. Induction: $\alpha_{t+1}(j) = \Big[\sum_{i=1}^{N} \alpha_t(i)\, a_{ij}\Big] b_j(o_{t+1}), \quad 1 \le t \le T-1,\; 1 \le j \le N$
3. Termination: $P(O \mid \lambda) = \sum_{i=1}^{N} \alpha_T(i)$
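A minimal vectorized sketch of the forward procedure, under the same assumed `HMM` container; it returns the full trellis and $P(O \mid \lambda)$.

```python
import numpy as np

def forward(hmm, obs):
    """Forward procedure: returns (alpha, P(O | lambda)).
    alpha[t, i] holds the forward variable at (0-based) time t, state i."""
    N, T = hmm.A.shape[0], len(obs)
    alpha = np.zeros((T, N))
    alpha[0] = hmm.pi * hmm.B[:, obs[0]]           # initialization
    for t in range(1, T):                          # induction
        alpha[t] = (alpha[t-1] @ hmm.A) * hmm.B[:, obs[t]]
    return alpha, alpha[-1].sum()                  # termination
```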
B. The backward procedure
In a similar manner, we can consider a backward variable $\beta_t(i)$ defined as

$$\beta_t(i) = P(o_{t+1} o_{t+2} \cdots o_T \mid q_t = i, \lambda),$$

that is, the probability of the partial observation sequence from t+1 to the end, given state i at time t and the model $\lambda$. Again we can solve for $\beta_t(i)$ inductively, as follows:

1. Initialization: $\beta_T(i) = 1, \quad 1 \le i \le N$
2. Induction: $\beta_t(i) = \sum_{j=1}^{N} a_{ij}\, b_j(o_{t+1})\, \beta_{t+1}(j), \quad t = T-1, T-2, \ldots, 1,\; 1 \le i \le N$

The initialization step arbitrarily defines $\beta_T(i)$ to be 1 for all i. The induction step, illustrated in the figure below, shows that in order to have been in state i at time t and to account for the observation sequence from time t+1 on, you have to consider all possible states j at time t+1, accounting for the transition from i to j as well as the observation $o_{t+1}$ in state j, and then account for the remaining partial observation sequence from state j. We will see later how the backward as well as the forward calculations are used to help solve fundamental problems 2 and 3 of HMMs.

[Figure: backward induction lattice. State $s_i$ at time t connects to states $s_1, s_2, s_3, \ldots, s_N$ at time t+1 via the transition probabilities $a_{i1}, a_{i2}, a_{i3}, \ldots, a_{iN}$.]
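A matching sketch of the backward procedure, under the same assumptions as the forward sketch above.

```python
import numpy as np

def backward(hmm, obs):
    """Backward procedure: beta[t, i] holds the backward variable at
    (0-based) time t, state i; the last row is 1 by definition."""
    N, T = hmm.A.shape[0], len(obs)
    beta = np.zeros((T, N))
    beta[-1] = 1.0                                 # initialization
    for t in range(T - 2, -1, -1):                 # induction, backwards in time
        # beta[t, i] = sum_j a_ij * b_j(o_{t+1}) * beta[t+1, j]
        beta[t] = hmm.A @ (hmm.B[:, obs[t+1]] * beta[t+1])
    return beta
```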
Solving problem 2: the a posteriori variable
There are several possible ways of solving problem 2, finding the "optimal" state sequence associated with the given observation sequence. One approach is to define the a posteriori probability variable

$$\gamma_t(i) = P(q_t = i \mid O, \lambda),$$

that is, the probability of being in state i at time t, given the observation sequence O and the model $\lambda$. Since $\alpha_t(i)\, \beta_t(i)$ is equal to $P(O,\; q_t = i \mid \lambda)$, we can write $\gamma_t(i)$ in several forms, including

$$\gamma_t(i) = \frac{\alpha_t(i)\, \beta_t(i)}{P(O \mid \lambda)} = \frac{\alpha_t(i)\, \beta_t(i)}{\sum_{j=1}^{N} \alpha_t(j)\, \beta_t(j)},$$

where we see that $\alpha_t(i)$ accounts for the partial observation sequence $o_1 \cdots o_t$ and state i at time t, while $\beta_t(i)$ accounts for the remainder of the observation sequence $o_{t+1} \cdots o_T$, given state $q_t = i$. Using $\gamma_t(i)$, we can solve for the individually most likely state $q_t^*$ at each time t, as

$$q_t^* = \arg\max_{1 \le i \le N} \gamma_t(i), \quad 1 \le t \le T.$$
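Combining the two passes gives $\gamma_t(i)$ and the individually most likely states; this sketch reuses the `forward` and `backward` functions above. Note that the per-time argmax may not form a valid path under the transition matrix, which motivates the Viterbi algorithm below.

```python
import numpy as np

def posterior_states(hmm, obs):
    """gamma[t, i] = P(q_t = i | O, lambda), plus the individually most
    likely state at each time, from the forward and backward passes."""
    alpha, prob = forward(hmm, obs)
    beta = backward(hmm, obs)
    gamma = alpha * beta / prob
    return gamma, gamma.argmax(axis=1)  # per-time argmax, not a joint path
```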
A. The Viterbi algorithm
To find the single best state sequence $q = (q_1, q_2, \ldots, q_T)$ for the given observation sequence $O = (o_1, o_2, \ldots, o_T)$, we need to define the quantity

$$\delta_t(i) = \max_{q_1, q_2, \ldots, q_{t-1}} P(q_1 q_2 \cdots q_{t-1},\; q_t = i,\; o_1 o_2 \cdots o_t \mid \lambda),$$

that is, $\delta_t(i)$ is the best score along a single path, at time t, which accounts for the first t observations and ends in state i. By induction we have

$$\delta_{t+1}(j) = \Big[\max_{i} \delta_t(i)\, a_{ij}\Big]\, b_j(o_{t+1}).$$

To retrieve the state sequence, we keep track of the maximizing argument in an array $\psi_t(j)$. The complete procedure for finding the best state sequence can now be stated as follows:

1. Initialization: $\delta_1(i) = \pi_i\, b_i(o_1), \quad \psi_1(i) = 0, \quad 1 \le i \le N$
2. Recursion: $\delta_t(j) = \max_{1 \le i \le N} \big[\delta_{t-1}(i)\, a_{ij}\big]\, b_j(o_t), \quad \psi_t(j) = \arg\max_{1 \le i \le N} \big[\delta_{t-1}(i)\, a_{ij}\big], \quad 2 \le t \le T$
3. Termination: $P^* = \max_{1 \le i \le N} \delta_T(i), \quad q_T^* = \arg\max_{1 \le i \le N} \delta_T(i)$
4. Path (state sequence) backtracking: $q_t^* = \psi_{t+1}(q_{t+1}^*), \quad t = T-1, T-2, \ldots, 1$

It should be noted that the Viterbi algorithm is similar in implementation to the forward calculation, with the summation over states replaced by a maximization.
B. Alternative Viterbi implementation
By taking logarithms of the model parameters, the Viterbi algorithm of the preceding section can be implemented without the need for any multiplications, thus:

0. Preprocessing: $\tilde{\pi}_i = \log \pi_i, \quad \tilde{b}_i(o_t) = \log b_i(o_t), \quad \tilde{a}_{ij} = \log a_{ij}$
1. Initialization: $\tilde{\delta}_1(i) = \tilde{\pi}_i + \tilde{b}_i(o_1), \quad \psi_1(i) = 0$
2. Recursion: $\tilde{\delta}_t(j) = \max_{1 \le i \le N} \big[\tilde{\delta}_{t-1}(i) + \tilde{a}_{ij}\big] + \tilde{b}_j(o_t), \quad \psi_t(j) = \arg\max_{1 \le i \le N} \big[\tilde{\delta}_{t-1}(i) + \tilde{a}_{ij}\big]$
3. Termination: $\tilde{P}^* = \max_{1 \le i \le N} \tilde{\delta}_T(i), \quad q_T^* = \arg\max_{1 \le i \le N} \tilde{\delta}_T(i)$
4. Backtracking: $q_t^* = \psi_{t+1}(q_{t+1}^*), \quad t = T-1, T-2, \ldots, 1$

(A log-domain sketch appears after the applications list below.)

Applications of time-series modeling
- Acoustic statistical models (speech recognition)
- Language models
- Communication systems
- Biological signal processing
- Handwritten character recognition
- Face recognition: feature extraction (Ferdinando Samaria et al. at O…)
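A log-domain sketch of the alternative implementation above; besides replacing multiplications with additions, it avoids numerical underflow on long observation sequences. It assumes strictly positive model parameters (zero entries would produce $-\infty$ log scores, which NumPy handles but warns about).

```python
import numpy as np

def viterbi_log(hmm, obs):
    """Log-domain Viterbi: additions instead of multiplications."""
    N, T = hmm.A.shape[0], len(obs)
    logA = np.log(hmm.A)                            # preprocessing: take logs once
    logB = np.log(hmm.B)
    delta = np.zeros((T, N))
    psi = np.zeros((T, N), dtype=int)
    delta[0] = np.log(hmm.pi) + logB[:, obs[0]]     # initialization
    for t in range(1, T):                           # recursion
        scores = delta[t-1][:, None] + logA
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + logB[:, obs[t]]
    path = np.zeros(T, dtype=int)                   # backtracking
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = psi[t+1][path[t+1]]
    return path, delta[-1].max()                    # path and log P*
```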