{"id":1147,"date":"2014-02-20T21:01:16","date_gmt":"2014-02-20T13:01:16","guid":{"rendered":"http:\/\/blog.dayandcarrot.net\/?p=1147"},"modified":"2014-02-20T21:01:16","modified_gmt":"2014-02-20T13:01:16","slug":"hidden-markov-model-%e9%9a%90%e9%a9%ac%e5%b0%94%e7%a7%91%e5%a4%ab%e6%a8%a1%e5%9e%8b-%e3%80%90%e8%87%aa%e5%b7%b1%e7%9a%84%e5%9e%83%e5%9c%be%e7%bf%bb%e8%af%91%e3%80%91","status":"publish","type":"post","link":"https:\/\/dayandcarrot.space\/?p=1147","title":{"rendered":"Hidden Markov Model \u9690\u9a6c\u5c14\u79d1\u592b\u6a21\u578b \u3010\u81ea\u5df1\u7684\u5783\u573e\u7ffb\u8bd1\u3011"},"content":{"rendered":"<p>\u770b\u8bed\u97f3\u60c5\u611f\u8bc6\u522b\u770b\u5230\u7684\u8bba\u6587\uff0c\u8bba\u6587\u662f\uff1a<br \/>\nSurvey on speech emotion recognition: Features, classification schemes,<br \/>\nand databases<br \/>\n\u5176\u4e2d\u6709\u4e00\u6bb5\u5199\u5230\u4e86HMM\u7528\u4f5c\u5206\u7c7b\u5668\u7684\uff0c\u539f\u6587\u4e2d\u90e8\u5206\u5982\u4e0b\uff08\u7248\u6743\u95ee\u9898\u6211\u5c31\u4e0d\u653e\u539f\u6587\u5168\u6587\u4e86\uff09\uff1a<br \/>\nThe HMM classifier has been extensively used in speech<br \/>\napplications such as isolated word recognition and speech<br \/>\nsegmentation because it is physically related to the production<br \/>\nmechanism of speech signal[102]. The HMM is a doubly stochastic<br \/>\nprocess which consists of a first-order Markov chain whose states<br \/>\nare hiddenfrom the observer. Associated with each state is a<br \/>\nrandom process which generates the observation sequence. Thus,<br \/>\nthe hidden states of the model capture the temporal structure of<br \/>\nthe data. Mathematically, for modeling a sequence of observable<br \/>\ndata vectors,x1,&#8230;,xT, by an HMM, we assume the existence of a<br \/>\nhidden Markov chain responsible for generating this observable<br \/>\ndata sequence. LetKbe the number of states,pi<br \/>\n, i\u00bc1,y,Kbe the<br \/>\ninitial state probabilities for the hidden Markov chain, and aij<br \/>\n,<br \/>\ni\u00bc1,y,K,j\u00bc1,y,Kbe the transition probability from stateito state<br \/>\nj. Usually, the HMM parameters are estimated based on the ML<br \/>\nprinciple. Assuming the true state sequence is s1,&#8230;,sT, the<br \/>\nlikelihood of the observable data is given by<br \/>\nwhere<br \/>\nbi<br \/>\n\u00f0xt\u00de\u0006P\u00f0xjst \u00bci\u00de<br \/>\nis the observation density of theith state. This density can be either<br \/>\ndiscrete for discrete HMM or a mixture of Gaussian densities for<br \/>\ncontinuous HMM. 
Since the true state sequence is not typically known, we have to sum over all possible state sequences to find the likelihood of a given data sequence, i.e.

\[ P(x_1, \ldots, x_T) = \sum_{s_1, \ldots, s_T} \pi_{s_1} \prod_{t=2}^{T} a_{s_{t-1} s_t} \prod_{t=1}^{T} b_{s_t}(x_t). \]

====================================================
Below is my own translation:
[Image: HMM 隐马尔科夫模型 (http://orz.dayandcarrot.net/wordpress/wp-content/uploads/2014/02/QQ截图20140220205858.gif)]
====================================================
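To make the two formulas above concrete, here is a minimal NumPy sketch of a small discrete HMM (my own illustration, not from the paper; the parameters pi, A, B and the observation sequence are all made up). It evaluates the likelihood for a known state sequence, and checks that summing over every one of the \(K^T\) state sequences matches the marginal likelihood computed by the standard forward recursion:

```python
import itertools
import numpy as np

# Toy discrete HMM with K = 2 hidden states and 3 observation symbols.
# Every number here is made up purely for illustration.
pi = np.array([0.6, 0.4])           # initial state probabilities pi_i
A = np.array([[0.7, 0.3],           # transition probabilities a_ij
              [0.2, 0.8]])
B = np.array([[0.5, 0.4, 0.1],      # discrete observation densities b_i(x)
              [0.1, 0.3, 0.6]])

def joint_likelihood(obs, states):
    """P(x, s) = pi_{s1} b_{s1}(x_1) * prod_t a_{s_{t-1} s_t} b_{s_t}(x_t)."""
    p = pi[states[0]] * B[states[0], obs[0]]
    for t in range(1, len(obs)):
        p *= A[states[t - 1], states[t]] * B[states[t], obs[t]]
    return p

def forward_likelihood(obs):
    """Marginal P(x): forward algorithm, O(K^2 T) instead of K^T terms."""
    alpha = pi * B[:, obs[0]]                # alpha_1(i) = pi_i b_i(x_1)
    for t in range(1, len(obs)):
        alpha = (alpha @ A) * B[:, obs[t]]   # alpha_t(j) = sum_i alpha_{t-1}(i) a_ij b_j(x_t)
    return alpha.sum()

obs = [0, 2, 1, 2]                           # an arbitrary observation sequence
K, T = len(pi), len(obs)

# Brute-force sum over all K^T state sequences, as in the second formula.
brute = sum(joint_likelihood(obs, s)
            for s in itertools.product(range(K), repeat=T))

print(forward_likelihood(obs), brute)        # the two values agree
```

The brute-force sum is exactly the second formula above; the forward recursion computes the same quantity as a dynamic program. For a continuous HMM, the lookup `B[:, obs[t]]` would be replaced by each state's Gaussian-mixture density evaluated at \(x_t\).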