逻辑、计算和博弈 (11).pdf (Logic, Computation and Games, part 11)

LOGIC, COMPUTATION AND GAMES
Probability & Logic

The puzzle occurs in many different forms: The Jailer, the Three Glasses, the Quizmaster.

Discussion
Answer 1: no difference. Answer 2: Door 2 has become more probable, you should switch. Is the right solution to switch? Better: analyze what is involved. The puzzle occurs in many forms: The Jailer, Three Glasses.

Bayes' Law: Two Computations
P(Door 2 | not Door 3) = P(not Door 3 | Door 2) P(Door 2) / P(not Door 3) = (1 x 1/3) / (2/3) = 1/2
P(Door 2 | QM opens 3) = P(QM opens 3 | Door 2) P(Door 2) / P(QM opens 3) = (1 x 1/3) / (1/2) = 2/3
What is the right computation? Bayes' Law does not tell. (A small simulation contrasting the two computations appears below, after the Recent News slide.)

Tree Representation
Epistemic model, probability tree.

Issues
- represent the problem
- update with the right information
- only then: compute, infer
- does the text give enough information?
- role of the protocol for the quizmaster
- what is the update mechanism? what logic fits with this?
- are the probabilities overkill, given the qualitative decision problem "Switch"?

Logic and Probability, Three Approaches
- combine logical and probabilistic systems (often done in practice)
- find a qualitative base logic underneath probability, then derive the probability measure (de Finetti's program)
- find probability underneath logic, then derive logical notions as qualitative shorthand (Locke on belief, several philosophical logicians today)

Epistemic Probabilistic Logic, 1
Models M = (W, R, V) as usual; we will ignore agent indices. Finite models!

Note: Linear Inequalities
The linear inequalities in the syntax may look surprising, but they are useful when proving a completeness theorem, where we have to produce a suitable probability measure. With our later dynamic update logic PDEL with events, another test is that the linear inequality format is just what is needed to make the recursion axioms work. In combined logics of quantifiers and counting (ongoing work), integer programming problems with inequalities turn out to be just the right counterpart for basic logical reasoning.

Epistemic Probabilistic Logic, 2
For simplicity, assume that all worlds in an epistemic equivalence class assign the same probability measure, i.e., agents know their own probability distribution. Add the constraint: if s ~_i t, then s and t carry the same measure.

Epistemic Probabilistic Logic, 3
Theorem (Halpern). Epistemic probability logic is axiomatizable and decidable. The restriction to the rationals may not be essential.
Theorem (Tarski). The first-order theory of the reals (R, 0, 1, +, ·, ≤) is decidable.

PS: On combining knowledge and probability
Probability 1 and knowledge: are they the same, or not? Language: iterated knowledge vs. iterated probability.

Probabilistic PAL, Recursion Axioms
The PAL update M|φ on an epistemic probability model M is defined just as in Week 1, except that the probability measure is renormalized to the worlds surviving in M|φ. M, s |= … : the valid recursion law (the version on p. 148 of LDII mistakenly dropped …). This needs to be extended to conditional probability: HW 3.
Theorem. Probabilistic epistemic PAL is complete.

Aside: PAL and Bayes' Law
[!φ] P(ψ) = k looks like P(ψ | φ) = k, but they are not the same.
Fact. Bayes' Law fails for the PAL modality; see the posted LDII book, p. 150.
Explanation: truth values change after update for complex epistemic formulas; the equivalence only holds for factual formulas. Similar phenomena occur in recent work on probabilistic reasoning with indexical expressions ("now", "here", "the president").

Recent News: PAL and Probabilistic Update
A new paper just seen for a conference makes the PAL updates themselves probabilistic: give the information that φ with a certain force/probability q, and adjust the probability values of the worlds appropriately. This seems more like the Jeffrey Update of a later slide, or it might be a simple form of Wednesday's PDEL.
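The contrast between the two Bayes computations above can be checked concretely. The following is a minimal Monte Carlo sketch (not part of the original slides; the function name simulate and the parameter values are illustrative) that conditions the same sample space in the two ways the Bayes slide distinguishes: on the bare fact that the car is not behind Door 3, and on the event that the quizmaster opens Door 3 under the standard protocol, where the QM never opens the player's door (Door 1) or the door hiding the car.

    import random

    def simulate(trials=200_000, seed=1):
        """Contrast conditioning on 'car not behind Door 3' with
        conditioning on 'quizmaster opens Door 3' (player picked Door 1)."""
        rng = random.Random(seed)
        not3 = not3_and_car2 = 0        # naive conditioning on the bare fact
        opens3 = opens3_and_car2 = 0    # conditioning on the QM's action
        for _ in range(trials):
            car = rng.choice([1, 2, 3])
            # Protocol: the QM opens a door that is neither Door 1 (the pick) nor the car door.
            if car == 1:
                opened = rng.choice([2, 3])
            else:
                opened = ({2, 3} - {car}).pop()
            if car != 3:
                not3 += 1
                not3_and_car2 += (car == 2)
            if opened == 3:
                opens3 += 1
                opens3_and_car2 += (car == 2)
        print("P(Door 2 | not Door 3)      ~", round(not3_and_car2 / not3, 3))      # about 1/2
        print("P(Door 2 | QM opens Door 3) ~", round(opens3_and_car2 / opens3, 3))  # about 2/3

    simulate()

With enough trials the first ratio settles near 1/2 and the second near 2/3, matching the two computations; the difference comes entirely from the protocol governing which door the quizmaster may open.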
Occurrence Probability
PAL cannot analyze the Quizmaster yet, at least not in a natural manner that fits our tree diagram. The QM's action of opening Door 3 has different probabilities depending on the world where it is performed: 1/2 if the car is behind Door 1, 1 if the car is behind Door 2. Information about this occurrence probability typically comes from the protocol; any p ≤ 1 in the case of Door 1 suffices.

Tree Representation Once More
Update rule for the probability of a history: take the product of the probabilities of the events along the history (and renormalize w.r.t. all surviving histories).

Update Through Events
- Start: an initial epistemic probability model with prior probabilities (these could themselves have come from a learning history).
- Update step: events can happen, creating the next layer of history M x E. Events have preconditions on the worlds where they occur (the precondition of !φ is φ, or think of concrete actions).
- Preconditions viewed probabilistically: occurrence probabilities, P_{MxE}(s, e) = P_M(s) x pre_M(s, e).
- A dynamic logic system on top, plus yet more complex updates: Wednesday.
(A small sketch of this product update, applied to the quizmaster, appears at the end of this section.)

QM in Cognition
Pictures A, B, C. "My friend wears glasses." Who is my friend? A high percentage of respondents answer B. What could be said: "glasses", "hat". A Gricean conversational explanation. Probabilistic: a statistical percentage, or an individual degree of belief? General issue: equivalence of probabilistic scenarios.

Outlook: Finite vs Infinite
We assumed finite models. This is a huge simplification compared with standard probability theory, where the passage to the infinite is crucial, e.g., for the distinction knowledge/probability 1. Problem: lift the theory of this week to the infinite case. Note: there are well-developed interfaces of logic and probability where the infinite case is central, e.g., algorithmic randomness.

Individuals vs. Groups, Discrete vs. Continuous
Related issues, though not the same: from our focus on single-agent behavior to long-term statistical group behavior. Example: interface our agent logics for belief change with Markov chain models for opinion diffusion in groups/networks. Discrete vs. continuous dynamics: our recursion axioms are like discrete difference equations; a logic of continuous information dynamics/differential equations?

Outlook: Natural Language
Why is the puzzle stated in natural language? Or is crucial information only given in context (protocol, etc.)? Can we get the representation just from NLP/formal semantics? Or are additional problem-oriented representations needed? John McCarthy vs. the linguists/logicians: replacement views of natural language by logic/math have never worked. Why do even the exact sciences keep a mixture of natural and formal languages? (Staal on artificial languages in history.) More sophisticated views of the strengths/weaknesses of both are needed.
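To make the occurrence-probability update concrete, here is a small sketch of the rule P_{MxE}(s, e) = P_M(s) x pre_M(s, e), followed by renormalization over the surviving histories, applied to the quizmaster example from the Occurrence Probability slide. It is an illustration under assumptions, not code from the course: the function name product_update and the event labels open2/open3 are made up for the example, and p stands for the protocol parameter giving how often the QM opens Door 3 when the car is behind Door 1.

    from fractions import Fraction as F

    def product_update(prior, occurrence, event):
        """Weight each world by prior * occurrence probability of the observed event,
        then renormalize over the surviving worlds (histories)."""
        weights = {s: prior[s] * occurrence[s].get(event, F(0)) for s in prior}
        total = sum(weights.values())
        return {s: w / total for s, w in weights.items() if w > 0}

    # Worlds are labeled by the door hiding the car; the player has picked Door 1.
    prior = {1: F(1, 3), 2: F(1, 3), 3: F(1, 3)}

    # Occurrence probabilities of the QM's possible door openings in each world.
    p = F(1, 2)   # protocol parameter: chance of opening Door 3 when the car is behind Door 1
    occurrence = {
        1: {"open2": 1 - p, "open3": p},
        2: {"open3": F(1)},
        3: {"open2": F(1)},
    }

    posterior = product_update(prior, occurrence, "open3")
    print(posterior)   # world 1 -> 1/3, world 2 -> 2/3 when p = 1/2

For p = 1/2 the Door 1 world gets probability 1/3 and the Door 2 world 2/3, and for any p < 1 the Door 2 world ends up strictly more probable: this is the sense in which the protocol parameter, and not Bayes' Law alone, settles the qualitative decision to switch.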
