{"id":701,"date":"2021-09-07T22:24:42","date_gmt":"2021-09-07T22:24:42","guid":{"rendered":"https:\/\/fooledbyrandomnessdotcom.wordpress.com\/?p=701"},"modified":"2022-01-19T15:45:14","modified_gmt":"2022-01-19T15:45:14","slug":"estimating-medical-error-rate-an-intuitive-max-entropy-method","status":"publish","type":"post","link":"https:\/\/fooledbyrandomness.com\/blog\/index.php\/2021\/09\/07\/estimating-medical-error-rate-an-intuitive-max-entropy-method\/","title":{"rendered":"Maximum Ignorance Probability, with application to surgery&#8217;s error rates"},"content":{"rendered":"\n<h3 class=\"wp-block-heading\" id=\"introduction-and-result\">Introduction and Result<\/h3>\n\n\n\n<p><\/p>\n\n\n\n<p>A maximum entropy alternative to Bayesian methods for the estimation of independent Bernoulli sums.<\/p>\n\n\n\n<p>Let \\(X=\\{x_1,x_2,\\ldots, x_n\\}\\), where \\(x_i \\in \\{0,1\\}\\), be a vector representing an <em>n<\/em>-sample of independent Bernoulli-distributed random variables \\(\\sim \\mathcal{B}(p)\\). We are interested in the estimation of the probability <em>p<\/em>.<\/p>\n\n\n\n<p>We propose that the probability that provides the best statistical overview, \\(p_m\\) (reflecting the <em><strong>maximum ignorance<\/strong><\/em> point), is<\/p>\n\n\n\n<p class=\"has-text-align-center\">\\(p_m= 1-I_{\\frac{1}{2}}^{-1}(n-m, m+1)\\),                                                      (1)<\/p>\n\n\n\n<p>where \\(m=\\sum_i^n x_i \\) and \\(I_{\\cdot}^{-1}(\\cdot,\\cdot)\\) is the inverse of the regularized incomplete beta function. 
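Equation (1) can also be evaluated without special functions, since \(F_p(m)=I_{1-p}(n-m,m+1)\) for the binomial CDF: the sketch below (illustrative, not from the post; function names are my own) finds \(p_m\) by solving \(F_p(m)=\frac{1}{2}\) directly with bisection.

```python
# Illustrative sketch (not from the post): compute the "maximum ignorance"
# probability p_m of Eq. (1) by inverting the binomial CDF at 1/2,
# instead of calling an inverse regularized incomplete beta function.
from math import comb

def binom_cdf(m, n, p):
    """P(X <= m) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(m + 1))

def p_max_ignorance(n, m, tol=1e-12):
    """Solve F_p(m) = 1/2 for p; F_p(m) is decreasing in p, so bisect."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if binom_cdf(m, n, mid) > 0.5:
            lo = mid  # CDF still above 1/2: p must be larger
        else:
            hi = mid
    return (lo + hi) / 2

# Sanity check: one trial, no event observed -> p_m = 1/2 (maximum ignorance)
print(p_max_ignorance(1, 0))
```

For \(m=0\) this reduces to \((1-p)^n=\frac{1}{2}\), i.e. \(p_m=1-2^{-1/n}\), matching \(1-I_{1/2}^{-1}(n,1)\).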
<\/p>\n\n\n\n<p><\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"comparison-to-alternative-methods\">Comparison to Alternative Methods<\/h3>\n\n\n\n<p><\/p>\n\n\n\n<p><strong>EMPIRICAL<\/strong>: The sample frequency corresponding to the &#8220;empirical&#8221; distribution, \\(p_s=\\frac{1}{n} \\sum_i^n x_i\\), provides little information for small samples; in particular, it yields \\(p_s=0\\) whenever no event has been observed.<\/p>\n\n\n\n<p><\/p>\n\n\n\n<p><strong>BAYESIAN<\/strong>: The standard Bayesian approach is to start with, as prior, the parametrized Beta distribution \\(p \\sim Beta(\\alpha,\\beta)\\). This choice is not trivial: matching a target mean and variance constrains the shape of the prior. It is then convenient that the Beta, being a conjugate prior, updates into a distribution of the same family with new parameters. Thus, with <em>n<\/em> samples and <em>m<\/em> realizations:<\/p>\n\n\n\n<p class=\"has-text-align-center\">\\(p_b \\sim Beta(\\alpha+m, \\beta+n-m)\\)                                           (2)<\/p>\n\n\n\n<p>with mean \\(p_b = \\frac{\\alpha +m}{\\alpha +\\beta +n}\\). We will see below how a low-variance Beta prior has too much impact on the result.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"derivations\">Derivations<\/h3>\n\n\n\n<p>Let \\(F_p(x)\\) be the CDF of the binomial \\(\\mathcal{B}in(n,p)\\). We are interested in \\(\\{ p: F_p(x)=q \\}\\), the maximum entropy probability. First let us figure out the target value <em>q<\/em>.<\/p>\n\n\n\n<p>To get the maximum entropy probability, we need to maximize \\(H_q=-\\left(\\;q \\; \\log(q) +(1-q)\\; \\log (1-q)\\right)\\). This is a very standard result: setting the first derivative with respect to <em>q<\/em> to zero gives \\(\\log (1-q)-\\log (q)=0,\\; 0\\leq q\\leq 1\\), and since \\(H_q\\) is concave in <em>q<\/em>, we get \\(q =\\frac{1}{2}\\).<\/p>\n\n\n\n<p><\/p>\n\n\n\n<p>Now we must find <em>p<\/em> by inverting the CDF. 
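The entropy-maximization step can be checked numerically (an illustrative sketch, not from the post): evaluating the binary entropy on a grid over \((0,1)\) confirms that its maximum sits at \(q=\frac{1}{2}\).

```python
# Illustrative check (not from the post): binary entropy H_q peaks at q = 1/2.
from math import log

def H(q):
    """Binary (Shannon) entropy of a Bernoulli(q) outcome, in nats."""
    return -(q * log(q) + (1 - q) * log(1 - q))

qs = [i / 1000 for i in range(1, 1000)]  # grid over the open interval (0, 1)
q_star = max(qs, key=H)
print(q_star)  # 0.5
```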
Thus, for the general case, <\/p>\n\n\n\n<p class=\"has-text-align-center\">\\(p= 1-I_{\\frac{1}{2}}^{-1}(n-x,x+1)\\).<\/p>\n\n\n\n<p>And note that as in the graph below (thanks to comments below by ","protected":false},"excerpt":{"rendered":"<p>Introduction and Result A maximum entropy alternative to Bayesian methods for the estimation of independent Bernoulli sums. Let \\(X=\\{x_1,x_2,\\ldots, x_n\\}\\), where \\(x_i \\in \\{0,1\\}\\) be a vector representing an n-sample of independent Bernoulli-distributed random variables \\(\\sim \\mathcal{B}(p)\\). We are interested in the estimation of the probability p. We propose that the probability that &hellip; <a href=\"https:\/\/fooledbyrandomness.com\/blog\/index.php\/2021\/09\/07\/estimating-medical-error-rate-an-intuitive-max-entropy-method\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Maximum Ignorance Probability, with application to surgery&#8217;s error rates&#8221;<\/span><\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_bbp_topic_count":0,"_bbp_reply_count":0,"_bbp_total_topic_count":0,"_bbp_total_reply_count":0,"_bbp_voice_count":0,"_bbp_anonymous_reply_count":0,"_bbp_topic_count_hidden":0,"_bbp_reply_count_hidden":0,"_bbp_forum_subforum_count":0,"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[3,4],"tags":[],"class_list":["post-701","post","type-post","status-publish","format-standard","hentry","category-medicine","category-probability"],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/fooledbyrandomness.com\/blog\/index.php\/wp-json\/wp\/v2\/posts\/701","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/fooledbyrandomness.com\/blog\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/fooledbyrandomness.com\/blog\/index.php\/wp-json\/wp\/v2\/typ
es\/post"}],"author":[{"embeddable":true,"href":"https:\/\/fooledbyrandomness.com\/blog\/index.php\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/fooledbyrandomness.com\/blog\/index.php\/wp-json\/wp\/v2\/comments?post=701"}],"version-history":[{"count":1,"href":"https:\/\/fooledbyrandomness.com\/blog\/index.php\/wp-json\/wp\/v2\/posts\/701\/revisions"}],"predecessor-version":[{"id":981,"href":"https:\/\/fooledbyrandomness.com\/blog\/index.php\/wp-json\/wp\/v2\/posts\/701\/revisions\/981"}],"wp:attachment":[{"href":"https:\/\/fooledbyrandomness.com\/blog\/index.php\/wp-json\/wp\/v2\/media?parent=701"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/fooledbyrandomness.com\/blog\/index.php\/wp-json\/wp\/v2\/categories?post=701"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/fooledbyrandomness.com\/blog\/index.php\/wp-json\/wp\/v2\/tags?post=701"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}