Hi Marshall,

Your latest email suggests that perhaps I wasn't mistaken when I asked for clarification about what kind of sequence you had in mind...  Anyhow, if you want to further "fuzz" the probabilities for the two biased coins, you could have their respective probabilities follow distributions (e.g., betas).  If you want their distributions to behave like "conjugate" lower-upper probabilities, you could use the distributions that Parker Blakey and I wrote about in our IJAR paper that appeared this year (which I presented at the 2017 ISIPTA meeting).
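
For instance, in Python one might draw each coin's bias as follows (just a quick sketch; the Beta parameters here are arbitrary placeholders, not the distributions from the paper):

    import random

    def fuzzed_bias(rng):
        # Pick one of the two coins with equal probability, then "fuzz"
        # its bias by drawing from a Beta centered at 0.4 or 0.6.
        if rng.random() < 0.5:
            return rng.betavariate(40, 60)   # mean 0.4
        return rng.betavariate(60, 40)       # mean 0.6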

Kind regards,

--Mike


From: SIPTA <sipta-bounces@idsia.ch> on behalf of Abrams, Marshall <mabrams@uab.edu>
Sent: Saturday, 17 November 2018 6:03:31 AM
To: sipta@idsia.ch
Subject: Re: [SIPTA] Are there imprecise analogues of pseudo-random number generators?
 
This discussion has clarified for me that what I need is not a sequence with distinct liminf and limsup.  In fact, I understand the natural physical processes that have been proposed as involving imprecise chance (e.g., quartz oscillator flicker noise in Grize and Fine 1987, or many examples in Igor Gorban's two recent books in English) as processes with no tendency to settle down to stable frequencies in the short or medium term.  For all we know, if these processes could be specified precisely enough that their infinite extensions could be predicted, the relative frequencies might indeed tend toward single limits.

The fact that such processes exist in nature, and that some of them might be largely deterministic (e.g., air and water temperature fluctuations, or sea wave heights, in Gorban's books), suggests that it might be possible in principle to generate such patterns algorithmically.


I still need to read some of the papers that have been suggested, but in the meantime I am now thinking about a very crude method for generating this kind of pattern.  Here is a simple illustration:

Step 0: Start with an empty list of coin toss outcomes.

Step 1: Flip a fair coin to choose a second coin that is biased with either p=0.4 or p=0.6 for heads.  Also roll a fair 1000-sided die to choose an integer n between 1 and 1000, inclusive.

Step 2: Then toss (or spin) the chosen biased coin n times and add the outcomes to the end of the list.

Go back to step 1, iterating steps 1 and 2 until the list is moderately long, say 50,000 outcomes, or whatever is enough data to drive one's ABM or other simulation (a rough code sketch follows).
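
Here is the procedure sketched in Python, just as an illustration (the function name and parameter defaults are my own arbitrary choices):

    import random

    def unstable_flips(total=50_000, seed=None):
        """Generate flips whose medium-run frequencies wander around [0.4, 0.6]."""
        rng = random.Random(seed)
        outcomes = []                                 # step 0: empty list
        while len(outcomes) < total:
            p = 0.4 if rng.random() < 0.5 else 0.6    # step 1: fair coin picks the bias
            n = rng.randint(1, 1000)                  # step 1: uniform length 1..1000
            outcomes.extend(rng.random() < p          # step 2: toss the biased coin
                            for _ in range(n))        # n times; True counts as heads
        return outcomes[:total]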


With this particular example, the limiting relative frequency of heads will be 0.5 with probability 1: the single-coin subsequence lengths n=1 through 1000 are equally probable, and the two biased coins are equally likely to be chosen for each subsequence, so the expected frequency of heads is 0.5*0.4 + 0.5*0.6 = 0.5.  But over shorter runs the frequency will not appear to tend toward 0.5; it will fluctuate in the neighborhood of [0.4, 0.6].

This can all be translated into code using PRNGs (in which case it's a deterministic model of a probabilistic model used to model something that's not probabilistic!).  I've started doing this.
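
For example, with the sketch above, one can watch the running frequency of heads wander rather than settle quickly at 0.5 (the particular numbers of course depend on the seed):

    flips = unstable_flips(total=50_000, seed=42)

    # Running relative frequency of heads at a few checkpoints; over
    # moderate runs it wanders rather than settling quickly at 0.5.
    for k in (1_000, 5_000, 10_000, 50_000):
        print(f"after {k:>6} tosses: freq(heads) = {sum(flips[:k]) / k:.3f}")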

Variations are possible.  There could be more than two coins, chosen with unequal probabilities.  The subsequence lengths could be chosen differently, e.g., from a Poisson distribution, which imposes no maximum length.  Regardless, the length-choosing process might have to be tuned to the typical number of trials in one's simulation, since with many trials, frequencies might settle toward the ultimate limit, which might not be what's wanted for the simulation.  (As noted, this is a very crude method.)  A sketch of one such variation is below.
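
For instance, here is a sketch of a variation with three coins chosen with unequal probabilities and Poisson-distributed subsequence lengths.  The biases, weights, and Poisson mean are arbitrary placeholders, and since Python's standard library has no Poisson sampler, I use Knuth's method:

    import math
    import random

    def poisson(rng, lam):
        # Knuth's method: multiply uniforms until the product falls
        # below exp(-lam).  Adequate for moderate lam like the one here.
        threshold = math.exp(-lam)
        k, prod = 0, 1.0
        while True:
            prod *= rng.random()
            if prod <= threshold:
                return k
            k += 1

    def unstable_flips_variant(total=50_000, seed=None):
        rng = random.Random(seed)
        biases = [0.35, 0.5, 0.65]      # three coins instead of two
        weights = [0.25, 0.5, 0.25]     # chosen with unequal probabilities
        outcomes = []
        while len(outcomes) < total:
            p = rng.choices(biases, weights=weights)[0]
            n = max(1, poisson(rng, 500))   # mean length 500, no hard maximum
            outcomes.extend(rng.random() < p for _ in range(n))
        return outcomes[:total]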


Does this make (some) sense?

Thanks very much, again.


Marshall

Marshall Abrams, Associate Professor 
Department of Philosophy, University of Alabama at Birmingham
Email: mabrams@uab.edu; Phone: (205) 996-7483;  Fax: (205) 975-6610
Mail: HB 414A, 900 13th Street South, Birmingham, AL 35294-1260;  Office: HB 418