[SIPTA] Fwd: Are there imprecise analogues of pseudo-random number generators?
Dear Marshall,
For an imprecise subjectivist, "the imprecision is in the eye of the learner". A non-stationary RNG can jump from one sampling distribution to another, but, again, this is done according to another (second-order) distribution. The sample will be indistinguishable from one drawn from the marginal of that hierarchical model. The imprecision level is basically a parameter of your learning algorithm (e.g., s for the IDM, alpha for likelihood-based approaches), not something you might learn from your data.
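To make concrete how s acts as a chosen imprecision parameter rather than something learned, here is a small sketch of the standard IDM probability interval for a binary event (the function name is mine, not part of the discussion):

```python
def idm_interval(n_successes, n_total, s=2.0):
    """Imprecise Dirichlet model interval for P(success).

    s is the prior-strength hyperparameter: larger s widens the
    interval, and it is fixed by the analyst, not estimated from data.
    """
    lower = n_successes / (n_total + s)
    upper = (n_successes + s) / (n_total + s)
    return lower, upper

# 7 successes out of 10 observations, with the common choice s = 2:
lo, hi = idm_interval(7, 10)  # interval [7/12, 9/12]
```

Note that as n_total grows with a fixed s, the interval shrinks toward a precise value, which is exactly why s itself cannot be learned from the sample.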
These are my two cents.
Best, Alessandro
Alessandro Antonucci IDSIA Dalle Molle Institute for Artificial Intelligence Via Cantonale (Galleria 2) CH-6928, Manno-Lugano, CH
mail: alessandro(a)idsia.ch skype: alessandro.antonucci tel: +41 916108515 web: www.idsia.ch/~alessandro
---------- Forwarded message --------- From: Michael Smithson <Michael.Smithson(a)anu.edu.au> Date: Tue, 13 Nov 2018 at 14:55 Subject: Re: [SIPTA] Are there imprecise analogues of pseudo-random number generators? To: <alessandro.antonucci(a)gmail.com>
Hi Marshall,
It isn't clear to me what you want the long-run state to be. Do you want p(1) to settle down to having, say, a uniform distribution on [.6,.8]? Or for the relative frequency of 1's to end up strictly confined between .6 and .8? Or are you thinking of having p(1) fixed at .7 and some other random variable controlling the interval width around .7 that eventually settles on a width of .2? Or ... ?
Kind regards,
--Mike
From: SIPTA <sipta-bounces(a)idsia.ch> on behalf of Abrams, Marshall <mabrams(a)uab.edu> Sent: Tuesday, 13 November 2018 3:18:37 PM To: sipta(a)idsia.ch Subject: [SIPTA] Are there imprecise analogues of pseudo-random number generators?
I would like to ask a theoretical and practical question. (No conference announcement here!)
tl;dr: Are there imprecise analogues of pseudo-random number generators?
If that question doesn't make sense, please feel free to read on, with my thanks for doing so. (Sorry the following is so long; I was worried that what I was asking would be unclear otherwise.)
I am interested in imprecise chance, that is, objective imprecise probability; this idea has been discussed in different ways by Terrence Fine and his colleagues, Marco Cattaneo, Luke Glynn, Alan Hajek, Suppes and Zanotti, Stephan Hartmann, Igor Gorban, and probably others.
In an agent-based model, (precise) chance can be modeled using a PRNG. Can this be generalized to base an ABM on a model of imprecise chance? I had thought this would be impossible, but now I am not so sure. A good PRNG such as a Mersenne Twister is of course not truly chancy, and its output even has low Kolmogorov complexity, but it's still a good way to model chance in many contexts. Why couldn't imprecise chance be modeled in an ABM using a deterministic algorithm as well?
What would the output of a pseudo imprecise random number generator (PIRNG) look like?
Well, suppose one wrote a function that used a normal PRNG to generate 0's and 1's with probability 0.7 for 1. Then over most large sets of outputs, we would find the frequency of 1's to be close to 0.7, and as we increased the size of such a set, the frequency would usually get closer to 0.7.
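A minimal sketch of that precise case, using Python's standard PRNG (all names here are mine, just for illustration):

```python
import random

def bernoulli_bits(p=0.7, n=100_000, seed=42):
    """Generate n 0/1 outcomes with P(1) = p using an ordinary PRNG."""
    rng = random.Random(seed)
    return [1 if rng.random() < p else 0 for _ in range(n)]

bits = bernoulli_bits()
freq = sum(bits) / len(bits)  # settles near 0.7 as n grows
```

Here the relative frequency of 1's converges to the single value 0.7, which is the behavior an imprecise analogue would have to avoid.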
Similarly, using a PIRNG, one ought to be able to write a function that generates 0's and 1's with (for example) an interval probability of [0.6, 0.8] for 1. Then over most large sets of outputs, the frequency of 1's should lie in or near [0.6, 0.8], and as we increased the size of such a set, the frequency would remain near [0.6, 0.8] but would usually wander without settling down to any precise value within that interval.
Are there known algorithms that might have this kind of behavior? Or could this kind of behavior be simulated using PRNGs for sequences that are not extremely long? That is, maybe one can write a function using a PRNG such that over the not-too-long run, sequences would appear to be imprecisely chancy, even if they would eventually settle down to precise values in the very, very long run. That might be good enough for some ABMs.
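One possible construction (a sketch of my own, not an established algorithm from the literature): hold p fixed on blocks whose lengths double, redrawing p from [0.6, 0.8] at each block. Because each new block carries roughly as much weight as the entire past, the running frequency keeps wandering inside the interval instead of converging:

```python
import random

def pirng_bits(low=0.6, high=0.8, n_blocks=12, seed=1):
    """Generate 0/1 outcomes whose running frequency wanders in
    [low, high] without settling on a precise value.

    p is redrawn uniformly from [low, high] at the start of each
    block; block lengths double (1, 2, 4, ...), so the latest block
    always dominates the cumulative average.
    """
    rng = random.Random(seed)
    bits, block_len = [], 1
    for _ in range(n_blocks):
        p = rng.uniform(low, high)
        bits.extend(1 if rng.random() < p else 0 for _ in range(block_len))
        block_len *= 2
    return bits

bits = pirng_bits()
freq = sum(bits) / len(bits)  # lies in or near [0.6, 0.8]
```

At the end of block k the cumulative frequency is roughly the average of the previous cumulative frequency and the current block's p, so it keeps fluctuating with non-vanishing variance. Whether this counts as genuinely "imprecisely chancy" or just as a hierarchical precise model in disguise is, I suppose, exactly the question.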
Or am I just confused? (I think this is not unlikely.)
Any suggestions about what sort of literature I should look at or what kinds of keywords to search on would be welcome.
[In some parts of Fierens, Rego, and Fine's "A frequentist understanding of sets of measures" (2009), it sounds as if such a class of algorithms might be described, at least in a loose, general way, but if that is so, I have not understood the article well enough to see how what's given there does so. Their model uses sequences specified to be of intermediate Kolmogorov complexity, which itself seems like a difficult thing to generate.]
Thank you!
Marshall
Marshall Abrams, Associate Professor Department of Philosophy, University of Alabama at Birmingham Email: mabrams(a)uab.edu; Phone: (205) 996-7483; Fax: (205) 975-6610 Mail: HB 414A, 900 13th Street South, Birmingham, AL 35294-1260; Office: HB 418 Website: http://members.logical.net/~marshall
SIPTA mailing list SIPTA(a)idsia.ch http://mailman2.ti-edu.ch/mailman/listinfo/sipta