Re: [SIPTA] Fwd: Are there imprecise analogues of pseudo-random number generators?
Dear all,
I can only recommend the paper by Gert and Jasper, which includes several deep results about infinite sequences of 0's and 1's. However, as far as I understand, Marshall is interested in finite sequences of 0's and 1's that are generally accepted as "random", like the ones generated by PRNGs. Unfortunately, I do not know of any non-artificial way of generating such sequences for imprecise probabilities.
Since iid sequences of 0's and 1's have relative frequencies that converge by the law of large numbers, sequences whose relative frequencies do not converge can be obtained artificially by dropping one (or both) of the two i's in iid. For instance, a sequence whose relative frequency of 1's oscillates between 0.6 and 0.8 can be obtained by keeping independence but varying the probability of a 1 according to the following schedule: once 0.6, two times 0.9, six times 0.5, 18 times 0.9, 54 times 0.5, 162 times 0.9, and so on (alternating between 0.5 and 0.9, with block lengths tripling each time).
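A minimal sketch of this construction (the block lengths and probabilities are exactly those in the paragraph above; the function name and seed are my own choices for illustration):

```python
import random

def oscillating_bits(num_blocks, seed=0):
    """Independent bits whose probability of 1 follows the block scheme
    above: one draw at 0.6, then blocks alternating between 0.9 and 0.5
    whose lengths triple (2, 6, 18, 54, ...)."""
    rng = random.Random(seed)
    bits = [1 if rng.random() < 0.6 else 0]  # single first draw at p = 0.6
    p, length = 0.9, 2
    for _ in range(num_blocks):
        bits.extend(1 if rng.random() < p else 0 for _ in range(length))
        p = 1.4 - p   # alternate between 0.9 and 0.5
        length *= 3   # block lengths triple each time
    return bits

# Each block is twice as long as everything before it, so at the end of a
# 0.9-block the expected relative frequency is (1/3)*0.6 + (2/3)*0.9 = 0.8,
# and at the end of a 0.5-block it is (1/3)*0.8 + (2/3)*0.5 = 0.6: the
# relative frequency keeps oscillating between 0.8 and 0.6 forever.
print(sum(oscillating_bits(10)) / 3 ** 10)
```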
By adding some randomness to the sequence of probabilities, this could be made less artificial, but I do not know of any work in which this kind of idea is explicitly developed.
Best wishes, Marco
-----Original Message-----
From: SIPTA <sipta-bounces(a)idsia.ch> On Behalf Of Gert De Cooman
Sent: Tuesday, November 13, 2018 7:04 PM
To: Dr. Alessandro Antonucci <alessandro(a)idsia.ch>
Cc: sipta(a)idsia.ch
Subject: Re: [SIPTA] Fwd: Are there imprecise analogues of pseudo-random number generators?
My reply to Alessandro, building on my previous message:
Precise randomness is more than convergence of the sample mean for the sequence (which is what you use in your argument): it also implies convergence, to the same limit, of the sample mean along any computably selected subsequence. (This is the notion of computable stochasticity, which is still weaker than randomness and therefore implied by it.) It is easy to see that if the extreme points are visited in a computable fashion, we can computably select subsequences whose sample means converge to any of the extreme points, or arbitrarily close to any convex mixture of them. So the sequence will then not be precisely random, contrary to what you suggest.
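A small numerical illustration of this subsequence argument (the alternating schedule, block size, and variable names here are mine, chosen only to make the point concrete): if the extremes 0.6 and 0.8 are visited on a computable schedule, a computable selection rule can pick out the 0.8-blocks, and the sample mean along that subsequence converges to the extreme point 0.8 rather than to the full-sequence mean of about 0.7.

```python
import random

rng = random.Random(42)
BLOCK = 1000

# Visit the extreme points p = 0.6 and p = 0.8 on a computable schedule:
# even-numbered blocks use 0.6, odd-numbered blocks use 0.8.
def p_at(i):
    return 0.6 if (i // BLOCK) % 2 == 0 else 0.8

n = 200 * BLOCK
bits = [1 if rng.random() < p_at(i) else 0 for i in range(n)]

# A computable selection rule: keep exactly the positions in the 0.8-blocks.
sub = [bits[i] for i in range(n) if p_at(i) == 0.8]

full_mean = sum(bits) / len(bits)  # ~0.7, the average over both extremes
sub_mean = sum(sub) / len(sub)     # ~0.8, one extreme point
```

So the full sequence looks like it has a "limit frequency" near 0.7, yet a computably selected subsequence betrays one of the extreme points, which is exactly why such a sequence fails precise (computable) stochasticity.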
Computable randomness (in the precise case) of a sequence is the stronger property that unless you use the exact precise probability as your model for betting on the sequence of digits, you can be made to lose an unbounded amount of money by someone who bets against you using some computable strategy. It is this (martingale-theoretic) characterisation of (computable) randomness that we generalise to the imprecise case in our paper.
Every good wish, Gert
Prof. Gert de Cooman
Universiteit Gent (ELIS–IDLab) — Durham University (Department of Mathematical Sciences)
Technologiepark - Zwijnaarde 914, 9052 Zwijnaarde, Belgium
t: +3292645653 — iMessage: +32496832628
e: gert.decooman(a)UGent.be
w: users.ugent.be/~gdcooma
On 13 Nov 2018, at 16:33, Alessandro Antonucci <alessandro(a)idsia.ch> wrote:
Dear Marshall,
For an imprecise subjectivist, “the imprecision is in the eye of the learner”. A non-stationary RNG can jump from one sampling distribution to another, but, again, this is done according to another (second-order) distribution. The sample will be indistinguishable from one sampled from the marginal of that hierarchical model. The imprecision level is basically a parameter of your learning algorithm (e.g., s for the IDM, alpha for likelihood-based approaches), not something you might learn from your data.
These are my two cents.
Bests, Alessandro
Alessandro Antonucci
IDSIA Dalle Molle Institute for Artificial Intelligence
Via Cantonale (Galleria 2), CH-6928 Manno-Lugano, Switzerland
mail: alessandro(a)idsia.ch skype: alessandro.antonucci tel: +41 916108515 web: www.idsia.ch/~alessandro
---------- Forwarded message ---------
From: Michael Smithson <Michael.Smithson(a)anu.edu.au>
Date: Tue, 13 Nov 2018 at 14:55
Subject: Re: [SIPTA] Are there imprecise analogues of pseudo-random number generators?
To: <alessandro.antonucci(a)gmail.com>
Hi Marshall,
It isn't clear to me what you want the long-run state to be. Do you want p(1) to settle down to having, say, a uniform distribution on [.6,.8]? Or for the relative frequency of 1's to end up strictly confined between .6 and .8? Or are you thinking of having p(1) fixed at .7 and some other random variable controlling the interval width around .7 that eventually settles on a width of .2? Or ... ?
Kind regards,
--Mike
From: SIPTA <sipta-bounces(a)idsia.ch> on behalf of Abrams, Marshall <mabrams(a)uab.edu>
Sent: Tuesday, 13 November 2018 3:18:37 PM
To: sipta(a)idsia.ch
Subject: [SIPTA] Are there imprecise analogues of pseudo-random number generators?
I would like to ask a theoretical and practical question. (No conference announcement here!)
tl;dr: Are there imprecise analogues of pseudo-random number generators?
If that question doesn't make sense, please feel free to read on, with my thanks for doing so. (Sorry the following is so long; I was worried that what I was asking would be unclear otherwise.)
I am interested in imprecise chance, that is, objective imprecise probability; this idea has been discussed in different ways by Terrence Fine and his colleagues, Marco Cattaneo, Luke Glynn, Alan Hajek, Suppes and Zanotti, Stephan Hartmann, Igor Gorban, and probably others.
In an agent-based model, (precise) chance can be modeled using a PRNG. Can this be generalized to base an ABM on a model of imprecise chance? I had thought this would be impossible, but now I am not so sure. A good PRNG such as a Mersenne Twister is of course not truly chancy, and its output even has low Kolmogorov complexity, but it's still a good way to model chance in many contexts. Why couldn't imprecise chance be modeled in an ABM using a deterministic algorithm as well?
What would the output of a pseudo imprecise random number generator (PIRNG) look like?
Well, suppose one wrote a function that used a normal PRNG to generate 0's and 1's with probability 0.7 for 1. Then over most large sets of outputs, we would find the frequency of 1's to be close to 0.7, and as we increased the size of such a set, the frequency would usually get closer to 0.7.
Similarly, using a PIRNG, one ought to be able to write a function that generates 0's and 1's with (for example) an interval probability of [0.6, 0.8] for 1. Then over most large sets of outputs, the frequency of 1's should lie in or near [0.6, 0.8], and as we increased the size of such a set, the frequency would remain near [0.6, 0.8] but would usually wander without settling down to any precise value within that interval.
Are there known algorithms that might have this kind of behavior? Or could this kind of behavior be simulated using PRNGs for sequences that are not extremely long? That is, maybe one can write a function using a PRNG such that over the not-too-long run, sequences would appear to be imprecisely chancy, but would eventually settle down to precise values in the very, very long run. That might be good enough for some ABMs.
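One naive way to get this not-too-long-run behavior (my own sketch; the function name, step size, and interval bounds are illustrative assumptions, and whether this deserves the name PIRNG is exactly the question): let an ordinary PRNG drive both the bits and a slow random walk of p inside [0.6, 0.8]. Over moderate horizons the running frequency wanders inside the interval; in the very long run the walk's time-average converges, so the frequency does eventually settle, matching the not-too-long-run caveat above.

```python
import random

def pirng_bits(n, lo=0.6, hi=0.8, step=0.02, seed=1):
    """Pseudo-imprecise bit generator sketch: a PRNG drives both the bits
    and a slow reflected random walk of p inside [lo, hi].  The running
    relative frequency wanders over moderate horizons, but in the very
    long run it settles near the walk's time-average (the midpoint)."""
    rng = random.Random(seed)
    p = (lo + hi) / 2
    bits = []
    for _ in range(n):
        bits.append(1 if rng.random() < p else 0)
        # p takes a +/- step, reflected back into [lo, hi]
        p = min(hi, max(lo, p + rng.choice((-step, step))))
    return bits
```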
Or am I just confused? (I think this is not unlikely.)
Any suggestions about what sort of literature I should look at or what kinds of keywords to search on would be welcome.
[In some parts of Fierens, Rego, and Fine's "A frequentist understanding of sets of measures" (2009), it sounds as if such a class of algorithms might be described, at least in a loose, general way, but if that is so, I have not understood the article well enough to see how what's given there does so. Their model uses sequences specified to be of intermediate Kolmogorov complexity, which themselves seem like a difficult thing to generate.]
Thank you!
Marshall
Marshall Abrams, Associate Professor
Department of Philosophy, University of Alabama at Birmingham
Email: mabrams(a)uab.edu; Phone: (205) 996-7483; Fax: (205) 975-6610
Mail: HB 414A, 900 13th Street South, Birmingham, AL 35294-1260
Office: HB 418
Website: http://members.logical.net/~marshall
SIPTA mailing list SIPTA(a)idsia.ch http://mailman2.ti-edu.ch/mailman/listinfo/sipta
participants (3): Abrams, Marco Cattaneo, Michael Smithson