
BubbleRand: a deliberately lazy sampler

A goofy idea for when you want “kinda random,” not cryptographic or even fair.

The premise is simple: take a distribution (an array of numbers), randomly bubble sort small sections a handful of times, then grab the middle value. BubbleRand is intentionally slow, intentionally biased, and perfect for gamey chaos.

Think of it as “analog randomness.” You can tune how many passes you do, how wide the subranges are, and how many swaps you allow per pass.
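
If you were to expose those knobs as an options object, it might look something like this (the names are mine, not a fixed API; the pseudo-code further down simply hard-codes similar values):

// Hypothetical tuning knobs for a BubbleRand-style sampler
const bubbleRandOptions = {
  passes: 10,       // how many random slices get bubble-sorted
  maxWindow: 12,    // widest slice a single pass may touch
  maxSwapsFactor: 2 // swap cap per pass, as a multiple of the slice length
}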

BubbleRand in motion

In the interactive demo, a box highlights a random slice that gets bubble-sorted in a random direction; after a few passes, the middle value of the array becomes the output. The demo runs over these fifteen starting values:

0.12, 0.83, 0.47, 0.65, 0.29, 0.91, 0.54, 0.06, 0.78, 0.33, 0.71, 0.18, 0.59, 0.25, 0.88

Quick randomness check

I ran a small comparison (20000 samples, 30 bins) between Math.random, crypto.getRandomValues, and BubbleRand. The results below show the ranking and a few simple stats for each source.

Precomputed results · 20000 samples
Ranking (lower chi-square is better):

  1. crypto.getRandomValues · Score 83.3
  2. BubbleRand (Math.random) · Score 78.3
  3. Math.random · Score 72.5

Generator                  Mean      Std Dev   Chi-square
Math.random                0.50145   0.28693   27.5
crypto.getRandomValues     0.50243   0.28847   16.68
BubbleRand (Math.random)   0.50276   0.28893   21.71

Notes: BubbleRand randomizes sort direction per pass. Chi-square is relative to a uniform distribution (lower is closer to uniform). Generated 2026-01-29.
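
The benchmark harness itself isn't shown in the post, but a minimal sketch of this kind of comparison (in a browser or recent Node, where crypto.getRandomValues is available as a global) could look like the following. The sampler setup is my assumption: the crypto source normalizes one 32-bit word into [0, 1), and the BubbleRand source feeds each call a fresh 64-element Math.random array. bubbleRand is the function from the Pseudo-code section below, and uniformStats is sketched in the next section.

// Hypothetical harness sketch, not the post's actual benchmark code
const SAMPLE_COUNT = 20000

const samplers = {
  'Math.random': () => Math.random(),
  'crypto.getRandomValues': () => {
    // Normalize one unsigned 32-bit word into [0, 1)
    const buf = new Uint32Array(1)
    crypto.getRandomValues(buf)
    return buf[0] / 2 ** 32
  },
  'BubbleRand (Math.random)': () => {
    // Assumption: each sample gets a fresh Math.random-filled distribution
    const dist = Array.from({ length: 64 }, () => Math.random())
    return bubbleRand(dist)
  },
}

for (const [name, sample] of Object.entries(samplers)) {
  const samples = Array.from({ length: SAMPLE_COUNT }, sample)
  console.log(name, uniformStats(samples, 30))
}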

How to read the numbers

Mean

For a uniform random generator over [0, 1), the mean should hover near 0.5. Small drift is normal with finite samples.

Std Dev

A perfectly uniform source has a standard deviation of about 0.2887. Values near 0.28–0.30 are healthy for this sample size.

Chi-square

Lower is closer to uniform. With 30 bins there are 29 degrees of freedom, so under perfect uniformity the statistic averages around 29 regardless of sample size; values roughly in the 16–46 range are unremarkable for a run like this. Spikes well above that can indicate bias, or just bad luck in a single run.
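
For reference, here's a small sketch of how all three numbers can be computed from an array of samples in [0, 1); uniformStats is my own name (used by the harness sketch above), not something defined in the post.

// Hypothetical stats helper: mean, standard deviation, and a chi-square
// statistic against a uniform distribution over [0, 1) with `bins` bins
function uniformStats(samples, bins = 30) {
  const n = samples.length
  const mean = samples.reduce((sum, x) => sum + x, 0) / n
  const variance = samples.reduce((sum, x) => sum + (x - mean) ** 2, 0) / n
  const stdDev = Math.sqrt(variance)

  // Count how many samples fall into each equal-width bin
  const counts = new Array(bins).fill(0)
  for (const x of samples) {
    counts[Math.min(bins - 1, Math.floor(x * bins))]++
  }

  // Chi-square: sum of (observed - expected)^2 / expected over all bins
  const expected = n / bins
  const chiSquare = counts.reduce(
    (sum, observed) => sum + (observed - expected) ** 2 / expected,
    0
  )

  return { mean, stdDev, chiSquare }
}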

These are simple smoke tests, not cryptographic validation. For serious randomness, use dedicated suites (Dieharder, TestU01) and much larger sample sizes.

Algorithm sketch

  1. Start with a numeric distribution array.
  2. Pick a random slice window (start + length).
  3. Bubble-sort only that slice, but cap the swaps.
  4. Repeat 10 times (or any number you like).
  5. Return the middle value of the entire array.

Pseudo-code

function bubbleRand(values, passes = 10) {
  // Work on a copy so the caller's distribution stays untouched
  const arr = values.slice()
  for (let i = 0; i < passes; i++) {
    // Pick a random slice window: a start index plus a length of 2–12
    const start = randInt(0, arr.length - 2)
    const len = randInt(2, Math.min(12, arr.length - start))
    const end = start + len
    // Bubble-sort only that slice, capping the number of swaps
    bubbleSlice(arr, start, end, len * 2)
  }
  // The sample is simply the middle element of the whole array
  return arr[Math.floor(arr.length / 2)]
}
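
The two helpers aren't spelled out in the post, so here's one plausible sketch, assuming randInt is inclusive on both ends and bubbleSlice picks a random sort direction per pass (as the notes above describe), plus a tiny usage example over the demo values from earlier.

function randInt(min, max) {
  // Inclusive integer in [min, max], driven by Math.random
  return min + Math.floor(Math.random() * (max - min + 1))
}

function bubbleSlice(arr, start, end, maxSwaps) {
  // Bubble-sort arr[start..end) in place, in a random direction,
  // stopping early once maxSwaps swaps have been made
  const ascending = Math.random() < 0.5
  let swaps = 0
  for (let i = start; i < end - 1 && swaps < maxSwaps; i++) {
    for (let j = start; j < end - 1 - (i - start) && swaps < maxSwaps; j++) {
      const outOfOrder = ascending ? arr[j] > arr[j + 1] : arr[j] < arr[j + 1]
      if (outOfOrder) {
        const tmp = arr[j]
        arr[j] = arr[j + 1]
        arr[j + 1] = tmp
        swaps++
      }
    }
  }
}

// Example run over the demo distribution (output varies per run)
const demoValues = [0.12, 0.83, 0.47, 0.65, 0.29, 0.91, 0.54, 0.06,
                    0.78, 0.33, 0.71, 0.18, 0.59, 0.25, 0.88]
console.log(bubbleRand(demoValues))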

Why bother?

It’s not about correctness. It’s about texture. Think of loot tables, ambient behaviors, or procedural variation that should feel “alive” but not purely random.