Negative Sleep (#87)

by MenTaLguY

(Inspired by recent insomnia and a #ruby-lang snippet on Anarchaia...)

Let's suppose there existed a sleep which accepted negative arguments. Would it actually imply time travel? In the case of a negative argument, the obvious description of sleep's behavior (halt the thread for the given minimum number of seconds) would be trivially satisfied by behaving as if sleep(0) had been called. That's not a very interesting result.

Of course, that's not the only way to define sleep. You could also view a sleep as specifying the relationship between the computations on either side of it. In that case, sleep(1) might request that the second computation begin at least a second after the completion of the first one. Negative sleeps would simply reverse the order of the two computations, sleep(-1) meaning that the first computation should begin at least a second after the completion of the second.
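That "reordering" reading is easy to sketch in a few lines. The method name `run_around` below is my own invention for illustration, not anything from the quiz:

```ruby
# A minimal sketch of the "reordering" definition of sleep:
# a negative duration simply swaps which computation runs first.
def run_around(first, seconds, second)
  if seconds >= 0
    first.call          # normal order: first, pause, second
    sleep(seconds)
    second.call
  else
    second.call         # negative: the order reverses
    sleep(-seconds)
    first.call
  end
end

log = []
run_around(-> { log << :a }, -1, -> { log << :b })
log  # => [:b, :a] -- the "second" computation ran first
```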

That sounds slightly impossible to implement, doesn't it? But consider something like this:

( lambda { ... } + sleep(-1) + lambda { ... } ).run

Another possible definition for sleep would be a minimum delta time between the end of the first computation and the beginning of the second one. In that case, the implementation of sleep would have to ensure that the second computation above would begin executing no less than one second before the completion of the first one. Multiple threads (and the ability to predict or control a computation's duration) might prove useful.
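This "minimum delta" reading can also be sketched with a thread, under the strong assumption that we know the first computation's duration up front (the hard part the paragraph above alludes to). The names `delta_run` and `first_duration` are mine, purely for illustration:

```ruby
# Sketch of the "minimum delta" definition. Assumes the first
# computation's running time is known in advance.
def delta_run(first, dt, second, first_duration)
  if dt >= 0
    first.call
    sleep(dt)     # positive delta: an ordinary gap between the two
    second.call
  else
    helper = Thread.new do
      # begin the second computation |dt| seconds before the first
      # one is expected to finish
      sleep([first_duration + dt, 0].max)
      second.call
    end
    first.call
    helper.join
  end
end

events = []
delta_run(-> { sleep(0.4); events << :first_done },
          -0.2,
          -> { events << :second_started },
          0.4)
events  # => [:second_started, :first_done]
```

With `dt = -0.2`, the second block starts about 0.2 seconds into the first block's 0.4-second run, so its effect is observed before the first block completes.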

Your assignment for this quiz is twofold:

1) Devise an "interesting" definition for sleep which allows negative
durations. Alternately, use one of the definitions given here.
2) Write some Ruby code demonstrating behavior which satisfies that
definition. As with the above example, you needn't provide a drop-in
replacement for Kernel.sleep.

If your solution does involve time travel, please ensure that it isn't posted before the end of the 48-hour spoiler period (or before the beginning of the quiz).

Quiz Summary

These invent-your-own definition problems are always funny. Some of you have far more twisted minds than I can claim and that entertains Hal Fulton.

Let's get to the crazy definitions. Here's a simple one by Dirk Meijer, shown through an example script I wrote:

Sleeping for 2 seconds...
Seconds so far: 2.

Sleeping for 1 seconds...
Seconds so far: 3.

Sleeping for -1 seconds...
Seconds so far: 3.

Sleeping for 2 seconds...
Seconds so far: 4.

Notice that the negative sleep seemed to be instant and the sleep that followed it was shortened by one second. Here's Dirk's code:

$sleep = 0

alias :old_sleep :sleep

def sleep(n)
  $sleep += n
  if $sleep > 0
    old_sleep($sleep)
    $sleep = 0
  end
end

Nothing too surprising there. Any call with a positive sleep time adds to a global $sleep counter, triggers the old sleep() routine, and resets the global counter to zero. A call with a negative time just subtracts from the counter. This allows code to queue up offsets, shortening future calls to sleep().

For the sake of completeness, here is my test script used above:

#!/usr/bin/env ruby -w

require "insomnia"

$start_time = Time.now

def pause(time)
  puts "Sleeping for #{time} seconds..."
  sleep(time)
  puts "Seconds so far: #{(Time.now - $start_time).round}."
  puts
end

[2, 1, -1, 2].each { |sec| pause(sec) }

Let's examine a completely different swing at the target, by Mike Nelson:

module Kernel
  def n_sleep(n_sleep_time)
    Thread.current.priority = -n_sleep_time
  end
end

# test stuff
if __FILE__ == $0
  Thread.new { n_sleep(-3); 1.upto(10) {print "A"; sleep(0.1)} }
  Thread.new { n_sleep( 1); 1.upto(10) {print "B"; sleep(0.1)} }
  Thread.new { n_sleep(-2); 1.upto(10) {print "C"; sleep(0.1)} }
  n_sleep(10); 1.upto(10) {print "m"; sleep(0.1)}
  loop {break if Thread.list.size == 1}
end

Here the idea is that a negative sleep means we should run more often than positive sleep values would allow. Mike translated that concept into Thread priorities, which seems similar in definition. Thus this one-line solution sets the Thread priority to the opposite of the passed value (making negatives fast and positives slow).

In the test code, four Threads all run in parallel and sleep on the same intervals, but the initial change of priority affects how they come up: the "A" and "C" Threads always get to go first, because of their higher priority (negative calls to n_sleep()), and the "m" Thread is always tail-end-charlie.

Those are both clever, but pretty far from the suggested implementation in the quiz. For an example of that, let's have a peek at Dingsi's code:

class NegativeProc < Proc; end

class ProcStack
  def initialize(*args)
    @negative = args.shift if args.first == true
    @stack = args
  end

  def + code
    new_stack = @stack.dup

    if code.is_a? NegativeProc
      new_stack.insert(-2, code)
      ProcStack.new(true, *new_stack)
    elsif code.respond_to? 'call'
      if @negative
        new_stack.insert(-3, code)
        ProcStack.new(*new_stack)
      else
        ProcStack.new(*new_stack.push(code))
      end
    end
  end

  def call
    @stack.each { |p| p.call }
  end

  def ProcStack.sleep(time)
    if time < 0
      NegativeProc.new { Kernel.sleep(time.abs) }
    else
      Proc.new { Kernel.sleep(time) }
    end
  end
end

class Proc
  def + code
    ProcStack.new(self) + code
  end
end

# ...

Breaking this down, the first line just defines a new type of Proc called NegativeProc. We will see why in a moment.

The next class is a ProcStack object, for ordering a bunch of Procs to execute. You can see in initialize() that the ProcStack tracks whether it is @negative through some optional first parameter and the @stack of Procs/NegativeProcs to run.

The tricky method is +(). When a Proc/NegativeProc is added, the @stack is duplicated, adjusted for the new member, and made into a new ProcStack object. NegativeProcs get added in front of whatever came before them, and they set that @negative parameter we spotted in initialize(). When something else is added while the @negative flag is set, it jumps in front of the previous Proc and its negative sleep value. (See below for an example.) Otherwise, a Proc is just pushed onto the stack.

The rest is much easier to take in. call() just triggers each Proc/NegativeProc in turn. The class method sleep() builds Procs that sleep(). Negative sleep values build a NegativeProc, so it will trigger the @negative flag dance. Finally, a +() method is defined on Proc to create the initial ProcStack.

That ProcStack.+() is pretty hard to imagine without an example, so here's the code Dingsi included to show it off (with minor changes by me):

# ...

# should print something like "chunky ... bacon\nhooray for foxes"
STDOUT.sync = 1 # added by JEG2 to flush print() calls immediately
whee = proc { print "bacon\n" } +
ProcStack.sleep(-2) +
proc { print "chunky " } +
ProcStack.sleep(1) +
proc { print "hooray for " } +
proc { print "foxes" }
whee.call

Here's how the ProcStack builds up when that is run:

#<ProcStack @stack=[proc { print "bacon\n" }]> + ProcStack.sleep(-2)
#<ProcStack @stack=[proc { sleep(2) }, proc { print "bacon\n" }],
            @negative=true> + proc { print "chunky " }
#<ProcStack @stack=[proc { print "chunky " }, proc { sleep(2) },
proc { print "bacon\n" }]> + ProcStack.sleep(1)
#<ProcStack @stack=[proc { print "chunky " }, proc { sleep(2) },
proc { print "bacon\n" }, proc { sleep(1) }]>

What is the practical value of all this? I'm not sure there is one, save perhaps showing that it's entirely possible to invent new computational processes without too much effort. Well, that and it's just fun stuff to play with. Ruby Quiz is all about the fun factor.

My thanks to all you crazy time-traveling programmers with insomnia. May you all sleep(-3) tonight.

There will be no Ruby Quiz this week. I will be busy participating in the annual ICFP programming contest and I encourage others to give it a shot. See you all next week!