Other async Rust / colored functions - what is this about?

There was a proposal about 'Rust in base' on the hackers list. Among a lot of insightful comments, one link led me to this article:


What is he talking about? Maybe some seasoned OO programmer here could spare a few words?

For my part, I never enjoyed OO until I met Ruby. (I was once forced to do OO in Perl, and it was a pain.) But Ruby, as the author says, is not affected by the issue.
There is talk about "futures" and "promises". I don't remember where I first encountered "promises" - but I definitely didn't understand the idea...
 
There is talk about "futures" and "promises". I don't remember where I first encountered "promises" - but I definitely didn't understand the idea...
The C++ Programming Language, 4Ed, by Bjarne Stroustrup, 5.3.5?
 
That's what I'm trying to avoid. I tried to learn C++ years ago, and failed.
When going down that road, if you're coming from plain C, learn classes first and worry about templates last. You don't need to conquer all of C++ at once; pick up bits you find useful and go from there.
 
Think of a normal call such as "result = f(a, b)". Conceptually the caller blocks until the function returns -- this is a synchronous call. But imagine that f() takes a very long time and you'd like to get some work done in the meantime. So you use an asynchronous call, where you have a pattern like "<promise> = f(a, b)". That is, you just have a handle, a promise, that will deliver the result later. When you really, really want the result of f, you do "result = deliver <promise>". This delivery call is a synchronous call, in that the caller blocks until an actual result is delivered.

Once again, conceptually you can think of a synchronous call as just a back-to-back promise followed by delivery. So basically now we have two colors: blue for synchronous calls and red for asynchronous calls. I believe the sync/async stuff comes from functional programming, not O-O.
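To make that concrete, here is a tiny sketch in Rust (since Rust is what started the thread), using a plain OS thread as the "promise": the JoinHandle returned by thread::spawn is the handle you block on later. f(), a and b are just made-up placeholders.

Code:
use std::thread;
use std::time::Duration;

// Stand-in for a function that takes a long time.
fn f(a: u64, b: u64) -> u64 {
    thread::sleep(Duration::from_secs(1));
    a + b
}

fn main() {
    // "<promise> = f(a, b)" -- kick the work off, keep only a handle
    let promise = thread::spawn(|| f(2, 3));

    // ... do other useful work here while f() runs ...

    // "result = deliver <promise>" -- blocks until the result is ready
    let result = promise.join().unwrap();
    println!("result = {result}");
}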

In languages like Go, you simply start another thread, and if you want that thread to deliver a result, you set up a channel. Basically, any time you add concurrency it introduces complications, and different languages may deal with them in different ways!
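And very roughly, the thread-plus-channel style might look like this in Rust, with std::sync::mpsc standing in for Go's channel (the "work" is just a placeholder calculation):

Code:
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    thread::spawn(move || {
        let answer = 6 * 7;          // stand-in for real work
        tx.send(answer).unwrap();    // deliver the result over the channel
    });

    // ... do other work here ...

    let result = rx.recv().unwrap(); // blocks until the worker has sent something
    println!("got {result}");
}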
 
Think of a normal call such as "result = f(a, b)". Conceptually the caller blocks until the function returns -- this is a synchronous call. But imagine that f() takes a very long time and you'd like to get some work done in the meantime. So you use an asynchronous call, where you have a pattern like "<promise> = f(a, b)". That is, you just have a handle, a promise, that will deliver the result later. When you really, really want the result of f, you do "result = deliver <promise>". This delivery call is a synchronous call, in that the caller blocks until an actual result is delivered.
Okay, yes, thank you! You have now intelligibly explained what I vaguely thought it might mean but couldn't make sense of.

Once again, conceptually you can think of a synchronous call as just a back-to-back promise followed by delivery.
But why would one want to abstract away the flow of control into dispersed objects?

I had to do such async calls in JavaScript - but that's an extremely constrained environment that doesn't seem to offer any means of managing them.
Doing such things extensively would remind me of how the blood was dispersed over the walls in the "Event Horizon" movie. 😨

In languages like Go, you simply start another thread, and if you want that thread to deliver a result, you set up a channel. Basically, any time you add concurrency it introduces complications, and different languages may deal with them in different ways!

That is what I do in Ruby: just create a thread. Then there is another independent flow of control, there are means by which the two may interact, and these need to be taken care of.
 
But why would one want to abstract away the flow of control into dispersed objects?
As an example, consider the case where you are trying to get two separate remote resources A & B (via RPC or some TCP communication) in order to do something. If you wait to get one resource and then the other, it will take tA + tB before you can start working (where tX is the time to get resource X). If you kick off both requests at the same time and separately wait for them to complete, you can get started after max(tA, tB). Doing it this way may be even cheaper than starting two separate threads and then waiting for those threads to respond.
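Here's a rough sketch of just the timing argument in Rust (it uses two plain threads, so it doesn't show the "cheaper than threads" part; sleeps stand in for the remote fetches, and fetch_a/fetch_b are invented names):

Code:
use std::thread;
use std::time::{Duration, Instant};

// Pretend remote fetch, tA = 2s.
fn fetch_a() -> String {
    thread::sleep(Duration::from_secs(2));
    "resource A".to_string()
}

// Pretend remote fetch, tB = 1s.
fn fetch_b() -> String {
    thread::sleep(Duration::from_secs(1));
    "resource B".to_string()
}

fn main() {
    let start = Instant::now();

    // Kick off both requests at the same time...
    let ha = thread::spawn(fetch_a);
    let hb = thread::spawn(fetch_b);

    // ...and wait for both; total wait is roughly max(tA, tB), not tA + tB.
    let a = ha.join().unwrap();
    let b = hb.join().unwrap();

    println!("{a}, {b} after {:?}", start.elapsed());
}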

In a way this is similar to calling select() or poll() when you are handling multiple TCP connections, versus giving each thread one connection to manage.

This is just a different kind of control structure and has other applications. For example, in Scheme (delay <expr>) returns a <promise> and (force <promise>) returns the value of the original <expr>. Given this, we can build lazy lists, which can be infinite since the tail is evaluated only on demand! Example:

Code:
;#!/usr/local/bin/scm
(require 'macro)

;Here, scar, scdr and scons are the lazy equivalent of Scheme's car, cdr and cons.
(define (scar x) (car x))
(define (scdr x) (let ((y (cdr x))) (if (promise? y) (force y) y)))
(define-syntax scons (syntax-rules () ((scons x y) (cons x (delay y)))))

; stream contains x, f(x), f(f(x)), ...
(define (iterate f x)    (letrec ((ls (scons x (iterate f (f x))))) ls))

; n, n+1, n+2 ...
(define (from n) (iterate 1+ n))

; only output values from stream xs for which f is true
(define (filter f xs)
  (cond ((null? xs) '())
        ((f (scar xs)) (scons (scar xs) (filter f (scdr xs))))
        (else (filter f (scdr xs)))))

; Sieve of Eratosthenes
(define (sieve xs)
  (scons (scar xs)
         (sieve (filter (lambda (x) (> (modulo x (scar xs)) 0)) (scdr xs)))))

; an infinite list of primes....
(define primes (sieve (from 2)))

; to display a stream...
(define (sdisplay s)
  (do ((xs s (scdr xs)) (sep "(" " ")) ;)(
      ((null? xs) (display ")") "")
      (display sep)
      (display (scar xs))))

; take first n elements
(define (take n s)
  (cond ((<= n 0) '())
        (else (scons (scar s) (take (- n 1) (scdr s))))))

Now you can print the first 100 primes with (sdisplay (take 100 primes)) (newline). Neat, huh?!
 
Sorry, a bit OT, but the talk about colored functions made me think of this:


adds color as a way of indicating how words should be interpreted

Seems a bit brain-melting, so I'm pleased to see we don't mean anything like that in terms of Rust!
 
When going down that road, if you're coming from plain C, learn classes first and worry about templates last. You don't need to conquer all of C++ at once; pick up bits you find useful and go from there.
In order of priority I would say
1. Resource management
2. Using classes
3. Using templates and in particular the standard library containers and algorithms
4. Writing templates
5. Class inheritance
 
As you gain knowledge, you'll find the standard library has a lot of useful things. Maybe a bit heavyweight, but typically very well written.
 
This is just a different kind of control structure and has other applications. For example, in Scheme (delay <expr>) returns a <promise> and (force <promise>) returns the value of the original <expr>. Given this, we can build lazy lists, which can be infinite since the tail is evaluated only on demand! Example:
[...]
Now you can print the first 100 primes with (sdisplay (take 100 primes)) (newline). Neat, huh?!
Yes, this is beautiful. It is something for people who love mathematical proofs.

I had a bit of difficulty: I had to find the proper port that would provide an "scm" command (it wouldn't run in Emacs, although it looks quite similar), and then I had to fix the copy mistake in your code.
And I didn't fully understand the constructs, only enough to see the point of it. Thank you!
 