Friday, December 29, 2017

Compound Reader Zones

  1. Reader OR Zones
  2. Reader LIST Zones
  3. Reader AND Zones
In the previous chapter, we were introduced to exclusive reader zones as they apply to a single mel variable. In this chapter we expand the discussion of reader zones to multiple mel variables. A future chapter will cover reader zones used with structured mels.


Reader OR Zones

We start with the reader OR zone. Let's look at this example where we have two mel variables worked on by three tasks: a Producer, a Consumer, and a Manipulator that Manipulates all the things that are Produced before being Consumed:
#include "nerw.h"
main () {
   <mel> int store1, store2;
   <pel> p = <!> Manipulator (store1, store2);
   <!> Producer (store1, store2);
   <!> Consumer (store1, store2);
}
void Producer (<mel> int store1, <mel> int store2) {
   while ( <?>(store1 || store2) = Produce() );
   <close>store1;
   <close>store2;
}
void Consumer (<mel> int store1, <mel> int store2) {
   while ( true ) {
     try Consume(<?>(store1 || store2));
     catch((store1 && store2)<CLOSED>) break;
   }
}
void Manipulator (<mel> int store1, <mel> int store2) {
   try <? priority=NERW_PRIORITY_HIGHEST as=store> (store1 || store2) {
      store = Manipulate(store);
      <checkout writeover>;
   } <resume>;
   catch ( (store1 && store2)<CLOSED> ) {}
   printf ("Manipulator is done");
}
The two mel channels back up one another. If Producer finds store1 not available, it will try to deposit its product to store2. When Produce generates a zero product, Producer will break out of the production loop and close both mel channels. Likewise, the Consumer continuously tries to Consume from either store1 or store2, whichever is available, until it gets a CLOSED exception on both channels. Both tasks use the mel OR read wait facility.
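The backup-channel pattern above is not specific to NERWous C. As a point of comparison, here is a minimal Python sketch of the same idea, assuming two bounded queues stand in for the mel channels and a CLOSED sentinel stands in for the channel closure; none of these names come from the NERW runtime.

```python
import queue
import threading

CLOSED = object()  # sentinel standing in for the <close> operation

def producer(store1, store2, items):
    # Deposit each item into whichever channel has room, store1 first.
    for item in items:
        try:
            store1.put_nowait(item)
        except queue.Full:
            store2.put(item)
    store1.put(CLOSED)
    store2.put(CLOSED)

def consumer(store1, store2, consumed):
    # OR read: take from whichever channel has a value, until both close.
    open_stores = [store1, store2]
    while open_stores:
        for store in list(open_stores):
            try:
                item = store.get_nowait()
            except queue.Empty:
                continue
            if item is CLOSED:
                open_stores.remove(store)
            else:
                consumed.append(item)

store1, store2 = queue.Queue(maxsize=1), queue.Queue(maxsize=1)
consumed = []
threads = [
    threading.Thread(target=producer, args=(store1, store2, [1, 2, 3, 4])),
    threading.Thread(target=consumer, args=(store1, store2, consumed)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

A real runtime would block on both channels at once instead of polling; the spin loop here only keeps the sketch short.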

Reader OR Zone Access

As said previously, the goal of the Manipulator task is to capture all the items from Producer, via either the store1 or store2 channel, and do some manipulation on these items before releasing them to the Consumer task. This extends the single-mel Manipulator we saw in the previous chapter to two mel variables. The Manipulator here also makes use of an exclusive zone to do the manipulation, but depends on the mel OR wait on the mel channels.

This is the behind-the-scene process:
  1. The Manipulator task puts itself to store1 readers' queue. In the example above, it goes into the NERW_PRIORITY_HIGHEST priority readers' queue so that it can get to any produced items before Consumer.
     
  2. If it happens (1) to be first on the queue and (2) store1 is valued and (3) this value is not stale, then it will check into the exclusive reader zone, and invokes Manipulate on the eponymous variable store representing store1.
     
  3. If one of the conditions for store1 is false, Manipulator will put itself in the NERW_PRIORITY_HIGHEST priority readers' queue of store2. If it happens (1) to be first on this queue and (2) store2 is valued and (3) this value is not stale, then it will check into store2 exclusive reader zone, and invokes Manipulate on the eponymous variable store representing store2.
     
  4. Otherwise, Manipulator will wait on both queues at the same time.
     
  5. On the first queue that satisfies all 3 conditions (task first on the queue, mel is valued, and the value is not stale), Manipulator will check into the reader zone with that mel variable. Since this can be either store1 or store2, Manipulator uses the generic store eponymous variable.
     
  6. Inside the reader zone, the task makes reads and writes to the eponymous variable store. Since this is a local variable, the mel wait operator (<?>) is not applicable.
     
  7. After manipulation, the Manipulator task invokes the <checkout writeover> operation to update the mel channel and get out of the reader zone. The task remembers what mel channel is used on check-in so that the checkout behavior will be applied to that mel channel.
     
  8. On checkout of the reader zone, the Manipulator task gets off both priority wait queues - the one for the mel channel it has checked into, and the one for the mel channel it is still waiting on. However, since there is a <resume> operator in our example, the task is put back on both waiting queues right away.

Reader OR Zone Traverse

The behind-the-scene description above uncovers a subtle difference between
<? priority=NERW_PRIORITY_HIGHEST as=store> (store1 || store2)
and
<? priority=NERW_PRIORITY_HIGHEST as=store> (store2 || store1)
The first mel item in the OR list is checked first. If that item is constantly re-valued by a writer task, the mel wait on it is likely to succeed, and the reader OR zone will be entered far more often with the first mel than with the second.

This issue can be resolved by using the random traverse behavior for OR reads:
<? priority=NERW_PRIORITY_HIGHEST as=store> (store1 ||<random> store2)
With the random traverse behavior, the check for store1 and store2 is randomized instead of serialized.
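The effect of the random traverse can be sketched in Python. The or_read function below is a hypothetical stand-in that models only the order in which the channels are polled, nothing more.

```python
import random

def or_read(stores, order="serial"):
    # Return the index of the first non-empty store in traverse order.
    indexes = list(range(len(stores)))
    if order == "random":
        random.shuffle(indexes)
    for i in indexes:
        if stores[i]:
            return i
    return None

random.seed(0)  # fixed seed so the sketch is repeatable
stores = [["item"], ["item"]]  # both channels currently valued
serial_picks = {or_read(stores, "serial") for _ in range(100)}
random_picks = {or_read(stores, "random") for _ in range(100)}
```

With both channels valued, the serial traverse always selects the first store, while the random traverse spreads the selections over both.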

Reader OR Zone Resumption

On checkout of the reader zone, the Manipulator invokes the <resume> operation to jump back to the mel OR zone entrance, waiting for either store1 or store2 again. The task is put at the back of the queues, but since it is the only task for the NERW_PRIORITY_HIGHEST queue, it is at the top of both queues again.

Reader OR Zone Exception

If one channel has been closed, the mel OR wait will focus solely on the remaining channel. If both channels have been closed, the (store1 && store2)<CLOSED> exception will be raised, causing the Manipulator to abort the waiting for the exclusive zone. In the above example, the Manipulator displays the printf statement before it ends.

Reader OR Zone Cases

The previous Manipulator does not care whether it selects store1 or store2; it processes either one the same way through the stand-in store. What if it needs to treat store1 somewhat differently from store2? Let's explore such a case:
void Manipulator (<mel> int store1, <mel> int store2) {
   try <? priority=NERW_PRIORITY_HIGHEST as=store> (store1 || store2) {
      printf ("Select [%ll] to manipulate", store<id>);
      if ( store<id> == store1<id> )
         store = Manipulate_1 (store);
      else
         store = Manipulate_2 (store);
      <checkout writeover>;
   } <resume>;
   catch ( (store1 && store2)<CLOSED> ) {}
   printf ("Manipulator is done");
}
By using the <id> property, Manipulator can make a case of using either Manipulate_1 or Manipulate_2 depending on what mel variable is selected.

Reader OR Zone Gotcha!

A knowledgeable reader will see that the above implementation of Manipulator is not correct. It will let Producer items slip by and go directly to the Consumer without being Manipulated. For example, while Manipulator is working on store1, store2 is available for Producer to deposit a new product and for Consumer to consume it.

The correct solution is not to use the reader OR zone, but to use two single reader zones, one for store1 and the other for store2:
main () {
   <mel> int store1, store2;
   <pel> p1 = <!> Manipulator (store1);
   <pel> p2 = <!> Manipulator (store2);
   <!> Producer (store1, store2);
   <!> Consumer (store1, store2);
}
void Producer (<mel> int store1, <mel> int store2) {
   while ( <?>(store1 || store2) = Produce() );
   <close>store1;
   <close>store2;
}
void Consumer (<mel> int store1, <mel> int store2) {
   while ( true ) {
      try Consume(<?>(store1 || store2)); 
      catch((store1 && store2)<CLOSED>) break;
   }
}
void Manipulator (<mel> int store) {
   try <? priority=NERW_PRIORITY_HIGHEST>(store) {
      store = Manipulate(store);
      <checkout writeover>;
   } <resume>;
   catch ( store<CLOSED> ) {}
   printf ("Manipulator for [%s] is done", store<name>);
}
Sometimes it is necessary to use a bad example to introduce a new feature.
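The corrected topology, one dedicated manipulator per channel, can be mirrored in plain Python so that no item reaches the consumer raw. The queues, the DONE sentinel, and the ×10 manipulation are all inventions of this sketch, not NERW runtime features; in particular, the per-channel "cooked" queue stands in for the writeover checkout.

```python
import queue
import threading

DONE = object()  # sentinel standing in for the channel closure

def manipulator(raw, cooked):
    # Dedicated exclusive reader for one channel: every item is transformed.
    while True:
        item = raw.get()
        if item is DONE:
            cooked.put(DONE)
            return
        cooked.put(item * 10)  # stand-in for Manipulate()

raw1, raw2 = queue.Queue(), queue.Queue()
cooked1, cooked2 = queue.Queue(), queue.Queue()
workers = [
    threading.Thread(target=manipulator, args=(raw1, cooked1)),
    threading.Thread(target=manipulator, args=(raw2, cooked2)),
]
for w in workers:
    w.start()

# Producer side: alternate deposits between the two channels, then close.
for i, channel in zip([1, 2, 3, 4], [raw1, raw2, raw1, raw2]):
    channel.put(i)
raw1.put(DONE)
raw2.put(DONE)

# Consumer side: drain the manipulated values from both channels.
consumed = []
for channel in (cooked1, cooked2):
    while (item := channel.get()) is not DONE:
        consumed.append(item)
for w in workers:
    w.join()
```

Because every item must pass through its channel's manipulator, nothing can sneak by unmanipulated.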


Reader LIST Zones

In the previous examples, we had Producer and Consumer take in two mel variables but use only one of them at a time. Let's modify the example so that these tasks make use of both. This also allows us to change the Manipulator to use the reader LIST zone.
main () {
   <mel> int store1, store2;
   <pel> p = <!> Manipulator (store1, store2);
   <!> Producer (store1, store2);
   <!> Consumer (store1, store2);
}
void Producer (<mel> int store1, <mel> int store2) {
   while ( <?>(store1, store2) = ProduceTwoItems() );
   <close>(store1 && store2);
}
void Consumer (<mel> int store1, <mel> int store2) {
   while ( true ) {
     try ConsumeTwoItems(<?>(store1, store2));
     catch((store1 || store2)<CLOSED>) break;
   }
}
void Manipulator(<mel> int store1, <mel> int store2) {
   try <? priority=NERW_PRIORITY_HIGHEST>(store1, store2) {
      store1 = Manipulate(store1);
      store2 = Manipulate(store2);
      <checkout writeover>;
   } <resume>;
   catch ( (store1 || store2)<CLOSED> ) {}
}
This is the behind-the-scene process for a reader LIST zone access:
  1. The Manipulator task puts itself to both store1 and store2 NERW_PRIORITY_HIGHEST priority queues.
     
  2. In each queue, whenever (1) the Manipulator task becomes first on the queue and (2) the corresponding mel is valued and (3) this value is not stale, it will get hold of that mel and wait for the other queue to also satisfy those three conditions.
     
  3. While waiting at the top of a queue, the Manipulator task allows another reader task to get a stale value. However, it will not allow another reader task to get a new value.
     
  4. Once it can get hold of both the required mels, the Manipulator task will check into the exclusive reader zone.
     
  5. From this time on, no reader task can get the blocked mel values. Non-intrusive readonly access and snapshot operation are still permissible.
     
  6. In the reader zone, the Manipulator task uses the local eponymous variables, which are initialized with the original values of the remote mel variables.
     
  7. Once it is done with the manipulations, the Manipulator task invokes the <checkout writeover> to replace the values of the mel variables with the values of the corresponding eponymous variables.
     
  8. On checkout of the reader zone, the Manipulator task gets off both priority wait queues. However, since there is a <resume> operator, the task is put back on both waiting queues right away.

Since the reader LIST zone requires both mels, a closure of either one is bad for Manipulator. This is the reason its catch on the CLOSED exception uses an OR clause. Compare this with the reader OR zone example where the Manipulator triggers on an AND CLOSED exception.

Reader LIST Zone Gotcha!

Can a produced item sneak by from the Producer to the Consumer without going through the Manipulator? This can happen with the reader OR zone, but not with the reader LIST zone.

Like the reader OR zone Manipulator, the reader LIST zone Manipulator is always present in a higher priority queue than the Consumer, so it always has first dibs on the mel variables. Unlike the reader OR zone version, though, the reader LIST zone Manipulator blocks both mel variables while it is in the reader zone, preventing the Consumer from sneaking by.

On the other hand, a Manipulator-like task can introduce starvation. If, instead of checking out with <checkout writeover>, which leaves the mel value available for Consumer, it were to use <checkout>, which removes the mel value, the Consumer would never see a product from Producer.


Reader AND Zones

The reader AND zone uses the mel AND wait in order to have exclusive access to all the specified mel items.

Let's rewrite the Manipulator task using a reader AND zone:
void Manipulator(<mel> int store1, <mel> int store2) {
   try <? priority=NERW_PRIORITY_HIGHEST>(store1 && store2) {
      store1 = Manipulate(store1);
      store2 = Manipulate(store2);
      <checkout writeover>;
   } <resume>;
   catch ( (store1 || store2)<CLOSED> ) {}
}
The above Manipulator is a bad Manipulator because it will allow products from the Producer to slip by and go directly to the Consumer. Let's see how so.
  1. The Manipulator task puts itself to both store1 and store2 NERW_PRIORITY_HIGHEST priority queues.
     
  2. In each queue whenever the Manipulator task becomes first on the queue, it will join the "top-of-the-queue" readers group, as specified by the mel AND wait process. It then waits to join the "top-of-the-queue" readers group of the other requested mel variable.
     
  3. When the Manipulator task is in a "top-of-the-queue" group for one mel reader's queue but not the other, it will allow other reader tasks to "jump the line" on that queue to get the mel value -- stale or new, in accordance with the mel AND wait process.
     
  4. When the Manipulator task is in the "top-of-the-queue" groups of both mel variables, it will check if both mel variables are (1) valued and (2) not stale. If one of the conditions is not true for either mel, the task keeps waiting. During this wait, it will allow other reader tasks to "jump the line" on both queues to get the mel values -- stale or new.
     
  5. Once all the conditions are met at both queues, the Manipulator task blocks both mels at the same time, and checks into the exclusive reader zone.
     
  6. From this time on, no reader task can "jump the line" and get the mel values. Non-intrusive readonly access and snapshot operation are still permissible.
     
  7. In the reader zone, the Manipulator task uses the local eponymous variables, which are initialized with the values of the mel variables.
     
  8. Once it is done with the manipulations, the Manipulator task invokes the <checkout writeover> to replace the values of the mel variables with the values of the corresponding eponymous variables.
     
  9. On checkout of the reader zone, the Manipulator task gets off both priority wait queues. However, since there is a <resume> operator, the task is put back on both waiting queues right away.

The use of the "top-of-the-queue" groups allows other reader tasks to "jump the line" and get the mel value even if this mel value is new to the Manipulator task. In our example, this reader task is the Consumer task and "jumping the line" will allow it to consume a product raw without being Manipulated.

On the other hand, the use of "top-of-the-queue" groups prevents deadlocks when multiple tasks vie for the same resources in a circular way, as in the Dining Philosophers example. Unlike the reader LIST zone, where the tasks get hold of the mel variables by themselves without knowledge of the similar needs of other tasks, the reader AND zone method collects all such needs in "top-of-the-queue" groups, where the NERW runtime has full knowledge to prevent deadlocks when granting exclusive access.
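The deadlock that a runtime arbiter prevents can be illustrated with ordinary locks in Python. This sketch uses the classic alternative, a fixed global acquisition order, to make the same point: two tasks that each need both resources complete without a circular wait. The lock names and task bodies are illustrative only.

```python
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()
log = []

def task(name, first, second):
    # Both tasks acquire in the same global order, so no circular wait
    # (and hence no deadlock) can form.
    with first:
        with second:
            log.append(name)

t1 = threading.Thread(target=task, args=("t1", lock_a, lock_b))
t2 = threading.Thread(target=task, args=("t2", lock_a, lock_b))
t1.start(); t2.start()
t1.join(); t2.join()
```

If the second task instead acquired lock_b then lock_a, the two could each hold one lock and wait forever on the other, which is exactly the circular wait the "top-of-the-queue" grouping is designed to rule out.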



Tuesday, December 12, 2017

CommonJS Promises/A Example

  1. CommonJS Promises
  2. NERWous C Sample


CommonJS Promises

The goal of the CommonJS group is to build a better JavaScript ecosystem. One of its proposals is Promises/A: "a promise represents the eventual value returned from the single completion of an operation".

The following example, which displays the web contents of the first web link in a tweet, is taken from an article that extols the use of Promises/A:
getTweetsFor("domenic") // promise-returning function
  .then(function (tweets) {
    var shortUrls = parseTweetsForUrls(tweets);
    var mostRecentShortUrl = shortUrls[0];
    return expandUrlUsingTwitterApi(mostRecentShortUrl); // promise-returning function
  })
  .then(httpGet) // promise-returning function
  .then(
    function (responseBody) {
      console.log("Most recent link text:", responseBody);
    },
    function (error) {
      console.error("Error with the twitterverse:", error);
    }
  );
The promise of Promises/A is that promises can be chained, and exceptions can bubble up to someone who can handle that failure.
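The same chain-then-handle shape can be sketched with Python's concurrent.futures, where a failed step's exception likewise surfaces at the final handler. All four tweet functions here are hypothetical stand-ins, not a real Twitter API.

```python
from concurrent.futures import ThreadPoolExecutor

def get_tweets_for(user):
    # Stand-in for the promise-returning getTweetsFor.
    return ["see http://short/1", "older tweet"]

def parse_tweets_for_urls(tweets):
    return [w for t in tweets for w in t.split() if w.startswith("http")]

def expand_url(short_url):
    # Stand-in for expandUrlUsingTwitterApi.
    return short_url.replace("short", "long")

def http_get(url):
    return f"contents of {url}"

with ThreadPoolExecutor() as pool:
    try:
        tweets = pool.submit(get_tweets_for, "domenic").result()
        most_recent_short_url = parse_tweets_for_urls(tweets)[0]
        url = pool.submit(expand_url, most_recent_short_url).result()
        body = pool.submit(http_get, url).result()
        print("Most recent link text:", body)
    except Exception as error:  # any step's failure bubbles up to here
        print("Error with the twitterverse:", error)
```

Each .result() call waits on one future, and an exception raised in any submitted step re-raises there, so a single handler covers the whole chain.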

NERWous C Sample

The verbose first version shows all the tasks:
/* VERSION 1 - Asynchronous */
try {
   <pel> p_tweets = <!> getTweetsFor("domenic");    // parallel execution
   char* tweets = <?>p_tweets;     // the mel wait unblocks the thread for other tasks
   char* shortUrls = parseTweetsForUrls(tweets);     // serial execution
   char* mostRecentShortUrl = shortUrls[0];          // serial execution
   <pel> p_twitter = <!> expandUrlUsingTwitterApi(mostRecentShortUrl);    // parallel
   char* url = <?>p_twitter;     // mel wait unblock the thread for other tasks
   <pel> p_http = <!> httpGet(url);     // parallel execution
   char* responseBody = <?>p_http;     // mel wait unblock the thread for other tasks
   printf ("Most recent link text: %s", responseBody);
}
catch (p_tweets<...>) {
   printf ("Error [%s] on [getTweetsFor] due to [%s]",
      p_tweets<exception>, p_tweets<why>);
}
catch (p_twitter<...>) {
   printf ("Error [%s] on [expandUrlUsingTwitterApi] due to [%s]",
      p_twitter<exception>, p_twitter<why>);
}
catch (p_http<...>) {
   printf ("Error [%s] on [httpGet] due to [%s]",
      p_http<exception>, p_http<why>);
}
The 2nd version is more compact, and shows that the tasks can be chained together, and that errors can bubble to the top exception handler:
/* VERSION 2 - Asynchronous */
try {
   char* shortUrls = parseTweetsForUrls(<?><!> getTweetsFor("domenic"));
   char* mostRecentShortUrl = shortUrls[0];
   char* responseBody = <?><!> httpGet(
      <?><!> expandUrlUsingTwitterApi(mostRecentShortUrl)
      );
   printf ("Most recent link text: %s", responseBody);
}
catch (pel<...>) {
   printf ("Error [%s] on [%s] due to [%s]",
      pel<exception>, pel<name>, pel<why>);
}
The double header <?><!> means forking the task to run in parallel (<!>) and then waiting for its result (<?>).

The generic keyword pel in the catch statement represents a task that fails. The name of the task is found via the property pel<name>.

Removing the NERWous C symbols from the compact version above results in the synchronous C version:
/* VERSION 3 - Synchronous */
try {
  char* shortUrls = parseTweetsForUrls(getTweetsFor("domenic"));  /* blocking */
  char* mostRecentShortUrl = shortUrls[0];
  char* responseBody = httpGet(
     expandUrlUsingTwitterApi(mostRecentShortUrl)
     ); // blocking x 2
   printf ("Most recent link text: %s", responseBody);
} catch (error) {
  printf("Error with the twitterverse: %s ", error);
}
In other words, it is sometimes very easy to transform a serial synchronous version into a parallel asynchronous version using NERWous C. Just pepper it juicily with pel (<!>) and mel (<?>) constructs.



Wednesday, December 6, 2017

Concurrent Programming In Scala

  1. Scala Language
  2. Actors Model
  3. Parallel Collections
  4. Futures and Promises


Scala Language

Publicly released in 2004, the Scala programming language was originally designed to be more concise than Java and to provide functional programming features missing from Java at that time. Since then, Scala has expanded from running solely on the Java Virtual Machine to running on other platforms, such as JavaScript. Current information about the language can be found on its official web site, www.scala-lang.org.

Scala supports parallel and concurrent programming via the following features:
  1. Actors Model
  2. Parallel Collections
  3. Futures and Promises
For each feature, let's study an example written in Scala, and see how it can be rewritten similarly in NERWous C.


Actors Model

Scala Actors are concurrent processes that communicate by exchanging messages.

Scala Actors - Ping-Pong Example

This ping-pong example uses the deprecated Scala Actors library. Scala Actors has since migrated to Akka. However, since the ping-pong example is described in better detail in the Scala Actors article than in the terser Akka documentation, it is used here.
case object Ping
case object Pong
case object Stop
import scala.actors.Actor
import scala.actors.Actor._
class Ping(count: int, pong: Actor) extends Actor {
  def act() {
    var pingsLeft = count - 1
    pong ! Ping
    while (true) {
      receive {
        case Pong =>
          if (pingsLeft % 1000 == 0)
            Console.println("Ping: pong")
          if (pingsLeft > 0) {
            pong ! Ping
            pingsLeft -= 1
          } else {
            Console.println("Ping: stop")
            pong ! Stop
            exit()
          }
      }
    }
  }
}
class Pong extends Actor {
  def act() {
    var pongCount = 0
    while (true) {
      receive {
        case Ping =>
          if (pongCount % 1000 == 0)
            Console.println("Pong: ping "+pongCount)
          sender ! Pong
          pongCount = pongCount + 1
        case Stop =>
          Console.println("Pong: stop")
          exit()
      }
    }
  }
}
object pingpong extends Application {
  val pong = new Pong
  val ping = new Ping(100000, pong)
  ping.start
  pong.start
}
The ping-pong example has a Ping actor sending the Pong actor a "Ping" message. Upon receiving it, the Pong actor replies with a "Pong" message. After the configured number of exchanges (100,000 here, with a progress line printed every 1,000), the Ping actor is done playing and sends a "Stop" message. When Pong receives the "Stop", it also stops playing.
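Before the NERWous C versions, here is the same exchange in plain Python, with a queue per player standing in for each actor's mailbox. The message strings follow the Scala example; the count is shortened to keep the run quick.

```python
import queue
import threading

def ping(count, to_pong, from_pong, log):
    pings_left = count - 1
    to_pong.put("Ping")
    while True:
        if from_pong.get() == "Pong":
            if pings_left > 0:
                to_pong.put("Ping")
                pings_left -= 1
            else:
                log.append("Ping: stop")
                to_pong.put("Stop")
                return

def pong(to_ping, from_ping, log):
    while True:
        msg = from_ping.get()
        if msg == "Ping":
            to_ping.put("Pong")
        elif msg == "Stop":
            log.append("Pong: stop")
            return

ping_box, pong_box, log = queue.Queue(), queue.Queue(), []
t1 = threading.Thread(target=ping, args=(5, pong_box, ping_box, log))
t2 = threading.Thread(target=pong, args=(ping_box, pong_box, log))
t1.start(); t2.start()
t1.join(); t2.join()
```

The blocking get on each mailbox plays the role that receive plays for a Scala actor: the thread sleeps until a message arrives.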

NERWous C Version 1

The first version rewrite in NERWous C hews to the Scala example, by using a receive input mel argument to represent the message sending between Scala actors.
main () {
   <pel>pong = <! name="Pong">Pong (null);
   <pel>ping = <! name="Ping">Ping (100000, null);
}
void Ping (int count, <mel> string receive) {
   <pel>pong;
   <? pel name="Pong" started>pong;

   int pingsLeft = count - 1;
   <?>pong.receive = "Ping";
   while ( 1 ) {
      switch ( <?>receive ) {   /* wait to receive */
         case "Pong";
            if (pingsLeft % 1000 == 0)
               printf ("Ping: pong");
            if (pingsLeft > 0) {
               <?>pong.receive = "Ping";
               --pingsLeft;
            } else {
               printf ("Ping: stop");
               <?>pong.receive = "Stop";
               <return>;
            }
            break;
      }
   }
}
void Pong (<mel> string receive) {
   <pel>ping;
   <? pel name="Ping" started>ping;

   int pongCount = 0;
   while ( 1 ) {
      switch ( <?>receive ) {
         case "Ping":
            if (pongCount % 1000 == 0)
               printf ("Pong: ping " + pongCount);
             <?>ping.receive = "Pong";
             ++pongCount;
             break;
        case "Stop":
           printf ("Pong: stop");
           <return>;
      }
   }
}
The tricky thing about the Ping-Pong example is how Pong knows about Ping, since Pong is created before Ping ever exists. The Scala solution is the sender reference, which represents the actor that sent the current message: when Pong receives the "Ping" message, the sender is the de facto Ping actor. The NERWous C version uses the wait-for-pel statement, which allows the Pong task to wait for a task named "Ping" to reach a certain state (here, we pick the started state), and initializes the local pel variable ping with information about the task named "Ping":
<? pel name="Ping" started>ping;
Scala actors exchange messages with the ! send operator and the receive method. The NERWous C version above uses the mel input argument instead. It is named receive here but could be any valid name. When Ping first runs, it sends a "Ping" message to Pong's receive mel input argument:
<?>pong.receive = "Ping";
In the meantime, the Pong task waits on its receive mel input argument to be valued with a message, either "Ping" or "Stop". With the former, it sends back a "Pong" message. With the latter, it just quits by running the <return> statement.

A computer linguist will notice that the C language, on which NERWous C is based, does not support a native string type. It is used here for code simplicity, since the focus is on the concurrency features and not on the base language.

NERWous C Version 2

Let's now rewrite Version 1 to use NERWous C "streaming" feature. Instead of having Ping sending a "Ping" message directly to Pong, we will have Ping stream its "Ping" messages via the release operation to its mel output argument, and Pong access Ping's output messages:
main () {
   <pel>pong = <! name="Pong">Pong ();
   <pel>ping = <! name="Ping">Ping (100000);
}
<mel> string Ping (int count) {
   <pel>pong;
   <? pel name="Pong" started>pong;

   int pingsLeft = count - 1;
   <release> "Ping";   /* stream first "Ping" */

   <?>pong;   /* wait for Pong to stream */
   if (pingsLeft % 1000 == 0)
      printf ("Ping: pong");
   if (pingsLeft > 0) {
      <release> "Ping";   /* stream "Ping" again */
      --pingsLeft;
      <resume>;    /* resume the wait for Pong to stream */
   } else {
      printf ("Ping: stop");
   }
}
<mel> string Pong () {
   <pel>ping;
   <? pel name="Ping" started>ping;

   int pongCount = 0;
   try {
      <?>ping;   /* wait for Ping to stream */
      if (pongCount % 1000 == 0)
         printf ("Pong: ping " + pongCount);
      <release> "Pong";   /* stream "Pong" */
      ++pongCount;
      <resume>;    /* resume the wait for Ping to stream */
   }
   catch ( ping<ENDED> ) { }
}
Two changes are made in Version 2. The first is that the while loop has been replaced by the resume operation, which repeats the mel wait for the streaming messages. The second is that the "Stop" message has been removed: the Pong task knows that Ping has ended via the ENDED exception.


Parallel Collections

The parallel collections feature in Scala is discussed in this article. The examples to illustrate this feature are:
  1. Map
  2. Fold
  3. Filter
Map

This example uses a parallel map to transform a collection of String to all-uppercase:
val lastNames = List("Smith","Jones","Frankenstein","Bach","Jackson","Rodin").par
lastNames.map(_.toUpperCase)
The result of the run is:
SMITH, JONES, FRANKENSTEIN, BACH, JACKSON, RODIN

The NERWous C version is more verbose since there is no built-in map function. Again, for simplicity, we will use the fictitious string type, which does not exist in the C language:
/* VERSION 1 */
#define NUM 6
string lastNames[NUM] = {"Smith","Jones","Frankenstein","Bach","Jackson","Rodin"};
for (int i=0; i<NUM; ++i) {
   lastNames[i] = <!> toUpperCase(lastNames[i]);
}
The NERWous C version has a serial loop that pels a new inline task in each iteration. The inline task takes in lastNames[i] as a local variable, and ends with its toUpperCase value. The returned value is then re-assigned to the local variable lastNames[i].

Due to the assignment back to lastNames[i] which is under the main task context, each inline task has to finish running before the next inline task can run. So although the inline tasks can run in parallel with one another, they are practically run serially, albeit on a possibly different cel element.

Let's now transform lastNames into a mel array so that it can be accessed in parallel by the toUpperCase inline tasks:
/* VERSION 2 */
#define NUM 6
<mel>string lastNames[NUM] = {"Smith","Jones","Frankenstein","Bach","Jackson","Rodin"};
for (int i=0; i<NUM; ++i) {
   <!> { <? replace>lastNames[i] = toUpperCase(?); }
}
Each inline task that is pelled on each iteration of the for loop now truly runs in parallel, each accessing its portion of the shared mel array lastNames. Each inline task uses the reader zone shortcut to replace in place the value of the mel element lastNames[i].
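For comparison, the same parallel map can be sketched in Python with an executor, which hands each element to a worker thread much as the pelled inline tasks do:

```python
from concurrent.futures import ThreadPoolExecutor

last_names = ["Smith", "Jones", "Frankenstein", "Bach", "Jackson", "Rodin"]
with ThreadPoolExecutor() as pool:
    # Each element is uppercased in a worker thread; map keeps the order.
    upper = list(pool.map(str.upper, last_names))
```

Unlike the NERWous C version, which replaces each mel element in place, this sketch collects the results into a new list.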

Fold

This example adds up the values from 1 to 10000.
val parArray = (1 to 10000).toArray.par
parArray.fold(0)(_ + _)
The result of 1 + 2 + 3 + ... + 10000 is 50005000. Again, the NERWous C version is much more verbose since there is no built-in fold capability:
#define NUMCOUNT 10000
int parArray[NUMCOUNT];
for (i = 0; i<NUMCOUNT; ++i)
   parArray[i] = i+1;    /* initialize array */

<mel> int sum;
<collect> for (i = 0; i<NUMCOUNT; i += 2 ) {
    <! import="parArray[i] ..<flex ubound=NUMCOUNT uvalue=0>.. parArray[i+1]"> {
        <?> sum += (parArray[i] + parArray[i+1]);
    }
} <? ENDED>;
printf ("Summation value %d", <?>sum);
The first for loop initializes the parArray array with the values 1, 2, etc. up to 10000. The second for loop takes every two elements of the array and does the summation in a separate task. Each task gets its local parArray elements imported, does the addition, and adds the result into the shared sum mel variable.

The second for loop is run inside a collect-ENDED block to make sure that all tasks that it pels have ended before the program continues with the printf statement. This ensures that the mel variable sum contains all the summations.

Each iteration in the second for loop creates a separate inline task to do the summation. This task requires two local values passed via the import operation. The import statement could be written simply as:
<! import="parArray[i],parArray[i+1]">
However, if the number of items in parArray is odd, the last iteration of the for loop will generate an out-of-bound exception on parArray[i+1]. The use of the flex consecutive construct addendum allows the run-time environment to assign the uvalue value (0) to any parArray item with an index equal to or greater than the ubound value (NUMCOUNT).
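The fold can be sketched in Python by summing chunks in parallel and then folding the partial sums; the chunk size of 2000 is an arbitrary choice of this sketch.

```python
from concurrent.futures import ThreadPoolExecutor

par_array = list(range(1, 10001))  # values 1..10000
# Split into chunks, sum each chunk in a worker, then fold the partials.
chunks = [par_array[i:i + 2000] for i in range(0, len(par_array), 2000)]
with ThreadPoolExecutor() as pool:
    total = sum(pool.map(sum, chunks))
```

Folding the per-chunk sums in the parent plays the role of the shared sum mel variable, without needing a lock.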

Filter

This example uses a parallel filter to select the last names that come alphabetically after the letter “J”.
val lastNames = List("Smith","Jones","Frankenstein","Bach","Jackson","Rodin").par
lastNames.filter(_.head >= 'J')
The result of the run is Smith, Jones, Jackson, Rodin. This is the NERWous C version:
#define NUM 6
string lastNames[NUM] = { "Smith","Jones","Frankenstein","Bach","Jackson","Rodin" };
for (int i=0; i<NUM; ++i) {
   <!> { if (head(lastNames[i]) >= 'J') printf("%s", lastNames[i]); }
}
Each iteration of the for loop pels a new task that runs independently of the others. Each task gets its local array element lastNames[i] imported for processing. Since the tasks run independently and concurrently, the order of the names printed out is not deterministic.
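A Python sketch of the parallel filter: each membership test runs in a worker thread, and, unlike the print-as-you-go NERWous C version, the matches are then collected in their original order.

```python
from concurrent.futures import ThreadPoolExecutor

last_names = ["Smith", "Jones", "Frankenstein", "Bach", "Jackson", "Rodin"]
with ThreadPoolExecutor() as pool:
    # Test each name in a worker thread, then keep the matches in order.
    keep = list(pool.map(lambda name: name[0] >= "J", last_names))
selected = [name for name, k in zip(last_names, keep) if k]
```

Separating the parallel predicate from the serial collection step is what makes the result deterministic here.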


Futures and Promises

Under Scala, a future is a read-only reference to a yet-to-be-completed value, while a promise is used to complete a future, either successfully or with an exception and failure.

Futures - Coffee Making Example

The coffee making example is described in this article. The complete code is taken from GitHub:
import scala.concurrent._
import scala.concurrent.duration._
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._
import scala.util.Random
import scala.util.Try

object kitchen extends App {

  /////////////////////////////
  // Some type aliases, just for getting more meaningful method signatures:
  type CoffeeBeans = String
  type GroundCoffee = String
  case class Water(temperature: Int)
  type Milk = String
  type FrothedMilk = String
  type Espresso = String
  type Cappuccino = String

  // some exceptions for things that might go wrong in the individual steps
  // (we'll need some of them later, use the others when experimenting
  // with the code):
  case class GrindingException(msg: String) extends Exception(msg)
  case class FrothingException(msg: String) extends Exception(msg)
  case class WaterBoilingException(msg: String) extends Exception(msg)
  case class BrewingException(msg: String) extends Exception(msg)
  /////////////////////////////
  def combine(espresso: Espresso, frothedMilk: FrothedMilk): Cappuccino = "cappuccino"

  def grind(beans: CoffeeBeans): Future[GroundCoffee] = Future {
    println("start grinding...")
    Thread.sleep(Random.nextInt(2000))
    if (beans == "baked beans") throw GrindingException("are you joking?")
    println("finished grinding...")
    s"ground coffee of $beans"
  }

  def heatWater(water: Water): Future[Water] = Future {
    println("heating the water now")
    Thread.sleep(Random.nextInt(2000))
    println("hot, it's hot!")
    water.copy(temperature = 85)
  }

  def frothMilk(milk: Milk): Future[FrothedMilk] = Future {
    println("milk frothing system engaged!")
    Thread.sleep(Random.nextInt(2000))
    println("shutting down milk frothing system")
    s"frothed $milk"
  }

  def brew(coffee: GroundCoffee, heatedWater: Water): Future[Espresso] = Future {
    println("happy brewing :)")
    Thread.sleep(Random.nextInt(2000))
    println("it's brewed!")
    "espresso"
  }

  ///////////// Business logic
  println("Kitchen starting")
  def prepareCappuccino(): Future[Cappuccino] = {
    val groundCoffee = grind("arabica beans")
    val heatedWater = heatWater(Water(20))
    val frothedMilk = frothMilk("milk")
    for {
      ground <- groundCoffee
      water <- heatedWater
      foam <- frothedMilk
      espresso <- brew(ground, water)
    } yield combine(espresso, foam)
  }
  val capo = prepareCappuccino()
  Await.ready(capo, 1 minutes)
  println("Kitchen ending")
}
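Note that prepareCappuccino creates the three futures before the for comprehension. This ordering matters: a for comprehension sequences futures, so if the calls were made inside it, grinding, heating and frothing would run one after another instead of in parallel. A small sketch (our own, not from the article) demonstrates the difference:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

object ParallelDemo extends App {
  val t0 = System.nanoTime()
  // Start both futures first so they run concurrently...
  val fa = Future { Thread.sleep(200); 1 }
  val fb = Future { Thread.sleep(200); 2 }
  // ...then sequence their results with a for comprehension.
  val sum = for { a <- fa; b <- fb } yield a + b
  println(Await.result(sum, 2.seconds))   // prints 3
  val elapsedMs = (System.nanoTime() - t0) / 1000000
  // On a multi-core thread pool this is typically ~200 ms, not ~400 ms,
  // because the two sleeps overlap.
  println(s"elapsed ~ $elapsedMs ms")
}
```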
The NERWous C version has the main task pel the tasks to grind the beans (grind), heat the water (heatWater), froth the milk (frothMilk) and brew the espresso (brew) -- all to run in parallel. The brew task waits for the grind task to have ground the coffee beans, and for the heatWater task to have heated the water to the requested temperature. The main task, after pelling all the activities, waits for frothMilk and brew to be done before declaring "Kitchen ending". If the grinding failed (brew returns an empty string), the main task instead ends with "Kitchen ending without espresso"; if the results are not ready within a minute, it catches a TIMEOUT exception.
main () {
   printf ("Kitchen starting");
   <pel> ground = <!> grind("arabica beans");
   <pel> water = <!> heatWater(Water(20));
   <pel> foam = <!> frothMilk("milk");
   <pel> espresso = <!> brew(ground, water);

   try {
      string frothMilk_ret, brew_ret;
      (frothMilk_ret, brew_ret) = <? timeout=60000> (foam, espresso);
      if ( brew_ret == "" ) printf ("Kitchen ending without espresso");
      else printf ("Kitchen ending");
   }
   catch ((foam && espresso)<TIMEOUT>) {
      printf ("Espresso and milk not ready within 1 min");
   }
}

string grind( string beans ) {
    printf("start grinding...");
    sleep(rand()%2000);
    if ( beans == "baked beans") return "are you joking?";
    printf("finished grinding...");
    return("ground coffee of " + beans);
 }

 int heatWater(int temp) {
    printf("heating the water now");
    sleep(rand()%2000);
    printf("hot, it's hot!");
    return 85;
 }

 string frothMilk( string milk ) {
    printf("milk frothing system engaged!");
    sleep(rand()%2000);
    printf("shutting down milk frothing system");
    return "frothed " + milk;
 }

 string brew( <pel> ground, <pel> water ) {
    printf("happy brewing :)");
    sleep(rand()%2000);

    string ret = <?> ground;
    if ( ret == "are you joking?" ) return "";

    <?> water;
    printf("it's brewed!");
    return "espresso";
 }
The tasks communicate their completion via the return statement with a string value. The tasks that depend on them start their own work, then wait for those returned values before continuing with the rest of their work.

Promises - Producer/Consumer Example

This producer/consumer example is taken from the Scala SIP-14 document.
import scala.concurrent.{ Future, Promise }
val p = Promise[T]()
val f = p.future

val producer = Future {
  val r = someComputation
  if (isInvalid(r))
    p failure (new IllegalStateException)
  else {
    val q = doSomeMoreComputation(r)
    p success q
  }
}

val consumer = Future {
  startDoingSomething()
  f onSuccess {
    case r => doSomethingWithResult(r)
  }
}
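The SIP-14 snippet leaves someComputation and its helpers undefined, and its onSuccess callback has since been deprecated in newer Scala versions in favor of map or foreach. A self-contained variant, with placeholder computations of our own standing in for the undefined helpers, might look like:

```scala
import scala.concurrent.{Await, Future, Promise}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

object ProducerConsumer extends App {
  // Placeholder implementations for SIP-14's undefined helpers.
  def someComputation: Int = 21
  def isInvalid(r: Int): Boolean = r < 0
  def doSomeMoreComputation(r: Int): Int = r * 2

  val p = Promise[Int]()
  val f = p.future

  // Producer: completes the promise with success or failure.
  val producer = Future {
    val r = someComputation
    if (isInvalid(r)) p.failure(new IllegalStateException)
    else p.success(doSomeMoreComputation(r))
  }

  // Consumer: transforms the result once the future completes.
  val consumer = f.map { r => s"got $r" }
  println(Await.result(consumer, 1.second))   // prints "got 42"
}
```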
The NERWous C version has the main task pelling the producer and consumer tasks to run in parallel.
main () {
   <pel> prod = <!> producer();
   <!> consumer(prod);
}
int producer () {
   int r = someComputation();
   if (isInvalid(r) ) <end FAILED>;
   else {
      int q = doSomeMoreComputation(r);
      <return> q;
   }
}
void consumer (<pel>prod) {
   startDoingSomething();
   try {
      doSomethingWithResult (<?>prod);
   }
   catch (prod<FAILED>) {}
   catch (prod<...>) {}
}
The producer task either ends with a FAILED exception on an invalid someComputation result, or ends normally with a valid result to be used as the return value of the task.

The consumer task is given a representative of the producer task via the pel input argument prod. It does startDoingSomething, then waits for the producer's return value via the mel wait statement <?>prod. If the wait is successful, it runs doSomethingWithResult. If the wait fails due to a FAILED or any other exception, the consumer just ends without doing anything more.
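In Scala, the consumer's catch-and-ignore pattern maps to recovering from a failed future. A minimal sketch (our own, not from the SIP-14 document):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

object RecoverDemo extends App {
  // A future that fails, like the producer ending with <end FAILED>.
  val failing: Future[Int] = Future { throw new IllegalStateException("FAILED") }

  // Like the consumer's empty catch blocks: on any exception,
  // fall through with a default value instead of propagating.
  val handled: Future[Int] = failing.recover { case _: Exception => -1 }

  println(Await.result(handled, 1.second))   // prints -1
}
```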

