Friday, December 29, 2017

Compound Reader Zones

Welcome » NERWous C » Mel
  1. Reader OR Zones
  2. Reader LIST Zones
  3. Reader AND Zones
In the previous chapter, we were introduced to exclusive reader zones applied to a single mel variable. In this chapter we expand the discussion of reader zones to multiple mel variables. A future chapter will cover reader zones used with structured mels.


Reader OR Zones

We start with the reader OR zone. Let's look at this example where we have two mel variables worked on by three tasks: a Producer, a Consumer, and a Manipulator that Manipulates everything that is Produced before it is Consumed:
#include "nerw.h"
main () {
   <mel> int store1, store2;
   <pel> p = <!> Manipulator (store1, store2);
   <!> Producer (store1, store2);
   <!> Consumer (store1, store2);
}
void Producer (<mel> int store1, <mel> int store2) {
   while ( <?>(store1 || store2) = Produce() );
   <close>store1;
   <close>store2;
}
void Consumer (<mel> int store1, <mel> int store2) {
   while ( true ) {
     try Consume(<?>(store1 || store2));
     catch((store1 && store2)<CLOSED>) break;
   }
}
void Manipulator (<mel> int store1, <mel> int store2) {
   try <? priority=NERW_PRIORITY_HIGHEST as=store> (store1 || store2) {
      store = Manipulate(store);
      <checkout writeover>;
   } <resume>;
   catch ( (store1 && store2)<CLOSED> ) {}
   printf ("Manipulator is done");
}
The two mel channels back up one another. If Producer finds store1 not available, it will try to deposit its product to store2. When Produce generates a zero product, Producer will break out of the production loop and close both mel channels. Likewise, the Consumer continuously tries to Consume from either store1 or store2, whichever is available, until it gets a CLOSED exception on both channels. Both tasks use the mel OR read wait facility.

Reader OR Zone Access

As said previously, the goal of the Manipulator task is to capture all the items from Producer via either the store1 or store2 channel, and do some manipulation on these items before releasing them to the Consumer task. This extends the single-mel Manipulator we saw in the previous chapter. To realize this goal for two mel variables, the Manipulator here also makes use of an exclusive zone to do the manipulation, but relies on the mel OR wait on the mel channels.

This is the behind-the-scenes process (a plain-C sketch of the two-channel wait follows the list):
  1. The Manipulator task puts itself in store1's readers' queue. In the example above, it goes into the NERW_PRIORITY_HIGHEST priority readers' queue so that it can get to any produced items before Consumer does.
     
  2. If it happens (1) to be first on the queue and (2) store1 is valued and (3) this value is not stale, then it checks into the exclusive reader zone and invokes Manipulate on the eponymous variable store representing store1.
     
  3. If one of the conditions for store1 is false, Manipulator will put itself in the NERW_PRIORITY_HIGHEST priority readers' queue of store2. If it happens (1) to be first on this queue and (2) store2 is valued and (3) this value is not stale, then it checks into store2's exclusive reader zone and invokes Manipulate on the eponymous variable store representing store2.
     
  4. Otherwise, Manipulator will wait on both queues at the same time.
     
  5. On the first queue that satisfies all 3 conditions (task first on the queue, mel is valued, and the value is not stale), Manipulator will check into the reader zone with that mel variable. Since this can be either store1 or store2, Manipulator uses the generic store eponymous variable.
     
  6. Inside the reader zone, the task makes reads and writes to the eponymous variable store. Since this is a local variable, the mel wait operator (<?>) is not applicable.
     
  7. After manipulation, the Manipulator task invokes the <checkout writeover> operation to update the mel channel and get out of the reader zone. The task remembers what mel channel is used on check-in so that the checkout behavior will be applied to that mel channel.
     
  8. On checkout of the reader zone, the Manipulator task gets off both priority wait queues - the one for the mel channel it checked into, and the one for the mel channel it was still waiting on. However, as there is a <resume> operator used in our example, the task is put back on both waiting queues right away.
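
For readers who think in plain C, here is a minimal sketch (assuming pthreads, not NERWous C) of the "wait on either channel" pattern described above. The slot variables and the or_wait_read helper are made up for the illustration, and the sketch ignores priorities, staleness, and the checkout/writeover step that the CHAOS runtime would handle.
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock    = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  changed = PTHREAD_COND_INITIALIZER;
static int  slot1, slot2;                  /* two one-slot "mel" channels */
static bool full1 = false, full2 = false;  /* slot holds an unread value  */

/* Block until either slot holds a value, take it, and report which slot. */
static int or_wait_read(int *which)
{
    int v;
    pthread_mutex_lock(&lock);
    while (!full1 && !full2)               /* wait on both "queues" at once */
        pthread_cond_wait(&changed, &lock);
    if (full1) { v = slot1; full1 = false; *which = 1; }
    else       { v = slot2; full2 = false; *which = 2; }
    pthread_cond_broadcast(&changed);      /* a writer may now refill a slot */
    pthread_mutex_unlock(&lock);
    return v;
}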

Reader OR Zone Traverse

The behind-the-scenes description above uncovers a subtle difference between
<? priority=NERW_PRIORITY_HIGHEST as=store> (store1 || store2)
and
<? priority=NERW_PRIORITY_HIGHEST as=store> (store2 || store1)
The first mel item in the OR list is checked first. If it is constantly re-valued by a writer task, the mel wait on that first item is likely to succeed, and the reader OR zone spends more time with the first mel than with the second.

This issue can be resolved by using the random traverse behavior for OR reads:
<? priority=NERW_PRIORITY_HIGHEST as=store> (store1 ||<random> store2)
With the random traverse behavior, the check for store1 and store2 is randomized instead of serialized.
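
As a rough plain-C illustration (not NERWous C), a random traverse simply flips the probe order on each attempt; the chan structure and try_read helper below are made-up stand-ins for the mel machinery.
#include <stdbool.h>
#include <stdlib.h>

struct chan { int value; bool full; };     /* toy one-slot channel */

static bool try_read(struct chan *c, int *out)
{
    if (!c->full) return false;
    *out = c->value;
    c->full = false;
    return true;
}

/* Probe the two channels in a random order, mimicking store1 ||<random> store2. */
static bool read_either_random(struct chan *a, struct chan *b, int *out)
{
    if (rand() & 1)
        return try_read(a, out) || try_read(b, out);
    return try_read(b, out) || try_read(a, out);
}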

Reader OR Zone Resumption

On checkout of the reader zone, the Manipulator invokes the <resume> operation to jump back to the mel OR zone entrance, waiting for either store1 or store2 again. The task is put at the back of the queues, but since it is the only task in the NERW_PRIORITY_HIGHEST queue, it is at the top of both queues again.

Reader OR Zone Exception

If one channel has been closed, the mel OR wait will focus solely on the remaining channel. If both channels have been closed, the (store1 && store2)<CLOSED> exception will be raised, causing the Manipulator to abort its wait for the exclusive zone. In the above example, the Manipulator executes the printf statement before it ends.

Reader OR Zone Cases

The previous Manipulator does not care whether it selects store1 or store2. It processes both the same way via the stand-in store. What if it does need to handle store1 somewhat differently from store2? Let's explore such a case:
void Manipulator (<mel> int store1, <mel> int store2) {
   try <? priority=NERW_PRIORITY_HIGHEST as=store> (store1 || store2) {
      printf ("Select [%ll] to manipulate", store<id>);
      if ( store<id> == store1<id> )
         store = Manipulate_1 (store);
      else
         store = Manipulate_2 (store);
      <checkout writeover>;
   } <resume>;
   catch ( (store1 && store2)<CLOSED> ) {}
   printf ("Manipulator is done");
}
By using the <id> property, Manipulator can make a case of using either Manipulate_1 or Manipulate_2 depending on what mel variable is selected.

Reader OR Zone Gotcha!

A knowledgeable reader will see that the above implementation of Manipulator is not correct. It will let Producer items slip by and go directly to the Consumer without being Manipulated. For example, while Manipulator is working on store1, store2 is available for Producer to deposit a new product and for Consumer to consume it.

The correct solution is not to use the reader OR zone, but to use two single reader zones, one for store1 and the other for store2:
main () {
   <mel> int store1, store2;
   <pel> p1 = <!> Manipulator (store1);
   <pel> p2 = <!> Manipulator (store2);
   <!> Producer (store1, store2);
   <!> Consumer (store1, store2);
}
void Producer (<mel> int store1, <mel> int store2) {
   while ( <?>(store1 || store2) = Produce() );
   <close>store1;
   <close>store2;
}
void Consumer (<mel> int store1, <mel> int store2) {
   while ( true ) {
      try Consume(<?>(store1 || store2)); 
      catch((store1 && store2)<CLOSED>) break;
   }
}
void Manipulator (<mel> int store) {
   try <? priority=NERW_PRIORITY_HIGHEST>(store) {
      store = Manipulate(store);
      <checkout writeover>;
   } <resume>;
   catch ( store<CLOSED> ) {}
   printf ("Manipulator for [%s] is done", store<name>);
}
Sometimes it is necessary to use a bad example to introduce a new feature.


Reader LIST Zones

In the previous examples, we had Producer and Consumer take in two mel variables but use only one of them at a time. Let's modify the example so that these tasks make use of both of them at once. This also allows us to change the Manipulator to make use of the reader LIST zone.
main () {
   <mel> int store1, store2;
   <pel> p = <!> Manipulator (store1, store2);
   <!> Producer (store1, store2);
   <!> Consumer (store1, store2);
}
void Producer (<mel> int store1, <mel> int store2) {
   while ( <?>(store1, store2) = ProduceTwoItems() );
   <close>(store1 && store2);
}
void Consumer (<mel> int store1, <mel> int store2) {
   while ( true ) {
     try ConsumeTwoItems(<?>(store1, store2));
     catch((store1 || store2)<CLOSED>) break;
   }
}
void Manipulator(<mel> int store1, <mel> int store2) {
   try <? priority=NERW_PRIORITY_HIGHEST>(store1, store2) {
      store1 = Manipulate(store1);
      store2 = Manipulate(store2);
      <checkout writeover>;
   } <resume>;
   catch ( (store1 || store2)<CLOSED> ) {}
}
This is the behind-the-scenes process for a reader LIST zone access (a plain-C sketch of the two-slot wait follows the list):
  1. The Manipulator task puts itself in both store1's and store2's NERW_PRIORITY_HIGHEST priority queues.
     
  2. In each queue, whenever (1) the Manipulator task becomes first on the queue, (2) the corresponding mel is valued, and (3) this value is not stale, it takes hold of that mel and waits for the other queue to also satisfy those three conditions.
     
  3. While waiting at the top of a queue, the Manipulator task allows another reader task to get a stale value. However, it will not allow another reader task to get a new value.
     
  4. Once it can get hold of both the required mels, the Manipulator task will check into the exclusive reader zone.
     
  5. From this time on, no reader task can get the blocked mel values. Non-intrusive readonly access and the snapshot operation are still permissible.
     
  6. In the reader zone, the Manipulator task uses the local eponymous variables, which are initialized with the original values of the remote mel variables.
     
  7. Once it is done with the manipulations, the Manipulator task invokes the <checkout writeover> to replace the values of the mel variables with the values of the corresponding eponymous variables.
     
  8. On checkout of the reader zone, the Manipulator task gets off both priority wait queues. However, since there is a <resume> operator, the task is put back on both waiting queues right away.
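
Below is a minimal plain-C sketch (assuming pthreads, not the actual CHAOS runtime) of the "hold each slot as it fills, proceed only when holding both" wait described above. The names are hypothetical, and the sketch ignores priorities and the stale-value concession of step 3.
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock    = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  changed = PTHREAD_COND_INITIALIZER;
static int  val1, val2;                    /* two one-slot "mel" channels */
static bool full1 = false, full2 = false;

/* Block until this reader has captured a value from both slots. */
static void list_wait_read(int *out1, int *out2)
{
    bool have1 = false, have2 = false;
    pthread_mutex_lock(&lock);
    while (!have1 || !have2) {
        if (!have1 && full1) { *out1 = val1; full1 = false; have1 = true; }
        if (!have2 && full2) { *out2 = val2; full2 = false; have2 = true; }
        if (!have1 || !have2)
            pthread_cond_wait(&changed, &lock);   /* wait for the missing slot */
    }
    pthread_cond_broadcast(&changed);             /* both slots are free again */
    pthread_mutex_unlock(&lock);
}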

Since the reader LIST zone requires both mels, a closure of either one is bad for Manipulator. This is the reason its catch on the CLOSED exception uses an OR clause. Compare this with the reader OR zone example where the Manipulator triggers on an AND CLOSED exception.

Reader LIST Zone Gotcha!

Can a produced item sneak by from the Producer to the Consumer without going through the Manipulator? This can happen with the reader OR zone, but not with the reader LIST zone.

Like the reader OR zone Manipulator, the reader LIST zone Manipulator is always present in a higher priority queue than the Consumer, so it always has first dibs on the mel variables. Unlike the reader OR zone version though, the reader LIST zone Manipulator blocks both mel variables when it is in the reader zone, preventing the Consumer from sneaking by.

On the other hand, a Manipulator-like task can introduce starvation. If, instead of checking out with <checkout writeover>, which leaves the mel value available for Consumer, it were to use <checkout>, which removes the mel value, the Consumer would never see a product from Producer.


Reader AND Zones

The reader AND zone uses the mel AND wait in order to have exclusive access to all the specified mel items.

Let's rewrite the Manipulator task using a reader AND zone:
void Manipulator(<mel> int store1, <mel> int store2) {
   try <? priority=NERW_PRIORITY_HIGHEST>(store1 && store2) {
      store1 = Manipulate(store1);
      store2 = Manipulate(store2);
      <checkout writeover>;
   } <resume>;
   catch ( (store1 || store2)<CLOSED> ) {}
}
The above Manipulator is a bad Manipulator because it will allow products from the Producer to slip by and go directly to the Consumer. Let's see how so.
  1. The Manipulator task puts itself in both store1's and store2's NERW_PRIORITY_HIGHEST priority queues.
     
  2. In each queue whenever the Manipulator task becomes first on the queue, it will join the "top-of-the-queue" readers group, as specified by the mel AND wait process. It then waits to join the "top-of-the-queue" readers group of the other requested mel variable.
     
  3. When the Manipulator task is in a "top-of-the-queue" group for one mel reader's queue but not the other, it will allow other reader tasks to "jump the line" on that queue to get the mel value -- stale or new, in accordance with the mel AND wait process.
     
  4. When the Manipulator task is in the "top-of-the-queue" groups of both mel variables, it will check if both mel variables are (1) valued and (2) not stale. If one of the conditions is not true for either mel, the task keeps waiting. During this wait, it will allow other reader tasks to "jump the line" on both queues to get the mel values -- stale or new.
     
  5. Once all the conditions are met at both queues, the Manipulator task blocks both mels at the same time, and checks into the exclusive reader zone.
     
  6. From this time on, no reader task can "jump the line" and get the mel values. Non-intrusive readonly access and the snapshot operation are still permissible.
     
  7. In the reader zone, the Manipulator task uses the local eponymous variables, which are initialized with the values of the mel variables.
     
  8. Once it is done with the manipulations, the Manipulator task invokes the <checkout writeover> to replace the values of the mel variables with the values of the corresponding eponymous variables.
     
  9. On checkout of the reader zone, the Manipulator task gets off both priority wait queues. However, since there is a <resume> operator, the task is put back on both waiting queues right away.

The use of the "top-of-the-queue" groups allows other reader tasks to "jump the line" and get the mel value even if this mel value is new to the Manipulator task. In our example, this reader task is the Consumer task, and "jumping the line" allows it to consume a raw product that has not been Manipulated.

On the other hand, the use of "top-of-the-queue" groups prevents deadlocks when multiple tasks vie for the same resources in a circular way, as in the Dining Philosophers example. Unlike the reader LIST zone, where tasks take hold of the mel variables by themselves without knowledge of the similar needs of other tasks, the reader AND zone collects all such needs in "top-of-the-queue" groups, giving the NERW runtime the full knowledge it needs to prevent deadlocks when granting exclusive access.
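
The CHAOS arbitration itself is not specified here, but the classic plain-C way to get the same guarantee is to remove the circular wait by imposing a single global acquisition order on the resources. The sketch below is only that textbook Dining Philosophers illustration, not the NERW runtime.
#include <pthread.h>

#define NFORKS 5
static pthread_mutex_t fork_lock[NFORKS];

static void init_forks(void)
{
    for (int i = 0; i < NFORKS; ++i)
        pthread_mutex_init(&fork_lock[i], NULL);
}

/* Every philosopher locks the lower-numbered fork first, so no cycle of
 * waiters can ever form and the dinner cannot deadlock. */
static void pick_up_both(int left, int right)
{
    int lo = (left < right) ? left : right;
    int hi = (left < right) ? right : left;
    pthread_mutex_lock(&fork_lock[lo]);
    pthread_mutex_lock(&fork_lock[hi]);
}

static void put_down_both(int left, int right)
{
    pthread_mutex_unlock(&fork_lock[left]);
    pthread_mutex_unlock(&fork_lock[right]);
}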



Tuesday, December 12, 2017

CommonJS Promises/A Example

Welcome » NERWous C » Examples
  1. CommonJS Promises
  2. NERWous C Sample


CommonJS Promises

The goal of the CommonJS group is to build a better JavaScript ecosystem. One of its proposals is Promises/A: "a promise represents the eventual value returned from the single completion of an operation".

The following example, which displays the web contents of the first web link in a tweet, is taken from an article that extols the use of Promises/A:
getTweetsFor("domenic") // promise-returning function
  .then(function (tweets) {
    var shortUrls = parseTweetsForUrls(tweets);
    var mostRecentShortUrl = shortUrls[0];
    return expandUrlUsingTwitterApi(mostRecentShortUrl); // promise-returning function
  })
  .then(httpGet) // promise-returning function
  .then(
    function (responseBody) {
      console.log("Most recent link text:", responseBody);
    },
    function (error) {
      console.error("Error with the twitterverse:", error);
    }
  );
The promise of Promises/A is that promises can be chained, and exceptions can bubble up to someone who can handle that failure.

NERWous C Sample

The verbose first version shows all the tasks:
/* VERSION 1 - Asynchronous */
try {
   <pel> p_tweets = <!> getTweetsFor("domenic");    // parallel execution
   char* tweets = <?>p_tweets;     // the mel wait unblocks the thread for other tasks
   char* shortUrls = parseTweetsForUrls(tweets);     // serial execution
   char* mostRecentShortUrl = shortUrls[0];          // serial execution
   <pel> p_twitter = <!> expandUrlUsingTwitterApi(mostRecentShortUrl);    // parallel
   char* url = <?>p_twitter;     // mel wait unblocks the thread for other tasks
   <pel> p_http = <!> httpGet(url);     // parallel execution
   char* responseBody = <?>p_http;     // mel wait unblocks the thread for other tasks
   printf ("Most recent link text: %s", responseBody);
}
catch (p_tweets<...>) {
   printf ("Error [%s] on [getTweetsFor] due to [%s]",
      p_tweets<exception>, p_tweets<why>);
}
catch (p_twitter<...>) {
   printf ("Error [%s] on [expandUrlUsingTwitterApi] due to [%s]",
      p_twitter<exception>, p_twitter<why>);
}
catch (p_http<...>) {
   printf ("Error [%s] on [httpGet] due to [%s]",
      p_http<exception>, p_http<why>);
}
The second version is more compact; it shows that the tasks can be chained together and that errors can bubble up to the top exception handler:
/* VERSION 2 - Asynchronous */
try {
   char* shortUrls = parseTweetsForUrls(<?><!> getTweetsFor("domenic"));
   char* mostRecentShortUrl = shortUrls[0];
   char* responseBody = <?><!> httpGet(
      <?><!> expandUrlUsingTwitterApi(mostRecentShortUrl)
      );
   printf ("Most recent link text: %s", responseBody);
}
catch (pel<...>) {
   printf ("Error [%s] on [%s] due to [%s]",
      pel<exception>, pel<name>, pel<why>);
}
The double header <?><!> means forking the task to run in parallel (<!>) and then waiting for its result (<?>).
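
In plain C terms, the <?><!> pair is roughly "create a thread, then immediately wait for its result". The sketch below is only that analogy, with a made-up get_tweets stub standing in for the real promise-returning functions.
#include <pthread.h>
#include <stdio.h>

/* Hypothetical worker standing in for getTweetsFor(); any thread body works. */
static void *get_tweets(void *arg)
{
    (void)arg;
    return "tweets with a short url";   /* pretend result */
}

int main(void)
{
    pthread_t tid;
    void *result;
    pthread_create(&tid, NULL, get_tweets, NULL);  /* roughly the <!> part */
    pthread_join(tid, &result);                    /* roughly the <?> part */
    printf("%s\n", (char *)result);
    return 0;
}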

The generic keyword pel in the catch statement represents a task that fails. The name of the task is found via the property pel<name>.

Removing the NERWous C symbols from the compact version above results in the synchronous C version:
/* VERSION 3 - Synchronous */
try {
  char* shortUrls = parseTweetsForUrls(getTweetsFor("domenic"));  /* blocking */
  char* mostRecentShortUrl = shortUrls[0];
  char* responseBody = httpGet(
     expandUrlUsingTwitterApi(mostRecentShortUrl)
     ); // blocking x 2
   printf ("Most recent link text: %s", responseBody);
} catch (error) {
  printf("Error with the twitterverse: %s ", error);
}
In other words, it is sometimes very easy to transform a serial synchronous version into a parallel asynchronous version using NERWous C. Just pepper it juicily with pel (<!>) and mel (<?>) constructs.



Wednesday, December 6, 2017

Concurrent Programming In Scala

Welcome » NERWous C » Examples
  1. Scala Language
  2. Actors Model
  3. Parallel Collections
  4. Futures and Promises


Scala Language

Publicly released in 2004, the Scala programming language was originally designed to be more concise than Java and to offer functional programming features that Java was missing at the time. Since then, Scala has expanded from running solely on the Java Virtual Machine to running on other platforms, such as JavaScript. Current information about the language can be found on its official web site, www.scala-lang.org.

Scala supports parallel and concurrent programming via the following features:
  1. Actors Model
  2. Parallel Collections
  3. Futures and Promises
For each feature, let's study an example written in Scala, and see how it can be rewritten similarly in NERWous C.


Actors Model

Scala Actors are concurrent processes that communicate by exchanging messages.

Scala Actors - Ping-Pong Example

This ping-pong example uses the deprecated Scala Actors library. Scala Actors has since migrated to Akka. However, since the ping-pong example is described in more detail in the Scala Actors article than in the terse Akka documentation, it is used here.
case object Ping
case object Pong
case object Stop
import scala.actors.Actor
import scala.actors.Actor._
class Ping(count: int, pong: Actor) extends Actor {
  def act() {
    var pingsLeft = count - 1
    pong ! Ping
    while (true) {
      receive {
        case Pong =>
          if (pingsLeft % 1000 == 0)
            Console.println("Ping: pong")
          if (pingsLeft > 0) {
            pong ! Ping
            pingsLeft -= 1
          } else {
            Console.println("Ping: stop")
            pong ! Stop
            exit()
          }
      }
    }
  }
}
class Pong extends Actor {
  def act() {
    var pongCount = 0
    while (true) {
      receive {
        case Ping =>
          if (pongCount % 1000 == 0)
            Console.println("Pong: ping "+pongCount)
          sender ! Pong
          pongCount = pongCount + 1
        case Stop =>
          Console.println("Pong: stop")
          exit()
      }
    }
  }
}
object pingpong extends Application {
  val pong = new Pong
  val ping = new Ping(100000, pong)
  ping.start
  pong.start
}
The ping-pong example has a Ping actor sending the Pong actor a "Ping" message. Upon receiving it, the Pong actor replies with a "Pong" message. After the configured number of interactions (100,000 in the example, with a message printed every 1,000), the Ping actor is done playing and sends a "Stop" message. When Pong receives the "Stop", it also stops playing.

NERWOUS C Version 1

The first version rewrite in NERWous C hews to the Scala example by using a receive mel input argument to represent the message passing between Scala actors.
main () {
   <pel>pong = <! name="Pong">Pong (null);
   <pel>ping = <! name="Ping">Ping (100000, null);
}
void Ping (int count, <mel> string receive) {
   <pel>pong;
   <? pel name="Pong" started>pong;

   int pingsLeft = count - 1;
   <?>pong.receive = "Ping";
   while ( 1 ) {
      switch ( <?>receive ) {   /* wait to receive */
         case "Pong";
            if (pingsLeft % 1000 == 0)
               printf ("Ping: pong");
            if (pingsLeft > 0) {
               <?>pong.receive = "Ping";
               --pingsLeft;
            } else {
               printf ("Ping: stop");
               <?>pong.receive = "Stop";
               <return>;
            }
            break;
      }
   }
}
void Pong (<mel> string receive) {
   <pel>ping;
   <? pel name="Ping" started>ping;

   int pongCount = 0;
   while ( 1 ) {
      switch ( <?>receive ) {
         case "Ping":
            if (pongCount % 1000 == 0)
               printf ("Pong: ping " + pongCount);
             <?>ping.receive = "Pong";
             ++pongCount;
             break;
        case "Stop":
           printf ("Pong: stop");
           <return>;
      }
   }
}
The tricky thing about the Ping-Pong example is how Pong knows about Ping since it is created before Ping ever exists. The Scala solution is to use the sender actor which represents the actor that sends the message. When Pong receives the "Ping" message, the sender is the de facto Ping actor. The NERWous C version uses the wait-for-pel statement, which allows the task Pong to wait for a task named "Ping" to have a certain state (here, we pick the started state), and initializes the local pel variable ping with information about the task named "Ping":
<? pel name="Ping" started>ping;
Scala uses the receive method for an actor to send messages to another actor. The NERWous C version above uses the mel input argument. It is named receive here but can be any valid name. When Ping first runs, it sends a "Ping" message to Pong's receive mel input argument:
<?>pong.receive = "Ping";
In the meantime, the Pong task waits on its receive mel input argument to be valued with a message, either "Ping" or "Stop". With the former, it sends back a "Pong" message. With the latter, it just quits by running the <return> statement.
A computer linguist will notice that the C language, on which NERWous C is based, does not support a native string type. It is used here for code simplicity, since the focus is on the concurrency features and not on the base language.

NERWOUS C Version 2

Let's now rewrite Version 1 to use the NERWous C "streaming" feature. Instead of having Ping send a "Ping" message directly to Pong, we will have Ping stream its "Ping" messages via the release operation to its mel output argument, and have Pong access Ping's output messages:
main () {
   <pel>pong = <! name="Pong">Pong ();
   <pel>ping = <! name="Ping">Ping (100000);
}
<mel> string Ping (int count) {
   <pel>pong;
   <? pel name="Pong" started>pong;

   int pingsLeft = count - 1;
   <release> "Ping";   /* stream first "Ping" */

   <?>pong;   /* wait for Pong to stream */
   if (pingsLeft % 1000 == 0)
      printf ("Ping: pong");
   if (pingsLeft > 0) {
      <release> "Ping";   /* stream "Ping" again */
      --pingsLeft;
      <resume>;    /* resume the wait for Pong to stream */
   } else {
      printf ("Ping: stop");
   }
}
<mel> string Pong () {
   <pel>ping;
   <? pel name="Ping" started>ping;

   int pongCount = 0;
   try {
      <?>ping;   /* wait for Ping to stream */
      if (pongCount % 1000 == 0)
         printf ("Pong: ping " + pongCount);
      <release> "Pong";   /* stream "Pong" */
      ++pongCount;
      <resume>;    /* resume the wait for Ping to stream */
   }
   catch ( ping<ENDED> ) { }
}
Two changes are made in Version 2. First, the while loop has been replaced by the resume operation, which repeats the mel wait for the streaming messages. Second, the "Stop" message has been removed; the Pong task knows that Ping has ended via the ENDED exception.


Parallel Collections

The parallel collections feature in Scala is discussed in this article. The examples to illustrate this feature are:
  1. Map
  2. Fold
  3. Filter
Map

This example uses a parallel map to transform a collection of String to all-uppercase:
val lastNames = List("Smith","Jones","Frankenstein","Bach","Jackson","Rodin").par
lastNames.map(_.toUpperCase)
The result of the run is:
SMITH, JONES, FRANKENSTEIN, BACH, JACKSON, RODIN

The NERWous C version is more verbose since there is no built-in map function. Again, for simplicity, we will use the fictitious string type which does not exist in the C language:
/* VERSION 1 */
#define NUM 6
string lastNames[NUM] = {"Smith","Jones","Frankenstein","Bach","Jackson","Rodin"};
for (int i=0; i<NUM; ++i) {
   lastNames[i] = <!> toUpperCase(lastNames[i]);
}
The NERWous C version has a serial loop that pels a new inline task in each iteration. The inline task takes in lastNames[i] as a local variable, and ends with its toUpperCase value. The returned value is then re-assigned to the local variable lastNames[i].

Due to the assignment back to lastNames[i], which is under the main task's context, each inline task has to finish running before the next inline task can run. So although the inline tasks could run in parallel with one another, they practically run serially, albeit each possibly on a different cel element.

Let's now transform lastNames into a mel array so that it can be accessed in parallel by the toUpperCase inline tasks:
/* VERSION 2 */
#define NUM 6
<mel>string lastNames[NUM] = {"Smith","Jones","Frankenstein","Bach","Jackson","Rodin"};
for (int i=0; i<NUM; ++i) {
   <!> { <? replace>lastNames[i] = toUpperCase(?); }
}
Each inline task that is pelled on each iteration of the for loop now truly runs in parallel, each accessing its portion of the shared mel array lastNames. Each inline task uses the reader zone shortcut to replace in place the value of the mel element lastNames[i].

Fold

This example adds up the values from 1 to 10000.
val parArray = (1 to 10000).toArray.par
parArray.fold(0)(_ + _)
The result of 1 + 2 + 3 + ... + 10000 is 50005000. Again, the NERWous C version is much more verbose since there is no built-in fold capability:
#define NUMCOUNT 10000
int parArray[NUMCOUNT];
for (i = 0; i<NUMCOUNT; ++i)
   parArray[i] = i+1;    /* initialize array */

<mel> int sum;
<collect> for (i = 0; i<NUMCOUNT; i += 2 ) {
    <! import="parArray[i] ..<flex ubound=NUMCOUNT uvalue=0>.. parArray[i+1]"> {
        <?> sum += (parArray[i] + parArray[i+1]);
    }
} <? ENDED>;
printf ("Summation value %d", <?>sum);
The first for loop initializes the parArray array with the values 1, 2, and so on up to 10000. The second for loop takes every two elements of the array and does the summation in separate tasks. Each task gets its local parArray elements imported, does the addition, and adds the result into the shared sum mel variable.

The second for loop is run inside a collect-ENDED block to make sure that all tasks that it pels have ended before the program continues with the printf statement. This ensures that the mel variable sum contains all the summations.

Each iteration in the second for loop creates a separate inline task to do the summation. This task requires two local values passed via the import operation. The import statement could be written simply as:
<! import="parArray[i],parArray[i+1]">
However, if the number of items in parArray is odd, the last iteration of the for loop will generate an out-of-bound exception on parArray[i+1]. The use of the flex addendum to the consecutive construct allows the run-time environment to assign the uvalue value (0) to any parArray item with an index equal to or greater than the ubound value (NUMCOUNT).
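
For comparison, here is a hedged plain-C sketch using pthreads (not NERWous C, and not how CHAOS would schedule the work) that mirrors the same pairwise fold: one thread per pair adds its two elements into a shared sum under a mutex, and the join loop plays the role of the collect-ENDED block. Spawning one thread per pair is wasteful and is done only to mirror the structure above.
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

#define NUMCOUNT 10000

static int parArray[NUMCOUNT];
static long long sum;
static pthread_mutex_t sum_lock = PTHREAD_MUTEX_INITIALIZER;

static void *add_pair(void *arg)
{
    int i = (int)(intptr_t)arg;
    int a = parArray[i];
    int b = (i + 1 < NUMCOUNT) ? parArray[i + 1] : 0;  /* mimics the flex upper bound */
    pthread_mutex_lock(&sum_lock);
    sum += a + b;
    pthread_mutex_unlock(&sum_lock);
    return NULL;
}

int main(void)
{
    for (int i = 0; i < NUMCOUNT; ++i)
        parArray[i] = i + 1;                           /* initialize array */

    pthread_t tid[NUMCOUNT / 2];
    for (int i = 0; i < NUMCOUNT; i += 2)
        pthread_create(&tid[i / 2], NULL, add_pair, (void *)(intptr_t)i);
    for (int i = 0; i < NUMCOUNT / 2; ++i)             /* like collect-ENDED */
        pthread_join(tid[i], NULL);

    printf("Summation value %lld\n", sum);             /* 50005000 */
    return 0;
}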

Filter

This example uses a parallel filter to select the last names that come alphabetically after the letter “J”.
val lastNames = List("Smith","Jones","Frankenstein","Bach","Jackson","Rodin").par
lastNames.filter(_.head >= 'J')
The result of the run is Smith, Jones, Jackson, Rodin. This is the NERWous C version:
#define NUM 6
string lastNames[NUM] = { "Smith","Jones","Frankenstein","Bach","Jackson","Rodin" };
for (int i=0; i<NUM; ++i) {
   <!> { if (head(lastNames[i]) >= 'J') printf("%s", lastNames[i]); }
}
Each iteration of the for loop pels a new task that runs independently from each other. Each task gets its local array element lastNames[i] imported for processing. Since each task runs independently and concurrently, the order of the names printed out is not deterministic.


Futures and Promises

Under Scala, a future is a read-only reference to a yet-to-be-completed value, while a promise is used to complete a future, either successfully or with an exception on failure.

Futures - Coffee Making Example

The coffee making example is described in this article. The complete code is taken from GitHub:
import scala.concurrent._
import scala.concurrent.duration._
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._
import scala.util.Random
import scala.util.Try

object kitchen extends App {

  /////////////////////////////
  // Some type aliases, just for getting more meaningful method signatures:
  type CoffeeBeans = String
  type GroundCoffee = String
  case class Water(temperature: Int)
  type Milk = String
  type FrothedMilk = String
  type Espresso = String
  type Cappuccino = String

  // some exceptions for things that might go wrong in the individual steps
  // (we'll need some of them later, use the others when experimenting
  // with the code):
  case class GrindingException(msg: String) extends Exception(msg)
  case class FrothingException(msg: String) extends Exception(msg)
  case class WaterBoilingException(msg: String) extends Exception(msg)
  case class BrewingException(msg: String) extends Exception(msg)
  /////////////////////////////
  def combine(espresso: Espresso, frothedMilk: FrothedMilk): Cappuccino = "cappuccino"

  def grind(beans: CoffeeBeans): Future[GroundCoffee] = Future {
    println("start grinding...")
    Thread.sleep(Random.nextInt(2000))
    if (beans == "baked beans") throw GrindingException("are you joking?")
    println("finished grinding...")
    s"ground coffee of $beans"
  }

  def heatWater(water: Water): Future[Water] = Future {
    println("heating the water now")
    Thread.sleep(Random.nextInt(2000))
    println("hot, it's hot!")
    water.copy(temperature = 85)
  }

  def frothMilk(milk: Milk): Future[FrothedMilk] = Future {
    println("milk frothing system engaged!")
    Thread.sleep(Random.nextInt(2000))
    println("shutting down milk frothing system")
    s"frothed $milk"
  }

  def brew(coffee: GroundCoffee, heatedWater: Water): Future[Espresso] = Future {
    println("happy brewing :)")
    Thread.sleep(Random.nextInt(2000))
    println("it's brewed!")
    "espresso"
  }

  ///////////// Business logic
  println("Kitched starting")
  def prepareCappuccino(): Future[Cappuccino] = {
    val groundCoffee = grind("arabica beans")
    val heatedWater = heatWater(Water(20))
    val frothedMilk = frothMilk("milk")
    for {
      ground <- groundCoffee
      water <- heatedWater
      foam <- frothedMilk
      espresso <- brew(ground, water)
    } yield combine(espresso, foam)
  }
  val capo = prepareCappuccino()
  Await.ready(capo, 1 minutes)
  println("Kitched ending")
}
The NERWous C version has the main task pel the tasks that do the grinding (grind), the water heating (heatWater), the milk frothing (frothMilk) and the espresso brewing (brew), all to run in parallel. The brew task waits for the grind task to have the coffee beans ground, and for the heatWater task to have the water heated to the requested temperature. The main task, after pelling all the activities, waits for brew and frothMilk to be done before declaring "Kitchen ending". If the grinding failed, brew returns an empty string and the main task ends with "Kitchen ending without espresso". If the espresso and frothed milk are not ready before the wait times out, the main task catches a TIMEOUT exception instead.
main () {
   printf ("Kitchen starting");
   <pel> ground = <!> grind("arabica beans");
   <pel> water = <!> heatWater(Water(20));
   <pel> foam = <!> frothMilk("milk");
   <pel> espresso = <!> brew(ground, water);

   try {
      string frothMilk_ret, brew_ret;
      (frothMilk_ret, brew_ret) = <? timeout=6000> (foam, espresso);
      if ( brew_ret == "" ) printf ("Kitchen ending without espresso");
      else printf ("Kitchen ending");
   }
   catch ((foam && espresso)<TIMEOUT>) {
      printf ("Espresso and milk not ready within 1 min");
   }
}

string grind( string beans ) {
    printf("start grinding...");
    sleep(rand()%2000);
    if ( beans == "baked beans") return "are you joking?";
    printf("finished grinding...");
    return("ground coffee of " + beans);
 }

 int heatWater(int temp) {
    printf("heating the water now")
    sleep(rand()%2000);
    printf("hot, it's hot!")
    return 85;
 }

 string frothMilk( string milk ) {
    printf("milk frothing system engaged!")
    sleep(rand()%2000);
    printf("shutting down milk frothing system");
    return "frothed " + milk;
 }

 string brew( <pel> ground, <pel> water ) {
    printf("happy brewing :)");
    sleep(rand()%2000);

    string ret = <?> ground;
    if ( ret == "are you joking?" ) return "";

    <?> water;
    printf("it's brewed!");
    return "espresso";
 }
The tasks communicate their completion via the return statement with a string value. The tasks that depend on those ending tasks start their own work, then wait for those returned values before continuing with the rest of it.

Promises - Producer/Consumer Example

This producer/consumer example is taken from the Scala SIP-14 document.
import scala.concurrent.{ Future, Promise }
val p = Promise[T]()
val f = p.future

val producer = Future {
  val r = someComputation
  if (isInvalid(r))
    p failure (new IllegalStateException)
  else {
    val q = doSomeMoreComputation(r)
    p success q
  }
}

val consumer = Future {
  startDoingSomething()
  f onSuccess {
    case r => doSomethingWithResult(r)
  }
}
The NERWous C version has the main task pelling the producer and consumer tasks to run in parallel.
main () {
   <pel> prod = <!> producer();
   <!> consumer(prod);
}
int producer () {
   int r = someComputation();
   if (isInvalid(r) ) <end FAILED>;
   else {
      int q = doSomeMoreComputation(r);
      <return> q;
   }
}
void consumer (<pel>prod) {
   startDoingSomething();
   try {
      doSomethingWithResult (<?>prod);
   }
   catch (prod<FAILED>) {}
   catch (prod<...>) {}
}
The producer task either ends with a FAILED exception on an invalid someComputation result, or ends normally with a valid result to be used as the return value of the task.

The consumer task is given a representative of the producer task via the pel input argument prod. It does startDoingSomething then waits for producer return value via the mel wait statement <?>prod. If the wait is successful, it will do doSomethingWithResult. If the wait fails due to a FAILED or other exception, the consumer just ends without doing anything more.



Sunday, September 10, 2017

Mel Properties

Welcome » NERWous C » Mel
  1. Mel Properties Cache
  2. Mel Properties Categories
    1. IN Properties
    2. OUT Properties
    3. SET Properties
    4. CORE Properties
  3. Mel Properties Constants
  4. Mel ID Property
  5. Mel URL Property
  6. Mel Value Property
  7. Mel Value Auto Operators
  8. Mel Status Property
  9. Mel Condition Property
  10. Mel Name Property
    1. Mel Name Scope Rule
    2. Mel Element Reference
    3. Mel Name Usage


Mel Properties Cache

The NERW Concurrency Model posits that tasks run in processing elements (pel) and share data contained in memory elements (mel). The pels and mels are hosted on computer elements (cel). The cels are distributed over a network of execution, read and write (nerw). When accessing shared data, a pel may have to cross the nerw network to reach a different cel that hosts the requested mel. This effort is resource intensive and incurs latency, reducing the throughput of a concurrent program.

To minimize the nerw impediments, NERWous C supports the concept of properties. Whenever a task takes on an operation that reaches out to a mel, the data that is returned to the task not only contains the result of the operation, but also other information about the mel. This information is saved locally into a cache set up inside the mel variable of the same name as the remote mel element. This eponymous mel variable represents the connection from the task to the mel element. Pieces of information in that cache are called properties.

The properties cache is the view of the mel at the time of the last operation the task conducts against that mel. This view will eventually become stale when other tasks update the shared mel element. To refresh its properties cache, a task can do another altering mel read or write operation, or invoke the non-altering snapshot operation.
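
As a mental model only, the eponymous mel variable can be pictured as a small local record whose fields mirror the property names used in the rest of this chapter. The C struct below is a hypothetical sketch, not the actual CHAOS layout.
struct mel_cache {            /* local, per-task view of one mel element     */
    long long   id;           /* CORE: numeric identifier                    */
    const char *url;          /* CORE: string identifier                     */
    int         value;        /* OUT:  value returned by the last read       */
    unsigned    status;       /* OUT:  mel status after the last operation   */
    unsigned    condition;    /* OUT:  mel status before the last operation  */
    int         error;        /* OUT:  0 on success                          */
    const char *why;          /* OUT:  reason for a non-zero error           */
    int         priority;     /* IN:   priority used by the last request     */
    int         timeout;      /* IN:   timeout used by the last request      */
    int         buffer;       /* SET:  number of buffer slots                */
    int         readers;      /* SET:  subscribing reader tasks              */
    int         writers;      /* SET:  subscribing writer tasks              */
};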


Mel Properties Categories

This section expands on the introduction of mel property categories from the last chapter. As discussed, there are four categories of properties:
  1. IN properties are cached attributes from the last mel operation a task conducts against a mel
  2. OUT properties are results the task caches after receiving them from a mel operation
  3. SET properties are configuration information from the mel that are piggy-backed with each mel operation
  4. CORE properties are system values assigned by the CHAOS runtime to allow a task to reach out to a mel
Let's explore these properties, using the mel variable store as a stand-in.

IN Properties

The IN properties pertain solely to the requesting task. Unlike the OUT, SET and CORE properties, the IN properties do not belong to the mel element. They are attributes of the mel operation that the task initiates, and are specific to that mel operation. Another task may use different attributes in its own mel operation, and will have different IN property values in its resident mel variable. These operational attributes are saved as properties so the program can refer back to them, for reporting, debugging, or subsequent invocations. As an example of the latter case, a task can adjust the priority of the next mel request based on the length of the wait from the previous request, to average out the latency.

store<priority>: Priority value used by the last mel operation
store<timeout>: Timeout value used by the last mel operation
store<name>: Scoped name of the mel variable, set during creation
store<agent>: Entity that executes a requested mel operation

When a task does a new mel operation to the mel element, the IN properties in the mel variable are cleared, and re-initialized with the attributes of the new mel operation.

OUT Properties

The OUT properties cache the result returned by a mel operation.

store<value>: Value returned by the mel read operation. This value can be a simple entity (like an int), an array, or a structured entity
store<count>: Number of items in an array value
store<sequence>: Version identifier of the value returned from a mel read operation
store<status>: Status of the mel variable after the mel operation
store<condition>: Status of the mel variable before the mel operation
store<error>: A numerical value representing the error code of the last mel operation. A value of 0 means success and non-0 values mean failures.
store<why>: Reason for a non-zero <error> value

The OUT properties reflect information about the mel element after the mel operation has completed for the task, either successfully or unsuccessfully. After a while, this information becomes stale due to other tasks doing reads and writes on the mel element. To get current information, the task must invoke a new read or write operation, or take a new snapshot of the OUT data via the snapshot operation.

SET Properties

The SET properties are additional information about the mel element that is piggy-backed on the return of any mel operation. These properties are snapshots of the configuration settings of the mel element at the time of return. After a while, these properties may be stale if other tasks have issued mel operations that change the configuration, such as the rebuffer operation. If up-to-date values are needed, the program should make another mel operation or use the snapshot operation to refresh the SET properties.

store<buffer>: Number of buffer slots of the mel element
store<location>: Cel location of the mel element
store<readers>: Number of reader tasks subscribing to the mel element
store<writers>: Number of writer tasks subscribing to the mel element

The SET properties are system properties that can be changed programmatically by using mel operations. The CORE properties that we will explore next are system properties that cannot be changed programmatically once they are assigned.

CORE Properties

The CORE properties are system values assigned by the CHAOS runtime to allow a task to reach out to a mel element. These values cannot be changed once a mel element has been created.

store<id>: Mel variable numerical identification. This value is mostly used in environments where shared resources can be identified by a number, such as multi-CPU shared-memory supercomputers. Otherwise this value is a perfect hash of the <url> property.
store<url>: Mel element string identifier. This value is mostly used in distributed environments where shared resources are identified by a uniform resource locator. Otherwise this value is just a string representation of the numerical <id> property.

The CORE properties are identification properties. They are the same and remain constant in all the mel variables of all the tasks that refer to the same mel element.


Mel Properties Constants

Some mel properties have a wide range of application-specific values. Other mel properties, such as error and status have a fixed set of constant values. In NERWous C, these constant values are defined in the nerw.h file that must be included in all NERWous C programs.

For brevity, the inclusion of nerw.h is mostly skipped in the code samples.


Mel ID Property

The <id> property is assigned by the CHAOS runtime environment to uniquely identify a mel element. The type of this property is a long long, allowing a theoretical limit of over 9 quintillion mel elements (i.e. 9 followed by 18 zeros). Although NERWous C does not specify how an <id> property is formatted, it requires that:
  1. The <id> property is assigned to a mel element when it is created
  2. No two mel elements within the same NERWous C program can have the same <id> property
The <id> property can be used for tracking or debugging:
<mel> int store;    /* the <id> property is set */
printf ("Mel entity [%ll] created by declaration", store<id>);
Another use is to differentiate between the selected mel elements in a reader OR zone.

When a mel variable is declared, the associated mel element is also created. At this time the CHAOS runtime will assign a unique identification number to represent the mel element, and stores this value in the <id> property of the mel variable. If the task passes the mel variable to another task, the <id> property is passed verbatim. This allows both tasks to refer to the same mel element.

The <id> property is maintained by the CHAOS runtime, and cannot be assigned from a NERWous C program. Attempting to do so will trigger a compile-time error:
store<id> = 0LL;    /* Compile-time ERROR */
In a distributed environment where a resource can be better identified via a universal resource locator (URL) string, the <id> can be the numerical perfect hash of the <url> property. The NERWous C language does not dictate a perfect hash algorithm, but with 9 quintillion numerical possibilities, it should be possible to hash structured <url> strings into unique numbers.


Mel URL Property

The <url> property is the string equivalent of the <id> property. It is assigned by the CHAOS runtime environment to uniquely identify a mel element. The equivalence between <id> and <url> means that either of them can be used for identification. Which format to use depends on the actual implementation of the mel operation and, most importantly, on the physical nature of the mel element.

If in use, a string identifier is not a random string value, but is structured in a certain way to fit the underlying physical environment. For example, if a mel element is a shared file, then a <url> value will use a file structure, such as /share/mel/store.dat. If it is hosted on a web site, a <url> value for a mel element named store can be something like https://nerw.dom/mel/store, and a mel operation such as this creation statement:
<mel buffer=10> int store
can be translated into this URL
https://nerw.dom/mel/store?action=create&buffer=10
In environments where a numerical identifier is more compatible, the <url> string identifier is its alphanumerical equivalent. For example, if <id> were the number 1234567890, then <url> would be the string "1234567890".


Mel Value Property

Let's get back to the basic Producer/Consumer example, and change the Consumer code to illustrate the use of the <value> property:
main () {
    <mel> int store;
    <!> Producer (store);
    <!> Consumer (store);
}
void Producer (<mel> int store) {
    while ( <?>store = Produce() );
}
void Consumer (<mel> int store) {
     while ( <?>store )
         Consume(store<value>);
}
In the original Consumer, we use a local variable, c, to temporarily hold the value read from the mel element store before we Consume it. In the above example, we skip this intermediary local variable, and access the read value directly via the <value> property.

The mel read statement, <?>store, suspends the Consumer task until the Producer task deposits a new item to the mel element. The Consumer task then removes this item from the mel element, and saves it in the <value> property of its mel variable. It then uses that property implicitly in the while check for a zero value, and explicitly as the argument to the Consume serial function.

The <value> property contains the mel value of Consumer's last read from the mel element. Since then the mel element may have been updated with a new deposit from the Producer task. Once this happens, the <value> property cached in the Consumer mel variable store will be different from the value at the mel element. To refresh its <value> property, the Consumer task has to read the mel element again, or use the snapshot operation.

The example above shows the <value> property as a simple entity (i.e. an integer). The <value> property can also represent more complex entities, such as an array or a structured entity.


Mel Value Auto Operators

The auto operators are auto-increment and auto-decrement operators. For each auto operator, we also have the pre and post actions, resulting in four cases in total.

Let's take a look at this pre-auto-increment statement:
int c = ++<?>store;
These are the steps that are carried out:
  1. The task waits for the mel element represented by the mel variable store to be filled and available for the task to read.
  2. The mel element is emptied and its value is transferred to the task's <value> property.
  3. The <value> property is incremented.
  4. The incremented <value> property is assigned to the c variable.
Note that the auto-increment is done on the <value> property resident to the task, and not on the remote mel element. As a matter of fact, once a value is read from a mel element, it is gone from the mel element. A waiting writer task may fill the mel element right away, and this value is totally different, location-wise, from the value just read.

Now let's take a look at this post-auto-increment statement:
int c = <?>store++;
These are the steps that are carried out:
  1. The task waits for the mel element represented by the mel variable store to be filled and available for the task to read.
  2. The mel element is emptied and its value is transferred to the task's <value> property.
  3. The <value> property is assigned to the c variable.
  4. The <value> property is incremented.
The pre-auto-decrement and post-auto-decrement statements act in a similar way:
int c = --<?>store;
int c = <?>store--;
In both the pre and post decrement cases, the <value> property is decremented by 1 from the original value read from the mel element. The difference is that the value assigned to the variable c is after the decrement (for pre-auto-decrement) or before the decrement (for post-auto-decrement).
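
A quick plain-C rendering of the four cases may help. The mel_read stub below is a hypothetical stand-in for the blocking <?>store read; the point is that the increment or decrement happens only on the local cached copy.
#include <stdio.h>

struct mel_cache { int value; };                               /* toy local cache */

static void mel_read(struct mel_cache *m) { m->value = 41; }   /* fake <?>store   */

int main(void)
{
    struct mel_cache store = {0};
    int c;

    mel_read(&store); c = ++store.value;   /* ++<?>store : c == 42, cache == 42 */
    mel_read(&store); c = store.value++;   /* <?>store++ : c == 41, cache == 42 */
    mel_read(&store); c = --store.value;   /* --<?>store : c == 40, cache == 40 */
    mel_read(&store); c = store.value--;   /* <?>store-- : c == 41, cache == 40 */

    printf("last c = %d, last cached value = %d\n", c, store.value);
    return 0;
}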


Mel Status Property

When a task does a read or write operation to a mel element, the status of the mel element after the operation is also returned and cached in the <status> property in the resident mel variable. Available statuses are:

OPEN (NERW_STATUS_OPEN): The mel element has been successfully created, and can now be written or read.
EMPTY (NERW_STATUS_EMPTY): When the mel element buffer does not contain any value to be read, the mel element is said to be empty. The reader task has to wait for the mel element to be filled in order to acquire a value from the reader's side of the buffer.
FILLED (NERW_STATUS_FILLED): When the mel element buffer contains a value that can be read, the mel element is said to be filled. The reader task has been able to acquire this value from the reader's side of the buffer.
FULL (NERW_STATUS_FULL): When the mel element cannot receive another value to be written into it because all the slots in its mel buffer already have a value, the mel element is said to be full. The writer task has to wait for the mel element to become vacant before it can deposit a new value in the writer's side of the buffer.
VACANT (NERW_STATUS_VACANT): When the mel element has an available slot in its mel buffer for a value to be written into it, the mel element is said to be vacant. The writer task can deposit a new value in the writer's side of the buffer.
CLOSED (NERW_STATUS_CLOSED): The mel element has been closed. The mel element cannot be read nor written any more.

The constant values can be bit-wise ORed to form combinations. Some combinations are meaningful: for example, a mel element can be both empty and vacant after a mel operation. Other combinations are not possible, such as being empty and full at the same time.

NERW_STATUS_OPEN

The OPEN status can be used to check if a mel creation has been successful or not. As discussed in Mel Creation, a programmer can check for the failure of a mel creation operation by using the <error> property or catching an exception. Checking the <status> for NERW_STATUS_OPEN is another way to detect failure:
<mel> int store;
if ( (store<status> & NERW_STATUS_OPEN) == 0x0 )
    printf ("Mel creation failed!\n");
The NERW_STATUS_OPEN status always exists along with NERW_STATUS_EMPTY, NERW_STATUS_FILLED, NERW_STATUS_FULL or NERW_STATUS_VACANT. It is mutually exclusive with NERW_STATUS_CLOSED since a mel element is either open or closed but not both.

NERW_STATUS_EMPTY / NERW_STATUS_FILLED

These two values reflect the status of the mel element for the next reading after a task's mel operation (read or write). With NERW_STATUS_EMPTY, the mel element does not have any value for the next reading. With NERW_STATUS_FILLED, the mel element has a value for the next reading. If the <status> property is set to NERW_STATUS_FILLED after a read operation, the mel element is most likely buffered, and the next slot in the buffer already has a standing value. If the mel element is not buffered, then after a read operation, which removes the mel value, the returned status is most likely NERW_STATUS_EMPTY.

The previous paragraph uses "most likely" in several places due to different implementations of the CHAOS runtime. If CHAOS also takes waiting tasks into account, the returned status of a mel element will be opportunistic. For example, in the case of an unbuffered mel element after a read operation, the returned status will be NERW_STATUS_FILLED (instead of NERW_STATUS_EMPTY) if CHAOS detects that there is a writer task waiting to deposit its value and make the mel element filled again.

It is worth remembering that the <status> property reflects the status of the mel element after a mel operation by a task. It is not the current status of the mel element. For example, some time after a task does a mel read and gets back a NERW_STATUS_EMPTY status, a writer task may have come in and deposited a new value, changing the current status of the mel element back to NERW_STATUS_FILLED. To refresh its cached <status> property, a task has to do another intrusive mel operation (such as a read or write), or invoke the non-intrusive snapshot operation.

It is also worth mentioning again that the <status> property is part of the mel variable and thus resident to a particular task. Two tasks, say A and B, access the same mel element and may end up with the same value for their <status> property; however these values represent different things: one is the status of the mel element after task A accesses the mel, and the other is the mel status after task B's access, which happens at a different time.

Let's change the Producer/Consumer example to illustrate the use of the <status> property for reading:
#define BUFFERSIZE 10
main () {
    <mel buffer=BUFFERSIZE> int store;
    <!> Producer (store);
    <!> Consumer (store);
}
void Producer (<mel> int store) {
    while ( <?>store = Produce() );
}
void Consumer (<mel> int store) {
    int products[BUFFERSIZE];
    int i, j, count;
    while ( 1 ) {
        count = 0;
        for (i=0; i<BUFFERSIZE; ++i) {
            products[count++] = <?>store;
            if ( store<status> & NERW_STATUS_EMPTY ) break;
        }
        for (j=0; j<count; ++j) {
            if ( !products[j] ) break;
            Consume (products[j]);
        }
        if ( j != count ) break;     /* detect 0 value, break out of while loop */
    }
}
The mel element is now created with a buffer of BUFFERSIZE slots. Unlike the previous Consumer which does the consumption one product at a time, the new Consumer reads in all available products that are currently buffered. It knows when to stop reading by checking the <status> property for the NERW_STATUS_EMPTY condition. The Consumer task then Consumes all the collected products.

NERW_STATUS_FULL / NERW_STATUS_VACANT

These two values reflect the status of the mel element for the next writing after the task's mel operation (read or write). With NERW_STATUS_FULL, the mel element does not have an available slot to receive a new value. If the mel element is buffered, all of its slots currently contain a value. With NERW_STATUS_VACANT, the mel element is either empty or is buffered and has at least an available slot to receive a new writing.

Let's modify the Producer task to illustrate the <status> property for writing:
void Producer (<mel> int store) {
    int product = Produce();
    while ( product ) {
        <?>store = product;
        if ( store<status> & NERW_STATUS_FULL )
            slowdown();
        product = Produce();
    }
}
After each writing, this Producer checks the status of the mel element. If the mel element is not buffered (in other words, it has only one slot) and there is no reader task waiting, the status will be NERW_STATUS_FULL. If there is a reader task waiting, the status can be either NERW_STATUS_FULL or NERW_STATUS_VACANT, depending on the implementation of the CHAOS runtime. If the mel element is buffered (i.e. it has multiple slots), NERW_STATUS_FULL means that all the slots have been filled up, and the next write may be hit by a wait for a slot to be freed. To allow a reader task to catch up, the Producer task invokes the slowdown function to slow down its production.

NERW_STATUS_CLOSED

Any task can invoke the <close> operation to close a mel element. Once this operation is successful, the <status> property in the mel variable resident to the task invoking the <close> operation will have the NERW_STATUS_CLOSED bit set, due to the piggy-backing of the SET properties. Other tasks do not have their <status> property updated to NERW_STATUS_CLOSED until they make a mel operation on the mel element.
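
Here is a minimal sketch of that behavior; the task names are illustrative only. The closing task sees the bit right away, while another task only learns of the closing on its next mel operation, which for a read surfaces as the CLOSED exception used in earlier examples:
void Closer (<mel> int store) {
    <close>store;
    if ( store<status> & NERW_STATUS_CLOSED )   /* set by piggy-backing on the <close> */
        printf ("Closer: my cached <status> already shows CLOSED\n");
}
void Latecomer (<mel> int store) {
    /* this task's cached <status> still reflects its last mel operation */
    try {
        Consume (<?>store);                     /* the next mel operation reports the closing */
    } catch ( store<CLOSED> ) {
        printf ("Latecomer: found out about the closing on my next read\n");
    }
}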


Mel Condition Property

The <condition> property is similar to the <status> property, but it is taken before the read or write operation is applied to the mel element. It has the same values as the <status> property; however, their interpretations are different.

After a reader task retrieves a value from the mel element, it can check the <condition> property for post-mortem analysis. If NERW_STATUS_FILLED is set, the reader task, once granted access to the mel element according to the priority requested by its read operation, found the mel element already containing a value for it to read away. If the condition bit NERW_STATUS_EMPTY is set instead, the reader task had to wait for a writer task to deposit a value. In both cases, a read statement such as <?>store can eventually succeed, but in the latter case the <?> operator suspends the reading task longer.
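
As a sketch of this post-mortem check on the reader side (the report_slow_producer helper is hypothetical and only marks where such a check would go):
void Consumer (<mel> int store) {
    int product;
    while ( product = <?>store ) {              /* a 0 product ends the consumption */
        if ( store<condition> & NERW_STATUS_EMPTY )
            report_slow_producer ();            /* hypothetical helper: we had to wait for a writer */
        Consume (product);
    }
}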

For a writer task, a NERW_STATUS_FULL <condition> means that the task had to wait in the mel writers' queue for a reader task to remove a value and make a slot available for its deposit. On the other hand, a NERW_STATUS_VACANT <condition> means that the writer task found a vacant slot to deposit its value, quickly got off the <?> wait operator, and resumed its processing.

Let's modify the previous Producer task to use the <condition> property instead of the <status> property:
void Producer (<mel> int store) {
    int product = Produce();
    while ( product ) {
        <?>store = product;
        if ( store<condition> & NERW_STATUS_FULL )
            slowdown();
        product = Produce();
    }
}
The use of the <condition> property seems to be a better fit for the decision to slow down production than the use of the <status> property in the earlier example. Whether the mel element is buffered or not, a mel wait in the just-completed write means that the reader task (or tasks) cannot keep up with the writer task.


Mel Name Property

While the <id> and <url> properties identify the remote mel element, the <name> property is a character string that belongs to the mel variable. It contains a unique name that identifies the mel variable within its given scope. Among all the properties of a mel variable, the <name> property is unusual in the sense that it is a compile-time property instead of a run-time one like the others. This means that any errors in using this property must be resolved while the NERWous C program is being compiled or translated.

Let's look at this code example:
<mel> int GlobStore1;
<mel as="Big Store"> int GlobStore2;
main () {
    <mel> int mainStore1;
    <mel as="Small Store"> int mainStore2;

    printf ("<name> value [%s] should be [GlobStore1]\n", GlobStore1<name>);
    printf ("<name> value [%s] should be [Big Store]\n", GlobStore2<name>);
    printf ("<name> value [%s] should be [mainStore1]\n", mainStore1<name>);
    printf ("<name> value [%s] should be [Small Store]\n", mainStore2<name>);

    foo1 (<?>mainStore1, <?> mainStore2);
    foo2 (<?>mainStore1, <?> mainStore2);
}
void foo1 (<mel> int argstore1, <mel as="Tiny Store"> int argstore2) {
    printf ("<name> value [%s] should be [argstore1]\n", argstore1<name>);
    printf ("<name> value [%s] should be [Tiny Store]\n", argstore2<name>);
}
void foo2 (<mel as="mainStore1"> int m1, <mel> int m2) {
    printf ("<name> value [%s] should be [mainStore1]\n", m1<name>);
    printf ("<name> value [%s] should be [m2]\n", m2<name>);
}
By default, the <name> property of a mel variable is simply its compile-time name under the scope rule. Mel variables with the default <name> property are GlobStore1 (in global scope), and mainStore1, argstore1 and m2 (in local scope).

A programmer can change the default name by using the as attribute, as seen in the declarations for GlobStore2 (in global scope), mainStore2, argstore2 and m1 (in local scope).

Since the value of the as attribute must be resolved at compile time, it must be a constant value, not a variable. For example, the last declaration in the code snippet below will generate a compilation error:
<mel as="MyMel1"> int store1;    /* OK */
#define MYNAME2 "MyMel2"
<mel as=MYNAME2> int store2;    /* OK */
char myname3[20] = {'M','y','M','e','l', '3', '\0' };
<mel as=myname3> int store3;    /* ERROR: myname3 is not a compile-time constant */

Mel Name Scope Rule

The names used as the values of the as attribute must obey the mel scope rule. It means that the same name cannot be re-used in the same scope. The following example shows the good and bad usages of the as name:
<mel> int GlobStore1;
<mel as="GlobStore1"> int GlobStore2;  /* ERROR 1: GlobStore1 is in use */
main () {
    <mel as="GlobStore1"> int mainStore1;  /* ERROR 1: GlobStore1 is in use */
    <mel as="mainStore1"> int mainStore2;  /* ERROR 2: mainStore1 is in use */

    foo1 (<?>mainStore1, <?> mainStore2);
    foo2 (<?>mainStore1, <?> mainStore2);
}
void foo1 (<mel as="GlobStore1"> int argstore1,  /* ERROR 1: GlobStore1 is in use */
           <mel as="mainStore2"> int argstore2) {  /* OK */
    printf ("[argstore1] name is [%s]\n", argstore1<name>);
    printf ("[argstore2] name is [%s]\n", argstore2<name>);
}
void foo2 (<mel as="mainStore1"> int argstore1,  /* OK */
           <mel as="mainStore2"> int argstore2) {  /* OK */
    printf ("[argstore1] name is [%s]\n", argstore1<name>);
    printf ("[argstore2] name is [%s]\n", argstore2<name>);
}
The ERROR 1 occurrences come about because the names of global mel variables are reserved globally. Those names can be used to create other mel variables that point to the same remote mel element (as in the <snapshot> operation with the on attribute), but they cannot be assigned as as names to mel variables that point to different mel elements.

The ERROR 2 occurrence comes about because the names of all local mel variables (such as mainStore1 in main) are reserved in their local scope, even though mainStore1 is declared with an as attribute that gives it a different <name>. On the other hand, argstore2 can use mainStore2 as its name because mainStore2 is local to the main task, not to the foo1 function.

As said before, errors due to incongruous as name usage must be resolved at compilation time. They cannot be caught with the <error> property or with mel operation exceptions, since they never appear when the program is run.

Mel Name Reference

The use of the as attribute in the mel argument declarations of the functions foo1 and foo2 illustrates the mel element reference reason for supporting the <name> mel variable property. In previous chapters, we would write:
main () {
    <mel> int mainStore;

    <!> runParallel (<?>mainStore);
    runSerial (<?>mainStore);
}
void runParallel (<mel> int mainStore) {
    printf ("<name> [%s] should be [mainStore]\n", mainStore<name>);
    doThingsInParallel(<?>mainStore);
}
void runSerial (<mel> int mainStore) {
    printf ("<name> [%s] should be [mainStore]\n", mainStore<name>);
    doThingsInSerial(<?>mainStore);
}
Since a mel element, no matter in what compile-time scope it is created, is shared with all tasks during run time, it is customary (but not required) to use the same name for all the subsequent mel variables that access that common mel element. Thus, after the main task creates the mel element mainStore (and maintains access to it via its resident eponymous mel variable mainStore), the arguments to the task runParallel and to the function runSerial are also named mainStore. (See mel passing rule.)

There is, however, no rule that enforces the same name for mel variables. For example, a coding standard may dictate that arguments be named in a certain way, or an application programming interface (API) may suggest a certain consistency for argument naming. Thus, if the above example has to be written as follows:
main () {
    <mel> int mainStore;

    <!> runParallel (<?>mainStore);
    runSerial (<?>mainStore);
}
void runParallel (<mel> int pararg) {
    printf ("<name> [%s] should be [pararg]\n", pararg<name>);
    doThingsInParallel(<?>pararg);
}
void runSerial (<mel> int m) {
    printf ("<name> [%s] should be [m]\n", m<name>);
    doThingsInSerial(<?>m);
}
then to keep the reference to the same mel element, we use the as attribute as in this revised example:
main () {
    <mel> int mainStore;

    <!> runParallel (<?>mainStore);
    runSerial (<?>mainStore);
}
void runParallel (<mel as="mainStore"> int pararg) {
    printf ("<name> [%s] should be [mainStore]\n", pararg<name>);
    doThingsInParallel(<?>pararg);
}
void runSerial (<mel as="mainStore"> int m) {
    printf ("<name> [%s] should be [mainStore]\n", m<name>);
    doThingsInSerial(<?>m);
}
The mel variables mainStore, pararg and m have the same CORE properties since they refer to the same remote mel element. To make this commonality visible at the code level, the as values for pararg and m are set to the name of the remote mel element (i.e. mainStore).
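
For instance, since the CORE properties are shared, any of the three mel variables can be used to print the identification of the common remote mel element. A small sketch (the actual <url> value is runtime-specific):
void runParallel (<mel as="mainStore"> int pararg) {
    /* pararg shares its CORE properties with main's mainStore variable */
    printf ("pararg points to the mel element at [%s]\n", pararg<url>);
    doThingsInParallel(<?>pararg);
}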

Mel Name Usage

Besides mel element reference, there are other uses for the mel <name> property. This list summarizes all of them:
  1. Mel element reference
  2. Mel OR reads
In this chapter, we have discussed the mel name reference use. The other uses will be explored in subsequent chapters.


Previous Next Top

Friday, August 18, 2017

Pel Location

Welcome » NERWous C » Pel
  1. At Attribute
  2. Location Property
  3. Down Exception
  4. Import Attribute
  5. Array Variables


At Attribute

When a task is created via a simple pel statement:
<!>producer();
it is arbitrarily assigned to a computer element (cel) in a pool of cels maintained by the CHAOS runtime environment. There are times when the work requires specialized hardware. Like the at attribute for mel creations, the at attribute can be used with pel creations to assign a computing "location" to a task:
extern <cel> VENDOR, DISTRIBUTOR, USER;
main () {
     <mel at=DISTRIBUTOR> int store;
     <! at=VENDOR> Producer (store);
     <! at=USER> Consumer (store);
}
void Producer (<mel> int store) {
     int c;
     while ( c = Produce() )
         <?>store = c;

     <close>store;
}
void Consumer (<mel> int store) {
    try {
         while ( 1 )
             Consume(<?>store);
     } catch ( store<CLOSED> ) {
         return;
     }
}
The cel entities VENDOR, DISTRIBUTOR and USER are defined as extern. Their definitions come from a configuration file associated with the above NERWous C program. This configuration file is specific to the runtime environment, and allows the same NERWous C program to run on different physical platforms.

It is also possible to hard-code the value of the cel. Like the at attribute for the mel statement, the at attribute for the pel statement can accept either a cel value or a location value:
#define VENDORURL "https://pelvendor.nerw/"
#define USERURL "https://peluser.nerw/"

    <cel> VENDOR;
    VENDOR<id> = VENDORURL;

    <! at=VENDOR> Producer (store);
    <! at=USERURL> Consumer (store);
The Producer task uses the cel variable VENDOR as before. However, this time the cel is initialized within the program, with VENDORURL as the identification value. The VENDOR cel can also be initialized more succinctly:
    <cel> VENDOR = { VENDORURL };
The Consumer task specifies the identification value USERURL directly. The CHAOS runtime will automatically create a cel variable with this identification value, and assign it to the Consumer task.


Location Property

The pelling task and the pelled task can find out the identification of the cel in use via the <location> property.
extern <cel> VENDOR, DISTRIBUTOR, USER;
main () {
     <mel at=DISTRIBUTOR> int store;
     <pel> prod = <! at=VENDOR> Producer (store);
     <pel> cons = <! at=USER> Consumer (store);

     printf ("Producer at [%s]", prod<location>);
     printf ("Consumer at [%s]", cons<location>);
}
void Producer (<mel> int store) {
     printf ("Running at [%s]", pel<location>);
     int c;
     while ( c = Produce() )
         <?>store = c;

     <close>store;
}
void Consumer (<mel> int store) {
    printf ("Running at [%s]", pel<location>);
    try {
         while ( 1 )
             Consume(<?>store);
     } catch ( store<CLOSED> ) {
         return;
     }
}
The pelling task, main, uses the pel variables prod and cons to access the <location> property. For the pelled tasks, Producer and Consumer, the <location> property is accessed via the default pel variable, which represents the running task.


Down Exception

A pelled task does not always end normally: the cel it runs on can go down, crashing the task. A task that waits on the crashed task's pel variable then gets a DOWN exception, instead of the ENDED exception it would get when the task terminates normally. In the example below, the Consumer reads the values released by the Producer, and handles both cases:
main () {
    <pel> task_producer = <!> Producer ();
    <pel> task_consumer = <!> Consumer (task_producer);
}
int Producer () {
    while ( 1 ) {
        int c = produce();
        if ( c == 0 ) {
           <return> c;
        }
        <release> c;
    }
}
void Consumer (<pel> tprod) {
    while ( 1 ) {
      try {
        consume (<?>tprod);
      }
      catch (tprod<ENDED>) {
        printf ("I am ending due to the Producer having ended");
        return;
      }
      catch (tprod<DOWN>) {
        printf ("I am ending due to the Producer has crashed");
        return;
      }
    }
}



Previous Next Top