Pel A Loop
In the following example, we will pel two tasks to run two serial
while
loops:
main () {
<mel buffer=10> int store;
<cel> fastcel;
<! at=fastcel> { // Producer inline task
while ( 1 ) { // to run this while loop in serial
int c = produce();
<?>store = c;
if ( c == 0 ) <end>;
}
}
<!> while ( 1 ) { // Consumer inline task
int c = <?>store;
if ( c == 0 ) <end>;
consume (c);
}
}
There are 3 tasks in the above program. The main
task pels a task to do production, and another task to do consumption. The while
loops inside the pelled tasks are run serially, with one item produced at a time, and one item consumed at a time. The mel store
is the communication channel between the two tasks. It has a buffer that allows the producer, which runs on a faster cel node (via the at
attribute set to fastcel
), to minimize waiting for the intermittently slower consumer task to catch up.
The consumer task uses the abbreviated form of the pel statement. It omits the enclosing
{
and }
brackets of the pel statement <!>
since there is only one statement (the while
loop) inside the pel code block.
The producer task uses the standard form of the pel statement with both encapsulating
{
and }
brackets. Although more verbose, it can support other statements before and after the while
loop. For example, for monitoring purpose, we can include some sandwiching printf
statements:
<! at=fastcel> {
printf ("Before while loop\n");
while ( 1 ) { // to run this while loop in serial
int c = produce();
<?>store = c;
if ( c == 0 ) <end>;
}
printf ("After while loop\n");
}
We can replace the
while
loop with a do-while
loop:
main () {
<mel buffer=10> int store;
<cel> fastcel;
<! at=fastcel> { // Producer inline task
printf ("Before do-while loop\n");
do {
int c = produce();
<?>store = c;
if ( c == 0 ) <end>;
} while ( 1 );
printf ("After do-while loop\n");
}
<!> do { // Consumer inline task
int c = <?>store;
if ( c == 0 ) <end>;
consume (c);
} while ( 1 );
}
The above example uses the <end>
statement to break out of the do-while
loops. This statement also ends the inline task. For the Consumer task, both effects amount to the same thing, since there is nothing else the Consumer does after breaking out of the loop. For the Producer task, the <end>
statement prevents the execution of the closing printf
. The remedy is to use the C language's break
statement:
<! at=fastcel> { // Producer inline task
printf ("Before do-while loop\n");
do {
int c = produce();
<?>store = c;
if ( c == 0 ) break;
} while ( 1 );
printf ("After do-while loop\n");
}
After break
'ing, the Producer task does the printf
, and without any other activity programmed, the task just ends.
Loop Of Pels
Unlike the pel-a-loop method, which pels a single task, the loop-of-pels method is a serial loop that pels a new task with each iteration. The number of tasks created is equal to the number of loop iterations. While the tasks are created one at a time, the loop iterations usually run fairly fast on modern machines, which gives the impression that the tasks are being created at the same time. Once the tasks are created, they run in parallel with one another.
Do-While Loops
Let's modify our
do-while
example so that exactly 3 items are produced, stored separately, by 3 different producers. We also have 3 different consumers:
#define NUMITEMS 3
main () {
<mel> int stores[NUMITEMS];
<cel> producer[NUMITEMS] = { {"Producer 1"}, {"Producer 2"}, {"Producer 3"} };
<cel> consumer[NUMITEMS] = { {"Consumer 1"}, {"Consumer 2"}, {"Consumer 3"} };
int n = 0;
do {
<! at=producer[n]> {
<?>stores[n] = produce();
}
} while ( ++n < NUMITEMS );
n = 0;
do <! at=consumer[n]> {
consume (<?>stores[n]);
} while ( ++n < NUMITEMS );
}
To separate the products, we use the mel array stores
. Each producer is represented by its own cel element.
The
main
task finishes the do-while
loop for production before starting the do-while
loop for consumption. Unless the runtime primitive to create a task is very demanding, both loops can be ripped through quickly. In the blink of an eye, we can have 7 tasks running: the main
task, 3 producing tasks and 3 consuming tasks.
Like the producing tasks, the consuming tasks are assigned to specific cels, but for consumption. A consuming task may already be running while the product it is supposed to consume is still being produced by the corresponding producing task. In this case, the consuming task will suspend itself at the mel wait,
<?>stores[n]
, for the product to show up.
The local variable
n
is an interesting fellow. As discussed in the chapter on local variables, there are in total 7 instances of n
in the above example. The instances belonging to the main
task are shown in red below:
int n = 0;
do {
<! at=producer[n]> {
<?>stores[n] = produce();
}
} while ( ++n < NUMITEMS );
n = 0;
do <! at=consumer[n]> {
consume (<?>stores[n]);
} while ( ++n < NUMITEMS );
On the other hand, the instance of n
in:
<?>stores[n] = produce();
belongs to the producer inline tasks. Since there are 3 producing tasks, there are 3 different instances of n
, each localized to a particular task. All these localized instances get their initial value from the instance in the main
task at the time of the pelling.
Likewise, we have 3 instances of
n
for the consuming tasks. If we were to change n
within a task, like:
do <! at=consumer[n]> {
consume (<?>stores[n]);
n = 0;
} while ( ++n < NUMITEMS );
that change would only be applicable to the instance n
of that consuming task. It would not affect the instance of n
in the while
statement which belongs to the main
task.
In the above examples, the producing tasks use the standard form of a loop of pels, while the consuming tasks use the abbreviated form. The standard form allows code to sandwich the task pelling and run in the context of the pelling task. For example:
do {
printf ("Begin creating task for [%d]\n", n);
<! at=producer[n]> {
<?>stores[n] = produce();
}
printf ("Request for task [%d] submitted\n", n);
} while ( ++n < NUMITEMS );
While Loops
Let's now replace the
do-while
loops with while loops.
#define NUMITEMS 3
main () {
<mel> int stores[NUMITEMS];
<cel> producer[NUMITEMS] = { {"Producer 1"}, {"Producer 2"}, {"Producer 3"} };
<cel> consumer[NUMITEMS] = { {"Consumer 1"}, {"Consumer 2"}, {"Consumer 3"} };
int n = 0;
while ( ++n <= NUMITEMS ) {
printf ("Begin creating task for [%d]\n", n);
<! at=producer[n]> {
<?>stores[n] = produce();
}
printf ("Request for task [%d] submitted\n", n);
}
n = 0;
while ( ++n <= NUMITEMS )
<! at=consumer[n]> {
consume (<?>stores[n]);
}
}
The post-pel printf
statement says "Request for task [%d] submitted
" -- "submitted" and not "created", because the pel statement is asynchronous. As long as the request to create a task has been accepted by the CHAOS runtime, the main
task can return from the pel statement to iterate the while
loop. The submitted task will be created in due time by CHAOS.
For Loops
Let's now replace the
while
loops with for loops.
#define NUMITEMS 3
main () {
<mel> int stores[NUMITEMS];
<cel> producer[NUMITEMS] = { {"Producer 1"}, {"Producer 2"}, {"Producer 3"} };
<cel> consumer[NUMITEMS] = { {"Consumer 1"}, {"Consumer 2"}, {"Consumer 3"} };
for ( int n = 0; n < NUMITEMS; ++n ) {
printf ("Begin creating task for [%d]\n", n);
<! at=producer[n]> {
<?>stores[n] = produce();
}
printf ("Request for task [%d] submitted\n", n);
}
for ( int n = 0; n < NUMITEMS; ++n )
<! at=consumer[n]> {
consume (<?>stores[n]);
}
}
Again, the producer task uses the standard form and the consumer task, the abbreviated form for loops of pels.
Loop Of Synchronous Pels
The previous loop examples pel the task asynchronously. Once the pel requests have been accepted by the CHAOS runtime, the looping task can resume its looping endeavor without waiting for the pelled tasks to have been actually created. We now change the
for
loop example to use the synchronous task-creation method, so that the looping task can make sure that the pelled tasks have actually been created.
#define NUMITEMS 3
main () {
<mel> int stores[NUMITEMS];
<pel> prods[NUMITEMS];
<pel> conss[NUMITEMS];
<cel> producer[NUMITEMS] = { {"Producer 1"}, {"Producer 2"}, {"Producer 3"} };
<cel> consumer[NUMITEMS] = { {"Consumer 1"}, {"Consumer 2"}, {"Consumer 3"} };
/* Producer inline block */
int tries;
for ( int n = 0; n < NUMITEMS; ++n ) {
printf ("Begin creating Producer task [%d]\n", n);
tries = 0;
prods[n] = <! at=producer[n] timeout> {
<?>stores[n] = produce();
}
if ( prods[n]<status> == NERW_STATUS_FAILED ) {
printf ("EXIT LOOP - Task [%d] failed to be created due to [%s]\n",
n, prods[n]<why>);
<close> stores;
break;
}
else if ( prods[n]<status> == NERW_STATUS_TIMEOUT ) {
if ( ++tries < 2 ) {
printf ("RETRY after 1st TIMEOUT\n");
<resume>;
}
else {
printf ("STOP TRYING after 2nd TIMEOUT\n");
continue;
}
}
printf ("End creating Producer task for [%d]\n", n);
}
/* Consumer inline block */
for ( int n = 0; n < NUMITEMS; ++n ) {
conss[n] = <! at=consumer[n]> {
int tries = 0;
try {
consume (<?>stores[n]);
}
catch ( stores[n]<CLOSED> ) {
printf ("STOP TRYING -- The producer for [%d] item may have FAILED", n);
<end>;
}
catch ( stores[n]<TIMEOUT> ) {
if ( ++tries < 2 ) {
printf ("RETRY after 1st TIMEOUT\n");
<resume>;
}
else {
printf ("STOP TRYING after 2nd TIMEOUT\n");
<end>;
}
}
}
if ( conss[n]<status> == NERW_STATUS_FAILED ) {
printf ("Consumer task [%d] failed to be created due to [%s]\n",
n, conss[n]<why>);
continue;
}
}
}
For the producer tasks, an array of pel elements, <pel> prods[NUMITEMS]
, are declared. They are used to receive the result of the pel statement executions:
prods[n] = <! at=producer[n] timeout> { ... }
By assigning a pel variable to a pel creation statement, the main
task indicates that it is willing to wait to make sure that its request for the new task has been fulfilled (successfully or not). Via this synchronizing handshake, the main
task can check for the two errors that could possibly happen during a task pelling: a failure error and a timeout error. These errors are detected by checking the <status>
property in which CHAOS reports the outcome of the pelled task creation.
NERW_STATUS_FAILED
This status means that CHAOS has not been able to create the task. The
main
task decides to abort any further attempt to pel the rest of the tasks. It closes the mel stores
, then invokes the C statement break
to immediately get out of the serial for
loop.
What happens to the producing tasks that the
main
task has been able to create before issuing the break
statement? If they have produce
'd a value to their stores[n]
mel item, these values will be lost when main
issues the <close> stores
command. At this time, main
still runs the for
loop for the producing tasks, and has not started the for
loop for the consuming tasks; therefore there are no tasks to consume
these produced values.
For producing tasks that have been created before the
break
statement but have not yet deposited a value into their stores[n]
, they continue to produce
. However, when they try to deposit the produce
d item to stores[n]
, they will catch a mel CLOSED
exception. For simplicity, we have not programmed in a try / catch
handling for the producing tasks. These tasks then end by aborting on the unhandled CLOSED
exceptions.
NERW_STATUS_TIMEOUT
For the producing tasks, the
main
task handles the TIMEOUT status differently than the FAILED status. First it tries to wait some more. The <resume>
statement jumps the processing flow back to the latest mel or pel statement, which in this case is the pel creation statement, allowing the main
task to repeat the task pelling:
<! at=producer[n] timeout>
The timeout
attribute without any value means that the main
task is willing to wait the default number of milliseconds before abandoning the wait for the pel creation. At that point, the CHAOS runtime also aborts any pel-creation work, even if it is already partially successful.
On timeout of the 2nd retry, the
main
task aborts the attempt to pel the task, and continue
s its iteration to pel the next task. While the NERWous C <resume>
statement jumps the processing back to the pel <!
statement, the C language continue
jumps the processing all the way to the for
loop statement.
Consumer Tasks
How does a consuming task know if its corresponding producing task has been created or not? It does not need to know. What the consuming task cares about is accessing the mel
stores[n]
for a reading. If the reading times out, then the consuming task runs its own TIMEOUT
exception handler. The TIMEOUT
exception may be due to the slowness of the CHAOS runtime at that moment, to the slowness of the corresponding producing task in generating a value, or to the producing task not having been created in the first place, for example because its creation has "hung".
The "tries" Local Variables
Note that there are two
tries
local variables mentioned in the above example. They are separate and unrelated. The first one belongs to the main
task, and is used for the pelling of the producing tasks. The second one is in the context of each consuming task, and is used in the access of the mel variable stores[n]
. Thus, there are NUMITEMS
+ 1 instances of the tries
variable when this program runs: one for main
, and one for each of the NUMITEMS
consuming tasks.
Synchronous Loop Of Pels
In the previous example, we had the
main
task wait for the pelled tasks to be created before continuing. Now we will extend this wait so that main
waits for all the tasks to actually finish before it continues:
#define NUMITEMS 3
main () {
<mel> int stores[NUMITEMS];
<pel> prods[NUMITEMS];
<pel> conss[NUMITEMS];
<cel> producer[NUMITEMS] = { {"Producer 1"}, {"Producer 2"}, {"Producer 3"} };
<cel> consumer[NUMITEMS] = { {"Consumer 1"}, {"Consumer 2"}, {"Consumer 3"} };
/* Producer inline block */
int tries;
<collect> for ( int n = 0; n < NUMITEMS; ++n ) {
printf ("Begin creating Producing task [%d]\n", n);
tries = 0;
prods[n] = <! at=producer[n] timeout> {
<?>stores[n] = produce();
}
if ( prods[n]<status> == NERW_STATUS_FAILED ) {
printf ("EXIT LOOP - Task [%d] failed to be created due to [%s]\n",
n, prods[n]<why>);
<close> stores;
break;
}
else if ( prods[n]<status> == NERW_STATUS_TIMEOUT ) {
if ( ++tries < 2 ) {
printf ("RETRY after 1st TIMEOUT\n");
<resume>;
}
else {
printf ("STOP TRYING after 2nd TIMEOUT\n");
continue;
}
}
printf ("End creating Producing task for [%d]\n", n);
} <? ENDED>;
printf ("All the Producing tasks have ended\n");
/* Consumer inline block */
<collect> for ( int n = 0; n < NUMITEMS; ++n ) {
conss[n] = <! at=consumer[n]> {
int tries = 0;
try {
consume (<?>stores[n]);
}
catch ( stores[n]<CLOSED> ) {
printf ("STOP TRYING -- The producer for [%d] item may have FAILED", n);
<end>;
}
catch ( stores[n]<TIMEOUT> ) {
if ( ++tries < 2 ) {
printf ("RETRY after 1st TIMEOUT\n");
<resume>;
}
else {
printf ("STOP TRYING after 2nd TIMEOUT\n");
<end>;
}
}
}
if ( conss[n]<status> == NERW_STATUS_FAILED ) {
printf ("Consuming task [%d] failed to be created due to [%s]\n",
n, conss[n]<why>);
continue;
}
} <? ENDED>;
}
By using the wrapper construct <collect> { ... } <? ENDED>
, the main
task asks the CHAOS runtime to monitor all the tasks pelled within the wrapper. The main
task will wait at the <? ENDED>
until all the collected tasks have ended.
In the above example, the parallelism is more muted than in previous examples. There are no producing tasks running in parallel with any consuming tasks. All the inline producing tasks are pelled then run to completion before the
main
task pels the inline consuming tasks. The above example still has more parallelism than a pure serial program -- the produce
and consume
functions are executed in parallel via their corresponding for
loop of pels.
Pel A Loop Of Pels
In the previous section, we have the producing tasks not running in parallel with the consuming tasks because we want to collect the status of all the producing runs before letting the consuming tasks run. Let us modify that example to have these tasks run in parallel again, while keeping the status collection. We'll do this by combining the pel-a-loop method with the loop-of-pels method:
#define NUMITEMS 3
main () {
<mel> int stores[NUMITEMS];
<pel> prods[NUMITEMS];
<pel> conss[NUMITEMS];
<cel> producer[NUMITEMS] = { {"Producer 1"}, {"Producer 2"}, {"Producer 3"} };
<cel> consumer[NUMITEMS] = { {"Consumer 1"}, {"Consumer 2"}, {"Consumer 3"} };
/* Producer inline block */
int tries;
<!> {
<collect> for ( int n = 0; n < NUMITEMS; ++n ) {
printf ("Begin creating Producing task [%d]\n", n);
tries = 0;
prods[n] = <! at=producer[n] timeout> {
<?>stores[n] = produce();
}
if ( prods[n]<status> == NERW_STATUS_FAILED ) {
printf ("EXIT LOOP - Task [%d] failed to be created due to [%s]\n",
n, prods[n]<why>);
<close> stores;
break;
}
else if ( prods[n]<status> == NERW_STATUS_TIMEOUT ) {
if ( ++tries < 2 ) {
printf ("RETRY after 1st TIMEOUT\n");
<resume>;
}
else {
printf ("STOP TRYING after 2nd TIMEOUT\n");
continue;
}
}
printf ("End creating Producing task for [%d]\n", n);
} <? ENDED>;
printf ("All the Producing tasks have ended\n");
}
/* Consumer inline block */
<!> {
<collect> for ( int n = 0; n < NUMITEMS; ++n ) {
conss[n] = <! at=consumer[n]> {
int tries = 0;
try {
consume (<?>stores[n]);
}
catch ( stores[n]<CLOSED> ) {
printf ("STOP TRYING -- The producer for [%d] item may have FAILED", n);
<end>;
}
catch ( stores[n]<TIMEOUT> ) {
if ( ++tries < 2 ) {
printf ("RETRY after 1st TIMEOUT\n");
<resume>;
}
else {
printf ("STOP TRYING after 2nd TIMEOUT\n");
<end>;
}
}
}
if ( conss[n]<status> == NERW_STATUS_FAILED ) {
printf ("Consuming task [%d] failed to be created due to [%s]\n",
n, conss[n]<why>);
continue;
}
} <? ENDED>;
}
}
The number of tasks that are created is as follows. First the main
task is created. It then pels two pel-a-loop tasks, one for production, the other for consumption. Once the pellings are done, the main
task ends.
The pel-a-loop task for production then pels
NUMITEMS
tasks, one for each iteration of its for
loop. The pel-a-loop task for consumption does the same, and its NUMITEMS
loop-of-pels tasks run in parallel with those from the production side. Once they have pelled all of their tasks, the pel-a-loop tasks (one for production, one for consumption) wait for their own pelled tasks to finish via the wait statement <? ENDED>
. These waits are done in parallel because they are done by concurrent tasks.
Once all the tasks pelled from the
for
loops have ended, the pel-a-loop parent tasks are woken up from their <? ENDED>
waits, do their post-for
-loop processing if any, then end. Their endings allow the NERWous C program to exit.