Saturday 29 November 2008

Transactions in Constraint Handling Rules

A paper to be presented at ICLP'08, Udine, Italy, December'08. My co-author Tom Schrijvers will give the presentation, but here's already a draft of the talk.

If you've followed my previous posts on (software) transactions (in memory, aka STM), this paper is more on the theoretical (greek symbol) side, but it's still worth a read if you're looking for a systematic method to improve the efficiency of your STM code.

In the paper, we enrich the Constraint Handling Rules (CHR) calculus with transactions. If you've never heard the name CHR before, think of multi-set rewriting, bounded transactions, or a form of join patterns with guards and propagation. The paper develops a theory of how to (possibly) transform unbounded transactions into bounded transactions. Under optimistic concurrency control, unbounded transactions (e.g. transactions traversing a dynamic data structure such as a linked list) often yield "false" conflicts, which means that transactions are executed sequentially instead of in parallel. Bounded transactions (e.g. dining philosophers, which grab a bounded number of forks, namely two) are generally free of such (in)efficiency problems. Details are in the paper.
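To give a flavour of the distinction in plain Haskell (a minimal sketch, not from the paper; all names are illustrative): a dining philosopher grabs exactly two forks, so the resulting STM transaction touches a fixed number of TVars, whereas a list-traversing transaction touches a number of TVars proportional to the list length.

import Control.Concurrent.STM

type Fork = TVar Bool  -- True = fork available

grabFork :: Fork -> STM ()
grabFork f = do free <- readTVar f
                if free then writeTVar f False else retry

-- A bounded transaction: exactly two TVars are read/written, so
-- optimistic conflict detection rarely reports false conflicts.
grabForks :: Fork -> Fork -> IO ()
grabForks left right = atomically (grabFork left >> grabFork right)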

Tuesday 18 November 2008

High-level Software Transaction Protocols

The topic is how to build high-level (composable) software transaction protocols on top of highly-concurrent, low-level data structures which are not composable. Yes, this is sort of a continuation of my previous 'Transactional Boosting' post. What I'm looking for is a systematic method to build boosted protocols. I believe that join patterns (aka multi-set rewrite rules) are a promising formalism.


Transactional boosting revisited



Here's my attempt to recast the transactional boosting approach in terms of the join pattern language extension I'm currently working on. Join patterns can now have guards and propagation. For example, the following join pattern says

foo(x,y) \ bar(y,z) when x > z = ...

if we find method calls foo(x,y) and bar(y,z) where the condition x > z holds (notice the implicit constraint imposed via the shared variable y), then we remove (simplify) bar(y,z) from the method store but we keep (propagate) foo(x,y).
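For example, with the store

{ foo(5,7), bar(7,3) }

the shared variable y binds to 7 and the guard 5 > 3 holds; firing the pattern removes bar(7,3) and keeps foo(5,7) in the store.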

Okay, so let's make use of such join patterns to specify the boosted transactional protocol. For simplicity, let's consider a very simple object to be boosted: a set with a single 'add' operation.


The interface:

add x = do t <- readEnv
           r <- newSync
           addI (t,x,r)
           readSync r


From the environment, we get the current transaction identifier (kind of like a ThreadId). We create a synchronization variable r on which we will block. But first we call the method
addI(t,x,r), whose interaction with other method calls is specified below.
(In a Join setting, method calls are recorded in a store.)


The protocol to check for conflicts:

-- conflict --> suspend
addI(t1,x,_) \ addI(t2,x,r) = addS(t2,x,r)

-- commute
addI(t1,x,_) \ addI(t2,y,r) when x \= y = do primAdd y
                                             register t2 'remove y'
                                             r := ()
                                             addToStore 'addI(t2,y,r)'

-- first call
addI(t,x,r) = do primAdd x
                 register t 'remove x'
                 r := ()
                 addToStore 'addI(t,x,r)'


The first join pattern checks for non-commuting method calls (subtle point: we assume that the propagated head will NOT be executed as a goal). Join patterns are tried from top to bottom (in our implementation). Hence, if the first join pattern can't fire, we try the second one, which applies if all calls to add commute and there are at least two add calls (we assume multi-set semantics; one call can't match two occurrences). In the 'commute' case, we call the primitive add (relying on the highly-concurrent, low-level data structure) and unblock the original add call. Registering the inverse operation is necessary in case we abort the transaction. The last join pattern applies if we're the first to call add. Adding add calls to the store guarantees that all calls are made visible to each other.

A transaction can only commit (reach the end) if it doesn't encounter conflicts. Otherwise, the transaction will be suspended.

The commit protocol has two stages. We first unblock one suspended add call (if one exists); otherwise, we do nothing. The assumption is that the committing transaction calls 'commit(t)'.

commit(t), addI(t,x,_), addS(t2,x,r) = addI(t2,x,r)
commit(t) = return ()


Actually, we could be more optimistic and unblock all commuting add calls. I can't see how to formulate this nicely with join patterns alone (we'd need to traverse all suspended calls and select all commuting ones).

We have yet to release all 'locks' (the stored add calls) held by transaction t. Once more, we call 'commit(t)'.

commit(t) \ addI(t,x,_) = return ()
commit(t) = return ()

The propagated goal 'commit(t)' guarantees that we exhaustively release all locks (a for loop, join-pattern style).
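For example, given the store { commit(t), addI(t,1,_), addI(t,2,_) }, the first rule fires twice (commit(t) is propagated, each addI(t,...) is simplified away), and afterwards the second rule removes commit(t) itself.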

In the last commit step, we erase all registered inverse operations.

As said above, in the current formulation, a transaction can only reach the end (and thus commit) if it doesn't encounter conflicts. The transaction won't fail or retry. It can only happen that the transaction is suspended, possibly indefinitely.

To avoid 'cyclic suspension' deadlocks, the 'transactional boosting' approach uses a time-out to actually fail a transaction. In case of failure, we need to remove the transaction's method calls

fail(t) \ addI(t,_,_) = return ()
fail(t) \ addS(t,_,_) = return ()
fail(t) = return ()

and we execute the registered inverse operations. Then, the transaction is started from scratch.
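As a rough sketch of this bookkeeping in plain Haskell (purely illustrative, not the actual multisetrewrite API): the registered inverses can be kept in a per-transaction log and replayed on failure.

import Data.IORef

type Undo = IO ()  -- an inverse operation, e.g. 'remove x'

registerInv :: IORef [Undo] -> Undo -> IO ()
registerInv undoLog inv = modifyIORef undoLog (inv :)

-- on failure: replay the inverses, newest first, and clear the log
rollback :: IORef [Undo] -> IO ()
rollback undoLog = do invs <- readIORef undoLog
                      sequence_ invs
                      writeIORef undoLog []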

Where to go next



The above description is not necessarily meant to be used in an efficient implementation. However, I believe that the use of join patterns allows us to give a nice declarative specification of 'transactional boosting' and its underlying protocols. It will be interesting to consider alternative 'boosting' protocols. It appears that in general the 'boosting' approach can only support pessimistic concurrency. Optimistic concurrency requires that a transaction commits its updates only after no conflicts have been detected.

In my experience, the combination of transactional boosting and optimistic concurrency only works out for some specific cases, eg the multi-set rewrite stuff I mentioned in my earlier 'Transactional Boosting' post.

Anyway, there's still lots to discover. Guess that's something for a future post.

Sunday 16 November 2008

Transactional boosting

This post is about software transactional memory (STM) and how to retain its benefits (composability) while trying to minimize its drawbacks (poor performance due to false conflicts).

Software transactional memory: Composability and false conflicts



Software transactional memory (STM) is a concurrency control mechanism analogous to database transactions for controlling access to shared memory in concurrent computing (wikipedia). Below is a simple example of a concurrent, unordered singly-linked list using STM as supported by GHC Haskell.


data List a = Node {val :: a, next :: TVar (List a)}
            | Nil

insert :: TVar (List a) -> a -> STM ()
find :: TVar (List a) -> a -> STM Bool
-- True = present, False = absent
delete :: TVar (List a) -> a -> STM Bool
-- True = present and deleted, False = absent

-- atomically delete 3 and 4 if present
transaction :: TVar (List Int) -> IO Bool
transaction root = atomically $ do
  b1 <- find root 3
  b2 <- find root 4
  if b1 && b2
    then do delete root 3
            delete root 4
            return True
    else return False


The straightforward definitions of the linked list operations are left out. They are all executed within the STM monad, which guarantees that each operation is executed atomically, i.e. to its full extent or not at all; in the programming language setting this means that we re-run (aka rollback/retry) the operation.
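For instance, find can be written as a straightforward traversal in the STM monad (a sketch based on the List type above; an Eq constraint is assumed):

find :: Eq a => TVar (List a) -> a -> STM Bool
find root x = do
  node <- readTVar root
  case node of
    Nil -> return False
    Node v nxt
      | v == x    -> return True
      | otherwise -> find nxt x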

In Haskell, we can easily build 'bigger' STM operations by gluing together 'smaller' STM operations. For example, the transaction above checks if elements 3 and 4 are present in the list and, if so, deletes both elements. The entire transaction is executed atomically, which guarantees that in case we return True, elements 3 and 4 were present but have now been deleted. Neat!

The benefits of STM over, say, locks are well documented. However, STM often has a significant performance disadvantage. Our (naive) STM implementation will protect the entire list. Suppose that elements 3 and 4 are stored towards the tail of the list. Then we need to traverse (almost) the entire list, which potentially leads to false conflicts with other concurrent STM operations. The effect is that many concurrent STM operations will be serialized (i.e. executed sequentially instead of in parallel).

Comparing concurrent linked-list implementations in Haskell



I recently wrote a paper with the above title, to be presented at DAMP'09, in collaboration with Simon Marlow and Edmund Lam. Our results show that the naive STM implementation is no match for more fine-tuned implementations which


  • either side-step the STM protocol (only individual node operations are performed within STM),
  • employ lock-coupling via Haskell's MVars, or
  • rely on atomic compare-and-swap.

Please check the paper for the details.

The (well-known) downside of these more efficient singly-linked list operations (efficient in the sense that they enjoy a higher degree of concurrency) is that they are no longer composable. That is, it is non-trivial to implement the above transaction (atomically find and delete 3 and 4) using the more efficient implementations.

Transactional boosting



Maurice Herlihy and Eric Koskinen wrote two recent papers on a topic they refer to as transactional boosting: How to recover the 'high-level' benefits of STM while relying on 'low-level', highly concurrent data-structures.
The papers are (see Eric's website):

  • Transactional Boosting: A Methodology for Highly-Concurrent Transactional Objects
    M. Herlihy, E. Koskinen. PPoPP 2008
  • Aggressive Transactional Boosting
    E. Koskinen, M. Herlihy.
    Under submission. August 2008.


My understanding of how the transactional boosting approach is meant to work is that


  • the user needs to verify that primitive operations 'commute' (see the example below); otherwise
  • the operands of (non-commuting) operations will be protected by an abstract lock, and
  • each operation requires an inverse which will be executed if the transaction fails to commit.
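For the set object from the 'revisited' post above, for instance, add(x) and add(y) commute precisely when x /= y, and the inverse of a successful add(x) is remove(x).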


It seems that the transactional boosting approach assumes pessimistic concurrency. I'm still not clear about the exact details of the approach, but it definitely looks very promising.

Incidentally, I ran across the issue of transactional boosting in my own work on concurrent execution of multi-set rewrite rules.

Multi-set rewriting and transactional boosting



Below are simple examples of (monomorphic for simplicity) multi-set rewrite rules.
Right-hand sides are omitted because they don't matter here.

A, B <==> ...
A, C <==> ...


Multi-set rewrite rules operate on a store, a multi-set of elements. We can concurrently apply multi-set rewrite rules if they operate on non-conflicting parts of the store. Suppose we have the store

{A, A, B, C}

Then, the first rule can be applied to (the first) A and B, and the second rule can be
applied to (the second) A and C. There's no conflict. Hence, both rules can be applied concurrently.

The point is that the store is implemented as a concurrent data-structure and execution of a rewrite rule can be viewed as an STM transaction. For example, the operational interpretation of the first rule is

atomically $ do
  find A
  find B
  if both present
    then do delete A
            delete B
    else return ()


The first implementation of concurrent multi-set rewrite rules, in collaboration with Edmund Lam (my PhD student btw), was presented here:

Edmund S. L. Lam, Martin Sulzmann:
A concurrent constraint handling rules implementation
in Haskell with software transactional memory. DAMP 2007.

It used a (naive) STM list to implement the store. Our experiments confirm that such an approach won't scale. Due to a high(ly likely) number of false conflicts, multi-set rewrite rules are executed sequentially even though we ought to execute them in parallel.

Our latest implementation, presented here:

Martin Sulzmann, Edmund S. L. Lam:
Parallel execution of multi-set constraint rewrite
rules. PPDP 2008.

It uses a more low-level, 'more concurrent' data structure which is *not* composable. Hence, we had to implement our own STM protocol on top of the low-level data structure. (The PPDP paper is a bit unclear about this, but we'll present what we've achieved in its full glory in a forthcoming journal submission.)

Summary



STM is great but it often has poor performance. There are many applications where we wish to have a 'transactional boosting' approach. As far as I can tell, the approach laid out by Maurice Herlihy and Eric Koskinen is fairly different from the approach we used for concurrent execution of multi-set rewrite rules. Possibly, there's some unifying theory, but I have the feeling that transactional boosting relies on some domain-specific assumptions. Anyway, interesting times with many interesting topics. I'm sure there's more to say about transactional boosting.

Wednesday 12 November 2008

Actor versus Join Concurrency

This post tries to shed some light on the differences and commonalities between actor and join style concurrency and explains its connection to multi-set rewrite rules.

Recently, I've published two libraries supporting


  • Actor concurrency, see here, and
  • Join concurrency, see here


Coordination via multi-set rewrite rules with guards



Both libraries support similar forms of coordination primitives in terms of multi-set rewrite rules with guards. For example, consider

Seller x, Buyer x --> "do something"

which says that if there's a seller and a buyer where both agree on the object x to be sold, then we execute the right-hand side "do something".

What's the connection to rewrite rules? Well, in case there's "Seller Obj" and "Buyer Obj", both will be rewritten to "do something". It's natural to view sellers and buyers as resources. To rewrite multiple resources, we simply use multi-set rewrite rules. For example,

A, A --> "blah"

will replace two occurrences of "A" by "blah".
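For instance, applying this rule to the store {A, A, B} consumes both occurrences of A and leaves {"blah", B}.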

Lesson learned so far: the basic coordination primitives behind actor and join style concurrency are expressed in terms of multi-set rewrite rules. In the join setting, these multi-set rewrite rules are commonly referred to as join-patterns. However, please take note that Erlang-style actors only support single-headed rewrite rules, and join-style concurrency as found in JoCaml, Polyphonic C# or Claudio Russo's Joins library does not support guards (both features are supported by the above-mentioned libraries).

Assuming we agree on a common set of multi-set rewrite rules (possibly with guards), what's the difference then between actor and join concurrency?

Actor concurrency



In the actor setting, we commonly assume that multi-set rewrite rules are applied sequentially. In the special case of Erlang's single-headed rewrite rules, we start scanning the mailbox (that's the place where sellers, buyers etc are stored) from left to right. For each mailbox element (often referred to as a message, aka method or even constraint in the rewrite setting), we then try the single-headed rewrite rules from top to bottom, until a match is found. In the general case of multi-set (headed) rewrite rules, things can get a bit more tricky. Check out the paper below for a formal sequential execution semantics

Martin Sulzmann, Edmund S. L. Lam, Peter Van Weert,
Actors with Multi-headed Message Receive Patterns.
COORDINATION 2008

But if rewrite rules are executed sequentially, where does concurrency come in? Well, we can have concurrent actors which communicate by sending each other messages (aka methods or constraints). The messages themselves are processed and executed sequentially by each actor. You may therefore argue that the behavior of each actor is completely deterministic.

Join concurrency



In the join setting, the place where methods (aka messages or constraints) are kept is commonly referred to as a "store". Any multi-set rewrite rule can fire as soon as we find a match for its left-hand side among the set of stored methods. The only condition is that if two rewrite rules fire concurrently, they must operate on non-overlapping parts of the store.

Summary



Both styles of concurrency use the same coordination primitive, i.e. multi-set rewrite rules. I'd argue that actor and join concurrency serve different purposes (hence, are equally useful). One can emulate the other but I prefer to have efficient support for both of them (not via emulation).

NOTE: I wrote this post after seeing these comments on reddit.com.

So, having a blog is possibly useful after all :)

The BaggerProblem

Description



There are n bags which can be filled up with either large, medium or small items. The weight of items of a specific kind is fixed. The maximum weight carried by a bag is fixed.

Bags are filled up with items in a fixed order:

  1. In sequential order from lowest to highest bag number.
  2. Exhaustively fill up the bag first with

     (a) large items, then
     (b) medium items, then
     (c) small items.



The straightforward solution is to

  1. Fill up bags with large items, then
  2. with medium items, and then finally with
  3. small items.


However, once a bag has been exhaustively filled up with large items (we either stop because the bag reached its maximum capacity of large items, or we've simply run out of large items), we can start filling up this bag with medium items. The point is that we don't have to wait until all bags are exhaustively filled up with large items.

This form of concurrency has a clear connection to instruction pipelining. What we want to show here is how to come up with a concurrent solution to the bagger problem.

We'll give two solutions:


  • a solution using join-patterns extended with guards,
  • from which we can easily derive a more 'elementary' solution using standard synchronization primitives such as Software Transactional Memory (STM).


Solution using Join extended with guards



The join-calculus (and languages implementing join-patterns) doesn't support guards. Hence, we make use of our join extension which supports guards. The guards we use here are in fact implicit (shared variables). Here we go:


The methods we introduce are:

Bag(kind,no,content) denotes a bag which

- currently expects to be filled with items of kind 'kind'
- no is the bag's number
- content refers to the current content

Initially, all bags are of the form

Bag(Large,i,Empty)


Item(k) denotes an item of kind k


The above methods are all asynchronous.

We introduce two synchronous methods.

Iterator(kind,no) fills up bag number no with items of kind 'kind'.
Initially, there are three 'iterators':

- Iterator(Large,1)
- Iterator(Medium,1)
- Iterator(Small,1)

Each iterator runs in its own thread. Each bag starts in 'Large' mode.
That is, only the large iterator can fill up the bags (sequentially).
Once the large iterator is done with bag number i, the bag is
'unlocked' by moving from 'Large' to 'Medium' mode and so on.

GetItem(kind,r) looks for an item of kind 'kind', the outcome
is reported in r of type Maybe Weight.

Here are the join patterns and their bodies.
(We assume that join patterns are executed from top to bottom,
for convenience only)

Bag(kind,no,content) & Iterator(kind,no) =
  if (weight content) + (weight kind) >= maxBagWeight
    then do bag (next kind) no content
            if no < MaxBagNo
              then iterator kind (no+1)
              else return ()
    else do
      r <- getItem kind
      case r of
        Nothing -> do
          bag (next kind) no content
          if no < MaxBagNo -- (***)
            then iterator kind (no+1)
            else return ()
        Just itemWeight -> do
          bag kind no (add content itemWeight)
          iterator kind no


Item(k) & GetItem(k,x) = case k of
                           Large -> x := Just Large
                           ...

GetItem(k,x) = x := Nothing


NOTE:

First, I thought we could replace (***) by the 'default' rule

Iterator(kind, no) = return ()

which we put *after* the first rule.

However, this won't work because the 'small' iterator, waiting for
the medium and large iterators, may terminate prematurely.



A concrete implementation of the above can be found
here as part of the multisetrewrite library.

Elementary solution in Haskell using STM





> import IO
> import Data.IORef
> import Control.Concurrent
> import Control.Concurrent.STM
> import Control.Concurrent.Chan


We set up the item space by storing large, medium and small
items in their respective lists. For each kind of item,
there's at most one thread trying to access the items.
Hence, using IORefs is fine.

> data ItemSpace = IS { large  :: IORef [()]
>                     , medium :: IORef [()]
>                     , small  :: IORef [()] }

We only store () values, the weights are fixed.

> largeWeight = 6
> mediumWeight = 4
> smallWeight = 2


The kind of items we support are either large, medium or small.
Done is a "dummy" item. We'll move from large to medium to small items.
Done simply indicates that we are done.

> data Item = Large | Medium | Small | Done deriving (Eq, Show)


> next :: Item -> Item
> next Large = Medium
> next Medium = Small
> next Small = Done

> itemWeight :: Item -> Int
> itemWeight Large = largeWeight
> itemWeight Medium = mediumWeight
> itemWeight Small = smallWeight


The procedure to fetch items. There may be concurrent threads
accessing either large, medium or small items. But there's
at most one thread for each kind.

> fetch :: ItemSpace -> Item -> IO (Maybe ())
> fetch is kind =
>   let get m = do
>         l <- readIORef m
>         case l of
>           (x:xs) -> do writeIORef m xs
>                        return (Just x)
>           []     -> return Nothing
>   in case kind of
>        Large  -> get (large is)
>        Medium -> get (medium is)
>        Small  -> get (small is)


Each bag records its current weight and content.
The mode tells us which kind of items are currently allowed to
be put into the bag. The mode is protected by a TVar which allows us
to suspend until the large items are exhaustively put into the bag.

> data Bag = Bag { mode      :: TVar Item
>                , curWeight :: IORef Int
>                , content   :: IORef [Item] }

> printBag :: Bag -> IO ()
> printBag bag = do
>   w <- readIORef (curWeight bag)
>   c <- readIORef (content bag)
>   putStrLn $ show (w,c)

The maximum weight held by each bag is fixed.

> maxBagWeight = 16


We add an item to a bag by incrementing the bag's weight
and updating the content.

> add :: Bag -> Item -> IO ()
> add bag kind = do
>   w <- readIORef (curWeight bag)
>   c <- readIORef (content bag)
>   writeIORef (curWeight bag) (w + itemWeight kind)
>   writeIORef (content bag) (kind : c)


We check whether a bag would become full (overweight) if we
added another item.

> bagIsFull :: Bag -> Item -> IO Bool
> bagIsFull bag kind = do
>   w <- readIORef (curWeight bag)
>   return ((w + itemWeight kind) > maxBagWeight)



fillBag fills up (exhaustively) a bag with items of a given kind.
We 'unlock' the bag by 'incrementing' the mode if
- the bag is full, or
- no more items of this kind are left.


> incrementMode :: Bag -> IO ()
> incrementMode bag =
>   atomically $ do status <- readTVar (mode bag)
>                   writeTVar (mode bag) (next status)

> fillBag :: ItemSpace -> Bag -> Item -> IO ()
> fillBag is bag Done = error "not happening here"
> fillBag is bag kind =
>   let loop = do
>         b <- bagIsFull bag kind
>         if b then incrementMode bag
>              else do
>                r <- fetch is kind
>                case r of
>                  Nothing -> incrementMode bag
>                  Just _  -> do add bag kind
>                                loop
>   in loop


We process (ie fill up) the list of bags sequentially.
For each bag we must wait until it's our turn.
The priorities are

large > medium > small


> waitMode :: Bag -> Item -> IO ()
> waitMode bag kind =
>   atomically $ do status <- readTVar (mode bag)
>                   if status == kind then return ()
>                                     else retry

> processBags :: ItemSpace -> Item -> [Bag] -> IO ()
> processBags is kind bs =
>   mapM_ (\b -> do waitMode b kind
>                   fillBag is b kind) bs


For each kind we spawn a thread to fill up the list of bags.


> start :: ItemSpace -> [Bag] -> IO ()
> start is bs = do
>   cnt <- atomically $ newTVar 0
>   let spawn kind = forkIO (do processBags is kind bs
>                               atomically $ do v <- readTVar cnt
>                                               writeTVar cnt (v+1))
>   spawn Large
>   spawn Medium
>   spawn Small
>   atomically $ do v <- readTVar cnt
>                   if v == 3 then return ()
>                             else retry




testing

> createIS ln mn sn = do
>   l <- newIORef [ () | x <- [1..ln] ]
>   m <- newIORef [ () | x <- [1..mn] ]
>   s <- newIORef [ () | x <- [1..sn] ]
>   return (IS {large = l, medium = m, small = s})

> emptyBag = do
>   m <- atomically $ newTVar Large
>   w <- newIORef 0
>   c <- newIORef []
>   return (Bag {mode = m, curWeight = w, content = c})

> test largeNo medNo smallNo bagNo = do
>   is <- createIS largeNo medNo smallNo
>   bs <- mapM (\_ -> emptyBag) [1..bagNo]
>   mapM_ printBag bs
>   putStrLn "Start"
>   start is bs
>   putStrLn "Done"
>   mapM_ printBag bs
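For example (numbers picked arbitrarily), the call

test 4 6 10 3

fills three bags from a pool of 4 large, 6 medium and 10 small items, printing each bag's weight and content before and after.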


Some observations



The STM solution can be improved. The thread filling up bags with large items doesn't need to check the status of the bag, and the thread filling up small items only needs to be woken up once the 'medium' thread is done. We could achieve this behavior by replacing

mode :: TVar Item

with

mode1 :: TVar Item
mode2 :: TVar Item

That is, the 'medium' thread retries on mode1 and the 'small' thread retries on mode2.
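A sketch of this variant (illustrative only, not part of the code above):

data Bag' = Bag' { mode1 :: TVar Item  -- flipped from Large to Medium
                 , mode2 :: TVar Item  -- flipped from Medium to Small
                 , curWeight' :: IORef Int
                 , content' :: IORef [Item] }

-- The 'large' thread never waits; the 'medium' thread retries on
-- mode1 and the 'small' thread retries on mode2.
waitMode' :: Bag' -> Item -> IO ()
waitMode' _ Large = return ()
waitMode' bag Medium = atomically $ do s <- readTVar (mode1 bag)
                                       if s == Medium then return () else retry
waitMode' bag Small = atomically $ do s <- readTVar (mode2 bag)
                                      if s == Small then return () else retry
waitMode' _ Done = return ()

incrementMode would then write to mode1 once the large items are done and to mode2 once the medium items are done.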

In the join solution none of these optimizations are necessary; they are dealt with implicitly. For example, if Iterator(Small,someNo) is unable to fire a join-pattern, this method will be suspended (i.e. the method is put into the store). If the mode of a bag finally switches to Small, this active bag, in combination with the inactive (ie suspended) Iterator(Small,someNo), will trigger the join-pattern.

Thursday 6 November 2008

Gossiping girls

In my class on UPPAAL (a modeling and analysis tool for communicating FAs aka CFAs), I gave the following (classic) exercise:

A number of girls initially know one distinct secret each. Each girl has access to a phone which can be used to call another girl to share their secrets. Each time two girls talk to each other they always exchange all secrets with each other (thus after the phone call they both know all secrets they knew together before the phone call). The girls can communicate only in pairs (no conference calls) but it is possible that different pairs of girls talk concurrently.

Sounds like fun, right? Before I discuss a sample solution in UPPAAL, let's consider some alternative frameworks which we could use to code up the above problem.

Here's a solution using CHR (Constraint Handling Rules):

Girl(name1,secrets1), Girl(name2,secrets2) <==>
    newSecrets(secrets1,secrets2) | let s = shareSecrets(secrets1,secrets2)
                                    in Girl(name1,s),
                                       Girl(name2,s)


What about using Join-patterns extended with guards and synchronous method calls:

Girl(name1,secrets1) & Girl(name2,secrets2) = do
    secrets1 := secrets1 `union` secrets2
    secrets2 := secrets2 `union` secrets1


Okay, so finally my UPPAAL solution. First, the declarations plus description. The actual CFA is displayed below.



// =========================
// == Global declarations ==

// We introduce two channels. First, I thought we can do with just
// one channel but then I believe things will get really messy.

chan call, call2;

// We assume that there are three secrets: one, two and three

// To pass secrets among girls we introduce three (global) variables,
// one for each secret.
// The value 0 indicates that the secret is not present (ie known).
// The value 1 indicates that the secret is known.

int var_s1, var_s2, var_s3;

// The actual transmission of values is performed in two steps:
// 1. First the caller transfers her secrets
// 2. Then, it's the callee's turn to transfer her secrets.
// We must guarantee that this step happens atomically between
// two parties. We can either (a) use the UPPAAL feature of
// committed locations or (b) we simply transmit the callers identity
// which then the callee transmits back for verification.
// Option (a) sounds too easy, almost cheating :),
// so we'll pursue option (b).

// To transmit the callers identity, we'll need another "message-passing" variable.

int var_id;

// WARNING: We use here the initial_secret as a unique identifier,
// which implies that we only support girls which initially know
// a distinct secret.

// Please check the declarations of the girl template which
// describes the ideas and functionality of the CFA.




// =======================================
// == Declarations of the girl template ==

// Three variables to represent the girl's knowledge/status about
// the three secrets.

// We will initialize the variables when instantiating
// the girl template. The input parameter initial_secret
// refers to the secret that's initially known.

int s1, s2, s3;

// local variables to store the transmitted messages (secrets and caller id)

int in1, in2, in3, in_id;

// The first step of the girl CFA is to record (locally) the initially known
// secret. View as programming if-then-else statements using CFA transitions.
// If initial_secret == 1 then s1 = 1, s2 = 0, s3 = 0
// else and so on

// It gets interesting once we reach the "Start" state.
// In the first stage, we can either make a call, via "call!", or receive a call via "call?".
// In case we're the caller, we transmit our secrets and our identity (in form of the
// initial_secret). In case we're the receiver of the call, we're storing the
// transmitted values. Then, in the second stage, the roles switch and the caller
// becomes the receiver and vice versa. Notice the use of a different channel.
// This is to avoid that a caller of the first stage can interact with a receiver
// of the second stage. We yet need to check that the caller and receiver of the
// first stage are still the same for the second stage. We perform this check
// in state "Verify" by comparing the transmitted identity with our own (the initial_secret).
// A point to note is that this check is only performed by the receiver
// of the second stage.
// States "Update_Secrets1" and "Update_Secrets2" then update the secrets after the
// exchange. The formula (x+y) - (x*y) evaluates to 1 or 0 for values 0 and 1.
// Sure, this could be done smarter using bit operations or simply using UPPAAL's
// C-style array.



// =========================
// == System declarations ==

girl1 = Girl(1);
girl2 = Girl(2);
girl3 = Girl(3);

// List one or more processes to be composed into a system.
system girl1, girl2, girl3;


And finally the CFA:
(Figure: the Girl CFA)


So, which solution do you prefer? To be fair, UPPAAL is not meant for programming; its strength lies in analyzing CFAs via its expressive query language and model checker.

Wednesday 5 November 2008

From Join to UPPAAL and back

I'm currently teaching a course using UPPAAL for modelling and analysis of communicating FAs (finite automata). In this post, I will show some interesting connections between communicating FAs and coordination patterns as found in the Join calculus (referred to as Join for brevity).

For concreteness, let's consider a (classic) coordination example: the 'bridge' problem. We have four soldiers on the left (unsafe) side of the bridge. They'd like to cross the bridge to get to the right (safe) side. The problem is that it's night and they need a torch to cross the bridge. The torch can carry a maximum of two soldiers. We assume that the torch always carries two soldiers from left to right and only one soldier goes back (right to left).

In Join, we can specify/model this problem as follows. We introduce synchronous methods 'take', 'release' and 'takeBack' to model the actions. We introduce asynchronous methods 'free', 'one', 'two', 'oneLeft', 'rightSide' and 'oneFrom' to represent the states of the torch, and 'unsafe(i)', 'towards(i)', 'safe(i)' and 'back(i)' to represent the states of soldier i. Their meaning will become clear once we see the join-patterns describing the interaction between torch and soldiers.

The behavior of the torch is specified via the following join-patterns (using Haskell-style syntax).

free = do take
          one

one = do take
         two

two = do release
         oneLeft

oneLeft = do release
             rightSide

rightSide = do takeBack
               oneFrom

oneFrom = do release
             free


Of course, we could have unfolded the above definitions into the following (and thus omitting the intermediate states)


free = do take
          take
          release
          release
          takeBack
          release
          free

This makes it even clearer that the torch waits for two soldiers to grab the torch. Then, the torch (with the soldiers) crosses the bridge (this is implicit). The torch releases both soldiers, then takes back one soldier and releases this soldier on the unsafe side again. The more verbose representation (using the intermediate states) will be useful later when discussing the connection to communicating FAs.

We yet need to model the behavior of soldier(i). We finally make use of some multi-headed join-patterns.

unsafe(i) & take = towards(i)

towards(i) & release = safe(i)

safe(i) & takeBack = back(i)

back(i) & release = unsafe(i)


For example, the first join-pattern above says that if soldier i is in the unsafe state and we can take the torch, then we unblock the caller of take (remember that take is synchronous) and we issue a new asynchronous method call towards(i). The semantics of join-patterns is actually fairly simple: in essence, multi-set rewriting where the methods are treated as resources. The multisetrewrite library introduced in a previous post can easily encode join-patterns (and even more, eg guards etc).
I will release some runnable code at some later stage.

The point of this exercise is not to show what can be done with Join; rather, we want to establish some connections to communicating FAs. So, what's the connection? In Join, a join-pattern is triggered (aka executed) if we can match the pattern heads against stored method calls. In communicating FAs, two automata 'communicate' if we find matching send (!) and receive (?) transitions.


p --- a! --> q        r --- a? --> t
------------------------------------
        (p,r) --- a --> (q,t)


Each component of the tuple corresponds to the state of one communicating FA (CFA). If one CFA has a send edge (a!) which matches the receive edge (a?) of another CFA, then we make a transition along those edges as shown in the above diagram. This smells similar to Join, right? Indeed, in Join we can represent

p --- a! --> q

by

p = do a
       q

and

p --- a? --> q

by

p & a = q

The states are turned into asynchronous method calls and the actions 'a' are turned into synchronous method calls.


In fact, the Join program shown above was derived from the following CFAs
using this translation scheme.

The torch CFA:

(Figure: the Torch CFA)


The soldier CFA:
(Figure: the Soldier CFA)


It's also possible to systematically translate Join programs to CFAs, but it gets more tedious to work out the details. Both languages are equally expressive. However, (I claim) Join allows us to specify complex coordination patterns more easily than CFAs (which is why the transformation to CFAs gets quite involved). The advantage of having a Join representation of CFAs is that we know how to execute Join programs efficiently (eg see the multisetrewrite library which exploits join-style concurrency by executing non-overlapping join-patterns in parallel). On the other hand, a CFA representation of Join allows us to make use of UPPAAL's expressive query language and its model checking facilities. For example, via the query

E<> safe(1) and safe(2) and safe(3) and safe(4)

we can check if (assuming we have four soldiers) it's possible for all four soldiers to reach the safe side. E says there exists a trace and <> says there exists a state along that trace.

We can also check if the system does not deadlock

A[] not deadlock

This property is not satisfied for the following reason. In case all four soldiers are on the safe side and one soldier makes its way back to the unsafe side, that soldier won't be able to return to the safe side. Remember that in our torch model, we require two soldiers per torch. For this example at least, the deadlock property doesn't say much. We only know that the program will terminate (possibly in some undesired state).

UPPAAL's query language is a subset of CTL btw. This implies that we canNOT verify (very complicated) queries such as "are the join-patterns confluent". Confluence requires us to quantify over traces, then over states and then again over traces, which is beyond CTL (for good reasons, because it gets way too complex to check such properties).

Anyway, the connection between Join and CFAs is quite useful. This way I became a much better UPPAAL user. First write your program in Join and then convert to CFAs. I will write more about this connection at some later stage.

Playing with regular expressions: Intersection

We implement symbolic intersection of regular expressions
by adapting Antimirov's regular expression rewriting algorithm.
By symbolic we mean that regular expressions are not transformed into
automata. We rewrite regular expressions by using partial derivatives.
The algorithm (highlights):


r1 `intersect` r2 =

  let [l1,...,ln] = lettersOf (r1 | r2)
  in

     (l1, partDeriv r1 l1 `intersect` partDeriv r2 l1)
   | ...
   | (ln, partDeriv r1 ln `intersect` partDeriv r2 ln)


The idea is that each regular expression is turned into a
'disjunction' of clauses where each clause starts with
a letter, followed by the partial derivative of the regular
expression wrt that letter.

You probably know Brzozowski's derivative operation,
which says that


derivative r l = [ w | (l,w) in L(r) ]

i.e. the derivative collects all words w such that l followed by w is in L(r).


If we used derivatives, the above algorithm
would be non-terminating. For example,


deriv (Star (L 'A')) 'A'
--> <<>,'A'*>

deriv (deriv (Star (L 'A')) 'A') 'A'
--> (<{},'A'*>|<<>,'A'*>)


and so on. The nice property of partial derivatives is that
they are 'finite'. Each regular expression has a finite
number of partial derivatives. For example,


partDeriv (Star (L 'A')) 'A'

--> <<>,'A'*>

partDeriv (partDeriv (Star (L 'A')) 'A') 'A'
--> <<>,'A'*>


Please check the paper

Valentin M. Antimirov: Partial Derivatives of Regular Expressions and Finite Automaton Constructions.
Theor. Comput. Sci. 155(2): 291-319 (1996)

for more details on derivatives, partial derivatives and their connection
(roughly, derivatives correspond to DFA states and partial derivatives correspond to NFA states).


There's another source of non-termination which we explain via
the following example.


A* `intersect` A*

--> (A, A* `intersect` A*)


where we simplified partDeriv A* A --> <<>,A*> to A*

The point to note is that the proof tree is infinite. The standard approach
is to use co-induction which then yields a finite (recursive) proof.

OK, here's finally the Haskell implementation:


> {-# LANGUAGE GADTs #-}

We're just using the GADT syntax to write 'ordinary' algebraic data types.

> module RegExpIntersection where

> import List


> data RE a where
>   Phi    :: RE a                  -- empty language
>   Empty  :: RE a                  -- empty word
>   L      :: a -> RE a             -- single letter taken from alphabet a
>   Choice :: RE a -> RE a -> RE a  -- r1 + r2
>   Seq    :: RE a -> RE a -> RE a  -- (r1,r2)
>   Star   :: RE a -> RE a          -- r*
>   Var    :: Int -> RE a           -- first-order variables to represent regular equations
>   deriving Eq

A word is a list of alphabet letters

> type Word a = [a]

Pretty printing of regular expressions

> instance Show a => Show (RE a) where
>   show Phi = "{}"
>   show Empty = "<>"
>   show (L c) = show c
>   show (Choice r1 r2) = "(" ++ show r1 ++ "|" ++ show r2 ++ ")"
>   show (Seq r1 r2) = "<" ++ show r1 ++ "," ++ show r2 ++ ">"
>   show (Star r) = show r ++ "*"

Turning a list of regular expressions into a regular expression by using | (choice)

> resToRE :: [RE a] -> RE a
> resToRE (r:res) = foldl Choice r res
> resToRE [] = Phi


Computing the set of letters of a regular expression

> sigmaRE :: Eq a => RE a -> [a]
> sigmaRE (L l) = [l]
> sigmaRE (Seq r1 r2) = nub ((sigmaRE r1) ++ (sigmaRE r2))
> sigmaRE (Choice r1 r2) = nub ((sigmaRE r1) ++ (sigmaRE r2))
> sigmaRE (Star r) = sigmaRE r
> sigmaRE Phi = []
> sigmaRE Empty = []


Testing if a regular expression is empty (empty word)

> isEmpty :: RE a -> Bool
> isEmpty Phi = False
> isEmpty Empty = True
> isEmpty (Choice r1 r2) = (isEmpty r1) || (isEmpty r2)
> isEmpty (Seq r1 r2) = (isEmpty r1) && (isEmpty r2)
> isEmpty (Star r) = True
> isEmpty (L _) = False

Testing if a regular expression contains nothing

> isPhi :: RE a -> Bool
> isPhi Phi = True
> isPhi Empty = False
> isPhi (Choice r1 r2) = (isPhi r1) && (isPhi r2)
> isPhi (Seq r1 r2) = (isPhi r1) || (isPhi r2)
> isPhi (Star r) = False
> isPhi (L _) = False


Brzozowski's derivative operation
deriv r l denotes the regular expression where the
"leading l has been removed"
(not necessary for the intersection algorithm,
only included to illustrate the difference
to the upcoming partial derivative algorithm)

> deriv :: Eq a => RE a -> a -> RE a
> deriv Phi _ = Phi
> deriv Empty _ = Phi
> deriv (L l1) l2
>   | l1 == l2  = Empty
>   | otherwise = Phi
> deriv (Choice r1 r2) l = Choice (deriv r1 l) (deriv r2 l)
> deriv (Seq r1 r2) l =
>   if isEmpty r1
>     then Choice (Seq (deriv r1 l) r2) (deriv r2 l)
>     else Seq (deriv r1 l) r2
> deriv this@(Star r) l = Seq (deriv r l) this



(a variant) of Antimirov's partial derivative operation

Antimirov demands that partDeriv (Star (L 'A')) 'A' yields [A*]
whereas our version yields [<<>,'A'*>].
The difference is not essential here.

> partDeriv :: Eq a => RE a -> a -> [RE a]
> partDeriv Phi l = []
> partDeriv Empty l = []
> partDeriv (L l') l
>   | l == l'   = [Empty]
>   | otherwise = []
> partDeriv (Choice r1 r2) l = nub ((partDeriv r1 l) ++ (partDeriv r2 l))
> partDeriv (Seq r1 r2) l
>   | isEmpty r1 =
>       let s1 = [ Seq r1' r2 | r1' <- partDeriv r1 l ]
>           s2 = partDeriv r2 l
>       in nub (s1 ++ s2)
>   | otherwise = [ Seq r1' r2 | r1' <- partDeriv r1 l ]
> partDeriv (Star r) l = [ Seq r' (Star r) | r' <- partDeriv r l ]
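A small example (computed by hand, so treat it as a sketch):

*RegExpIntersection> partDeriv (Seq (L 'A') (L 'B')) 'A'
[<<>,'B'>]

i.e. the single partial derivative Seq Empty (L 'B').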


Here's finally the partial derivative based intersection algorithm.


> type Env a = [((RE a, RE a), RE a)]


> intersectRE :: Eq a => RE a -> RE a -> RE a
> intersectRE r1 r2 = intersectC [] r1 r2

> intersectC :: Eq a => Env a -> RE a -> RE a -> RE a
> intersectC env r1 r2
>   | r1 == Phi || r2 == Phi = Phi
>   | r1 == Empty || r2 == Empty =
>       -- the intersection contains at most the empty word
>       if isEmpty r1 && isEmpty r2 then Empty else Phi
>   | otherwise =
>       case lookup (r1,r2) env of
>         Just r -> r
>         Nothing ->
>           let letters = sigmaRE (r1 `Choice` r2)
>               env' = ((r1,r2),r):env
>               r1l l = resToRE $ partDeriv r1 l
>               r2l l = resToRE $ partDeriv r2 l
>               r' = resToRE $ map (\l -> Seq (L l) (intersectC env' (r1l l) (r2l l))) letters
>               r = if isEmpty r1 && isEmpty r2
>                     then Choice r' Empty
>                     else r'
>           in r

The problem with this algorithm is that we use recursion in Haskell to model
recursive proofs, so the result is an infinite regular expression. For example, try

intersectRE rAstar rAstar




Converting a regular equation into a regular expression

We assume that the variable x appears (if at all) in positions of the form (r, Var x).
convert2 traverses the regexp and yields (r1,r2),
where the invariant is that r1 is part of the loop and r2 is the base case.

> convert :: Int -> RE a -> RE a
> convert x r = let (r1,r2) = convert2 x r
>               in Seq (Star r1) r2

> convert2 :: Int -> RE a -> (RE a, RE a)
> convert2 x Empty = (Empty, Empty)
> convert2 x (Var y)
>   | x == y    = (Empty, Empty)
>   | otherwise = (Empty, Var y) -- can this happen?
> convert2 x r@(Seq l r1)
>   | contains x r1 = let (r2,r3) = convert2 x r1
>                     in (Seq l r2, r3)
>   | otherwise = (Empty, r)
> convert2 x (Choice r1 r2) = let (r1', r1'') = convert2 x r1
>                                 (r2', r2'') = convert2 x r2
>                             in (Choice r1' r2', Choice r1'' r2'')

> contains :: Int -> RE a -> Bool
> contains x (Var y) = x == y
> contains x (Seq r1 r2) = contains x r1 || contains x r2
> contains x (Star r) = contains x r
> contains x (Choice r1 r2) = contains x r1 || contains x r2
> contains x _ = False
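As a small worked example (output reconstructed by hand, so treat it as a sketch): the regular equation x = A·x + ε is represented as Choice (Seq (L 'A') (Var 1)) Empty, and

*RegExpIntersection> convert 1 (Choice (Seq (L 'A') (Var 1)) Empty)
<(<'A',<>>|<>)*,(<>|<>)>

which is just a verbose way of writing A*.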


We use first-order syntax to represent recursive proofs/regular expressions.
We then use convert to actually build the non-recursive regular expression.


> intersectRE2 :: Eq a => RE a -> RE a -> RE a
> intersectRE2 r1 r2 = intersectC2 1 [] r1 r2

> intersectC2 :: Eq a => Int -> Env a -> RE a -> RE a -> RE a
> intersectC2 cnt env r1 r2
>   | r1 == Phi || r2 == Phi = Phi
>   | r1 == Empty || r2 == Empty =
>       -- the intersection contains at most the empty word
>       if isEmpty r1 && isEmpty r2 then Empty else Phi
>   | otherwise =
>       case lookup (r1,r2) env of
>         Just r -> r
>         Nothing ->
>           let letters = sigmaRE (r1 `Choice` r2)
>               env' = ((r1,r2),Var cnt):env
>               r1l l = resToRE $ partDeriv r1 l
>               r2l l = resToRE $ partDeriv r2 l
>               r' = resToRE $ map (\l -> Seq (L l) (intersectC2 (cnt+1) env' (r1l l) (r2l l))) letters
>               r = if isEmpty r1 && isEmpty r2
>                     then Choice r' Empty
>                     else r'
>           in convert cnt r


For testing purposes, it's handy to have a function which tests for (semantic)
equality of regular expressions (again written using partial derivatives).


> type EnvEq a = [(RE a, RE a)]

> eqRE :: Eq a => RE a -> RE a -> Bool
> eqRE r1 r2 = eqREC [] r1 r2

> eqREC :: Eq a => EnvEq a -> RE a -> RE a -> Bool
> eqREC env r1 r2
>   | isEmpty r1 /= isEmpty r2 = False   -- nullability must agree in both directions
>   | isPhi r1 /= isPhi r2 = False
>   | otherwise =
>       if elem (r1,r2) env
>         then True
>         else let letters = sigmaRE (r1 `Choice` r2)
>                  env' = (r1,r2):env
>                  r1l l = resToRE $ partDeriv r1 l
>                  r2l l = resToRE $ partDeriv r2 l
>              in and $ map (\l -> eqREC env' (r1l l) (r2l l)) letters

examples

> rA = L 'A'
> rAstar = Star rA
> rB = L 'B'
> rBstar = Star rB
> rAorBstar = Star (Choice rA rB)

Some sample runs (using intersectRE2, intersectRE won't terminate for interesting cases)

*RegExpIntersection> intersectRE2 rAstar rBstar
<((<>|<>)|<>)*,((<'A',{}>|<'B',{}>)|<>)>
*RegExpIntersection> eqRE Empty (intersectRE2 rAstar rBstar)
True
*RegExpIntersection> intersectRE2 rAorBstar rBstar
<((<>|<>)|<>)*,((<'A',{}>|<'B',<((<>|<'B',<>>)|<>)*,((<'A',{}>|<>)|<>)>>)|<>)>
*RegExpIntersection> eqRE rBstar (intersectRE2 rAorBstar rBstar)
True
*RegExpIntersection> intersectRE2 rAorBstar rAorBstar
<((<>|<>)|<>)*,((<'A',<((<'A',<>>|<'B',<>>)|<>)*,((<>|<>)|<>)>>|<'B',<((<'A',<>>|<'B',<>>)|<>)*,((<>|<>)|<>)>>)|<>)>
*RegExpIntersection> eqRE rAorBstar (intersectRE2 rAorBstar rAorBstar)
True
*RegExpIntersection> eqRE rAstar (intersectRE2 rAorBstar rAorBstar)
False
*RegExpIntersection> eqRE rBstar (intersectRE2 rAorBstar rAorBstar)
False


The regular expressions generated by the convert function are 'very verbose'.
There's room for lots of improvement.