Amphithéâtre Guillaume Budé, Site Marcelin Berthelot
Open to all

Abstract

Multi-core processors are an example of a parallel architecture with shared memory, in which several computing units operate simultaneously on a common memory. Programming such architectures is difficult: we must control possible interference between the actions of different processes, and avoid race conditions between simultaneous writes and reads.

What program logic can we use to verify parallel programs with shared memory? To answer this question, the fourth lecture introduced Concurrent Separation Logic (CSL), an extension of separation logic with reasoning rules for parallelism and mutual exclusion.

Separation logic gives a very simple account of parallel computation without resource sharing, where processes run in parallel over disjoint portions of memory. This is the case for many recursive algorithms over arrays or trees, where recursive calls operate on disjoint sub-arrays or sub-trees.
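As an illustration (not taken from the lecture, which uses a small formal language), here is a minimal Python sketch of resource-disjoint parallelism: two threads sum disjoint halves of an array, each writing to its own result slot, so no synchronization is needed beyond waiting for both to finish.

```python
import threading

def parallel_sum(xs):
    """Sum xs by running two workers in parallel on disjoint halves."""
    mid = len(xs) // 2
    results = [0, 0]  # each worker writes only its own slot

    def worker(slot, lo, hi):
        # This worker reads only xs[lo:hi] and writes only results[slot];
        # in separation-logic terms, its footprint is disjoint from the other's,
        # so the parallel composition rule applies with no sharing.
        results[slot] = sum(xs[lo:hi])

    t1 = threading.Thread(target=worker, args=(0, 0, mid))
    t2 = threading.Thread(target=worker, args=(1, mid, len(xs)))
    t1.start(); t2.start()
    t1.join(); t2.join()
    return results[0] + results[1]
```

The same pattern extends to recursive divide-and-conquer: each recursive call receives ownership of a disjoint sub-array, and the results are combined after the parallel branches join.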

CSL, as introduced by O'Hearn in 2004, adds reasoning rules on critical sections, allowing multiple processes to access shared resources, provided that such accesses are mutually exclusive. Shared resources are described by separation logic formulas that must be invariant outside critical sections. This makes it possible to describe not only many inter-process synchronization idioms, but also the resource transfers that take place implicitly during these synchronizations.
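The idea of a resource invariant held outside critical sections can be sketched in Python with a lock (the class and invariant below are illustrative, not from the lecture): the invariant "the value is even" may be broken inside the critical section, but must be restored before the lock is released.

```python
import threading

class SharedCell:
    """A shared resource protected by a lock.

    Resource invariant (illustrative): self.value is even whenever
    no thread holds the lock. A thread entering the critical section
    gains ownership of the resource and may break the invariant
    temporarily, but must re-establish it before leaving.
    """
    def __init__(self):
        self.lock = threading.Lock()
        self.value = 0  # invariant holds: 0 is even

    def add_two(self):
        with self.lock:       # enter critical section: acquire the invariant
            self.value += 1   # invariant temporarily broken (value is odd)
            self.value += 1   # invariant restored (value is even again)
        # leaving the block releases the lock, and with it the invariant
```

In CSL terms, acquiring the lock transfers the resource (and its invariant formula) into the thread's local assertion; releasing the lock transfers it back, which is what makes implicit resource transfer between processes expressible.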

We defined a CSL for the small language of the previous lecture, enriched with constructs for parallelism and atomic instructions. We showed how to build binary semaphores, critical sections, and producer-consumer schemes in this language and logic. Finally, we established the semantic soundness of this CSL, following a proof published by Vafeiadis in 2011.
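The producer-consumer scheme mentioned above can be sketched in Python using two binary semaphores and a one-slot buffer (a standard construction, not the lecture's formal development): ownership of the slot passes back and forth between the two threads, each semaphore signal implicitly transferring the resource to the other side.

```python
import threading

# One-slot buffer shared between a producer and a consumer.
empty = threading.Semaphore(1)  # 1: the producer may write the slot
full = threading.Semaphore(0)   # 0: the consumer must wait for data
slot = [None]                   # the shared resource being transferred
received = []                   # items observed by the consumer

def producer(items):
    for x in items:
        empty.acquire()   # gain ownership of the (empty) slot
        slot[0] = x
        full.release()    # transfer ownership of the (full) slot

def consumer(n):
    for _ in range(n):
        full.acquire()    # gain ownership of the (full) slot
        received.append(slot[0])
        empty.release()   # transfer ownership back to the producer
```

Because the semaphores are binary and alternate, the two threads never access the slot at the same time, and every item is delivered exactly once and in order.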