Early in the discussion it became apparent that there is a terminology gap between multi-core concurrency and its close relative from the distributed programming world.
After that meeting, Guy put together the following vocabulary. I thought it was worth sharing with everyone who deals with multicore concurrency and distributed computing. Enjoy.
- Serializability – When a concurrent algorithm can promise that the concurrent events can be ordered in a correct “serial” order, as if the operations had executed one after another.
- Linearizability – When a concurrent algorithm can promise that the concurrent events can be ordered in a correct “serial” order, where the guiding line is that each method invocation can be considered to take effect at a single point in time between its invocation and its response.
- Lock-Free – An algorithm that promises that at least one thread can make progress at all times (meaning the system as a whole progresses), while individual threads might starve.
- Wait-Free – An algorithm that promises that all threads make progress at all times (meaning no starvation).
- Obstruction-Free – An algorithm that promises that any thread can make progress if it executes in isolation; under contention, no progress is promised.
- Consensus – An algorithm that helps different processes/threads reach a single shared decision. It is proven impossible to solve deterministically in an asynchronous distributed system with even one faulty process (the FLP result), and in shared memory, wait-free consensus among any number of threads requires a primitive such as CompareAndSet.
- Amdahl’s Law – A model for the expected speedup of a parallelized implementation of an algorithm relative to the serial implementation, as a function of the fraction of the work that can be parallelized and the number of processors.
- Moore’s Law – A popular statement that CPU speed doubles approximately every two years. (The original observation was about transistor counts, and clock speeds have stopped keeping pace; with careful attention to concurrency, though, the law can still seem to hold.)
- NUMA – Non-Uniform Memory Access or Non-Uniform Memory Architecture is a computer memory design used in multiprocessors, where the memory access time depends on the memory location relative to a processor. Under NUMA, a processor can access its own local memory faster than non-local memory, that is, memory local to another processor or memory shared between processors.
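To make the lock-free vs. wait-free distinction above concrete, here is a minimal Java sketch (the class and method names are mine, for illustration). The CAS retry loop is lock-free: a thread may retry forever under contention, but every failed CAS means some other thread's CAS succeeded, so the system progresses. `getAndIncrement()` is wait-free on hardware where it compiles to a single fetch-and-add instruction, since it completes in a bounded number of steps regardless of other threads.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ProgressGuarantees {
    // Lock-free increment: an individual thread may loop (starve),
    // but a failed CAS implies another thread's CAS succeeded,
    // so the system as a whole always makes progress.
    static int lockFreeIncrement(AtomicInteger counter) {
        while (true) {
            int current = counter.get();
            if (counter.compareAndSet(current, current + 1)) {
                return current + 1; // our CAS won
            }
            // CAS failed: another thread updated the counter; retry.
        }
    }

    // Wait-free increment (assuming a fetch-and-add instruction,
    // e.g. LOCK XADD on x86): bounded steps for every thread.
    static int waitFreeIncrement(AtomicInteger counter) {
        return counter.getAndIncrement() + 1;
    }
}
```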
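The consensus definition can also be sketched in code. Below is a single-shot consensus object built on CompareAndSet (the class name is mine, and it assumes proposals are non-null): the first thread's CAS wins, and every caller, including the losers, decides on the winner's value.

```java
import java.util.concurrent.atomic.AtomicReference;

// Single-shot consensus: all threads that call decide() agree
// on one value, namely the proposal of whichever CAS ran first.
public class CasConsensus<T> {
    private final AtomicReference<T> decision = new AtomicReference<>(null);

    public T decide(T proposal) {
        decision.compareAndSet(null, proposal); // only the first CAS succeeds
        return decision.get();                  // everyone reads the winner
    }
}
```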
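Amdahl's Law is usually written as S(n) = 1 / ((1 − p) + p/n), where p is the parallelizable fraction of the work and n is the number of processors. A tiny sketch (the helper name is mine):

```java
public class Amdahl {
    // Expected speedup of a parallelized algorithm over the serial one:
    // S(n) = 1 / ((1 - p) + p / n)
    // p = fraction of the work that can be parallelized, n = processors.
    static double speedup(double p, int n) {
        return 1.0 / ((1.0 - p) + p / n);
    }
}
```

Note the ceiling: even with 95% of the work parallelized, as n grows the speedup approaches, but never reaches, 1 / (1 − 0.95) = 20×.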
In the next post I’ll write about Concurrency (Scale-up) vs. Distributed Computing (Scale-out)…