January 09, 2010 [dmd-concurrency] draft 1
Posted in reply to Walter Bright

How would you briefly describe your mental model of caches?
Andrei
Walter Bright wrote:
> As usually, it all depends. => As usual, it all depends
>
> I'm not sure the discussion about parallel being replaced with serial has much relevance to concurrency. Also, I never understood the problems with concurrency until I constructed a mental model of how the memory caches work. So I suggest fewer words about parallel vs. serial and heat generation, and more about the memory caches.
>
> Andrei Alexandrescu wrote:
>> In the following days I'll send a number of chapter fragments for review. They are incomplete, but I expect the finished sections to be in reviewable form.
>>
>> I'd appreciate feedback (send it to this list). The first section is complete, the second is unfinished. Let me know!
>>
>>
>> Andrei

January 09, 2010 [dmd-concurrency] draft 1
Posted in reply to Andrei Alexandrescu

A lot like how virtual memory is implemented. Imagine each CPU has its own memory, separate from other CPUs and their memory. The only thing shared is the disk. A CPU runs inside its memory. Occasionally, that memory gets loaded from disk, and written to disk. When these reads and writes to disk happen is arbitrary and unknowable, hence the order in which other CPUs see my CPU's writes is indeterminate, if they see them at all.
A synchronization forces the CPU doing the sync to flush all its writes to disk and reload its memory from disk.
A read barrier forces your memory to be reloaded from disk before it is read. A write barrier forces your write to memory to be immediately written to disk. (Or perhaps I have that backwards.)
Substitute cache for memory, and memory for disk, and you have a useful mental model that explains the concurrency behavior.
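
To make the analogy concrete, below is a minimal D sketch of the publish-then-consume pattern the model describes. It uses today's core.atomic names (atomicLoad, atomicStore, MemoryOrder), which postdate this 2010 thread, and the thread setup is purely illustrative: the release store plays the role of the write barrier ("flush my writes to disk"), and the acquire load plays the role of the read barrier ("reload from disk before looking").

import core.atomic : atomicLoad, atomicStore, MemoryOrder;
import core.thread : Thread;

shared int  data;   // the payload another core's cache may lag behind
shared bool ready;  // publication flag

void writer()
{
    // Relaxed store: may sit in this CPU's "memory" (its cache) for a while.
    atomicStore!(MemoryOrder.raw)(data, 42);
    // Release store: the write barrier -- everything written above must be
    // visible before any other CPU can observe ready == true.
    atomicStore!(MemoryOrder.rel)(ready, true);
}

void reader()
{
    // Acquire load: the read barrier -- once ready is seen as true,
    // the writes that preceded the release store are visible here too.
    while (!atomicLoad!(MemoryOrder.acq)(ready))
        Thread.yield();
    assert(atomicLoad!(MemoryOrder.raw)(data) == 42);
}

void main()
{
    auto w = new Thread(&writer);
    auto r = new Thread(&reader);
    w.start(); r.start();
    w.join();  r.join();
}

Without the acquire/release pair, the reader could observe ready == true while still holding a stale cached copy of data, which is exactly the "writes to disk happen at arbitrary times" behavior described above.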
Andrei Alexandrescu wrote:
> How would you briefly describe your mental model of caches?
>
> Andrei