Thread overview

March 21, 2007 John Warner Backus
John Warner Backus died on March 17, 2007.

http://en.wikipedia.org/wiki/John_Backus
March 21, 2007 Re: John Warner Backus
Posted in reply to sclytrack

sclytrack wrote:
> John Warner Backus died on March 17, 2007.
>
> http://en.wikipedia.org/wiki/John_Backus

I'm actually kind of saddened by this. It's hard to see someone so influential in this field go.

FWIW, he had one heck of a sendoff over on slashdot and digg:

http://developers.slashdot.org/article.pl?sid=07/03/20/0223234
http://www.digg.com/programming/John_W_Backus_82_Fortran_Developer_Dies

--
- EricAnderton at yahoo
March 21, 2007 Re: John Warner Backus
Posted in reply to Pragma

Pragma wrote:
> sclytrack wrote:
>
>> John Warner Backus died on March 17, 2007.
>>
>> http://en.wikipedia.org/wiki/John_Backus
>
>
> I'm actually kind of saddened by this. It's hard to see someone so influential in this field go.
Aye
March 22, 2007 Re: John Warner Backus
Posted in reply to Pragma

Pragma wrote:
> FWIW, he had one heck of a sendoff over on slashdot and digg:
>
> http://developers.slashdot.org/article.pl?sid=07/03/20/0223234
> http://www.digg.com/programming/John_W_Backus_82_Fortran_Developer_Dies

Reading the slashdot freakshow, this PDF got me thinking a bit about where we (and D) are going...

http://www.st.cs.uni-sb.de/edu/seminare/2005/advanced-fp/docs/sweeny.pdf
March 22, 2007 Re: John Warner Backus
Posted in reply to Tomas Lindquist Olsen

Tomas Lindquist Olsen wrote:
> Pragma wrote:
>
>> FWIW, he had one heck of a sendoff over on slashdot and digg:
>>
>> http://developers.slashdot.org/article.pl?sid=07/03/20/0223234
>> http://www.digg.com/programming/John_W_Backus_82_Fortran_Developer_Dies
>
> Reading the slashdot freakshow, this PDF got me thinking a bit about where we (and D) are going...
>
> http://www.st.cs.uni-sb.de/edu/seminare/2005/advanced-fp/docs/sweeny.pdf

Thanks for the link. It's an interesting read. Sweeney says some really *odd* things about the status quo that make me wonder WTF the programmers on his team are doing. His comments on concurrency and musings on the next language are dead on, with (appropriate) shades of Backus thrown in:

"In a concurrent world, imperative is the wrong default."

"Transactions are the only plausible solution to concurrent mutable state."

D shines in a few of these areas, but needs library support for transactional memory, better concurrency support and constructs, something like a well-coded numerics library (true integers, etc.), and something resembling compile-time iterator/bounds checking to fit the bill. :(

--
- EricAnderton at yahoo
March 22, 2007 Re: John Warner Backus
Posted in reply to Pragma

Pragma wrote:
>
> "Transactions are the only plausible solution to concurrent mutable state."

I'm not sure I agree. Many of the most common transactional processes work much like mutexes. In SQL, for example, data affected by a transaction is locked (typically at row, page, or table granularity) in much the same way as obtaining locks on mutexes protecting data. Deadlocks are quite possible, and before the era of automatic deadlock resolution they froze the DB indefinitely.

The new concept of transactional memory turns this idea on its head by cloning affected data instead of locking it, and mutating the clones. Committing a transaction is therefore accomplished by comparing the original version of all affected data with the current version of the affected data, and if they match, the clones are substituted. If they don't match, however, the entire transaction is rolled back and retried. The result is that large transactions are slow and require an unbounded amount of memory (because of the cloning), and no guarantee of progress is provided, because success ultimately relies on a race condition.

That said, there have been proposals to add a transactional memory feature to hardware, and I think this is actually a good idea. The existing hardware-based solutions are typically limited to updating no more than 4-8 bytes of contiguous data, while transactional memory would allow for additional flexibility. I've seen implementations of lock-free binary trees based on this concept, and I'm not aware of anything comparable without it. Progress guarantees are less of an issue as well, because hardware-level transactions will typically be very small.

> D shines in a few of these areas, but needs library support for transactional memory, better concurrency support and constructs, something like a well-coded numerics library (true integers, etc.), and something resembling compile-time iterator/bounds checking to fit the bill. :(

I'd add something like CSP to the category of "better concurrency support." And I agree with the rest.

Sean
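[Editor's note: the commit/retry scheme Sean describes can be sketched in a few lines. This is a Python simulation, not a real STM: `TxVar` and `atomically` are invented names for illustration, and the short commit-time lock stands in for the atomic compare-and-swap a real runtime or hardware would use. The snapshot, the version comparison at commit, and the rollback-and-retry on conflict are the parts that mirror his description.]

```python
import threading

class TxVar:
    """A shared cell with a version counter, used for optimistic commits."""
    def __init__(self, value):
        self.value = value
        self.version = 0
        self.commit_lock = threading.Lock()  # held only at commit time

def atomically(var, update):
    """Retry loop: snapshot, compute on the copy, commit only if unchanged."""
    while True:
        # 1. Take a snapshot (the "clone" in the description above).
        seen_version = var.version
        local = update(var.value)
        # 2. Commit: substitute the clone only if no other thread raced ahead.
        with var.commit_lock:
            if var.version == seen_version:
                var.value = local
                var.version += 1
                return local
        # 3. Conflict: roll the transaction back and retry from scratch.

counter = TxVar(0)
threads = [
    threading.Thread(
        target=lambda: [atomically(counter, lambda v: v + 1) for _ in range(1000)]
    )
    for _ in range(4)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.value)  # 4000: every increment either commits or retries
```

Note how the sketch also exhibits the downsides Sean lists: a transaction that loses the version check does all its work again, and nothing bounds how many times it can lose.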
March 22, 2007 Re: John Warner Backus
Posted in reply to Sean Kelly

Sean Kelly wrote:
> Pragma wrote:
>>
>> "Transactions are the only plausible solution to concurrent mutable state."
>
> I'm not sure I agree. Many of the most common transactional processes work much like mutexes. In SQL, for example, data affected by a transaction is locked (typically at row, page, or table granularity) in much the same way as obtaining locks on mutexes protecting data. Deadlocks are quite possible, and before the era of automatic deadlock resolution they froze the DB indefinitely.
>
> The new concept of transactional memory turns this idea on its head by cloning affected data instead of locking it, and mutating the clones. Committing a transaction is therefore accomplished by comparing the original version of all affected data with the current version of the affected data, and if they match, the clones are substituted. If they don't match, however, the entire transaction is rolled back and retried. The result is that large transactions are slow and require an unbounded amount of memory (because of the cloning), and no guarantee of progress is provided, because success ultimately relies on a race condition.

I see what you mean. These things always seem so much more tranquil on the surface. It seems to me that the only positive trade-off is for highly parallelizable and/or long-running algorithms, which hardly solves anything.

> That said, there have been proposals to add a transactional memory feature to hardware, and I think this is actually a good idea. The existing hardware-based solutions are typically limited to updating no more than 4-8 bytes of contiguous data, while transactional memory would allow for additional flexibility. I've seen implementations of lock-free binary trees based on this concept, and I'm not aware of anything comparable without it. Progress guarantees are less of an issue as well, because hardware-level transactions will typically be very small.

Neat! Seeing as how the industry is moving towards more and more processor cores, I suppose it follows that we'll eventually see additional hardware support to make it less unwieldy as well. I'm eager to see stuff like this happen. It sounds like something D could adopt easily, provided there's a way to qualify these concepts in a way that doesn't make a person's head explode.

>> D shines in a few of these areas, but needs library support for transactional memory, better concurrency support and constructs, something like a well-coded numerics library (true integers, etc.), and something resembling compile-time iterator/bounds checking to fit the bill. :(
>
> I'd add something like CSP to the category of "better concurrency support." And I agree with the rest.
>
> Sean

--
- EricAnderton at yahoo
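[Editor's note: the "existing hardware-based solutions" Sean refers to are single-word atomic compare-and-swap (CAS) instructions, and the classic lock-free structure built on them is the Treiber stack. The sketch below simulates CAS in Python (which has no such primitive) with an internal lock, purely so the retry pattern is visible; on real hardware `compare_and_swap` is one instruction and no lock exists anywhere. `_SimCAS` and `TreiberStack` are names invented for this illustration.]

```python
import threading

class _SimCAS:
    """Simulated single-word compare-and-swap (one atomic instruction on real hardware)."""
    def __init__(self, value=None):
        self._value = value
        self._lock = threading.Lock()  # stands in for the hardware atomicity

    def load(self):
        return self._value

    def compare_and_swap(self, expected, new):
        """Atomically: if the word still holds `expected`, store `new`."""
        with self._lock:
            if self._value is expected:
                self._value = new
                return True
            return False

class Node:
    def __init__(self, item, next_node):
        self.item = item
        self.next = next_node

class TreiberStack:
    """Lock-free stack: each push/pop is a tiny 'transaction' on one word (the top pointer)."""
    def __init__(self):
        self._top = _SimCAS(None)

    def push(self, item):
        while True:  # retry until our CAS wins the race
            top = self._top.load()
            if self._top.compare_and_swap(top, Node(item, top)):
                return

    def pop(self):
        while True:
            top = self._top.load()
            if top is None:
                return None  # stack empty
            if self._top.compare_and_swap(top, top.next):
                return top.item

s = TreiberStack()
for i in range(3):
    s.push(i)
print(s.pop(), s.pop(), s.pop())  # 2 1 0
```

Because the whole data structure hinges on one CAS-able word, a stack fits today's hardware; a binary tree needs several locations updated atomically, which is exactly the gap a hardware transactional memory would fill. (A real lock-free implementation must also handle the ABA problem; garbage collection conveniently sidesteps it in this sketch.)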
Copyright © 1999-2021 by the D Language Foundation