July 15, 2014 Re: Proposal for design of 'scope' (Was: Re: Opportunities for D)
Posted in reply to Jacob Carlborg

On 07/15/2014 08:42 AM, Jacob Carlborg wrote:
> On 14/07/14 18:16, H. S. Teoh via Digitalmars-d wrote:
>
>> Mine is here:
>>
>> http://wiki.dlang.org/User:Quickfur/DIP_scope
>
> From the DIP:
>
> "The 'scope' keyword has been around for years, yet it is barely implemented and it's unclear just what it's supposed to mean"
>
> I don't know if it's worth clarifying, but "scope" currently has various features.
>
> 1. Allocate classes on the stack: "scope bar = new Bar()"
> 2. Forcing classes to be allocated on the stack: "scope class Bar {}"
> 3. The scope-statement: "scope(exit) file.close()"
> 4. Scope parameters. This is the part where it's unclear what it means/is
> supposed to mean in the current language.
>
Aren't both 1 and 2 deprecated?
July 15, 2014 Re: Proposal for design of 'scope' (Was: Re: Opportunities for D)
Posted in reply to simendsjo

On 15/07/14 08:46, simendsjo wrote:
> Aren't both 1 and 2 deprecated?

Depends on what you mean by "deprecated". People keep saying that, but it's not. Nothing, except for people saying so, indicates that: no deprecation message, no warning, nothing about it in the documentation. And even if/when it is deprecated, it's not unclear what it does.

-- /Jacob Carlborg
July 15, 2014 Re: Proposal for design of 'scope' (Was: Re: Opportunities for D)
Posted in reply to Jacob Carlborg

On Tuesday, 15 July 2014 at 06:42:20 UTC, Jacob Carlborg wrote:
> 1. Allocate classes on the stack: "scope bar = new Bar()"
> 4. Scope parameters. This is the part where it's unclear what it means/is supposed to mean in the current language.

These are actually the same thing: if something is stack allocated, it must not allow the reference to escape in order to remain memory safe... and if the reference is not allowed to escape, stack allocating the object becomes an obvious automatic optimization. People keep calling them deprecated, but they really aren't - the escape analysis to make them memory safe just isn't implemented.

> 2. Forcing classes to be allocated on the stack: "scope class Bar {}"

I think this is the same thing too, just on the class instead of the object, but I wouldn't really defend this feature, even if implemented correctly, since ALL classes really ought to be scope compatible if possible, to let the user decide on their lifetime.
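[For comparison, Go's compiler already does exactly the optimization Adam describes: escape analysis decides heap vs. stack automatically, with no keyword. A minimal sketch (the names are illustrative; the stack/heap decision is a compiler optimization you observe via `go build -gcflags=-m`, not in program output):

```go
package main

import "fmt"

type Bar struct{ x int }

// newBar returns a pointer to a local value, so the Bar must
// escape to the heap.
func newBar() *Bar { return &Bar{x: 1} }

// localBar never lets a reference escape, so the compiler is free
// to keep the Bar on the stack (check with: go build -gcflags=-m).
func localBar() int {
	b := Bar{x: 2}
	return b.x
}

func main() {
	fmt.Println(newBar().x + localBar()) // prints 3
}
```

This is the same trade: no escape means stack allocation is a safe, automatic optimization.]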
July 15, 2014 Re: Proposal for design of 'scope' (Was: Re: Opportunities for D)
Posted in reply to Adam D. Ruppe

On 15/07/14 14:47, Adam D. Ruppe wrote:
> These are actually the same thing: if something is stack allocated, it
> must not allow the reference to escape to remain memory safe... and if
> the reference is not allowed to escape, stack allocating the object
> becomes an obvious automatic optimization.
>
> People keep calling them deprecated but they really aren't - the escape
> analysis to make it memory safe just isn't implemented.

Yes, I agree.

> I think this is the same thing too, just on the class instead of the
> object, but I wouldn't really defend this feature, even if implemented
> correctly, since ALL classes really ought to be scope compatible if
> possible to let the user decide on their lifetime.

If a class is allocated on the stack, its destructor will be called (at least according to the spec). If you declare a class "scope", you know it will always be allocated on the stack and can take advantage of that. Even if all classes are "scope" compatible, some might _only_ be compatible with "scope".

-- /Jacob Carlborg
August 10, 2014 Re: Opportunities for D
Posted in reply to Walter Bright

I think Walter is exactly right with the first 7 points he lists in his starting post of this thread. Nullable types are nice, but don't get too distracted by them. The first 7 points are far more important. Go makes absolutely no effort to get rid of nil, and it is very successful despite this nil thing.

IMHO goroutines and channels are really the key. D might be a better C++, but languages need a use case to make people change. I don't see why D can't do for cloud computing and concurrent server-side software what Go is doing. Go's GC is also not that advanced, but it is precise, so 24/7 operation is not a problem. Making the D GC precise is more important than making it faster. Actually, you now get the strange situation that to make D a language for the cloud, a quick approach would be to make everything GC and let people have pointers as well, as in Go. Of course, that is no good approach for the long-run prospects of D. But letting all memory management be handled by the GC should remain easy in D. Otherwise, D will be for systems people only, as with Rust.

> Much of the froth about Go is dismissed by serious developers, but they nailed the goroutine thing. It's Go's killer feature.

I think so, too. Along with channels and channel selects to coordinate all those goroutines and exchange data between them. Without them, goroutines would be pointless except for doing things in parallel. I'm not sure you can do selects in the library with little lock contention, but I'm not an expert on this.

> Think of it from the perspective of attracting Erlang programmers, or Java/Scala programmers who use Akka.

Not wanting to be rude, but you don't stand a chance with that. Java has Hadoop, MongoDB, Hazelcast, Akka, Scala, Cassandra and MUCH more. No way you can beat all that. Hordes of average Java developers will be against you, because they know Java and nothing else and don't want to lose their status.

But Go also does not have these things. Its success is huge, though, and it seems mostly to be attributed to goroutines and channels. This made Go the "language for the cloud" (at least other people say so), which is what there is a need for now. Other than that, Go is drop-dead simple. You can start coding now and start your cloud software start-up now. There is nothing complicated you need to learn. D cannot compete with that (thank goodness it is also no minimalistic language like Go).

> Akka similarly uses its own lightweight threads, not heavyweight JVM threads.

Akka uses an approach like Apple's Grand Central Dispatch. As I understand it, so does vibe.d (using libevent): a small number of threads serves queues to which tasks are added. This works fine as long as those tasks are short runners. You can have 50,000 long runners in Go; as long as they aren't all active, the system stays responsive. You can't have 50,000 long runners in Akka, because they would block all the kernel threads that serve the task queues: the 50,001st long-running task would have to wait a long time until it is served. This is why Vert.x for Java has special worker threads, reserved for long runners (and you can't have many of them).
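[The "tens of thousands of parked goroutines" claim above is easy to demonstrate. A minimal sketch, using 10,000 to keep it quick (50,000 works the same way); these goroutines finish instantly once woken, so they only stand in for long runners, but they show that a parked goroutine costs a few KB of stack rather than an OS thread:

```go
package main

import (
	"fmt"
	"sync"
)

// fanOut parks n goroutines on a channel receive, then feeds them
// work and sums the doubled results. The runtime multiplexes all n
// goroutines onto a handful of OS threads.
func fanOut(n int) int {
	var wg sync.WaitGroup
	work := make(chan int)
	results := make(chan int, n)

	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			v := <-work // parked here until work arrives
			results <- v * 2
		}()
	}

	for i := 0; i < n; i++ {
		work <- i
	}
	wg.Wait()
	close(results)

	sum := 0
	for v := range results {
		sum += v
	}
	return sum
}

func main() {
	fmt.Println(fanOut(10000)) // 2*(0+1+...+9999) = 99990000
}
```

The equivalent with one OS thread per task would exhaust most systems long before 50,000.]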
August 10, 2014 Re: Opportunities for D
Posted in reply to Bienlein
On Sun, 2014-08-10 at 09:27 +0000, Bienlein via Digitalmars-d wrote:
[…]
> IMHO goroutines and channels are really the key. D might be a better C++. But languages need a use case to make people change.

From a marketing perspective, Go introduced goroutines (which are an implementation of a minor variant of CSP, more or less), and Rust introduces lots of things about memory management, references, etc. C and C++ have none of these. What does D bring to the field that is new today, so that it can be used as a marketing tool?

[…]
> But Go also does not have these things. Its success is huge, though, and it seems mostly to be attributed to goroutines and channels. This made Go the "language for the cloud" (at least other people say so), which is what there is a need for now. Other than that, Go is drop-dead simple. You can start coding now and start your cloud software start-up now. There is nothing complicated you need to learn. D cannot compete with that (thank goodness it is also no minimalistic language like Go).

The core point about Go is goroutines: it means you don't have to do all this event loop programming and continuations stuff à la Node, Vert.x, Vibe.d; you can use processes and channels, and the scheduling is handled at the OS level. No more shared memory stuff. OK, so all this event loop, asyncio stuff is hip and cool, but as soon as you have to do something that is not zero time wrt event arrival, it all gets messy and complicated. (An over-simplification, but true, especially in GUI programming.)

Go is otherwise a trimmed-down C and so trivial (which turns out to be a good thing), but it also has types, instances and extension methods, which are new and shiny and cool (even though they are neither new nor shiny). These new things capture hearts and minds and create new active communities. It is true that Go is a "walled garden" approach to software; the whole package and executable management system is introverted and excluding. But it creates a space in which people can work without distraction. Dub has the potential to do for D what Go's package system and import from DVCS repositories has done, and that is great. But it is no longer new. D is just a "me too" language in that respect.

> > Akka similarly uses its own lightweight threads, not heavyweight JVM threads.
>
> Akka uses an approach like Apple's Grand Central Dispatch. As I understand it, so does vibe.d (using libevent). A small number of threads serves queues to which tasks are added. This works fine as long as those tasks are short runners. You can have 50,000 long runners in Go. As long as they aren't all active, the system stays responsive. You can't have 50,000 long runners in Akka, because they would block all kernel threads that serve the task queues. The 50,001st long-running task will have to wait a long time until it is served. This is why they have special worker threads in Vert.x for Java: threads that are reserved for long runners (and you can't have many of them).

And Erlang. And GPars. And std.parallelism. It is the obviously sensible approach to management of multiple activities. D brings nothing new on this front.

What no native code language (other than C++ in Anthony Williams' Just::Thread Pro, in vestigial form) has is dataflow. This is going to be big in JVM-land fairly soon (well, actually it already is, but no one is talking about it much because of commercial vested interests). So if D got CSP, it would be "me too", but useful. If D got dataflow, it would be "D, the first language to support dataflow in native code systems". Now that could sell.

--
Russel.
=============================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder@ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder
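[The "channels instead of event loops" point above can be made concrete with a toy pipeline in Go. Each stage is plain blocking, sequential-looking code; the scheduling an event loop would otherwise push into callbacks is handled by the runtime (the stage names here are illustrative):

```go
package main

import "fmt"

// generate emits 1..n on its output channel, then closes it.
func generate(n int, out chan<- int) {
	for i := 1; i <= n; i++ {
		out <- i
	}
	close(out)
}

// square reads values, emits their squares, and propagates the close.
func square(in <-chan int, out chan<- int) {
	for v := range in {
		out <- v * v
	}
	close(out)
}

// sumSquares wires the stages into a pipeline and drains the result.
// No callbacks, no explicit event loop.
func sumSquares(n int) int {
	nums := make(chan int)
	squares := make(chan int)
	go generate(n, nums)
	go square(nums, squares)
	total := 0
	for v := range squares {
		total += v
	}
	return total
}

func main() {
	fmt.Println(sumSquares(5)) // 1+4+9+16+25 = 55
}
```

A stage that takes non-zero time simply blocks its own goroutine; the rest of the pipeline keeps flowing, which is exactly the messy case for callback-style code.]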
August 11, 2014 Re: Opportunities for D
Posted in reply to Russel Winder

On Sunday, 10 August 2014 at 10:00:45 UTC, Russel Winder via Digitalmars-d wrote:
> So if D got CSP, it would be me too but useful. If D got dataflow it
> would be "D the first language to support dataflow in native code
> systems". Now that could sell.
Yes, that would be cool, but what do you mean specifically by "dataflow"? Apparently it is used to describe everything from tuple spaces to DSP engines.
I think dataflow in combination with transactional memory (Haswell and newer CPUs) could be a killer feature.
(I agree that CSP would be too much me too unless you build everything around it.)
August 11, 2014 Re: Opportunities for D
Posted in reply to Ola Fosheim Grøstad
On Mon, 2014-08-11 at 11:02 +0000, via Digitalmars-d wrote:
[…]
> Yes, that would be cool, but what do you mean specifically by "dataflow"? Apparently it is used to describe everything from tuple spaces to DSP engines.

I guess it is true that tuple spaces can be dataflow systems, as indeed can Excel. DSP engines are almost all dataflow, exactly because signal processing is a dataflow problem. For me, software dataflow architecture is processes with input channels and output channels, where each process only computes on the receipt of data ready on some combination of its inputs. I guess my exemplar framework is GPars dataflow: http://www.gpars.org/1.0.0/guide/guide/dataflow.html

> I think dataflow in combination with transactional memory (Haswell and newer CPUs) could be a killer feature.

Václav Pech, myself and others have been discussing the role of STM but haven't really come to a conclusion. STM is definitely a great tool for virtual machine, framework and library developers, but it is not certain it is a useful general applications tool.

> (I agree that CSP would be too much me too unless you build everything around it.)

I disagree. Actors, dataflow and CSP are all different. It is true that each can be constructed from one of the others, but that leads to inefficiencies. It turns out to be better to implement all three separately, based on a lower-level set of primitives. Technically, a CSP implementation has proof obligations to be able to claim to be CSP. As far as I am aware, the only proven implementations are current JCSP and C++CSP2.

D has the tools needed, as shown by std.parallelism. If it could get actors, CSP and dataflow, then it would have something new to tell the world about, to be able to compete in the marketing stakes with Go and Rust.

--
Russel.
=============================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder@ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder
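[Russel's definition — a process that only fires once data is ready on each of its inputs — can be sketched in a few lines of Go channels. This is a toy, not GPars (whose DataflowVariable machinery is much richer), and it assumes both input streams have the same length:

```go
package main

import "fmt"

// adder is a dataflow node: it fires only once a value has arrived
// on *each* of its input channels, then emits the sum downstream.
// Assumes both inputs carry the same number of values.
func adder(a, b <-chan int, out chan<- int) {
	for {
		x, okA := <-a
		y, okB := <-b
		if !okA || !okB {
			close(out)
			return
		}
		out <- x + y
	}
}

// runAdder feeds the node two input streams and collects its output.
func runAdder(xs, ys []int) []int {
	a, b, out := make(chan int), make(chan int), make(chan int)
	go adder(a, b, out)
	go func() {
		for _, v := range xs {
			a <- v
		}
		close(a)
	}()
	go func() {
		for _, v := range ys {
			b <- v
		}
		close(b)
	}()
	var res []int
	for v := range out {
		res = append(res, v)
	}
	return res
}

func main() {
	fmt.Println(runAdder([]int{1, 10}, []int{2, 20})) // [3 30]
}
```

Larger graphs are just more such nodes wired together by channels; the "fire when all inputs are ready" rule is what distinguishes this from plain actors.]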
August 11, 2014 Re: Opportunities for D
Posted in reply to Russel Winder

On Monday, 11 August 2014 at 15:13:43 UTC, Russel Winder via Digitalmars-d wrote:
> For me, software dataflow architecture is processes with input channels
> and output channels where each process only computes on the receipt
> of data ready on some combination of its inputs.

Yes, but to get efficiency you need to make sure to take advantage of cache coherency…

> > I think dataflow in combination with transactional memory (Haswell and newer CPUs) could be a killer feature.
>
> Václav Pech, myself and others have been discussing the role of STM but
> haven't really come to a conclusion. STM is definitely a great tool for
> virtual machine, framework and library developers, but it is not certain
> it is a useful general applications tool.

Really? I would think that putting TM to good use would be difficult without knowing the access patterns, so it would be more useful for engine and application developers…? You essentially want to take advantage of a low probability of accessing the same cache lines within a transaction; otherwise it will revert to slow locking. So you need to minimize the probability of concurrent access.

> > (I agree that CSP would be too much me too unless you build everything around it.)
>
> I disagree. Actors, dataflow and CSP are all different. Although each
> can be constructed from one of the others, true, but it leads to
> inefficiencies. It turns out to be better to implement all three
> separately based on a lower-level set of primitives.

I am thinking more of the eco-system. If you try to support too many paradigms, you end up with many small islands, which makes building applications more challenging and source code more difficult to read. I think dataflow could be worked into the range-based paradigm that D libraries seem to follow.

C++ is a good example of the high eco-system cost of trying to support everything, but very little out of the box. You basically have to select one primary framework and then try to shoehorn other reusable components into that framework by ugly layers of glue…
August 15, 2014 Re: Opportunities for D
Posted in reply to Ola Fosheim Grøstad

On 11.08.2014 13:02, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang@gmail.com> wrote:
> I think dataflow in combination with transactional memory (Haswell and
> newer CPUs) could be a killer feature.

FYI: Intel TSX is not a thing anymore; it turned out to be buggy and is disabled by a microcode update now: http://techreport.com/news/26911/errata-prompts-intel-to-disable-tsx-in-haswell-early-broadwell-cpus

It seems even the upcoming Haswell-EP Xeons will have it disabled.

Cheers,
Daniel
Copyright © 1999-2021 by the D Language Foundation