February 06, 2014
On Thursday, 6 February 2014 at 21:17:24 UTC, Andrei Alexandrescu
wrote:
> On 2/6/14, 12:51 PM, Frustrated wrote:
>> On Thursday, 6 February 2014 at 20:24:55 UTC, Andrei Alexandrescu
>> wrote:
>>> On 2/6/14, 12:01 PM, Frustrated wrote:
>>>> See the other post about this. scopeDeallocation is meant
>>>> to simply signal that the scope of a has ended, but not
>>>> necessarily that there are no references to a.
>>>
>>> So that's a struct destructor.
>>>
>>> Andrei
>>
>> Well, except it hooks into the memory allocation strategy.
>>
>> I'm not saying that what I outlined above is perfect or the way
>> to go but just an idea ;)
>>
>> If new had some way to pass an "interface" that contained the
>> allocation strategy details and one had the notation like
>>
>> new!MyManualAllocator A;
>>
>> then I suppose A's destructor could call MyManualAllocator's
>> "scopeDeallocator" method and you wouldn't need an implicit call
>> there.
>
> What is MyManualAllocator - type or value?
>
> I should emphasize that obsessing over the syntax is counterproductive. Call a blessed function and call it a day.
>
>

It would be more of an abstract type: some special template
aggregate that the compiler accesses to get code to "hook" into
the memory management parts of the code the compiler needs,
while the specifics are delegated to the "user".

For example,

When the compiler is parsing code and comes across the new
keyword, what it does is hard-coded. If instead it delegated
what to do to code that somebody writes external to the
compiler, the design would be more flexible.

It is exactly analogous to interfaces and classes vs. non-OOP
programming. E.g., suppose we wanted a very generic compiler
where the programmer could add his own keywords. Instead of
hard-coding the "actions" of the keywords, we would delegate
responsibility to the user and provide hooks. When the parser
finds the keyword it simply calls code external to the compiler
instead of hard-coded internal code. Of course it can get
complicated real quick, but essentially that is what I am
talking about with memory management here: the compiler
delegates exactly what to do to external code but provides the
necessary hooks to deal with it properly.

One can argue that we already have the ability to do that by
overriding new and using destructors, but these are not general
enough: `new` is hard-coded and destruction is not generic
enough.

The only way I can describe it properly is that it would be
nice to plug and play specific memory management allocators into
the "compiler" so that almost anyone can achieve what they want.
Doing this requires more support from the compiler; as it is,
it uses a hard-coded version.
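
To make this concrete, here is a rough sketch of what calling a
"blessed function" instead of a hard-coded new could look like.
make and dispose are just names for the sketch (not a reference
to any particular library), MyManualAllocator is the name from
the post above; it is only meant to show the shape of the idea:

  import core.stdc.stdlib : malloc, free;
  import std.conv : emplace;

  struct MyManualAllocator {
      static void[] allocate(size_t n) {
          auto p = malloc(n);
          return p[0 .. n];
      }
      static void deallocate(void[] b) {
          free(b.ptr);
      }
  }

  // The "blessed function" that replaces a hard-coded `new`:
  T* make(T, Alloc, Args...)(auto ref Args args) {
      auto mem = Alloc.allocate(T.sizeof);
      return emplace!T(cast(T*) mem.ptr, args);
  }

  void dispose(T, Alloc)(T* p) {
      destroy(*p);  // run the destructor
      Alloc.deallocate((cast(void*) p)[0 .. T.sizeof]);
  }

  struct A { int x; }

  void main() {
      auto a = make!(A, MyManualAllocator)(42);
      scope(exit) dispose!(A, MyManualAllocator)(a);  // the "scope hook"
      assert(a.x == 42);
  }

With something like this the allocation strategy lives entirely
in user code; the language only has to agree on the shape of the
hooks.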

It is exactly analogous to this:

int x = 3; // <-- 3 is hard coded, not generic. Once compiled we
can't change it.

int x = file.read!int("settings.txt", 0); // <-- generic, x is
not fixed at compile time to a specific value. If we need to
change x we can do so.

Now apply the same logic to the compiler and memory management.
Right now D is in the first case (D playing the role of x and
the GC the role of 3) and we want to get to the second case (the
text file corresponding to a specific memory allocation method).
file.read is then what needs to be invented: the mechanism that
decouples the memory allocation scheme from the compiler's
dependence on it.

I have no solution to the above... just ideas that may lead to
other ideas and hopefully a feasible solution.
February 07, 2014
> It would be more of an abstract type: some special template
> aggregate that the compiler accesses to get code to "hook" into
> the memory management parts of the code the compiler needs,
> while the specifics are delegated to the "user".

It's an interesting idea to me.
If we mark a variable with a property that says how its memory is allocated, then we get a marked AST node, and based on that mark we can perform AST manipulation at compile time. So if we mark a variable with ARC, a refcount decrease will be injected at scope exit. If we mark a variable with GC, nothing will happen. We should allocate a separate memory area for the refcounts, so that an RC variable can remain a plain pointer. These are just thoughts, no real proposal here... this is just what's on my mind now.
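
Very roughly, something like this is what I imagine the compiler
could lower an ARC-marked variable to (just a sketch, RC is an
invented name, and a real implementation would keep the refcount
in a separate area so that the variable itself stays a plain
pointer):

  struct RC(T) {
      T* payload;
      size_t* refs;   // refcount stored separately from the payload

      this(T* p) {
          payload = p;
          refs = new size_t;
          *refs = 1;
      }

      this(this) {             // copy: refcount increase injected
          if (refs) ++*refs;
      }

      ~this() {                // scope exit: refcount decrease injected
          if (refs && --*refs == 0) {
              destroy(*payload);
              // real deallocation would go through whatever
              // allocator owns the payload
          }
      }
  }

  void main() {
      auto p = new int;
      *p = 42;
      auto a = RC!int(p);      // the "marked" variable
      {
          auto b = a;          // increase injected here
      }                        // decrease injected at scope exit
  }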
February 07, 2014
On Friday, 7 February 2014 at 13:36:12 UTC, Németh Péter wrote:
>
>> It would be more of an abstract type: some special template
>> aggregate that the compiler accesses to get code to "hook" into
>> the memory management parts of the code the compiler needs,
>> while the specifics are delegated to the "user".
>
> It's an interesting idea to me.
> If we mark a variable with a property that says how its memory is allocated, then we get a marked AST node, and based on that mark we can perform AST manipulation at compile time. So if we mark a variable with ARC, a refcount decrease will be injected at scope exit. If we mark a variable with GC, nothing will happen. We should allocate a separate memory area for the refcounts, so that an RC variable can remain a plain pointer. These are just thoughts, no real proposal here... this is just what's on my mind now.

Well, as long as the appropriate hooks are provided, whatever
strategy is plugged in defines how it deals with all of that.


Basically the idea is to export the memory management work to
external user code so it can be easily changed. In this case,
for the most part, the compiler will not know that reference
counting is being used (since it might not be), but scope hooks
(or callbacks) need to be provided wherever reference counting
would need them.
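
For example, the hooks might be nothing more than something like
this (pure sketch, the names are invented):

  // The compiler would emit a call to Strategy.onScopeExit(p)
  // wherever a scope owning p ends; each strategy decides what
  // that means.
  struct RefCounted {
      static void onScopeExit(void* p) {
          // decrement the refcount for p, free it when it hits zero
      }
  }

  struct GarbageCollected {
      static void onScopeExit(void* p) {
          // nothing to do: the collector will find p on its own
      }
  }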

February 08, 2014
On Fri, 07 Feb 2014 21:05:47 +0000,
"Frustrated" <c1514843@drdrb.com> wrote:

> On Friday, 7 February 2014 at 13:36:12 UTC, Németh Péter wrote:
> >
> >> It would be more of an abstract type: some special template
> >> aggregate that the compiler accesses to get code to "hook"
> >> into the memory management parts of the code the compiler
> >> needs, while the specifics are delegated to the "user".
> >
> > It's an interesting idea to me.
> > If we mark a variable with a property that says how its
> > memory is allocated, then we get a marked AST node, and based
> > on that mark we can perform AST manipulation at compile time.
> > So if we mark a variable with ARC, a refcount decrease will
> > be injected at scope exit. If we mark a variable with GC,
> > nothing will happen. We should allocate a separate memory
> > area for the refcounts, so that an RC variable can remain a
> > plain pointer. These are just thoughts, no real proposal
> > here... this is just what's on my mind now.
> 
> Well, as long as the appropriate hooks are provided, whatever strategy is plugged in defines how it deals with all of that.
> 
> 
> Basically the idea is to export the memory management work to external user code so it can be easily changed. In this case, for the most part, the compiler will not know that reference counting is being used (since it might not be), but scope hooks (or callbacks) need to be provided wherever reference counting would need them.

Can we just copy Rust already?

Do you realize that you'll still need to have type constructors for your allocation schemes?

  A foo() {
      A a = new!myAlloc A();
      return a;
  }

  void main() {
      A a = foo();
      // All we know is we have a reference to A.
      // We need a type ctor to know we need to deallocate it
      // with myAlloc.scopeDestructor.
  }

-- 
Marco

February 08, 2014
On Saturday, 8 February 2014 at 02:03:00 UTC, Marco Leise wrote:
> Can we just copy Rust already?

Assuming D was able to incorporate all of Rust's memory management features, what advantages would it offer over Rust apart from superior compile-time metaprogramming? Personally, I think working to improve the garbage collector could pay off in this regard: "Both Rust and D allow you to choose between using a garbage collector and using reference counting, but D's garbage collector is a precise concurrent generational moving compacting collector, whereas Rust's is a non-parallel non-moving conservative collector that's orders of magnitude slower and uses much more memory." Currently, even Go has a better GC than D.
February 08, 2014
On Saturday, 8 February 2014 at 05:43:44 UTC, logicchains wrote:
> apart from superior compile-time metaprogramming? Personally, I think working to improve the garbage collector could pay off in this regard: "Both Rust and D allow you to choose between using a garbage collector and using reference counting, but D's garbage collector is a precise concurrent generational moving compacting collector, whereas Rust's is a non-parallel non-moving conservative collector that's orders of magnitude slower and uses much more memory." Currently, even Go has a better GC than D.

Well, I think the main problem with D is that it is not a spec-driven language but an implementation-driven language, and that the implementation is based on compiler backends with very restrictive C semantics. D tries to bolt higher-level functionality on top of that without opening up for efficient implementation of such features.

For instance:

1. You could have GC-free lambdas if you require that they only call functions that can work with fixed-size stack frames (so you allocate stack frames off a thread-local heap where all allocation units are of a fixed size, say 1k) and that they avoid recursion (a rough sketch of such a frame pool follows after point 3).

2. You could avoid lots of temporary allocations and copies if you could control what stack frames look like, by modifying the parent frame.

3. You could have zero-overhead exception handling if you could get rid of the silly backwards-compatibility-driven design of DWARF:

- If you fully control code gen you can have dual return points at a fixed offset, so you can indicate an error by adding an offset to the return address on the stack before returning, or by just jumping. The penalty is low since predicted branches tend to be cheap.

- If you don't need stack frames (by not having GC) you can maintain frame pointers only for function bodies that contain try blocks, then jump straight to them on throw:

Throw becomes:

reg0 = exception_object
JMP *(framepointer+CATCHADDR)

Catch becomes:

switch(reg0.someid){
...
default:
   framepointer = *(framepointer+NEXTPTR)
   JMP *(framepointer+CATCHADDR)
}

This would make for very efficient exception handling with no more setup cost than you have today for stack frames, I believe.
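
Going back to point 1, here is a minimal sketch in D of the kind
of fixed-size frame pool I mean (names invented; the free list
below is kept deliberately simple and only stands in for whatever
would really back the pool):

  import core.stdc.stdlib : malloc;

  enum frameSize = 1024;   // 1 KiB allocation units, as suggested above

  struct FramePool {
      ubyte[][] freeList;

      ubyte[] acquire() {
          if (freeList.length) {
              auto f = freeList[$ - 1];
              freeList.length -= 1;
              return f;
          }
          auto mem = cast(ubyte*) malloc(frameSize);
          return mem[0 .. frameSize];
      }

      void release(ubyte[] f) {
          freeList ~= f;    // frames are reused, never freed piecemeal
      }
  }

  FramePool framePool;      // module-level variables are thread-local in D

  void main() {
      auto frame = framePool.acquire();
      scope(exit) framePool.release(frame);
      // a compiler could place a closure's captured variables inside
      // `frame` instead of allocating a GC closure environment
  }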

It isn't reasonable to require that you write a backend from scratch, but I think it is reasonable to have a language spec that does not tie you to an implementation that will never go beyond C. If that makes adapting existing backends slightly less efficient, then that should be ok. It is more important to have a language spec that allows future compilers to be highly efficient.

- If you want mandatory GC, ok, but then change the language spec to make it possible to write a powerful compiler.

- If you want 100% C++ parity/compatibility, great, but then make it just as easy (or easier) to write C++-style code without GC. Basically, you would have a language that consists primarily of syntactic sugar + some optimizing opportunities.

- If you want better performance than C. Great! But don't limit the language to C-semantics in the backend.

Make a decision for where you want to go at the language specification level. Forget about the current compiler/runtime. Where do you want to go?
February 08, 2014
"Ola Fosheim Grøstad" " wrote in message news:fbsmcitanisyitajkvcb@forum.dlang.org...

> Make a decision for where you want to go at the language specification level. Forget about the current compiler/runtime. Where do you want to go?

Something tells me you're not a compilers guy.

I want to go somewhere with an actual working compiler. 

February 08, 2014
On Saturday, 8 February 2014 at 15:57:17 UTC, Daniel Murphy wrote:
> I want to go somewhere with an actual working compiler.

Nothing I wrote prevents that. You just don't get optimal performance if you base it off a backend that is optimized for a different language.

It is more important to have a coherent language spec that is targeting a usage domain than an optimal compiler for a suboptimal language design that is shoehorned into something it is not fit for.
February 08, 2014
"Ola Fosheim Grøstad" " wrote in message news:frdycuzhjstlmgvgcjhy@forum.dlang.org...

> Nothing I wrote prevents that. You just don't get optimal performance if you base it off a backend that is optimized for a different language.

Not in theory, but it's like saying we shouldn't design the language based on the semantics of existing processors because it will lead to non-optimal performance on quantum computers. The fact that these tools/hardware/backends already exist is worth a huge amount in producing a language that is useful, even if not optimally optimal.

February 08, 2014
On Saturday, 8 February 2014 at 17:08:44 UTC, Daniel Murphy wrote:
> Not in theory, but it's like saying we shouldn't design the language based on semantics of existing processors, because it will lead to non-optimal performance on quantum computers.

No, my focus is on what can run efficiently on hardware that exists and is becoming prevalent in the next 3-5 years. I want a systems-language focus that allows you to write efficient servers, one that in the future will work well with transactional memory at the level-1 cache and other goodies that are coming. A language that makes it easy to write fast, robust, OS-free servers with fixed memory allocation that you can upload as VMs, so that you can have stable "game servers"/"web services" that are performant, stable, and cost-efficient.

No such language exists; a more focused D effort could take that niche and the future embedded space, I think. But you need to break away from C, C++, Java and C#.

IMHO C-semantics is stuck in the 1960s. x86 is to some extent geared towards making it easy to make C-semantics more efficient, but you can easily get better performance by going to the machine-code level in terms of structure. What is difficult today is specifying such structures, so people end up with more efficient C-semantics instead (like implementing an FSM in C rather than in a manner that is optimized for the hardware).

>  The fact these tools/hardware/backends already exist is worth a huge amount in producing a language that is useful, even if not optimally optimal.

Sure, but that should not drive the language spec. That way you will never get the upper hand; you will never become more attractive than the competition. You can usually create reasonably efficient proof-of-concept alternatives where the C backend isn't cutting it (like maintaining extra data structures on the side, or using DWARF for now while admitting that it is not your goal/standard!). I think the long-term goal should be to have a backend that provides stuff that the C crowd does not have. Maybe when D is implemented in D you can move in that direction.

D2 smells too much of "C++ with opinionated restrictions", with C#/Java stuff thrown in, without really making OS-level systems programming more attractive.

Unfortunately, Go and Rust are currently also in an opinionated state that makes them, at the moment, not useful for systems-level programming. And C/C++ is unsuitable unless you have a lot of resources. But Go and Rust are more focused on a particular usage scenario (web server / web browser), and their language design is thus focused. I think that is an advantage for them.

I think D could steal the VM/OS/embedded space with a very focused effort. C/C++ is ahead, but I believe D can surpass that if you go beyond C/C++ and forget about pleasing the Java/C# crowd.