April 08, 2013
On Monday, 8 April 2013 at 08:21:06 UTC, Manu wrote:
> On 8 April 2013 16:35, Paulo Pinto <pjmlp@progtools.org> wrote:
>
>> On Monday, 8 April 2013 at 03:13:00 UTC, Manu wrote:
>>
>>> On 7 April 2013 20:59, Paulo Pinto <pjmlp@progtools.org> wrote:
>>>
>>>> I am not giving up speed. It just happens that I have been coding since
>>>> 1986 and I am a polyglot programmer who started doing system programming
>>>> in the Pascal family of languages, before moving into C and C++ land.
>>>>
>>>> Except for some cases, it does not matter if you get an answer in 1s or
>>>> 2ms; however, most single-language C and C++ developers care about the
>>>> 2ms case even before starting to code, and that is what I don't approve
>>>> of.
>>>>
>>>>
>>> Bear in mind that most remaining C/C++ programmers are realtime
>>> programmers, and that 2ms is 12.5% of the ENTIRE AMOUNT OF TIME you have
>>> to run realtime software (at 60Hz, a frame is 16ms).
>>> If I chose not to care about 2ms just 8 times, I'd have no time left. I
>>> would cut off my left nut for 2ms most working days!
>>> I typically measure execution times in 10s of microseconds, if something
>>> measures in milliseconds it's a catastrophe that needs to be urgently
>>> addressed... and you're correct, as a C/C++ programmer, I DO design with
>>> consideration for sub-ms execution times before I write a single line of
>>> code.
>>> Consequently, I have seen the GC burn well into the ms on occasion, and as
>>> such, it is completely unacceptable in realtime software.
>>>
>>
>>
>> I do understand that. The thing is, having been coding since 1986, I
>> remember people complaining that C and Turbo Pascal were too slow: "let's
>> code everything in Assembly". Then C became alright, but C++ and Ada were
>> too slow; god forbid you call virtual methods or overloaded operators in
>> C++'s case.
>>
>
> The C++ state hasn't changed though. We still avoid virtual calls like the
> plague.
> One of my biggest design gripes with D, hands down, is that functions are
> virtual by default. I believe this is a critical mistake, and the biggest
> one in the language by far.


There I agree with you: I prefer the C++ and C# model, as it is better suited to languages with AOT compilation.

Virtual by default, in terms of implementation, is not an issue if the code is JITed, but with AOT compilation you need PGO to be able to inline virtual calls.
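
In C++ and C#, methods are non-virtual unless declared otherwise; in D it is the reverse, so an AOT compiler has to assume any class method may be overridden unless it is told so. A minimal sketch of opting out (class and method names are illustrative):

class Renderer
{
    void draw() { }          // virtual by default in D
    final void flush() { }   // `final` makes this a direct call
}

final class GpuRenderer : Renderer
{
    // a final class admits no further overrides, so calls made through
    // a GpuRenderer reference can be devirtualized without PGO
    override void draw() { }
}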

>
>> Afterwards the same discussion came around with JVM and .NET environments,
>> which, while making GC widespread, also had the sad side-effect of making
>> younger generations think that safe languages require a VM, when that is
>> not true.
>>
>
> I agree with this sad trend. D can help address this issue if it breaks
> free.

Even Go and Rust are a help in that direction, I would say.


>
>> Nowadays template-based code beats C; systems programming is moving to C++
>> in mainstream OSes, leaving C behind, while some security-conscious areas
>> are adopting Ada and SPARK.
>>
>
> I don't see a significant trend towards C++ in systems code? Where are you
> looking?

Mainly at the following examples.

Microsoft stating that C90 is good enough in their tooling, and that C++ is the way forward as the Windows systems programming language.

At BUILD 2012, in his presentation about Modern C++, Herb Sutter briefly mentioned that the kernel team is making the code C++ compliant. I can search for it in the videos, or maybe someone who was there can confirm it.

Oh, and the new Windows APIs since XP are mostly COM based, thus C++, because no sane person should try to use COM from C.

The Mac OS X driver subsystem uses a C++ subset.

Symbian and BeOS/Haiku are implemented in C++.

OS/400 is a mix of Assembly, Modula-2 and C++.

Both gcc and clang now use C++ as their implementation language.

Sometimes I think UNIX is what keeps C alive in a way.

> The main reason people are leaving C is because they've had quite enough of
> the inconvenience... 40 years is plenty, thank you!
> I think the main reason for the latency is that nothing compelling enough
> has stepped up to take the helm.
>
> Liberal use of templates only beats C where memory and bandwidth are
> unlimited. Sadly, most computers in the world these days are getting
> smaller, not bigger, so this is not a trend that should be followed.
> Binary size is, as always, a critical factor in performance (mainly
> relating to the size of the target's icache). Small isolated templates
> produce some great wins; over-application of templates results in crippling
> (and very hard to track/isolate) performance issues.
> These performance issues are virtually impossible to fight; they tend not
> to appear on profilers, since they're evenly distributed throughout the
> code, making the whole program uniformly slow, instead of producing
> hot-spots, which are much easier to combat.
> They also have the effect of tricking their authors
> into erroneously thinking that their code is performing really well, since
> the profilers show no visible hot spots. Truth is, they didn't bother
> writing a proper basis for comparison, and as such, they will happily
> continue to believe their program performs well, or even improves the
> situation (...most likely verified by testing a single template version of
> one function against a generalised one that was slower, and not factoring
> in the uniform slowness of the whole application they have introduced).
>
> I often fear that D promotes its powerful templates too much, and that
> junior programmers might go even more nuts than in C++. I foresee that
> strict discipline will be required in the future... :/


I agree there.

Since D makes meta-programming too easy when compared with C++, I think some examples are just too clever for average developers.
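
To make the size argument concrete, a small hedged sketch: every distinct instantiation of a D template stamps out a separate copy of the function body in the binary, and none of the copies shows up as a profiler hot spot.

T clamp(T)(T v, T lo, T hi)
{
    return v < lo ? lo : (v > hi ? hi : v);
}

void main()
{
    auto a = clamp(1, 0, 10);          // instantiates clamp!int
    auto b = clamp(1.0, 0.0, 10.0);    // instantiates clamp!double
    auto c = clamp(1.0f, 0.0f, 10.0f); // instantiates clamp!float
    // three near-identical function bodies land in the executable,
    // where a single non-template function would have produced one
}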


>
>> So for me, when someone makes claims about the speed benefits C and C++
>> currently have, I smile as I remember having this kind of discussion with
>> C playing the role of the too-slow language.
>
>
> C was mainly too slow due to the immaturity of compilers, and the fact that
> computers were not powerful enough, nor had enough resources, to perform
> decent optimisations.
> [...]

Yeah, the main issue was immature compilers.

Which is still true when targeting 8- and 16-bit processors, as they still
have a similar environment, I imagine.


> With a good suite of intrinsics available to express architecture-specific
> concepts outside the language, I haven't had any reason to write assembly
> for years; the compiler/optimiser produces perfect code (within the ABI,
> which sometimes has problems).
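
As an aside, D already exposes this style of intrinsic through core.simd. A minimal sketch, assuming an x86 target with SSE (the function name is illustrative):

import core.simd;

// four multiplies and four adds expressed directly on vector types;
// the optimiser lowers this to mulps/addps, no hand-written asm needed
float4 scaleAndAdd(float4 a, float4 b, float4 c)
{
    return a * b + c;
}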


I am cleaning up a toy compiler I did in my final year (1999), and I wanted to remove the libc dependency from the runtime, which is quite small anyway, only allowing for int, boolean and string IO.

After playing around for some hours writing Assembly from scratch, I decided to use the C compiler as a high-level assembler, disabling the dependency on the C runtime and talking directly to the kernel.
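
The same idea sketched in D, for consistency with the rest of the thread: with no C runtime, a write boils down to loading the syscall number and arguments and trapping into the kernel. This assumes 32-bit x86 Linux and DMD's inline assembler; the function name is made up.

void sysWrite(int fd, const(char)* buf, size_t len)
{
    asm
    {
        push EBX;     // EBX is callee-saved in the x86 ABI
        mov EAX, 4;   // SYS_write on 32-bit x86 Linux
        mov EBX, fd;  // file descriptor
        mov ECX, buf; // buffer address
        mov EDX, len; // byte count
        int 0x80;     // trap into the kernel
        pop EBX;
    }
}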

It is already good enough; the Assembly in the code generator is plenty to keep me busy.

>
> Also, 6502 and z80 processors don't lend themselves to generic workloads.
> It's hard to develop a good general ABI for those machines; you typically
> want the ABI to be application-specific... decent ABIs only started
> appearing for the 68000 line, which had enough registers to implement a
> reasonable one.
>
> In short, I don't think your point is entirely relevant. It's not the
> nature of C that was slow in those days; it's mainly the immaturity of the
> implementation, combined with the fact that the hardware did not yet
> support the concepts.
> So the point is fallacious: these days you basically can't get better
> performance by hand-writing x86 assembly. It will probably be worse.


I do lack real-life experience in the game and realtime areas, but sometimes
the complaints about new language features seem to be a matter of older
generations not wanting to learn the new ways.

But I have been proven wrong a few times, especially when I assume things
without proper field experience.

>
>
> Walter's claim is that D's inefficient GC is mitigated by the fact that D
>>> produces less garbage than other languages, and this is true to an extent.
>>> But given that is the case, to be reliable, it is of critical importance
>>> that:
>>> a) the programmer is aware of every allocation they are making; they
>>> can't be hidden inside benign-looking library calls like toUpperInPlace.
>>> b) all allocations should be deliberate.
>>> c) helpful messages/debugging features need to be available to track where
>>> allocations are coming from. standardised statistical output would be most
>>> helpful.
>>> d) alternatives need to be available for the functions that allocate by
>>> nature, or an option for user-supplied allocators, like STL, so one can
>>> allocate from a pool instead.
>>> e) D is not very good at reducing localised allocations to the stack;
>>> this needs some attention. (array initialisation is particularly
>>> dangerous)
>>> f) the GC could do with budgeting controls. I'd like to assign it 150us
>>> per 16ms, and it would defer excess workload to later frames.
>>>
>>
>>
>> No doubt D's GC needs to be improved, but I doubt making D a manually
>> memory-managed language will improve the language's adoption, given that
>> all new systems programming languages use either GC or reference counting
>> as the default memory management.
>>
>
> I don't advocate making D a manually managed language. I advocate making it
> a _possibility_. Tools need to be supplied, because it wastes a LOT of time
> trying to assert that your code (or a subset of your code, ie. a frame
> execution loop) is good.


Sorry about the confusion.

>
>
>> What you need is a way to do controlled allocations for the few cases
>> where there is no way around it, but this should be reserved for modules
>> with system code and not scattered everywhere.
>>
>>
>>>> Of course I think, given time, D compilers will be able to achieve
>>>> C++-like performance, even with GC or, who knows, a reference-counted
>>>> version.
>>>>
>>>> Nowadays the only place I do manual memory management is when writing
>>>> Assembly code.
>>>>
>>>>
>>> Apparently you don't write realtime software. I get so frustrated on this
>>> forum by how few people care about realtime software, or any architecture
>>> other than x86 (no offense to you personally, it's a general observation).
>>> Have you ever noticed how smooth and slick the iPhone UI feels? It runs
>>> at 60Hz and doesn't miss a beat. It wouldn't work in D.
>>> Video games can't stutter, audio/video processing can't stutter. ....
>>>
>>
>> I am well aware of that, and I actually follow the game industry quite
>> closely, it being my second interest after systems/distributed computing.
>> And I was an IGDA member for quite a few years.
>>
>> However I do see a lot of games being pushed out the door in Java and C#,
>> with local optimizations done in C and C++.
>>
>
>> Yeah, most of them are not AAA, but that does not make them any less enjoyable.
>>
>
> This is certainly a prevailing trend. The key reason for this is
> productivity, I think. Game devs are sick of C++. Like, REALLY sick of it.
> They just don't want to waste their time anymore.
> Swearing about C++ is a daily talk point. This is an industry basically
> screaming out for salvation, but you'll find no real consensus on where to
> go. People are basically dabbling at the moment.
> They are also led by the platform holders to some extent; MS has a lot of
> influence (holder of 2 major platforms) and they push C#.
>
> But yes, there is also, as you say, the move towards 'casual' games, where
> the performance requirements aren't really critical.
> In 'big games' though, it's still brutally competitive. If you don't raise
> the technology/performance bar, your competition will.
> D is remarkably close to offering salvation... this GC business is one of
> the final hurdles, I think.


This is what I see with most systems programming languages: the only ones that succeed in the long run are the ones pushed by the platform holders.

That is what got me dragged from Turbo Pascal/Delphi land into C and C++, as I wanted to use the default OS languages, even though I preferred the former ones.

>
>
>> I also had the pleasure of using the Native Oberon and AOS operating
>> systems back in the late 90's at the university: desktop operating systems
>> written in GC-based systems programming languages. Sure, you could do
>> manual memory management, but only via the SYSTEM pseudo-module.
>>
>> One of the applications was a video player; just the decoder was written
>> in Assembly.
>>
>> http://ignorethecode.net/blog/2009/04/22/oberon/
>>
>>
>> In the end the question is: what would a D version with just manual memory
>> management have as a compelling feature against C++1y and Ada, already
>> established languages with industry standards?
>>
>> Then again my lack of experience in the embedded world invalidates what I
>> think might be the right way.
>>
>
> C++11 is a joke. Too little, too late if you ask me.
> It barely addresses the problems it tries to tackle, and a lot of it is
> really lame library solutions. Also, C++ is too stuck. Bad language design
> that can never be changed.
> Its templates are a nightmare in particular, and it'll be stuck with
> headers forever. I doubt the compile times will ever be significantly
> improved.


I agree with you there, but the industry seems to be following along anyway.

>
> But again, I'm not actually advocating a D without the GC like others in
> this thread. I'm a realtime programmer, and I don't find the concepts
> incompatible; they just need tight control, and good debug/analysis tools.
> If I could timeslice the GC, limiting it to ~150us/frame, that would do the
> trick. I'd pay 1-2% of my frame time for the convenience it offers, for sure.
> I'd also rather it didn't stop the world. If it could collect on one thread
> while another thread was still churning data, that would really help the
> situation. Complex though...
> It helps that there are basically no runtime allocations in realtime
> software. This theoretically means the GC should have basically nothing to
> do! The state of the heap really shouldn't change from frame to frame, and
> surely that temporal consistency could be used to improve a good GC
> implementation? (Note: I know nothing about writing a GC)
> The main source of realtime allocations in D code is
> array concatenation, and about 95% of that, in my experience, is
> completely local and could be relaxed onto the stack! But D doesn't do this
> in most cases (to my constant frustration)... it allocates anyway, even
> though it can easily determine the allocation is localised.

Agreed.
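
A minimal sketch of the pattern you describe; the temporary below never escapes the function, yet the concatenation allocates from the GC heap on every call (names are illustrative):

int sumBoth(int[] a, int[] b)
{
    int[] tmp = a ~ b;  // GC allocation on every single call
    int total = 0;
    foreach (x; tmp)
        total += x;
    return total;
    // tmp is dead here; in principle it could have lived on the stack
}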

Thanks for the explanation; it is always quite interesting to read your counterarguments.

--
Paulo
April 08, 2013
On 2013-04-08 11:14, Manu wrote:

> ... bugger! :/
> Well I guess that function just needs to be amended to
> not-upper-case-ify those troublesome letters? Shame.

I haven't looked at the source code so I don't know if this is the actual problem. But theoretically this will be a problem for every function trying to do this.

In general I would not be particularly happy with a function that doesn't work properly. But since these cases are so few it might be ok.

-- 
/Jacob Carlborg
April 08, 2013
On 8 April 2013 19:26, Jacob Carlborg <doob@me.com> wrote:

> On 2013-04-08 10:31, Manu wrote:
>
>> D for embedded, like PROPER embedded (microcontrollers, or even
>> Raspberry Pi maybe?) is one area where most users would be happy to use
>> a custom druntime like the ones presented earlier in this thread, where
>> it's strategically limited in scope and designed not to allocate.
>> 'Really embedded' software tends not to care so much about portability.
>> A bigger problem is D's executable sizes, which are rather 'plump', to be
>> frank :P
>> Last time I tried to understand this, one main issue was objectfactory,
>> and the inability to strip out unused classinfo structures (and other
>> junk). Any unused data should be stripped, but D somehow finds reason to
>> keep it all. Also, template usage needs to be relaxed. Over-use of
>> templates really bloats the exe. But it's not insurmountable; D could be
>> used in 'proper embedded'.
>>
>
> I agree about the templates; Phobos is full of them. Heck, I created a D-Objective-C bridge that resulted in a 60MB GUI Hello World executable, full of template and virtual-method bloat.


Haha, yeah I remember discussing that with you some time back when we were
discussing iPhone.
Rather humorous ;)

I do wonder if there's room in D for built-in Obj-C compatibility;
extern(ObjC) ;)
OSX and iOS are not minor platforms by any measure. At least support for
the most common parts of the Obj-C calling convention. D doesn't offer full
C++ either, but what it does offer is very useful, and it's important that
it's there.
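
Purely as a hypothetical sketch of what such a binding might look like (no such feature exists at the time of writing; the extern(ObjC) syntax and the @selector attribute here are invented for illustration):

// hypothetical syntax, for illustration only; @selector is an invented
// attribute mapping a D method to an Obj-C selector
extern (ObjC)
interface NSString
{
    NSString init() @selector("init");
    size_t length() @selector("length");
}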


April 08, 2013
On 8 April 2013 19:31, Jacob Carlborg <doob@me.com> wrote:

> On 2013-04-08 10:56, Dicebot wrote:
>
>> Sure. Actually, executable size is an easy problem to solve considering
>> the custom druntime mentioned before. Most of the size of small executables comes from the statically linked, huge druntime. (A simple experiment: use the "-betterC" switch and compile a hello-world program linking only to the C stdlib. Same binary size as for the C analog.)
>>
>
> That's cheating. It's most likely due to the C standard library being dynamically linked. If you dynamically link with the D runtime and the standard library, you will get the same size for a Hello World in D as in C. Yes, I've tried this with Tango back in the D1 days.


I don't see how. I noticed that the ancillary data kept along with class
definitions and other stuff was quite significant, particularly when a
decent number of templates appear.
Dynamic linkage of the runtime can't affect that.


April 08, 2013
On 2013-04-08 11:17, Dicebot wrote:

> b) You are forced to make a function templated to mark it as @nogc. Bad.

It depends on how it's used. If it's enough to annotate a type, the function doesn't need to be templated. This should work just fine:


class Foo { }

struct ThreadSafe (T)
{
    T t;
}

ThreadSafe!(Foo) foo;

void process (ThreadSafe!(Foo) foo) { /* process foo */ }

-- 
/Jacob Carlborg
April 08, 2013
On Monday, 8 April 2013 at 09:31:03 UTC, Manu wrote:
> ... so where's your dconf talk then? You can have one of my slots, I'm very
> interested to hear all about it! ;)

Meh, I am more of a "poor student" type and can't really afford even a one-way plane ticket from Eastern Europe to the USA :( Latvia is Europe's India branch when it comes to a cheap programming workforce. I will be waiting for the videos from DConf.

I can provide any information you are interested in via e-mail, though; I'd like to see how my explorations survive meeting real requirements.
April 08, 2013
On Monday, 8 April 2013 at 09:31:46 UTC, Jacob Carlborg wrote:
> On 2013-04-08 10:56, Dicebot wrote:
>
>> Sure. Actually, executable size is an easy problem to solve considering
>> the custom druntime mentioned before. Most of the size of small
>> executables comes from the statically linked, huge druntime. (A simple
>> experiment: use the "-betterC" switch and compile a hello-world program
>> linking only to the C stdlib. Same binary size as for the C analog.)
>
> That's cheating. It's most likely due to the C standard library being dynamically linked. If you dynamically link with the D runtime and the standard library, you will get the same size for a Hello World in D as in C. Yes, I've tried this with Tango back in the D1 days.

Erm, how so? The same C library is dynamically linked for both the D and C programs, so I am comparing raw binary sizes honestly here (and they are the same).

If you mean that the size of druntime is not that relevant if you link it dynamically: an embedded application can often be the only program that runs on a given system (the "single executive" concept), so it makes no difference (actually, dynamic linking is not even possible in that case).
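
For reference, a minimal sketch of the experiment (assuming a reasonably recent dmd; with -betterC there is no druntime, so only the C library, which a C hello-world shares, remains):

// hello.d -- build with: dmd -betterC hello.d
import core.stdc.stdio : printf;

extern(C) int main()
{
    printf("hello\n"); // straight to the C library, no druntime involved
    return 0;
}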

April 08, 2013
On Monday, 8 April 2013 at 09:41:05 UTC, Jacob Carlborg wrote:
> On 2013-04-08 11:17, Dicebot wrote:
>
>> b) You are forced to make a function templated to mark it as @nogc. Bad.
>
> It depends on how it's used. If it's enough to annotate a type, the function doesn't need to be templated. This should work just fine:
>
>
> class Foo { }
>
> struct ThreadSafe (T)
> {
>     T t;
> }
>
> ThreadSafe!(Foo) foo;
>
> void process (ThreadSafe!(Foo) foo) { /* process foo */ }

Hm, so you propose to use something like Malloced!Data / Stack!Data instead of marking the whole function with @nogc? Interesting, I have never thought about this approach; it may be worth trying as a proof of concept.
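
A hedged sketch of what such a proof of concept might look like; Malloced is the hypothetical name from above, not an existing library type:

import core.stdc.stdlib : free, malloc;

struct Malloced(T)
{
    T* ptr;

    @disable this(this); // forbid copies so the single free below is safe

    this(T value)
    {
        ptr = cast(T*) malloc(T.sizeof);
        *ptr = value;
    }

    ~this()
    {
        if (ptr) free(ptr);
    }
}

// the allocation strategy is encoded in the parameter's type, so this
// function is known to be GC-free without any @nogc attribute
void process(ref Malloced!int data)
{
    *data.ptr += 1;
}

void main()
{
    auto data = Malloced!int(41);
    process(data);
}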
April 08, 2013
On 04/08/2013 07:55 AM, Paulo Pinto wrote:
> On Sunday, 7 April 2013 at 22:59:37 UTC, Peter Alexander wrote:
>> On Sunday, 7 April 2013 at 22:33:04 UTC, Paulo Pinto wrote:
>>> Am 08.04.2013 00:27, schrieb Timon Gehr:
>>>> Every time a variable is reassigned, its old value is destroyed.
>>>
>>> I do have a functional and logic programming background and still fail
>>> to see how that is manual memory management.
>>
>> Mutable state is essentially an optimisation that reuses the same
>> memory for a new value (as opposed to heap-allocating the new value).
>> In that sense, mutable state is manual memory management because you
>> have to manage what has access to that memory.
>
> If you as a developer don't explicitly call any language API to
> acquire/release resources, and it is actually done on your behalf by the
> runtime, it is not manual memory management.

a = b;
  ^- explicit "language API" call
April 08, 2013
On 8 April 2013 20:10, Dicebot <m.strashun@gmail.com> wrote:

> On Monday, 8 April 2013 at 09:41:05 UTC, Jacob Carlborg wrote:
>
>> On 2013-04-08 11:17, Dicebot wrote:
>>
>>> b) You are forced to make a function templated to mark it as @nogc. Bad.
>>>
>>
>> It depends on how it's used. If it's enough to annotate a type, the function doesn't need to be templated. This should work just fine:
>>
>> class Foo { }
>>
>> struct ThreadSafe (T)
>> {
>>     T t;
>> }
>>
>> ThreadSafe!(Foo) foo;
>>
>> void process (ThreadSafe!(Foo) foo) { /* process foo */ }
>>
>
> Hm, so you propose to use something like Malloced!Data / Stack!Data instead of marking the whole function with @nogc? Interesting, I have never thought about this approach; it may be worth trying as a proof of concept.
>

It's such a dirty hack though ;) ... it does not give the impression, or the confidence, that the language addresses the problem. Actually, quite the opposite. If I saw that, I'd be worried...