April 08, 2013 Re: Disable GC entirely
Posted in reply to Manu

On 2013-04-08 06:30, Manu wrote:

> I wonder if UDA's could be leveraged to implement this in a library?
> UDA's can not permute the type, so I guess it's impossible to implement
> something like @noalloc that behaves like @nothrow in a library...
> I wonder what it would take, it would be generally interesting to move
> some of the built-in attributes to UDA's if the system is rich enough to
> express it.
>
> As a side thought though, the information about whether a function can
> allocate could be known implicitly by the compiler if it chose to track
> that detail. I wonder if functions could gain a constant property so you
> can assert on that detail in your own code?
> ie:
>
> void myFunction()
> {
> // does stuff...
> }
>
> {
> // ...code that i expect not to allocate...
>
> static assert(!myFunction.canAllocate);
>
> myFunction();
> }
>
> This way, I know for sure my code is good, and if I modify the body of
> myFunction at some later time (or one of its sub-calls is modified), for
> instance, to make an allocating library call, then i'll know about it
> the moment I make the change.

Scott Meyers gave a talk about what he called red code/green code. It was supposed to statically enforce that green code cannot call red code; what counts as green is completely up to you, whether it's memory safe, thread safe, GC free or similar.

I don't remember the conclusion or what could actually be implemented like this, but here's the talk:

http://www.youtube.com/watch?v=Jfu9Kc1D-gQ

--
/Jacob Carlborg
April 08, 2013 Re: Disable GC entirely
Posted in reply to Jacob Carlborg

On Monday, 8 April 2013 at 07:35:59 UTC, Jacob Carlborg wrote:
> On 2013-04-08 06:30, Manu wrote:
> Scott Meyers had a talk about what he called red code/green code. It was supposed to statically enforce that green code cannot call red code. Then what is green code is completely up to you, if it's memory safe, thread safe, GC free or similar.
That kind of genericity would be just wonderful in some cases. For example, one could make sure at compile time that interrupt code does not call sleeping code when it comes to Linux kernel programming.
I wonder, however, if one could re-define the notions of green/red several times in a project. Maybe on a per-module basis?
April 08, 2013 Re: Disable GC entirely
Posted in reply to Manu

On Monday, 8 April 2013 at 04:30:56 UTC, Manu wrote:
> I wonder if UDA's could be leveraged to implement this in a library?
> UDA's can not permute the type, so I guess it's impossible to implement
> something like @noalloc that behaves like @nothrow in a library...
> I wonder what it would take, it would be generally interesting to move some
> of the built-in attributes to UDA's if the system is rich enough to express
> it.
Both the blessing and the curse of UDA's is that they are not part of the type, and thus not part of mangling. I think it is possible to provide a library implementation of @nogc for cases where all the source code is available, but for external libraries and separate compilation it becomes a matter of trust, which is hardly good.
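A minimal sketch of the trust-based marker approach described above. The @noalloc attribute and the callNoAlloc helper are hypothetical (not part of druntime or Phobos), std.traits.hasUDA is assumed to be available, and nothing actually verifies that a marked function avoids the GC, which is exactly the weakness being pointed out:

import std.traits : hasUDA;

struct noalloc {}  // hypothetical marker attribute

@noalloc int addOne(int x) { return x + 1; }

int join(string a, string b) { return cast(int) (a ~ b).length; }  // allocates via ~

// Refuses, at compile time, to call anything not explicitly marked @noalloc.
auto callNoAlloc(alias fn, Args...)(Args args)
{
    static assert(hasUDA!(fn, noalloc), fn.stringof ~ " is not marked @noalloc");
    return fn(args);
}

void main()
{
    auto ok = callNoAlloc!addOne(41);          // compiles
    // auto bad = callNoAlloc!join("a", "b");  // rejected by the static assert
}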
April 08, 2013 Re: Disable GC entirely
Posted in reply to Paulo Pinto

On Monday, 8 April 2013 at 06:35:27 UTC, Paulo Pinto wrote:
> I do understand that, the thing is that since I am coding in 1986, I remember people complaining that C and Turbo Pascal were too slow, lets code everything in Assembly. Then C became alright, but C++ and Ada were too slow, god forbid to call virtual methods or do any operator calls in C++'s case.
>
> Afterwards the same discussion came around with JVM and .NET environments, which while making GC widespread, also had the sad side-effect to make younger generations think that safe languages require a VM when that is not true.
>
> Nowadays template based code beats C, systems programming is moving to C++ in mainstream OS, leaving C behind, while some security conscious areas are adopting Ada and Spark.
>
> So for me when someone claims about the speed benefits of C and C++ currently have, I smile as I remember having this kind of discussions with C having the role of too slow language.
But the important question is: what has changed? Was it just a shift in programmer opinion (was C initially mislabeled as slow?), or was real progress in compiler optimization the game-changer? The same question applies to GCs and VMs.
It may be perfectly possible to design a GC that suits real-time needs and is fast enough (Manu has mentioned some of the requirements it would have to satisfy). But if embedded developers have to wait until a tool stack that advanced is produced before they can use D, that is pretty much the same as saying D is dead for embedded. Mythical "clever-enough compilers" are fine in theory, but the job needs to be done right now.
April 08, 2013 Re: Disable GC entirely

Posted in reply to Jacob Carlborg
On 8 April 2013 17:21, Jacob Carlborg <doob@me.com> wrote:

> On 2013-04-08 05:12, Manu wrote:
>
>> Bear in mind, most remaining C/C++ programmers are realtime programmers,
>> and that 2ms is 12.5% of the ENTIRE AMOUNT OF TIME that you have to run
>> realtime software.
>> If I chose not to care about 2ms only 8 times, I'll have no time left. I
>> would cut off my left nut for 2ms most working days!
>> I typically measure execution times in 10s of microseconds; if something
>> measures in milliseconds it's a catastrophe that needs to be urgently
>> addressed... and you're correct, as a C/C++ programmer, I DO design with
>> consideration for sub-ms execution times before I write a single line of
>> code.
>> Consequently, I have seen the GC burn well into the ms on occasion, and
>> as such, it is completely unacceptable in realtime software.
>>
>> The GC really needs to be addressed in terms of performance; it can't
>> stop the world for milliseconds at a time. I'd be happy to give it
>> ~150us every 16ms, but NOT 2ms every 200ms.
>> Alternatively, some urgency needs to be invested in tools to help
>> programmers track accidental GC allocations.
>
> An easy workaround is to remove the GC and when you use the GC you'll get
> linker errors. Not pretty but it could work.

Hehe, yeah I'm aware of these tricks. But I'm not really keen to be doing that. Like I said before, I'm not actually interested in eliminating the GC, I just want it to be usable. I like the concept of a GC, and I wish I could trust it. This requires me spending time using it and gathering experience, and perhaps making a noise about my pains here from time to time ;)

>> I cope with D in realtime software by carefully avoiding excess GC
>> usage, which, sadly, means basically avoiding the standard library at
>> all costs. People use concatenations all through the std lib, in the
>> strangest places, I just can't trust it at all anymore.
>> I found a weird one just a couple of days ago in the function
>> toUpperInPlace() (!! it allocates !!), but only when it encountered a
>> utf8 sequence, which means I didn't even notice while working in my
>> language! >_<
>> Imagine it, I would have gotten a bug like "game runs slow in russian",
>> and I would have been SOOOO "what the ****!?", while crunching to ship
>> the product...
>
> To address this particular case, without having looked at the code, you do
> know that it's possible that the length of a Unicode string changes when
> converting between upper and lower case for some languages. With that in
> mind, it might not be a good idea to have an in place version of
> toUpper/Lower at all.

I don't think that's actually true. Can you suggest such a character in any language? I think they take that sort of thing into careful consideration when designing the codepoints for a character set.
But if that is the case, then a function called toUpperInPlace is flawed by design, because it would be incapable of doing what it says it does. I'm not convinced that's true though.

>> d) alternatives need to be available for the functions that allocate by
>> nature, or an option for user-supplied allocators, like STL, so one can
>> allocate from a pool instead.
>
> Have you seen this, links at the bottom:
>
> http://3d.benjamin-thaut.de/?p=20

I hadn't. Interesting to note that I experience all the same critical issues listed at the bottom :)
Most of them seem quite fixable, it just needs some focused attention...

My biggest issue, not mentioned there, is that when the datasets get large, the collects take longer, and they are not synced with the game, leading to regular intermittent spikes that result in regularly lost frames. A stuttering framerate is the worst possible kind of performance problem.
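The accidental-allocation problem described here can at least be spot-checked at runtime. A minimal sketch, assuming a druntime new enough to expose core.memory.GC.stats and its allocatedInCurrentThread counter (neither existed when this thread was written), and noting that whether toUpperInPlace still allocates on the non-ASCII path depends on the Phobos version:

import core.memory : GC;
import std.stdio : writeln;
import std.uni : toUpperInPlace;

void main()
{
    char[] text = "привет".dup;   // non-ASCII input, the case described above

    const before = GC.stats().allocatedInCurrentThread;
    toUpperInPlace(text);         // the call under suspicion
    const after = GC.stats().allocatedInCurrentThread;

    writeln("bytes allocated by toUpperInPlace: ", after - before);
}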
April 08, 2013 Re: Disable GC entirely

Posted in reply to Jacob Carlborg
On 8 April 2013 17:35, Jacob Carlborg <doob@me.com> wrote:
> On 2013-04-08 06:30, Manu wrote:
>
>> I wonder if UDA's could be leveraged to implement this in a library?
>> UDA's can not permute the type, so I guess it's impossible to implement
>> something like @noalloc that behaves like @nothrow in a library...
>> I wonder what it would take, it would be generally interesting to move
>> some of the built-in attributes to UDA's if the system is rich enough to
>> express it.
>>
>> As a side thought though, the information about whether a function can
>> allocate could be known implicitly by the compiler if it chose to track
>> that detail. I wonder if functions could gain a constant property so you
>> can assert on that detail in your own code?
>> ie:
>>
>> void myFunction()
>> {
>> // does stuff...
>> }
>>
>>
>> {
>> // ...code that i expect not to allocate...
>>
>> static assert(!myFunction.canAllocate);
>>
>> myFunction();
>> }
>>
>> This way, I know for sure my code is good, and if I modify the body of myFunction at some later time (or one of its sub-calls is modified), for instance, to make an allocating library call, then i'll know about it the moment I make the change.
>>
>
> Scott Meyers had a talk about what he called red code/green code. It was supposed to statically enforce that green code cannot call red code. Then what is green code is completely up to you, if it's memory safe, thread safe, GC free or similar.
>
> I don't remember the conclusion and what could be implemented like this, but here's the talk:
>
> http://www.youtube.com/watch?v=Jfu9Kc1D-gQ
That sounds awesome. I'll schedule it for later on! :P
April 08, 2013 Re: Disable GC entirely

Posted in reply to Paulo Pinto
On 8 April 2013 16:35, Paulo Pinto <pjmlp@progtools.org> wrote:

> On Monday, 8 April 2013 at 03:13:00 UTC, Manu wrote:
>
>> On 7 April 2013 20:59, Paulo Pinto <pjmlp@progtools.org> wrote:
>>
>>> I am not giving up speed. It just happens that I have been coding since
>>> 1986 and I am a polyglot programmer that started doing system programming
>>> in the Pascal family of languages, before moving into C and C++ land.
>>>
>>> Except for some cases, it does not matter if you get an answer in 1s or
>>> 2ms, however most single language C and C++ developers care about the 2ms
>>> case even before starting to code, this is what I don't approve.
>>
>> Bear in mind, most remaining C/C++ programmers are realtime programmers,
>> and that 2ms is 12.5% of the ENTIRE AMOUNT OF TIME that you have to run
>> realtime software.
>> If I chose not to care about 2ms only 8 times, I'll have no time left. I
>> would cut off my left nut for 2ms most working days!
>> I typically measure execution times in 10s of microseconds; if something
>> measures in milliseconds it's a catastrophe that needs to be urgently
>> addressed... and you're correct, as a C/C++ programmer, I DO design with
>> consideration for sub-ms execution times before I write a single line of
>> code.
>> Consequently, I have seen the GC burn well into the ms on occasion, and as
>> such, it is completely unacceptable in realtime software.
>
> I do understand that, the thing is that since I am coding in 1986, I
> remember people complaining that C and Turbo Pascal were too slow, lets
> code everything in Assembly. Then C became alright, but C++ and Ada were
> too slow, god forbid to call virtual methods or do any operator calls in
> C++'s case.

The C++ state hasn't changed though. We still avoid virtual calls like the plague.
One of my biggest design gripes with D, hands down, is that functions are virtual by default. I believe this is a critical mistake, and the biggest one in the language by far.

> Afterwards the same discussion came around with JVM and .NET environments,
> which while making GC widespread, also had the sad side-effect to make
> younger generations think that safe languages require a VM when that is
> not true.

I agree with this sad trend. D can help address this issue if it breaks free.

> Nowadays template based code beats C, systems programming is moving to C++
> in mainstream OS, leaving C behind, while some security conscious areas
> are adopting Ada and Spark.

I don't see a significant trend towards C++ in systems code? Where are you looking?
The main reason people are leaving C is because they've had quite enough of the inconvenience... 40 years is plenty, thank you! I think the main problem for the latency is that nothing compelling enough really stepped in to take the helm.

Liberal use of templates only beats C where memory and bandwidth are unlimited. Sadly, most computers in the world these days are getting smaller, not bigger, so this is not a trend that should be followed. Binary size is, as always, a critical factor in performance (mainly relating to the size of the target's icache). Small isolated templates produce some great wins; over-application of templates results in crippling (and very hard to track/isolate) performance issues. These performance issues are virtually impossible to fight; they tend not to appear in profilers, since they're evenly distributed throughout the code, making the whole program uniformly slow instead of producing hot-spots, which are much easier to combat.
They also have the effect of tricking their authors into erroneously thinking that their code is performing really well, since the profilers show no visible hot spots. The truth is, they didn't bother writing a proper basis for comparison, and as such, they will happily continue to believe their program performs well, or even improves the situation (...most likely verified by testing a single template version of one function against a generalised one that was slower, and not factoring in the uniform slowness of the whole application they have introduced).
I often fear that D promotes its powerful templates too much, and that junior programmers might go even more nuts than in C++. I foresee that strict discipline will be required in the future... :/

> So for me when someone claims about the speed benefits of C and C++
> currently have, I smile as I remember having this kind of discussions with
> C having the role of too slow language.

C was mainly too slow due to the immaturity of compilers, and the fact that computers were not powerful enough, or didn't have enough resources, to perform decent optimisations. Back in those days I could disassemble basically anything and point at the compiler's mistakes. (Note, I was programming in the early 90's, so I imagine the situation was innumerably worse in the mid 80's.)
These days, I disassemble some code to check what the compiler did, and I'm usually surprised when I find a mistake. And when I do, I find it's usually MY mistake, and I tweak the C/C++ code to allow the compiler to do the proper job. With a good suite of intrinsics available to express architecture-specific concepts outside the language, I haven't had any reason to write assembly for years; the compiler/optimiser produce perfect code (within the ABI, which sometimes has problems).
Also, 6502 and z80 processors don't lend themselves to generic workloads. It's hard to develop a good general ABI for those machines; you typically want the ABI to be application specific... decent ABI's only started appearing for the 68000 line, which had enough registers to implement a reasonable one.
In short, I don't think your point is entirely relevant. It's not the nature of C that was slow in those days, it's mainly the immaturity of the implementation, combined with the fact that the hardware did not yet support the concepts. So the point is fallacious; you basically can't get better performance if you hand-write x86 assembly these days. It will probably be worse.

>> Walter's claim is that D's inefficient GC is mitigated by the fact that D
>> produces less garbage than other languages, and this is true to an extent.
>> But given that is the case, to be reliable, it is of critical importance
>> that:
>> a) the programmer is aware of every allocation they are making, they can't
>> be hidden inside benign looking library calls like toUpperInPlace.
>> b) all allocations should be deliberate.
>> c) helpful messages/debugging features need to be available to track where
>> allocations are coming from. standardised statistical output would be most
>> helpful.
>> d) alternatives need to be available for the functions that allocate by
>> nature, or an option for user-supplied allocators, like STL, so one can
>> allocate from a pool instead.
>> e) D is not very good at reducing localised allocations to the stack, this
>> needs some attention. (array initialisation is particularly dangerous)
>> f) the GC could do with budgeting controls. I'd like to assign it 150us per
>> 16ms, and it would defer excess workload to later frames.
>
> No doubt D's GC needs to be improved, but I doubt making D a manual memory
> managed language will improve the language's adoption, given that all new
> system programming languages either use GC or reference counting as
> default memory management.

I don't advocate making D a manually managed language. I advocate making it a _possibility_. Tools need to be supplied, because it wastes a LOT of time trying to assert that your code (or subsets of your code, ie, a frame execution loop) is good.

> What you need is a way to do controlled allocations for the few cases that
> there is no way around it, but this should be reserved for modules with
> system code and not scattered everywhere.
>
>>> Of course I think given time D compilers will be able to achieve C++ like
>>> performance, even with GC or who knows, a reference counted version.
>>>
>>> Nowadays the only place I do manual memory management is when writing
>>> Assembly code.
>>
>> Apparently you don't write realtime software. I get so frustrated on this
>> forum by how few people care about realtime software, or any architecture
>> other than x86 (no offense to you personally, it's a general observation).
>> Have you ever noticed how smooth and slick the iPhone UI feels? It runs at
>> 60hz and doesn't miss a beat. It wouldn't work in D.
>> Video games can't stutter, audio/video processing can't stutter.
>
> I am well aware of that and actually I do follow the game industry quite
> closely, being my second interest after systems/distributed computing. And
> I used to be an IGDA member for quite a few years.
>
> However I do see a lot of games being pushed out the door in Java, C# with
> local optimizations done in C and C++.
>
> Yeah, most of them are not AAA, but that does not make them less enjoyable.

This is certainly a prevailing trend. The key reason for this is productivity, I think. Game devs are sick of C++. Like, REALLY sick of it. They just don't want to waste their time anymore. Swearing about C++ is a daily talk point. This is an industry basically screaming out for salvation, but you'll find no real consensus on where to go. People are basically dabbling at the moment.
They are also led by the platform holders to some extent; MS has a lot of influence (holder of 2 major platforms) and they push C#.
But yes, as you say, there is also the move towards 'casual' games, where the performance requirements aren't really critical. In 'big games' though, it's still brutally competitive. If you don't raise the technology/performance bar, your competition will.
D is remarkably close to offering salvation... this GC business is one of the final hurdles, I think.

> I also had the pleasure of being able to use the Native Oberon and AOS
> operating systems back in the late 90's at the university, desktop
> operating systems done in GC systems programming languages. Sure you could
> do manual memory management, but only via the SYSTEM pseudo module.
>
> One of the applications was a video player, just the decoder was written
> in Assembly.
>
> http://ignorethecode.net/blog/2009/04/22/oberon/
>
> In the end the question is what would a D version just with manual memory
> management have as a compelling feature against C++1y and Ada, already
> established languages with industry standards?
>
> Then again my lack of experience in the embedded world invalidates what I
> think might be the right way.

C++11 is a joke. Too little, too late if you ask me.
It barely addresses the problems it tries to tackle, and a lot of it is really lame library solutions. Also, C++ is too stuck. Bad language design that can never be changed. Its templates are a nightmare in particular, and it'll be stuck with headers forever. I doubt the compile times will ever be significantly improved.

But again, I'm not actually advocating a D without the GC like others in this thread. I'm a realtime programmer, and I don't find the concepts incompatible; they just need tight control, and good debug/analysis tools.
If I can timeslice the GC and limit it to ~150us/frame, that would do the trick. I'd pay 1-2% of my frame time for the convenience it offers, for sure.
I'd also rather it didn't stop the world. If it could collect on one thread while another thread was still churning data, that would really help the situation. Complex though...
It helps that there are basically no runtime allocations in realtime software. This theoretically means the GC should have basically nothing to do! The state of the heap really shouldn't change from frame to frame, and surely that temporal consistency could be used to improve a good GC implementation? (Note: I know nothing about writing a GC.)
The main source of realtime allocations in D code comes from array concatenation, and about 95% of that, in my experience, is completely local and could be relaxed onto the stack! But D doesn't do this in most cases (to my constant frustration)... it allocates anyway, even though it can easily determine the allocation is localised.
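On that last point about concatenation, one workaround is to format into a caller-supplied buffer instead of concatenating. A minimal sketch using std.format.sformat, which writes into the buffer and performs no GC allocation on the success path; the frameLabel helper itself is hypothetical:

import std.format : sformat;

// Hypothetical helper: builds a label without touching the GC heap, unlike
// concatenation ("frame " ~ to!string(frame)), which allocates on every call.
char[] frameLabel(char[] buf, int frame)
{
    return sformat(buf, "frame %d", frame);  // returns the used slice of buf
}

void main()
{
    char[32] buf;
    auto label = frameLabel(buf[], 42);
    assert(label == "frame 42");
}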
April 08, 2013 Re: Disable GC entirely
Posted in reply to Adrian Mercieca

I just re-read the "Doom3 Source Code Review" by Fabien Sanglard (http://fabiensanglard.net/doom3/), and apparently they don't use the Standard C++ library:

"The engine does not use the Standard C++ Library: All containers (map, linked list...) are re-implemented, but libc is extensively used."

I certainly feel that there is room for improvement, like optimizing the GC, defining a GC-free subset of Phobos, etc. But it seems like if you're writing really performance-critical realtime software, most likely you have to implement everything bottom-up to get that level of control. Secondly, it seems like it's most often cheaper to just throw faster hardware at a problem.

"You can do tricks to address any one of them; but I pretty strongly believe that with all of these things that are troublesome in graphics, rather than throwing really complex algorithms at them, they will eventually fall to raw processing power." (http://fabiensanglard.net/doom3/interviews.php)

My 2p.
April 08, 2013 Re: Disable GC entirely

Posted in reply to Dicebot
On 8 April 2013 17:59, Dicebot <m.strashun@gmail.com> wrote:
> On Monday, 8 April 2013 at 06:35:27 UTC, Paulo Pinto wrote:
>
>> I do understand that, the thing is that since I am coding in 1986, I remember people complaining that C and Turbo Pascal were too slow, lets code everything in Assembly. Then C became alright, but C++ and Ada were too slow, god forbid to call virtual methods or do any operator calls in C++'s case.
>>
>> Afterwards the same discussion came around with JVM and .NET
>> environments, which while making GC widespread, also had the sad
>> side-effect to make younger generations think that safe languages require a
>> VM when that is not true.
>>
>> Nowadays template based code beats C, systems programming is moving to
>> C++ in mainstream OS, leaving C behind, while some security conscious areas
>> are adopting Ada and Spark.
>>
>> So for me when someone claims about the speed benefits of C and C++ currently have, I smile as I remember having this kind of discussions with C having the role of too slow language.
>>
>
> But important question is "what has changed?". Was it just shift in programmer opinion and they initially mislabeled C code as slow or progress in compiler optimizations was real game-breaker? Same for GC's and VM's.
>
> It may be perfectly possible to design GC that suits real-time needs and is fast enough (well, Manu has mentioned some of requirements it needs to satisfy). But if embedded developers need to wait until tool stack that advanced is produced for D to use it - it is pretty much same as saying that D is dead for embedded. Mythical "clever-enough compilers" are good in theory but job needs to be done right now.
>
D for embedded, like PROPER embedded (microcontrollers, or even raspberry
pi maybe?) is one area where most users would be happy to use a custom
druntime like the ones presented earlier in this thread where it's
strategically limited in scope and designed not to allocate. 'Really
embedded' software tends not to care so much about portability.
A bigger problem is D's executable size, which is rather 'plump' to be
frank :P
Last time I tried to understand this, one main issue was objectfactory, and
the inability to strip out unused classinfo structures (and other junk).
Any unused data should be stripped, but D somehow finds reason to keep it
all. Also, template usage needs to be relaxed. Over-use of templates really
bloats the exe. But it's not insurmountable, D could be used in 'proper
embedded'.
For 'practically embedded', like phones/games consoles, the EXE size is
still an issue, but we mainly care about performance. Shrink the EXE,
improve the GC.
There are no other showstoppers I'm aware of. D offers you as much control as
C++ over the rest of your performance considerations; I think they can be
addressed by the programmer.
That said, I'd still KILL for __forceinline! ;) ;)
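For what it's worth, later compiler releases grew a rough equivalent of __forceinline in pragma(inline, true). A tiny sketch, with the caveats that it did not exist when this was written and that compilers may reject functions they cannot inline:

pragma(inline, true)  // ask the compiler to always inline this function
int lerp(int a, int b, int t)
{
    return a + (b - a) * t / 256;
}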
April 08, 2013 Re: Disable GC entirely
Posted in reply to Manu

On Monday, 8 April 2013 at 08:31:29 UTC, Manu wrote:

> D for embedded, like PROPER embedded (microcontrollers, or even raspberry
> pi maybe?) is one area where most users would be happy to use a custom
> druntime like the ones presented earlier in this thread where it's
> strategically limited in scope and designed not to allocate.

Yes, this is one of the important steps in the solution, and some good work has already been done on the topic. The main issue is that it won't be at all convenient unless the second step is done too: making the core language/compiler more friendly to embedded needs, so that you can both implement a custom druntime AND have a solid language. The ability to track/prohibit GC allocations is one part of this. Static array literals are another. Most likely you'll also need to disable RTTI, like it is done in the C++/embedded projects I have seen so far. I have done quite a bit of research on this topic and have a lot to say here :)

> 'Really embedded' software tends not to care so much about portability.
> A bigger problem is D's executable size, which is rather 'plump' to be
> frank :P
> Last time I tried to understand this, one main issue was objectfactory, and
> the inability to strip out unused classinfo structures (and other junk).
> Any unused data should be stripped, but D somehow finds reason to keep it
> all. Also, template usage needs to be relaxed. Over-use of templates really
> bloats the exe. But it's not insurmountable, D could be used in 'proper
> embedded'.

Sure. Actually, executable size is an easy problem to solve given the custom druntime mentioned before. Most of the size of small executables comes from the statically linked, huge druntime. (Simple experiment: use the "-betterC" switch and compile a hello-world program linking only to the C stdlib. Same binary size as the C analog.) Once you have defined a more restrictive language subset and implemented a minimal druntime for it, executable sizes will get better.

Templates are not an issue on their own, but the D front-end is very careless about emitting template symbols (see my recent thread on the topic). Most of them are weak symbols, but hitting certain cases/bugs may bloat the executable without you even noticing.

None of these issues is an unsolvable show-stopper. But there does not seem to be much interest in working in this direction from the current dmd developers (I'd be glad to be mistaken), and the dmd source code sets a rather high entry barrier. You see, game developers are not the only ones with real-time requirements who are freaking tired of working with 40-year obsolete languages :)

I am very interested in this topic. Looking forward to watching your DConf presentation recording about the tricks used to adapt D to a game engine, by the way.
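The "-betterC" experiment mentioned above looks roughly like this. A minimal sketch assuming a compiler that accepts the flag (its capabilities have changed considerably since 2013); build with dmd -betterC hello.d:

// hello.d: no druntime, no GC, no Phobos; only the C standard library.
import core.stdc.stdio : printf;

extern(C) int main()
{
    printf("hello from -betterC\n");
    return 0;
}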