On 13 April 2013 00:56, Dmitry Olshansky <dmitry.olsh@gmail.com> wrote:
12-Apr-2013 18:40, Manu wrote:

It measures in the megabytes on PCs. I'm used to working on machines
with ~32k of stack; I use it aggressively, and I don't tend to run out.

Just a moment ago you were arguing for some quite different platform :)

Hey? I think 32k is pretty small, and I find it comfortable enough. I'm just saying that most people who are only concerned with PCs have nothing to worry about.

Regardless - think fibers on, say, servers. These get no more than around 64K of stack. In fact, current D fibers have something like 16K or 32K.
There was even a report that writeln triggered a stack overflow with these, so the size was extended a bit.

If it was overflowing 64k, then a function that allocates 1k for some string processing is not the one taking the significant share.

In general, simple non-GUI threads on Windows get 64K by default (IIRC).

And that's heaps.

    In the end, one library thinks it's fine to burn, say, 32K of stack
    with alloca, but then it calls into another one that burns another
    chunk with alloca, and so on until it happily stack overflows
    (sometimes, on some systems, only at just the right time!). The call
    graph needed to calculate this is only available for the final app.

    Maybe just adding a separate thread-local growable stack for data
    would work - at least it wouldn't depend on sheer luck and
    particular OS settings.
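A thread-local growable data stack along those lines could look roughly like this (a minimal sketch in C; all names are hypothetical, and a real implementation would need alignment guarantees and better error handling):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical thread-local data stack: grows on demand via realloc,
   so its size doesn't depend on the OS stack limit or on luck.
   Note: earlier pointers are invalidated whenever the buffer grows. */
typedef struct {
    char  *base;
    size_t cap;
    size_t top;
} DataStack;

static _Thread_local DataStack ds = {0, 0, 0};

void *ds_push(size_t n) {
    if (ds.top + n > ds.cap) {
        size_t newcap = ds.cap ? ds.cap * 2 : 4096;
        while (newcap < ds.top + n)
            newcap *= 2;
        char *p = realloc(ds.base, newcap);
        if (!p)
            return NULL;            /* allocation failure: caller decides */
        ds.base = p;
        ds.cap  = newcap;
    }
    void *slot = ds.base + ds.top;
    ds.top += n;
    return slot;
}

void ds_pop(size_t n) {
    ds.top -= n;   /* strict LIFO discipline: pop exactly what you pushed */
}
```

Unlike alloca, exhaustion here is a recoverable NULL rather than a stack overflow, at the cost of having to pop manually.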


If you're saying the stack is a limited resource, therefore it's unsafe
to use it, then you might as well argue that calling any function is an
unsafe process.

s/limited/unpredictably limited/

"calling any function is an unsafe process" - indeed, in principle you don't know how much stack any given function uses unless you measure it or analyze it otherwise.

It's awful that on a 32-bit system you can't expect the stack to be arbitrarily large (growing as far as you actually use it), because threads quickly use up all of virtual memory. On 64-bit, something to that effect is achievable.

Just apply some common sense to your stack usage. Don't allocate hundreds of kb (or even 10s of kb) at a time.
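That "common sense" rule is usually expressed as the small-buffer-with-heap-fallback pattern: a fixed stack buffer for the common case, malloc for the rare large one. A sketch in C (the 1k budget is just the figure from this discussion, and the function is illustrative):

```c
#include <assert.h>
#include <ctype.h>
#include <stdlib.h>
#include <string.h>

enum { STACK_BUDGET = 1024 };  /* burn at most ~1k of stack per call */

/* Copies s into out, uppercased. Uses a stack buffer for typical short
   strings and falls back to the heap for long ones. Returns 1 if the
   heap fallback was taken, 0 if not, -1 on allocation failure. */
int upcase_copy(const char *s, char *out) {
    size_t n = strlen(s) + 1;
    char small[STACK_BUDGET];
    char *buf = (n <= sizeof small) ? small : malloc(n);
    if (!buf)
        return -1;
    int heap = (buf != small);
    for (size_t i = 0; i < n; i++)
        buf[i] = (char)toupper((unsigned char)s[i]);
    memcpy(out, buf, n);
    if (heap)
        free(buf);
    return heap;
}
```

The stack cost per call is bounded by STACK_BUDGET regardless of input size, so nested calls through several libraries stay predictable.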

Some common sense is required.

Exactly, except that inside a library there is no knowledge from which to derive a measure of "common sense". It can't tell how somebody intends to use it - especially the standard library.

If the function cannot be called recursively, then you can assume a reasonable amount of memory is available. If it can be, or is recursive by design, then consider the design of the function more carefully; maybe there are other solutions.

> I wouldn't personally burn more than 1k
> in a single function, unless I knew it was close to a leaf by design
> (which many library calls are).

And you trust that nobody will build a ton of wrappers on top of your function (even a close-to-leaf one)? Come on, "they" all do it.

A ton of wrappers in phobos? If it's their own program and they write code like that, they're probably using the heap.

90% of what we're dealing with here are strings, and they tend to
measure in the 10s of bytes.
???
Not at all; paths can easily get longer than 256 characters, much to legacy apps' chagrin.

And they fall back to the heap. No problem.
If I care about avoiding heap usage, I can know to avoid such deep paths. I.e., it can be controlled by the user.