Thread overview
Advice requested for fixing issue 17914
Oct 23, 2017  Brian Schott
Oct 24, 2017  Kagamin
Oct 24, 2017  Brian Schott
Oct 24, 2017  qznc
Oct 25, 2017  Brian Schott
Oct 25, 2017  safety0ff
Oct 25, 2017  Jonathan M Davis
Oct 25, 2017  Nemanja Boric
Oct 25, 2017  Nemanja Boric
Oct 25, 2017  Nemanja Boric
Oct 25, 2017  Nemanja Boric
Oct 25, 2017  Nemanja Boric
Oct 25, 2017  Nemanja Boric
Oct 25, 2017  Nemanja Boric
October 23, 2017
Context: https://issues.dlang.org/show_bug.cgi?id=17914

I need to get this issue resolved as soon as possible so that the fix makes it into the next compiler release. Because it involves cleanup code in a class destructor, a design change may be necessary. Who should I contact to determine the best way to fix this bug?
October 24, 2017
Call destroy(fiber) when it has completed execution? Who manages the fibers?
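For illustration, a minimal sketch of that eager-destruction idea using the core.thread API (the fiber body here is just a placeholder):

import core.thread : Fiber;

void main()
{
    auto f = new Fiber({ Fiber.yield(); }); // placeholder fiber body
    f.call();                               // run until the yield
    f.call();                               // run to completion
    assert(f.state == Fiber.State.TERM);
    destroy(f); // runs ~Fiber right away, so the stack is released
                // without waiting for the GC to finalize the object
}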
October 24, 2017
On 10/23/17 12:56 PM, Brian Schott wrote:
> Context: https://issues.dlang.org/show_bug.cgi?id=17914
> 
> I need to get this issue resolved as soon as possible so that the fix makes it into the next compiler release. Because it involves cleanup code in a class destructor, a design change may be necessary. Who should I contact to determine the best way to fix this bug?

A failing use case would help. Fixing a bug when you can't reproduce it is difficult.

-Steve
October 24, 2017
On Monday, 23 October 2017 at 16:56:32 UTC, Brian Schott wrote:
> Context: https://issues.dlang.org/show_bug.cgi?id=17914
>
> I need to get this issue resolved as soon as possible so that the fix makes it into the next compiler release. Because it involves cleanup code in a class destructor, a design change may be necessary. Who should I contact to determine the best way to fix this bug?

Looking at git blame [0], I'd guess Martin Nowak and Nemanja Boric are the most involved. I'm not sure how deep Petar Kirov and Sean Kelly are into Fibers.

My question regarding the bug: why is munmap/freeStack called in the destructor? Couldn't it be done right after termination?

[0] https://github.com/dlang/druntime/blame/ec9a79e15d446863191308fd5e20febce2053546/src/core/thread.d#L4077
October 24, 2017
On Tuesday, 24 October 2017 at 14:28:01 UTC, Steven Schveighoffer wrote:
> A failing use case would help. Fixing a bug when you can't reproduce it is difficult.
>
> -Steve

I've attached one to the bug report.

October 25, 2017
On Tuesday, 24 October 2017 at 21:49:10 UTC, qznc wrote:
> My question regarding the bug: why is munmap/freeStack called in the destructor? Couldn't it be done right after termination?

I've been reading the Fiber code and (so far) that seems to be reasonable. Can anybody think of a reason that this would be a bad idea? I'd rather not create a pull request for a design that's not going to work because of a detail I've overlooked.
October 25, 2017
On Wednesday, 25 October 2017 at 01:26:10 UTC, Brian Schott wrote:
>
> I've been reading the Fiber code and (so far) that seems to be reasonable. Can anybody think of a reason that this would be a bad idea? I'd rather not create a pull request for a design that's not going to work because of a detail I've overlooked.

Just skimming the Fiber code, I found the reset(...) API functions, whose purpose is to re-use Fibers once they've terminated.

Eager stack deallocation would have to coexist with the Fiber reuse API.

Perhaps the Fiber reuse API could simply be polished and made easy to integrate so that your original use case no longer hits system limits.

For example, an optional delegate could be called upon termination, making it easier to hook in Fiber recycling.

The reason my thoughts head in that direction is that I've read that frequent mmap/munmap'ing isn't recommended in performance-conscious programs.
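For illustration, a minimal sketch of such recycling; Fiber.reset and Fiber.State are the real core.thread API, while FiberPool and its method names are hypothetical:

import core.thread : Fiber;

// Hypothetical pool that recycles terminated fibers via Fiber.reset,
// so their stacks stay mapped instead of being unmapped and re-mmap'd.
final class FiberPool
{
    private Fiber[] free;

    Fiber acquire(void delegate() dg)
    {
        if (free.length)
        {
            auto f = free[$ - 1];
            free = free[0 .. $ - 1];
            f.reset(dg);          // reuse the existing stack mapping
            return f;
        }
        return new Fiber(dg);     // only this path mmaps a new stack
    }

    void release(Fiber f)
    {
        assert(f.state == Fiber.State.TERM);
        free ~= f;                // keep the stack for the next task
    }
}

void main()
{
    auto pool = new FiberPool;
    int runs;
    auto f = pool.acquire({ ++runs; });
    f.call();                           // run the task to completion
    pool.release(f);
    auto g = pool.acquire({ ++runs; }); // reuses f and its stack
    g.call();
    assert(runs == 2 && f is g);
}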
October 25, 2017
On 10/23/17 12:56 PM, Brian Schott wrote:
> Context: https://issues.dlang.org/show_bug.cgi?id=17914
> 
> I need to get this issue resolved as soon as possible so that the fix makes it into the next compiler release. Because it involves cleanup code in a class destructor, a design change may be necessary. Who should I contact to determine the best way to fix this bug?

It appears that the limitation applies to mmap calls as well, and the mmap call to allocate the stack has been in Fiber since, as far as I can tell, the beginning. How has this not shown up before?

Regardless of the cause, this puts a limit on the number of simultaneous Fibers one can have. In other words, this is not just a problem of Fibers not being cleaned up properly, because one may need more than 65k fibers actually running simultaneously. We should try to avoid that limitation.

For example, I would think even the following code is something we should support:

void main()
{
	import std.concurrency : Generator, yield;
	import std.stdio : File, writeln;

	auto f = File("/proc/sys/vm/max_map_count", "r");
	ulong n;
	f.readf("%d", &n);
	writeln("/proc/sys/vm/max_map_count = ", n);
	Generator!int[] gens; // retain pointers to all the generators
	foreach (i; 0 .. n + 1000)
	{
		if (i % 1000 == 0)
			writeln("i = ", i);
		gens ~= new Generator!int({ yield(1); });
	}
}

If we *can't* do this, then we should provide a way to manage the limits.

That is, there should be a way to create more fibers than the limit allows, but only allocate stacks when we can (and have a way to tell the user what's going on).
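A rough sketch of that lazy-allocation idea; ThrottledScheduler and all of its names are hypothetical, and only Fiber, Fiber.reset, and Fiber.State come from core.thread:

import core.thread : Fiber;

// Hypothetical scheduler that never keeps more than `cap` stacks
// mapped: excess tasks wait in a queue until a running fiber
// terminates, whose stack is then reused via Fiber.reset.
final class ThrottledScheduler
{
    private size_t cap;
    private Fiber[] live;
    private void delegate()[] pending;

    this(size_t cap) { this.cap = cap; }

    void spawn(void delegate() task)
    {
        if (live.length < cap)
            live ~= new Fiber(task); // mmaps a stack: under the cap
        else
            pending ~= task;         // defer: no stack allocated yet
    }

    // One round-robin pass over the live fibers.
    void step()
    {
        Fiber[] stillLive;
        foreach (f; live)
        {
            f.call();
            if (f.state != Fiber.State.TERM)
                stillLive ~= f;
            else if (pending.length)
            {
                f.reset(pending[0]); // hand the stack to queued work
                pending = pending[1 .. $];
                stillLive ~= f;
            }
        }
        live = stillLive;
    }

    bool done() { return live.length == 0 && pending.length == 0; }
}

void main()
{
    auto s = new ThrottledScheduler(64); // far below any map limit
    foreach (i; 0 .. 100_000)
        s.spawn({ Fiber.yield(); });     // 100k tasks, at most 64 stacks
    while (!s.done())
        s.step();
}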

-Steve
October 25, 2017
On Wednesday, October 25, 2017 09:26:26 Steven Schveighoffer via Digitalmars-d wrote:
> On 10/23/17 12:56 PM, Brian Schott wrote:
> > Context: https://issues.dlang.org/show_bug.cgi?id=17914
> >
> > I need to get this issue resolved as soon as possible so that the fix makes it into the next compiler release. Because it involves cleanup code in a class destructor, a design change may be necessary. Who should I contact to determine the best way to fix this bug?
>
> It appears that the limitation applies to mmap calls as well, and the mmap call to allocate the stack has been in Fiber since, as far as I can tell, the beginning. How has this not shown up before?

Maybe there was a change in the OS(es) being used that affected the limit?

- Jonathan M Davis

October 25, 2017
On Wednesday, 25 October 2017 at 14:19:14 UTC, Jonathan M Davis wrote:
> On Wednesday, October 25, 2017 09:26:26 Steven Schveighoffer via Digitalmars-d wrote:
>> On 10/23/17 12:56 PM, Brian Schott wrote:
>> > Context: https://issues.dlang.org/show_bug.cgi?id=17914
>> >
>> > I need to get this issue resolved as soon as possible so that the fix makes it into the next compiler release. Because it involves cleanup code in a class destructor, a design change may be necessary. Who should I contact to determine the best way to fix this bug?
>>
>> It appears that the limitation applies to mmap calls as well, and the mmap call to allocate the stack has been in Fiber since, as far as I can tell, the beginning. How has this not shown up before?
>
> Maybe there was a change in the OS(es) being used that affected the limit?
>
> - Jonathan M Davis

Yes, the stack is not immediately unmapped because it's very common just to reset the fiber and reuse it for handling a new connection; creating new fibers (and unmapping on termination) is a problem in real life (as this limit is as well).

At Sociomantic we already had this issue: https://github.com/sociomantic-tsunami/tangort/issues/2 Maybe this is the way to go; I don't see a reason why every stack should be mmapped separately.
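For illustration, a sketch of that single-mapping idea; StackArena is hypothetical, and since druntime's Fiber currently allocates its own stack (with a guard page, which would split a shared region back into separate kernel mappings anyway), this shows only the allocation side:

import core.sys.posix.sys.mman
    : mmap, munmap, MAP_ANON, MAP_FAILED, MAP_PRIVATE,
      PROT_READ, PROT_WRITE;

// Hypothetical arena that carves N fiber-sized stacks out of one
// mmap'd region, so they cost one entry in /proc/<pid>/maps
// instead of one entry per fiber.
struct StackArena
{
    enum size_t stackSize = 64 * 1024;
    private ubyte* base;
    private size_t count;

    static StackArena create(size_t count)
    {
        auto p = mmap(null, stackSize * count, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANON, -1, 0);
        assert(p != MAP_FAILED, "mmap failed");
        return StackArena(cast(ubyte*) p, count);
    }

    ubyte[] stack(size_t i)
    {
        assert(i < count);
        return base[i * stackSize .. (i + 1) * stackSize];
    }

    void dispose()
    {
        munmap(base, stackSize * count);
        base = null;
    }
}

void main()
{
    auto arena = StackArena.create(10_000); // one mapping, 10k stacks
    auto s = arena.stack(42);               // 64 KiB slice for one fiber
    s[0] = 1;                               // touch it: plain memory
    arena.dispose();
}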