September 06

On Monday, 6 September 2021 at 16:55:30 UTC, deadalnix wrote:

> Damn, I don't know who that guy is, but it seems like he foresaw the current mess that we are in.

The comment reads:

> I'm not convinced this is the right approach. The thing will still fail to link when no main function is provided.
>
> IMO, it is better to do this as proposed by basil. I plan to do a DMD PR.

So what happened to that plan? What's basil's proposal?

September 06

On Monday, 6 September 2021 at 20:19:02 UTC, Dennis wrote:

> > I'm not convinced this is the right approach. The thing will still fail to link when no main function is provided.
> >
> > IMO, it is better to do this as proposed by basil. I plan to do a DMD PR.
>
> So what happened to that plan? What's basil's proposal?

I have no idea, I do not even remember having posted that comment.

September 06

On Monday, 6 September 2021 at 22:53:50 UTC, deadalnix wrote:

> On Monday, 6 September 2021 at 17:12:50 UTC, jfondren wrote:
>
> > -main wasn't and still isn't a problem for the use case the PR was authored for, of "I want to run all my unit tests, only", and the approach of the PR was completely successful at making druntime much more friendly to that use case. It had zero impact on your use case, but you didn't sell it very well by only mentioning what for other uses was a very trivial inconvenience.
>
> [something deadalnix typed a while ago and has been waiting to post]

The blindness you have to precisely how people intend to use features, "use case blindness", has resulted in a lot of trouble for you:

  1. you earn pointless animosity by describing existing features, which other people use all the time in comfort, as entirely broken and obviously badly designed.

  2. you keep derailing your own thread about your own use case, because you don't think it's important to keep the focus on your use case.

  3. even when you talk about problems that directly relate to pretty severe inconveniences for your use case, you're unable to interest other people in them, because you don't think to connect the problems to a use case where they are severe inconveniences.

You keep appealing to experiences that are actually personal to you and not generally felt. I've mostly enjoyed this thread and appreciate that you made it; I dug a lot into unit tests in D as a result of following it. The author of the PR that you think has helped 'lead to this mess' has said he's happy with the state of D unit testing. Just as other people are not all constantly slapping their foreheads over how irritating -main is to use, and just as the other participants of that PR didn't think "it doesn't fix -main" was a showstopper for that PR, other people also do not all think that "General has an 80-post thread about unit tests" is an argument in itself that unit tests have a problem.

I think a still likely outcome is that a general interest in unittests will result in a lot of improvements, but only to other people's use cases. If that's the case in a few months, give Discord a try.

September 07

On Monday, 6 September 2021 at 23:57:05 UTC, jfondren wrote:

> The blindness you have to precisely how people intend to use features, "use case blindness", has resulted in a lot of trouble for you:
>
> 1. you earn pointless animosity by describing existing features, which other people use all the time in comfort, as entirely broken and obviously badly designed.

That doesn't mean they should not be improved, or properly fixed. And if nobody complains about them, that doesn't mean they have a good architecture.

> 2. you keep derailing your own thread about your own use case, because you don't think it's important to keep the focus on your use case.

Selecting just a few modules to compile in a large project will affect not only him, but all projects that use source libraries whose unit tests have long compile times.

> 3. even when you talk about problems that directly relate to pretty severe inconveniences for your use case, you're unable to interest other people in them, because you don't think to connect the problems to a use case where they are severe inconveniences.

Why are you trying to infer what he thinks and then post that as his thoughts?
He did interest at least me, since I have experienced waiting for a source library's unit tests to compile when I didn't need them in the first place. And to tell the truth, it wasn't a nice experience, given that D advertises fast compilation cycles.

Best regards,
Alexandru.

September 08

On 9/2/21 7:00 PM, Steven Schveighoffer wrote:
> What I was saying is, you have a library, you built it without -unittest (on its own build line, or just use dub because it does the right thing), and then you build your main application with -unittest, linking in the already-built library (or its object files). The runtime runs all unittests it finds, and so it's running just the modules that were built with the -unittest flag.
>

Steve:

Sorry to reply a few days late and in the middle of a gargantuan thread, but I wanted some clarification from you on dub just doing the right thing --

Let's suppose I maintain a library L (written in D; has unit tests). Suppose further I have client program P, which uses L.

As far as I can tell, if P's dubfile includes library L, and the user executes `dub test`, then the unit tests from the library L as well as program P are built and executed.

We had to wrap our library's unittests in `debug(library_unittest)` to escape this behavior.
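
A minimal sketch of that wrapping (the module and function names here are hypothetical, not our actual library's code):

```d
module mylib.example;  // hypothetical module name

int twice(int x) { return x * 2; }

// Gated behind a debug identifier: a dependent project's `dub test`
// will not compile or run this block unless the library is built
// with -debug=library_unittest.
debug (library_unittest) unittest
{
    assert(twice(2) == 4);
}
```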

Can you provide some clarity? It sounds like you are saying dub will build P's unit tests, but not library L's. Not at all my experience, unless it was updated recently.

September 09

On Thursday, 9 September 2021 at 00:13:00 UTC, James Blachly wrote:

> Let's suppose I maintain a library L (written in D; has unit tests). Suppose further I have client program P, which uses L.
>
> As far as I can tell, if P's dubfile includes library L, and the user executes `dub test`, then the unit tests from the library L as well as program P are built and executed.
>
> We had to wrap our library's unittests in `debug(library_unittest)` to escape this behavior.
>
> Can you provide some clarity? It sounds like you are saying dub will build P's unit tests, but not library L's. Not at all my experience, unless it was updated recently.

Either one of the stated behaviors can be the case.

Consider this pair of libraries:

```
$ grep -R .
ell/dub.sdl:name "ell"
ell/dub.sdl:description "A minimal D application."
ell/dub.sdl:authors "Julian Fondren"
ell/dub.sdl:copyright "Copyright © 2021, Julian Fondren"
ell/dub.sdl:license "MIT"
ell/source/ell.d:module ell;
ell/source/ell.d:public const int x = 5;
ell/source/ell.d:unittest {
ell/source/ell.d:    assert(x != 5);
ell/source/ell.d:}
pea/dub.sdl:name "pea"
pea/dub.sdl:description "A minimal D application."
pea/dub.sdl:authors "Julian Fondren"
pea/dub.sdl:copyright "Copyright © 2021, Julian Fondren"
pea/dub.sdl:license "MIT"
pea/dub.sdl:dependency "ell" version="*"
pea/source/pea.d:module pea;
pea/source/pea.d:import ell : x;
pea/source/pea.d:public const int n = 5;
pea/source/pea.d:unittest {
pea/source/pea.d:    assert(n == x);
pea/source/pea.d:}
```

ell has tests that always fail. pea has tests that always succeed. pea depends on ell.

```
$ dub add-local ./ell
Registered package: ell (version: ~master)
$ cd pea
pea$ dub -q test
1 modules passed unittests
```

In this configuration, the fact that ell fails its unittests doesn't matter at all to pea. The library and its value x are still used in pea.

And ell definitely fails its unittests:

```
pea$ cd ../ell; dub -q test
core.exception.AssertError@source/ell.d(6): unittest failure
...
... a lot of output
...
1/1 modules FAILED unittests
Program exited with code 1
```

But let's add a single line to ell's dub configuration...

```
ell$ echo 'targetType "sourceLibrary"' >> dub.sdl
ell$ cat dub.sdl
name "ell"
description "A minimal D application."
authors "Julian Fondren"
copyright "Copyright © 2021, Julian Fondren"
license "MIT"
targetType "sourceLibrary"
```

...and we see the behavior that you noted with P and L:

```
ell$ cd ../pea
pea$ dub -q test
core.exception.AssertError@../ell/source/ell.d(6): unittest failure
...
... a lot of output
...
1/2 modules FAILED unittests
```

IMO, D's #1 problem is that dub's documentation is 1/10000th the length it needs to be. (Meanwhile, other problems with dub only show up in the three digits, like D problem #451: a dub `#!` script does a bunch of git and repository work with undesirable slowdowns.)

September 11

On Thursday, 26 August 2021 at 09:38:54 UTC, deadalnix wrote:

> On Wednesday, 25 August 2021 at 20:49:32 UTC, Walter Bright wrote:
>
> > This is indeed a real problem. It was solved with the -main switch to dmd, which will just stick one in for you. I use it all the time for unit testing modules.
>
> No it wasn't, because it adds a main no matter what, which means that when there is a main in the module being tested, you get a link error.

Good news: this is now fixed!

https://github.com/dlang/dmd/pull/13057

Grab a nightly and try out: dmd -unittest -main -run
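
As I understand the change, -main now only inserts a main when the module doesn't already define one, so a file like this hypothetical example no longer fails to link:

```d
// app.d: defines its own main *and* has a unittest. Previously,
// `dmd -unittest -main -run app.d` would insert a second main and fail
// to link; with the patched compiler, -main becomes a no-op here.
import std.stdio : writeln;

int add(int a, int b) { return a + b; }

unittest
{
    assert(add(2, 3) == 5);
}

void main()
{
    writeln("main ran");
}
```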

September 13

On Saturday, 11 September 2021 at 21:23:07 UTC, Dennis wrote:

> Good news: this is now fixed!
>
> https://github.com/dlang/dmd/pull/13057
>
> Grab a nightly and try out: dmd -unittest -main -run

https://github.com/dlang/dmd/pull/13057#issuecomment-918061875
