May 10, 2015
On Sunday, 10 May 2015 at 12:54:02 UTC, Timon Gehr wrote:
> On 05/10/2015 07:39 AM, H. S. Teoh via Digitalmars-d wrote:
>> I also just realized that on Posix the profiling code apparently relies
>> on the rdtsc instruction, which counts CPU cycles in a 64-bit counter --
>> given the high frequencies of modern CPUs, moderately long-running
>> CPU-intensive processes easily overflow this counter, leading to
>> wrapped-around timing values and completely garbled output.
>>
>> gprof, for all of its flaws, does not suffer from this problem.
>
> http://www.wolframalpha.com/input/?i=2^64%2F%289+GHz%29
>
> What am I missing?

http://www.wolframalpha.com/input/?i=2^32%2F%289+GHz%29

It doesn't look as good that way, especially if you have some tight loops.
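(2^64 cycles at 9 GHz is on the order of 65 years before the counter wraps around, but 2^32 cycles is gone in under half a second.)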
May 11, 2015
On 2015-05-08 21:45, Walter Bright wrote:

> This is an interesting problem, one that I faced with Warp.
>
> The solution was to make the function being tested a function template.
> Then, in the unit test I replace the 'file' argument to the function
> with a static array of test data.

I don't really like that approach: having to change something into a template just to make it testable. That's the beauty of dynamic typing, and especially of Ruby.

In Ruby I can replace a given method on any object very easily, even on just one instance.

obj = Object.new

# Singleton method: defined on this one instance only.
def obj.to_s
  'foo'
end

obj.to_s == 'foo'         # true
Object.new.to_s != 'foo'  # true; other instances are unaffected

This is perfect for mocking and stubbing.

-- 
/Jacob Carlborg
May 11, 2015
On 2015-05-10 10:12, Jonathan M Davis wrote:

> Those are really the only ones that I've ever thought made sense, and in
> several cases, the things that folks want are things that I very much
> _don't_ want (e.g. continuing to execute a unittest block after an
> assertion failure).

I don't think most of us want that. What we (I) want is for _other_ unit test blocks to run after an assertion failure. I also believe all unit test blocks should be completely independent of each other.
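
That is, if the first block here blows up, the second one should still get run (a contrived example):

unittest
{
    assert(false, "this block fails");
}

unittest
{
    // completely independent of the block above; it should still run
    assert(1 + 1 == 2);
}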

-- 
/Jacob Carlborg
May 11, 2015
On Friday, 8 May 2015 at 19:45:35 UTC, Walter Bright wrote:
> On 5/8/2015 7:00 AM, Chris wrote:
>> The only drawback is that sometimes the logic of a program does not allow to
>> test every little bit, especially when handling files is concerned.
>
> This is an interesting problem, one that I faced with Warp.
>
> The solution was to make the function being tested a function template. Then, in the unit test I replace the 'file' argument to the function with a static array of test data.
>
> The term for it is 'type mocking', and it's fairly well explored territory. What I find most intriguing about it is it results in D programs consisting largely of template functions rather than plain functions.
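
Something like this, I take it (a rough sketch; `readConfig` and the test data are made up for illustration, not your actual Warp code):

auto readConfig(Source)(Source input)
{
    import std.algorithm : filter, map;
    import std.array : array;
    import std.string : strip;

    // Works on anything that iterates over lines: File.byLine,
    // a static array of test strings, etc.
    return input
        .map!(line => line.strip.idup)
        .filter!(line => line.length > 0)
        .array;
}

unittest
{
    // No file involved: feed the template a static array of test data.
    assert(readConfig(["foo", "", "  bar  "]) == ["foo", "bar"]);
}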

Hm, I was thinking of something like that. However, it gets more and more complicated if you have e.g. a class that uses another class, etc.

class Data // A singleton
{
  // stores paths to resources etc.
}

class Loader
{
  Data data;

  this()
  {
    this.data = Data.getInstance();
  }

  // Loads files and caches them in memory
}

class Process
{
  // Uses cached data
}

It ain't easy to unit test Process, but even Loader and Data can be tricky to unit test, because all of them depend on input from the outside.

./runprogram --config="config.json"
  -> Data.read("config.json")
  -> Loader.load(resources from config.json)
  -> Process.use(data loaded)

For cases like this, it's better to have an external test script.
May 11, 2015
On Monday, 11 May 2015 at 08:18:54 UTC, Jacob Carlborg wrote:
> On 2015-05-10 10:12, Jonathan M Davis wrote:
>
>> Those are really the only ones that I've ever thought made sense, and in
>> several cases, the things that folks want are things that I very much
>> _don't_ want (e.g. continuing to execute a unittest block after an
>> assertion failure).
>
> I don't think most of us want that. What we (I) want is for _other_ unit test blocks to run after an assertion failure. I also believe all unit test blocks should be completely independent of each other.

Well, for some of the discussions on parallelized unit tests, that would be required, and it's certainly good practice in general. But there's nothing currently stopping folks from writing unittest blocks which rely on what occurred in previous unittest blocks, and there are rare circumstances where that makes sense.

Hopefully, we can get to the point that druntime is able to run tests in parallel and then we can use attributes to mark parallelizable unittest blocks to control it.

- Jonathan M Davis
May 11, 2015
I'd like to suggest adding strip to the page. It does wonders for the final executable.
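
For example (on Linux; the exact savings depend on how the binary was built):

dmd myapp.d
strip myapp

That removes the symbol and debug information from the final binary.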

On Friday, 8 May 2015 at 15:47:11 UTC, Vladimir Panteleev wrote:
> On Thursday, 7 May 2015 at 22:27:57 UTC, Walter Bright wrote:
>> But let's not forget the meat and potatoes on our plate while looking at our neighbor's salad dressing.
>
> I decided to line up our potatoes on a nice new wiki page:
>
> http://wiki.dlang.org/Development_tools

May 11, 2015
On 5/11/15 1:19 AM, Jacob Carlborg wrote:
> On 2015-05-10 10:12, Jonathan M Davis wrote:
>
>> Those are really the only ones that I've ever thought made sense, and in
>> several cases, the things that folks want are things that I very much
>> _don't_ want (e.g. continuing to execute a unittest block after an
>> assertion failure).
>
> I don't think most of us want that. What we (I) want is for _other_ unit
> test blocks to run after an assertion failure. I also believe all unit
> test blocks should be completely independent of each other.

Yah, that's reasonable. Should be configurable in some easy way. -- Andrei

May 13, 2015
On Monday, 11 May 2015 at 09:31:34 UTC, Chris wrote:
> Hm, I was thinking of something like that, however, it gets more and more complicated if you have e.g. a class that uses another class etc.
>
> class Data // A singleton
> {
>   // stores paths to resources etc.
> }
>
> class Loader
> {
>   Data data;
>
>   this()
>   {
>     this.data = Data.getInstance();
>   }
>
>   // Loads files and caches them in memory
> }
>
> class Process
> {
>   // Uses cached data
> }
>
> It ain't easy to unit test Process, but even Loader and Data can be tricky to unit test, because all of them depend on input from the outside.

That's the reason for IoC design; it's similar to ranges in the sense that you feed the range its data instead of letting it figure out where to get that data on its own.
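
Roughly like this (just a sketch; the names are invented, not taken from your actual code):

interface DataSource
{
    string[] readLines();
}

class Loader
{
    string[] cache;

    this(DataSource source)
    {
        cache = source.readLines();
    }
}

unittest
{
    // A stub source: no files, no singleton.
    class StubSource : DataSource
    {
        string[] readLines() { return ["path/a", "path/b"]; }
    }

    auto loader = new Loader(new StubSource);
    assert(loader.cache == ["path/a", "path/b"]);
}

In the real program you construct Loader with a source that actually reads the config file; in the unittest, with a stub.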
May 14, 2015
On Wednesday, 13 May 2015 at 08:26:35 UTC, Kagamin wrote:
> On Monday, 11 May 2015 at 09:31:34 UTC, Chris wrote:
>> Hm, I was thinking of something like that, however, it gets more and more complicated if you have e.g. a class that uses another class etc.
>>
>> class Data // A singleton
>> {
>>  // stores paths to resources etc.
>> }
>>
>> class Loader
>> {
>>  Data data;
>>
>>  this()
>>  {
>>   this.data = Data.getInstance();
>>  }
>>
>>  // Loads files and caches them in memory
>> }
>>
>> class Process
>> {
>>  // Uses cached data
>> }
>>
>> It ain't easy to unit test Process, but even Loader and Data can be tricky to unit test, because all of them depend on input from the outside.
>
> That's the reason for IoC design; it's similar to ranges in a sense that you feed the range with data instead of letting it figure out where to get that data on its own.

However, the data comes from somewhere outside the program, and although you can apply IoC to most parts of a program _after_ it has been fed the data, the initial input section is not easily unit tested (i.e. with D's unittest blocks).

Unit tests only work for the data processing logic inside a unit, after the program has received the data from the outside. The initial acquisition of the data is outside their scope. Thus, there are always bits and pieces that cannot be unit-tested, e.g. loading files.
May 14, 2015
On Thursday, 14 May 2015 at 13:22:21 UTC, Chris wrote:
> However, the data comes from somewhere outside the program, and although you can IoC most parts of a program _after_ it's been fed the data, the initial input section is not easily unit tested (i.e. unit test in D).

Gathering the parts together is an integration point, and unit tests don't test integration. In your example you would still be able to test the Process class. And if you are unsure whether Loader can parse files correctly, that can be tested without files; streams are enough.
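
For the Process part, I mean something like this (a sketch; the lookup method and the cache layout are just invented for illustration):

class Process
{
    private string[string] cache;

    this(string[string] cache)
    {
        this.cache = cache;
    }

    string lookup(string key)
    {
        return cache.get(key, "");
    }
}

unittest
{
    // No Loader, no files: hand Process a hand-built cache.
    auto p = new Process(["beep" : "sounds/beep.wav"]);
    assert(p.lookup("beep") == "sounds/beep.wav");
    assert(p.lookup("missing") == "");
}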