October 17, 2019
On Thursday, 17 October 2019 at 20:39:41 UTC, H. S. Teoh wrote:
> Huh.  Walter says binary literals were removed from D, so how come the following still compiles on git master??
>
> 	pragma(msg, 0b1000_1000);
>

I was surprised by him mentioning that as well. I'm glad it stayed too, since I actually use them. When encoding a code point that takes two UTF-8 code units, for example, I think this looks really clean:
```
buf[0] = 0b1100_0000 | (chr >> 6) & 0b01_1111;
buf[1] = 0b1000_0000 | (chr >> 0) & 0b11_1111;
```
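
For completeness, here's a minimal sketch of how that two-byte case might fit into a small encoder. This is illustrative only (the helper name `encodeUpTo2` and the `char[4]` buffer convention are made up), not code from the post:
```d
// Illustrative sketch: encode a code point below U+0800 into UTF-8
// using binary literals for the lead/continuation byte patterns.
size_t encodeUpTo2(dchar chr, ref char[4] buf)
{
    if (chr < 0x80)
    {
        buf[0] = cast(char) chr;  // single byte: 0xxxxxxx
        return 1;
    }
    assert(chr < 0x800);          // two-byte range only in this sketch
    buf[0] = cast(char) (0b1100_0000 | ((chr >> 6) & 0b01_1111)); // 110xxxxx
    buf[1] = cast(char) (0b1000_0000 | ((chr >> 0) & 0b11_1111)); // 10xxxxxx
    return 2;
}
```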

October 17, 2019
On Thursday, 17 October 2019 at 20:56:54 UTC, Dennis wrote:
> I was surprised by him mentioning that as well. I'm glad it stayed too, since I actually use them.

Indeed, me too. They are definitely still there, and I'd be quite sad if they disappeared.
October 17, 2019
On 10/17/2019 12:24 PM, Dennis wrote:
> On Monday, 23 September 2019 at 17:49:12 UTC, H. S. Teoh wrote:
>> Will this talk be posted somewhere like Youtube afterwards?
> 
> It's up now!
> https://www.youtube.com/watch?v=p22MM1wc7xQ

Slides: https://digitalmars.com/articles/hits.pdf
October 17, 2019
Reddit: https://www.reddit.com/r/programming/comments/djgsdy/d_at_20_hits_and_misses/
October 18, 2019
On Tuesday, 24 September 2019 at 23:27:44 UTC, Walter Bright wrote:

> I know. The same thing happened at DConf 2018, where the first morning's sessions were all lost.

Does this fall under the category of "things I learned the hard way?" :)
October 20, 2019
On Friday, 18 October 2019 at 01:37:01 UTC, Walter Bright wrote:
> Slides: https://digitalmars.com/articles/hits.pdf

Tangent time.

Regarding floating point:

> Unable to convince people that more precision is worthwhile

I'm actually waiting for quad floats to get hardware support. The registers are already wide enough; there's just nothing internally, or in the instruction set, for them yet.

But yeah. The reality is that the hardware I operate on uses 32-bit floats almost exclusively on the CPU (the sole exception I can think of is the main simulation timer; you don't want that as a 32-bit float). GPUs and shaders use 16-bit half floats extensively these days. But 64-bit is a rarity: in my own tests, operations take 40% more execution time on average, and we can trade that accuracy for execution time.
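
To illustrate the timer point, here's a minimal sketch (illustrative only, not from any real codebase) of how a 32-bit float accumulator drifts over a long session compared to a double:
```d
// Illustrative sketch: accumulate ~10 hours of 60 Hz frame times
// in both float and double and compare the totals.
void main()
{
    import std.stdio : writefln;

    enum float  dtF = 1.0f / 60.0f;  // frame time as a 32-bit float
    enum double dtD = 1.0  / 60.0;   // same frame time in double

    float  timerF = 0.0f;
    double timerD = 0.0;

    foreach (i; 0 .. 60 * 60 * 60 * 10)  // ~10 hours of frames
    {
        timerF += dtF;
        timerD += dtD;
    }

    writefln("float:  %.3f s", timerF);
    writefln("double: %.3f s", timerD);
    writefln("drift:  %.3f s", timerD - timerF);
}
```
Once the float total grows large, each small frame-time increment can no longer be represented exactly, so the error keeps accumulating.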

This might change in a 4K TV world. I haven't really done much with 4K yet (Quantum Break supports it, but I didn't notice anything glitchy that I could associate with floating-point imprecision).

Still. 64- and 128-bit floats are quite useful for offline calculations. Make your data as accurate as possible, then let the runtime code use the fastest execution path it can.
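
As a rough illustration of that split (illustrative only; `bakeSineTable` is a made-up helper), do the math in double offline and ship floats to the runtime path:
```d
// Illustrative sketch: compute a lookup table in double precision,
// then narrow to 32-bit floats for the fast runtime representation.
import std.math : sin, PI;

float[256] bakeSineTable()
{
    float[256] table;
    foreach (i; 0 .. table.length)
    {
        // All intermediate math stays in double precision...
        double phase = (cast(double) i / table.length) * 2.0 * PI;
        // ...and only the final result is narrowed to float.
        table[i] = cast(float) sin(phase);
    }
    return table;
}
```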