May 08, 2017
On 05/08/2017 09:21 AM, Rel wrote:
> What do you guys think of the points explained here:
> https://www.youtube.com/watch?v=gWv_vUgbmug
>

Watched the first 15-20 min of it. Definitely want to watch the rest. Buuuuuutttt... so far it's a good example of the *one* little thing that kinda bugs me about Jonathan Blow:

I keep hearing him say the same things I've been saying for years, but because he wrote Braid, he can sometimes get people to actually listen instead of blindly dismissing everything. :/ (Granted that's not a fault of Blow's. But it still bugs me!)


> 1) The compile times seems very fast in comparison
> with other modern programming languages, I'm wondering
> how he managed to do it?

By being a game (and engine) developer and knowing the basics of writing efficient code, unlike the majority of the software industry.

(Seriously, if other circles of dev would pull their ***** out of their ***** long enough to recognize all of what game programming obviously involves (ex: It's more than the glorified calculators that the uppity banking software is, and don't get me started on "enterprise" in general), then they could finally start learning how to write grown-up code, and software today wouldn't suck so f****** badly.)

Also, not using header files.


> 2) Compile-time execution is not limited, the build
> system is interestingly enough built into the language.

Nemerle had that years ago (although I'm guessing/hoping that unlike Nemerle, Blow's implementation probably doesn't require manually compiling to a DLL before being able to use given code at compile-time).

My inclination is that it's the right approach, and is one thing that makes D look clunky and awkward by comparison. I never bought D's argument that compiling source shouldn't be allowed to do arbitrary code execution or I/O, because, come on, "build scripts" and "build systems". That "no arbitrary code execution" ship sailed ages ago: Who in hell compiles software from source without using the provided build script or build system configuration (all of which, by necessity, allow arbitrary code execution and I/O)? Nobody who isn't searching for their own little corner of hell, that's who.
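For contrast, a minimal sketch of where D draws that line today (the fib function and "config.txt" are just illustrative, and the string import additionally needs the file's directory passed with -J):

// CTFE: ordinary D functions can run at compile time whenever the result
// is needed in a compile-time context, such as an enum initializer.
int fib(int n)
{
    return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

enum fib10 = fib(10);       // computed entirely by the compiler
static assert(fib10 == 55); // checked at compile time, no runtime cost

// But I/O and other system calls are rejected during CTFE, e.g.
//   enum text = std.file.readText("data.txt"); // error when forced through CTFE
// The sanctioned escape hatch is a string import, limited to directories
// whitelisted on the command line with -J:
enum config = import("config.txt"); // file contents baked in at compile time

void main() {}

Blow's approach, going by the talk, is simply to let compile-time code do whatever runtime code can do.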

The *one* thing that does give me a little pause though is the possibility that order of compilation could change the results of generated code. I think it'll be interesting to see how that plays out in practice. "Don't do that" sounds nice, but the question remains: "Is it something that will happen without the author knowing he's doing it? If so, will it be a problem?"

May 09, 2017
On Tuesday, 9 May 2017 at 02:13:19 UTC, Nick Sabalausky (Abscissa) wrote:
> On 05/08/2017 03:28 PM, Jack Stouffer wrote:
>>
> Uncompressed? Seriously? I assume that really means FLAC or something rather than truly uncompressed, but even still...sounds more like a bullet-list pandering^H^H^H^H^H^H^H^H^Hselling point to the same suckers^H^H^H^H^H^H^H"classy folk" who buy Monster-brand cables for digital signals than a legit quality enhancement. Take a top-of-the-line $$$$ audio system, set down a room full of audiophiles, and compare lossless vs 320kbps Vorbis...in a true double-blind, no WAY they'd be able to consistently spot the difference even if they try. Let alone while being detracted by all the fun of causing mass mayhem and carnage. Unless maybe you just happen to stumble upon some kind of audio savant.

Don't need to go that high. c't did a double-blind study some years ago with the help of its sister magazine for audio equipment. So they had a very good setup. What they discovered is that mp3 at 160 kbit/s CBR was already indistinguishable from CD for 99% of people for almost all kinds of music. mp3 is much better than its reputation: due to really bad encoders at the beginning (Xing was awful and was the most widely used early on, Fraunhofer was excellent but not free, LAME took years before it was any good), people thought that the crap they heard was inherent to the mp3 format, but very often it was bad ripping, over-eager low-pass filtering and crappy psycho-acoustic models (Xing). So you make a good point that uncompressed audio for a game is completely nuts.

May 09, 2017
On Tuesday, 9 May 2017 at 06:10:39 UTC, Patrick Schluter wrote:
> equipment. So they had a very good setup. What they discovered is that mp3 at 160 kbit/s CBR was already indistinguishable from CD for 99% of people for almost all kinds of music.

It isn't all that hard to distinguish if you know what to listen for. I hear a big difference in music I have mixed down/mastered on a good headset.

May 09, 2017
On Monday, 8 May 2017 at 19:14:16 UTC, Meta wrote:
> Is this why most console games that get ported to PC are massive? GTA V on PC, for example, was 100GB, while Skyrim was around 8GB.

Consoles have a fixed hardware level that gives you essentially deterministic performance. The assets they can handle are generally 1/4 to 1/2 as detailed as what current top-of-the-line but reasonably-priced PC hardware can handle. And PC gamers *love* getting the higher-detail assets. So we ship PC games with the option to scale asset quality at runtime, and ship with higher-quality assets than a console game requires.

See, as an alternative example, the Shadow of Mordor ultra HD texture pack, which requires a 6GB video card and comes as an additional download. Another example I like using is Rage, which is essentially 20GB of unique texture data. If they wanted to re-release it on Xbox One and PS4 without being accused of just dumping a port across, they'd want to ship with 80GB of texture data.
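Rough arithmetic on where a jump like that comes from (the only given number is Rage's ~20GB above; the rest is illustrative). Block-compressed texture formats cost a fixed number of bits per texel, so doubling resolution on each axis quadruples the data:

void main()
{
    import std.stdio : writefln;

    // Block-compressed formats (BC1..BC7 and friends) store a fixed number
    // of bits per texel, so on-disk size scales directly with texel count.
    double shippedTextureGB = 20.0; // Rage's unique texture data, as above
    int scalePerAxis = 2;           // e.g. each 1024x1024 texture becomes 2048x2048

    double hdTextureGB = shippedTextureGB * scalePerAxis * scalePerAxis;
    writefln("%.0f GB -> %.0f GB of texture data", shippedTextureGB, hdTextureGB);
    // Prints: 20 GB -> 80 GB of texture data
}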

There are also grumblings about whether those HD packs are worth it, but now that 4K displays are arriving, those grumblings tend to stop as soon as people see the results.

On Tuesday, 9 May 2017 at 02:21:19 UTC, Nick Sabalausky (Abscissa) wrote:
> I don't know anything about Witcher, but FF13 *does* have a fair amount of pre-rendered video, FWIW. And maybe Witcher uses better compression than FF13?

Correct about the video. The Final Fantasy games are notorious for their pre-renders and their lengthy cutscenes. All of which require massive amounts of video and audio data.

Better compression, though? Unlikely. Texture formats are fairly standardised these days. Mesh formats are custom, but not as much of a space hog as textures. Other assets like audio and video are where compression formats really come into play. But gaming hardware has a few tricks for that. For example:

On Tuesday, 9 May 2017 at 02:13:19 UTC, Nick Sabalausky (Abscissa) wrote:
> Uncompressed? Seriously? I assume that really means FLAC or something rather than truly uncompressed, but even still...sounds more like a bullet-list pandering^H^H^H^H^H^H^H^H^Hselling point to the same suckers^H^H^H^H^H^H^H"classy folk" who buy Monster-brand cables for digital signals than a legit quality enhancement.

Well, no. Gaming consoles - and even mobile devices - have dedicated hardware for decompressing some common audio and video formats. PC hardware does not. Decompression needs to happen on the CPU.

Take Titanfall as a use case: it copped quite a bit of flak for shipping the PC version with uncompressed audio. The Xbox One version shipped on a machine that guaranteed six hardware threads (one per core) with dedicated hardware for audio decompression. Their PC minspec, though? A dual-core machine (one thread per core) with less RAM and only general-purpose hardware.

The PC scene had a cry, but it was yet another case of PC gamers not fully understanding hardware. The PC market isn't all high-end users; the majority of players aren't running bleeding-edge hardware. Targeting hardware that low was the right business decision, but it meant some compromises had to be made. In this case, decompressing audio on the CPU either wasn't feasible in real time or increased load times dramatically. Loading uncompressed audio off the disk was legitimately an optimisation in both cases.
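To put a number on the disk-space side of that trade-off, here's the back-of-the-envelope maths for CD-quality PCM versus a lossy stream (generic figures, nothing specific to Titanfall's actual asset layout):

void main()
{
    import std.stdio : writefln;

    // Uncompressed 16-bit stereo PCM at 44.1kHz:
    double pcmBytesPerSec = 44_100.0 * 2 /*channels*/ * 2 /*bytes per sample*/;
    // A typical lossy stream at 160 kbit/s:
    double lossyBytesPerSec = 160_000.0 / 8;

    double secondsPerHour = 3600;
    writefln("PCM:   ~%.0f MB per hour of audio", pcmBytesPerSec * secondsPerHour / 1_000_000);
    writefln("Lossy: ~%.0f MB per hour of audio", lossyBytesPerSec * secondsPerHour / 1_000_000);
    // Roughly 635 MB vs 72 MB per hour -- the disk space you give up in
    // exchange for not spending CPU time (or load time) on decompression.
}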

On Tuesday, 9 May 2017 at 06:50:18 UTC, Ola Fosheim Grøstad wrote:
> It isn't all that hard to distinguish if you know what to listen for. I hear a big difference in music I have mixed down/mastered on a good headset.

So, as Walter would say, "It's trivially obvious to the casual observer."

That's the point of the blind test. It isn't trivially obvious to the casual observer. You might think it is, but you're not a casual observer. That's essentially why LAME started up - a bunch of audiophiles decided to encode for perception of quality rather than strictly objective quality.
May 09, 2017
On 05/09/2017 02:10 AM, Patrick Schluter wrote:
> On Tuesday, 9 May 2017 at 02:13:19 UTC, Nick Sabalausky (Abscissa) wrote:
>> On 05/08/2017 03:28 PM, Jack Stouffer wrote:
>>>
>> Uncompressed? Seriously? I assume that really means FLAC or something
>> rather than truly uncompressed, but even still...sounds more like a
>> bullet-list pandering^H^H^H^H^H^H^H^H^Hselling point to the same
>> suckers^H^H^H^H^H^H^H"classy folk" who buy Monster-brand cables for
>> digital signals than a legit quality enhancement. Take a
>> top-of-the-line $$$$ audio system, set down a room full of
>> audiophiles, and compare lossless vs 320kbps Vorbis...in a true
>> double-blind, no WAY they'd be able to consistently spot the
>> difference even if they try. Let alone while being detracted by all
>> the fun of causing mass mayhem and carnage. Unless maybe you just
>> happen to stumble upon some kind of audio savant.
>
> Don't need to go that high. c't did a double-blind study some years ago
> with the help of its sister magazine for audio equipment. So they had a
> very good setup. What they discovered is that mp3 at 160 kbit/s CBR
> was already indistinguishable from CD for 99% of people for almost all
> kinds of music. mp3 is much better than its reputation: due to really bad

Interesting. Any links? Not familiar with what "c't" is.

Although, even 1% is still a *LOT* of people. I'd be more curious to see what encoding it would take to get more like 99.99% or so.

> encoders at the beginning (Xing was awful and was the most widely used
> early on, Fraunhofer was excellent but not free, LAME took years before
> it was any good), people thought that the crap they heard was inherent to
> the mp3 format, but very often it was bad ripping, over-eager low-pass
> filtering and crappy psycho-acoustic models (Xing). So you make a good
> point that uncompressed audio for a game is completely nuts.
>

Fair point. Also, I've heard that the big quality improvements that aac/vorbis/etc have over mp3 are mainly just at lower bitrates.

May 09, 2017
On 05/09/2017 04:12 AM, Ethan Watson wrote:
>
> In this case, the cost of
> decompressing audio on the CPU was either unfeasible in real time or
> increased load times dramatically during load times. Loading
> uncompressed audio off the disk was legitimately an optimisation in both
> cases.
>

I'm surprised it would've made that much of a difference; I'd grown accustomed to thinking of audio decoding as computationally cheap even on low-end hardware.

But then again, I suppose the average level of a modern AAA game may involve a bit more audio data than the average MP3 song (not to mention a lot more audio streams playing simultaneously), and is already maxing the hardware as much as it can.
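Purely illustrative numbers (the per-stream cost and the voice count below are assumptions, not measurements), but they show how "cheap per stream" can still add up:

void main()
{
    import std.stdio : writefln;

    // Suppose one compressed stereo stream costs about 1% of a core to
    // decode in real time -- cheap on its own -- then scale by the number
    // of voices a busy game scene keeps playing at once.
    double corePctPerStream = 1.0; // assumed, not measured
    int concurrentVoices = 64;     // music + ambience + SFX + dialogue layers

    writefln("~%.0f%% of one core just for audio decode",
             corePctPerStream * concurrentVoices);
    // On a dual-core minspec that's a big slice of the frame budget, which
    // is where shipping uncompressed PCM starts looking like an optimization.
}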

May 09, 2017
On Tuesday, 9 May 2017 at 08:24:40 UTC, Nick Sabalausky (Abscissa) wrote:
> On 05/09/2017 02:10 AM, Patrick Schluter wrote:
>> On Tuesday, 9 May 2017 at 02:13:19 UTC, Nick Sabalausky (Abscissa) wrote:
>>> On 05/08/2017 03:28 PM, Jack Stouffer wrote:
>>>>
>>> Uncompressed? Seriously? I assume that really means FLAC or something
>>> rather than truly uncompressed, but even still...sounds more like a
>>> bullet-list pandering^H^H^H^H^H^H^H^H^Hselling point to the same
>>> suckers^H^H^H^H^H^H^H"classy folk" who buy Monster-brand cables for
>>> digital signals than a legit quality enhancement. Take a
>>> top-of-the-line $$$$ audio system, set down a room full of
>>> audiophiles, and compare lossless vs 320kbps Vorbis...in a true
>>> double-blind, no WAY they'd be able to consistently spot the
>>> difference even if they try. Let alone while being detracted by all
>>> the fun of causing mass mayhem and carnage. Unless maybe you just
>>> happen to stumble upon some kind of audio savant.
>>
>> Don't need to go that high. c't did a double-blind study some years ago
>> with the help of its sister magazine for audio equipment. So they had a
>> very good setup. What they discovered is that mp3 at 160 kbit/s CBR
>> was already indistinguishable from CD for 99% of people for almost all
>> kinds of music. mp3 is much better than its reputation: due to really bad
>
> Interesting. Any links? Not familiar with what "c't" is.

https://www.heise.de/ct/artikel/Kreuzverhoertest-287592.html

So, I got some details wrong in my recollection. They compared 128 kbit/s, 256 kbit/s and CD. To remove bias, they burned the decompressed mp3s back to CD so that the testers couldn't distinguish between the 3 formats and played them on their high-quality audio setup in their studios. The result was surprising in that there was no difference between CD and 256K mp3, and only a slightly lower score for 128K mp3. They were also surprised that for some kinds of music (classical), the 128K mp3 was even favored by some testers over the other formats, and they speculate that the encoding somehow rounds out some roughness of the music.
They also had one tester who was 100% accurate at recognizing mp3 over CD, but the guy had had a hearing accident in his youth where he lost part of the hearing spectrum (around 8kHz), which breaks the psycho-acoustic model's assumptions and lets him hear noise that is masked for listeners with normal hearing.

I don't know where I got the 160 kbit figure in my earlier message.

>>
>
> Fair point. Also, I've heard that the big quality improvements that aac/vorbis/etc have over mp3 are mainly just at lower bitrates.


May 09, 2017
On Tuesday, 9 May 2017 at 08:12:20 UTC, Ethan Watson wrote:
> That's the point of the blind test. It isn't trivially obvious to the casual observer. You might think it is, but you're not a casual observer.

Well, the point of a blind test is more to establish that something really does have a perceivable effect, not to establish that it doesn't. I.e. false vs. unknown: in the latter case the result is simply inconclusive.

These 2 statements are very different:

1. we have not been able to establish that there was any perceived difference
2. we have established that there was no perceived difference

How would they research this? By asking if one is better than the other? That is highly subjective, because "better" has to do with expectations. And cognitive analysis of difference happens at a rather high level: for many people, two signals sound the same if they interpret them the same way. Immersion, on the other hand, is much more subtle and depends on your state of mind as well, not only on what you perceive. So it's not easy to measure! Our perceptual machinery is not a fixed machine; our expectations and mood feed back into the system.
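A minimal sketch of that distinction in ABX terms (the trial counts below are made up): a score well above chance is evidence of an audible difference, while a chance-level score is merely inconclusive, not proof that the two sound the same.

import std.stdio : writefln;

// P(X >= correct) for X ~ Binomial(trials, 1/2): the chance of scoring at
// least this well by guessing alone.
double tailProbability(int trials, int correct)
{
    // Probability of any one specific guess sequence: (1/2)^trials.
    double perOutcome = 1.0;
    foreach (_; 0 .. trials)
        perOutcome *= 0.5;

    double p = 0.0;
    foreach (k; correct .. trials + 1)
    {
        double coeff = 1.0;
        foreach (i; 0 .. k)              // binomial coefficient C(trials, k)
            coeff = coeff * (trials - i) / (i + 1);
        p += coeff * perOutcome;
    }
    return p;
}

void main()
{
    // 12/16 correct: unlikely by guessing -> evidence of an audible difference.
    writefln("12/16 correct: p = %.3f", tailProbability(16, 12)); // ~0.038
    // 9/16 correct: entirely compatible with guessing -> inconclusive, which
    // is not the same thing as showing the files sound identical.
    writefln(" 9/16 correct: p = %.3f", tailProbability(16, 9));  // ~0.402
}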

Some things, like phasing/smearing in high-frequency content and imaging, do affect the experience, although the effect is very subtle; you need good headsets and to have heard the original many times to pinpoint where the differences are at higher bitrates (at 300 kbit/s it probably isn't all that easy).

May 09, 2017
On 05/09/2017 04:44 AM, Patrick Schluter wrote:
> On Tuesday, 9 May 2017 at 08:24:40 UTC, Nick Sabalausky (Abscissa) wrote:
>> On 05/09/2017 02:10 AM, Patrick Schluter wrote:
>>
>> Interesting. Any links? Not familiar with what "c't" is.
>
> https://www.heise.de/ct/artikel/Kreuzverhoertest-287592.html
>
> So, I got some details wrong in my recollection. They compared
> 128 kbit/s, 256 kbit/s and CD. To remove bias, they burned the
> decompressed mp3s back to CD so that the testers couldn't distinguish
> between the 3 formats and played them on their high-quality audio setup
> in their studios. The result was surprising in that there was no
> difference between CD and 256K mp3, and only a slightly lower score for
> 128K mp3.

Not surprised the 128K mp3 was noticeable. Even I've been able to notice that when listening for it (although, in retrospect, it was likely a bad encoder...)

> They were also surprised that for some kinds of music
> (classical), the 128K mp3 was even favored by some testers over the
> other formats, and they speculate that the encoding somehow rounds out
> some roughness of the music.
> They also had one tester who was 100% accurate at recognizing mp3 over
> CD, but the guy had had a hearing accident in his youth where he lost
> part of the hearing spectrum (around 8kHz), which breaks the
> psycho-acoustic model's assumptions and lets him hear noise that is
> masked for listeners with normal hearing.
>

Fascinating.

The 128k being sometimes favored for classical kinda reminds me of how some people prefer vinyl over CD/etc. Both are cases of audio data being lost, but in a way that is liked.

> I don't know where I got the 160 kbit figure in my earlier message.
>

Your memory recall must've applied a low-pass filter over "128K" and "256K" ;)


May 09, 2017
On Tuesday, 9 May 2017 at 16:26:35 UTC, Ola Fosheim Grøstad wrote:
> Some things like phasing/smearing in high frequency content and imaging does affect the experience, although the effect is very

I want to add that, of course, modern commercial music is already ruined by too much dynamic-range compression and abuse, so it is distorted from the beginning... just to get a loud signal. Trash in -> trash out. Same with speakers. Regular speakers are poor. Use a good headset (e.g. Sennheiser HD600 or better) and preferably the same headset the audio engineer used... Loudspeaker in a room -> not the same signal as on the CD.

Anyway, it is a complicated topic. I went to a two-hour lecture on it a week ago. We were told to use this book: Applied Signal Processing: A MATLAB™-Based Proof of Concept by Dutoit and Marqués. It comes with MATLAB code so you can modify the mp3 algorithms and explore the effects yourself. :)