January 17, 2015
> I know I am imposing on somebody else's work here, but compressing
> resources should really be done.

Our webmaster got back. He said compression means more CPU work, and on a fat pipe (which we do have) that may actually make things worse. Also, how would this work if we switch to vibe.d? -- Andrei

January 17, 2015
On Sat, Jan 17, 2015 at 12:52:29PM -0800, Andrei Alexandrescu via Digitalmars-d wrote:
> > I know I am imposing on somebody else's work here, but compressing resources should really be done.
> 
> Our webmaster got back. He said compression means more CPU work, and on a fat pipe (which we do have) that may actually make things worse. Also, how would this work if we switch to vibe.d? -- Andrei

+1 for dogfooding!


T

-- 
Notwithstanding the eloquent discontent that you have just respectfully expressed at length against my verbal capabilities, I am afraid that I must unfortunately bring it to your attention that I am, in fact, NOT verbose.
January 18, 2015
On Saturday, 17 January 2015 at 20:52:28 UTC, Andrei Alexandrescu wrote:
> Our webmaster got back. He said compression means more CPU work, and on a fat pipe (which we do have) that may actually make things worse.

Doing it on demand might be a mistake here, but we can also pre-compress the files since it is a static site.

You just run gzip on the files, then serve them up with the proper headers.
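
If we'd rather script that step in D, something like this does the same as running gzip by hand (a sketch; the extension list is just a guess at what the site ships):

import std.file : dirEntries, read, write, SpanMode;
import std.zlib : Compress, HeaderFormat;

void main()
{
    // Write a .gz sibling next to each static asset; the originals stay
    // in place for clients that don't accept gzip.
    foreach (entry; dirEntries(".", "*.{html,css,js,rss}", SpanMode.shallow))
    {
        auto gz = new Compress(9, HeaderFormat.gzip);
        write(entry.name ~ ".gz", gz.compress(read(entry.name)) ~ gz.flush());
    }
}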

Here's a write-up about doing it in Apache:

http://stackoverflow.com/questions/75482/how-can-i-pre-compress-files-with-mod-deflate-in-apache-2-x

> Also, how would this work if we switch to vibe.d? -- Andrei

I don't know about vibe, but it is trivially simple in HTTP, so if it isn't supported there, it is probably a three(ish)-line change.
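
For reference, the entire mechanism on the wire is one request header and one response header (abridged):

GET /index.html HTTP/1.1
Host: dlang.org
Accept-Encoding: gzip

HTTP/1.1 200 OK
Content-Type: text/html
Content-Encoding: gzip

(gzipped body follows)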

Caching is the same deal, btw: just set the right header and you'll get a huge improvement. "Cache-Control: max-age=36000" will cache a file for ten hours without even needing to change the URLs. (Changing URLs is nice because you can set things to cache forever and still get instantly visible updates to the user by changing the URL, but we'd probably be fine with a cache update lag, and it is simpler that way.)

ETags are set right now and that does some caching; it could be improved further by adding the max-age bit, though.

This is an Apache config change of some sort too:

http://stackoverflow.com/questions/16750757/apache-set-max-age-or-expires-in-htaccess-for-directory

Though I don't agree it should be one year unless we're using different URLs; we should do hours or days. But that's how it is done.
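
And to show how small the caching change is in code, here's what it would look like in a vibe.d-style handler (a sketch only; as I said, I don't really know vibe, so the specifics may be off):

import vibe.core.core : runEventLoop;
import vibe.http.server : HTTPServerSettings, listenHTTP;

void main()
{
    auto settings = new HTTPServerSettings;
    settings.port = 8080;

    listenHTTP(settings, (req, res) {
        // The whole caching story is one header; 36000 seconds = ten hours.
        res.headers["Cache-Control"] = "max-age=36000";
        res.writeBody("hello", "text/plain");
    });
    runEventLoop();
}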
January 18, 2015
On second thought, this way works better:

http://stackoverflow.com/questions/7509501/how-to-configure-mod-deflate-to-serve-gzipped-assets-prepared-with-assetsprecom


Though that's some ugly configuration; I hate Apache.

But I just tested that locally and it all worked from a variety of user agents. All I had to do was gzip the file and also keep a copy of the uncompressed version to serve to the (very few, btw, but popular: curl, by default, is one of them) UAs that don't handle receiving gzipped content.
January 18, 2015
On 1/17/15 4:22 PM, Adam D. Ruppe wrote:
> On Saturday, 17 January 2015 at 20:52:28 UTC, Andrei Alexandrescu wrote:
>> Our webmaster got back. He said compression means more CPU work, and on
>> a fat pipe (which we do have) that may actually make things worse.
>
> Doing it on demand might be a mistake here, but we can also pre-compress
> the files since it is a static site.
>
> You just run gzip on the files then serve them up with the proper headers.

Who's "you"? :o) -- Andrei

January 18, 2015
On Sunday, 18 January 2015 at 02:11:13 UTC, Andrei Alexandrescu wrote:
> Who's "you"? :o) -- Andrei

I'd do it myself, but after spending 30 minutes tonight trying and failing to get the website to build on my computer again, I'm out of time.

It really isn't hard, though, with access to the HTML and .htaccess or something.

I just slapped this on my this-week-in-d local thingy:

.htaccess:

RewriteEngine on

# If the client accepts gzip and a non-empty precompressed copy exists,
# serve foo.gz in place of foo.
RewriteCond %{HTTP:Accept-Encoding} \b(x-)?gzip\b
RewriteCond %{REQUEST_FILENAME}.gz -s
RewriteRule ^(.+) $1.gz [L]

# Restore the real content type (Apache would otherwise guess it from the
# .gz extension) and mark the response as gzip-encoded.
<FilesMatch \.css\.gz$>
    ForceType text/css
    Header set Content-Encoding gzip
</FilesMatch>

<FilesMatch \.js\.gz$>
    ForceType text/javascript
    Header set Content-Encoding gzip
</FilesMatch>

<FilesMatch \.rss\.gz$>
    ForceType application/rss+xml
    Header set Content-Encoding gzip
</FilesMatch>

# Let everything be cached for a day by default.
ExpiresActive on
ExpiresDefault "access plus 1 days"




Then ran

$ # gzip each file, then use zcat to restore an uncompressed copy next to the .gz
$ for i in *.html *.css *.rss *.js; do gzip "$i"; zcat "$i.gz" > "$i"; done;


(gzip replaces the original file, so I just uncompressed it again after zipping with zcat. I don't know if there's a better way; the man page didn't give a quick answer, so I just did it this way.)


And the headers look good now. So if that can be done on dlang.org too, it should hopefully do the trick.
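
If anyone wants to double-check from D instead of curl, here is a quick client sketch (the URL is made up, and I'm going from memory on the client API):

import std.stdio : writeln;
import vibe.http.client : requestHTTP;

void main()
{
    // Hypothetical URL; the point is only to inspect the response headers.
    requestHTTP("http://localhost/index.html",
        (scope req) { req.headers["Accept-Encoding"] = "gzip"; },
        (scope res) {
            writeln("Content-Encoding: ", res.headers.get("Content-Encoding", "(none)"));
            writeln("Content-Type: ", res.headers.get("Content-Type", "(none)"));
        });
}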
January 18, 2015
On Saturday, 17 January 2015 at 20:17:51 UTC, Andrei Alexandrescu wrote:
> On 1/17/15 12:00 PM, Sebastiaan Koppe wrote:
>> On Saturday, 17 January 2015 at 18:23:45 UTC, Andrei Alexandrescu wrote:
>>> On 1/17/15 10:01 AM, Sebastiaan Koppe wrote:
>> In the browser. So that on a reload of the page, the browser, instead of
>> making HTTP calls, uses its cache.
>
> How do we improve that on our side?

Two things:

a) Set the proper cache headers in the HTTP response.
b) Have a way to bust the cache when you have a new version of a resource.

If you have both in place, you can set the expires header to 1 year in the future, then bust the cache every time you have a new version of the file.
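
For (b), the usual trick is to derive a token from the file's contents, so the URL changes exactly when the file does. A rough sketch (the file name and URL scheme are just examples):

import std.digest.md : md5Of, toHexString;
import std.file : read;
import std.stdio : writefln;

void main()
{
    // The token changes whenever style.css changes, which makes a
    // far-future expires header safe.
    auto hex = toHexString(md5Of(cast(const(ubyte)[]) read("style.css")));
    writefln(`<link rel="stylesheet" href="style.css?v=%s">`, hex[0 .. 8]);
}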

>
>>> Yah, we do a bunch of that stuff on facebook.com. It's significant
>>> work. Wanna have at it?
>> Yes. Please. But the compression thing takes precedence.
>
> Awesome. Don't forget you said this.

I won't.

>> Design is a *very* touchy issue. It is basically a matter of choice.
>> Without a definite choice made, I won't waste my time improving it.
>
> It's clear that once in a while we need to change the design just because it's old. Also, there are a few VERY obvious design improvements that need to be done and would be accepted in a heartbeat, but NOBODY is doing them.

If I may suggest, I would split the site up into a few sections: one for Introduction/About, one for Docs/API, one for Blogs, one for Community/Forum. Which is basically what everybody else is doing.

Just some random sites:

http://facebook.github.io/react/
https://www.dartlang.org/

>
> I'm not an expert in design but I can tell within a second whether I like one. Yet no PR is coming for improving the design.

Then why not just make a list of sites that we like, and design this site like those? It is what all the designers are doing.
January 18, 2015
On Saturday, 17 January 2015 at 20:52:28 UTC, Andrei Alexandrescu wrote:
>> I know I am imposing on somebody else's work here, but compressing
>> resources should really be done.
>
> Our webmaster got back. He said compression means more CPU work, and on a fat pipe (which we do have) that may actually make things worse. Also, how would this work if we switch to vibe.d? -- Andrei

If you do not have spare horsepower for compression, how will you
handle twice the load?

I have used vibe.d to fetch gzipped resources; it has all the
deflate & inflate stuff, so delivering gzipped resources should be as
easy as flipping a switch.
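
From what I remember of the API, the switch looks like this (the setting name is from memory, so treat it as a sketch):

import vibe.core.core : runEventLoop;
import vibe.http.fileserver : serveStaticFiles;
import vibe.http.server : HTTPServerSettings, listenHTTP;

void main()
{
    auto settings = new HTTPServerSettings;
    settings.port = 8080;
    // Compress responses on the fly whenever the client sends
    // Accept-Encoding: gzip. (Setting name from memory.)
    settings.useCompressionIfPossible = true;

    listenHTTP(settings, serveStaticFiles("./public/"));
    runEventLoop();
}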
January 18, 2015
On 1/17/15 11:23 PM, Sebastiaan Koppe wrote:
> On Saturday, 17 January 2015 at 20:52:28 UTC, Andrei Alexandrescu
> wrote:
>>> I know I am imposing on somebody else's work here, but compressing
>>> resources should really be done.
>>
>> Our webmaster got back. He said compression means more CPU work, and on
>> a fat pipe (which we do have) that may actually make things worse. Also,
>> how would this work if we switch to vibe.d? -- Andrei
>
> If you do not have spare horsepower for compression, how will you
> handle twice the load?

Not quite getting the logic there. -- Andrei


January 18, 2015
On Sunday, 18 January 2015 at 07:42:10 UTC, Andrei Alexandrescu wrote:
> On 1/17/15 11:23 PM, Sebastiaan Koppe wrote:
>> On Saturday, 17 January 2015 at 20:52:28 UTC, Andrei Alexandrescu
>> wrote:
>>>
>>> Our webmaster got back. He said compression means more CPU work, and on
>>> a fat pipe (which we do have) that may actually make things worse. Also,
>>> how would this work if we switch to vibe.d? -- Andrei
>>
>> If you do not have spare horsepower for compression, how will you
>> handle twice the load?
>
> Not quite getting the logic there. -- Andrei

It is unrelated to my point about compression. The reasoning is as follows: if you are maxed out on resources, you will have problems when the site gets more visitors.

Compression can still help there: if the file is compressed, the server needs to send fewer bytes and can close the connection sooner. Pre-compressing instead of compressing on demand, like Adam D. Ruppe suggested, will optimize it even more.

Btw, I built the dlang.org site on my computer, but the <script> links have a %0 in the src attribute. Then five minutes later I saw the same on dlang.org.

Funny thing is, everything is still functioning, affirming my hunch that you can remove a lot of the JS stuff.

The site now loads in 124 KB. Whoever put that %0 there: you just cut the site down from 300 KB to 124 KB. Nice job!