April 30, 2014
On Wednesday, 30 April 2014 at 14:57:00 UTC, Adam D. Ruppe wrote:
>> - Server side generation should be kept minimal, prevents caching.
>
> That's not really true. You can cache individual parts on the server and in some cases, cache the whole page on the client too.

Mhh… I think there are several different types of files and caching strategies:

1. static files (the ones that never change can be stored at edge caches)

2. pregenerated files (files served from Amazon AWS, Google Cloud Storage, CDNs)

3. proxy cachable files / client cachable files

4. server memcached files

5. fully dynamic files
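
For categories 3 and 5 the difference is mostly just the Cache-Control header the server emits. A rough sketch (the Response type here is made up, not from any real framework; only the header values are standard HTTP):

struct Response { string[string] headers; }

void markProxyAndClientCacheable(ref Response res)
{
    // 3. any cache (proxy or browser) may reuse it for an hour
    res.headers["Cache-Control"] = "public, max-age=3600";
}

void markClientOnlyCacheable(ref Response res)
{
    // 3. the browser may cache it, shared proxies may not
    res.headers["Cache-Control"] = "private, max-age=600";
}

void markFullyDynamic(ref Response res)
{
    // 5. never reuse a stored copy
    res.headers["Cache-Control"] = "no-store";
}

Categories 1, 2 and 4 are more about infrastructure (edge caches, CDNs, memcache) than about anything the application sets per response.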

Ola.
April 30, 2014
On 4/30/2014 9:47 AM, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang@gmail.com> wrote:
> On Wednesday, 30 April 2014 at 12:56:03 UTC, Nick Sabalausky wrote:
>> FWIW, IMO the big selling point of D is its fairly unique knack for
>> letting you eat your cake and still have it. I rather like to think we
>> can manage merging the "full stacks" with the "lightweights".
>
> Ugh, avoid the full stacks like the plague. They tend to be lockin
> solutions. Reminds me of Tango in D1 where you risked pulling in all
> kinds of dependencies.
>
> You might dislike this, but I think nimble servers and clean separation
> with javascript heavy clients are the future.
>

That definitely is the direction things are moving right now. Granted, I don't like it, but you're right it's undoubtedly the popular direction and it's unlikely to slow or reverse anytime soon.

That said, I don't have an issue with fat clients in general. I usually tend to prefer them (ex: desktop email client). Just not when the "fat client" happens to live inside a web browser, because then it tends to be not so much a "fat" client as a "needlessly bloated" one with an awkward, temperamental UI (by contrast, MS Word and OpenOffice have never lost a single keystroke on me because of a network hiccup or anything else equally trivial).

> What I don't want:
>
> - I have started to avoid server processing of forms, javascript/ajax
> gives better user experience.
>

JS can definitely help improve the UX of form validation, no doubt about that, but it's important to remember that server-side validation is still necessary anyway, regardless of what you do on the client.
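
Roughly, something like this on the server no matter how fancy the client-side JS is (just a sketch; the names and the deliberately naive email regex are made up):

import std.regex : matchFirst;

bool isValidEmail(string s)
{
    // naive placeholder check; real validation would be stricter
    return !s.matchFirst(`^[^@\s]+@[^@\s]+\.[^@\s]+$`).empty;
}

void handleSignup(string emailField)
{
    if (!isValidEmail(emailField))
        throw new Exception("Invalid email");  // never trust the client
    // ...only now touch the database...
}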

> - I avoid advanced routing, it adds little and leads to harder to
> maintain code, give me a regexp based routing table in one location
> binding request-handlers.
>

Same here. I don't like having my available HTTP interfaces scattered across my codebase, and I definitely don't like having them implicit based on member-function visibility (I've used such frameworks before; not personally my cup of tea).

What I *do* love is having a canonical table defining my entire HTTP interface in one easy location. The extra typing or non-DRYness of that is a mere triviality in my experience (and I'm normally a huge DRY buff).
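
For the record, here's roughly what I mean, as a bare-bones sketch (all names invented; a real handler would take request/response objects instead of just a path):

import std.regex : Regex, regex, matchFirst;
import std.stdio : writeln;

alias Handler = void function(string path);

struct Route { Regex!char pattern; Handler handler; }

void showPost(string path) { writeln("blog post: ", path); }
void showUser(string path) { writeln("user page: ", path); }
void notFound(string path) { writeln("404: ", path); }

// The entire HTTP interface, in one canonical table:
Route[] routes;

static this()
{
    routes = [
        Route(regex(`^/post/\d+$`), &showPost),
        Route(regex(`^/user/\w+$`), &showUser),
    ];
}

void dispatch(string path)
{
    foreach (route; routes)
        if (!path.matchFirst(route.pattern).empty)
            return route.handler(path);
    notFound(path);
}

void main()
{
    dispatch("/post/42");   // blog post: /post/42
    dispatch("/bogus");     // 404: /bogus
}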

> - Server side generation should be kept minimal, prevents caching.
>

Ehh, yes and no. Server-side generation is usually fine, just not when it's done more often than necessary. And traditional server-side web technologies definitely tend to do it more often than necessary.

For example, consider a page containing a blog post with (non-Disqus) user comments:

It's a complete waste for the server to regenerate such a page upon every request, PHP/ASP-style. That's because it doesn't *change* upon every viewing - it only changes on every post and edit (and not even on every one of those, if there are enough comments to trigger paging).

So unless the page's rate of comment "submissions/edits" approaches the rate of comment "views" (unlikely...except maybe on YouTube ;) ), it's best to re-generate upon posts/edits and then cache that. So you still get caching benefits, but without making *all* the clients each duplicate the exact same page-generating effort upon every viewing.
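
In other words, something like this rough sketch (renderBlogPage() just stands in for the real DB queries and templating):

string[string] pageCache;   // URL -> rendered HTML

string renderBlogPage(string url)
{
    // imagine the DB queries and template engine here
    return "<html>..." ~ url ~ "...</html>";
}

string getBlogPage(string url)
{
    if (auto cached = url in pageCache)
        return *cached;                 // the common case: a plain lookup
    auto html = renderBlogPage(url);
    pageCache[url] = html;
    return html;
}

void onCommentPostedOrEdited(string url)
{
    pageCache.remove(url);              // or re-render eagerly right here
}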

Supporting login stuff (ex: "Hello, [your name here]! Logout?") doesn't really mess this up either. The vast majority of the page can still be cached by the server. Then, "generating" it for each user doesn't need to be anything more resource-intensive than this:

// Note: `session`, `myPagePartA`, and `myPagePartB` are assumed to be
// provided by the framework / cache layer; this is just a sketch.

void loginUser(string name)
{
    session.user.loggedIn = true;
    session.user.name = name;

    // Whatever template engine you want:
    session.user.loggedInUI =
        `Hello <b>` ~ name ~ `</b>! <a href="/logout">Logout</a>`;
}

enum commonLoggedOutUI =
    `Login: <form>Username:<input...> Pass:<input...></form>`;

void showMyPage(OutRange)(OutRange response, User user)
{
    // myPage was automatically split into parts A and B
    // last time it was updated:

    response.put(myPagePartA);

    if (user.loggedIn)
        response.put(user.loggedInUI);
    else
        response.put(commonLoggedOutUI);

    response.put(myPagePartB);
}


> - Would never consider using serverside javascript generation.
>

Heh, I've actually done that on old-style ASP (ages ago). It was both confusing and interesting.

> What I do want:
>
> - Transparent fast distributed in-memory database with logging to a
> exchangable backing store and with consistency guarantees.
>
> - No filesystem dependencies.

I'll take those, plus a large vanilla latte, please. :) "Thank you, come again!"

April 30, 2014
On 4/30/2014 10:58 AM, Ary Borenszweig wrote:
>
> What if you have tests against a database where each one takes some
> time? I don't want to wait for the whole test suite to run...

Collapse block, [Home], [Shift]-[Down] (select), [Ctrl]-/ (comment)

;)

Just FWIW, though. I'm not arguing for or against an ability to run specific unittests.

April 30, 2014
On 4/30/14, 7:18 AM, Ary Borenszweig wrote:
> On 4/30/14, 6:43 AM, Dicebot wrote:
>> On Wednesday, 30 April 2014 at 07:14:34 UTC, Jacob Carlborg wrote:
>>> But unit tests in D suck as well. I mean, how do I run a single unit
>>> test in D?
>>
>> This is a common complaint that I still fail to understand. I have never
>> ever wanted to run a single unit test; why would one need it? If running
>> all of a module's tests at once creates problems, then either the module
>> is too big or the unit tests are not really unit tests.
>
> When I have a bug in my code I usually add a test for it so it never
> happens again.
>
> Because it's a bug, I might need to debug it. So I add a couple of
> "writefln" instead of using a debugger (it's faster and I get formatted
> results easier).
>
> Now, if I run all tests I will get output from all the tests, not the
> one I'm trying to debug. That's really annoying.

Yah, naming unittests is key here. With names one can specify which to run/not run, regex patterns (e.g. "run only quick*"), etc. -- Andrei
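
(A rough sketch of what one can already approximate today with string UDAs and a hand-rolled runner; all names are made up. Compile with -unittest, and note the default runtime runner will still run everything unless you also override Runtime.moduleUnitTester in core.runtime.)

module mytests;

import std.algorithm.searching : startsWith;
import std.stdio : writeln;

@("quick_math") unittest { assert(1 + 1 == 2); }
@("slow_db")    unittest { /* imagine an expensive database round-trip here */ }

// Run only the unittests whose string UDA starts with the given prefix.
void runMatching(string prefix)
{
    foreach (test; __traits(getUnitTests, mixin(__MODULE__)))
    {
        foreach (attr; __traits(getAttributes, test))
        {
            static if (is(typeof(attr) == string))
            {
                if (attr.startsWith(prefix))
                {
                    writeln("running ", attr);
                    test();
                }
            }
        }
    }
}

void main()
{
    runMatching("quick");   // runs only quick_math
}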

April 30, 2014
On 4/30/2014 10:53 AM, Adam D. Ruppe wrote:
> On Wednesday, 30 April 2014 at 12:26:06 UTC, Nick Sabalausky wrote:
>
>> Then I use Adam's dom.d (in non-strict mode) to read the HTML form
>> template (preserving the templating stuff)
>
> I use strict mode for that stuff, keep in mind strict mode is about
> well-formedness, not validation. So it accepts custom tags and
> attributes easily enough.

Well, I've been using mustache-d as my main templating engine, which is just a general text preprocessor (although I'm kinda eyeing that other text preprocessor that uses actual D code). IIRC, there were some cases where my templates involved some non-well-formedness that the DOM's non-strict mode was perfectly happy with. Whatever it was, I'm sure there was *something* I was doing that strict mode was tripping up on. May have been an old version of the DOM, too.

Granted there are still things I have to refrain from doing in my HTML form templates because it would violate well-formedness *too much* even for an ultra-relaxed HTML DOM. But those cases always have other (arguably more sanitary) ways to accomplish the same thing.

April 30, 2014
On Wednesday, 30 April 2014 at 15:27:48 UTC, Nick Sabalausky wrote:
> That definitely is the direction things are moving right now. Granted, I don't like it, but you're right it's undoubtedly the popular direction and it's unlikely to slow or reverse anytime soon.

I'm not sure if I like it either, but I think websites are getting more usable now. For a while it was a shitty stuttering mess of HTML and JS that made me long for an AntiWeb browser with community-maintained per-site AI that turns the horrible HTML mess into semantic markup you can style yourself. I actually have a file called antiweb.d here… ;)

I also had high hopes for XSLT. I remember requiring student projects to serve XML from the server and transform it to HTML using XSLT in the browser, back in 2002 or so. And XSLT support was actually quite good, at least until the mobile shit hit the fan. Unfortunately connections were still slow, so XSLT-based rendering had to wait until the whole XML was downloaded. Today I think it might work out quite nicely, but I doubt anyone even remembers that browsers can do XSLT. Actually, I am not even sure whether they all still support it.

The weird thing is that SEO and search-engine priorities are what keep dynamic websites from going fully dynamic, through their anti-dynamic measures (punishing non-static content), yet the search engines are also the ones pushing semantic markup such as itemscope/itemprop microdata.

On the other side of the fence, the Wordpress authors have a lot of power. Whatever Wordpress makes easy will dominate a large portion of the web. I think that is so sad, because the Wordpress codebase is… err… junk. I am not even going to use the term «a pile of junk» which would suggest that there is some sense of structure to it. I think it is more like a scattered mutating spaghetti dinner gone terribly wrong, slowly emerging from every toilet in every household taking over the earth… like the classic horror movies from the 1950s.

> JS can definitely help improve the UX of form validation, no doubt about that, but it's important to remember that server-side validation is still necessary anyway, regardless of what you do on the client.

Yup. So a must have is a good infrastructure for specifying database invariants and transactions. Ideally it should execute like a stored procedure thus leaving the server logic pretty clean.

> What I *do* love is having a canonical table defining my entire HTTP interface in one easy location. The extra typing or non-DRYness of that is a mere triviality in my experience (and I'm normally a huge DRY buff).

Yep, it also acts as always-up-to-date documentation when you come back to the source code 6 months later trying to figure out the main structure.

> So unless the page's rate of comment "submissions/edits" approaches the rate of comment "views" (unlikely...except maybe on YouTube ;) ), it's best to re-generate upon posts/edits and then cache that. So you still get caching benefits, but without making *all* the clients each duplicate the exact same page-generating effort upon every viewing.

Well, I would probably use JS… ;-)

But I am pragmatic. Caching and pregeneration can lead to bugs and complications. So it is usually a good idea to just do a dynamic version first and then add caching when needed.

I also sometimes use a dynamic template during development, and then just save a static version for release if I know that it won't change.

> I'll take those, plus a large vanilla latte, please. :) "Thank you, come again!"

You're welcome!

I think it is realistic now for smaller sites (say 1-8 servers) where you have enough RAM to hold perhaps 10 times the information the site will ever provide. One should be able to easily sync 8 servers that see relatively few write operations. So, reading the log might take some time during startup, but with an efficient format… it could probably complete quickly even for 1GB of data.
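
E.g. a toy replay of an append-only log into an in-memory store at startup (the log format and names are invented just for illustration):

import std.array : split;
import std.stdio : File;
import std.string : strip;

string[string] store;

void replayLog(string path)
{
    // each line is either "SET <key> <value>" or "DEL <key>"
    foreach (line; File(path).byLine())
    {
        auto parts = line.strip.split(" ");
        if (parts.length >= 3 && parts[0] == "SET")
            store[parts[1].idup] = parts[2].idup;   // idup: byLine reuses its buffer
        else if (parts.length >= 2 && parts[0] == "DEL")
            store.remove(parts[1].idup);
    }
}
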
April 30, 2014
On 4/30/2014 11:04 AM, Adam D. Ruppe wrote:
>
> A big difference though is the compiler helps you a lot in D. In Ruby,
> for example, the main reason we use the unit tests (so far) is to help
> ensure consistency after refactoring something. It catches things like
> a renaming we missed, or a removed method still in use.

This has a lot to do with why I don't buy the common argument that dynamic languages are all about "just getting shit done".

Anytime I use them, they just create more work for me. Writing more sanity checks. More hours debugging. More work to optimize hotspots. More time figuring out Tracebacks I'm getting from code I didn't even write or from tools I'm simply trying to install. Etc.

> In D, you just recompile and those things are found almost instantly
> without needing to actually run any code.
>

Gotta love it :)

April 30, 2014
On 4/30/2014 12:32 PM, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang@gmail.com> wrote:
>
> On the other side of the fence, the Wordpress authors have a lot of
> power. Whatever Wordpress makes easy will dominate a large portion of
> the web. I think that is so sad, because the Wordpress codebase is… err…
> junk.

I've used Wordpress. Its codebase isn't the only thing bad about it ;)

> I am not even going to use the term «a pile of junk» which would
> suggest that there is some sense of structure to it. I think it is more
> like a scattered mutating spaghetti dinner gone terribly wrong, slowly
> emerging from every toilet in every household taking over the earth…
> like the classic horror movies from the 1950s.
>

Sounds pretty much exactly what I'd expect from just about any PHP-based application. :/

>> JS can definitely help improve the UX of form validation, no doubt
>> about that, but it's important to remember that server-side validation
>> is still necessary anyway, regardless of what you do on the client.
>
> Yup. So a must have is a good infrastructure for specifying database
> invariants and transactions. Ideally it should execute like a stored
> procedure thus leaving the server logic pretty clean.
>

I have to admit I've been in the habit of avoiding anything beyond basic SELECT/INSERT/UPDATE/DELETE and CREATE TABLE. Not that I haven't used them, but I really should have more familiarity with the other stuff than I do. Ugh, but SQL can be such a pain, especially with all the vendor differences, and when compared to accomplishing something in whatever language I'm invoking SQL from.

April 30, 2014
On Wed, 2014-04-30 at 12:41 -0400, Nick Sabalausky via Digitalmars-d
wrote:
[…]
> This has a lot to do with why I don't buy the common argument that dynamic languages are all about "just getting shit done".

Interesting use of the word shit. I tend to find that the average programmer produces shit code in whatever language they use. This is a sad reflection on the whole of computing.

> Anytime I use them, they just create more work for me. Writing more sanity checks. More hours debugging. More work to optimize hotspots. More time figuring out Tracebacks I'm getting from code I didn't even write or from tools I'm simply trying to install. Etc.

Using a static language mindset when working with a dynamic language has this effect. Likewise the reverse, using a dynamic language mindset with a static language.

-- 
Russel.
=============================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder@ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder

April 30, 2014
On Wednesday, 30 April 2014 at 16:56:11 UTC, Nick Sabalausky wrote:
> Sounds pretty much exactly what I'd expect from just about any PHP-based application. :/

Modern PHP isn't so bad. I can write acceptable code in PHP, though I only do so when there is no other option, since it is the least desirable option next to Perl. The good thing about PHP is that default installs tend to come with good C libraries. I think it would have died without that.

So, if PHP is OK, then it must be the PHP programmers who are to blame. I shudder to think what happens if a niche community picks it as the next fad… It could destroy any up-and-coming programming community with spaghetti hell. Are you sure you want to market D as a web platform?

> familiarity with the other stuff than I do. Ugh, but SQL can be such a pain, especially with all the vendor differences, and when compared to accomplishing something in whatever language I'm invoking SQL from.

You can implement it in the ORB or whatever unit provides transactions. I was more pointing to what I find useful conceptually in terms of layers (rough sketch after the list):

1. user input on the client
2. validate on client
3. post using ajax
4. server unwraps the data and blindly inserts it into the database
5. if transaction fails, notify client, goto 1
6. done.
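
A toy sketch of steps 4-5 (all names invented; the point is just that the data store enforces its own invariants, so the handler can stay dumb):

import std.exception : enforce;

struct Comment { string author; string text; }

Comment[] comments;   // stand-in for the real database table

void insertComment(Comment c)
{
    // the "database" enforces its own invariants, like a CHECK constraint would
    enforce(c.author.length > 0 && c.text.length > 0, "empty field");
    comments ~= c;
}

// step 4: blindly insert whatever the client posted
void handlePost(string author, string text)
{
    try
        insertComment(Comment(author, text));
    catch (Exception e)
        notifyClient(e.msg);   // step 5: send the error back over the AJAX call
}

void notifyClient(string msg) { /* write a JSON error response */ }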

Another good reason for fat clients is that the edit/run cycle is tighter and it is easier to run a debugger on it. It makes sense to put most of the code where you can mutate it easily.