May 16, 2011
On 2011-05-15 20:11, Alexander wrote:
> On 15.05.2011 19:36, Adam D. Ruppe wrote:
>
>> I think you'll feel differently once you see people abuse that option. It becomes hard to follow what's going on.
>
>    Sure I will feel differently, that's why I've said "if used correctly" - and I do use it correctly :)
>
>> You're also not used to it. (When I write this for newbies, I often put a comment in there:<!-- filled by program -->)
>
>    That kind of comment has driven me crazy many times - because then I had to dig inside the program to find the place where it was filled :)
>
>> It's an easy pattern though: an empty element with an ID is meant to be filled in, and the ID should be descriptive enough
>> to make a good guess at what it's doing.
>
>    If at some point I want to change the output from a table view to something else - then I have to change the application as well, since it generates a series of tr/td elements which I don't need anymore. To me, it is much easier to change it in place - i.e.
> where it is actually done. Bouncing from template to code is quite annoying, to be honest, especially when that code is not your own :)
>
>> After you see it used once, it's not a mystery anymore.
>
>    It is not a mystery, just something that is blocking my productivity :) I just don't like it - and because of this I am less productive.
>
>    From my point of view, it makes no sense to say "that is wrong" or "that is bad" - unless there is an *objective* (and only) way to do something *right* - which is not the case in web development (or any software development in general).
>
>    So, to some people (including myself), unless and until D can be embedded like PHP/ASP, this will be a show stopper. For some others it will not, that's all.
>
>    For this reason, I prefer to use D as a backend only (=performance critical), and frontends will be in PHP or ASP (=UI only). Though currently I am evaluating the option of using D to host an HTTP server with dmdscript as the server-side scripting language.
> What blocks me is the absence of a good, robust and high-performance socket/file I/O framework for D2.
>
> /Alexander

BTW, someone modified MiniD to support the <% %> syntax to embed it in html.

-- 
/Jacob Carlborg
May 16, 2011
> Instead you move the view layer into the model or controller layer. How's that any different?

Is that really what's happening? Any template has variables made available to it from the model. I'm just making them available at a higher level (pre-wrapped in semantic tags and grouped together).

The actual layout and styling, sometimes even data formatting - the stuff I think of as the view - is still done entirely in the separate html and css files.

I realize this isn't a perfect argument, but it's still worth it to me to have those other processing options available.
May 16, 2011
Jacob Carlborg wrote:
> I think you really should give it a try. This is a good place to start:

Cool, thanks!

> Don't know why, but I think this is verbose and it's more difficult to visualize what the HTML will look like.

That's also a bizarre example... 9/10 times, my code looks more like:

auto list = document.getElementById("items-holder");
foreach(item; items)
    list.addChild("li", item);

than the longer thing. If you want to visualize the html, you usually want to look at the html file, since it contains the majority of the structure.
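
For what it's worth, here's the whole round trip as a rough sketch - the module path and the Document constructor here are assumptions on my part, so the real dom.d calls may differ a bit:

import arsd.dom; // module path is a guess - wherever dom.d lives for you

// the template file keeps the structure; the program only fills the marked element
enum page = `<ul id="items-holder"><!-- filled by program --></ul>`;

void fillItems(string[] items)
{
    auto document = new Document(page); // assumed: a constructor that parses the source
    auto list = document.getElementById("items-holder");
    foreach(item; items)
        list.addChild("li", item); // appends <li>item</li>
}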

> but usually you use the "form_for" or "form_tag" helper that Rails provides:

That's not bad.

> Another thing I really like is it's built in support for rebinding the "this" variable

That's easy in Javascript too. It has the Function.apply method built in that makes it as simple as writing a higher order function.

I find javascript is easier to use if you think of it as a functional language with procedural elements rather than an object oriented one.
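
For instance, a tiny higher-order function over Function.apply does the rebinding (the names here are just for illustration):

function bindThis(fn, self) {
    return function() {
        return fn.apply(self, arguments); // call fn with `this` set to self
    };
}

var user = { name: "Alexander" };
function greet(greeting) { return greeting + ", " + this.name; }

var greetUser = bindThis(greet, user);
greetUser("Hello"); // "Hello, Alexander"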

> Correct me if I'm wrong but that would require a request for basically every function call?

Most of them, yes (though calls passed as arguments to other calls are combined), but keep in mind an HTTP request does *not* have to go to the server! If your server side code gives the appropriate cache headers, the ajax responses are cached too.

There's a good chance you have to hit the server anyway for db access too...

> I don't like that, and I don't like the inline javascript.

Inline javascript was done here for convenience. You're free to do it in a separate file just like any other script.
May 16, 2011
On 2011-05-15 21:19, Adam D. Ruppe wrote:
> But, in a lot of cases, you have to change the model to change
> the data you're showing anyway... the view only has what's available
> to it.

Depending on what needs to be changed, this is the job of the controller: to get the necessary data, for a specific view, out of the model. But of course, if the model doesn't have the data in the first place, that would be impossible.

-- 
/Jacob Carlborg
May 16, 2011
On 15.05.2011 20:54, Adam D. Ruppe wrote:

> FYI, PHP uses files on the hard drive for sessions by default... optionally, it can use a database too.

  Not really. There are different options for keeping the session data, though - some of them may resort to storing something on disk. Both cases (disk/db) slow everything down terribly.

  I don't know how many visitors your websites have, but if you have several visits per second - you will feel it.

> AFAIK, there is no in-memory option beyond the options the kernel or database provide for file/table caching.

  There are several, again - like memcache. Believe me, once you have at least 500k page requests/month (excluding images and other static content, of course), you will change your mind about where (and how) to store data.
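
  For example, with the classic memcache extension, PHP can keep sessions entirely in memory - illustrative php.ini settings (host/port are just examples):

    ; keep sessions in memcached instead of on disk
    session.save_handler = memcache
    session.save_path = "tcp://127.0.0.1:11211"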

> But, I never finished it, because I use sessions so rarely. Most usages of it don't match well with the web's stateless design - if someone opens a link on your site in a new window, can they browse both windows independently?

  The web is stateless only as long as you have static content. What about e-commerce, shopping, applications like Google AdWords, webmail, etc.? How can you handle those?

  But what do you mean by "independently"? Sure you can browse both; it's just that any changes in session state (like adding something to the shopping cart) will eventually be propagated to all windows.

> Changing one *usually* shouldn't change the other.

  Sorry? Do you ever shop online? ;) If I have many windows open, with different items, I *expect* all of them to go into *one* shopping cart - "usually" :)

> Another big exception is something like a shopping cart. In cases like that, I prefer to use the database to store the cart anyway.

  Ah, here we are. Sure, you will need to store it in the DB - but only *if* there are changes. For any data which is not changing but is requested quite often, each access to the database will slow things down.

> In web.d, the session functions automatically check IP address and user agent
> as well as cookies. It can still be hijacked in some places, but it's a little harder.

  This is exactly what well-designed applications and libraries do.

> To prevent hijacking in all situations, https is a required part of the solution, and the cgi library can't force that unilaterally. Well, maybe it could, but it'd suck.

  It *can* enforce it, by refusing non-encrypted connections, or by redirecting to https when access is done over http.
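
  Something along these lines, just as a sketch - the request properties and the redirect helper here are made up, whatever the library actually exposes:

    // sketch only: force https before doing anything else
    if (!request.isHttps) {
        redirect("https://" ~ request.host ~ request.uri, 301);
        return;
    }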

> The downside is I *believe* it doesn't scale to massiveness. Then again, most of our sites aren't massive anyway, so does it matter?

  Most of ours are massive enough, so - it does matter :)

> Finally, long running processes can't be updated. You have to kill them and restart, but if their state is only in memory, this means
> you lose data.

  Not really - the process should flush any dirty data to persistent storage and quit, so the new copy can take over.

/Alexander
May 16, 2011
On 2011-05-15 23:13, Nick Sabalausky wrote:
> Like I described in another post, I've *worked with* people who did web
> development professionally who still undeniably had nearly zero real
> competence. So you can't tell me just because they do it professionally
> indicates they actually have a clue what they're doing.

That seems to be especially true with web development.

-- 
/Jacob Carlborg
May 16, 2011
Alexander wrote:
> I don't know how many visitors your websites have, but if you have several visits per second - you will feel it.

Two notes here: #1, several visits per second means over 5 million views a month (even just 2/second × 86,400 seconds/day × 30 days is about 5.2 million). That's actually very rare.

The way I do optimizations is: I write it in whatever way comes to mind first (which usually isn't half bad, if I do say so myself), then watch it under testing.

If it proves to be a problem in real world use, then I'll start watching the times. IE9 has things built in to watch on the client side. D has stuff built in to watch the server side code. I sometimes just sprinkle writefln(timestamp) too for quick stuff.

I'm sometimes right about where the problem was, and sometimes quite surprised. It's those latter cases that justify this strategy - it keeps me from barking up the wrong tree.
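
For the quick stuff, it's nothing fancier than something like this (doTheSuspectThing is just a stand-in for whatever's being measured):

import std.datetime;
import std.stdio;

// the sprinkle approach: print a timestamp before and after the suspect part
writefln("before: %s", Clock.currTime());
doTheSuspectThing(); // stand-in for whatever you're measuring
writefln("after:  %s", Clock.currTime());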

(except once, a week or so ago. I was tracking down a bug that I believed to be in druntime. So I recompiled druntime in debug mode.

At that point, it was 1am, so I went to bed. When I woke up the next day... I forgot it was still in debug mode.

So I went to the site, and was annoyed to see it took almost 30 seconds to load one of the features. That's unacceptable. It was a new feature with some fairly complex code, so it didn't occur to me to check druntime.

I did the profiling and found a few slow parts in dom.d. Fixed them up and got a 20x speedup.

But, it was still taking several seconds. Then, I remembered about druntime! Recompiled it in the proper release mode, and it was back to the speed I'm supposed to get - single digit milliseconds.

Yeah, it's still 20x faster than it was, but 40 ms isn't /too/ bad...

Oh well though, while it was unnecessary, the profiling strategy did still give good returns for the effort put in!)


I've had problems with database queries being slow, but they have always been either complex queries or poorly designed tables. I've never had a problem with reading or writing out sessions. (There's another profiling story behind one of those db queries, but I'll spare you this time.)

Then again, I only write sessions once when a user logs in, and reads are quick. So even if the system was slow, it wouldn't matter much.


Fun fact: the fastest page load is no page load. Client side caching is built into http... but it seems to be rarely used. If you mix code and data, you're screwed - you can't write out an HTTP header after output has been generated!

My cgi.d now includes more than just the header() function though. It keeps an updatable cache time that you can set in each function, and it is output as late as possible.

That means you can set caching on the individual function level, making it easy to manage, and still get a good result for the page as a whole.
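
The mechanism is basically this - a simplified sketch of the idea, not the actual cgi.d code:

import std.algorithm : min;
import std.conv : to;

// each function can say how long its piece may be cached; the response keeps
// the shortest time and writes a single header just before any output goes out
struct ResponseCache {
    long maxAge = long.max; // seconds; long.max means nobody has asked for caching yet

    void allowCaching(long seconds) {
        maxAge = min(maxAge, seconds);
    }

    string header() {
        return maxAge == long.max
            ? "Cache-Control: no-cache"
            : "Cache-Control: max-age=" ~ to!string(maxAge);
    }
}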

Caching static content is obviously a plus. Caching dynamic content can be a plus too. I've gotten some nice usability speed boosts by adding the appropriate cache headers to ajax calls. Even expiring in just a few minutes in the future is nice for users. (Of course, the fastest javascript is also no javascript... but that's another story.)

> But what do you mean by "independently"?

Let me give an example. I briefly attended college with a "web app"
that was just a barely functional front end to an old COBOL app
on the backend.

The entire site was dependent on that flow - it kept a lot of
state server side. If you click add class in a new window, then
go to view schedule in the current window... both windows will fail.

The view schedule will error out because that's not a proper add class command. The add class window will fail because it errored out in the other window.


The thing was a godawful mess to use. You have to do it linearly, in just one window, every time. Ugh.


If it were better designed, each individual page would contain all the state it needed, so each tab would work without affecting the others.



This is an extreme example, and there are some cases where a little bit of sharing is appropriate. But even then, you should ask yourself whether server side sessions are really the right thing to do.

> *snip*
> Ah, here we are.

Yeah, you didn't have to do a line by line breakdown for something I discussed in the following paragraph.

Though, I will say things like adwords and webmail don't need sessions beyond logging in and maybe cookies, and if you use them for more than that, your site is poorly designed.

Database access vs a session cache is another thing you'd profile. I suspect you'd be surprised - database engine authors spend a lot of time making sure their engine does fast reads, and frequently used tables will be in a RAM cache.

> It *can* enforce, by refusing non-encrypted connection, or redirecting to https when access is done by http.

My point is it requires server setup too, like buying and
installing a certificate. You can't *just* redirect and have it work.

> Not really - process should flush any dirty data to persistent storage and quit, so new copy may catch on.

Indeed - you're using some kind of database anyway, so the advantage of the long running process is diminished.
May 16, 2011
On 16/05/2011 09:54, Alexander wrote:
> On 16.05.2011 01:25, Robert Clipsham wrote:
>
>> It most definitely does not work perfectly. You highlight those that are not familiar with web development? They're the ones that use it.
>>
>> Visual Studio defaults to not using it now, there's a reason for that. I don't know about PHP IDEs.
>
>    I am sorry, but are we talking about the same thing? How is WordPress related to Visual Studio - and especially, how could it "use it"?
>
> /Alexander

My bad, I was rather tired when I posted this, I thought I was replying to something else. Sorry!

-- 
Robert
http://octarineparrot.com/
May 16, 2011
"Adam D. Ruppe" <destructionator@gmail.com> wrote in message news:iqrj55$24d8$1@digitalmars.com...
> Alexander wrote:
>
> Database access vs a session cache is another thing you'd profile. I suspect you'd be surprised - database engine authors spend a lot of time making sure their engine does fast reads, and frequently used tables will be in a RAM cache.
>

Yea, that's really one of the main points of a DBMS: Efficient access to large amounts of data.

Although I'd imagine accessing a DB for simple things could easily end up slower if the DB is on a different server. Big companies will often have setups like that. Then again, if the network is all designed and set up well, and not congested, and the DB does have the data in a RAM cache, then I'd imagine the lack of needing to do physical disk I/O could still make it faster.

>> It *can* enforce, by refusing non-encrypted connection, or redirecting to https when access is done by http.
>
> My point is it requires server setup too, like buying and
> installing a certificate. You can't *just* redirect and have it work.
>

That's not as bad as you may think; I've done that for my server recently. I *highly* recommend StartSSL. Most SSL/TLS certs are pretty expensive, but StartSSL has a free one:

http://www.startssl.com/?app=1

The only real limitations are:

- You have to renew it every year.
- Each cert is only good for one domain, and no subdomains (although the
"www" subdomain is included for free, in addition to "no subdomain").
- The only validation it does is validation of the domain and contact email.

There are, naturally, some hoops to jump through when setting it up (generating/installing a client certificate first so you can authenticate with their site). But their site walks you through everything step by step, and if you just follow the directions you can have it all done in minutes. Their system does require JS, and doesn't really handle using multiple tabs, but they don't do any annoying flashiness with the JS, and even my notoriously JS-hating self still finds it well worth it.

I've been using it on my site for a little over a year and haven't had any problems. It's been great.



May 16, 2011
Nick Sabalausky:
> Then again, if the network is all designed and set up well, and
> not congested, and the DB does have the data in a RAM cache, then I'd
> imagine the lack of needing to do physical disk I/O could still
> make it faster.

Yeah, I work with two setups like that, but in my cases the db servers are on very fast local network links, so it wasn't a problem.

It comes back to the same key to optimization though: profile it in real situations, since there are a lot of factors that are hard to guess by gut alone.

> That's not as bad as you may think, I've done that for my server recently.

Yeah, I've recently started using startssl too. They seem to be doing it the way it ought to be done!