February 11, 2007
After yesterday's hubbub, Judge Judy called and sentenced me to reading about RoR, in addition to the obligatory sentence of helping a little old lady cross the street five times a week.

So I went and read the nice tutorial at:

http://www.onlamp.com/pub/a/onlamp/2006/12/14/revisiting-ruby-on-rails-revisited.html

I have a couple of questions that I assume will be easy to answer by anyone who actually has used RoR.

On the second page of the tutorial, the authors describe how they write SQL code to create tables, and then how they invoke Ruby to parse that SQL code and generate (I assume) Ruby wrappers for it.

Now consider that the database changes: new tables, new fields, different types for existing fields, etc.

1. Is the generated Ruby code now out of sync with the database?

2. If it is out of sync, what is the way to bring it back in sync? Manual editing of the Ruby code? Editing the SQL and then regenerating the wrappers? Some automated way?

An additional question: most interesting work in databases is done through views (SELECT statements) and stored procedures. Can Ruby parse such stuff and generate appropriate wrappers? If so, what happens when the views and stored procedures change?

I'm asking these questions because I want to figure out whether automating the task of keeping in sync with a database, plus the additional type safety and speed, are significant advantages in the Web/DB domain. In such a scenario, error messages like the one in Part 2 (http://www.onlamp.com/pub/a/onlamp/2007/01/05/revisiting-ruby-on-rails-revisited-2.html?page=4) may be avoided; the code simply fails to compile. I know of domains where such advantages are very important, but I'm not sure how the Web/DB domain feels about them.


Andrei
February 11, 2007
Andrei Alexandrescu (See Website For Email) wrote:
> After yesterday's hubbub, Judge Judy called and sentenced me to reading about RoR, in addition to the obligatory sentence of helping a little old lady cross the street five times a week.
> 
> So I went and read the nice tutorial at:
> 
> http://www.onlamp.com/pub/a/onlamp/2006/12/14/revisiting-ruby-on-rails-revisited.html 
> 
> 
> I have a couple of questions that I assume will be easy to answer by anyone who actually has used RoR.
> 
> On the second page of the tutorial, the authors describe how they write SQL code to create tables, and then how they invoke Ruby to parse that SQL code and generate (I assume) Ruby wrappers for it.
> 
> Now consider that the database changes: new tables, new fields, different types for existing fields, etc.
> 
> 1. Is the generated Ruby code now out of sync with the database?
> 
> 2. If it is out of sync, what is the way to bring it back in sync? Manual editing of the Ruby code? Editing the SQL and then regenerating the wrappers? Some automated way?
> 
> An additional question: most interesting work in databases is done through views (SELECT statements) and stored procedures. Can Ruby parse such stuff and generate appropriate wrappers? If so, what happens when the views and stored procedures change?
> 
> I'm asking these questions because I want to figure out whether automating the task of keeping in sync with a database, plus the additional type safety and speed, are significant advantages in the Web/DB domain. In such a scenario, error messages like the one in Part 2 (http://www.onlamp.com/pub/a/onlamp/2007/01/05/revisiting-ruby-on-rails-revisited-2.html?page=4) may be avoided; the code simply fails to compile. I know of domains where such advantages are very important, but I'm not sure how the Web/DB domain feels about them.
> 
> 
> Andrei

1 & 2 depend on a couple of things in relation to RoR.

Since classes absorb the schema of the table they represent, if the data type of a column changes, RoR can handle it by means of the generic find, save, etc. However, if there is logic in your code that depends on that type, it's obviously going to fail.

Similarly, when a column is added, it's absorbed by the class (meaning that a property for accessing it is generated at runtime).

If a table is added, RoR doesn't care about it, but you can't use it in an association, because the machinery needed to represent associations has to be generated against that new table.

So how does it handle changes? Pretty blindly, as long as your changes don't break any logic you've baked into your code, including associations (like belongs_to, where you've changed the name of the foreign key).
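The "classes absorb the schema" behavior can be sketched in plain Ruby. This is only an illustration of the mechanism, not actual ActiveRecord code: the schema hash below stands in for what Rails would read from the database, and the accessors are generated at runtime.

```ruby
# Sketch of runtime schema absorption, assuming a schema hash in place of
# a real database read. Class and method names here are made up.
class Record
  def self.absorb_schema(columns)
    columns.each do |name, _type|
      define_method(name)       { @attrs[name] }          # generic read
      define_method("#{name}=") { |v| @attrs[name] = v }  # generic write
    end
  end

  def initialize(attrs = {})
    @attrs = attrs
  end
end

class Product < Record
  # Pretend the DB reported: products(name string, price integer)
  absorb_schema("name" => :string, "price" => :integer)
end

w = Product.new("name" => "Widget", "price" => 10)
w.price * 2  # type-dependent logic works while price is a number

# If the DBA later changes price to a string, generic access still works,
# but the same logic now fails -- and only at runtime, when it executes:
broken = Product.new("name" => "Widget", "price" => "10 USD")
begin
  broken.price + 5
rescue TypeError
  # the mismatch is discovered only when this code path actually runs
end
```

This is exactly the "pretty blindly" behavior described above: generic reads and writes adapt to whatever the schema says today, while type-dependent logic breaks lazily.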

Bonus Question:
Rails is pretty ignorant of the relationships defined in the database itself. DHH (the author of RoR) lists his reasoning here:
http://www.loudthinking.com/arc/000516.html

I have read about hacks to get procs to work under Oracle, though I really haven't invested any time in the situation, as I've never used that DB (I use Firebird mainly).

(I'm not defending his choices, nor am I an authoritative voice on RoR; besides a couple of patches contributed back, I'm just a user, though a limited one as of late.)

If these answers don't cover what you're looking for, feel free to let me know.

Cheers,
Robby


February 11, 2007
Robby wrote:
> Andrei Alexandrescu (See Website For Email) wrote:
>> After yesterday's hubbub, Judge Judy called and sentenced me to reading about RoR, in addition to the obligatory sentence of helping a little old lady cross the street five times a week.
>>
>> So I went and read the nice tutorial at:
>>
>> http://www.onlamp.com/pub/a/onlamp/2006/12/14/revisiting-ruby-on-rails-revisited.html 
>>
>>
>> I have a couple of questions that I assume will be easy to answer by anyone who actually has used RoR.
>>
>> On the second page of the tutorial, the authors describe how they write SQL code to create tables, and then how they invoke Ruby to parse that SQL code and generate (I assume) Ruby wrappers for it.
>>
>> Now consider that the database changes: new tables, new fields, different types for existing fields, etc.
>>
>> 1. Is the generated Ruby code now out of sync with the database?
>>
>> 2. If it is out of sync, what is the way to bring it back in sync? Manual editing of the Ruby code? Editing the SQL and then regenerating the wrappers? Some automated way?
>>
>> An additional question: most interesting work in databases is done through views (SELECT statements) and stored procedures. Can Ruby parse such stuff and generate appropriate wrappers? If so, what happens when the views and stored procedures change?
>>
>> I'm asking these questions because I want to figure out whether automating the task of keeping in sync with a database, plus the additional type safety and speed, are significant advantages in the Web/DB domain. In such a scenario, error messages like the one in Part 2 (http://www.onlamp.com/pub/a/onlamp/2007/01/05/revisiting-ruby-on-rails-revisited-2.html?page=4) may be avoided; the code simply fails to compile. I know of domains where such advantages are very important, but I'm not sure how the Web/DB domain feels about them.
>>
>>
>> Andrei
> 
> 1 & 2 depend on a couple of things in relation to RoR.
> 
> Since classes absorb the schema of the table they represent, if the data type of a column changes, RoR can handle it by means of the generic find, save, etc. However, if there is logic in your code that depends on that type, it's obviously going to fail.

Makes sense. So if all Ruby does is e.g. display a column, which I assume is a generic operation with uniform syntax over all types, the type of that column could be anything. If, on the other hand, Ruby does some math against a column, and that column changes from number to string, runtime errors would ensue.

What if there is a web page that accepts that column? Say the old type was a string (so the appropriate form field is a textbox), and the new type is a date (so the appropriate form field is three drop-down boxes). Is Ruby's HTML generator going to figure things out and spit out the appropriate HTML page?
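A schema-aware generator could plausibly work by dispatching on the column's declared type. The sketch below is only an illustration of that idea; the `widget_for` helper and its type names are made up, not the Rails form-helper API.

```ruby
# Illustrative only: a hypothetical helper that picks an HTML widget from
# a column's declared type. Not the actual Rails API.
def widget_for(column, type)
  case type
  when :string
    %(<input type="text" name="#{column}"/>)
  when :date
    # a date column gets one drop-down each for year, month, and day
    %w[year month day].map { |part|
      %(<select name="#{column}[#{part}]"></select>)
    }.join
  else
    %(<input name="#{column}"/>)
  end
end

widget_for("phone", :string)   # a single textbox
widget_for("birthday", :date)  # three drop-down boxes
```

If the column's type changes from string to date, the next page render would emit the new widget automatically, because the dispatch happens against whatever type the schema reports at that moment.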

> Similarly, when a column is added, it's absorbed by the class (meaning that a property for accessing it is generated at runtime).

Alright. And are there any operations (e.g. "print all columns") that will naturally encompass the new column too?

> If a table is added, RoR doesn't care about it, but you can't use it in an association, because the machinery needed to represent associations has to be generated against that new table.

Interesting. So here's a limitation that might be addressed. Do all of today's database engines (e.g. mysql) offer structure inspection (e.g. "enumerate tables in this database", "enumerate fields in this table"...)?

> So how does it handle changes? Pretty blindly, as long as your changes don't break any logic you've baked into your code, including associations (like belongs_to, where you've changed the name of the foreign key).
> 
> Bonus Question:
> Rails is pretty ignorant of the relationships defined in the database itself. DHH (the author of RoR) lists his reasoning here:
> http://www.loudthinking.com/arc/000516.html

Interesting. I happen to disagree with the author, and the "you'll have to pry that logic from my dead, cold object-oriented hands" part definitely sets off a blinking red LED somewhere, but I'm not a DB expert nor a Ruby expert so possibly I'm even misunderstanding things.

I'm not even thinking of stored procedures as logic vehicles. All stored procedures I've dealt with were little more than simply stored views (SELECT statements). If I want to see customer phone numbers and their orders, I'm glad to let the database do that efficiently by doing an inner join and passing me the information in one shot, instead of having me look at the two relevant tables for the sake of object orientedness. To say nothing about data integrity, which is best taken care of at the database level.

The whole idea of manipulating data at table level and compulsively delaying absolutely all meaning to the application level reminds me of dBase and the 1980s.

Anyhow, what I think might be good for most people is to write a SELECT statement that does something, and have the columns of that SELECT automatically available for further processing. I understand RoR doesn't do that.
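The "columns of a SELECT automatically available" idea can be sketched with a plain Ruby Struct built from the result set. The rows below are simulated (a real driver would hand back something similar); the names are illustrative.

```ruby
# Sketch: build row objects whose accessors mirror whatever columns the
# query happened to produce. Rows are simulated hashes, as a DB driver
# might return them.
def wrap_rows(rows)
  return [] if rows.empty?
  row_type = Struct.new(*rows.first.keys)  # one accessor per column
  rows.map { |row| row_type.new(*row.values) }
end

# Simulated result of:
#   SELECT c.name, c.phone, o.total FROM customers c
#   INNER JOIN orders o ON o.customer_id = c.id
rows = [
  { name: "Ann", phone: "555-0100", total: 42 },
  { name: "Bob", phone: "555-0199", total: 7 },
]
records = wrap_rows(rows)
records.first.phone  # the join's columns are available as methods
```

The point is that nothing here knows or cares which tables the columns came from; the SELECT's shape, whatever it is, becomes the object's shape.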

> I have read about hacks to get procs to work under Oracle, though I really haven't invested any time in the situation, as I've never used that DB (I use Firebird mainly).
> 
> (I'm not defending his choices, nor am I an authoritative voice on RoR; besides a couple of patches contributed back, I'm just a user, though a limited one as of late.)
> 
> If these answers don't cover what you're looking for, feel free to let me know.

Thanks, the answers were exactly to the point. By and large, I'm interested in seeing the strengths and limitations of the RoR approach, and forming an impression of whether a more static approach would make a big difference (better DB <-> code binding, better error detection, faster execution).

After all, the approach wouldn't even feel much more static: with rdmd, fast compilation, and the bang syntax, D can pretty much have the feel of an interpreter.


Andrei
February 11, 2007
Andrei Alexandrescu (See Website For Email) wrote:

> Interesting. So here's a limitation that might be addressed. Do all of today's database engines (e.g. mysql) offer structure inspection (e.g. "enumerate tables in this database", "enumerate fields in this table"...)?

In my experience, yes (MySQL does, SQLite does, and all of the more traditional RDBMSs do, including PostgreSQL, Oracle, DB2, MS SQL Server, and many more). This level of "reflection" is standard. In older versions of MySQL it was accessible only via special APIs, but normally it is available via system tables/views with names something like SYS_ALL_TABLES, SYS_ALL_COLUMNS, etc., which have columns for name, type, etc., so that SQL is the interface to this metadata.

-- James
February 11, 2007
Andrei Alexandrescu (See Website For Email) wrote:
> 
> I'm not even thinking of stored procedures as logic vehicles. All stored procedures I've dealt with were little more than simply stored views (SELECT statements). If I want to see customer phone numbers and their orders, I'm glad to let the database do that efficiently by doing an inner join and passing me the information in one shot, instead of having me look at the two relevant tables for the sake of object orientedness. To say nothing about data integrity, which is best taken care of at the database level.

Stored procedures have a few advantages over inline queries:

- Speed.  Stored procedures are pre-compiled.  This can have a tremendous impact on performance for server applications.
- Decoupling.  If the schema changes the app doesn't typically even need recompilation, it "just works" so long as the stored procs are updated as well.
- Security.  Access to the tables can be restricted to admins so the only thing a user can do is run approved stored procs.
- Encapsulation.  This is really an extension of decoupling, but from more of an OO perspective.  Hiding data access and manipulation behind procs has the same advantages as doing the same thing in OO programming languages.

By comparison, views are pre-compiled and provide security, but not the other two features.  Quite simply, all interaction with a DB in applications I design is through stored procedures.  If a particular server doesn't support them then it's a toy.
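The decoupling and encapsulation points can be sketched as a gateway object: the application calls named procedures through one interface and never touches tables directly. Everything below is illustrative; the proc name, the `call_proc` signature, and the fake connection are made up to show the shape, not any real driver's API.

```ruby
# Sketch of stored-procedure encapsulation. FakeConn stands in for a real
# DB connection; the proc name and call signature are hypothetical.
class CustomerGateway
  def initialize(conn)
    @conn = conn
  end

  # If the schema behind the proc changes, only the proc body is updated
  # on the server; this application code stays untouched.
  def orders_for(customer_id)
    @conn.call_proc("usp_customer_orders", customer_id)
  end
end

# A fake connection records the calls so we can see the interaction shape
# without a real database.
class FakeConn
  attr_reader :calls

  def initialize
    @calls = []
  end

  def call_proc(name, *args)
    @calls << [name, args]
    []  # a real driver would return result rows here
  end
end

conn = FakeConn.new
CustomerGateway.new(conn).orders_for(7)
conn.calls  # the app only ever issued a named proc call
```

The security bullet falls out of the same shape: if table access is revoked and only the procs are granted, the gateway's surface is the application's entire reach into the database.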


Sean
February 11, 2007
Sean Kelly wrote:
> Andrei Alexandrescu (See Website For Email) wrote:
> 
>>
>> I'm not even thinking of stored procedures as logic vehicles. All stored procedures I've dealt with were little more than simply stored views (SELECT statements). If I want to see customer phone numbers and their orders, I'm glad to let the database do that efficiently by doing an inner join and passing me the information in one shot, instead of having me look at the two relevant tables for the sake of object orientedness. To say nothing about data integrity, which is best taken care of at the database level.
> 
> 
> Stored procedures have a few advantages over inline queries:
> 
> - Speed.  Stored procedures are pre-compiled.  This can have a tremendous impact on performance for server applications.
> - Decoupling.  If the schema changes the app doesn't typically even need recompilation, it "just works" so long as the stored procs are updated as well.
> - Security.  Access to the tables can be restricted to admins so the only thing a user can do is run approved stored procs.
> - Encapsulation.  This is really an extension of decoupling, but from more of an OO perspective.  Hiding data access and manipulation behind procs has the same advantages as doing the same thing in OO programming languages.
> 
> By comparison, views are pre-compiled and provide security, but not the other two features.  Quite simply, all interaction with a DB in applications I design is through stored procedures.  If a particular server doesn't support them then it's a toy.
> 
> 
> Sean


Amen :)
February 11, 2007
James Dennett wrote:
> Andrei Alexandrescu (See Website For Email) wrote:
> 
>> Interesting. So here's a limitation that might be addressed. Do all of today's database engines (e.g. mysql) offer structure inspection (e.g. "enumerate tables in this database", "enumerate fields in this table"...)?
> 
> In my experience, yes (MySQL does, SQLite does, and all of the more traditional RDBMSs do, including PostgreSQL, Oracle, DB2, MS SQL Server, and many more). This level of "reflection" is standard. In older versions of MySQL it was accessible only via special APIs, but normally it is available via system tables/views with names something like SYS_ALL_TABLES, SYS_ALL_COLUMNS, etc., which have columns for name, type, etc., so that SQL is the interface to this metadata.
> 
> -- James

I am currently attempting to add this metadata to Result objects in DDBI. Nothing in SVN yet, but I am making progress.  I'm also working on improving the types that DDBI returns, expanding past just char[].

http://www.dsource.org/projects/ddbi

BA
February 11, 2007
Andrei Alexandrescu (See Website For Email) schrieb:
> After yesterday's hubbub, Judge Judy called and sentenced me to reading about RoR, in addition to the obligatory sentence of helping a little old lady cross the street five times a week.
> 
> So I went and read the nice tutorial at:
> 
> http://www.onlamp.com/pub/a/onlamp/2006/12/14/revisiting-ruby-on-rails-revisited.html 
> 
> 
> I have a couple of questions that I assume will be easy to answer by anyone who actually has used RoR.
> 
> On the second page of the tutorial, the authors describe how they write SQL code to create tables, and then how they invoke Ruby to parse that SQL code and generate (I assume) Ruby wrappers for it.
> 
> Now consider that the database changes: new tables, new fields, different types for existing fields, etc.
> 
> 1. Is the generated Ruby code now out of sync with the database?
> 
> 2. If it is out of sync, what is the way to bring it back in sync? Manual editing of the Ruby code? Editing the SQL and then regenerating the wrappers? Some automated way?
> 
> An additional question: most interesting work in databases is done through views (SELECT statements) and stored procedures. Can Ruby parse such stuff and generate appropriate wrappers? If so, what happens when the views and stored procedures change?

In general, this kind of problem is solved by implementing the observer/subject (observable) pattern.
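A minimal hand-rolled sketch of that pattern, with made-up names: the schema acts as the subject and each wrapper registers to be notified of changes. This assumes something actually detects the schema change and invokes the notification; Ruby also ships an Observable module, but the sketch avoids depending on it.

```ruby
# Hand-rolled observer/subject sketch; class and method names are
# illustrative only.
class Schema
  def initialize
    @observers = []
  end

  def add_observer(obs)
    @observers << obs
  end

  # Whoever detects a schema change calls this to fan the news out.
  def change!(description)
    @observers.each { |o| o.schema_changed(description) }
  end
end

class Wrapper
  attr_reader :seen

  def initialize
    @seen = []
  end

  def schema_changed(description)
    @seen << description  # a real wrapper would regenerate itself here
  end
end

schema = Schema.new
wrapper = Wrapper.new
schema.add_observer(wrapper)
schema.change!("ALTER TABLE products ADD COLUMN sku")
wrapper.seen  # the wrapper learned about the change
```

The hard part in practice is the first step, detecting the change at all, since the database does not push notifications to the application by itself.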


> 
> I'm asking these questions because I want to figure out whether automating the task of keeping in sync with a database, plus the additional type safety and speed, are significant advantages in the Web/DB domain. In such a scenario, error messages like the one in Part 2 (http://www.onlamp.com/pub/a/onlamp/2007/01/05/revisiting-ruby-on-rails-revisited-2.html?page=4) may be avoided; the code simply fails to compile. I know of domains where such advantages are very important, but I'm not sure how the Web/DB domain feels about them.
> 
> 
> Andrei

Bjoern
February 11, 2007
Sean Kelly wrote:
> Andrei Alexandrescu (See Website For Email) wrote:
>>
>> I'm not even thinking of stored procedures as logic vehicles. All stored procedures I've dealt with were little more than simply stored views (SELECT statements). If I want to see customer phone numbers and their orders, I'm glad to let the database do that efficiently by doing an inner join and passing me the information in one shot, instead of having me look at the two relevant tables for the sake of object orientedness. To say nothing about data integrity, which is best taken care of at the database level.
> 
> Stored procedures have a few advantages over inline queries:
> 
> - Speed.  Stored procedures are pre-compiled.  This can have a tremendous impact on performance for server applications.
> - Decoupling.  If the schema changes the app doesn't typically even need recompilation, it "just works" so long as the stored procs are updated as well.
> - Security.  Access to the tables can be restricted to admins so the only thing a user can do is run approved stored procs.
> - Encapsulation.  This is really an extension of decoupling, but from more of an OO perspective.  Hiding data access and manipulation behind procs has the same advantages as doing the same thing in OO programming languages.
> 
> By comparison, views are pre-compiled and provide security, but not the other two features.  Quite simply, all interaction with a DB in applications I design is through stored procedures.  If a particular server doesn't support them then it's a toy.

One can tell you work in finance. :o)

So perhaps a framework that does what RoR does, but offers tighter binding to large-scale databases (in addition to better speed and checking), could fill a niche that RoR today does not address, for a mix of practical and philosophical reasons?


Andrei
February 11, 2007
BLS wrote:
> Andrei Alexandrescu (See Website For Email) schrieb:
>> After yesterday's hubbub, Judge Judy called and sentenced me to reading about RoR, in addition to the obligatory sentence of helping a little old lady cross the street five times a week.
>>
>> So I went and read the nice tutorial at:
>>
>> http://www.onlamp.com/pub/a/onlamp/2006/12/14/revisiting-ruby-on-rails-revisited.html 
>>
>>
>> I have a couple of questions that I assume will be easy to answer by anyone who actually has used RoR.
>>
>> On the second page of the tutorial, the authors describe how they write SQL code to create tables, and then how they invoke Ruby to parse that SQL code and generate (I assume) Ruby wrappers for it.
>>
>> Now consider that the database changes: new tables, new fields, different types for existing fields, etc.
>>
>> 1. Is the generated Ruby code now out of sync with the database?
>>
>> 2. If it is out of sync, what is the way to bring it back in sync? Manual editing of the Ruby code? Editing the SQL and then regenerating the wrappers? Some automated way?
>>
>> An additional question: most interesting work in databases is done through views (SELECT statements) and stored procedures. Can Ruby parse such stuff and generate appropriate wrappers? If so, what happens when the views and stored procedures change?
> 
> In general, this kind of problem is solved by implementing the observer/subject (observable) pattern.

Sorry, I'm not sure I understand. My understanding of the mechanism is the following:

1. The app runs a SQL-to-target-language parser to build an idea about the database.

2. The database folks change the database in any number of ways. This process does not automatically notify the target-language application.

3. The target language application must undergo some change to accommodate the change in the database.

I did DB/financial work in 1998. This scenario was a total bitch because we didn't have small and fast test cases for all the logic code to run when the database changed. Basically it was the customers (financial analysts) who let us know when something bombed, and they got so used to it that they weren't even pissed anymore.


Andrei