"I also plan to add hooks for various events, like when a row is created, updated, or deleted. So maybe you could implement something where a row was always updated to indicate the last time the row was edited. Or CASCADE could be implemented as a hook on the delete event for ForeignKey."
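The hooks the quoted proposal describes might look something like this sketch (an in-memory toy with hypothetical names, not SQLObject's actual API):

```python
import datetime

class HookedTable:
    """Minimal sketch of the proposed row-event hooks: an in-memory
    'table' that fires registered handlers on create, update, and delete."""
    def __init__(self):
        self.rows = {}
        self.hooks = {"create": [], "update": [], "delete": []}
        self.next_id = 1

    def on(self, event, handler):
        self.hooks[event].append(handler)

    def _fire(self, event, row):
        for handler in self.hooks[event]:
            handler(row)

    def create(self, **values):
        row = dict(values, id=self.next_id)
        self.next_id += 1
        self.rows[row["id"]] = row
        self._fire("create", row)
        return row

    def update(self, row_id, **values):
        row = self.rows[row_id]
        row.update(values)
        self._fire("update", row)

    def delete(self, row_id):
        row = self.rows.pop(row_id)
        self._fire("delete", row)

people = HookedTable()
addresses = HookedTable()

# "last time the row was edited", maintained by an update hook
def stamp(row):
    row["last_edited"] = datetime.datetime.now()
people.on("update", stamp)

# CASCADE as a hook on the delete event of the "ForeignKey" parent
def cascade(person):
    for addr in [a for a in addresses.rows.values()
                 if a["person_id"] == person["id"]]:
        addresses.delete(addr["id"])
people.on("delete", cascade)
```

With this in place, every update stamps last_edited, and deleting a person cascades to their addresses.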
If that sort of thing really matters to your application, then wouldn't you be using a database that had native triggers anyway?
Drawing the line between what you put in app logic and what you put in the database is always tricky. But I tend towards the view that basic data integrity self-defence is more reliable if done in/by the database.
You might put it in the application because:
- it's portable across databases (but how often in real life do people decide to switch the database behind a serious application that their business depends on?)
- it works for databases that don't support those things natively (but why would you use one of those if data integrity was a serious concern?)
- you can see and maintain all your logic in one place
You might put it in the database because:
- it's probably more efficient; assume the people building databases know more about how to do this sort of stuff quickly and reliably than you (for any likely value of "you") do
- it can't be bypassed by people accessing the same database from a different client application, or even by somebody else writing a different part of the same application
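The can't-be-bypassed point is easy to demonstrate with SQLite, which already has triggers, through Python's stdlib sqlite3 module (table and trigger names are made up for illustration):

```python
import sqlite3

# Database-side approach: a trigger stamps last_edited no matter which
# client performs the update, so no application can skip it.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE person (
    id INTEGER PRIMARY KEY,
    name TEXT,
    last_edited TEXT
);
CREATE TRIGGER person_touch AFTER UPDATE OF name ON person
BEGIN
    UPDATE person SET last_edited = datetime('now') WHERE id = NEW.id;
END;
""")
conn.execute("INSERT INTO person (name) VALUES ('alice')")
conn.execute("UPDATE person SET name = 'bob' WHERE id = 1")

name, stamp = conn.execute(
    "SELECT name, last_edited FROM person WHERE id = 1").fetchone()
assert name == "bob" and stamp is not None
```

Any client that updates the row through any library or tool gets the same stamping behaviour, which is exactly the "immune system" argument for putting it in the database.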
I notice that even MySQL might have triggers soon: http://rootprompt.org/article.php3?article=8289
In general, I agree, a company probably doesn't switch databases very often. I started my personal project using sqlite and moved over to mysql after development was under way. All it took was a change in the connection string (and some refactoring to use datetime instead of mx.DateTime, which doesn't count). SQLObject's ability to easily switch backend stores helps me a lot - it makes it easier for me to develop and choose a provider when the time comes. But this is probably true more for small efforts than corporate IT infrastructure projects.
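For what it's worth, that switch really is a one-line change, assuming SQLObject's connectionForURI and sqlhub; the URIs and paths here are purely illustrative:

```python
from sqlobject import connectionForURI, sqlhub

# development: file-based SQLite (illustrative path)
sqlhub.processConnection = connectionForURI('sqlite:/path/to/dev.db')

# later: the same model classes pointed at MySQL -- swap one line
# sqlhub.processConnection = connectionForURI('mysql://user:pass@host/dbname')
```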
If you're writing an in-house app, or one targeted at only one DB, then putting the triggers in the DB can make sense (because you don't change databases often, as you pointed out). But if you're writing an app on top of SQLObject which you then want others to use with many different databases, it's nice to write the triggers once, rather than once for each potential target DB.
Well, like other people said, SQLObject can be used at a lower level than any single deployment -- SQLObject can be part of a project which may get installed anywhere. This just came up for a user: http://www.crummy.com/2005/02/18/0 -- and it will come up all the time for open source developers that distribute projects based on SQLObject.
But even then, database triggers are limited; they can operate on the database, but not on anything outside it. So if your dependent object isn't a SQLObject instance, a database trigger won't accomplish anything. E.g., if you want a FileCol which stores only a filename in the database, and the file itself needs to be deleted along with the row. Or if you have two separate databases with dependencies across them.
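The FileCol case can be sketched in a few lines (FileBackedRow and its delete hook are hypothetical, standing in for an app-level cascade; a plain list stands in for the database table):

```python
import os
import tempfile

class FileBackedRow:
    """Hypothetical sketch of a FileCol-style row: the database column
    would hold only a filename, and an app-level delete hook removes the
    file itself -- something a database trigger cannot reach."""
    def __init__(self, store, path):
        self.store = store      # stands in for the database table
        self.path = path        # the only thing the database would store
        store.append(self)

    def destroy(self):
        # the "on delete" hook: clean up the external file, then the row
        if os.path.exists(self.path):
            os.remove(self.path)
        self.store.remove(self)

# usage: deleting the row deletes the file alongside it
fd, path = tempfile.mkstemp()
os.close(fd)
table = []
row = FileBackedRow(table, path)
row.destroy()
assert not os.path.exists(path) and table == []
```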
And even when you are only dealing with the database, database triggers still aren't sufficient, because they don't give SQLObject any feedback, so it doesn't know how to update its cache. SQLObject, for reasons tied to its very ORMness, has problems when the database gets updated behind its back. I'd like to fix some of those problems (and some of these features could help), but in this case SQLObject could still cache aggressively in the presence of in-process cascades.
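The feedback problem looks like this in miniature (a toy identity-map cache, with illustrative names rather than SQLObject's API):

```python
import sqlite3

class CachedTable:
    """Toy identity-map cache: a database-side trigger (or any other
    client) can change rows without the ORM knowing, so the cache goes
    stale until in-process code explicitly expires it."""
    def __init__(self, conn):
        self.conn = conn
        self.cache = {}

    def get(self, row_id):
        if row_id not in self.cache:
            name, = self.conn.execute(
                "SELECT name FROM person WHERE id = ?", (row_id,)).fetchone()
            self.cache[row_id] = {"id": row_id, "name": name}
        return self.cache[row_id]

    def expire(self, row_id):
        self.cache.pop(row_id, None)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO person (name) VALUES ('alice')")

people = CachedTable(conn)
assert people.get(1)["name"] == "alice"

# the database is updated behind the ORM's back (e.g. by a trigger)
conn.execute("UPDATE person SET name = 'bob' WHERE id = 1")
assert people.get(1)["name"] == "alice"   # stale!

people.expire(1)   # only in-process code knows to do this
assert people.get(1)["name"] == "bob"
```

An in-process cascade can call expire() at exactly the right moment; a database trigger gives the ORM no such signal.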
Good points, Ian. I think I detect a fairly fundamental difference in worldview/emphasis/(don't say the p*gm word!) between:
- the application as the primary thing and the db as a dumb persistence store
- the data as the primary thing and the dbms as its primary guardian; triggers (etc.) as the immune system that protects it from things malicious/misguided applications might try to get up to.
It's a design decision about which layer in the stack should hold your basic data-integrity defence mechanisms, with different valid answers in different circumstances. Although what you seem to be saying/implying is that SQLObject won't be able to cope if you put them anywhere other than in the app's persistence layer. I would expect that (with all due respect) to be very significantly slower and less reliable than native triggers in any mature production RDBMS - especially if the app isn't running on the same machine. And even if you can live with that, what if yours isn't the only application - or the only instance of the same application - talking to the same database?
So you would appear to be limited to one instance of one application that "owns" the database entirely. Which of course is perfectly adequate in many circumstances, but severely limits what you can do if you want to load balance your app / run with existing production databases that already have integrity protections in place / etc.
Martin Fowler called one style an Application Database (what SQLObject prefers) vs. an Integration Database (where the database is authoritative and the communication medium for multiple applications).
I should really write a more thorough summary of why it's hard to be reasonably efficient with an ORM while also being compatible with an Integration Database style. Not impossible -- it's just a matter of what your expectations are. Hmm... I've never properly written up the performance issues of an ORM; I should do that so I can refer to it at times like these...