Thursday, February 28, 2013

The LedgerSMB Service Locator for UDF's: How it Works

One of the questions I got from a post last week was how the LedgerSMB service locator works.  In the past I have covered the database design side, but not how the service locator works.  Part of the reason is that the service locator is still very much a work in progress; over time we expect to have something quite a bit more full-featured than what we have now, which is simple but works.

We currently use a stored procedure API divided into an upper and a lower half.  The upper half generates an API call for the lower half, which can be called independently when finer control is needed.

The Bottom Half


The bottom half of this mechanism is the stored procedure call API.    The Perl API uses the following syntax:

@results = call_procedure( procname => $procname, args => $args );

@results is an array of hash references each of which represents a row returned.  $procname is naturally the name of the procedure, and $args is an array reference of scalar argument values.

For example if I want to call a function named company__get which takes a single argument of $id, I might:

($company) = call_procedure( 
          procname => 'company__get', args => [$id] 
);

From these arguments, the function then generates a query of the form:

SELECT * FROM company__get(?);

And runs it, parameterized, with $id as the argument.

The full source code for the function in Perl is:

sub call_procedure {
    my $self     = shift @_;
    my %args     = @_;
    my $procname = $args{procname};
    my $schema   = $args{schema};
    my @call_args;
    my $dbh = $LedgerSMB::App_State::DBH;
    if (!$dbh){
        $dbh = $self->{dbh};
    }
    @call_args = @{ $args{args} } if defined $args{args};
    my $order_by = $args{order_by};
    my $query_rc;
    my $argstr   = "";
    my @results;

    if (!defined $procname){
        $self->error('Undefined function in call_procedure.');
    }
    $procname = $dbh->quote_identifier($procname);
    # Add the test for whether the schema is something useful.
    $logger->trace("\$procname=$procname");
    $schema = $schema || $LedgerSMB::Sysconfig::db_namespace;

    $schema = $dbh->quote_identifier($schema);

    for ( 1 .. scalar @call_args ) {
        $argstr .= "?, ";
    }
    $argstr =~ s/\, $//;
    my $query = "SELECT * FROM $schema.$procname()";
    if ($order_by){
        $query .= " ORDER BY $order_by";
    }
    $query =~ s/\(\)/($argstr)/;
    my $sth = $dbh->prepare($query);
    my $place = 1;
    # API Change here to support byteas: 
    # If the argument is a hashref, allow it to define its SQL type
    # for example PG_BYTEA, and use that to bind.  The API supports the old
    # syntax (array of scalars and arrayrefs) but extends this so that hashrefs
    # now have special meaning. I expect this to be somewhat recursive in the
    # future if hashrefs to complex types are added, but we will have to put
    # that off for another day. --CT
    foreach my $carg (@call_args){
        if (ref($carg) eq 'HASH'){
            $sth->bind_param($place, $carg->{value},
                       { pg_type => $carg->{type} });
        } else {
            $sth->bind_param($place, $carg);
        }
        ++$place;
    }
    $query_rc = $sth->execute();
    if (!$query_rc){
          if ($args{continue_on_error} and  #  only for plpgsql exceptions
                          ($dbh->state =~ /^P/)){
                $@ = $dbh->errstr;
          } else {
                $self->dberror($dbh->errstr . ": " . $query);
          }
    }

    my @types = @{$sth->{TYPE}};
    my @names = @{$sth->{NAME_lc}};
    while ( my $ref = $sth->fetchrow_hashref('NAME_lc') ) {
        for (0 .. $#names){
            #   numeric            float4/real
            if ($types[$_] == 3 or $types[$_] == 2) {
                $ref->{$names[$_]} = Math::BigFloat->new($ref->{$names[$_]});
            }
        }
        push @results, $ref;
    }
    return @results;
}


In addition to the portions described above, this function also does some basic error handling, delegating to another function which logs full errors and hides some errors (particularly security-sensitive ones) behind more generic user-facing error messages.

There is a modest amount more to the logic in terms of type handling and the like, but that's roughly it for the main logic.

Future Enhancements for the Bottom Half


In the future I would like to add a number of features including window definitions and window function aggregates which can be tacked on to a function's output.  I would in essence like to be able to go from a maximum complexity of something like:

SELECT * FROM my_func(?) order by foo;

to

SELECT *, sum(amount) 
          over (partition by reference order by entry_id)
          AS running_balance
  FROM gl_report(?, ?, ?) order by transdate;

This sort of thing would make reporting functions a lot more flexible.

The Top Half 


The top half serves as a general service locator for stored procedures.  The function is located in the DBObject module and is called "exec_method."  This function provides service location capabilities, provided that function names are unique (this may change in future generations as we experiment with other representations and interfaces).

The top half currently uses either an object-property-to-argument mapping approach or an enumerated argument approach, but not both at once; the inability to mix them is a current shortcoming.  I almost never use the enumerated argument approach since it is relatively brittle.

Additionally the ordering API in the Perl code is really suboptimal and needs to be redone in future versions.

The Perl code is:

sub exec_method {
    my $self   = shift @_;
    my %args  = (ref($_[0]) eq 'HASH')? %{$_[0]}: @_;
    my $funcname = $args{funcname};

    my $schema   = $args{schema} || $LedgerSMB::Sysconfig::db_namespace;

    $logger->debug("exec_method: \$funcname = $funcname");
    my @in_args;
    @in_args = @{ $args{args} } if $args{args};

    my @call_args;

    my $query = "
        SELECT proname, pronargs, proargnames FROM pg_proc
         WHERE proname = ?
               AND pronamespace =
               coalesce((SELECT oid FROM pg_namespace WHERE nspname = ?),
                        pronamespace)
    ";
    my $sth   = $self->{dbh}->prepare($query);
    # Look up the function's signature; the two placeholders are the
    # function name and the schema.
    $sth->execute($funcname, $schema);

    my $ref = $sth->fetchrow_hashref('NAME_lc');

    my $pargs = $ref->{proargnames};
    my @proc_args;

    if ( !$ref->{proname} ) {    # no such function
        # If the function doesn't exist, $funcname gets zeroed?
        $self->error( "No such function:  $funcname");
#        die;
    }
    $ref->{pronargs} = 0 unless defined $ref->{pronargs};
    # If the user did not provide enumerated args, map object properties
    # (in_* arguments) onto the function's argument list.
    if (!defined $args{args}) {
        @proc_args = $self->_parse_array($pargs);
        if (@proc_args) {
            for my $arg (@proc_args) {
                #print STDERR "User Provided Args: $arg\n";

                if ( $arg =~ s/^in_// ) {
                     if ( defined $self->{$arg} )
                     {
                        $logger->debug("exec_method pushing $arg = $self->{$arg}");
                     }
                     else
                     {
                        $logger->debug("exec_method pushing \$arg defined $arg | \$self->{\$arg} is undefined");
                        #$self->{$arg} = undef; # Why was this being unset? --CT
                     }
                     push ( @call_args, $self->{$arg} );
                }
            }
        }
        for (@in_args) { push @call_args, $_ } ;
        $self->{call_args} = \@call_args;
        $logger->debug("exec_method: \$self = " . Data::Dumper::Dumper($self));

        return $self->call_procedure( procname => $funcname,
                                          args => \@call_args,
                                      order_by => $self->{_order_method}->{"$funcname"},
                                         schema=>$schema,
                             continue_on_error => $args{continue_on_error});
    }
    else {
        return $self->call_procedure( procname => $funcname,
                                          args => \@in_args,
                                      order_by => $self->{_order_method}->{"$funcname"},
                                         schema=>$schema,
                             continue_on_error => $args{continue_on_error});
    }
}


Opportunities for Future Improvements


Initial improvements include replacing the enumerated argument API with one where a hashref can be passed, overriding part or all of the arguments sent to the database.  This would keep the API flexible and dynamic while allowing for richer client code.  The ordering API also needs to be moved into the actual API call, instead of being a separate call.

Next Generation Interface Under Development


The next generation interface will support an API call like:

SELECT * FROM save('(,A-12334,"Test, Inc.",232)'::entity);

The major challenge here is recursively building what is essentially a potentially nested CSV structure.  For example, we might have:

SELECT * FROM save('(,JE-12334,Cash Transfer,2013-05-01,f,"{""(,4,-1000)"",""(,7,1200)"",""(,12,200)""}")'::journal_entry);

The escaping is not actually too hard.  The key challenges are questions of performance optimization, such as making sure that we cache data structures properly and the like.
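
To make that concrete, here is a minimal sketch of what the receiving side might look like.  The journal_line and journal_entry types and the save() function below are illustrative assumptions, not the actual LedgerSMB schema; the point is simply that PostgreSQL's composite-type input syntax handles the nested, quote-doubled CSV structure shown above.

CREATE TYPE journal_line AS (
    id         int,
    account_id int,
    amount     numeric
);

CREATE TYPE journal_entry AS (
    id          int,
    reference   text,
    description text,
    post_date   date,
    approved    bool,
    lines       journal_line[]
);

-- A stand-in save() that simply echoes its argument back; real logic
-- would insert or update rows and return the saved state.
CREATE FUNCTION save(in_entry journal_entry)
RETURNS journal_entry LANGUAGE plpgsql AS $$
BEGIN
    RETURN in_entry;
END;
$$;

-- The nested literal from above parses directly into the composite type;
-- note the doubled quotes around each array element.
SELECT * FROM save('(,JE-12334,Cash Transfer,2013-05-01,f,"{""(,4,-1000)"",""(,7,1200)"",""(,12,200)""}")'::journal_entry);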

However in addition to that problem I would like to be able to define window functions on result sets over the API so that running balances can be added in the database (where they can be done most efficiently).

A fair bit of work has been done on this already.

Licensing


The code above is licensed under the GNU General Public License version 2 or, at your option, any later version.  It is not the cleanest code we have written on the subject, but it is the code which is used by LedgerSMB in production.

If you would like BSD-licensed code to work with, which is also likely cleaner code, please see the PHP implementation that John Locke and I have put together.

Monday, February 18, 2013

A look back on something I wrote 9 years ago

I recently came across a comment I wrote in 2004 on Slashdot.  Reading it, I was surprised both by where my approach had changed and by where it hadn't.  The entire comment is worth posting with a few very minor edits:

First, I think that stored procs are not only often good but lead to better, more powerful, secure, and flexible applications that would not be feasible without them. But on the other hand, they lead to hard-to-maintain applications which are explicitly tied to one database. So they are necessary but often misused.

Triggers are more important and usually use stored procedures to ensure that the information in the database is always meaningful or that some other automated activity happens on an insert or update.

Unless I absolutely have to, I try to avoid having my application call stored procedures directly. A relational database manager *should* be able to hide the stored procedures behind a view allowing a nice standard interface for your data. This means that if you have to move to another RDBMS later, porting is much more simple and mostly confined to the backend.

BTW, I agree with the points about having your business logic in one place. Stored procedures allow you to move business logic which is inherent to the database into the database, thus making it available from all clients regardless of the language they are written in. For a single app/db pair this is not an issue but if you have a large DB with many different clients, it is a larger issue. Maintaining your application in one location is a LOT less work than reimplementing it all in every one of your apps.

Triggers, BTW, as I mentioned before are very powerful mechanisms. They are not called by the app directly but are run every time a row is inserted, updated, or deleted (some RDBMS's also allow select triggers, though some have alternate ways of implementing this). They can be used to implement additional security restrictions, enforce referential integrity, or more advanced stuff such as send email when a new order is added to the database. Again, this is done regardless of whether the order is filed using a web app or a Windows app for the rep in the call center. Since the logic is unknown to the app, it isn't even aware of what happens after the order is placed. Talk about clean interfaces..... This requires stored procedures.

So, these are the great things about stored procedures. But when they are used badly, you end up with the stored procedures reducing the agility of the application because they tie it in too closely to the database. What do you do when your app is tied to Sybase and your customers want MS SQL Server? What if *all* your logic is in that database? Do you rewrite *all* of it for MS SQL Server? Probably Not. You are stuck and out of your investment.

In my opinion, it is important to maintain a clear line between what is the database's job and what is the application's job. If this line is blurred, bad things can result. Stored procedures are a very easy way to blur this line.

I design my apps with the following layers to them:

UI
Application-specific logic
Database Access Layer
Information Presentation (SQL API using views)
Information Management and Data Logic (usually stored procedures)
Information Storage (highly normalized, usually).

The comment was made as part of a discussion on stored procedures. I was discussing at the time an open source CRM solution I was working on, which never gained much traction, in part because it was one of those things I did primarily to learn PHP (and so I made some important errors in architecture on the PHP side), and in part because I made a bunch of other errors in managing the project.  I learned a lot from the failure, both in what not to do and, just as importantly, in what was done right.

In reading the post I am struck by two things.  The first is that my understanding of the problems has advanced less than my understanding of the solutions; the second is that my approach to the solutions, both then and now, aims squarely at the same problems.

I have said before many times that one of the primary causes of grief with stored procedures is the fact that they very often create very brittle API's which require way too many changes in code to work right.  The choice to go stored-procedure-based is thus one which has a number of possible hazards (and different db's have different ways of addressing this), and rolling out applications against a new revision of stored procedures can become more difficult to coordinate over time (Oracle's "Edition-Based Redefinition" feature helps with this a great deal).

The way I tried to solve this a decade ago was to wrap functions in views, so that an application would not need to know the semantics of the call, and in fact everything could be done in dynamic SQL.  This has the advantage of entirely hiding the call structure of the stored procedure, but it has the disadvantage of making it somewhat difficult to come up with a view structure sufficiently robust and flexible as to be generally useful.  This also was intended to ensure that the application could just send ANSI SQL and get answers back.
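
A minimal sketch of that idea, with entirely hypothetical names, might look like the following: the function hides the procedural semantics, the view hides the function, and the client only ever issues plain SQL.

CREATE TABLE customer (
    id     serial primary key,
    name   text not null,
    active bool not null default true
);

CREATE FUNCTION active_customers()
RETURNS SETOF customer
LANGUAGE sql AS $$
    SELECT * FROM customer WHERE active
$$;

CREATE VIEW active_customer AS SELECT id, name FROM active_customers();

-- The application sends ordinary SQL and never sees the function call:
SELECT name FROM active_customer WHERE id = 42;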

As I look at this solution I recognize that today there are no perfect solutions, but that this was a noble if failed effort to deal with a real problem (and one that worked surprisingly well for the app I was building).

The approach we take in LedgerSMB is both similar to and different from the approach we took in HERMES.  The basic layers are still there, but the tools in each layer are different:

Layer                                  | HERMES                           | LedgerSMB
UI                                     | PHP                              | Template Toolkit
Application-specific logic             | PHP                              | Perl
Database Access Layer                  | Hand-written PHP module          | DBI
Information Presentation               | Views                            | Moved up a level; a service locator written in Perl
Information Management and Data Logic  | Stored procedures, pretty basic  | Stored procs plus named args
Information Storage                    | Normalized tables                | Normalized tables

The biggest differences between then and now are worth considering.  Instead of insisting that the public API is relational, the stored procedures form a discoverable API, and this allows one to utilize much more sophisticated stored procedures.  I couldn't imagine doing something like a trial balance or income statement in a view-based API.  Having access to function arguments makes a huge difference there.

To accomplish this, the information presentation layer was moved above data access and turned into a service locator rather than a simple wrapper.  More needs to be done on this wrapper, but it is a significant improvement over my previous attempts to approach these problems.  The approach is still a work in progress, but the automation is currently simple enough to add a level of robustness not present before, without being so opaque that bugs frequently hide behind it.

I am much happier with the current approach I am using, but it is interesting to note how little my approach has changed over the last decade or so.

Friday, February 15, 2013

Building SOLID Databases: Dependency Inversion and Robust DB Interfaces

Dependency inversion is the idea that interfaces should depend on abstractions not on specifics.  According to Wikipedia, the principle states:

A. High-level modules should not depend on low-level modules. Both should depend on abstractions.
B. Abstractions should not depend upon details. Details should depend upon abstractions.

Of course the second part of this principle is impossible if read literally.  You can't have an abstraction until you know what details are to be covered, and so the abstraction and details are both co-dependent.  If the covered details change sufficiently the abstraction will become either leaky or inadequate and so it is worth seeing these as intertwined to some extent.

The focus on abstraction is helpful because it suggests that the interface contract should be designed in such a way that neither side really has to understand any internal details of the other in order to make things work.  Both sides depend on well-encapsulated API's and neither side has to worry about what the other side is really doing.  This is what is meant by details depending on abstractions rather than the other way around.

This concept is quite applicable beyond object oriented programming because it covers a very basic aspect of API contract design, namely how well an API should encapsulate behavior.

This principle was first formulated in its current form within the object-oriented programming paradigm, but it is generally applicable elsewhere.

SQL as an Abstraction Layer, or Why RDBMS are Still King


There are plenty of reasons to dislike SQL, such as the fact that nulls are semantically ambiguous.  As a basic disclaimer I am not holding SQL up to be a paragon of programming languages or even db interfaces, but I think it is important to discuss what SQL does right in this regard.

SQL is generally understood to be a declarative language which approximates relational mathematics for database access purposes.  With SQL, you specify what you want returned, not how to get it, and the planner determines the best way to get it.  SQL is thus an interface language rather than a programming language per se.  With SQL, you can worry about the logical structure, leaving the implementation details to the db engine.

SQL queries are basically very high level specifications of operations, not detailed descriptions of how to do something efficiently.  Even update and insert statements (which are by nature more imperative than select statements) leave the underlying implementation entirely to the database management system.

I think that this, along with the many concessions the language has made to real-world requirements (such as bags instead of sets and the addition of ordering to bags), largely accounts for the success of the language.  SQL, in essence, encapsulates a database behind a mature mathematical, declarative model in the same way that JSON and REST do (in a much less comprehensive way) in many NoSQL db's.  In essence SQL provides encapsulation, interface, and abstraction in a very full-featured way, and this is why it has been so successful.


SQL Abstraction as Imperfect


One obvious problem with treating SQL as an abstraction layer in its own right is that one is frequently unable to write details in a way that is clearly separate from the interface.  Often storage tables are hit directly, and therefore there is little separation between logical detail and logical interface, and so this can break down when database complexity reaches a certain size.  Approaches to managing this problem include using stored procedures or user defined functions, and using views to encapsulate storage tables.
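
As a small, hedged illustration of the second approach (the names here are hypothetical, not from any particular schema), a view can stand between clients and the storage table so the table is free to change:

CREATE TABLE invoice_storage (
    id          serial primary key,
    invnumber   text not null,
    amount      numeric not null,
    created_at  timestamp not null default now()
);

-- Clients are granted access to the view only; the storage table can later
-- be renamed, split, or partitioned without changing the interface.
CREATE VIEW invoice AS
    SELECT id, invnumber, amount FROM invoice_storage;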

Stored Procedures and User Defined Functions Done Wrong


Of the above methods, stored procedures and functional interfaces frequently have bad reputations because of the bad experiences many people have had with them.  These include developers pushing too much logic into stored procedures, and the fact that defining functional interfaces in this way usually produces a very tight binding between database code and application code, often leading to maintainability problems.

The first case is quite obvious, and includes the all-too-frequent case of trying to send emails directly from stored procedures (always a bad idea).  This mistake leads to certain types of problems, including the fact that ACID-compliant operations may be mixed with non-ACID-compliant ones, leading to cases where a transaction can only be partially rolled back.  Oops, we didn't actually record the order as shipped, but we told the customer it was.....  MySQL users will also note this is an argument against mixing transactional and nontransactional backend table types in the same db.....  However that problem is outside the scope of this post.  Additionally, MySQL is not well suited for many applications against a single set of db relations.

The second problem, though, is more insidious.  In the way stored procedures and user defined functions are traditionally used, the application has to be deeply aware of the database's interface, yet the two are rolled out separately, leading to the possibility of service interruptions and a need to very carefully and closely time the rollout of database changes with application changes.  As more applications use the database, this becomes harder and the chance of something being overlooked becomes greater.

For this reason the idea that all operations must go through a set of stored procedures is a decision fraught with hazard as the database and application environment evolves.  Typically it is easier to manage backwards-compatibility in schemas than it is in functions and so a key question is how many opportunities you have to create new bugs when a new column is added.

There are, of course, more hazards which I have dealt with before, but the point is that stored procedures are potentially harmful and a major part of the reason is that they usually form a fairly brittle contract with the application layer.  In a traditional stored procedure, adding a column to be stored will require changing the number of variables in the stored procedure's argument list, the queries to access it, and each application's call to that stored procedure.  In this way, they provide (in the absence of other help) at best a leaky abstraction layer around the database details.  This is the sort of problem that dependency inversion helps to avoid.
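
A hedged sketch of that brittleness, using hypothetical names rather than any real LedgerSMB code:

CREATE TABLE vendor (
    id    serial primary key,
    name  text not null,
    email text
);

-- Version 1 of the interface:
CREATE FUNCTION vendor_save(in_name text, in_email text)
RETURNS int LANGUAGE sql AS $$
    INSERT INTO vendor (name, email) VALUES ($1, $2) RETURNING id
$$;

-- Later, a phone column is added, so the signature has to change:
ALTER TABLE vendor ADD COLUMN phone text;

CREATE FUNCTION vendor_save(in_name text, in_email text, in_phone text)
RETURNS int LANGUAGE sql AS $$
    INSERT INTO vendor (name, email, phone) VALUES ($1, $2, $3) RETURNING id
$$;

-- Every caller of the two-argument version now has to be found and updated
-- (or the old signature kept around as a wrapper) in step with the rollout.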

Stored Procedures and User Defined Functions Done Right


Not all stored procedures are done wrong.  In the LedgerSMB project we have at least partially solved the abstraction/brittleness issue by looking to web services for inspiration.  Our approach provides an additional mapping layer and dynamic query generation around a stored procedure interface.  By using a service locator pattern, and overloading the system tables in PostgreSQL as the service registry, we solve the problem of brittleness.
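
Concretely, the "service registry" is just the system catalog.  The locator issues a catalog lookup along these lines (a simplified form of the query shown in the exec_method code in an earlier post, with a hypothetical function name):

SELECT proname, pronargs, proargnames
  FROM pg_proc
 WHERE proname = 'company__get'
   AND pronamespace = (SELECT oid FROM pg_namespace WHERE nspname = 'public');

The proargnames column gives the declared argument names, which the locator maps onto object properties when generating the actual call, so adding an argument to a function does not by itself break every caller.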

Our approach of course is not perfect, and it is not the only possibility.  One shortcoming is that the invocation of the service locator is relatively spartan.  We intend to allow more options there in the future.  However, one thing I have noticed is that there are far fewer places where bugs can hide, and therefore development is faster and more robust.

Additionally a focus on clarity of code in stored procedures has eliminated a number of important performance bottlenecks, and it limits the number of places where a given change propagates to.

Other Important Options in PostgreSQL


Stored procedures are not the only abstraction mechanisms available from PostgreSQL.  In addition to views, there are also other interesting ways of using functions to accomplish this without insisting that all access goes through stored procedures.  In addition these methods can be freely mixed to produce very powerful, intelligent database systems.

Such options include custom types, written in C, along with custom operators, functions and the like.  These would then be stored in columns and SQL can be used to provide an abstraction layer around the types.  In this way SQL becomes the abstraction and the C programs become the details.  A future post will cover the use of ip4r in network management with PostgreSQL db's as an example of what can be done here.
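
As a hedged stand-in for the ip4r example (which will get its own post), the same pattern can be shown with PostgreSQL's built-in inet type, which is itself implemented in C; the table here is hypothetical:

CREATE TABLE host_assignment (
    hostname text primary key,
    addr     inet not null
);

-- SQL is the abstraction: the caller asks a logical question, and the
-- C-level type and its operators handle the details.
SELECT hostname
  FROM host_assignment
 WHERE addr << '10.0.0.0/8'::inet;   -- "is contained within"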

Additionally, things like triggers and notifications can be used to ensure that appropriate changes trigger other changes in the same transaction or, upon transaction commit, hand off control to other programs in subsequent transactions (allowing for independent processing and error control for things like sending emails).
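
For example (hypothetical names again), a trigger can queue a notification that is delivered only if the transaction commits, so a separate listener process can send the email in its own transaction:

CREATE TABLE orders (
    id       serial primary key,
    customer text not null
);

CREATE FUNCTION notify_new_order() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    -- NOTIFY payloads are only delivered on commit, so a rolled-back
    -- order never generates an email.
    PERFORM pg_notify('new_order', NEW.id::text);
    RETURN NEW;
END;
$$;

CREATE TRIGGER order_notify
    AFTER INSERT ON orders
    FOR EACH ROW EXECUTE PROCEDURE notify_new_order();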

Recommendations


Rather than specific recommendations, the overall point here is to look at the database itself as an application running in an application server (the RDBMS) and to design it as an application with an appropriate API.  There are many ways to do this, from writing components in C and using SQL as an abstraction mechanism, to writing things in SQL and using stored procedures as the mechanism.  One could even write code in SQL and still use SQL as the abstraction mechanism.

The key point however is to be aware of the need for discoverable abstraction, a need which to date things like ORMs and stored procedures often fill very imperfectly.  A well designed db with appropriate abstraction in interfaces, should be able to be seen as an application in its own right, engineered as such, and capable of serving multiple client apps through a robust and discoverable API.

As with all things, it starts by recognizing the problems and putting solutions as priorities from the design stage onward.

Tuesday, February 12, 2013

Building SOLID Databases: Interface Segregation, or Keep Stored Procedures Simple

As far as interfaces go, we are talking specifically about functional interfaces (UDF's, or "User Defined Functions", and stored procedures).  Raw interfaces like SQL queries have very different profiles of issues.  This is one principle that more or less takes care of itself in most databases due to existing concerns regarding normalization and the like.  Subtle violations are quite possible, however; they can be avoided by careful design, and the signs are relatively easy to spot.  Problems can occur in both purely relational and object-relational designs.

Interface Segregation in Application Programming


In application programming, interface segregation is used to ensure that dissimilar interfaces are not too tightly coupled, posing maintenance problems.  The canonical example of a violation is the actual case of the Xerox print system job class which did everything from printing to collation and stapling of papers.  The problem is simply that the methods would become sufficiently entangled that small changes would require significant cascading interventions elsewhere in the system.

Interface segregation is a principle which suggests that the basic abstractions should be internally cohesive but externally less so, such that cascading changes are minimized.  It therefore goes hand in hand with the single responsibility principle but focuses on the interface level rather than the class level.

Interface Segregation in the Database


We'd like to think that interface segregation is a solution to a problem that only affects applications.  We really would.  However, many of the bad experiences people have had with stored procedures are in fact violations of this rule.

In general, my experience has been that stored procedure interfaces (or UDFs returning sets) work well when they consist of a single large query with some minor support logic, and become much harder to work with when they are many smaller queries chained together.  When a stored procedure or UDF tries to do too much, that's where problems arise, and rather than counting screen lines, the number of individual statements provides a better estimate of reducible complexity.

When a stored procedure or UDF tries to do too much,  maintainability problems occur because functionality is intertwined in a way which exactly mirrors the problems found in the application layer.  This gives people bad experiences....

An Example of a Violation


In LedgerSMB 1.3 we introduced a number of stored procedures for payment management.  Each has some difficulties associated with it, so what I am talking about here is not necessarily a condemnation of one interface over another, but one of them specifically violates this principle by combining support for prepayments with support for paying existing invoices, even though the input requirements are different and the two cases are handled dissimilarly in the underlying database.

This single payment interface, which can support either an overpayment or a payment of an invoice, is difficult to maintain because the input complexity is much larger than necessary.  In fact the function requires nearly 20 inputs, versus about 12 for each interface if these were split up, and about 20 statements as well, versus far fewer if they were split (additional refactoring might be possible in that case too, further reducing the number of statements through consolidation).  This is a piece of code I tend to dread fixing problems in.  It works well currently, but there is always a fear that I am breaking something I am not aware of, and test cases only get you so far.  Fortunately we haven't had to change much there, but all our payment interfaces are up for redesign when we get back around to them.

Danger Signs


The most obvious danger sign that a stored procedure is trying to do too much is what I call "reducible complexity."  The ideal stored procedure in my view is a single very large query with some minor supporting logic around it.  Reducible complexity is bad and should be reduced.  A single large query can be debugged much more easily than a series of shorter ones because one can take advantage of the highly structured nature of SQL.  Consolidating queries thus increases the complexity locally in the SQL statement but reduces it over the scope of the function.

As a blanket rule of thumb, my thinking is that if you have more than about 5 queries in a stored procedure it is worth taking a look, and if you have more than 10 or 20, something definitely needs revisiting.

A large number of small queries, or worse, a large number of large queries, is a sign that something is amiss.  The first step is to consolidate queries to the extent possible.  However, if a large number of queries persists, the immediate question is what can be done to break the logic into separate interfaces which may be used independently.
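
As a hedged illustration with hypothetical tables, the kind of consolidation meant here is replacing a loop of small per-row queries with one set-oriented statement:

-- Instead of looping over accounts and running
--     SELECT sum(amount) FROM gl_line WHERE account_id = ...;
-- once per account, compute all the balances in a single query:
SELECT a.id,
       a.accno,
       coalesce(sum(l.amount), 0) AS balance
  FROM gl_account a
  LEFT JOIN gl_line l ON l.account_id = a.id
 GROUP BY a.id, a.accno
 ORDER BY a.accno;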

A second warning sign is a stored procedure that requires a very large number of inputs.  "Large" here is best judged relative to the distribution across other interfaces; it is not an absolute guide.

Recommendations


I am a very large fan of using functional interfaces (stored procedures and user-defined functions) to grant access to the data, but such interfaces must be kept as simple as possible.  Simple in this case is defined by complexity that can be reduced by consolidation of logic at least as much as it is defined by complexity that can be reduced by breaking a function into several components.

Friday, February 8, 2013

Building SOLID Databases: Liskov Substitution Weirdness

If the Open/Closed principle starts looking a little strange as applied to object-relational design, the Liskov Substitution Principle applies in almost exactly the opposite way in the database as in applications.  The reasons only become clear once one explores this principle, sees the limits on it, and investigates the realities of database development.  The canonical examples of LSP violations in application programs do not violate the principle at all in the database.  As always the reason is that databases model different things than applications, and so their constraints are different.

The Formal Definition of the Liskov Substitution Principle


Let φ(x) be a property provable about objects x of type T. Then φ(y) should be provable for objects y of type S, where S is a subtype of T.

The Liskov Substitution Principle in Application Programming


The Liskov Substitution Principle is important because it ensures provability of state and behavior for subtypes.  This means you have to look at the full constraints that operate on a class method (excluding constructors I think, which operate under a different set of constraints).

In general, for the LSP to be satisfied, preconditions cannot be strengthened, postconditions cannot be weakened, and invariants must be maintained not only in inherited methods but also non-inherited ones (this is broad enough to include the history constraint).  The typical problem this addresses is the square-rectangle problem, which is a well-understood problem in object-oriented programming, and one which calls into question certain promises of OOP as a whole.

The square-rectangle problem establishes a problem of irreducible complexity in object-oriented programming.  The typical examples are either "A square is-a rectangle" or "A circle is-an ellipse."  Mathematically both of these statements are true, but implementing them in an object-oriented framework adds a great deal of complexity.  The reason is that the mathematical hierarchy has no concept of a change of state which preserves invariants, and so the object hierarchy ends up being complicated by the need to shim in what is notably lacking in the geometric hierarchy.  This leads to either an explosion of abstract classes, an explosion of object metadata (can this rectangle be stretched -- always 'no' for a square? can it be scaled?), or a loss of the sort of functionality we typically want from inheritance.

On the database level, these problems mostly don't exist in relational designs but they do exist in object relational designs, only differently (more on that below).

The Liskov Substitution Principle in application programming operates primarily to preserve assumptions regarding state changes in an object when an interface is called.  If a square (defined as a rectangle where X and Y measurements are the same) can be stretched, it is no longer a square.  On the other hand, if we can't stretch it, it may not fulfill the contract of the rectangle.

When A Square Is-A Rectangle Subtype


The square-rectangle problem can be turned on its head to some extent with the question of when this is-a relationship is valid.  In fact the is-a relationship is valid for immutable objects.  It is perfectly valid to have a class ImmutableSquare be a subclass of ImmutableRectangle.

Secondly, the subtyping can be valid if sufficient information is attached to the class to specify the invariants.  For example we might have a class with attributes "can_stretch" and "can_scale", where setting can_stretch off (as it would always be in a square) ensures that length-to-width proportions are always preserved.  The problem here is that the subclass invariants require support in the superclass, and this leads to a lot of complexity in implementing the superclass, as well as fundamental limits on what can then be subclassed.

A Look at Database Limitations


In a normal relational database, the LSP is always met.  Domain-level constraints only apply to storage and not to calculation outputs, for example.

In database systems where tables instantiate types and substitutability is not anticipated (Oracle, DB2), then the LSP applies in the same way it does in application programming.  In the cases of table inheritance (Informix, PostgreSQL), things start to get a little weird.

For the purpose of stored data, the LSP is generally satisfied because constraints are inherited and strengthened.  A subtype is simply a sub-domain of the parent type possibly with some additional attributes.

The Square-Rectangle Problem in Object-Relational Databases


The Square-Rectangle problem is only even a potential problem in object-relational databases.  Purely relational designs can never run into this issue.  In object-relational designs a few very specific issues can occur, depending entirely on how object-relational functionality is implemented at the database level.  In databases like Oracle and DB2, the LSP applies pretty much as is.  In Informix and PostgreSQL, the problems faced are actually somewhat different and lead to new classes of anomalies that need to be considered.  These include most prominently update anomalies when parent tables are updated.  These do not necessarily have easy workarounds.

Consider the following table structure:

CREATE TABLE my_rectangle (
    id serial primary key,
    height numeric,
    width numeric
);

CREATE TABLE my_square (
    check (height = width)
) INHERITS (my_rectangle);

Now, we may insert a bunch of squares and rectangles, and we could have checks.  Moreover we could have custom triggers to verify referential integrity for rows referencing my_rectangle and my_square.

So suppose we use update on my_rectangle to double the height of every rectangle in the system:

UPDATE my_rectangle SET height = height * 2;

Oops, we can't do that.  While rows in my_rectangle will happily be adjusted, rows in the child table my_square will not, and you will get an error.  This error can be worked around in a few ways, each of which has problems:

  1. Disallow updates, only allow inserts and deletes.  This makes referential integrity harder to manage when the row must be deleted and re-inserted.
  2. Use a trigger to rewrite an update into a delete plus an insert into either the current or the parent table, depending on the constraints (a sketch follows below).  This makes RI harder to manage in that the procedure MUST disable custom RI triggers first, which requires that it be aware of every custom RI trigger in the system.
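
A minimal sketch of the second approach follows.  It is illustrative only, and as noted it does nothing about custom RI triggers, which is exactly where this approach gets painful:

CREATE FUNCTION my_square_rewrite() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    -- Remove the old row, then re-insert the new values into whichever
    -- table whose constraints they now satisfy.
    DELETE FROM my_square WHERE id = OLD.id;
    IF NEW.height = NEW.width THEN
        INSERT INTO my_square (id, height, width)
            VALUES (NEW.id, NEW.height, NEW.width);
    ELSE
        INSERT INTO my_rectangle (id, height, width)
            VALUES (NEW.id, NEW.height, NEW.width);
    END IF;
    RETURN NULL;  -- suppress the original UPDATE on this row
END;
$$;

CREATE TRIGGER my_square_update
    BEFORE UPDATE ON my_square
    FOR EACH ROW EXECUTE PROCEDURE my_square_rewrite();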

Conclusions


How the Liskov Substitution Principle applies depends quite a bit on the specifics of how an object-relational database system intersects tables and objects.  In general, in a purely relational design you will never need to worry about it, but in an object-relational design there are some nasty corner cases that can come up, particularly where a query may operate on heterogeneous subtypes as a set.  This is, perhaps, the only one of the SOLID principles which is O-R specific, and it hits the db in different ways than the application because the db operates on different principles.