Thursday, August 18, 2016

PostgreSQL vs Hadoop

So one of the clients I do work with is moving a large database from PostgreSQL to Hadoop.  The reasons are sound: volume and velocity are major issues for them, and while PostgreSQL is not going away in their data center, their industry has far more Hadoop usage and tooling than PostgreSQL tooling for life science analytics (Hadoop is likely to replace both this PostgreSQL instance and, hopefully, a massive amount of data on NFS).  This has provided an opportunity to think about big data problems and solutions and their implications.  At the same time, I have seen as many people moving from Hadoop to PostgreSQL as the other way around.  And no, LedgerSMB is never likely to use Hadoop as a backend; it is definitely not the right solution to any of our problems.

Big data problems tend to fall into three categories: managing an ever-increasing volume of data, managing an increasing velocity of data, and dealing with a greater variety of data structure.  It's worth noting that these are categories of problems, not specific problems themselves, and the problems within the categories are sufficiently varied that there is no one solution for everyone.  Moreover, these solutions are hardly without their own significant costs.  All too often I have seen programs like Hadoop pushed as a general solution without attention to these costs, and the result is usually something that is overly complex, hard to maintain, possibly slow, and that doesn't work very well.

So the first point worth noting is that big data solutions are specialist solutions, while relational database solutions for OLTP and analytics are generalist solutions.  Usually those who are smart start with the generalist solutions and move to the specialist solutions only when they know up front that the specialist solutions address a specific problem they actually have.  And no, Hadoop does not make a great general ETL platform...

One of the key things to note is that Hadoop is built to solve all three problems simultaneously.  This means that you effectively buy into a lot of other costs if you are trying to solve only one of the V problems with it.

The single largest cost comes from the solutions to the variety of data issues.  PostgreSQL and other relational data solutions provide very good guarantees on the data because they enforce a lack of variety.  You force a schema on write, and if that schema is violated, you throw an error.  Hadoop enforces a schema on read, so you can store data, then try to read it, and get a lot of null answers back because the data didn't fit your expectations.  Ouch.  On the other hand, schema on read is very helpful when trying to make sense of a lot of unstructured data.

Now, the solutions to check out first if you are faced with volume and velocity problems include Postgres-XL and similar shard/clustering solutions, but these really require good data partitioning criteria.  If your data set is highly interrelated, they may not be a good fit, because cross-node joins are expensive.  You also wouldn't use them for smallish data sets, certainly not anything under a TB, since the complexity cost of these solutions is not lightly undertaken.

Premature optimization is the root of all evil, and big data solutions have their place.  However, don't use them just because they are cool, new, or resume-building.  They are specialist tools, and overuse creates more problems than underuse.

Sunday, August 14, 2016

Forthcoming new scalable job queue extension

For those of you who don't know, I now spend most of my time doing more general PostgreSQL consulting and a fair bit of time still on LedgerSMB.  One of my major projects lately has been a large scientific computing platform currently run on PostgreSQL but, due to the volume and velocity of the data, being moved to Hadoop (the client maintains other fairly large PostgreSQL instances with no intention of moving them, by the way).

With this client's permission I have decided to take a lot of the work I have done in optimizing their job queue system and create a PostgreSQL extension from it.  The job queue currently runs tens of millions of jobs per day (meaning twice that number of write queries, plus a fair number of read queries), and it is one of the most heavily optimized parts of the system, so this extension will be based on a large number of lessons learned about what is a surprisingly hard problem.

It is worth contrasting this to pg_message_queue, of which I am also the author.  pg_message_queue is intended as a light-weight, easy-to-use message queue extension that one can plug into other programs to solve common problems where notification and message transfer are the main concerns.  This project will be an industrial-scale job queuing system aimed at massive concurrency.  As a result, simplicity and ease of use take second place to raw power and performance under load.  In other words, here I am not afraid to assume that the DBA and programming teams know what they are doing and have the expertise to read the manual and implement appropriately.

The first version (1.x) will support all supported versions of PostgreSQL and make the following guarantees:


  1. Massively multiparallel, non-blocking performance (we currently run it with 600+ worker-process connections to PostgreSQL).
  2. Partitioning, coalescing, and cancelling of jobs, similar in some ways to TheSchwartz.
  3. Exponential pushback based on the number of times a job has failed (see the sketch after this list).
  4. Jobs may be issued again after deletion, but this can always be detected and bad jobs pruned.
  5. Optional job table partitioning.
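
To illustrate item 3, here is a minimal sketch of the exponential pushback idea.  This is my own illustration rather than the extension's actual code, and the base delay is an assumed value:

#!/usr/bin/perl
# Illustrative sketch only: each failure doubles the delay before the
# job becomes eligible to run again.
use strict;
use warnings;

sub next_retry_delay {
    my ($failure_count, $base_seconds) = @_;
    $base_seconds //= 60;    # assumed base delay of one minute
    return $base_seconds * (2 ** $failure_count);
}

# 0 failures -> 60s, 1 -> 120s, 2 -> 240s, 3 -> 480s
print next_retry_delay($_), "\n" for 0 .. 3;
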
The first client written will rely on hand-coded SQL along with DBIx::Class's schema objects.  This client will guarantee that:

  1. Work done by a job module always succeeds or fails within a transaction (see the sketch after this list).
  2. A job notifier class will be shown.
  3. Pruning of completed jobs will be provided via the Perl module and a second query.
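
As a rough sketch of the first guarantee (purely illustrative; the real client will use DBIx::Class schema objects rather than raw DBI):

#!/usr/bin/perl
# Illustrative only: the work a job module does is wrapped in a single
# transaction, so it either commits completely or rolls back completely.
use strict;
use warnings;
use DBI;

sub run_job {
    my ($dbh, $job, $work) = @_;
    $dbh->begin_work;
    my $ok = eval { $work->($job); 1 };
    if ($ok) {
        $dbh->commit;
    } else {
        $dbh->rollback;
        warn "job failed and will be retried: $@";
    }
    return $ok;
}
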
The history of this is that it came from a major client's use of TheSchwartz, which they outgrew for scalability reasons.  While the basic approach is thus compatible, the following changes have been made:

  1. Job arguments are in JSON format rather than in Storable format in bytea columns.
  2. Highly optimized performance on PostgreSQL.
  3. Coalescing is replaced by a single integer cancellation column.
  4. Jobs may be requested in batches of various sizes.
2.x will support PostgreSQL 9.5+ and dispense with the need for both advisory locks and rechecking.  I would like to support some sort of graph management as well (i.e. a graph link that goes from one job type to another which specifies "for each x create a job for y" type semantics).  That is still all in design.
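
For what it is worth, one plausible mechanism for dropping advisory locks and rechecking on 9.5+ is SELECT ... FOR UPDATE SKIP LOCKED.  This is my own guess rather than the settled design, and the table and column names below are hypothetical:

#!/usr/bin/perl
# Illustrative only: many workers can each claim a distinct job without
# blocking one another and without re-checking the claim afterwards.
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:Pg:dbname=jobqueue', '', '', { RaiseError => 1 });
$dbh->begin_work;
my ($job_id, $args) = $dbh->selectrow_array(q{
    SELECT id, args
      FROM job_queue
     WHERE run_after <= now()
     ORDER BY id
     LIMIT 1
       FOR UPDATE SKIP LOCKED
});
# ... run the job here and mark it complete if one was claimed ...
$dbh->commit;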

Friday, August 5, 2016

use lib '.' considered harmful (exploits discussed)

With the discussion of CVE-2016-1238, a quick and easy fix that has been suggested for broken code is to add the following line to the top of the affected Perl scripts.  Note that this applies to Perl however it is run, whether PL/PerlU, plain perl, or something else:

use lib '.';

In some corners, this has become the goto solution for the problem (pun quite deliberate).  It works and gets the job done, but it introduces subtle, hidden, and extremely dangerous problems in the process.

For those interested in the concerns specific to PostgreSQL, these will be discussed near the end of this article.

Security and the Garden Path


I am borrowing an idea here from linguistics, the garden path sentence, because I think it highlights a lot of subtle security problems.  Consider the newspaper headline "Number of Lothian patients made ill by drinking rockets."  It starts off simply enough, and when you get to the end you realize you must have misread it (and you probably did: the correct reading is that the number of patients made ill by drinking is rocketing upward, not that some people got sick because they drank hydrazine).  The obvious reading and the intended reading diverge, and this leads to a lot of fun in linguistic circles.

The same basic problem occurs with regard to security.  Usually security problems occur for one of two reasons: either people do something obviously insecure (plain-text authentication for ftp users where it matters), or they do something that looks secure on the surface but behind the scenes does something unexpected.

Perl has a few surprises here because parsing and running a script is a multi-pass process, but we tend to read it as a single pass.  Normally this doesn't cause real problems, but in certain cases very subtle dangers lurk.  Here, with use lib '.', it is possible to inject code into a running program as long as an attacker can get a file placed in the current working directory of that program.

Relative paths, including the current working directory, do have legitimate use cases, but the problems and pitfalls must be understood before selecting this method.

What the lib pragma does


Perl looks for files to require or include based on a globally defined array of paths called @INC.  The lib pragma stores a copy of the original @INC at first use, and then ensures that the directory specified occurs at the start of the search order.  So directories specified with use lib are searched before the default library directories.  This becomes important as we look at how Perl programs get executed.
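
A quick way to see this for yourself (a minimal sketch, nothing more):

#!/usr/bin/perl
# '.' is prepended to @INC by 'use lib', so it is printed before the
# system library directories that follow it.
use strict;
use warnings;
use lib '.';

print "$_\n" for @INC;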

How Perl runs a program


Perl runs a program in two passes.  First it creates the parse tree, then it runs the program.  This is a recursive process, and because of how it works, it is possible for malicious code that gets accidentally run during this process to transparently inject code into this portion (and others) of the Perl process.

Keep in mind that this makes Perl a very dynamic language which, on one hand, has serious garden path issues, but which, on the other, is amazingly flexible.

During the parse stage, Perl systematically works through the file, generating a parse tree and running any "BEGIN" blocks, "use" statements, and "no" statements.  This means that injected code can run even if later syntax errors appear to prevent the bulk of the program from running at all, or if earlier errors cause a run-time exception.

After this process finishes, Perl executes the parse tree that results.  This means that Perl code can rewrite the parse tree before your code is ever actually run, and code can be inserted into that part of the process.
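
A small demonstration of the two-pass behavior (a minimal sketch; the syntax error is deliberate):

#!/usr/bin/perl
use strict;
use warnings;

# This block runs during the parse stage, so the warning is printed even
# though the deliberate syntax error below means the program as a whole
# never reaches its run phase.
BEGIN { warn "compile-time code already ran\n"; }

my $x = ;    # deliberate syntax error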

Transparent code injection during 'use'


Consider a simple Perl script:

#!/usr/bin/perl

use lib '.';
use Cwd;
use 5.010;
use strict;
use warnings;

say getcwd();

Looks straightforward.  And in most cases it will do exactly what it looks like it does: it loads the standard Cwd module and prints out the current working directory.

However, suppose I run it in a different directory, where I add two additional files:

Cwd.pm contains:

package Cwd;

use Injected;

1;


Hmmmm, that doesn't look good.  What does Injected.pm do?


package Injected;
use strict;

sub import {
   local @INC = @INC;
   my ($module) = caller;
   warn $module;
   delete $INC{'Injected.pm'};
   delete $INC{"$module.pm"};
   @INC = grep { $_ ne '.' } @INC;
   eval "require $module";
   warn "Got you!, via $module";

}

1;


So when the fake Cwd imports Injected, Injected deletes itself from %INC, deletes its caller from %INC too, reloads the correct caller (not from the current working directory), and then executes some code (here a harmless warning).

The fake Cwd.pm then returns success, and test.pl runs Cwd->import(), which is now the correct one; but we have already run unintended code that could, in theory, do anything.

Any Perl program capable of being run in arbitrary directories which has this line in it (use lib '.') is subject to arbitrary code injection attacks using any module in the dependency tree, required or not.

Instead, do the opposite (where you can)


As a standard part of the boilerplate in any secure Perl program, I strongly recommend adding the following line to the top of any script, as long as modules don't add '.' back in behind your back (which would be extremely rare):

no lib '.';

Note that this strips out the current working directory even if it is supplied as a command-line argument, so it may not be appropriate in all cases.  Use common sense, do some testing, and document this as desired behavior.  Note that one can still invoke with perl -I./. in most cases, so it is possible to turn this safety off.  Additionally, if you put this at the start of your module, something you include could possibly put '.' back in.

Safer Alternatives


In a case where you need a path relative to the script being executed, FindBin is the ideal solution.  It gives you a fixed path relative to the script being run, which is sufficient for most cases of an application installed on a system as a third-party package.  So instead you would do:

use FindBin;
use lib $FindBin::Bin;

Then the script's directory will be in the include path.

PL/PerlU notes:


I always add the explicit rejection of the cwd in my PL/PerlU functions.  However, if someone has a program that is broken by CVE-2016-1238-related fixes, it is possible they would add a use lib '.' to a Perl module, which is a bad idea.  As discussed in the previous post, careful code review is required to be absolutely safe.  Additionally, it is a very good idea to periodically check the PostgreSQL data directory for Perl modules, whose presence would indicate a compromised system.
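
For the record, the boilerplate I mean looks something like this (a minimal sketch; the function itself is a throwaway example):

CREATE OR REPLACE FUNCTION cwd_rejected() RETURNS text LANGUAGE plperlu AS
$$
# Explicitly reject the current working directory before loading anything.
no lib '.';
use strict;
use warnings;
use Cwd;    # now guaranteed to come from the system paths, not the cwd

return getcwd();
$$;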

Tuesday, August 2, 2016

PostgreSQL, PL/Perl, and CVE-2016-1238

This post is about the dangers of writing user-defined functions in untrusted languages, and specifically about how to avoid CVE-2016-1238-related problems when writing PL/PerlU functions.  The fix is simple and straightforward, and it is important to apply it in PL/PerlU stored procedures and user-defined functions for reasons I will discuss.  This post discusses actual exploits; the severity of being able to inject arbitrary Perl code into the running database backend is a good reason to be disciplined and careful about the use of this language.

It is worth saying at the outset that I have been impressed by how well sensible design choices in PostgreSQL generally mitigate problems like this.  In essence, you have to be working in an environment where a significant number of security measures have been bypassed, intentionally or not.  This speaks volumes about the security of PostgreSQL's design, since it is highly unlikely that this particular attack vector was an explicit concern.  In other words, these decisions make many attacks, even against untrusted languages, far more difficult than they would otherwise be.

The potential consequences are severe enough, however, that secure coding is particularly important in this environment even with the help of the secure design.  And in any language it is easy to code yourself into corners where you aren't aware you are introducing security problems until they bite you.

The current implementation, as we will see, already has a fair bit of real defense in depth behind it.  PostgreSQL is not, by default, vulnerable to the problems in the CVE noted in the title.  However, with a little recklessness, it is possible to open the door to real problems, and it is possible for these problems to be hidden from the code reviewer by error rather than actual malice.  Given the seriousness of what can happen if someone can run arbitrary code in the PostgreSQL back-end, my view is that all applicable means should be employed to prevent problems.

PL/PerlU can be used in vulnerable ways, but PL/Perl (without the U) is by design safe.  Even with PL/PerlU it is worth noting that multiple safety measures have to be bypassed before vulnerability becomes a concern.  This is not about any vulnerabilities in PostgreSQL, but what vulnerabilities can be added through carelessly writing stored procedures or user-defined functions.

There are two lessons I would like people to take away from this.  The first is how much care has been taken with regard to security in PostgreSQL's design.  The second is how easily one can accidentally code oneself into a corner.  PL/PerlU is often the right solution for many problem domains and it can be used safely, but some very basic rules need to be followed to stay out of trouble.

Extending SQL in PostgreSQL using Arbitrary Languages


PostgreSQL has a language handler system that allows user-defined functions and stored procedures to be written in many different languages.  Out of the box, Python, Tcl, C, and Perl are supported.  Perl and Tcl come in trusted and untrusted variants (see below), while Python and C are always untrusted.

There are a large number of external language handlers as well, and it is possible to write one's own.  Consequently, PostgreSQL effectively allows SQL to be extended by plugins written in any language.  The focus here will be on untrusted Perl, or PL/PerlU.  Untrusted languages carry certain inherent risks, and these become important, as here, when there is concern about specific attack vectors.

Trusted vs Untrusted Languages, and what PL/Perl and PL/PerlU can do


PostgreSQL allows languages to be marked as 'trusted' or 'untrusted' by the RDBMS.  Trusted languages are made available for anyone to write functions in, while untrusted languages are restricted to database superusers.  Trusted languages are certified by the developers not to interact with file handles and not to engage in any activity other than manipulating data in the database.

There is no way to 'use' or 'require' a Perl module in PL/Perl (trusted), so it is largely irrelevant for our discussion.  PL/PerlU, however, can access anything else on the system, and it does so with the same permissions as the database manager itself.  Untrusted languages like PL/PerlU make it possible to extend SQL in arbitrary ways by injecting (with the database superuser's permission!) code into SQL queries written in whatever languages one wants.  Untrusted languages make PostgreSQL one of the most programmable relational databases in the world, but they also complicate database security in important ways which are well beyond the responsibility of PostgreSQL as a project.

Like test harnesses, untrusted languages are insecure by design (i.e. they allow arbitrary code injection into the database backend), and this issue is as heavily mitigated as possible, making PostgreSQL one of the most security-aware databases on the market.

To define a function in an untrusted language, one must be a database superuser, so the PostgreSQL community places primary trust in the database administration team not to do anything stupid, which is generally a good policy.  This article is largely intended to help that policy be effective.

Paths, Permissions, and CVE-2016-1238


The CVE referenced in the title of this article and section is one which allows attackers to inject code into a running Perl routine by placing it in the current working directory of the Perl process.  If an optional dependency of a dependency is placed in the current working directory, it may be included in the current Perl interpreter without the user's knowledge.  This is a problem primarily because Perl programs are sufficiently complex that people rarely understand the full dependency tree of what they are running.

If the current working directory ends up being one which includes user-writeable data of arbitrary forms, the problem exists.  If not, then one is safe.

The problem however is that changing directories changes the include path.  You can demonstrate this by doing as follows:

Create a file ./test.pl that contains:

use test;
use test2;

./test.pm is:

chdir 'test';
1;

test/test2.pm is:

warn 'haxored again';
1;

What happens is that the use test statement includes ./test.pm, which changes the current directory, so when you use test2, you are including test/test2.pm.  This means that if at any point during the Perl interpreter's life you are in a world-writable directory and including or requiring files, you are asking for trouble.


Usual Cases for PL/PerlU


The usual use case of PL/PerlU is where you need a CPAN module to process or handle information.  For example, you may want to return JSON using JSON.pm, or you may want to write to a non-transactional log.

For example, the most recent PL/PerlU function I wrote used basic regular expressions to parse a public document record stored in a database in order to extract information relating to patents awarded on protein sequences.  It was trivial, but it was still easier to use JSON.pm than to write JSON serialization inline (and yes, it performed well enough given the data sizes operated on).
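
To make that concrete, here is a minimal sketch of that usual case (the function and its arguments are made up for illustration):

CREATE OR REPLACE FUNCTION to_json_pair(k text, v text)
RETURNS text LANGUAGE plperlu AS
$$
use strict;
use warnings;
use JSON;    # CPAN module, not available to trusted PL/Perl

# Serialize the two text arguments as a one-key JSON object.
my ($k, $v) = @_;
return encode_json({ $k => $v });
$$;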

Are usual cases safe?  Deceptively so!


Here are a few pl/perlU functions which show the relevant environment that PL/PerlU functions run in by default:

postgres=# create or replace function show_me_inc() returns setof text language plperlu as
$$
return \@INC;
$$;
CREATE FUNCTION
postgres=# select show_me_inc();
         show_me_inc          
------------------------------
 /usr/local/lib64/perl5
 /usr/local/share/perl5
 /usr/lib64/perl5/vendor_perl
 /usr/share/perl5/vendor_perl
 /usr/lib64/perl5
 /usr/share/perl5
 .
(7 rows)

postgres=# create or replace function get_cwd() returns text language plperlu as
postgres-# $$
postgres$# use Cwd;
postgres$# return getcwd();
postgres$# $$;
CREATE FUNCTION
postgres=# select get_cwd();
       get_cwd       
---------------------
 /var/lib/pgsql/data
(1 row)


Wow.  I did not expect such a sensible, secure implementation.  PostgreSQL usually refuses to start if non-superusers of the system have write access to the data directory.  So this is, by default, a very secure configuration, right up until the first chdir() operation...

Now, in the normal use case of a user defined function using PL/PerlU, you are going to have no problems.  The reason is that most of the time, if you are doing sane things, you are going to want to write immutable functions which have no side effects and maybe use helpers like JSON.pm to format data.  Whether or not there are vulnerabilities exploitable via JSON.pm, they cannot be exploited in this manner.

However, sometimes people do the wrong things in the database and here, particularly, trouble can occur.

What gets you into trouble?


To have an exploit in pl/perlU code, several things have to happen:


  1. Either a previous exploit must exist which has written files to the PostgreSQL data directory, or one must change directories
  2. After changing directories, a vulnerable module must be loaded while in a directory the attacker has write access to.
It is possible, though unlikely, for the first to occur behind your back.  But people ask me all the time how to send email from the database backend and so you cannot always guarantee people are thinking through the consequences of their actions.

So vulnerabilities can occur when people are thinking tactically and coding in undisciplined ways.  This is true everywhere but here the problems are especially subtle and they are as dangerous as they are rare.


An attack scenario


So suppose company A receives a text file via an anonymous FTP drop box in X12 format (the specific format is not relevant, just that it comes in with contents and a file name specified by the external user).  A complex format like X12 means they are highly unlikely to do the parsing themselves, so they implement a module loader in PostgreSQL.  The module loader operates on a file handle as follows:

On import, the module loader changes directories to the incoming directory.  In the actual call to get_handle, it opens a file handle, creates an iterator based on that, and returns it.  Nobody notices that the directory is not changed back, because the data is then loaded into the db and no other file access happens here.  I have seen design problems of this magnitude go undetected for an extended period, so coding defensively means assuming they will exist.

Next, this is re-implemented in the database as follows:

CREATE OR REPLACE FUNCTION load_incoming_file(filename text)
RETURNS int
language plperlu as
$$
use CompanyFileLoader '/var/incoming'; # oops, we left the data directory
use CompanyConfig 'our_format';        # oops, optional dependency that falls back to .
                                       # in terms of the exploit, the rest is irrelevant;
                                       # for a proof of concept you could just return 1 here
use strict;
use warnings;

my $filename = shift;
my $records = CompanyFileLoader->get_handle($filename);
while (my $r = $records->next){
    # logic to insert into the db omitted
}

return $records->count;

$$;


The relevant part of CompanyFileLoader.pm is (the rest could be replaced with stubs for a proof of concept):

package CompanyFileLoader;
use strict;
use warnings;

my @dirstack;
sub import {
    my $dir = pop;
    push @dirstack, $dir;
    chdir $dir;
}

Now, in a real-world published module this would cause problems, but in a company's internal operations the problem might not be discovered in a timely fashion.

The relevant part of CompanyConfig is:

package CompanyConfig;
use strict;
use warnings;

eval { require SupplementalConfig };

Now, if a SupplementalConfig.pm is placed in the same directory as the text files, it will get loaded and run as part of the PL/PerlU function.

Now to exploit this someone with knowledge of the system has to place this in that directory.  It could be someone internal due to failure to secure the inbound web service properly.  It could be someone external who has inside knowledge (a former employee for example).  Or a more standard exploit could be tried on the basis that some other shared module might have a shared dependency.

The level of inside knowledge required to pull that off is large but the consequences are actually pretty dire.  When loaded, the perl module interacts with the database with the same permissions as the rest of the function, but it also has filesystem access as the database server.  This means it could do any of the following:


  1. Write Perl modules into the Pg data directory to exploit this vulnerability in other contexts
  2. Delete or corrupt database files
  3. Possibly alter log files, depending on setup
  4. Many other really bad things

These risks are inherent in the use of untrusted languages: you can write vulnerable code and introduce security problems into your database.  This is one example of that, and I think the PostgreSQL team has done an extremely good job of making the platform secure.



Disciplined coding to prevent problems


The danger can be effectively prevented by following some basic rules:

All user defined functions and stored procedures in PL/PerlU should include the line:

no lib '.';

It is possible that modules could add '.' back in behind your back, but for published modules this is extremely unlikely.  Local development projects should therefore avoid use lib '.' in order to prevent this.

Secondly, never use chdir in a PL/Perl function.  Remember that you can always do file operations with absolute paths.  Without chdir, no initial exploit against the current working directory is possible through PL/PerlU.  Use of chdir circumvents important safety protections in PostgreSQL.
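
For instance, instead of chdir followed by a relative open, something like this (the path is purely illustrative) leaves the working directory alone:

# Open the incoming file by absolute path; no chdir, so the current
# working directory (the data directory) is never left.
open my $fh, '<', '/var/incoming/batch.x12'
    or die "cannot open incoming file: $!";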

Thirdly, it is important to stick to well-maintained modules.  Dangling chdirs in a module's load logic are far more likely to be found and fixed when lots of people are using the module, and a dangling chdir is a near requirement for accidental vulnerability.  Internal modules need to be reviewed both for use of optional dependencies and for dangling chdirs in their load logic.

Monday, August 1, 2016

CVE-2016-1238 and the hazards of testing frameworks

Because I have lost confidence in the approach taken by those in charge of fixing this problem, I have decided to do a full disclosure series on CVE-2016-1238.  As mentioned in a previous post, the proposed fixes for the minor versions don't even remotely fix the problem and at most provide a false sense of security.  For this reason it is extremely important that sysadmins understand how these exploits work and how to secure their systems.

A lot of the information here is not specific to this CVE but concerns the security of running tests generally.  For this reason, while this CVE is used as an example, the same basic concepts apply to PostgreSQL db testing and much more.

As I mentioned in a previous post, prove cannot currently be run safely in any directory that is world-writeable, and the current approach of making this a module problem makes this far worse, not better.  Moreover, this is not something which can be patched in prove without breaking the basic guarantee that it tests the application as it would run on the current system's Perl interpreter; if you break that, all bets are off.

All the exploits I cover in this series are exploitable on fully patched systems.  However they can be prevented by good system administration.  In every case, we will look at how system administration best practices can prevent the problem.

One key problem with the current approach is that it fails to differentiate between unknowing inclusion and user errors brought on simply by not understanding the issues.  The latter has been totally disregarded by the people attempting stop-gap patches.

Test harnesses generally are insecure by design.  The whole point is to execute arbitrary code and see what happens and with this comes a series of inherent risks.  We accept those risks because they are important to the guarantees we usually want to make that our software will perform in the real world as expected.  Moreover even the security risks inherent in test harnesses are good because it is better to find problems in an environment that is somewhat sandboxed than it is in your production environment.

So testing frameworks should be considered to be code injection frameworks and they can be vectors by which code can be injected into a tested application with system security implications.

Understood this way, testing frameworks are not only insecure by design but this is desirable since you don't want a testing framework to hide a problem from you that could bite you in production.

This is a generic problem, of course.  A database testing framework would (and should) allow the SQL injection that could happen in the tested code so that it can be tested against.  So whether you are working with Perl, Python, PostgreSQL, or anything else, the testing framework itself has to be properly secured.  And on each platform this means specific things.

In PostgreSQL, this means you need to pay close attention to certain things, such as the fact that the database tests should probably not run as a superuser, since a superuser can do things like create functions with system access.

In Perl, one of the most important things to consider is the inclusion of modules from the current working directory and the possibility that someone could do bad things there.

What Prove Guarantees


Prove guarantees that your perl code will run, in its current configuration, in accordance with your test cases.  This necessarily requires arbitrary code execution and arbitrary dependency requirements resolved in the way Perl would resolve them on your system.

Prove guarantees that the guarantees you specify in your test cases are met by your current Perl configuration.  It therefore cannot safely do any extra sandboxing for you.

How Prove Works


The basic architecture of prove is that it wraps a test harness which runs a specified program (via the shebang line), parses its output assuming it to be in the Test Anything Protocol, and generates a report from the result.  For example, if you create a file test.t:

#!/bin/bash

echo 'ok 1';
echo 'ok 2';
echo '1..2';

and run prove test.t

You will get a report like the following:

$ prove test.t 
test.t .. ok   
All tests successful.

What prove has done is invoke /bin/bash, run the file through it, parse the output, check that 2 tests were run and that both printed ok (it is a little more complex than this, but that is the gist), and let you know it worked.

Of course, usually we run perl, not bash.

An Attack Scenario


The most obvious attack scenarios occur with automated test environments that are poorly secured.  In this case, if prove runs from a directory containing material that more than one user might submit, one user may be able to inject code into another user's test runs.

Suppose user A has a hook for customization as follows:

eval { require 'custom.pl' };

Now, this is intended to run a custom.pl file, at most once, when the code is run, and the require checks the current @INC paths.  If the file doesn't exist in any of those locations, the search falls back to the current working directory, i.e. the directory the shell was in when prove was run.  If this directory is shared between users, user B can write a custom.pl file into it which will run when user A's scheduled tests are run.
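
The injected file itself can be trivial.  Purely as an illustration, user B's custom.pl might be nothing more than:

# custom.pl dropped into the shared working directory by user B.
# Anything here executes with the permissions of user A's test run.
warn "injected code running inside another user's test run";
1;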

The code would then run with the permissions of the test run and any misbehavior would tie back to user A's test run (it is harder to tie the behavior to user B).  In the event where user A's test run operates with more system permissions than user B's, the situation is quite a bit worse.  Or maybe user B doesn't even have tests run anymore for some reason.

Now, it has been proposed that prove prune its own @INC but that doesn't address this attack because prove has to run perl separately.  Additionally separation of concerns dictates that this is not prove's problem.

Behavior Considered Reasonably Safe


As we have seen, there are inherent risks to test frameworks, and they have to be properly secured given the very real fact that one is often running untrusted code on them.  However, there are a few things that really should be considered safe.  These include:


  • Running tests during installation of trusted software as root.  If you don't trust the software, you should not be installing it.
  • Running tests from directories which store only information from a single user (subdirectories of a user's home directory for example).


Recommendations for Test Systems


Several basic rules for test systems apply:

  1. Understand that test systems run arbitrary code and avoid running test cases for automated build and test systems as privileged users.
  2. Properly secure all working directories to prevent users from unknowingly sharing data or test logic
  3. Permit running as root only as part of a process installing trusted software.