Discover Where your Firefox Profile Directory is.

I've often had the challenge of knowing "Which profile am I running? Where is it loaded from?" with Firefox. Especially on Windows, where there is a dizzying maze of inconsistency, which only seems to be getting worse. Thankfully, Firefox 4.0 ( and possibly earlier ) has a solution of sorts: the about:support page.

This helpful page summarises a whole bunch of useful things that are probably the cause of your woes.

However, they've done one thing, well meaning, but completely stupid in my estimation: rather than showing you the path to the profile directory, there's just a button to open it.

Now to me, this is a big problem, because I've commonly had anything "open a directory" related simply not work, and for me it would be much, much more practical if it just told me where the directory is ( this also really gets on my goat with the download manager, because "open" pretty much never works as expected, especially for directories, and I just wish it would tell me where the damned file is instead of making me do a dance ).

So, I got busy and poked around in the guts of about:support with Firebug, and found the source code for that button in chrome://. As a result, here is a nice little scriptlet that will tell you where that location is without needing to open some external application!

javascript:alert(Services.dirsvc.get("ProfD", Ci.nsIFile).path);

When you are on the about:support page, copy and paste that string as-is into your address bar and press Enter. If you are lucky, you'll get an alert box telling you where to look.


Hopefully, this technique will help out where others fail ( such as the one odd incident I saw on IRC today where the user had run rm -r ~/.mozilla but Firefox still somehow remembered what their profile looks like ).


Pet Peeve: People who use Math Notation to explain Programming.

Sure, I get that many programming concepts have a mathematical underpinning. But do we have to have math notation that only math geeks understand? Especially on Wikipedia?

I mean, there's a mathematical equation ( or equations ) behind you walking to the shop, but to explain walking to the shop I really, really do not want to have to take several university papers to get there.

What practical purpose could there be to using it on Wikipedia?

It's not at all immediately accessible to a programmer trying to understand the programming concept being explained.

It's even less accessible to anyone who is neither a programmer nor a math geek, and face it, they're the majority of people out there.

I'd love to understand Canonical LR parsers, but the whole concept flies right out the window within 3 lines of that page, at that lovely asinine:

 [ A → B • C, a ]

Sure, I probably learnt that syntax at some stage aeons ago, but gosh, I don't have a clue what it means now, nor is there any way for me to know what it is even talking about in order to search for it.

Ironically, there's another little problem which makes using math notation completely pointless: there are many operations that are simply impractical to express in math notation, so people resort to programming notation:

action[k, b] = "shift m"

and that just makes matters worse, because that notation seems completely incoherent with the rest of it.

So is it stupid?

... or is it just me


Funniest Phishing Scam Ever.

Today I received the funniest phishing scam I think I've ever seen. They're claiming to be paying out to victims of scams! Awesome!



This is to bring to your notice that our bank (ECOBANK INTL. PLC) is
delegated by the ECOWAS/UNITED NATIONS in Central Bank to pay victims
of scam $500,000 (Five Hundred Thousand Dollars Only).

You are to send the following informations for remittance.

Your Name.___________________________
Phone .___________________________
Amount Defrauded.___________________________

Send a copy of your response with the PAYMENT CODE

     ECB/06654 $500,000 USD.
     TEL: +234 7025669085

Email: scamvictimstransfer_2010@yahoo.co.jp

Yours Faithfully,
Mrs. Rosemary Peter
Copyright 2010

I bet you won't guess what country that phone number is for.

Introducing Data::Handle

Coming to a mirror near you, soon, is Data::Handle.

What does Data::Handle do?

Data::Handle solves 2 very simple problems that occur with the __DATA__ section and the associated *DATA glob, both of them to do with multiple modules trying to access the same section.

1. Provide a reliable way to get a file-handle with the position at the start of the __DATA__ segment

  1. *DATA is really a pointer to the entire file, and not just the data segment
  2. The Perl interpreter sets the current position in the file to be after the __DATA__ line

The first time you read from *DATA this of course works fine, but reading moves the internal file cursor, and if you read the whole section, the cursor then points at EOF. For a second block of code to re-read this data without communicating with the first block, it has to rewind the file cursor back to the start before reading, and there is no natural way to know where that rewind point is.

Other modules have so far remedied this by rewinding to the start of the file and manually emulating various parts of the Perl parser to re-find the start of the __DATA__ section before re-reading its contents.

This module, however, takes a different approach, and assumes that, hopefully, the first code to read that file handle will know what it's doing and use this module to do it. This module then records the file offset the __DATA__ section began at, so from that point onwards, rewinding to the start is a trivial exercise.

And all this happens for you simply by doing:

my $handle = Data::Handle->new( __PACKAGE__ ); 

instead of doing

my $handle = do { no strict 'refs'; \*{ __PACKAGE__ . "::DATA"} };

( Note: side perk, the new syntax is simpler, more straightforward, easier to remember, and there's no dicking around with strict! ;D )

2. Provide a reliable way for 2 separate logical code units to access the same __DATA__ segment without interfering with each other

Because *DATA is a filehandle, and there is only one of them, seeking around in it can be problematic.

Especially if you have 2 code units that are trying to read it from different places. For a contrived example: prior to this module, if you wanted to go back and re-read the start of the section, or skip forwards and read something later in the section, without forgetting where you are now, you'd need a contrived dance of seek/tell. Now, you can just create another worker that will read that stuff for you, and the original handle will retain its position.

my $handle = Data::Handle->new('Foo');
while ( <$handle> ) {
    if ( $_ =~ /something/ ) {
        # get line 1.
        my $slave = Data::Handle->new('Foo');
        my $firstline = <$slave>;
    }
    # continue as normal.
}

Internally, there is a lovely dance of seek() going on there, but from an interface perspective, you don't need to know it's seeking; all you need to know is "get a reference to DATA, get data from it".

Sure, you can probably argue you could do it cleanly with lots of seek(), but that logic falls apart when you have code in 2 separate places reading the same *DATA.

It's much smarter to be defensive about it, and have some assurance that you can read the file in a safe way without something evil like this tampering with it:

my $handle = do { no strict 'refs'; \*{ __PACKAGE__ . "::DATA"} };

sub evil_function {
    my $handle = do { no strict 'refs'; \*{ __PACKAGE__ . "::DATA"} };
    seek $handle, 0, 2;    # whence 2 = SEEK_END: seek to EOF.
}

That is spooky action at a distance!

Data::Handle solves this by meticulously tracking the position in each instance, and re-seeking the file handle to where it was at the end of the last tracked read. So regardless of how much seeking around some other module did, as long as you got on the scene first, you should be unstoppable ;)


Find stale packages in Gentoo

I often get into days of clean-up mentality, and like to go through and see how old my oldest packages are, to see if I can either clean them out or recompile them to make sure they're using the latest tool-chain enhancements.

The easiest way to do this is really just to scrape the file-system mtimes; it's fast, it works, and nobody cares that it's not 100% accurate.

find -O3 /var/db/pkg -depth -type f -name "*.ebuild" -printf "%T@ %T+ %p/%f\n" | \
    sort -r -k 1 > /tmp/mtimes;

You now have a lovely list of files in /tmp/mtimes showing when the most recent incarnation of each currently installed package appeared to have been installed.

I tend to walk over the tail end of this and manually re-emerge them if I feel like it.
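As a sketch, "walking the tail end" amounts to something like this. The /tmp/mtimes.demo file is fabricated here so the example stands alone; on a real system you'd use the /tmp/mtimes generated above:

```shell
# Fabricate a tiny mtimes file so the example is self-contained; on a
# real system the find | sort pipeline above produces /tmp/mtimes for you.
printf '%s\n' \
  '1275000000.0 2010-05-28+00:00 /var/db/pkg/sys-apps/new-1.0/new-1.0.ebuild' \
  '1230000000.0 2008-12-23+00:00 /var/db/pkg/app-misc/old-1.0/old-1.0.ebuild' \
  > /tmp/mtimes.demo

# The oldest packages sit at the bottom of the descending sort:
tail -n 1 /tmp/mtimes.demo
# Then re-emerge whatever looks stale, e.g.:
#   sudo emerge --oneshot --verbose app-misc/old
```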

Alternatively, if you think you don't need the package anymore, you can give the reverse dependencies a quick scan.

Sometimes the portage tools are fast enough, but I find just running a grep does the trick:

grep "silgraphite" /var/db/pkg/*/*/*DEPEND* 
# /var/db/pkg/app-text/texlive-core-2010/DEPEND:... xetex? ( ...media-libs/silgraphite )
# /var/db/pkg/app-text/texlive-core-2010/RDEPEND:... xetex? ( ...media-libs/silgraphite )

Oh right, seems I do need that. So I can either tell texlive-core I don't want xetex support and then clean it out, or I can just re-merge it and think about this problem later.

For the curious people

I'm also somewhat curious as to how old my system is as a whole, and have this script I threw together:

find -O3 /var/db/pkg -depth -type f -name "*.ebuild" -printf "%T@ %T+ %p/%f\n" | \
    sort -r -k 1 > /tmp/mtimes;

for j in $(seq 2008 2010); do
  for i in $(seq 01 12 | sed "s/^\([0-9]\)$/0\1/"); do
     echo -n "$j-$i:";
     grep "$j-$i" /tmp/mtimes | wc -l;
  done;
done;

And my findings tell me that most of my system is around 3 months old, and this is a good thing in my mind. There's a couple of things that are stuck in the past, but that's due to GCC incompatibilities/test failures; I'll come back to these later and probably file a bug report if I can find something worthwhile reporting.


Handling optional requirements with Class::Load

In a previous blog post I discovered Class::Load and its awesomeness.

Here is one practical application of it:



Automatic Optional Requisites

Say you have a library which provides some form of extensibility to consuming modules, and you want a way to "magically" discover a class to use, but use a fallback if it's not there.

Here is how to do it with Class::Load:

use strict;
use warnings;
package Some::Module;
use Class::Load qw( :all );

sub do_setup { ... }

sub import {
    my $caller = caller();
    my $maybemodule = "${caller}::Controller";
    if ( try_load_class($maybemodule) ) {
        do_setup($maybemodule);    # it's there, and it works.
    } else {
        if ( $Class::Load::ERROR =~ qr/Can't locate \Q$maybemodule\E in \@INC/ ) {
            do_setup('Some::Module::Default');    # it's genuinely not there.
        } else {
            die $Class::Load::ERROR;    # it's there, but broken.
        }
    }
}
To do it the right way without Class::Load is extraordinarily complicated:

use strict;
use warnings;
package Some::Module;
use File::Spec;

sub do_setup { ... }

sub import {
    my $caller = caller();
    my $maybemodule = "${caller}::Controller";

    # see rt.perl.org #19213
    my @parts = split '::', $maybemodule;
    my $file = $^O eq 'MSWin32'
             ? join '/', @parts
             : File::Spec->catfile(@parts);
    $file .= '.pm';
    my $error;
    my $success;
    {
        local $@;
        $success = eval {
            local $SIG{__DIE__} = 'DEFAULT';
            require $file;
            1;
        };
        $error = $@;
    }
    if ( $success ) {
        do_setup($maybemodule);    # it's there, and it works.
    } else {
        if ( $error =~ qr/Can't locate \Q$file\E in \@INC/ ) {
            do_setup('Some::Module::Default');    # it's genuinely not there.
        } else {
            die $error;    # it's there, but broken.
        }
    }
}

And even then, you still have a handful of sneaky bugs lurking in there :/

  1. With the second version, if somebody dynamically created the ::Controller class and didn't create a file for it, it will not work properly, and they'll have to tweak %INC somewhere for it to work
  2. If somebody loaded the ::Controller class manually beforehand, but it failed and they didn't report the error, then on 5.8 the above code will behave as if the code did load successfully. ( Truly nasty )

Class::Load has a lot of heuristics in it to try to avoid both these situations ( well, it will avoid the latter one soon, once a 1-line patch goes in ).

There are a few things I still don't like about doing it that way, but for now, that's the best I can get:

  1. Using a regular expression to determine what type of load failure occurred is nasty, but the only alternative approaches are either
    1. more complicated
    2. prone to be wrong on 5.8

What I'd like to be able to do

and may write a patch for

use strict;
use warnings;
package Some::Module;
use Class::Load qw( :all );

sub do_setup { ... }

sub import {
    my $caller = caller();
    my $maybemodule = "${caller}::Controller";
    if ( try_load_working_class($maybemodule) ) {
        do_setup($maybemodule);    # it's there, and it works.
    } else {
        do_setup('Some::Module::Default');    # it's not there.
    }
}

The idea being: syntax errors are syntax errors, and there's no good reason to suppress them, at all. So in the above code, if Whatever::Controller existed but was broken, it would die, instead of being treated as if it were absent.


Module patched and on github! =) Waiting on an authoritative update =)

package App;
use Class::Load qw( :all );

sub import {
    my $caller = caller();
    my $baseclass = load_optional_class("${caller}::Controller")
                  ? "${caller}::Controller"
                  : 'App::Controller';
    push @{"${caller}::ISA"}, $baseclass;    # this line is pseudocode.
}


On CPAN: http://search.cpan.org/~sartak/Class-Load-0.06/lib/Class/Load.pm.
Thanks Sartak =)


Searching / Design spec for the Ultimate 'require' tool.

Perl's de-facto require mechanism carries a confusing amount of complexity, complexity that is often overlooked in the edge cases. It looks straightforward:

require Class::Name;

And you'd think you're done, right?

Not so.

Most of the problems come from one of 2 avenues.

  1. Things that happen when the module specified cannot, for whatever reason, be sourced
  2. Things that happen when you want to require a module by string name

Point 2 is probably the most commonly covered one, and it seems to be the primary objective of practically every require module I can find on CPAN.

However, many of the existing modules, in attempting to solve the string-name issue, make the handling of "this module cannot be sourced" WORSE.

Module Sourcing Headaches

The mysterious Perl 5.8 double-require hell

The following code, in my testing, works without issue:

     eval "require Foo;1"; 
     require Foo;

Now, if Foo happens to be broken and cannot be sourced, then on Perl < 5.10 nothing will happen in the above code! Scary, but true. It's even scarier if those 2 lines of code are worlds apart.

A quirk of how Perl 5.8 behaves ( now fixed in 5.10 ) is that once a module is require'd, as long as that file existed on disk, %INC will be updated to map the module name to the found file name. This doesn't seem too bad, until you see how it behaves when require is called again somewhere else. Take a look at this sample code from perlfunc:

sub require {
    my ($filename) = @_;
    if ( exists $INC{$filename} ) {
        return 1 if $INC{$filename};
        die "Compilation failed in require";
    }
    # ... locate, load, and compile the file ...
}

Now on 5.10 this is fine, because $INC{$filename} is set to undef if an error was encountered. But on everything prior to 5.10, the value of $INC{$filename} is in every way identical to the value it would have had if the module loaded successfully. And since you do not want to require a module again once it has loaded, this behaviour falsely concludes "oh, that's already loaded" and doesn't tell anyone anywhere that there is a problem.

If that's too much reading for you, here's the executive summary of the problem: you need everyone, everywhere, who directly or indirectly calls require inside an eval, to handle any compilation/parsing errors from require immediately. Failing this, everything else that requires that same broken file will treat the file as successfully loaded, will not error, and you'll just get some confusing problem where the module's contents are nowhere to be seen.

From a debugging perspective, this behaviour frankly scares me. I'm very glad it's fixed in 5.10, and glad I can use 5.10; but for you poor suckers stuck working with 5.8, or trying to make 5.8-backwards-compatible modules, this problem will crop up eventually, if not for you, then for somebody who uses your modules.

Awful exceptions are awful

The following code looks fine at first approach, but there are many things wrong with it:

     if ( eval "require Foo; 1" ) {
         # behaviour to perform if there is a Foo
     } else {
         # behaviour to perform if there is no Foo
     }

A nice and elegant way of saying "try to use this module, and if it's not there, resort to some default behaviour".

But what about the magical middle condition, where it's there, but it's broken? In this code, it will silently fall back to the default behaviour, nothing anywhere will tell you that Foo is broken, and you'll spend several hours with a dumb look on your face while you prod completely unrelated code.

What we really need is a way to disambiguate between "it's there" and "it's broken", because ideally, if it's there and broken, we want a small nuclear explosion.

On Perl 5.10 and higher, this isn't so hard; we can just prod %INC to see what happened:

  • exists $INC{'Foo.pm'} is a false value: the module couldn't be found on disk, or nobody has required it yet
  • exists $INC{'Foo.pm'} is a true value: the module exists on disk, and somebody has required it
  • defined $INC{'Foo.pm'} is a false value: the module exists, somebody required it, but it failed ( >= 5.10 only )
  • defined $INC{'Foo.pm'} is a true value: the module loaded successfully ( >= 5.10 ); absolutely nothing useful ( < 5.10 )

So that approach is not exactly very nice, or very portable.

The next option you have, if you're fortunate enough to actually get require to die for you when it should, is regexing the exception it throws. But that is just horrible. Regexing messages from die is stupid, limited, and prone to breaking. Proper exception objects are our salvation: what we really need for this situation is distinct exceptions that indicate the type of problem encountered, so we're not left guessing with kludgy code.

Stringy require headaches

This is the lesser evil, but not without its perils.

At some stage, if you write anything moderately interesting, you'll find the need to programmatically divine the name of a module to require. This is where require tends to bite you in the ass.

sub load_plugin {
    my $plugin = shift;
    my $fullname = 'MyPackage::' . $plugin;
    require $fullname;    # dies: require does not accept a string module name
}

This is simply prohibited by the Perl gods of yore. You have to find some other way, and there are many modules targeted at this. There are some simple approaches, but they're also somewhat dangerous at times.

Bad Approach

Here is something you should really avoid if you're expecting the code to be used anywhere worth having any security. DO NOT DO THIS:

sub load_plugin {
    my $plugin = shift;
    my $fullname = 'MyPackage::' . $plugin;
    eval "require $fullname; 1" or die $@;
}

Firstly, you just pretty much wrote a wide open security hole. Somebody just needs to call:

   load_plugin( 'Bobby; unlink "/etc/some/important/document";' ); 

and the show is pretty much over. That's not necessarily so tragic if it's your own code and you're the only person who ever invokes it, but if it's public facing ( and especially if the code is published somewhere ), then avoid that style like cancer, because in my opinion it's not "if" it's exploitable, but "when". Taint mode may help you a little bit, but don't bet on it.

Secondly, if you were foolish enough to accidentally leave out that 'or die $@' part, you will have just created an invisible bug for everyone using Perl 5.8 to discover later. Congratulations.

Less insane approach

The less insane approach is to emulate how Perl maps package names to file names internally, and pass that value to require. ( Because when you pass a string to require, it's expecting a path of sorts, not a module name. )

sub load_plugin {
    my $plugin = shift;
    my $fullname = 'MyPackage::' . $plugin;
    $fullname =~ s{::}{/}g;
    $fullname .= '.pm';
    require $fullname;
}

This is good because there's no room for accidentally forgetting to call die $@, and the worst somebody can do is specify an arbitrary file on disk to read, which is what you were doing to begin with anyway. That is far less dangerous than allowing execution of arbitrary code. Both these code samples are still plagued by the 5.8 double-require situation if somebody manages to require() the broken code before you do and hide the error, but that's substantially less likely to happen.

Existing Modules, and what is wrong with them

I've seriously looked at many, many modules on CPAN for this task. And sadly, none fits the bill perfectly.


UNIVERSAL::require

This seems to be the most popular one. But it only solves the stringy-require issue, and in reality adds MORE potential for failure.
  • Victim to the double-load on 5.8 issue.

    this one line of code is sufficient to make it weak to the double-require issue:

    return eval { 1 } if $INC{$file};
    As discussed above, on 5.8, if the file has already been required but failed, $INC{$file} will be set to the path of that file. As a result, UNIVERSAL::require will just respond with "oh, right".

  • No Exceptions

    This module doesn't help us at all with regard to exception objects. It relies entirely on Perl's native ( virtually non-existent ) exception system.

  • Actually exacerbates the 5.8 issue

    In my opinion, this module actually makes us take a step backwards in progressive coding. It replaces useful, informative exception throwing with silence, and requires you to check a return value. The result is that for everyone who does Foo->require() without checking the return value, the very next thing that requires Foo and expects an exception when it's broken will silently succeed, but there will be no "Foo".

  • 2005 called and wants its Perl style back

    Seriously, we've been trying to encourage people to use things like autodie, because checking the return value of every open, every close, and every print ( yes, print can fail! ) is tedium, lazy people often forget to, and adding 'die "$@ $? $!"' at the end of everything SUCKS, let alone throwing actual exceptions that explain /what/ the problem was.
    Try working out via code whether the reason open failed was that the file just wasn't there, or there was a permissions issue, or one of the other dozens of possible reasons, and you're stuck using regular expressions. Yuck.

  • Monkey patching

    A lot of people really dislike the monkey-patch style that bolts into UNIVERSAL. Magically turning up everywhere on every object is really nasty, really magical, and far too much magic for something that could be achieved with an exported function instead. Seriously, string_require("Foo::Bar") vs "Foo::Bar"->require(); the difference is not big enough to warrant the nastiness of the latter.


  • 5.8 Double-Load Weak

    Still relies entirely on require to die if it can't load something.

  • No Exceptions

    Relies on $@ being a useful enough value to the user

  • Implicitly treats Exceptions like scalars

    Even if, in some future fantasy land, Perl's require started throwing useful exceptions ( backtrace, attributes that explain the problem type, introspection, and so forth ), this code concatenates the error into another scalar, so any exception objects that might exist would get squashed.


  • Holy hell, what?

    The code is from 2005 and has 2005 written all over it; if it were less chaotic, I might be able to see how it works.

  • Doesn't invoke require

    It doesn't use require anywhere, so it doesn't even populate $@.

  • Recommended use is to pass discovered variables to require

    Doesn't sound like much of a win to me; probably prone to the 5.8 issue.

  • Doesn't throw exception objects

    Seems in 2005 nobody had really discovered exceptions yet.


  • Module is not really designed for one-off module requires
  • Code is weak vs 5.8 issues.
  • Code is pretty high on the wtfometer
  • Code aggravates 5.8 issues with suppressed failures by ignoring $@ after failures
  • No exception objects


  • Mangles $@ with chomp
  • No exception objects
  • 5.8 double-require weak
  • Oh dear, please, not AUTOLOAD :(


  • Mostly an over the top file finding library, doesn't handle any of the require stuff
  • the usual, no exceptions, 5.8 double-require-weak


  • Not for requiring modules at all.


  • Not really for this job, but...
  • Has a method for detecting package loading, however....
  • That method is subject to the 5.8 double-require weakness and its friends


  • pro: Despite being Acme::, it sucks less than everything else so far!
  • Still depends on native require for exceptions
  • XS
  • Still defers to the internal require() op, so probably still suffers the 5.8 problems.
  • Depends on >= 5.10 anyway


  • Just as bad interface wise as UNIVERSAL::require
  • But worse, AUTOLOAD magics
  • Documented in German
  • eval "use $string", very bad
  • Substitutes Perl's require-fail string-only exceptions with alternative, German, string-only exceptions. Joy.
  • Prone to 5.8 issues


  • Prone to 5.8 issues
  • Standard Perl native exceptions only



  • pro:Actually appears to have work-arounds in place for heuristically solving the 5.8 problem!
  • pro:Tests for the above claimed fact!
  • pro:Tests pass !
  • Still no exception objects ( perl default exceptions )
  • pro: Reasonably sane API
  • pro: No need to check silly return values

tl;dr summary

Class::Load is awesome; you should use it everywhere you need require to actually work sanely with possibly-missing or possibly-broken classes ( i.e. everywhere there is a user-supplied part in a require ).
You can probably use it for more, but that might be overkill =).

The only way I can see something being better than it is if something implements object exceptions with failure metadata in them, instead of needing to re-explore the failure manually.


Flashing a Dell Vostro 1510 Bios Without Windows.

I've been trying to get my BIOS updated for several months now, and attempted several times with no success, but tonight I stumbled onto the information that gave me the breakthrough I needed.

What Doesn't Work

Dell has started being nice and releasing more Linux-friendly ways of installing BIOS updates, but sadly, for models like mine, there is no support.

Using Windows7/Vista 64bit

It just will never work, not even with administrator privileges. That WinPhlash thing just Doesn't Work™.

Using The biosdisk technique

Doesn't work, because the winphlash executable that you have to use will not run in the FreeDOS boot environment. It's a Win32 GUI app.

Using the remote boot update method

There is no hdr file produced by Dell for this BIOS yet, and there is no way to produce one.

Windows XP Portable Edition( BartPE )

In my case, the "bootable media" just hung for 8 hours and wouldn't boot.

Working Solution: phlash16.exe

Legacy DOS apps save the day!

While this method worked for me, it's not official or supported, and you are on your own if it breaks, just like I was. However, you have my good intentions and the testimony of others that it works for some of us.


From here on, I'm going to be using very gentoo-specific terms, but they should be portable to other Linux distributions.

Install necessary tools

You'll need libsmbios to see what you have, and biosdisk to generate a boot floppy which we can inject our tools into. Wine will also be required to extract the ROM file from the WinPhlash self-extractor.

$ sudo emerge -uvatDN libsmbios
$ sudo emerge -uvatDN biosdisk
$ sudo emerge -uvatDN wine
$ sudo emerge -uvatDN unzip

Check your Bios and System

While this technique may work for other Dells, I have not tested it myself and thus cannot recommend it. If you use my pre-generated boot floppy, you will WANT to have the same BIOS System ID and Version as I currently have:

$ sudo smbios-sys-info-lite 
Libsmbios:    2.2.19
System ID:    0x0273  # MATCH 
Service Tag:  XXXXXXX
Express Service Code: XXXXXXXXXXX
Asset Tag:  X
Product Name: Vostro1510
BIOS Version: A10 # MATCH
Vendor:       Dell Inc.
Is Dell:      1

Get Dell Bios updates in WinPhlash form

I must remind you, this solution works ONLY for WinPhlash-based BIOS updates. If your BIOS update installer tool does NOT use WinPhlash, you should not continue.

Get a copy of Phlash16.exe

I can't guarantee that this link will always work, so Google for it if my link dies. But in the meantime, you can get a copy of phlash16 with one of Intel's BIOS flashes. You won't need everything in the archive, but it has what we need. So, download the 3C91.zip BIOS package for the Quanta* QTW3A945PM1.


Unpack your Dell BIOS

Unfortunately, Dell bundles this as a self-extracting archive, a weirdly formatted one at that, which contains WinPhlash and the BIOS.ROM file you need. So we need to run it with Wine to get the ROM out:
$ wine ~/Downloads/V151015.exe

It will do its thing, run and so forth; tell it to start extracting, and then WinPhlash will run. It will then throw an error, but that doesn't matter: it's done the job we wanted it to.

Grab the BIOS.ROM from your wine dir

This will be in a directory such as C:\WINDOWS\TEMP\WINPHLASH, relative to your Wine directory. In my case, it's ~/.wine/drive_c/windows/temp/WINPHLASH/BIOS.ROM:
$ cp ~/.wine/drive_c/windows/temp/WINPHLASH/BIOS.ROM /tmp/

Unzip phlash16.exe from the Zip

$ cd /tmp/
$ mkdir phlash
$ cd phlash
$ unzip ~/Downloads/3C91.zip phlash16.exe

Build the MemDisk

We can now build an in-memory FreeDOS boot diskette and stash it into grub.conf so we can boot into our BIOS updater on our next boot.

Mount /boot

$ sudo mount /boot 

Generate a BiosDisk

Note: I don't really "get" how biosdisk works inside, so we're passing it an executable that's reasonably large, and known not to work =).

We're going to fix that later.

$ sudo biosdisk install ~kent/Downoads/V151015.exe

That should have built you a nice boot floppy config and set up grub in nice ways.

Manually hack the generated BiosDisk

Now for the part that does all the sexy good work. That boot floppy image is a mountable filesystem image we can read and write to:
$ mkdir /tmp/vfat/
$ sudo mount /boot/V151015.img /tmp/vfat/
$ cd /tmp/vfat
$ sudo rm /tmp/vfat/V151015.exe
$ sudo cp /tmp/phlash/phlash16.exe ./
$ sudo cp /tmp/BIOS.ROM ./

Now, here we could make this automatically flash the BIOS on boot, but I'm going to advise against that, because I don't want to accidentally run it at some later stage before I get rid of it again, interrupt it, and brick my box =).

So instead, I've set it up to give me a command shell, and I manually execute the flash command.

$ sudo vim -c "set fileformat=dos" autoexec.bat

This is the flash instruction. You really want the /S option, unless you enjoy deafening beeping at 3 am.

The /EXIT option is supposed to STOP it exiting and auto-rebooting, but it does so anyway, for reasons that I don't understand and quite frankly am slightly afraid of.

$ sudo vim -c "set fileformat=dos" flash.bat
phlash16.exe /S /EXIT BIOS.ROM

Now to check we have DOS line endings.
$  grep ''  autoexec.bat flash.bat  | cat --show-all
flash.bat:phlash16.exe /S /EXIT BIOS.ROM^M$
And now we can unmount it, and be ready to fly.
$ cd /
$ sudo umount /tmp/vfat
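If you want the line-ending check to fail loudly rather than relying on eyeballing ^M markers, here is a minimal bash sketch ( the function name and directory argument are my own inventions, not part of biosdisk or phlash16 ):

```shell
#!/bin/bash
# Fail if any .bat file in the given directory lacks DOS (CRLF) line endings.
# Files written with `set fileformat=dos` contain carriage returns;
# bare-LF files contain none.
check_dos_endings() {
  local dir="$1" bad=0
  for f in "$dir"/*.bat; do
    [ -e "$f" ] || continue
    if grep -q $'\r' "$f"; then
      echo "ok: $f has CRLF endings"
    else
      echo "BAD: $f has bare LF endings" >&2
      bad=1
    fi
  done
  return "$bad"
}
```

Run it as `check_dos_endings /tmp/vfat` before unmounting.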


You are about to fly into dangerous territory, where enemy combatants like "power failure" can mean certain death

Please read all following instructions and be familiar with them BEFORE attempting to continue

  • When you reboot, you will see the bios flash option in grub
  • Select this
  • A few seconds later, you will be given the A:\ prompt of FreeDOS
  • You will double check you are on an uninterruptible power supply system, and that its batteries are working
  • You will be running mains power
  • type flash.bat and press your ENTER key
  • A bios flash utility will load, and progress to flash your bios
  • Upon completion of flashing, the computer may suddenly and unexpectedly reboot with NO warning, and this may surprise you and make you think your computer has crashed! DON'T PANIC; wait, and it should quickly return to normal booting.
  • During this boot, you should observe the code "A15" in the bios booting sequence instead of "A10" =).
  • Boot back into linux, and proceed to remove the flash installer from the boot menu =)

Experimenters Advice

If you have elected to see what other flags phlash16.exe has, you're probably not going to find any of them useful. I have noticed something weird: at least in the biosdisk boot environment, the /BU option does not work. Using it causes phlash16.exe to reboot as soon as it hits the backup phase, making you panic and think you killed it. I have not played with the other flags, but they didn't appear to be of use to me =).


Yes, you may now go and try the above.

POST-OP cleanup

Once you have successfully flashed your bios, you may then clean the biosdisk entry out of grub:
$ biosdisk uninstall ~kent/Downloads/V1510A15.exe
And then it should be gone from menu.lst


For your convenience, I have made a pre-built copy of the biosimage, my grub configuration, and the memdisk kernel:

Dell Bios Phlash Vostro1510 A10 -> A15 on humyo.com

They worked for me, and might work for you too.

The grub configuration is given for your understanding. The top section is all you need to add, but it probably won't work like that verbatim.


Installing Multiple Perls with App::perlbrew and App::cpanminus

Having learnt from my previous mistakes, here is a simple way to set up multiple, somewhat isolated installs of Perl in a user profile.

App::perlbrew is a very handy tool for managing several user-installs of Perl, and facilitates the easy switching between Perl versions.

App::cpanminus is the most straightforward and lightweight CPAN client I've ever seen; it just works, works well, and leads to relatively pain-free installation 80% of the time.

1. Install A Bare copy of Perlbrew

Getting a copy of Perlbrew should be the very first thing you do. No cpanm, no local::lib, just straight perlbrew.
$ cd ~ 
$ curl -LO http://xrl.us/perlbrew

2. Setup Perlbrew

Once we have a copy of perlbrew, we run its install command, which completes the bootstrapping of perlbrew. Then all that's needed is to update your profile with the right magic line so that new shells will have the right environment set up.
$ perl ~/perlbrew install
$ rm ~/perlbrew
$ ~/perl5/perlbrew/bin/perlbrew init
# use the line perlbrew spits out.
$ echo "source /home/test_1/perl5/perlbrew/etc/bashrc" | tee -a ~/.bashrc

3. Enter your new perlbrew ENV

Now we enter our new shell so that we can test the change to our configuration. We run env and grep the PATH value just to double check perlbrew has worked properly.
$ bash
$ env | grep PATH
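To make that eyeball check mechanical, here is a small bash sketch ( the helper is my own invention; the perlbrew bin path is an example, substitute the one from your own bashrc line ):

```shell
#!/bin/bash
# Check that a given bin directory is on PATH, and warn if /usr/bin
# comes before it (in which case the system perl would still win).
perlbrew_path_ok() {
  local brewbin="$1"
  case ":$PATH:" in
    *":$brewbin:"*) ;;
    *) echo "perlbrew bin dir not on PATH" >&2; return 1 ;;
  esac
  # Everything on PATH before the perlbrew dir:
  local prefix="${PATH%%"$brewbin"*}"
  case ":$prefix" in
    *":/usr/bin:"*) echo "warning: /usr/bin precedes $brewbin" >&2; return 1 ;;
  esac
}

# e.g.: perlbrew_path_ok "$HOME/perl5/perlbrew/bin"
```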

4. Choose a mirror

This step is mostly optional, but it lets you choose which mirror perlbrew will download Perl sources from, so a local one is best for speed's sake.
$ perlbrew mirror

5. Install your wanted perl versions

Now we perform the slow installation of our Perls. In my case, I'm installing a copy of the current stable release ( 5.12.2 ) and the current development release ( 5.13.4 ). The -v is optional, but you'll want it unless you wish to die of boredom: without it, the install generally just sits there doing nothing for 10+ minutes.
$ perlbrew -v install perl-5.12.2
$ perlbrew -v install perl-5.13.4

6. Setup 'cpanm' for each perl

This step appears to be the most important one. If you previously had cpanm installed with the system perl, you do NOT want to be using that at all. When cpanm is installed, the bin/ script hard-codes a path to the perl it was installed with, so a cpanm built with the system perl will build and install modules using that system perl, with its install paths and so forth, and you do not want this. So, you must install a cpanm for each perl using this bootstrap technique.
$ perlbrew switch perl-5.12.2
$ curl -L http://cpanmin.us | perl - App::cpanminus
$ perlbrew switch perl-5.13.4
$ curl -L http://cpanmin.us | perl - App::cpanminus
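Since the culprit is the interpreter path baked into the script's shebang line, you can verify which perl a given cpanm will use before trusting it. A bash sketch ( the helper name is mine; the script path in the usage line is an example ):

```shell
#!/bin/bash
# Print the interpreter a script's shebang line points at.
script_perl() {
  head -n 1 "$1" | sed -n 's/^#![[:space:]]*\([^[:space:]]*\).*/\1/p'
}

# e.g.: script_perl "$(which cpanm)"
# A cpanm bootstrapped under perlbrew should report a path under
# ~/perl5/perlbrew/perls/..., not /usr/bin/perl.
```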

7. Configure local cpans

Strangely, I've found a few modules I try to install tend to expect a working CPAN install, regardless of what tool I'm actually using. This should be fixed, but there is a practical work-around until then: simply configure cpan!
$ perlbrew switch perl-5.12.2
$ cpan
# Answer all setup instructions
» o conf commit
» q
$ perlbrew switch perl-5.13.4
$ cpan
# Answer all setup instructions
» o conf commit
» q

8. Test your installs

This is a list of things I've found that trip up various corner cases and will indicate whether you've built it wrong.
$ perlbrew switch perl-5.12.2
$ cpanm --interactive -v App::cpanoutdated
$ cpan-outdated
$ cpanm --interactive -v App::CPAN::Fresh

$ perlbrew switch perl-5.13.4
$ cpanm --interactive -v App::cpanoutdated
$ cpan-outdated
$ cpanm --interactive -v App::CPAN::Fresh
With all things going to plan, those two modules at least should build and be runnable. cpan-outdated and cpanf should both run under both perls without complaining they can't find their modules, and CPAN::Inject and Compress::Bzip2 should install without strange failures. ( Those two modules led me, in prior cases, to discover broken setups that needed fixing, so hopefully, following the instructions above will avoid that havoc. )

9. Profit!

That's all there is to it. Note we do NOT use local::lib for this setup. Using each perl's default local module installation directory should be perfectly satisfactory, and as long as you're in a properly configured ENV and you're using perlbrew to select perls that are not the system perl, everything should be sweet =).
Ok, lots of things on my machine still fail to build, but those peskinesses I'm convinced are unrelated to the Perl setup.

10. Credit

Props to the people who helped me out with working out this configuration ( brian d foy, miyagawa, John Napiorkowski ) and to the authors of cpanm ( miyagawa ) and perlbrew ( gugod ). These are awesome tools, and once you learn them, they really can make working with Perl a much more pleasurable experience! And also props and ♥ to the Perl Community for simply existing, and fostering this development path.


I ♥ the Perl Community


Perl is awesome, but the community is better, and as far as I know, nothing else even comes close.

Where else can you blog about a confusing corner case you hit in a seemingly rare operating system and get excellent answers not only from great people, but from the author of the module the problem was in, one of the people who wrote or contributed many of the other useful tools you use, and the author of many recognized Perl books?

And then, not only did I get the right solution for my problem, but many other alternative good approaches, as well as answering parts of the question I didn't even ask with side tips that seem "related enough" that I'd likely encounter in similar ways, and how to make my life easier when that happens.

I ♥ this positive approach to programming, where people are not only caring about solving my specific problem, but suggesting things that can help me become a better programmer as a whole, and I'm frankly proud simply to be involved with a community which has such a valuable work-ethic.

Frankly, it's a shame it's so hard to sell Perl on the community aspect, because it is just awesome in ways I've never seen before in a programming language, and it by far trumps the technical aspects in terms of awesomeness. If brainf**k had a community as awesome as Perl's, it would probably be better than many languages simply because of that community, at least in my opinion. It's just a shame you can't convey the greatness of such a community to newbies to the language without first immersing them in the culture and community, because to understand and appreciate it, I think you must first experience it.

OpenBSD + Perl + Modern Tools and Approaches -> Me = Confused :(

So, I'm making my first attempt at a hand-holding-free install of Perl. I'm used to the niceties of Gentoo and being able to do everything through its package manager, so I thought I'd try doing it the way everyone else in the world apparently regards as "the most practical".

I'm going to walk you through what I did, mostly reconstructed from memory, so you have an idea of what my problem is, or, if you're in a similar situation, you can make some progress and learn from my mistakes once I've worked out what I need.

Normally, I'd ask about this on #perl@irc.freenode.org, or something on irc.perl.org, or, if appropriate, file a bug. However, in this case, I can't even conceive of which would be the right place to target my question. OpenBSD is, in my estimation, a very niche market at the moment, as are lots of the modern tools for Perl, and I don't know the appropriate places to solicit help for them. So, I approach the ALL MIGHTY LAZY-WEB.

The Setup

  1. Installed OpenBSD 4.7
    This shall be left as an exercise to the reader. It's too much to cover here, and it really is pretty straightforward =).
  2. Install cpanm
    Everyone I see in Perl these days seems to be ranting about this, so I used the prescribed instructions:
    $ curl -L http://cpanmin.us | perl - --sudo App::cpanminus
  3. I don't want to be stuck using Perl 5.10.1, which is great and all, but I'd rather be doing work with 5.12.2 and 5.13.*. And I keep getting recommendations NOT to use the system Perl for ANYTHING other than getting your custom Perl running. ( Using the system Perl is fine in Gentoo, at least how I use it; we've got 5.12.2 in tree now, and stuffing Perl dists into package management JustWorks™. ) The new sex for this is allegedly perlbrew, so I'm firing that baby up next.
    $ cpanm --sudo App::perlbrew
  4. All appears good! Now from here on is where I think a few things start to drift south, but I'm not entirely sure WHERE.
    $ perlbrew init
    # add instructed line to bash
    $ bash
    $ perlbrew install perl-5.13.4 -v
    $ perlbrew install perl-5.12.2 -v
  5. All this appears to run smoothly.
    $ perlbrew switch perl-5.13.4
  6. Here is where I do the stupid things that possibly lead to my downfall. First, you must understand how I want my setup:
    1. I want my primary development user (kent) to have 2 copies of Perl available, 5.13.4 and 5.12.2
    2. I want the modules for each install of Perl to follow their respective installs so I can just switch between Perls and have the modules switch over too
    3. "Production" will repeat this process, except with fewer versions of Perl, and probably with fewer modules installed.

    To achieve this, I insert lines into my .bashrc until it resembles this:
    source /home/kent/perl5/perlbrew/etc/bashrc
    export PERLDIR=/home/kent/perl5/perlbrew/perls/current
    export MODULEBUILDRC=/home/kent/perl5/perlbrew/etc/.modulebuildrc
    export PERL5LIB="${PERLDIR}:${PERLDIR}/i386-openbsd"
    export PERL_CPANM_OPT="--local-lib=${PERLDIR}"
    and .modulebuildrc of course contains this:
    install  --install_base  /home/kent/perl5/perlbrew/perls/current/
  7. For the most part this works perfectly, and I'm off installing modules happy as Larry.
  8. And then a few hours later, something depends on IO::Compress::BZip2. Now is the beginning of sorrows.

The Problem:

Can't find libbz2!

I'm sure as eggs I have bzip2 and family installed and working.
However, this worrisome notice appears during build:
 Entering Compress-Bzip2-2.09
Configuring Compress-Bzip2-2.09 ... Running Makefile.PL
Parsing config.in...
/usr/bin/ld: cannot find -lbz2
collect2: ld returned 1 exit status
compile command 'cc -fno-strict-aliasing -fno-delete-null-pointer-checks -pipe -fstack-protector -I/usr/local/include -Wl,-E  -fstack-protector -o show_bzversion show_bzversion.c -lbz2' failed
system bzip2 not found, building internal libbz2
Ah .... ok.
$ bzip2 -h 2>&1 | head -n 1
bzip2, a block-sorting file compressor.  Version 1.0.5, 10-Dec-2007.
$ /usr/bin/ldd $(which bzip2)
        Start    End      Type Open Ref GrpRef Name
        1c000000 3c006000 exe  1    0   0      /usr/local/bin/bzip2
        065b5000 265b9000 rlib 0    1   0      /usr/local/lib/libbz2.so.10.4
        07295000 272ce000 rlib 0    1   0      /usr/lib/libc.so.53.1
        0643c000 0643c000 rtld 0    1   0      /usr/libexec/ld.so
Ok, so maybe it is a bit geriatric.
That should be fine though, right? WRONG.
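One thing worth noticing in the failing compile command above: it passes -I/usr/local/include but no -L/usr/local/lib, and /usr/local/lib is exactly where the ldd output below says OpenBSD's libbz2 lives. A hedged shell sketch for poking the linker directly, independent of any Perl build system ( the helper is my own, not part of any of these tools ):

```shell
#!/bin/sh
# Try to link a trivial program against a library, with optional extra
# linker flags, to see whether `cc` can find it at all.
can_link() {
  lib="$1"; shift
  printf 'int main(void){return 0;}\n' \
    | cc -x c - -o /dev/null "-l$lib" "$@" 2>/dev/null
}

# e.g.:
#   can_link bz2                   # mimics the failing line: no -L path
#   can_link bz2 -L/usr/local/lib  # with the path ldd says libbz2 lives in
```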

Something magical keeps finding Perl 5.10.1 :(

Surely, this abomination will not end well:
Building and testing Compress-Bzip2-2.09 for Compress::Bzip2 ... cp lib/Compress/Bzip2.pm blib/lib/Compress/Bzip2.pm
AutoSplitting blib/lib/Compress/Bzip2.pm (blib/lib/auto/Compress/Bzip2)
cd bzlib-src && make 
cc -c    -fno-strict-aliasing -fno-delete-null-pointer-checks -pipe -fstack-protector -I/usr/local/include -O2     -DVERSION=\"\"  -DXS_VERSION=\"\" -DPIC -fPIC "-I/usr/libdata/perl5/i386-openbsd/5.10.1/CORE"   blocksort.c
cc -c    -fno-strict-aliasing -fno-delete-null-pointer-checks -pipe -fstack-protector -I/usr/local/include -O2     -DVERSION=\"\"  -DXS_VERSION=\"\" -DPIC -fPIC "-I/usr/libdata/perl5/i386-openbsd/5.10.1/CORE"   huffman.c
cc -c    -fno-strict-aliasing -fno-delete-null-pointer-checks -pipe -fstack-protector -I/usr/local/include -O2     -DVERSION=\"\"  -DXS_VERSION=\"\" -DPIC -fPIC "-I/usr/libdata/perl5/i386-openbsd/5.10.1/CORE"   crctable.c
cc -c    -fno-strict-aliasing -fno-delete-null-pointer-checks -pipe -fstack-protector -I/usr/local/include -O2     -DVERSION=\"\"  -DXS_VERSION=\"\" -DPIC -fPIC "-I/usr/libdata/perl5/i386-openbsd/5.10.1/CORE"   randtable.c
cc -c    -fno-strict-aliasing -fno-delete-null-pointer-checks -pipe -fstack-protector -I/usr/local/include -O2     -DVERSION=\"\"  -DXS_VERSION=\"\" -DPIC -fPIC "-I/usr/libdata/perl5/i386-openbsd/5.10.1/CORE"   compress.c
cc -c    -fno-strict-aliasing -fno-delete-null-pointer-checks -pipe -fstack-protector -I/usr/local/include -O2     -DVERSION=\"\"  -DXS_VERSION=\"\" -DPIC -fPIC "-I/usr/libdata/perl5/i386-openbsd/5.10.1/CORE"   decompress.c
cc -c    -fno-strict-aliasing -fno-delete-null-pointer-checks -pipe -fstack-protector -I/usr/local/include -O2     -DVERSION=\"\"  -DXS_VERSION=\"\" -DPIC -fPIC "-I/usr/libdata/perl5/i386-openbsd/5.10.1/CORE"   bzlib.c
ar cr libbz2.a  && ranlib libbz2.a
cc -c    -fno-strict-aliasing -fno-delete-null-pointer-checks -pipe -fstack-protector -I/usr/local/include -O2     -DVERSION=\"\"  -DXS_VERSION=\"\" -DPIC -fPIC "-I/usr/libdata/perl5/i386-openbsd/5.10.1/CORE"   bzip2.c
/usr/bin/perl /usr/libdata/perl5/ExtUtils/xsubpp  -typemap /usr/libdata/perl5/ExtUtils/typemap -typemap typemap  Bzip2.xs > Bzip2.xsc && mv Bzip2.xsc Bzip2.c
cc -c  -Ibzlib-src  -fno-strict-aliasing -fno-delete-null-pointer-checks -pipe -fstack-protector -I/usr/local/include -O2     -DVERSION=\"2.09\"  -DXS_VERSION=\"2.09\" -DPIC -fPIC "-I/usr/libdata/perl5/i386-openbsd/5.10.1/CORE"   Bzip2.c
In file included from Bzip2.xs:7:
ppport.h:231:1: warning: "PERL_UNUSED_DECL" redefined
In file included from Bzip2.xs:4:
/usr/libdata/perl5/i386-openbsd/5.10.1/CORE/perl.h:330:1: warning: this is the location of the previous definition
Running Mkbootstrap for Compress::Bzip2 ()
Um. Um. Um.
How about NO
$ perl -v  | grep version 
This is perl 5, version 13, subversion 4 (v5.13.4) built for OpenBSD.i386-openbsd
That's going to go down like a houseboat on fire.

What comes next is only natural

t/010-useability.t ...... 1/3
/usr/bin/perl:/home/kent/.cpanm/work/1284524774.31144/Compress-Bzip2-2.09/blib/arch/auto/Compress/Bzip2/Bzip2.so: undefined symbol 'BZ2_bzDecompressInit'
/usr/bin/perl:/home/kent/.cpanm/work/1284524774.31144/Compress-Bzip2-2.09/blib/arch/auto/Compress/Bzip2/Bzip2.so: undefined symbol 'BZ2_bzDecompress'
/usr/bin/perl:/home/kent/.cpanm/work/1284524774.31144/Compress-Bzip2-2.09/blib/arch/auto/Compress/Bzip2/Bzip2.so: undefined symbol 'BZ2_bzBuffToBuffDecompress'
/usr/bin/perl:/home/kent/.cpanm/work/1284524774.31144/Compress-Bzip2-2.09/blib/arch/auto/Compress/Bzip2/Bzip2.so: undefined symbol 'BZ2_bzDecompressEnd'
/usr/bin/perl:/home/kent/.cpanm/work/1284524774.31144/Compress-Bzip2-2.09/blib/arch/auto/Compress/Bzip2/Bzip2.so: undefined symbol 'BZ2_bzCompress'
/usr/bin/perl:/home/kent/.cpanm/work/1284524774.31144/Compress-Bzip2-2.09/blib/arch/auto/Compress/Bzip2/Bzip2.so: undefined symbol 'BZ2_bzBuffToBuffCompress'
/usr/bin/perl:/home/kent/.cpanm/work/1284524774.31144/Compress-Bzip2-2.09/blib/arch/auto/Compress/Bzip2/Bzip2.so: undefined symbol 'BZ2_bzlibVersion'
/usr/bin/perl:/home/kent/.cpanm/work/1284524774.31144/Compress-Bzip2-2.09/blib/arch/auto/Compress/Bzip2/Bzip2.so: undefined symbol 'BZ2_bzCompressInit'
/usr/bin/perl:/home/kent/.cpanm/work/1284524774.31144/Compress-Bzip2-2.09/blib/arch/auto/Compress/Bzip2/Bzip2.so: undefined symbol 'BZ2_bzCompressEnd'
And more and more of that explosion until you see:
Files=25, Tests=33,  7 wallclock secs ( 0.35 usr  0.21 sys +  4.74 cusr  1.44 csys =  6.74 CPU)
Result: FAIL
Failed 25/25 test programs. 30/33 subtests failed.
Oh crap. That's not good.
Something seriously wrong is going on here, but hell knows what it is, and I'm the least qualified to work it out.

Call For Halp

If you know what I've done wrong, and how to correct this fatal flaw, please point me straight. I can only reward you with Karma Cookies and a blog post in response and update.
I acknowledge that CPAN Testers lists many, many passes for this module, so it must be I who is at fault, right?

perl -V

Summary of my perl5 (revision 5 version 13 subversion 4) configuration:
    osname=openbsd, osvers=4.7, archname=OpenBSD.i386-openbsd
    uname='openbsd stridor.lan 4.7 generic#558 i386 '
    config_args='-de -Dprefix=/home/kent/perl5/perlbrew/perls/perl-5.13.4 -Dusedevel'
    hint=recommended, useposix=true, d_sigaction=define
    useithreads=undef, usemultiplicity=undef
    useperlio=define, d_sfio=undef, uselargefiles=define, usesocks=undef
    use64bitint=undef, use64bitall=undef, uselongdouble=undef
    usemymalloc=y, bincompat5005=undef
    cc='cc', ccflags ='-fno-strict-aliasing -pipe -fstack-protector -I/usr/local/include',
    cppflags='-fno-strict-aliasing -pipe -fstack-protector -I/usr/local/include'
    ccversion='', gccversion='3.3.5 (propolice)', gccosandvers='openbsd4.7'
    intsize=4, longsize=4, ptrsize=4, doublesize=8, byteorder=1234
    d_longlong=define, longlongsize=8, d_longdbl=define, longdblsize=12
    ivtype='long', ivsize=4, nvtype='double', nvsize=8, Off_t='off_t', lseeksize=8
    alignbytes=4, prototype=define
  Linker and Libraries:
    ld='cc', ldflags ='-Wl,-E  -fstack-protector -L/usr/local/lib'
    libpth=/usr/local/lib /usr/lib
    libs=-lm -lutil -lc
    perllibs=-lm -lutil -lc
    libc=/usr/lib/libc.so.53.1, so=so, useshrplib=false, libperl=libperl.a
  Dynamic Linking:
    dlsrc=dl_dlopen.xs, dlext=so, d_dlsymun=undef, ccdlflags=' '
    cccdlflags='-DPIC -fPIC ', lddlflags='-shared -fPIC  -L/usr/local/lib -fstack-protector'

Characteristics of this binary (from libperl): 
  Built under openbsd
  Compiled at Sep 14 2010 11:31:21


Gentoo Protip: Clean orphaned .la files.

If you've been using the "lafilefixer"1 to tweak "broken" .la files, you may have discovered a negative side effect of its use.

Primarily, lafilefixer changes the contents — and thus breaks the recorded MD5/SHA sums — of the various .la files, so when somebody removes that package later, or upgrades to a package with a differently named .la file, it leaves this .la cruft behind.2

The effect of this is that subsequent builds can die in mysterious ways while trying to find stuff, as stupid code tries to use the old and outdated .la files.

The solution is reasonably simple; all you need is a little help from a few good Unix commands.

You'll need 2 basic packages installed:

  1. GNU findutils' "xargs" and "find", provided in sys-apps/findutils. You should already have these, because they are, after all, part of the "system" set.
  2. Gentoo's Portage-Utils'  "qfile", provided in app-portage/portage-utils.

Firstly, we generate a list of all the .la files.

kent@ember$ find -O3 /usr/lib64  -type f  -name "*.la"

We then repeat the find, this time printing the list null-delimited ( for safety ), pipe it to xargs, and ask "qfile" to tell us which of them are orphans.

kent@ember$ find -O3 /usr/lib64  -type f -name "*.la"  -print0 | xargs -0 qfile -o

We can then review this list, make a few "ahh!, so that explains that problem" statements, and then proceed to remove the listed files using our mechanism of choice. xargs + rm is good enough for me.

kent@ember$ find -O3 /usr/lib64  -type f -name "*.la"  -print0 | xargs -0 qfile -o | xargs rm

And as if by magic, things that no longer wanted to compile resume compiling!

For me, this cleaned up most of the residual problems I had after the whole libpng12 debacle.
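One niggle with the final command: the second xargs stage is no longer null-delimited, so a .la path containing whitespace would be mangled there. If you'd rather dry-run the null-safe pattern on a scratch directory first, here is a sketch ( note it deletes every .la it finds and skips the qfile orphan filter entirely, so it is NOT a drop-in replacement ):

```shell
#!/bin/sh
# Remove all .la files under a directory, null-delimited end to end,
# so filenames with spaces survive. Point it at a scratch dir to test.
purge_la() {
  find "$1" -type f -name '*.la' -print0 | xargs -0 -r rm --
}
```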

Important: You should read the man pages for xargs and find to make sure you're not just cargo-culting bad code. E.g.: that -O3 thing can be dangerous.
1:  ( dev-util/lafilefixer )

2: Perhaps portage has a workaround for this, but I'm using Paludis, so don't know, sorry. Complaints to: /dev/null.


Git Rebase Part 1: Why you should use it, Theory.

If you have yet to master Git's rebase feature, now is the time to do so. Rebasing, as we call it, provides you with astonishingly awesome powers for manipulating your repository history. Rebasing is, however, not for the faint of heart, nor for those still green on how Git really works, as there are a lot of concepts you need to have a firm grasp of in order to utilize it. ( I will try to cover these concepts here. )

If you have only ever worked on simple no-branching repositories with few committers, in a non-distributed SCM, you probably have not even yet encountered a scenario where rebasing would make much sense to you. The problem is when you get diverging histories greater than a few commits.

Normally, people wait until merge time to resolve this divergence, but it can be less than simple; the longer and more complex your history, the harder merging will be, and there is no technological way to make this simply go away ( at least, not yet ).

Fortunately, most merges involve non-competing commits, and Git does a stellar job of doing the right thing in those scenarios.

The challenge occurs when two people create competing commits on parallel histories, and the problem is exacerbated when there is a stack load of changes on top of those competing commits.

Classical solutions to this result in a big explosion at merge time, and all you have to go on is what the current state of either branch is at that time, and you have to know the code very well, and be able to “pick one” of the solutions ( or, if you are especially unlucky, manually find a 3rd solution in your head which is a product of both ).

Unfortunately, in many large open-source projects, not everyone knows everything about everything, and when it comes time to merge, the person doing the merge knows nothing about the specifics of the others' changes on a line-by-line basis, and so there is an unreasonable demand on the merger to be a magician.

Offloading the merge load to the contributor.

A good solution, in my mind, is to offload the responsibility for resolving issues onto the person making the contributions. They understand their code the most; they know what needs to go away, when, and where. One approach to this involves perpetual merges from upstream to keep your branch "synced", but this is a nightmare. It also, in my experience, doesn't work like you would expect. I do not want to go into the specifics of the problems I have seen with merges, simply because communicating them concisely is difficult. It also makes things even more complicated later down the line at reintegration time, as comparing the diffs can be misleading as to what really changed, as well as overcomplicating the commit history.

A logical way to consider how rebasing works.

Consider you are working on a more old-school SCM such as Subversion, where this rebasing feature does not exist. To emulate a rebase, you would first have to find the point where the branch you are rebasing first diverged from trunk. Then, you would produce a patch for each and every commit that had been applied to the branch since it was branched from trunk. Doing this is of course no simple feat ☺.

You would then create another branch, starting from the current trunk, and switch to it. Then, you would iterate through every patch in order, and apply it to this new branch, possibly stopping between each patch application, to correct any collisions that caused the patch to fail ( i.e.: edit the patch until it applied cleanly ), before committing it.

The product is a completely unambiguous patch series, relative to the current trunk. Where branches are considered “feature branches”, this new branch becomes a perfect logical sequence of commits that can be unambiguously applied to trunk to add the given feature.

At this new state, assuming no other commits are made to trunk, this new branch logically should be merge-able straight into trunk with no collisions whatsoever. ( I am of course making huge assumptions here with regard to subversion being smart enough to know how to handle it, and not simply going “Hurr, branch and trunk look different, must be a collision!”)

To Explain this Visually:

http://gist.github.com/raw/517220/9b885f405d2f9cd3bc1c19b69868db341d6eea75/graph.dot.txt This is our initial repository, a nice straight forward commit sequence. “Trunk” is the current state of our directory. Note that although I have used numbers for clarity in explanation, Git internally has no such sequential concept. Hopefully, this structure is apparent to all readers.

http://gist.github.com/raw/517220/81a7d918d13cfec92fcc84c6fd39b1fdb68e28cd/graph2.dot.txt In diagram 2, more commits have been created. At 04 a new topic branch was created called “X”. Since the divergence of these branches, 4 commits have been created on trunk and 5 commits on X. Normally, you would probably want to try merging x05 back into trunk to create a new commit. But this leaves you with multiple paths in your history, which can make things very messy over time.

In the above diagram, each and every commit has been "replayed" on top of trunk. Note that this creates new commits, each a derivation of an original commit. In aggregate, when a whole branch is replayed on top of trunk ( or any other branch for that matter ), the effect is that you produce a second, derived branch that simply has a different origin.

In practice, this new branch is much like you had decided “Hey, branches are too hard, we will not do them, so new features must be worked on, and completed 100%, before starting another feature” , and you had instead merely “waited around” for commit 08 to arrive, and then proceeded to develop the same feature. ( Except for of course, in reality, you never had to do any of that silly waiting stuff, and you actually were able to use branches! )

After creating the derivative branch, we can then clean up the original branch. It is no longer needed, and having it lying around is just likely to confuse people, not to mention make our history graph very messy. Git will perform this step for you automatically, as soon as the rebase is deemed “Successful” and “Complete”.

Now, if you consider the logical application of what a merge does at this point, it's quite straightforward. In fact, that sentence almost constitutes a bad pun, considering Git calls this type of merge scenario a "Fast Forward". This is simply because it does no real merging at all: Git sees there is a simple, straight, linear sequence of commits that can update trunk to reflect the integration of the branch, so it simply changes which commit it calls "head".

Now as you can see ...

The result is a much much cleaner history to work with, and merging branches becomes trivially mindless ☺
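The whole story above can be replayed on a throwaway repository in a couple of minutes; a sketch ( branch and file names are illustrative ):

```shell
#!/bin/sh
# Build a tiny repo, diverge a topic branch, rebase it onto trunk,
# then merge: the merge is a pure fast-forward with a linear history.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b trunk 2>/dev/null || { git init -q; git checkout -q -b trunk; }
git config user.email you@example.com
git config user.name "You"
echo base > file;   git add file;  git commit -qm 'c01'
git checkout -q -b X                 # topic branch diverges here
echo topic > topic; git add topic; git commit -qm 'x01'
git checkout -q trunk
echo more > other;  git add other; git commit -qm 'c02'
git checkout -q X
git rebase -q trunk                  # replay x01 on top of c02
git checkout -q trunk
git merge -q --ff-only X             # no merge commit: just moves "head"
git log --oneline                    # one straight line: x01', c02, c01
```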


All diagrams designed in graphviz. For the source for these diagrams, see This Gist on Github


Why Am I not using Perl 6 Yet?

I'm not here to deride it. I think it's pretty, the syntax is nice, and it lacks some of the annoyances I currently have with Perl 5. It's got great features, and I wholeheartedly want them to keep on trucking with that project.

My problem is not a petty squabble over things like Hurr, not perl5 enough or Derp, uses too much rams! or It's too slow!, or qualms about its completeness or its bugginess.

To a pragmatic person, none of those things really matter that much: you have to be doing really heavy work for speed and RAM to be a problem on a modern machine, and for a lot of things, I could not care less if startup time was a WHOLE SECOND LONGER. Hell, the total amount of time spent bitching about load time and speed now, in the real world, is likely to exceed the net total amount of time actually spent waiting for Perl 6 to start. And the volumes of text and debate on this issue are almost certainly a much larger waste of memory ( considering how much a single bit of information is replicated everywhere, how it has to be replicated just to be *read*, and all the transport stuff that makes that possible ).

Back on the subject!

I think my biggest reason for not using Perl 6 yet is that I'm not using Perl 6 yet. I guess this is somewhat circular reasoning, but the problem is that when I think "Oh, I have a task to achieve", my brain instantly starts framing it in terms of Perl 5 and its idioms and methods.

Additionally, when I use Perl 5, I'm not really spending a great deal of time messing around with its syntactical nuances. What I'm spending more time doing is importing and using code and modules that already exist. I have a good mental map of all those great Perl modules from CPAN, and of which ones I can JustUse to do whatever it is I want to be doing.

When I want to do something I don't already know how to do, the first thing I do is hit up CPAN to see if somebody's done it already in a way I need, or to see if there are a few aggregate parts I can scrape together to make what I want.

Also, most of my coding these days revolves around my various Perl 5 modules: enhancing, maintaining, etc., and all of this of course requires Perl 5 to be employed. It's silly to consider depending on Perl 6 to make a Perl 5 module. And although I know I probably should be helping to reduce this problem by making Perl 6 ports of my modules, it's a bit chicken-and-egg, because many of my modules are extensions of other Perl 5 modules.

So, essentially, going Perl 6 would require me to basically throw out everything I know and then resort to doing things myself? If this is not the case, I don't see how else I'm expected to get things done in Perl 6.

There are lots of fun examples of people doing raw hacking in Perl 6, but I don't see boatloads of people using modules, and I don't see boatloads of Perl 6 modules on CPAN when I'm searching for things I need to do.

If there's a secret second "c6pan" somewhere that I'm just not seeing, where these magically awesome Perl 6 modules are being served instead, somebody should post a link somewhere I'm likely to stumble over it.

Because presently, my gut reaction is that it's barely better than suggesting I move back to PHP, where I have to reinvent every wheel myself whenever my desired behaviour is not implemented by a core PHP feature.

And the idea of being stuck back in that mindset is less than inspiring to me.

What would it take me to switch?

In a nutshell:
  • A much more obvious path to adoption
    • Obvious path to learning core syntax
    • Obvious path to finding extensions/modules
  • A More Comprehensive Archive of Perl6 modules.
  • Having the things I currently use available in Perl6 in similar ways to how they are now, so I can jump ship, start using those versions instead, and then start hacking on/improving those things with my own modules.


Extending Exception::Class in Moose

I recently had the joyous experience of porting some code to use proper Object Oriented Exceptions, and found a few niggles in my experience.

Exception::Class is a great module, and in terms of an Exception base class does lots of the things I want an exception module to do.

However, it has one and only one really big problem from my perspective, and that is, by default, its extensibility is a bit limited.

It appears to be highly targeted for its in-line declarations at import(), as follows:

 use Exception::Class (
      'MyException',

      'AnotherException' => { isa => 'MyException' },

      'YetAnotherException' => {
          isa         => 'AnotherException',
          description => 'These exceptions are related to IPC',
      },

      'ExceptionWithFields' => {
          isa    => 'YetAnotherException',
          fields => [ 'grandiosity', 'quixotic' ],
          alias  => 'throw_fields',
      },
 );
This is handy for the simple case. But it doesn't do you a whole bunch of favours. Adding custom methods is a pain, and there's no field validation/processing support.

The best alternative to getting custom methods is Exception::Class::Nested which lets you do this:

        use Exception::Class::Nested (
                'MyException' => {
                        description => 'This is mine!',

                        'YetAnotherException' => {
                                description => 'These exceptions are related to IPC',

                                'ExceptionWithFields' => {
                                        fields => [ 'grandiosity', 'quixotic' ],
                                        alias => 'throw_fields',
                                        full_message => sub {
                                                my $self = shift;
                                                my $msg = $self->message;
                                                $msg .= " and grandiosity was " . $self->grandiosity;
                                                return $msg;
                                        },
                                },
                        },
                },
        );
This is loads more practical, merely by eliminating the isa => stuff and allowing the adding of custom methods, but it still lacks many things in extensibility. No type checking, no parameter processing, and worst of all, no apparent logical path to avoid clobbering parent methods ( I'm entirely assuming the ->SUPER:: stuff works, but I dislike that peskiness with a passion ). And last but not least, that module won't even install or pass its own tests.

So, you find yourself resorting to this sort of thing:

use strict;
use warnings;
package ExceptionWithFields;
use base 'YetAnotherException';

# Every time I have to do this, I forget how to do it,
# which is especially annoying as it's not documented anywhere,
# and Exception::Class bolts it on to its generated exceptions during ->import(),
# so the method is nowhere to be found in Exception::Class::Base's code
# or its inheritance hierarchy.
# The inner guts of it are hidden away in Exception::Class::_make_subclass.
sub Fields {
    # Return an array of field names, or they won't get populated.
    return ( 'grandiosity', 'quixotic' );
}

# Yes, you have to write your own accessors.
# Exception::Class->import() generates these accessors manually.

sub grandiosity {
    my ($self) = @_;
    return $self->{grandiosity};
}

sub quixotic {
    my ($self) = @_;
    return $self->{quixotic};
}

sub full_message {
    my $self = shift;
    my $msg  = $self->message;
    $msg .= " and grandiosity was " . $self->grandiosity;
    return $msg;
}

Yuck! That's an awful lot of nasty boilerplate :(.

This is only a simple example, so you can see how it'd get more complicated with more advanced things. I don't even want to contemplate how to handle parameter coercion/processing.

So, let's Moose this thing up!

I'm addicted to this Moose thing.

Moose probably makes Exception classes overweight, but considering how short lived they are, in many cases it doesn't really matter.

Unfortunately for us, Exception::Class uses some other weird thing which makes bolting stuff onto it a bit harder.

But fortunately, there is MooseX::NonMoose which makes this mostly painless.

use strict;
use warnings;
package MyException;
use Moose;
use MooseX::NonMoose;
use namespace::autoclean;
extends qw(Exception::Class::Base);

# This is needed to delete things which are supposed to be handled by Moose,
# so they don't get passed to the parent constructor, because excess args
# cause it to fail -_-
# ( FOREIGNBUILDARGS is the MooseX::NonMoose hook controlling what the
#   non-Moose parent constructor receives. )
sub FOREIGNBUILDARGS {
  my ( $class, %args ) = @_;
  for ( $class->meta->get_attribute_list ) {
    delete $args{$_};
  }
  return %args;
}

# Handy addition for giving back traces to user-land.
around show_trace => sub {
  my ( $orig, $class, @rest ) = @_;
  return 1 if $ENV{MYEXCEPTION_TRACE};    # force traces on ( illustrative variable name )
  return $class->$orig(@rest);
};

Yay. Suddenly we have something Moose-friendly that JustWorks as we want it to. And we've already added functionality: all our children's stack-traces can be forced on by an ENV option, but otherwise behave as usual.

Now for the derivative classes

use strict;
use warnings;
package YetAnotherException;
use Moose;
use namespace::autoclean;
extends 'MyException';

use strict;
use warnings;
package ExceptionWithFields;
use Moose;
use namespace::autoclean;
extends 'YetAnotherException';

has 'grandiosity' => ( isa => 'Str', is => 'ro', required => 1 );
has 'quixotic' => ( isa => 'Str', is => 'ro' , required => 1 );

# Now with inheritable message code =)
around full_message => sub {
    my ( $orig, $self , @args ) = @_;
    my $msg = $self->$orig( @args );
    $msg .= " and grandiosity was " . $self->grandiosity;
    return $msg;

# Stick some lines *after* the stacktrace =D 
around as_string => sub { 
    my ( $orig, $self , @args ) = @_;
    my $msg = $self->$orig( @args ); 
    $msg .= "\n\n Please refer to the ExceptionWithFields Manual for more information"; 
    return $msg;

WAAAY More fun. Waaay Less headaches. Moose++

use ExceptionWithFields;

ExceptionWithFields->throw(
    message     => "This is a test",
    grandiosity => "This is grand!",
    quixotic    => "Very!",
);

Git Internals: An Executive Summary in 30 Lines of Perl, for smart newbies.

Update: Modified code a bit to handle the 'pack' specials. They're not so straightforward; will blog more on that later.

This blog post is not intended as a replacement for a real in-depth understanding of Git's command-line interface, but it does aim to maximise your exposure to how it works internally, as really, its internal logic is astoundingly simple, and anyone with a good background in graph theory and databases will pretty much be able to quickly see the elegance in it. For more details, check out the excellent book Pro Git, especially the internals chapter.

The code

Git's core essentials are almost nothing more than a bunch of deflated (zlib) text files. I'm going to assume you've got enough intelligence to RTFM and get a copy of something gitty and text-based checked out. Perl modules are good examples of this. I'm using my Dist::Zilla::PluginBundle::KENTNL::Lite tree.

git clone git://github.com/kentfredric/Dist-Zilla-PluginBundle-KENTNL-Lite.git /tmp/SomeDirName

I'm going to show you the core of git's system, which is just the "object" store.

cd /tmp/SomeDirName/.git
find objects/

Woot, there are all your files and stuff in git. How does it work? That's where the Perl script comes in.

use strict;
use warnings;

use Compress::Zlib;
use Carp qw( croak );

sub inflate_file {
    my ( $filename, $OFH ) = @_;
    my ( $inflator, $status ) = Compress::Zlib::inflateInit
      or croak("Cannot create inflator: $@");
    my $input = '';
    open my $fh, '<', $filename or croak("Can't open $filename, $@ $! $?");
    binmode $fh;
    binmode $OFH;

    my $output;
    while ( read( $fh, $input, 4096 ) ) {
        ( $output, $status ) = $inflator->inflate( \$input );
        print { $OFH } $output
          if $status == Compress::Zlib::Z_OK
          or $status == Compress::Zlib::Z_STREAM_END;
        last if $status != Compress::Zlib::Z_OK;
    }
    croak("Inflation failed of $filename , $@")
      unless $status == Compress::Zlib::Z_STREAM_END;
}

for (@ARGV) {
    next if $_ =~ /\.(idx|pack)|packs/;    # skip the 'pack' specials
    print qq{<--------BEGIN $_ --------->\n};
    inflate_file( $_, *STDOUT );
    print qq{<--------END $_ --------->\n};
}


Pretend you cargo-cult dump that code to /tmp/deflate.pl

Now check this out:

perl /tmp/deflate.pl $( find objects/ -type f ) | cat -v | less

Awesome, you're now seeing the guts of how your repository works. For real. All we did was deflate each and every object. You'll see 3 types of object ( each object says at the front what type it is, before the ^@ ): trees, blobs, and commits ( with trees being the most complicated of all ).

Blobs are just a file's contents.

Commits are just a blob of text, with commit messages and stuff, timestamps, etc., and with text references ( pretend it's like an a-href in a web page or something ) to preceding ( parent ) commits, and a commit tree.

Trees are probably the hardest to work out just by looking at them. A tree is more or less just another text file with another list of text references, except the references point at either blobs or other trees. So you can pretend a "tree" is like a "dir" in some ways. There's data besides this, like file/dir names and permissions, but that's the gist of it.
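To make those layouts concrete, here's a minimal parsing sketch of my own ( not from git or the script above ): it splits an inflated loose object into header and body, and unpacks a tree body into its entries. The function names are mine, and error handling is skipped.

```perl
use strict;
use warnings;

# An inflated loose object is "<type> <size>\0<body>".
sub parse_object {
    my ($raw) = @_;
    my ( $header, $body ) = split /\0/, $raw, 2;
    my ( $type, $size ) = split / /, $header, 2;
    return { type => $type, size => $size, body => $body };
}

# A tree body is binary: repeated "<mode> <name>\0<20 raw SHA-1 bytes>".
sub parse_tree {
    my ($body) = @_;
    my @entries;
    while ( length $body ) {
        my ( $meta, $rest ) = split /\0/, $body, 2;
        my ( $mode, $name ) = split / /, $meta, 2;
        push @entries, {
            mode => $mode,
            name => $name,
            sha1 => unpack( 'H40', substr( $rest, 0, 20 ) ),
        };
        $body = substr( $rest, 20 );
    }
    return @entries;
}

my $obj = parse_object("blob 6\0hello\n");
print "$obj->{type} $obj->{size}\n";    # prints "blob 6"
```

Feeding one of the deflated objects from the script above into parse_object gives you the same type/size/body split that git itself performs.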

This has been your executive summary =)


Current Limitations In Exception Driven Perl: Stringy Core Exceptions

Let's just assume for one moment that we have a proper Exception Hierarchy, and that this wasn't a huge gaping hole in the current Exception landscape.

There's still the other problem of so much Perl code being not designed in Exception friendly ways.

die "$string" and croak "$string" is about as detailed as you get from most things.
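A tiny illustration of just how little structure survives ( my own sketch ): after a plain die, all $@ carries is an unblessed string.

```perl
use strict;
use warnings;
use Scalar::Util qw( blessed );

eval { die "something failed: widget missing\n" };

# $@ is just text: no class, no fields, no trace object.
print "ref:     ", ( ref $@     || 'plain string' ), "\n";    # prints "ref:     plain string"
print "blessed: ", ( blessed $@ || 'no' ),           "\n";    # prints "blessed: no"
```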

And I'm sure everyone agrees that only passes for the bare minimum of exception-handling techniques. No benefits of runtime stack introspection ( Edit: OK, not without mangling sigdie, yuck ), no re-throwing exceptions without losing the source failure point ( Edit: to clarify, not all 'die' calls are represented in the error ), let alone problem classification without resorting to regexing the failure string ( and that's far from reliable, considering those strings are targeted at humans, not machines, so are prone to being modified later in life in a way your regex won't recognise, breaking your code ).

autodie is a good start to solving this problem, but it doesn't have all the bells and whistles I'd hoped for. It has an error hierarchy, but it doesn't appear very flexible or extensible into other projects ( the whole thing is defined in a 'my' variable in Fatal.pm, it seems ), and additionally, it doesn't supplement any of the things in Perl that already just die by throwing their own stringy exception, because as far as autodie is concerned, if it's already throwing an exception, why replace it?

One such builtin afflicted by this problem is require.

There are at least 3 separate failure conditions that I know 'require' can spit out:

  • File not Found in @INC
  • require returned false value
  • compilation failed in require

All of the above being reported merely as strings leaves much to be desired. Sure, it's great when things fail in obvious ways, but handling it in code is far too pesky.
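To see all three side by side, here's a self-contained sketch of mine that fabricates the second and third cases with temporary files ( the module names are made up for the demo ):

```perl
use strict;
use warnings;
use File::Temp qw( tempdir );

my $dir = tempdir( CLEANUP => 1 );
unshift @INC, $dir;

# A module that compiles fine but returns a false value.
open my $fh, '>', "$dir/ReturnsFalse.pm" or die $!;
print {$fh} "package ReturnsFalse;\n0;\n";
close $fh;

# A module that fails to compile.
open $fh, '>', "$dir/Broken.pm" or die $!;
print {$fh} "package Broken;\nmy \$x = ;\n";
close $fh;

for my $mod (qw( No::Such::Module ReturnsFalse Broken )) {
    if ( eval "require $mod; 1" ) {
        print "$mod: loaded ok\n";
    }
    else {
        print "$mod: $@";    # three distinct, string-only failure messages
    }
}
```

Run it and you get three differently worded strings, none of which is any more machine-readable than the others.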

Not everyone will have experienced this problem of course, but let me demonstrate a scenario.

sub findFirst {
  my $plugin = shift;
  my $parent = "SomeApp";
  my @guessOrder = ( $plugin . "::" . $parent, $plugin );
  my @fails;
  for (@guessOrder) {
     local $@;
     eval "require $_; 1";
     if ($@) {
        die $@ if $@ !~ /not found/;
        push @fails, $@;
     }
     else {
        return $_;
     }
  }
  die "Couldn't load any of @guessOrder : @fails ";
}

my $plug = findFirst("Foo::Bar");

This is about as semantically clean as I can get it. The goal here is to permit "Not Found" family of require failures, but upon encountering something that exists but is merely broken, then push that failure up to userland, and, in the event none are found, dump all the errors out showing all the attempted paths that were searched and what was searched for.

But there are several problems with this code. The most obvious is that stringy eval is a really bad idea. I had hoped that at least one of the workarounds for this silliness on CPAN came with something that threw an Exception object instead of a string... but no, all I can find are ones that rely on the stock Perl system, and ones that go contrary to all logic and require you to check a return value for failure.

Another problem is the check for a string in the error. This is not as big a problem, but somebody malicious could, I guess, break something by explicitly crafting a death message that matched that line.

Another lovely problem is that death-rethrowing thing. Finding everywhere that the problem occurred, in a non-insane way, is hard. Ideally, not only should you have a trace depth from the top level down to the point of the failure, but also a trace of everywhere the error was re-thrown, because the failure is really a domino effect, and not being able to see how it propagates without dropping into a debugger is hell. You tend to need more complex cases to see why this is happening though.


use strict;
use warnings;

sub fail {
  die "Hurp Durp!";
}

sub maybfail {
  unless ( eval { fail; 1; } ) {
    die "maybfail: $@";
  }
}

sub moarfail {
  unless ( eval { maybfail; 1; } ) {
    die "Moarfail: $@";
  }
}

moarfail();

To me, I'd like to be able to see that
  • the root error occurred as main:22 { moarfail:17 { maybfail:11 { fail:7 { die } } }
  • The error was rethrown at main:22 { moarfail:17 { maybfail:12 } }
  • The error was rethrown at main:22 { moarfail:18 }
At present, here's the best I can get out of that simple structure:
$ perl -MCarp::Always /tmp/die.pl 
Moarfail: maybfail: Hurp Durp! at /tmp/die.pl line 18
 main::moarfail() called at /tmp/die.pl line 22
$ perl /tmp/die.pl 
Moarfail: maybfail: Hurp Durp! at /tmp/die.pl line 7.
$ perl -MDevel::SimpleTrace /tmp/die.pl 
Moarfail: maybfail: Hurp Durp!
 at main::fail(/tmp/die.pl:7)
 at (/tmp/die.pl:11)
 at main::maybfail(/tmp/die.pl:11)
 at (/tmp/die.pl:17)
 at main::moarfail(/tmp/die.pl:17)
 at main::(/tmp/die.pl:22)

Note how none of those traces reflect the fact I call "die" on line 12? Be glad the die isn't like 30 lines away in a different method where it might go completely unnoticed.

In fact, each and every one of these backtraces confuses me, because I can't work out why some know about the failure origin and others don't... ( Carp::Always seems to let you down by being completely unable to show the full stack. :/ )

I would in fact, much rather prefer something like this that actually worked:


use strict;
use warnings;

sub fail {
    BasicException->throw( error => 'HurpDurp' );
}

sub maybfail {
  try {
    fail();
  } catch ( BasicException $e ) {
     MoreComplexException->adopt( $e )->throw( error => 'Maybfail');
  }
}

sub moarfail {
  try {
    maybfail();
  } catch ( MoreComplexException $e ) {
     EvenMoreComplexException->adopt( $e )->throw( error => 'Moarfail');
  }
}


Nothing I've seen handles that "adopt" thing, but it's my little way of saying "We are in fact creating a new exception, because we want to provide more information about the problem, and increase the meaning of the problem relative to this context, but we also want to recognise that this problem is likely caused by another problem(s) that we identify here."

In case you TL;DR'd here ( and because my train of thought was just snapped -_- ), the summary of this is: it's really challenging doing proper exception-oriented Perl when so many core features still throw those nasty stringy exceptions. :(


Current Limitations In Exception Driven Perl: Exception Base Classes.

I've started re-attempting to do Exception Oriented Perl Programming recently, and quickly discovered a whole raft of things that got in my way.

This is the first of such things.

I was very much appreciative of Exception::Class. It mostly looks to Do The Right Thing, and it's mostly simple and straightforward. It itself has some apparent limitations with regard to exception-driven code, but I'll cover those later.

The biggest annoyance I have at present is there is no apparent de-facto base set of Exception classes to derive everything else from. I was expecting some sort of Exception Hierarchy much like Moose's Type Hierarchy, but none is to be found anywhere, and this stinks.

Is everyone to have their own base hierarchy for everything? The idea of every project shipping its own FileException class feels like Fail to me, and this problem I feel will need to be addressed before more people start taking exception-driven Perl seriously.

Adding to this fun, presently all the exception classes share the same name-space as everything else in Perl, because they're just Perl packages. I accept this limitation is mostly Perl's fault, but I still dislike it. The 'Type' name-space suffers a similar problem, but it's not quite so bad.

The challenge here is having adequate classes to accurately represent all the classes of exception one wishes to provide, having them still sanely organised, but without people needing to type out 100-character incantations just to throw an exception.

Something akin to MooseX::Types, which injects subs into the calling context, would be nice-ish; the only problem there is when you do something stupid like create/import an exception with a name identical to a child namespace, ie:

   package Bar;
   use SomeTypePackage qw( Foo );
   use Bar::Foo; # Hurp durp. Bar::Foo->import() ==> Bar::Foo()->import() 
   Bar::Foo->new(); # moar hurp durp. Bar::Foo()->import() 

It's reasonably easy to work around, but discovering you've failed in this way is slightly less than obvious.
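For illustration, here's a self-contained sketch of mine showing both the collision and the usual workarounds ( the package names are invented for the demo ):

```perl
use strict;
use warnings;

package Bar::Foo;
sub new { bless {}, shift }

package Bar;

# Pretend an exporter injected this sub, colliding with the child namespace.
sub Foo { die "called the injected Bar::Foo() sub, not the class!" }

# Bar::Foo->new();               # hurp durp: parsed as Bar::Foo()->new()

my $obj  = Bar::Foo::->new();    # trailing :: forces package-name parsing
my $obj2 = 'Bar::Foo'->new();    # quoting the class name also works

print ref($obj), "\n";           # prints "Bar::Foo"
```

The trailing-colons and quoted-string forms both tell the parser unambiguously "this is a class name", sidestepping the bareword-vs-sub lookup.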


Todays amusing Perl parser confusion

Have a look at this very simple code and see what you expect it will do:

use strict;
use warnings;

print "hello";

1

=pod

=cut
It looks trivial, right?

Not so.

$ perl /tmp/pl.pl 
Can't modify constant item in scalar assignment at /tmp/pl.pl line 13, at EOF
Bareword "cut" not allowed while "strict subs" in use at /tmp/pl.pl line 8.
Bareword "pod" not allowed while "strict subs" in use at /tmp/pl.pl line 8.
Execution of /tmp/pl.pl aborted due to compilation errors.



Running it through Deparse reveals the culprit:

$ perl -MO=Deparse /tmp/pl.pl 
Can't modify constant item in scalar assignment at /tmp/pl.pl line 13, at EOF
Bareword "cut" not allowed while "strict subs" in use at /tmp/pl.pl line 8.
Bareword "pod" not allowed while "strict subs" in use at /tmp/pl.pl line 8.
/tmp/pl.pl had compilation errors.
use warnings;
use strict 'refs';
print 'hello';
1 = 'pod' = 'cut';

Pesky indeed!

The solution? Insert the humble ; like your mother taught you to.

use strict;
use warnings;

print "hello";

1;

=pod

=cut
$ perl /tmp/pl.pl 
hello
Perhaps this is worthy of applying a bugfix. Perl version: 5.12.1 =).