[avconv] Convert WMV to MP4

I just came across a great tool for converting .wmv files to .mp4 and various other formats. I needed to perform this conversion because my Panasonic DLNA-equipped TV could not play back .wmv files (or was it the fault of my QNAP DLNA server?). In any case, .mp4-encoded files play without issue. In the past the tool of choice was ffmpeg, but it is now deprecated and we are advised to use avconv instead:

`--> ffmpeg
ffmpeg version 0.8.4-6:0.8.4-0ubuntu0.12.10.1, Copyright (c) 2000-2012 the Libav developers
  built on Nov  6 2012 16:51:11 with gcc 4.7.2
*** THIS PROGRAM IS DEPRECATED ***
This program is only provided for compatibility and will be removed in a future release. Please use avconv instead.
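
The deprecation notice means that, depending on the distribution release, only one of the two tools may be installed. As a convenience, here is a small wrapper of my own (the function name pick_converter is mine, not part of either tool) that prefers avconv and falls back to ffmpeg:

```shell
# Print the name of the available converter: prefer avconv, fall back
# to the deprecated ffmpeg; fail if neither is installed.
pick_converter() {
  if command -v avconv >/dev/null 2>&1; then
    echo avconv
  elif command -v ffmpeg >/dev/null 2>&1; then
    echo ffmpeg
  else
    echo "no converter found" >&2
    return 1
  fi
}
```

Both tools accept the basic `-i input output` style of invocation used throughout this post, so the result can be used as `"$(pick_converter)" -i in.wmv out.mp4`.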

To check the available codecs on your system, pass the ‘-codecs’ option to avconv; it displays all supported codecs and whether it is possible to encode, decode, and perform various other tasks, as the legend shows. The supported functions appear to the left of each codec. For example:

`--> avconv -codecs | head -10 && avconv -loglevel quiet -codecs | egrep "(wmv)"
avconv version 0.8.4-6:0.8.4-0ubuntu0.12.10.1, Copyright (c) 2000-2012 the Libav developers
  built on Nov  6 2012 16:51:11 with gcc 4.7.2
Codecs:
 D..... = Decoding supported
 .E.... = Encoding supported
 ..V... = Video codec
 ..A... = Audio codec
 ..S... = Subtitle codec
 ...S.. = Supports draw_horiz_band
 ....D. = Supports direct rendering method 1
 .....T = Supports weird frame truncation
 ------
 DEVSD  wmv1            Windows Media Video 7
 DEVSD  wmv2            Windows Media Video 8
 D V D  wmv3            Windows Media Video 9
 D V D  wmv3_vdpau      Windows Media Video 9 VDPAU
 D V D  wmv3image       Windows Media Video 9 Image

To convert our sample file, 1-25_681_webinar_2.wmv, 44MB in size, with the following characteristics:

`--> du -sh 1-25_681_webinar_2.wmv
44M     1-25_681_webinar_2.wmv

`--> avconv -i 1-25_681_webinar_2.wmv
avconv version 0.8.4-6:0.8.4-0ubuntu0.12.10.1, Copyright (c) 2000-2012 the Libav developers
  built on Nov  6 2012 16:51:11 with gcc 4.7.2
[wmv3 @ 0xa50be0] Extra data: 8 bits left, value: 0
Input #0, asf, from '1-25_681_webinar_2.wmv':
  Metadata:
    title           : 1-25 681 webinar 2
    WMFSDKVersion   : 11.0.5721.5251
    WMFSDKNeeded    : 0.0.0.0000
    IsVBR           : 1
    VBR Peak        : 295
    Buffer Average  : 772
  Duration: 01:02:38.47, start: 0.000000, bitrate: 96 kb/s
    Stream #0.0(eng): Audio: wmav2, 44100 Hz, 1 channels, s16, 48 kb/s
    Stream #0.1(eng): Video: wmv3 (Main), yuv420p, 640x416, 37 kb/s, 15 tbr, 1k tbn, 1k tbc
At least one output file must be specified

to .mp4 we perform:

`--> sudo avconv -i 1-25_681_webinar_2.wmv -strict experimental 1-25_681_webinar_2.mp4

where the .wmv input file (-i) is encoded into .mp4 using the experimental ‘aac’ encoder (-strict experimental):

avconv version 0.8.4-6:0.8.4-0ubuntu0.12.10.1, Copyright (c) 2000-2012 the Libav developers
  built on Nov  6 2012 16:51:11 with gcc 4.7.2
[wmv3 @ 0x1bd6be0] Extra data: 8 bits left, value: 0
Input #0, asf, from '1-25_681_webinar_2.wmv':
  Metadata:
    title           : 1-25 681 webinar 2
    WMFSDKVersion   : 11.0.5721.5251
    WMFSDKNeeded    : 0.0.0.0000
    IsVBR           : 1
    VBR Peak        : 295
    Buffer Average  : 772
  Duration: 01:02:38.47, start: 0.000000, bitrate: 96 kb/s
    Stream #0.0(eng): Audio: wmav2, 44100 Hz, 1 channels, s16, 48 kb/s
    Stream #0.1(eng): Video: wmv3 (Main), yuv420p, 640x416, 37 kb/s, 15 tbr, 1k tbn, 1k tbc
File '1-25_681_webinar_2.mp4' already exists. Overwrite ? [y/N] y
[buffer @ 0x1bd8860] w:640 h:416 pixfmt:yuv420p
[wmv3 @ 0x1bd6be0] Extra data: 8 bits left, value: 0
Output #0, mp4, to '1-25_681_webinar_2.mp4':
  Metadata:
    title           : 1-25 681 webinar 2
    WMFSDKVersion   : 11.0.5721.5251
    WMFSDKNeeded    : 0.0.0.0000
    IsVBR           : 1
    VBR Peak        : 295
    Buffer Average  : 772
    encoder         : Lavf53.21.0
    Stream #0.0(eng): Video: mpeg4, yuv420p, 640x416, q=2-31, 200 kb/s, 15 tbn, 15 tbc
    Stream #0.1(eng): Audio: aac, 44100 Hz, 1 channels, s16, 200 kb/s
Stream mapping:
  Stream #0:1 -> #0:0 (wmv3 -> mpeg4)
  Stream #0:0 -> #0:1 (wmav2 -> aac)
Press ctrl-c to stop encoding
frame=56377 fps=131 q=27.9 Lsize=  163756kB time=3758.47 bitrate= 356.9kbits/s
video:92153kB audio:69879kB global headers:0kB muxing overhead 1.064378%
178.31s user 8.23s system 43% cpu 7:12.13s total

I was surprised to see the resulting .mp4 (160M) was more than twice as large as the original!

`--> du -sh 1-25_681_webinar_2.*
160M    1-25_681_webinar_2.mp4
44M     1-25_681_webinar_2.wmv
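
If there are several files to convert, the single-file command above generalizes to a batch loop. This is a sketch of my own (the helper names are mine), assuming avconv is on the PATH:

```shell
# Derive the output filename: strip a trailing .wmv, append .mp4.
wmv_to_mp4_name() {
  printf '%s.mp4\n' "${1%.wmv}"
}

# Batch-convert every .wmv file in the current directory.
convert_all_wmv() {
  for f in *.wmv; do
    [ -e "$f" ] || continue   # no matches: the glob stays literal, skip
    avconv -i "$f" -strict experimental "$(wmv_to_mp4_name "$f")"
  done
}
```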

Update: An anonymous commenter felt my pain and suggested I could have reduced the file size further by including the option “-tune film”. As of today, with the current version of avconv, using ‘-tune film’ is no longer necessary; the resulting file size is smaller than in my initial post with this version of avconv:

avconv -version
avconv version 9.16-6:9.16-0ubuntu0.14.04.1, Copyright (c) 2000-2014 the Libav developers
built on Aug 10 2014 18:16:02 with gcc 4.8 (Ubuntu 4.8.2-19ubuntu1)
avconv 9.16-6:9.16-0ubuntu0.14.04.1
libavutil 52. 3. 0 / 52. 3. 0
libavcodec 54. 35. 0 / 54. 35. 0
libavformat 54. 20. 4 / 54. 20. 4
libavdevice 53. 2. 0 / 53. 2. 0
libavfilter 3. 3. 0 / 3. 3. 0
libavresample 1. 0. 1 / 1. 0. 1
libswscale 2. 1. 1 / 2. 1. 1

Here are the file sizes of the same file, first with and then without the use of ‘-tune film’ (which, by the way, is not in the man page but is accepted as a valid argument):

With option “-tune film”

avconv -i 1-25_681_webinar_2.wmv -strict experimental -tune film 1-25_681_webinar_2.mp4

...output truncated...
[libx264 @ 0x7d6080] ref B L1: 94.5%  5.5%
[libx264 @ 0x7d6080] kb/s:40.87
504.36s user 6.57s system 331% cpu 2:34.02s total

File sizes after conversion with “-tune film”

du -h 1-25_681_webinar_2.*
83M     1-25_681_webinar_2.mp4
44M     1-25_681_webinar_2.wmv
\rm -f 1-25_681_webinar_2.mp4

WITHOUT option “-tune film”

avconv -i 1-25_681_webinar_2.wmv -strict experimental 1-25_681_webinar_2.mp4
...output truncated...
[libx264 @ 0x15b1fa0] kb/s:39.45
505.83s user 7.31s system 329% cpu 2:35.65s total

File sizes after conversion WITHOUT “-tune film”

du -h 1-25_681_webinar_2.*
83M     1-25_681_webinar_2.mp4
44M     1-25_681_webinar_2.wmv

Thanks to the anonymous commenter and those working on avconv.

Posted in *Nix | 4 Comments

[QNAP] New NAS — TS-569L-US: Just Ordered!

My QNAP will be delivered this week, in time to start the new year (2013) off right! During my waiting period I started searching for the right memory module to upgrade from 1GB to the maximum total of 3GB. The exact type of memory to use for this QNAP NAS is not disclosed, but on the QNAP website they are selling a 2GB module for over $150.00 USD!! And I am not buying that!

After searching the forums, one member recommended the Kingston KVR1333D3S8S9/2G, which could be found really cheap online, for about $10-$30. Another member mentioned that the memory module inside their TS-269L is made by ADATA. Based on the specs provided by QNAP on their website (2GB DDR3-1333 204-pin SO-DIMM RAM module), I decided to purchase the following memory module from Amazon.com: ADATA Premier Series DDR3 1333MHz 2GB. The reviews are pretty fair and I will try my luck!

For the hard drives I am starting off with three (3) 2TB Western Digital Red drives, ordered right from Amazon! They will be used in a RAID-5 configuration for a total of 4TB, which should be enough for my needs for now. I have been very lucky for the past five (5) years, never having to change a single drive in my Hammer N1200, and it is still running fine now! **Knock on wood** :).

Posted in General-Tech | Leave a comment

[2012 in review] Site Stats

The WordPress.com stats helper monkeys prepared a 2012 annual report for this blog.

Here’s an excerpt:

19,000 people fit into the new Barclays Center to see Jay-Z perform. This blog was viewed about 86,000 times in 2012. If it were a concert at the Barclays Center, it would take about 5 sold-out performances for that many people to see it.

Click here to see the complete report.

Posted in General-Tech | Leave a comment

[Puppet] Adding The Schema for Storing Node Definitions In LDAP

Puppet allows node information to be stored in LDAP. In this write-up I will detail how to configure an Oracle Directory Server to store node information that can later be used by a puppet server to retrieve node classification information. Using LDAP eliminates the need for the flat node.pp file for node definitions.

On the server acting as the “puppet master”, Ruby LDAP client libraries are required. In the example below, our “puppet master” has already been configured on an Ubuntu Linux server.

Ensure the Ruby LDAP client libraries are installed:

After verifying their absence, we install them below:

--> aptitude search ruby | grep -i ldap
...edited...
p   libldap-ruby1.8                 - OpenLDAP library binding for Ruby 1.8
...edited...

--> aptitude install libldap-ruby1.8
...edited...
Fetched 66.8 kB in 0s (109 kB/s)
Selecting previously deselected package libldap-ruby1.8.
(Reading database ... 63468 files and directories currently installed.)
Unpacking libldap-ruby1.8 (from .../libldap-ruby1.8_0.9.7-1.1_amd64.deb) ...
Setting up libldap-ruby1.8 (0.9.7-1.1) ...

`--> ruby -rldap -e "puts :installed"
installed

Update /etc/puppet/puppet.conf to use LDAP

Change the [master] section of “/etc/puppet/puppet.conf” on the master server to use LDAP for node lookups. For example, the following should be placed underneath the [master] section:

[master]
node_terminus = ldap
ldapserver = odsee.goldcoast.com
ldapbase = ou=hosts,dc=goldcoast,dc=com

Here, ‘node_terminus’ was originally using flat files, but will now use ldap. ‘ldapserver’ should point to a valid LDAP server reachable on port 389. ‘ldapbase’ is where the puppet master server will look for node information; we will populate this organizational unit (ou) later on. Once the changes have been saved, restart the “puppet master”. The ‘node.pp’ file should no longer be referenced by the master server, but before discarding the file entirely we need to configure LDAP with the custom puppet schema for our node definitions.

Adding the Puppet Schema to LDAP Directory Server

Next we need to populate our LDAP server with the puppet.schema definitions. I recommend visiting the following URL for the latest puppet schema:

https://github.com/puppetlabs/puppet/blob/master/ext/ldap/puppet.schema

Log in to your directory server. Copy the contents of ‘puppet.schema’ to a temporary file, for example: /tmp/98puppet.ldif.tmp. As of this writing, the file as-is cannot be imported into Oracle Directory Server Enterprise Edition (ODSEE) without modification.

The original ‘puppet.schema’ looks like:

bash-3.00# cat > /tmp/98puppet.ldif.tmp
attributetype ( 1.3.6.1.4.1.34380.1.1.3.10 NAME 'puppetClass'
DESC 'Puppet Node Class'
EQUALITY caseIgnoreIA5Match
SYNTAX 1.3.6.1.4.1.1466.115.121.1.26 )

attributetype ( 1.3.6.1.4.1.34380.1.1.3.9 NAME 'parentNode'
DESC 'Puppet Parent Node'
EQUALITY caseIgnoreIA5Match
        SYNTAX 1.3.6.1.4.1.1466.115.121.1.26
        SINGLE-VALUE )

attributetype ( 1.3.6.1.4.1.34380.1.1.3.11 NAME 'environment'
DESC 'Puppet Node Environment'
EQUALITY caseIgnoreIA5Match
SYNTAX 1.3.6.1.4.1.1466.115.121.1.26 )

attributetype ( 1.3.6.1.4.1.34380.1.1.3.12 NAME 'puppetVar'
DESC 'A variable setting for puppet'
EQUALITY caseIgnoreIA5Match
SYNTAX 1.3.6.1.4.1.1466.115.121.1.26 )

objectclass ( 1.3.6.1.4.1.34380.1.1.1.2 NAME 'puppetClient' SUP top AUXILIARY
DESC 'Puppet Client objectclass'
MAY ( puppetclass $ parentnode $ environment $ puppetvar ))

It can easily be converted to work with ODSEE using the ldif2dsee.pl script, available at:

http://directory.fedoraproject.org/wiki/Howto:OpenLDAPMigration

For example:

bash-3.00# cd /tmp/

bash-3.00# perl ldif2dsee.pl 98puppet.ldif.tmp > 98puppet.ldif

After conversion, the puppet schema will look like:

bash-3.00# cat 98puppet.ldif

dn: cn=schema
attributeTypes: ( 1.3.6.1.4.1.34380.1.1.3.10 NAME 'puppetClass' DESC 'Puppet Node Class' EQUALITY cas
eIgnoreIA5Match SYNTAX 1.3.6.1.4.1.1466.115.121.1.26 X-ORIGIN 'PUPPET')
attributeTypes: ( 1.3.6.1.4.1.34380.1.1.3.9 NAME 'parentNode' DESC 'Puppet Parent Node' EQUALITY case
IgnoreIA5Match SYNTAX 1.3.6.1.4.1.1466.115.121.1.26 SINGLE-VALUE X-ORIGIN 'PUPPET')
attributeTypes: ( 1.3.6.1.4.1.34380.1.1.3.11 NAME 'environment' DESC 'Puppet Node Environment' EQUALI
TY caseIgnoreIA5Match SYNTAX 1.3.6.1.4.1.1466.115.121.1.26 X-ORIGIN 'PUPPET')
attributeTypes: ( 1.3.6.1.4.1.34380.1.1.3.12 NAME 'puppetVar' DESC 'A variable setting for puppet' EQ
UALITY caseIgnoreIA5Match SYNTAX 1.3.6.1.4.1.1466.115.121.1.26 X-ORIGIN 'PUPPET')
objectClasses: ( 1.3.6.1.4.1.34380.1.1.1.2 NAME 'puppetClient' SUP top AUXILIARY DESC 'Puppet Client 
objectclass' MAY ( puppetClass $ parentNode $ environment $ puppetVar ) X-ORIGIN 'PUPPET')
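
Note that the converted file wraps long attribute definitions across lines; in LDIF, a continuation line begins with a single space. When grepping the schema it can help to unfold those lines first. A minimal sketch of my own (the function name is mine):

```shell
# Unfold LDIF: a line starting with a single space continues the
# previous logical line (the leading space is not part of the value).
unfold_ldif() {
  awk '
    /^ / { sub(/^ /, ""); buf = buf $0; next }   # continuation: append
    { if (buf != "") print buf; buf = $0 }       # flush previous logical line
    END { if (buf != "") print buf }
  '
}
```

For example: `unfold_ldif < 98puppet.ldif | grep -i puppet` shows each attribute definition on one line.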

Copy the resulting file, /tmp/98puppet.ldif, under the ODSEE schema/ path. This is usually under instance-path/config/schema/ :

bash-3.00# cp /tmp/98puppet.ldif /odsee/config/schema/

Restart the LDAP Instance

Before restarting the instance, tail the errors log file, instance-path/logs/errors, in one window; in another, restart the LDAP instance, ensuring there are no errors. For example, after restarting the instance:

bash-3.00# dsadm restart /odsee
Directory Server instance '/odsee' stopped

Note: Notice that after the restart the message says “… ‘/odsee’ stopped”. It should have said “… ‘/odsee’ restarted”.

The errors window should have displayed something similar to:

[21/Jan/2012:22:25:43 -0500] - slapd shutting down - waiting for 0 threads to terminate
[21/Jan/2012:22:25:43 -0500] - libumem_dummy_thread started.
[21/Jan/2012:22:25:43 -0500] - Waiting for 6 database threads to stop
[21/Jan/2012:22:25:44 -0500] - All database threads now stopped
[21/Jan/2012:22:25:44 -0500] - slapd stopped.
[21/Jan/2012:22:25:47 -0500] - Sun-Directory-Server/11.1.1.3.0 B2010.0630.2254 (64-bit) starting up
[21/Jan/2012:22:25:49 -0500] - Listening on all interfaces port 389 for LDAP requests
[21/Jan/2012:22:25:49 -0500] - Listening on all interfaces port 636 for LDAPS requests
[21/Jan/2012:22:25:49 -0500] - slapd started. 
[21/Jan/2012:22:25:49 -0500] - INFO: 97 entries in the directory database.
...edited...

Verify The Puppet Schema is in LDAP

While still logged into the LDAP server, perform a basic search which should return the schema that was just imported.

bash-3.00# ldapsearch -T -b cn=schema "(objectclass=*)" | grep -i puppet

attributeTypes: ( 1.3.6.1.4.1.34380.1.1.3.11 NAME 'environment' DESC 'Puppet Node Environment' EQUALITY caseIgnoreIA5Match SYNTAX 1.3.6.1.4.1.1466.115.121.1.26 X-ORIGIN 'PUPPET' )
attributeTypes: ( 1.3.6.1.4.1.34380.1.1.3.10 NAME 'puppetClass' DESC 'Puppet Node Class' EQUALITY caseIgnoreIA5Match SYNTAX 1.3.6.1.4.1.1466.115.121.1.26 X-ORIGIN 'PUPPET' )
attributeTypes: ( 1.3.6.1.4.1.34380.1.1.3.9 NAME 'parentNode' DESC 'Puppet Parent Node' EQUALITY caseIgnoreIA5Match SYNTAX 1.3.6.1.4.1.1466.115.121.1.26 SINGLE-VALUE X-ORIGIN 'PUPPET' )
attributeTypes: ( 1.3.6.1.4.1.34380.1.1.3.12 NAME 'puppetVar' DESC 'A variable setting for puppet' EQUALITY caseIgnoreIA5Match SYNTAX 1.3.6.1.4.1.1466.115.121.1.26 X-ORIGIN 'PUPPET' )
objectClasses: ( 1.3.6.1.4.1.34380.1.1.1.2 NAME 'puppetClient' DESC 'Puppet Client objectclass' STRUCTURAL MAY ( puppetClass $ parentNode $ environment $ puppetVar ) X-ORIGIN 'PUPPET' )

Now you should be able to add node information within LDAP.

Add a base node to LDAP

I like to use the command-line tool ldapvi for manipulating my LDAP entries. I will not go into detail on how to configure ldapvi, but additional information may be found online. Let’s add a base node and assign the “base” class to it. We will place “cn=base” under the “search base” ou=hosts,dc=goldcoast,dc=com:

--> ldapvi --add -o top -o device -o puppetClient -b cn=base,ou=hosts,dc=goldcoast,dc=com

After invocation, your default editor will open up with a screen similar to this:

# -*- coding: utf-8 -*- vim:encoding=utf-8:
# http://www.lichteblau.com/ldapvi/manual#syntax

### NOTE: objectclass is abstract: top
# structural object class: device
### WARNING: extra structural object class: puppetClient
add cn=base,ou=hosts,dc=goldcoast,dc=com
objectClass: top
objectClass: device
objectClass: puppetClient
cn:
#description:
#l:
#o:
#ou:
#owner:
#seeAlso:
#serialNumber:
puppetClass: base
#parentNode:
#environment:
#puppetVar:

My default editor is “vim”, and I uncommented “puppetClass:” in order to assign the “base” class to the “base node”. Once done, save and quit the file, and you should be prompted to authenticate in order to commit the change to LDAP, with something similar to:

...edited...
~
/tmp/ldapvi-usdGC1/data: 22 lines, 457 characters.
add: 1, rename: 0, modify: 0, delete: 0
Action? [yYqQvVebB*rsf+?] b

--- Login
Type M-h for help on key bindings.

Filter or DN: 
    Password: 
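
For reference, the entry committed above corresponds to the following LDIF. The second entry is a hypothetical child node (the name web01 and the class apache are mine, for illustration) showing how parentNode chains a node back to the base definition:

```
dn: cn=base,ou=hosts,dc=goldcoast,dc=com
objectClass: top
objectClass: device
objectClass: puppetClient
cn: base
puppetClass: base

dn: cn=web01,ou=hosts,dc=goldcoast,dc=com
objectClass: top
objectClass: device
objectClass: puppetClient
cn: web01
parentNode: base
puppetClass: apache
```

Either entry can later be checked with a search such as: ldapsearch -b "ou=hosts,dc=goldcoast,dc=com" "(objectclass=puppetClient)"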

Cheers,
-swinful

Posted in *Nix | Leave a comment

[Solaris 11 Express] Configuring Samba via ZFS for use in ActiveDirectory

“Eh, I checked everywhere! I cannot find that smb.conf. Where could it have gone!?” I thought a colleague of mine was lying when he mentioned this while trying to enable Samba. Well, I checked and could not find any trace of the smb.conf file either. Although Samba was enabled via ZFS and we could see the Windows shares, we could not access them. Sure, enabling Samba via ZFS was fairly simple. Considering tank is our dataset (on a system called army) with the domain goldcoast.com, I performed:

# zfs set sharesmb=on tank

which should implicitly enable the SMF: svc:/network/smb/server:default

What was actually missing, since we are in an Active Directory environment, was joining our Solaris host to the domain and mapping the corresponding Windows users to Unix users (provided the Windows and Unix usernames are the same, and in this case they were).

Join Solaris to the Active Directory domain:

# smbadm join -u administrator goldcoast.com

At this point the Windows shares were accessible, but you may have noticed that the file mappings were wrong. For example, if you created a new file on the Windows side, the owner and group would appear differently on the Unix side, similar to the listing below:

# ls -ltr
   -rwx------+ 1 2147540993 2147483653          0 May 10 16:24 New Text Document.txt                    

And with permissions like that, in a shared environment there are sure to be a lot of complaints.

To map all AD users that are part of the domain goldcoast.com, considering the local Unix accounts have the same names, we performed:

# idmap add "winuser:*@goldcoast.com" unixuser:*

And Samba is enabled. Try it: access the share from Windows using

Start -> Run: \\army\tank

If your Windows machine is connected to an Active Directory domain controller, you should be presented with a username/password dialog.


References:

  1. Solaris CIFS Permissions
  2. Oracle Solaris SMB and Windows Interoperability Administration Guide

Posted in *Nix | Leave a comment

[Perl] It is never too late to learn!

All these years and I have never had the need to seriously learn Perl, until now. While searching for a good beginner’s guide I was particularly interested in a decent Computer Based Training (CBT), but that was hard to come by, at least a free one worth my while. I wanted something similar to the old DOS Unix CBT I once used when I was learning UNIX, or even something to the extent of the Tcl CBT. Well, I did not quite find what I was looking for, so instead I checked what was already available on my BSD box for learning Perl:

`--> apropos perl | grep doc
perlapi(1)               - autogenerated documentation for the perl public API
perldoc(1)               - Look up Perl documentation in Pod format
perlintern(1)            - autogenerated documentation of purely internal Perl functions
perlplan9(1)             - Plan 9-specific documentation for Perl
perlpod(1)               - the Plain Old Documentation format Xref "POD plain old documentation"
perltoc(1)               - perl documentation table of contents
perlvms(1)               - VMS-specific documentation for Perl

What stood out to me was perltoc(1), and it is what I used as the basis for starting to learn Perl. As its name suggests, perltoc(1) provides a brief table of contents for the rest of the Perl documentation set. I used it to scan for the areas of Perl that interested me.

`--> man perltoc | col -bx | egrep "perl.+ -+ .*" | sed 's/^ *//' | more
perltoc - perl documentation table of contents
perlintro -- a brief introduction and overview of Perl
perlreftut - Mark's very short tutorial about references
perldsc - Perl Data Structures Cookbook
perllol - Manipulating Arrays of Arrays in Perl
perlrequick - Perl regular expressions quick start
perlretut - Perl regular expressions tutorial
perlboot - Beginner's Object-Oriented Tutorial
perltoot - Tom's object-oriented tutorial for perl
perltooc - Tom's OO Tutorial for Class Data in Perl
perlbot - Bag'o Object Tricks (the BOT)
perlperf - Perl Performance and Optimization Techniques
perlstyle - Perl style guide
perlcheat - Perl 5 Cheat Sheet
perltrap - Perl traps for the unwary
perldebtut - Perl debugging tutorial
perlfaq - frequently asked questions about Perl
perlfaq  - this document, perlfaq1 - General Questions About Perl,
perlfaq2 - Obtaining and Learning about Perl, perlfaq3 -

Once I got my feet wet I decided to purchase one of the O’Reilly books: Programming Perl, 4th Edition. And I also found the following sites useful: http://learn.perl.org and http://perldoc.perl.org.

Posted in Programming | Leave a comment

[GnuCash] MacPorts Compile

Finally! Just finished the compilation and installation of GnuCash using MacPorts on my MacBook Pro running Mac OS X 10.5.8:

...edited...
--->  Configuring gnucash
--->  Building gnucash
--->  Staging gnucash into destroot
--->  Installing gnucash @2.4.7_1
--->  Activating gnucash @2.4.7_1
--->  Cleaning gnucash
5150.53s user 1790.23s system 109% cpu 1:45:58.98s total

Now, that took long enough: one hour and forty-five minutes! Thought I would share. ;-)

Posted in *Nix | Leave a comment