[elbe-devel] [PATCH 1/7] pep8: use autopep8 for better coding style

Torben Hohn torben.hohn at linutronix.de
Fri Jan 5 11:23:05 CET 2018


Please document the version of autopep8 used.
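For the record, one way to capture those versions is to query the installed distributions (a sketch using `importlib.metadata`, which needs Python >= 3.8; on the Python 2 era this patch targets, `pkg_resources.get_distribution()` fills the same role):

```python
# Sketch: query the installed formatter versions so they can be noted
# in the commit message for reproducibility.
from importlib.metadata import version, PackageNotFoundError


def tool_version(dist_name):
    """Return the installed version of a distribution, or None if absent."""
    try:
        return version(dist_name)
    except PackageNotFoundError:
        return None


print("autopep8:", tool_version("autopep8"))
print("pycodestyle:", tool_version("pycodestyle"))
```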

Otherwise:

Reviewed-by: Torben Hohn <torben.hohn at linutronix.de>

On Wed, Dec 20, 2017 at 10:11:41PM +0100, Manuel Traut wrote:
> this change was generated by:
> 
> $ autopep8 -i -r -a -a -j 5 --ignore E721,W503 .
> 
> it alters the pycodestyle statistics from:
> 
> 24      E111 indentation is not a multiple of four
> 1       E124 closing bracket does not match visual indentation
> 21      E127 continuation line over-indented for visual indent
> 221     E128 continuation line under-indented for visual indent
> 1       E131 continuation line unaligned for hanging indent
> 1595    E201 whitespace after '['
> 1648    E202 whitespace before ']'
> 8       E203 whitespace before ':'
> 2059    E211 whitespace before '('
> 90      E221 multiple spaces before operator
> 3       E222 multiple spaces after operator
> 40      E225 missing whitespace around operator
> 5       E228 missing whitespace around modulo operator
> 125     E231 missing whitespace after ':'
> 181     E251 unexpected spaces around keyword / parameter equals
> 4       E261 at least two spaces before inline comment
> 4       E262 inline comment should start with '# '
> 11      E265 block comment should start with '# '
> 3       E266 too many leading '#' for block comment
> 1       E271 multiple spaces after keyword
> 7       E272 multiple spaces before keyword
> 17      E301 expected 1 blank line, found 0
> 262     E302 expected 2 blank lines, found 1
> 73      E303 too many blank lines (2)
> 74      E305 expected 2 blank lines after class or function definition, found 1
> 2       E402 module level import not at top of file
> 489     E501 line too long (93 > 79 characters)
> 15      E502 the backslash is redundant between brackets
> 2       E701 multiple statements on one line (colon)
> 8       E703 statement ends with a semicolon
> 6       E711 comparison to None should be 'if cond is None:'
> 2       E712 comparison to True should be 'if cond is True:' or 'if cond:'
> 16      E713 test for membership should be 'not in'
> 16      E714 test for object identity should be 'is not'
> 40      E722 do not use bare 'except'
> 2       E731 do not assign a lambda expression, use a def
> 7       E741 ambiguous variable name 'l'
> 3       W291 trailing whitespace
> 2       W293 blank line contains whitespace
> 24      W391 blank line at end of file
> 
> to:
> 
> 7       E211 whitespace before '('
> 6       E265 block comment should start with '# '
> 3       E266 too many leading '#' for block comment
> 2       E402 module level import not at top of file
> 98      E501 line too long (82 > 79 characters)
> 7       E741 ambiguous variable name 'l'
> 
> However, this adds changes like this:
> 
> -            log.printo( "Source for Package " + pkg.name + "-" + pkgver.version + " could not be downloaded" )
> +            log.printo(
> +                "Source for Package " +
> +                pkg.name +
> +                "-" +
> +                pkgver.version +
> +                " could not be downloaded")
> 
> which would be better written as:
> 
> +            log.printo("Source for Package %s-%s could not be downloaded" % (
> +                       pkg.name, pkgver.version))
> 
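As an aside: the two forms build the identical string; the %-formatting just keeps the call readable. A minimal sketch, with hypothetical stand-ins for `pkg.name` and `pkgver.version`:

```python
# Hypothetical stand-ins for pkg.name and pkgver.version from the diff above.
pkg_name = "busybox"
pkg_version = "1.27.2"

# Concatenation, as emitted by autopep8's aggressive line splitting:
concatenated = ("Source for Package " + pkg_name + "-" +
                pkg_version + " could not be downloaded")

# %-formatting, as suggested in the review:
formatted = ("Source for Package %s-%s could not be downloaded"
             % (pkg_name, pkg_version))

print(formatted)
```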
> Fix up fallout like this whenever a file is touched in the future.
> 
> Signed-off-by: Manuel Traut <manut at linutronix.de>
> ---
>  docs/conf.py                            |    2 +-
>  elbe                                    |    5 +-
>  elbepack/aptpkgutils.py                 |   60 +-
>  elbepack/aptprogress.py                 |   67 +-
>  elbepack/asciidoclog.py                 |   31 +-
>  elbepack/asyncworker.py                 |  601 +++++++++---------
>  elbepack/cdroms.py                      |  227 ++++---
>  elbepack/commands/add.py                |   16 +-
>  elbepack/commands/adjustpkgs.py         |   41 +-
>  elbepack/commands/bootup-check.py       |   37 +-
>  elbepack/commands/buildchroot.py        |  106 ++--
>  elbepack/commands/buildsysroot.py       |   24 +-
>  elbepack/commands/check_updates.py      |   63 +-
>  elbepack/commands/chg_archive.py        |   49 +-
>  elbepack/commands/chroot.py             |   39 +-
>  elbepack/commands/control.py            |  135 ++--
>  elbepack/commands/daemon.py             |   27 +-
>  elbepack/commands/db.py                 |    9 +-
>  elbepack/commands/debianize.py          |   13 +-
>  elbepack/commands/diff.py               |   94 +--
>  elbepack/commands/gen_update.py         |   71 ++-
>  elbepack/commands/genlicence.py         |   20 +-
>  elbepack/commands/get_archive.py        |   24 +-
>  elbepack/commands/hdimg.py              |   51 +-
>  elbepack/commands/init.py               |  148 +++--
>  elbepack/commands/initvm.py             |   81 +--
>  elbepack/commands/mkcdrom.py            |   97 +--
>  elbepack/commands/parselicence.py       |  239 ++++----
>  elbepack/commands/pbuilder.py           |   44 +-
>  elbepack/commands/pin_versions.py       |   32 +-
>  elbepack/commands/pkgdiff.py            |   33 +-
>  elbepack/commands/preprocess.py         |   13 +-
>  elbepack/commands/prjrepo.py            |   15 +-
>  elbepack/commands/remove_sign.py        |    5 +-
>  elbepack/commands/setsel.py             |   23 +-
>  elbepack/commands/show.py               |   29 +-
>  elbepack/commands/sign.py               |    5 +-
>  elbepack/commands/toolchainextract.py   |   58 +-
>  elbepack/commands/updated.py            |   69 +--
>  elbepack/commands/validate.py           |   19 +-
>  elbepack/commands/xsdtoasciidoc.py      |   21 +-
>  elbepack/config.py                      |    2 +-
>  elbepack/daemons/soap/__init__.py       |   28 +-
>  elbepack/daemons/soap/authentication.py |   31 +-
>  elbepack/daemons/soap/datatypes.py      |    3 +-
>  elbepack/daemons/soap/esoap.py          |  284 ++++-----
>  elbepack/daemons/soap/faults.py         |  171 +++---
>  elbepack/db.py                          | 1016 ++++++++++++++++---------------
>  elbepack/dbaction.py                    |  166 ++---
>  elbepack/debianize/barebox.py           |   64 +-
>  elbepack/debianize/base.py              |  169 ++---
>  elbepack/debianize/debianize.py         |   10 +-
>  elbepack/debianize/kernel.py            |   87 +--
>  elbepack/debianize/uboot.py             |   16 +-
>  elbepack/debianreleases.py              |   26 +-
>  elbepack/debpkg.py                      |   64 +-
>  elbepack/directories.py                 |   20 +-
>  elbepack/dosunix.py                     |   29 +-
>  elbepack/dump.py                        |  136 +++--
>  elbepack/efilesystem.py                 |  215 ++++---
>  elbepack/elbeproject.py                 |  572 +++++++++--------
>  elbepack/elbexml.py                     |  188 +++---
>  elbepack/filesystem.py                  |  109 ++--
>  elbepack/finetuning.py                  |  217 ++++---
>  elbepack/fstab.py                       |   61 +-
>  elbepack/gpg.py                         |   49 +-
>  elbepack/hdimg.py                       |  477 +++++++++------
>  elbepack/initvmaction.py                |  315 ++++++----
>  elbepack/kvm.py                         |   10 +-
>  elbepack/licencexml.py                  |   53 +-
>  elbepack/pbuilder.py                    |  119 ++--
>  elbepack/pbuilderaction.py              |   92 ++-
>  elbepack/pkgarchive.py                  |  170 +++---
>  elbepack/pkgutils.py                    |  157 +++--
>  elbepack/projectmanager.py              |  523 ++++++++--------
>  elbepack/repomanager.py                 |  377 +++++++-----
>  elbepack/rfs.py                         |  219 ++++---
>  elbepack/rpcaptcache.py                 |  206 ++++---
>  elbepack/shellhelper.py                 |   15 +-
>  elbepack/soapclient.py                  |  357 +++++++----
>  elbepack/templates.py                   |   38 +-
>  elbepack/toolchain.py                   |   63 +-
>  elbepack/treeutils.py                   |  106 ++--
>  elbepack/updated.py                     |  479 ++++++++-------
>  elbepack/updated_monitors.py            |   59 +-
>  elbepack/updatepkg.py                   |   98 +--
>  elbepack/validate.py                    |   16 +-
>  elbepack/version.py                     |    2 +-
>  elbepack/virtapt.py                     |  175 +++---
>  elbepack/xmldefaults.py                 |  261 ++++----
>  elbepack/xmlpreprocess.py               |   19 +-
>  elbepack/ziparchives.py                 |   27 +-
>  setup.py                                |   44 +-
>  test/updated.py                         |   94 +--
>  94 files changed, 6203 insertions(+), 4844 deletions(-)
> 
> diff --git a/docs/conf.py b/docs/conf.py
> index d5ea6aad..44ebab18 100644
> --- a/docs/conf.py
> +++ b/docs/conf.py
> @@ -154,4 +154,4 @@ texinfo_documents = [
>       'Miscellaneous'),
>  ]
>  
> -html_sidebars = { '**': ['globaltoc.html'] }
> +html_sidebars = {'**': ['globaltoc.html']}
> diff --git a/elbe b/elbe
> index d74e3133..47158613 100755
> --- a/elbe
> +++ b/elbe
> @@ -26,6 +26,7 @@ def usage():
>  # so that it knows, where the current elbe
>  # executable can be found.
>  
> +
>  init_directories(__file__)
>  
>  if (len(sys.argv) < 2):
> @@ -43,9 +44,9 @@ if not sys.argv[1] in cmd_list:
>      usage()
>      sys.exit(20)
>  
> -modname = "elbepack.commands."+sys.argv[1]
> +modname = "elbepack.commands." + sys.argv[1]
>  
>  mod = __import__(modname)
>  cmdmod = sys.modules[modname]
>  
> -cmdmod.run_command( sys.argv[2:] )
> +cmdmod.run_command(sys.argv[2:])
> diff --git a/elbepack/aptpkgutils.py b/elbepack/aptpkgutils.py
> index 9a0d1d21..721a5b73 100644
> --- a/elbepack/aptpkgutils.py
> +++ b/elbepack/aptpkgutils.py
> @@ -11,37 +11,40 @@ INSTALLED = 4
>  NOTINSTALLED = 5
>  
>  statestring = {
> -    MARKED_INSTALL : "MARKED_INSTALL",
> -    MARKED_UPGRADE : "MARKED_UPGRADE",
> -    MARKED_DELETE  : "MARKED_DELETE",
> -    UPGRADABLE     : "UPGRADABLE",
> -    INSTALLED      : "INSTALLED",
> -    NOTINSTALLED   : "NOT INSTALLED"
> +    MARKED_INSTALL: "MARKED_INSTALL",
> +    MARKED_UPGRADE: "MARKED_UPGRADE",
> +    MARKED_DELETE: "MARKED_DELETE",
> +    UPGRADABLE: "UPGRADABLE",
> +    INSTALLED: "INSTALLED",
> +    NOTINSTALLED: "NOT INSTALLED"
>  }
>  
> +
>  def getdeps(pkg):
>      for dd in pkg.dependencies:
>          for d in dd:
>              yield d.name
>  
> +
>  def getalldeps(c, pkgname):
>      retval = []
>      togo = [pkgname]
>  
>      while len(togo):
>          pp = togo.pop()
> -        pkg = c[ pp ]
> +        pkg = c[pp]
>  
>          for p in getdeps(pkg.candidate):
>              if p in retval:
>                  continue
> -            if not p in c:
> +            if p not in c:
>                  continue
>              retval.append(p)
>              togo.append(p)
>  
>      return retval
>  
> +
>  def pkgstate(pkg):
>      if pkg.marked_install:
>          return MARKED_INSTALL
> @@ -56,21 +59,21 @@ def pkgstate(pkg):
>      else:
>          return NOTINSTALLED
>  
> -def pkgorigin(pkg):
> -        if pkg.installed:
> -            o = pkg.installed.origins[0]
> -            origin = "%s %s %s" % (o.site, o.archive, o.component)
> -        else:
> -            origin = None
>  
> -        return origin
> +def pkgorigin(pkg):
> +    if pkg.installed:
> +        o = pkg.installed.origins[0]
> +        origin = "%s %s %s" % (o.site, o.archive, o.component)
> +    else:
> +        origin = None
>  
> +    return origin
>  
>  
>  class PackageBase(object):
> -    def __init__( self, name, installed_version,
> -                  candidate_version, installed_md5, candidate_md5,
> -                  state, is_auto_installed, origin, architecture ):
> +    def __init__(self, name, installed_version,
> +                 candidate_version, installed_md5, candidate_md5,
> +                 state, is_auto_installed, origin, architecture):
>  
>          self.name = name
>          self.installed_version = installed_version
> @@ -83,14 +86,16 @@ class PackageBase(object):
>          self.architecture = architecture
>  
>      def __repr__(self):
> -        return "<APTPackage %s-%s state: %s>" % (self.name, self.installed_version, statestring[self.state])
> +        return "<APTPackage %s-%s state: %s>" % (
> +            self.name, self.installed_version, statestring[self.state])
>  
> -    def __eq__( self, other ):
> +    def __eq__(self, other):
>          vereq = (self.installed_version == other.installed_version)
>          nameq = (self.name == other.name)
>  
>          return vereq and nameq
>  
> +
>  class APTPackage(PackageBase):
>      def __init__(self, pkg, cache=None):
>          if type(pkg) == str:
> @@ -106,8 +111,8 @@ class APTPackage(PackageBase):
>  
>          if pkg.installed:
>              arch = pkg.installed.architecture
> -            self.installed_deb = pkg.name + '_' + iver.replace( ':', '%3a' ) + \
> -                    '_' + arch + '.deb'
> +            self.installed_deb = pkg.name + '_' + iver.replace(':', '%3a') + \
> +                '_' + arch + '.deb'
>          elif pkg.candidate:
>              arch = pkg.candidate.architecture
>              self.installed_deb = None
> @@ -122,9 +127,8 @@ class APTPackage(PackageBase):
>  
>  
>  class XMLPackage(PackageBase):
> -    def __init__( self, node, arch ):
> -        PackageBase.__init__( self, node.et.text, node.et.get('version'),
> -                              None, node.et.get('md5'), None,
> -                              INSTALLED, node.et.get('auto') == 'true',
> -                              None, arch )
> -
> +    def __init__(self, node, arch):
> +        PackageBase.__init__(self, node.et.text, node.et.get('version'),
> +                             None, node.et.get('md5'), None,
> +                             INSTALLED, node.et.get('auto') == 'true',
> +                             None, arch)
> diff --git a/elbepack/aptprogress.py b/elbepack/aptprogress.py
> index 9fe193a7..99709c9e 100644
> --- a/elbepack/aptprogress.py
> +++ b/elbepack/aptprogress.py
> @@ -9,36 +9,37 @@ from apt.progress.base import InstallProgress, AcquireProgress, OpProgress
>  from apt_pkg import size_to_str
>  import os
>  
> +
>  class ElbeInstallProgress (InstallProgress):
>  
> -    def __init__ (self, cb=None, fileno=2):
> -        InstallProgress.__init__ (self)
> +    def __init__(self, cb=None, fileno=2):
> +        InstallProgress.__init__(self)
>          self.cb = cb
>          self.fileno = fileno
>  
> -    def write (self, line):
> +    def write(self, line):
>          if line == 'update finished':
>              self.percent = 100
>  
> -        line = str (self.percent) + "% " + line
> -        line.replace ('\f', '')
> +        line = str(self.percent) + "% " + line
> +        line.replace('\f', '')
>          if self.cb:
> -            self.cb (line)
> +            self.cb(line)
>          else:
>              print(line)
>  
> -    def processing (self, pkg, stage):
> -        self.write ("processing: " + pkg + " - " + stage)
> +    def processing(self, pkg, stage):
> +        self.write("processing: " + pkg + " - " + stage)
>  
> -    def dpkg_status_change (self, pkg, status):
> -        self.write (pkg + " - " + status)
> +    def dpkg_status_change(self, pkg, status):
> +        self.write(pkg + " - " + status)
>  
> -    def status_change (self, pkg, percent, status):
> -        self.write (pkg + " - " + status + " " + str (percent) + "%")
> +    def status_change(self, pkg, percent, status):
> +        self.write(pkg + " - " + status + " " + str(percent) + "%")
>  
> -    def run (self, obj):
> +    def run(self, obj):
>          try:
> -            obj.do_install (self.fileno)
> +            obj.do_install(self.fileno)
>          except AttributeError:
>              print("installing .deb files is not supported by elbe progress")
>              raise SystemError
> @@ -51,19 +52,20 @@ class ElbeInstallProgress (InstallProgress):
>          return retval
>  
>      def finishUpdate(self):
> -        self.write ("update finished")
> +        self.write("update finished")
> +
>  
>  class ElbeAcquireProgress (AcquireProgress):
>  
> -    def __init__ (self, cb=None):
> -        AcquireProgress.__init__ (self)
> +    def __init__(self, cb=None):
> +        AcquireProgress.__init__(self)
>          self._id = 1
>          self.cb = cb
>  
> -    def write (self, line):
> -        line.replace ('\f', '')
> +    def write(self, line):
> +        line.replace('\f', '')
>          if self.cb:
> -            self.cb (line)
> +            self.cb(line)
>          else:
>              print(line)
>  
> @@ -71,40 +73,43 @@ class ElbeAcquireProgress (AcquireProgress):
>          line = 'Hit ' + item.description
>          if item.owner.filesize:
>              line += ' [%sB]' % size_to_str(item.owner.filesize)
> -        self.write (line)
> +        self.write(line)
>  
>      def fail(self, item):
>          if item.owner.status == item.owner.STAT_DONE:
> -            self.write ("Ign " + item.description)
> +            self.write("Ign " + item.description)
>  
>      def fetch(self, item):
>          if item.owner.complete:
>              return
>          item.owner.id = self._id
>          self._id += 1
> -        line = "Get:" + str (item.owner.id) + " " + item.description
> +        line = "Get:" + str(item.owner.id) + " " + item.description
>          if item.owner.filesize:
>              line += (" [%sB]" % size_to_str(item.owner.filesize))
>  
>          self.write(line)
>  
> -    def pulse (self, owner):
> +    def pulse(self, owner):
>          return True
>  
> +
>  class ElbeOpProgress (OpProgress):
>  
> -    def __init__ (self, cb=None):
> -        OpProgress.__init__ (self)
> +    def __init__(self, cb=None):
> +        OpProgress.__init__(self)
>          self._id = 1
>          self.cb = cb
>  
> -    def write (self, line):
> -        line.replace ('\f', '')
> +    def write(self, line):
> +        line.replace('\f', '')
>          if self.cb:
> -            self.cb (line)
> +            self.cb(line)
>          else:
>              print(line)
> -    def update (self, percent=None):
> +
> +    def update(self, percent=None):
>          pass
> -    def done (self):
> +
> +    def done(self):
>          pass
> diff --git a/elbepack/asciidoclog.py b/elbepack/asciidoclog.py
> index 6c3ca59d..fc44209f 100644
> --- a/elbepack/asciidoclog.py
> +++ b/elbepack/asciidoclog.py
> @@ -8,6 +8,7 @@ import sys
>  
>  from elbepack.shellhelper import CommandError, command_out_stderr, command_out
>  
> +
>  class LogBase(object):
>      def __init__(self, fp):
>          self.fp = fp
> @@ -21,38 +22,40 @@ class LogBase(object):
>      def h1(self, text):
>          self.printo()
>          self.printo(text)
> -        self.printo("="*len(str(text)))
> +        self.printo("=" * len(str(text)))
>          self.printo()
>  
>      def h2(self, text):
>          self.printo()
>          self.printo(text)
> -        self.printo("-"*len(str(text)))
> +        self.printo("-" * len(str(text)))
>          self.printo()
>  
>      def table(self):
> -        self.printo( "|=====================================" )
> +        self.printo("|=====================================")
>  
>      def verbatim_start(self):
> -        self.printo( "------------------------------------------------------------------------------" )
> +        self.printo(
> +            "------------------------------------------------------------------------------")
>  
>      def verbatim_end(self):
> -        self.printo( "------------------------------------------------------------------------------" )
> +        self.printo(
> +            "------------------------------------------------------------------------------")
>          self.printo()
>  
>      def do(self, cmd, allow_fail=False, input=None):
>  
> -        if input == None:
> -            self.printo( "running cmd +%s+" % cmd )
> +        if input is None:
> +            self.printo("running cmd +%s+" % cmd)
>          else:
> -            self.printo( "running cmd +%s with STDIN %s+" % (cmd, input) )
> +            self.printo("running cmd +%s with STDIN %s+" % (cmd, input))
>  
>          self.verbatim_start()
>          ret, out = command_out(cmd, input=input, output=self.fp)
>          self.verbatim_end()
>  
>          if ret != 0:
> -            self.printo( "Command failed with errorcode %d" % ret )
> +            self.printo("Command failed with errorcode %d" % ret)
>              if not allow_fail:
>                  raise CommandError(cmd, ret)
>  
> @@ -62,26 +65,27 @@ class LogBase(object):
>          os.environ["LC_ALL"] = "C"
>  
>          chcmd = "chroot %s %s" % (directory, cmd)
> -        self.do( chcmd, **args )
> +        self.do(chcmd, **args)
>  
>      def get_command_out(self, cmd, allow_fail=False):
>  
> -        self.printo( "getting output from cmd +%s+" % cmd )
> +        self.printo("getting output from cmd +%s+" % cmd)
>  
>          ret, output, stderr = command_out_stderr(cmd)
>  
>          if len(stderr) != 0:
>              self.verbatim_start()
> -            self.print_raw( stderr )
> +            self.print_raw(stderr)
>              self.verbatim_end()
>  
>          if ret != 0:
> -            self.printo( "Command failed with errorcode %d" % ret )
> +            self.printo("Command failed with errorcode %d" % ret)
>              if not allow_fail:
>                  raise CommandError(cmd, ret)
>  
>          return output
>  
> +
>  class ASCIIDocLog (LogBase):
>      def __init__(self, fname):
>          self.fname = fname
> @@ -105,6 +109,7 @@ class StdoutLog(LogBase):
>      def reset(self):
>          pass
>  
> +
>  class StderrLog(LogBase):
>      def __init__(self):
>          LogBase.__init__(self, sys.stderr)
> diff --git a/elbepack/asyncworker.py b/elbepack/asyncworker.py
> index 5dd8b660..231480a4 100644
> --- a/elbepack/asyncworker.py
> +++ b/elbepack/asyncworker.py
> @@ -19,457 +19,468 @@ from elbepack.elbeproject import AptCacheCommitError, AptCacheUpdateError
>  
>  
>  class AsyncWorkerJob(object):
> -    def __init__ (self, project):
> +    def __init__(self, project):
>          self.project = project
>  
> -    def enqueue (self, queue, db):
> -        queue.put( self )
> +    def enqueue(self, queue, db):
> +        queue.put(self)
>  
> -    def execute (self, db):
> +    def execute(self, db):
>          pass
>  
> +
>  class BuildSysrootJob(AsyncWorkerJob):
> -    def __init__ (self, project):
> -        AsyncWorkerJob.__init__( self, project )
> +    def __init__(self, project):
> +        AsyncWorkerJob.__init__(self, project)
>  
> -    def enqueue (self, queue, db):
> -        db.set_busy( self.project.builddir,
> -                [ "empty_project", "needs_build", "has_changes",
> -                  "build_done", "build_failed" ] )
> -        self.project.log.printo( "Enqueueing project for building sysroot" )
> -        AsyncWorkerJob.enqueue( self, queue, db )
> +    def enqueue(self, queue, db):
> +        db.set_busy(self.project.builddir,
> +                    ["empty_project", "needs_build", "has_changes",
> +                     "build_done", "build_failed"])
> +        self.project.log.printo("Enqueueing project for building sysroot")
> +        AsyncWorkerJob.enqueue(self, queue, db)
>  
> -    def execute (self, db):
> +    def execute(self, db):
>          try:
> -            self.project.log.printo( "Build sysroot started" )
> +            self.project.log.printo("Build sysroot started")
>              self.project.build_sysroot()
> -            db.update_project_files( self.project )
> -            self.project.log.printo( "Build finished successfully" )
> -            db.reset_busy( self.project.builddir, "build_done" )
> +            db.update_project_files(self.project)
> +            self.project.log.printo("Build finished successfully")
> +            db.reset_busy(self.project.builddir, "build_done")
>          except Exception as e:
> -            db.update_project_files( self.project )
> -            self.project.log.printo( "Build sysroot failed" )
> -            self.project.log.printo( traceback.format_exc() )
> -            db.reset_busy( self.project.builddir, "build_failed" )
> +            db.update_project_files(self.project)
> +            self.project.log.printo("Build sysroot failed")
> +            self.project.log.printo(traceback.format_exc())
> +            db.reset_busy(self.project.builddir, "build_failed")
> +
>  
>  class BuildChrootTarJob(AsyncWorkerJob):
> -    def __init__ (self, project):
> -        AsyncWorkerJob.__init__( self, project )
> +    def __init__(self, project):
> +        AsyncWorkerJob.__init__(self, project)
>  
> -    def enqueue (self, queue, db):
> -        db.set_busy( self.project.builddir,
> -                [ "empty_project", "needs_build", "has_changes",
> -                  "build_done", "build_failed" ] )
> -        self.project.log.printo( "Enqueueing project for building croot tar" )
> -        AsyncWorkerJob.enqueue( self, queue, db )
> +    def enqueue(self, queue, db):
> +        db.set_busy(self.project.builddir,
> +                    ["empty_project", "needs_build", "has_changes",
> +                     "build_done", "build_failed"])
> +        self.project.log.printo("Enqueueing project for building croot tar")
> +        AsyncWorkerJob.enqueue(self, queue, db)
>  
> -    def execute (self, db):
> +    def execute(self, db):
>          try:
> -            self.project.log.printo( "Build chroot tarball started" )
> +            self.project.log.printo("Build chroot tarball started")
>              self.project.build_chroottarball()
> -            db.update_project_files( self.project )
> -            self.project.log.printo( "Build finished successfully" )
> -            db.reset_busy( self.project.builddir, "build_done" )
> +            db.update_project_files(self.project)
> +            self.project.log.printo("Build finished successfully")
> +            db.reset_busy(self.project.builddir, "build_done")
>          except Exception as e:
> -            db.update_project_files( self.project )
> -            self.project.log.printo( "Build chrroot tarball failed" )
> -            self.project.log.printo( traceback.format_exc() )
> -            db.reset_busy( self.project.builddir, "build_failed" )
> -
> +            db.update_project_files(self.project)
> +            self.project.log.printo("Build chrroot tarball failed")
> +            self.project.log.printo(traceback.format_exc())
> +            db.reset_busy(self.project.builddir, "build_failed")
>  
>  
>  class BuildJob(AsyncWorkerJob):
> -    def __init__ (self, project, build_bin, build_src, skip_pbuilder):
> -        AsyncWorkerJob.__init__( self, project )
> +    def __init__(self, project, build_bin, build_src, skip_pbuilder):
> +        AsyncWorkerJob.__init__(self, project)
>          self.build_bin = build_bin
>          self.build_src = build_src
>          self.skip_pbuilder = skip_pbuilder
>  
> -    def enqueue (self, queue, db):
> -        db.set_busy( self.project.builddir,
> -                [ "empty_project", "needs_build", "has_changes",
> -                  "build_done", "build_failed" ] )
> -        self.project.log.printo( "Enqueueing project for build" )
> -        AsyncWorkerJob.enqueue( self, queue, db )
> +    def enqueue(self, queue, db):
> +        db.set_busy(self.project.builddir,
> +                    ["empty_project", "needs_build", "has_changes",
> +                     "build_done", "build_failed"])
> +        self.project.log.printo("Enqueueing project for build")
> +        AsyncWorkerJob.enqueue(self, queue, db)
>  
> -    def execute (self, db):
> +    def execute(self, db):
>          try:
> -            self.project.log.printo( "Build started" )
> +            self.project.log.printo("Build started")
>              self.project.build(skip_pkglist=False,
>                                 build_bin=self.build_bin,
>                                 build_sources=self.build_src,
>                                 skip_pbuild=self.skip_pbuilder)
> -            db.update_project_files( self.project )
> -            self.project.log.printo( "Build finished successfully" )
> -            db.reset_busy( self.project.builddir, "build_done" )
> +            db.update_project_files(self.project)
> +            self.project.log.printo("Build finished successfully")
> +            db.reset_busy(self.project.builddir, "build_done")
>          except DebootstrapException as e:
> -            db.update_project_files( self.project )
> -            self.project.log.printo( "Build failed !!!" )
> -            self.project.log.printo( "" )
> -            self.project.log.printo( "Debootstrap failed to install the base rootfilesystem." )
> -            self.project.log.printo( "Probable cause might be:" )
> -            self.project.log.printo( "- Problems with Internet Connection" )
> -            self.project.log.printo( "- Broken mirrors" )
> -            self.project.log.printo( "" )
> -            self.project.log.printo( "Message from apt:" )
> -            self.project.log.printo( str(e) )
> -            self.project.log.printo( "" )
> -            db.reset_busy( self.project.builddir, "build_failed" )
> +            db.update_project_files(self.project)
> +            self.project.log.printo("Build failed !!!")
> +            self.project.log.printo("")
> +            self.project.log.printo(
> +                "Debootstrap failed to install the base rootfilesystem.")
> +            self.project.log.printo("Probable cause might be:")
> +            self.project.log.printo("- Problems with Internet Connection")
> +            self.project.log.printo("- Broken mirrors")
> +            self.project.log.printo("")
> +            self.project.log.printo("Message from apt:")
> +            self.project.log.printo(str(e))
> +            self.project.log.printo("")
> +            db.reset_busy(self.project.builddir, "build_failed")
>          except AptCacheCommitError as e:
> -            db.update_project_files( self.project )
> -            self.project.log.printo( "Build failed !!!" )
> -            self.project.log.printo( "" )
> -            self.project.log.printo( "Failed to commit the AptCache changes." )
> -            self.project.log.printo( "Most likely is, that packages failed to download." )
> -            self.project.log.printo( "Probable cause might be:" )
> -            self.project.log.printo( "- Problems with Internet Connection" )
> -            self.project.log.printo( "- Broken mirrors" )
> -            self.project.log.printo( "" )
> -            self.project.log.printo( "Message from apt:" )
> -            self.project.log.printo( str(e) )
> -            self.project.log.printo( "" )
> -            db.reset_busy( self.project.builddir, "build_failed" )
> +            db.update_project_files(self.project)
> +            self.project.log.printo("Build failed !!!")
> +            self.project.log.printo("")
> +            self.project.log.printo("Failed to commit the AptCache changes.")
> +            self.project.log.printo(
> +                "Most likely is, that packages failed to download.")
> +            self.project.log.printo("Probable cause might be:")
> +            self.project.log.printo("- Problems with Internet Connection")
> +            self.project.log.printo("- Broken mirrors")
> +            self.project.log.printo("")
> +            self.project.log.printo("Message from apt:")
> +            self.project.log.printo(str(e))
> +            self.project.log.printo("")
> +            db.reset_busy(self.project.builddir, "build_failed")
>          except AptCacheUpdateError as e:
> -            db.update_project_files( self.project )
> -            self.project.log.printo( "Build failed !!!" )
> -            self.project.log.printo( "" )
> -            self.project.log.printo( "Failed to build the Apt Cache." )
> -            self.project.log.printo( "Not all Mirror Index Files could be downloaded." )
> -            self.project.log.printo( "Probable cause might be:" )
> -            self.project.log.printo( "- Problems with Internet Connection" )
> -            self.project.log.printo( "- Broken mirrors" )
> -            self.project.log.printo( "" )
> -            self.project.log.printo( "Message from apt:" )
> -            self.project.log.printo( str(e) )
> -            self.project.log.printo( "" )
> -            db.reset_busy( self.project.builddir, "build_failed" )
> +            db.update_project_files(self.project)
> +            self.project.log.printo("Build failed !!!")
> +            self.project.log.printo("")
> +            self.project.log.printo("Failed to build the Apt Cache.")
> +            self.project.log.printo(
> +                "Not all Mirror Index Files could be downloaded.")
> +            self.project.log.printo("Probable cause might be:")
> +            self.project.log.printo("- Problems with Internet Connection")
> +            self.project.log.printo("- Broken mirrors")
> +            self.project.log.printo("")
> +            self.project.log.printo("Message from apt:")
> +            self.project.log.printo(str(e))
> +            self.project.log.printo("")
> +            db.reset_busy(self.project.builddir, "build_failed")
>          except Exception as e:
> -            db.update_project_files( self.project )
> -            self.project.log.printo( "Build failed" )
> -            self.project.log.printo( traceback.format_exc() )
> -            self.project.log.printo( "" )
> -            self.project.log.printo( "Exception:" )
> -            self.project.log.printo( str(e) )
> -            self.project.log.printo( "" )
> -            db.reset_busy( self.project.builddir, "build_failed" )
> +            db.update_project_files(self.project)
> +            self.project.log.printo("Build failed")
> +            self.project.log.printo(traceback.format_exc())
> +            self.project.log.printo("")
> +            self.project.log.printo("Exception:")
> +            self.project.log.printo(str(e))
> +            self.project.log.printo("")
> +            db.reset_busy(self.project.builddir, "build_failed")
> +
>  
>  class PdebuildJob(AsyncWorkerJob):
> -    def __init__ (self, project):
> -        AsyncWorkerJob.__init__( self, project )
> +    def __init__(self, project):
> +        AsyncWorkerJob.__init__(self, project)
>  
> -    def enqueue (self, queue, db):
> -        db.set_busy( self.project.builddir,
> -                [ "empty_project", "needs_build", "has_changes",
> -                  "build_done", "build_failed" ] )
> -        self.project.log.printo( "Enqueueing project for pdebuild" )
> -        AsyncWorkerJob.enqueue( self, queue, db )
> +    def enqueue(self, queue, db):
> +        db.set_busy(self.project.builddir,
> +                    ["empty_project", "needs_build", "has_changes",
> +                     "build_done", "build_failed"])
> +        self.project.log.printo("Enqueueing project for pdebuild")
> +        AsyncWorkerJob.enqueue(self, queue, db)
>  
> -    def execute (self, db):
> +    def execute(self, db):
>          try:
> -            self.project.log.printo( "Pdebuild started" )
> +            self.project.log.printo("Pdebuild started")
>              self.project.pdebuild()
> -            db.update_project_files( self.project )
> -            self.project.log.printo( "Pdeb finished successfully" )
> -            db.reset_busy( self.project.builddir, "build_done" )
> +            db.update_project_files(self.project)
> +            self.project.log.printo("Pdeb finished successfully")
> +            db.reset_busy(self.project.builddir, "build_done")
>          except Exception as e:
> -            db.update_project_files( self.project )
> -            self.project.log.printo( "Pdebuild failed" )
> -            self.project.log.printo( traceback.format_exc() )
> -            db.reset_busy( self.project.builddir, "build_failed" )
> +            db.update_project_files(self.project)
> +            self.project.log.printo("Pdebuild failed")
> +            self.project.log.printo(traceback.format_exc())
> +            db.reset_busy(self.project.builddir, "build_failed")
> +
>  
>  class CreatePbuilderJob(AsyncWorkerJob):
> -    def __init__ (self, project):
> -        AsyncWorkerJob.__init__( self, project )
> +    def __init__(self, project):
> +        AsyncWorkerJob.__init__(self, project)
>  
> -    def enqueue (self, queue, db):
> -        db.set_busy( self.project.builddir,
> -                [ "empty_project", "needs_build", "has_changes",
> -                  "build_done", "build_failed" ] )
> -        self.project.log.printo( "Enqueueing project to have the pbuilder built" )
> -        AsyncWorkerJob.enqueue( self, queue, db )
> +    def enqueue(self, queue, db):
> +        db.set_busy(self.project.builddir,
> +                    ["empty_project", "needs_build", "has_changes",
> +                     "build_done", "build_failed"])
> +        self.project.log.printo(
> +            "Enqueueing project to have the pbuilder built")
> +        AsyncWorkerJob.enqueue(self, queue, db)
>  
> -    def execute (self, db):
> +    def execute(self, db):
>          try:
> -            self.project.log.printo( "Building pbuilder started" )
> +            self.project.log.printo("Building pbuilder started")
>              self.project.create_pbuilder()
> -            db.update_project_files( self.project )
> -            self.project.log.printo( "Pbuilder finished successfully" )
> -            db.reset_busy( self.project.builddir, "build_done" )
> +            db.update_project_files(self.project)
> +            self.project.log.printo("Pbuilder finished successfully")
> +            db.reset_busy(self.project.builddir, "build_done")
>          except Exception as e:
> -            db.update_project_files( self.project )
> -            self.project.log.printo( "Pbuilder failed" )
> -            self.project.log.printo( traceback.format_exc() )
> -            db.reset_busy( self.project.builddir, "build_failed" )
> +            db.update_project_files(self.project)
> +            self.project.log.printo("Pbuilder failed")
> +            self.project.log.printo(traceback.format_exc())
> +            db.reset_busy(self.project.builddir, "build_failed")
> +
>  
>  class UpdatePbuilderJob(AsyncWorkerJob):
> -    def __init__ (self, project):
> -        AsyncWorkerJob.__init__( self, project )
> +    def __init__(self, project):
> +        AsyncWorkerJob.__init__(self, project)
>  
> -    def enqueue (self, queue, db):
> -        db.set_busy( self.project.builddir,
> -                [ "empty_project", "needs_build", "has_changes",
> -                  "build_done", "build_failed" ] )
> -        self.project.log.printo( "Enqueueing project to update the pbuilder" )
> -        AsyncWorkerJob.enqueue( self, queue, db )
> +    def enqueue(self, queue, db):
> +        db.set_busy(self.project.builddir,
> +                    ["empty_project", "needs_build", "has_changes",
> +                     "build_done", "build_failed"])
> +        self.project.log.printo("Enqueueing project to update the pbuilder")
> +        AsyncWorkerJob.enqueue(self, queue, db)
>  
> -    def execute (self, db):
> +    def execute(self, db):
>          try:
> -            self.project.log.printo( "Updating pbuilder started" )
> +            self.project.log.printo("Updating pbuilder started")
>              self.project.update_pbuilder()
> -            self.project.log.printo( "Updating Pbuilder finished successfully" )
> -            db.reset_busy( self.project.builddir, "build_done" )
> +            self.project.log.printo("Updating Pbuilder finished successfully")
> +            db.reset_busy(self.project.builddir, "build_done")
>          except Exception as e:
> -            db.update_project_files( self.project )
> -            self.project.log.printo( "update Pbuilder failed" )
> -            self.project.log.printo( traceback.format_exc() )
> -            db.reset_busy( self.project.builddir, "build_failed" )
> +            db.update_project_files(self.project)
> +            self.project.log.printo("update Pbuilder failed")
> +            self.project.log.printo(traceback.format_exc())
> +            db.reset_busy(self.project.builddir, "build_failed")
> +
>  
>  class APTUpdateJob(AsyncWorkerJob):
> -    def __init__ (self, project):
> -        AsyncWorkerJob.__init__( self, project )
> +    def __init__(self, project):
> +        AsyncWorkerJob.__init__(self, project)
>  
> -    def enqueue (self, queue, db):
> -        db.set_busy( self.project.builddir,
> -                [ "build_done", "has_changes" ] )
> -        self.project.log.printo( "Enqueueing project for APT cache update" )
> -        AsyncWorkerJob.enqueue( self, queue, db )
> +    def enqueue(self, queue, db):
> +        db.set_busy(self.project.builddir,
> +                    ["build_done", "has_changes"])
> +        self.project.log.printo("Enqueueing project for APT cache update")
> +        AsyncWorkerJob.enqueue(self, queue, db)
>  
> -    def execute (self, db):
> +    def execute(self, db):
>          try:
> -            self.project.log.printo( "APT cache update started" )
> +            self.project.log.printo("APT cache update started")
>              with self.project.buildenv:
>                  self.project.get_rpcaptcache().update()
> -            self.project.log.printo( "APT cache update finished successfully" )
> -            db.reset_busy( self.project.builddir,
> -                    "has_changes" )
> +            self.project.log.printo("APT cache update finished successfully")
> +            db.reset_busy(self.project.builddir,
> +                          "has_changes")
>          except Exception as e:
> -            self.project.log.printo( "APT cache update failed" )
> -            self.project.log.printo( str(e) )
> -            db.reset_busy( self.project.builddir, "build_failed" )
> +            self.project.log.printo("APT cache update failed")
> +            self.project.log.printo(str(e))
> +            db.reset_busy(self.project.builddir, "build_failed")
> +
>  
>  class APTUpdUpgrJob(AsyncWorkerJob):
> -    def __init__ (self, project):
> -        AsyncWorkerJob.__init__( self, project )
> +    def __init__(self, project):
> +        AsyncWorkerJob.__init__(self, project)
>  
> -    def enqueue (self, queue, db):
> -        db.set_busy( self.project.builddir, [ "build_done", "has_changes" ] )
> -        self.project.log.printo( "Enqueueing project for APT update & upgrade" )
> -        AsyncWorkerJob.enqueue( self, queue, db )
> +    def enqueue(self, queue, db):
> +        db.set_busy(self.project.builddir, ["build_done", "has_changes"])
> +        self.project.log.printo("Enqueueing project for APT update & upgrade")
> +        AsyncWorkerJob.enqueue(self, queue, db)
>  
> -    def execute (self, db):
> +    def execute(self, db):
>          try:
> -            self.project.log.printo( "APT update started" )
> +            self.project.log.printo("APT update started")
>              with self.project.buildenv:
>                  self.project.get_rpcaptcache().update()
> -            self.project.log.printo( "APT update finished, upgrade started" )
> -            self.project.get_rpcaptcache().upgrade ()
> -            self.project.log.printo( "APT upgrade finished" )
> -            db.reset_busy( self.project.builddir, "has_changes" )
> +            self.project.log.printo("APT update finished, upgrade started")
> +            self.project.get_rpcaptcache().upgrade()
> +            self.project.log.printo("APT upgrade finished")
> +            db.reset_busy(self.project.builddir, "has_changes")
>          except Exception as e:
> -            self.project.log.printo( "APT update & upgrade failed" )
> -            self.project.log.printo( str(e) )
> -            db.reset_busy( self.project.builddir, "build_failed" )
> +            self.project.log.printo("APT update & upgrade failed")
> +            self.project.log.printo(str(e))
> +            db.reset_busy(self.project.builddir, "build_failed")
> +
>  
>  class APTCommitJob(AsyncWorkerJob):
> -    def __init__ (self, project):
> -        AsyncWorkerJob.__init__( self, project )
> +    def __init__(self, project):
> +        AsyncWorkerJob.__init__(self, project)
>  
> -    def enqueue (self, queue, db):
> -        old_status = db.set_busy( self.project.builddir,
> -                [ "build_done", "has_changes" ] )
> +    def enqueue(self, queue, db):
> +        old_status = db.set_busy(self.project.builddir,
> +                                 ["build_done", "has_changes"])
>          if self.project.get_rpcaptcache().get_changes():
> -            self.project.log.printo( "Enqueueing project for package changes" )
> -            AsyncWorkerJob.enqueue( self, queue, db )
> +            self.project.log.printo("Enqueueing project for package changes")
> +            AsyncWorkerJob.enqueue(self, queue, db)
>          else:
> -            db.reset_busy( self.project.builddir, old_status )
> +            db.reset_busy(self.project.builddir, old_status)
>  
> -    def execute (self, db):
> +    def execute(self, db):
>          try:
> -            self.project.log.printo( "Applying package changes" )
> +            self.project.log.printo("Applying package changes")
>              with self.project.buildenv:
>                  # Commit changes, update full package list and write
>                  # out new source.xml
>                  self.project.get_rpcaptcache().commit()
> -                dump_fullpkgs( self.project.xml,
> -                        self.project.buildenv.rfs,
> -                        self.project.get_rpcaptcache() )
> +                dump_fullpkgs(self.project.xml,
> +                              self.project.buildenv.rfs,
> +                              self.project.get_rpcaptcache())
>  
> -            sourcexmlpath = path.join( self.project.builddir,
> -                    "source.xml" )
> -            self.project.xml.xml.write( sourcexmlpath )
> +            sourcexmlpath = path.join(self.project.builddir,
> +                                      "source.xml")
> +            self.project.xml.xml.write(sourcexmlpath)
>  
> -            self.project.log.printo( "Package changes applied successfully" )
> -            db.reset_busy( self.project.builddir,
> -                    "has_changes" )
> +            self.project.log.printo("Package changes applied successfully")
> +            db.reset_busy(self.project.builddir,
> +                          "has_changes")
>          except Exception as e:
> -            self.project.log.printo( "Applying package changes failed" )
> -            self.project.log.printo( str(e) )
> -            db.reset_busy( self.project.builddir,
> -                    "build_failed" )
> +            self.project.log.printo("Applying package changes failed")
> +            self.project.log.printo(str(e))
> +            db.reset_busy(self.project.builddir,
> +                          "build_failed")
>  
>  
>  class GenUpdateJob(AsyncWorkerJob):
> -    def __init__ (self, project, base_version):
> -        self.name = project.xml.text( "/project/name" )
> +    def __init__(self, project, base_version):
> +        self.name = project.xml.text("/project/name")
>          self.base_version = base_version
> -        self.current_version = project.xml.text( "/project/version" )
> +        self.current_version = project.xml.text("/project/version")
>          AsyncWorkerJob.__init__(self, project)
>  
> -    def enqueue (self, queue, db):
> -        self.old_status = db.set_busy( self.project.builddir,
> -                [ "build_done", "has_changes" ] )
> -        self.base_version_xml = db.get_version_xml( self.project.builddir,
> -                self.base_version )
> +    def enqueue(self, queue, db):
> +        self.old_status = db.set_busy(self.project.builddir,
> +                                      ["build_done", "has_changes"])
> +        self.base_version_xml = db.get_version_xml(self.project.builddir,
> +                                                   self.base_version)
>  
>          self.project.log.printo(
> -                "Enqueueing project for generating update package" )
> +            "Enqueueing project for generating update package")
>  
> -        AsyncWorkerJob.enqueue( self, queue, db )
> +        AsyncWorkerJob.enqueue(self, queue, db)
>  
> -    def execute (self, db):
> +    def execute(self, db):
>          upd_filename = self._gen_upd_filename()
> -        upd_pathname = path.join( self.project.builddir, upd_filename )
> +        upd_pathname = path.join(self.project.builddir, upd_filename)
>  
> -        self.project.log.printo( "Generating update package" )
> +        self.project.log.printo("Generating update package")
>  
>          try:
> -            gen_update_pkg( self.project, self.base_version_xml, upd_pathname )
> -            self.project.log.printo( "Update package generated successfully" )
> +            gen_update_pkg(self.project, self.base_version_xml, upd_pathname)
> +            self.project.log.printo("Update package generated successfully")
>          except Exception as e:
> -            self.project.log.printo( "Generating update package failed" )
> -            self.project.log.printo( str(e) )
> +            self.project.log.printo("Generating update package failed")
> +            self.project.log.printo(str(e))
>          finally:
>              # Update generation does not change the project, so we always
>              # keep the old status
> -            db.add_project_file( self.project.builddir, upd_filename,
> -                    "application/octet-stream",
> -                    "Update package from %s to %s" %
> -                    ( self.base_version, self.current_version ) )
> -            db.reset_busy( self.project.builddir, self.old_status )
> +            db.add_project_file(self.project.builddir, upd_filename,
> +                                "application/octet-stream",
> +                                "Update package from %s to %s" %
> +                                (self.base_version, self.current_version))
> +            db.reset_busy(self.project.builddir, self.old_status)
>  
> -    def _gen_upd_filename (self):
> -        filename = quote( self.name, ' ' ) + '_'
> -        filename += quote( self.base_version ) + '_'
> -        filename += quote( self.current_version ) + '.upd'
> +    def _gen_upd_filename(self):
> +        filename = quote(self.name, ' ') + '_'
> +        filename += quote(self.base_version) + '_'
> +        filename += quote(self.current_version) + '.upd'
>  
>          return filename
>  
>  
>  class SaveVersionJob(AsyncWorkerJob):
> -    def __init__ (self, project, description):
> -        AsyncWorkerJob.__init__( self, project )
> +    def __init__(self, project, description):
> +        AsyncWorkerJob.__init__(self, project)
>          self.description = description
>  
> -    def enqueue (self, queue, db):
> -        self.old_status = db.set_busy( self.project.builddir,
> -                [ "build_done", "has_changes" ] )
> -        self.name = self.project.xml.text( "project/name" )
> -        self.version = self.project.xml.text( "project/version" )
> +    def enqueue(self, queue, db):
> +        self.old_status = db.set_busy(self.project.builddir,
> +                                      ["build_done", "has_changes"])
> +        self.name = self.project.xml.text("project/name")
> +        self.version = self.project.xml.text("project/version")
>  
>          # Create the database entry now. This has the advantage that the
>          # user will see an error message immediately, if he tries to use
>          # the same version number twice. The time-consuming part is creating
>          # the package archive, which is done in execute.
>          try:
> -            db.save_version( self.project.builddir, self.description )
> -        except:
> -            db.reset_busy( self.project.builddir, self.old_status )
> +            db.save_version(self.project.builddir, self.description)
> +        except BaseException:
> +            db.reset_busy(self.project.builddir, self.old_status)
>              raise
>  
>          if self.project.savesh_file:
> -            self.project.log.h2 ("save version script:")
> -            self.project.log.do (self.project.savesh_file + ' "%s %s %s"' % (
> -                            self.project.builddir,
> -                            self.project.xml.text ("project/version"),
> -                            self.project.xml.text ("project/name")),
> -                            allow_fail=True)
> -
> -        self.project.log.printo( "Enqueueing project to save package archive" )
> -        AsyncWorkerJob.enqueue( self, queue, db )
> -
> -    def execute (self, db):
> -        self.project.log.printo( "Generating package archive" )
> -        repodir = get_versioned_filename( self.name, self.version,
> -                ".pkgarchive" )
> +            self.project.log.h2("save version script:")
> +            self.project.log.do(self.project.savesh_file + ' "%s %s %s"' % (
> +                self.project.builddir,
> +                self.project.xml.text("project/version"),
> +                self.project.xml.text("project/name")),
> +                allow_fail=True)
> +
> +        self.project.log.printo("Enqueueing project to save package archive")
> +        AsyncWorkerJob.enqueue(self, queue, db)
> +
> +    def execute(self, db):
> +        self.project.log.printo("Generating package archive")
> +        repodir = get_versioned_filename(self.name, self.version,
> +                                         ".pkgarchive")
>          try:
> -            gen_binpkg_archive( self.project, repodir )
> -            self.project.log.printo( "Version saved successfully" )
> +            gen_binpkg_archive(self.project, repodir)
> +            self.project.log.printo("Version saved successfully")
>          except Exception as e:
> -            db.del_version( self.project.builddir, self.version, force=True )
> -            self.project.log.printo( "Saving version failed" )
> -            self.project.log.printo( str(e) )
> +            db.del_version(self.project.builddir, self.version, force=True)
> +            self.project.log.printo("Saving version failed")
> +            self.project.log.printo(str(e))
>          finally:
> -            db.reset_busy( self.project.builddir, self.old_status )
> +            db.reset_busy(self.project.builddir, self.old_status)
>  
>  
>  class CheckoutVersionJob(AsyncWorkerJob):
> -    def __init__ (self, project, version):
> -        AsyncWorkerJob.__init__( self, project )
> +    def __init__(self, project, version):
> +        AsyncWorkerJob.__init__(self, project)
>          self.version = version
>  
> -    def enqueue (self, queue, db):
> -        self.name = self.project.xml.text( "project/name" )
> -        old_status = db.set_busy( self.project.builddir,
> -                [ "build_done", "has_changes", "build_failed" ] )
> +    def enqueue(self, queue, db):
> +        self.name = self.project.xml.text("project/name")
> +        old_status = db.set_busy(self.project.builddir,
> +                                 ["build_done", "has_changes", "build_failed"])
>  
>          # If old status was build_failed, just restore the source.xml of the
>          # given version and restore the status, indicating that we need a
>          # complete rebuild
>          if old_status == "build_failed":
> -            self.project.log.printo( "Previous project status indicated a "
> -                    "failed build." )
> -            self.project.log.printo( "Just checking out the XML file." )
> +            self.project.log.printo("Previous project status indicated a "
> +                                    "failed build.")
> +            self.project.log.printo("Just checking out the XML file.")
>  
>              try:
> -                db.checkout_version_xml( self.project.builddir, self.version )
> -                self.project.set_xml( None )
> +                db.checkout_version_xml(self.project.builddir, self.version)
> +                self.project.set_xml(None)
>              finally:
> -                db.reset_busy( self.project.builddir, old_status )
> +                db.reset_busy(self.project.builddir, old_status)
>              return
>  
>          # Otherwise, restore the source.xml of the given version and enqueue
>          # the project for package archive checkout
>          try:
> -            db.checkout_version_xml( self.project.builddir, self.version )
> -            self.project.set_xml( None )
> -        except:
> -            db.reset_busy( self.project.builddir, old_status )
> -            self.project.set_xml( None )
> +            db.checkout_version_xml(self.project.builddir, self.version)
> +            self.project.set_xml(None)
> +        except BaseException:
> +            db.reset_busy(self.project.builddir, old_status)
> +            self.project.set_xml(None)
>              raise
>  
>          self.project.log.printo(
> -                "Enqueueing project for package archive checkout" )
> -        AsyncWorkerJob.enqueue( self, queue, db )
> +            "Enqueueing project for package archive checkout")
> +        AsyncWorkerJob.enqueue(self, queue, db)
>  
> -    def execute (self, db):
> -        self.project.log.printo( "Checking out package archive" )
> -        repodir = get_versioned_filename( self.name, self.version,
> -                ".pkgarchive" )
> +    def execute(self, db):
> +        self.project.log.printo("Checking out package archive")
> +        repodir = get_versioned_filename(self.name, self.version,
> +                                         ".pkgarchive")
>  
>          try:
> -            checkout_binpkg_archive( self.project, repodir )
> +            checkout_binpkg_archive(self.project, repodir)
>              self.project.log.printo(
> -                    "Package archive checked out successfully" )
> -            db.reset_busy( self.project.builddir, "has_changes" )
> +                "Package archive checked out successfully")
> +            db.reset_busy(self.project.builddir, "has_changes")
>          except Exception as e:
>              self.project.log.printo(
> -                    "Checking out package archive failed" )
> -            self.project.log.printo( str(e) )
> -            db.reset_busy( self.project.builddir, "build_failed" )
> +                "Checking out package archive failed")
> +            self.project.log.printo(str(e))
> +            db.reset_busy(self.project.builddir, "build_failed")
>  
>  
>  @contextmanager
> -def savecwd ():
> +def savecwd():
>      oldcwd = getcwd()
>      try:
>          yield
>      finally:
> -        chdir( oldcwd )
> +        chdir(oldcwd)
>  
>  
>  class AsyncWorker(Thread):
> -    def __init__ (self, db):
> -        Thread.__init__( self, name="AsyncWorker" )
> +    def __init__(self, db):
> +        Thread.__init__(self, name="AsyncWorker")
>          self.db = db
>          self.queue = Queue()
>          self.start()
> @@ -479,16 +490,16 @@ class AsyncWorker(Thread):
>          self.queue.join()
>          self.join()
>  
> -    def enqueue (self, job):
> -        job.enqueue( self.queue, self.db )
> +    def enqueue(self, job):
> +        job.enqueue(self.queue, self.db)
>  
> -    def run (self):
> +    def run(self):
>          loop = True
>          while loop:
>              job = self.queue.get()
> -            if not job is None:
> +            if job is not None:
>                  with savecwd():
> -                    job.execute( self.db )
> +                    job.execute(self.db)
>              else:
>                  loop = False
>              self.queue.task_done()
> diff --git a/elbepack/cdroms.py b/elbepack/cdroms.py
> index 10c9e179..61fe56f1 100644
> --- a/elbepack/cdroms.py
> +++ b/elbepack/cdroms.py
> @@ -12,54 +12,71 @@ from elbepack.repomanager import CdromBinRepo
>  from elbepack.repomanager import CdromInitRepo
>  from elbepack.aptpkgutils import XMLPackage
>  from elbepack.aptprogress import ElbeAcquireProgress
> -from elbepack.filesystem  import Filesystem, hostfs
> -from elbepack.dump        import get_initvm_pkglist
> +from elbepack.filesystem import Filesystem, hostfs
> +from elbepack.dump import get_initvm_pkglist
>  from apt import Cache
>  
> -CDROM_SIZE = 640*1000*1000
> +CDROM_SIZE = 640 * 1000 * 1000
>  
> -def mk_source_cdrom(rfs, arch, codename, init_codename, target, log, cdrom_size=CDROM_SIZE, xml=None):
>  
> -    hostfs.mkdir_p( '/var/cache/elbe/sources' )
> -    rfs.mkdir_p( '/var/cache/elbe/sources' )
> +def mk_source_cdrom(
> +        rfs,
> +        arch,
> +        codename,
> +        init_codename,
> +        target,
> +        log,
> +        cdrom_size=CDROM_SIZE,
> +        xml=None):
>  
> -    repo = CdromSrcRepo( codename, init_codename,
> -                         os.path.join( target, "srcrepo" ),
> -                         log,
> -                         cdrom_size )
> +    hostfs.mkdir_p('/var/cache/elbe/sources')
> +    rfs.mkdir_p('/var/cache/elbe/sources')
>  
> -    cache = get_rpcaptcache( rfs, "aptcache.log", arch )
> +    repo = CdromSrcRepo(codename, init_codename,
> +                        os.path.join(target, "srcrepo"),
> +                        log,
> +                        cdrom_size)
> +
> +    cache = get_rpcaptcache(rfs, "aptcache.log", arch)
>  
>      pkglist = cache.get_installed_pkgs()
>  
>      forbiddenPackages = []
> -    if xml != None and xml.has('target/pkg-list'):
> +    if xml is not None and xml.has('target/pkg-list'):
>          for i in xml.node('target/pkg-list'):
>              try:
>                  if i.tag == 'pkg' and i.et.attrib['on_src_cd'] == 'False':
> -                   forbiddenPackages.append(i.text('.').strip())
> +                    forbiddenPackages.append(i.text('.').strip())
>  
>              except KeyError:
>                  pass
>  
> -
>      for pkg in pkglist:
>          # Do not include forbidden packages in src cdrom
>          if pkg.name in forbiddenPackages:
>              continue
>          try:
> -            dsc = cache.download_source( pkg.name, '/var/cache/elbe/sources' )
> -            repo.includedsc( dsc, force=True )
> +            dsc = cache.download_source(pkg.name, '/var/cache/elbe/sources')
> +            repo.includedsc(dsc, force=True)
>          except ValueError as ve:
> -            log.printo( "No sources for Package " + pkg.name + "-" + pkg.installed_version )
> +            log.printo(
> +                "No sources for Package " +
> +                pkg.name +
> +                "-" +
> +                pkg.installed_version)
>          except FetchError as fe:
> -            log.printo( "Source for Package " + pkg.name + "-" + pkg.installed_version + " could not be downloaded" )
> +            log.printo(
> +                "Source for Package " +
> +                pkg.name +
> +                "-" +
> +                pkg.installed_version +
> +                " could not be downloaded")
>  
> -    repo.finalize ()
> +    repo.finalize()
>  
>      pkglist = get_initvm_pkglist()
> -    cache = Cache ()
> -    cache.open ()
> +    cache = Cache()
> +    cache.open()
>  
>      for pkg in pkglist:
>          # Do not include forbidden packages in src cdrom
> @@ -68,114 +85,164 @@ def mk_source_cdrom(rfs, arch, codename, init_codename, target, log, cdrom_size=
>          try:
>              p = cache[pkg.name]
>              if pkg.name == 'elbe-bootstrap':
> -                pkgver = p.versions [0]
> +                pkgver = p.versions[0]
>              else:
>                  pkgver = p.installed
>  
> -            dsc = pkgver.fetch_source ('/var/cache/elbe/sources',
> -                             ElbeAcquireProgress (cb=None), unpack=False)
> -            repo.includedsc( dsc )
> +            dsc = pkgver.fetch_source(
> +                '/var/cache/elbe/sources',
> +                ElbeAcquireProgress(
> +                    cb=None),
> +                unpack=False)
> +            repo.includedsc(dsc)
>          except ValueError as ve:
> -            log.printo( "No sources for Package " + pkg.name + "-" + str(pkg.installed_version) )
> +            log.printo("No sources for Package " + pkg.name +
> +                       "-" + str(pkg.installed_version))
>          except FetchError as fe:
> -            log.printo( "Source for Package " + pkg.name + "-" + pkgver.version + " could not be downloaded" )
> +            log.printo(
> +                "Source for Package " +
> +                pkg.name +
> +                "-" +
> +                pkgver.version +
> +                " could not be downloaded")
>  
> -    repo.finalize ()
> +    repo.finalize()
>  
> -    return repo.buildiso( os.path.join( target, "src-cdrom.iso" ) )
> +    return repo.buildiso(os.path.join(target, "src-cdrom.iso"))
>  
>  
> -def mk_binary_cdrom(rfs, arch, codename, init_codename, xml, target, log, cdrom_size=CDROM_SIZE):
> +def mk_binary_cdrom(
> +        rfs,
> +        arch,
> +        codename,
> +        init_codename,
> +        xml,
> +        target,
> +        log,
> +        cdrom_size=CDROM_SIZE):
>  
> -    rfs.mkdir_p( '/var/cache/elbe/binaries/added' )
> -    rfs.mkdir_p( '/var/cache/elbe/binaries/main' )
> -    hostfs.mkdir_p( '/var/cache/elbe/binaries/main' )
> +    rfs.mkdir_p('/var/cache/elbe/binaries/added')
> +    rfs.mkdir_p('/var/cache/elbe/binaries/main')
> +    hostfs.mkdir_p('/var/cache/elbe/binaries/main')
>  
> -    if not xml is None:
> -        mirror = xml.get_primary_mirror (rfs.fname("cdrom"))
> +    if xml is not None:
> +        mirror = xml.get_primary_mirror(rfs.fname("cdrom"))
>      else:
> -        mirror='http://ftp.de.debian.org/debian'
> +        mirror = 'http://ftp.de.debian.org/debian'
>  
> -    repo_path = os.path.join (target, "binrepo")
> -    target_repo_path = os.path.join (repo_path, 'targetrepo')
> +    repo_path = os.path.join(target, "binrepo")
> +    target_repo_path = os.path.join(repo_path, 'targetrepo')
>  
> -    repo = CdromInitRepo (arch, init_codename,
> +    repo = CdromInitRepo(arch, init_codename,
>                           repo_path, log, cdrom_size, mirror)
>  
> -    target_repo = CdromBinRepo (arch, codename, None,
> -                         target_repo_path, log, cdrom_size, mirror)
> +    target_repo = CdromBinRepo(arch, codename, None,
> +                               target_repo_path, log, cdrom_size, mirror)
>  
> -    if not xml is None:
> +    if xml is not None:
>          pkglist = get_initvm_pkglist()
> -        cache = Cache ()
> -        cache.open ()
> +        cache = Cache()
> +        cache.open()
>          for pkg in pkglist:
>              try:
> -                p = cache [pkg.name]
> +                p = cache[pkg.name]
>                  if pkg.name == 'elbe-bootstrap':
> -                    pkgver = p.versions [0]
> +                    pkgver = p.versions[0]
>                  else:
>                      pkgver = p.installed
> -                deb = pkgver.fetch_binary ('/var/cache/elbe/binaries/main',
> -                                            ElbeAcquireProgress (cb=None) )
> +                deb = pkgver.fetch_binary('/var/cache/elbe/binaries/main',
> +                                          ElbeAcquireProgress(cb=None))
>                  repo.includedeb(deb, 'main')
>              except ValueError as ve:
> -                log.printo( "No Package " + pkg.name + "-" + str(pkg.installed_version) )
> +                log.printo("No Package " + pkg.name +
> +                           "-" + str(pkg.installed_version))
>              except FetchError as fe:
> -                log.printo( "Package " + pkg.name + "-" + pkgver.version + " could not be downloaded" )
> +                log.printo(
> +                    "Package " +
> +                    pkg.name +
> +                    "-" +
> +                    pkgver.version +
> +                    " could not be downloaded")
>              except TypeError as te:
> -                log.printo( "Package " + pkg.name + "-" + str(pkg.installed_version) + " missing name or version" )
> +                log.printo("Package " +
> +                           pkg.name +
> +                           "-" +
> +                           str(pkg.installed_version) +
> +                           " missing name or version")
>  
> -        cache = get_rpcaptcache( rfs, "aptcache.log", arch )
> +        cache = get_rpcaptcache(rfs, "aptcache.log", arch)
>          for p in xml.node("debootstrappkgs"):
>              pkg = XMLPackage(p, arch)
>              try:
> -                deb = cache.download_binary( pkg.name,
> -                                             '/var/cache/elbe/binaries/main',
> -                                             pkg.installed_version )
> +                deb = cache.download_binary(pkg.name,
> +                                            '/var/cache/elbe/binaries/main',
> +                                            pkg.installed_version)
>                  target_repo.includedeb(deb, 'main')
>              except ValueError as ve:
> -                log.printo( "No Package " + pkg.name + "-" + pkg.installed_version )
> +                log.printo(
> +                    "No Package " +
> +                    pkg.name +
> +                    "-" +
> +                    pkg.installed_version)
>              except FetchError as fe:
> -                log.printo( "Package " + pkg.name + "-" + pkg.installed_version + " could not be downloaded" )
> +                log.printo(
> +                    "Package " +
> +                    pkg.name +
> +                    "-" +
> +                    pkg.installed_version +
> +                    " could not be downloaded")
>              except TypeError as te:
> -                log.printo( "Package " + pkg.name + "-" + pkg.installed_version + " missing name or version" )
> -
> -    cache = get_rpcaptcache( rfs, "aptcache.log", arch )
> -    pkglist = cache.get_installed_pkgs ()
> +                log.printo(
> +                    "Package " +
> +                    pkg.name +
> +                    "-" +
> +                    pkg.installed_version +
> +                    " missing name or version")
> +
> +    cache = get_rpcaptcache(rfs, "aptcache.log", arch)
> +    pkglist = cache.get_installed_pkgs()
>      for pkg in pkglist:
>          try:
> -            deb = cache.download_binary( pkg.name,
> +            deb = cache.download_binary(pkg.name,
>                                          '/var/cache/elbe/binaries/added',
> -                                        pkg.installed_version )
> +                                        pkg.installed_version)
>              target_repo.includedeb(deb, 'added', pkg.name, True)
>          except KeyError as ke:
> -            log.printo( str (ke) )
> +            log.printo(str(ke))
>          except ValueError as ve:
> -            log.printo( "No Package " + pkg.name + "-" + pkg.installed_version )
> +            log.printo("No Package " + pkg.name + "-" + pkg.installed_version)
>          except FetchError as fe:
> -            log.printo( "Package " + pkg.name + "-" + str (pkg.installed_version) + " could not be downloaded" )
> +            log.printo("Package " +
> +                       pkg.name +
> +                       "-" +
> +                       str(pkg.installed_version) +
> +                       " could not be downloaded")
>          except TypeError as te:
> -            log.printo( "Package " + pkg.name + "-" + pkg.installed_version + " missing name or version" )
> +            log.printo(
> +                "Package " +
> +                pkg.name +
> +                "-" +
> +                pkg.installed_version +
> +                " missing name or version")
>  
>      repo.finalize()
> -    target_repo.finalize ()
> +    target_repo.finalize()
>  
>      # Mark the binary repo with the necessary Files
>      # to make the installer accept this as a CDRom
> -    repo_fs = Filesystem( repo_path )
> -    repo_fs.mkdir_p (".disk")
> -    repo_fs.write_file (".disk/base_installable", 0o644, "main\n")
> -    repo_fs.write_file (".disk/base_components", 0o644, "main\n")
> -    repo_fs.write_file (".disk/cd_type", 0o644, "not_complete\n")
> -    repo_fs.write_file (".disk/info", 0o644, "elbe inst cdrom - full cd\n")
> -    repo_fs.symlink (".", "debian", allow_exists=True)
> -    repo_fs.write_file ("md5sum.txt", 0o644, "")
> +    repo_fs = Filesystem(repo_path)
> +    repo_fs.mkdir_p(".disk")
> +    repo_fs.write_file(".disk/base_installable", 0o644, "main\n")
> +    repo_fs.write_file(".disk/base_components", 0o644, "main\n")
> +    repo_fs.write_file(".disk/cd_type", 0o644, "not_complete\n")
> +    repo_fs.write_file(".disk/info", 0o644, "elbe inst cdrom - full cd\n")
> +    repo_fs.symlink(".", "debian", allow_exists=True)
> +    repo_fs.write_file("md5sum.txt", 0o644, "")
>  
>      # write source xml onto cdrom
> -    xml.xml.write (repo_fs.fname ('source.xml'))
> +    xml.xml.write(repo_fs.fname('source.xml'))
>  
> -    target_repo_fs = Filesystem( target_repo_path )
> -    target_repo_fs.write_file (".aptignr", 0o644, "")
> +    target_repo_fs = Filesystem(target_repo_path)
> +    target_repo_fs.write_file(".aptignr", 0o644, "")
>  
> -    return repo.buildiso( os.path.join( target, "bin-cdrom.iso" ) )
> +    return repo.buildiso(os.path.join(target, "bin-cdrom.iso"))
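
Side note on the log.printo() hunks above, not blocking for this patch: with `-a -a`, autopep8 breaks long `+`-concatenations into one operand per line, which is why these messages now span five lines each. %-formatting would keep each message on a single line and sidestep the wrapping entirely. A minimal sketch, assuming a hypothetical helper (`format_no_sources` is not part of the patch):

```python
# Sketch only, not part of this patch: %-formatting avoids the
# operand-per-line layout that autopep8 -a -a produces for "+" chains.
def format_no_sources(name, version):
    # mirrors the "No sources for Package <name>-<version>" messages above
    return "No sources for Package %s-%s" % (name, version)

print(format_no_sources("elbe-bootstrap", "1.0"))
# prints "No sources for Package elbe-bootstrap-1.0"
```

The same pattern would collapse the "could not be downloaded" messages as well.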
> diff --git a/elbepack/commands/add.py b/elbepack/commands/add.py
> index 07e072d0..70cc6d40 100644
> --- a/elbepack/commands/add.py
> +++ b/elbepack/commands/add.py
> @@ -10,10 +10,12 @@ import sys
>  from elbepack.elbexml import ElbeXML
>  from optparse import OptionParser
>  
> -def run_command( argv ):
>  
> -    oparser = OptionParser( usage="usage: %prog add [options] <xmlfile> <pkg1> [pkgN]")
> -    (opt,args) = oparser.parse_args(argv)
> +def run_command(argv):
> +
> +    oparser = OptionParser(
> +        usage="usage: %prog add [options] <xmlfile> <pkg1> [pkgN]")
> +    (opt, args) = oparser.parse_args(argv)
>  
>      if len(args) < 2:
>          print("Wrong number of arguments")
> @@ -21,20 +23,20 @@ def run_command( argv ):
>          sys.exit(20)
>  
>      try:
> -        xml = ElbeXML ( args[0] )
> +        xml = ElbeXML(args[0])
>      except Exception as e:
>          print("Error reading xml file: %s" % str(e))
>          sys.exit(20)
>  
>      for a in args[1:]:
>          try:
> -            xml.add_target_package( a )
> +            xml.add_target_package(a)
>          except Exception as e:
>              print("Error adding package %s: %s" % (a, str(e)))
>              sys.exit(20)
>  
>      try:
> -        xml.xml.write( args[0] )
> -    except:
> +        xml.xml.write(args[0])
> +    except BaseException:
>          print("Unable to write new xml file")
>          sys.exit(20)
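
One remark on the `except BaseException:` change above: that is simply autopep8's mechanical rewrite of a bare `except:`, so behavior is unchanged, but it still swallows KeyboardInterrupt and SystemExit. If only write failures are meant here, `except Exception` would be the narrower choice. A sketch of the difference (the helper names are mine, not from the patch):

```python
# Sketch, not part of this patch: "except Exception" (unlike the
# "except BaseException" autopep8 emits for a bare "except:") lets
# KeyboardInterrupt and SystemExit propagate to the caller.
def write_or_report(write):
    try:
        write()
    except Exception as e:   # narrower than BaseException
        return "Unable to write new xml file: %s" % e
    return "ok"

def boom():
    # stand-in for a failing xml.xml.write()
    raise IOError("disk full")

print(write_or_report(boom))
# prints "Unable to write new xml file: disk full"
```

Whether to tighten this is a separate cleanup; for a pure pep8 pass the mechanical rewrite is fine.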
> diff --git a/elbepack/commands/adjustpkgs.py b/elbepack/commands/adjustpkgs.py
> index 0dc1ebd7..0ef80be4 100644
> --- a/elbepack/commands/adjustpkgs.py
> +++ b/elbepack/commands/adjustpkgs.py
> @@ -15,15 +15,16 @@ import apt.progress
>  
>  import sys
>  
> +
>  class adjpkg(object):
>      def __init__(self, logfile, name):
>  
> -        self.outf = ASCIIDocLog (logfile)
> +        self.outf = ASCIIDocLog(logfile)
>  
>          if name:
> -            self.outf.h1( "ELBE Report for Project "+name )
> +            self.outf.h1("ELBE Report for Project " + name)
>          else:
> -            self.outf.h1( "ELBE Report" )
> +            self.outf.h1("ELBE Report")
>  
>      def set_pkgs(self, pkglist):
>  
> @@ -38,15 +39,16 @@ class adjpkg(object):
>              for p in cache:
>                  if not p.is_installed:
>                      continue
> -                if p.essential or p.is_auto_installed or (p.name in pkglist) or p.installed.priority == "important" or p.installed.priority == "required":
> +                if p.essential or p.is_auto_installed or (
> +                        p.name in pkglist) or p.installed.priority == "important" or p.installed.priority == "required":
>                      continue
>                  print("MARK REMOVE %s" % p.name)
> -                p.mark_delete( auto_fix=False, purge=True )
> +                p.mark_delete(auto_fix=False, purge=True)
>  
>              for name in pkglist:
>  
> -                if not name in cache:
> -                    self.outf.printo( "- package %s does not exist" % name )
> +                if name not in cache:
> +                    self.outf.printo("- package %s does not exist" % name)
>                      errors += 1
>                      continue
>  
> @@ -58,7 +60,6 @@ class adjpkg(object):
>              cache.commit(apt.progress.base.AcquireProgress(),
>                           apt.progress.base.InstallProgress())
>  
> -
>              cache.update()
>              cache.open(None)
>  
> @@ -66,7 +67,7 @@ class adjpkg(object):
>                  if not p.is_installed:
>                      continue
>                  if p.is_auto_removable:
> -                    p.mark_delete( purge=True )
> +                    p.mark_delete(purge=True)
>                      print("MARKED AS AUTOREMOVE %s" % p.name)
>  
>          cache.commit(apt.progress.base.AcquireProgress(),
> @@ -74,14 +75,15 @@ class adjpkg(object):
>  
>          return errors
>  
> -def run_command( argv ):
> +
> +def run_command(argv):
>      oparser = OptionParser(usage="usage: %prog adjustpkgs [options] <xmlfile>")
>  
> -    oparser.add_option( "-o", "--output", dest="output",
> -                        help="name of logfile" )
> -    oparser.add_option( "-n", "--name", dest="name",
> -                        help="name of the project (included in the report)" )
> -    (opt,args) = oparser.parse_args(argv)
> +    oparser.add_option("-o", "--output", dest="output",
> +                       help="name of logfile")
> +    oparser.add_option("-n", "--name", dest="name",
> +                       help="name of the project (included in the report)")
> +    (opt, args) = oparser.parse_args(argv)
>  
>      if len(args) != 1:
>          print("Wrong number of arguments")
> @@ -91,8 +93,7 @@ def run_command( argv ):
>      if not opt.output:
>          return 0
>  
> -
> -    xml = etree( args[0] )
> +    xml = etree(args[0])
>      xml_pkglist = xml.node("/target/pkg-list")
>      xml_pkgs = [p.et.text for p in xml_pkglist]
>  
> @@ -107,10 +108,12 @@ def run_command( argv ):
>      #         and its dependencies (if it is not in  target/pkg-list.
>      buildenv_pkgs = []
>      if xml.has("./project/buildimage/pkg-list"):
> -        buildenv_pkgs = [p.et.text for p in xml.node("project/buildimage/pkg-list")]
> +        buildenv_pkgs = [p.et.text for p in xml.node(
> +            "project/buildimage/pkg-list")]
>  
>      adj = adjpkg(opt.output, opt.name)
>      return adj.set_pkgs(xml_pkgs + mandatory_pkgs + buildenv_pkgs)
>  
> +
>  if __name__ == "__main__":
> -    run_command( sys.argv[1:] )
> +    run_command(sys.argv[1:])
> diff --git a/elbepack/commands/bootup-check.py b/elbepack/commands/bootup-check.py
> index 8f3cdf49..df8c643a 100644
> --- a/elbepack/commands/bootup-check.py
> +++ b/elbepack/commands/bootup-check.py
> @@ -10,13 +10,15 @@ import apt_pkg
>  
>  from elbepack.treeutils import etree
>  
> -def is_in_fpl (p, fpl):
> +
> +def is_in_fpl(p, fpl):
>      for ip in fpl:
>          if ip.et.text == p.name:
>              return True
>      return False
>  
> -def is_installed (ip, cache):
> +
> +def is_installed(ip, cache):
>      try:
>          p = cache[ip.et.text]
>      except KeyError:
> @@ -26,37 +28,40 @@ def is_installed (ip, cache):
>          return True
>      return False
>  
> -def bootup_check (xml):
>  
> -    fpl = xml.node ("fullpkgs")
> +def bootup_check(xml):
> +
> +    fpl = xml.node("fullpkgs")
>  
> -    apt_pkg.init ()
> -    cache = apt_pkg.Cache ()
> -    hl_cache = apt.cache.Cache ()
> +    apt_pkg.init()
> +    cache = apt_pkg.Cache()
> +    hl_cache = apt.cache.Cache()
>  
>      for p in hl_cache:
>          if p.is_installed:
> -            if not is_in_fpl (p, fpl):
> +            if not is_in_fpl(p, fpl):
>                  print("%s installed by user" % p.name)
>  
>      for ip in fpl:
> -         if not is_installed (ip, cache):
> -                print("%s removed by user" % ip.et.text)
> +        if not is_installed(ip, cache):
> +            print("%s removed by user" % ip.et.text)
>  
> -def bootup_info ():
> -    with open ("/etc/elbe_version", 'r') as ev:
> +
> +def bootup_info():
> +    with open("/etc/elbe_version", 'r') as ev:
>          print(ev.read())
>  
> -def run_command (argv):
> +
> +def run_command(argv):
>      try:
> -        xml = etree ("/etc/elbe_base.xml")
> +        xml = etree("/etc/elbe_base.xml")
>      except IOError:
>          print("/etc/elbe_base.xml removed by user")
>          return -1
>  
> -    bootup_check (xml)
> +    bootup_check(xml)
>      try:
> -        bootup_info ()
> +        bootup_info()
>      except IOError:
>          print("/etc/elbe_version removed by user")
>          return -1
> diff --git a/elbepack/commands/buildchroot.py b/elbepack/commands/buildchroot.py
> index ef49bf46..5aab4270 100644
> --- a/elbepack/commands/buildchroot.py
> +++ b/elbepack/commands/buildchroot.py
> @@ -16,60 +16,68 @@ from elbepack.cdroms import CDROM_SIZE
>  from sqlalchemy.exc import OperationalError
>  
>  
> -def run_command( argv ):
> -    oparser = OptionParser(usage="usage: %prog buildchroot [options] <xmlfile>")
> +def run_command(argv):
> +    oparser = OptionParser(
> +        usage="usage: %prog buildchroot [options] <xmlfile>")
>  
> -    oparser.add_option( "-t", "--target", dest="target",
> -                        help="directoryname of target" )
> +    oparser.add_option("-t", "--target", dest="target",
> +                       help="directoryname of target")
>  
> -    oparser.add_option( "-o", "--output", dest="output",
> -                        help="name of logfile" )
> +    oparser.add_option("-o", "--output", dest="output",
> +                       help="name of logfile")
>  
> -    oparser.add_option( "-n", "--name", dest="name",
> -                        help="name of the project (included in the report)" )
> +    oparser.add_option("-n", "--name", dest="name",
> +                       help="name of the project (included in the report)")
>  
> -    oparser.add_option( "--skip-pbuild", action="store_true",
> -                        dest="skip_pbuild", default=False,
> -                        help="skip building packages from <pbuilder> list" )
> +    oparser.add_option("--skip-pbuild", action="store_true",
> +                       dest="skip_pbuild", default=False,
> +                       help="skip building packages from <pbuilder> list")
>  
> -    oparser.add_option( "--build-bin", action="store_true",
> -                        dest="build_bin", default=False,
> -                        help="Build Binary Repository CDROM, for exact Reproduction" )
> +    oparser.add_option(
> +        "--build-bin",
> +        action="store_true",
> +        dest="build_bin",
> +        default=False,
> +        help="Build Binary Repository CDROM, for exact Reproduction")
>  
> -    oparser.add_option( "--build-sources", action="store_true",
> -                        dest="build_sources", default=False,
> -                        help="Build Source CD" )
> +    oparser.add_option("--build-sources", action="store_true",
> +                       dest="build_sources", default=False,
> +                       help="Build Source CD")
>  
> -    oparser.add_option( "--proxy", dest="proxy",
> -                        help="Override the http proxy" )
> +    oparser.add_option("--proxy", dest="proxy",
> +                       help="Override the http proxy")
>  
> -    oparser.add_option( "--debug", action="store_true", dest="debug",
> -                        default=False,
> -                        help="Enable various features to debug the build" )
> +    oparser.add_option("--debug", action="store_true", dest="debug",
> +                       default=False,
> +                       help="Enable various features to debug the build")
>  
> -    oparser.add_option( "--buildtype", dest="buildtype",
> -                        help="Override the buildtype" )
> +    oparser.add_option("--buildtype", dest="buildtype",
> +                       help="Override the buildtype")
>  
> -    oparser.add_option( "--cdrom-size", action="store",
> -                        dest="cdrom_size", default=CDROM_SIZE, help="ISO CD size in MB" )
> +    oparser.add_option(
> +        "--cdrom-size",
> +        action="store",
> +        dest="cdrom_size",
> +        default=CDROM_SIZE,
> +        help="ISO CD size in MB")
>  
> -    oparser.add_option( "--skip-validation", action="store_true",
> -                        dest="skip_validation", default=False,
> -                        help="Skip xml schema validation" )
> +    oparser.add_option("--skip-validation", action="store_true",
> +                       dest="skip_validation", default=False,
> +                       help="Skip xml schema validation")
>  
> -    oparser.add_option( "--skip-debootstrap", action="store_true",
> -                        dest="skip_debootstrap", default=False,
> -                        help="Skip debootstrap" )
> +    oparser.add_option("--skip-debootstrap", action="store_true",
> +                       dest="skip_debootstrap", default=False,
> +                       help="Skip debootstrap")
>  
> -    oparser.add_option( "--skip-pkglist", action="store_true",
> -                        dest="skip_pkglist", default=False,
> -                        help="ignore changes of the package list" )
> +    oparser.add_option("--skip-pkglist", action="store_true",
> +                       dest="skip_pkglist", default=False,
> +                       help="ignore changes of the package list")
>  
> -    oparser.add_option( "--skip-cdrom", action="store_true",
> -                        dest="skip_cdrom", default=False,
> -                        help="(now obsolete) Skip cdrom iso generation" )
> +    oparser.add_option("--skip-cdrom", action="store_true",
> +                       dest="skip_cdrom", default=False,
> +                       help="(now obsolete) Skip cdrom iso generation")
>  
> -    (opt,args) = oparser.parse_args(argv)
> +    (opt, args) = oparser.parse_args(argv)
>  
>      if len(args) != 1:
>          print("wrong number of arguments")
> @@ -81,27 +89,33 @@ def run_command( argv ):
>          sys.exit(20)
>  
>      if opt.skip_cdrom:
> -        print("WARNING: Skip CDROMS is now the default, use --build-bin to build binary CDROM")
> +        print(
> +            "WARNING: Skip CDROMS is now the default, use --build-bin to build binary CDROM")
>  
>      try:
> -        project = ElbeProject( opt.target, args[0], opt.output, opt.name,
> -                opt.buildtype, opt.skip_validation )
> +        project = ElbeProject(opt.target, args[0], opt.output, opt.name,
> +                              opt.buildtype, opt.skip_validation)
>      except ValidationError as e:
>          print(str(e))
>          print("xml validation failed. Bailing out")
>          sys.exit(20)
>  
>      try:
> -        project.build( opt.skip_debootstrap, opt.build_bin,
> -                opt.build_sources, opt.cdrom_size, opt.debug, opt.skip_pkglist,
> -                opt.skip_pbuild )
> +        project.build(
> +            opt.skip_debootstrap,
> +            opt.build_bin,
> +            opt.build_sources,
> +            opt.cdrom_size,
> +            opt.debug,
> +            opt.skip_pkglist,
> +            opt.skip_pbuild)
>      except CommandError as ce:
>          print("command in project build failed: %s" % ce.cmd)
>          sys.exit(20)
>  
>      try:
>          db = ElbeDB()
> -        db.save_project (project)
> +        db.save_project(project)
>      except OperationalError:
>          print("failed to save project in database")
>          sys.exit(20)
> diff --git a/elbepack/commands/buildsysroot.py b/elbepack/commands/buildsysroot.py
> index 64608b8a..ed802584 100644
> --- a/elbepack/commands/buildsysroot.py
> +++ b/elbepack/commands/buildsysroot.py
> @@ -11,15 +11,17 @@ import sys
>  from elbepack.elbeproject import ElbeProject
>  from elbepack.elbexml import ValidationError
>  
> -def run_command( argv ):
> -    oparser = OptionParser(usage="usage: %prog buildsysroot [options] <builddir>")
> -    oparser.add_option( "--skip-validation", action="store_true",
> -                        dest="skip_validation", default=False,
> -                        help="Skip xml schema validation" )
> -    oparser.add_option( "--buildtype", dest="buildtype",
> -                        help="Override the buildtype" )
>  
> -    (opt,args) = oparser.parse_args(argv)
> +def run_command(argv):
> +    oparser = OptionParser(
> +        usage="usage: %prog buildsysroot [options] <builddir>")
> +    oparser.add_option("--skip-validation", action="store_true",
> +                       dest="skip_validation", default=False,
> +                       help="Skip xml schema validation")
> +    oparser.add_option("--buildtype", dest="buildtype",
> +                       help="Override the buildtype")
> +
> +    (opt, args) = oparser.parse_args(argv)
>  
>      if len(args) != 1:
>          print("wrong number of arguments")
> @@ -27,11 +29,11 @@ def run_command( argv ):
>          sys.exit(20)
>  
>      try:
> -        project = ElbeProject( args[0], override_buildtype=opt.buildtype,
> -                skip_validate=opt.skip_validation )
> +        project = ElbeProject(args[0], override_buildtype=opt.buildtype,
> +                              skip_validate=opt.skip_validation)
>      except ValidationError as e:
>          print(str(e))
>          print("xml validation failed. Bailing out")
>          sys.exit(20)
>  
> -    project.build_sysroot ()
> +    project.build_sysroot()
> diff --git a/elbepack/commands/check_updates.py b/elbepack/commands/check_updates.py
> index 6f98b806..8ec9cf8f 100644
> --- a/elbepack/commands/check_updates.py
> +++ b/elbepack/commands/check_updates.py
> @@ -15,15 +15,20 @@ from optparse import OptionParser
>  from elbepack.validate import validate_xml
>  from elbepack.xmldefaults import ElbeDefaults
>  
> -def run_command( argv ):
>  
> -    oparser = OptionParser(usage="usage: %prog check_updates [options] <source-xmlfile>")
> -    oparser.add_option( "-s", "--script", dest="script",
> -                        help="filename of script to run when an update is required" )
> -    oparser.add_option( "--skip-validation", action="store_true",
> -                        dest="skip_validation", default=False,
> -                        help="Skip xml schema validation" )
> -    (opt,args) = oparser.parse_args(argv)
> +def run_command(argv):
> +
> +    oparser = OptionParser(
> +        usage="usage: %prog check_updates [options] <source-xmlfile>")
> +    oparser.add_option(
> +        "-s",
> +        "--script",
> +        dest="script",
> +        help="filename of script to run when an update is required")
> +    oparser.add_option("--skip-validation", action="store_true",
> +                       dest="skip_validation", default=False,
> +                       help="Skip xml schema validation")
> +    (opt, args) = oparser.parse_args(argv)
>  
>      if len(args) != 1:
>          print("Wrong number of arguments")
> @@ -31,8 +36,8 @@ def run_command( argv ):
>          sys.exit(20)
>  
>      if not opt.skip_validation:
> -        validation = validate_xml (args[0])
> -        if len (validation) != 0:
> +        validation = validate_xml(args[0])
> +        if len(validation) != 0:
>              print("xml validation failed. Bailing out")
>              for i in validation:
>                  print(i)
> @@ -40,49 +45,51 @@ def run_command( argv ):
>  
>      print("checking %s" % args[0])
>  
> -    xml = etree( args[0] )
> +    xml = etree(args[0])
>  
> -    if xml.has( "project/buildtype" ):
> -        buildtype = xml.text( "/project/buildtype" )
> +    if xml.has("project/buildtype"):
> +        buildtype = xml.text("/project/buildtype")
>      else:
>          buildtype = "nodefaults"
>  
> -    defs = ElbeDefaults( buildtype )
> +    defs = ElbeDefaults(buildtype)
>  
> -    arch  = xml.text("project/buildimage/arch", default=defs, key="arch")
> +    arch = xml.text("project/buildimage/arch", default=defs, key="arch")
>      suite = xml.text("project/suite")
>  
> -    name  = xml.text("project/name", default=defs, key="name")
> +    name = xml.text("project/name", default=defs, key="name")
>  
>      apt_sources = xml.text("sources_list").replace("10.0.2.2", "localhost")
> -    apt_prefs   = xml.text("apt_prefs")
> +    apt_prefs = xml.text("apt_prefs")
>  
>      fullp = xml.node("fullpkgs")
>  
> -    v = virtapt.VirtApt( name, arch, suite, apt_sources, apt_prefs )
> +    v = virtapt.VirtApt(name, arch, suite, apt_sources, apt_prefs)
>  
>      d = virtapt.apt_pkg.DepCache(v.cache)
> -    d.read_pinfile( v.projectpath + "/etc/apt/preferences" )
> +    d.read_pinfile(v.projectpath + "/etc/apt/preferences")
>  
>      for p in fullp:
>          pname = p.et.text
> -        pver  = p.et.get('version')
> +        pver = p.et.get('version')
>          pauto = p.et.get('auto')
>  
>          if pauto != "true":
> -            d.mark_install( v.cache[pname] )
> +            d.mark_install(v.cache[pname])
>  
>      errors = 0
>      required_updates = 0
>  
>      for p in fullp:
>          pname = p.et.text
> -        pver  = p.et.get('version')
> +        pver = p.et.get('version')
>          pauto = p.et.get('auto')
>  
> -        if not pname in v.cache:
> +        if pname not in v.cache:
>              if pauto == 'false':
> -                print("%s does not exist in cache but is specified in pkg-list" % pname)
> +                print(
> +                    "%s does not exist in cache but is specified in pkg-list" %
> +                    pname)
>                  errors += 1
>              else:
>                  print("%s is no more required" % pname)
> @@ -92,8 +99,8 @@ def run_command( argv ):
>  
>          centry = v.cache[pname]
>  
> -        if d.marked_install( centry ):
> -            cver = d.get_candidate_ver( v.cache[pname] ).ver_str
> +        if d.marked_install(centry):
> +            cver = d.get_candidate_ver(v.cache[pname]).ver_str
>              if pver != cver:
>                  print("%s: %s != %s" % (pname, pver, cver))
>                  required_updates += 1
> @@ -103,10 +110,10 @@ def run_command( argv ):
>      if errors > 0:
>          print("%d Errors occured, xml files needs fixing" % errors)
>          if opt.script:
> -            os.system( "%s ERRORS %s" % (opt.script, args[0]) )
> +            os.system("%s ERRORS %s" % (opt.script, args[0]))
>      elif required_updates > 0:
>          print("%d updates required" % required_updates)
>          if opt.script:
> -            os.system( "%s UPDATE %s" % (opt.script, args[0]) )
> +            os.system("%s UPDATE %s" % (opt.script, args[0]))
>      else:
>          print("No Updates available")
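
Unrelated to the pep8 changes, but since the os.system() lines above were touched: interpolating opt.script and args[0] into a shell command breaks if either path contains spaces or shell metacharacters. The list form of subprocess.call passes arguments without involving the shell. A sketch, assuming a hypothetical `run_hook` wrapper (not proposing it for this patch):

```python
# Sketch, not part of this patch: equivalent to
# os.system("%s UPDATE %s" % (script, xmlfile)), but the list form
# of subprocess.call needs no shell quoting of the two paths.
import subprocess

def run_hook(script, state, xmlfile):
    # returns the hook script's exit status
    return subprocess.call([script, state, xmlfile])

print(run_hook("/bin/true", "UPDATE", "source.xml"))
```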
> diff --git a/elbepack/commands/chg_archive.py b/elbepack/commands/chg_archive.py
> index eb727d4d..61dd1173 100644
> --- a/elbepack/commands/chg_archive.py
> +++ b/elbepack/commands/chg_archive.py
> @@ -14,28 +14,33 @@ from elbepack.treeutils import etree
>  from optparse import OptionParser
>  
>  
> -def enbase( fname ):
> +def enbase(fname):
>      infile = file(fname, "r")
>      s = infile.read()
>      enc = standard_b64encode(s)
>  
>      splited = ""
> -    i=0
> +    i = 0
>      l_enc = len(enc)
> -    while i<l_enc:
> -        splited += (enc[i:i+60]+"\n")
> -        i+=60
> +    while i < l_enc:
> +        splited += (enc[i:i + 60] + "\n")
> +        i += 60
>  
>      return splited
>  
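Side note on the loop above: the 60-character wrapping can also be written with slicing, which avoids the index bookkeeping. A sketch, assuming the input is passed as bytes rather than read from a file as in the real `enbase()`:

```python
from base64 import standard_b64encode

# Equivalent sketch of the while loop above: base64-encode the input
# and wrap the result into 60-character lines, each ending in "\n".
def enbase(data):
    enc = standard_b64encode(data).decode("ascii")
    return "".join(enc[i:i + 60] + "\n" for i in range(0, len(enc), 60))
```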
> -def run_command( argv ):
>  
> -    oparser = OptionParser( usage="usage: %prog chg_archive [options] <xmlfile> [<archive>|<directory>]")
> -    oparser.add_option ("--keep-attributes", action="store_true",
> -                        help="keep file owners and groups, if not specified all files will belong to root:root",
> -                        dest="keep_attributes", default=False)
> +def run_command(argv):
>  
> -    (opt,args) = oparser.parse_args(argv)
> +    oparser = OptionParser(
> +        usage="usage: %prog chg_archive [options] <xmlfile> [<archive>|<directory>]")
> +    oparser.add_option(
> +        "--keep-attributes",
> +        action="store_true",
> +        help="keep file owners and groups, if not specified all files will belong to root:root",
> +        dest="keep_attributes",
> +        default=False)
> +
> +    (opt, args) = oparser.parse_args(argv)
>  
>      if len(args) != 2:
>          print("Wrong number of arguments")
> @@ -43,34 +48,34 @@ def run_command( argv ):
>          sys.exit(20)
>  
>      try:
> -        xml = etree( args[0] )
> -    except:
> +        xml = etree(args[0])
> +    except BaseException:
>          print("Error reading xml file!")
>          sys.exit(20)
>  
> -    if os.path.isdir (args[1]):
> +    if os.path.isdir(args[1]):
>          archive = '.archive.tbz'
>          if opt.keep_attributes:
>              cmd = 'tar cfj .archive.tbz -C '
>          else:
>              cmd = 'tar cjf .archive.tbz --owner=root --group=root -C '
>          cmd += args[1] + ' .'
> -        os.system (cmd)
> +        os.system(cmd)
>      else:
>          archive = args[1]
>  
>      try:
> -        arch = xml.ensure_child( "archive" )
> -        arch.set_text( enbase( archive ) )
> -    except:
> +        arch = xml.ensure_child("archive")
> +        arch.set_text(enbase(archive))
> +    except BaseException:
>          print("Error reading archive")
>          sys.exit(20)
>  
>      try:
> -        xml.write( args[0] )
> -    except:
> +        xml.write(args[0])
> +    except BaseException:
>          print("Unable to write new xml file")
>          sys.exit(20)
>  
> -    if os.path.isdir (args[1]):
> -        os.remove (archive)
> +    if os.path.isdir(args[1]):
> +        os.remove(archive)
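On the `except BaseException:` hunks above: that rewrite is behaviorally identical to the bare `except:` it replaces, since both catch everything, including `SystemExit` and `KeyboardInterrupt`. A narrower class would document the expected failure mode; a minimal illustration (hypothetical refinement, not part of this patch):

```python
# Catching only the expected exception class, instead of
# "except BaseException:", makes the failure mode explicit.
def is_int(text):
    try:
        int(text)
    except ValueError:
        return False
    return True
```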
> diff --git a/elbepack/commands/chroot.py b/elbepack/commands/chroot.py
> index b880a997..45035e65 100644
> --- a/elbepack/commands/chroot.py
> +++ b/elbepack/commands/chroot.py
> @@ -12,18 +12,20 @@ import os
>  from elbepack.elbeproject import ElbeProject
>  from elbepack.elbexml import ValidationError, ValidationMode
>  
> -def run_command( argv ):
> -    oparser = OptionParser(usage="usage: %prog chroot [options] <builddir> [cmd]")
> -    oparser.add_option( "--skip-validation", action="store_true",
> -                        dest="skip_validation", default=False,
> -                        help="Skip xml schema validation" )
> -    oparser.add_option( "--target", action="store_true", dest="target",
> -                        help="chroot into target instead of buildenv",
> -                        default=False )
> -    oparser.add_option( "--buildtype", dest="buildtype",
> -                        help="Override the buildtype" )
> -
> -    (opt,args) = oparser.parse_args(argv)
> +
> +def run_command(argv):
> +    oparser = OptionParser(
> +        usage="usage: %prog chroot [options] <builddir> [cmd]")
> +    oparser.add_option("--skip-validation", action="store_true",
> +                       dest="skip_validation", default=False,
> +                       help="Skip xml schema validation")
> +    oparser.add_option("--target", action="store_true", dest="target",
> +                       help="chroot into target instead of buildenv",
> +                       default=False)
> +    oparser.add_option("--buildtype", dest="buildtype",
> +                       help="Override the buildtype")
> +
> +    (opt, args) = oparser.parse_args(argv)
>  
>      if len(args) < 1:
>          print("wrong number of arguments")
> @@ -31,8 +33,11 @@ def run_command( argv ):
>          sys.exit(20)
>  
>      try:
> -        project = ElbeProject(args[0], override_buildtype=opt.buildtype,
> -                skip_validate=opt.skip_validation, url_validation=ValidationMode.NO_CHECK)
> +        project = ElbeProject(
> +            args[0],
> +            override_buildtype=opt.buildtype,
> +            skip_validate=opt.skip_validation,
> +            url_validation=ValidationMode.NO_CHECK)
>      except ValidationError as e:
>          print(str(e))
>          print("xml validation failed. Bailing out")
> @@ -42,7 +47,7 @@ def run_command( argv ):
>      os.environ["LANGUAGE"] = "C"
>      os.environ["LC_ALL"] = "C"
>      # TODO: howto set env in chroot?
> -    os.environ["PS1"] = project.xml.text ('project/name') + ': \w\$'
> +    os.environ["PS1"] = project.xml.text('project/name') + ': \w\$'
>  
>      cmd = "/bin/bash"
>  
> @@ -54,7 +59,7 @@ def run_command( argv ):
>  
>      if opt.target:
>          with project.targetfs:
> -            os.system( "/usr/sbin/chroot %s %s" % (project.targetpath, cmd) )
> +            os.system("/usr/sbin/chroot %s %s" % (project.targetpath, cmd))
>      else:
>          with project.buildenv:
> -            os.system( "/usr/sbin/chroot %s %s" % (project.chrootpath, cmd) )
> +            os.system("/usr/sbin/chroot %s %s" % (project.chrootpath, cmd))
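Unrelated to the pep8 rewrite, but visible in the PS1 hunk above: `\w` and `\$` are not recognized Python escape sequences. CPython keeps the backslashes literally today, yet newer versions warn about it; a raw string spells the same value without relying on that behavior:

```python
# '\w' and '\$' are unrecognized escapes: the cooked and raw string
# literals below are the same value, but the raw string avoids the
# invalid-escape warning in newer CPython.
ps1_cooked = ': \w\$'
ps1_raw = r': \w\$'
assert ps1_cooked == ps1_raw
```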
> diff --git a/elbepack/commands/control.py b/elbepack/commands/control.py
> index 2991d2f7..1e70d4c7 100644
> --- a/elbepack/commands/control.py
> +++ b/elbepack/commands/control.py
> @@ -19,121 +19,142 @@ from elbepack.config import cfg
>  
>  from elbepack.elbexml import ValidationMode
>  
> -def run_command (argv):
> -    oparser = OptionParser (usage="usage: elbe control [options] <command>")
>  
> -    oparser.add_option ("--host", dest="host", default=cfg['soaphost'],
> -                        help="Ip or hostname of elbe-daemon.")
> +def run_command(argv):
> +    oparser = OptionParser(usage="usage: elbe control [options] <command>")
>  
> -    oparser.add_option ("--port", dest="port", default=cfg['soapport'],
> -                        help="Port of soap itf on elbe-daemon.")
> +    oparser.add_option("--host", dest="host", default=cfg['soaphost'],
> +                       help="Ip or hostname of elbe-daemon.")
>  
> -    oparser.add_option ("--pass", dest="passwd", default=cfg['elbepass'],
> -                        help="Password (default is foo).")
> +    oparser.add_option("--port", dest="port", default=cfg['soapport'],
> +                       help="Port of soap itf on elbe-daemon.")
>  
> -    oparser.add_option ("--user", dest="user", default=cfg['elbeuser'],
> -                        help="Username (default is root).")
> +    oparser.add_option("--pass", dest="passwd", default=cfg['elbepass'],
> +                       help="Password (default is foo).")
>  
> -    oparser.add_option ("--retries", dest="retries", default="10",
> -                        help="How many times to retry the connection to the server before giving up (default is 10 times, yielding 10 seconds).")
> +    oparser.add_option("--user", dest="user", default=cfg['elbeuser'],
> +                       help="Username (default is root).")
>  
> -    oparser.add_option( "--build-bin", action="store_true",
> -                        dest="build_bin", default=False,
> -                        help="Build binary repository CDROM, for exact reproduction." )
> +    oparser.add_option(
> +        "--retries",
> +        dest="retries",
> +        default="10",
> +        help="How many times to retry the connection to the server before giving up (default is 10 times, yielding 10 seconds).")
>  
> -    oparser.add_option( "--build-sources", action="store_true",
> -                        dest="build_sources", default=False,
> -                        help="Build source CDROM" )
> +    oparser.add_option(
> +        "--build-bin",
> +        action="store_true",
> +        dest="build_bin",
> +        default=False,
> +        help="Build binary repository CDROM, for exact reproduction.")
>  
> -    oparser.add_option( "--skip-pbuilder", action="store_true",
> -                        dest="skip_pbuilder", default=False,
> -                        help="skip pbuilder section of XML (dont build packages)" )
> +    oparser.add_option("--build-sources", action="store_true",
> +                       dest="build_sources", default=False,
> +                       help="Build source CDROM")
>  
> -    oparser.add_option( "--output",
> -                        dest="output", default=None,
> -                        help="Output files to <directory>" )
> +    oparser.add_option(
> +        "--skip-pbuilder",
> +        action="store_true",
> +        dest="skip_pbuilder",
> +        default=False,
> +        help="skip pbuilder section of XML (dont build packages)")
>  
> -    oparser.add_option( "--matches", dest="matches", default=False,
> -                        help="Select files based on wildcard expression.")
> +    oparser.add_option("--output",
> +                       dest="output", default=None,
> +                       help="Output files to <directory>")
>  
> -    oparser.add_option( "--pbuilder-only", action="store_true",
> -                        dest="pbuilder_only", default=False,
> -                        help="Only list/download pbuilder Files" )
> +    oparser.add_option("--matches", dest="matches", default=False,
> +                       help="Select files based on wildcard expression.")
>  
> -    devel = OptionGroup(oparser, "options for elbe developers",
> -            "Caution: Don't use these options in a productive environment")
> -    devel.add_option( "--skip-urlcheck", action="store_true",
> -                 dest="url_validation", default=ValidationMode.CHECK_ALL,
> -                 help="Skip URL Check inside initvm" )
> +    oparser.add_option("--pbuilder-only", action="store_true",
> +                       dest="pbuilder_only", default=False,
> +                       help="Only list/download pbuilder Files")
>  
> -    devel.add_option ("--debug", action="store_true",
> -                 dest="debug", default=False,
> -                 help="Enable debug mode.")
> +    devel = OptionGroup(
> +        oparser,
> +        "options for elbe developers",
> +        "Caution: Don't use these options in a productive environment")
> +    devel.add_option("--skip-urlcheck", action="store_true",
> +                     dest="url_validation", default=ValidationMode.CHECK_ALL,
> +                     help="Skip URL Check inside initvm")
>  
> -    devel.add_option ("--ignore-version-diff", action="store_true",
> -                        dest="ignore_version", default=False,
> -                        help="allow different elbe version on host and initvm")
> -    oparser.add_option_group (devel)
> +    devel.add_option("--debug", action="store_true",
> +                     dest="debug", default=False,
> +                     help="Enable debug mode.")
>  
> +    devel.add_option("--ignore-version-diff", action="store_true",
> +                     dest="ignore_version", default=False,
> +                     help="allow different elbe version on host and initvm")
> +    oparser.add_option_group(devel)
>  
> -    (opt,args) = oparser.parse_args (argv)
> +    (opt, args) = oparser.parse_args(argv)
>  
>      if len(args) < 1:
>          print("elbe control - no subcommand given", file=sys.stderr)
> -        ClientAction.print_actions ()
> +        ClientAction.print_actions()
>          return
>  
>      try:
> -        control = ElbeSoapClient (opt.host, opt.port, opt.user, opt.passwd, debug=opt.debug, retries=int(opt.retries))
> +        control = ElbeSoapClient(
> +            opt.host,
> +            opt.port,
> +            opt.user,
> +            opt.passwd,
> +            debug=opt.debug,
> +            retries=int(
> +                opt.retries))
>      except socket.error as e:
> -        print("Failed to connect to Soap server %s:%s\n" % (opt.host, opt.port), file=sys.stderr)
> +        print("Failed to connect to Soap server %s:%s\n" %
> +              (opt.host, opt.port), file=sys.stderr)
>          print("", file=sys.stderr)
> -        print("Check, wether the Soap Server is running inside the initvm", file=sys.stderr)
> +        print(
> +            "Check, wether the Soap Server is running inside the initvm",
> +            file=sys.stderr)
>          print("try 'elbe initvm attach'", file=sys.stderr)
>          sys.exit(10)
>      except URLError as e:
> -        print("Failed to connect to Soap server %s:%s\n" % (opt.host, opt.port), file=sys.stderr)
> +        print("Failed to connect to Soap server %s:%s\n" %
> +              (opt.host, opt.port), file=sys.stderr)
>          print("", file=sys.stderr)
>          print("Check, wether the initvm is actually running.", file=sys.stderr)
>          print("try 'elbe initvm start'", file=sys.stderr)
>          sys.exit(10)
>      except BadStatusLine as e:
> -        print("Failed to connect to Soap server %s:%s\n" % (opt.host, opt.port), file=sys.stderr)
> +        print("Failed to connect to Soap server %s:%s\n" %
> +              (opt.host, opt.port), file=sys.stderr)
>          print("", file=sys.stderr)
>          print("Check, wether the initvm is actually running.", file=sys.stderr)
>          print("try 'elbe initvm start'", file=sys.stderr)
>          sys.exit(10)
>  
>      try:
> -        v_server = control.service.get_version ()
> +        v_server = control.service.get_version()
>          if v_server != elbe_version:
>              print("elbe v%s is used in initvm, this is not compatible with \
>  elbe v%s that is used on this machine. Please install same \
>  versions of elbe in initvm and on your machine." % (v_server, elbe_version), file=sys.stderr)
>              if not (opt.ignore_version):
> -                sys.exit (20)
> +                sys.exit(20)
>      except AttributeError:
>          print("the elbe installation inside the initvm doesn't provide a \
>  get_version interface. Please create a new initvm or upgrade \
>  elbe inside the existing initvm.", file=sys.stderr)
>          if not (opt.ignore_version):
> -            sys.exit (20)
> +            sys.exit(20)
>  
>      try:
> -        action = ClientAction (args[0])
> +        action = ClientAction(args[0])
>      except KeyError:
>          print("elbe control - unknown subcommand", file=sys.stderr)
> -        ClientAction.print_actions ()
> +        ClientAction.print_actions()
>          sys.exit(20)
>  
> -
> -
>      try:
> -        action.execute (control, opt, args[1:])
> +        action.execute(control, opt, args[1:])
>      except WebFault as e:
>          print("Server returned error:", file=sys.stderr)
>          print("", file=sys.stderr)
> -        if hasattr (e.fault, 'faultstring'):
> +        if hasattr(e.fault, 'faultstring'):
>              print(e.fault.faultstring, file=sys.stderr)
>          else:
>              print(e, file=sys.stderr)
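The three except blocks above print an almost identical "Failed to connect" hint. A helper would keep the wording in one place; a sketch (`connect_hint` is a hypothetical name, not part of this patch):

```python
from __future__ import print_function
import sys

# Collapses the repeated error text of the three except blocks
# into a single helper; only the follow-up advice differs.
def connect_hint(host, port, advice):
    print("Failed to connect to Soap server %s:%s\n" % (host, port),
          file=sys.stderr)
    print(advice, file=sys.stderr)
```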
> diff --git a/elbepack/commands/daemon.py b/elbepack/commands/daemon.py
> index a74413a5..137a5d86 100644
> --- a/elbepack/commands/daemon.py
> +++ b/elbepack/commands/daemon.py
> @@ -14,39 +14,44 @@ import elbepack.daemons
>  
>  import sys
>  
> +
>  def get_daemonlist():
> -    return [ x for _, x, _ in iter_modules(elbepack.daemons.__path__) ]
> +    return [x for _, x, _ in iter_modules(elbepack.daemons.__path__)]
> +
>  
> -def run_command( argv ):
> +def run_command(argv):
>      daemons = get_daemonlist()
>  
>      if not daemons:
>          print("no elbe daemons installed")
>  
>      oparser = OptionParser(usage="usage: %prog")
> -    oparser.add_option( "--host", dest="host", default='0.0.0.0',
> -                        help="interface to host daemon" )
> -    oparser.add_option( "--port", dest="port", default=7587,
> -                        help="port to host daemon" )
> +    oparser.add_option("--host", dest="host", default='0.0.0.0',
> +                       help="interface to host daemon")
> +    oparser.add_option("--port", dest="port", default=7587,
> +                       help="port to host daemon")
>  
>      for d in daemons:
> -        oparser.add_option( "--"+str(d), dest=str(d), default=False,
> -                action="store_true", help="enable "+str(d))
> +        oparser.add_option("--" + str(d), dest=str(d), default=False,
> +                           action="store_true", help="enable " + str(d))
>  
> -    (opt,args) = oparser.parse_args(argv)
> +    (opt, args) = oparser.parse_args(argv)
>  
>      active = False
>  
>      for d in daemons:
>          for o in dir(opt):
>              if str(o) == str(d):
> -                if getattr(opt,o) == True:
> +                if getattr(opt, o):
>                      active = True
>                      print("enable %s" % str(d))
>                      module = "elbepack.daemons." + str(d)
>                      mod = __import__(module)
>                      cmdmod = sys.modules[module]
> -                    cherrypy.tree.graft(cmdmod.get_app(cherrypy.engine), "/"+str(d))
> +                    cherrypy.tree.graft(
> +                        cmdmod.get_app(
> +                            cherrypy.engine),
> +                        "/" + str(d))
>      if not active:
>          print("no daemon activated, use")
>          for d in daemons:
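The `getattr(opt, o) == True` → `getattr(opt, o)` rewrite above is safe here because optparse `store_true` options are real bools, but note that the two spellings are not equivalent in general:

```python
# For arbitrary attributes, truthiness and "== True" can differ;
# the aggressive autopep8 rewrite only preserves behavior when the
# value is guaranteed to be a bool, as with optparse store_true.
class Opt(object):
    flag = "yes"          # truthy, but not equal to True

opt = Opt()
assert bool(getattr(opt, "flag")) is True
assert (getattr(opt, "flag") == True) is False
```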
> diff --git a/elbepack/commands/db.py b/elbepack/commands/db.py
> index 138ef281..8850fe92 100644
> --- a/elbepack/commands/db.py
> +++ b/elbepack/commands/db.py
> @@ -7,18 +7,19 @@ from __future__ import print_function
>  
>  from elbepack.dbaction import DbAction
>  
> -def run_command (argv):
> +
> +def run_command(argv):
>  
>      if not len(argv):
>          print("elbe db - no action given")
> -        DbAction.print_actions ()
> +        DbAction.print_actions()
>          return
>  
>      try:
> -        DbAction (argv[0]).execute (argv[1:])
> +        DbAction(argv[0]).execute(argv[1:])
>      except KeyError:
>          print("elbe db - unknown action given")
> -        DbAction.print_actions ()
> +        DbAction.print_actions()
>          return
>  
>      return
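For context, the `DbAction(argv[0]).execute(...)` / `except KeyError` shape above is a name-based dispatch with a help fallback. A minimal registry sketch of the same pattern (names are illustrative, not the elbe API):

```python
# Dispatch a subcommand by name; unknown names fall through the
# KeyError branch, mirroring DbAction/ClientAction above.
ACTIONS = {"list": lambda args: "listing %s" % args}

def run_action(name, args):
    try:
        return ACTIONS[name](args)
    except KeyError:
        return "unknown action: %s" % name
```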
> diff --git a/elbepack/commands/debianize.py b/elbepack/commands/debianize.py
> index f7d0d02c..c31287b1 100644
> --- a/elbepack/commands/debianize.py
> +++ b/elbepack/commands/debianize.py
> @@ -10,17 +10,18 @@ import sys
>  
>  from elbepack.debianize.debianize import Debianize, DebianizeBase
>  
> -def run_command ( args ):
> -    if os.path.exists ('debian'):
> +
> +def run_command(args):
> +    if os.path.exists('debian'):
>          print("debian folder already exists, nothing to do")
> -        sys.exit (10)
> +        sys.exit(10)
>  
>      try:
> -        debianizer = DebianizeBase.get_debianizer ()
> -        Debianize (debianizer).run ()
> +        debianizer = DebianizeBase.get_debianizer()
> +        Debianize(debianizer).run()
>          sys.exit(10)
>      except KeyError:
>          print("This creates a debinization of a source directory.")
>          print("The software was not able to identify the current directory.")
>          print("Please run the command from source directory")
> -        sys.exit (20)
> +        sys.exit(20)
> diff --git a/elbepack/commands/diff.py b/elbepack/commands/diff.py
> index 3aedc4be..3da450e5 100644
> --- a/elbepack/commands/diff.py
> +++ b/elbepack/commands/diff.py
> @@ -11,52 +11,62 @@ import filecmp
>  
>  from optparse import OptionParser
>  
> -def walk_generated( gen_path, fix_path, exclude ):
>  
> -    file_to_rm  = []
> +def walk_generated(gen_path, fix_path, exclude):
> +
> +    file_to_rm = []
>      file_differ = []
>      gen_path = gen_path.rstrip("/")
>      fix_path = fix_path.rstrip("/")
>  
>      for root, dirs, files in os.walk(gen_path):
> -        if root==gen_path:
> +        if root == gen_path:
>              infs_root = "/"
>          else:
> -            infs_root = root.replace( gen_path, "")
> +            infs_root = root.replace(gen_path, "")
>  
>          if True in [infs_root.startswith(x) for x in exclude]:
>              continue
>  
> -        if len(files)==0:
> -            if not os.path.exists(fix_path+infs_root):
> -                print("empty directory %s only exists in gen image" % (infs_root))
> -                file_to_rm.append( infs_root )
> +        if len(files) == 0:
> +            if not os.path.exists(fix_path + infs_root):
> +                print(
> +                    "empty directory %s only exists in gen image" %
> +                    (infs_root))
> +                file_to_rm.append(infs_root)
>          else:
>              for f in files:
> -                gen_fname = os.path.join(gen_path+infs_root, f)
> -                fix_fname = os.path.join(fix_path+infs_root, f)
> +                gen_fname = os.path.join(gen_path + infs_root, f)
> +                fix_fname = os.path.join(fix_path + infs_root, f)
>  
>                  if os.path.exists(gen_fname) and os.path.exists(fix_fname):
>                      if os.path.isfile(gen_fname) and os.path.isfile(fix_fname):
> -                        if not os.path.islink(gen_fname) and not os.path.islink(fix_fname):
> -                            if not filecmp.cmp(gen_fname,fix_fname,shallow=False):
> -                                print("files %s and %s differ" %(gen_fname, fix_fname))
> -                                file_differ.append(os.path.join( infs_root, f ) )
> +                        if not os.path.islink(
> +                                gen_fname) and not os.path.islink(fix_fname):
> +                            if not filecmp.cmp(
> +                                    gen_fname, fix_fname, shallow=False):
> +                                print(
> +                                    "files %s and %s differ" %
> +                                    (gen_fname, fix_fname))
> +                                file_differ.append(os.path.join(infs_root, f))
>                          else:
> -                            if not (os.readlink(gen_fname) == os.readlink(fix_fname)):
> -                                print("symlinks %s and %s differ" %(gen_fname, fix_fname))
> -                                file_differ.append(os.path.join( infs_root, f ) )
> +                            if not (
> +                                    os.readlink(gen_fname) == os.readlink(fix_fname)):
> +                                print(
> +                                    "symlinks %s and %s differ" %
> +                                    (gen_fname, fix_fname))
> +                                file_differ.append(os.path.join(infs_root, f))
>  
>                  elif not os.path.exists(gen_fname) and os.path.exists(fix_fname):
>                      print("file %s only exists in fixed image" % (fix_fname))
>                  elif os.path.exists(gen_fname) and not os.path.exists(fix_fname):
>                      print("file %s only exists in gen image" % (gen_fname))
> -                    file_to_rm.append( os.path.join( infs_root, f ) )
> +                    file_to_rm.append(os.path.join(infs_root, f))
>  
>      return file_differ, file_to_rm
>  
>  
> -def walk_fixed( gen_path, fix_path, exclude ):
> +def walk_fixed(gen_path, fix_path, exclude):
>  
>      file_only = []
>      dir_to_create = []
> @@ -64,35 +74,38 @@ def walk_fixed( gen_path, fix_path, exclude ):
>      fix_path = fix_path.rstrip("/")
>  
>      for root, dirs, files in os.walk(fix_path):
> -        if root==fix_path:
> +        if root == fix_path:
>              infs_root = "/"
>          else:
> -            infs_root = root.replace( fix_path, "")
> +            infs_root = root.replace(fix_path, "")
>  
>          if True in [infs_root.startswith(x) for x in exclude]:
>              continue
>  
> -        if len(files)==0:
> -            if not os.path.exists(gen_path+infs_root):
> -                print("empty directory %s only exists in fix image" % (infs_root))
> -                dir_to_create.append( infs_root.lstrip("/") )
> +        if len(files) == 0:
> +            if not os.path.exists(gen_path + infs_root):
> +                print(
> +                    "empty directory %s only exists in fix image" %
> +                    (infs_root))
> +                dir_to_create.append(infs_root.lstrip("/"))
>          else:
>              for f in files:
> -                gen_fname = os.path.join(gen_path+infs_root, f)
> -                fix_fname = os.path.join(fix_path+infs_root, f)
> +                gen_fname = os.path.join(gen_path + infs_root, f)
> +                fix_fname = os.path.join(fix_path + infs_root, f)
>  
>                  if not os.path.exists(gen_fname) and os.path.exists(fix_fname):
>                      print("file %s only exists in fixed image" % (fix_fname))
> -                    file_only.append( os.path.join( infs_root, f ) )
> +                    file_only.append(os.path.join(infs_root, f))
>  
>      return file_only, dir_to_create
>  
> -def run_command( argv ):
> +
> +def run_command(argv):
>  
>      oparser = OptionParser(usage="usage: %prog diff [options] <dir1> <dir2>")
> -    oparser.add_option( "--exclude", action="append", dest="exclude",
> -                        help="Paths to exclude" )
> -    (opt,args) = oparser.parse_args(argv)
> +    oparser.add_option("--exclude", action="append", dest="exclude",
> +                       help="Paths to exclude")
> +    (opt, args) = oparser.parse_args(argv)
>  
>      if len(args) != 2:
>          print("Wrong number of arguments")
> @@ -105,23 +118,20 @@ def run_command( argv ):
>      gen_rfs = args[0]
>      fix_rfs = args[1]
>  
> -    differ, rm = walk_generated( gen_rfs, fix_rfs, opt.exclude )
> -    only, mkdir = walk_fixed( gen_rfs, fix_rfs, opt.exclude )
> -
> +    differ, rm = walk_generated(gen_rfs, fix_rfs, opt.exclude)
> +    only, mkdir = walk_fixed(gen_rfs, fix_rfs, opt.exclude)
>  
>      print("suggesting:")
>      print()
>  
>      for f in rm:
> -        print("<rm>%s</rm>"%f)
> +        print("<rm>%s</rm>" % f)
>  
>      for d in mkdir:
> -        print("<mkdir>%s</mkdir>"%d)
> +        print("<mkdir>%s</mkdir>" % d)
>  
>      print("")
>  
> -    fileline=""
> -    for f in differ+only:
> -        print("tar rf archive.tar -C %s %s"%(fix_rfs, f))
> -
> -
> +    fileline = ""
> +    for f in differ + only:
> +        print("tar rf archive.tar -C %s %s" % (fix_rfs, f))
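One remark on `walk_generated()` above: it relies on `filecmp.cmp(..., shallow=False)` for a byte-for-byte comparison, which matters because the default `shallow=True` may report equality from `os.stat()` data alone. A self-contained illustration:

```python
import filecmp
import os
import tempfile

# shallow=False forces a content comparison, as used by
# walk_generated above; identical then diverging files demonstrate it.
d = tempfile.mkdtemp()
a = os.path.join(d, "a")
b = os.path.join(d, "b")
with open(a, "w") as f:
    f.write("same contents\n")
with open(b, "w") as f:
    f.write("same contents\n")
assert filecmp.cmp(a, b, shallow=False)
with open(b, "w") as f:
    f.write("different\n")
assert not filecmp.cmp(a, b, shallow=False)
```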
> diff --git a/elbepack/commands/gen_update.py b/elbepack/commands/gen_update.py
> index 10811a33..47f65d4d 100644
> --- a/elbepack/commands/gen_update.py
> +++ b/elbepack/commands/gen_update.py
> @@ -13,32 +13,39 @@ from elbepack.elbeproject import ElbeProject
>  from elbepack.elbexml import ValidationError
>  from elbepack.updatepkg import gen_update_pkg, MissingData
>  
> -def run_command( argv ):
> +
> +def run_command(argv):
>      oparser = OptionParser(usage="usage: %prog gen_update [options] [xmlfile]")
> -    oparser.add_option( "-t", "--target", dest="target",
> -                        help="directoryname of target" )
> -    oparser.add_option( "-o", "--output", dest="output",
> -                        help="filename of the update package" )
> -    oparser.add_option( "-n", "--name", dest="name",
> -                        help="name of the project (included in the report)" )
> -    oparser.add_option( "-p", "--pre-sh", dest="presh_file",
> -                        help="script that is executed before the update will be applied" )
> -    oparser.add_option( "-P", "--post-sh", dest="postsh_file",
> -                        help="script that is executed after the update was applied" )
> -    oparser.add_option( "-c", "--cfg-dir", dest="cfg_dir",
> -                        help="files that are copied to target" )
> -    oparser.add_option( "-x", "--cmd-dir", dest="cmd_dir",
> -                        help="scripts that are executed on the target" )
> -    oparser.add_option( "--skip-validation", action="store_true",
> -                        dest="skip_validation", default=False,
> -                        help="Skip xml schema validation" )
> -    oparser.add_option( "--buildtype", dest="buildtype",
> -                        help="Override the buildtype" )
> -    oparser.add_option( "--debug", action="store_true", dest="debug",
> -                        default=False,
> -                        help="Enable various features to debug the build" )
> -
> -    (opt,args) = oparser.parse_args(argv)
> +    oparser.add_option("-t", "--target", dest="target",
> +                       help="directoryname of target")
> +    oparser.add_option("-o", "--output", dest="output",
> +                       help="filename of the update package")
> +    oparser.add_option("-n", "--name", dest="name",
> +                       help="name of the project (included in the report)")
> +    oparser.add_option(
> +        "-p",
> +        "--pre-sh",
> +        dest="presh_file",
> +        help="script that is executed before the update will be applied")
> +    oparser.add_option(
> +        "-P",
> +        "--post-sh",
> +        dest="postsh_file",
> +        help="script that is executed after the update was applied")
> +    oparser.add_option("-c", "--cfg-dir", dest="cfg_dir",
> +                       help="files that are copied to target")
> +    oparser.add_option("-x", "--cmd-dir", dest="cmd_dir",
> +                       help="scripts that are executed on the target")
> +    oparser.add_option("--skip-validation", action="store_true",
> +                       dest="skip_validation", default=False,
> +                       help="Skip xml schema validation")
> +    oparser.add_option("--buildtype", dest="buildtype",
> +                       help="Override the buildtype")
> +    oparser.add_option("--debug", action="store_true", dest="debug",
> +                       default=False,
> +                       help="Enable various features to debug the build")
> +
> +    (opt, args) = oparser.parse_args(argv)
>  
>      if len(args) != 1:
>          if not opt.cfg_dir and not opt.cmd_dir:
> @@ -59,9 +66,9 @@ def run_command( argv ):
>          buildtype = None
>  
>      try:
> -        project = ElbeProject (opt.target, name=opt.name,
> -                override_buildtype=buildtype,
> -                skip_validate=opt.skip_validation)
> +        project = ElbeProject(opt.target, name=opt.name,
> +                              override_buildtype=buildtype,
> +                              skip_validate=opt.skip_validation)
>      except ValidationError as e:
>          print(str(e))
>          print("xml validation failed. Bailing out")
> @@ -81,12 +88,12 @@ def run_command( argv ):
>  
>      update_xml = None
>      if len(args) >= 1:
> -        update_xml = args[ 0 ]
> +        update_xml = args[0]
>  
>      try:
> -        gen_update_pkg( project, update_xml, opt.output, buildtype,
> -                opt.skip_validation, opt.debug,
> -                cfg_dir = opt.cfg_dir, cmd_dir = opt.cmd_dir )
> +        gen_update_pkg(project, update_xml, opt.output, buildtype,
> +                       opt.skip_validation, opt.debug,
> +                       cfg_dir=opt.cfg_dir, cmd_dir=opt.cmd_dir)
>  
>      except ValidationError as e:
>          print(str(e))
> diff --git a/elbepack/commands/genlicence.py b/elbepack/commands/genlicence.py
> index acaceab6..aac98601 100644
> --- a/elbepack/commands/genlicence.py
> +++ b/elbepack/commands/genlicence.py
> @@ -13,14 +13,15 @@ import io
>  from elbepack.asciidoclog import StdoutLog
>  from elbepack.filesystem import Filesystem
>  
> -def run_command( argv ):
> +
> +def run_command(argv):
>      oparser = OptionParser(usage="usage: %prog genlicence [options] <rfs>")
> -    oparser.add_option( "--output", dest="output",
> -                        help="outputfilename" )
> -    oparser.add_option( "--xml", dest="xml", default=None,
> -                        help="xml outputfilename" )
> +    oparser.add_option("--output", dest="output",
> +                       help="outputfilename")
> +    oparser.add_option("--xml", dest="xml", default=None,
> +                       help="xml outputfilename")
>  
> -    (opt,args) = oparser.parse_args(argv)
> +    (opt, args) = oparser.parse_args(argv)
>  
>      if len(args) != 1:
>          print("wrong number of arguments")
> @@ -29,14 +30,13 @@ def run_command( argv ):
>  
>      chroot = os.path.abspath(args[0])
>  
> -    rfs = Filesystem( chroot )
> +    rfs = Filesystem(chroot)
>      log = StdoutLog()
>  
>      if opt.output:
> -        f = io.open( opt.output, "w+", encoding='utf-8' )
> +        f = io.open(opt.output, "w+", encoding='utf-8')
>      else:
> -        f = io.open( 'licence.txt', "w+", encoding='utf-8' )
> +        f = io.open('licence.txt', "w+", encoding='utf-8')
>  
>      rfs.write_licenses(f, log, opt.xml)
>      f.close()
> -
> diff --git a/elbepack/commands/get_archive.py b/elbepack/commands/get_archive.py
> index eb0505b0..6f3b14ea 100644
> --- a/elbepack/commands/get_archive.py
> +++ b/elbepack/commands/get_archive.py
> @@ -14,41 +14,39 @@ from elbepack.treeutils import etree
>  from optparse import OptionParser
>  
>  
> -def unbase( s, fname ):
> +def unbase(s, fname):
>      outfile = file(fname, "w")
> -    outfile.write( standard_b64decode(s) )
> +    outfile.write(standard_b64decode(s))
>      outfile.close()
>  
>  
> -def run_command( argv ):
> +def run_command(argv):
>  
> -    oparser = OptionParser( usage="usage: %prog get_archive <xmlfile> <archive>")
> -    (opt,args) = oparser.parse_args(argv)
> +    oparser = OptionParser(
> +        usage="usage: %prog get_archive <xmlfile> <archive>")
> +    (opt, args) = oparser.parse_args(argv)
>  
>      if len(args) != 2:
>          print("Wrong number of arguments")
>          oparser.print_help()
>          sys.exit(20)
>  
> -    if os.path.exists( args[1] ):
> +    if os.path.exists(args[1]):
>          print("archive already exists, bailing out")
>          sys.exit(20)
>  
>      try:
> -        xml = etree( args[0] )
> -    except:
> +        xml = etree(args[0])
> +    except BaseException:
>          print("Error reading xml file!")
>          sys.exit(20)
>  
>      if xml.has("archive"):
>          try:
> -            unbase( xml.text("/archive"), args[1] )
> -        except:
> +            unbase(xml.text("/archive"), args[1])
> +        except BaseException:
>              print("Error writing archive")
>              sys.exit(20)
>      else:
>          print("no archive in this xml file.")
>          sys.exit(20)
> -
> -
> -
> diff --git a/elbepack/commands/hdimg.py b/elbepack/commands/hdimg.py
> index 48efe7b5..f29958b3 100644
> --- a/elbepack/commands/hdimg.py
> +++ b/elbepack/commands/hdimg.py
> @@ -12,27 +12,32 @@ from optparse import OptionParser
>  from elbepack.elbeproject import ElbeProject
>  from elbepack.elbexml import ValidationError
>  
> -def run_command( argv ):
>  
> -    oparser = OptionParser( usage="usage: %prog hdimg --target <dir> --output <out> <xmlfile>")
> -    oparser.add_option( "--target", dest="target",
> -                        help="target directory",
> -                        metavar="FILE" )
> -    oparser.add_option( "-o", "--output", dest="output",
> -                        help="name of logfile" )
> -    oparser.add_option( "--buildtype", dest="buildtype",
> -                        help="Override the buildtype" )
> -    oparser.add_option( "--skip-validation", action="store_true",
> -                        dest="skip_validation", default=False,
> -                        help="Skip xml schema validation" )
> -    oparser.add_option( "--skip-grub", action="store_true",
> -                        dest="skip_grub", default=False,
> -                        help="Skip grub install" )
> -    oparser.add_option( "--grub-version", type="int",
> -                        dest="grub_version", default=202,
> -                        help="use specific grub version (possible values are 0, 199 and 202)" )
> +def run_command(argv):
>  
> -    (opt,args) = oparser.parse_args(argv)
> +    oparser = OptionParser(
> +        usage="usage: %prog hdimg --target <dir> --output <out> <xmlfile>")
> +    oparser.add_option("--target", dest="target",
> +                       help="target directory",
> +                       metavar="FILE")
> +    oparser.add_option("-o", "--output", dest="output",
> +                       help="name of logfile")
> +    oparser.add_option("--buildtype", dest="buildtype",
> +                       help="Override the buildtype")
> +    oparser.add_option("--skip-validation", action="store_true",
> +                       dest="skip_validation", default=False,
> +                       help="Skip xml schema validation")
> +    oparser.add_option("--skip-grub", action="store_true",
> +                       dest="skip_grub", default=False,
> +                       help="Skip grub install")
> +    oparser.add_option(
> +        "--grub-version",
> +        type="int",
> +        dest="grub_version",
> +        default=202,
> +        help="use specific grub version (possible values are 0, 199 and 202)")
> +
> +    (opt, args) = oparser.parse_args(argv)
>  
>      if len(args) != 1:
>          print("Wrong number of arguments")
> @@ -52,15 +57,15 @@ def run_command( argv ):
>      if opt.skip_grub:
>          opt.grub_version = 0
>  
> -    if opt.grub_version not in [0,199,202]:
> +    if opt.grub_version not in [0, 199, 202]:
>          print("invalid grub version")
>          oparser.print_help()
>          sys.exit(20)
>  
>      try:
> -        project = ElbeProject( opt.target, override_buildtype=opt.buildtype,
> -                xmlpath=args[0], logpath=opt.output,
> -                skip_validate=opt.skip_validation )
> +        project = ElbeProject(opt.target, override_buildtype=opt.buildtype,
> +                              xmlpath=args[0], logpath=opt.output,
> +                              skip_validate=opt.skip_validation)
>      except ValidationError as e:
>          print(str(e))
>          print("xml validation failed. Bailing out")
> diff --git a/elbepack/commands/init.py b/elbepack/commands/init.py
> index 561bdbfa..dfe538e5 100644
> --- a/elbepack/commands/init.py
> +++ b/elbepack/commands/init.py
> @@ -19,40 +19,52 @@ from elbepack.directories import init_template_dir, elbe_dir
>  
>  from optparse import OptionParser
>  
> -def run_command( argv ):
> -    oparser = OptionParser( usage="usage: %prog init [options] <filename>" )
>  
> -    oparser.add_option( "--skip-validation", action="store_true",
> -                        dest="skip_validation", default=False,
> -                        help="Skip xml schema validation" )
> +def run_command(argv):
> +    oparser = OptionParser(usage="usage: %prog init [options] <filename>")
>  
> -    oparser.add_option( "--directory", dest="directory",
> -                        help="Working directory (default is build)",
> -                        metavar="FILE" )
> +    oparser.add_option("--skip-validation", action="store_true",
> +                       dest="skip_validation", default=False,
> +                       help="Skip xml schema validation")
>  
> -    oparser.add_option( "--cdrom", dest="cdrom",
> -                        help="Use FILE as cdrom iso, and use that to build the initvm",
> -                        metavar="FILE" )
> +    oparser.add_option("--directory", dest="directory",
> +                       help="Working directory (default is build)",
> +                       metavar="FILE")
>  
> -    oparser.add_option( "--proxy", dest="proxy",
> -                        help="Override the http Proxy" )
> +    oparser.add_option(
> +        "--cdrom",
> +        dest="cdrom",
> +        help="Use FILE as cdrom iso, and use that to build the initvm",
> +        metavar="FILE")
>  
> -    oparser.add_option( "--buildtype", dest="buildtype",
> -                        help="Override the buildtype" )
> +    oparser.add_option("--proxy", dest="proxy",
> +                       help="Override the http Proxy")
>  
> -    oparser.add_option( "--debug", dest="debug",
> -                        action="store_true", default=False,
> -           help="start qemu in graphical mode to enable console switch" )
> +    oparser.add_option("--buildtype", dest="buildtype",
> +                       help="Override the buildtype")
>  
> -    oparser.add_option( "--devel", dest="devel",
> -                        action="store_true", default=False,
> -           help="use devel mode, and install current builddir inside initvm" )
> +    oparser.add_option(
> +        "--debug",
> +        dest="debug",
> +        action="store_true",
> +        default=False,
> +        help="start qemu in graphical mode to enable console switch")
>  
> -    oparser.add_option( "--nesting", dest="nesting",
> -                        action="store_true", default=False,
> -           help="allow initvm to support nested kvm. This makes /proc/cpuinfo inside initvm differ per host." )
> +    oparser.add_option(
> +        "--devel",
> +        dest="devel",
> +        action="store_true",
> +        default=False,
> +        help="use devel mode, and install current builddir inside initvm")
>  
> -    (opt,args) = oparser.parse_args(argv)
> +    oparser.add_option(
> +        "--nesting",
> +        dest="nesting",
> +        action="store_true",
> +        default=False,
> +        help="allow initvm to support nested kvm. This makes /proc/cpuinfo inside initvm differ per host.")
> +
> +    (opt, args) = oparser.parse_args(argv)
>  
>      if len(args) == 0:
>          print("no filename specified")
> @@ -64,46 +76,47 @@ def run_command( argv ):
>          sys.exit(20)
>  
>      if opt.devel:
> -        if not os.path.isdir( os.path.join (elbe_dir, "elbepack")):
> +        if not os.path.isdir(os.path.join(elbe_dir, "elbepack")):
>              print("Devel Mode only valid, when running from elbe checkout")
>              sys.exit(20)
>  
>      if not opt.skip_validation:
> -        validation = validate_xml (args[0])
> -        if len (validation) != 0:
> +        validation = validate_xml(args[0])
> +        if len(validation) != 0:
>              print("xml validation failed. Bailing out")
>              for i in validation:
>                  print(i)
>              sys.exit(20)
>  
> -    xml = etree( args[0] )
> +    xml = etree(args[0])
>  
> -    if not xml.has( "initvm" ):
> +    if not xml.has("initvm"):
>          print("fatal error: xml missing mandatory section 'initvm'")
>          sys.exit(20)
>  
>      if opt.buildtype:
>          buildtype = opt.buildtype
> -    elif xml.has( "initvm/buildtype" ):
> -        buildtype = xml.text( "/initvm/buildtype" )
> +    elif xml.has("initvm/buildtype"):
> +        buildtype = xml.text("/initvm/buildtype")
>      else:
>          buildtype = "nodefaults"
>  
> -    defs = ElbeDefaults( buildtype )
> +    defs = ElbeDefaults(buildtype)
>  
>      http_proxy = ""
> -    if os.getenv ("http_proxy"):
> -        http_proxy = os.getenv ("http_proxy")
> +    if os.getenv("http_proxy"):
> +        http_proxy = os.getenv("http_proxy")
>      elif opt.proxy:
>          http_proxy = opt.proxy
>      elif xml.has("initvm/mirror/primary_proxy"):
> -        http_proxy = xml.text("initvm/mirror/primary_proxy").strip().replace("LOCALMACHINE", "localhost")
> +        http_proxy = xml.text(
> +            "initvm/mirror/primary_proxy").strip().replace("LOCALMACHINE", "localhost")
>  
>      if opt.cdrom:
> -        mirror = xml.node ("initvm/mirror")
> -        mirror.clear ()
> -        cdrom = mirror.ensure_child ("cdrom")
> -        cdrom.set_text (os.path.abspath (opt.cdrom))
> +        mirror = xml.node("initvm/mirror")
> +        mirror.clear()
> +        cdrom = mirror.ensure_child("cdrom")
> +        cdrom.set_text(os.path.abspath(opt.cdrom))
>  
>      if not opt.directory:
>          path = "./build"
> @@ -113,14 +126,18 @@ def run_command( argv ):
>      try:
>          os.makedirs(path)
>      except OSError as e:
> -        print("unable to create project directory: %s (%s)" % (path, e.strerror))
> +        print(
> +            "unable to create project directory: %s (%s)" %
> +            (path, e.strerror))
>          sys.exit(30)
>  
> -    out_path = os.path.join(path,".elbe-in")
> +    out_path = os.path.join(path, ".elbe-in")
>      try:
>          os.makedirs(out_path)
>      except OSError as e:
> -        print("unable to create subdirectory: %s (%s)" % (out_path, e.strerror))
> +        print(
> +            "unable to create subdirectory: %s (%s)" %
> +            (out_path, e.strerror))
>          sys.exit(30)
>  
>      d = {"elbe_version": elbe_version,
> @@ -130,12 +147,12 @@ def run_command( argv ):
>           "prj": xml.node("/initvm"),
>           "http_proxy": http_proxy,
>           "pkgs": xml.node("/initvm/pkg-list") or [],
> -         "preseed": get_initvm_preseed(xml) }
> +         "preseed": get_initvm_preseed(xml)}
>  
>      if http_proxy != "":
> -        os.putenv ("http_proxy", http_proxy)
> -        os.putenv ("https_proxy", http_proxy)
> -        os.putenv ("no_proxy", "localhost,127.0.0.1")
> +        os.putenv("http_proxy", http_proxy)
> +        os.putenv("https_proxy", http_proxy)
> +        os.putenv("no_proxy", "localhost,127.0.0.1")
>  
>      try:
>          copy_kinitrd(xml.node("/initvm"), out_path, defs, arch="amd64")
> @@ -147,27 +164,42 @@ def run_command( argv ):
>          print("Check Mirror configuration")
>          sys.exit(20)
>  
> -    templates = os.listdir( init_template_dir )
> +    templates = os.listdir(init_template_dir)
>  
> -    make_executable = [ "init-elbe.sh.mako",
> -                        "preseed.cfg.mako" ]
> +    make_executable = ["init-elbe.sh.mako",
> +                       "preseed.cfg.mako"]
>  
>      for t in templates:
> -        o = t.replace( ".mako", "" )
> +        o = t.replace(".mako", "")
>  
>          if t == "Makefile.mako" or t == "libvirt.xml.mako":
> -            write_template(os.path.join(path,o), os.path.join(init_template_dir, t), d, linebreak=True )
> +            write_template(
> +                os.path.join(
> +                    path, o), os.path.join(
> +                    init_template_dir, t), d, linebreak=True)
>          else:
> -            write_template(os.path.join(out_path,o), os.path.join(init_template_dir, t), d, linebreak=False )
> +            write_template(
> +                os.path.join(
> +                    out_path, o), os.path.join(
> +                    init_template_dir, t), d, linebreak=False)
>  
>          if t in make_executable:
> -            os.chmod( os.path.join(out_path,o), 0o755 )
> +            os.chmod(os.path.join(out_path, o), 0o755)
>  
> -    shutil.copyfile( args[0],
> -       os.path.join(out_path, "source.xml" ) )
> +    shutil.copyfile(args[0],
> +                    os.path.join(out_path, "source.xml"))
>  
>      if opt.cdrom:
> -        shutil.move( "/tmp/elbe-keyring.gpg", os.path.join(out_path, "elbe-keyring.gpg" ) )
> +        shutil.move(
> +            "/tmp/elbe-keyring.gpg",
> +            os.path.join(
> +                out_path,
> +                "elbe-keyring.gpg"))
>  
>      if opt.devel:
> -        os.system ('tar cfj "%s" -C "%s" .' % (os.path.join (out_path, "elbe-devel.tar.bz2"), elbe_dir))
> +        os.system(
> +            'tar cfj "%s" -C "%s" .' %
> +            (os.path.join(
> +                out_path,
> +                "elbe-devel.tar.bz2"),
> +                elbe_dir))
> diff --git a/elbepack/commands/initvm.py b/elbepack/commands/initvm.py
> index c1657198..21a1dc7f 100644
> --- a/elbepack/commands/initvm.py
> +++ b/elbepack/commands/initvm.py
> @@ -11,66 +11,79 @@ import os
>  from optparse import OptionParser
>  from elbepack.initvmaction import InitVMAction, InitVMError
>  
> -def run_command (argv):
> -    oparser = OptionParser (usage="usage: elbe initvm [options] <command>")
>  
> -    oparser.add_option ("--directory", dest="directory", default=None,
> -                        help="directory, where the initvm resides, default is ./initvm")
> +def run_command(argv):
> +    oparser = OptionParser(usage="usage: elbe initvm [options] <command>")
>  
> -    oparser.add_option ("--cdrom", dest="cdrom", default=None,
> -                        help="iso image of Binary cdrom")
> +    oparser.add_option(
> +        "--directory",
> +        dest="directory",
> +        default=None,
> +        help="directory, where the initvm resides, default is ./initvm")
>  
> -    oparser.add_option( "--devel", action="store_true",
> -                        dest="devel", default=False,
> -                        help="Install elbe Version from the current working into initvm" )
> +    oparser.add_option("--cdrom", dest="cdrom", default=None,
> +                       help="iso image of Binary cdrom")
>  
> -    oparser.add_option( "--skip-download", action="store_true",
> -                        dest="skip_download", default=False,
> -                        help="Skip downloading generated Files" )
> +    oparser.add_option(
> +        "--devel",
> +        action="store_true",
> +        dest="devel",
> +        default=False,
> +        help="Install elbe Version from the current working into initvm")
>  
> -    oparser.add_option ("--output", dest="outdir", default=None,
> -                        help="directory where to save downloaded Files")
> +    oparser.add_option("--skip-download", action="store_true",
> +                       dest="skip_download", default=False,
> +                       help="Skip downloading generated Files")
>  
> -    oparser.add_option( "--skip-build-bin", action="store_false",
> -                        dest="build_bin", default=True,
> -                        help="Skip building Binary Repository CDROM, for exact Reproduction" )
> +    oparser.add_option("--output", dest="outdir", default=None,
> +                       help="directory where to save downloaded Files")
>  
> -    oparser.add_option( "--skip-build-sources", action="store_false",
> -                        dest="build_sources", default=True,
> -                        help="Skip building Source CDROM" )
> +    oparser.add_option(
> +        "--skip-build-bin",
> +        action="store_false",
> +        dest="build_bin",
> +        default=True,
> +        help="Skip building Binary Repository CDROM, for exact Reproduction")
>  
> -    oparser.add_option( "--keep-files", action="store_true",
> -                        dest="keep_files", default=False,
> -                        help="don't delete elbe project files in initvm" )
> +    oparser.add_option("--skip-build-sources", action="store_false",
> +                       dest="build_sources", default=True,
> +                       help="Skip building Source CDROM")
>  
> -    oparser.add_option ("--writeproject", dest="writeproject", default=None,
> -                        help="write project name to file")
> +    oparser.add_option("--keep-files", action="store_true",
> +                       dest="keep_files", default=False,
> +                       help="don't delete elbe project files in initvm")
>  
> -    oparser.add_option( "--nesting", dest="nesting",
> -                        action="store_true", default=False,
> -           help="allow initvm to support nested kvm. This makes /proc/cpuinfo inside initvm differ per host." )
> +    oparser.add_option("--writeproject", dest="writeproject", default=None,
> +                       help="write project name to file")
>  
> -    (opt,args) = oparser.parse_args (argv)
> +    oparser.add_option(
> +        "--nesting",
> +        dest="nesting",
> +        action="store_true",
> +        default=False,
> +        help="allow initvm to support nested kvm. This makes /proc/cpuinfo inside initvm differ per host.")
> +
> +    (opt, args) = oparser.parse_args(argv)
>  
>      if len(args) < 1:
>          print("elbe initvm - no subcommand given", file=sys.stderr)
> -        InitVMAction.print_actions ()
> +        InitVMAction.print_actions()
>          sys.exit(20)
>  
>      directory = opt.directory or os.getcwd() + '/initvm'
>  
>      # Use absolute Path
> -    directory = os.path.abspath (directory)
> +    directory = os.path.abspath(directory)
>  
>      try:
> -        action = InitVMAction (args[0])
> +        action = InitVMAction(args[0])
>      except KeyError:
>          print("elbe initvm - unknown subcommand", file=sys.stderr)
> -        InitVMAction.print_actions ()
> +        InitVMAction.print_actions()
>          sys.exit(20)
>  
>      try:
> -        action.execute (directory, opt, args[1:])
> +        action.execute(directory, opt, args[1:])
>      except InitVMError as e:
>          print("InitVM Exception", file=sys.stderr)
>          print(e, file=sys.stderr)
> diff --git a/elbepack/commands/mkcdrom.py b/elbepack/commands/mkcdrom.py
> index 33e553d8..dbd31171 100644
> --- a/elbepack/commands/mkcdrom.py
> +++ b/elbepack/commands/mkcdrom.py
> @@ -17,35 +17,38 @@ from elbepack.efilesystem import ChRootFilesystem
>  from elbepack.cdroms import mk_source_cdrom, mk_binary_cdrom, CDROM_SIZE
>  
>  
> -
> -def run_command( argv ):
> +def run_command(argv):
>      oparser = OptionParser(usage="usage: %prog mkcdrom [options] <builddir>")
> -    oparser.add_option( "--skip-validation", action="store_true",
> -                        dest="skip_validation", default=False,
> -                        help="Skip xml schema validation" )
> -    oparser.add_option( "--buildtype", dest="buildtype",
> -                        help="Override the buildtype" )
> -    oparser.add_option( "--arch", dest="arch",
> -                        help="Override the architecture" )
> -    oparser.add_option( "--codename", dest="codename",
> -                        help="Override the codename" )
> -    oparser.add_option( "--init_codename", dest="init_codename",
> -                        help="Override the initvm codename" )
> -    oparser.add_option( "--rfs-only", action="store_true",
> -                        dest="rfs_only", default=False,
> -                        help="builddir points to RFS" )
> -    oparser.add_option( "--log", dest="log",
> -                        help="Log to filename" )
> -    oparser.add_option( "--binary", action="store_true",
> -                        dest="binary", default=False,
> -                        help="build binary cdrom" )
> -    oparser.add_option( "--source", action="store_true",
> -                        dest="source", default=False,
> -                        help="build source cdrom" )
> -    oparser.add_option( "--cdrom-size", action="store",
> -                        dest="cdrom_size", default=CDROM_SIZE, help="ISO CD size in MB" )
> +    oparser.add_option("--skip-validation", action="store_true",
> +                       dest="skip_validation", default=False,
> +                       help="Skip xml schema validation")
> +    oparser.add_option("--buildtype", dest="buildtype",
> +                       help="Override the buildtype")
> +    oparser.add_option("--arch", dest="arch",
> +                       help="Override the architecture")
> +    oparser.add_option("--codename", dest="codename",
> +                       help="Override the codename")
> +    oparser.add_option("--init_codename", dest="init_codename",
> +                       help="Override the initvm codename")
> +    oparser.add_option("--rfs-only", action="store_true",
> +                       dest="rfs_only", default=False,
> +                       help="builddir points to RFS")
> +    oparser.add_option("--log", dest="log",
> +                       help="Log to filename")
> +    oparser.add_option("--binary", action="store_true",
> +                       dest="binary", default=False,
> +                       help="build binary cdrom")
> +    oparser.add_option("--source", action="store_true",
> +                       dest="source", default=False,
> +                       help="build source cdrom")
> +    oparser.add_option(
> +        "--cdrom-size",
> +        action="store",
> +        dest="cdrom_size",
> +        default=CDROM_SIZE,
> +        help="ISO CD size in MB")
>  
> -    (opt,args) = oparser.parse_args(argv)
> +    (opt, args) = oparser.parse_args(argv)
>  
>      if len(args) != 1:
>          print("wrong number of arguments", file=sys.stderr)
> @@ -54,50 +57,54 @@ def run_command( argv ):
>  
>      if not opt.rfs_only:
>          try:
> -            project = ElbeProject( args[0], logpath=opt.log,
> -                    override_buildtype=opt.buildtype,
> -                    skip_validate=opt.skip_validation )
> +            project = ElbeProject(args[0], logpath=opt.log,
> +                                  override_buildtype=opt.buildtype,
> +                                  skip_validate=opt.skip_validation)
>          except ValidationError as e:
> -            print(str (e), file=sys.stderr)
> +            print(str(e), file=sys.stderr)
>              print("xml validation failed. Bailing out", file=sys.stderr)
>              sys.exit(20)
>  
>          builddir = project.builddir
>          rfs = project.buildenv.rfs
>          xml = project.xml
> -        arch = xml.text("project/arch", key="arch" )
> -        codename = xml.text("project/suite" )
> +        arch = xml.text("project/arch", key="arch")
> +        codename = xml.text("project/suite")
>          log = project.log
> -        init_codename = xml.get_initvm_codename ()
> +        init_codename = xml.get_initvm_codename()
>      else:
> -        builddir = os.path.abspath( os.path.curdir )
> -        rfs = ChRootFilesystem( args[0] )
> +        builddir = os.path.abspath(os.path.curdir)
> +        rfs = ChRootFilesystem(args[0])
>          arch = opt.arch
>          codename = opt.codename
>          init_codename = opt.init_codename
>          xml = None
>          if opt.log:
> -            log = ASCIIDocLog( opt.log )
> +            log = ASCIIDocLog(opt.log)
>          else:
>              log = StdoutLog()
>  
>      generated_files = []
>      if opt.source:
>          with rfs:
> -            generated_files += mk_source_cdrom( rfs, arch, codename,
> -                                                init_codename, builddir, log,
> -                                                opt.cdrom_size )
> +            generated_files += mk_source_cdrom(rfs, arch, codename,
> +                                               init_codename, builddir, log,
> +                                               opt.cdrom_size)
>  
>      if opt.binary:
>          with rfs:
> -            generated_files += mk_binary_cdrom( rfs, arch, codename,
> -                                                init_codename, xml, builddir, log,
> -                    opt.cdrom_size )
> +            generated_files += mk_binary_cdrom(rfs,
> +                                               arch,
> +                                               codename,
> +                                               init_codename,
> +                                               xml,
> +                                               builddir,
> +                                               log,
> +                                               opt.cdrom_size)
>  
>      print("")
>      print("Image Build finished !")
>      print("")
>      print("Files generated:")
>      for f in generated_files:
> -        print(" %s"%f)
> -
> +        print(" %s" % f)
> diff --git a/elbepack/commands/parselicence.py b/elbepack/commands/parselicence.py
> index ef351ab5..044a8b33 100644
> --- a/elbepack/commands/parselicence.py
> +++ b/elbepack/commands/parselicence.py
> @@ -18,68 +18,67 @@ from elbepack.treeutils import etree
>  from elbepack.version import elbe_version
>  from elbepack.shellhelper import system_out
>  
> +
>  class license_dep5_to_spdx (dict):
> -    def __init__ (self, xml_fname=None):
> +    def __init__(self, xml_fname=None):
>          self.perpackage_mapping = {}
>          self.perpackage_override = {}
>          if xml_fname is None:
>              return
>  
> -        xml = etree (xml_fname)
> +        xml = etree(xml_fname)
>  
> -        if xml.root.has ('global'):
> +        if xml.root.has('global'):
>              for mapping in xml.root.node('global'):
>                  self[mapping.et.attrib['name']] = mapping.et.text
>  
> -        if xml.root.has ('perpackage'):
> +        if xml.root.has('perpackage'):
>              for pkg in xml.root.node('perpackage'):
>                  pname = pkg.et.attrib['name']
>                  self.perpackage_mapping[pname] = {}
>                  self.perpackage_override[pname] = []
>                  for pp in pkg:
>                      if pp.tag == 'mapping':
> -                        self.perpackage_mapping[pname][pp.et.attrib['name']] = pp.et.text
> +                        self.perpackage_mapping[pname][pp.et.attrib['name']
> +                                                       ] = pp.et.text
>                      if pp.tag == 'license':
> -                        self.perpackage_override[pname].append (pp.et.text)
> -
> +                        self.perpackage_override[pname].append(pp.et.text)
>  
> -    def have_override (self, pkgname):
> +    def have_override(self, pkgname):
>          return pkgname in self.perpackage_override
>  
> -    def get_override (self, pkgname):
> +    def get_override(self, pkgname):
>          return self.perpackage_override[pkgname]
>  
> -    def map_one_license (self, pkgname, l, errors):
> +    def map_one_license(self, pkgname, l, errors):
>          if pkgname in self.perpackage_mapping:
>              if l in self.perpackage_mapping[pkgname]:
>                  return self.perpackage_mapping[pkgname][l]
>          if l in self:
>              return self[l]
> -        errors.append ('no mapping for "%s" for pkg "%s"' % (l, pkgname))
> +        errors.append('no mapping for "%s" for pkg "%s"' % (l, pkgname))
>          return None
>  
> -    def map_license_string (self, pkgname, l_string, errors):
> +    def map_license_string(self, pkgname, l_string, errors):
>          ors = []
>          for one_or in l_string.split(' or '):
>              ands = []
> -            for one_and in one_or.split (' and '):
> -                with_split = one_and.split (' with ')
> -                mapped_lic = self.map_one_license (pkgname, with_split[0], errors)
> +            for one_and in one_or.split(' and '):
> +                with_split = one_and.split(' with ')
> +                mapped_lic = self.map_one_license(
> +                    pkgname, with_split[0], errors)
>                  if mapped_lic is None:
>                      mapped_lic = u"UNKNOWN_MAPPING(%s)" % with_split[0]
> -                if len (with_split) == 2:
> -                    ands.append (mapped_lic + u" WITH " + with_split[1])
> +                if len(with_split) == 2:
> +                    ands.append(mapped_lic + u" WITH " + with_split[1])
>                  else:
> -                    ands.append (mapped_lic)
> -            ors.append (' AND '.join (ands))
> +                    ands.append(mapped_lic)
> +            ors.append(' AND '.join(ands))
>  
> -        retval = ' OR '.join (ors)
> +        retval = ' OR '.join(ors)
>          return retval
>  
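Side note for reviewers: the ` or ` / ` and ` / ` with ` splitting above implements DEP-5 license-expression parsing. A minimal standalone sketch of the same logic (with a hypothetical identity mapper standing in for the class's per-package lookup tables):

```python
def map_license_string(l_string, map_one=lambda lic: lic.strip()):
    # Split a DEP-5 license expression on ' or ', then ' and ',
    # then an optional ' with ' exception clause, mapping each
    # base license through map_one (a stand-in here for the
    # per-package SPDX lookup used in the patch).
    ors = []
    for one_or in l_string.split(' or '):
        ands = []
        for one_and in one_or.split(' and '):
            with_split = one_and.split(' with ')
            mapped = map_one(with_split[0])
            if len(with_split) == 2:
                ands.append(mapped + ' WITH ' + with_split[1])
            else:
                ands.append(mapped)
        ors.append(' AND '.join(ands))
    return ' OR '.join(ors)
```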
> -
> -
> -
> -    def map_lic (self, pkgname, licenses, errors):
> +    def map_lic(self, pkgname, licenses, errors):
>          if pkgname in self.perpackage_override:
>              if len(self.perpackage_override[pkgname]) > 0:
>                  return self.perpackage_override[pkgname]
> @@ -87,65 +86,66 @@ class license_dep5_to_spdx (dict):
>          retval = []
>          for l in licenses:
>              if l is not None:
> -                retval.append (self.map_license_string (pkgname, l, errors))
> +                retval.append(self.map_license_string(pkgname, l, errors))
>              else:
> -                retval.append ('Empty license')
> +                retval.append('Empty license')
>  
>          return retval
>  
>  
> -
> -
> -
> -
> -
> -def scan_nomos ( license_text ):
> +def scan_nomos(license_text):
>      with NamedTemporaryFile() as f:
> -        f.write (license_text.encode('utf-8'))
> -        nomos_out = system_out ('/usr/share/fossology/nomos/agent/nomos "%s"' % f.name)
> +        f.write(license_text.encode('utf-8'))
> +        nomos_out = system_out(
> +            '/usr/share/fossology/nomos/agent/nomos "%s"' %
> +            f.name)
>  
>      expected_start = 'File %s contains license(s) ' % os.path.basename(f.name)
> -    if not nomos_out.startswith (expected_start):
> +    if not nomos_out.startswith(expected_start):
>          raise Exception("nomos output error")
>  
> -    licenses = nomos_out [len(expected_start):].strip()
> +    licenses = nomos_out[len(expected_start):].strip()
>  
>      return licenses.split(',')
>  
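For reference, the prefix-stripping in scan_nomos() above can be sketched on its own; names here are illustrative, not part of the patch:

```python
import os

def parse_nomos_output(nomos_out, tmp_path):
    # nomos prints: "File <basename> contains license(s) A,B,..."
    # Strip the fixed prefix and split the remainder on commas,
    # mirroring the parsing done in scan_nomos().
    expected_start = 'File %s contains license(s) ' % os.path.basename(tmp_path)
    if not nomos_out.startswith(expected_start):
        raise ValueError('nomos output error')
    return nomos_out[len(expected_start):].strip().split(',')
```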
>  
> -def license_string (pkg):
> -    if not pkg.has ('spdx_licenses'):
> +def license_string(pkg):
> +    if not pkg.has('spdx_licenses'):
>          return 'NOASSERTION'
>  
>      l_list = []
>      for ll in pkg.node('spdx_licenses'):
> -        if ll.et.text.find (' OR ') != -1:
> -            l_list.append ('('+ll.et.text+')')
> +        if ll.et.text.find(' OR ') != -1:
> +            l_list.append('(' + ll.et.text + ')')
>          else:
> -            l_list.append (ll.et.text)
> -
> -    return ' AND '.join (l_list)
> -
> -
> -
> -
> -
> -def run_command( argv ):
> -    oparser = OptionParser(usage="usage: %prog parselicence [options] <licencefile>")
> -    oparser.add_option( "--output", dest="output",
> -                        help="outputfilename" )
> -    oparser.add_option( "--mapping", dest="mapping",
> -                        help="mapping filename" )
> -    oparser.add_option( "--use-nomos", action="store_true",
> -                        dest="use_nomos", default=False,
> -                        help="Use the external nomos tool on the copyright text, and record the ouput in out xml" )
> -    oparser.add_option( "--errors-only", action="store_true",
> -                        dest="only_errors", default=False,
> -                        help="Only Output Packages with errors, needing a fix in the mapping file" )
> -    oparser.add_option( "--tvout", dest="tagvalue",
> -                        help="tag value output filename" )
> -
> -    (opt,args) = oparser.parse_args(argv)
> +            l_list.append(ll.et.text)
> +
> +    return ' AND '.join(l_list)
> +
> +
> +def run_command(argv):
> +    oparser = OptionParser(
> +        usage="usage: %prog parselicence [options] <licencefile>")
> +    oparser.add_option("--output", dest="output",
> +                       help="outputfilename")
> +    oparser.add_option("--mapping", dest="mapping",
> +                       help="mapping filename")
> +    oparser.add_option(
> +        "--use-nomos",
> +        action="store_true",
> +        dest="use_nomos",
> +        default=False,
> +        help="Use the external nomos tool on the copyright text, and record the ouput in out xml")
> +    oparser.add_option(
> +        "--errors-only",
> +        action="store_true",
> +        dest="only_errors",
> +        default=False,
> +        help="Only Output Packages with errors, needing a fix in the mapping file")
> +    oparser.add_option("--tvout", dest="tagvalue",
> +                       help="tag value output filename")
> +
> +    (opt, args) = oparser.parse_args(argv)
>  
>      if len(args) != 1:
>          print("wrong number of arguments")
> @@ -164,7 +164,7 @@ def run_command( argv ):
>          oparser.print_help()
>          sys.exit(20)
>  
> -    mapping = license_dep5_to_spdx (opt.mapping)
> +    mapping = license_dep5_to_spdx(opt.mapping)
>  
>      unknown_licenses = []
>      # Dont use direct iterator, because we might want to delete
> @@ -179,100 +179,103 @@ def run_command( argv ):
>  
>          if pkg.has('heuristics'):
>              hr += 1
> -            if not mapping.have_override (pkg_name):
> -                errors.append ('no override for heuristics based package "%s"' % pkg_name)
> +            if not mapping.have_override(pkg_name):
> +                errors.append(
> +                    'no override for heuristics based package "%s"' %
> +                    pkg_name)
>  
> -        if mapping.have_override (pkg_name):
> -            pkg.append ('have_override')
> +        if mapping.have_override(pkg_name):
> +            pkg.append('have_override')
>  
>          if pkg.has('debian_licenses'):
> -            sp = pkg.ensure_child ('spdx_licenses')
> +            sp = pkg.ensure_child('spdx_licenses')
>              sp.clear()
>              sp.et.text = '\n'
>              lics = []
>              for l in pkg.node('debian_licenses'):
>                  if l.et.text in lics:
>                      continue
> -                lics.append (l.et.text)
> +                lics.append(l.et.text)
>  
> -            mapped_lics = mapping.map_lic (pkg_name, lics, errors)
> +            mapped_lics = mapping.map_lic(pkg_name, lics, errors)
>  
>              for l in mapped_lics:
>                  ll = sp.append('license')
> -                ll.et.text=l
> +                ll.et.text = l
>  
>              if len(mapped_lics) == 0:
> -                errors.append ('empty mapped licenses in package "%s"' % pkg_name)
> +                errors.append(
> +                    'empty mapped licenses in package "%s"' %
> +                    pkg_name)
>          else:
> -            if not mapping.have_override (pkg_name):
> -                errors.append ('no debian_licenses and no override in package "%s"' % pkg_name)
> +            if not mapping.have_override(pkg_name):
> +                errors.append(
> +                    'no debian_licenses and no override in package "%s"' %
> +                    pkg_name)
>              else:
> -                sp = pkg.ensure_child ('spdx_licenses')
> +                sp = pkg.ensure_child('spdx_licenses')
>                  sp.clear()
>                  sp.et.text = '\n'
> -                for l in mapping.get_override (pkg_name):
> +                for l in mapping.get_override(pkg_name):
>                      ll = sp.append('license')
> -                    ll.et.text=l
> +                    ll.et.text = l
>  
>          if opt.use_nomos:
> -            nomos_l = scan_nomos( pkg.text('text') )
> +            nomos_l = scan_nomos(pkg.text('text'))
>              if nomos_l[0] != 'No_license_found':
> -                nomos_node = pkg.append ('nomos_licenses')
> -                nomos_node.et.text='\n'
> +                nomos_node = pkg.append('nomos_licenses')
> +                nomos_node.et.text = '\n'
>                  for l in nomos_l:
> -                    ll = nomos_node.append ('license')
> +                    ll = nomos_node.append('license')
>                      ll.et.text = l
>  
> -        if len (errors) > 0:
> +        if len(errors) > 0:
>              for e in errors:
> -                ee = pkg.append ('error')
> -                ee.et.text=e
> +                ee = pkg.append('error')
> +                ee.et.text = e
>              err_pkg += 1
>          elif opt.only_errors:
>              # No Errors, and only_errors is active
>              # Remove package node
> -            tree.root.remove_child (pkg)
> +            tree.root.remove_child(pkg)
>  
> -    if not opt.tagvalue is None:
> -        with io.open (opt.tagvalue, "wt", encoding='utf-8') as fp:
> -            fp.write (u'SPDXVersion: SPDX-1.2\n')
> -            fp.write (u'DataLicense: CC0-1.0\n')
> +    if opt.tagvalue is not None:
> +        with io.open(opt.tagvalue, "wt", encoding='utf-8') as fp:
> +            fp.write(u'SPDXVersion: SPDX-1.2\n')
> +            fp.write(u'DataLicense: CC0-1.0\n')
>              #fp.write (u'SPDXID: SPDXRef-DOCUMENT\n')
>              #fp.write (u'DocumentName: %s\n' % opt.tagvalue)
>              #fp.write (u'DocumentNamespace: %s\n' % some_uri_with_uuid )
> -            fp.write (u'\n')
> -            fp.write (u'## Creation Information\n')
> -            fp.write (u'Creator: Tool: elbe-%s\n' % elbe_version )
> -            fp.write (u'Created: %s\n' % datetime.now().isoformat() )
> -            fp.write (u'\n' )
> -            fp.write (u'\n' )
> -            fp.write (u'## Package Information\n' )
> -            fp.write (u'\n' )
> +            fp.write(u'\n')
> +            fp.write(u'## Creation Information\n')
> +            fp.write(u'Creator: Tool: elbe-%s\n' % elbe_version)
> +            fp.write(u'Created: %s\n' % datetime.now().isoformat())
> +            fp.write(u'\n')
> +            fp.write(u'\n')
> +            fp.write(u'## Package Information\n')
> +            fp.write(u'\n')
>  
>              for pkg in tree.root:
> -                fp.write (u'## Package %s\n' % pkg.et.attrib['name'] )
> -                fp.write (u'PackageName: %s\n' % pkg.et.attrib['name'])
> -                fp.write (u'PackageDownloadLocation: NOASSERTION\n')
> +                fp.write(u'## Package %s\n' % pkg.et.attrib['name'])
> +                fp.write(u'PackageName: %s\n' % pkg.et.attrib['name'])
> +                fp.write(u'PackageDownloadLocation: NOASSERTION\n')
>                  #fp.write (u'PackageVerificationCode: %s\n')
> -                if pkg.has ('have_override'):
> -                    fp.write (u'PackageLicenseConcluded: %s\n' % license_string (pkg))
> -                    fp.write (u'PackageLicenseDeclared: NOASSERTION\n')
> +                if pkg.has('have_override'):
> +                    fp.write(
> +                        u'PackageLicenseConcluded: %s\n' %
> +                        license_string(pkg))
> +                    fp.write(u'PackageLicenseDeclared: NOASSERTION\n')
>  
>                  else:
> -                    fp.write (u'PackageLicenseConcluded: NOASSERTION\n')
> -                    fp.write (u'PackageLicenseDeclared: %s\n' % license_string (pkg))
> -                fp.write (u'PackageLicenseInfoFromFiles: NOASSERTION\n')
> -                fp.write (u'\n' )
> -
> -
> -
> -
> -    if not opt.output is None:
> -        tree.write (opt.output)
> +                    fp.write(u'PackageLicenseConcluded: NOASSERTION\n')
> +                    fp.write(
> +                        u'PackageLicenseDeclared: %s\n' %
> +                        license_string(pkg))
> +                fp.write(u'PackageLicenseInfoFromFiles: NOASSERTION\n')
> +                fp.write(u'\n')
>  
> +    if opt.output is not None:
> +        tree.write(opt.output)
>  
>      print("statistics:")
>      print("num:%d mr:%d hr:%d err_pkg:%d" % (num_pkg, mr, hr, err_pkg))
> -
> -
> -
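The --tvout branch above writes an SPDX 1.2 tag:value document by hand. A minimal sketch of that output shape (the `packages` list of `(name, concluded_license)` tuples is a stand-in for the XML tree the command actually iterates):

```python
import io

def write_spdx_packages(path, packages):
    # Emit a minimal SPDX 1.2 tag:value document like the
    # --tvout branch of parselicence; one stanza per package.
    with io.open(path, 'wt', encoding='utf-8') as fp:
        fp.write(u'SPDXVersion: SPDX-1.2\n')
        fp.write(u'DataLicense: CC0-1.0\n\n')
        for name, lic in packages:
            fp.write(u'PackageName: %s\n' % name)
            fp.write(u'PackageDownloadLocation: NOASSERTION\n')
            fp.write(u'PackageLicenseConcluded: %s\n\n' % lic)
```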
> diff --git a/elbepack/commands/pbuilder.py b/elbepack/commands/pbuilder.py
> index 13e8c072..23e79a18 100644
> --- a/elbepack/commands/pbuilder.py
> +++ b/elbepack/commands/pbuilder.py
> @@ -10,45 +10,49 @@ import sys
>  from optparse import OptionParser
>  from elbepack.pbuilderaction import PBuilderAction, PBuilderError
>  
> -def run_command (argv):
> -    oparser = OptionParser (usage="usage: elbe pbuilder [options] <command>")
>  
> -    oparser.add_option ("--project", dest="project", default=None,
> -                        help="project directory on the initvm")
> +def run_command(argv):
> +    oparser = OptionParser(usage="usage: elbe pbuilder [options] <command>")
>  
> -    oparser.add_option ("--xmlfile", dest="xmlfile", default=None,
> -                        help="xmlfile to use")
> +    oparser.add_option("--project", dest="project", default=None,
> +                       help="project directory on the initvm")
>  
> -    oparser.add_option ("--writeproject", dest="writeproject", default=None,
> -                        help="write project name to file")
> +    oparser.add_option("--xmlfile", dest="xmlfile", default=None,
> +                       help="xmlfile to use")
>  
> -    oparser.add_option( "--skip-download", action="store_true",
> -                        dest="skip_download", default=False,
> -                        help="Skip downloading generated Files" )
> +    oparser.add_option("--writeproject", dest="writeproject", default=None,
> +                       help="write project name to file")
>  
> -    oparser.add_option ("--origfile", dest="origfile", default=[], action="append",
> -                        help="upload orig file")
> +    oparser.add_option("--skip-download", action="store_true",
> +                       dest="skip_download", default=False,
> +                       help="Skip downloading generated Files")
>  
> -    oparser.add_option ("--output", dest="outdir", default=None,
> -                        help="directory where to save downloaded Files")
> +    oparser.add_option(
> +        "--origfile",
> +        dest="origfile",
> +        default=[],
> +        action="append",
> +        help="upload orig file")
>  
> +    oparser.add_option("--output", dest="outdir", default=None,
> +                       help="directory where to save downloaded Files")
>  
> -    (opt,args) = oparser.parse_args (argv)
> +    (opt, args) = oparser.parse_args(argv)
>  
>      if len(args) < 1:
>          print("elbe pbuilder - no subcommand given", file=sys.stderr)
> -        PBuilderAction.print_actions ()
> +        PBuilderAction.print_actions()
>          return
>  
>      try:
> -        action = PBuilderAction (args[0])
> +        action = PBuilderAction(args[0])
>      except KeyError:
>          print("elbe pbuilder - unknown subcommand", file=sys.stderr)
> -        PBuilderAction.print_actions ()
> +        PBuilderAction.print_actions()
>          sys.exit(20)
>  
>      try:
> -        action.execute (opt, args[1:])
> +        action.execute(opt, args[1:])
>      except PBuilderError as e:
>          print("PBuilder Exception", file=sys.stderr)
>          print(e, file=sys.stderr)
> diff --git a/elbepack/commands/pin_versions.py b/elbepack/commands/pin_versions.py
> index b64a015b..9d1618ba 100644
> --- a/elbepack/commands/pin_versions.py
> +++ b/elbepack/commands/pin_versions.py
> @@ -12,14 +12,15 @@ from optparse import OptionParser
>  from elbepack.validate import validate_xml
>  
>  
> -def run_command( argv ):
> +def run_command(argv):
>  
> -    oparser = OptionParser( usage="usage: %prog pin_versions [options] <xmlfile>")
> -    oparser.add_option( "--skip-validation", action="store_true",
> -                        dest="skip_validation", default=False,
> -                        help="Skip xml schema validation" )
> +    oparser = OptionParser(
> +        usage="usage: %prog pin_versions [options] <xmlfile>")
> +    oparser.add_option("--skip-validation", action="store_true",
> +                       dest="skip_validation", default=False,
> +                       help="Skip xml schema validation")
>  
> -    (opt,args) = oparser.parse_args(argv)
> +    (opt, args) = oparser.parse_args(argv)
>  
>      if len(args) != 1:
>          print("Wrong number of arguments")
> @@ -27,20 +28,20 @@ def run_command( argv ):
>          sys.exit(20)
>  
>      if not opt.skip_validation:
> -        validation = validate_xml (args[0])
> -        if len (validation) != 0:
> +        validation = validate_xml(args[0])
> +        if len(validation) != 0:
>              print("xml validation failed. Bailing out")
>              for i in validation:
>                  print(i)
>              sys.exit(20)
>  
>      try:
> -        xml = etree( args[0] )
> -    except:
> +        xml = etree(args[0])
> +    except BaseException:
>          print("Error reading xml file!")
>          sys.exit(20)
>  
> -    if not xml.has ("fullpkgs"):
> +    if not xml.has("fullpkgs"):
>          print("xml file does not have fullpkgs node")
>          sys.exit(20)
>  
> @@ -51,16 +52,15 @@ def run_command( argv ):
>  
>      for p in fullp:
>          pname = p.et.text
> -        pver  = p.et.get('version')
> +        pver = p.et.get('version')
>  
>          pak = plist.append('pkg')
> -        pak.set_text( pname )
> +        pak.set_text(pname)
>          pak.et.tail = '\n'
>          pak.et.set('version', pver)
>  
>      try:
> -        xml.write( args[0] )
> -    except:
> +        xml.write(args[0])
> +    except BaseException:
>          print("Unable to write new xml file")
>          sys.exit(20)
> -
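Note on the `except BaseException` hunks above: autopep8 rewrites a bare `except:` to `except BaseException:`, which preserves behavior but still swallows KeyboardInterrupt and SystemExit. A narrower sketch of what a follow-up cleanup could look like (`parse` is a hypothetical stand-in for `etree()`; the exception tuple is an assumption about the likely failure modes):

```python
import sys

def load_xml_or_exit(parse, fname):
    # Catch only the failures expected from reading/parsing a
    # file, instead of BaseException, so Ctrl-C and sys.exit()
    # still propagate.
    try:
        return parse(fname)
    except (IOError, OSError, ValueError) as e:
        print("Error reading xml file: %s" % e)
        sys.exit(20)
```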
> diff --git a/elbepack/commands/pkgdiff.py b/elbepack/commands/pkgdiff.py
> index e335e716..c1bd2390 100644
> --- a/elbepack/commands/pkgdiff.py
> +++ b/elbepack/commands/pkgdiff.py
> @@ -15,12 +15,18 @@ from optparse import OptionParser
>  
>  from elbepack.elbexml import ElbeXML, ValidationMode
>  
> -def run_command( argv ):
>  
> -    oparser = OptionParser(usage="usage: %prog pkgdiff [options] <rfs1> <rfs2>")
> -    oparser.add_option( "--noauto", action="store_true", dest="noauto", default=False,
> -                        help="Dont compare automatically installed Packages" )
> -    (opt,args) = oparser.parse_args(argv)
> +def run_command(argv):
> +
> +    oparser = OptionParser(
> +        usage="usage: %prog pkgdiff [options] <rfs1> <rfs2>")
> +    oparser.add_option(
> +        "--noauto",
> +        action="store_true",
> +        dest="noauto",
> +        default=False,
> +        help="Dont compare automatically installed Packages")
> +    (opt, args) = oparser.parse_args(argv)
>  
>      if len(args) != 2:
>          print("Wrong number of arguments")
> @@ -31,8 +37,11 @@ def run_command( argv ):
>      fix_rfs = args[1]
>  
>      x = os.path.join(gen_rfs, 'etc/elbe_base.xml')
> -    xml = ElbeXML (x, skip_validate=True, url_validation=ValidationMode.NO_CHECK)
> -    arch = xml.text ('project/arch', key='arch')
> +    xml = ElbeXML(
> +        x,
> +        skip_validate=True,
> +        url_validation=ValidationMode.NO_CHECK)
> +    arch = xml.text('project/arch', key='arch')
>  
>      apt_pkg.init_config()
>      apt_pkg.config.set('RootDir', gen_rfs)
> @@ -67,15 +76,15 @@ def run_command( argv ):
>                  fix_pkgs[p.name] = p.current_ver
>  
>      for p in fix_pkgs:
> -        if not p in gen_pkgs:
> +        if p not in gen_pkgs:
>              print("+<pkg>%s</pkg>" % p)
>  
>      for p in gen_pkgs.keys():
> -        if not p in fix_pkgs.keys():
> +        if p not in fix_pkgs.keys():
>              print("-<pkg>%s</pkg>" % p)
>  
>      for p in fix_pkgs.keys():
>          if p in gen_pkgs.keys() and fix_pkgs[p] != gen_pkgs[p]:
> -            print("%s: Version mismatch %s != %s" % (p, fix_pkgs[p], gen_pkgs[p]))
> -
> -
> +            print(
> +                "%s: Version mismatch %s != %s" %
> +                (p, fix_pkgs[p], gen_pkgs[p]))
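The three loops at the end of pkgdiff compare two name → version dicts. The same comparison can be sketched as one helper (names hypothetical):

```python
def diff_packages(gen_pkgs, fix_pkgs):
    # Return (added, removed, changed) between two
    # name -> version dicts, mirroring the three print
    # loops in pkgdiff's run_command().
    added = sorted(p for p in fix_pkgs if p not in gen_pkgs)
    removed = sorted(p for p in gen_pkgs if p not in fix_pkgs)
    changed = sorted(p for p in fix_pkgs
                     if p in gen_pkgs and fix_pkgs[p] != gen_pkgs[p])
    return added, removed, changed
```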
> diff --git a/elbepack/commands/preprocess.py b/elbepack/commands/preprocess.py
> index bf2081da..b39f6317 100644
> --- a/elbepack/commands/preprocess.py
> +++ b/elbepack/commands/preprocess.py
> @@ -10,12 +10,13 @@ import os
>  from optparse import OptionParser
>  from elbepack.xmlpreprocess import XMLPreprocessError, xmlpreprocess
>  
> -def run_command( argv ):
> -    oparser = OptionParser( usage="usage: %prog preprocess [options] <xmlfile>")
> -    oparser.add_option ("-o", "--output", dest="output",
> -                        default="preprocess.xml",
> -                        help="preprocessed output file", metavar="<xmlfile>")
> -    (opt,args) = oparser.parse_args(argv)
> +
> +def run_command(argv):
> +    oparser = OptionParser(usage="usage: %prog preprocess [options] <xmlfile>")
> +    oparser.add_option("-o", "--output", dest="output",
> +                       default="preprocess.xml",
> +                       help="preprocessed output file", metavar="<xmlfile>")
> +    (opt, args) = oparser.parse_args(argv)
>  
>      if len(args) != 1:
>          print("Wrong number of arguments", file=sys.stderr)
> diff --git a/elbepack/commands/prjrepo.py b/elbepack/commands/prjrepo.py
> index f97ec057..110f0eeb 100644
> --- a/elbepack/commands/prjrepo.py
> +++ b/elbepack/commands/prjrepo.py
> @@ -17,6 +17,7 @@ from elbepack.soapclient import RepoAction, ElbeSoapClient
>  from elbepack.version import elbe_version
>  from elbepack.config import cfg
>  
> +
>  def run_command(argv):
>      oparser = OptionParser(usage="usage: elbe prjrepo [options] <command>")
>  
> @@ -71,7 +72,7 @@ def run_command(argv):
>                  opt.retries))
>      except socket.error as e:
>          print("Failed to connect to Soap server %s:%s\n" %
> -               (opt.host, opt.port), file=sys.stderr)
> +              (opt.host, opt.port), file=sys.stderr)
>          print("", file=sys.stderr)
>          print(
>              "Check, wether the Soap Server is running inside the initvm",
> @@ -80,14 +81,14 @@ def run_command(argv):
>          sys.exit(10)
>      except URLError as e:
>          print("Failed to connect to Soap server %s:%s\n" %
> -               (opt.host, opt.port), file=sys.stderr)
> +              (opt.host, opt.port), file=sys.stderr)
>          print("", file=sys.stderr)
>          print("Check, wether the initvm is actually running.", file=sys.stderr)
> -        print( "try `elbe initvm start`", file=sys.stderr)
> +        print("try `elbe initvm start`", file=sys.stderr)
>          sys.exit(10)
>      except BadStatusLine as e:
>          print("Failed to connect to Soap server %s:%s\n" %
> -               (opt.host, opt.port), file=sys.stderr)
> +              (opt.host, opt.port), file=sys.stderr)
>          print("", file=sys.stderr)
>          print("Check, wether the initvm is actually running.", file=sys.stderr)
>          print(
> @@ -97,19 +98,19 @@ def run_command(argv):
>  
>      # Check Elbe version
>      try:
> -        v_server = control.service.get_version ()
> +        v_server = control.service.get_version()
>          if v_server != elbe_version:
>              print("elbe v%s is used in initvm, this is not compatible with \
>  elbe v%s that is used on this machine. Please install same \
>  versions of elbe in initvm and on your machine." % (v_server, elbe_version), file=sys.stderr)
>              if not (opt.ignore_version):
> -                sys.exit (20)
> +                sys.exit(20)
>      except AttributeError:
>          print("the elbe installation inside the initvm doesn't provide a \
>  get_version interface. Please create a new initvm or upgrade \
>  elbe inside the existing initvm.", file=sys.stderr)
>          if not (opt.ignore_version):
> -            sys.exit (20)
> +            sys.exit(20)
>  
>      # Check whether subcommand exists
>      try:
> diff --git a/elbepack/commands/remove_sign.py b/elbepack/commands/remove_sign.py
> index ec66606a..c7b0b15a 100644
> --- a/elbepack/commands/remove_sign.py
> +++ b/elbepack/commands/remove_sign.py
> @@ -7,13 +7,14 @@ from __future__ import print_function
>  
>  from elbepack.gpg import unsign_file
>  
> -def run_command( argv ):
> +
> +def run_command(argv):
>      if(len(argv) != 1):
>          print("Wrong number of arguments.")
>          print("Please pass the name of the file to unsign.")
>          return
>  
> -    fname = unsign_file( argv[0] )
> +    fname = unsign_file(argv[0])
>      if fname:
>          print("unsigned file: %s" % fname)
>      else:
> diff --git a/elbepack/commands/setsel.py b/elbepack/commands/setsel.py
> index 52e037a3..8d142fa3 100644
> --- a/elbepack/commands/setsel.py
> +++ b/elbepack/commands/setsel.py
> @@ -10,8 +10,9 @@ import sys
>  from elbepack.treeutils import etree
>  from optparse import OptionParser
>  
> -def parse_selections( fname ):
> -    fp = file(fname,"r")
> +
> +def parse_selections(fname):
> +    fp = file(fname, "r")
>  
>      sels = []
>  
> @@ -23,7 +24,6 @@ def parse_selections( fname ):
>  
>          sp = l.split()
>  
> -
>          print("%s %s" % (sp[0], sp[1]))
>  
>          if sp[1] == 'install':
> @@ -33,29 +33,26 @@ def parse_selections( fname ):
>      return sels
>  
>  
> -def run_command( argv ):
> +def run_command(argv):
>  
>      oparser = OptionParser(usage="usage: %prog setsel <xmlfile> <pkglist.txt>")
> -    (opt,args) = oparser.parse_args(argv)
> +    (opt, args) = oparser.parse_args(argv)
>  
>      if len(args) != 2:
>          print("Wrong number of arguments")
>          oparser.print_help()
>          sys.exit(20)
>  
> -    xml = etree( args[0] )
> +    xml = etree(args[0])
>  
>      pkg_list = xml.node("/pkg-list")
>  
>      pkg_list.clear()
>  
> -    sels = parse_selections( args[1] )
> +    sels = parse_selections(args[1])
>  
>      for s in sels:
> -        new = pkg_list.append( 'pkg' )
> -        new.set_text( s )
> -
> -
> -    xml.write( args[0] )
> -
> +        new = pkg_list.append('pkg')
> +        new.set_text(s)
>  
> +    xml.write(args[0])
> diff --git a/elbepack/commands/show.py b/elbepack/commands/show.py
> index fd1b8a8d..76448c16 100644
> --- a/elbepack/commands/show.py
> +++ b/elbepack/commands/show.py
> @@ -12,18 +12,19 @@ from elbepack.validate import validate_xml
>  
>  from optparse import OptionParser
>  
> -def run_command( argv ):
> -    oparser = OptionParser( usage="usage: %prog show [options] <filename>" )
>  
> -    oparser.add_option( "--verbose", action="store_true", dest="verbose",
> -                        default=False,
> -                        help="show detailed project informations" )
> +def run_command(argv):
> +    oparser = OptionParser(usage="usage: %prog show [options] <filename>")
>  
> -    oparser.add_option( "--skip-validation", action="store_true",
> -                        dest="skip_validation", default=False,
> -                        help="Skip xml schema validation" )
> +    oparser.add_option("--verbose", action="store_true", dest="verbose",
> +                       default=False,
> +                       help="show detailed project informations")
>  
> -    (opt,args) = oparser.parse_args(argv)
> +    oparser.add_option("--skip-validation", action="store_true",
> +                       dest="skip_validation", default=False,
> +                       help="Skip xml schema validation")
> +
> +    (opt, args) = oparser.parse_args(argv)
>  
>      if len(args) == 0:
>          print("No Filename specified")
> @@ -37,15 +38,15 @@ def run_command( argv ):
>  
>      try:
>          if not opt.skip_validation:
> -            validation = validate_xml (args[0])
> -            if len (validation) != 0:
> +            validation = validate_xml(args[0])
> +            if len(validation) != 0:
>                  print("xml validation failed. Bailing out")
>                  for i in validation:
>                      print(i)
>                  sys.exit(20)
>  
> -        xml = etree( args[0] )
> -    except:
> +        xml = etree(args[0])
> +    except BaseException:
>          print("Unable to open xml File. Bailing out")
>          sys.exit(20)
>  
> @@ -53,7 +54,7 @@ def run_command( argv ):
>          print("no project description available")
>          sys.exit(20)
>  
> -    print("== %s ==" %(args[0]))
> +    print("== %s ==" % (args[0]))
>      print("Debian suite: %s" % (xml.text("./project/suite")))
>      for s in xml.text("./project/description").splitlines():
>          print("%s" % s.strip())
> diff --git a/elbepack/commands/sign.py b/elbepack/commands/sign.py
> index c54f8085..410d8f37 100644
> --- a/elbepack/commands/sign.py
> +++ b/elbepack/commands/sign.py
> @@ -7,10 +7,11 @@ from __future__ import print_function
>  
>  from elbepack.gpg import sign_file
>  
> -def run_command( argv ):
> +
> +def run_command(argv):
>      if(len(argv) != 2):
>          print("Wrong number of arguments.")
>          print("Please pass the name of the file to sign and a valid gnupg fingerprint.")
>          return
>      else:
> -        sign_file( argv[0], argv[1])
> +        sign_file(argv[0], argv[1])
> diff --git a/elbepack/commands/toolchainextract.py b/elbepack/commands/toolchainextract.py
> index 59cea588..0de9110f 100644
> --- a/elbepack/commands/toolchainextract.py
> +++ b/elbepack/commands/toolchainextract.py
> @@ -14,17 +14,18 @@ from elbepack.debpkg import build_binary_deb
>  from elbepack.toolchain import get_toolchain
>  from elbepack.asciidoclog import StdoutLog
>  
> -def run_command( argv ):
> +
> +def run_command(argv):
>      oparser = OptionParser(usage="usage: %prog toolchainextract [options]")
> -    oparser.add_option( "-p", "--path", dest="path",
> -                        help="path to toolchain" )
> -    oparser.add_option( "-o", "--output", dest="output",
> -                        help="output repository path" )
> -    oparser.add_option( "-c", "--codename", dest="codename",
> -                        help="distro codename for repository" )
> -    oparser.add_option( "-b", "--buildtype", dest="buildtype",
> -                        help="Override the buildtype" )
> -    (opt,args) = oparser.parse_args(argv)
> +    oparser.add_option("-p", "--path", dest="path",
> +                       help="path to toolchain")
> +    oparser.add_option("-o", "--output", dest="output",
> +                       help="output repository path")
> +    oparser.add_option("-c", "--codename", dest="codename",
> +                       help="distro codename for repository")
> +    oparser.add_option("-b", "--buildtype", dest="buildtype",
> +                       help="Override the buildtype")
> +    (opt, args) = oparser.parse_args(argv)
>  
>      if not opt.path:
>          oparser.print_help()
> @@ -39,26 +40,41 @@ def run_command( argv ):
>          oparser.print_help()
>          return 0
>  
> -    defaults = ElbeDefaults( opt.buildtype )
> +    defaults = ElbeDefaults(opt.buildtype)
>  
> -    toolchain = get_toolchain( defaults["toolchaintype"], opt.path, defaults["arch"] )
> +    toolchain = get_toolchain(
> +        defaults["toolchaintype"],
> +        opt.path,
> +        defaults["arch"])
>  
> -    tmpdir   = mkdtemp()
> +    tmpdir = mkdtemp()
>  
>      for lib in toolchain.pkg_libs:
> -        files = toolchain.get_files_for_pkg( lib )
> +        files = toolchain.get_files_for_pkg(lib)
>  
> -        pkglibpath = os.path.join( "usr/lib", defaults["triplet"] )
> -        fmap = [ (f, pkglibpath) for f in files ]
> +        pkglibpath = os.path.join("usr/lib", defaults["triplet"])
> +        fmap = [(f, pkglibpath) for f in files]
>  
> -        build_binary_deb( lib, defaults["arch"], defaults["toolchainver"], lib + " extracted from toolchain", fmap, toolchain.pkg_deps[lib], tmpdir )
> +        build_binary_deb(
> +            lib,
> +            defaults["arch"],
> +            defaults["toolchainver"],
> +            lib +
> +            " extracted from toolchain",
> +            fmap,
> +            toolchain.pkg_deps[lib],
> +            tmpdir)
>  
>      pkgs = os.listdir(tmpdir)
>  
> -    repo = ToolchainRepo( defaults["arch"], opt.codename, opt.output, StdoutLog() )
> +    repo = ToolchainRepo(
> +        defaults["arch"],
> +        opt.codename,
> +        opt.output,
> +        StdoutLog())
>  
>      for p in pkgs:
> -        repo.includedeb( os.path.join(tmpdir, p) )
> +        repo.includedeb(os.path.join(tmpdir, p))
>  
> -    repo.finalize ()
> -    os.system( 'rm -r "%s"' % tmpdir )
> +    repo.finalize()
> +    os.system('rm -r "%s"' % tmpdir)
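One more observation on toolchainextract: the `os.system('rm -r "%s"' % tmpdir)` cleanup shells out and is skipped if an earlier call raises. A sketch of the same cleanup done in-process (a suggestion, not part of this patch):

```python
import shutil
from tempfile import mkdtemp

def with_tmpdir(work):
    # Run 'work' with a fresh temporary directory and always
    # remove it afterwards; shutil.rmtree replaces the shell
    # 'rm -r' so quoting and errors are handled in Python.
    tmpdir = mkdtemp()
    try:
        return work(tmpdir)
    finally:
        shutil.rmtree(tmpdir)
```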
> diff --git a/elbepack/commands/updated.py b/elbepack/commands/updated.py
> index b5c415b6..bcf22912 100644
> --- a/elbepack/commands/updated.py
> +++ b/elbepack/commands/updated.py
> @@ -23,45 +23,45 @@ except ImportError:
>      usbmonitor_available = False
>  
>  
> -def shutdown (signum, fname, status):
> +def shutdown(signum, fname, status):
>      status.stop = True
>      for mon in status.monitors:
>          mon.stop()
>  
>  
> -def run_command (argv):
> +def run_command(argv):
>  
> -    status = UpdateStatus ()
> +    status = UpdateStatus()
>  
> -    oparser = OptionParser (usage="usage: %prog updated [options] <filename>")
> +    oparser = OptionParser(usage="usage: %prog updated [options] <filename>")
>  
> -    oparser.add_option ("--directory", dest="update_dir",
> -                        help="monitor dir (default is /var/cache/elbe/updates)",
> -                        metavar="FILE" )
> +    oparser.add_option("--directory", dest="update_dir",
> +                       help="monitor dir (default is /var/cache/elbe/updates)",
> +                       metavar="FILE")
>  
> -    oparser.add_option ("--repocache", dest="repo_dir",
> -                        help="monitor dir (default is /var/cache/elbe/repos)",
> -                        metavar="FILE" )
> +    oparser.add_option("--repocache", dest="repo_dir",
> +                       help="monitor dir (default is /var/cache/elbe/repos)",
> +                       metavar="FILE")
>  
> -    oparser.add_option ("--host", dest="host", default="",
> -                        help="listen host")
> +    oparser.add_option("--host", dest="host", default="",
> +                       help="listen host")
>  
> -    oparser.add_option ("--port", dest="port", default=8088,
> -                        help="listen port")
> +    oparser.add_option("--port", dest="port", default=8088,
> +                       help="listen port")
>  
> -    oparser.add_option ("--nosign", action="store_true", dest="nosign",
> -                        default=False,
> -                        help="accept none signed files")
> +    oparser.add_option("--nosign", action="store_true", dest="nosign",
> +                       default=False,
> +                       help="accept none signed files")
>  
> -    oparser.add_option ("--verbose", action="store_true", dest="verbose",
> -                        default=False,
> -                        help="force output to stdout instead of syslog")
> +    oparser.add_option("--verbose", action="store_true", dest="verbose",
> +                       default=False,
> +                       help="force output to stdout instead of syslog")
>  
> -    oparser.add_option ("--usb", action="store_true", dest="use_usb",
> -                        default=False,
> -                        help="monitor USB devices")
> +    oparser.add_option("--usb", action="store_true", dest="use_usb",
> +                       default=False,
> +                       help="monitor USB devices")
>  
> -    (opt,args) = oparser.parse_args(argv)
> +    (opt, args) = oparser.parse_args(argv)
>  
>      status.nosign = opt.nosign
>      status.verbose = opt.verbose
> @@ -76,8 +76,8 @@ def run_command (argv):
>      else:
>          status.repo_dir = opt.repo_dir
>  
> -    if not os.path.isdir (update_dir):
> -        os.makedirs (update_dir)
> +    if not os.path.isdir(update_dir):
> +        os.makedirs(update_dir)
>  
>      status.monitors = []
>  
> @@ -88,11 +88,12 @@ def run_command (argv):
>              um = USBMonitor(status, recursive=False)
>              status.monitors.append(um)
>          else:
> -            status.log("USB Monitor has been requested. "
> -                       "This requires pyudev module which could not be imported.")
> +            status.log(
> +                "USB Monitor has been requested. "
> +                "This requires pyudev module which could not be imported.")
>              sys.exit(1)
>  
> -    signal.signal (signal.SIGTERM, shutdown)
> +    signal.signal(signal.SIGTERM, shutdown)
>  
>      for mon in status.monitors:
>          mon.start()
> @@ -104,13 +105,13 @@ def run_command (argv):
>  
>      wsgi_application = WsgiApplication(application)
>  
> -    status.soapserver = make_server (opt.host, int (opt.port),
> -                                     wsgi_application)
> +    status.soapserver = make_server(opt.host, int(opt.port),
> +                                    wsgi_application)
>  
>      try:
> -        status.soapserver.serve_forever ()
> -    except:
> -        shutdown (1, "now", status)
> +        status.soapserver.serve_forever()
> +    except BaseException:
> +        shutdown(1, "now", status)
>  
>      for mon in status.monitors:
>          mon.join()
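One remark on the hunk above: autopep8 rewrites the bare except: as except BaseException:, which is semantically identical and still catches KeyboardInterrupt/SystemExit. Here that may well be intended, since the handler calls shutdown(); where it is not, the narrower idiom would look like this (a sketch with placeholder names, not the patch's code):

```python
def run_guarded(run, on_error):
    """Run `run()`, routing ordinary errors to `on_error` while letting
    KeyboardInterrupt/SystemExit propagate (unlike `except BaseException`)."""
    try:
        run()
    except Exception as exc:  # narrower than BaseException
        on_error(exc)

caught = []

def failing():
    raise ValueError("boom")

run_guarded(failing, caught.append)  # ValueError is handled, not re-raised
```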
> diff --git a/elbepack/commands/validate.py b/elbepack/commands/validate.py
> index cee93f5e..dea91378 100644
> --- a/elbepack/commands/validate.py
> +++ b/elbepack/commands/validate.py
> @@ -11,13 +11,14 @@ from optparse import OptionParser
>  from elbepack.validate import validate_xml
>  from elbepack.elbexml import ElbeXML, ValidationMode, ValidationError
>  
> -def run_command( argv ):
> -    oparser = OptionParser( usage="usage: %prog validate <xmlfile>")
> -    oparser.add_option ("--validate-urls", dest="validate_urls",
> -                        help="try to access specified repositories",
> -                        default=False, action="store_true")
>  
> -    (opt,args) = oparser.parse_args(argv)
> +def run_command(argv):
> +    oparser = OptionParser(usage="usage: %prog validate <xmlfile>")
> +    oparser.add_option("--validate-urls", dest="validate_urls",
> +                       help="try to access specified repositories",
> +                       default=False, action="store_true")
> +
> +    (opt, args) = oparser.parse_args(argv)
>  
>      if len(args) < 1:
>          oparser.print_help()
> @@ -28,8 +29,8 @@ def run_command( argv ):
>          oparser.print_help()
>          sys.exit(20)
>  
> -    validation = validate_xml (args[0])
> -    if len (validation):
> +    validation = validate_xml(args[0])
> +    if len(validation):
>          print("validation failed")
>          for i in validation:
>              print(i)
> @@ -42,4 +43,4 @@ def run_command( argv ):
>              print(e)
>              sys.exit(20)
>  
> -    sys.exit (0)
> +    sys.exit(0)
> diff --git a/elbepack/commands/xsdtoasciidoc.py b/elbepack/commands/xsdtoasciidoc.py
> index 6771e33c..0bcde215 100644
> --- a/elbepack/commands/xsdtoasciidoc.py
> +++ b/elbepack/commands/xsdtoasciidoc.py
> @@ -13,28 +13,29 @@ from elbepack.templates import write_template
>  
>  from optparse import OptionParser
>  
> -def run_command( argv ):
> -    oparser = OptionParser(usage="usage: %prog xsdtoasciidoc [options] <xsdfile>")
>  
> -    oparser.add_option( "--output", dest="out",
> -                        help="specify output filename",
> -                        metavar="FILE" )
> +def run_command(argv):
> +    oparser = OptionParser(
> +        usage="usage: %prog xsdtoasciidoc [options] <xsdfile>")
>  
> -    (opt,args) = oparser.parse_args(argv)
> +    oparser.add_option("--output", dest="out",
> +                       help="specify output filename",
> +                       metavar="FILE")
> +
> +    (opt, args) = oparser.parse_args(argv)
>  
>      if len(args) != 1:
>          print("Wrong number of arguments")
>          oparser.print_help()
>          sys.exit(20)
>  
> -    xml = etree( args[0] )
> +    xml = etree(args[0])
>  
>      if not opt.out:
>          print("--output is mandatory")
>          sys.exit(20)
>  
>      d = {"opt": opt,
> -         "xml": xml }
> -
> -    write_template(opt.out, xsdtoasciidoc_mako_fname, d )
> +         "xml": xml}
>  
> +    write_template(opt.out, xsdtoasciidoc_mako_fname, d)
> diff --git a/elbepack/config.py b/elbepack/config.py
> index 74c21843..30910db0 100644
> --- a/elbepack/config.py
> +++ b/elbepack/config.py
> @@ -5,6 +5,7 @@
>  
>  import os
>  
> +
>  class Config(dict):
>      def __init__(self):
>          self['soaphost'] = "localhost"
> @@ -30,4 +31,3 @@ class Config(dict):
>  
>  
>  cfg = Config()
> -
> diff --git a/elbepack/daemons/soap/__init__.py b/elbepack/daemons/soap/__init__.py
> index 28da22cf..97ec6130 100644
> --- a/elbepack/daemons/soap/__init__.py
> +++ b/elbepack/daemons/soap/__init__.py
> @@ -23,36 +23,38 @@ except ImportError as e:
>  
>  from elbepack.projectmanager import ProjectManager
>  
> +
>  class EsoapApp(Application):
> -    def __init__ (self, *args, **kargs):
> +    def __init__(self, *args, **kargs):
>          Application.__init__(self, *args, **kargs)
> -        self.pm = ProjectManager ("/var/cache/elbe")
> +        self.pm = ProjectManager("/var/cache/elbe")
> +
>  
>  class MySession (SessionMiddleware, SimplePlugin):
> -    def __init__ (self, app, pm, engine):
> +    def __init__(self, app, pm, engine):
>          self.pm = pm
>          SessionMiddleware.__init__(self, app)
>  
> -        SimplePlugin.__init__(self,engine)
> +        SimplePlugin.__init__(self, engine)
>          self.subscribe()
>  
>      def stop(self):
>          self.pm.stop()
>  
> -    def __call__ (self, environ, start_response):
> -        #example to hook into wsgi environment
> -        if environ ['PATH_INFO'].startswith ('/FILE:'):
> -            f = environ ['PATH_INFO'][6:]
> -            #return f
> +    def __call__(self, environ, start_response):
> +        # example to hook into wsgi environment
> +        if environ['PATH_INFO'].startswith('/FILE:'):
> +            f = environ['PATH_INFO'][6:]
> +            # return f
>  
>          return SessionMiddleware.__call__(self, environ, start_response)
>  
>  
>  def get_app(engine):
>  
> -    app = EsoapApp ([ESoap], 'soap',
> -                    in_protocol=Soap11(validator='lxml'),
> -                    out_protocol=Soap11())
> +    app = EsoapApp([ESoap], 'soap',
> +                   in_protocol=Soap11(validator='lxml'),
> +                   out_protocol=Soap11())
>  
> -    wsgi = WsgiApplication (app)
> +    wsgi = WsgiApplication(app)
>      return MySession(wsgi, app.pm, engine)
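The '/FILE:' branch in MySession.__call__ above is still dead code after the reformat (f is computed but never used). If the hook is worth keeping, the general shape is a small WSGI middleware; a sketch under the assumption that a handler for such paths exists (path_prefix_hook and serve_file are made-up names, not elbe API):

```python
def path_prefix_hook(app, prefix, handler):
    """WSGI middleware sketch of the '/FILE:' idea: requests whose
    PATH_INFO starts with `prefix` go to `handler`, the rest to `app`."""
    def middleware(environ, start_response):
        path = environ.get('PATH_INFO', '')
        if path.startswith(prefix):
            return handler(path[len(prefix):], start_response)
        return app(environ, start_response)
    return middleware
```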
> diff --git a/elbepack/daemons/soap/authentication.py b/elbepack/daemons/soap/authentication.py
> index 1b8b8e4e..629e8dc6 100644
> --- a/elbepack/daemons/soap/authentication.py
> +++ b/elbepack/daemons/soap/authentication.py
> @@ -6,6 +6,7 @@
>  from faults import SoapElbeNotLoggedIn, SoapElbeNotAuthorized
>  from functools import wraps
>  
> +
>  def authenticated_uid(func):
>      """ decorator, which Checks, that the current session is logged in,
>          and also passes the current uid to the function
> @@ -15,7 +16,7 @@ def authenticated_uid(func):
>          Example:
>              @soapmethod (String, _returns=Array(SoapFile))
>              @authenticated_uid
> -            def get_files (self, uid, builddir): 
> +            def get_files (self, uid, builddir):
>      """
>      if func.__code__.co_argcount == 2:
>          @wraps(func)
> @@ -26,7 +27,7 @@ def authenticated_uid(func):
>              except KeyError:
>                  raise SoapElbeNotLoggedIn()
>  
> -            return func(self,uid)
> +            return func(self, uid)
>          return wrapped
>      elif func.__code__.co_argcount == 3:
>          @wraps(func)
> @@ -37,7 +38,7 @@ def authenticated_uid(func):
>              except KeyError:
>                  raise SoapElbeNotLoggedIn()
>  
> -            return func(self,uid,arg1)
> +            return func(self, uid, arg1)
>          return wrapped
>      elif func.__code__.co_argcount == 4:
>          @wraps(func)
> @@ -48,7 +49,7 @@ def authenticated_uid(func):
>              except KeyError:
>                  raise SoapElbeNotLoggedIn()
>  
> -            return func(self,uid,arg1,arg2)
> +            return func(self, uid, arg1, arg2)
>          return wrapped
>      elif func.__code__.co_argcount == 5:
>          @wraps(func)
> @@ -59,7 +60,7 @@ def authenticated_uid(func):
>              except KeyError:
>                  raise SoapElbeNotLoggedIn()
>  
> -            return func(self,uid,arg1,arg2,arg3)
> +            return func(self, uid, arg1, arg2, arg3)
>          return wrapped
>      elif func.__code__.co_argcount == 6:
>          @wraps(func)
> @@ -70,7 +71,7 @@ def authenticated_uid(func):
>              except KeyError:
>                  raise SoapElbeNotLoggedIn()
>  
> -            return func(self,uid,arg1,arg2,arg3,arg4)
> +            return func(self, uid, arg1, arg2, arg3, arg4)
>          return wrapped
>      elif func.__code__.co_argcount == 7:
>          @wraps(func)
> @@ -81,12 +82,12 @@ def authenticated_uid(func):
>              except KeyError:
>                  raise SoapElbeNotLoggedIn()
>  
> -            return func(self,uid,arg1,arg2,arg3,arg4,arg5)
> +            return func(self, uid, arg1, arg2, arg3, arg4, arg5)
>          return wrapped
>      else:
> -        raise Exception( "arg count %d not implemented" % func.__code__.co_argcount )
> -
> -
> +        raise Exception(
> +            "arg count %d not implemented" %
> +            func.__code__.co_argcount)
>  
>  
>  def authenticated_admin(func):
> @@ -97,7 +98,7 @@ def authenticated_admin(func):
>          Example:
>              @soapmethod (String, _returns=Array(SoapFile))
>              @authenticated_uid
> -            def get_files (self, uid, builddir): 
> +            def get_files (self, uid, builddir):
>      """
>      if func.__code__.co_argcount == 1:
>          @wraps(func)
> @@ -124,7 +125,7 @@ def authenticated_admin(func):
>              if not self.app.pm.db.is_admin(uid):
>                  raise SoapElbeNotAuthorized()
>  
> -            return func(self,arg1)
> +            return func(self, arg1)
>          return wrapped
>      elif func.__code__.co_argcount == 3:
>          @wraps(func)
> @@ -137,7 +138,9 @@ def authenticated_admin(func):
>  
>              if not self.app.pm.db.is_admin(uid):
>                  raise SoapElbeNotAuthorized()
> -            return func(self,arg1,arg2)
> +            return func(self, arg1, arg2)
>          return wrapped
>      else:
> -        raise Exception( "arg count %d not implemented" % func.__code__.co_argcount )
> +        raise Exception(
> +            "arg count %d not implemented" %
> +            func.__code__.co_argcount)
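Beyond the whitespace fixes, the per-argcount ladder in authenticated_uid (one branch per co_argcount from 2 to 7) could collapse into a single *args wrapper in a follow-up, unless spyne really needs the wrapper's exact arity. A sketch, with NotLoggedIn standing in for SoapElbeNotLoggedIn and a plain session dict standing in for the transport session:

```python
from functools import wraps

class NotLoggedIn(Exception):
    """Stand-in for SoapElbeNotLoggedIn in this sketch."""

def authenticated_uid(func):
    # One variadic wrapper instead of six near-identical branches.
    @wraps(func)
    def wrapped(self, *args):
        try:
            uid = self.session['userid']
        except KeyError:
            raise NotLoggedIn()
        return func(self, uid, *args)
    return wrapped

class Service:
    def __init__(self, session):
        self.session = session

    @authenticated_uid
    def get_files(self, uid, builddir):
        return (uid, builddir)
```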
> diff --git a/elbepack/daemons/soap/datatypes.py b/elbepack/daemons/soap/datatypes.py
> index 1862a249..caef5924 100644
> --- a/elbepack/daemons/soap/datatypes.py
> +++ b/elbepack/daemons/soap/datatypes.py
> @@ -6,6 +6,7 @@
>  from spyne.model.complex import ComplexModel
>  from spyne.model.primitive import Unicode, DateTime
>  
> +
>  class SoapProject (ComplexModel):
>      __namespace__ = 'soap'
>  
> @@ -22,6 +23,7 @@ class SoapProject (ComplexModel):
>          self.status = prj.status
>          self.edit = prj.edit
>  
> +
>  class SoapFile (ComplexModel):
>      __namespace__ = 'soap'
>  
> @@ -31,4 +33,3 @@ class SoapFile (ComplexModel):
>      def __init__(self, fi):
>          self.name = fi.name
>          self.description = fi.description
> -
> diff --git a/elbepack/daemons/soap/esoap.py b/elbepack/daemons/soap/esoap.py
> index 4f8ef0bc..a9077635 100644
> --- a/elbepack/daemons/soap/esoap.py
> +++ b/elbepack/daemons/soap/esoap.py
> @@ -31,16 +31,17 @@ except ImportError as e:
>      print("please install python(3)-spyne", file=sys.stderr)
>      sys.exit(20)
>  
> +
>  class ESoap (ServiceBase):
>  
>      __name__ = 'soap'
>  
> -    @rpc (_returns=String )
> +    @rpc(_returns=String)
>      @soap_faults
>      def get_version(self):
>          return elbe_version
>  
> -    @rpc (String, String, _returns=Boolean )
> +    @rpc(String, String, _returns=Boolean)
>      @soap_faults
>      def login(self, user, passwd):
>          s = self.transport.req_env['beaker.session']
> @@ -48,293 +49,300 @@ class ESoap (ServiceBase):
>          s.save()
>          return True
>  
> -
> -    @rpc (_returns=Array(String))
> +    @rpc(_returns=Array(String))
>      @soap_faults
>      @authenticated_admin
> -    def list_users (ctx):
> -        return [u.name for u in ctx.app.pm.db.list_users ()]
> +    def list_users(ctx):
> +        return [u.name for u in ctx.app.pm.db.list_users()]
>  
> -    @rpc (_returns=Array(SoapProject))
> +    @rpc(_returns=Array(SoapProject))
>      @soap_faults
>      @authenticated_admin
> -    def list_projects (ctx):
> +    def list_projects(ctx):
>          return ctx.app.pm.db.list_projects()
>  
> -    @rpc (String, String, _returns=Array(SoapFile))
> +    @rpc(String, String, _returns=Array(SoapFile))
>      @authenticated_uid
>      @soap_faults
> -    def get_files (self, uid, builddir, _returns=Array(SoapFile)):
> -        files = self.app.pm.db.get_project_files (builddir)
> +    def get_files(self, uid, builddir, _returns=Array(SoapFile)):
> +        files = self.app.pm.db.get_project_files(builddir)
>          return [SoapFile(f) for f in files]
>  
> -    @rpc (String, String, String, Integer, _returns=Integer)
> +    @rpc(String, String, String, Integer, _returns=Integer)
>      @authenticated_uid
>      @soap_faults
> -    def upload_file (self, uid, builddir, fname, blob, part):
> -        fn = os.path.join (builddir, fname)
> +    def upload_file(self, uid, builddir, fname, blob, part):
> +        fn = os.path.join(builddir, fname)
>          if (part == 0):
> -            if self.app.pm.db.is_busy (builddir):
> +            if self.app.pm.db.is_busy(builddir):
>                  return -1
> -            self.app.pm.db.set_busy (builddir,
> -                [ "empty_project", "needs_build", "has_changes",
> -                  "build_done", "build_failed" ] )
> -            #truncate file
> -            with open (fn, 'w') as fp:
> -                fp.write ('')
> +            self.app.pm.db.set_busy(builddir,
> +                                    ["empty_project", "needs_build", "has_changes",
> +                                     "build_done", "build_failed"])
> +            # truncate file
> +            with open(fn, 'w') as fp:
> +                fp.write('')
>  
>          if part == -1:
> -            with open (fn, 'a') as fp:
> -                fp.flush ()
> -            self.app.pm.db.reset_busy (builddir, "has_changes")
> +            with open(fn, 'a') as fp:
> +                fp.flush()
> +            self.app.pm.db.reset_busy(builddir, "has_changes")
>              if (fname == "source.xml"):
>                  # ensure that the project cache is reloaded
> -                self.app.pm.close_current_project (uid)
> -                self.app.pm.open_project (uid, builddir, url_validation=ValidationMode.NO_CHECK)
> -                self.app.pm.set_current_project_xml (uid, fn)
> +                self.app.pm.close_current_project(uid)
> +                self.app.pm.open_project(
> +                    uid, builddir, url_validation=ValidationMode.NO_CHECK)
> +                self.app.pm.set_current_project_xml(uid, fn)
>              return -2
>  
> -        with open (fn, 'a') as fp:
> -            fp.write (binascii.a2b_base64 (blob))
> +        with open(fn, 'a') as fp:
> +            fp.write(binascii.a2b_base64(blob))
>  
> -        return part+1
> +        return part + 1
>  
> -    @rpc (String, String, Integer, _returns=String)
> +    @rpc(String, String, Integer, _returns=String)
>      @authenticated_uid
>      @soap_faults
> -    def get_file (self, uid, builddir, filename, part):
> +    def get_file(self, uid, builddir, filename, part):
>          size = 1024 * 1024 * 5
>          pos = size * part
>          file_name = builddir + "/" + filename
> -        file_stat = os.stat (file_name)
> +        file_stat = os.stat(file_name)
>  
>          if (pos >= file_stat.st_size):
>              return "EndOfFile"
>  
> -        with open (file_name) as fp:
> +        with open(file_name) as fp:
>              if not fp:
>                  return "FileNotFound"
>              try:
> -                fp.seek (pos)
> -                data = fp.read (size)
> -                return binascii.b2a_base64 (data)
> -            except:
> +                fp.seek(pos)
> +                data = fp.read(size)
> +                return binascii.b2a_base64(data)
> +            except BaseException:
>                  return "EndOfFile"
>  
> -    @rpc (String)
> +    @rpc(String)
>      @authenticated_uid
>      @soap_faults
> -    def build_chroot_tarball (self, uid, builddir):
> -        self.app.pm.open_project (uid, builddir)
> -        self.app.pm.build_chroot_tarball (uid)
> +    def build_chroot_tarball(self, uid, builddir):
> +        self.app.pm.open_project(uid, builddir)
> +        self.app.pm.build_chroot_tarball(uid)
>  
> -    @rpc (String)
> +    @rpc(String)
>      @authenticated_uid
>      @soap_faults
> -    def build_sysroot (self, uid, builddir):
> -        self.app.pm.open_project (uid, builddir)
> -        self.app.pm.build_sysroot (uid)
> +    def build_sysroot(self, uid, builddir):
> +        self.app.pm.open_project(uid, builddir)
> +        self.app.pm.build_sysroot(uid)
>  
> -    @rpc (String, Boolean, Boolean, Boolean)
> +    @rpc(String, Boolean, Boolean, Boolean)
>      @authenticated_uid
>      @soap_faults
> -    def build (self, uid, builddir, build_bin, build_src, skip_pbuilder):
> -        self.app.pm.open_project (uid, builddir)
> -        self.app.pm.build_current_project (uid, build_bin, build_src,
> -                skip_pbuilder)
> +    def build(self, uid, builddir, build_bin, build_src, skip_pbuilder):
> +        self.app.pm.open_project(uid, builddir)
> +        self.app.pm.build_current_project(uid, build_bin, build_src,
> +                                          skip_pbuilder)
>  
> -    @rpc (String)
> +    @rpc(String)
>      @authenticated_uid
>      @soap_faults
> -    def build_pbuilder (self, uid, builddir):
> -        self.app.pm.open_project (uid, builddir)
> -        self.app.pm.build_pbuilder (uid)
> +    def build_pbuilder(self, uid, builddir):
> +        self.app.pm.open_project(uid, builddir)
> +        self.app.pm.build_pbuilder(uid)
>  
> -
> -    @rpc (String)
> +    @rpc(String)
>      @authenticated_uid
>      @soap_faults
> -    def update_pbuilder (self, uid, builddir):
> -        self.app.pm.open_project (uid, builddir)
> -        self.app.pm.update_pbuilder (uid)
> +    def update_pbuilder(self, uid, builddir):
> +        self.app.pm.open_project(uid, builddir)
> +        self.app.pm.update_pbuilder(uid)
>  
> -    @rpc (String)
> +    @rpc(String)
>      @authenticated_uid
>      @soap_faults
> -    def start_cdrom (self, uid, builddir):
> -        self.app.pm.open_project (uid, builddir, url_validation=ValidationMode.NO_CHECK)
> +    def start_cdrom(self, uid, builddir):
> +        self.app.pm.open_project(
> +            uid, builddir, url_validation=ValidationMode.NO_CHECK)
>  
> -        cdrom_fname = os.path.join (builddir, "uploaded_cdrom.iso")
> +        cdrom_fname = os.path.join(builddir, "uploaded_cdrom.iso")
>  
>          # Now write empty File
> -        fp = open (cdrom_fname, "w")
> +        fp = open(cdrom_fname, "w")
>          fp.close()
>  
> -    @rpc (String, String)
> +    @rpc(String, String)
>      @authenticated_uid
>      @soap_faults
> -    def append_cdrom (self, uid, builddir, data):
> -        self.app.pm.open_project (uid, builddir, url_validation=ValidationMode.NO_CHECK)
> +    def append_cdrom(self, uid, builddir, data):
> +        self.app.pm.open_project(
> +            uid, builddir, url_validation=ValidationMode.NO_CHECK)
>  
> -        cdrom_fname = os.path.join (builddir, "uploaded_cdrom.iso")
> +        cdrom_fname = os.path.join(builddir, "uploaded_cdrom.iso")
>  
>          # Now append data to cdrom_file
> -        fp = open (cdrom_fname, "a")
> -        fp.write (binascii.a2b_base64 (data))
> +        fp = open(cdrom_fname, "a")
> +        fp.write(binascii.a2b_base64(data))
>          fp.close()
>  
> -    @rpc (String)
> +    @rpc(String)
>      @authenticated_uid
>      @soap_faults
> -    def finish_cdrom (self, uid, builddir):
> -        self.app.pm.open_project (uid, builddir, url_validation=ValidationMode.NO_CHECK)
> -        self.app.pm.set_current_project_upload_cdrom (uid)
> +    def finish_cdrom(self, uid, builddir):
> +        self.app.pm.open_project(
> +            uid, builddir, url_validation=ValidationMode.NO_CHECK)
> +        self.app.pm.set_current_project_upload_cdrom(uid)
>  
> -    @rpc (String)
> +    @rpc(String)
>      @authenticated_uid
>      @soap_faults
> -    def start_pdebuild (self, uid, builddir):
> -        self.app.pm.open_project (uid, builddir)
> +    def start_pdebuild(self, uid, builddir):
> +        self.app.pm.open_project(uid, builddir)
>  
> -        pdebuild_fname = os.path.join (builddir, "current_pdebuild.tar.gz")
> +        pdebuild_fname = os.path.join(builddir, "current_pdebuild.tar.gz")
>  
>          # Now write empty File
> -        fp = open (pdebuild_fname, "w")
> +        fp = open(pdebuild_fname, "w")
>          fp.close()
>  
> -    @rpc (String, String)
> +    @rpc(String, String)
>      @authenticated_uid
>      @soap_faults
> -    def append_pdebuild (self, uid, builddir, data):
> -        self.app.pm.open_project (uid, builddir)
> +    def append_pdebuild(self, uid, builddir, data):
> +        self.app.pm.open_project(uid, builddir)
>  
> -        pdebuild_fname = os.path.join (builddir, "current_pdebuild.tar.gz")
> +        pdebuild_fname = os.path.join(builddir, "current_pdebuild.tar.gz")
>  
>          # Now write empty File
> -        fp = open (pdebuild_fname, "a")
> -        fp.write (binascii.a2b_base64 (data))
> +        fp = open(pdebuild_fname, "a")
> +        fp.write(binascii.a2b_base64(data))
>          fp.close()
>  
> -    @rpc (String)
> +    @rpc(String)
>      @authenticated_uid
>      @soap_faults
> -    def finish_pdebuild (self, uid, builddir):
> -        self.app.pm.open_project (uid, builddir)
> -        self.app.pm.build_current_pdebuild (uid)
> +    def finish_pdebuild(self, uid, builddir):
> +        self.app.pm.open_project(uid, builddir)
> +        self.app.pm.build_current_pdebuild(uid)
>  
> -    @rpc (String, String)
> +    @rpc(String, String)
>      @authenticated_uid
>      @soap_faults
> -    def start_upload_orig (self, uid, builddir, fname):
> -        self.app.pm.open_project (uid, builddir)
> +    def start_upload_orig(self, uid, builddir, fname):
> +        self.app.pm.open_project(uid, builddir)
>  
> -        orig_fname = os.path.join (builddir, fname)
> +        orig_fname = os.path.join(builddir, fname)
>  
>          # Now write empty File
> -        fp = open (orig_fname, "w")
> +        fp = open(orig_fname, "w")
>          fp.close()
>  
>          self.app.pm.set_orig_fname(uid, fname)
>  
> -
> -    @rpc (String, String)
> +    @rpc(String, String)
>      @authenticated_uid
>      @soap_faults
> -    def append_upload_orig (self, uid, builddir, data):
> -        self.app.pm.open_project (uid, builddir)
> +    def append_upload_orig(self, uid, builddir, data):
> +        self.app.pm.open_project(uid, builddir)
>  
> -        orig_fname = os.path.join (builddir, self.app.pm.get_orig_fname(uid))
> +        orig_fname = os.path.join(builddir, self.app.pm.get_orig_fname(uid))
>  
>          # Now append to File
> -        fp = open (orig_fname, "a")
> -        fp.write (binascii.a2b_base64 (data))
> +        fp = open(orig_fname, "a")
> +        fp.write(binascii.a2b_base64(data))
>          fp.close()
>  
> -    @rpc (String)
> +    @rpc(String)
>      @authenticated_uid
>      @soap_faults
> -    def finish_upload_orig (self, uid, builddir):
> +    def finish_upload_orig(self, uid, builddir):
>          # If we support more than one orig, we need to put the orig_files into some
>          # list here.
>          # We still need the notion of a "current" orig during file upload.
>          pass
>  
> -    @rpc (String)
> +    @rpc(String)
>      @authenticated_uid
>      @soap_faults
> -    def reset_project (self, uid, builddir):
> -        self.app.pm.db.reset_project (builddir, True)
> +    def reset_project(self, uid, builddir):
> +        self.app.pm.db.reset_project(builddir, True)
>  
> -    @rpc (String)
> +    @rpc(String)
>      @authenticated_uid
>      @soap_faults
> -    def del_project (self, uid, builddir):
> -        self.app.pm.del_project (uid, builddir)
> +    def del_project(self, uid, builddir):
> +        self.app.pm.del_project(uid, builddir)
>  
>      @rpc(String, String, _returns=String)
>      @authenticated_uid
>      @soap_faults
> -    def create_project (self, uid, xml, url_validation):
> +    def create_project(self, uid, xml, url_validation):
>          with NamedTemporaryFile() as fp:
> -            fp.write (binascii.a2b_base64 (xml))
> -            fp.flush ()
> -            prjid = self.app.pm.create_project (uid, fp.name, url_validation=url_validation)
> +            fp.write(binascii.a2b_base64(xml))
> +            fp.flush()
> +            prjid = self.app.pm.create_project(
> +                uid, fp.name, url_validation=url_validation)
>  
>          return prjid
>  
>      @rpc(String, _returns=String)
>      @authenticated_uid
>      @soap_faults
> -    def new_project (self, uid, url_validation):
> -        return self.app.pm.new_project (uid)
> +    def new_project(self, uid, url_validation):
> +        return self.app.pm.new_project(uid)
>  
> -    @rpc (String, Integer, _returns=String)
> +    @rpc(String, Integer, _returns=String)
>      @authenticated_uid
>      @soap_faults
> -    def get_project_busy (self, uid, builddir, part):
> -        self.app.pm.open_project (uid, builddir)
> -        ret,log = self.app.pm.current_project_is_busy (uid, part)
> +    def get_project_busy(self, uid, builddir, part):
> +        self.app.pm.open_project(uid, builddir)
> +        ret, log = self.app.pm.current_project_is_busy(uid, part)
>          # return bool value to be compatible with elbe v1.0
> -        if (part == None) and (log == "") and (not ret):
> +        if (part is None) and (log == "") and (not ret):
>              return ret
>          if not ret:
>              return 'FINISH'
>          return log
>  
> -    @rpc ()
> +    @rpc()
>      @authenticated_uid
>      @soap_faults
> -    def shutdown_initvm (self, uid):
> -        system ("systemctl --no-block poweroff")
> +    def shutdown_initvm(self, uid):
> +        system("systemctl --no-block poweroff")
>  
> -    @rpc (String)
> +    @rpc(String)
>      @authenticated_uid
>      @soap_faults
> -    def rm_log (self, uid, builddir):
> -        self.app.pm.open_project (uid, builddir)
> -        self.app.pm.rm_log (uid, builddir)
> +    def rm_log(self, uid, builddir):
> +        self.app.pm.open_project(uid, builddir)
> +        self.app.pm.rm_log(uid, builddir)
>  
>      @rpc(String, _returns=String)
>      @authenticated_uid
>      @soap_faults
> -    def list_packages (self, uid, builddir):
> +    def list_packages(self, uid, builddir):
>          s = ''
> -        for root, dirnames, filenames in os.walk(os.path.join(builddir, "repo/pool/main")):
> +        for root, dirnames, filenames in os.walk(
> +                os.path.join(builddir, "repo/pool/main")):
>              for filename in fnmatch.filter(filenames, '*.deb'):
> -                s += filename +'\n'
> +                s += filename + '\n'
>          return s
>  
> -    @rpc (String, String)
> +    @rpc(String, String)
>      @authenticated_uid
>      @soap_faults
> -    def tar_prjrepo (self, uid, builddir, filename):
> -        self.app.pm.open_project (uid, builddir)
> +    def tar_prjrepo(self, uid, builddir, filename):
> +        self.app.pm.open_project(uid, builddir)
>          with tarfile.open(os.path.join(builddir, filename), "w:gz") as tar:
> -            tar.add(os.path.join(builddir, "repo"), arcname=os.path.basename(os.path.join(builddir, "repo")))
> +            tar.add(
> +                os.path.join(
> +                    builddir, "repo"), arcname=os.path.basename(
> +                    os.path.join(
> +                        builddir, "repo")))
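The double-aggressive rewrap of this `tar.add()` call is correct but hard to scan. A later cleanup could hoist the repeated `os.path.join` once; a sketch (the temp directory and archive name are made up for the demonstration, not the real project layout):

```python
import os
import tarfile
import tempfile

# Build a throwaway builddir/repo layout just for the demonstration.
builddir = tempfile.mkdtemp()
os.makedirs(os.path.join(builddir, "repo"))

repo = os.path.join(builddir, "repo")            # computed once, not four times
archive = os.path.join(builddir, "prjrepo.tar.gz")
with tarfile.open(archive, "w:gz") as tar:
    tar.add(repo, arcname=os.path.basename(repo))
```

Same behaviour, and autopep8 would then have nothing to wrap.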
>  
> -    @rpc (String, String)
> +    @rpc(String, String)
>      @authenticated_uid
>      @soap_faults
> -    def include_package (self, uid, builddir, filename):
> -        self.app.pm.open_project (uid, builddir)
> +    def include_package(self, uid, builddir, filename):
> +        self.app.pm.open_project(uid, builddir)
>          self.app.pm.add_deb_package(uid, filename)
> diff --git a/elbepack/daemons/soap/faults.py b/elbepack/daemons/soap/faults.py
> index c36dbb0f..773f02ed 100644
> --- a/elbepack/daemons/soap/faults.py
> +++ b/elbepack/daemons/soap/faults.py
> @@ -15,33 +15,55 @@ from elbepack.elbexml import ValidationError
>  from elbepack.db import ElbeDBError, InvalidLogin
>  
>  
> -class SoapElbeDBError( Fault ):
> +class SoapElbeDBError(Fault):
>      def __init__(self, dberr):
>          Fault.__init__(self, faultcode="ElbeDBError", faultstring=str(dberr))
>  
> -class SoapElbeProjectError( Fault ):
> +
> +class SoapElbeProjectError(Fault):
>      def __init__(self, err):
> -        Fault.__init__(self, faultcode="ElbeProjectError", faultstring=str(err))
> +        Fault.__init__(
> +            self,
> +            faultcode="ElbeProjectError",
> +            faultstring=str(err))
> +
>  
> -class SoapElbeAuthenticationFailed( Fault ):
> +class SoapElbeAuthenticationFailed(Fault):
>      def __init__(self):
> -        Fault.__init__(self, faultcode="ElbeAuthenticationFailed", faultstring="Authentication Failed")
> +        Fault.__init__(
> +            self,
> +            faultcode="ElbeAuthenticationFailed",
> +            faultstring="Authentication Failed")
> +
>  
> -class SoapElbeNotLoggedIn( Fault ):
> +class SoapElbeNotLoggedIn(Fault):
>      def __init__(self):
> -        Fault.__init__(self, faultcode="ElbeNotLoggedIn", faultstring="Not authenticated ! Cant let you perform this command.")
> +        Fault.__init__(
> +            self,
> +            faultcode="ElbeNotLoggedIn",
> +            faultstring="Not authenticated ! Cant let you perform this command.")
>  
> -class SoapElbeNotAuthorized( Fault ):
> +
> +class SoapElbeNotAuthorized(Fault):
>      def __init__(self):
> -        Fault.__init__(self, faultcode="ElbeNotAuthorized", faultstring="Not Authorized ! Cant let you perform this command.")
> +        Fault.__init__(
> +            self,
> +            faultcode="ElbeNotAuthorized",
> +            faultstring="Not Authorized ! Cant let you perform this command.")
> +
>  
> -class SoapElbeValidationError( Fault ):
> +class SoapElbeValidationError(Fault):
>      def __init__(self, exc):
> -        Fault.__init__(self, faultcode="ElbeValidationError", faultstring=exc.__repr__())
> +        Fault.__init__(
> +            self,
> +            faultcode="ElbeValidationError",
> +            faultstring=exc.__repr__())
>  
> -class SoapElbeInvalidState( Fault ):
> +
> +class SoapElbeInvalidState(Fault):
>      def __init__(self):
> -        Fault.__init__(self, faultcode="ElbeInvalidState", faultstring="Project is Busy ! Operation Invalid")
> +        Fault.__init__(self, faultcode="ElbeInvalidState",
> +                       faultstring="Project is Busy ! Operation Invalid")
>  
>  
>  def soap_faults(func):
> @@ -54,140 +76,141 @@ def soap_faults(func):
>              try:
>                  return func(self)
>              except ProjectManagerError as e:
> -                raise SoapElbeProjectError (str (e))
> +                raise SoapElbeProjectError(str(e))
>              except InvalidState as e:
>                  raise SoapElbeInvalidState()
>              except ElbeDBError as e:
> -                raise SoapElbeDBError (str (e))
> +                raise SoapElbeDBError(str(e))
>              except OSError as e:
> -                raise SoapElbeProjectError ("OSError: " + str (e))
> +                raise SoapElbeProjectError("OSError: " + str(e))
>              except ValidationError as e:
> -                raise SoapElbeValidationError (e)
> +                raise SoapElbeValidationError(e)
>              except InvalidLogin:
>                  raise SoapElbeNotAuthorized()
>              except Exception as e:
> -                raise SoapElbeProjectError (format_exc ())
> +                raise SoapElbeProjectError(format_exc())
>          return wrapped
>      if func.__code__.co_argcount == 2:
> -        @wraps (func)
> -        def wrapped (self, arg1):
> +        @wraps(func)
> +        def wrapped(self, arg1):
>              try:
> -                return func (self,arg1)
> +                return func(self, arg1)
>              except ProjectManagerError as e:
> -                raise SoapElbeProjectError (str (e))
> +                raise SoapElbeProjectError(str(e))
>              except InvalidState as e:
> -                raise SoapElbeInvalidState ()
> +                raise SoapElbeInvalidState()
>              except ElbeDBError as e:
> -                raise SoapElbeDBError (str (e))
> +                raise SoapElbeDBError(str(e))
>              except OSError as e:
> -                raise SoapElbeProjectError ("OSError: " + str (e))
> +                raise SoapElbeProjectError("OSError: " + str(e))
>              except ValidationError as e:
> -                raise SoapElbeValidationError (e)
> +                raise SoapElbeValidationError(e)
>              except InvalidLogin:
>                  raise SoapElbeNotAuthorized()
>              except Exception as e:
> -                raise SoapElbeProjectError (format_exc ())
> +                raise SoapElbeProjectError(format_exc())
>          return wrapped
>      if func.__code__.co_argcount == 3:
> -        @wraps (func)
> -        def wrapped (self, arg1, arg2):
> +        @wraps(func)
> +        def wrapped(self, arg1, arg2):
>              try:
> -                return func (self,arg1,arg2)
> +                return func(self, arg1, arg2)
>              except ProjectManagerError as e:
> -                raise SoapElbeProjectError (str (e))
> +                raise SoapElbeProjectError(str(e))
>              except InvalidState as e:
> -                raise SoapElbeInvalidState ()
> +                raise SoapElbeInvalidState()
>              except ElbeDBError as e:
> -                raise SoapElbeDBError (str (e))
> +                raise SoapElbeDBError(str(e))
>              except OSError as e:
> -                raise SoapElbeProjectError ("OSError: " + str (e))
> +                raise SoapElbeProjectError("OSError: " + str(e))
>              except ValidationError as e:
> -                raise SoapElbeValidationError (e)
> +                raise SoapElbeValidationError(e)
>              except InvalidLogin:
>                  raise SoapElbeNotAuthorized()
>              except Exception as e:
> -                raise SoapElbeProjectError (format_exc ())
> +                raise SoapElbeProjectError(format_exc())
>          return wrapped
>      if func.__code__.co_argcount == 4:
> -        @wraps (func)
> -        def wrapped (self, arg1, arg2, arg3):
> +        @wraps(func)
> +        def wrapped(self, arg1, arg2, arg3):
>              try:
> -                return func (self,arg1,arg2, arg3)
> +                return func(self, arg1, arg2, arg3)
>              except ProjectManagerError as e:
> -                raise SoapElbeProjectError (str (e))
> +                raise SoapElbeProjectError(str(e))
>              except InvalidState as e:
> -                raise SoapElbeInvalidState ()
> +                raise SoapElbeInvalidState()
>              except ElbeDBError as e:
> -                raise SoapElbeDBError (str (e))
> +                raise SoapElbeDBError(str(e))
>              except OSError as e:
> -                raise SoapElbeProjectError ("OSError: " + str (e))
> +                raise SoapElbeProjectError("OSError: " + str(e))
>              except ValidationError as e:
> -                raise SoapElbeValidationError (e)
> +                raise SoapElbeValidationError(e)
>              except InvalidLogin:
>                  raise SoapElbeNotAuthorized()
>              except Exception as e:
> -                raise SoapElbeProjectError (format_exc ())
> +                raise SoapElbeProjectError(format_exc())
>          return wrapped
>      if func.__code__.co_argcount == 5:
> -        @wraps (func)
> -        def wrapped (self, arg1, arg2, arg3, arg4):
> +        @wraps(func)
> +        def wrapped(self, arg1, arg2, arg3, arg4):
>              try:
> -                return func (self,arg1,arg2, arg3, arg4)
> +                return func(self, arg1, arg2, arg3, arg4)
>              except ProjectManagerError as e:
> -                raise SoapElbeProjectError (str (e))
> +                raise SoapElbeProjectError(str(e))
>              except InvalidState as e:
> -                raise SoapElbeInvalidState ()
> +                raise SoapElbeInvalidState()
>              except ElbeDBError as e:
> -                raise SoapElbeDBError (str (e))
> +                raise SoapElbeDBError(str(e))
>              except OSError as e:
> -                raise SoapElbeProjectError ("OSError: " + str (e))
> +                raise SoapElbeProjectError("OSError: " + str(e))
>              except ValidationError as e:
> -                raise SoapElbeValidationError (e)
> +                raise SoapElbeValidationError(e)
>              except InvalidLogin:
>                  raise SoapElbeNotAuthorized()
>              except Exception as e:
> -                raise SoapElbeProjectError (format_exc ())
> +                raise SoapElbeProjectError(format_exc())
>          return wrapped
>      if func.__code__.co_argcount == 6:
> -        @wraps (func)
> -        def wrapped (self, arg1, arg2, arg3, arg4, arg5):
> +        @wraps(func)
> +        def wrapped(self, arg1, arg2, arg3, arg4, arg5):
>              try:
> -                return func (self,arg1,arg2, arg3, arg4, arg5)
> +                return func(self, arg1, arg2, arg3, arg4, arg5)
>              except ProjectManagerError as e:
> -                raise SoapElbeProjectError (str (e))
> +                raise SoapElbeProjectError(str(e))
>              except InvalidState as e:
> -                raise SoapElbeInvalidState ()
> +                raise SoapElbeInvalidState()
>              except ElbeDBError as e:
> -                raise SoapElbeDBError (str (e))
> +                raise SoapElbeDBError(str(e))
>              except OSError as e:
> -                raise SoapElbeProjectError ("OSError: " + str (e))
> +                raise SoapElbeProjectError("OSError: " + str(e))
>              except ValidationError as e:
> -                raise SoapElbeValidationError (e)
> +                raise SoapElbeValidationError(e)
>              except InvalidLogin:
>                  raise SoapElbeNotAuthorized()
>              except Exception as e:
> -                raise SoapElbeProjectError (format_exc ())
> +                raise SoapElbeProjectError(format_exc())
>          return wrapped
>      if func.__code__.co_argcount == 7:
> -        @wraps (func)
> -        def wrapped (self, arg1, arg2, arg3, arg4, arg5, arg6):
> +        @wraps(func)
> +        def wrapped(self, arg1, arg2, arg3, arg4, arg5, arg6):
>              try:
> -                return func (self,arg1,arg2, arg3, arg4, arg5, arg6)
> +                return func(self, arg1, arg2, arg3, arg4, arg5, arg6)
>              except ProjectManagerError as e:
> -                raise SoapElbeProjectError (str (e))
> +                raise SoapElbeProjectError(str(e))
>              except InvalidState as e:
> -                raise SoapElbeInvalidState ()
> +                raise SoapElbeInvalidState()
>              except ElbeDBError as e:
> -                raise SoapElbeDBError (str (e))
> +                raise SoapElbeDBError(str(e))
>              except OSError as e:
> -                raise SoapElbeProjectError ("OSError: " + str (e))
> +                raise SoapElbeProjectError("OSError: " + str(e))
>              except ValidationError as e:
> -                raise SoapElbeValidationError (e)
> +                raise SoapElbeValidationError(e)
>              except InvalidLogin:
>                  raise SoapElbeNotAuthorized()
>              except Exception as e:
> -                raise SoapElbeProjectError (format_exc ())
> +                raise SoapElbeProjectError(format_exc())
>          return wrapped
>      else:
> -        raise Exception( "arg count %d not implemented" % func.__code__.co_argcount )
> -
> +        raise Exception(
> +            "arg count %d not implemented" %
> +            func.__code__.co_argcount)
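Not something for this mechanical pep8 pass, but the argcount-dispatched copies of wrapped() above are crying out for a single variadic wrapper. A rough sketch with a stand-in fault class (only one except branch shown; and whether spyne's @rpc introspection tolerates a *args signature would need checking before doing this for real):

```python
from functools import wraps
from traceback import format_exc


class SoapElbeProjectError(Exception):
    """Stand-in for the real spyne Fault subclass, just for the sketch."""


def soap_faults(func):
    # One wrapper covers every arity, replacing the co_argcount == 1..7 copies.
    @wraps(func)
    def wrapped(self, *args):
        try:
            return func(self, *args)
        except SoapElbeProjectError:
            raise                       # already a SOAP fault, pass through
        except Exception:
            raise SoapElbeProjectError(format_exc())
    return wrapped


class Demo:
    @soap_faults
    def add(self, a, b):
        return a + b

    @soap_faults
    def boom(self):
        raise ValueError("inner error")
```

The full exception mapping (InvalidState, ElbeDBError, OSError, ...) would sit in that single try/except instead of being repeated seven times, and the trailing "arg count %d not implemented" case disappears entirely.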
> diff --git a/elbepack/db.py b/elbepack/db.py
> index 469509bf..c7a6ef2d 100644
> --- a/elbepack/db.py
> +++ b/elbepack/db.py
> @@ -31,16 +31,19 @@ from elbepack.elbeproject import ElbeProject
>  from elbepack.elbexml import (ElbeXML, ValidationMode)
>  from elbepack.dosunix import dos2unix
>  
> -Base = declarative_base ()
> +Base = declarative_base()
> +
>  
>  class ElbeDBError(Exception):
> -    def __init__ (self, message):
> +    def __init__(self, message):
>          Exception.__init__(self, message)
>  
> +
>  class InvalidLogin(Exception):
> -    def __init__ (self):
> +    def __init__(self):
>          Exception.__init__(self, "Invalid login")
>  
> +
>  @contextmanager
>  def session_scope(session):
>      try:
> @@ -48,204 +51,205 @@ def session_scope(session):
>          try:
>              session.commit()
>          except OperationalError as e:
> -            raise ElbeDBError( "database commit failed: " + str(e) )
> -    except:
> +            raise ElbeDBError("database commit failed: " + str(e))
> +    except BaseException:
>          session.rollback()
>          raise
>      finally:
>          session.remove()
>  
> -def get_versioned_filename( name, version, suffix ):
> -    return quote( name, ' ') + '_' + quote( version, ' ') + suffix
> +
> +def get_versioned_filename(name, version, suffix):
> +    return quote(name, ' ') + '_' + quote(version, ' ') + suffix
> +
>  
>  class ElbeDB(object):
> -    db_path     = '/var/cache/elbe'
> +    db_path = '/var/cache/elbe'
>      db_location = 'sqlite:///' + db_path + '/elbe.db'
>  
> -    def __init__ (self):
> -        engine = create_engine( self.__class__.db_location,
> -                connect_args={ 'timeout': 30 } )
> -        Base.metadata.create_all( engine )
> -        smaker = sessionmaker( bind=engine )
> -        self.session = scoped_session( smaker )
> +    def __init__(self):
> +        engine = create_engine(self.__class__.db_location,
> +                               connect_args={'timeout': 30})
> +        Base.metadata.create_all(engine)
> +        smaker = sessionmaker(bind=engine)
> +        self.session = scoped_session(smaker)
>  
> -    def list_users (self):
> +    def list_users(self):
>          with session_scope(self.session) as s:
> -            res = s.query (User).all()
> +            res = s.query(User).all()
>              ret = []
>              for u in res:
>                  ret.append(UserData(u))
>              return ret
>  
> -    def list_projects (self):
> +    def list_projects(self):
>          with session_scope(self.session) as s:
> -            res = s.query (Project).all()
> +            res = s.query(Project).all()
>              ret = []
>              for p in res:
>                  ret.append(ProjectData(p))
>              return ret
>  
> -    def list_projects_of (self, userid):
> +    def list_projects_of(self, userid):
>          with session_scope(self.session) as s:
> -            res = s.query (Project).filter (Project.owner_id == userid).all()
> +            res = s.query(Project).filter(Project.owner_id == userid).all()
>              ret = []
>              for p in res:
>                  ret.append(ProjectData(p))
>              return ret
>  
> -    def get_project_data (self, builddir):
> +    def get_project_data(self, builddir):
>          # Can throw: ElbeDBError
> -        if not os.path.exists (builddir):
> -            raise ElbeDBError( "project directory does not exist" )
> +        if not os.path.exists(builddir):
> +            raise ElbeDBError("project directory does not exist")
>  
>          with session_scope(self.session) as s:
>              try:
> -                p = s.query (Project).filter(Project.builddir == builddir).one()
> +                p = s.query(Project).filter(Project.builddir == builddir).one()
>              except NoResultFound:
>                  raise ElbeDBError(
> -                        "project %s is not registered in the database" %
> -                        builddir )
> +                    "project %s is not registered in the database" %
> +                    builddir)
>  
>              return ProjectData(p)
>  
> -    def set_postbuild (self, builddir, postbuild_file):
> -        if not os.path.exists (builddir):
> -            raise ElbeDBError( "project directory does not exist" )
> +    def set_postbuild(self, builddir, postbuild_file):
> +        if not os.path.exists(builddir):
> +            raise ElbeDBError("project directory does not exist")
>  
>          with session_scope(self.session) as s:
>              p = None
>              try:
> -                p = s.query (Project). \
> -                        filter(Project.builddir == builddir).one()
> +                p = s.query(Project). \
> +                    filter(Project.builddir == builddir).one()
>              except NoResultFound:
>                  raise ElbeDBError(
> -                        "project %s is not registered in the database" %
> -                        builddir )
> +                    "project %s is not registered in the database" %
> +                    builddir)
>  
>              if p.status == "busy":
>                  raise ElbeDBError(
> -                        "cannot set postbuild file while project %s is busy" %
> -                        builddir )
> +                    "cannot set postbuild file while project %s is busy" %
> +                    builddir)
>  
> -            p.edit = datetime.utcnow ()
> +            p.edit = datetime.utcnow()
>  
> -            with open (builddir+"/postbuild.sh", 'w') as dst:
> -                copyfileobj (postbuild_file, dst)
> +            with open(builddir + "/postbuild.sh", 'w') as dst:
> +                copyfileobj(postbuild_file, dst)
>  
> -            os.chmod (builddir+"/postbuild.sh", 0o755)
> -            dos2unix (builddir+"/postbuild.sh")
> +            os.chmod(builddir + "/postbuild.sh", 0o755)
> +            dos2unix(builddir + "/postbuild.sh")
>  
> -            return self._update_project_file( s, builddir, "postbuild.sh",
> -                    "application/sh", "postbuild script" )
> +            return self._update_project_file(
> +                s, builddir, "postbuild.sh", "application/sh", "postbuild script")
>  
> -    def set_savesh (self, builddir, savesh_file):
> -        if not os.path.exists (builddir):
> -            raise ElbeDBError( "project directory does not exist" )
> +    def set_savesh(self, builddir, savesh_file):
> +        if not os.path.exists(builddir):
> +            raise ElbeDBError("project directory does not exist")
>  
>          with session_scope(self.session) as s:
>              p = None
>              try:
> -                p = s.query (Project). \
> -                        filter(Project.builddir == builddir).one()
> +                p = s.query(Project). \
> +                    filter(Project.builddir == builddir).one()
>              except NoResultFound:
>                  raise ElbeDBError(
> -                        "project %s is not registered in the database" %
> -                        builddir )
> +                    "project %s is not registered in the database" %
> +                    builddir)
>  
>              if p.status == "busy":
>                  raise ElbeDBError(
> -                        "cannot set savesh file while project %s is busy" %
> -                        builddir )
> +                    "cannot set savesh file while project %s is busy" %
> +                    builddir)
>  
> -            p.edit = datetime.utcnow ()
> +            p.edit = datetime.utcnow()
>              if p.status == "empty_project" or p.status == "build_failed":
>                  p.status = "needs_build"
>              elif p.status == "build_done":
>                  p.status = "has_changes"
>  
> -            with open (builddir+"/save.sh", 'w') as dst:
> -                copyfileobj (savesh_file, dst)
> +            with open(builddir + "/save.sh", 'w') as dst:
> +                copyfileobj(savesh_file, dst)
>  
> -            os.chmod (builddir+"/save.sh", 0o755)
> -            dos2unix (builddir+"/save.sh")
> +            os.chmod(builddir + "/save.sh", 0o755)
> +            dos2unix(builddir + "/save.sh")
>  
> -            return self._update_project_file( s, builddir, "save.sh",
> -                    "application/sh", "version save script" )
> +            return self._update_project_file(
> +                s, builddir, "save.sh", "application/sh", "version save script")
>  
> -    def set_presh (self, builddir, presh_file):
> -        if not os.path.exists (builddir):
> -            raise ElbeDBError( "project directory does not exist" )
> +    def set_presh(self, builddir, presh_file):
> +        if not os.path.exists(builddir):
> +            raise ElbeDBError("project directory does not exist")
>  
>          with session_scope(self.session) as s:
>              p = None
>              try:
> -                p = s.query (Project). \
> -                        filter(Project.builddir == builddir).one()
> +                p = s.query(Project). \
> +                    filter(Project.builddir == builddir).one()
>              except NoResultFound:
>                  raise ElbeDBError(
> -                        "project %s is not registered in the database" %
> -                        builddir )
> +                    "project %s is not registered in the database" %
> +                    builddir)
>  
>              if p.status == "busy":
>                  raise ElbeDBError(
> -                        "cannot set presh file while project %s is busy" %
> -                        builddir )
> +                    "cannot set presh file while project %s is busy" %
> +                    builddir)
>  
> -            p.edit = datetime.utcnow ()
> +            p.edit = datetime.utcnow()
>              if p.status == "empty_project" or p.status == "build_failed":
>                  p.status = "needs_build"
>              elif p.status == "build_done":
>                  p.status = "has_changes"
>  
> -            with open (builddir+"/pre.sh", 'w') as dst:
> -                copyfileobj (presh_file, dst)
> +            with open(builddir + "/pre.sh", 'w') as dst:
> +                copyfileobj(presh_file, dst)
>  
> -            dos2unix (builddir+"/pre.sh")
> +            dos2unix(builddir + "/pre.sh")
>  
> -            return self._update_project_file( s, builddir, "pre.sh",
> -                    "application/sh", "pre install script" )
> +            return self._update_project_file(
> +                s, builddir, "pre.sh", "application/sh", "pre install script")
>  
> -    def set_postsh (self, builddir, postsh_file):
> -        if not os.path.exists (builddir):
> -            raise ElbeDBError( "project directory does not exist" )
> +    def set_postsh(self, builddir, postsh_file):
> +        if not os.path.exists(builddir):
> +            raise ElbeDBError("project directory does not exist")
>  
>          with session_scope(self.session) as s:
>              p = None
>              try:
> -                p = s.query (Project). \
> -                        filter(Project.builddir == builddir).one()
> +                p = s.query(Project). \
> +                    filter(Project.builddir == builddir).one()
>              except NoResultFound:
>                  raise ElbeDBError(
> -                        "project %s is not registered in the database" %
> -                        builddir )
> +                    "project %s is not registered in the database" %
> +                    builddir)
>  
>              if p.status == "busy":
>                  raise ElbeDBError(
> -                        "cannot set postsh file while project %s is busy" %
> -                        builddir )
> +                    "cannot set postsh file while project %s is busy" %
> +                    builddir)
>  
> -            p.edit = datetime.utcnow ()
> +            p.edit = datetime.utcnow()
>              if p.status == "empty_project" or p.status == "build_failed":
>                  p.status = "needs_build"
>              elif p.status == "build_done":
>                  p.status = "has_changes"
>  
> -            with open (builddir+"/post.sh", 'w') as dst:
> -                copyfileobj (postsh_file, dst)
> +            with open(builddir + "/post.sh", 'w') as dst:
> +                copyfileobj(postsh_file, dst)
>  
> -            dos2unix (builddir+"/post.sh")
> +            dos2unix(builddir + "/post.sh")
>  
> -            return self._update_project_file( s, builddir, "post.sh",
> -                    "application/sh", "post install script" )
> +            return self._update_project_file(
> +                s, builddir, "post.sh", "application/sh", "post install script")
>  
> -
> -    def set_xml (self, builddir, xml_file):
> +    def set_xml(self, builddir, xml_file):
>          # This method can throw: ElbeDBError, ValidationError, OSError
>  
> -        if not os.path.exists (builddir):
> -            raise ElbeDBError( "project directory does not exist" )
> +        if not os.path.exists(builddir):
> +            raise ElbeDBError("project directory does not exist")
>  
> -        srcxml_fname = os.path.join (builddir, "source.xml")
> +        srcxml_fname = os.path.join(builddir, "source.xml")
>  
>          if xml_file is None:
>              xml_file = srcxml_fname
> @@ -253,39 +257,44 @@ class ElbeDB(object):
>          with session_scope(self.session) as s:
>              p = None
>              try:
> -                p = s.query (Project). \
> -                        filter(Project.builddir == builddir).one()
> +                p = s.query(Project). \
> +                    filter(Project.builddir == builddir).one()
>              except NoResultFound:
>                  raise ElbeDBError(
> -                        "project %s is not registered in the database" %
> -                        builddir )
> +                    "project %s is not registered in the database" %
> +                    builddir)
>  
>              if p.status == "busy":
>                  raise ElbeDBError(
> -                        "cannot set XML file while project %s is busy" %
> -                        builddir )
> +                    "cannot set XML file while project %s is busy" %
> +                    builddir)
>  
> -            xml = ElbeXML (xml_file, url_validation=ValidationMode.NO_CHECK)    #ValidationError
> +            xml = ElbeXML(
> +                xml_file,
> +                url_validation=ValidationMode.NO_CHECK)  # ValidationError
>  
> -            p.name = xml.text ("project/name")
> -            p.version = xml.text ("project/version")
> -            p.edit = datetime.utcnow ()
> +            p.name = xml.text("project/name")
> +            p.version = xml.text("project/version")
> +            p.edit = datetime.utcnow()
>              if p.status == "empty_project" or p.status == "build_failed":
>                  p.status = "needs_build"
>              elif p.status == "build_done":
>                  p.status = "has_changes"
>  
>              if xml_file != srcxml_fname:
> -                copyfile (xml_file, srcxml_fname);    #OSError
> -
> -            self._update_project_file( s, builddir, "source.xml",
> -                    "application/xml", "ELBE recipe of the project" )
> +                copyfile(xml_file, srcxml_fname)  # OSError
>  
> +            self._update_project_file(
> +                s,
> +                builddir,
> +                "source.xml",
> +                "application/xml",
> +                "ELBE recipe of the project")
>  
>      # TODO what about source.xml ? stored always in db ? version management ?
>      #       build/needs_build state ? locking ?
>  
> -    def create_project (self, builddir, owner_id = None):
> +    def create_project(self, builddir, owner_id=None):
>          # Throws: ElbeDBError, OSError
>          directory_created = False
>  
> @@ -293,146 +302,147 @@ class ElbeDB(object):
>              with session_scope(self.session) as s:
>                  if s.query(Project).\
>                          filter(Project.builddir == builddir).count() > 0:
> -                    raise ElbeDBError( "project %s already exists in database" %
> -                            builddir )
> +                    raise ElbeDBError("project %s already exists in database" %
> +                                      builddir)
>  
>                  try:
> -                    os.makedirs (builddir)  #OSError
> +                    os.makedirs(builddir)  # OSError
>                      directory_created = True
>                  except OSError as e:
>                      if e.errno == errno.EEXIST:
>                          raise ElbeDBError(
> -                                "project directory %s already exists" %
> -                                builddir )
> +                            "project directory %s already exists" %
> +                            builddir)
>                      else:
>                          raise
>  
> -                p = Project (builddir=builddir, status="empty_project",
> -                        owner_id=owner_id)
> -                s.add (p)
> -        except:
> +                p = Project(builddir=builddir, status="empty_project",
> +                            owner_id=owner_id)
> +                s.add(p)
> +        except BaseException:
>              # If we have created a project directory but could not add the
>              # project to the database, remove the otherwise orphaned directory
>              # again.
>              if directory_created:
> -                rmtree (builddir)       #OSError
> +                rmtree(builddir)  # OSError
>              raise
>  
> -
> -    def del_project (self, builddir):
> +    def del_project(self, builddir):
>          # Throws: ElbeDBError, OSError
>          p = None
>          with session_scope(self.session) as s:
>              try:
> -                p = s.query (Project).filter(Project.builddir == builddir).one()
> +                p = s.query(Project).filter(Project.builddir == builddir).one()
>              except NoResultFound:
> -                raise ElbeDBError( "project %s is not registered in the database" %
> -                        builddir )
> +                raise ElbeDBError(
> +                    "project %s is not registered in the database" %
> +                    builddir)
>  
>              if p.status == "busy":
>                  raise ElbeDBError(
> -                        "cannot delete project %s while it is busy" %
> -                        builddir )
> +                    "cannot delete project %s while it is busy" %
> +                    builddir)
>  
> -            if os.path.exists (builddir):
> +            if os.path.exists(builddir):
>                  # delete project in background to avoid blocking caller for a
>                  # long time if the project is huge
> -                t = Thread (target=rmtree, args=[builddir])
> -                t.start ()
> -
> -            s.query( ProjectVersion ).\
> -                    filter( ProjectVersion.builddir == builddir ).delete()
> +                t = Thread(target=rmtree, args=[builddir])
> +                t.start()
>  
> -            s.query( ProjectFile ).\
> -                    filter( ProjectFile.builddir == builddir ).delete()
> +            s.query(ProjectVersion).\
> +                filter(ProjectVersion.builddir == builddir).delete()
>  
> -            s.delete (p)
> +            s.query(ProjectFile).\
> +                filter(ProjectFile.builddir == builddir).delete()
>  
> +            s.delete(p)
>  
> -    def reset_project (self, builddir, clean):
> +    def reset_project(self, builddir, clean):
>          # Throws: ElbeDBError, OSError
>          with session_scope(self.session) as s:
>              try:
> -                p = s.query (Project).filter(Project.builddir == builddir).one()
> +                p = s.query(Project).filter(Project.builddir == builddir).one()
>              except NoResultFound:
>                  raise ElbeDBError(
> -                        "project %s is not registered in the database" %
> -                        builddir )
> +                    "project %s is not registered in the database" %
> +                    builddir)
>  
> -            sourcexmlpath = os.path.join( builddir, "source.xml" )
> -            if os.path.exists( sourcexmlpath ):
> +            sourcexmlpath = os.path.join(builddir, "source.xml")
> +            if os.path.exists(sourcexmlpath):
>                  p.status = "needs_build"
>              else:
>                  p.status = "empty_project"
>  
>          if clean:
> -            targetpath = os.path.join( builddir, "target" )
> -            if os.path.exists( targetpath ):
> -                rmtree( targetpath )      # OSError
> +            targetpath = os.path.join(builddir, "target")
> +            if os.path.exists(targetpath):
> +                rmtree(targetpath)      # OSError
>  
> -            chrootpath = os.path.join( builddir, "chroot" )
> -            if os.path.exists( chrootpath ):
> -                rmtree( chrootpath )      # OSError
> +            chrootpath = os.path.join(builddir, "chroot")
> +            if os.path.exists(chrootpath):
> +                rmtree(chrootpath)      # OSError
>  
> -
> -    def save_project (self, ep):
> +    def save_project(self, ep):
>          # TODO: Recover in case writing the XML file or commiting the
>          # database entry fails
>          project = None
>  
>          with session_scope(self.session) as s:
>              try:
> -                project = s.query (Project).filter (
> -                            Project.builddir == ep.builddir).one ()
> +                project = s.query(Project).filter(
> +                    Project.builddir == ep.builddir).one()
>              except NoResultFound:
>                  pass
>  
> -            if not os.path.exists (ep.builddir):
> -                os.makedirs (ep.builddir)
> -            if not os.path.isfile (ep.builddir + "/source.xml") and ep.xml:
> -                ep.xml.xml.write (ep.builddir + "/source.xml")
> +            if not os.path.exists(ep.builddir):
> +                os.makedirs(ep.builddir)
> +            if not os.path.isfile(ep.builddir + "/source.xml") and ep.xml:
> +                ep.xml.xml.write(ep.builddir + "/source.xml")
>  
> -            with open (ep.builddir + "/source.xml") as xml_file:
> -                xml_str  = xml_file.read ()
> +            with open(ep.builddir + "/source.xml") as xml_file:
> +                xml_str = xml_file.read()
>                  if not project:
> -                    project = Project (name = ep.xml.text ("project/name"),
> -                                       version = ep.xml.text ("project/version"),
> -                                       builddir = ep.builddir,
> -                                       xml = xml_str)
> -                    s.add (project)
> +                    project = Project(name=ep.xml.text("project/name"),
> +                                      version=ep.xml.text("project/version"),
> +                                      builddir=ep.builddir,
> +                                      xml=xml_str)
> +                    s.add(project)
>                  else:
> -                    project.edit = datetime.utcnow ()
> -                    project.version = ep.xml.text ("project/version")
> +                    project.edit = datetime.utcnow()
> +                    project.version = ep.xml.text("project/version")
>                      project.xml = xml_str
>  
> -
> -    def load_project (self, builddir, logpath = None, url_validation=ValidationMode.CHECK_ALL):
> +    def load_project(
> +            self,
> +            builddir,
> +            logpath=None,
> +            url_validation=ValidationMode.CHECK_ALL):
>  
>          # pass exceptions if hook-scripts can't be loaded (they're optional)
>          postbuild_file = None
>          try:
> -            postbuild = self.get_project_file (builddir, 'postbuild.sh')
> +            postbuild = self.get_project_file(builddir, 'postbuild.sh')
>              postbuild_file = postbuild.builddir + '/' + postbuild.name
>          except ElbeDBError as e:
>              pass
>  
>          presh_file = None
>          try:
> -            presh_handle = self.get_project_file (builddir, 'pre.sh')
> +            presh_handle = self.get_project_file(builddir, 'pre.sh')
>              presh_file = presh_handle.builddir + '/' + presh_handle.name
>          except ElbeDBError as e:
>              pass
>  
>          postsh_file = None
>          try:
> -            postsh_handle = self.get_project_file (builddir, 'post.sh')
> +            postsh_handle = self.get_project_file(builddir, 'post.sh')
>              postsh_file = postsh_handle.builddir + '/' + postsh_handle.name
>          except ElbeDBError as e:
>              pass
>  
>          savesh_file = None
>          try:
> -            savesh_handle = self.get_project_file (builddir, 'save.sh')
> +            savesh_handle = self.get_project_file(builddir, 'save.sh')
>              savesh_file = savesh_handle.builddir + '/' + savesh_handle.name
>          except ElbeDBError as e:
>              pass
> @@ -440,433 +450,474 @@ class ElbeDB(object):
>          with session_scope(self.session) as s:
>              try:
>                  p = s.query(Project). \
> -                        filter(Project.builddir == builddir).one()
> -
> -                return ElbeProject (p.builddir, name=p.name, logpath=logpath,
> -                        postbuild_file=postbuild_file,
> -                        presh_file=presh_file,
> -                        postsh_file=postsh_file,
> -                        savesh_file=savesh_file,
> -                        url_validation=url_validation)
> +                    filter(Project.builddir == builddir).one()
> +
> +                return ElbeProject(p.builddir, name=p.name, logpath=logpath,
> +                                   postbuild_file=postbuild_file,
> +                                   presh_file=presh_file,
> +                                   postsh_file=postsh_file,
> +                                   savesh_file=savesh_file,
> +                                   url_validation=url_validation)
>              except NoResultFound:
>                  raise ElbeDBError(
> -                        "project %s is not registered in the database" %
> -                        builddir )
> -
> +                    "project %s is not registered in the database" %
> +                    builddir)
>  
> -    def set_busy (self, builddir, allowed_status):
> -        assert not "busy" in allowed_status
> +    def set_busy(self, builddir, allowed_status):
> +        assert "busy" not in allowed_status
>          with session_scope(self.session) as s:
>              try:
> -                p = s.query( Project ).with_lockmode( 'update' ). \
> -                        filter( Project.builddir == builddir ).one()
> +                p = s.query(Project).with_lockmode('update'). \
> +                    filter(Project.builddir == builddir).one()
>              except NoResultFound:
>                  raise ElbeDBError(
> -                        "project %s is not registered in the database" %
> -                        builddir )
> +                    "project %s is not registered in the database" %
> +                    builddir)
>  
> -            if not p.status in allowed_status:
> -                raise ElbeDBError( "project: " + builddir +
> -                        " set_busy: invalid status: " + p.status )
> +            if p.status not in allowed_status:
> +                raise ElbeDBError("project: " + builddir +
> +                                  " set_busy: invalid status: " + p.status)
>  
>              old_status = p.status
>              p.status = "busy"
>              return old_status
>  
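The set_busy/reset_busy pair implements a status-based lock guarded by a row lock (`with_lockmode('update')`; newer SQLAlchemy spells this `with_for_update()`, since `with_lockmode` is deprecated as of 0.9). The state machine, sketched without the database (names are mine, not from the patch):

```python
# Terminal states accepted by reset_busy, per the patch.
ALLOWED_DONE = ("has_changes", "build_done", "build_failed")


class ProjectState:
    """In-memory sketch of the busy-flag transitions in ElbeDB."""

    def __init__(self, status="needs_build"):
        self.status = status

    def set_busy(self, allowed_status):
        # "busy" may never be a legal starting state for set_busy.
        assert "busy" not in allowed_status
        if self.status not in allowed_status:
            raise ValueError("set_busy: invalid status: " + self.status)
        old, self.status = self.status, "busy"
        return old

    def reset_busy(self, new_status):
        assert new_status in ALLOWED_DONE
        if self.status != "busy":
            raise ValueError("reset_busy: invalid status: " + self.status)
        self.status = new_status
```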
> -
> -    def is_busy (self, builddir):
> +    def is_busy(self, builddir):
>          with session_scope(self.session) as s:
>              try:
> -                p = s.query( Project ).filter( Project.builddir == builddir ). \
> -                        one()
> +                p = s.query(Project).filter(Project.builddir == builddir). \
> +                    one()
>              except NoResultFound:
>                  raise ElbeDBError(
> -                        "project %s is not registered in the database" %
> -                        builddir )
> +                    "project %s is not registered in the database" %
> +                    builddir)
>  
>              if p.status == "busy":
>                  return True
>              else:
>                  return False
>  
> -
> -    def reset_busy (self, builddir, new_status):
> +    def reset_busy(self, builddir, new_status):
>          assert new_status == "has_changes" or \
> -               new_status == "build_done" or \
> -               new_status == "build_failed"
> +            new_status == "build_done" or \
> +            new_status == "build_failed"
>  
>          with session_scope(self.session) as s:
>              try:
> -                p = s.query( Project ).with_lockmode( 'update' ). \
> -                        filter( Project.builddir == builddir ).one()
> +                p = s.query(Project).with_lockmode('update'). \
> +                    filter(Project.builddir == builddir).one()
>              except NoResultFound:
> -                raise ElbeDBError( "project %s is not registered in the database" %
> -                        builddir )
> +                raise ElbeDBError(
> +                    "project %s is not registered in the database" %
> +                    builddir)
>  
>              if p.status != "busy":
> -                raise ElbeDBError( "project: " + builddir + " reset_busy: invalid status: " +
> -                        p.status )
> +                raise ElbeDBError(
> +                    "project: " +
> +                    builddir +
> +                    " reset_busy: invalid status: " +
> +                    p.status)
>  
>              p.status = new_status
>  
> -
> -    def has_changes (self, builddir):
> +    def has_changes(self, builddir):
>          with session_scope(self.session) as s:
>              try:
> -                p = s.query( Project ).filter( Project.builddir == builddir ). \
> -                        one()
> +                p = s.query(Project).filter(Project.builddir == builddir). \
> +                    one()
>              except NoResultFound:
>                  raise ElbeDBError(
> -                        "project %s is not registered in the database" %
> -                        builddir )
> +                    "project %s is not registered in the database" %
> +                    builddir)
>  
>              if p.status == "has_changes":
>                  return True
>              else:
>                  return False
>  
> -
> -    def get_owner_id (self, builddir):
> +    def get_owner_id(self, builddir):
>          with session_scope(self.session) as s:
>              try:
> -                p = s.query( Project ).filter( Project.builddir == builddir ).\
> -                        one()
> +                p = s.query(Project).filter(Project.builddir == builddir).\
> +                    one()
>              except NoResultFound:
> -                raise ElbeDBError( "project %s is not registered in the database" %
> -                        builddir )
> +                raise ElbeDBError(
> +                    "project %s is not registered in the database" %
> +                    builddir)
>  
>              if p.owner_id is None:
>                  return None
>              else:
>                  return int(p.owner_id)
>  
> -
> -    def set_project_version( self, builddir, new_version = None):
> +    def set_project_version(self, builddir, new_version=None):
>          if new_version == "":
> -            raise ElbeDBError( "version number must not be empty" )
> +            raise ElbeDBError("version number must not be empty")
>  
>          if not re.match("^[A-Za-z0-9_.-]{1,25}$", new_version):
> -            raise ElbeDBError( "version number must contain valid characters [A-Za-z0-9_-.]" )
> +            raise ElbeDBError(
> +                "version number must contain valid characters [A-Za-z0-9_-.]")
>  
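Unrelated to the reformat, but the hunk above exposes a latent bug: new_version defaults to None, and `re.match(pattern, None)` raises TypeError, so the `new_version is not None` branch further down is unreachable when the argument is omitted. (The error message also says `[A-Za-z0-9_-.]` while the regex is `[A-Za-z0-9_.-]`.) A guard along these lines would keep the intended "None keeps the current version" behavior — hypothetical helper, not part of the patch:

```python
import re

VERSION_RE = re.compile(r"^[A-Za-z0-9_.-]{1,25}$")


def validate_version(new_version):
    """Return True if new_version may be written to source.xml.

    None means "keep the current version" and must skip the regex
    check entirely, since re.match(pattern, None) raises TypeError.
    """
    if new_version is None:
        return True
    if new_version == "":
        raise ValueError("version number must not be empty")
    if not VERSION_RE.match(new_version):
        raise ValueError(
            "version number must contain valid characters [A-Za-z0-9_.-]")
    return True
```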
>          with session_scope(self.session) as s:
>              try:
> -                p = s.query( Project ).filter( Project.builddir == builddir ).\
> -                        one()
> +                p = s.query(Project).filter(Project.builddir == builddir).\
> +                    one()
>              except NoResultFound:
> -                raise ElbeDBError( "project %s is not registered in the database" %
> -                        builddir )
> +                raise ElbeDBError(
> +                    "project %s is not registered in the database" %
> +                    builddir)
>  
>              if p.status == "empty_project" or p.status == "busy":
> -                raise ElbeDBError( "project: " + builddir +
> -                        " set_project_version: invalid status: " + p.status )
> -
> -            xmlpath = os.path.join( builddir, "source.xml" )
> -            xml = ElbeXML( xmlpath, url_validation=ValidationMode.NO_CHECK )
> +                raise ElbeDBError(
> +                    "project: " +
> +                    builddir +
> +                    " set_project_version: invalid status: " +
> +                    p.status)
>  
> -            if not new_version is None:
> -                xml.node( "/project/version" ).set_text( new_version )
> -                xml.xml.write( xmlpath )
> +            xmlpath = os.path.join(builddir, "source.xml")
> +            xml = ElbeXML(xmlpath, url_validation=ValidationMode.NO_CHECK)
>  
> -            p.version = xml.text( "/project/version" )
> +            if new_version is not None:
> +                xml.node("/project/version").set_text(new_version)
> +                xml.xml.write(xmlpath)
>  
> +            p.version = xml.text("/project/version")
>  
>      ### Version management ###
>  
> -    def list_project_versions (self, builddir):
> +    def list_project_versions(self, builddir):
>          with session_scope(self.session) as s:
>              try:
> -                p = s.query( Project ).filter( Project.builddir == builddir).\
> -                        one()
> +                p = s.query(Project).filter(Project.builddir == builddir).\
> +                    one()
>              except NoResultFound:
> -                raise ElbeDBError( "project %s is not registered in the database" %
> -                        builddir )
> +                raise ElbeDBError(
> +                    "project %s is not registered in the database" %
> +                    builddir)
>  
>              return [ProjectVersionData(v) for v in p.versions]
>  
> -    def save_version (self, builddir, description = None):
> +    def save_version(self, builddir, description=None):
>          with session_scope(self.session) as s:
>              try:
> -                p = s.query( Project ).filter( Project.builddir == builddir).\
> -                        one()
> +                p = s.query(Project).filter(Project.builddir == builddir).\
> +                    one()
>              except NoResultFound:
> -                raise ElbeDBError( "project %s is not registered in the database" %
> -                        builddir )
> +                raise ElbeDBError(
> +                    "project %s is not registered in the database" %
> +                    builddir)
>  
>              assert p.status == "busy"
>  
> -            sourcexmlpath = os.path.join( builddir, "source.xml" )
> -            sourcexml = ElbeXML( sourcexmlpath, url_validation=ValidationMode.NO_CHECK )
> +            sourcexmlpath = os.path.join(builddir, "source.xml")
> +            sourcexml = ElbeXML(sourcexmlpath,
> +                                url_validation=ValidationMode.NO_CHECK)
>  
> -            version = sourcexml.text( "project/version" )
> -            if s.query( ProjectVersion ).\
> -                    filter( ProjectVersion.builddir == builddir ).\
> -                    filter( ProjectVersion.version == version ).count() > 0:
> +            version = sourcexml.text("project/version")
> +            if s.query(ProjectVersion).\
> +                    filter(ProjectVersion.builddir == builddir).\
> +                    filter(ProjectVersion.version == version).count() > 0:
>                  raise ElbeDBError(
> -                        "Version %s already exists for project in %s, "
> -                        "please change version number first" %\
> -                                (version, builddir)
> -                        )
> -
> -            versionxmlname = get_versioned_filename( p.name, version,
> -                    ".version.xml" )
> -            versionxmlpath = os.path.join( builddir, versionxmlname )
> -            copyfile( sourcexmlpath, versionxmlpath )
> -
> -            v = ProjectVersion( builddir = builddir,
> -                                version = version,
> -                                description = description )
> +                    "Version %s already exists for project in %s, "
> +                    "please change version number first" %
> +                    (version, builddir)
> +                )
> +
> +            versionxmlname = get_versioned_filename(p.name, version,
> +                                                    ".version.xml")
> +            versionxmlpath = os.path.join(builddir, versionxmlname)
> +            copyfile(sourcexmlpath, versionxmlpath)
> +
> +            v = ProjectVersion(builddir=builddir,
> +                               version=version,
> +                               description=description)
>              s.add(v)
>  
> -            self._update_project_file( s, builddir, versionxmlname,
> -                    "application/xml",
> -                    "source.xml for version %s" % version )
> +            self._update_project_file(s, builddir, versionxmlname,
> +                                      "application/xml",
> +                                      "source.xml for version %s" % version)
>  
> -    def set_version_description (self, builddir, version, description):
> +    def set_version_description(self, builddir, version, description):
>          with session_scope(self.session) as s:
>              try:
> -                v = s.query( ProjectVersion ).\
> -                        filter( ProjectVersion.builddir == builddir ).\
> -                        filter( ProjectVersion.version == version ).one()
> +                v = s.query(ProjectVersion).\
> +                    filter(ProjectVersion.builddir == builddir).\
> +                    filter(ProjectVersion.version == version).one()
>              except NoResultFound:
>                  raise ElbeDBError(
> -                        "no such project version: %s (version %s)" %
> -                        (builddir, version) )
> +                    "no such project version: %s (version %s)" %
> +                    (builddir, version))
>  
>              v.description = description
>  
> -    def checkout_version_xml (self, builddir, version):
> +    def checkout_version_xml(self, builddir, version):
>          with session_scope(self.session) as s:
>              try:
> -                v = s.query( ProjectVersion ).\
> -                        filter( ProjectVersion.builddir == builddir ).\
> -                        filter( ProjectVersion.version == version ).one()
> +                v = s.query(ProjectVersion).\
> +                    filter(ProjectVersion.builddir == builddir).\
> +                    filter(ProjectVersion.version == version).one()
>              except NoResultFound:
>                  raise ElbeDBError(
> -                        "no such project version: %s (version %s)" %
> -                        (builddir, version) )
> +                    "no such project version: %s (version %s)" %
> +                    (builddir, version))
>  
>              assert v.project.status == "busy"
>  
> -            sourcexmlpath = os.path.join( builddir, "source.xml" )
> -            versionxmlname = get_versioned_filename( v.project.name, version,
> -                    ".version.xml" )
> -            versionxmlpath = os.path.join( builddir, versionxmlname )
> +            sourcexmlpath = os.path.join(builddir, "source.xml")
> +            versionxmlname = get_versioned_filename(v.project.name, version,
> +                                                    ".version.xml")
> +            versionxmlpath = os.path.join(builddir, versionxmlname)
>  
> -            copyfile( versionxmlpath, sourcexmlpath )
> +            copyfile(versionxmlpath, sourcexmlpath)
>              v.project.version = version
>  
> -    def del_version (self, builddir, version, force=False):
> +    def del_version(self, builddir, version, force=False):
>          with session_scope(self.session) as s:
>              try:
> -                v = s.query( ProjectVersion ).\
> -                        filter( ProjectVersion.builddir == builddir ).\
> -                        filter( ProjectVersion.version == version ).one()
> +                v = s.query(ProjectVersion).\
> +                    filter(ProjectVersion.builddir == builddir).\
> +                    filter(ProjectVersion.version == version).one()
>              except NoResultFound:
>                  raise ElbeDBError(
> -                        "no such project version: %s (version %s)" %
> -                        (builddir, version) )
> +                    "no such project version: %s (version %s)" %
> +                    (builddir, version))
>  
>              if not force:
>                  if v.project.status == "busy":
>                      raise ElbeDBError(
> -                            "cannot delete version of project in %s while "
> -                            "it is busy" % builddir )
> +                        "cannot delete version of project in %s while "
> +                        "it is busy" % builddir)
>  
> -            xmlname = get_versioned_filename( v.project.name, version,
> -                    ".version.xml" )
> -            xmlpath = os.path.join( builddir, xmlname )
> -            os.remove( xmlpath )
> -            s.delete( v )
> +            xmlname = get_versioned_filename(v.project.name, version,
> +                                             ".version.xml")
> +            xmlpath = os.path.join(builddir, xmlname)
> +            os.remove(xmlpath)
> +            s.delete(v)
>  
> -            s.query( ProjectFile ).filter( ProjectFile.builddir == builddir ).\
> -                    filter( ProjectFile.name == xmlname ).delete()
> +            s.query(ProjectFile).filter(ProjectFile.builddir == builddir).\
> +                filter(ProjectFile.name == xmlname).delete()
>  
> -    def get_version_xml (self, builddir, version):
> +    def get_version_xml(self, builddir, version):
>          with session_scope(self.session) as s:
>              try:
> -                v = s.query( ProjectVersion ).\
> -                        filter( ProjectVersion.builddir == builddir ).\
> -                        filter( ProjectVersion.version == version ).one()
> +                v = s.query(ProjectVersion).\
> +                    filter(ProjectVersion.builddir == builddir).\
> +                    filter(ProjectVersion.version == version).one()
>              except NoResultFound:
> -                raise ElbeDBError( "no such project version: %s (version %s)" %
> -                        (builddir, version) )
> -
> -            xmlname = get_versioned_filename( v.project.name, version,
> -                    ".version.xml" )
> -            return os.path.join( builddir, xmlname )
> +                raise ElbeDBError("no such project version: %s (version %s)" %
> +                                  (builddir, version))
>  
> +            xmlname = get_versioned_filename(v.project.name, version,
> +                                             ".version.xml")
> +            return os.path.join(builddir, xmlname)
>  
>      ### File management ###
>  
> -    def get_project_files (self, builddir):
> +    def get_project_files(self, builddir):
>          # Can throw: ElbeDBError
>          with session_scope(self.session) as s:
>              try:
> -                p = s.query (Project).filter(Project.builddir == builddir).one()
> +                p = s.query(Project).filter(Project.builddir == builddir).one()
>              except NoResultFound:
>                  raise ElbeDBError(
> -                        "project %s is not registered in the database" %
> -                        builddir )
> +                    "project %s is not registered in the database" %
> +                    builddir)
>  
>              if p.status == "busy":
> -                raise ElbeDBError( "project: " + builddir +
> -                        " get_project_files: invalid status: " + p.status )
> +                raise ElbeDBError(
> +                    "project: " +
> +                    builddir +
> +                    " get_project_files: invalid status: " +
> +                    p.status)
>  
> -            return [ ProjectFileData(f) for f in p.files ]
> +            return [ProjectFileData(f) for f in p.files]
>  
> -    def get_project_file (self, builddir, name):
> +    def get_project_file(self, builddir, name):
>          with session_scope(self.session) as s:
>              try:
> -                p = s.query (Project).filter(Project.builddir == builddir).one()
> +                p = s.query(Project).filter(Project.builddir == builddir).one()
>              except NoResultFound:
>                  raise ElbeDBError(
> -                        "project %s is not registered in the database" %
> -                        builddir )
> +                    "project %s is not registered in the database" %
> +                    builddir)
>  
>              if p.status == "busy":
> -                raise ElbeDBError( "project: " + builddir +
> -                        " get_project_file: invalid status: " + p.status )
> +                raise ElbeDBError(
> +                    "project: " +
> +                    builddir +
> +                    " get_project_file: invalid status: " +
> +                    p.status)
>  
>              try:
> -                f = s.query( ProjectFile ).\
> -                        filter( ProjectFile.builddir == builddir ).\
> -                        filter( ProjectFile.name == name ).one()
> +                f = s.query(ProjectFile).\
> +                    filter(ProjectFile.builddir == builddir).\
> +                    filter(ProjectFile.name == name).one()
>              except NoResultFound:
>                  raise ElbeDBError(
> -                        "no file %s in project %s registered" %
> -                        ( name, builddir ) )
> +                    "no file %s in project %s registered" %
> +                    (name, builddir))
>  
>              return ProjectFileData(f)
>  
> -    def add_project_file (self, builddir, name, mime_type, description = None):
> +    def add_project_file(self, builddir, name, mime_type, description=None):
>          with session_scope(self.session) as s:
>              try:
> -                p = s.query( Project ).filter( Project.builddir == builddir).\
> -                        one()
> +                p = s.query(Project).filter(Project.builddir == builddir).\
> +                    one()
>              except NoResultFound:
>                  raise ElbeDBError(
> -                        "project %s is not registered in the database" %
> -                        builddir )
> +                    "project %s is not registered in the database" %
> +                    builddir)
>  
> -            self._update_project_file( s, builddir, name, mime_type,
> -                    description )
> +            self._update_project_file(s, builddir, name, mime_type,
> +                                      description)
>  
> -    def update_project_files (self, ep):
> +    def update_project_files(self, ep):
>          with session_scope(self.session) as s:
>              try:
> -                p = s.query( Project ).\
> -                        filter( Project.builddir == ep.builddir).one()
> +                p = s.query(Project).\
> +                    filter(Project.builddir == ep.builddir).one()
>              except NoResultFound:
>                  raise ElbeDBError(
> -                        "project %s is not registered in the database" %
> -                        ep.builddir )
> +                    "project %s is not registered in the database" %
> +                    ep.builddir)
>  
>              # Delete no longer existing files from the database
> -            files = s.query( ProjectFile ).\
> -                    filter( ProjectFile.builddir == ep.builddir ).all()
> +            files = s.query(ProjectFile).\
> +                filter(ProjectFile.builddir == ep.builddir).all()
>              for f in files:
> -                if not os.path.isfile( os.path.join( ep.builddir, f.name ) ):
> -                    s.delete( f )
> +                if not os.path.isfile(os.path.join(ep.builddir, f.name)):
> +                    s.delete(f)
>  
>              # Add images from the given ElbeProject
>              if ep.targetfs:
>  
> -                images = set( ep.targetfs.images or [] )
> +                images = set(ep.targetfs.images or [])
>                  for img in images:
> -                    self._update_project_file( s, p.builddir, img,
> -                            "application/octet-stream", "Image" )
> +                    self._update_project_file(
> +                        s, p.builddir, img, "application/octet-stream", "Image")
>  
>              # Add other generated files
> -            self._update_project_file( s, p.builddir, "source.xml",
> -                    "application/xml", "Current source.xml of the project" )
> -            self._update_project_file( s, p.builddir, "licence.txt",
> -                    "text/plain; charset=utf-8", "License file" )
> -            self._update_project_file( s, p.builddir, "licence.xml",
> -                    "application/xml", "xml License file" )
> -            self._update_project_file( s, p.builddir, "validation.txt",
> -                    "text/plain; charset=utf-8", "Package list validation result" )
> -            self._update_project_file( s, p.builddir, "elbe-report.txt",
> -                    "text/plain; charset=utf-8", "Report" )
> -            self._update_project_file( s, p.builddir, "log.txt",
> -                    "text/plain; charset=utf-8", "Log file" )
> -            self._update_project_file( s, p.builddir, "sysroot.tar.xz",
> -                    "application/x-xz-compressed-tar", "sysroot for cross-toolchains" )
> -            self._update_project_file( s, p.builddir, "chroot.tar.xz",
> -                    "application/x-xz-compressed-tar", "chroot for 'native' development" )
> +            self._update_project_file(
> +                s,
> +                p.builddir,
> +                "source.xml",
> +                "application/xml",
> +                "Current source.xml of the project")
> +            self._update_project_file(
> +                s,
> +                p.builddir,
> +                "licence.txt",
> +                "text/plain; charset=utf-8",
> +                "License file")
> +            self._update_project_file(s, p.builddir, "licence.xml",
> +                                      "application/xml", "xml License file")
> +            self._update_project_file(
> +                s,
> +                p.builddir,
> +                "validation.txt",
> +                "text/plain; charset=utf-8",
> +                "Package list validation result")
> +            self._update_project_file(s, p.builddir, "elbe-report.txt",
> +                                      "text/plain; charset=utf-8", "Report")
> +            self._update_project_file(s, p.builddir, "log.txt",
> +                                      "text/plain; charset=utf-8", "Log file")
> +            self._update_project_file(
> +                s,
> +                p.builddir,
> +                "sysroot.tar.xz",
> +                "application/x-xz-compressed-tar",
> +                "sysroot for cross-toolchains")
> +            self._update_project_file(
> +                s,
> +                p.builddir,
> +                "chroot.tar.xz",
> +                "application/x-xz-compressed-tar",
> +                "chroot for 'native' development")
>  
>              # Add Repository iso images
>              for img in ep.repo_images:
>                  name = os.path.basename(img)
> -                self._update_project_file( s, p.builddir, name,
> -                        "application/octet-stream", "Repository IsoImage" )
> +                self._update_project_file(
> +                    s,
> +                    p.builddir,
> +                    name,
> +                    "application/octet-stream",
> +                    "Repository IsoImage")
>  
>              # Scan pbuilder/build directory if that exists
> -            pbresult_path = os.path.join (p.builddir, "pbuilder", "result")
> -            if os.path.isdir (pbresult_path):
> -                for f in os.listdir (pbresult_path):
> -                    self._update_project_file (s, p.builddir, os.path.join ("pbuilder", "result", f),
> -                            "application/octet-stream", "Pbuilder artifact")
> -
> -
> -    def _update_project_file (self, s, builddir, name, mime_type, description):
> -        filename = os.path.join( builddir, name )
> +            pbresult_path = os.path.join(p.builddir, "pbuilder", "result")
> +            if os.path.isdir(pbresult_path):
> +                for f in os.listdir(pbresult_path):
> +                    self._update_project_file(
> +                        s,
> +                        p.builddir,
> +                        os.path.join(
> +                            "pbuilder",
> +                            "result",
> +                            f),
> +                        "application/octet-stream",
> +                        "Pbuilder artifact")
> +
> +    def _update_project_file(self, s, builddir, name, mime_type, description):
> +        filename = os.path.join(builddir, name)
>          try:
> -            f = s.query( ProjectFile ).\
> -                    filter( ProjectFile.builddir == builddir ).\
> -                    filter( ProjectFile.name == name).one()
> +            f = s.query(ProjectFile).\
> +                filter(ProjectFile.builddir == builddir).\
> +                filter(ProjectFile.name == name).one()
>          except NoResultFound:
> -            if os.path.isfile( os.path.join( builddir, name ) ):
> -                f = ProjectFile( builddir = builddir,
> -                        name = name,
> -                        mime_type = mime_type,
> -                        description = description )
> -                s.add( f )
> +            if os.path.isfile(os.path.join(builddir, name)):
> +                f = ProjectFile(builddir=builddir,
> +                                name=name,
> +                                mime_type=mime_type,
> +                                description=description)
> +                s.add(f)
>                  return filename
>              return None
>  
> -        if os.path.isfile( filename ):
> +        if os.path.isfile(filename):
>              f.mime_type = mime_type
>              f.description = description
>          else:
> -            s.delete( f )
> +            s.delete(f)
>              None
>  
>          return filename
>  
>      ### User management ###
>  
> -    def add_user (self, name, fullname, password, email, admin):
> +    def add_user(self, name, fullname, password, email, admin):
>          # encrypt is deprecated but hash is not available in jessie
>          try:
> -            hash = pbkdf2_sha512.hash( password )
> +            hash = pbkdf2_sha512.hash(password)
>          except AttributeError:
> -            hash = pbkdf2_sha512.encrypt( password )
> +            hash = pbkdf2_sha512.encrypt(password)
>  
> -        u = User( name = name,
> -                fullname = fullname,
> -                pwhash = hash,
> -                email = email,
> -                admin = admin )
> +        u = User(name=name,
> +                 fullname=fullname,
> +                 pwhash=hash,
> +                 email=email,
> +                 admin=admin)
>  
>          with session_scope(self.session) as s:
>              if s.query(User).filter(User.name == name).count() > 0:
> -                raise ElbeDBError( "user %s already exists in the database"  %
> -                        name )
> -            s.add( u )
> +                raise ElbeDBError("user %s already exists in the database" %
> +                                  name)
> +            s.add(u)
>  
> -    def modify_user (self, userid, name, fullname, email, admin,
> -            password = None):
> +    def modify_user(self, userid, name, fullname, email, admin,
> +                    password=None):
>          with session_scope(self.session) as s:
>              try:
> -                u = s.query( User ).filter( User.id == userid ).one()
> +                u = s.query(User).filter(User.id == userid).one()
>              except NoResultFound:
> -                raise ElbeDBError( "no user with id %i" % userid )
> +                raise ElbeDBError("no user with id %i" % userid)
>  
>              # If a user name change is requested, check for uniqueness
>              if name != u.name:
>                  if s.query(User).filter(User.name == name).count() > 0:
>                      raise ElbeDBError(
> -                            "user %s already exists in the database" % name )
> +                        "user %s already exists in the database" % name)
>  
>              u.name = name
>              u.fullname = fullname
> @@ -874,35 +925,35 @@ class ElbeDB(object):
>              u.admin = admin
>  
>              # Update password only if given
> -            if not password is None:
> +            if password is not None:
>                  # encrypt is deprecated but hash is not available in jessie
>                  try:
> -                    u.pwhash = pbkdf2_sha512.hash( password )
> +                    u.pwhash = pbkdf2_sha512.hash(password)
>                  except AttributeError:
> -                    u.pwhash = pbkdf2_sha512.encrypt( password )
> +                    u.pwhash = pbkdf2_sha512.encrypt(password)
>  
> -    def del_user (self, userid):
> +    def del_user(self, userid):
>          with session_scope(self.session) as s:
>              try:
> -                u = s.query( User ).filter( User.id == userid ).one()
> +                u = s.query(User).filter(User.id == userid).one()
>              except NoResultFound:
> -                raise ElbeDBError( "no user with id %i" % userid )
> +                raise ElbeDBError("no user with id %i" % userid)
>  
>              # Get a list of all projects owned by the user to delete. Set their
>              # owner to nobody and return them to the caller later, so it can
>              # decide whether to keep the projects or delete them.
> -            orphaned_projects = s.query( Project ).\
> -                    filter( Project.owner_id == userid ).all()
> +            orphaned_projects = s.query(Project).\
> +                filter(Project.owner_id == userid).all()
>              projectlist = []
>              for p in orphaned_projects:
>                  p.owner_id = None
> -                projectlist.append( ProjectData(p) )
> +                projectlist.append(ProjectData(p))
>  
>              # Now delete the user and return the list
> -            s.delete( u )
> +            s.delete(u)
>              return projectlist
>  
> -    def validate_login (self, name, password):
> +    def validate_login(self, name, password):
>          with session_scope(self.session) as s:
>              # Find the user with the given name
>              try:
> @@ -911,13 +962,13 @@ class ElbeDB(object):
>                  raise InvalidLogin()
>  
>              # Check password, throw an exception if invalid
> -            if not pbkdf2_sha512.verify( password, u.pwhash ):
> +            if not pbkdf2_sha512.verify(password, u.pwhash):
>                  raise InvalidLogin()
>  
>              # Everything good, now return the user id to the caller
>              return int(u.id)
>  
> -    def is_admin (self, userid):
> +    def is_admin(self, userid):
>          with session_scope(self.session) as s:
>              try:
>                  u = s.query(User).filter(User.id == userid).one()
> @@ -926,39 +977,38 @@ class ElbeDB(object):
>  
>              return bool(u.admin)
>  
> -    def get_username (self, userid):
> +    def get_username(self, userid):
>          with session_scope(self.session) as s:
>              try:
>                  u = s.query(User).filter(User.id == userid).one()
>              except NoResultFound:
> -                raise ElbeDBError( "no user with id %i" % userid)
> +                raise ElbeDBError("no user with id %i" % userid)
>  
>              return str(u.name)
>  
> -    def get_user_data (self, userid):
> +    def get_user_data(self, userid):
>          with session_scope(self.session) as s:
>              try:
>                  u = s.query(User).filter(User.id == userid).one()
>              except NoResultFound:
> -                raise ElbeDBError( "no user with id %i" % userid)
> +                raise ElbeDBError("no user with id %i" % userid)
>  
>              return UserData(u)
>  
> -    def get_user_id (self, name):
> +    def get_user_id(self, name):
>          with session_scope(self.session) as s:
>              try:
>                  u = s.query(User).filter(User.name == name).one()
>              except NoResultFound:
> -                raise ElbeDBError( "no user with name %s" % name )
> +                raise ElbeDBError("no user with name %s" % name)
>  
>              return int(u.id)
>  
> -
>      @classmethod
> -    def init_db (cls, name, fullname, password, email, admin):
> -        if not os.path.exists (cls.db_path):
> +    def init_db(cls, name, fullname, password, email, admin):
> +        if not os.path.exists(cls.db_path):
>              try:
> -                os.makedirs (cls.db_path)
> +                os.makedirs(cls.db_path)
>              except OSError as e:
>                  print(str(e))
>                  return
> @@ -974,88 +1024,98 @@ class ElbeDB(object):
>  class User(Base):
>      __tablename__ = 'users'
>  
> -    id = Column (Integer, Sequence('article_aid_seq', start=1001, increment=1),
> -                 primary_key=True)
> +    id = Column(Integer, Sequence('article_aid_seq', start=1001, increment=1),
> +                primary_key=True)
>  
> -    name     = Column (String, unique=True)
> -    fullname = Column (String)
> -    pwhash   = Column (String)
> -    email    = Column (String)
> -    admin    = Column (Boolean)
> +    name = Column(String, unique=True)
> +    fullname = Column(String)
> +    pwhash = Column(String)
> +    email = Column(String)
> +    admin = Column(Boolean)
>      projects = relationship("Project", backref="owner")
>  
> +
>  class UserData (object):
> -    def __init__ (self, user):
> -        self.id         = int(user.id)
> -        self.name       = str(user.name)
> -        self.fullname   = str(user.fullname)
> -        self.email      = str(user.email)
> -        self.admin      = bool(user.admin)
> +    def __init__(self, user):
> +        self.id = int(user.id)
> +        self.name = str(user.name)
> +        self.fullname = str(user.fullname)
> +        self.email = str(user.email)
> +        self.admin = bool(user.admin)
>  
>  
>  class Project (Base):
>      __tablename__ = 'projects'
>  
> -    builddir = Column (String, primary_key=True)
> -    name     = Column (String)
> -    version  = Column (String)
> -    xml      = Column (String)
> -    status   = Column (String)
> -    edit     = Column (DateTime, default=datetime.utcnow)
> -    owner_id = Column (Integer, ForeignKey('users.id'))
> +    builddir = Column(String, primary_key=True)
> +    name = Column(String)
> +    version = Column(String)
> +    xml = Column(String)
> +    status = Column(String)
> +    edit = Column(DateTime, default=datetime.utcnow)
> +    owner_id = Column(Integer, ForeignKey('users.id'))
>      versions = relationship("ProjectVersion", backref="project")
> -    files    = relationship("ProjectFile", backref="project")
> +    files = relationship("ProjectFile", backref="project")
> +
>  
>  class ProjectData (object):
> -    def __init__ (self, project):
> -        self.builddir   = str(project.builddir)
> -        self.name       = str(project.name)
> -        self.version    = str(project.version)
> -        #self.xml        = str(project.xml) # omit, as long as not needed
> -        self.status     = str(project.status)
> -        self.edit       = datetime(project.edit.year, project.edit.month,
> -                            project.edit.day, project.edit.hour,
> -                            project.edit.minute, project.edit.second,
> -                            project.edit.microsecond, project.edit.tzinfo)
> +    def __init__(self, project):
> +        self.builddir = str(project.builddir)
> +        self.name = str(project.name)
> +        self.version = str(project.version)
> +        # self.xml        = str(project.xml) # omit, as long as not needed
> +        self.status = str(project.status)
> +        self.edit = datetime(project.edit.year, project.edit.month,
> +                             project.edit.day, project.edit.hour,
> +                             project.edit.minute, project.edit.second,
> +                             project.edit.microsecond, project.edit.tzinfo)
>  
>  
>  class ProjectVersion (Base):
>      __tablename__ = 'projectversions'
>  
> -    builddir        = Column (String, ForeignKey('projects.builddir'),
> -                              primary_key=True )
> -    version         = Column (String, primary_key=True)
> -    description     = Column (String)
> -    timestamp       = Column (DateTime, default=datetime.utcnow)
> +    builddir = Column(String, ForeignKey('projects.builddir'),
> +                      primary_key=True)
> +    version = Column(String, primary_key=True)
> +    description = Column(String)
> +    timestamp = Column(DateTime, default=datetime.utcnow)
> +
>  
>  class ProjectVersionData (object):
> -    def __init__ (self, pv):
> -        self.builddir       = str(pv.builddir)
> -        self.version        = str(pv.version)
> +    def __init__(self, pv):
> +        self.builddir = str(pv.builddir)
> +        self.version = str(pv.version)
>          if pv.description:
> -            self.description    = str(pv.description)
> +            self.description = str(pv.description)
>          else:
> -            self.description    = None
> -        self.timestamp       = datetime(pv.timestamp.year, pv.timestamp.month,
> -                                pv.timestamp.day, pv.timestamp.hour,
> -                                pv.timestamp.minute, pv.timestamp.second,
> -                                pv.timestamp.microsecond, pv.timestamp.tzinfo)
> +            self.description = None
> +        self.timestamp = datetime(
> +            pv.timestamp.year,
> +            pv.timestamp.month,
> +            pv.timestamp.day,
> +            pv.timestamp.hour,
> +            pv.timestamp.minute,
> +            pv.timestamp.second,
> +            pv.timestamp.microsecond,
> +            pv.timestamp.tzinfo)
> +
>  
>  class ProjectFile (Base):
>      __tablename__ = 'files'
>  
> -    name        = Column (String, primary_key=True)
> -    builddir    = Column (String, ForeignKey('projects.builddir'),
> -                          primary_key=True)
> -    mime_type   = Column (String, nullable=False)
> -    description = Column (String)
> +    name = Column(String, primary_key=True)
> +    builddir = Column(String, ForeignKey('projects.builddir'),
> +                      primary_key=True)
> +    mime_type = Column(String, nullable=False)
> +    description = Column(String)
> +
>  
>  class ProjectFileData (object):
> -    def __init__ (self, pf):
> -        self.name           = str(pf.name)
> -        self.builddir       = str(pf.builddir)
> -        self.mime_type      = str(pf.mime_type)
> +    def __init__(self, pf):
> +        self.name = str(pf.name)
> +        self.builddir = str(pf.builddir)
> +        self.mime_type = str(pf.mime_type)
>          if pf.description:
> -            self.description    = str(pf.description)
> +            self.description = str(pf.description)
>          else:
> -            self.description    = None
> +            self.description = None
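Since autopep8's output differs between releases, the exact version used should be recorded in the commit message so this run can be reproduced bit-for-bit. A sketch of how that could be captured (command names and flags are standard; the ignore list is the one from the patch description, the redirect targets are illustrative):

```shell
# Record the formatter version alongside the run, so reviewers can
# regenerate the exact same output (autopep8 results vary across releases).
autopep8 --version                       # paste this into the commit message
autopep8 -i -r -a -a -j 5 --ignore E721,W503 .
# Before/after statistics as quoted in the patch description:
pycodestyle --statistics -qq . > pep8-stats-after.txt || true
```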
> diff --git a/elbepack/dbaction.py b/elbepack/dbaction.py
> index 49e48a1c..097e6c5a 100644
> --- a/elbepack/dbaction.py
> +++ b/elbepack/dbaction.py
> @@ -12,6 +12,7 @@ from getpass import getpass
>  from shutil import copyfileobj
>  from elbepack.db import ElbeDB, ElbeDBError
>  
> +
>  class DbAction(object):
>  
>      actiondict = {}
> @@ -33,6 +34,7 @@ class DbAction(object):
>      def __init__(self, node):
>          self.node = node
>  
> +
>  class InitAction(DbAction):
>      tag = 'init'
>  
> @@ -40,21 +42,23 @@ class InitAction(DbAction):
>          DbAction.__init__(self, node)
>  
>      def execute(self, args):
> -        oparser = OptionParser (usage="usage: %prog db init [options]")
> -        oparser.add_option ("--name", dest="name", default="root")
> -        oparser.add_option ("--fullname", dest="fullname", default="Admin")
> -        oparser.add_option ("--password", dest="password", default="foo")
> -        oparser.add_option ("--email", dest="email", default="root at localhost")
> -        oparser.add_option ("--noadmin", dest="admin", default=True,
> -                action="store_false")
> +        oparser = OptionParser(usage="usage: %prog db init [options]")
> +        oparser.add_option("--name", dest="name", default="root")
> +        oparser.add_option("--fullname", dest="fullname", default="Admin")
> +        oparser.add_option("--password", dest="password", default="foo")
> +        oparser.add_option("--email", dest="email", default="root at localhost")
> +        oparser.add_option("--noadmin", dest="admin", default=True,
> +                           action="store_false")
>  
> -        (opt, arg) = oparser.parse_args (args)
> +        (opt, arg) = oparser.parse_args(args)
> +
> +        ElbeDB.init_db(opt.name, opt.fullname, opt.password,
> +                       opt.email, opt.admin)
>  
> -        ElbeDB.init_db (opt.name, opt.fullname, opt.password,
> -                        opt.email, opt.admin)
>  
>  DbAction.register(InitAction)
>  
> +
>  class AddUserAction(DbAction):
>      tag = 'add_user'
>  
> @@ -62,14 +66,15 @@ class AddUserAction(DbAction):
>          DbAction.__init__(self, node)
>  
>      def execute(self, args):
> -        oparser = OptionParser (usage="usage: %prog db add_user [options] <username>")
> -        oparser.add_option ("--fullname", dest="fullname")
> -        oparser.add_option ("--password", dest="password")
> -        oparser.add_option ("--email", dest="email")
> -        oparser.add_option ("--admin", dest="admin", default=False,
> -                action="store_true")
> +        oparser = OptionParser(
> +            usage="usage: %prog db add_user [options] <username>")
> +        oparser.add_option("--fullname", dest="fullname")
> +        oparser.add_option("--password", dest="password")
> +        oparser.add_option("--email", dest="email")
> +        oparser.add_option("--admin", dest="admin", default=False,
> +                           action="store_true")
>  
> -        (opt, arg) = oparser.parse_args (args)
> +        (opt, arg) = oparser.parse_args(args)
>  
>          if len(arg) != 1:
>              print("wrong number of arguments")
> @@ -82,10 +87,12 @@ class AddUserAction(DbAction):
>              password = opt.password
>  
>          db = ElbeDB()
> -        db.add_user( arg[0], opt.fullname, password, opt.email, opt.admin )
> +        db.add_user(arg[0], opt.fullname, password, opt.email, opt.admin)
> +
>  
>  DbAction.register(AddUserAction)
>  
> +
>  class DelUserAction(DbAction):
>      tag = 'del_user'
>  
> @@ -93,13 +100,14 @@ class DelUserAction(DbAction):
>          DbAction.__init__(self, node)
>  
>      def execute(self, args):
> -        oparser = OptionParser (usage="usage: %prog db del_user [options] <userid>")
> -        oparser.add_option ("--delete-projects", dest="delete_projects",
> -                default=False, action="store_true")
> -        oparser.add_option ("--quiet", dest="quiet",
> -                default=False, action="store_true")
> +        oparser = OptionParser(
> +            usage="usage: %prog db del_user [options] <userid>")
> +        oparser.add_option("--delete-projects", dest="delete_projects",
> +                           default=False, action="store_true")
> +        oparser.add_option("--quiet", dest="quiet",
> +                           default=False, action="store_true")
>  
> -        (opt, arg) = oparser.parse_args (args)
> +        (opt, arg) = oparser.parse_args(args)
>  
>          if len(arg) != 1:
>              print("usage: elbe db del_user <userid>")
> @@ -107,13 +115,13 @@ class DelUserAction(DbAction):
>  
>          try:
>              userid = int(arg[0])
> -        except:
> +        except BaseException:
>              print("userid must be an integer")
>              return
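A side note on the hunk above: autopep8 -aa rewrites the bare `except:` to `except BaseException:`, which is behaviourally identical and still traps SystemExit and KeyboardInterrupt. For parsing an integer, catching ValueError would be the precise fix; a sketch of that follow-up (not part of this patch, helper name is illustrative):

```python
def parse_userid(arg):
    # int() raises ValueError on malformed input, so that is the only
    # exception worth catching here; "except BaseException:" would also
    # swallow SystemExit and KeyboardInterrupt.
    try:
        return int(arg)
    except ValueError:
        print("userid must be an integer")
        return None
```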
>  
>          db = ElbeDB()
>  
> -        projects = db.del_user( userid )
> +        projects = db.del_user(userid)
>  
>          if projects:
>              if not opt.quiet:
> @@ -124,15 +132,19 @@ class DelUserAction(DbAction):
>  
>          for p in projects:
>              if not opt.quiet:
> -                print("%s: %s [%s] %s" % (p.builddir, p.name, p.version, p.edit))
> +                print(
> +                    "%s: %s [%s] %s" %
> +                    (p.builddir, p.name, p.version, p.edit))
>              if opt.delete_projects:
>                  try:
> -                    db.del_project( p.builddir )
> +                    db.del_project(p.builddir)
>                  except ElbeDBError as e:
>                      print("  ==> %s " % str(e))
>  
> +
>  DbAction.register(DelUserAction)
>  
> +
>  class ListProjectsAction(DbAction):
>  
>      tag = 'list_projects'
> @@ -142,13 +154,15 @@ class ListProjectsAction(DbAction):
>  
>      def execute(self, args):
>          db = ElbeDB()
> -        projects = db.list_projects ()
> +        projects = db.list_projects()
>  
>          for p in projects:
>              print("%s: %s [%s] %s" % (p.builddir, p.name, p.version, p.edit))
>  
> +
>  DbAction.register(ListProjectsAction)
>  
> +
>  class ListUsersAction(DbAction):
>  
>      tag = 'list_users'
> @@ -158,13 +172,15 @@ class ListUsersAction(DbAction):
>  
>      def execute(self, args):
>          db = ElbeDB()
> -        users = db.list_users ()
> +        users = db.list_users()
>  
>          for u in users:
>              print("%s: %s <%s>" % (u.name, u.fullname, u.email))
>  
> +
>  DbAction.register(ListUsersAction)
>  
> +
>  class CreateProjectAction(DbAction):
>  
>      tag = 'create_project'
> @@ -173,22 +189,24 @@ class CreateProjectAction(DbAction):
>          DbAction.__init__(self, node)
>  
>      def execute(self, args):
> -        oparser = OptionParser (
> -                usage="usage: %prog db create_project [options] <project_dir>" )
> -        oparser.add_option( "--user", dest="user",
> -                help="user name of the designated project owner" )
> +        oparser = OptionParser(
> +            usage="usage: %prog db create_project [options] <project_dir>")
> +        oparser.add_option("--user", dest="user",
> +                           help="user name of the designated project owner")
>          (opt, arg) = oparser.parse_args(args)
>  
> -        if len (arg) != 1:
> +        if len(arg) != 1:
>              oparser.print_help()
>              return
>  
>          db = ElbeDB()
> -        owner_id = db.get_user_id( opt.user )
> -        db.create_project (arg[0], owner_id)
> +        owner_id = db.get_user_id(opt.user)
> +        db.create_project(arg[0], owner_id)
> +
>  
>  DbAction.register(CreateProjectAction)
>  
> +
>  class DeleteProjectAction(DbAction):
>  
>      tag = 'del_project'
> @@ -197,15 +215,17 @@ class DeleteProjectAction(DbAction):
>          DbAction.__init__(self, node)
>  
>      def execute(self, args):
> -        if len (args) != 1:
> +        if len(args) != 1:
>              print("usage: elbe db del_project <project_dir>")
>              return
>  
>          db = ElbeDB()
> -        db.del_project (args[0])
> +        db.del_project(args[0])
> +
>  
>  DbAction.register(DeleteProjectAction)
>  
> +
>  class SetXmlAction(DbAction):
>  
>      tag = 'set_xml'
> @@ -214,12 +234,13 @@ class SetXmlAction(DbAction):
>          DbAction.__init__(self, node)
>  
>      def execute(self, args):
> -        if len (args) != 2:
> +        if len(args) != 2:
>              print("usage: elbe db set_xml <project_dir> <xml>")
>              return
>  
>          db = ElbeDB()
> -        db.set_xml (args[0], args[1])
> +        db.set_xml(args[0], args[1])
> +
>  
>  DbAction.register(SetXmlAction)
>  
> @@ -232,23 +253,24 @@ class BuildAction(DbAction):
>          DbAction.__init__(self, node)
>  
>      def execute(self, args):
> -        if len (args) != 1:
> +        if len(args) != 1:
>              print("usage: elbe db build <project_dir>")
>              return
>  
>          db = ElbeDB()
> -        db.set_busy( args[0], [ "empty_project", "needs_build", "has_changes",
> -                                "build_done", "build_failed" ] )
> +        db.set_busy(args[0], ["empty_project", "needs_build", "has_changes",
> +                              "build_done", "build_failed"])
>          try:
> -            ep = db.load_project( args[0] )
> -            ep.build( skip_debootstrap = True )
> -            db.update_project_files( ep )
> +            ep = db.load_project(args[0])
> +            ep.build(skip_debootstrap=True)
> +            db.update_project_files(ep)
>          except Exception as e:
> -            db.update_project_files( ep )
> -            db.reset_busy( args[0], "build_failed" )
> +            db.update_project_files(ep)
> +            db.reset_busy(args[0], "build_failed")
>              print(str(e))
>              return
> -        db.reset_busy( args[0], "build_done" )
> +        db.reset_busy(args[0], "build_done")
> +
>  
>  DbAction.register(BuildAction)
>  
> @@ -261,18 +283,19 @@ class GetFilesAction(DbAction):
>          DbAction.__init__(self, node)
>  
>      def execute(self, args):
> -        if len (args) != 1:
> +        if len(args) != 1:
>              print("usage: elbe db get_files <project_dir>")
>              return
>  
>          db = ElbeDB()
> -        files = db.get_project_files (args[0])
> +        files = db.get_project_files(args[0])
>          for f in files:
>              if f.description:
>                  print("%-40s  %s" % (f.name, f.description))
>              else:
>                  print(f.name)
>  
> +
>  DbAction.register(GetFilesAction)
>  
>  
> @@ -284,12 +307,12 @@ class ResetProjectAction(DbAction):
>          DbAction.__init__(self, node)
>  
>      def execute(self, args):
> -        oparser = OptionParser (
> -                usage="usage: %prog db reset_project [options] <project_dir>")
> -        oparser.add_option ("--clean", dest="clean", default=False,
> -                action="store_true")
> +        oparser = OptionParser(
> +            usage="usage: %prog db reset_project [options] <project_dir>")
> +        oparser.add_option("--clean", dest="clean", default=False,
> +                           action="store_true")
>  
> -        (opt, arg) = oparser.parse_args (args)
> +        (opt, arg) = oparser.parse_args(args)
>  
>          if len(arg) != 1:
>              print("wrong number of arguments")
> @@ -297,7 +320,8 @@ class ResetProjectAction(DbAction):
>              return
>  
>          db = ElbeDB()
> -        db.reset_project (arg[0], opt.clean)
> +        db.reset_project(arg[0], opt.clean)
> +
>  
>  DbAction.register(ResetProjectAction)
>  
> @@ -315,7 +339,8 @@ class SetProjectVersionAction(DbAction):
>              return
>  
>          db = ElbeDB()
> -        db.set_project_version( args[0], args[1] )
> +        db.set_project_version(args[0], args[1])
> +
>  
>  DbAction.register(SetProjectVersionAction)
>  
> @@ -333,7 +358,7 @@ class ListVersionsAction(DbAction):
>              return
>  
>          db = ElbeDB()
> -        versions = db.list_project_versions (args[0])
> +        versions = db.list_project_versions(args[0])
>  
>          for v in versions:
>              if v.description:
> @@ -341,6 +366,7 @@ class ListVersionsAction(DbAction):
>              else:
>                  print(v.version)
>  
> +
>  DbAction.register(ListVersionsAction)
>  
>  
> @@ -352,10 +378,11 @@ class SaveVersionAction(DbAction):
>          DbAction.__init__(self, node)
>  
>      def execute(self, args):
> -        oparser = OptionParser (usage="usage: %prog db save_version <project_dir>")
> -        oparser.add_option ("--description", dest="description")
> +        oparser = OptionParser(
> +            usage="usage: %prog db save_version <project_dir>")
> +        oparser.add_option("--description", dest="description")
>  
> -        (opt, arg) = oparser.parse_args (args)
> +        (opt, arg) = oparser.parse_args(args)
>  
>          if len(arg) != 1:
>              print("wrong number of arguments")
> @@ -363,7 +390,8 @@ class SaveVersionAction(DbAction):
>              return
>  
>          db = ElbeDB()
> -        db.save_version( arg[0], opt.description )
> +        db.save_version(arg[0], opt.description)
> +
>  
>  DbAction.register(SaveVersionAction)
>  
> @@ -381,7 +409,8 @@ class DelVersionAction(DbAction):
>              return
>  
>          db = ElbeDB()
> -        db.del_version( args[0], args[1] )
> +        db.del_version(args[0], args[1])
> +
>  
>  DbAction.register(DelVersionAction)
>  
> @@ -399,8 +428,9 @@ class PrintVersionXMLAction(DbAction):
>              return
>  
>          db = ElbeDB()
> -        filename = db.get_version_xml( args[0], args[1] )
> -        with open( filename ) as f:
> -            copyfileobj( f, sys.stdout )
> +        filename = db.get_version_xml(args[0], args[1])
> +        with open(filename) as f:
> +            copyfileobj(f, sys.stdout)
> +
>  
>  DbAction.register(PrintVersionXMLAction)
> diff --git a/elbepack/debianize/barebox.py b/elbepack/debianize/barebox.py
> index 1c3e20ca..0cc64777 100644
> --- a/elbepack/debianize/barebox.py
> +++ b/elbepack/debianize/barebox.py
> @@ -12,28 +12,31 @@ from shutil import copyfile
>  from elbepack.directories import mako_template_dir
>  from elbepack.debianize.base import DebianizeBase, template
>  
> +
>  class BareBox (DebianizeBase):
>  
> -    name  = "barebox"
> +    name = "barebox"
>      files = ['Kbuild', 'Kconfig', 'README', 'TODO']
>  
> -    def __init__ (self):
> -        DebianizeBase.__init__ (self)
> +    def __init__(self):
> +        DebianizeBase.__init__(self)
>  
> -    def gui (self):
> -        self.defconfig = self.add_widget_intelligent (TitleText,
> -                name="defconfig:", value="imx_v7_defconfig")
> +    def gui(self):
> +        self.defconfig = self.add_widget_intelligent(
> +            TitleText, name="defconfig:", value="imx_v7_defconfig")
>  
> -        self.imgname = self.add_widget_intelligent (TitleText,
> -                name="Imagename:", value="barebox-phytec-phycore-imx6dl-som-nand-256mb.img")
> +        self.imgname = self.add_widget_intelligent(
> +            TitleText,
> +            name="Imagename:",
> +            value="barebox-phytec-phycore-imx6dl-som-nand-256mb.img")
>  
> -        self.cross = self.add_widget_intelligent (TitleText,
> -                name="CROSS_COMPILE:", value="arm-linux-gnueabihf-")
> +        self.cross = self.add_widget_intelligent(
> +            TitleText, name="CROSS_COMPILE:", value="arm-linux-gnueabihf-")
>  
> -        self.k_version = self.add_widget_intelligent (TitleText,
> -                name="BareboxVersion:", value="2016.10")
> +        self.k_version = self.add_widget_intelligent(
> +            TitleText, name="BareboxVersion:", value="2016.10")
>  
> -    def debianize (self):
> +    def debianize(self):
>          if self.deb['p_arch'] == 'armhf':
>              self.deb['k_arch'] = 'arm'
>          elif self.deb['p_arch'] == 'armel':
> @@ -43,30 +46,31 @@ class BareBox (DebianizeBase):
>          else:
>              self.deb['k_arch'] = self.deb['p_arch']
>  
> -        self.deb['defconfig']     = self.defconfig.get_value ()
> -        self.deb['cross_compile'] = self.cross.get_value ()
> -        self.deb['k_version']     = self.k_version.get_value ()
> -        self.deb['imgname']       = self.imgname.get_value ()
> +        self.deb['defconfig'] = self.defconfig.get_value()
> +        self.deb['cross_compile'] = self.cross.get_value()
> +        self.deb['k_version'] = self.k_version.get_value()
> +        self.deb['imgname'] = self.imgname.get_value()
>  
>          self.tmpl_dir = os.path.join(mako_template_dir, 'debianize/barebox')
> -        pkg_name = self.deb['p_name']+'-'+self.deb['k_version']
> +        pkg_name = self.deb['p_name'] + '-' + self.deb['k_version']
>  
>          for tmpl in ['control', 'rules']:
> -            with open (os.path.join('debian/', tmpl), 'w') as f:
> -                mako = os.path.join(self.tmpl_dir, tmpl+'.mako')
> -                f.write (template(mako, self.deb))
> +            with open(os.path.join('debian/', tmpl), 'w') as f:
> +                mako = os.path.join(self.tmpl_dir, tmpl + '.mako')
> +                f.write(template(mako, self.deb))
>  
>          cmd = 'dch --package barebox-' + pkg_name + \
> -                   ' -v ' + self.deb['p_version'] + \
> -                   ' --create -M -D ' + self.deb['release'] + \
> -                   ' "generated by elbe debianize"'
> -        os.system (cmd)
> +            ' -v ' + self.deb['p_version'] + \
> +            ' --create -M -D ' + self.deb['release'] + \
> +            ' "generated by elbe debianize"'
> +        os.system(cmd)
>  
> -        copyfile (os.path.join(self.tmpl_dir, 'barebox-image.install'),
> -                  'debian/barebox-image-'+pkg_name+'.install')
> -        copyfile (os.path.join(self.tmpl_dir, 'barebox-tools.install'),
> -                  'debian/barebox-tools-'+pkg_name+'.install')
> +        copyfile(os.path.join(self.tmpl_dir, 'barebox-image.install'),
> +                 'debian/barebox-image-' + pkg_name + '.install')
> +        copyfile(os.path.join(self.tmpl_dir, 'barebox-tools.install'),
> +                 'debian/barebox-tools-' + pkg_name + '.install')
>  
>          self.hint = "use 'dpkg-buildpackage -a%s' to build the package" % self.deb['p_arch']
>  
> -DebianizeBase.register (BareBox)
> +
> +DebianizeBase.register(BareBox)
> diff --git a/elbepack/debianize/base.py b/elbepack/debianize/base.py
> index 9904bcee..44d3cf01 100644
> --- a/elbepack/debianize/base.py
> +++ b/elbepack/debianize/base.py
> @@ -15,105 +15,112 @@ from shutil import copyfile
>  
>  from elbepack.templates import template
>  
> +
>  class DebianizeBase (FormMultiPage):
>  
>      srctypes = {}
>  
>      @classmethod
> -    def register (cls, srctype):
> +    def register(cls, srctype):
>          cls.srctypes[srctype.name] = srctype
>  
>      @classmethod
> -    def get_debianizer (cls):
> -        for t in cls.srctypes.values ():
> -           match = True
> -           for f in t.files:
> -               if not os.path.exists (f):
> -                   match = False
> -           if match:
> -               return t
> +    def get_debianizer(cls):
> +        for t in cls.srctypes.values():
> +            match = True
> +            for f in t.files:
> +                if not os.path.exists(f):
> +                    match = False
> +            if match:
> +                return t
>  
>          raise KeyError
>  
> -    def __init__ (self):
> -        self.deb = { }
> +    def __init__(self):
> +        self.deb = {}
>          self.tmpl_dir = None
>  
>          self.archs = ["arm64", "armhf", "armel", "amd64", "i386", "powerpc"]
>          self.formats = ["native", "git", "quilt"]
> -        self.releases = ["stable", "oldstable", "testing", "unstable", "experimental"]
> -
> -        FormMultiPage.__init__ (self)
> -
> -    def create (self):
> -        self.p_name = self.add_widget_intelligent (TitleText,
> -                name = "Name:", value = "elbe")
> -
> -        self.p_version = self.add_widget_intelligent (TitleText,
> -                name = "Version:", value = "1.0")
> -
> -        self.p_arch = self.add_widget_intelligent (TitleSelectOne,
> -                name = "Arch:",
> -                values = self.archs,
> -                value = [0],
> -                scroll_exit = True)
> -
> -        self.source_format = self.add_widget_intelligent (TitleSelectOne,
> -                name="Format:",
> -                values = self.formats,
> -                value = [0],
> -                scroll_exit = True)
> -
> -        self.release = self.add_widget_intelligent (TitleSelectOne,
> -                name = "Release:",
> -                values = self.releases,
> -                value = [0],
> -                scroll_exit = True)
> -
> -        fullname = os.environ.get ('DEBFULLNAME', failobj="Max Mustermann")
> -        self.m_name = self.add_widget_intelligent (TitleText,
> -                name = "Maintainer:", value = fullname)
> -
> -        email = os.environ.get ('DEBEMAIL', failobj="max at mustermann.org")
> -        self.m_mail = self.add_widget_intelligent (TitleText,
> -                name = "Mail:", value = email)
> -
> -        self.add_page ()
> -        self.gui ()
> -
> -        self.add_widget_intelligent (ButtonPress, name = "Save",
> -                when_pressed_function=self.on_ok)
> -
> -        self.add_widget_intelligent (ButtonPress, name = "Cancel",
> -                when_pressed_function=self.on_cancel)
> -
> -    def on_ok (self):
> -        self.deb['p_name']       = self.p_name.get_value ()
> -        self.deb['p_version']    = self.p_version.get_value ()
> -        self.deb['p_arch']       = self.archs [self.p_arch.get_value ()[0]]
> -        self.deb['m_name']       = self.m_name.get_value ()
> -        self.deb['m_mail']       = self.m_mail.get_value ()
> -        self.deb['source_format']= self.formats [self.source_format.get_value ()[0]]
> -        self.deb['release']      = self.releases [self.release.get_value ()[0]]
> -
> -        os.mkdir ('debian')
> -        os.mkdir ('debian/source')
> -
> -        self.debianize ()
> -
> -        with open ('debian/source/format', 'w') as f:
> +        self.releases = [
> +            "stable",
> +            "oldstable",
> +            "testing",
> +            "unstable",
> +            "experimental"]
> +
> +        FormMultiPage.__init__(self)
> +
> +    def create(self):
> +        self.p_name = self.add_widget_intelligent(TitleText,
> +                                                  name="Name:", value="elbe")
> +
> +        self.p_version = self.add_widget_intelligent(
> +            TitleText, name="Version:", value="1.0")
> +
> +        self.p_arch = self.add_widget_intelligent(TitleSelectOne,
> +                                                  name="Arch:",
> +                                                  values=self.archs,
> +                                                  value=[0],
> +                                                  scroll_exit=True)
> +
> +        self.source_format = self.add_widget_intelligent(TitleSelectOne,
> +                                                         name="Format:",
> +                                                         values=self.formats,
> +                                                         value=[0],
> +                                                         scroll_exit=True)
> +
> +        self.release = self.add_widget_intelligent(TitleSelectOne,
> +                                                   name="Release:",
> +                                                   values=self.releases,
> +                                                   value=[0],
> +                                                   scroll_exit=True)
> +
> +        fullname = os.environ.get('DEBFULLNAME', failobj="Max Mustermann")
> +        self.m_name = self.add_widget_intelligent(
> +            TitleText, name="Maintainer:", value=fullname)
> +
> +        email = os.environ.get('DEBEMAIL', failobj="max at mustermann.org")
> +        self.m_mail = self.add_widget_intelligent(TitleText,
> +                                                  name="Mail:", value=email)
> +
> +        self.add_page()
> +        self.gui()
> +
> +        self.add_widget_intelligent(ButtonPress, name="Save",
> +                                    when_pressed_function=self.on_ok)
> +
> +        self.add_widget_intelligent(ButtonPress, name="Cancel",
> +                                    when_pressed_function=self.on_cancel)
> +
> +    def on_ok(self):
> +        self.deb['p_name'] = self.p_name.get_value()
> +        self.deb['p_version'] = self.p_version.get_value()
> +        self.deb['p_arch'] = self.archs[self.p_arch.get_value()[0]]
> +        self.deb['m_name'] = self.m_name.get_value()
> +        self.deb['m_mail'] = self.m_mail.get_value()
> +        self.deb['source_format'] = self.formats[self.source_format.get_value()[
> +            0]]
> +        self.deb['release'] = self.releases[self.release.get_value()[0]]
> +
> +        os.mkdir('debian')
> +        os.mkdir('debian/source')
> +
> +        self.debianize()
> +
> +        with open('debian/source/format', 'w') as f:
>              mako = os.path.join(self.tmpl_dir, 'format.mako')
> -            f.write (template(mako, self.deb))
> +            f.write(template(mako, self.deb))
>  
> -        copyfile ('COPYING', 'debian/copyright')
> -        with open ('debian/compat', 'w') as f:
> -            f.write ('9')
> +        copyfile('COPYING', 'debian/copyright')
> +        with open('debian/compat', 'w') as f:
> +            f.write('9')
>  
>          if self.hint:
> -            notify (self.hint, title='Hint')
> -            sleep (10)
> +            notify(self.hint, title='Hint')
> +            sleep(10)
>  
> -        sys.exit (0)
> +        sys.exit(0)
>  
> -    def on_cancel (self):
> -        sys.exit (-2)
> +    def on_cancel(self):
> +        sys.exit(-2)
> diff --git a/elbepack/debianize/debianize.py b/elbepack/debianize/debianize.py
> index 25d48bf6..1c4b6e4f 100644
> --- a/elbepack/debianize/debianize.py
> +++ b/elbepack/debianize/debianize.py
> @@ -7,16 +7,16 @@ from npyscreen import NPSAppManaged
>  
>  # don't remove these imports; even if pylint, etc believes they are unused
>  from elbepack.debianize.kernel import Kernel
> -from elbepack.debianize.uboot  import UBoot
> +from elbepack.debianize.uboot import UBoot
>  from elbepack.debianize.barebox import BareBox
>  
>  from elbepack.debianize.base import DebianizeBase
>  
>  
>  class Debianize (NPSAppManaged):
> -    def __init__ (self, debianizer):
> +    def __init__(self, debianizer):
>          self.debianizer = debianizer
> -        NPSAppManaged.__init__ (self)
> +        NPSAppManaged.__init__(self)
>  
> -    def onStart (self):
> -        self.registerForm('MAIN', self.debianizer ())
> +    def onStart(self):
> +        self.registerForm('MAIN', self.debianizer())
> diff --git a/elbepack/debianize/kernel.py b/elbepack/debianize/kernel.py
> index 1e039de2..fa33ea1b 100644
> --- a/elbepack/debianize/kernel.py
> +++ b/elbepack/debianize/kernel.py
> @@ -12,35 +12,38 @@ from shutil import copyfile
>  from elbepack.directories import mako_template_dir
>  from elbepack.debianize.base import DebianizeBase, template
>  
> +
>  class Kernel (DebianizeBase):
>  
> -    name  = "kernel"
> +    name = "kernel"
>      files = ['Kbuild', 'Kconfig', 'MAINTAINERS', 'kernel/futex.c']
>  
> -    def __init__ (self):
> +    def __init__(self):
>          self.imgtypes = ["bzImage", "zImage", "uImage", "Image"]
>          self.imgtypes_install = ["install", "zinstall", "uinstall", "install"]
> -        DebianizeBase.__init__ (self)
> +        DebianizeBase.__init__(self)
>  
> -    def gui (self):
> -        self.loadaddr = self.add_widget_intelligent (TitleText,
> -                name="Loadaddress:", value="0x800800")
> +    def gui(self):
> +        self.loadaddr = self.add_widget_intelligent(
> +            TitleText, name="Loadaddress:", value="0x800800")
>  
> -        self.defconfig = self.add_widget_intelligent (TitleText,
> -                name="defconfig:", value="omap2plus_defconfig")
> +        self.defconfig = self.add_widget_intelligent(
> +            TitleText, name="defconfig:", value="omap2plus_defconfig")
>  
> -        self.imgtype = self.add_widget_intelligent (TitleSelectOne,
> -                name="Image Format:", values = self.imgtypes,
> -                value = [0],
> -                scroll_exit=True)
> +        self.imgtype = self.add_widget_intelligent(
> +            TitleSelectOne,
> +            name="Image Format:",
> +            values=self.imgtypes,
> +            value=[0],
> +            scroll_exit=True)
>  
> -        self.cross = self.add_widget_intelligent (TitleText,
> -                name="CROSS_COMPILE", value="arm-linux-gnueabihf-")
> +        self.cross = self.add_widget_intelligent(
> +            TitleText, name="CROSS_COMPILE", value="arm-linux-gnueabihf-")
>  
> -        self.k_version = self.add_widget_intelligent (TitleText,
> -                name="Kernelversion", value="4.4")
> +        self.k_version = self.add_widget_intelligent(
> +            TitleText, name="Kernelversion", value="4.4")
>  
> -    def debianize (self):
> +    def debianize(self):
>          if self.deb['p_arch'] == 'armhf':
>              self.deb['k_arch'] = 'arm'
>          elif self.deb['p_arch'] == 'armel':
> @@ -50,32 +53,40 @@ class Kernel (DebianizeBase):
>          else:
>              self.deb['k_arch'] = self.deb['p_arch']
>  
> -        self.deb['loadaddr']      = self.loadaddr.get_value ()
> -        self.deb['defconfig']     = self.defconfig.get_value ()
> -        self.deb['imgtype']       = self.imgtypes [self.imgtype.get_value ()[0]]
> -        self.deb['imgtype_install'] = self.imgtypes_install [self.imgtype.get_value ()[0]]
> -        self.deb['cross_compile'] = self.cross.get_value ()
> -        self.deb['k_version']     = self.k_version.get_value ()
> +        self.deb['loadaddr'] = self.loadaddr.get_value()
> +        self.deb['defconfig'] = self.defconfig.get_value()
> +        self.deb['imgtype'] = self.imgtypes[self.imgtype.get_value()[0]]
> +        self.deb['imgtype_install'] = self.imgtypes_install[self.imgtype.get_value()[
> +            0]]
> +        self.deb['cross_compile'] = self.cross.get_value()
> +        self.deb['k_version'] = self.k_version.get_value()
>  
>          self.tmpl_dir = os.path.join(mako_template_dir, 'debianize/kernel')
> -        pkg_name = self.deb['p_name']+'-'+self.deb['k_version']
> -
> -        for tmpl in ['control', 'rules', 'preinst', 'postinst', 'prerm', 'postrm']:
> -            with open (os.path.join('debian/', tmpl), 'w') as f:
> -                mako = os.path.join(self.tmpl_dir, tmpl+'.mako')
> -                f.write (template(mako, self.deb))
> +        pkg_name = self.deb['p_name'] + '-' + self.deb['k_version']
> +
> +        for tmpl in [
> +            'control',
> +            'rules',
> +            'preinst',
> +            'postinst',
> +            'prerm',
> +                'postrm']:
> +            with open(os.path.join('debian/', tmpl), 'w') as f:
> +                mako = os.path.join(self.tmpl_dir, tmpl + '.mako')
> +                f.write(template(mako, self.deb))
>  
>          cmd = 'dch --package linux-' + pkg_name + \
> -                   ' -v ' + self.deb['p_version'] + \
> -                   ' --create -M -D ' + self.deb['release'] + \
> -                   ' "generated by elbe debianize"'
> -        os.system (cmd)
> +            ' -v ' + self.deb['p_version'] + \
> +            ' --create -M -D ' + self.deb['release'] + \
> +            ' "generated by elbe debianize"'
> +        os.system(cmd)
>  
> -        copyfile (os.path.join(self.tmpl_dir, 'linux-image.install'),
> -                  'debian/linux-image-'+pkg_name+'.install')
> -        copyfile (os.path.join(self.tmpl_dir, 'linux-headers.install'),
> -                  'debian/linux-headers-'+pkg_name+'.install')
> +        copyfile(os.path.join(self.tmpl_dir, 'linux-image.install'),
> +                 'debian/linux-image-' + pkg_name + '.install')
> +        copyfile(os.path.join(self.tmpl_dir, 'linux-headers.install'),
> +                 'debian/linux-headers-' + pkg_name + '.install')
>  
>          self.hint = "use 'dpkg-buildpackage -a%s' to build the package" % self.deb['p_arch']
>  
> -DebianizeBase.register (Kernel)
> +
> +DebianizeBase.register(Kernel)
> diff --git a/elbepack/debianize/uboot.py b/elbepack/debianize/uboot.py
> index 35e0b0c0..8087bb39 100644
> --- a/elbepack/debianize/uboot.py
> +++ b/elbepack/debianize/uboot.py
> @@ -9,20 +9,22 @@ from elbepack.debianize.base import DebianizeBase
>  
>  # this is just a template to show how debianizing another component should work
>  
> +
>  class UBoot (DebianizeBase):
>  
> -    name  = "uboot"
> +    name = "uboot"
>      files = ['Kbuild', 'Kconfig', 'MAINTAINERS', 'config.mk']
>  
> -    def __init__ (self):
> +    def __init__(self):
>          print("debianization of uboot is not supported at the moment")
> -        sys.exit (-2)
> -        DebianizeBase.__init__ (self)
> +        sys.exit(-2)
> +        DebianizeBase.__init__(self)
>  
> -    def gui (self):
> +    def gui(self):
>          pass
>  
> -    def debianize (self):
> +    def debianize(self):
>          pass
>  
> -DebianizeBase.register (UBoot)
> +
> +DebianizeBase.register(UBoot)
> diff --git a/elbepack/debianreleases.py b/elbepack/debianreleases.py
> index 848d907c..f428a61d 100644
> --- a/elbepack/debianreleases.py
> +++ b/elbepack/debianreleases.py
> @@ -3,20 +3,20 @@
>  #
>  # SPDX-License-Identifier: GPL-3.0
>  
> -suite2codename = { 'oldoldoldstable': 'squeeze',
> -                   'oldoldstable': 'wheezy',
> -                   'oldstable': 'jessie',
> -                   'stable': 'stretch',
> -                   'testing': 'buster',
> -                   'unstable': 'sid',
> +suite2codename = {'oldoldoldstable': 'squeeze',
> +                  'oldoldstable': 'wheezy',
> +                  'oldstable': 'jessie',
> +                  'stable': 'stretch',
> +                  'testing': 'buster',
> +                  'unstable': 'sid',
>  
> -                   'lucid': 'lucid',
> -                   'precise': 'precise',
> -                   'quantal': 'quantal',
> -                   'raring': 'raring',
> -                   'saucy': 'saucy',
> -                   }
> +                  'lucid': 'lucid',
> +                  'precise': 'precise',
> +                  'quantal': 'quantal',
> +                  'raring': 'raring',
> +                  'saucy': 'saucy',
> +                  }
>  
>  
>  # generate reverse mapping
> -codename2suite = { v:k for k,v in suite2codename.items() }
> +codename2suite = {v: k for k, v in suite2codename.items()}
> diff --git a/elbepack/debpkg.py b/elbepack/debpkg.py
> index 269e5656..bd243d5b 100644
> --- a/elbepack/debpkg.py
> +++ b/elbepack/debpkg.py
> @@ -21,35 +21,65 @@ Multi-Arch: same
>  
>  control_template = string.Template(control_template_string)
>  
> +
>  def gen_controlfile(name, version, arch, description, deps):
> -    d = dict( name=name, version=version, arch=arch, description=description, deps=deps )
> +    d = dict(
> +        name=name,
> +        version=version,
> +        arch=arch,
> +        description=description,
> +        deps=deps)
>      return control_template.substitute(d)
>  
> -def write_file( fname, mode, cont ):
> -    f = file( fname, "w" )
> +
> +def write_file(fname, mode, cont):
> +    f = file(fname, "w")
>      f.write(cont)
>      f.close()
> -    os.chmod( fname, mode )
> +    os.chmod(fname, mode)
>  
>  
> -def build_binary_deb( name, arch, version, description, files, deps, target_dir ):
> -    tmpdir   = mkdtemp()
> +def build_binary_deb(
> +        name,
> +        arch,
> +        version,
> +        description,
> +        files,
> +        deps,
> +        target_dir):
> +    tmpdir = mkdtemp()
>      pkgfname = "%s_%s_%s" % (name, version, arch)
> -    pkgdir = os.path.join( tmpdir, pkgfname )
> +    pkgdir = os.path.join(tmpdir, pkgfname)
>  
> -    os.system( 'mkdir -p "%s"' % os.path.join( pkgdir, "DEBIAN" ) )
> -    write_file( os.path.join( pkgdir, "DEBIAN" , "control" ), 0o644, gen_controlfile(name, version, arch, description, deps) )
> +    os.system('mkdir -p "%s"' % os.path.join(pkgdir, "DEBIAN"))
> +    write_file(
> +        os.path.join(
> +            pkgdir,
> +            "DEBIAN",
> +            "control"),
> +        0o644,
> +        gen_controlfile(
> +            name,
> +            version,
> +            arch,
> +            description,
> +            deps))
>  
>      for (fname, instpath) in files:
> -        full_instpath = os.path.join( pkgdir, instpath )
> -        os.system( 'mkdir -p "%s"' % full_instpath )
> -        os.system( 'cp -a "%s" "%s"' % (fname, full_instpath) )
> -    
> -    os.system( 'dpkg-deb --build "%s"' % pkgdir )
> +        full_instpath = os.path.join(pkgdir, instpath)
> +        os.system('mkdir -p "%s"' % full_instpath)
> +        os.system('cp -a "%s" "%s"' % (fname, full_instpath))
>  
> -    os.system( 'cp -v "%s" "%s"' % (os.path.join( tmpdir, pkgfname + ".deb" ), target_dir) )
> +    os.system('dpkg-deb --build "%s"' % pkgdir)
>  
> -    os.system( 'rm -r "%s"' % tmpdir )
> +    os.system(
> +        'cp -v "%s" "%s"' %
> +        (os.path.join(
> +            tmpdir,
> +            pkgfname +
> +            ".deb"),
> +            target_dir))
>  
> -    return pkgfname+".deb"
> +    os.system('rm -r "%s"' % tmpdir)
>  
> +    return pkgfname + ".deb"
> diff --git a/elbepack/directories.py b/elbepack/directories.py
> index 5acad014..df57bcd1 100644
> --- a/elbepack/directories.py
> +++ b/elbepack/directories.py
> @@ -11,6 +11,7 @@ elbe_exe = None
>  elbe_dir = None
>  examples_dir = None
>  
> +
>  def init_directories(elbe_relpath):
>      global elbe_exe
>      global elbe_dir
> @@ -19,10 +20,10 @@ def init_directories(elbe_relpath):
>      elbe_exe = os.path.abspath(elbe_relpath)
>      elbe_dir = os.path.dirname(elbe_exe)
>  
> -    if elbe_exe.startswith ("/usr/bin/"):
> +    if elbe_exe.startswith("/usr/bin/"):
>          examples_dir = "/usr/share/doc/elbe-doc/examples"
>      else:
> -        examples_dir = os.path.join (elbe_dir, "examples")
> +        examples_dir = os.path.join(elbe_dir, "examples")
>  
>          # Set XML catalog if elbe is run from source
>          xmlcat = os.path.join(elbe_dir, "schema/catalog.xml")
> @@ -32,15 +33,16 @@ def init_directories(elbe_relpath):
>              os.environ['XML_CATALOG_FILES'] += " "
>              os.environ['XML_CATALOG_FILES'] += xmlcat
>  
> +
>  def get_cmdlist():
> -    return [ x for _, x, _ in iter_modules(elbepack.commands.__path__) ]
> +    return [x for _, x, _ in iter_modules(elbepack.commands.__path__)]
>  
> -pack_dir = elbepack.__path__[0]
>  
> -init_template_dir = os.path.join (pack_dir, "init")
> -mako_template_dir = os.path.join (pack_dir, "makofiles")
> +pack_dir = elbepack.__path__[0]
>  
> -default_preseed_fname    = os.path.join (pack_dir, "default-preseed.xml")
> -xsdtoasciidoc_mako_fname = os.path.join (pack_dir, "xsdtoasciidoc.mako")
> -elbe_pubkey_fname        = os.path.join (pack_dir, "elbe-repo.pub")
> +init_template_dir = os.path.join(pack_dir, "init")
> +mako_template_dir = os.path.join(pack_dir, "makofiles")
>  
> +default_preseed_fname = os.path.join(pack_dir, "default-preseed.xml")
> +xsdtoasciidoc_mako_fname = os.path.join(pack_dir, "xsdtoasciidoc.mako")
> +elbe_pubkey_fname = os.path.join(pack_dir, "elbe-repo.pub")
> diff --git a/elbepack/dosunix.py b/elbepack/dosunix.py
> index 0dc2d6e3..e3f907b6 100644
> --- a/elbepack/dosunix.py
> +++ b/elbepack/dosunix.py
> @@ -2,23 +2,28 @@
>  #
>  # SPDX-License-Identifier: GPL-3.0
>  
> -def dos2unix_str (d):
> +
> +def dos2unix_str(d):
>      return d.replace('\r\n', '\n')
>  
> -def unix2dos_str (d):
> +
> +def unix2dos_str(d):
>      d = d.replace('\n', '\r\n')
>      d = d.replace('\r\r\n', '\r\n')
>      return d
>  
> -def __rewrite (fn, rw_func):
> -    with open (fn, 'r+') as f:
> -        d = rw_func (f.read ())
> -        f.seek (0)
> -        f.write (d)
> -        f.truncate ()
>  
> -def dos2unix (fn):
> -    __rewrite (fn, dos2unix_str)
> +def __rewrite(fn, rw_func):
> +    with open(fn, 'r+') as f:
> +        d = rw_func(f.read())
> +        f.seek(0)
> +        f.write(d)
> +        f.truncate()
> +
> +
> +def dos2unix(fn):
> +    __rewrite(fn, dos2unix_str)
> +
>  
> -def unix2dos (fn):
> -    __rewrite (fn, unix2dos_str)
> +def unix2dos(fn):
> +    __rewrite(fn, unix2dos_str)
> diff --git a/elbepack/dump.py b/elbepack/dump.py
> index 04b0712a..8525a62e 100644
> --- a/elbepack/dump.py
> +++ b/elbepack/dump.py
> @@ -14,64 +14,69 @@ from apt import Cache
>  
>  import warnings
>  
> -def get_initvm_pkglist ():
> +
> +def get_initvm_pkglist():
>      with warnings.catch_warnings():
> -        warnings.filterwarnings("ignore",category=DeprecationWarning)
> -        cache = Cache ()
> -        cache.open ()
> -        pkglist = [APTPackage (p) for p in cache if p.is_installed]
> +        warnings.filterwarnings("ignore", category=DeprecationWarning)
> +        cache = Cache()
> +        cache.open()
> +        pkglist = [APTPackage(p) for p in cache if p.is_installed]
>          try:
> -            eb = APTPackage( cache ['elbe-bootstrap'] )
> -            pkglist.append (eb)
> +            eb = APTPackage(cache['elbe-bootstrap'])
> +            pkglist.append(eb)
>          # elbe bootstrap is not installed on pc running elbe
>          except KeyError:
>              pass
>  
>      return pkglist
>  
> -def dump_fullpkgs( xml, rfs, cache ):
> +
> +def dump_fullpkgs(xml, rfs, cache):
>      xml.clear_full_pkglist()
>  
>      instpkgs = cache.get_installed_pkgs()
>      for p in instpkgs:
> -        xml.append_full_pkg( p )
> +        xml.append_full_pkg(p)
>  
> -    sources_list = xml.xml.ensure_child( 'sources_list' )
> +    sources_list = xml.xml.ensure_child('sources_list')
>      slist = rfs.read_file("etc/apt/sources.list")
> -    sources_list.set_text( slist )
> +    sources_list.set_text(slist)
>  
>      try:
> -        preferences = xml.xml.ensure_child( 'apt_prefs' )
> +        preferences = xml.xml.ensure_child('apt_prefs')
>          prefs = rfs.read_file("etc/apt/preferences")
>          preferences.set_text(prefs)
>      except IOError:
>          pass
>  
> -def dump_debootstrappkgs( xml, cache ):
> +
> +def dump_debootstrappkgs(xml, cache):
>      xml.clear_debootstrap_pkglist()
>  
>      instpkgs = cache.get_installed_pkgs()
>      for p in instpkgs:
> -        xml.append_debootstrap_pkg( p )
> +        xml.append_debootstrap_pkg(p)
> +
>  
> -def dump_initvmpkgs (xml):
> -    xml.clear_initvm_pkglist ()
> +def dump_initvmpkgs(xml):
> +    xml.clear_initvm_pkglist()
>  
> -    instpkgs = get_initvm_pkglist ()
> +    instpkgs = get_initvm_pkglist()
>      for p in instpkgs:
> -        xml.append_initvm_pkg( p )
> +        xml.append_initvm_pkg(p)
>  
> -    sources_list = xml.xml.ensure_child( 'initvm_sources_list' )
> +    sources_list = xml.xml.ensure_child('initvm_sources_list')
>      slist = hostfs.read_file("etc/apt/sources.list")
> -    sources_list.set_text( slist )
> +    sources_list.set_text(slist)
>  
>      try:
> -        preferences = xml.xml.ensure_child( 'initvm_apt_prefs' )
> +        preferences = xml.xml.ensure_child('initvm_apt_prefs')
>          prefs = hostfs.read_file("etc/apt/preferences")
>          preferences.set_text(prefs)
>      except IOError:
>          pass
>  
> +
>  def check_full_pkgs(pkgs, fullpkgs, errorname, cache):
>      elog = ASCIIDocLog(errorname)
>  
> @@ -84,24 +89,26 @@ def check_full_pkgs(pkgs, fullpkgs, errorname, cache):
>          name = p.et.text
>          nomulti_name = name.split(":")[0]
>          if not cache.has_pkg(nomulti_name):
> -            elog.printo( "- package %s does not exist" % nomulti_name )
> +            elog.printo("- package %s does not exist" % nomulti_name)
>              errors += 1
>              continue
>  
>          if not cache.is_installed(nomulti_name):
> -            elog.printo( "- package %s is not installed" % nomulti_name )
> +            elog.printo("- package %s is not installed" % nomulti_name)
>              errors += 1
>              continue
>  
> -        ver  = p.et.get('version')
> +        ver = p.et.get('version')
>          pkg = cache.get_pkg(nomulti_name)
>          if ver and (pkg.installed_version != ver):
> -            elog.printo( "- package %s version %s does not match installed version %s" % (name, ver,  pkg.installed_version) )
> +            elog.printo(
> +                "- package %s version %s does not match installed version %s" %
> +                (name, ver, pkg.installed_version))
>              errors += 1
>              continue
>  
>      if errors == 0:
> -        elog.printo( "No Errors found" )
> +        elog.printo("No Errors found")
>  
>      if not fullpkgs:
>          return
> @@ -112,52 +119,59 @@ def check_full_pkgs(pkgs, fullpkgs, errorname, cache):
>      pindex = {}
>      for p in fullpkgs:
>          name = p.et.text
> -        ver  = p.et.get('version')
> -        md5  = p.et.get('md5')
> +        ver = p.et.get('version')
> +        md5 = p.et.get('md5')
>  
>          pindex[name] = p
>  
>          if not cache.has_pkg(name):
> -            elog.printo( "- package %s does not exist" % name )
> +            elog.printo("- package %s does not exist" % name)
>              errors += 1
>              continue
>  
>          if not cache.is_installed(name):
> -            elog.printo( "- package %s is not installed" % name )
> +            elog.printo("- package %s is not installed" % name)
>              errors += 1
>              continue
>  
>          pkg = cache.get_pkg(name)
>  
>          if pkg.installed_version != ver:
> -            elog.printo( "- package %s version %s does not match installed version %s" % (name, ver,  pkg.installed_version) )
> +            elog.printo(
> +                "- package %s version %s does not match installed version %s" %
> +                (name, ver, pkg.installed_version))
>              errors += 1
>              continue
>  
>          if pkg.installed_md5 != md5:
> -            elog.printo( "- package %s md5 %s does not match installed md5 %s" %
> -              (name, md5,  pkg.installed_md5) )
> +            elog.printo("- package %s md5 %s does not match installed md5 %s" %
> +                        (name, md5, pkg.installed_md5))
>              errors += 1
>  
>      for cp in cache.get_installed_pkgs():
>          if cp.name not in pindex:
> -            elog.printo( "additional package %s installed, that was not requested" % cp.name )
> +            elog.printo(
> +                "additional package %s installed, that was not requested" %
> +                cp.name)
>              errors += 1
>  
>      if errors == 0:
> -        elog.printo( "No Errors found" )
> +        elog.printo("No Errors found")
> +
>  
> -def elbe_report( xml, buildenv, cache, reportname, targetfs ):
> +def elbe_report(xml, buildenv, cache, reportname, targetfs):
>      outf = ASCIIDocLog(reportname)
>      rfs = buildenv.rfs
>  
> -    outf.h1( "ELBE Report for Project " + xml.text("project/name") )
> +    outf.h1("ELBE Report for Project " + xml.text("project/name"))
>  
> -    outf.printo( "report timestamp: "+datetime.now().strftime("%Y%m%d-%H%M%S") )
> -    outf.printo( "elbe: %s" % str(elbe_version) )
> +    outf.printo(
> +        "report timestamp: " +
> +        datetime.now().strftime("%Y%m%d-%H%M%S"))
> +    outf.printo("elbe: %s" % str(elbe_version))
>  
>      slist = rfs.read_file('etc/apt/sources.list')
> -    outf.h2( "Apt Sources dump" )
> +    outf.h2("Apt Sources dump")
>      outf.verbatim_start()
>      outf.print_raw(slist)
>      outf.verbatim_end()
> @@ -167,17 +181,17 @@ def elbe_report( xml, buildenv, cache, reportname, targetfs ):
>      except IOError:
>          prefs = ""
>  
> -    outf.h2( "Apt Preferences dump" )
> +    outf.h2("Apt Preferences dump")
>      outf.verbatim_start()
>      outf.print_raw(prefs)
>      outf.verbatim_end()
>  
> -    outf.h2( "Installed Packages List" )
> +    outf.h2("Installed Packages List")
>      outf.table()
>  
>      instpkgs = cache.get_installed_pkgs()
>      for p in instpkgs:
> -        outf.printo( "|%s|%s|%s" % (p.name, p.installed_version, p.origin) )
> +        outf.printo("|%s|%s|%s" % (p.name, p.installed_version, p.origin))
>      outf.table()
>  
>      # archive extraction is done before and after finetuning the first
> @@ -186,13 +200,13 @@ def elbe_report( xml, buildenv, cache, reportname, targetfs ):
>      # the second extraction is done to ensure that files from the archive
>      # can't be modified/removed in finetuning
>  
> -    outf.h2( "archive extract before finetuning" )
> +    outf.h2("archive extract before finetuning")
>  
>      if xml.has("archive"):
>          with xml.archive_tmpfile() as fp:
> -            outf.do( 'tar xvfj "%s" -C "%s"' % (fp.name, targetfs.path) )
> +            outf.do('tar xvfj "%s" -C "%s"' % (fp.name, targetfs.path))
>  
> -    outf.h2( "finetuning log" )
> +    outf.h2("finetuning log")
>      outf.verbatim_start()
>  
>      index = cache.get_fileindex()
> @@ -206,16 +220,16 @@ def elbe_report( xml, buildenv, cache, reportname, targetfs ):
>  
>      outf.verbatim_end()
>  
> -    outf.h2( "archive extract after finetuning" )
> +    outf.h2("archive extract after finetuning")
>  
>      if xml.has("archive"):
>          with xml.archive_tmpfile() as fp:
> -            outf.do( 'tar xvfj "%s" -C "%s"' % (fp.name, targetfs.path) )
> +            outf.do('tar xvfj "%s" -C "%s"' % (fp.name, targetfs.path))
>          mt_index_post_arch = targetfs.mtime_snap()
>      else:
>          mt_index_post_arch = mt_index_post_fine
>  
> -    outf.h2( "fileslist" )
> +    outf.h2("fileslist")
>      outf.table()
>  
>      tgt_pkg_list = set()
> @@ -238,11 +252,11 @@ def elbe_report( xml, buildenv, cache, reportname, targetfs ):
>          else:
>              pkg = "added in archive"
>  
> -        outf.printo( "|+%s+|%s" % (fpath,pkg) )
> +        outf.printo("|+%s+|%s" % (fpath, pkg))
>  
>      outf.table()
>  
> -    outf.h2( "Deleted Files" )
> +    outf.h2("Deleted Files")
>      outf.table()
>      for fpath in list(mt_index.keys()):
>          if fpath not in mt_index_post_arch:
> @@ -250,10 +264,10 @@ def elbe_report( xml, buildenv, cache, reportname, targetfs ):
>                  pkg = index[fpath]
>              else:
>                  pkg = "postinst generated"
> -            outf.printo( "|+%s+|%s" % (fpath,pkg) )
> +            outf.printo("|+%s+|%s" % (fpath, pkg))
>      outf.table()
>  
> -    outf.h2( "Target Package List" )
> +    outf.h2("Target Package List")
>      outf.table()
>      instpkgs = cache.get_installed_pkgs()
>      pkgindex = {}
> @@ -265,11 +279,19 @@ def elbe_report( xml, buildenv, cache, reportname, targetfs ):
>          f = targetfs.open('etc/elbe_pkglist', 'w')
>      for pkg in tgt_pkg_list:
>          p = pkgindex[pkg]
> -        outf.printo( "|%s|%s|%s|%s" % (p.name, p.installed_version, p.is_auto_installed, p.installed_md5) )
> +        outf.printo(
> +            "|%s|%s|%s|%s" %
> +            (p.name,
> +             p.installed_version,
> +             p.is_auto_installed,
> +             p.installed_md5))
>          if xml.has("target/pkgversionlist"):
> -            f.write ("%s %s %s\n" % (p.name, p.installed_version, p.installed_md5))
> +            f.write(
> +                "%s %s %s\n" %
> +                (p.name,
> +                 p.installed_version,
> +                 p.installed_md5))
>      outf.table()
>  
>      if xml.has("target/pkgversionlist"):
> -        f.close ()
> -
> +        f.close()
> diff --git a/elbepack/efilesystem.py b/elbepack/efilesystem.py
> index 8e4583e1..ebe1bca2 100644
> --- a/elbepack/efilesystem.py
> +++ b/elbepack/efilesystem.py
> @@ -17,7 +17,8 @@ from elbepack.hdimg import do_hdimg
>  from elbepack.fstab import fstabentry
>  from elbepack.licencexml import copyright_xml
>  
> -def copy_filelist( src, filelist, dst ):
> +
> +def copy_filelist(src, filelist, dst):
>      for f in filelist:
>          f = f.rstrip("\n")
>          if src.isdir(f) and not src.islink(f):
> @@ -26,7 +27,8 @@ def copy_filelist( src, filelist, dst ):
>              st = src.stat(f)
>              dst.chown(f, st.st_uid, st.st_gid)
>          else:
> -            subprocess.call(["cp", "-a", "--reflink=auto", src.fname(f), dst.fname(f)])
> +            subprocess.call(["cp", "-a", "--reflink=auto",
> +                             src.fname(f), dst.fname(f)])
>      # update utime which will change after a file has been copied into
>      # the directory
>      for f in filelist:
> @@ -35,28 +37,33 @@ def copy_filelist( src, filelist, dst ):
>              shutil.copystat(src.fname(f), dst.fname(f))
>  
>  
> -def extract_target( src, xml, dst, log, cache ):
> +def extract_target(src, xml, dst, log, cache):
>      # create filelists describing the content of the target rfs
>      if xml.tgt.has("tighten") or xml.tgt.has("diet"):
> -        pkglist = [ n.et.text for n in xml.node('target/pkg-list') if n.tag == 'pkg' ]
> +        pkglist = [n.et.text for n in xml.node(
> +            'target/pkg-list') if n.tag == 'pkg']
>          arch = xml.text("project/buildimage/arch", key="arch")
>  
>          if xml.tgt.has("diet"):
>              withdeps = []
>              for p in pkglist:
> -                deps = cache.get_dependencies( p )
> +                deps = cache.get_dependencies(p)
>                  withdeps += [d.name for d in deps]
>                  withdeps += [p]
>  
> -            pkglist = list( set( withdeps ) )
> +            pkglist = list(set(withdeps))
>  
>          file_list = []
>          for line in pkglist:
> -            file_list += src.cat_file("var/lib/dpkg/info/%s.list" %(line))
> -            file_list += src.cat_file("var/lib/dpkg/info/%s.conffiles" %(line))
> +            file_list += src.cat_file("var/lib/dpkg/info/%s.list" % (line))
> +            file_list += src.cat_file("var/lib/dpkg/info/%s.conffiles" %
> +                                      (line))
>  
> -            file_list += src.cat_file("var/lib/dpkg/info/%s:%s.list" %(line, arch))
> -            file_list += src.cat_file("var/lib/dpkg/info/%s:%s.conffiles" %(line, arch))
> +            file_list += src.cat_file("var/lib/dpkg/info/%s:%s.list" %
> +                                      (line, arch))
> +            file_list += src.cat_file(
> +                "var/lib/dpkg/info/%s:%s.conffiles" %
> +                (line, arch))
>  
>          file_list = list(sorted(set(file_list)))
>          copy_filelist(src, file_list, dst)
> @@ -67,50 +74,54 @@ def extract_target( src, xml, dst, log, cache ):
>  
>      try:
>          dst.mkdir_p("dev")
> -    except:
> +    except BaseException:
>          pass
>      try:
>          dst.mkdir_p("proc")
> -    except:
> +    except BaseException:
>          pass
>      try:
>          dst.mkdir_p("sys")
> -    except:
> +    except BaseException:
>          pass
>  
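A side note on the three try/except blocks above: autopep8 mechanically rewrites bare `except:` to `except BaseException:`, which keeps behavior identical but is still far broader than needed for a `mkdir` that may already exist. A narrower idiom (not part of this patch, just a sketch of the alternative) would be `os.makedirs(..., exist_ok=True)`:

```python
import os
import tempfile

# Sketch: the dev/proc/sys creation above, without catching BaseException.
# exist_ok=True makes the call idempotent, so no exception handling is needed.
root = tempfile.mkdtemp()
for d in ("dev", "proc", "sys"):
    os.makedirs(os.path.join(root, d), exist_ok=True)  # creates if missing
    os.makedirs(os.path.join(root, d), exist_ok=True)  # second call is a no-op
```

Catching `BaseException` also swallows `KeyboardInterrupt` and `SystemExit`, which is rarely intended; `except OSError:` would be the minimal correct catch if the try/except shape is kept.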
>      if xml.tgt.has("setsel"):
> -        pkglist = [ n.et.text for n in xml.node ('target/pkg-list') if n.tag == 'pkg' ]
> +        pkglist = [n.et.text for n in xml.node(
> +            'target/pkg-list') if n.tag == 'pkg']
>          psel = 'var/cache/elbe/pkg-selections'
>  
> -        with open (dst.fname (psel), 'w+') as f:
> +        with open(dst.fname(psel), 'w+') as f:
>              for item in pkglist:
>                  f.write("%s  install\n" % item)
>  
>          host_arch = log.get_command_out("dpkg --print-architecture").strip()
> -        if xml.is_cross (host_arch):
> +        if xml.is_cross(host_arch):
>              ui = "/usr/share/elbe/qemu-elbe/" + str(xml.defs["userinterpr"])
> -            if not os.path.exists (ui):
> +            if not os.path.exists(ui):
>                  ui = "/usr/bin/" + str(xml.defs["userinterpr"])
> -            log.do ('cp %s %s' % (ui, dst.fname( "usr/bin" )))
> +            log.do('cp %s %s' % (ui, dst.fname("usr/bin")))
>  
> -        log.chroot (dst.path, "/usr/bin/dpkg --clear-selections")
> -        log.chroot (dst.path, "/usr/bin/dpkg --set-selections < %s " % dst.fname (psel))
> -        log.chroot (dst.path, "/usr/bin/dpkg --purge -a")
> +        log.chroot(dst.path, "/usr/bin/dpkg --clear-selections")
> +        log.chroot(
> +            dst.path,
> +            "/usr/bin/dpkg --set-selections < %s " %
> +            dst.fname(psel))
> +        log.chroot(dst.path, "/usr/bin/dpkg --purge -a")
>  
>  
>  class ElbeFilesystem(Filesystem):
>      def __init__(self, path, clean=False):
> -        Filesystem.__init__(self,path,clean)
> +        Filesystem.__init__(self, path, clean)
>  
>      def dump_elbeversion(self, xml):
>          f = self.open("etc/elbe_version", "w+")
> -        f.write("%s %s\n" %(xml.prj.text("name"), xml.prj.text("version")))
> +        f.write("%s %s\n" % (xml.prj.text("name"), xml.prj.text("version")))
>          f.write("this RFS was generated by elbe %s\n" % (elbe_version))
>          f.write(time.strftime("%c\n"))
>          f.close()
>  
>          version_file = self.open("etc/updated_version", "w")
> -        version_file.write( xml.text ("/project/version") )
> +        version_file.write(xml.text("/project/version"))
>          version_file.close
>  
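Worth flagging while touching this hunk: `version_file.close` above (no parentheses) only references the bound method and never calls it, so the file is closed whenever the garbage collector gets to it, not deterministically. autopep8 cannot catch this class of bug. A context manager (sketch below, hypothetical path) closes the file at block exit:

```python
import os
import tempfile

# Sketch: writing etc/updated_version with a with-block instead of a
# manual close. The file is guaranteed closed when the block exits.
path = os.path.join(tempfile.mkdtemp(), "updated_version")
with open(path, "w") as version_file:
    version_file.write("1.0")
# closed here; a later reader sees the flushed content
with open(path) as f:
    content = f.read()
```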
>          elbe_base = self.open("etc/elbe_base.xml", "wb")
> @@ -124,66 +135,67 @@ class ElbeFilesystem(Filesystem):
>                  with io.open(os.path.join(dir, "copyright"), "rb") as lic:
>                      lic_text = lic.read()
>              except IOError as e:
> -                log.printo( "Error while processing license file %s: '%s'" %
> -                        (os.path.join(dir, "copyright"), e.strerror))
> -                lic_text = "Error while processing license file %s: '%s'" % (os.path.join(dir, "copyright"), e.strerror)
> +                log.printo("Error while processing license file %s: '%s'" %
> +                           (os.path.join(dir, "copyright"), e.strerror))
> +                lic_text = "Error while processing license file %s: '%s'" % (
> +                    os.path.join(dir, "copyright"), e.strerror)
>  
>              try:
> -                lic_text = unicode (lic_text, encoding='utf-8')
> -            except:
> -                lic_text = unicode (lic_text, encoding='iso-8859-1')
> -
> +                lic_text = unicode(lic_text, encoding='utf-8')
> +            except BaseException:
> +                lic_text = unicode(lic_text, encoding='iso-8859-1')
>  
> -            if not f is None:
> +            if f is not None:
>                  f.write(unicode(os.path.basename(dir)))
> -                f.write(u":\n================================================================================")
> +                f.write(
> +                    u":\n================================================================================")
>                  f.write(u"\n")
>                  f.write(lic_text)
>                  f.write(u"\n\n")
>  
> -            if not xml_fname is None:
> -                licence_xml.add_copyright_file (os.path.basename(dir), lic_text)
> +            if xml_fname is not None:
> +                licence_xml.add_copyright_file(os.path.basename(dir), lic_text)
>  
> -        if not xml_fname is None:
> -            licence_xml.write (xml_fname)
> +        if xml_fname is not None:
> +            licence_xml.write(xml_fname)
>  
>  
>  class ChRootFilesystem(ElbeFilesystem):
>      def __init__(self, path, interpreter=None, clean=False):
> -        ElbeFilesystem.__init__(self,path,clean)
> +        ElbeFilesystem.__init__(self, path, clean)
>          self.interpreter = interpreter
> -        self.cwd = os.open ("/", os.O_RDONLY)
> +        self.cwd = os.open("/", os.O_RDONLY)
>          self.inchroot = False
>  
> -    def __delete__ (self):
> -        os.close (self.cwd)
> +    def __delete__(self):
> +        os.close(self.cwd)
>  
>      def __enter__(self):
>          if self.interpreter:
> -            if not self.exists ("usr/bin"):
> -                self.mkdir ("usr/bin")
> +            if not self.exists("usr/bin"):
> +                self.mkdir("usr/bin")
>  
>              ui = "/usr/share/elbe/qemu-elbe/" + self.interpreter
> -            if not os.path.exists (ui):
> +            if not os.path.exists(ui):
>                  ui = "/usr/bin/" + self.interpreter
> -            os.system ('cp %s %s' % (ui, self.fname( "usr/bin" )))
> +            os.system('cp %s %s' % (ui, self.fname("usr/bin")))
>  
> -        if self.exists ("/etc/resolv.conf"):
> -            os.system ('mv %s %s' % (self.fname ("etc/resolv.conf"),
> -                                     self.fname ("etc/resolv.conf.orig")))
> -        os.system ('cp %s %s' % ("/etc/resolv.conf",
> -                                 self.fname("etc/resolv.conf")))
> +        if self.exists("/etc/resolv.conf"):
> +            os.system('mv %s %s' % (self.fname("etc/resolv.conf"),
> +                                    self.fname("etc/resolv.conf.orig")))
> +        os.system('cp %s %s' % ("/etc/resolv.conf",
> +                                self.fname("etc/resolv.conf")))
>  
>          if self.exists("/etc/apt/apt.conf"):
> -            os.system ('cp %s %s' % (self.fname ("/etc/apt/apt.conf"),
> -                                     self.fname ("/etc/apt/apt.conf.orig")))
> -        if os.path.exists ("/etc/apt/apt.conf"):
> -            os.system ('cp %s %s' % ("/etc/apt/apt.conf",
> -                                     self.fname("/etc/apt/")))
> +            os.system('cp %s %s' % (self.fname("/etc/apt/apt.conf"),
> +                                    self.fname("/etc/apt/apt.conf.orig")))
> +        if os.path.exists("/etc/apt/apt.conf"):
> +            os.system('cp %s %s' % ("/etc/apt/apt.conf",
> +                                    self.fname("/etc/apt/")))
>  
> -        self.mkdir_p ("usr/sbin")
> -        self.write_file ("usr/sbin/policy-rc.d",
> -            0o755, "#!/bin/sh\nexit 101\n")
> +        self.mkdir_p("usr/sbin")
> +        self.write_file("usr/sbin/policy-rc.d",
> +                        0o755, "#!/bin/sh\nexit 101\n")
>  
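For readers unfamiliar with the `policy-rc.d` trick in the hunk above: Debian's `invoke-rc.d` consults `/usr/sbin/policy-rc.d` before starting services, and exit code 101 means "action forbidden", so maintainer scripts run during package installation do not start daemons inside the chroot. A minimal standalone sketch (hypothetical `/tmp` path, not the chroot path used by the code):

```shell
# Sketch: a policy-rc.d that always answers "action forbidden" (exit 101).
mkdir -p /tmp/demo-sbin
cat > /tmp/demo-sbin/policy-rc.d <<'EOF'
#!/bin/sh
exit 101
EOF
chmod 0755 /tmp/demo-sbin/policy-rc.d
/tmp/demo-sbin/policy-rc.d || echo "exit=$?"
```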
>          self.mount()
>          return self
> @@ -193,38 +205,38 @@ class ChRootFilesystem(ElbeFilesystem):
>              self.leave_chroot()
>          self.umount()
>          if self.interpreter:
> -            os.system( 'rm -f %s' %
> -                        os.path.join(self.path, "usr/bin/"+self.interpreter) )
> +            os.system('rm -f %s' %
> +                      os.path.join(self.path, "usr/bin/" + self.interpreter))
>  
> -        os.system ('rm -f %s' % (self.fname ("etc/resolv.conf")))
> +        os.system('rm -f %s' % (self.fname("etc/resolv.conf")))
>  
> -        if self.exists ("/etc/resolv.conf.orig"):
> -            os.system ('mv %s %s' % (self.fname ("etc/resolv.conf.orig"),
> -                                     self.fname ("etc/resolv.conf")))
> +        if self.exists("/etc/resolv.conf.orig"):
> +            os.system('mv %s %s' % (self.fname("etc/resolv.conf.orig"),
> +                                    self.fname("etc/resolv.conf")))
>  
>          if self.exists("/etc/apt/apt.conf"):
> -            os.system ('rm -f %s' % (self.fname ("etc/apt/apt.conf")))
> +            os.system('rm -f %s' % (self.fname("etc/apt/apt.conf")))
>  
> -        if self.exists ("/etc/apt/apt.conf.orig"):
> -            os.system ('mv %s %s' % (self.fname ("etc/apt/apt.conf.orig"),
> -                                     self.fname ("etc/apt/apt.conf")))
> +        if self.exists("/etc/apt/apt.conf.orig"):
> +            os.system('mv %s %s' % (self.fname("etc/apt/apt.conf.orig"),
> +                                    self.fname("etc/apt/apt.conf")))
>  
>          if self.exists("/usr/sbin/policy-rc.d"):
> -            os.system ('rm -f %s' % (self.fname ("usr/sbin/policy-rc.d")))
> +            os.system('rm -f %s' % (self.fname("usr/sbin/policy-rc.d")))
>  
>      def mount(self):
>          if self.path == '/':
>              return
>          try:
> -            os.system ("mount -t proc none %s/proc" % self.path)
> -            os.system ("mount -t sysfs none %s/sys" % self.path)
> -            os.system ("mount -o bind /dev %s/dev" % self.path)
> -            os.system ("mount -o bind /dev/pts %s/dev/pts" % self.path)
> -        except:
> -            self.umount ()
> +            os.system("mount -t proc none %s/proc" % self.path)
> +            os.system("mount -t sysfs none %s/sys" % self.path)
> +            os.system("mount -o bind /dev %s/dev" % self.path)
> +            os.system("mount -o bind /dev/pts %s/dev/pts" % self.path)
> +        except BaseException:
> +            self.umount()
>              raise
>  
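One more observation on `mount()` above, orthogonal to the pep8 change: `os.system()` returns the command's exit status but never raises on a nonzero exit, so the try/except around the four mount calls cannot actually catch a failed mount. A sketch of the difference, using `subprocess.check_call` (which does raise):

```python
import subprocess

# Sketch: os.system() would return 256 here and raise nothing;
# check_call raises CalledProcessError on a nonzero exit status,
# so a cleanup-and-reraise pattern like mount()/umount() can work.
try:
    subprocess.check_call("false", shell=True)  # exits nonzero
    reached = True
except subprocess.CalledProcessError:
    reached = False  # this branch is taken
```

Not something this reformatting patch needs to fix, but worth a follow-up.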
> -    def enter_chroot (self):
> +    def enter_chroot(self):
>          assert not self.inchroot
>  
>          os.environ["LANG"] = "C"
> @@ -239,30 +251,30 @@ class ChRootFilesystem(ElbeFilesystem):
>  
>          os.chroot(self.path)
>  
> -
> -    def _umount (self, path):
> -        if os.path.ismount (path):
> +    def _umount(self, path):
> +        if os.path.ismount(path):
>              os.system("umount %s" % path)
>  
> -    def umount (self):
> +    def umount(self):
>          if self.path == '/':
>              return
> -        self._umount ("%s/proc/sys/fs/binfmt_misc" % self.path)
> -        self._umount ("%s/proc" % self.path)
> -        self._umount ("%s/sys" % self.path)
> -        self._umount ("%s/dev/pts" % self.path)
> -        self._umount ("%s/dev" % self.path)
> +        self._umount("%s/proc/sys/fs/binfmt_misc" % self.path)
> +        self._umount("%s/proc" % self.path)
> +        self._umount("%s/sys" % self.path)
> +        self._umount("%s/dev/pts" % self.path)
> +        self._umount("%s/dev" % self.path)
>  
> -    def leave_chroot (self):
> +    def leave_chroot(self):
>          assert self.inchroot
>  
> -        os.fchdir (self.cwd)
> +        os.fchdir(self.cwd)
>  
>          self.inchroot = False
>          if self.path == '/':
>              return
>  
> -        os.chroot (".")
> +        os.chroot(".")
> +
>  
>  class TargetFs(ChRootFilesystem):
>      def __init__(self, path, log, xml, clean=True):
> @@ -280,16 +292,21 @@ class TargetFs(ChRootFilesystem):
>              for fs in xml.tgt.node("fstab"):
>                  if not fs.has("nofstab"):
>                      fstab = fstabentry(xml, fs)
> -                    f.write (fstab.get_str ())
> +                    f.write(fstab.get_str())
>              f.close()
>  
>      def part_target(self, targetdir, grub_version):
>  
>          # create target images and copy the rfs into them
> -        self.images = do_hdimg( self.log, self.xml, targetdir, self, grub_version )
> +        self.images = do_hdimg(
> +            self.log,
> +            self.xml,
> +            targetdir,
> +            self,
> +            grub_version)
>  
>          if self.xml.has("target/package/tar"):
> -            targz_name = self.xml.text ("target/package/tar/name")
> +            targz_name = self.xml.text("target/package/tar/name")
>              try:
>                  options = ''
>                  if self.xml.has("target/package/tar/options"):
> @@ -303,7 +320,7 @@ class TargetFs(ChRootFilesystem):
>                  )
>                  self.log.do(cmd % args)
>                  # only append filename if creating tarball was successful
> -                self.images.append (targz_name)
> +                self.images.append(targz_name)
>              except CommandError as e:
>                  # error was logged; continue creating cpio image
>                  pass
> @@ -313,9 +330,12 @@ class TargetFs(ChRootFilesystem):
>              cpio_name = self.xml.text("target/package/cpio/name")
>              os.chdir(self.fname(''))
>              try:
> -                self.log.do("find . -print | cpio -ov -H newc >%s" % os.path.join(targetdir,cpio_name) )
> +                self.log.do(
> +                    "find . -print | cpio -ov -H newc >%s" %
> +                    os.path.join(
> +                        targetdir, cpio_name))
>                  # only append filename if creating cpio was successful
> -                self.images.append (cpio_name)
> +                self.images.append(cpio_name)
>              except CommandError as e:
>                  # error was logged; continue
>                  pass
> @@ -325,13 +345,16 @@ class TargetFs(ChRootFilesystem):
>              sfs_name = self.xml.text("target/package/squashfs/name")
>              os.chdir(self.fname(''))
>              try:
> -                self.log.do("mksquashfs %s %s/%s -noappend -no-progress" % (self.fname(''), targetdir, sfs_name))
> +                self.log.do(
> +                    "mksquashfs %s %s/%s -noappend -no-progress" %
> +                    (self.fname(''), targetdir, sfs_name))
>                  # only append filename if creating mksquashfs was successful
> -                self.images.append (sfs_name)
> +                self.images.append(sfs_name)
>              except CommandError as e:
>                  # error was logged; continue
>                  pass
>  
> +
>  class BuildImgFs(ChRootFilesystem):
>      def __init__(self, path, interpreter):
>          ChRootFilesystem.__init__(self, path, interpreter)
> diff --git a/elbepack/elbeproject.py b/elbepack/elbeproject.py
> index 9e7a518f..99cff3e4 100644
> --- a/elbepack/elbeproject.py
> +++ b/elbepack/elbeproject.py
> @@ -25,26 +25,40 @@ from elbepack.pbuilder import pbuilder_write_config, pbuilder_write_repo_hook, p
>  from elbepack.repomanager import ProjectRepo
>  from elbepack.config import cfg
>  
> +
>  class IncompatibleArchitectureException(Exception):
> -    def __init__ (self, oldarch, newarch):
> -        Exception.__init__(self,
> -            "Cannot change architecture from %s to %s in existing project" %
> -            (oldarch, newarch) )
> +    def __init__(self, oldarch, newarch):
> +        Exception.__init__(
> +            self, "Cannot change architecture from %s to %s in existing project" %
> +            (oldarch, newarch))
> +
>  
>  class AptCacheUpdateError(Exception):
> -    def __init__ (self, e):
> -        Exception.__init__ (self, "Error Updating rpcaptcache: " + str(e))
> +    def __init__(self, e):
> +        Exception.__init__(self, "Error Updating rpcaptcache: " + str(e))
> +
>  
>  class AptCacheCommitError(Exception):
> -    def __init__ (self, msg=''):
> -        Exception.__init__ (self, "Error Committing rpcaptcache %s" % msg)
> +    def __init__(self, msg=''):
> +        Exception.__init__(self, "Error Committing rpcaptcache %s" % msg)
> +
>  
>  class ElbeProject (object):
> -    def __init__ (self, builddir, xmlpath = None, logpath = None, name = None,
> -            override_buildtype = None, skip_validate = False,
> -            url_validation = ValidationMode.CHECK_ALL, rpcaptcache_notifier = None,
> -            private_data = None, postbuild_file = None, presh_file = None,
> -            postsh_file = None, savesh_file = None):
> +    def __init__(
> +            self,
> +            builddir,
> +            xmlpath=None,
> +            logpath=None,
> +            name=None,
> +            override_buildtype=None,
> +            skip_validate=False,
> +            url_validation=ValidationMode.CHECK_ALL,
> +            rpcaptcache_notifier=None,
> +            private_data=None,
> +            postbuild_file=None,
> +            presh_file=None,
> +            postsh_file=None,
> +            savesh_file=None):
>          self.builddir = os.path.abspath(str(builddir))
>          self.chrootpath = os.path.join(self.builddir, "chroot")
>          self.targetpath = os.path.join(self.builddir, "target")
> @@ -74,119 +88,140 @@ class ElbeProject (object):
>          # Use supplied XML file, if given, otherwise use the source.xml
>          # file of the project
>          if xmlpath:
> -            self.xml = ElbeXML( xmlpath, buildtype=override_buildtype,
> -                    skip_validate=skip_validate, url_validation=url_validation )
> +            self.xml = ElbeXML(
> +                xmlpath,
> +                buildtype=override_buildtype,
> +                skip_validate=skip_validate,
> +                url_validation=url_validation)
>          else:
> -            sourcexmlpath = os.path.join( self.builddir, "source.xml" )
> -            self.xml = ElbeXML( sourcexmlpath, buildtype=override_buildtype,
> -                    skip_validate=skip_validate, url_validation=url_validation )
> +            sourcexmlpath = os.path.join(self.builddir, "source.xml")
> +            self.xml = ElbeXML(
> +                sourcexmlpath,
> +                buildtype=override_buildtype,
> +                skip_validate=skip_validate,
> +                url_validation=url_validation)
>  
> -        self.arch = self.xml.text( "project/arch", key="arch" )
> -        self.codename = self.xml.text( "project/suite" )
> +        self.arch = self.xml.text("project/arch", key="arch")
> +        self.codename = self.xml.text("project/suite")
>  
>          # If logpath is given, use an AsciiDocLog instance, otherwise log
>          # to stdout
>          if logpath:
> -            self.log = ASCIIDocLog( logpath )
> +            self.log = ASCIIDocLog(logpath)
>          else:
>              self.log = StdoutLog()
>  
> -        self.repo = ProjectRepo (self.arch, self.codename,
> -                                 os.path.join(self.builddir, "repo"), self.log)
> +        self.repo = ProjectRepo(self.arch, self.codename,
> +                                os.path.join(self.builddir, "repo"), self.log)
>  
>          # Create BuildEnv instance, if the chroot directory exists and
>          # has an etc/elbe_version
> -        if os.path.exists( self.chrootpath ):
> -            self.buildenv = BuildEnv( self.xml, self.log, self.chrootpath, clean=False )
> +        if os.path.exists(self.chrootpath):
> +            self.buildenv = BuildEnv(
> +                self.xml, self.log, self.chrootpath, clean=False)
>          else:
>              self.buildenv = None
>  
>          # Create TargetFs instance, if the target directory exists
> -        if os.path.exists( self.targetpath ):
> -            self.targetfs = TargetFs( self.targetpath, self.log,
> -                    self.buildenv.xml, clean=False )
> +        if os.path.exists(self.targetpath):
> +            self.targetfs = TargetFs(self.targetpath, self.log,
> +                                     self.buildenv.xml, clean=False)
>          else:
>              self.targetfs = None
>  
> -    def build_chroottarball (self):
> +    def build_chroottarball(self):
>          self.log.do("tar cJf %s/chroot.tar.xz \
>                  --exclude=./tmp/*  --exclude=./dev/* \
>                  --exclude=./run/*  --exclude=./sys/* \
>                  --exclude=./proc/* --exclude=./var/cache/* \
>                  -C %s ." % (self.builddir, self.chrootpath))
>  
> -    def build_sysroot (self):
> +    def build_sysroot(self):
>  
>          # ignore packages from debootstrap
> -        ignore_pkgs = [p.et.text for p in self.xml.node ("debootstrappkgs")]
> +        ignore_pkgs = [p.et.text for p in self.xml.node("debootstrappkgs")]
>          ignore_dev_pkgs = []
> -        if self.xml.has ('target/pkg-blacklist/sysroot'):
> -            ignore_dev_pkgs = [p.et.text for p in self.xml.node ("target/pkg-blacklist/sysroot")]
> +        if self.xml.has('target/pkg-blacklist/sysroot'):
> +            ignore_dev_pkgs = [p.et.text for p in self.xml.node(
> +                "target/pkg-blacklist/sysroot")]
>  
>          with self.buildenv:
>              try:
>                  self.get_rpcaptcache().mark_install_devpkgs(ignore_pkgs, ignore_dev_pkgs)
>              except SystemError as e:
> -                self.log.printo( "mark install devpkgs failed: %s" % str(e) )
> +                self.log.printo("mark install devpkgs failed: %s" % str(e))
>              try:
>                  self.get_rpcaptcache().commit()
>              except SystemError as e:
> -                self.log.printo( "commiting changes failed: %s" % str(e) )
> -                raise AptCacheCommitError (str(e))
> +                self.log.printo("commiting changes failed: %s" % str(e))
> +                raise AptCacheCommitError(str(e))
>  
>          sysrootfilelist = os.path.join(self.builddir, "sysroot-filelist")
>  
>          with self.buildenv.rfs:
> -            self.log.do( "chroot %s /usr/bin/symlinks -cr /usr/lib" %
> -                         self.chrootpath )
> +            self.log.do("chroot %s /usr/bin/symlinks -cr /usr/lib" %
> +                        self.chrootpath)
>  
>          triplet = self.xml.defs["triplet"]
>  
> -        paths = [ './usr/include', './usr/include/' + triplet,
> -                  './etc/ld.so.conf*',
> -                  './opt/*/lib/*.so', '/opt/*lib/*.so.*', './opt/*/include/',
> -                  './opt/*/lib/' + triplet, './opt/*/include/' + triplet,
> -                  './lib/*.so', './lib/*.so.*', './lib/' + triplet,
> -                  './usr/lib/*.so', './usr/lib/*.so', './usr/lib/*.so.*', './usr/lib/' + triplet ]
> -
> -        self.log.do( "rm %s" % sysrootfilelist, allow_fail=True)
> -
> -        os.chdir( self.chrootpath )
> +        paths = [
> +            './usr/include',
> +            './usr/include/' + triplet,
> +            './etc/ld.so.conf*',
> +            './opt/*/lib/*.so',
> +            '/opt/*lib/*.so.*',
> +            './opt/*/include/',
> +            './opt/*/lib/' + triplet,
> +            './opt/*/include/' + triplet,
> +            './lib/*.so',
> +            './lib/*.so.*',
> +            './lib/' + triplet,
> +            './usr/lib/*.so',
> +            './usr/lib/*.so',
> +            './usr/lib/*.so.*',
> +            './usr/lib/' + triplet]
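
Not a problem introduced by this patch, but now that the list is one entry
per line it is easy to see: './usr/lib/*.so' appears twice, and
'/opt/*lib/*.so.*' looks like a typo for './opt/*/lib/*.so.*' (it is the only
entry without the leading './'). If that gets cleaned up in a follow-up, an
order-preserving de-dup is a one-liner; a sketch, assuming Python 3
(`collections.OrderedDict.fromkeys` gives the same result on Python 2):

```python
# Order-preserving de-duplication of the glob list; plain dict keys
# keep insertion order on Python 3.7+.
def dedup(paths):
    return list(dict.fromkeys(paths))

globs = ['./usr/lib/*.so', './usr/lib/*.so', './usr/lib/*.so.*']
print(dedup(globs))  # ['./usr/lib/*.so', './usr/lib/*.so.*']
```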
> +
> +        self.log.do("rm %s" % sysrootfilelist, allow_fail=True)
> +
> +        os.chdir(self.chrootpath)
>          for p in paths:
> -            self.log.do( 'find -path "%s" >> %s' % (p, sysrootfilelist) )
> +            self.log.do('find -path "%s" >> %s' % (p, sysrootfilelist))
>  
> -        self.log.do( "tar cfJ %s/sysroot.tar.xz -C %s -T %s" %
> -                (self.builddir, self.chrootpath, sysrootfilelist) )
> +        self.log.do("tar cfJ %s/sysroot.tar.xz -C %s -T %s" %
> +                    (self.builddir, self.chrootpath, sysrootfilelist))
>  
>          # chroot is invalid after adding all the -dev packages
>          # it shouldn't be used to create an incremental image
> -        # 
> +        #
>          # We only remove /etc/elbe_version here.
>          # So we can still elbe chroot into the fs
> -        self.buildenv.rfs.remove( "/etc/elbe_version", noerr=True )
> +        self.buildenv.rfs.remove("/etc/elbe_version", noerr=True)
>  
> +    def pbuild(self, p):
> +        self.pdebuild_init()
> +        src_path = os.path.join(self.builddir, "pdebuilder", "current")
>  
> -    def pbuild (self, p):
> -        self.pdebuild_init ()
> -        src_path = os.path.join (self.builddir, "pdebuilder", "current")
> -
> -        self.log.printo ("retrieve pbuild sources: %s" % p.text('.').strip())
> +        self.log.printo("retrieve pbuild sources: %s" % p.text('.').strip())
>          if p.tag == 'git':
> -            self.log.do ("git clone %s %s" % (p.text('.').strip(), src_path))
> +            self.log.do("git clone %s %s" % (p.text('.').strip(), src_path))
>              try:
> -                self.log.do ("cd %s; git reset --hard %s" % (src_path, p.et.attrib['revision']))
> +                self.log.do(
> +                    "cd %s; git reset --hard %s" %
> +                    (src_path, p.et.attrib['revision']))
>              except IndexError:
>                  pass
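
Also pre-existing, but worth noting while this hunk is being touched:
`p.et.attrib` behaves like a plain dict, so a missing 'revision' attribute
raises KeyError, not IndexError -- the handler above can never fire. A
minimal sketch of the intended guard (the `attrib` stand-in is mine):

```python
# attrib behaves like a plain dict, so a missing key raises KeyError.
attrib = {}  # stand-in for p.et.attrib without a 'revision' attribute

try:
    revision = attrib['revision']
except KeyError:       # IndexError would not catch this
    revision = None    # fall back to the clone's default HEAD

print(revision)  # None
```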
>          elif p.tag == 'svn':
> -            self.log.do ("svn co --non-interactive %s %s" % (p.text('.').strip(), src_path))
> +            self.log.do(
> +                "svn co --non-interactive %s %s" %
> +                (p.text('.').strip(), src_path))
>          else:
> -            self.log.printo ("unknown pbuild source vcs: %s" % p.tag)
> +            self.log.printo("unknown pbuild source vcs: %s" % p.tag)
>  
> -        self.pdebuild_build ()
> +        self.pdebuild_build()
>  
> -    def build (self, skip_debootstrap = False, build_bin = False,
> -               build_sources = False, cdrom_size = None, debug = False,
> -               skip_pkglist = False, skip_pbuild = False):
> +    def build(self, skip_debootstrap=False, build_bin=False,
> +              build_sources=False, cdrom_size=None, debug=False,
> +              skip_pkglist=False, skip_pbuild=False):
>  
>          # Write the log header
>          self.write_log_header()
> @@ -195,76 +230,75 @@ class ElbeProject (object):
>          m = ValidationMode.CHECK_BINARIES
>          if build_sources:
>              m = ValidationMode.CHECK_ALL
> -        self.xml.validate_apt_sources ( m , self.arch )
> -
> +        self.xml.validate_apt_sources(m, self.arch)
>  
>          if (self.xml.has('target/pbuilder') and not skip_pbuild):
> -            if not os.path.exists ( os.path.join (self.builddir, "pbuilder") ):
> -                self.create_pbuilder ()
> -            for p in self.xml.node ('target/pbuilder'):
> -                self.pbuild (p)
> +            if not os.path.exists(os.path.join(self.builddir, "pbuilder")):
> +                self.create_pbuilder()
> +            for p in self.xml.node('target/pbuilder'):
> +                self.pbuild(p)
>                  # the package might be needed by a following pbuild, so update
>                  # the project repo that it can be installed in as
>                  # build-dependency
> -                self.repo.finalize ()
> +                self.repo.finalize()
>  
>          # To avoid update cache errors, the project repo needs to have
>          # Release and Packages files, even if it's empty. So don't do this
>          # in the if case above!
> -        self.repo.finalize ()
> +        self.repo.finalize()
>  
>          # Create the build environment, if it does not a valid one
>          # self.buildenv might be set when we come here.
>          # However, if its not a full_buildenv, we specify clean here,
>          # so it gets rebuilt properly.
>          if not self.has_full_buildenv():
> -            self.log.do( 'mkdir -p "%s"' % self.chrootpath )
> -            self.buildenv = BuildEnv( self.xml, self.log, self.chrootpath,
> -                                      build_sources = build_sources, clean = True )
> +            self.log.do('mkdir -p "%s"' % self.chrootpath)
> +            self.buildenv = BuildEnv(self.xml, self.log, self.chrootpath,
> +                                     build_sources=build_sources, clean=True)
>              skip_pkglist = False
>  
>          # Import keyring
>          self.buildenv.import_keys()
> -        self.log.printo ("Keys imported")
> +        self.log.printo("Keys imported")
>  
>          # Install packages
>          if not skip_pkglist:
>              self.install_packages()
>  
>          try:
> -            self.buildenv.rfs.dump_elbeversion (self.xml)
> +            self.buildenv.rfs.dump_elbeversion(self.xml)
>          except IOError:
> -            self.log.printo ("dump elbeversion failed")
> +            self.log.printo("dump elbeversion failed")
>  
>          # Extract target FS. We always create a new instance here with
>          # clean=true, because we want a pristine directory.
> -        self.targetfs = TargetFs( self.targetpath, self.log,
> -                self.buildenv.xml, clean=True )
> -        os.chdir( self.buildenv.rfs.fname( '' ) )
> -        extract_target( self.buildenv.rfs, self.xml, self.targetfs,
> -                self.log, self.get_rpcaptcache() )
> +        self.targetfs = TargetFs(self.targetpath, self.log,
> +                                 self.buildenv.xml, clean=True)
> +        os.chdir(self.buildenv.rfs.fname(''))
> +        extract_target(self.buildenv.rfs, self.xml, self.targetfs,
> +                       self.log, self.get_rpcaptcache())
>  
>          # Package validation and package list
>          if not skip_pkglist:
> -            validationpath = os.path.join( self.builddir, "validation.txt" )
> -            pkgs = self.xml.xml.node( "/target/pkg-list" )
> -            if self.xml.has( "fullpkgs" ):
> -                check_full_pkgs( pkgs, self.xml.xml.node( "/fullpkgs" ),
> -                        validationpath, self.get_rpcaptcache() )
> +            validationpath = os.path.join(self.builddir, "validation.txt")
> +            pkgs = self.xml.xml.node("/target/pkg-list")
> +            if self.xml.has("fullpkgs"):
> +                check_full_pkgs(pkgs, self.xml.xml.node("/fullpkgs"),
> +                                validationpath, self.get_rpcaptcache())
>              else:
> -                check_full_pkgs( pkgs, None, validationpath,
> -                        self.get_rpcaptcache() )
> -            dump_fullpkgs( self.xml, self.buildenv.rfs, self.get_rpcaptcache() )
> +                check_full_pkgs(pkgs, None, validationpath,
> +                                self.get_rpcaptcache())
> +            dump_fullpkgs(self.xml, self.buildenv.rfs, self.get_rpcaptcache())
>  
> -            self.xml.dump_elbe_version ()
> +            self.xml.dump_elbe_version()
>  
> -        self.targetfs.write_fstab (self.xml )
> +        self.targetfs.write_fstab(self.xml)
>  
>          # Dump ELBE version
>          try:
> -            self.targetfs.dump_elbeversion( self.xml )
> +            self.targetfs.dump_elbeversion(self.xml)
>          except MemoryError:
> -            self.log.printo( "dump elbeversion failed" )
> +            self.log.printo("dump elbeversion failed")
>  
>          # install packages for buildenv
>          if not skip_pkglist:
> @@ -272,64 +306,72 @@ class ElbeProject (object):
>  
>          # Write source.xml
>          try:
> -            sourcexmlpath = os.path.join( self.builddir, "source.xml" )
> -            self.xml.xml.write( sourcexmlpath )
> +            sourcexmlpath = os.path.join(self.builddir, "source.xml")
> +            self.xml.xml.write(sourcexmlpath)
>          except MemoryError:
> -            self.log.printo( "write source.xml failed (archive to huge?)" )
> +            self.log.printo("write source.xml failed (archive too huge?)")
>  
>          # Elbe report
> -        reportpath = os.path.join( self.builddir, "elbe-report.txt" )
> -        elbe_report( self.xml, self.buildenv, self.get_rpcaptcache(),
> -                reportpath, self.targetfs )
> +        reportpath = os.path.join(self.builddir, "elbe-report.txt")
> +        elbe_report(self.xml, self.buildenv, self.get_rpcaptcache(),
> +                    reportpath, self.targetfs)
>  
>          # the current license code raises an exception that interrupts the hole
>          # build if a licence can't be converted to utf-8. Exception handling can
>          # be removed as soon as the licence code is more stable
>          lic_err = False
>          try:
> -            f = io.open( os.path.join( self.builddir, "licence.txt" ), "w+", encoding='utf-8' )
> -            self.buildenv.rfs.write_licenses(f, self.log, os.path.join( self.builddir, "licence.xml"))
> +            f = io.open(
> +                os.path.join(
> +                    self.builddir,
> +                    "licence.txt"),
> +                "w+",
> +                encoding='utf-8')
> +            self.buildenv.rfs.write_licenses(
> +                f, self.log, os.path.join(
> +                    self.builddir, "licence.xml"))
>          except Exception as e:
> -            self.log.printo( "error during generating licence.txt/xml" )
> -            self.log.printo( sys.exc_info()[0] )
> +            self.log.printo("error during generating licence.txt/xml")
> +            self.log.printo(sys.exc_info()[0])
>              lic_err = True
>          finally:
>              f.close()
>          if lic_err:
> -            os.remove (os.path.join( self.builddir, "licence.txt" ))
> -            os.remove (os.path.join( self.builddir, "licence.xml" ))
> +            os.remove(os.path.join(self.builddir, "licence.txt"))
> +            os.remove(os.path.join(self.builddir, "licence.xml"))
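
One more thing visible in this hunk (again not caused by the pep8 change):
if io.open() itself throws, `f` is never bound and the `finally: f.close()`
raises NameError, masking the original error. A `with` block closes the file
on both paths and avoids referencing `f` before it exists; a sketch with the
same structure (builddir stand-in is mine):

```python
import io
import os
import tempfile

builddir = tempfile.mkdtemp()  # stand-in for self.builddir

# The context manager closes the file even on error, and f is only
# referenced after io.open() has succeeded.
try:
    with io.open(os.path.join(builddir, "licence.txt"),
                 "w+", encoding="utf-8") as f:
        f.write(u"licence text")
    lic_err = False
except Exception:
    lic_err = True

print(lic_err)  # False
```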
>  
>          # Use some handwaving to determine grub version
>          # jessie and wheezy grubs are 2.0 but differ in behaviour
>          #
>          # We might also want support for legacy grub
> -        if self.get_rpcaptcache().is_installed( 'grub-pc' ):
> +        if self.get_rpcaptcache().is_installed('grub-pc'):
>              if self.codename == "wheezy":
>                  grub_version = 199
>              else:
>                  grub_version = 202
> -        elif self.get_rpcaptcache().is_installed( 'grub-legacy' ):
> -            self.log.printo( "package grub-legacy is installed, this is obsolete, skipping grub" )
> +        elif self.get_rpcaptcache().is_installed('grub-legacy'):
> +            self.log.printo(
> +                "package grub-legacy is installed, this is obsolete, skipping grub")
>              grub_version = 0
>          else:
> -            self.log.printo( "package grub-pc is not installed, skipping grub" )
> +            self.log.printo("package grub-pc is not installed, skipping grub")
>              # version 0 == skip_grub
>              grub_version = 0
> -        self.targetfs.part_target( self.builddir, grub_version )
> +        self.targetfs.part_target(self.builddir, grub_version)
>  
>          # Build cdrom images
>          self.repo_images = []
>          with self.buildenv:
>              init_codename = self.xml.get_initvm_codename()
>              if build_bin:
> -                self.repo_images += mk_binary_cdrom( self.buildenv.rfs,
> -                                                     self.arch,
> -                                                     self.codename,
> -                                                     init_codename,
> -                                                     self.xml,
> -                                                     self.builddir,
> -                                                     self.log,
> -                                                     cdrom_size=cdrom_size )
> +                self.repo_images += mk_binary_cdrom(self.buildenv.rfs,
> +                                                    self.arch,
> +                                                    self.codename,
> +                                                    init_codename,
> +                                                    self.xml,
> +                                                    self.builddir,
> +                                                    self.log,
> +                                                    cdrom_size=cdrom_size)
>              if build_sources:
>                  try:
>                      self.repo_images += mk_source_cdrom(self.buildenv.rfs,
> @@ -342,141 +384,179 @@ class ElbeProject (object):
>                                                          xml=self.xml)
>                  except SystemError as e:
>                      # e.g. no deb-src urls specified
> -                    self.log.printo( str (e) )
> -
> +                    self.log.printo(str(e))
>  
>          if self.postbuild_file:
> -            self.log.h2 ("postbuild script:")
> -            self.log.do (self.postbuild_file + ' "%s %s %s"' % (
> -                            self.builddir,
> -                            self.xml.text ("project/version"),
> -                            self.xml.text ("project/name")),
> -                         allow_fail=True)
> +            self.log.h2("postbuild script:")
> +            self.log.do(self.postbuild_file + ' "%s %s %s"' % (
> +                self.builddir,
> +                self.xml.text("project/version"),
> +                self.xml.text("project/name")),
> +                allow_fail=True)
>  
> -        os.system( 'cat "%s"' % os.path.join( self.builddir, "validation.txt" ) )
> +        os.system('cat "%s"' % os.path.join(self.builddir, "validation.txt"))
>  
> -    def pdebuild_init (self):
> +    def pdebuild_init(self):
>          # Remove pdebuilder directory, containing last build results
> -        self.log.do ('rm -rf "%s"' % os.path.join (self.builddir,
> -                                                   "pdebuilder"))
> +        self.log.do('rm -rf "%s"' % os.path.join(self.builddir,
> +                                                 "pdebuilder"))
>  
>          # Remove pbuilder/result directory
> -        self.log.do ('rm -rf "%s"' % os.path.join (self.builddir,
> -                                                   "pbuilder", "result"))
> +        self.log.do('rm -rf "%s"' % os.path.join(self.builddir,
> +                                                 "pbuilder", "result"))
>  
>          # Recreate the directories removed
> -        self.log.do ('mkdir -p "%s"' % os.path.join (self.builddir,
> -                                                     "pbuilder", "result"))
> +        self.log.do('mkdir -p "%s"' % os.path.join(self.builddir,
> +                                                   "pbuilder", "result"))
>  
> -    def pdebuild (self):
> -        self.pdebuild_init ()
> +    def pdebuild(self):
> +        self.pdebuild_init()
>  
> -        pbdir = os.path.join (self.builddir, "pdebuilder", "current")
> -        self.log.do ('mkdir -p "%s"' % os.path.join (pbdir))
> +        pbdir = os.path.join(self.builddir, "pdebuilder", "current")
> +        self.log.do('mkdir -p "%s"' % os.path.join(pbdir))
>  
>          try:
>              for orig_fname in self.orig_files:
> -                ofname = os.path.join (self.builddir, orig_fname)
> -                self.log.do ('mv "%s" "%s"' % (ofname, os.path.join(self.builddir, "pdebuilder")))
> +                ofname = os.path.join(self.builddir, orig_fname)
> +                self.log.do(
> +                    'mv "%s" "%s"' %
> +                    (ofname,
> +                     os.path.join(
> +                         self.builddir,
> +                         "pdebuilder")))
>          finally:
>              self.orig_fname = None
>              self.orig_files = []
>  
>          # Untar current_pdebuild.tar.gz into pdebuilder/current
> -        self.log.do ('tar xfz "%s" -C "%s"' % (os.path.join (self.builddir,
> -                                                  "current_pdebuild.tar.gz"),
> -                                                pbdir))
> +        self.log.do(
> +            'tar xfz "%s" -C "%s"' %
> +            (os.path.join(
> +                self.builddir,
> +                "current_pdebuild.tar.gz"),
> +                pbdir))
>  
> -        self.pdebuild_build ()
> -        self.repo.finalize ()
> +        self.pdebuild_build()
> +        self.repo.finalize()
>  
> -
> -    def pdebuild_build (self):
> +    def pdebuild_build(self):
>          try:
> -            self.log.do ('cd "%s"; pdebuild --debbuildopts "-j%s -sa" --configfile "%s" --use-pdebuild-internal --buildresult "%s"' % (
> -                os.path.join (self.builddir, "pdebuilder", "current"),
> -                cfg['pbuilder_jobs'],
> -                os.path.join (self.builddir, "pbuilderrc"),
> -                os.path.join (self.builddir, "pbuilder", "result")))
> -            self.repo.remove (os.path.join (self.builddir, "pdebuilder", "current", "debian", "control"))
> -
> -            self.repo.include (os.path.join (self.builddir,
> -                "pbuilder", "result", "*.changes"))
> +            self.log.do(
> +                'cd "%s"; pdebuild --debbuildopts "-j%s -sa" --configfile "%s" --use-pdebuild-internal --buildresult "%s"' %
> +                (os.path.join(
> +                    self.builddir, "pdebuilder", "current"), cfg['pbuilder_jobs'], os.path.join(
> +                    self.builddir, "pbuilderrc"), os.path.join(
> +                    self.builddir, "pbuilder", "result")))
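
This is where the double `-a` hurts readability: the interleaved
os.path.join fragments are harder to follow than the over-long line they
replace. Pulling the paths into locals satisfies E501 without the
fragmentation; a possible shape (variable names and stand-in values are
mine, not from the tree):

```python
import os

builddir = "/tmp/build"  # stand-in for self.builddir
jobs = "4"               # stand-in for cfg['pbuilder_jobs']

# Name the three paths once, then format a readable command line.
src_dir = os.path.join(builddir, "pdebuilder", "current")
rc_file = os.path.join(builddir, "pbuilderrc")
result_dir = os.path.join(builddir, "pbuilder", "result")

cmd = ('cd "%s"; pdebuild --debbuildopts "-j%s -sa" '
       '--configfile "%s" --use-pdebuild-internal '
       '--buildresult "%s"' % (src_dir, jobs, rc_file, result_dir))
print(cmd)
```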
> +            self.repo.remove(
> +                os.path.join(
> +                    self.builddir,
> +                    "pdebuilder",
> +                    "current",
> +                    "debian",
> +                    "control"))
> +
> +            self.repo.include(os.path.join(self.builddir,
> +                                           "pbuilder", "result", "*.changes"))
>          except CommandError as e:
> -            self.log.printo ('')
> -            self.log.printo ('Package fails to build.')
> -            self.log.printo ('Please make sure, that the submitted package builds in pbuilder')
> -
> -    def update_pbuilder (self):
> -        self.log.do ('pbuilder --update --configfile "%s" --aptconfdir "%s"' % (
> -                     os.path.join (self.builddir, "pbuilderrc"), os.path.join (self.builddir, "aptconfdir")))
> -
> -    def create_pbuilder (self):
> +            self.log.printo('')
> +            self.log.printo('Package fails to build.')
> +            self.log.printo(
> +                'Please make sure that the submitted package builds in pbuilder')
> +
> +    def update_pbuilder(self):
> +        self.log.do(
> +            'pbuilder --update --configfile "%s" --aptconfdir "%s"' %
> +            (os.path.join(
> +                self.builddir, "pbuilderrc"), os.path.join(
> +                self.builddir, "aptconfdir")))
> +
> +    def create_pbuilder(self):
>          # Remove old pbuilder directory, if it exists
> -        self.log.do ('rm -rf "%s"' % os.path.join (self.builddir, "pbuilder"))
> +        self.log.do('rm -rf "%s"' % os.path.join(self.builddir, "pbuilder"))
>  
>          # make hooks.d and pbuilder directory
> -        self.log.do ('mkdir -p "%s"' % os.path.join (self.builddir, "pbuilder", "hooks.d"))
> -        self.log.do ('mkdir -p "%s"' % os.path.join (self.builddir, "aptconfdir", "apt.conf.d"))
> +        self.log.do(
> +            'mkdir -p "%s"' %
> +            os.path.join(
> +                self.builddir,
> +                "pbuilder",
> +                "hooks.d"))
> +        self.log.do(
> +            'mkdir -p "%s"' %
> +            os.path.join(
> +                self.builddir,
> +                "aptconfdir",
> +                "apt.conf.d"))
>  
>          # write config files
> -        pbuilder_write_config (self.builddir, self.xml, self.log)
> -        pbuilder_write_apt_conf (self.builddir, self.xml)
> -        pbuilder_write_repo_hook (self.builddir, self.xml)
> -        self.log.do ('chmod 755 "%s"' % os.path.join (self.builddir, "pbuilder", "hooks.d", "D10elbe_apt_sources"))
> +        pbuilder_write_config(self.builddir, self.xml, self.log)
> +        pbuilder_write_apt_conf(self.builddir, self.xml)
> +        pbuilder_write_repo_hook(self.builddir, self.xml)
> +        self.log.do(
> +            'chmod 755 "%s"' %
> +            os.path.join(
> +                self.builddir,
> +                "pbuilder",
> +                "hooks.d",
> +                "D10elbe_apt_sources"))
>  
>          # Run pbuilder --create
> -        self.log.do ('pbuilder --create --configfile "%s" --aptconfdir "%s" --extrapackages git' % (
> -                     os.path.join (self.builddir, "pbuilderrc"), os.path.join (self.builddir, "aptconfdir")))
> +        self.log.do(
> +            'pbuilder --create --configfile "%s" --aptconfdir "%s" --extrapackages git' %
> +            (os.path.join(
> +                self.builddir, "pbuilderrc"), os.path.join(
> +                self.builddir, "aptconfdir")))
>  
> -    def sync_xml_to_disk (self):
> +    def sync_xml_to_disk(self):
>          try:
> -            sourcexmlpath = os.path.join( self.builddir, "source.xml" )
> -            self.xml.xml.write( sourcexmlpath )
> +            sourcexmlpath = os.path.join(self.builddir, "source.xml")
> +            self.xml.xml.write(sourcexmlpath)
>          except MemoryError:
> -            self.log.printo( "write source.xml failed (archive to huge?)" )
> +            self.log.printo("write source.xml failed (archive too huge?)")
>  
> -    def get_rpcaptcache (self):
> +    def get_rpcaptcache(self):
>          if self._rpcaptcache is None:
> -            self._rpcaptcache = get_rpcaptcache( self.buildenv.rfs,
> -                    self.log.fp.name,
> -                    self.arch,
> -                    self.rpcaptcache_notifier,
> -                    self.xml.prj.has('norecommend'),
> -                    self.xml.prj.has('noauth'))
> +            self._rpcaptcache = get_rpcaptcache(
> +                self.buildenv.rfs,
> +                self.log.fp.name,
> +                self.arch,
> +                self.rpcaptcache_notifier,
> +                self.xml.prj.has('norecommend'),
> +                self.xml.prj.has('noauth'))
>          return self._rpcaptcache
>  
> -    def drop_rpcaptcache (self):
> -        self._rpcaptcache = None;
> +    def drop_rpcaptcache(self):
> +        self._rpcaptcache = None
>  
> -    def has_full_buildenv (self):
> -        if os.path.exists( self.chrootpath ):
> -            elbeversionpath = os.path.join( self.chrootpath,
> -                    "etc", "elbe_version" )
> -            if os.path.isfile( elbeversionpath ):
> +    def has_full_buildenv(self):
> +        if os.path.exists(self.chrootpath):
> +            elbeversionpath = os.path.join(self.chrootpath,
> +                                           "etc", "elbe_version")
> +            if os.path.isfile(elbeversionpath):
>                  return True
>              else:
> -                self.log.printo( "%s exists, but it does not have an etc/elbe_version file." %
> -                        self.chrootpath )
> +                self.log.printo(
> +                    "%s exists, but it does not have an etc/elbe_version file." %
> +                    self.chrootpath)
>                  # Apparently we do not have a functional build environment
>                  return False
>          else:
>              return False
>  
> -    def set_xml (self, xmlpath):
> +    def set_xml(self, xmlpath):
>          # Use supplied XML file, if given, otherwise change to source.xml
>          if not xmlpath:
> -            xmlpath = os.path.join( self.builddir, "source.xml" )
> +            xmlpath = os.path.join(self.builddir, "source.xml")
>  
> -        newxml = ElbeXML( xmlpath, buildtype=self.override_buildtype,
> -                skip_validate=self.skip_validate,
> -                url_validation=self.url_validation )
> +        newxml = ElbeXML(xmlpath, buildtype=self.override_buildtype,
> +                         skip_validate=self.skip_validate,
> +                         url_validation=self.url_validation)
>  
>          # New XML file has to have the same architecture
> -        oldarch = self.xml.text( "project/arch", key="arch" )
> -        newarch = newxml.text( "project/arch", key="arch" )
> +        oldarch = self.xml.text("project/arch", key="arch")
> +        newarch = newxml.text("project/arch", key="arch")
>          if newarch != oldarch:
> -            raise IncompatibleArchitectureException( oldarch, newarch )
> +            raise IncompatibleArchitectureException(oldarch, newarch)
>  
>          # Throw away old APT cache, targetfs and buildenv
>          self._rpcaptcache = None
> @@ -487,67 +567,71 @@ class ElbeProject (object):
>  
>          # Create a new BuildEnv instance, if we have a build directory
>          if self.has_full_buildenv():
> -            self.buildenv = BuildEnv( self.xml, self.log, self.chrootpath, clean = False )
> +            self.buildenv = BuildEnv(
> +                self.xml, self.log, self.chrootpath, clean=False)
>  
>          # Create TargetFs instance, if the target directory exists.
>          # We use the old content of the directory if no rebuild is done, so
>          # don't clean it (yet).
> -        if os.path.exists( self.targetpath ):
> -            self.targetfs = TargetFs( self.targetpath, self.log,
> -                    self.xml, clean=False )
> +        if os.path.exists(self.targetpath):
> +            self.targetfs = TargetFs(self.targetpath, self.log,
> +                                     self.xml, clean=False)
>          else:
>              self.targetfs = None
>  
> -    def write_log_header (self):
> +    def write_log_header(self):
>          if self.name:
> -            self.log.h1( "ELBE Report for Project " + self.name)
> +            self.log.h1("ELBE Report for Project " + self.name)
>          else:
> -            self.log.h1( "ELBE Report" )
> -        self.log.printo( "report timestamp: " +
> -                datetime.datetime.now().strftime("%Y%m%d-%H%M%S") )
> +            self.log.h1("ELBE Report")
> +        self.log.printo("report timestamp: " +
> +                        datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
>  
> -    def install_packages (self, buildenv=False):
> +    def install_packages(self, buildenv=False):
>          with self.buildenv:
>              # First update the apt cache
>              try:
>                  self.get_rpcaptcache().update()
>              except Exception as e:
> -                raise AptCacheUpdateError (e)
> +                raise AptCacheUpdateError(e)
>  
>              # Then dump the debootstrap packages
>              if self.buildenv.fresh_debootstrap:
>                  if self.buildenv.need_dumpdebootstrap:
> -                    dump_debootstrappkgs (self.xml, self.get_rpcaptcache ())
> -                    dump_initvmpkgs (self.xml)
> +                    dump_debootstrappkgs(self.xml, self.get_rpcaptcache())
> +                    dump_initvmpkgs(self.xml)
>                  self.buildenv.need_dumpdebootstrap = False
>                  source = self.xml
>                  try:
> -                    initxml = ElbeXML( "/var/cache/elbe/source.xml",
> -                            skip_validate=self.skip_validate, url_validation=ValidationMode.NO_CHECK )
> -                    self.xml.get_initvmnode_from( initxml )
> +                    initxml = ElbeXML(
> +                        "/var/cache/elbe/source.xml",
> +                        skip_validate=self.skip_validate,
> +                        url_validation=ValidationMode.NO_CHECK)
> +                    self.xml.get_initvmnode_from(initxml)
>                  except ValidationError as e:
> -                    self.log.printo( "/var/cache/elbe/source.xml validation failed" )
> -                    self.log.printo( str(e) )
> -                    self.log.printo( "will not copy initvm node" )
> +                    self.log.printo(
> +                        "/var/cache/elbe/source.xml validation failed")
> +                    self.log.printo(str(e))
> +                    self.log.printo("will not copy initvm node")
>                  except IOError:
> -                    self.log.printo( "/var/cache/elbe/source.xml not available" )
> -                    self.log.printo( "can not copy initvm node" )
> +                    self.log.printo("/var/cache/elbe/source.xml not available")
> +                    self.log.printo("can not copy initvm node")
>                  except NoInitvmNode:
> -                    self.log.printo( "/var/cache/elbe/source.xml is available" )
> -                    self.log.printo( "But it does not contain an initvm node" )
> +                    self.log.printo("/var/cache/elbe/source.xml is available")
> +                    self.log.printo("But it does not contain an initvm node")
>              else:
> -                sourcepath = os.path.join( self.builddir, "source.xml" )
> -                source = ElbeXML( sourcepath,
> -                        buildtype=self.override_buildtype,
> -                        skip_validate=self.skip_validate,
> -                        url_validation=self.url_validation )
> +                sourcepath = os.path.join(self.builddir, "source.xml")
> +                source = ElbeXML(sourcepath,
> +                                 buildtype=self.override_buildtype,
> +                                 skip_validate=self.skip_validate,
> +                                 url_validation=self.url_validation)
>  
> -                self.xml.get_debootstrappkgs_from( source )
> +                self.xml.get_debootstrappkgs_from(source)
>                  try:
> -                    self.xml.get_initvmnode_from( source )
> +                    self.xml.get_initvmnode_from(source)
>                  except NoInitvmNode:
> -                    self.log.printo( "source.xml is available" )
> -                    self.log.printo( "But it does not contain an initvm node" )
> +                    self.log.printo("source.xml is available")
> +                    self.log.printo("But it does not contain an initvm node")
>  
>              # Seed /etc, we need /etc/hosts for hostname -f to work correctly
>              if not buildenv:
> @@ -557,7 +641,7 @@ class ElbeProject (object):
>              # build packages can be removed
>              debootstrap_pkgs = []
>              for p in self.xml.node("debootstrappkgs"):
> -                debootstrap_pkgs.append (p.et.text)
> +                debootstrap_pkgs.append(p.et.text)
>  
>              pkgs = self.buildenv.xml.get_target_packages()
>  
> @@ -567,11 +651,11 @@ class ElbeProject (object):
>              # Now install requested packages
>              for p in pkgs:
>                  try:
> -                    self.get_rpcaptcache().mark_install( p, None )
> +                    self.get_rpcaptcache().mark_install(p, None)
>                  except KeyError:
> -                    self.log.printo( "No Package " + p )
> +                    self.log.printo("No Package " + p)
>                  except SystemError:
> -                    self.log.printo( "Unable to correct problems " + p )
> +                    self.log.printo("Unable to correct problems " + p)
>  
>              # temporary disabled because of
>              # https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=776057
> @@ -581,5 +665,5 @@ class ElbeProject (object):
>              try:
>                  self.get_rpcaptcache().commit()
>              except SystemError as e:
> -                self.log.printo( "commiting changes failed: %s" % str(e) )
> -                raise AptCacheCommitError (str(e))
> +                self.log.printo("commiting changes failed: %s" % str(e))
> +                raise AptCacheCommitError(str(e))
> diff --git a/elbepack/elbexml.py b/elbepack/elbexml.py
> index 21677822..ab1a2c92 100644
> --- a/elbepack/elbexml.py
> +++ b/elbepack/elbexml.py
> @@ -16,6 +16,7 @@ import urllib2
>  import os
>  import re
>  
> +
>  class ValidationError(Exception):
>      def __init__(self, validation):
>          Exception.__init__(self)
> @@ -24,45 +25,52 @@ class ValidationError(Exception):
>      def __repr__(self):
>          rep = "Elbe XML Validation Error\n"
>          for v in self.validation:
> -            rep += (v+'\n')
> +            rep += (v + '\n')
>          return rep
>  
>      def __str__(self):
>          retval = ""
>          for v in self.validation:
> -            retval += (v+'\n')
> +            retval += (v + '\n')
>          return retval
>  
> +
>  class NoInitvmNode(Exception):
>      pass
>  
> +
>  class ValidationMode():
>      NO_CHECK = True
>      CHECK_BINARIES = 2
>      CHECK_ALL = False
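
Not part of this mechanical pass, but worth flagging while the hunk is visible: ValidationMode mixes bool and int constants (True, 2, False), which only works because every call site compares against NO_CHECK with !=. A hedged sketch of what distinct integer constants would look like (names kept, values and the class body here are illustrative, not the module's current API):

```python
# Illustrative only -- the real ValidationMode uses True/2/False and
# call sites rely on "!= NO_CHECK". Distinct ints make comparisons
# unambiguous (note that CHECK_BINARIES == True is False today, yet
# both values are truthy).
class ValidationMode(object):
    NO_CHECK = 0
    CHECK_BINARIES = 1
    CHECK_ALL = 2

mode = ValidationMode.CHECK_BINARIES
print(mode != ValidationMode.NO_CHECK)  # validation requested
```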
>  
> +
>  class ElbeXML(object):
> -    def __init__(self, fname, buildtype=None, skip_validate=False, url_validation=ValidationMode.NO_CHECK):
> +    def __init__(
> +            self,
> +            fname,
> +            buildtype=None,
> +            skip_validate=False,
> +            url_validation=ValidationMode.NO_CHECK):
>          if not skip_validate:
> -            validation = validate_xml (fname)
> -            if len (validation) != 0:
> -                raise ValidationError (validation)
> +            validation = validate_xml(fname)
> +            if len(validation) != 0:
> +                raise ValidationError(validation)
>  
> -        self.xml = etree( fname )
> +        self.xml = etree(fname)
>          self.prj = self.xml.node("/project")
>          self.tgt = self.xml.node("/target")
>  
>          if buildtype:
>              pass
> -        elif self.xml.has( "project/buildtype" ):
> -            buildtype = self.xml.text( "/project/buildtype" )
> +        elif self.xml.has("project/buildtype"):
> +            buildtype = self.xml.text("/project/buildtype")
>          else:
>              buildtype = "nodefaults"
>          self.defs = ElbeDefaults(buildtype)
>  
>          if not skip_validate and url_validation != ValidationMode.NO_CHECK:
> -            self.validate_apt_sources (url_validation, buildtype)
> -
> +            self.validate_apt_sources(url_validation, buildtype)
>  
>      def text(self, txt, key=None):
>          if key:
> @@ -76,9 +84,9 @@ class ElbeXML(object):
>      def node(self, path):
>          return self.xml.node(path)
>  
> -    def is_cross (self, host_arch):
> +    def is_cross(self, host_arch):
>  
> -        target = self.text ("project/buildimage/arch", key="arch")
> +        target = self.text("project/buildimage/arch", key="arch")
>  
>          if (host_arch == target):
>              return False
> @@ -91,22 +99,21 @@ class ElbeXML(object):
>  
>          return True
>  
> -    def get_primary_mirror (self, cdrompath):
> +    def get_primary_mirror(self, cdrompath):
>          if self.prj.has("mirror/primary_host"):
>              m = self.prj.node("mirror")
>  
>              mirror = m.text("primary_proto") + "://"
> -            mirror +=m.text("primary_host")  + "/"
> -            mirror +=m.text("primary_path")
> +            mirror += m.text("primary_host") + "/"
> +            mirror += m.text("primary_path")
>  
>          elif self.prj.has("mirror/cdrom") and cdrompath:
>              mirror = "file://%s" % cdrompath
>  
>          return mirror.replace("LOCALMACHINE", "10.0.2.2")
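
Unrelated to the reformatting, but visible in this hunk: if the XML has neither mirror/primary_host nor mirror/cdrom, `mirror` is never assigned and the final return raises UnboundLocalError. A minimal standalone reproduction (the helper signature and placeholder URL are mine, not elbepack code):

```python
# Standalone sketch of the control flow in get_primary_mirror():
# when neither branch assigns "mirror", the return line blows up.
def get_primary_mirror(has_host, has_cdrom, cdrompath=None):
    if has_host:
        mirror = "http://deb.example.org/debian"  # placeholder mirror
    elif has_cdrom and cdrompath:
        mirror = "file://%s" % cdrompath
    return mirror.replace("LOCALMACHINE", "10.0.2.2")

try:
    get_primary_mirror(False, False)
except UnboundLocalError:
    print("mirror referenced before assignment")
```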
>  
> -
>      # XXX: maybe add cdrom path param ?
> -    def create_apt_sources_list (self, build_sources=False):
> +    def create_apt_sources_list(self, build_sources=False):
>          if self.prj is None:
>              return "# No Project"
>  
> @@ -115,11 +122,11 @@ class ElbeXML(object):
>  
>          mirror = ""
>          if self.prj.has("mirror/primary_host"):
> -            mirror += "deb " + self.get_primary_mirror (None)
> +            mirror += "deb " + self.get_primary_mirror(None)
>              mirror += " " + self.prj.text("suite") + " main\n"
>  
>              if build_sources:
> -                mirror += "deb-src " + self.get_primary_mirror (None)
> +                mirror += "deb-src " + self.get_primary_mirror(None)
>                  mirror += " " + self.prj.text("suite") + " main\n"
>  
>              if self.prj.has("mirror/url-list"):
> @@ -127,14 +134,16 @@ class ElbeXML(object):
>                      if url.has("binary"):
>                          mirror += "deb " + url.text("binary").strip() + "\n"
>                      if url.has("source"):
> -                        mirror += "deb-src "+url.text("source").strip()+"\n"
> +                        mirror += "deb-src " + \
> +                            url.text("source").strip() + "\n"
>  
>          if self.prj.has("mirror/cdrom"):
> -            mirror += "deb copy:///cdrom/targetrepo %s main added\n" % (self.prj.text("suite"))
> +            mirror += "deb copy:///cdrom/targetrepo %s main added\n" % (
> +                self.prj.text("suite"))
>  
>          return mirror.replace("LOCALMACHINE", "10.0.2.2")
>  
> -    def validate_repo (self, r, url_validation):
> +    def validate_repo(self, r, url_validation):
>          try:
>              fp = urllib2.urlopen(r["url"] + "InRelease", None, 10)
>          except urllib2.URLError:
> @@ -166,21 +175,21 @@ class ElbeXML(object):
>          fp.close()
>          return ret
>  
> -    def validate_apt_sources (self, url_validation, buildtype):
> -        slist = self.create_apt_sources_list ()
> -        sources_lines = slist.split ('\n')
> +    def validate_apt_sources(self, url_validation, buildtype):
> +        slist = self.create_apt_sources_list()
> +        sources_lines = slist.split('\n')
>  
>          repos = []
>          for l in sources_lines:
> -            l = re.sub(r'\[.*\] ','',l)
> -            if l.startswith ("deb copy:"):
> +            l = re.sub(r'\[.*\] ', '', l)
> +            if l.startswith("deb copy:"):
>                  # This is a cdrom, we dont verify it
>                  pass
> -            elif l.startswith ("deb-src copy:"):
> +            elif l.startswith("deb-src copy:"):
>                  # This is a cdrom, we dont verify it
>                  pass
> -            elif l.startswith ("deb ") or l.startswith ("deb-src "):
> -                lsplit = l.split (" ")
> +            elif l.startswith("deb ") or l.startswith("deb-src "):
> +                lsplit = l.split(" ")
>                  url = lsplit[1]
>                  suite = lsplit[2]
>                  section = lsplit[3]
> @@ -195,7 +204,7 @@ class ElbeXML(object):
>                  if suite.endswith('/'):
>                      r["url"] = "%s/%s" % (url, suite)
>                  else:
> -                    r["url"] = "%s/dists/%s/"  % (url, suite)
> +                    r["url"] = "%s/dists/%s/" % (url, suite)
>                  r["binstr"] = (section + "/binary-%s/Packages" % buildtype)
>                  r["srcstr"] = (section + "/source/Sources")
>                  repos.append(r)
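
For reference, the suite handling in this hunk distinguishes flat repositories (suite ending in '/') from the usual dists/ layout. A standalone sketch mirroring the two branches (repo_url is a hypothetical helper name, the construction matches the code above):

```python
# Mirrors the r["url"] construction in validate_apt_sources():
# a suite ending in '/' denotes a flat repository used as-is,
# otherwise the standard dists/<suite>/ layout applies.
def repo_url(url, suite):
    if suite.endswith('/'):
        return "%s/%s" % (url, suite)
    return "%s/dists/%s/" % (url, suite)

print(repo_url("http://deb.debian.org/debian", "stretch"))
# -> http://deb.debian.org/debian/dists/stretch/
```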
> @@ -203,14 +212,15 @@ class ElbeXML(object):
>          if not self.prj:
>              return
>  
> -        if self.prj.has ("mirror/primary_proxy"):
> +        if self.prj.has("mirror/primary_proxy"):
>              os.environ["no_proxy"] = "10.0.2.2,localhost,127.0.0.1"
> -            proxy = self.prj.text ("mirror/primary_proxy").strip().replace("LOCALMACHINE", "10.0.2.2")
> -            os.environ ["http_proxy"] = proxy
> -            os.environ ["https_proxy"] = proxy
> +            proxy = self.prj.text(
> +                "mirror/primary_proxy").strip().replace("LOCALMACHINE", "10.0.2.2")
> +            os.environ["http_proxy"] = proxy
> +            os.environ["https_proxy"] = proxy
>          else:
> -            os.environ ["http_proxy"] = ""
> -            os.environ ["https_proxy"] = ""
> +            os.environ["http_proxy"] = ""
> +            os.environ["https_proxy"] = ""
>              os.environ["no_proxy"] = ""
>  
>          passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
> @@ -231,7 +241,8 @@ class ElbeXML(object):
>                  usr, passwd = auth.split(':')
>                  passman.add_password(None, r["url"], usr, passwd)
>              if not self.validate_repo(r, url_validation):
> -                raise ValidationError (["Repository %s can not be validated" % r["url"]])
> +                raise ValidationError(
> +                    ["Repository %s can not be validated" % r["url"]])
>  
>      def get_target_packages(self):
>          return [p.et.text for p in self.xml.node("/target/pkg-list")]
> @@ -245,7 +256,7 @@ class ElbeXML(object):
>                  return
>  
>          pak = plist.append('pkg')
> -        pak.set_text( pkg )
> +        pak.set_text(pkg)
>          pak.et.tail = '\n'
>  
>      def set_target_packages(self, pkglist):
> @@ -253,10 +264,9 @@ class ElbeXML(object):
>          plist.clear()
>          for p in pkglist:
>              pak = plist.append('pkg')
> -            pak.set_text( p )
> +            pak.set_text(p)
>              pak.et.tail = '\n'
>  
> -
>      def get_buildenv_packages(self):
>          retval = []
>          if self.prj.has("buildimage/pkg-list"):
> @@ -264,100 +274,100 @@ class ElbeXML(object):
>  
>          return retval
>  
> -    def clear_pkglist( self, name ):
> -        tree = self.xml.ensure_child( name )
> +    def clear_pkglist(self, name):
> +        tree = self.xml.ensure_child(name)
>          tree.clear()
>  
> -    def append_pkg( self, aptpkg, name ):
> -        tree = self.xml.ensure_child( name )
> -        pak = tree.append( 'pkg' )
> -        pak.set_text( aptpkg.name )
> +    def append_pkg(self, aptpkg, name):
> +        tree = self.xml.ensure_child(name)
> +        pak = tree.append('pkg')
> +        pak.set_text(aptpkg.name)
>          pak.et.tail = '\n'
>          if aptpkg.installed_version is not None:
> -            pak.et.set( 'version', aptpkg.installed_version )
> -            pak.et.set( 'md5', aptpkg.installed_md5 )
> +            pak.et.set('version', aptpkg.installed_version)
> +            pak.et.set('md5', aptpkg.installed_md5)
>          else:
> -            pak.et.set( 'version', aptpkg.candidate_version )
> -            pak.et.set( 'md5', aptpkg.candidate_md5 )
> +            pak.et.set('version', aptpkg.candidate_version)
> +            pak.et.set('md5', aptpkg.candidate_md5)
>  
>          if aptpkg.is_auto_installed:
> -            pak.et.set( 'auto', 'true' )
> +            pak.et.set('auto', 'true')
>          else:
> -            pak.et.set( 'auto', 'false' )
> +            pak.et.set('auto', 'false')
>  
> -    def clear_full_pkglist( self ):
> -        tree = self.xml.ensure_child( 'fullpkgs' )
> +    def clear_full_pkglist(self):
> +        tree = self.xml.ensure_child('fullpkgs')
>          tree.clear()
>  
> -    def clear_debootstrap_pkglist( self ):
> -        tree = self.xml.ensure_child( 'debootstrappkgs' )
> +    def clear_debootstrap_pkglist(self):
> +        tree = self.xml.ensure_child('debootstrappkgs')
>          tree.clear()
>  
> -    def clear_initvm_pkglist( self ):
> -        tree = self.xml.ensure_child( 'initvmpkgs' )
> +    def clear_initvm_pkglist(self):
> +        tree = self.xml.ensure_child('initvmpkgs')
>          tree.clear()
>  
> -    def append_full_pkg( self, aptpkg ):
> -        self.append_pkg( aptpkg, 'fullpkgs' )
> +    def append_full_pkg(self, aptpkg):
> +        self.append_pkg(aptpkg, 'fullpkgs')
>  
> -    def append_debootstrap_pkg( self, aptpkg ):
> -        self.append_pkg( aptpkg, 'debootstrappkgs' )
> +    def append_debootstrap_pkg(self, aptpkg):
> +        self.append_pkg(aptpkg, 'debootstrappkgs')
>  
> -    def append_initvm_pkg( self, aptpkg ):
> -        self.append_pkg( aptpkg, 'initvmpkgs' )
> +    def append_initvm_pkg(self, aptpkg):
> +        self.append_pkg(aptpkg, 'initvmpkgs')
>  
> -    def archive_tmpfile( self ):
> +    def archive_tmpfile(self):
>          fp = NamedTemporaryFile()
> -        fp.write( standard_b64decode( self.text("archive") ) )
> +        fp.write(standard_b64decode(self.text("archive")))
>          fp.file.flush()
>          return fp
>  
> -    def get_debootstrappkgs_from( self, other ):
> -        tree = self.xml.ensure_child( 'debootstrappkgs' )
> +    def get_debootstrappkgs_from(self, other):
> +        tree = self.xml.ensure_child('debootstrappkgs')
>          tree.clear()
>  
> -        if not other.has ( 'debootstrappkgs' ):
> +        if not other.has('debootstrappkgs'):
>              return
>  
> -        for e in other.node( 'debootstrappkgs' ):
> -            tree.append_treecopy( e )
> +        for e in other.node('debootstrappkgs'):
> +            tree.append_treecopy(e)
>  
> -    def get_initvmnode_from( self, other ):
> -        ivm = other.node( 'initvm' )
> +    def get_initvmnode_from(self, other):
> +        ivm = other.node('initvm')
>          if ivm is None:
>              raise NoInitvmNode()
>  
> -        tree = self.xml.ensure_child( 'initvm' )
> +        tree = self.xml.ensure_child('initvm')
>          tree.clear()
>  
>          for e in ivm:
> -            tree.append_treecopy( e )
> +            tree.append_treecopy(e)
>  
> -        self.xml.set_child_position( tree, 0 )
> +        self.xml.set_child_position(tree, 0)
>  
> -    def get_initvm_codename (self):
> -        if self.has ("initvm/suite"):
> -            return self.text ("initvm/suite")
> +    def get_initvm_codename(self):
> +        if self.has("initvm/suite"):
> +            return self.text("initvm/suite")
>          else:
>              return None
>  
> -    def set_cdrom_mirror (self, abspath):
> +    def set_cdrom_mirror(self, abspath):
>          mirror = self.node("project/mirror")
>          mirror.clear()
>          cdrom = mirror.ensure_child("cdrom")
> -        cdrom.set_text( abspath )
> +        cdrom.set_text(abspath)
>  
> -    def dump_elbe_version (self):
> +    def dump_elbe_version(self):
>          if is_devel:
>              ver_text = elbe_version + '-devel'
>          else:
>              ver_text = elbe_version
>  
> -        version = self.xml.ensure_child ('elbe_version')
> -        version.set_text (ver_text)
> +        version = self.xml.ensure_child('elbe_version')
> +        version.set_text(ver_text)
>  
> -    def get_elbe_version (self):
> -        if self.has ('elbe_version'):
> -            return self.text ('elbe_version')
> +    def get_elbe_version(self):
> +        if self.has('elbe_version'):
> +            return self.text('elbe_version')
>          else:
>              return "no version"
> diff --git a/elbepack/filesystem.py b/elbepack/filesystem.py
> index 25f537f8..f52f9456 100644
> --- a/elbepack/filesystem.py
> +++ b/elbepack/filesystem.py
> @@ -9,6 +9,7 @@ import shutil
>  from glob import glob
>  from tempfile import mkdtemp
>  
> +
>  class Filesystem(object):
>      def __init__(self, path, clean=False):
>          self.path = os.path.abspath(path)
> @@ -20,29 +21,29 @@ class Filesystem(object):
>      def fname(self, path):
>          if path.startswith('/'):
>              path = path[1:]
> -        return os.path.join( self.path, path )
> +        return os.path.join(self.path, path)
>  
>      def open(self, path, mode="r"):
> -        return open( self.fname(path), mode )
> +        return open(self.fname(path), mode)
>  
>      def isdir(self, path):
> -        return os.path.isdir( self.fname(path) )
> +        return os.path.isdir(self.fname(path))
>  
>      def islink(self, path):
> -        return os.path.islink( self.fname(path) )
> +        return os.path.islink(self.fname(path))
>  
>      def isfile(self, path):
> -        return os.path.isfile( self.fname(path) )
> +        return os.path.isfile(self.fname(path))
>  
>      def exists(self, path):
> -        return os.path.exists( self.fname(path) )
> +        return os.path.exists(self.fname(path))
>  
>      def mkdir(self, path):
> -        os.makedirs( self.fname(path) )
> +        os.makedirs(self.fname(path))
>  
>      def symlink(self, src, path, allow_exists=False):
>          try:
> -            os.symlink( src, self.fname(path) )
> +            os.symlink(src, self.fname(path))
>          except OSError as e:
>              if e.errno != os.errno.EEXIST:
>                  raise
> @@ -50,18 +51,18 @@ class Filesystem(object):
>                  raise
>  
>      def stat(self, path):
> -        return os.stat( self.fname(path) )
> +        return os.stat(self.fname(path))
>  
>      def chown(self, path, uid, gid):
> -        os.chown( self.fname(path), uid, gid )
> +        os.chown(self.fname(path), uid, gid)
>  
>      def chmod(self, path, mode):
> -        os.chmod( self.fname(path), mode )
> +        os.chmod(self.fname(path), mode)
>  
>      def utime(self, path, times=None):
> -        os.utime( self.fname(path), times )
> +        os.utime(self.fname(path), times)
>  
> -    def cat_file(self,inf):
> +    def cat_file(self, inf):
>          content = []
>          try:
>              f = self.open(inf)
> @@ -73,77 +74,86 @@ class Filesystem(object):
>  
>      def remove(self, path, noerr=False):
>          try:
> -            return os.remove( self.fname(path) )
> -        except:
> +            return os.remove(self.fname(path))
> +        except BaseException:
>              if not noerr:
>                  raise
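
autopep8 rewrites the bare `except:` to `except BaseException:`, which silences E722 but still swallows KeyboardInterrupt and SystemExit. If this method gets touched again, catching the error os.remove can actually raise would be tighter; a sketch (standalone function, not a drop-in for the Filesystem method):

```python
import os

# Sketch: OSError is what os.remove raises for missing files and
# permission problems; Ctrl-C and SystemExit propagate as they should.
def remove(path, noerr=False):
    try:
        return os.remove(path)
    except OSError:
        if not noerr:
            raise

remove("/no/such/file", noerr=True)  # silently ignored
```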
>  
>      def rmtree(self, path):
> -        shutil.rmtree( self.fname(path) )
> +        shutil.rmtree(self.fname(path))
>  
>      def listdir(self, path='', ignore=[], skiplinks=False):
> -        retval = [os.path.join(self.path, path, x) for x in os.listdir( self.fname(path) ) if not x in ignore]
> +        retval = [
> +            os.path.join(
> +                self.path,
> +                path,
> +                x) for x in os.listdir(
> +                self.fname(path)) if x not in ignore]
>          if skiplinks:
> -            retval = [x for x in retval if (not os.path.islink(x)) and os.path.isdir(x)]
> +            retval = [
> +                x for x in retval if (
> +                    not os.path.islink(x)) and os.path.isdir(x)]
>  
>          return retval
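
The doubled -a makes autopep8 split the listdir() comprehension one argument per line, which is arguably harder to read than the long line it replaces. The same behavior can be laid out by clause instead; a standalone sketch (the base/path arguments stand in for self.path/self.fname):

```python
import os

# Same filtering as Filesystem.listdir(), formatted one clause per
# line; ignore defaults to an immutable tuple instead of a mutable [].
def listdir(base, path='', ignore=(), skiplinks=False):
    full = os.path.join(base, path)
    retval = [os.path.join(base, path, x)
              for x in os.listdir(full)
              if x not in ignore]
    if skiplinks:
        retval = [x for x in retval
                  if not os.path.islink(x) and os.path.isdir(x)]
    return retval
```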
>  
> -    def glob( self, path ):
> -        flist = glob( self.fname( path ) )
> +    def glob(self, path):
> +        flist = glob(self.fname(path))
>          for i in flist:
> -            if not i.startswith (self.path):
> +            if not i.startswith(self.path):
>                  raise IOError("Broken glob '%s'" % path)
>  
>          return flist
>  
> -    def write_file( self, path, mode, cont ):
> -        f = self.open( path, "w" )
> +    def write_file(self, path, mode, cont):
> +        f = self.open(path, "w")
>          f.write(cont)
>          f.close()
> -        self.chmod( path, mode )
> +        self.chmod(path, mode)
>  
> -    def read_file( self, path ):
> -        fp = self.open( path, "r" )
> +    def read_file(self, path):
> +        fp = self.open(path, "r")
>          retval = fp.read()
>          fp.close()
>          return retval
>  
> -    def mkdir_p (self, newdir, mode=0o755):
> +    def mkdir_p(self, newdir, mode=0o755):
>          """works the way a good mkdir -p would...
>                  - already exists, silently complete
>                  - regular file in the way, raise an exception
>                  - parent directory(ies) does not exist, make them as well
>          """
> -        if self.isdir (newdir):
> +        if self.isdir(newdir):
>              pass
> -        elif self.isfile (newdir):
> -            raise OSError ("a file with the same name as the desired " \
> -                    "dir, '%s', already exists. in RFS %s" % (newdir, self.path))
> +        elif self.isfile(newdir):
> +            raise OSError(
> +                "a file with the same name as the desired "
> +                "dir, '%s', already exists. in RFS %s" %
> +                (newdir, self.path))
>          else:
> -            self.mkdir (newdir)
> -            self.chmod (newdir,mode)
> +            self.mkdir(newdir)
> +            self.chmod(newdir, mode)
>  
> -    def touch_file (self,fname):
> -        if self.exists (fname):
> +    def touch_file(self, fname):
> +        if self.exists(fname):
>              self.utime(fname)
>          else:
> -            fp = self.open(fname,"w")
> -            fp.close ()
> +            fp = self.open(fname, "w")
> +            fp.close()
>  
>      def walk_files(self, directory='', exclude_dirs=[]):
>          dirname = self.fname(directory)
> -        if dirname=="/":
> +        if dirname == "/":
>              striplen = 0
>          else:
>              striplen = len(dirname)
>          for dirpath, dirnames, filenames in os.walk(dirname):
>              subpath = dirpath[striplen:]
>              if len(subpath) == 0:
> -                subpath="/"
> +                subpath = "/"
>  
>              deldirs = []
>              for d in dirnames:
> -                dpath = os.path.join( subpath, d )
> +                dpath = os.path.join(subpath, d)
>                  if dpath in exclude_dirs:
>                      deldirs.append(d)
>  
> @@ -151,14 +161,14 @@ class Filesystem(object):
>                  dirnames.remove(d)
>  
>              for f in filenames:
> -                fpath = os.path.join( subpath, f )
> -                realpath = os.path.join( dirpath, f )
> +                fpath = os.path.join(subpath, f)
> +                realpath = os.path.join(dirpath, f)
>                  yield "/" + fpath, realpath
>  
>      def mtime_snap(self, dirname='', exclude_dirs=[]):
>          mtime_index = {}
>  
> -        for fpath, realpath in self.walk_files(dirname,exclude_dirs):
> +        for fpath, realpath in self.walk_files(dirname, exclude_dirs):
>              stat = os.lstat(realpath)
>              mtime_index[fpath] = stat.st_mtime
>  
> @@ -181,15 +191,14 @@ class Filesystem(object):
>          return self.__disk_usage(directory)
>  
>  
> -
> -
>  class TmpdirFilesystem (Filesystem):
> -    def __init__ (self):
> +    def __init__(self):
>          tmpdir = mkdtemp()
> -        Filesystem.__init__ (self, tmpdir)
> +        Filesystem.__init__(self, tmpdir)
>  
> -    def __del__ (self):
> +    def __del__(self):
>          shutil.rmtree(self.path, True)
>  
> -hostfs = Filesystem ('/')
> -wdfs   = Filesystem (os.getcwd())
> +
> +hostfs = Filesystem('/')
> +wdfs = Filesystem(os.getcwd())
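
Side note while this file is open: TmpdirFilesystem cleans up in __del__, which CPython does not guarantee to run. If the class is ever revisited, a context-manager variant gives deterministic cleanup; a hypothetical sketch (not what this patch changes):

```python
import shutil
import tempfile

# Hypothetical variant: __exit__ runs deterministically at the end
# of the with-block, unlike __del__.
class TmpdirFilesystem(object):
    def __init__(self):
        self.path = tempfile.mkdtemp()

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        shutil.rmtree(self.path, True)

with TmpdirFilesystem() as fs:
    print(fs.path)  # temporary directory, removed on exit
```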
> diff --git a/elbepack/finetuning.py b/elbepack/finetuning.py
> index e03fdd11..4e02e1a7 100644
> --- a/elbepack/finetuning.py
> +++ b/elbepack/finetuning.py
> @@ -16,6 +16,7 @@ from elbepack.repomanager import UpdateRepo
>  from elbepack.rpcaptcache import get_rpcaptcache
>  from elbepack.shellhelper import CommandError
>  
> +
>  class FinetuningAction(object):
>  
>      actiondict = {}
> @@ -40,20 +41,21 @@ class RmAction(FinetuningAction):
>          FinetuningAction.__init__(self, node)
>  
>      def execute(self, log, buildenv, target):
> -        files = target.glob( self.node.et.text )
> +        files = target.glob(self.node.et.text)
>  
>          if 'exclude' in self.node.et.attrib:
> -            exclude = self.node.et.attrib['exclude'].split (' ')
> +            exclude = self.node.et.attrib['exclude'].split(' ')
>          else:
>              exclude = []
>  
>          for f in files:
> -            if os.path.basename (f) in exclude:
> +            if os.path.basename(f) in exclude:
>                  continue
>  
> -            log.do( "rm -rvf '%s'" % f )
> +            log.do("rm -rvf '%s'" % f)
> +
>  
> -FinetuningAction.register( RmAction )
> +FinetuningAction.register(RmAction)
>  
>  
>  class MkdirAction(FinetuningAction):
> @@ -64,9 +66,11 @@ class MkdirAction(FinetuningAction):
>          FinetuningAction.__init__(self, node)
>  
>      def execute(self, log, buildenv, target):
> -        log.do( "mkdir -p " + target.fname( self.node.et.text ) )
> +        log.do("mkdir -p " + target.fname(self.node.et.text))
> +
> +
> +FinetuningAction.register(MkdirAction)
>  
> -FinetuningAction.register( MkdirAction )
>  
>  class MknodAction(FinetuningAction):
>  
> @@ -76,9 +80,16 @@ class MknodAction(FinetuningAction):
>          FinetuningAction.__init__(self, node)
>  
>      def execute(self, log, buildenv, target):
> -        log.do( "mknod " + target.fname( self.node.et.text ) + " " + self.node.et.attrib['opts'] )
> +        log.do(
> +            "mknod " +
> +            target.fname(
> +                self.node.et.text) +
> +            " " +
> +            self.node.et.attrib['opts'])
> +
> +
> +FinetuningAction.register(MknodAction)
>  
> -FinetuningAction.register( MknodAction )
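
The aggressive pass turns the mknod command in MknodAction.execute() into five concatenated fragments. A single format string is behavior-equivalent and easier to scan; sketch (mknod_cmd and the example device are placeholders, not elbepack names):

```python
# Builds the same command string as MknodAction.execute(), via one
# format string instead of "+"-concatenation of five pieces.
def mknod_cmd(fname, opts):
    return "mknod %s %s" % (fname, opts)

print(mknod_cmd("/target/dev/console", "c 5 1"))
# -> mknod /target/dev/console c 5 1
```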
>  
>  class BuildenvMkdirAction(FinetuningAction):
>  
> @@ -88,9 +99,10 @@ class BuildenvMkdirAction(FinetuningAction):
>          FinetuningAction.__init__(self, node)
>  
>      def execute(self, log, buildenv, target):
> -        log.do( "mkdir -p " + buildenv.rfs.fname( self.node.et.text ) )
> +        log.do("mkdir -p " + buildenv.rfs.fname(self.node.et.text))
> +
>  
> -FinetuningAction.register( BuildenvMkdirAction )
> +FinetuningAction.register(BuildenvMkdirAction)
>  
>  
>  class CpAction(FinetuningAction):
> @@ -105,7 +117,9 @@ class CpAction(FinetuningAction):
>          for f in src:
>              log.do("cp -av " + f + " " + target.fname(self.node.et.text))
>  
> -FinetuningAction.register( CpAction )
> +
> +FinetuningAction.register(CpAction)
> +
>  
>  class BuildenvCpAction(FinetuningAction):
>  
> @@ -119,7 +133,9 @@ class BuildenvCpAction(FinetuningAction):
>          for f in src:
>              log.do("cp -av " + f + " " + buildenv.rfs.fname(self.node.et.text))
>  
> -FinetuningAction.register( BuildenvCpAction )
> +
> +FinetuningAction.register(BuildenvCpAction)
> +
>  
>  class B2TCpAction(FinetuningAction):
>  
> @@ -133,7 +149,9 @@ class B2TCpAction(FinetuningAction):
>          for f in src:
>              log.do("cp -av " + f + " " + target.fname(self.node.et.text))
>  
> -FinetuningAction.register( B2TCpAction )
> +
> +FinetuningAction.register(B2TCpAction)
> +
>  
>  class T2BCpAction(FinetuningAction):
>  
> @@ -147,7 +165,9 @@ class T2BCpAction(FinetuningAction):
>          for f in src:
>              log.do("cp -av " + f + " " + buildenv.rfs.fname(self.node.et.text))
>  
> -FinetuningAction.register( T2BCpAction )
> +
> +FinetuningAction.register(T2BCpAction)
> +
>  
>  class T2PMvAction(FinetuningAction):
>  
> @@ -161,13 +181,15 @@ class T2PMvAction(FinetuningAction):
>              dest = self.node.et.text[1:]
>          else:
>              dest = self.node.et.text
> -        dest = os.path.join ('..', dest)
> +        dest = os.path.join('..', dest)
>  
>          src = target.glob(self.node.et.attrib['path'])
>          for f in src:
>              log.do("mv -v " + f + " " + dest)
>  
> -FinetuningAction.register( T2PMvAction )
> +
> +FinetuningAction.register(T2PMvAction)
> +
>  
>  class MvAction(FinetuningAction):
>  
> @@ -181,7 +203,9 @@ class MvAction(FinetuningAction):
>          for f in src:
>              log.do("mv -v " + f + " " + target.fname(self.node.et.text))
>  
> -FinetuningAction.register( MvAction )
> +
> +FinetuningAction.register(MvAction)
> +
>  
>  class LnAction(FinetuningAction):
>  
> @@ -192,10 +216,12 @@ class LnAction(FinetuningAction):
>  
>      def execute(self, log, buildenv, target):
>          with target:
> -            log.chroot (target.path, """/bin/sh -c 'ln -s %s "%s"' """ % (self.node.et.attrib['path'],
> -                                                      self.node.et.text))
> +            log.chroot(
> +                target.path, """/bin/sh -c 'ln -s %s "%s"' """ %
> +                (self.node.et.attrib['path'], self.node.et.text))
> +
>  
> -FinetuningAction.register( LnAction )
> +FinetuningAction.register(LnAction)
>  
>  
>  class BuildenvMvAction(FinetuningAction):
> @@ -210,7 +236,8 @@ class BuildenvMvAction(FinetuningAction):
>          for f in src:
>              log.do("mv -v " + f + " " + buildenv.rfs.fname(self.node.et.text))
>  
> -FinetuningAction.register( BuildenvMvAction )
> +
> +FinetuningAction.register(BuildenvMvAction)
>  
>  
>  class AddUserAction(FinetuningAction):
> @@ -221,23 +248,27 @@ class AddUserAction(FinetuningAction):
>          FinetuningAction.__init__(self, node)
>  
>      def execute(self, log, buildenv, target):
> -      with target:
> -        if 'groups' in self.node.et.attrib:
> -          log.chroot (target.path, '/usr/sbin/useradd -U -m -G "%s" -s "%s" "%s"' % (
> -                self.node.et.attrib['groups'],
> -                self.node.et.attrib['shell'],
> -                self.node.et.text))
> -        else:
> -          log.chroot (target.path, '/usr/sbin/useradd -U -m -s "%s" "%s"' % (
> -                self.node.et.attrib['shell'], self.node.et.text))
> -
> -        log.chroot( target.path,
> -             """/bin/sh -c 'echo "%s\\n%s\\n" | passwd %s'""" % (
> +        with target:
> +            if 'groups' in self.node.et.attrib:
> +                log.chroot(
> +                    target.path,
> +                    '/usr/sbin/useradd -U -m -G "%s" -s "%s" "%s"' %
> +                    (self.node.et.attrib['groups'],
> +                     self.node.et.attrib['shell'],
> +                     self.node.et.text))
> +            else:
> +                log.chroot(
> +                    target.path, '/usr/sbin/useradd -U -m -s "%s" "%s"' %
> +                    (self.node.et.attrib['shell'], self.node.et.text))
> +
> +            log.chroot(target.path,
> +                       """/bin/sh -c 'echo "%s\\n%s\\n" | passwd %s'""" % (
>                             self.node.et.attrib['passwd'],
>                             self.node.et.attrib['passwd'],
>                             self.node.et.text))
>  
> -FinetuningAction.register( AddUserAction )
> +
> +FinetuningAction.register(AddUserAction)
>  
>  
>  class AddGroupAction(FinetuningAction):
> @@ -248,11 +279,13 @@ class AddGroupAction(FinetuningAction):
>          FinetuningAction.__init__(self, node)
>  
>      def execute(self, log, buildenv, target):
> -      with target:
> -        log.chroot (target.path, "/usr/sbin/groupadd -f %s" % (
> -            self.node.et.text))
> +        with target:
> +            log.chroot(target.path, "/usr/sbin/groupadd -f %s" % (
> +                self.node.et.text))
> +
> +
> +FinetuningAction.register(AddGroupAction)
>  
> -FinetuningAction.register( AddGroupAction )
>  
>  class RawCmdAction(FinetuningAction):
>  
> @@ -263,9 +296,11 @@ class RawCmdAction(FinetuningAction):
>  
>      def execute(self, log, buildenv, target):
>          with target:
> -            log.chroot (target.path, self.node.et.text)
> +            log.chroot(target.path, self.node.et.text)
> +
> +
> +FinetuningAction.register(RawCmdAction)
>  
> -FinetuningAction.register( RawCmdAction )
>  
>  class CmdAction(FinetuningAction):
>  
> @@ -276,9 +311,11 @@ class CmdAction(FinetuningAction):
>  
>      def execute(self, log, buildenv, target):
>          with target:
> -            log.chroot (target.path, "/bin/sh", input=self.node.et.text)
> +            log.chroot(target.path, "/bin/sh", input=self.node.et.text)
> +
> +
> +FinetuningAction.register(CmdAction)
>  
> -FinetuningAction.register( CmdAction )
>  
>  class BuildenvCmdAction(FinetuningAction):
>  
> @@ -289,9 +326,11 @@ class BuildenvCmdAction(FinetuningAction):
>  
>      def execute(self, log, buildenv, target):
>          with buildenv:
> -            log.chroot (buildenv.path, "/bin/sh", input=self.node.et.text)
> +            log.chroot(buildenv.path, "/bin/sh", input=self.node.et.text)
> +
> +
> +FinetuningAction.register(BuildenvCmdAction)
>  
> -FinetuningAction.register( BuildenvCmdAction )
>  
>  class PurgeAction(FinetuningAction):
>  
> @@ -302,9 +341,11 @@ class PurgeAction(FinetuningAction):
>  
>      def execute(self, log, buildenv, target):
>          with target:
> -            log.chroot (target.path, "dpkg --purge " + self.node.et.text)
> +            log.chroot(target.path, "dpkg --purge " + self.node.et.text)
> +
> +
> +FinetuningAction.register(PurgeAction)
>  
> -FinetuningAction.register( PurgeAction )
>  
>  class UpdatedAction(FinetuningAction):
>  
> @@ -317,63 +358,79 @@ class UpdatedAction(FinetuningAction):
>  
>          if self.node.et.text:
>              fp = self.node.et.text
> -            log.printo ("transfer gpg key to target: " + fp)
> +            log.printo("transfer gpg key to target: " + fp)
>  
> -            os.environ ['GNUPGHOME'] = "/var/cache/elbe/gnupg"
> -            key = BytesIO ()
> -            ctx = gpgme.Context ()
> +            os.environ['GNUPGHOME'] = "/var/cache/elbe/gnupg"
> +            key = BytesIO()
> +            ctx = gpgme.Context()
>              ctx.armor = True
> -            ctx.export (fp, key)
> +            ctx.export(fp, key)
>  
> -            log.printo (str (key.getvalue ()))
> -            with open ((target.path + '/pub.key'), 'wb') as tkey:
> -                tkey.write (key.getvalue ())
> +            log.printo(str(key.getvalue()))
> +            with open((target.path + '/pub.key'), 'wb') as tkey:
> +                tkey.write(key.getvalue())
>  
> -            target.mkdir_p ("/var/cache/elbe/gnupg", mode=0o700)
> +            target.mkdir_p("/var/cache/elbe/gnupg", mode=0o700)
>              with target:
> -                os.environ ['GNUPGHOME'] = target.path + "/var/cache/elbe/gnupg"
> -                log.do ("gpg --import " + target.path + "/pub.key")
> +                os.environ['GNUPGHOME'] = target.path + "/var/cache/elbe/gnupg"
> +                log.do("gpg --import " + target.path + "/pub.key")
>  
> -        log.printo( "generate base repo")
> -        arch = target.xml.text ("project/arch", key="arch")
> +        log.printo("generate base repo")
> +        arch = target.xml.text("project/arch", key="arch")
>  
> -        buildenv.rfs.mkdir_p ('/tmp/pkgs')
> +        buildenv.rfs.mkdir_p('/tmp/pkgs')
>          with buildenv:
> -            cache = get_rpcaptcache (buildenv.rfs, "updated-repo.log", arch)
> +            cache = get_rpcaptcache(buildenv.rfs, "updated-repo.log", arch)
>  
>              pkglist = cache.get_installed_pkgs()
>              for pkg in pkglist:
>                  try:
> -                    cache.download_binary (pkg.name, '/tmp/pkgs', pkg.installed_version)
> +                    cache.download_binary(
> +                        pkg.name, '/tmp/pkgs', pkg.installed_version)
>                  except ValueError as ve:
> -                    log.printo( "No Package " + pkg.name + "-" + pkg.installed_version )
> +                    log.printo(
> +                        "No Package " +
> +                        pkg.name +
> +                        "-" +
> +                        pkg.installed_version)
>                  except FetchError as fe:
> -                    log.printo( "Package " + pkg.name + "-" + pkg.installed_version + " could not be downloaded" )
> +                    log.printo(
> +                        "Package " +
> +                        pkg.name +
> +                        "-" +
> +                        pkg.installed_version +
> +                        " could not be downloaded")
>                  except TypeError as te:
> -                    log.printo( "Package " + pkg.name + "-" + pkg.installed_version + " missing name or version" )
> +                    log.printo(
> +                        "Package " +
> +                        pkg.name +
> +                        "-" +
> +                        pkg.installed_version +
> +                        " missing name or version")
>  
> -        r = UpdateRepo (target.xml,
> -              target.path + '/var/cache/elbe/repos/base',
> -              log)
> +        r = UpdateRepo(target.xml,
> +                       target.path + '/var/cache/elbe/repos/base',
> +                       log)
>  
> -        for d in buildenv.rfs.glob ('tmp/pkgs/*.deb'):
> -            r.includedeb (d, 'main')
> -        r.finalize ()
> +        for d in buildenv.rfs.glob('tmp/pkgs/*.deb'):
> +            r.includedeb(d, 'main')
> +        r.finalize()
>  
>          slist = target.path + '/etc/apt/sources.list.d/base.list'
>          slist_txt = 'deb [trusted=yes] file:///var/cache/elbe/repos/base '
> -        slist_txt += target.xml.text ("/project/suite")
> +        slist_txt += target.xml.text("/project/suite")
>          slist_txt += " main"
>  
> -        with open (slist, 'w') as apt_source:
> -            apt_source.write (slist_txt)
> +        with open(slist, 'w') as apt_source:
> +            apt_source.write(slist_txt)
>  
> -        rmtree (buildenv.rfs.path + '/tmp/pkgs')
> +        rmtree(buildenv.rfs.path + '/tmp/pkgs')
>  
>          # allow downgrades by default
> -        target.touch_file ('/var/cache/elbe/.downgrade_allowed')
> +        target.touch_file('/var/cache/elbe/.downgrade_allowed')
> +
>  
> -FinetuningAction.register( UpdatedAction )
> +FinetuningAction.register(UpdatedAction)
>  
>  
>  def do_finetuning(xml, log, buildenv, target):
> @@ -383,9 +440,9 @@ def do_finetuning(xml, log, buildenv, target):
>  
>      for i in xml.node('target/finetuning'):
>          try:
> -            action = FinetuningAction( i )
> +            action = FinetuningAction(i)
>              action.execute(log, buildenv, target)
>          except KeyError:
>              print("Unimplemented finetuning action '%s'" % (i.et.tag))
>          except CommandError:
> -            log.printo( "Finetuning Error, trying to continue anyways" )
> +            log.printo("Finetuning Error, trying to continue anyways")
> diff --git a/elbepack/fstab.py b/elbepack/fstab.py
> index c6aaa320..6c639b17 100644
> --- a/elbepack/fstab.py
> +++ b/elbepack/fstab.py
> @@ -5,39 +5,42 @@
>  
>  import os
>  
> +
>  def get_mtdnum(xml, label):
> -  tgt = xml.node ("target")
> -  if not tgt.has("images"):
> -    raise Exception( "No images tag in target" )
> +    tgt = xml.node("target")
> +    if not tgt.has("images"):
> +        raise Exception("No images tag in target")
> +
> +    for i in tgt.node("images"):
> +        if i.tag != "mtd":
> +            continue
>  
> -  for i in tgt.node("images"):
> -    if i.tag != "mtd":
> -      continue
> +        if not i.has("ubivg"):
> +            continue
>  
> -    if not i.has("ubivg"):
> -      continue
> +        for v in i.node("ubivg"):
> +            if v.tag != "ubi":
> +                continue
>  
> -    for v in i.node("ubivg"):
> -      if v.tag != "ubi":
> -        continue
> +            if v.text("label") == label:
> +                return i.text("nr")
>  
> -      if v.text("label") == label:
> -        return i.text("nr")
> +    raise Exception("No ubi volume with label " + label + " found")
>  
> -  raise Exception( "No ubi volume with label " + label + " found" )
>  
> +def get_devicelabel(xml, node):
> +    if node.text("fs/type") == "ubifs":
> +        return "ubi" + get_mtdnum(xml,
> +                                  node.text("label")) + ":" + node.text("label")
> +    else:
> +        return "LABEL=" + node.text("label")
>  
> -def get_devicelabel( xml, node ):
> -  if node.text("fs/type") == "ubifs":
> -    return "ubi" + get_mtdnum(xml, node.text("label")) + ":" + node.text("label")
> -  else:
> -    return "LABEL=" + node.text("label")
>  
>  class mountpoint_dict (dict):
>      def __init__(self):
>          self.id_count = 0
>  
> -    def register (self, fstabentry):
> +    def register(self, fstabentry):
>          mp = fstabentry.mountpoint
>  
>          if mp in self:
> @@ -53,18 +56,16 @@ class mountpoint_dict (dict):
>  
>          while True:
>              mp, t = os.path.split(mp)
> -            if t=='':
> +            if t == '':
>                  return depth
>              depth += 1
>  
> -    def depthlist (self):
> -        mplist = sorted (self.keys(), key=mountpoint_dict.mountdepth)
> +    def depthlist(self):
> +        mplist = sorted(self.keys(), key=mountpoint_dict.mountdepth)
>  
>          return [self[x] for x in mplist]
>  
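The sort in depthlist() is what guarantees parent mountpoints come before their children. A standalone copy of the depth metric (hypothetical helper, same os.path.split loop as mountdepth above) shows the ordering:

```python
import os


def mountdepth(mp):
    # same loop as mountpoint_dict.mountdepth:
    # "/" -> 0, "/boot" -> 1, "/usr/share" -> 2
    depth = 0
    while True:
        mp, t = os.path.split(mp)
        if t == '':
            return depth
        depth += 1


mps = ["/usr/share", "/", "/boot"]
assert sorted(mps, key=mountdepth) == ["/", "/boot", "/usr/share"]
```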
>  
> -
> -
>  class fstabentry(object):
>      def __init__(self, xml, entry, id=0):
>          if entry.has("source"):
> @@ -88,7 +89,8 @@ class fstabentry(object):
>          self.id = str(id)
>  
>      def get_str(self):
> -        return "%s %s %s %s 0 %s\n" % (self.source, self.mountpoint, self.fstype, self.options, self.passno)
> +        return "%s %s %s %s 0 %s\n" % (
> +            self.source, self.mountpoint, self.fstype, self.options, self.passno)
>  
>      def mountdepth(self):
>          h = self.mountpoint
> @@ -96,7 +98,7 @@ class fstabentry(object):
>  
>          while True:
>              h, t = os.path.split(h)
> -            if t=='':
> +            if t == '':
>                  return depth
>              depth += 1
>  
> @@ -116,5 +118,6 @@ class fstabentry(object):
>  
>          return ""
>  
> -    def losetup( self, outf, loopdev ):
> -        outf.do( 'losetup -o%d --sizelimit %d /dev/%s "%s"' % (self.offset, self.size, loopdev, self.filename) )
> +    def losetup(self, outf, loopdev):
> +        outf.do('losetup -o%d --sizelimit %d /dev/%s "%s"' %
> +                (self.offset, self.size, loopdev, self.filename))
> diff --git a/elbepack/gpg.py b/elbepack/gpg.py
> index ab3cd6dd..2bdcd8e7 100644
> --- a/elbepack/gpg.py
> +++ b/elbepack/gpg.py
> @@ -22,6 +22,7 @@ elbe_internal_key_param = """
>  </GnupgKeyParms>
>  """
>  
> +
>  class OverallStatus:
>  
>      def __init__(self):
> @@ -50,6 +51,7 @@ class OverallStatus:
>  
>          return 0
>  
> +
>  def check_signature(ctx, sig):
>      status = OverallStatus()
>  
> @@ -101,30 +103,30 @@ def check_signature(ctx, sig):
>  
>  def unsign_file(fname):
>      # check for .gpg extension and create an output filename without it
> -    if len(fname) <= 4 or fname[len(fname)-4:] != '.gpg':
> +    if len(fname) <= 4 or fname[len(fname) - 4:] != '.gpg':
>          print("The input file needs a .gpg extension")
>          return None
>  
> -    outfilename = fname[:len(fname)-4]
> +    outfilename = fname[:len(fname) - 4]
>  
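Minor note, unrelated to the reformatting: the manual [:len(fname) - 4] slicing is equivalent to os.path.splitext, which would also get rid of the magic number:

```python
import os.path

# "report.txt.gpg" is just an example file name
base, ext = os.path.splitext("report.txt.gpg")
assert ext == ".gpg"
assert base == "report.txt"
```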
> -    os.environ ['GNUPGHOME'] = "/var/cache/elbe/gnupg"
> +    os.environ['GNUPGHOME'] = "/var/cache/elbe/gnupg"
>      ctx = gpgme.Context()
>      ctx.armor = False
>  
>      try:
>          overall_status = OverallStatus()
>  
> -        with open (fname, 'r') as infile:
> -            with open (outfilename, 'w') as outfile:
> +        with open(fname, 'r') as infile:
> +            with open(outfilename, 'w') as outfile:
>  
>                  # obtain signature and write unsigned file
>                  sigs = ctx.verify(infile, None, outfile)
>  
>                  for sig in sigs:
> -                    status = check_signature (ctx, sig)
> -                    overall_status.add (status)
> +                    status = check_signature(ctx, sig)
> +                    overall_status.add(status)
>  
> -        if overall_status.to_exitcode ():
> +        if overall_status.to_exitcode():
>              return None
>  
>          return outfilename
> @@ -136,9 +138,10 @@ def unsign_file(fname):
>  
>      return None
>  
> -def sign (infile, outfile, fingerprint):
>  
> -    os.environ ['GNUPGHOME'] = "/var/cache/elbe/gnupg"
> +def sign(infile, outfile, fingerprint):
> +
> +    os.environ['GNUPGHOME'] = "/var/cache/elbe/gnupg"
>      ctx = gpgme.Context()
>      key = None
>  
> @@ -164,32 +167,34 @@ def sign_file(fname, fingerprint):
>      try:
>          with open(fname, 'r') as infile:
>              with open(outfilename, 'w') as outfile:
> -                sign (infile, outfile, fingerprint)
> +                sign(infile, outfile, fingerprint)
>      except Exception as ex:
>          print("Error signing file %s" % ex.message)
>          pass
>  
>  
> -def get_fingerprints ():
> -    os.environ ['GNUPGHOME'] = "/var/cache/elbe/gnupg"
> -    ctx = gpgme.Context ()
> -    keys = ctx.keylist ()
> +def get_fingerprints():
> +    os.environ['GNUPGHOME'] = "/var/cache/elbe/gnupg"
> +    ctx = gpgme.Context()
> +    keys = ctx.keylist()
>      fingerprints = []
>      for k in keys:
> -        fingerprints.append (k.subkeys[0].fpr)
> +        fingerprints.append(k.subkeys[0].fpr)
>      return fingerprints
>  
> -def generate_elbe_internal_key ():
> +
> +def generate_elbe_internal_key():
>      hostfs.mkdir_p("/var/cache/elbe/gnupg")
> -    os.environ ['GNUPGHOME'] = "/var/cache/elbe/gnupg"
> -    ctx = gpgme.Context ()
> +    os.environ['GNUPGHOME'] = "/var/cache/elbe/gnupg"
> +    ctx = gpgme.Context()
>      key = ctx.genkey(elbe_internal_key_param)
>  
>      return key.fpr
>  
> -def export_key (fingerprint, outfile):
> -    os.environ ['GNUPGHOME'] = "/var/cache/elbe/gnupg"
> -    ctx = gpgme.Context ()
> +
> +def export_key(fingerprint, outfile):
> +    os.environ['GNUPGHOME'] = "/var/cache/elbe/gnupg"
> +    ctx = gpgme.Context()
>      ctx.armor = True
>  
>      try:
> diff --git a/elbepack/hdimg.py b/elbepack/hdimg.py
> index 557479aa..963c6599 100644
> --- a/elbepack/hdimg.py
> +++ b/elbepack/hdimg.py
> @@ -14,9 +14,10 @@ import _ped
>  from elbepack.fstab import fstabentry, mountpoint_dict
>  from elbepack.asciidoclog import CommandError
>  
> -def mkfs_mtd( outf, mtd, fslabel, rfs, target ):
>  
> -    #generated files
> +def mkfs_mtd(outf, mtd, fslabel, rfs, target):
> +
> +    # generated files
>      img_files = []
>  
>      if not mtd.has("ubivg"):
> @@ -38,22 +39,23 @@ def mkfs_mtd( outf, mtd, fslabel, rfs, target ):
>              continue
>  
>          try:
> -            outf.do( "mkfs.ubifs -r %s -o %s.ubifs -m %s -e %s -c %s %s" % (
> -                os.path.join(target,"filesystems",fslabel[label].id),
> -                os.path.join(target,label),
> +            outf.do("mkfs.ubifs -r %s -o %s.ubifs -m %s -e %s -c %s %s" % (
> +                os.path.join(target, "filesystems", fslabel[label].id),
> +                os.path.join(target, label),
>                  ubivg.text("miniosize"),
>                  ubivg.text("logicaleraseblocksize"),
>                  ubivg.text("maxlogicaleraseblockcount"),
> -                fslabel[label].mkfsopt ) )
> +                fslabel[label].mkfsopt))
>              # only append the ubifs file if creation didn't fail
> -            img_files.append ("%s.ubifs" % label)
> +            img_files.append("%s.ubifs" % label)
>          except CommandError as e:
>              # continue creating further ubifs filesystems
>              pass
>  
>      return img_files
>  
> -def build_image_mtd( outf, mtd, target ):
> +
> +def build_image_mtd(outf, mtd, target):
>  
>      img_files = []
>  
> @@ -62,13 +64,14 @@ def build_image_mtd( outf, mtd, target ):
>  
>      ubivg = mtd.node("ubivg")
>  
> -    cfgfilename = "%s_%s.cfg" % (mtd.text("name"), mtd.node("ubivg").text("label"))
> -    fp = open( os.path.join(target, cfgfilename ), "w" )
> +    cfgfilename = "%s_%s.cfg" % (
> +        mtd.text("name"), mtd.node("ubivg").text("label"))
> +    fp = open(os.path.join(target, cfgfilename), "w")
>  
>      for vol in mtd.node("ubivg"):
>          if vol.has("label"):
> -            fp.write( "[%s]\n" % vol.text("label") )
> -            fp.write( "mode=ubi\n" )
> +            fp.write("[%s]\n" % vol.text("label"))
> +            fp.write("mode=ubi\n")
>              if not vol.has("empty"):
>                  if vol.has("binary"):
>                      tmp = ""
> @@ -78,26 +81,35 @@ def build_image_mtd( outf, mtd, target ):
>                      # copy from project directory
>                      else:
>                          tmp = target + "/" + vol.text("binary")
> -                    outf.do( "cp %s %s/%s.ubibin" % ( tmp, target, vol.text("label") ) )
> -                    img_files.append( vol.text("label") + ".ubibin" )
> -                    fp.write( "image=%s.ubibin\n" % os.path.join( target, vol.text("label") ) )
> +                    outf.do(
> +                        "cp %s %s/%s.ubibin" %
> +                        (tmp, target, vol.text("label")))
> +                    img_files.append(vol.text("label") + ".ubibin")
> +                    fp.write(
> +                        "image=%s.ubibin\n" %
> +                        os.path.join(
> +                            target,
> +                            vol.text("label")))
>                  else:
> -                    fp.write( "image=%s.ubifs\n" % os.path.join(target,
> -                        vol.text("label")) )
> +                    fp.write(
> +                        "image=%s.ubifs\n" %
> +                        os.path.join(
> +                            target,
> +                            vol.text("label")))
>              else:
> -                empt = open("/tmp/empty", "w" )
> +                empt = open("/tmp/empty", "w")
>                  empt.write("EMPTY")
>                  empt.close()
> -                fp.write( "image=/tmp/empty\n" )
> +                fp.write("image=/tmp/empty\n")
>  
> -            fp.write( "vol_type=%s\n" % vol.text("type") )
> -            fp.write( "vol_id=%s\n" % vol.text("id") )
> -            fp.write( "vol_name=%s\n" % vol.text("label") )
> +            fp.write("vol_type=%s\n" % vol.text("type"))
> +            fp.write("vol_id=%s\n" % vol.text("id"))
> +            fp.write("vol_name=%s\n" % vol.text("label"))
>  
>              if vol.text("size") != "remain":
> -                fp.write( "vol_size=%d\n" % size_to_int( vol.text("size") ) )
> +                fp.write("vol_size=%d\n" % size_to_int(vol.text("size")))
>              else:
> -                fp.write( "vol_flags=autoresize\n" )
> +                fp.write("vol_flags=autoresize\n")
>  
>      fp.close()
>  
> @@ -107,16 +119,16 @@ def build_image_mtd( outf, mtd, target ):
>          subp = ""
>  
>      try:
> -        outf.do( "ubinize %s -o %s -p %s -m %s %s/%s_%s.cfg" % (
> +        outf.do("ubinize %s -o %s -p %s -m %s %s/%s_%s.cfg" % (
>              subp,
>              os.path.join(target, mtd.text("name")),
>              ubivg.text("physicaleraseblocksize"),
>              ubivg.text("miniosize"),
>              target,
>              mtd.text("name"),
> -            ubivg.text("label") ) )
> +            ubivg.text("label")))
>          # only add file to list if ubinize command was successful
> -        img_files.append (mtd.text("name"))
> +        img_files.append(mtd.text("name"))
>  
>      except CommandError as e:
>          # continue with generating further images
> @@ -124,162 +136,240 @@ def build_image_mtd( outf, mtd, target ):
>  
>      return img_files
>  
> -def size_to_int( size ):
> +
> +def size_to_int(size):
>      if size[-1] in digits:
>          return int(size)
>  
> -    if size.endswith( "M" ):
> -        unit = 1000*1000
> +    if size.endswith("M"):
> +        unit = 1000 * 1000
>          s = size[:-1]
> -    elif size.endswith( "MiB" ):
> -        unit = 1024*1024
> +    elif size.endswith("MiB"):
> +        unit = 1024 * 1024
>          s = size[:-3]
> -    elif size.endswith( "MB" ):
> -        unit = 1000*1000
> +    elif size.endswith("MB"):
> +        unit = 1000 * 1000
>          s = size[:-2]
> -    if size.endswith( "G" ):
> -        unit = 1000*1000*1000
> +    if size.endswith("G"):
> +        unit = 1000 * 1000 * 1000
>          s = size[:-1]
> -    elif size.endswith( "GiB" ):
> -        unit = 1024*1024*1024
> +    elif size.endswith("GiB"):
> +        unit = 1024 * 1024 * 1024
>          s = size[:-3]
> -    elif size.endswith( "GB" ):
> -        unit = 1000*1000*1000
> +    elif size.endswith("GB"):
> +        unit = 1000 * 1000 * 1000
>          s = size[:-2]
> -    if size.endswith( "k" ):
> +    if size.endswith("k"):
>          unit = 1000
>          s = size[:-1]
> -    elif size.endswith( "kiB" ):
> +    elif size.endswith("kiB"):
>          unit = 1024
>          s = size[:-3]
> -    elif size.endswith( "kB" ):
> +    elif size.endswith("kB"):
>          unit = 1000
>          s = size[:-2]
>  
>      return int(s) * unit
>  
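autopep8 correctly leaves the logic alone, but while staring at this hunk: the G and k chains restart with `if` instead of continuing the `elif` chain, and an unrecognized suffix (say "KB" with a capital K) falls all the way through to `return int(s) * unit` with `unit` and `s` unbound. If anyone reworks this later, a table-driven version stays short (a sketch, not part of this patch; suffix coverage as in the current code):

```python
from string import digits


def size_to_int(size):
    # longest suffix first, so "MiB" is not mis-matched by "M"
    units = [("GiB", 1024 ** 3), ("GB", 1000 ** 3), ("G", 1000 ** 3),
             ("MiB", 1024 ** 2), ("MB", 1000 ** 2), ("M", 1000 ** 2),
             ("kiB", 1024), ("kB", 1000), ("k", 1000)]
    if size[-1] in digits:
        return int(size)
    for suffix, unit in units:
        if size.endswith(suffix):
            return int(size[:-len(suffix)]) * unit
    # instead of an UnboundLocalError on 'unit', fail loudly
    raise ValueError("unknown size suffix: %s" % size)
```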
>  
> -class grubinstaller_base( object ):
> -    def __init__( self, outf ):
> +class grubinstaller_base(object):
> +    def __init__(self, outf):
>          self.outf = outf
>          self.root = None
>          self.boot = None
>  
> -    def set_boot_entry( self, entry ):
> +    def set_boot_entry(self, entry):
>          print("setting boot entry")
>          self.boot = entry
>  
> -    def set_root_entry( self, entry ):
> +    def set_root_entry(self, entry):
>          self.root = entry
>  
> -    def install( self, target ):
> +    def install(self, target):
>          pass
>  
> -class grubinstaller202( grubinstaller_base ):
>  
> -    def install( self, target ):
> +class grubinstaller202(grubinstaller_base):
> +
> +    def install(self, target):
>          if not self.root:
>              return
>  
> -        imagemnt = os.path.join(target, "imagemnt" )
> +        imagemnt = os.path.join(target, "imagemnt")
>          try:
> -            self.outf.do( 'cp -a /dev/loop0 /dev/poop0' )
> +            self.outf.do('cp -a /dev/loop0 /dev/poop0')
>  
> -            self.outf.do( 'losetup /dev/poop0 "%s"' % self.root.filename )
> -            self.outf.do( 'kpartx -as /dev/poop0' );
> -            self.outf.do( 'mount /dev/mapper/poop0p%d %s' % (self.root.partnum,imagemnt))
> +            self.outf.do('losetup /dev/poop0 "%s"' % self.root.filename)
> +            self.outf.do('kpartx -as /dev/poop0')
> +            self.outf.do(
> +                'mount /dev/mapper/poop0p%d %s' %
> +                (self.root.partnum, imagemnt))
>  
>              if self.boot:
> -                self.outf.do( 'mount /dev/mapper/poop0p%d %s' % (self.boot.partnum,os.path.join( imagemnt, "boot" ) ) )
> -
> -            self.outf.do( "mount --bind /dev %s" % os.path.join( imagemnt, "dev" ) )
> -            self.outf.do( "mount --bind /proc %s" % os.path.join( imagemnt, "proc" ) )
> -            self.outf.do( "mount --bind /sys %s" % os.path.join( imagemnt, "sys" ) )
> -
> -            self.outf.do( 'mkdir -p "%s"' % os.path.join( imagemnt, "boot/grub" ))
> -
> -            devmap = open( os.path.join( imagemnt, "boot/grub/device.map" ), "w" )
> -            devmap.write( "(hd0) /dev/poop0\n" )
> +                self.outf.do(
> +                    'mount /dev/mapper/poop0p%d %s' %
> +                    (self.boot.partnum, os.path.join(
> +                        imagemnt, "boot")))
> +
> +            self.outf.do(
> +                "mount --bind /dev %s" %
> +                os.path.join(
> +                    imagemnt,
> +                    "dev"))
> +            self.outf.do(
> +                "mount --bind /proc %s" %
> +                os.path.join(
> +                    imagemnt,
> +                    "proc"))
> +            self.outf.do(
> +                "mount --bind /sys %s" %
> +                os.path.join(
> +                    imagemnt,
> +                    "sys"))
> +
> +            self.outf.do('mkdir -p "%s"' % os.path.join(imagemnt, "boot/grub"))
> +
> +            devmap = open(os.path.join(imagemnt, "boot/grub/device.map"), "w")
> +            devmap.write("(hd0) /dev/poop0\n")
>              devmap.close()
>  
> -            self.outf.do( "chroot %s  update-initramfs -u -k all"  % imagemnt )
> -            self.outf.do( "chroot %s  update-grub2"  % imagemnt )
> +            self.outf.do("chroot %s  update-initramfs -u -k all" % imagemnt)
> +            self.outf.do("chroot %s  update-grub2" % imagemnt)
>  
> -            self.outf.do( "chroot %s grub-install --no-floppy /dev/poop0" % (imagemnt))
> +            self.outf.do(
> +                "chroot %s grub-install --no-floppy /dev/poop0" %
> +                (imagemnt))
>  
>          finally:
> -            os.unlink( os.path.join( imagemnt, "boot/grub/device.map" ))
> -            self.outf.do( "umount %s" % os.path.join( imagemnt, "dev" ), allow_fail=True )
> -            self.outf.do( "umount %s" % os.path.join( imagemnt, "proc" ), allow_fail=True )
> -            self.outf.do( "umount %s" % os.path.join( imagemnt, "sys" ), allow_fail=True )
> +            os.unlink(os.path.join(imagemnt, "boot/grub/device.map"))
> +            self.outf.do(
> +                "umount %s" %
> +                os.path.join(
> +                    imagemnt,
> +                    "dev"),
> +                allow_fail=True)
> +            self.outf.do(
> +                "umount %s" %
> +                os.path.join(
> +                    imagemnt,
> +                    "proc"),
> +                allow_fail=True)
> +            self.outf.do(
> +                "umount %s" %
> +                os.path.join(
> +                    imagemnt,
> +                    "sys"),
> +                allow_fail=True)
>  
>              if self.boot:
> -                self.outf.do( 'umount /dev/mapper/poop0p%d' % self.boot.partnum, allow_fail=True )
> +                self.outf.do(
> +                    'umount /dev/mapper/poop0p%d' %
> +                    self.boot.partnum, allow_fail=True)
> +
> +            self.outf.do(
> +                'umount /dev/mapper/poop0p%d' %
> +                self.root.partnum, allow_fail=True)
>  
> -            self.outf.do( 'umount /dev/mapper/poop0p%d' % self.root.partnum, allow_fail=True )
> +            self.outf.do('kpartx -d /dev/poop0', allow_fail=True)
> +            self.outf.do("losetup -d /dev/poop0", allow_fail=True)
>  
> -            self.outf.do( 'kpartx -d /dev/poop0', allow_fail=True );
> -            self.outf.do( "losetup -d /dev/poop0", allow_fail=True )
>  
> -class grubinstaller199( grubinstaller_base ):
> +class grubinstaller199(grubinstaller_base):
>  
> -    def install( self, target ):
> +    def install(self, target):
>          if not self.root:
>              return
>  
> -        imagemnt = os.path.join(target, "imagemnt" )
> +        imagemnt = os.path.join(target, "imagemnt")
>          try:
> -            self.outf.do( 'cp -a /dev/loop0 /dev/poop0' )
> -            self.outf.do( 'cp -a /dev/loop1 /dev/poop1' )
> -            self.outf.do( 'cp -a /dev/loop2 /dev/poop2' )
> +            self.outf.do('cp -a /dev/loop0 /dev/poop0')
> +            self.outf.do('cp -a /dev/loop1 /dev/poop1')
> +            self.outf.do('cp -a /dev/loop2 /dev/poop2')
>  
> -            self.outf.do( 'losetup /dev/poop0 "%s"' % self.root.filename )
> -            self.root.losetup( self.outf, "poop1" )
> -            self.outf.do( 'mount /dev/poop1 %s' % imagemnt )
> +            self.outf.do('losetup /dev/poop0 "%s"' % self.root.filename)
> +            self.root.losetup(self.outf, "poop1")
> +            self.outf.do('mount /dev/poop1 %s' % imagemnt)
>  
>              if self.boot:
> -                self.boot.losetup( self.outf, "poop2" )
> -                self.outf.do( 'mount /dev/poop2 %s' % (os.path.join( imagemnt, "boot" ) ) )
> +                self.boot.losetup(self.outf, "poop2")
> +                self.outf.do('mount /dev/poop2 %s' %
> +                             (os.path.join(imagemnt, "boot")))
>  
> -            devmap = open( os.path.join( imagemnt, "boot/grub/device.map" ), "w" )
> -            devmap.write( "(hd0) /dev/poop0\n" )
> -            devmap.write( "(hd0,%s) /dev/poop1\n" % self.root.number )
> +            devmap = open(os.path.join(imagemnt, "boot/grub/device.map"), "w")
> +            devmap.write("(hd0) /dev/poop0\n")
> +            devmap.write("(hd0,%s) /dev/poop1\n" % self.root.number)
>              if self.boot:
> -                devmap.write( "(hd0,%s) /dev/poop2\n" % self.boot.number )
> +                devmap.write("(hd0,%s) /dev/poop2\n" % self.boot.number)
>  
>              devmap.close()
>  
> -
> -            self.outf.do( "mount --bind /dev %s" % os.path.join( imagemnt, "dev" ) )
> -            self.outf.do( "mount --bind /proc %s" % os.path.join( imagemnt, "proc" ) )
> -            self.outf.do( "mount --bind /sys %s" % os.path.join( imagemnt, "sys" ) )
> -
> -            self.outf.do( "chroot %s  update-initramfs -u -k all"  % imagemnt )
> -            self.outf.do( "chroot %s  update-grub2"  % imagemnt )
> -
> -            self.outf.do( "chroot %s  grub-install --no-floppy /dev/loop0" % imagemnt)
> +            self.outf.do(
> +                "mount --bind /dev %s" %
> +                os.path.join(
> +                    imagemnt,
> +                    "dev"))
> +            self.outf.do(
> +                "mount --bind /proc %s" %
> +                os.path.join(
> +                    imagemnt,
> +                    "proc"))
> +            self.outf.do(
> +                "mount --bind /sys %s" %
> +                os.path.join(
> +                    imagemnt,
> +                    "sys"))
> +
> +            self.outf.do("chroot %s  update-initramfs -u -k all" % imagemnt)
> +            self.outf.do("chroot %s  update-grub2" % imagemnt)
> +
> +            self.outf.do(
> +                "chroot %s  grub-install --no-floppy /dev/loop0" %
> +                imagemnt)
>  
>          finally:
> -            os.unlink( os.path.join( imagemnt, "boot/grub/device.map" ))
> -
> -            self.outf.do( "umount -l %s" % os.path.join( imagemnt, "dev" ), allow_fail=True )
> -            self.outf.do( "umount -l %s" % os.path.join( imagemnt, "proc" ), allow_fail=True )
> -            self.outf.do( "umount -l %s" % os.path.join( imagemnt, "sys" ), allow_fail=True )
> -
> -            self.outf.do( "losetup -d /dev/poop0", allow_fail=True )
> +            os.unlink(os.path.join(imagemnt, "boot/grub/device.map"))
> +
> +            self.outf.do(
> +                "umount -l %s" %
> +                os.path.join(
> +                    imagemnt,
> +                    "dev"),
> +                allow_fail=True)
> +            self.outf.do(
> +                "umount -l %s" %
> +                os.path.join(
> +                    imagemnt,
> +                    "proc"),
> +                allow_fail=True)
> +            self.outf.do(
> +                "umount -l %s" %
> +                os.path.join(
> +                    imagemnt,
> +                    "sys"),
> +                allow_fail=True)
> +
> +            self.outf.do("losetup -d /dev/poop0", allow_fail=True)
>  
>              if self.boot:
> -                self.outf.do( 'umount /dev/poop2', allow_fail=True )
> -                self.outf.do( 'losetup -d /dev/poop2', allow_fail=True )
> +                self.outf.do('umount /dev/poop2', allow_fail=True)
> +                self.outf.do('losetup -d /dev/poop2', allow_fail=True)
> +
> +            self.outf.do('umount /dev/poop1', allow_fail=True)
> +            self.outf.do('losetup -d /dev/poop1', allow_fail=True)
>  
> -            self.outf.do( 'umount /dev/poop1', allow_fail=True )
> -            self.outf.do( 'losetup -d /dev/poop1', allow_fail=True )
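Side note, not introduced by this patch: the loop device names above really are spelled "/dev/poop0"/"/dev/poop1" in the tree, and every mount has its umount written out a second time by hand. A hypothetical helper (not in elbe) that derives both commands from one list would make such mismatches harder to introduce, e.g.:

```python
import os

# Hypothetical helper: build matching (mount, umount) command pairs
# from a single list, so setup and cleanup cannot drift apart.
def bind_mount_cmds(imagemnt, subdirs=("dev", "proc", "sys")):
    pairs = []
    for d in subdirs:
        path = os.path.join(imagemnt, d)
        pairs.append(("mount --bind /%s %s" % (d, path),
                      "umount -l %s" % path))
    return pairs
```

self.outf.do() would then run the first element of each pair in the try block and the second, with allow_fail=True, in the finally block.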
>  
>  class simple_fstype(object):
>      def __init__(self, typ):
>          self.type = typ
>  
> -def create_partition(disk, part, ptype, fslabel, size_in_sectors, current_sector):
> +
> +def create_partition(
> +        disk,
> +        part,
> +        ptype,
> +        fslabel,
> +        size_in_sectors,
> +        current_sector):
>  
>      sector_size = 512
>      if part.text("size") == "remain" and disk.type == "gpt":
> @@ -287,10 +377,11 @@ def create_partition(disk, part, ptype, fslabel, size_in_sectors, current_sector
>      elif part.text("size") == "remain":
>          sz = size_in_sectors - current_sector
>      else:
> -        sz = size_to_int(part.text("size"))/sector_size
> +        sz = size_to_int(part.text("size")) / sector_size
>  
>      g = parted.Geometry(device=disk.device, start=current_sector, length=sz)
> -    if ptype != parted.PARTITION_EXTENDED and part.text("label") in fslabel and fslabel[part.text("label")].fstype == "vfat":
> +    if ptype != parted.PARTITION_EXTENDED and part.text(
> +            "label") in fslabel and fslabel[part.text("label")].fstype == "vfat":
>          fs = simple_fstype("fat32")
>          ppart = parted.Partition(disk, ptype, fs, geometry=g)
>          ppart.setFlag(_ped.PARTITION_LBA)
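The double `-a` aggressiveness level makes autopep8 split this condition in the middle of the part.text() call, which is hard to read. A pure sketch of the same test (function name assumed, not in elbe) that fetches the label once:

```python
def is_lba_vfat(is_extended, label, fslabel):
    # Same condition as in create_partition above, with the label looked
    # up once; fslabel maps label names to entries carrying .fstype.
    return (not is_extended
            and label in fslabel
            and fslabel[label].fstype == "vfat")
```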
> @@ -315,27 +406,43 @@ def create_label(outf, disk, part, ppart, fslabel, target, grub):
>      partition_number = ppart.number
>      entry = fslabel[part.text("label")]
>      entry.offset = ppart.geometry.start * sector_size
> -    entry.size   = ppart.getLength() * sector_size
> +    entry.size = ppart.getLength() * sector_size
>      entry.filename = disk.device.path
>      entry.partnum = partition_number
>      entry.number = '{}{}'.format(disk.type, partition_number)
>  
>      if entry.mountpoint == "/":
> -        grub.set_root_entry( entry )
> +        grub.set_root_entry(entry)
>      elif entry.mountpoint == "/boot":
> -        grub.set_boot_entry( entry )
> -
> -    entry.losetup( outf, "loop0" )
> -    outf.do( 'mkfs.%s %s %s /dev/loop0' % ( entry.fstype, entry.mkfsopt, entry.get_label_opt() ) )
> -
> -    outf.do( 'mount /dev/loop0 %s' % os.path.join(target, "imagemnt" ) )
> -    outf.do( 'cp -a "%s"/* "%s"' % ( os.path.join( target, "filesystems", entry.id ), os.path.join(target, "imagemnt") ), allow_fail=True )
> -    outf.do( 'umount /dev/loop0' )
> -    outf.do( 'losetup -d /dev/loop0' )
> +        grub.set_boot_entry(entry)
> +
> +    entry.losetup(outf, "loop0")
> +    outf.do(
> +        'mkfs.%s %s %s /dev/loop0' %
> +        (entry.fstype,
> +         entry.mkfsopt,
> +         entry.get_label_opt()))
> +
> +    outf.do('mount /dev/loop0 %s' % os.path.join(target, "imagemnt"))
> +    outf.do(
> +        'cp -a "%s"/* "%s"' %
> +        (os.path.join(
> +            target, "filesystems", entry.id), os.path.join(
> +            target, "imagemnt")), allow_fail=True)
> +    outf.do('umount /dev/loop0')
> +    outf.do('losetup -d /dev/loop0')
>  
>      return ppart
>  
> -def create_logical_partitions(outf, disk, extended, epart, fslabel, target, grub):
> +
> +def create_logical_partitions(
> +        outf,
> +        disk,
> +        extended,
> +        epart,
> +        fslabel,
> +        target,
> +        grub):
>  
>      current_sector = epart.geometry.start
>      size_in_sectors = current_sector + epart.geometry.length
> @@ -345,47 +452,67 @@ def create_logical_partitions(outf, disk, extended, epart, fslabel, target, grub
>              continue
>  
>          current_sector += 2048
> -        lpart = create_partition(disk, logical, parted.PARTITION_LOGICAL, fslabel, size_in_sectors, current_sector)
> +        lpart = create_partition(
> +            disk,
> +            logical,
> +            parted.PARTITION_LOGICAL,
> +            fslabel,
> +            size_in_sectors,
> +            current_sector)
>          if logical.has("label") and logical.text("label") in fslabel:
>              create_label(outf, disk, logical, lpart, fslabel, target, grub)
>  
> -        current_sector += lpart.getLength();
> +        current_sector += lpart.getLength()
>  
> -def do_image_hd( outf, hd, fslabel, target, grub_version ):
> +
> +def do_image_hd(outf, hd, fslabel, target, grub_version):
>  
>      sector_size = 512
> -    s=size_to_int(hd.text("size"))
> +    s = size_to_int(hd.text("size"))
>      size_in_sectors = s / sector_size
>  
> -    imagename = os.path.join(target,hd.text("name"))
> -    outf.do( 'rm -f "%s"' % imagename, allow_fail=True )
> -    f = open( imagename, "wb" )
> -    f.truncate( size_in_sectors * sector_size )
> +    imagename = os.path.join(target, hd.text("name"))
> +    outf.do('rm -f "%s"' % imagename, allow_fail=True)
> +    f = open(imagename, "wb")
> +    f.truncate(size_in_sectors * sector_size)
>      f.close()
>  
> -    imag = parted.Device( imagename )
> +    imag = parted.Device(imagename)
>      if hd.tag == "gpthd":
> -        disk = parted.freshDisk(imag, "gpt" )
> +        disk = parted.freshDisk(imag, "gpt")
>      else:
> -        disk = parted.freshDisk(imag, "msdos" )
> +        disk = parted.freshDisk(imag, "msdos")
>  
>      if grub_version == 199:
> -        grub = grubinstaller199( outf )
> +        grub = grubinstaller199(outf)
>      elif grub_version == 202:
> -        grub = grubinstaller202( outf )
> +        grub = grubinstaller202(outf)
>      else:
> -        grub = grubinstaller_base( outf );
> +        grub = grubinstaller_base(outf)
>  
>      current_sector = 2048
>      for part in hd:
>  
>          if part.tag == "partition":
> -            ppart = create_partition(disk, part, parted.PARTITION_NORMAL, fslabel, size_in_sectors, current_sector)
> +            ppart = create_partition(
> +                disk,
> +                part,
> +                parted.PARTITION_NORMAL,
> +                fslabel,
> +                size_in_sectors,
> +                current_sector)
>              if part.text("label") in fslabel:
>                  create_label(outf, disk, part, ppart, fslabel, target, grub)
>          elif part.tag == "extended":
> -            ppart = create_partition(disk, part, parted.PARTITION_EXTENDED, fslabel, size_in_sectors, current_sector)
> -            create_logical_partitions(outf, disk, part, ppart, fslabel, target, grub)
> +            ppart = create_partition(
> +                disk,
> +                part,
> +                parted.PARTITION_EXTENDED,
> +                fslabel,
> +                size_in_sectors,
> +                current_sector)
> +            create_logical_partitions(
> +                outf, disk, part, ppart, fslabel, target, grub)
>          else:
>              continue
>  
> @@ -393,15 +520,15 @@ def do_image_hd( outf, hd, fslabel, target, grub_version ):
>  
>      disk.commit()
>  
> -    if hd.has( "grub-install" ) and grub_version:
> -        grub.install( target )
> +    if hd.has("grub-install") and grub_version:
> +        grub.install(target)
>  
>      return hd.text("name")
>  
>  
> -def add_binary_blob( outf, hd, target ):
> +def add_binary_blob(outf, hd, target):
>  
> -    imagename = os.path.join(target,hd.text("name"))
> +    imagename = os.path.join(target, hd.text("name"))
>  
>      for binary in hd:
>          if binary.tag != "binary":
> @@ -425,11 +552,12 @@ def add_binary_blob( outf, hd, target ):
>          else:
>              bf = os.path.join(target, binary.et.text)
>  
> -        outf.do( 'dd if="%s" of="%s" seek="%s" bs="%s" conv=notrunc' % (
> +        outf.do('dd if="%s" of="%s" seek="%s" bs="%s" conv=notrunc' % (
>              bf,
>              imagename,
>              offset,
> -            bs) )
> +            bs))
> +
>  
>  def do_hdimg(outf, xml, target, rfs, grub_version):
>      # list of created files
> @@ -441,7 +569,7 @@ def do_hdimg(outf, xml, target, rfs, grub_version):
>  
>      # Build a dictionary of mount points
>      fslabel = {}
> -    mountpoints = mountpoint_dict ()
> +    mountpoints = mountpoint_dict()
>  
>      for fs in xml.tgt.node("fstab"):
>          if fs.tag != "bylabel":
> @@ -452,69 +580,72 @@ def do_hdimg(outf, xml, target, rfs, grub_version):
>  
>          # register it with mountpoints,
>          # this also sets the id field
> -        mountpoints.register (e)
> +        mountpoints.register(e)
>  
>          fslabel[fs.text("label")] = e
>  
>      # Get the sorted list of mountpoints
> -    fslist = mountpoints.depthlist ()
> +    fslist = mountpoints.depthlist()
>  
>      # create directories, where we want our
>      # filesystems later
>      fspath = os.path.join(target, "filesystems")
> -    outf.do( 'mkdir -p %s' % fspath )
> +    outf.do('mkdir -p %s' % fspath)
>  
>      imagemnt = os.path.join(target, "imagemnt")
> -    outf.do( 'mkdir -p %s' % imagemnt )
> +    outf.do('mkdir -p %s' % imagemnt)
>  
>      # now move all mountpoints into own directories
>      # begin from deepest mountpoints
>      for l in reversed(fslist):
> -        outf.do( 'mkdir -p "%s"' % os.path.join( fspath, l.id ) )
> -        outf.do( 'mkdir -p "%s"' % rfs.fname('') + l.mountpoint )
> -        if len(rfs.listdir( l.mountpoint )) > 0:
> -            outf.do( 'mv "%s"/* "%s"' % ( rfs.fname(l.mountpoint), os.path.join(
> -                fspath, l.id ) ), allow_fail=True )
> +        outf.do('mkdir -p "%s"' % os.path.join(fspath, l.id))
> +        outf.do('mkdir -p "%s"' % rfs.fname('') + l.mountpoint)
> +        if len(rfs.listdir(l.mountpoint)) > 0:
> +            outf.do('mv "%s"/* "%s"' % (rfs.fname(l.mountpoint), os.path.join(
> +                fspath, l.id)), allow_fail=True)
>  
>      try:
>          # Now iterate over all images and create filesystems and partitions
>          for i in xml.tgt.node("images"):
>              if i.tag == "msdoshd":
> -                img = do_image_hd( outf, i, fslabel, target, grub_version )
> -                img_files.append (img)
> +                img = do_image_hd(outf, i, fslabel, target, grub_version)
> +                img_files.append(img)
>  
>              if i.tag == "gpthd":
> -                img = do_image_hd( outf, i, fslabel, target, grub_version )
> -                img_files.append (img)
> +                img = do_image_hd(outf, i, fslabel, target, grub_version)
> +                img_files.append(img)
>  
>              if i.tag == "mtd":
> -                imgs = mkfs_mtd( outf, i, fslabel, rfs, target )
> -                img_files.extend (imgs)
> +                imgs = mkfs_mtd(outf, i, fslabel, rfs, target)
> +                img_files.extend(imgs)
>      finally:
>          # Put back the filesystems into /target
>          # most shallow fs first...
>          for i in fslist:
> -            if len(os.listdir(os.path.join( fspath, i.id ))) > 0:
> -                outf.do( 'mv "%s"/* "%s"' % ( os.path.join( fspath, i.id ),
> -                    rfs.fname(i.mountpoint) ), allow_fail=True )
> +            if len(os.listdir(os.path.join(fspath, i.id))) > 0:
> +                outf.do(
> +                    'mv "%s"/* "%s"' %
> +                    (os.path.join(
> +                        fspath, i.id), rfs.fname(
> +                        i.mountpoint)), allow_fail=True)
>  
>      # Files are now moved back. ubinize needs files in place, so we run it now.
>      for i in xml.tgt.node("images"):
>          if i.tag == "mtd":
> -            imgs = build_image_mtd( outf, i, target )
> -            img_files.extend (imgs)
> +            imgs = build_image_mtd(outf, i, target)
> +            img_files.extend(imgs)
>  
>      # dd binary blobs onto images
>      for i in xml.tgt.node("images"):
>          if (i.tag == "msdoshd") or (i.tag == "gpt"):
> -            add_binary_blob( outf, i, target )
> +            add_binary_blob(outf, i, target)
>  
>      gz_img_files = []
>      for i in set(img_files):
>          try:
> -            outf.do ( 'gzip -f "%s"' % os.path.join (target, i))
> +            outf.do('gzip -f "%s"' % os.path.join(target, i))
>              # only add gz to gz_img_files, if no exception is thrown
> -            gz_img_files.append( i+".gz" )
> +            gz_img_files.append(i + ".gz")
>          except CommandError:
>              pass
>  
> diff --git a/elbepack/initvmaction.py b/elbepack/initvmaction.py
> index 86737de2..0015db92 100644
> --- a/elbepack/initvmaction.py
> +++ b/elbepack/initvmaction.py
> @@ -6,11 +6,11 @@
>  from __future__ import print_function
>  
>  import elbepack
> -from elbepack.treeutils   import etree
> +from elbepack.treeutils import etree
>  from elbepack.directories import elbe_exe
>  from elbepack.shellhelper import CommandError, system, command_out_stderr
> -from elbepack.filesystem  import wdfs, TmpdirFilesystem
> -from elbepack.elbexml     import ElbeXML, ValidationError, ValidationMode
> +from elbepack.filesystem import wdfs, TmpdirFilesystem
> +from elbepack.elbexml import ElbeXML, ValidationError, ValidationMode
>  
>  from tempfile import NamedTemporaryFile
>  
> @@ -21,51 +21,64 @@ import datetime
>  
>  import libvirt
>  
> -cmd_exists = lambda x: any(os.access(os.path.join(path, x), os.X_OK) for path in os.environ["PATH"].split(os.pathsep))
> +
> +def cmd_exists(x): return any(os.access(os.path.join(path, x), os.X_OK)
> +                              for path in os.environ["PATH"].split(os.pathsep))
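The lambda-to-def conversion above is what autopep8 emits, but a def with the return on its own line reads better than this def-with-inline-return. Once Python 2 support is dropped (the file still imports print_function), shutil.which() from the stdlib (Python 3.3+) could replace the hand-rolled PATH walk entirely; a sketch:

```python
import shutil

def cmd_exists(x):
    # shutil.which() performs the same PATH search as the expression
    # above, and additionally honours PATHEXT on Windows.
    return shutil.which(x) is not None
```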
>  
>  # Create download directory with timestamp,
>  # if necessary
> -def ensure_outdir (wdfs, opt):
> +
> +
> +def ensure_outdir(wdfs, opt):
>      if opt.outdir is None:
>          opt.outdir = "elbe-build-" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
>  
>      print("Saving generated Files to %s" % opt.outdir)
>  
> +
>  def ensure_initvm_defined():
> -    if self.initvm == None:
> +    if self.initvm is None:
>          sys.exit(20)
>  
> +
>  class InitVMError(Exception):
>      def __init__(self, str):
>          Exception.__init__(self, str)
>  
> +
>  class InitVMAction(object):
>      actiondict = {}
> +
>      @classmethod
>      def register(cls, action):
>          cls.actiondict[action.tag] = action
> +
>      @classmethod
>      def print_actions(cls):
>          print("available subcommands are:", file=sys.stderr)
>          for a in cls.actiondict:
>              print("   %s" % a, file=sys.stderr)
> +
>      def __new__(cls, node):
>          action = cls.actiondict[node]
>          return object.__new__(action)
> -    def __init__(self, node, initvmNeeded = True):
> -        # The tag initvmNeeded is required in order to be able to run `elbe initvm create`
> +
> +    def __init__(self, node, initvmNeeded=True):
> +        # The tag initvmNeeded is required in order to be able to run `elbe
> +        # initvm create`
>          self.conn = libvirt.open("qemu:///session")
>          try:
>              self.initvm = self.conn.lookupByName('initvm')
>          except libvirt.libvirtError:
>              self.initvm = None
> -            if initvmNeeded == True:
> +            if initvmNeeded:
>                  sys.exit(20)
>          self.node = node
>  
>      def initvm_state(self):
>          return self.initvm.info()[0]
>  
> +
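The magic numbers that EnsureAction and StopAction below compare against initvm_state() are libvirt domain states. libvirt-python exposes named constants for them; naming them once would make the checks self-documenting. A sketch (the values are copied from libvirt's virDomainState enum, duplicated here only so it runs without libvirt installed):

```python
# Values match libvirt.VIR_DOMAIN_RUNNING / libvirt.VIR_DOMAIN_SHUTOFF.
VIR_DOMAIN_RUNNING = 1
VIR_DOMAIN_SHUTOFF = 5

def describe_state(code):
    names = {VIR_DOMAIN_RUNNING: "running",
             VIR_DOMAIN_SHUTOFF: "shut off"}
    return names.get(code, "other (%d)" % code)
```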
>  class StartAction(InitVMAction):
>  
>      tag = 'start'
> @@ -82,14 +95,16 @@ class StartAction(InitVMAction):
>              self.initvm.create()
>              # TODO: Instead of waiting for five seconds check whether SOAP server is reachable
>              # Wait five seconds for the initvm to boot
> -            for i in range (1, 5):
> -                sys.stdout.write ("*")
> -                sys.stdout.flush ()
> -                time.sleep (1)
> +            for i in range(1, 5):
> +                sys.stdout.write("*")
> +                sys.stdout.flush()
> +                time.sleep(1)
>              print("*")
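Regarding the TODO above (and note the loop actually sleeps four seconds, since range(1, 5) yields four iterations): a bounded poll on the SOAP port would replace the fixed wait. Host and port are assumptions here, the patch does not show where the SOAP server listens; sketch only:

```python
import socket
import time

def wait_for_port(host, port, timeout=30.0, interval=1.0):
    # Poll until a TCP connect succeeds or the deadline passes.
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False
```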
>  
> +
>  InitVMAction.register(StartAction)
>  
> +
>  class EnsureAction(InitVMAction):
>  
>      tag = 'ensure'
> @@ -99,13 +114,14 @@ class EnsureAction(InitVMAction):
>  
>      def execute(self, initvmdir, opt, args):
>          if self.initvm_state() == 5:
> -            system ('%s initvm start' % elbe_exe)
> +            system('%s initvm start' % elbe_exe)
>          elif self.initvm_state() == 1:
>              pass
>          else:
>              print("Elbe initvm in bad state.")
>              sys.exit(20)
>  
> +
>  InitVMAction.register(EnsureAction)
>  
>  
> @@ -124,12 +140,13 @@ class StopAction(InitVMAction):
>              # Shutdown initvm
>              self.initvm.shutdown()
>              while(True):
> -                sys.stdout.write ("*")
> -                sys.stdout.flush ()
> +                sys.stdout.write("*")
> +                sys.stdout.flush()
>                  if self.initvm_state() == 5:
>                      print("\nInitvm shut off.")
>                      break
> -                time.sleep (1)
> +                time.sleep(1)
> +
>  
>  InitVMAction.register(StopAction)
>  
> @@ -149,8 +166,10 @@ class AttachAction(InitVMAction):
>          print('Attaching to initvm console.')
>          system('virsh console initvm')
>  
> +
>  InitVMAction.register(AttachAction)
>  
> +
>  class StartBuildAction(InitVMAction):
>  
>      tag = 'start_build'
> @@ -160,31 +179,40 @@ class StartBuildAction(InitVMAction):
>  
>      def execute(self, initvmdir, opt, args):
>          try:
> -            have_session = os.system( "tmux has-session -t ElbeInitVMSession >/dev/null 2>&1" )
> +            have_session = os.system(
> +                "tmux has-session -t ElbeInitVMSession >/dev/null 2>&1")
>          except CommandError as e:
>              print("tmux execution failed, tmux version 1.9 or higher is required")
>              sys.exit(20)
>          if have_session != 256:
>              print("ElbeInitVMSession already exists in tmux.", file=sys.stderr)
> -            print("Try 'elbe initvm attach' to attach to the session.", file=sys.stderr)
> +            print(
> +                "Try 'elbe initvm attach' to attach to the session.",
> +                file=sys.stderr)
>              sys.exit(20)
>  
> -        system( 'TMUX= tmux new-session -d -s ElbeInitVMSession -n initvm "cd \"%s\"; make"' % initvmdir )
> +        system(
> +            'TMUX= tmux new-session -d -s ElbeInitVMSession -n initvm "cd \"%s\"; make"' %
> +            initvmdir)
> +
>  
>  InitVMAction.register(StartBuildAction)
>  
> +
>  class CreateAction(InitVMAction):
>  
>      tag = 'create'
>  
>      def __init__(self, node):
> -        InitVMAction.__init__(self, node, initvmNeeded = False)
> +        InitVMAction.__init__(self, node, initvmNeeded=False)
>  
>      def execute(self, initvmdir, opt, args):
> -        if self.initvm != None:
> +        if self.initvm is not None:
>              print("Initvm already defined.\n")
> -            print("If you want to build in your old initvm, use `elbe initvm submit <xml>`.")
> -            print("If you want to remove your old initvm from libvirt run `virsh undefine initvm`.\n")
> +            print(
> +                "If you want to build in your old initvm, use `elbe initvm submit <xml>`.")
> +            print(
> +                "If you want to remove your old initvm from libvirt run `virsh undefine initvm`.\n")
>              print("Note:")
>              print("\t1) You can reimport your old initvm via `virsh define <file>`")
>              print("\t   where <file> is the corresponding libvirt.xml")
> @@ -195,60 +223,79 @@ class CreateAction(InitVMAction):
>          cdrom = None
>  
>          if len(args) == 1:
> -            if args[0].endswith ('.xml'):
> +            if args[0].endswith('.xml'):
>                  # We have an xml file, use that for elbe init
>                  xmlfile = args[0]
>                  try:
> -                    xml = etree( xmlfile )
> +                    xml = etree(xmlfile)
>                  except ValidationError as e:
>                      print("XML file is invalid: %s" % str(e))
>                  # Use default XML if no initvm was specified
> -                if not xml.has( "initvm" ):
> -                    xmlfile = os.path.join (elbepack.__path__[0], "init/default-init.xml")
> +                if not xml.has("initvm"):
> +                    xmlfile = os.path.join(
> +                        elbepack.__path__[0], "init/default-init.xml")
>  
> -            elif args[0].endswith ('.iso'):
> +            elif args[0].endswith('.iso'):
>                  # We have an iso image, extract xml from there.
> -                tmp = TmpdirFilesystem ()
> -                os.system ('7z x -o%s "%s" source.xml' % (tmp.path, args[0]))
> +                tmp = TmpdirFilesystem()
> +                os.system('7z x -o%s "%s" source.xml' % (tmp.path, args[0]))
>  
>                  print ('', file=sys.stderr)
>  
> -                if not tmp.isfile ('source.xml'):
> -                    print("Iso image does not contain a source.xml file", file=sys.stderr)
> -                    print("This is not supported by 'elbe initvm'", file=sys.stderr)
> +                if not tmp.isfile('source.xml'):
> +                    print(
> +                        "Iso image does not contain a source.xml file",
> +                        file=sys.stderr)
> +                    print(
> +                        "This is not supported by 'elbe initvm'",
> +                        file=sys.stderr)
>                      print("", file=sys.stderr)
>                      print("Exiting !!!", file=sys.stderr)
> -                    sys.exit (20)
> +                    sys.exit(20)
>  
>                  try:
> -                    exml = ElbeXML (tmp.fname ('source.xml'), url_validation=ValidationMode.NO_CHECK)
> +                    exml = ElbeXML(
> +                        tmp.fname('source.xml'),
> +                        url_validation=ValidationMode.NO_CHECK)
>                  except ValidationError as e:
> -                    print("Iso image does contain a source.xml file.", file=sys.stderr)
> -                    print("But that xml does not validate correctly", file=sys.stderr)
> +                    print(
> +                        "Iso image does contain a source.xml file.",
> +                        file=sys.stderr)
> +                    print(
> +                        "But that xml does not validate correctly",
> +                        file=sys.stderr)
>                      print("", file=sys.stderr)
>                      print("Exiting !!!", file=sys.stderr)
>                      print(e)
> -                    sys.exit (20)
> +                    sys.exit(20)
>  
>                  print("Iso Image with valid source.xml detected !")
> -                print("Image was generated using Elbe Version %s" % exml.get_elbe_version ())
> +                print(
> +                    "Image was generated using Elbe Version %s" %
> +                    exml.get_elbe_version())
>  
> -                os.system ('7z x -o%s "%s" elbe-keyring.gpg' % ('/tmp', args[0]))
> +                os.system(
> +                    '7z x -o%s "%s" elbe-keyring.gpg' %
> +                    ('/tmp', args[0]))
>  
> -                if tmp.isfile ('elbe-keyring.gpg'):
> +                if tmp.isfile('elbe-keyring.gpg'):
>                      print("Iso image contains an elbe-keyring")
>  
> -                xmlfile = tmp.fname ('source.xml')
> +                xmlfile = tmp.fname('source.xml')
>                  cdrom = args[0]
>              else:
> -                print("Unknown file ending (use either xml or iso)", file=sys.stderr)
> -                sys.exit (20)
> +                print(
> +                    "Unknown file ending (use either xml or iso)",
> +                    file=sys.stderr)
> +                sys.exit(20)
>          else:
>              # No xml File was specified, build the default elbe-init-with-ssh
> -            xmlfile = os.path.join (elbepack.__path__[0], "init/default-init.xml")
> +            xmlfile = os.path.join(
> +                elbepack.__path__[0],
> +                "init/default-init.xml")
>  
>          try:
> -            init_opts = '';
> +            init_opts = ''
>              if opt.devel:
>                  init_opts += ' --devel'
>  
> @@ -256,9 +303,12 @@ class CreateAction(InitVMAction):
>                  init_opts += ' --nesting'
>  
>              if cdrom:
> -                system ('%s init %s --directory "%s" --cdrom "%s" "%s"' % (elbe_exe, init_opts, initvmdir, cdrom, xmlfile))
> +                system('%s init %s --directory "%s" --cdrom "%s" "%s"' %
> +                       (elbe_exe, init_opts, initvmdir, cdrom, xmlfile))
>              else:
> -                system ('%s init %s --directory "%s" "%s"' % (elbe_exe, init_opts, initvmdir, xmlfile))
> +                system(
> +                    '%s init %s --directory "%s" "%s"' %
> +                    (elbe_exe, init_opts, initvmdir, xmlfile))
>  
>          except CommandError:
>              print("'elbe init' Failed", file=sys.stderr)
> @@ -281,12 +331,12 @@ class CreateAction(InitVMAction):
>          except CommandError:
>              print('Registering initvm in libvirt failed', file=sys.stderr)
>              print('Try `virsh undefine initvm` to delete existing initvm',
> -                    file=sys.stderr)
> +                  file=sys.stderr)
>              sys.exit(20)
>  
>          # Build initvm
>          try:
> -            system ('cd "%s"; make' % (initvmdir))
> +            system('cd "%s"; make' % (initvmdir))
>          except CommandError:
>              print("Building the initvm Failed", file=sys.stderr)
>              print("Giving up", file=sys.stderr)
> @@ -302,24 +352,28 @@ class CreateAction(InitVMAction):
>          if len(args) == 1:
>              # if provided xml file has no initvm section xmlfile is set to a
>              # default initvm XML file. But we need the original file here
> -            if args[0].endswith ('.xml'):
> +            if args[0].endswith('.xml'):
>                  # stop here if no project node was specified
>                  try:
> -                    x = ElbeXML (args[0])
> +                    x = ElbeXML(args[0])
>                  except ValidationError as e:
>                      print("XML file is invalid: %s" % str(e))
>                      sys.exit(20)
>                  if not x.has('project'):
> -                    print("elbe initvm ready: use 'elbe initvm submit myproject.xml' to build a project");
> +                    print(
> +                        "elbe initvm ready: use 'elbe initvm submit myproject.xml' to build a project")
>                      sys.exit(0)
>  
> -                ret, prjdir, err = command_out_stderr ('%s control create_project' % (elbe_exe))
> +                ret, prjdir, err = command_out_stderr(
> +                    '%s control create_project' % (elbe_exe))
>                  xmlfile = args[0]
>              elif cdrom is not None:
> -                ret, prjdir, err = command_out_stderr ('%s control create_project' % (elbe_exe))
> -                xmlfile = tmp.fname ('source.xml')
> +                ret, prjdir, err = command_out_stderr(
> +                    '%s control create_project' % (elbe_exe))
> +                xmlfile = tmp.fname('source.xml')
>              else:
> -                ret, prjdir, err = command_out_stderr ('%s control create_project' % (elbe_exe))
> +                ret, prjdir, err = command_out_stderr(
> +                    '%s control create_project' % (elbe_exe))
>  
>              if ret != 0:
>                  print("elbe control create_project failed.", file=sys.stderr)
> @@ -330,7 +384,7 @@ class CreateAction(InitVMAction):
>              prjdir = prjdir.strip()
>  
>              cmd = '%s control set_xml %s %s' % (elbe_exe, prjdir, xmlfile)
> -            ret, msg, err = command_out_stderr (cmd)
> +            ret, msg, err = command_out_stderr(cmd)
>              if ret != 0:
>                  print("elbe control set_xml failed.", file=sys.stderr)
>                  print(err, file=sys.stderr)
> @@ -338,13 +392,15 @@ class CreateAction(InitVMAction):
>                  sys.exit(20)
>  
>              if opt.writeproject:
> -                with open (opt.writeproject, "w") as wpf:
> -                    wpf.write (prjdir)
> +                with open(opt.writeproject, "w") as wpf:
> +                    wpf.write(prjdir)
>  
>              if cdrom is not None:
>                  print("Uploading CDROM. This might take a while")
>                  try:
> -                    system ('%s control set_cdrom "%s" "%s"' % (elbe_exe, prjdir, cdrom) )
> +                    system(
> +                        '%s control set_cdrom "%s" "%s"' %
> +                        (elbe_exe, prjdir, cdrom))
>                  except CommandError:
>                      print("elbe control set_cdrom Failed", file=sys.stderr)
>                      print("Giving up", file=sys.stderr)
> @@ -361,7 +417,9 @@ class CreateAction(InitVMAction):
>                  build_opts += '--skip-pbuilder '
>  
>              try:
> -                system ('%s control build "%s" %s' % (elbe_exe, prjdir, build_opts) )
> +                system(
> +                    '%s control build "%s" %s' %
> +                    (elbe_exe, prjdir, build_opts))
>              except CommandError:
>                  print("elbe control build Failed", file=sys.stderr)
>                  print("Giving up", file=sys.stderr)
> @@ -370,7 +428,7 @@ class CreateAction(InitVMAction):
>              print ("Build started, waiting till it finishes")
>  
>              try:
> -                system ('%s control wait_busy "%s"' % (elbe_exe, prjdir) )
> +                system('%s control wait_busy "%s"' % (elbe_exe, prjdir))
>              except CommandError:
>                  print("elbe control wait_busy Failed", file=sys.stderr)
>                  print("Giving up", file=sys.stderr)
> @@ -380,12 +438,18 @@ class CreateAction(InitVMAction):
>              print("Build finished !")
>              print("")
>              try:
> -                system ('%s control dump_file "%s" validation.txt' % (elbe_exe, prjdir) )
> +                system(
> +                    '%s control dump_file "%s" validation.txt' %
> +                    (elbe_exe, prjdir))
>              except CommandError:
> -                print("Project failed to generate validation.txt", file=sys.stderr)
> +                print(
> +                    "Project failed to generate validation.txt",
> +                    file=sys.stderr)
>                  print("Getting log.txt", file=sys.stderr)
>                  try:
> -                    system ('%s control dump_file "%s" log.txt' % (elbe_exe, prjdir) )
> +                    system(
> +                        '%s control dump_file "%s" log.txt' %
> +                        (elbe_exe, prjdir))
>                  except CommandError:
>  
>                      print("Failed to dump log.txt", file=sys.stderr)
> @@ -397,24 +461,26 @@ class CreateAction(InitVMAction):
>                  print("Listing available files:")
>                  print("")
>                  try:
> -                    system ('%s control get_files "%s"' % (elbe_exe, prjdir) )
> +                    system('%s control get_files "%s"' % (elbe_exe, prjdir))
>                  except CommandError:
>                      print("elbe control Failed", file=sys.stderr)
>                      print("Giving up", file=sys.stderr)
>                      sys.exit(20)
>  
>                  print("")
> -                print('Get Files with: elbe control get_file "%s" <filename>' % prjdir)
> +                print(
> +                    'Get Files with: elbe control get_file "%s" <filename>' %
> +                    prjdir)
>              else:
>                  print ("")
>                  print ("Getting generated Files")
>                  print ("")
>  
> -                ensure_outdir (wdfs, opt)
> +                ensure_outdir(wdfs, opt)
>  
>                  try:
> -                    system ('%s control get_files --output "%s" "%s"' % (
> -                            elbe_exe, opt.outdir, prjdir ))
> +                    system('%s control get_files --output "%s" "%s"' % (
> +                        elbe_exe, opt.outdir, prjdir))
>                  except CommandError:
>                      print("elbe control get_files Failed", file=sys.stderr)
>                      print("Giving up", file=sys.stderr)
> @@ -422,15 +488,17 @@ class CreateAction(InitVMAction):
>  
>                  if not opt.keep_files:
>                      try:
> -                        system ('%s control del_project "%s"' % (
> +                        system('%s control del_project "%s"' % (
>                              elbe_exe, prjdir))
>                      except CommandError:
>                          print ("remove project from initvm failed",
> -                                file=sys.stderr)
> +                               file=sys.stderr)
>                          sys.exit(20)
>  
> +
>  InitVMAction.register(CreateAction)
>  
> +
>  class SubmitAction(InitVMAction):
>  
>      tag = 'submit'
> @@ -440,7 +508,7 @@ class SubmitAction(InitVMAction):
>  
>      def execute(self, initvmdir, opt, args):
>          try:
> -            system ('%s initvm ensure' % elbe_exe)
> +            system('%s initvm ensure' % elbe_exe)
>          except CommandError:
>              print("Starting the initvm Failed", file=sys.stderr)
>              print("Giving up", file=sys.stderr)
> @@ -450,47 +518,61 @@ class SubmitAction(InitVMAction):
>          cdrom = None
>  
>          if len(args) == 1:
> -            if args[0].endswith ('.xml'):
> +            if args[0].endswith('.xml'):
>                  # We have an xml file, use that for elbe init
>                  xmlfile = args[0]
>                  url_validation = ''
> -            elif args[0].endswith ('.iso'):
> +            elif args[0].endswith('.iso'):
>                  # We have an iso image, extract xml from there.
> -                tmp = TmpdirFilesystem ()
> -                os.system ('7z x -o%s "%s" source.xml' % (tmp.path, args[0]))
> +                tmp = TmpdirFilesystem()
> +                os.system('7z x -o%s "%s" source.xml' % (tmp.path, args[0]))
>  
>                  print("", file=sys.stderr)
>  
> -                if not tmp.isfile ('source.xml'):
> -                    print("Iso image does not contain a source.xml file", file=sys.stderr)
> -                    print("This is not supported by 'elbe initvm'", file=sys.stderr)
> +                if not tmp.isfile('source.xml'):
> +                    print(
> +                        "Iso image does not contain a source.xml file",
> +                        file=sys.stderr)
> +                    print(
> +                        "This is not supported by 'elbe initvm'",
> +                        file=sys.stderr)
>                      print("", file=sys.stderr)
>                      print("Exiting !!!", file=sys.stderr)
> -                    sys.exit (20)
> +                    sys.exit(20)
>  
>                  try:
> -                    exml = ElbeXML (tmp.fname ('source.xml'), url_validation=ValidationMode.NO_CHECK)
> +                    exml = ElbeXML(
> +                        tmp.fname('source.xml'),
> +                        url_validation=ValidationMode.NO_CHECK)
>                  except ValidationError as e:
> -                    print("Iso image does contain a source.xml file.", file=sys.stderr)
> -                    print("But that xml does not validate correctly", file=sys.stderr)
> +                    print(
> +                        "Iso image does contain a source.xml file.",
> +                        file=sys.stderr)
> +                    print(
> +                        "But that xml does not validate correctly",
> +                        file=sys.stderr)
>                      print("", file=sys.stderr)
>                      print("Exiting !!!", file=sys.stderr)
>                      print(e)
> -                    sys.exit (20)
> +                    sys.exit(20)
>  
>                  print("Iso Image with valid source.xml detected !")
> -                print("Image was generated using Elbe Version %s" % exml.get_elbe_version ())
> +                print(
> +                    "Image was generated using Elbe Version %s" %
> +                    exml.get_elbe_version())
>  
> -                xmlfile = tmp.fname ('source.xml')
> +                xmlfile = tmp.fname('source.xml')
>                  url_validation = '--skip-urlcheck'
>                  cdrom = args[0]
>              else:
> -                print("Unknown file ending (use either xml or iso)", file=sys.stderr)
> -                sys.exit (20)
> +                print(
> +                    "Unknown file ending (use either xml or iso)",
> +                    file=sys.stderr)
> +                sys.exit(20)
>  
>              outxml = NamedTemporaryFile(prefix='elbe', suffix='xml')
>              cmd = '%s preprocess -o %s %s' % (elbe_exe, outxml.name, xmlfile)
> -            ret, msg, err = command_out_stderr (cmd)
> +            ret, msg, err = command_out_stderr(cmd)
>              if ret != 0:
>                  print("elbe preprocess failed.", file=sys.stderr)
>                  print(err, file=sys.stderr)
> @@ -498,7 +580,8 @@ class SubmitAction(InitVMAction):
>                  sys.exit(20)
>              xmlfile = outxml.name
>  
> -            ret, prjdir, err = command_out_stderr ('%s control create_project' % (elbe_exe))
> +            ret, prjdir, err = command_out_stderr(
> +                '%s control create_project' % (elbe_exe))
>              if ret != 0:
>                  print("elbe control create_project failed.", file=sys.stderr)
>                  print(err, file=sys.stderr)
> @@ -508,7 +591,7 @@ class SubmitAction(InitVMAction):
>              prjdir = prjdir.strip()
>  
>              cmd = '%s control set_xml %s %s' % (elbe_exe, prjdir, xmlfile)
> -            ret, msg, err = command_out_stderr (cmd)
> +            ret, msg, err = command_out_stderr(cmd)
>              if ret != 0:
>                  print("elbe control set_xml failed2", file=sys.stderr)
>                  print(err, file=sys.stderr)
> @@ -516,13 +599,15 @@ class SubmitAction(InitVMAction):
>                  sys.exit(20)
>  
>              if opt.writeproject:
> -                with open (opt.writeproject, "w") as wpf:
> -                    wpf.write (prjdir)
> +                with open(opt.writeproject, "w") as wpf:
> +                    wpf.write(prjdir)
>  
>              if cdrom is not None:
>                  print("Uploading CDROM. This might take a while")
>                  try:
> -                    system ('%s control set_cdrom "%s" "%s"' % (elbe_exe, prjdir, cdrom) )
> +                    system(
> +                        '%s control set_cdrom "%s" "%s"' %
> +                        (elbe_exe, prjdir, cdrom))
>                  except CommandError:
>                      print("elbe control set_cdrom Failed", file=sys.stderr)
>                      print("Giving up", file=sys.stderr)
> @@ -539,7 +624,9 @@ class SubmitAction(InitVMAction):
>                  build_opts += '--skip-pbuilder '
>  
>              try:
> -                system ('%s control build "%s" %s' % (elbe_exe, prjdir, build_opts) )
> +                system(
> +                    '%s control build "%s" %s' %
> +                    (elbe_exe, prjdir, build_opts))
>              except CommandError:
>                  print("elbe control build Failed", file=sys.stderr)
>                  print("Giving up", file=sys.stderr)
> @@ -548,7 +635,7 @@ class SubmitAction(InitVMAction):
>              print("Build started, waiting till it finishes")
>  
>              try:
> -                system ('%s control wait_busy "%s"' % (elbe_exe, prjdir) )
> +                system('%s control wait_busy "%s"' % (elbe_exe, prjdir))
>              except CommandError:
>                  print("elbe control wait_busy Failed", file=sys.stderr)
>                  print("Giving up", file=sys.stderr)
> @@ -558,12 +645,18 @@ class SubmitAction(InitVMAction):
>              print("Build finished !")
>              print("")
>              try:
> -                system ('%s control dump_file "%s" validation.txt' % (elbe_exe, prjdir) )
> +                system(
> +                    '%s control dump_file "%s" validation.txt' %
> +                    (elbe_exe, prjdir))
>              except CommandError:
> -                print("Project failed to generate validation.txt", file=sys.stderr)
> +                print(
> +                    "Project failed to generate validation.txt",
> +                    file=sys.stderr)
>                  print("Getting log.txt", file=sys.stderr)
>                  try:
> -                    system ('%s control dump_file "%s" log.txt' % (elbe_exe, prjdir) )
> +                    system(
> +                        '%s control dump_file "%s" log.txt' %
> +                        (elbe_exe, prjdir))
>                  except CommandError:
>  
>                      print("Failed to dump log.txt", file=sys.stderr)
> @@ -575,24 +668,26 @@ class SubmitAction(InitVMAction):
>                  print("Listing available files:")
>                  print("")
>                  try:
> -                    system ('%s control get_files "%s"' % (elbe_exe, prjdir) )
> +                    system('%s control get_files "%s"' % (elbe_exe, prjdir))
>                  except CommandError:
>                      print("elbe control get_files Failed", file=sys.stderr)
>                      print("Giving up", file=sys.stderr)
>                      sys.exit(20)
>  
>                  print("")
> -                print('Get Files with: elbe control get_file "%s" <filename>' % prjdir)
> +                print(
> +                    'Get Files with: elbe control get_file "%s" <filename>' %
> +                    prjdir)
>              else:
>                  print("")
>                  print("Getting generated Files")
>                  print("")
>  
> -                ensure_outdir (wdfs, opt)
> +                ensure_outdir(wdfs, opt)
>  
>                  try:
> -                    system ('%s control get_files --output "%s" "%s"' % (
> -                            elbe_exe, opt.outdir, prjdir ))
> +                    system('%s control get_files --output "%s" "%s"' % (
> +                        elbe_exe, opt.outdir, prjdir))
>                  except CommandError:
>                      print("elbe control get_files Failed", file=sys.stderr)
>                      print("Giving up", file=sys.stderr)
> @@ -600,12 +695,12 @@ class SubmitAction(InitVMAction):
>  
>                  if not opt.keep_files:
>                      try:
> -                        system ('%s control del_project "%s"' % (
> +                        system('%s control del_project "%s"' % (
>                              elbe_exe, prjdir))
>                      except CommandError:
>                          print("remove project from initvm failed",
> -                                file=sys.stderr)
> +                              file=sys.stderr)
>                          sys.exit(20)
>  
> -InitVMAction.register(SubmitAction)
>  
> +InitVMAction.register(SubmitAction)
> diff --git a/elbepack/kvm.py b/elbepack/kvm.py
> index dc12613e..85e0803e 100644
> --- a/elbepack/kvm.py
> +++ b/elbepack/kvm.py
> @@ -11,13 +11,17 @@ kvm_exe_list = [
>      '/usr/bin/qemu-kvm',
>      '/usr/libexec/qemu-kvm',
>      '/usr/bin/qemu-system-x86_64'
> -    ]
> +]
>  
> -def find_kvm_exe ():
> +
> +def find_kvm_exe():
>      for fname in kvm_exe_list:
>          if os.path.isfile(fname) and os.access(fname, os.X_OK):
>              # determine kvm version
> -            cmd = subprocess.Popen (fname + ' --version', shell=True, stdout=subprocess.PIPE)
> +            cmd = subprocess.Popen(
> +                fname + ' --version',
> +                shell=True,
> +                stdout=subprocess.PIPE)
>              for line in cmd.stdout:
>                  if "version" in line:
>                      version = line.split()[3].split('(')[0].strip()
> diff --git a/elbepack/licencexml.py b/elbepack/licencexml.py
> index 32f2fcbf..21990e9e 100644
> --- a/elbepack/licencexml.py
> +++ b/elbepack/licencexml.py
> @@ -15,51 +15,54 @@ warnings.simplefilter('error')
>  
>  remove_re = re.compile(u'[\x00-\x08\x0B-\x0C\x0E-\x1F\x7F]')
>  
> -def do_heuristics (fp):
> +
> +def do_heuristics(fp):
>      c = Copyright()
>      num_licenses = 0
>      for l in fp.readlines():
> -        if l.startswith ("License:"):
> +        if l.startswith("License:"):
>              num_licenses += 1
> -            _, v = l.split (":",1)
> +            _, v = l.split(":", 1)
>              data = {"License": v.strip()}
>              lic_para = LicenseParagraph(data)
> -            c.add_license_paragraph (lic_para)
> +            c.add_license_paragraph(lic_para)
>  
>      if num_licenses > 0:
>          return c
>      else:
>          return None
>  
> -def get_heuristics_license_list (c):
> +
> +def get_heuristics_license_list(c):
>      licenses = []
>      for cc in c.all_license_paragraphs():
> -        licenses.append (cc.license.synopsis)
> +        licenses.append(cc.license.synopsis)
>  
>      return set(licenses)
>  
> +
>  class copyright_xml (object):
> -    def __init__ (self):
> -        self.outxml = etree (None)
> +    def __init__(self):
> +        self.outxml = etree(None)
>          self.pkglist = self.outxml.setroot('pkglicenses')
>  
> -    def add_copyright_file (self, pkg_name, copyright):
> +    def add_copyright_file(self, pkg_name, copyright):
>  
>          # remove illegal characters from copyright
> -        copyright, _ = remove_re.subn ('', copyright)
> +        copyright, _ = remove_re.subn('', copyright)
>  
>          xmlpkg = self.pkglist.append('pkglicense')
>          xmlpkg.et.attrib['name'] = pkg_name
> -        txtnode = xmlpkg.append ('text')
> +        txtnode = xmlpkg.append('text')
>          txtnode.et.text = copyright
>  
> -        bytesio = io.StringIO (unicode(txtnode.et.text))
> +        bytesio = io.StringIO(unicode(txtnode.et.text))
>          try:
> -            c = Copyright (bytesio)
> +            c = Copyright(bytesio)
>              files = []
>  
>              for cc in c.all_files_paragraphs():
> -                files.append ( (cc.files, cc.license.synopsis, cc.copyright) )
> +                files.append((cc.files, cc.license.synopsis, cc.copyright))
>  
>              xmlpkg.append('machinereadable')
>              xmllic = xmlpkg.append('debian_licenses')
> @@ -68,20 +71,20 @@ class copyright_xml (object):
>                  if f[1] in seen:
>                      continue
>                  seen.append(f[1])
> -                ll = xmllic.append ('license')
> +                ll = xmllic.append('license')
>                  ll.et.text = f[1]
>  
>              detailed = xmlpkg.append('detailed')
>              for f in files:
>                  ff = detailed.append('files')
>                  for g in f[0]:
> -                    gg = ff.append ('glob')
> +                    gg = ff.append('glob')
>                      gg.et.text = g
>  
> -                ll = ff.append ('license')
> +                ll = ff.append('license')
>                  ll.et.text = f[1]
>  
> -                cc = ff.append ('copyright')
> +                cc = ff.append('copyright')
>                  cc.et.text = f[2]
>  
>              return
> @@ -92,10 +95,10 @@ class copyright_xml (object):
>          bytesio.seek(0)
>          #textio = io.TextIOWrapper (bytesio, encoding='iso-8859-1')
>  
> -        c = do_heuristics (bytesio)
> +        c = do_heuristics(bytesio)
>  
> -        if not c is None:
> -            lics = get_heuristics_license_list (c)
> +        if c is not None:
> +            lics = get_heuristics_license_list(c)
>              xmlpkg.append('heuristics')
>              xmllic = xmlpkg.append('debian_licenses')
>              for i in lics:
> @@ -109,10 +112,4 @@ class copyright_xml (object):
>          return
>  
>      def write(self, fname):
> -        self.outxml.write (fname, encoding="iso-8859-1")
> -
> -        
> -
> -
> -
> -
> +        self.outxml.write(fname, encoding="iso-8859-1")
> diff --git a/elbepack/pbuilder.py b/elbepack/pbuilder.py
> index ce5dd93a..ce3e27db 100644
> --- a/elbepack/pbuilder.py
> +++ b/elbepack/pbuilder.py
> @@ -7,46 +7,65 @@ import os
>  import urllib2
>  
>  
> -
> -def pbuilder_ensure_chroot (builddir):
> +def pbuilder_ensure_chroot(builddir):
>      pass
>  
>  
> -def pbuilder_write_config (builddir, xml, log):
> -    distname = xml.prj.text ('suite')
> -    pbuilderrc_fname = os.path.join (builddir, "pbuilderrc")
> -    fp = open (pbuilderrc_fname, "w")
> -
> -    fp.write ('#!/bin/sh\n')
> -    fp.write ('set -e\n')
> -    fp.write ('MIRRORSITE="%s"\n' % xml.get_primary_mirror(False))
> -    fp.write ('OTHERMIRROR="deb http://127.0.0.1:8080%s/repo %s main"\n' % (builddir, distname))
> -    fp.write ('BASETGZ="%s"\n' % os.path.join (builddir, 'pbuilder', 'base.tgz'))
> -
> -    fp.write ('DISTRIBUTION="%s"\n' % distname)
> -
> -    fp.write ('BUILDRESULT="%s"\n' % os.path.join (builddir, 'pbuilder', 'result'))
> -    fp.write ('APTCACHE="%s"\n' % os.path.join (builddir, 'pbuilder', 'aptcache'))
> -    fp.write ('HOOKDIR="%s"\n' % os.path.join (builddir, 'pbuilder', 'hooks.d'))
> -    fp.write ('PATH="/usr/share/elbe/qemu-elbe:$PATH"\n')
> -
> -    if (xml.text( "project/arch", key="arch" ) != 'amd64'):
> -        fp.write ('ARCHITECTURE="%s"\n' % xml.text ("project/buildimage/arch", key="arch"))
> -        fp.write ('DEBOOTSTRAP="qemu-debootstrap"\n')
> -        fp.write ('DEBOOTSTRAPOPTS=("${DEBOOTSTRAPOPTS[@]}" "--arch=$ARCHITECTURE")\n')
> +def pbuilder_write_config(builddir, xml, log):
> +    distname = xml.prj.text('suite')
> +    pbuilderrc_fname = os.path.join(builddir, "pbuilderrc")
> +    fp = open(pbuilderrc_fname, "w")
> +
> +    fp.write('#!/bin/sh\n')
> +    fp.write('set -e\n')
> +    fp.write('MIRRORSITE="%s"\n' % xml.get_primary_mirror(False))
> +    fp.write(
> +        'OTHERMIRROR="deb http://127.0.0.1:8080%s/repo %s main"\n' %
> +        (builddir, distname))
> +    fp.write('BASETGZ="%s"\n' % os.path.join(builddir, 'pbuilder', 'base.tgz'))
> +
> +    fp.write('DISTRIBUTION="%s"\n' % distname)
> +
> +    fp.write(
> +        'BUILDRESULT="%s"\n' %
> +        os.path.join(
> +            builddir,
> +            'pbuilder',
> +            'result'))
> +    fp.write(
> +        'APTCACHE="%s"\n' %
> +        os.path.join(
> +            builddir,
> +            'pbuilder',
> +            'aptcache'))
> +    fp.write('HOOKDIR="%s"\n' % os.path.join(builddir, 'pbuilder', 'hooks.d'))
> +    fp.write('PATH="/usr/share/elbe/qemu-elbe:$PATH"\n')
> +
> +    if (xml.text("project/arch", key="arch") != 'amd64'):
> +        fp.write(
> +            'ARCHITECTURE="%s"\n' %
> +            xml.text(
> +                "project/buildimage/arch",
> +                key="arch"))
> +        fp.write('DEBOOTSTRAP="qemu-debootstrap"\n')
> +        fp.write(
> +            'DEBOOTSTRAPOPTS=("${DEBOOTSTRAPOPTS[@]}" "--arch=$ARCHITECTURE")\n')
>  
>      if xml.prj.has('noauth'):
> -        fp.write ('DEBOOTSTRAPOPTS=("${DEBOOTSTRAPOPTS[@]}" "--no-check-gpg")\n')
> -        fp.write ('export ALLOWUNTRUSTED="yes"\n')
> +        fp.write(
> +            'DEBOOTSTRAPOPTS=("${DEBOOTSTRAPOPTS[@]}" "--no-check-gpg")\n')
> +        fp.write('export ALLOWUNTRUSTED="yes"\n')
>  
>      # aptitude segfaults with sid armhf changeroots, great! :)
>      # link: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=834990
>      if distname == 'stretch':
> -        fp.write ('PBUILDERSATISFYDEPENDSCMD=/usr/lib/pbuilder/pbuilder-satisfydepends-experimental\n')
> +        fp.write(
> +            'PBUILDERSATISFYDEPENDSCMD=/usr/lib/pbuilder/pbuilder-satisfydepends-experimental\n')
>  
>      fp.close()
>  
> -def pbuilder_write_apt_conf (builddir, xml):
> +
> +def pbuilder_write_apt_conf(builddir, xml):
>  
>      # writing apt.conf is only necessary, when we have noauth
>      # return when its not set
> @@ -55,17 +74,18 @@ def pbuilder_write_apt_conf (builddir, xml):
>  
>      # noauth is set
>      # create pbuilder/aptconfdir/apt.conf.d/16allowuntrusted
> -    aptconf_dir = os.path.join (builddir, "aptconfdir", "apt.conf.d")
> -    fp = open (os.path.join (aptconf_dir, "16allowuntrusted"), "w")
> +    aptconf_dir = os.path.join(builddir, "aptconfdir", "apt.conf.d")
> +    fp = open(os.path.join(aptconf_dir, "16allowuntrusted"), "w")
>  
> -    # Make apt-get use --force-yes which is not specified by pbuilder-satisfy-depends
> -    fp.write ('APT::Get::force-yes "true";\n')
> +    # Make apt-get use --force-yes which is not specified by
> +    # pbuilder-satisfy-depends
> +    fp.write('APT::Get::force-yes "true";\n')
>  
>      # Also for safety add this:
> -    fp.write ('APT::Get::AllowUnauthenticated "true";\n')
> +    fp.write('APT::Get::AllowUnauthenticated "true";\n')
>  
>      # Make aptitude install untrusted packages without asking
> -    fp.write ('Aptitude::CmdLine::Ignore-Trust-Violations "true";\n')
> +    fp.write('Aptitude::CmdLine::Ignore-Trust-Violations "true";\n')
>      fp.close()
>  
>  
> @@ -81,11 +101,12 @@ def mirror_script_add_key(mirror, key_url):
>  
>      return mirror
>  
> -def pbuilder_write_repo_hook (builddir, xml):
>  
> -    pbuilder_hook_dir = os.path.join (builddir, "pbuilder", "hooks.d")
> +def pbuilder_write_repo_hook(builddir, xml):
> +
> +    pbuilder_hook_dir = os.path.join(builddir, "pbuilder", "hooks.d")
>  
> -    fp = open (os.path.join (pbuilder_hook_dir, "D10elbe_apt_sources"), "w")
> +    fp = open(os.path.join(pbuilder_hook_dir, "D10elbe_apt_sources"), "w")
>  
>      if xml.prj is None:
>          return "# No Project"
> @@ -95,28 +116,34 @@ def pbuilder_write_repo_hook (builddir, xml):
>  
>      mirror = "#!/bin/sh\n"
>  
> -    mirror += 'echo "deb http://127.0.0.1:8080' + builddir + '/repo ' + xml.prj.text("suite") + ' main" > /etc/apt/sources.list\n'
> +    mirror += 'echo "deb http://127.0.0.1:8080' + builddir + '/repo ' + \
> +        xml.prj.text("suite") + ' main" > /etc/apt/sources.list\n'
>  
> -    mirror = mirror_script_add_key (mirror, 'http://127.0.0.1:8080' + builddir + '/repo/repo.pub')
> +    mirror = mirror_script_add_key(
> +        mirror,
> +        'http://127.0.0.1:8080' +
> +        builddir +
> +        '/repo/repo.pub')
>  
>      if xml.prj.has("mirror/primary_host"):
> -        mirror += 'echo "deb ' + xml.get_primary_mirror (None) + ' ' + xml.prj.text("suite") + ' main" >> /etc/apt/sources.list\n'
> +        mirror += 'echo "deb ' + xml.get_primary_mirror(None) + ' ' + xml.prj.text(
> +            "suite") + ' main" >> /etc/apt/sources.list\n'
>  
>          if xml.prj.has("mirror/url-list"):
>              for url in xml.prj.node("mirror/url-list"):
>                  if url.has("binary"):
> -                    mirror += 'echo "deb ' + url.text("binary").strip() + '" >> /etc/apt/sources.list\n'
> +                    mirror += 'echo "deb ' + \
> +                        url.text("binary").strip() + '" >> /etc/apt/sources.list\n'
>                  if url.has("key"):
>                      key_url = url.text("key").strip()
> -                    mirror = mirror_script_add_key (mirror, key_url)
> -
> +                    mirror = mirror_script_add_key(mirror, key_url)
>  
>      if xml.prj.has("mirror/cdrom"):
> -        mirror += 'echo "deb copy:///cdrom/targetrepo %s main added" >> /etc/apt/sources.list' % (xml.prj.text("suite"))
> +        mirror += 'echo "deb copy:///cdrom/targetrepo %s main added" >> /etc/apt/sources.list' % (
> +            xml.prj.text("suite"))
>  
>      mirror += 'apt-get update\n'
>      mirror = mirror.replace("LOCALMACHINE", "10.0.2.2")
>  
> -    fp.write (mirror)
> +    fp.write(mirror)
>      fp.close()
> -
> diff --git a/elbepack/pbuilderaction.py b/elbepack/pbuilderaction.py
> index b3da68fb..64b77ce0 100644
> --- a/elbepack/pbuilderaction.py
> +++ b/elbepack/pbuilderaction.py
> @@ -7,41 +7,52 @@ from __future__ import print_function
>  
>  from elbepack.directories import elbe_exe
>  from elbepack.shellhelper import CommandError, system, command_out_stderr
> -from elbepack.filesystem  import wdfs, TmpdirFilesystem
> +from elbepack.filesystem import wdfs, TmpdirFilesystem
>  
>  import sys
>  import os
>  
> -cmd_exists = lambda x: any(os.access(os.path.join(path, x), os.X_OK) for path in os.environ["PATH"].split(os.pathsep))
> +
> +def cmd_exists(x): return any(os.access(os.path.join(path, x), os.X_OK)
> +                              for path in os.environ["PATH"].split(os.pathsep))
>  
>  # Create download directory with timestamp,
>  # if necessary
> -def ensure_outdir (wdfs, opt):
> +
> +
> +def ensure_outdir(wdfs, opt):
>      if opt.outdir is None:
>          opt.outdir = ".."
>  
>      print("Saving generated Files to %s" % opt.outdir)
>  
> +
>  class PBuilderError(Exception):
>      def __init__(self, str):
>          Exception.__init__(self, str)
>  
> +
>  class PBuilderAction(object):
>      actiondict = {}
> +
>      @classmethod
>      def register(cls, action):
>          cls.actiondict[action.tag] = action
> +
>      @classmethod
>      def print_actions(cls):
>          print("available subcommands are:", file=sys.stderr)
>          for a in cls.actiondict:
>              print("   %s" % a, file=sys.stderr)
> +
>      def __new__(cls, node):
>          action = cls.actiondict[node]
>          return object.__new__(action)
> +
>      def __init__(self, node):
>          self.node = node
>  
> +
>  class CreateAction(PBuilderAction):
>  
>      tag = 'create'
> @@ -50,10 +61,11 @@ class CreateAction(PBuilderAction):
>          PBuilderAction.__init__(self, node)
>  
>      def execute(self, opt, args):
> -        tmp = TmpdirFilesystem ()
> +        tmp = TmpdirFilesystem()
>  
>          if opt.xmlfile:
> -            ret, prjdir, err = command_out_stderr ('%s control create_project' % (elbe_exe))
> +            ret, prjdir, err = command_out_stderr(
> +                '%s control create_project' % (elbe_exe))
>              if ret != 0:
>                  print("elbe control create_project failed.", file=sys.stderr)
>                  print(err, file=sys.stderr)
> @@ -61,7 +73,9 @@ class CreateAction(PBuilderAction):
>                  sys.exit(20)
>  
>              prjdir = prjdir.strip()
> -            ret, msg, err = command_out_stderr ('%s control set_xml "%s" "%s"' % (elbe_exe, prjdir, opt.xmlfile))
> +            ret, msg, err = command_out_stderr(
> +                '%s control set_xml "%s" "%s"' %
> +                (elbe_exe, prjdir, opt.xmlfile))
>  
>              if ret != 0:
>                  print("elbe control set_xml failed.", file=sys.stderr)
> @@ -70,8 +84,8 @@ class CreateAction(PBuilderAction):
>                  sys.exit(20)
>  
>              if opt.writeproject:
> -                wpf = open (opt.writeproject, "w")
> -                wpf.write (prjdir)
> +                wpf = open(opt.writeproject, "w")
> +                wpf.write(prjdir)
>                  wpf.close()
>  
>          elif opt.project:
> @@ -80,18 +94,17 @@ class CreateAction(PBuilderAction):
>              print("you need to specify --project option", file=sys.stderr)
>              sys.exit(20)
>  
> -
>          print("Creating pbuilder")
>  
>          try:
> -            system ('%s control build_pbuilder "%s"' % (elbe_exe, prjdir))
> +            system('%s control build_pbuilder "%s"' % (elbe_exe, prjdir))
>          except CommandError:
>              print("elbe control build_pbuilder Failed", file=sys.stderr)
>              print("Giving up", file=sys.stderr)
>              sys.exit(20)
>  
>          try:
> -            system ('%s control wait_busy "%s"' % (elbe_exe, prjdir) )
> +            system('%s control wait_busy "%s"' % (elbe_exe, prjdir))
>          except CommandError:
>              print("elbe control wait_busy Failed", file=sys.stderr)
>              print("Giving up", file=sys.stderr)
> @@ -101,8 +114,10 @@ class CreateAction(PBuilderAction):
>          print("Building Pbuilder finished !")
>          print("")
>  
> +
>  PBuilderAction.register(CreateAction)
>  
> +
>  class UpdateAction(PBuilderAction):
>  
>      tag = 'update'
> @@ -121,7 +136,7 @@ class UpdateAction(PBuilderAction):
>          print("Updating pbuilder")
>  
>          try:
> -            system ('%s control update_pbuilder "%s"' % (elbe_exe, prjdir))
> +            system('%s control update_pbuilder "%s"' % (elbe_exe, prjdir))
>          except CommandError:
>              print("elbe control update_pbuilder Failed", file=sys.stderr)
>              print("Giving up", file=sys.stderr)
> @@ -131,6 +146,7 @@ class UpdateAction(PBuilderAction):
>          print("Updating Pbuilder finished !")
>          print("")
>  
> +
>  PBuilderAction.register(CreateAction)
>  
>  
> @@ -142,10 +158,12 @@ class BuildAction(PBuilderAction):
>          PBuilderAction.__init__(self, node)
>  
>      def execute(self, opt, args):
> -        tmp = TmpdirFilesystem ()
> +        tmp = TmpdirFilesystem()
>  
>          if opt.xmlfile:
> -            ret, prjdir, err = command_out_stderr ('%s control create_project --retries 60 "%s"' % (elbe_exe, opt.xmlfile))
> +            ret, prjdir, err = command_out_stderr(
> +                '%s control create_project --retries 60 "%s"' %
> +                (elbe_exe, opt.xmlfile))
>              if ret != 0:
>                  print("elbe control create_project failed.", file=sys.stderr)
>                  print(err, file=sys.stderr)
> @@ -155,14 +173,14 @@ class BuildAction(PBuilderAction):
>              prjdir = prjdir.strip()
>  
>              try:
> -                system ('%s control build_pbuilder "%s"' % (elbe_exe, prjdir))
> +                system('%s control build_pbuilder "%s"' % (elbe_exe, prjdir))
>              except CommandError:
>                  print("elbe control build_pbuilder Failed", file=sys.stderr)
>                  print("Giving up", file=sys.stderr)
>                  sys.exit(20)
>  
>              try:
> -                system ('%s control wait_busy "%s"' % (elbe_exe, prjdir) )
> +                system('%s control wait_busy "%s"' % (elbe_exe, prjdir))
>              except CommandError:
>                  print("elbe control wait_busy Failed", file=sys.stderr)
>                  print("Giving up", file=sys.stderr)
> @@ -173,16 +191,18 @@ class BuildAction(PBuilderAction):
>              print("")
>          elif opt.project:
>              prjdir = opt.project
> -            system ('%s control rm_log %s' % (elbe_exe, prjdir))
> +            system('%s control rm_log %s' % (elbe_exe, prjdir))
>          else:
> -            print("you need to specify --project or --xmlfile option", file=sys.stderr)
> +            print(
> +                "you need to specify --project or --xmlfile option",
> +                file=sys.stderr)
>              sys.exit(20)
>  
>          print("")
>          print("Packing Source into tmp archive")
>          print("")
>          try:
> -            system ('tar cfz "%s" .' % (tmp.fname ("pdebuild.tar.gz")))
> +            system('tar cfz "%s" .' % (tmp.fname("pdebuild.tar.gz")))
>          except CommandError:
>              print("tar Failed", file=sys.stderr)
>              print("Giving up", file=sys.stderr)
> @@ -193,7 +213,9 @@ class BuildAction(PBuilderAction):
>              print("Pushing orig file '%s' into pbuilder" % of)
>              print("")
>              try:
> -                system ('%s control set_orig "%s" "%s"' % (elbe_exe, prjdir, of))
> +                system(
> +                    '%s control set_orig "%s" "%s"' %
> +                    (elbe_exe, prjdir, of))
>              except CommandError:
>                  print("elbe control set_orig Failed", file=sys.stderr)
>                  print("Giving up", file=sys.stderr)
> @@ -204,13 +226,14 @@ class BuildAction(PBuilderAction):
>          print("")
>  
>          try:
> -            system ('%s control set_pdebuild "%s" "%s"' % (elbe_exe, prjdir, tmp.fname ("pdebuild.tar.gz")))
> +            system('%s control set_pdebuild "%s" "%s"' %
> +                   (elbe_exe, prjdir, tmp.fname("pdebuild.tar.gz")))
>          except CommandError:
>              print("elbe control set_pdebuild Failed", file=sys.stderr)
>              print("Giving up", file=sys.stderr)
>              sys.exit(20)
>          try:
> -            system ('%s control wait_busy "%s"' % (elbe_exe, prjdir) )
> +            system('%s control wait_busy "%s"' % (elbe_exe, prjdir))
>          except CommandError:
>              print("elbe control wait_busy Failed", file=sys.stderr)
>              print("Giving up", file=sys.stderr)
> @@ -224,15 +247,17 @@ class BuildAction(PBuilderAction):
>              print("Listing available files:")
>              print("")
>              try:
> -                system ('%s control --pbuilder-only get_files "%s"' % (elbe_exe, prjdir) )
> +                system(
> +                    '%s control --pbuilder-only get_files "%s"' %
> +                    (elbe_exe, prjdir))
>              except CommandError:
>                  print("elbe control get_files Failed", file=sys.stderr)
>                  print("", file=sys.stderr)
>                  print("dumping logfile", file=sys.stderr)
>  
>                  try:
> -                    system ('%s control dump_file "%s" log.txt' % (
> -                            elbe_exe, prjdir ))
> +                    system('%s control dump_file "%s" log.txt' % (
> +                        elbe_exe, prjdir))
>                  except CommandError:
>                      print("elbe control dump_file Failed", file=sys.stderr)
>                      print("", file=sys.stderr)
> @@ -241,25 +266,28 @@ class BuildAction(PBuilderAction):
>                  sys.exit(20)
>  
>              print("")
> -            print("Get Files with: 'elbe control get_file %s <filename>'" % prjdir)
> +            print(
> +                "Get Files with: 'elbe control get_file %s <filename>'" %
> +                prjdir)
>          else:
>              print("")
>              print("Getting generated Files")
>              print("")
>  
> -            ensure_outdir (wdfs, opt)
> +            ensure_outdir(wdfs, opt)
>  
>              try:
> -                system ('%s control --pbuilder-only get_files --output "%s" "%s"' % (
> -                        elbe_exe, opt.outdir, prjdir ))
> +                system(
> +                    '%s control --pbuilder-only get_files --output "%s" "%s"' %
> +                    (elbe_exe, opt.outdir, prjdir))
>              except CommandError:
>                  print("elbe control get_files Failed", file=sys.stderr)
>                  print("", file=sys.stderr)
>                  print("dumping logfile", file=sys.stderr)
>  
>                  try:
> -                    system ('%s control dump_file "%s" log.txt' % (
> -                            elbe_exe, prjdir ))
> +                    system('%s control dump_file "%s" log.txt' % (
> +                        elbe_exe, prjdir))
>                  except CommandError:
>                      print("elbe control dump_file Failed", file=sys.stderr)
>                      print("", file=sys.stderr)
> @@ -267,5 +295,5 @@ class BuildAction(PBuilderAction):
>  
>                  sys.exit(20)
>  
> -PBuilderAction.register(BuildAction)
>  
> +PBuilderAction.register(BuildAction)
> diff --git a/elbepack/pkgarchive.py b/elbepack/pkgarchive.py
> index 83082841..e94eb746 100644
> --- a/elbepack/pkgarchive.py
> +++ b/elbepack/pkgarchive.py
> @@ -9,39 +9,41 @@ from shutil import rmtree, copytree, move
>  from apt.package import FetchError
>  from elbepack.repomanager import RepoBase, RepoAttributes
>  
> +
>  class ArchiveRepo(RepoBase):
> -    def __init__( self, xml, path, log, origin, description, components,
> -            maxsize=None ):
> +    def __init__(self, xml, path, log, origin, description, components,
> +                 maxsize=None):
> +
> +        arch = xml.text("project/arch", key="arch")
> +        codename = xml.text("project/suite")
>  
> -        arch = xml.text( "project/arch", key="arch" )
> -        codename = xml.text( "project/suite" )
> +        repo_attrs = RepoAttributes(codename, arch, components)
>  
> -        repo_attrs = RepoAttributes (codename, arch, components)
> +        RepoBase.__init__(self,
> +                          path,
> +                          log,
> +                          None,
> +                          repo_attrs,
> +                          description,
> +                          origin)
>  
> -        RepoBase.__init__( self,
> -                           path,
> -                           log,
> -                           None,
> -                           repo_attrs,
> -                           description,
> -                           origin)
>  
>  def gen_binpkg_archive(ep, repodir):
> -    repopath = path.join( ep.builddir, repodir )
> +    repopath = path.join(ep.builddir, repodir)
>  
>      try:
> -        rmtree( repopath )
> +        rmtree(repopath)
>      except OSError as e:
>          if e.errno != errno.ENOENT:
>              raise
>  
>      # Create archive directory for packages we have to download
> -    ep.buildenv.rfs.mkdir_p( '/var/cache/elbe/pkgarchive' )
> +    ep.buildenv.rfs.mkdir_p('/var/cache/elbe/pkgarchive')
>  
>      try:
>          # Repository containing all packages currently installed
> -        repo = ArchiveRepo( ep.xml, repopath, ep.log, "Elbe",
> -                "Elbe package archive", ["main"] )
> +        repo = ArchiveRepo(ep.xml, repopath, ep.log, "Elbe",
> +                           "Elbe package archive", ["main"])
>  
>          c = ep.get_rpcaptcache()
>          pkglist = c.get_installed_pkgs()
> @@ -49,129 +51,141 @@ def gen_binpkg_archive(ep, repodir):
>          for pkg in pkglist:
>              # Use package from local APT archive, if the file exists
>              filename = pkg.installed_deb
> -            rel_path = path.join( 'var/cache/apt/archives', filename )
> -            abs_path = ep.buildenv.rfs.fname( rel_path )
> +            rel_path = path.join('var/cache/apt/archives', filename)
> +            abs_path = ep.buildenv.rfs.fname(rel_path)
>  
> -            if not path.isfile( abs_path ):
> +            if not path.isfile(abs_path):
>                  # Package file does not exist, download it and adjust path name
> -                ep.log.printo( "Package file " + filename +
> -                        " not found in var/cache/apt/archives, downloading it" )
> -                abs_path = ep.buildenv.rfs.fname( rel_path )
> +                ep.log.printo(
> +                    "Package file " +
> +                    filename +
> +                    " not found in var/cache/apt/archives, downloading it")
> +                abs_path = ep.buildenv.rfs.fname(rel_path)
>                  try:
> -                    abs_path = c.download_binary( pkg.name,
> -                            '/var/cache/elbe/pkgarchive',
> -                            pkg.installed_version )
> +                    abs_path = c.download_binary(pkg.name,
> +                                                 '/var/cache/elbe/pkgarchive',
> +                                                 pkg.installed_version)
>                  except ValueError as ve:
> -                    ep.log.printo( "No Package " + pkg.name + "-" +
> -                            pkg.installed_version )
> +                    ep.log.printo("No Package " + pkg.name + "-" +
> +                                  pkg.installed_version)
>                      raise
>                  except FetchError as fe:
> -                    ep.log.printo( "Package " + pkg.name + "-" +
> -                            pkg.installed_version + " could not be downloaded" )
> +                    ep.log.printo(
> +                        "Package " +
> +                        pkg.name +
> +                        "-" +
> +                        pkg.installed_version +
> +                        " could not be downloaded")
>                      raise
>                  except TypeError as te:
> -                    ep.log.printo( "Package " + pkg.name + "-" +
> -                            pkg.installed_version + " missing name or version" )
> +                    ep.log.printo(
> +                        "Package " +
> +                        pkg.name +
> +                        "-" +
> +                        pkg.installed_version +
> +                        " missing name or version")
>                      raise
>  
>              # Add package to repository
>              # XXX Use correct component
> -            repo.includedeb( abs_path, "main" )
> +            repo.includedeb(abs_path, "main")
>  
>          repo.finalize()
>  
>      finally:
> -        rmtree( ep.buildenv.rfs.fname( 'var/cache/elbe/pkgarchive' ) )
> +        rmtree(ep.buildenv.rfs.fname('var/cache/elbe/pkgarchive'))
>          repo.finalize()
>  
> -def checkout_binpkg_archive (ep, repodir):
> -    repopath = path.join( ep.builddir, repodir )
> -    sources_list = ep.buildenv.rfs.fname( 'etc/apt/sources.list' )
> -    sources_list_d = ep.buildenv.rfs.fname( 'etc/apt/sources.list.d' )
> -    sources_list_backup = path.join( ep.builddir, 'sources.list.orig' )
> -    sources_list_d_backup = path.join( ep.builddir, 'sources.list.d.orig' )
> -    pkgarchive = ep.buildenv.rfs.fname( 'var/cache/elbe/pkgarchive' )
> +
> +def checkout_binpkg_archive(ep, repodir):
> +    repopath = path.join(ep.builddir, repodir)
> +    sources_list = ep.buildenv.rfs.fname('etc/apt/sources.list')
> +    sources_list_d = ep.buildenv.rfs.fname('etc/apt/sources.list.d')
> +    sources_list_backup = path.join(ep.builddir, 'sources.list.orig')
> +    sources_list_d_backup = path.join(ep.builddir, 'sources.list.d.orig')
> +    pkgarchive = ep.buildenv.rfs.fname('var/cache/elbe/pkgarchive')
>  
>      with ep.buildenv:
>          try:
>              # Copy the package archive into the buildenv, so the RPCAptCache can
>              # access it
> -            ep.log.printo( "Copying package archive into build environment" )
> -            copytree( repopath, pkgarchive )
> +            ep.log.printo("Copying package archive into build environment")
> +            copytree(repopath, pkgarchive)
>  
>              # Move original etc/apt/sources.list and etc/apt/sources.list.d out
>              # of the way
> -            ep.log.printo( "Moving original APT configuration out of the way" )
> -            if path.isfile( sources_list ):
> -                move( sources_list, sources_list_backup )
> -            if path.isdir( sources_list_d ):
> -                move( sources_list_d, sources_list_d_backup )
> -
> -            # Now create our own, with the package archive being the only source
> -            ep.log.printo( "Creating new /etc/apt/sources.list" )
> +            ep.log.printo("Moving original APT configuration out of the way")
> +            if path.isfile(sources_list):
> +                move(sources_list, sources_list_backup)
> +            if path.isdir(sources_list_d):
> +                move(sources_list_d, sources_list_d_backup)
> +
> +            # Now create our own, with the package archive being the only
> +            # source
> +            ep.log.printo("Creating new /etc/apt/sources.list")
>              deb = "deb file:///var/cache/elbe/pkgarchive "
> -            deb += ep.xml.text( "/project/suite" )
> +            deb += ep.xml.text("/project/suite")
>              deb += " main"
> -            with open( sources_list, 'w' ) as f:
> -                f.write( deb )
> +            with open(sources_list, 'w') as f:
> +                f.write(deb)
>  
>              # We need to update the APT cache to apply the changed package
>              # source
> -            ep.log.printo( "Updating APT cache to use package archive" )
> +            ep.log.printo("Updating APT cache to use package archive")
>              ep.drop_rpcaptcache()
>              c = ep.get_rpcaptcache()
>              c.update()
>  
>              # Iterate over all packages, and mark them for installation or
>              # deletion, using the same logic as in commands/updated.py
> -            ep.log.printo( "Calculating packages to install/remove" )
> -            fpl = ep.xml.node( "fullpkgs" )
> +            ep.log.printo("Calculating packages to install/remove")
> +            fpl = ep.xml.node("fullpkgs")
>              pkgs = c.get_pkglist('all')
>  
>              for p in pkgs:
>                  marked = False
>                  for fpi in fpl:
>                      if p.name == fpi.et.text:
> -                        version = fpi.et.get( 'version' )
> -                        ep.log.printo( "Install " + p.name + "-" + version )
> -                        c.mark_install( p.name, version,
> -                                from_user = not fpi.et.get( 'auto' ),
> -                                nodeps = True )
> +                        version = fpi.et.get('version')
> +                        ep.log.printo("Install " + p.name + "-" + version)
> +                        c.mark_install(p.name, version,
> +                                       from_user=not fpi.et.get('auto'),
> +                                       nodeps=True)
>                          marked = True
>  
>                  if not marked:
> -                    ep.log.printo( "Delete " + p.name + "-" + version )
> -                    c.mark_delete( p.name, None )
> +                    ep.log.printo("Delete " + p.name + "-" + version)
> +                    c.mark_delete(p.name, None)
>  
>              # Now commit the changes
> -            ep.log.printo( "Commiting package changes" )
> +            ep.log.printo("Commiting package changes")
>              c.commit()
>          finally:
>              # If we changed the package sources, move back the backup
> -            if path.isdir( sources_list_d_backup ) or \
> -                    path.isfile( sources_list_backup ):
> -                ep.log.printo( "Moving back original APT configuration" )
> +            if path.isdir(sources_list_d_backup) or \
> +                    path.isfile(sources_list_backup):
> +                ep.log.printo("Moving back original APT configuration")
>                  update_needed = True
>              else:
>                  update_needed = False
>  
> -            if path.isdir( sources_list_d_backup ):
> -                move( sources_list_d_backup, sources_list_d )
> +            if path.isdir(sources_list_d_backup):
> +                move(sources_list_d_backup, sources_list_d)
>  
> -            if path.isfile( sources_list_backup ):
> -                if path.isfile( sources_list ):
> -                    remove( sources_list )
> -                move( sources_list_backup, sources_list )
> +            if path.isfile(sources_list_backup):
> +                if path.isfile(sources_list):
> +                    remove(sources_list)
> +                move(sources_list_backup, sources_list)
>  
>              # Remove the package archive from the buildenv
> -            if path.isdir( pkgarchive ):
> +            if path.isdir(pkgarchive):
>                  ep.log.printo(
> -                        "Removing package archive from build environment" )
> -                rmtree( pkgarchive )
> +                    "Removing package archive from build environment")
> +                rmtree(pkgarchive)
>  
>              # Update APT cache, if we modified the package sources
>              if update_needed:
>                  ep.log.printo(
> -                        "Updating APT cache to use original package sources" )
> +                    "Updating APT cache to use original package sources")
>                  ep.drop_rpcaptcache()
>                  ep.get_rpcaptcache().update()
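
Not a blocker, but the `-a -a` aggressive passes split every `+`-concatenation into
one operand per line, as in the `ep.log.printo("Package " + pkg.name + ...)` hunks
above. A follow-up could move these to %-formatting, which autopep8 leaves on far
fewer lines. Untested sketch; the `Pkg` class below is a stand-in for the apt
package object, not part of this patch:

```python
# Hypothetical stand-in for the apt package object used in pkgarchive.py.
class Pkg:
    name = "elbe-bootstrap"
    installed_version = "1.0"

pkg = Pkg()

# Concatenation style that autopep8 -a -a explodes across six lines:
msg_concat = ("Package " + pkg.name + "-" + pkg.installed_version +
              " could not be downloaded")

# %-formatting builds the same string and survives reformatting compactly:
msg_fmt = "Package %s-%s could not be downloaded" % (
    pkg.name, pkg.installed_version)

assert msg_concat == msg_fmt
```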
> diff --git a/elbepack/pkgutils.py b/elbepack/pkgutils.py
> index b1f3135f..4b83f8e6 100644
> --- a/elbepack/pkgutils.py
> +++ b/elbepack/pkgutils.py
> @@ -35,24 +35,26 @@ except ImportError:
>  class NoKinitrdException(Exception):
>      pass
>  
> -def get_sources_list( prj, defs ):
> +
> +def get_sources_list(prj, defs):
>  
>      suite = prj.text("suite")
>  
>      slist = ""
>      if prj.has("mirror/primary_host"):
> -        mirror = "%s://%s/%s" % ( prj.text("mirror/primary_proto"),
> -                                  prj.text("mirror/primary_host").replace(
> -                                      "LOCALMACHINE", "localhost"),
> -                                  prj.text("mirror/primary_path") )
> +        mirror = "%s://%s/%s" % (prj.text("mirror/primary_proto"),
> +                                 prj.text("mirror/primary_host").replace(
> +            "LOCALMACHINE", "localhost"),
> +            prj.text("mirror/primary_path"))
>          slist += "deb %s %s main\n" % (mirror, suite)
>          slist += "deb-src %s %s main\n" % (mirror, suite)
>  
>      if prj.has("mirror/cdrom"):
>          tmpdir = mkdtemp()
>          kinitrd = prj.text("buildimage/kinitrd", default=defs, key="kinitrd")
> -        system( '7z x -o%s "%s" pool/main/%s/%s dists' % (tmpdir, prj.text("mirror/cdrom"), kinitrd[0], kinitrd) )
> -        slist += "deb file://%s %s main\n" % (tmpdir,suite)
> +        system('7z x -o%s "%s" pool/main/%s/%s dists' %
> +               (tmpdir, prj.text("mirror/cdrom"), kinitrd[0], kinitrd))
> +        slist += "deb file://%s %s main\n" % (tmpdir, suite)
>  
>      if prj.node("mirror/url-list"):
>          for n in prj.node("mirror/url-list"):
> @@ -66,29 +68,32 @@ def get_sources_list( prj, defs ):
>      return slist
>  
>  
> -def get_key_list (prj):
> +def get_key_list(prj):
>      retval = []
>      if prj.node("mirror/url-list"):
>          for n in prj.node("mirror/url-list"):
>              if n.has("key"):
>                  tmp = n.text("key").replace("LOCALMACHINE", "localhost")
> -                retval.append (tmp.strip ())
> +                retval.append(tmp.strip())
>  
>      return retval
>  
> -def get_initrd_pkg( prj, defs ):
> +
> +def get_initrd_pkg(prj, defs):
>      initrdname = prj.text("buildimage/kinitrd", default=defs, key="kinitrd")
>  
>      return initrdname
>  
> -def get_url ( arch, suite, target_pkg, mirror, comp='main' ):
> +
> +def get_url(arch, suite, target_pkg, mirror, comp='main'):
>      try:
> -        pack_url = "%s/dists/%s/%s/binary-%s/Packages" % (mirror.replace("LOCALMACHINE", "localhost"), suite, comp, arch)
> +        pack_url = "%s/dists/%s/%s/binary-%s/Packages" % (
> +            mirror.replace("LOCALMACHINE", "localhost"), suite, comp, arch)
>          packages = urllib2.urlopen(pack_url, None, 10)
>  
>          packages = packages.readlines()
> -        packages = [x for x in packages if x.startswith( "Filename" )]
> -        packages = [x for x in packages if x.find( target_pkg ) != -1]
> +        packages = [x for x in packages if x.startswith("Filename")]
> +        packages = [x for x in packages if x.find(target_pkg) != -1]
>  
>          tmp = packages.pop()
>          urla = tmp.split()
> @@ -98,45 +103,45 @@ def get_url ( arch, suite, target_pkg, mirror, comp='main' ):
>      except IndexError:
>          url = ""
>  
> -
>      return url
>  
>  
> -def get_initrd_uri_nonvirtapt ( apt_sources, target_pkg, arch ):
> +def get_initrd_uri_nonvirtapt(apt_sources, target_pkg, arch):
>      for apts in apt_sources.splitlines():
>          apts_split = apts.strip().split(' ')
>          if apts_split[0] != 'deb':
>              continue
>  
>          for comp in apts_split[2:]:
> -            pkg = get_url (arch, apts_split[2], target_pkg, apts_split[1], comp)
> +            pkg = get_url(arch, apts_split[2], target_pkg, apts_split[1], comp)
>  
>              if pkg:
>                  return "", pkg
>  
> -def get_initrd_uri( prj, defs, arch ):
> +
> +def get_initrd_uri(prj, defs, arch):
>      if arch == "default":
> -        arch  = prj.text("buildimage/arch", default=defs, key="arch")
> +        arch = prj.text("buildimage/arch", default=defs, key="arch")
>      suite = prj.text("suite")
>  
> -    name  = prj.text("name", default=defs, key="name")
> +    name = prj.text("name", default=defs, key="name")
>      apt_sources = get_sources_list(prj, defs)
> -    apt_keys    = get_key_list (prj)
> -    target_pkg  = get_initrd_pkg(prj, defs)
> +    apt_keys = get_key_list(prj)
> +    target_pkg = get_initrd_pkg(prj, defs)
>  
>      if virtapt_imported:
>          try:
> -            v = virtapt.VirtApt( name, arch, suite, apt_sources, "", apt_keys )
> +            v = virtapt.VirtApt(name, arch, suite, apt_sources, "", apt_keys)
>          except Exception as e:
> -            return get_initrd_uri_nonvirtapt (apt_sources, target_pkg, arch)
> +            return get_initrd_uri_nonvirtapt(apt_sources, target_pkg, arch)
>  
>          d = virtapt.apt_pkg.DepCache(v.cache)
>          pkg = v.cache[target_pkg]
>  
> -        c=d.get_candidate_ver(pkg)
> -        x=v.source.find_index(c.file_list[0][0])
> +        c = d.get_candidate_ver(pkg)
> +        x = v.source.find_index(c.file_list[0][0])
>  
> -        r=virtapt.apt_pkg.PackageRecords(v.cache)
> +        r = virtapt.apt_pkg.PackageRecords(v.cache)
>          r.lookup(c.file_list[0])
>          uri = x.archive_uri(r.filename)
>  
> @@ -145,39 +150,39 @@ def get_initrd_uri( prj, defs, arch ):
>  
>          return r.sha1_hash, uri
>      else:
> -        return get_initrd_uri_nonvirtapt ( apt_sources, target_pkg, arch )
> -
> +        return get_initrd_uri_nonvirtapt(apt_sources, target_pkg, arch)
>  
>      return "", ""
>  
>  
> -def get_dsc_size( fname ):
> +def get_dsc_size(fname):
>      if not virtapt_imported:
>          return 0
>  
> -    tf = TagFile( fname )
> +    tf = TagFile(fname)
>  
>      sz = os.path.getsize(fname)
>      for sect in tf:
>          if 'Files' in sect:
>              files = sect['Files'].split('\n')
> -            files = [ f.strip().split(' ') for f in files ]
> +            files = [f.strip().split(' ') for f in files]
>              for f in files:
>                  sz += int(f[1])
>  
>      return sz
>  
> -def copy_kinitrd( prj, target_dir, defs, arch="default" ):
> +
> +def copy_kinitrd(prj, target_dir, defs, arch="default"):
>      try:
>          sha1, uri = get_initrd_uri(prj, defs, arch)
>      except KeyError:
> -        raise NoKinitrdException ('no elbe-bootstrap package available')
> +        raise NoKinitrdException('no elbe-bootstrap package available')
>          return
>      except SystemError:
> -        raise NoKinitrdException ('a configured mirror is not reachable')
> +        raise NoKinitrdException('a configured mirror is not reachable')
>          return
>      except CommandError as e:
> -        raise NoKinitrdException ("couldn't download elbe-bootstrap package")
> +        raise NoKinitrdException("couldn't download elbe-bootstrap package")
>          return
>  
>      try:
> @@ -185,51 +190,89 @@ def copy_kinitrd( prj, target_dir, defs, arch="default" ):
>  
>          try:
>              if uri.startswith("file://"):
> -                system( 'cp "%s" "%s"' % ( uri[len("file://"):], os.path.join(tmpdir, "pkg.deb") ) )
> +                system('cp "%s" "%s"' %
> +                       (uri[len("file://"):], os.path.join(tmpdir, "pkg.deb")))
>              elif uri.startswith("http://"):
> -                system( 'wget -O "%s" "%s"' % ( os.path.join(tmpdir, "pkg.deb"), uri ) )
> +                system('wget -O "%s" "%s"' %
> +                       (os.path.join(tmpdir, "pkg.deb"), uri))
>              elif uri.startswith("ftp://"):
> -                system( 'wget -O "%s" "%s"' % ( os.path.join(tmpdir, "pkg.deb"), uri ) )
> +                system('wget -O "%s" "%s"' %
> +                       (os.path.join(tmpdir, "pkg.deb"), uri))
>              else:
> -                raise NoKinitrdException ('no elbe-bootstrap package available')
> +                raise NoKinitrdException('no elbe-bootstrap package available')
>          except CommandError as e:
> -            raise NoKinitrdException ("couldn't download elbe-bootstrap package")
> +            raise NoKinitrdException(
> +                "couldn't download elbe-bootstrap package")
>              return
>  
>          if len(sha1) > 0:
>              m = hashlib.sha1()
> -            with open (os.path.join(tmpdir, "pkg.deb"), "rb") as f:
> +            with open(os.path.join(tmpdir, "pkg.deb"), "rb") as f:
>                  buf = f.read(65536)
> -                while len(buf)>0:
> -                    m.update( buf )
> +                while len(buf) > 0:
> +                    m.update(buf)
>                      buf = f.read(65536)
>  
>              if m.hexdigest() != sha1:
> -                raise NoKinitrdException ('elbe-bootstrap failed to verify !!!')
> +                raise NoKinitrdException('elbe-bootstrap failed to verify !!!')
>          else:
>              print("-----------------------------------------------------")
>              print("WARNING:")
>              print("Using untrusted elbe-bootstrap")
>              print("-----------------------------------------------------")
>  
> -
>          try:
> -            system( 'dpkg -x "%s" "%s"' % ( os.path.join(tmpdir, "pkg.deb"), tmpdir ) )
> +            system('dpkg -x "%s" "%s"' %
> +                   (os.path.join(tmpdir, "pkg.deb"), tmpdir))
>          except CommandError:
>              try:
>                  # dpkg did not work, try falling back to ar and tar
> -                system( 'ar p "%s" data.tar.gz | tar xz -C "%s"' % ( os.path.join(tmpdir, "pkg.deb"), tmpdir ) )
> +                system('ar p "%s" data.tar.gz | tar xz -C "%s"' %
> +                       (os.path.join(tmpdir, "pkg.deb"), tmpdir))
>              except CommandError:
> -                system( 'ar p "%s" data.tar.xz | tar xJ -C "%s"' % ( os.path.join(tmpdir, "pkg.deb"), tmpdir ) )
> -
> +                system('ar p "%s" data.tar.xz | tar xJ -C "%s"' %
> +                       (os.path.join(tmpdir, "pkg.deb"), tmpdir))
>  
> -
> -        # copy is done twice, because paths in elbe-bootstarp_1.0 and 0.9 differ
> +        # copy is done twice, because paths in elbe-bootstarp_1.0 and 0.9
> +        # differ
>          if prj.has("mirror/cdrom"):
> -            system( 'cp "%s" "%s"' % ( os.path.join( tmpdir, 'var', 'lib', 'elbe', 'initrd', 'initrd-cdrom.gz' ), os.path.join(target_dir, "initrd.gz") ) )
> +            system(
> +                'cp "%s" "%s"' %
> +                (os.path.join(
> +                    tmpdir,
> +                    'var',
> +                    'lib',
> +                    'elbe',
> +                    'initrd',
> +                    'initrd-cdrom.gz'),
> +                    os.path.join(
> +                    target_dir,
> +                    "initrd.gz")))
>          else:
> -            system( 'cp "%s" "%s"' % ( os.path.join( tmpdir, 'var', 'lib', 'elbe', 'initrd', 'initrd.gz' ), os.path.join(target_dir, "initrd.gz") ) )
> -
> -        system( 'cp "%s" "%s"' % ( os.path.join( tmpdir, 'var', 'lib', 'elbe', 'initrd', 'vmlinuz' ), os.path.join(target_dir, "vmlinuz") ) )
> +            system(
> +                'cp "%s" "%s"' %
> +                (os.path.join(
> +                    tmpdir,
> +                    'var',
> +                    'lib',
> +                    'elbe',
> +                    'initrd',
> +                    'initrd.gz'),
> +                    os.path.join(
> +                    target_dir,
> +                    "initrd.gz")))
> +
> +        system(
> +            'cp "%s" "%s"' %
> +            (os.path.join(
> +                tmpdir,
> +                'var',
> +                'lib',
> +                'elbe',
> +                'initrd',
> +                'vmlinuz'),
> +                os.path.join(
> +                target_dir,
> +                "vmlinuz")))
>      finally:
> -        system( 'rm -rf "%s"' % tmpdir )
> +        system('rm -rf "%s"' % tmpdir)
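
The three `cp` hunks above show the worst of the `-a -a` wrapping: one
`os.path.join()` call spread over twelve lines. Since the same copy pattern
repeats three times, a small helper would keep both autopep8 and readers happy.
A sketch only; `initrd_cp_cmd` is a hypothetical name, not part of this patch:

```python
import os

def initrd_cp_cmd(tmpdir, target_dir, src_name, dst_name):
    # Hypothetical helper: builds the source and destination paths and
    # returns the shell command string that copy_kinitrd() passes to system().
    src = os.path.join(tmpdir, 'var', 'lib', 'elbe', 'initrd', src_name)
    dst = os.path.join(target_dir, dst_name)
    return 'cp "%s" "%s"' % (src, dst)

# The mirror/cdrom branch above then collapses to a single readable call:
cmd = initrd_cp_cmd('/tmp/x', '/build', 'initrd-cdrom.gz', 'initrd.gz')
```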
> diff --git a/elbepack/projectmanager.py b/elbepack/projectmanager.py
> index 15664ce5..c775ed1a 100644
> --- a/elbepack/projectmanager.py
> +++ b/elbepack/projectmanager.py
> @@ -20,352 +20,383 @@ from elbepack.asyncworker import PdebuildJob, CreatePbuilderJob, UpdatePbuilderJ
>  from elbepack.asyncworker import BuildChrootTarJob
>  from elbepack.elbexml import ValidationMode
>  
> +
>  class ProjectManagerError(Exception):
> -    def __init__ (self, message):
> -        Exception.__init__( self, message )
> +    def __init__(self, message):
> +        Exception.__init__(self, message)
> +
>  
>  class AlreadyOpen(ProjectManagerError):
> -    def __init__ (self, builddir, username):
> -        ProjectManagerError.__init__( self,
> -                "project in %s is already opened by %s" % (builddir, username) )
> +    def __init__(self, builddir, username):
> +        ProjectManagerError.__init__(
> +            self, "project in %s is already opened by %s" %
> +            (builddir, username))
> +
>  
>  class PermissionDenied(ProjectManagerError):
> -    def __init__ (self, builddir):
> -        ProjectManagerError.__init__( self,
> -                "permission denied for project in %s" % builddir )
> +    def __init__(self, builddir):
> +        ProjectManagerError.__init__(
> +            self, "permission denied for project in %s" %
> +            builddir)
> +
>  
>  class NoOpenProject(ProjectManagerError):
> -    def __init__ (self):
> -        ProjectManagerError.__init__( self, "must open a project first" )
> +    def __init__(self):
> +        ProjectManagerError.__init__(self, "must open a project first")
> +
>  
>  class InvalidState(ProjectManagerError):
> -    def __init__ (self, message):
> -        ProjectManagerError.__init__( self, message )
> +    def __init__(self, message):
> +        ProjectManagerError.__init__(self, message)
>  
>  
>  class OpenProjectFile(object):
> -    def __init__ (self, pfd, mode = 'r'):
> -        self.path = path.join( pfd.builddir, pfd.name )
> +    def __init__(self, pfd, mode='r'):
> +        self.path = path.join(pfd.builddir, pfd.name)
>          self.mime_type = pfd.mime_type
> -        self.fobj = open( self.path, mode )
> +        self.fobj = open(self.path, mode)
>  
>  
>  class ProjectManager(object):
> -    def __init__ (self, basepath):
> +    def __init__(self, basepath):
>          self.basepath = basepath    # Base path for new projects
>          self.db = ElbeDB()          # Database of projects and users
> -        self.worker = AsyncWorker( self.db )
> -        self.userid2project = {}    # (userid, ElbeProject) map of open projects
> +        self.worker = AsyncWorker(self.db)
> +        # (userid, ElbeProject) map of open projects
> +        self.userid2project = {}
>          self.builddir2userid = {}   # (builddir, userid) map of open projects
>          self.lock = Lock()          # Lock protecting our data
>  
>      def stop(self):
>          self.worker.stop()
>  
> -    def new_project (self, userid):
> +    def new_project(self, userid):
>          subdir = str(uuid4())
> -        builddir = path.join( self.basepath, subdir )
> -        self.db.create_project( builddir, owner_id=userid )
> +        builddir = path.join(self.basepath, subdir)
> +        self.db.create_project(builddir, owner_id=userid)
>          return builddir
>  
> -    def create_project (self, userid, xml_file, url_validation=ValidationMode.CHECK_ALL):
> +    def create_project(
> +            self,
> +            userid,
> +            xml_file,
> +            url_validation=ValidationMode.CHECK_ALL):
>          subdir = str(uuid4())
> -        builddir = path.join( self.basepath, subdir )
> +        builddir = path.join(self.basepath, subdir)
>  
>          with self.lock:
>              # Try to close old project, if any
> -            self._close_current_project( userid )
> +            self._close_current_project(userid)
>  
> -            self.db.create_project( builddir, owner_id=userid )
> +            self.db.create_project(builddir, owner_id=userid)
>  
>              try:
> -                self.db.set_xml( builddir, xml_file )
> -            except:
> +                self.db.set_xml(builddir, xml_file)
> +            except BaseException:
>                  # Delete the project, if we cannot assign an XML file
> -                self.db.del_project( builddir )
> +                self.db.del_project(builddir)
>                  raise
>  
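
A note on the hunk above: autopep8 rewrites the bare `except:` to `except BaseException:` (pycodestyle E722), which preserves the original behavior exactly, including catching `KeyboardInterrupt` and `SystemExit`. Since this handler only cleans up and re-raises, that is the right equivalent. A minimal sketch of the difference (helper name is mine, not from the patch):

```python
def classify(exc):
    """Return which handler classes would catch the given exception:
    (caught by `except Exception`, caught by `except BaseException`)."""
    return isinstance(exc, Exception), isinstance(exc, BaseException)

# Ordinary errors are caught by both spellings.
print(classify(ValueError()))
# KeyboardInterrupt/SystemExit derive only from BaseException, so a
# bare `except:` (== `except BaseException:`) catches them, while
# `except Exception:` would let them propagate.
print(classify(KeyboardInterrupt()))
print(classify(SystemExit()))
```

So narrowing to `except Exception:` here would have been a semantic change, out of scope for a pure-formatting pass.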
>              # Open the new project
> -            logpath = path.join( builddir, "log.txt" )
> -            ep = self.db.load_project( builddir, logpath, url_validation=url_validation )
> +            logpath = path.join(builddir, "log.txt")
> +            ep = self.db.load_project(
> +                builddir, logpath, url_validation=url_validation)
>  
> -            self.userid2project[ userid ] = ep
> -            self.builddir2userid[ builddir ] = userid
> +            self.userid2project[userid] = ep
> +            self.builddir2userid[builddir] = userid
>  
>          return builddir
>  
> -    def open_project (self, userid, builddir, url_validation=ValidationMode.CHECK_ALL):
> -        self._check_project_permission( userid, builddir )
> +    def open_project(
> +            self,
> +            userid,
> +            builddir,
> +            url_validation=ValidationMode.CHECK_ALL):
> +        self._check_project_permission(userid, builddir)
>  
>          with self.lock:
>              if builddir in self.builddir2userid:
> -                if self.builddir2userid[ builddir ] == userid:
> +                if self.builddir2userid[builddir] == userid:
>                      # Same project selected again by the same user, don't do
>                      # anything
>                      return
>                  else:
>                      # Already opened by a different user
> -                    raise AlreadyOpen( builddir,
> -                            self.db.get_username(
> -                                self.builddir2userid[ builddir ] ) )
> +                    raise AlreadyOpen(builddir,
> +                                      self.db.get_username(
> +                                          self.builddir2userid[builddir]))
>  
>              # Try to close the old project of the user, if any
> -            self._close_current_project( userid )
> +            self._close_current_project(userid)
>  
>              # Load project from the database
> -            logpath = path.join( builddir, "log.txt" )
> -            ep = self.db.load_project( builddir, logpath, url_validation=url_validation )
> +            logpath = path.join(builddir, "log.txt")
> +            ep = self.db.load_project(
> +                builddir, logpath, url_validation=url_validation)
>  
>              # Add project to our dictionaries
> -            self.userid2project[ userid ] = ep
> -            self.builddir2userid[ builddir ] = userid
> +            self.userid2project[userid] = ep
> +            self.builddir2userid[builddir] = userid
>  
> -    def close_current_project (self, userid):
> +    def close_current_project(self, userid):
>          with self.lock:
> -            self._close_current_project( userid )
> +            self._close_current_project(userid)
>  
> -    def del_project (self, userid, builddir):
> -        self._check_project_permission( userid, builddir )
> +    def del_project(self, userid, builddir):
> +        self._check_project_permission(userid, builddir)
>  
>          with self.lock:
>              # Does anyone have the project opened right now?
>              if builddir in self.builddir2userid:
> -                if self.builddir2userid[ builddir ] == userid:
> +                if self.builddir2userid[builddir] == userid:
>                      # If the calling user has opened it, then close it and
>                      # proceed if closed successfully.
> -                    self._close_current_project( userid )
> +                    self._close_current_project(userid)
>                  else:
>                      # TODO: Admin should be allowed to delete projects
>                      # that are currently opened by other users
> -                    raise AlreadyOpen( builddir,
> -                            self.db.get_username(
> -                                self.builddir2userid[ builddir ] ) )
> +                    raise AlreadyOpen(builddir,
> +                                      self.db.get_username(
> +                                          self.builddir2userid[builddir]))
>  
> -        self.db.del_project( builddir )
> +        self.db.del_project(builddir)
>  
> -    def get_current_project_data (self, userid):
> +    def get_current_project_data(self, userid):
>          with self.lock:
> -            builddir = self._get_current_project( userid ).builddir
> -            return self.db.get_project_data( builddir )
> +            builddir = self._get_current_project(userid).builddir
> +            return self.db.get_project_data(builddir)
>  
> -    def get_current_project_files (self, userid):
> +    def get_current_project_files(self, userid):
>          with self.lock:
> -            builddir = self._get_current_project( userid ).builddir
> -            return self.db.get_project_files( builddir )
> +            builddir = self._get_current_project(userid).builddir
> +            return self.db.get_project_files(builddir)
>  
> -    def open_current_project_file (self, userid, filename, mode = 'r'):
> +    def open_current_project_file(self, userid, filename, mode='r'):
>          with self.lock:
> -            builddir = self._get_current_project( userid, allow_busy=False ).builddir
> +            builddir = self._get_current_project(
> +                userid, allow_busy=False).builddir
>  
> -            pfd = self.db.get_project_file( builddir, filename )
> -            return OpenProjectFile( pfd, mode )
> +            pfd = self.db.get_project_file(builddir, filename)
> +            return OpenProjectFile(pfd, mode)
>  
> -    def set_current_project_private_data (self, userid, private_data):
> +    def set_current_project_private_data(self, userid, private_data):
>          with self.lock:
> -            ep = self._get_current_project( userid )
> +            ep = self._get_current_project(userid)
>              ep.private_data = private_data
>  
> -    def get_current_project_private_data (self, userid):
> +    def get_current_project_private_data(self, userid):
>          private_data = None
>          with self.lock:
> -            ep = self._get_current_project( userid )
> +            ep = self._get_current_project(userid)
>              private_data = ep.private_data
>          return private_data
>  
> -    def set_current_project_xml (self, userid, xml_file):
> +    def set_current_project_xml(self, userid, xml_file):
>          with self.lock:
> -            ep = self._get_current_project( userid, allow_busy=False )
> +            ep = self._get_current_project(userid, allow_busy=False)
>  
> -            self.db.set_xml( ep.builddir, xml_file )
> +            self.db.set_xml(ep.builddir, xml_file)
>  
> -    def set_current_project_upload_cdrom (self, userid):
> +    def set_current_project_upload_cdrom(self, userid):
>          with self.lock:
> -            ep = self._get_current_project( userid, allow_busy=False )
> +            ep = self._get_current_project(userid, allow_busy=False)
>  
> -            ep.xml.set_cdrom_mirror (path.join (ep.builddir, 'uploaded_cdrom.iso'))
> -            ep.sync_xml_to_disk ()
> +            ep.xml.set_cdrom_mirror(
> +                path.join(
> +                    ep.builddir,
> +                    'uploaded_cdrom.iso'))
> +            ep.sync_xml_to_disk()
>  
>              # Make db reload the xml file
> -            self.db.set_xml (ep.builddir, None)
> +            self.db.set_xml(ep.builddir, None)
>  
> -    def set_current_project_postbuild (self, userid, postbuild_file):
> +    def set_current_project_postbuild(self, userid, postbuild_file):
>          with self.lock:
> -            ep = self._get_current_project( userid, allow_busy=False )
> +            ep = self._get_current_project(userid, allow_busy=False)
>  
> -            f = self.db.set_postbuild( ep.builddir, postbuild_file )
> +            f = self.db.set_postbuild(ep.builddir, postbuild_file)
>              ep.postbuild_file = f
>  
> -    def set_current_project_savesh (self, userid, savesh_file):
> +    def set_current_project_savesh(self, userid, savesh_file):
>          with self.lock:
> -            ep = self._get_current_project( userid, allow_busy=False )
> +            ep = self._get_current_project(userid, allow_busy=False)
>  
> -            f = self.db.set_savesh( ep.builddir, savesh_file )
> +            f = self.db.set_savesh(ep.builddir, savesh_file)
>              ep.savesh_file = f
>  
> -    def set_current_project_presh (self, userid, presh_file):
> +    def set_current_project_presh(self, userid, presh_file):
>          with self.lock:
> -            ep = self._get_current_project( userid, allow_busy=False )
> +            ep = self._get_current_project(userid, allow_busy=False)
>  
> -            f = self.db.set_presh( ep.builddir, presh_file )
> +            f = self.db.set_presh(ep.builddir, presh_file)
>              ep.presh_file = f
>  
> -    def set_current_project_postsh (self, userid, postsh_file):
> +    def set_current_project_postsh(self, userid, postsh_file):
>          with self.lock:
> -            ep = self._get_current_project( userid, allow_busy=False )
> +            ep = self._get_current_project(userid, allow_busy=False)
>  
> -            f = self.db.set_postsh( ep.builddir, postsh_file )
> +            f = self.db.set_postsh(ep.builddir, postsh_file)
>              ep.postsh_file = f
>  
> -    def set_current_project_version( self, userid, new_version ):
> +    def set_current_project_version(self, userid, new_version):
>          with self.lock:
> -            ep = self._get_current_project( userid, allow_busy=False )
> +            ep = self._get_current_project(userid, allow_busy=False)
>  
> -            self.db.set_project_version( ep.builddir, new_version )
> -            ep.xml.node( "/project/version" ).set_text( new_version )
> +            self.db.set_project_version(ep.builddir, new_version)
> +            ep.xml.node("/project/version").set_text(new_version)
>  
> -    def list_current_project_versions( self, userid ):
> +    def list_current_project_versions(self, userid):
>          with self.lock:
> -            ep = self._get_current_project( userid )
> -            return self.db.list_project_versions( ep.builddir )
> +            ep = self._get_current_project(userid)
> +            return self.db.list_project_versions(ep.builddir)
>  
> -    def save_current_project_version( self, userid, description = None ):
> +    def save_current_project_version(self, userid, description=None):
>          with self.lock:
> -            ep = self._get_current_project( userid, allow_busy=False )
> +            ep = self._get_current_project(userid, allow_busy=False)
>  
> -            self.worker.enqueue( SaveVersionJob( ep, description ) )
> +            self.worker.enqueue(SaveVersionJob(ep, description))
>  
> -    def checkout_project_version( self, userid, version ):
> +    def checkout_project_version(self, userid, version):
>          with self.lock:
> -            ep = self._get_current_project( userid, allow_busy=False )
> +            ep = self._get_current_project(userid, allow_busy=False)
>  
> -            self.worker.enqueue( CheckoutVersionJob( ep, version ) )
> +            self.worker.enqueue(CheckoutVersionJob(ep, version))
>  
> -    def set_current_project_version_description( self, userid, version,
> -            description ):
> +    def set_current_project_version_description(self, userid, version,
> +                                                description):
>          with self.lock:
> -            ep = self._get_current_project( userid )
> -            self.db.set_version_description( ep.builddir, version, description )
> +            ep = self._get_current_project(userid)
> +            self.db.set_version_description(ep.builddir, version, description)
>  
> -    def del_current_project_version( self, userid, version ):
> +    def del_current_project_version(self, userid, version):
>          with self.lock:
> -            ep = self._get_current_project( userid, allow_busy=False )
> +            ep = self._get_current_project(userid, allow_busy=False)
>  
> -            name = ep.xml.text( "project/name" )
> -            self.db.del_version( ep.builddir, version )
> +            name = ep.xml.text("project/name")
> +            self.db.del_version(ep.builddir, version)
>  
>              # Delete corresponding package archive, if existing
> -            pkgarchive = get_versioned_filename( name, version, ".pkgarchive" )
> -            pkgarchive_path = path.join( ep.builddir, pkgarchive )
> +            pkgarchive = get_versioned_filename(name, version, ".pkgarchive")
> +            pkgarchive_path = path.join(ep.builddir, pkgarchive)
>              try:
> -                rmtree( pkgarchive_path )
> +                rmtree(pkgarchive_path)
>              except OSError as e:
>                  if e.errno != errno.ENOENT:
>                      raise
>  
> -    def build_current_project (self, userid, build_bin, build_src, skip_pbuilder):
> +    def build_current_project(
> +            self,
> +            userid,
> +            build_bin,
> +            build_src,
> +            skip_pbuilder):
>          with self.lock:
> -            ep = self._get_current_project (userid, allow_busy=False)
> -            self.worker.enqueue (BuildJob (ep, build_bin, build_src,
> -                skip_pbuilder))
> +            ep = self._get_current_project(userid, allow_busy=False)
> +            self.worker.enqueue(BuildJob(ep, build_bin, build_src,
> +                                         skip_pbuilder))
>  
> -    def update_pbuilder (self, userid):
> +    def update_pbuilder(self, userid):
>          with self.lock:
> -            ep = self._get_current_project (userid, allow_busy=False)
> -            self.worker.enqueue (UpdatePbuilderJob (ep))
> +            ep = self._get_current_project(userid, allow_busy=False)
> +            self.worker.enqueue(UpdatePbuilderJob(ep))
>  
> -    def build_pbuilder (self, userid):
> +    def build_pbuilder(self, userid):
>          with self.lock:
> -            ep = self._get_current_project (userid, allow_busy=False)
> -            self.worker.enqueue (CreatePbuilderJob (ep))
> +            ep = self._get_current_project(userid, allow_busy=False)
> +            self.worker.enqueue(CreatePbuilderJob(ep))
>  
> -    def build_current_pdebuild (self, userid):
> +    def build_current_pdebuild(self, userid):
>          with self.lock:
> -            ep = self._get_current_project (userid, allow_busy=False)
> -            if not path.isdir (path.join (ep.builddir, "pbuilder")):
> -                raise InvalidState ('No pbuilder exists: run "elbe pbuilder create --project %s" first' % ep.builddir)
> +            ep = self._get_current_project(userid, allow_busy=False)
> +            if not path.isdir(path.join(ep.builddir, "pbuilder")):
> +                raise InvalidState(
> +                    'No pbuilder exists: run "elbe pbuilder create --project %s" first' %
> +                    ep.builddir)
>  
> -            self.worker.enqueue (PdebuildJob (ep))
> +            self.worker.enqueue(PdebuildJob(ep))
>  
>      def set_orig_fname(self, userid, fname):
>          with self.lock:
> -            ep = self._get_current_project (userid, allow_busy=False)
> -            if not path.isdir (path.join (ep.builddir, "pbuilder")):
> -                raise InvalidState ('No pbuilder exists: run "elbe pbuilder create --project %s" first' % ep.builddir)
> +            ep = self._get_current_project(userid, allow_busy=False)
> +            if not path.isdir(path.join(ep.builddir, "pbuilder")):
> +                raise InvalidState(
> +                    'No pbuilder exists: run "elbe pbuilder create --project %s" first' %
> +                    ep.builddir)
>  
>              ep.orig_fname = fname
>              ep.orig_files.append(fname)
>  
>      def get_orig_fname(self, userid):
>          with self.lock:
> -            ep = self._get_current_project (userid, allow_busy=False)
> -            if not path.isdir (path.join (ep.builddir, "pbuilder")):
> -                raise InvalidState ('No pbuilder exists: run "elbe pbuilder create --project %s" first' % ep.builddir)
> +            ep = self._get_current_project(userid, allow_busy=False)
> +            if not path.isdir(path.join(ep.builddir, "pbuilder")):
> +                raise InvalidState(
> +                    'No pbuilder exists: run "elbe pbuilder create --project %s" first' %
> +                    ep.builddir)
>  
>              return ep.orig_fname
>  
> -
> -    def build_chroot_tarball (self, userid):
> +    def build_chroot_tarball(self, userid):
>          with self.lock:
> -            ep = self._get_current_project (userid, allow_busy=False)
> -            self.worker.enqueue (BuildChrootTarJob (ep))
> +            ep = self._get_current_project(userid, allow_busy=False)
> +            self.worker.enqueue(BuildChrootTarJob(ep))
>  
> -    def build_sysroot (self, userid):
> +    def build_sysroot(self, userid):
>          with self.lock:
> -            ep = self._get_current_project (userid, allow_busy=False)
> -            self.worker.enqueue (BuildSysrootJob (ep))
> +            ep = self._get_current_project(userid, allow_busy=False)
> +            self.worker.enqueue(BuildSysrootJob(ep))
>  
> -    def build_update_package (self, userid, base_version):
> +    def build_update_package(self, userid, base_version):
>          with self.lock:
> -            c = self._get_current_project_apt_cache( userid )
> +            c = self._get_current_project_apt_cache(userid)
>              if c.get_changes():
>                  raise InvalidState(
> -                        "project %s has uncommitted package changes, "
> -                        "please commit them first" )
> +                    "project %s has uncommitted package changes, "
> +                    "please commit them first")
>  
> -            ep = self._get_current_project( userid )
> -            self.worker.enqueue( GenUpdateJob ( ep, base_version ) )
> +            ep = self._get_current_project(userid)
> +            self.worker.enqueue(GenUpdateJob(ep, base_version))
>  
> -    def apt_upd_upgr (self, userid):
> +    def apt_upd_upgr(self, userid):
>          with self.lock:
> -            ep = self._get_current_project( userid, allow_busy=False )
> -            self.worker.enqueue( APTUpdUpgrJob( ep ) )
> +            ep = self._get_current_project(userid, allow_busy=False)
> +            self.worker.enqueue(APTUpdUpgrJob(ep))
>  
> -    def apt_update (self, userid):
> +    def apt_update(self, userid):
>          with self.lock:
> -            ep = self._get_current_project( userid, allow_busy=False )
> -            self.worker.enqueue( APTUpdateJob( ep ) )
> +            ep = self._get_current_project(userid, allow_busy=False)
> +            self.worker.enqueue(APTUpdateJob(ep))
>  
> -    def apt_commit (self, userid):
> +    def apt_commit(self, userid):
>          with self.lock:
> -            ep = self._get_current_project( userid, allow_busy=False )
> -            self.worker.enqueue( APTCommitJob( ep ) )
> +            ep = self._get_current_project(userid, allow_busy=False)
> +            self.worker.enqueue(APTCommitJob(ep))
>  
> -    def apt_clear (self, userid):
> +    def apt_clear(self, userid):
>          with self.lock:
> -            c = self._get_current_project_apt_cache( userid )
> +            c = self._get_current_project_apt_cache(userid)
>              c.clear()
>  
> -    def apt_mark_install (self, userid, pkgname, version):
> +    def apt_mark_install(self, userid, pkgname, version):
>          with self.lock:
> -            c = self._get_current_project_apt_cache( userid )
> -            c.mark_install( pkgname, version )
> -            ep = self._get_current_project( userid )
> +            c = self._get_current_project_apt_cache(userid)
> +            c.mark_install(pkgname, version)
> +            ep = self._get_current_project(userid)
>              pkgs = ep.xml.get_target_packages()
> -            if not pkgname in pkgs:
> +            if pkgname not in pkgs:
>                  pkgs.append(pkgname)
>              ep.xml.set_target_packages(pkgs)
>  
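
The `not pkgname in pkgs` → `pkgname not in pkgs` rewrite just above (pycodestyle E713) is purely cosmetic; both spellings perform the same containment test. A quick sketch (package names are hypothetical):

```python
def old_style(pkgname, pkgs):
    # pre-patch spelling, flagged by pycodestyle E713
    return not pkgname in pkgs

def new_style(pkgname, pkgs):
    # post-patch spelling preferred by autopep8
    return pkgname not in pkgs

pkgs = ["vim", "openssh-server"]
print(old_style("bash", pkgs), new_style("bash", pkgs))
print(old_style("vim", pkgs), new_style("vim", pkgs))
```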
> -
> -    def apt_mark_upgrade (self, userid, pkgname, version):
> +    def apt_mark_upgrade(self, userid, pkgname, version):
>          with self.lock:
> -            c = self._get_current_project_apt_cache( userid )
> -            c.mark_upgrade( pkgname, version )
> +            c = self._get_current_project_apt_cache(userid)
> +            c.mark_upgrade(pkgname, version)
>  
> -    def apt_mark_delete (self, userid, pkgname, version):
> +    def apt_mark_delete(self, userid, pkgname, version):
>          with self.lock:
> -            c = self._get_current_project_apt_cache( userid )
> +            c = self._get_current_project_apt_cache(userid)
>  
> -            ep = self._get_current_project( userid )
> +            ep = self._get_current_project(userid)
>              pkgs = ep.xml.get_target_packages()
>  
>              if pkgname in pkgs:
> @@ -376,7 +407,7 @@ class ProjectManager(object):
>  
>              debootstrap_pkgs = []
>              for p in ep.xml.xml.node("debootstrappkgs"):
> -                debootstrap_pkgs.append (p.et.text)
> +                debootstrap_pkgs.append(p.et.text)
>  
>              # temporary disabled because of
>              # https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=776057
> @@ -385,176 +416,174 @@ class ProjectManager(object):
>  
>      def get_debootstrap_pkgs(self, userid):
>          with self.lock:
> -            ep = self._get_current_project( userid )
> +            ep = self._get_current_project(userid)
>  
>              debootstrap_pkgs = []
>              for p in ep.xml.xml.node("debootstrappkgs"):
> -                debootstrap_pkgs.append (p.et.text)
> +                debootstrap_pkgs.append(p.et.text)
>  
>              return debootstrap_pkgs
>  
> -    def apt_mark_keep (self, userid, pkgname, version):
> +    def apt_mark_keep(self, userid, pkgname, version):
>          with self.lock:
> -            c = self._get_current_project_apt_cache( userid )
> -            c.mark_keep( pkgname, version )
> +            c = self._get_current_project_apt_cache(userid)
> +            c.mark_keep(pkgname, version)
>  
> -            ep = self._get_current_project( userid )
> +            ep = self._get_current_project(userid)
>              pkgs = ep.xml.get_target_packages()
> -            if not pkgname in pkgs:
> +            if pkgname not in pkgs:
>                  pkgs.append(pkgname)
>              ep.xml.set_target_packages(pkgs)
>  
> -    def apt_get_target_packages (self, userid):
> +    def apt_get_target_packages(self, userid):
>          with self.lock:
> -            ep = self._get_current_project( userid )
> +            ep = self._get_current_project(userid)
>              return ep.xml.get_target_packages()
>  
> -
> -    def apt_upgrade (self, userid, dist_upgrade = False):
> +    def apt_upgrade(self, userid, dist_upgrade=False):
>          with self.lock:
> -            c = self._get_current_project_apt_cache( userid )
> -            c.upgrade( dist_upgrade )
> +            c = self._get_current_project_apt_cache(userid)
> +            c.upgrade(dist_upgrade)
>  
> -    def apt_get_changes (self, userid):
> +    def apt_get_changes(self, userid):
>          with self.lock:
> -            c = self._get_current_project_apt_cache( userid )
> +            c = self._get_current_project_apt_cache(userid)
>              return c.get_changes()
>  
> -    def apt_get_marked_install (self, userid, section='all'):
> +    def apt_get_marked_install(self, userid, section='all'):
>          with self.lock:
> -            c = self._get_current_project_apt_cache( userid )
> -            return c.get_marked_install (section=section)
> +            c = self._get_current_project_apt_cache(userid)
> +            return c.get_marked_install(section=section)
>  
> -    def apt_get_installed (self, userid, section='all'):
> +    def apt_get_installed(self, userid, section='all'):
>          with self.lock:
> -            c = self._get_current_project_apt_cache( userid )
> -            return c.get_installed_pkgs (section=section)
> +            c = self._get_current_project_apt_cache(userid)
> +            return c.get_installed_pkgs(section=section)
>  
> -    def apt_get_upgradeable (self, userid, section='all'):
> +    def apt_get_upgradeable(self, userid, section='all'):
>          with self.lock:
> -            c = self._get_current_project_apt_cache( userid )
> -            return c.get_upgradeable (section=section)
> +            c = self._get_current_project_apt_cache(userid)
> +            return c.get_upgradeable(section=section)
>  
> -    def apt_get_pkglist (self, userid, section='all'):
> +    def apt_get_pkglist(self, userid, section='all'):
>          with self.lock:
> -            c = self._get_current_project_apt_cache( userid )
> -            return c.get_pkglist( section )
> +            c = self._get_current_project_apt_cache(userid)
> +            return c.get_pkglist(section)
>  
> -    def apt_get_pkg (self, userid, term):
> +    def apt_get_pkg(self, userid, term):
>          with self.lock:
> -            c = self._get_current_project_apt_cache( userid )
> -            return c.get_pkg( term )
> +            c = self._get_current_project_apt_cache(userid)
> +            return c.get_pkg(term)
>  
> -    def apt_get_pkgs (self, userid, term):
> +    def apt_get_pkgs(self, userid, term):
>          with self.lock:
> -            c = self._get_current_project_apt_cache( userid )
> -            return c.get_pkgs( term )
> +            c = self._get_current_project_apt_cache(userid)
> +            return c.get_pkgs(term)
>  
> -    def apt_get_sections (self, userid):
> +    def apt_get_sections(self, userid):
>          with self.lock:
> -            c = self._get_current_project_apt_cache( userid )
> +            c = self._get_current_project_apt_cache(userid)
>              return c.get_sections()
>  
> -    def read_current_project_log (self, userid):
> +    def read_current_project_log(self, userid):
>          with self.lock:
> -            ep = self._get_current_project( userid )
> -            logpath = path.join( ep.builddir, "log.txt" )
> -            f = open( logpath, "r" )
> +            ep = self._get_current_project(userid)
> +            logpath = path.join(ep.builddir, "log.txt")
> +            f = open(logpath, "r")
>          try:
>              data = f.read()
>          finally:
>              f.close()
>          return data
>  
> -    def rm_log (self, userid, builddir):
> -        ep = self._get_current_project( userid )
> -        with open (os.path.join (ep.builddir, 'log.txt'), 'w', 0):
> +    def rm_log(self, userid, builddir):
> +        ep = self._get_current_project(userid)
> +        with open(os.path.join(ep.builddir, 'log.txt'), 'w', 0):
>              pass
>  
>      def add_deb_package(self, userid, filename):
> -        ep = self._get_current_project( userid )
> +        ep = self._get_current_project(userid)
>  
> -        t = filename[-3:] # filetype of uploaded file
> +        t = filename[-3:]  # filetype of uploaded file
>  
>          if t == 'dsc':
> -            ep.repo.includedsc(os.path.join( ep.builddir, filename))
> +            ep.repo.includedsc(os.path.join(ep.builddir, filename))
>          elif t == 'deb':
> -            ep.repo.includedeb(os.path.join( ep.builddir, filename))
> +            ep.repo.includedeb(os.path.join(ep.builddir, filename))
>  
>          ep.repo.finalize()
>  
> -
> -    def current_project_has_changes (self, userid):
> +    def current_project_has_changes(self, userid):
>          with self.lock:
> -            builddir = self._get_current_project( userid ).builddir
> -            return self.db.has_changes( builddir )
> +            builddir = self._get_current_project(userid).builddir
> +            return self.db.has_changes(builddir)
>  
> -    def current_project_is_busy (self, userid, part):
> +    def current_project_is_busy(self, userid, part):
>          with self.lock:
> -            ep = self._get_current_project( userid )
> +            ep = self._get_current_project(userid)
>              count = 0
>  
>              # function is called with part=None for elbe 1.0 clients
> -            if part == None:
> -                return self.db.is_busy( ep.builddir ), ""
> +            if part is None:
> +                return self.db.is_busy(ep.builddir), ""
>  
>              l = None
> -            with open (os.path.join (ep.builddir, 'log.txt'), 'r', 0) as lf:
> +            with open(os.path.join(ep.builddir, 'log.txt'), 'r', 0) as lf:
>                  for l in lf:
>                      if count == part:
> -                        l = str(part+1) + '###' + str(l)
> -                        return self.db.is_busy( ep.builddir ), str(l)
> +                        l = str(part + 1) + '###' + str(l)
> +                        return self.db.is_busy(ep.builddir), str(l)
>                      count = count + 1
>              # don't crash if logfile doesn't exist
>              if not l:
>                  l = None
>              l = str(part) + '###' + str(l)
> -            return self.db.is_busy( ep.builddir ), l
> +            return self.db.is_busy(ep.builddir), l
>  
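
The `part == None` → `part is None` change in the hunk above (pycodestyle E711) is more than style: `==` dispatches to a type's `__eq__`, which can claim equality with anything, while `is` always tests object identity. A minimal illustration (class is mine, not from elbe):

```python
class Weird:
    # An __eq__ that claims equality with everything, including None.
    def __eq__(self, other):
        return True

w = Weird()
print(w == None)   # __eq__ hijacks the comparison
print(w is None)   # identity check cannot be overridden
```

For `part`, which is a plain int or None, the two spellings behave the same; `is None` is simply the robust idiom.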
> -    def _get_current_project (self, userid, allow_busy=True):
> +    def _get_current_project(self, userid, allow_busy=True):
>          # Must be called with self.lock held
> -        if not userid in self.userid2project:
> +        if userid not in self.userid2project:
>              raise NoOpenProject()
>  
> -        ep = self.userid2project[ userid ]
> +        ep = self.userid2project[userid]
>  
>          if not allow_busy:
> -            if self.db.is_busy( ep.builddir ):
> -                raise InvalidState( "project %s is busy" % ep.builddir )
> +            if self.db.is_busy(ep.builddir):
> +                raise InvalidState("project %s is busy" % ep.builddir)
>  
>          return ep
>  
> -    def _close_current_project (self, userid):
> +    def _close_current_project(self, userid):
>          # Must be called with self.lock held
>  
>          if userid in self.userid2project:
> -            builddir = self.userid2project[ userid ].builddir
> -            if self.db.is_busy( builddir ):
> +            builddir = self.userid2project[userid].builddir
> +            if self.db.is_busy(builddir):
>                  raise InvalidState(
> -                        "project in directory %s of user %s is currently busy and cannot be closed" %
> -                        ( builddir, self.db.get_username( userid ) ) )
> +                    "project in directory %s of user %s is currently busy and cannot be closed" %
> +                    (builddir, self.db.get_username(userid)))
>  
> -            del self.builddir2userid[ builddir ]
> -            del self.userid2project[ userid ]
> +            del self.builddir2userid[builddir]
> +            del self.userid2project[userid]
>  
> -    def _check_project_permission (self, userid, builddir):
> -        if self.db.is_admin( userid ):
> +    def _check_project_permission(self, userid, builddir):
> +        if self.db.is_admin(userid):
>              # Admin may access all projects
>              return
>  
> -        if self.db.get_owner_id( builddir ) != userid:
> +        if self.db.get_owner_id(builddir) != userid:
>              # Project of another user, deny access
> -            raise PermissionDenied( builddir )
> +            raise PermissionDenied(builddir)
>  
>          # User is owner, so allow it
>  
> -    def _get_current_project_apt_cache( self, userid ):
> +    def _get_current_project_apt_cache(self, userid):
>          # Must be called with self.lock held
> -        ep = self._get_current_project( userid, allow_busy=False )
> +        ep = self._get_current_project(userid, allow_busy=False)
>  
>          if not ep.has_full_buildenv():
>              raise InvalidState(
> -                    "project in directory %s does not have a functional "
> -                    "build environment" % ep.builddir )
> +                "project in directory %s does not have a functional "
> +                "build environment" % ep.builddir)
>  
>          return ep.get_rpcaptcache()
> diff --git a/elbepack/repomanager.py b/elbepack/repomanager.py
> index b424f6ec..c3f6cda3 100644
> --- a/elbepack/repomanager.py
> +++ b/elbepack/repomanager.py
> @@ -13,23 +13,24 @@ from debian.deb822 import Deb822
>  from elbepack.gpg import generate_elbe_internal_key, export_key
>  from elbepack.shellhelper import CommandError
>  
> +
>  class RepoAttributes(object):
> -    def __init__ (self, codename, arch, components,
> -            mirror='http://ftp.de.debian.org/debian' ):
> +    def __init__(self, codename, arch, components,
> +                 mirror='http://ftp.de.debian.org/debian'):
>          self.codename = codename
> -        if type (arch) is str:
> -            self.arch = set ([arch])
> +        if type(arch) is str:
> +            self.arch = set([arch])
>          else:
> -            self.arch = set (arch)
> +            self.arch = set(arch)
>  
>          if type(components) is str:
> -            self.components = set ([components])
> +            self.components = set([components])
>          else:
> -            self.components = set (components)
> +            self.components = set(components)
>  
>          self.mirror = mirror
>  
> -    def __add__ (self, other):
> +    def __add__(self, other):
>          """ Over simplistic Add implementation only useful for
>              our current implementation"""
>  
> @@ -37,14 +38,27 @@ class RepoAttributes(object):
>              return [self, other]
>          else:
>              assert self.mirror == other.mirror
> -            ret_arch = self.arch.union (other.arch)
> -            ret_comp = self.components.union (other.components)
> +            ret_arch = self.arch.union(other.arch)
> +            ret_comp = self.components.union(other.components)
>  
> -            return [ RepoAttributes (self.codename, ret_arch, ret_comp, self.mirror) ]
> +            return [
> +                RepoAttributes(
> +                    self.codename,
> +                    ret_arch,
> +                    ret_comp,
> +                    self.mirror)]
>  
>  
>  class RepoBase(object):
> -    def __init__( self, path, log, init_attr, repo_attr, origin, description, maxsize=None):
> +    def __init__(
> +            self,
> +            path,
> +            log,
> +            init_attr,
> +            repo_attr,
> +            origin,
> +            description,
> +            maxsize=None):
>  
>          self.vol_path = path
>          self.volume_count = 0
> @@ -72,144 +86,194 @@ class RepoBase(object):
>              self.keyid = generate_elbe_internal_key()
>              self.gen_repo_conf()
>  
> -    def get_volume_fs( self, volume ):
> +    def get_volume_fs(self, volume):
>          if self.maxsize:
> -            volname = os.path.join( self.vol_path, "vol%02d" % volume )
> +            volname = os.path.join(self.vol_path, "vol%02d" % volume)
>              return Filesystem(volname)
>          else:
>              return Filesystem(self.vol_path)
>  
> -    def new_repo_volume( self ):
> +    def new_repo_volume(self):
>          self.volume_count += 1
>          self.fs = self.get_volume_fs(self.volume_count)
>          self.gen_repo_conf()
>  
> -    def gen_repo_conf( self ):
> -        self.fs.mkdir_p( "conf" )
> -        fp = self.fs.open( "conf/distributions", "w")
> +    def gen_repo_conf(self):
> +        self.fs.mkdir_p("conf")
> +        fp = self.fs.open("conf/distributions", "w")
>  
>          need_update = False
>  
>          for att in self.attrs:
> -            fp.write( "Origin: " + self.origin + "\n" )
> -            fp.write( "Label: " + self.origin + "\n" )
> -            fp.write( "Suite: " + codename2suite[ att.codename ] + "\n" )
> -            fp.write( "Codename: " + att.codename + "\n" )
> -            fp.write( "Architectures: " + " ".join (att.arch) + "\n" )
> -            fp.write( "Components: " + " ".join (att.components.difference (set (["main/debian-installer"]))) + "\n" )
> -            fp.write( "UDebComponents: " + " ".join (att.components.difference (set (["main/debian-installer"]))) + "\n" )
> -            fp.write( "Description: " + self.description + "\n" )
> -            fp.write( "SignWith: " + self.keyid + "\n" )
> +            fp.write("Origin: " + self.origin + "\n")
> +            fp.write("Label: " + self.origin + "\n")
> +            fp.write("Suite: " + codename2suite[att.codename] + "\n")
> +            fp.write("Codename: " + att.codename + "\n")
> +            fp.write("Architectures: " + " ".join(att.arch) + "\n")
> +            fp.write("Components: " +
> +                     " ".join(att.components.difference(set(["main/debian-installer"]))) +
> +                     "\n")
> +            fp.write("UDebComponents: " +
> +                     " ".join(att.components.difference(set(["main/debian-installer"]))) +
> +                     "\n")
> +            fp.write("Description: " + self.description + "\n")
> +            fp.write("SignWith: " + self.keyid + "\n")
>  
>              if 'main/debian-installer' in att.components:
> -                fp.write( "Update: di\n" )
> +                fp.write("Update: di\n")
>  
> -                ufp = self.fs.open( "conf/updates", "w" )
> +                ufp = self.fs.open("conf/updates", "w")
>  
> -                ufp.write( "Name: di\n" )
> -                ufp.write( "Method: " + att.mirror + "\n" )
> -                ufp.write( "VerifyRelease: blindtrust\n" )
> -                ufp.write( "Components: \n" )
> -                ufp.write( "GetInRelease: no\n" )
> +                ufp.write("Name: di\n")
> +                ufp.write("Method: " + att.mirror + "\n")
> +                ufp.write("VerifyRelease: blindtrust\n")
> +                ufp.write("Components: \n")
> +                ufp.write("GetInRelease: no\n")
>                  # It would be nicer, to use this
>                  # ufp.write( "Architectures: " + " ".join (att.arch) + "\n" )
>                  # But we end up with 'armel amd64' sometimes.
>                  # So lets just use the init_attr...
>                  if self.init_attr:
> -                    ufp.write( "Architectures: " + " ".join (self.init_attr.arch) + "\n" )
> +                    ufp.write(
> +                        "Architectures: " +
> +                        " ".join(
> +                            self.init_attr.arch) +
> +                        "\n")
>                  else:
> -                    ufp.write( "Architectures: " + " ".join (att.arch) + "\n" )
> +                    ufp.write("Architectures: " + " ".join(att.arch) + "\n")
>  
> -                ufp.write ( "UDebComponents: main>main\n" )
> +                ufp.write("UDebComponents: main>main\n")
>                  ufp.close()
>  
>                  need_update = True
>  
> -            fp.write( "\n" )
> +            fp.write("\n")
>          fp.close()
>  
>          with self.fs.open("repo.pub", "w") as pubkey_fp:
>              keyring = export_key(self.keyid, pubkey_fp)
>              if keyring:
> -                shutil.copyfile(keyring, os.path.join (self.fs.path,
> -                                                       "elbe-keyring.gpg"))
> +                shutil.copyfile(keyring, os.path.join(self.fs.path,
> +                                                      "elbe-keyring.gpg"))
>  
>          if need_update:
> -            self.log.do( 'reprepro --export=force --basedir "' + self.fs.path + '" update' )
> +            self.log.do(
> +                'reprepro --export=force --basedir "' +
> +                self.fs.path +
> +                '" update')
>          else:
> -            self.log.do( 'reprepro --basedir "' + self.fs.path + '" export ' + att.codename )
> -
> -    def finalize( self ):
> -        os.environ ['GNUPGHOME'] = "/var/cache/elbe/gnupg"
> +            self.log.do(
> +                'reprepro --basedir "' +
> +                self.fs.path +
> +                '" export ' +
> +                att.codename)
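One thing the reformat makes easier to spot: in the `need_update` else branch, `att` is the leftover loop variable from `for att in self.attrs:`, so only the last attribute's codename gets exported here. If that is intentional, ignore this; otherwise a follow-up could derive the commands from all attributes. Illustrative sketch, `export_commands` is not code from the patch:

```python
def export_commands(basedir, codenames):
    # Build one export command per codename instead of relying on the
    # loop variable leaking out of the preceding for-block.
    return ['reprepro --basedir "%s" export %s' % (basedir, c)
            for c in codenames]
```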
> +
> +    def finalize(self):
> +        os.environ['GNUPGHOME'] = "/var/cache/elbe/gnupg"
>          for att in self.attrs:
> -            self.log.do( 'reprepro --basedir "' + self.fs.path + '" export ' + att.codename )
> +            self.log.do(
> +                'reprepro --basedir "' +
> +                self.fs.path +
> +                '" export ' +
> +                att.codename)
>  
> -    def _includedeb( self, path, codename, component):
> +    def _includedeb(self, path, codename, component):
>          if self.maxsize:
> -            new_size = self.fs.disk_usage("") + os.path.getsize( path )
> +            new_size = self.fs.disk_usage("") + os.path.getsize(path)
>              if new_size > self.maxsize:
>                  self.new_repo_volume()
>  
> -        self.log.do( 'reprepro --keepunreferencedfiles --export=never --basedir "' + self.fs.path + '" -C ' + component + ' includedeb ' + codename + ' ' + path )
> -
> -    def includedeb (self, path, component="main", pkgname=None, force=False):
> +        self.log.do(
> +            'reprepro --keepunreferencedfiles --export=never --basedir "' +
> +            self.fs.path +
> +            '" -C ' +
> +            component +
> +            ' includedeb ' +
> +            codename +
> +            ' ' +
> +            path)
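Purely stylistic, and out of scope for an autopep8 run: the aggressive `-a -a` split of the long '+' chain above is correct but hard to read. A single %-formatted template would keep the command shape visible in one place; `includedeb_cmd` below is an illustrative helper, not proposed API:

```python
def includedeb_cmd(basedir, component, codename, path):
    # One format string instead of a nine-line '+' chain.
    return ('reprepro --keepunreferencedfiles --export=never '
            '--basedir "%s" -C %s includedeb %s %s'
            % (basedir, component, codename, path))
```

Something for a later readability pass, not this patch.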
> +
> +    def includedeb(self, path, component="main", pkgname=None, force=False):
>          # pkgname needs only to be specified if force is enabled
>          try:
> -            self._includedeb (path, self.repo_attr.codename, component)
> +            self._includedeb(path, self.repo_attr.codename, component)
>          except CommandError as ce:
> -            if force and not pkgname is None:
> +            if force and pkgname is not None:
>                  # Including deb did not work.
>                  # Maybe we have the same Version with a
>                  # different md5 already.
>                  #
>                  # Try remove, and add again.
>                  self.removedeb(pkgname, component)
> -                self._includedeb (path, self.repo_attr.codename, component)
> +                self._includedeb(path, self.repo_attr.codename, component)
>              else:
>                  raise ce
>  
> -    def include_init_deb (self, path, component="main"):
> -        self._includedeb (path, self.init_attr.codename, component)
> +    def include_init_deb(self, path, component="main"):
> +        self._includedeb(path, self.init_attr.codename, component)
>  
> -    def _include( self, path, codename, component):
> -        self.log.do( 'reprepro --ignore=wrongdistribution --ignore=surprisingbinary --keepunreferencedfiles --export=never --basedir "' + self.fs.path  + '" -C ' + component + ' -P normal -S misc include ' + codename + ' ' + path )
> +    def _include(self, path, codename, component):
> +        self.log.do(
> +            'reprepro --ignore=wrongdistribution --ignore=surprisingbinary --keepunreferencedfiles --export=never --basedir "' +
> +            self.fs.path +
> +            '" -C ' +
> +            component +
> +            ' -P normal -S misc include ' +
> +            codename +
> +            ' ' +
> +            path)
>  
>      def _removedeb(self, pkgname, codename, component):
> -        self.log.do( "reprepro --basedir %s remove %s %s" % (self.fs.path, codename, pkgname))
> +        self.log.do(
> +            "reprepro --basedir %s remove %s %s" %
> +            (self.fs.path, codename, pkgname))
>  
>      def removedeb(self, pkgname, component="main"):
> -        self._removedeb (pkgname, self.repo_attr.codename, component)
> +        self._removedeb(pkgname, self.repo_attr.codename, component)
>  
>      def _removesrc(self, srcname, codename, component):
> -        self.log.do( "reprepro --basedir %s removesrc %s %s" % (self.fs.path, codename, srcname))
> +        self.log.do(
> +            "reprepro --basedir %s removesrc %s %s" %
> +            (self.fs.path, codename, srcname))
>  
>      def removesrc(self, path, component="main"):
>          for p in Deb822.iter_paragraphs(file(path)):
>              if 'Source' in p:
> -                self._removesrc(p['Source'], self.repo_attr.codename, component)
> +                self._removesrc(
> +                    p['Source'],
> +                    self.repo_attr.codename,
> +                    component)
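Unrelated to the reformat, but visible in this hunk (and again in `_remove` below): `Deb822.iter_paragraphs(file(path))` uses the Python-2-only `file()` builtin, which is gone in Python 3 — `open()` works on both. Minimal sketch of the open()-based pattern; `iter_sources` is illustrative, the real code of course keeps `debian.deb822.Deb822`:

```python
def iter_sources(path):
    # open() replaces the Python-2-only file() builtin.
    with open(path) as f:
        for line in f:
            if line.startswith("Source:"):
                yield line.split(":", 1)[1].strip()
```

Worth a follow-up patch if Python 3 support is on the roadmap.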
>  
> -    def _remove( self, path, codename, component):
> -        os.environ ['GNUPGHOME'] = "/var/cache/elbe/gnupg"
> +    def _remove(self, path, codename, component):
> +        os.environ['GNUPGHOME'] = "/var/cache/elbe/gnupg"
>          for p in Deb822.iter_paragraphs(file(path)):
>              if 'Source' in p:
>                  self._removesrc(p['Source'], codename, component)
>              elif 'Package' in p:
>                  self._removedeb(p['Package'], codename, component)
>  
> -    def _includedsc( self, path, codename, component):
> +    def _includedsc(self, path, codename, component):
>          if self.maxsize:
> -            new_size = self.fs.disk_usage("") + get_dsc_size( path )
> +            new_size = self.fs.disk_usage("") + get_dsc_size(path)
>              if new_size > self.maxsize:
>                  self.new_repo_volume()
>  
>          if self.maxsize and (self.fs.disk_usage("") > self.maxsize):
>              self.new_repo_volume()
>  
> -        self.log.do( 'reprepro --keepunreferencedfiles --export=never --basedir "' + self.fs.path  + '" -C ' + component + ' -P normal -S misc includedsc ' + codename + ' ' + path )
> -
> -    def includedsc( self, path, component="main", force=False):
> +        self.log.do(
> +            'reprepro --keepunreferencedfiles --export=never --basedir "' +
> +            self.fs.path +
> +            '" -C ' +
> +            component +
> +            ' -P normal -S misc includedsc ' +
> +            codename +
> +            ' ' +
> +            path)
> +
> +    def includedsc(self, path, component="main", force=False):
>          try:
> -            self._includedsc (path, self.repo_attr.codename, component)
> +            self._includedsc(path, self.repo_attr.codename, component)
>          except CommandError as ce:
>              if force:
>                  # Including dsc did not work.
> @@ -218,122 +282,143 @@ class RepoBase(object):
>                  #
>                  # Try remove, and add again.
>                  self.removesrc(path, component)
> -                self._includedsc (path, self.repo_attr.codename, component)
> +                self._includedsc(path, self.repo_attr.codename, component)
>              else:
>                  raise ce
>  
> -    def include( self, path, component="main"):
> -        self._include (path, self.repo_attr.codename, component)
> +    def include(self, path, component="main"):
> +        self._include(path, self.repo_attr.codename, component)
>  
> -    def remove( self, path, component="main"):
> -        self._remove (path, self.repo_attr.codename, component)
> +    def remove(self, path, component="main"):
> +        self._remove(path, self.repo_attr.codename, component)
>  
> -    def include_init_dsc( self, path, component="main"):
> -        self._includedsc (path, self.init_attr.codename, component)
> +    def include_init_dsc(self, path, component="main"):
> +        self._includedsc(path, self.init_attr.codename, component)
>  
> -    def buildiso( self, fname ):
> +    def buildiso(self, fname):
>          files = []
>          if self.volume_count == 0:
>              new_path = '"' + self.fs.path + '"'
> -            self.log.do( "genisoimage -o %s -J -joliet-long -R %s" % (fname, new_path) )
> -            files.append (fname)
> +            self.log.do(
> +                "genisoimage -o %s -J -joliet-long -R %s" %
> +                (fname, new_path))
> +            files.append(fname)
>          else:
> -            for i in range(self.volume_count+1):
> +            for i in range(self.volume_count + 1):
>                  volfs = self.get_volume_fs(i)
>                  newname = fname + ("%02d" % i)
> -                self.log.do( "genisoimage -o %s -J -joliet-long -R %s" % (newname,
> -                                                             volfs.path) )
> -                files.append (newname)
> +                self.log.do(
> +                    "genisoimage -o %s -J -joliet-long -R %s" %
> +                    (newname, volfs.path))
> +                files.append(newname)
>  
>          return files
>  
>  
>  class UpdateRepo(RepoBase):
> -    def __init__( self, xml, path, log ):
> -        self.xml  = xml
> +    def __init__(self, xml, path, log):
> +        self.xml = xml
>  
> -        arch = xml.text("project/arch", key="arch" )
> +        arch = xml.text("project/arch", key="arch")
>          codename = xml.text("project/suite")
>  
> -        repo_attrs = RepoAttributes (codename, arch, "main")
> +        repo_attrs = RepoAttributes(codename, arch, "main")
> +
> +        RepoBase.__init__(self,
> +                          path,
> +                          log,
> +                          None,
> +                          repo_attrs,
> +                          "Update",
> +                          "Update")
>  
> -        RepoBase.__init__( self,
> -                           path,
> -                           log,
> -                           None,
> -                           repo_attrs,
> -                           "Update",
> -                           "Update")
>  
>  class CdromInitRepo(RepoBase):
> -    def __init__( self, arch, init_codename, path, log, maxsize, mirror='http://ftp.de.debian.org/debian'  ):
> +    def __init__(self, arch, init_codename, path, log, maxsize,
> +                 mirror='http://ftp.de.debian.org/debian'):
>  
> -        init_attrs = RepoAttributes (init_codename, "amd64", ["main", "main/debian-installer"], mirror)
> +        init_attrs = RepoAttributes(
> +            init_codename, "amd64", [
> +                "main", "main/debian-installer"], mirror)
>  
> -        RepoBase.__init__( self,
> -                           path,
> -                           log,
> -                           None,
> -                           init_attrs,
> -                           "Elbe",
> -                           "Elbe InitVM Cdrom Repo",
> -                           maxsize )
> +        RepoBase.__init__(self,
> +                          path,
> +                          log,
> +                          None,
> +                          init_attrs,
> +                          "Elbe",
> +                          "Elbe InitVM Cdrom Repo",
> +                          maxsize)
>  
> -class CdromBinRepo(RepoBase):
> -    def __init__( self, arch, codename, init_codename, path, log, maxsize, mirror='http://ftp.debian.org/debian'  ):
>  
> -        repo_attrs = RepoAttributes (codename, arch, ["main", "added"], mirror)
> +class CdromBinRepo(RepoBase):
> +    def __init__(
> +            self,
> +            arch,
> +            codename,
> +            init_codename,
> +            path,
> +            log,
> +            maxsize,
> +            mirror='http://ftp.debian.org/debian'):
> +
> +        repo_attrs = RepoAttributes(codename, arch, ["main", "added"], mirror)
>          if init_codename is not None:
> -            init_attrs = RepoAttributes (init_codename, "amd64", ["main", "main/debian-installer"], mirror)
> +            init_attrs = RepoAttributes(
> +                init_codename, "amd64", [
> +                    "main", "main/debian-installer"], mirror)
>          else:
>              init_attrs = None
>  
> -        RepoBase.__init__( self,
> -                           path,
> -                           log,
> -                           init_attrs,
> -                           repo_attrs,
> -                           "Elbe",
> -                           "Elbe Binary Cdrom Repo",
> -                           maxsize )
> +        RepoBase.__init__(self,
> +                          path,
> +                          log,
> +                          init_attrs,
> +                          repo_attrs,
> +                          "Elbe",
> +                          "Elbe Binary Cdrom Repo",
> +                          maxsize)
>  
>  
>  class CdromSrcRepo(RepoBase):
> -    def __init__( self, codename, init_codename, path, log, maxsize ):
> -        repo_attrs = RepoAttributes (codename, "source", ["main", "added"])
> +    def __init__(self, codename, init_codename, path, log, maxsize):
> +        repo_attrs = RepoAttributes(codename, "source", ["main", "added"])
>          if init_codename is not None:
> -            init_attrs = RepoAttributes (init_codename, "source", ["main", "main/debian-installer"])
> +            init_attrs = RepoAttributes(
> +                init_codename, "source", [
> +                    "main", "main/debian-installer"])
>          else:
>              init_attrs = None
>  
> -        RepoBase.__init__( self,
> -                           path,
> -                           log,
> -                           init_attrs,
> -                           repo_attrs,
> -                           "Elbe",
> -                           "Elbe Source Cdrom Repo",
> -                           maxsize)
> +        RepoBase.__init__(self,
> +                          path,
> +                          log,
> +                          init_attrs,
> +                          repo_attrs,
> +                          "Elbe",
> +                          "Elbe Source Cdrom Repo",
> +                          maxsize)
>  
>  
>  class ToolchainRepo(RepoBase):
> -    def __init__( self, arch, codename, path, log):
> -        repo_attrs = RepoAttributes (codename, arch, "main")
> -        RepoBase.__init__( self,
> -                           path,
> -                           log,
> -                           None,
> -                           repo_attrs,
> -                           "toolchain",
> -                           "Toolchain binary packages Repo" )
> +    def __init__(self, arch, codename, path, log):
> +        repo_attrs = RepoAttributes(codename, arch, "main")
> +        RepoBase.__init__(self,
> +                          path,
> +                          log,
> +                          None,
> +                          repo_attrs,
> +                          "toolchain",
> +                          "Toolchain binary packages Repo")
> +
>  
>  class ProjectRepo(RepoBase):
> -    def __init__( self, arch, codename, path, log):
> -        repo_attrs = RepoAttributes (codename, arch + ' source', "main")
> -        RepoBase.__init__( self,
> -                           path,
> -                           log,
> -                           None,
> -                           repo_attrs,
> -                           "Local",
> -                           "Self build packages Repo" )
> +    def __init__(self, arch, codename, path, log):
> +        repo_attrs = RepoAttributes(codename, arch + ' source', "main")
> +        RepoBase.__init__(self,
> +                          path,
> +                          log,
> +                          None,
> +                          repo_attrs,
> +                          "Local",
> +                          "Self build packages Repo")
> diff --git a/elbepack/rfs.py b/elbepack/rfs.py
> index a04c7b65..6272c61e 100644
> --- a/elbepack/rfs.py
> +++ b/elbepack/rfs.py
> @@ -11,28 +11,33 @@ from elbepack.efilesystem import BuildImgFs
>  from elbepack.templates import write_pack_template, get_preseed, preseed_to_text
>  from elbepack.shellhelper import CommandError
>  
> +
>  class DebootstrapException (Exception):
> -    def __init__ (self):
> -        Exception.__init__ (self, "Debootstrap Failed")
> +    def __init__(self):
> +        Exception.__init__(self, "Debootstrap Failed")
> +
>  
>  class BuildEnv ():
> -    def __init__ (self, xml, log, path, build_sources=False, clean=False ):
> +    def __init__(self, xml, log, path, build_sources=False, clean=False):
>  
>          self.xml = xml
>          self.log = log
>          self.path = path
>  
> -        self.rfs = BuildImgFs (path, xml.defs["userinterpr"])
> +        self.rfs = BuildImgFs(path, xml.defs["userinterpr"])
>  
>          if clean:
>              self.rfs.rmtree("")
>  
>          # TODO think about reinitialization if elbe_version differs
> -        if not self.rfs.isfile( "etc/elbe_version" ):
> +        if not self.rfs.isfile("etc/elbe_version"):
>              # avoid starting daemons inside the buildenv
> -            self.rfs.mkdir_p ("usr/sbin")
> -            self.rfs.write_file ("usr/sbin/policy-rc.d", 0o755, "#!/bin/sh\nexit 101\n")
> -            self.debootstrap ()
> +            self.rfs.mkdir_p("usr/sbin")
> +            self.rfs.write_file(
> +                "usr/sbin/policy-rc.d",
> +                0o755,
> +                "#!/bin/sh\nexit 101\n")
> +            self.debootstrap()
>              self.fresh_debootstrap = True
>              self.need_dumpdebootstrap = True
>          else:
> @@ -40,28 +45,30 @@ class BuildEnv ():
>              self.fresh_debootstrap = False
>              self.need_dumpdebootstrap = False
>  
> -        self.initialize_dirs (build_sources=build_sources)
> -        self.create_apt_prefs ()
> +        self.initialize_dirs(build_sources=build_sources)
> +        self.create_apt_prefs()
>  
>      def cdrom_umount(self):
> -        if self.xml.prj.has ("mirror/cdrom"):
> -            cdrompath = self.rfs.fname( "cdrom" )
> -            self.log.do ('umount "%s"' % cdrompath)
> +        if self.xml.prj.has("mirror/cdrom"):
> +            cdrompath = self.rfs.fname("cdrom")
> +            self.log.do('umount "%s"' % cdrompath)
>  
>      def cdrom_mount(self):
>          if self.xml.has("project/mirror/cdrom"):
>              cdrompath = self.rfs.fname("cdrom")
> -            self.log.do( 'mkdir -p "%s"' % cdrompath )
> -            self.log.do( 'mount -o loop "%s" "%s"'
> -               % (self.xml.text("project/mirror/cdrom"), cdrompath ) )
> +            self.log.do('mkdir -p "%s"' % cdrompath)
> +            self.log.do('mount -o loop "%s" "%s"'
> +                        % (self.xml.text("project/mirror/cdrom"), cdrompath))
>  
>      def __enter__(self):
> -        if os.path.exists (self.path+'/../repo/pool'):
> +        if os.path.exists(self.path + '/../repo/pool'):
>              self.log.do("mv %s/../repo %s" % (self.path, self.path))
> -            self.log.do('echo "deb copy:///repo %s main" > %s/etc/apt/sources.list.d/local.list' % (
> -                        self.xml.text ("project/suite"), self.path))
> -            self.log.do('echo "deb-src copy:///repo %s main" >> %s/etc/apt/sources.list.d/local.list' % (
> -                        self.xml.text ("project/suite"), self.path))
> +            self.log.do(
> +                'echo "deb copy:///repo %s main" > %s/etc/apt/sources.list.d/local.list' %
> +                (self.xml.text("project/suite"), self.path))
> +            self.log.do(
> +                'echo "deb-src copy:///repo %s main" >> %s/etc/apt/sources.list.d/local.list' %
> +                (self.xml.text("project/suite"), self.path))
>          self.cdrom_mount()
>          self.rfs.__enter__()
>          return self
> @@ -69,20 +76,22 @@ class BuildEnv ():
>      def __exit__(self, type, value, traceback):
>          self.rfs.__exit__(type, value, traceback)
>          self.cdrom_umount()
> -        if os.path.exists (self.path+'/repo'):
> +        if os.path.exists(self.path + '/repo'):
>              self.log.do("mv %s/repo %s/../" % (self.path, self.path))
>              self.log.do("rm %s/etc/apt/sources.list.d/local.list" % self.path)
>  
> -    def debootstrap (self):
> +    def debootstrap(self):
>  
>          cleanup = False
> -        suite = self.xml.prj.text ("suite")
> +        suite = self.xml.prj.text("suite")
>  
> -        primary_mirror = self.xml.get_primary_mirror(self.rfs.fname('/cdrom/targetrepo') )
> +        primary_mirror = self.xml.get_primary_mirror(
> +            self.rfs.fname('/cdrom/targetrepo'))
>  
>          if self.xml.prj.has("mirror/primary_proxy"):
>              os.environ["no_proxy"] = "10.0.2.2,localhost,127.0.0.1"
> -            proxy = self.xml.prj.text("mirror/primary_proxy").strip().replace("LOCALMACHINE", "localhost")
> +            proxy = self.xml.prj.text(
> +                "mirror/primary_proxy").strip().replace("LOCALMACHINE", "localhost")
>              os.environ["http_proxy"] = proxy
>              os.environ["https_proxy"] = proxy
>          else:
> @@ -93,37 +102,38 @@ class BuildEnv ():
>          os.environ["LANG"] = "C"
>          os.environ["LANGUAGE"] = "C"
>          os.environ["LC_ALL"] = "C"
> -        os.environ["DEBIAN_FRONTEND"]="noninteractive"
> -        os.environ["DEBONF_NONINTERACTIVE_SEEN"]="true"
> +        os.environ["DEBIAN_FRONTEND"] = "noninteractive"
> +        os.environ["DEBONF_NONINTERACTIVE_SEEN"] = "true"
>  
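While touching these lines: `DEBONF_NONINTERACTIVE_SEEN` looks like a long-standing typo for `DEBCONF_NONINTERACTIVE_SEEN` (debconf, not "debonf"), carried through the reformat — if so, debconf never sees it. Out of scope for this mechanical change, but a sketch of the presumably intended setup (`noninteractive_env` is an illustrative helper, not proposed code):

```python
import os


def noninteractive_env():
    # Presumably intended spelling: DEBCONF..., not DEBONF...
    env = dict(os.environ)
    env["DEBIAN_FRONTEND"] = "noninteractive"
    env["DEBCONF_NONINTERACTIVE_SEEN"] = "true"
    return env
```

Might deserve its own one-line fix on top of this series.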
> -        self.log.h2( "debootstrap log" )
> +        self.log.h2("debootstrap log")
>  
> -        arch = self.xml.text ("project/buildimage/arch", key="arch")
> +        arch = self.xml.text("project/buildimage/arch", key="arch")
>  
>          host_arch = self.log.get_command_out(
> -                "dpkg --print-architecture").strip ()
> +            "dpkg --print-architecture").strip()
>  
> -        if not self.xml.is_cross (host_arch):
> +        if not self.xml.is_cross(host_arch):
>              # ignore gpg verification if install from cdrom, cause debootstrap
>              # seems to ignore /etc/apt/trusted.gpg.d/elbe-keyring.gpg
>              # 01/2017 manut
> -            if self.xml.has("project/noauth") or self.xml.has("project/mirror/cdrom"):
> +            if self.xml.has(
> +                    "project/noauth") or self.xml.has("project/mirror/cdrom"):
>                  cmd = 'debootstrap --no-check-gpg --arch=%s "%s" "%s" "%s"' % (
> -                            arch, suite, self.rfs.path, primary_mirror)
> +                    arch, suite, self.rfs.path, primary_mirror)
>              else:
>                  cmd = 'debootstrap --arch=%s "%s" "%s" "%s"' % (
> -                            arch, suite, self.rfs.path, primary_mirror)
> +                    arch, suite, self.rfs.path, primary_mirror)
>  
>              try:
>                  self.cdrom_mount()
> -                self.log.do( cmd )
> +                self.log.do(cmd)
>              except CommandError:
>                  cleanup = True
> -                raise DebootstrapException ()
> +                raise DebootstrapException()
>              finally:
>                  self.cdrom_umount()
>                  if cleanup:
> -                    self.rfs.rmtree ("/")
> +                    self.rfs.rmtree("/")
>  
>              return
>  
> @@ -141,41 +151,40 @@ class BuildEnv ():
>  
>          try:
>              self.cdrom_mount()
> -            self.log.do (cmd)
> +            self.log.do(cmd)
>  
>              ui = "/usr/share/elbe/qemu-elbe/" + self.xml.defs["userinterpr"]
>  
> -            if not os.path.exists (ui):
> +            if not os.path.exists(ui):
>                  ui = "/usr/bin/" + self.xml.defs["userinterpr"]
>  
> -            self.log.do ('cp %s %s' % (ui, self.rfs.fname( "usr/bin" )))
> +            self.log.do('cp %s %s' % (ui, self.rfs.fname("usr/bin")))
>  
>              if self.xml.has("project/noauth"):
> -                self.log.chroot (self.rfs.path,
> -                                 '/debootstrap/debootstrap --no-check-gpg --second-stage')
> +                self.log.chroot(
> +                    self.rfs.path,
> +                    '/debootstrap/debootstrap --no-check-gpg --second-stage')
>              else:
> -                self.log.chroot (self.rfs.path,
> -                                 '/debootstrap/debootstrap --second-stage')
> -
> +                self.log.chroot(self.rfs.path,
> +                                '/debootstrap/debootstrap --second-stage')
>  
> -            self.log.chroot (self.rfs.path, 'dpkg --configure -a')
> +            self.log.chroot(self.rfs.path, 'dpkg --configure -a')
>  
>          except CommandError:
>              cleanup = True
> -            raise DebootstrapException ()
> +            raise DebootstrapException()
>          finally:
>              self.cdrom_umount()
>              if cleanup:
> -                self.rfs.rmtree ("/")
> -
> +                self.rfs.rmtree("/")
>  
>      def virtapt_init_dirs(self):
> -        self.rfs.mkdir_p ("/cache/archives/partial")
> -        self.rfs.mkdir_p ("/etc/apt/preferences.d")
> -        self.rfs.mkdir_p ("/db")
> -        self.rfs.mkdir_p ("/log")
> -        self.rfs.mkdir_p ("/state/lists/partial")
> -        self.rfs.touch_file ("/state/status")
> +        self.rfs.mkdir_p("/cache/archives/partial")
> +        self.rfs.mkdir_p("/etc/apt/preferences.d")
> +        self.rfs.mkdir_p("/db")
> +        self.rfs.mkdir_p("/log")
> +        self.rfs.mkdir_p("/state/lists/partial")
> +        self.rfs.touch_file("/state/status")
>  
>      def import_keys(self):
>          if self.xml.has('project/mirror/url-list'):
> @@ -185,36 +194,40 @@ class BuildEnv ():
>                      name = l.split('/')[-1]        # Filename of key
>  
>                      myKey = urllib2.urlopen(l).read()
> -                    self.log.do('echo "%s" > %s' % (myKey, self.rfs.fname("tmp/key.pub")))
> +                    self.log.do(
> +                        'echo "%s" > %s' %
> +                        (myKey, self.rfs.fname("tmp/key.pub")))
>                      with self.rfs:
> -                        self.log.chroot(self.rfs.path, 'apt-key add /tmp/key.pub' )
> +                        self.log.chroot(
> +                            self.rfs.path, 'apt-key add /tmp/key.pub')
>                      self.log.do('rm -f %s' % self.rfs.fname("tmp/key.pub"))
>  
> -    def initialize_dirs (self, build_sources=False):
> -        mirror = self.xml.create_apt_sources_list (build_sources=build_sources)
> +    def initialize_dirs(self, build_sources=False):
> +        mirror = self.xml.create_apt_sources_list(build_sources=build_sources)
>  
>          if self.rfs.exists("etc/apt/sources.list"):
>              self.rfs.remove("etc/apt/sources.list")
>  
> -        self.rfs.write_file ("etc/apt/sources.list", 0o644, mirror)
> +        self.rfs.write_file("etc/apt/sources.list", 0o644, mirror)
>  
> -        self.rfs.mkdir_p( "var/cache/elbe" )
> +        self.rfs.mkdir_p("var/cache/elbe")
>  
> -        preseed = get_preseed( self.xml )
> -        preseed_txt = preseed_to_text( preseed )
> -        self.rfs.write_file( "var/cache/elbe/preseed.txt", 0o644, preseed_txt )
> +        preseed = get_preseed(self.xml)
> +        preseed_txt = preseed_to_text(preseed)
> +        self.rfs.write_file("var/cache/elbe/preseed.txt", 0o644, preseed_txt)
>          with self.rfs:
> -            self.log.chroot( self.rfs.path, 'debconf-set-selections < %s' % self.rfs.fname("var/cache/elbe/preseed.txt") )
> -
> +            self.log.chroot(
> +                self.rfs.path, 'debconf-set-selections < %s' %
> +                self.rfs.fname("var/cache/elbe/preseed.txt"))
>  
> -    def create_apt_prefs (self):
> +    def create_apt_prefs(self):
>  
>          filename = self.rfs.path + "/etc/apt/preferences"
>  
> -        if os.path.exists (filename):
> -            os.remove (filename)
> +        if os.path.exists(filename):
> +            os.remove(filename)
>  
> -        self.rfs.mkdir_p ("/etc/apt")
> +        self.rfs.mkdir_p("/etc/apt")
>  
>          pinned_origins = []
>          if self.xml.has('project/mirror/url-list'):
> @@ -223,7 +236,7 @@ class BuildEnv ():
>                      continue
>  
>                  repo = url.node('binary')
> -                if not 'pin' in repo.et.attrib:
> +                if 'pin' not in repo.et.attrib:
>                      continue
>  
>                  origin = urlparse.urlsplit(repo.et.text.strip()).hostname
> @@ -232,34 +245,56 @@ class BuildEnv ():
>                      package = repo.et.attrib['package']
>                  else:
>                      package = '*'
> -                pinning = { 'pin':     pin,
> -                            'origin':  origin,
> -                            'package': package }
> +                pinning = {'pin': pin,
> +                           'origin': origin,
> +                           'package': package}
>                  pinned_origins.append(pinning)
>  
> -        d = { "xml":   self.xml,
> -              "prj":   self.xml.node("/project"),
> -              "pkgs":  self.xml.node("/target/pkg-list"),
> -              "porgs": pinned_origins }
> +        d = {"xml": self.xml,
> +             "prj": self.xml.node("/project"),
> +             "pkgs": self.xml.node("/target/pkg-list"),
> +             "porgs": pinned_origins}
>  
> -        write_pack_template( filename, "preferences.mako", d )
> +        write_pack_template(filename, "preferences.mako", d)
>  
> -    def seed_etc( self ):
> +    def seed_etc(self):
>          passwd = self.xml.text("target/passwd")
> -        self.log.chroot( self.rfs.path,
> -             """/bin/sh -c 'echo "%s\\n%s\\n" | passwd'""" % (passwd, passwd) )
> +        self.log.chroot(
> +            self.rfs.path, """/bin/sh -c 'echo "%s\\n%s\\n" | passwd'""" %
> +            (passwd, passwd))
>  
>          hostname = self.xml.text("target/hostname")
>          domain = self.xml.text("target/domain")
>  
> -        self.log.chroot( self.rfs.path, """/bin/sh -c 'echo "127.0.0.1 %s.%s %s elbe-daemon" >> /etc/hosts'""" % (hostname, domain, hostname) )
> -        self.log.chroot( self.rfs.path, """/bin/sh -c 'echo "%s" > /etc/hostname'""" % hostname )
> -        self.log.chroot( self.rfs.path, """/bin/sh -c 'echo "%s.%s" > /etc/mailname'""" % (hostname, domain) )
> -
> -        if self.xml.has( "target/console" ):
> -            serial_con, serial_baud = self.xml.text( "target/console" ).split(',')
> +        self.log.chroot(
> +            self.rfs.path,
> +            """/bin/sh -c 'echo "127.0.0.1 %s.%s %s elbe-daemon" >> /etc/hosts'""" %
> +            (hostname,
> +             domain,
> +             hostname))
> +        self.log.chroot(
> +            self.rfs.path,
> +            """/bin/sh -c 'echo "%s" > /etc/hostname'""" %
> +            hostname)
> +        self.log.chroot(
> +            self.rfs.path, """/bin/sh -c 'echo "%s.%s" > /etc/mailname'""" %
> +            (hostname, domain))
> +
> +        if self.xml.has("target/console"):
> +            serial_con, serial_baud = self.xml.text(
> +                "target/console").split(',')
>              if serial_baud:
> -                self.log.chroot(self.rfs.path,"""/bin/sh -c '[ -f /etc/inittab ] && echo "T0:23:respawn:/sbin/getty -L %s %s vt100" >> /etc/inittab'""" % (serial_con, serial_baud), allow_fail=True )
> -                self.log.chroot(self.rfs.path, """/bin/sh -c '[ -f /lib/systemd/system/serial-getty@.service ] && ln -s /lib/systemd/system/serial-getty@.service /etc/systemd/system/getty.target.wants/serial-getty@%s.service'""" % serial_con, allow_fail=True )
> +                self.log.chroot(
> +                    self.rfs.path,
> +                    """/bin/sh -c '[ -f /etc/inittab ] && echo "T0:23:respawn:/sbin/getty -L %s %s vt100" >> /etc/inittab'""" %
> +                    (serial_con,
> +                     serial_baud),
> +                    allow_fail=True)
> +                self.log.chroot(
> +                    self.rfs.path,
> +                    """/bin/sh -c '[ -f /lib/systemd/system/serial-getty@.service ] && ln -s /lib/systemd/system/serial-getty@.service /etc/systemd/system/getty.target.wants/serial-getty@%s.service'""" %
> +                    serial_con,
> +                    allow_fail=True)
>              else:
> -                self.log.printo("parsing console tag failed, needs to be of '/dev/ttyS0,115200' format.")
> +                self.log.printo(
> +                    "parsing console tag failed, needs to be of '/dev/ttyS0,115200' format.")
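
Not blocking for this patch, but the console tag handling above only checks that a baud field is present after the split. If this gets reworked beyond the autopep8 pass, a small parser would make the '/dev/ttyS0,115200' contract explicit. Untested sketch; `parse_console` is a hypothetical helper, not current elbe API:

```python
def parse_console(tag):
    """Parse a "<device>,<baudrate>" console tag, e.g. "/dev/ttyS0,115200".

    Returns (device, baud) on success, None on a malformed tag,
    mirroring the error path that prints the format hint.
    """
    parts = tag.split(',')
    if len(parts) != 2:
        return None
    dev, baud = parts[0].strip(), parts[1].strip()
    if not dev or not baud.isdigit():
        return None
    return dev, int(baud)
```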
> diff --git a/elbepack/rpcaptcache.py b/elbepack/rpcaptcache.py
> index 694a66e4..9cd50d0c 100644
> --- a/elbepack/rpcaptcache.py
> +++ b/elbepack/rpcaptcache.py
> @@ -15,98 +15,120 @@ import sys
>  import time
>  import warnings
>  
> +
>  class InChRootObject(object):
>      def __init__(self, rfs):
>          self.rfs = rfs
>          self.rfs.enter_chroot()
> -        self.finalizer = Finalize(self,self.rfs.leave_chroot,exitpriority=10)
> +        self.finalizer = Finalize(self, self.rfs.leave_chroot, exitpriority=10)
> +
>  
>  class RPCAPTCache(InChRootObject):
> -    def __init__( self, rfs, log, arch, notifier=None, norecommend = False, noauth = True ):
> -        sys.stdout   = open (log, 'a', buffering=0)
> -        sys.stderr   = open (log, 'a', buffering=0)
> -        self.logfile = open (log, 'a', buffering=0)
> +    def __init__(
> +            self,
> +            rfs,
> +            log,
> +            arch,
> +            notifier=None,
> +            norecommend=False,
> +            noauth=True):
> +        sys.stdout = open(log, 'a', buffering=0)
> +        sys.stderr = open(log, 'a', buffering=0)
> +        self.logfile = open(log, 'a', buffering=0)
>  
>          InChRootObject.__init__(self, rfs)
>  
>          self.notifier = notifier
> -        config.set ("APT::Architecture", arch)
> +        config.set("APT::Architecture", arch)
>          if norecommend:
> -            config.set ("APT::Install-Recommends", "1")
> +            config.set("APT::Install-Recommends", "1")
>          else:
> -            config.set ("APT::Install-Recommends", "0")
> +            config.set("APT::Install-Recommends", "0")
>  
>          if noauth:
> -            config.set ("APT::Get::AllowUnauthenticated", "1")
> +            config.set("APT::Get::AllowUnauthenticated", "1")
>          else:
> -            config.set ("APT::Get::AllowUnauthenticated", "0")
> +            config.set("APT::Get::AllowUnauthenticated", "0")
>  
> -        self.cache = Cache(progress=ElbeOpProgress ())
> +        self.cache = Cache(progress=ElbeOpProgress())
>          self.cache.open(progress=ElbeOpProgress())
>  
> -    def dbg_dump( self, filename ):
> -        ts = time.localtime ()
> +    def dbg_dump(self, filename):
> +        ts = time.localtime()
>          filename = filename + (
> -                '_%02d%02d%02d' % (ts.tm_hour, ts.tm_min, ts.tm_sec))
> -        with open (filename, 'w') as dbg:
> +            '_%02d%02d%02d' % (ts.tm_hour, ts.tm_min, ts.tm_sec))
> +        with open(filename, 'w') as dbg:
>              for p in self.cache:
> -                dbg.write ('%s %s %d %d %d %d %d %d %d %d %d %d %d %d\n' % (
> -                    p.name, p.candidate.version, p.marked_keep, p.marked_delete,
> -                    p.marked_upgrade, p.marked_downgrade, p.marked_install,
> -                    p.marked_reinstall, p.is_auto_installed, p.is_installed,
> -                    p.is_auto_removable, p.is_now_broken, p.is_inst_broken,
> -                    p.is_upgradable))
> -
> -    def get_sections( self ):
> -        ret = list(set( [p.section for p in self.cache] ))
> +                dbg.write(
> +                    '%s %s %d %d %d %d %d %d %d %d %d %d %d %d\n' %
> +                    (p.name,
> +                     p.candidate.version,
> +                     p.marked_keep,
> +                     p.marked_delete,
> +                     p.marked_upgrade,
> +                     p.marked_downgrade,
> +                     p.marked_install,
> +                     p.marked_reinstall,
> +                     p.is_auto_installed,
> +                     p.is_installed,
> +                     p.is_auto_removable,
> +                     p.is_now_broken,
> +                     p.is_inst_broken,
> +                     p.is_upgradable))
> +
> +    def get_sections(self):
> +        ret = list(set([p.section for p in self.cache]))
>          ret.sort()
>          return ret
>  
> -    def get_pkglist( self, section ):
> +    def get_pkglist(self, section):
>          if section == 'all':
> -            ret = [ APTPackage(p) for p in self.cache ]
> +            ret = [APTPackage(p) for p in self.cache]
>          else:
> -            ret = [ APTPackage(p) for p in self.cache if p.section == section ]
> +            ret = [APTPackage(p) for p in self.cache if p.section == section]
>  
>          return ret
>  
> -    def mark_install( self, pkgname, version, from_user=True, nodeps=False ):
> +    def mark_install(self, pkgname, version, from_user=True, nodeps=False):
>          p = self.cache[pkgname]
>          if version:
> -            p.candidate = p.versions[ version ]
> -        p.mark_install( auto_fix = not nodeps,
> -                auto_inst = not nodeps,
> -                from_user = from_user )
> +            p.candidate = p.versions[version]
> +        p.mark_install(auto_fix=not nodeps,
> +                       auto_inst=not nodeps,
> +                       from_user=from_user)
>  
> -    def mark_install_devpkgs( self, ignore_pkgs, ignore_dev_pkgs ):
> -        ignore_pkgs.remove ('libc6') # we don't want to ignore libc
> +    def mark_install_devpkgs(self, ignore_pkgs, ignore_dev_pkgs):
> +        ignore_pkgs.remove('libc6')  # we don't want to ignore libc
>          # we don't want to ignore libstdc++
>          try:
> -            ignore_pkgs.remove ('libstdc++5')
> -        except:
> +            ignore_pkgs.remove('libstdc++5')
> +        except BaseException:
>              pass
>          try:
> -            ignore_pkgs.remove ('libstdc++6')
> -        except:
> +            ignore_pkgs.remove('libstdc++6')
> +        except BaseException:
>              pass
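
Worth noting (not blocking): autopep8 rewrote the bare `except:` into `except BaseException:`, which still swallows KeyboardInterrupt and SystemExit. Catching the concrete exception is tighter. Sketch, assuming `ignore_pkgs` is a plain list so `remove()` raises ValueError:

```python
def drop_ignored_libs(ignore_pkgs):
    # we never want libc or libstdc++ on the ignore list
    for name in ('libc6', 'libstdc++5', 'libstdc++6'):
        try:
            ignore_pkgs.remove(name)
        except ValueError:  # not in the list, nothing to drop
            pass
    return ignore_pkgs
```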
>          # list all debian src packages of all installed packages that don't
>          # come from debootstrap
> -        src_list = [p.candidate.source_name for p in self.cache if p.is_installed and p.name not in ignore_pkgs ]
> +        src_list = [
> +            p.candidate.source_name for p in self.cache if p.is_installed and p.name not in ignore_pkgs]
>          # go through all packages, remember package if its source package
>          # matches one of the installed packages and the binary package is a
>          # '-dev' package
> -        dev_list = [s for s in self.cache if (s.candidate.source_name in src_list and s.name.endswith ('-dev'))]
> +        dev_list = [
> +            s for s in self.cache if (
> +                s.candidate.source_name in src_list and s.name.endswith('-dev'))]
>          for p in dev_list:
>              if p.name not in ignore_dev_pkgs:
> -                p.mark_install ()
> +                p.mark_install()
>          # ensure that the symlinks package will be installed (it's needed for
>          # fixing links inside the sysroot
> -        self.cache ['symlinks'].mark_install ()
> +        self.cache['symlinks'].mark_install()
>  
>          for p in ignore_dev_pkgs:
> -            self.cache [p].mark_delete ()
> +            self.cache[p].mark_delete()
>  
> -    def cleanup (self, exclude_pkgs):
> +    def cleanup(self, exclude_pkgs):
>          for p in self.cache:
>              if (p.is_installed and not p.is_auto_installed) or p.is_auto_removable:
>                  remove = True
> @@ -114,54 +136,53 @@ class RPCAPTCache(InChRootObject):
>                      if x == p.name:
>                          remove = False
>                  if remove:
> -                    p.mark_delete( auto_fix=True, purge=True )
> +                    p.mark_delete(auto_fix=True, purge=True)
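
An observation on the surrounding code rather than the reformat: the remove-flag inner loop in `cleanup()` is a membership test in disguise. Equivalent decision logic as a hypothetical helper, assuming `exclude_pkgs` is an iterable of package names:

```python
def should_remove(pkg, exclude_pkgs):
    # same decision as the flag/inner-loop in cleanup() above
    wanted_gone = (pkg.is_installed and not pkg.is_auto_installed) \
        or pkg.is_auto_removable
    return wanted_gone and pkg.name not in exclude_pkgs
```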
>  
> -    def mark_upgrade( self, pkgname, version ):
> +    def mark_upgrade(self, pkgname, version):
>          p = self.cache[pkgname]
>          if version:
> -            p.candidate = p.versions[ version ]
> +            p.candidate = p.versions[version]
>          p.mark_upgrade()
>  
> -    def mark_delete( self, pkgname, version ):
> +    def mark_delete(self, pkgname, version):
>          p = self.cache[pkgname]
> -        p.mark_delete( purge=True )
> +        p.mark_delete(purge=True)
>  
> -    def mark_keep( self, pkgname, version ):
> +    def mark_keep(self, pkgname, version):
>          p = self.cache[pkgname]
>          p.mark_keep()
>  
> -
> -    def update( self ):
> +    def update(self):
>          self.cache.update(fetch_progress=ElbeAcquireProgress())
>          self.cache.open(progress=ElbeOpProgress())
>  
>      def commit(self):
> -        os.environ["DEBIAN_FRONTEND"]="noninteractive"
> -        os.environ["DEBONF_NONINTERACTIVE_SEEN"]="true"
> -        self.cache.commit( ElbeAcquireProgress(),
> -                           ElbeInstallProgress(fileno=self.logfile.fileno()) )
> +        os.environ["DEBIAN_FRONTEND"] = "noninteractive"
> +        os.environ["DEBONF_NONINTERACTIVE_SEEN"] = "true"
> +        self.cache.commit(ElbeAcquireProgress(),
> +                          ElbeInstallProgress(fileno=self.logfile.fileno()))
>          self.cache.open(progress=ElbeOpProgress())
>  
>      def clear(self):
>          self.cache.clear()
>  
>      def get_dependencies(self, pkgname):
> -        deps = getalldeps( self.cache, pkgname )
> +        deps = getalldeps(self.cache, pkgname)
>          return [APTPackage(p, cache=self.cache) for p in deps]
>  
> -    def get_installed_pkgs( self, section='all' ):
> +    def get_installed_pkgs(self, section='all'):
>          # avoid DeprecationWarning: MD5Hash is deprecated, use Hashes instead
>          # triggerd by python-apt
>          with warnings.catch_warnings():
> -            warnings.filterwarnings("ignore",category=DeprecationWarning)
> +            warnings.filterwarnings("ignore", category=DeprecationWarning)
>              if section == 'all':
>                  pl = [APTPackage(p) for p in self.cache if p.is_installed]
>              else:
> -                pl = [APTPackage(p) for p in self.cache if (p.section == section
> -                    and p.is_installed)]
> +                pl = [APTPackage(p) for p in self.cache if (
> +                    p.section == section and p.is_installed)]
>              return pl
>  
> -    def get_fileindex( self ):
> +    def get_fileindex(self):
>          index = {}
>  
>          for p in self.cache:
> @@ -171,47 +192,50 @@ class RPCAPTCache(InChRootObject):
>  
>          return index
>  
> -    def get_marked_install( self, section='all' ):
> +    def get_marked_install(self, section='all'):
>          if section == 'all':
>              ret = [APTPackage(p) for p in self.cache if p.marked_install]
>          else:
>              ret = [APTPackage(p) for p in self.cache if (p.section == section
> -                and p.marked_install)]
> +                                                         and p.marked_install)]
>          return ret
>  
>      def get_upgradeable(self, section='all'):
>          if section == 'all':
> -            ret = [ APTPackage(p) for p in self.cache if p.is_upgradable]
> +            ret = [APTPackage(p) for p in self.cache if p.is_upgradable]
>          else:
> -            ret = [ APTPackage(p) for p in self.cache if (p.section == section
> -                and p.is_upgradable)]
> +            ret = [APTPackage(p) for p in self.cache if (p.section == section
> +                                                         and p.is_upgradable)]
>          return ret
>  
> -    def upgrade( self, dist_upgrade = False ):
> -        self.cache.upgrade( dist_upgrade )
> +    def upgrade(self, dist_upgrade=False):
> +        self.cache.upgrade(dist_upgrade)
>  
> -    def get_changes( self ):
> +    def get_changes(self):
>          changes = self.cache.get_changes()
> -        return [ APTPackage(p) for p in changes ]
> +        return [APTPackage(p) for p in changes]
>  
> -    def has_pkg( self, pkgname ):
> +    def has_pkg(self, pkgname):
>          return pkgname in self.cache
>  
> -    def is_installed( self, pkgname ):
> -        if not pkgname in self.cache:
> +    def is_installed(self, pkgname):
> +        if pkgname not in self.cache:
>              return False
>          return self.cache[pkgname].is_installed
>  
> -    def get_pkg( self, pkgname ):
> -        return APTPackage( self.cache[pkgname] )
> +    def get_pkg(self, pkgname):
> +        return APTPackage(self.cache[pkgname])
>  
> -    def get_pkgs( self, pkgname ):
> -        return [APTPackage (self.cache[p]) for p in sorted (self.cache.keys()) if pkgname in p.lower()]
> +    def get_pkgs(self, pkgname):
> +        return [
> +            APTPackage(
> +                self.cache[p]) for p in sorted(
> +                self.cache.keys()) if pkgname in p.lower()]
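
The double `-a` aggressiveness wraps comprehensions like `get_pkgs()` in fairly unreadable ways. Behavior is unchanged, but if anyone reflows it by hand later, breaking at the `for`/`if` keywords reads better. Illustrative equivalent on a plain dict (a stand-in for the apt cache, no APTPackage wrapper):

```python
def match_pkgs(cache, pkgname):
    # same filter as get_pkgs(): substring match on lower-cased names,
    # results ordered by package name
    return [cache[name]
            for name in sorted(cache)
            if pkgname in name.lower()]
```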
>  
> -    def compare_versions( self, ver1, ver2 ):
> -        return version_compare( ver1, ver2 )
> +    def compare_versions(self, ver1, ver2):
> +        return version_compare(ver1, ver2)
>  
> -    def download_binary( self, pkgname, path, version=None ):
> +    def download_binary(self, pkgname, path, version=None):
>          p = self.cache[pkgname]
>          if version is None:
>              pkgver = p.installed
> @@ -219,10 +243,10 @@ class RPCAPTCache(InChRootObject):
>              pkgver = p.versions[version]
>  
>          rel_filename = pkgver.fetch_binary(path,
> -                ElbeAcquireProgress())
> -        return self.rfs.fname( rel_filename )
> +                                           ElbeAcquireProgress())
> +        return self.rfs.fname(rel_filename)
>  
> -    def download_source( self, pkgname, path, version=None ):
> +    def download_source(self, pkgname, path, version=None):
>          p = self.cache[pkgname]
>          if version is None:
>              pkgver = p.installed
> @@ -231,15 +255,23 @@ class RPCAPTCache(InChRootObject):
>  
>          rel_filename = pkgver.fetch_source(path,
>                                             ElbeAcquireProgress(), unpack=False)
> -        return self.rfs.fname( rel_filename )
> +        return self.rfs.fname(rel_filename)
>  
>  
>  class MyMan(BaseManager):
>      pass
>  
> -MyMan.register( "RPCAPTCache", RPCAPTCache )
>  
> -def get_rpcaptcache(rfs, log, arch, notifier=None, norecommend=False, noauth=True):
> +MyMan.register("RPCAPTCache", RPCAPTCache)
> +
> +
> +def get_rpcaptcache(
> +        rfs,
> +        log,
> +        arch,
> +        notifier=None,
> +        norecommend=False,
> +        noauth=True):
>      mm = MyMan()
>      mm.start()
>  
> diff --git a/elbepack/shellhelper.py b/elbepack/shellhelper.py
> index 9273aeac..c63de75a 100644
> --- a/elbepack/shellhelper.py
> +++ b/elbepack/shellhelper.py
> @@ -6,6 +6,7 @@
>  import os
>  from subprocess import Popen, PIPE, STDOUT
>  
> +
>  class CommandError(Exception):
>      def __init__(self, cmd, returncode):
>          Exception.__init__(self)
> @@ -14,7 +15,8 @@ class CommandError(Exception):
>  
>      def __repr__(self):
>          return "Error: %d returned from Command %s" % (
> -                                             self.returncode, self.cmd)
> +            self.returncode, self.cmd)
> +
>  
>  def system(cmd, allow_fail=False):
>      ret = os.system(cmd)
> @@ -26,7 +28,7 @@ def system(cmd, allow_fail=False):
>  
>  def command_out(cmd, input=None, output=PIPE):
>      if input is None:
> -        p = Popen(cmd, shell=True, stdout=output, stderr=STDOUT )
> +        p = Popen(cmd, shell=True, stdout=output, stderr=STDOUT)
>          out, stderr = p.communicate()
>      else:
>          p = Popen(cmd, shell=True, stdout=output, stderr=STDOUT, stdin=PIPE)
> @@ -34,8 +36,9 @@ def command_out(cmd, input=None, output=PIPE):
>  
>      return p.returncode, out
>  
> +
>  def system_out(cmd, input=None, allow_fail=False):
> -    code, out = command_out(cmd,input)
> +    code, out = command_out(cmd, input)
>  
>      if code != 0:
>          if not allow_fail:
> @@ -43,9 +46,10 @@ def system_out(cmd, input=None, allow_fail=False):
>  
>      return out
>  
> +
>  def command_out_stderr(cmd, input=None):
>      if input is None:
> -        p = Popen(cmd, shell=True, stdout=PIPE, stderr=PIPE )
> +        p = Popen(cmd, shell=True, stdout=PIPE, stderr=PIPE)
>          output, stderr = p.communicate()
>      else:
>          p = Popen(cmd, shell=True, stdout=PIPE, stderr=PIPE, stdin=PIPE)
> @@ -53,8 +57,9 @@ def command_out_stderr(cmd, input=None):
>  
>      return p.returncode, output, stderr
>  
> +
>  def system_out_stderr(cmd, input=None, allow_fail=False):
> -    code, out, err = command_out(cmd,input)
> +    code, out, err = command_out(cmd, input)
>  
>      if code != 0:
>          if not allow_fail:
> diff --git a/elbepack/soapclient.py b/elbepack/soapclient.py
> index abdb378b..041923e8 100644
> --- a/elbepack/soapclient.py
> +++ b/elbepack/soapclient.py
> @@ -21,6 +21,7 @@ from httplib import BadStatusLine
>  from elbepack.filesystem import Filesystem
>  from elbepack.elbexml import ElbeXML, ValidationMode
>  
> +
>  def set_suds_debug(debug):
>      import logging
>      if debug:
> @@ -36,11 +37,12 @@ def set_suds_debug(debug):
>          logging.getLogger('suds.umx.typed').setLevel(logging.ERROR)
>          logging.getLogger('suds.client').setLevel(logging.CRITICAL)
>  
> +
>  class ElbeSoapClient(object):
> -    def __init__(self, host, port, user, passwd, retries = 10, debug=False):
> +    def __init__(self, host, port, user, passwd, retries=10, debug=False):
>  
>          # Mess with suds logging, for debug, or squelch warnings
> -        set_suds_debug (debug)
> +        set_suds_debug(debug)
>  
>          # Attributes
>          self.wsdl = "http://" + host + ":" + str(port) + "/soap/?wsdl"
> @@ -51,7 +53,7 @@ class ElbeSoapClient(object):
>          while self.control is None:
>              self.retries += 1
>              try:
> -                self.control = Client (self.wsdl)
> +                self.control = Client(self.wsdl)
>              except socket.error as e:
>                  if self.retries > retries:
>                      raise e
> @@ -70,19 +72,19 @@ class ElbeSoapClient(object):
>          self.service = self.control.service
>  
>          # We have a Connection, now login
> -        self.service.login(user,passwd)
> +        self.service.login(user, passwd)
>  
> -    def download_file (self, builddir, filename, dst_fname):
> -        fp = file (dst_fname, "w")
> +    def download_file(self, builddir, filename, dst_fname):
> +        fp = file(dst_fname, "w")
>          part = 0
>  
>          while True:
>              try:
> -                ret = self.service.get_file (builddir, filename, part)
> +                ret = self.service.get_file(builddir, filename, part)
>              except BadStatusLine as e:
>                  retry = retry - 1
>                  if not retry:
> -                    fp.close ()
> +                    fp.close()
>                      print("file transfer failed", file=sys.stderr)
>                      sys.exit(20)
>  
> @@ -90,28 +92,34 @@ class ElbeSoapClient(object):
>                  print(ret, file=sys.stderr)
>                  sys.exit(20)
>              if ret == "EndOfFile":
> -                fp.close ()
> +                fp.close()
>                  return
>  
> -            fp.write (binascii.a2b_base64 (ret))
> +            fp.write(binascii.a2b_base64(ret))
>              part = part + 1
>  
> +
>  class ClientAction(object):
>      actiondict = {}
> +
>      @classmethod
>      def register(cls, action):
>          cls.actiondict[action.tag] = action
> +
>      @classmethod
>      def print_actions(cls):
>          print("available subcommands are:", file=sys.stderr)
>          for a in cls.actiondict:
>              print("   %s" % a, file=sys.stderr)
> +
>      def __new__(cls, node):
>          action = cls.actiondict[node]
>          return object.__new__(action)
> +
>      def __init__(self, node):
>          self.node = node
>  
> +
>  class RemoveLogAction(ClientAction):
>  
>      tag = 'rm_log'
> @@ -120,16 +128,17 @@ class RemoveLogAction(ClientAction):
>          ClientAction.__init__(self, node)
>  
>      def execute(self, client, opt, args):
> -        if len (args) != 1:
> +        if len(args) != 1:
>              print("usage: elbe control rm_log <project_dir>", file=sys.stderr)
>              sys.exit(20)
>  
>          builddir = args[0]
> -        client.service.rm_log (builddir)
> +        client.service.rm_log(builddir)
>  
>  
>  ClientAction.register(RemoveLogAction)
>  
> +
>  class ListProjectsAction(ClientAction):
>  
>      tag = 'list_projects'
> @@ -138,16 +147,19 @@ class ListProjectsAction(ClientAction):
>          ClientAction.__init__(self, node)
>  
>      def execute(self, client, opt, args):
> -        projects = client.service.list_projects ()
> +        projects = client.service.list_projects()
>  
>          try:
>              for p in projects.SoapProject:
> -                print("%s\t%s\t%s\t%s\t%s" %  (p.builddir, p.name, p.version, p.status, str(p.edit)))
> +                print("%s\t%s\t%s\t%s\t%s" %
> +                      (p.builddir, p.name, p.version, p.status, str(p.edit)))
>          except AttributeError:
>              print("No projects configured in initvm")
>  
> +
>  ClientAction.register(ListProjectsAction)
>  
> +
>  class ListUsersAction(ClientAction):
>  
>      tag = 'list_users'
> @@ -156,13 +168,15 @@ class ListUsersAction(ClientAction):
>          ClientAction.__init__(self, node)
>  
>      def execute(self, client, opt, args):
> -        users = client.service.list_users ()
> +        users = client.service.list_users()
>  
>          for u in users.string:
>              print(u)
>  
> +
>  ClientAction.register(ListUsersAction)
>  
> +
>  class CreateProjectAction(ClientAction):
>  
>      tag = 'create_project'
> @@ -172,11 +186,13 @@ class CreateProjectAction(ClientAction):
>  
>      def execute(self, client, opt, args):
>  
> -        uuid = client.service.new_project ()
> +        uuid = client.service.new_project()
>          print(uuid)
>  
> +
>  ClientAction.register(CreateProjectAction)
>  
> +
>  class ResetProjectAction(ClientAction):
>  
>      tag = 'reset_project'
> @@ -185,12 +201,15 @@ class ResetProjectAction(ClientAction):
>          ClientAction.__init__(self, node)
>  
>      def execute(self, client, opt, args):
> -        if len (args) != 1:
> -            print("usage: elbe control reset_project <project_dir>", file=sys.stderr)
> +        if len(args) != 1:
> +            print(
> +                "usage: elbe control reset_project <project_dir>",
> +                file=sys.stderr)
>              sys.exit(20)
>  
>          builddir = args[0]
> -        client.service.reset_project (builddir)
> +        client.service.reset_project(builddir)
> +
>  
>  ClientAction.register(ResetProjectAction)
>  
> @@ -203,15 +222,19 @@ class DeleteProjectAction(ClientAction):
>          ClientAction.__init__(self, node)
>  
>      def execute(self, client, opt, args):
> -        if len (args) != 1:
> -            print("usage: elbe control del_project <project_dir>", file=sys.stderr)
> +        if len(args) != 1:
> +            print(
> +                "usage: elbe control del_project <project_dir>",
> +                file=sys.stderr)
>              sys.exit(20)
>  
>          builddir = args[0]
> -        client.service.del_project (builddir)
> +        client.service.del_project(builddir)
> +
>  
>  ClientAction.register(DeleteProjectAction)
>  
> +
>  class SetXmlAction(ClientAction):
>  
>      tag = 'set_xml'
> @@ -220,40 +243,45 @@ class SetXmlAction(ClientAction):
>          ClientAction.__init__(self, node)
>  
>      def execute(self, client, opt, args):
> -        if len (args) != 2:
> -            print("usage: elbe control set_xml <project_dir> <xml>", file=sys.stderr)
> +        if len(args) != 2:
> +            print(
> +                "usage: elbe control set_xml <project_dir> <xml>",
> +                file=sys.stderr)
>              sys.exit(20)
>  
>          builddir = args[0]
>          filename = args[1]
>  
>          try:
> -            x = ElbeXML (filename, skip_validate=True, url_validation=ValidationMode.NO_CHECK)
> +            x = ElbeXML(
> +                filename,
> +                skip_validate=True,
> +                url_validation=ValidationMode.NO_CHECK)
>          except IOError as e:
>              print("%s is not a valid elbe xml file" % filename)
> -            sys.exit (20)
> +            sys.exit(20)
>  
> -        if not x.has ('target'):
> -          print("<target> is missing, this file can't be built in an initvm",
> +        if not x.has('target'):
> +            print("<target> is missing, this file can't be built in an initvm",
>                    file=sys.stderr)
> -          sys.exit (20)
> +            sys.exit(20)
>  
>          size = 1024 * 1024
>          part = 0
> -        with open (filename, "rb") as fp:
> +        with open(filename, "rb") as fp:
>              while (True):
> -                xml_base64 = binascii.b2a_base64(fp.read (size))
> +                xml_base64 = binascii.b2a_base64(fp.read(size))
>                  # finish upload
> -                if len (xml_base64) == 1:
> -                    part = client.service.upload_file (builddir,
> -                                                       "source.xml",
> -                                                       xml_base64,
> -                                                       -1)
> +                if len(xml_base64) == 1:
> +                    part = client.service.upload_file(builddir,
> +                                                      "source.xml",
> +                                                      xml_base64,
> +                                                      -1)
>                  else:
> -                    part = client.service.upload_file (builddir,
> -                                                       "source.xml",
> -                                                       xml_base64,
> -                                                       part)
> +                    part = client.service.upload_file(builddir,
> +                                                      "source.xml",
> +                                                      xml_base64,
> +                                                      part)
>                  if part == -1:
>                      print("project busy, upload not allowed")
>                      return part
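The chunked-upload loop reflowed above is the core protocol here: read fixed-size chunks, base64-encode them, and send a final empty chunk with part number -1 to finish. A standalone sketch of that pattern (`FakeService` is a stand-in for the SOAP client, not the real elbe API):

```python
import binascii
import io

def upload_chunked(service, builddir, name, fp, size=1024 * 1024):
    # A chunk of length 1 (just base64's trailing newline) means EOF;
    # it is sent with part number -1 to finish the upload.
    part = 0
    while True:
        chunk = binascii.b2a_base64(fp.read(size))
        if len(chunk) == 1:
            return service.upload_file(builddir, name, chunk, -1)
        part = service.upload_file(builddir, name, chunk, part)
        if part == -1:
            # server reports the project as busy; upload not allowed
            return -1

class FakeService(object):
    # illustrative stand-in for client.service; not the real elbe API
    def __init__(self):
        self.calls = []
    def upload_file(self, builddir, name, data, part):
        self.calls.append((part, data))
        return 0 if part == -1 else part + 1

svc = FakeService()
status = upload_chunked(svc, "/var/cache/elbe/prj", "source.xml",
                        io.BytesIO(b"hello"), size=4)
```

The empty-read check works because `binascii.b2a_base64(b"")` returns just `b"\n"`, so a length of 1 reliably marks end of file.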
> @@ -273,13 +301,14 @@ class BuildAction(ClientAction):
>          ClientAction.__init__(self, node)
>  
>      def execute(self, client, opt, args):
> -        if len (args) != 1:
> +        if len(args) != 1:
>              print("usage: elbe control build <project_dir>", file=sys.stderr)
>              sys.exit(20)
>  
>          builddir = args[0]
> -        client.service.build (builddir, opt.build_bin, opt.build_sources,
> -                opt.skip_pbuilder)
> +        client.service.build(builddir, opt.build_bin, opt.build_sources,
> +                             opt.skip_pbuilder)
> +
>  
>  ClientAction.register(BuildAction)
>  
> @@ -292,12 +321,15 @@ class BuildSysrootAction(ClientAction):
>          ClientAction.__init__(self, node)
>  
>      def execute(self, client, opt, args):
> -        if len (args) != 1:
> -            print("usage: elbe control build-sysroot <project_dir>", file=sys.stderr)
> +        if len(args) != 1:
> +            print(
> +                "usage: elbe control build-sysroot <project_dir>",
> +                file=sys.stderr)
>              sys.exit(20)
>  
>          builddir = args[0]
> -        client.service.build_sysroot (builddir)
> +        client.service.build_sysroot(builddir)
> +
>  
>  ClientAction.register(BuildSysrootAction)
>  
> @@ -310,8 +342,10 @@ class GetFileAction(ClientAction):
>          ClientAction.__init__(self, node)
>  
>      def execute(self, client, opt, args):
> -        if len (args) != 2:
> -            print("usage: elbe control get_file <project_dir> <file>", file=sys.stderr)
> +        if len(args) != 2:
> +            print(
> +                "usage: elbe control get_file <project_dir> <file>",
> +                file=sys.stderr)
>              sys.exit(20)
>  
>          builddir = args[0]
> @@ -319,16 +353,18 @@ class GetFileAction(ClientAction):
>          dst_fname = filename
>  
>          if opt.output:
> -            fs = Filesystem ('/')
> -            dst = os.path.abspath (opt.output)
> -            fs.mkdir_p (dst)
> -            dst_fname = str (os.path.join (dst, filename))
> +            fs = Filesystem('/')
> +            dst = os.path.abspath(opt.output)
> +            fs.mkdir_p(dst)
> +            dst_fname = str(os.path.join(dst, filename))
>  
> -        client.download_file (builddir, filename, dst_fname)
> +        client.download_file(builddir, filename, dst_fname)
>          print("%s saved" % dst_fname)
>  
> +
>  ClientAction.register(GetFileAction)
>  
> +
>  class BuildChrootAction(ClientAction):
>  
>      tag = 'build_chroot_tarball'
> @@ -337,16 +373,20 @@ class BuildChrootAction(ClientAction):
>          ClientAction.__init__(self, node)
>  
>      def execute(self, client, opt, args):
> -        if len (args) != 1:
> -            print("usage: elbe control build_chroot_tarball <project_dir>", file=sys.stderr)
> +        if len(args) != 1:
> +            print(
> +                "usage: elbe control build_chroot_tarball <project_dir>",
> +                file=sys.stderr)
>              sys.exit(20)
>  
>          builddir = args[0]
>  
> -        client.service.build_chroot_tarball (builddir)
> +        client.service.build_chroot_tarball(builddir)
> +
>  
>  ClientAction.register(BuildChrootAction)
>  
> +
>  class DumpFileAction(ClientAction):
>  
>      tag = 'dump_file'
> @@ -355,8 +395,10 @@ class DumpFileAction(ClientAction):
>          ClientAction.__init__(self, node)
>  
>      def execute(self, client, opt, args):
> -        if len (args) != 2:
> -            print("usage: elbe control dump_file <project_dir> <file>", file=sys.stderr)
> +        if len(args) != 2:
> +            print(
> +                "usage: elbe control dump_file <project_dir> <file>",
> +                file=sys.stderr)
>              sys.exit(20)
>  
>          builddir = args[0]
> @@ -364,18 +406,20 @@ class DumpFileAction(ClientAction):
>  
>          part = 0
>          while True:
> -            ret = client.service.get_file (builddir, filename, part)
> +            ret = client.service.get_file(builddir, filename, part)
>              if ret == "FileNotFound":
>                  print(ret, file=sys.stderr)
>                  sys.exit(20)
>              if ret == "EndOfFile":
>                  return
>  
> -            sys.stdout.write (binascii.a2b_base64 (ret))
> +            sys.stdout.write(binascii.a2b_base64(ret))
>              part = part + 1
>  
> +
>  ClientAction.register(DumpFileAction)
>  
> +
>  class GetFilesAction(ClientAction):
>  
>      tag = 'get_files'
> @@ -384,17 +428,19 @@ class GetFilesAction(ClientAction):
>          ClientAction.__init__(self, node)
>  
>      def execute(self, client, opt, args):
> -        if len (args) != 1:
> -            print("usage: elbe control get_files <project_dir>", file=sys.stderr)
> +        if len(args) != 1:
> +            print(
> +                "usage: elbe control get_files <project_dir>",
> +                file=sys.stderr)
>              sys.exit(20)
>  
>          builddir = args[0]
> -        files = client.service.get_files (builddir)
> +        files = client.service.get_files(builddir)
>  
>          nfiles = 0
>  
>          for f in files[0]:
> -            if opt.pbuilder_only and not f.name.startswith ('pbuilder'):
> +            if opt.pbuilder_only and not f.name.startswith('pbuilder'):
>                  continue
>  
>              if opt.matches and not fnmatch.fnmatch(f.name, opt.matches):
> @@ -407,17 +453,19 @@ class GetFilesAction(ClientAction):
>                  print("%s" % (f.name))
>  
>              if opt.output:
> -                fs = Filesystem ('/')
> -                dst = os.path.abspath (opt.output)
> -                fs.mkdir_p (dst)
> -                dst_fname = str (os.path.join (dst, os.path.basename (f.name)))
> -                client.download_file (builddir, f.name, dst_fname)
> +                fs = Filesystem('/')
> +                dst = os.path.abspath(opt.output)
> +                fs.mkdir_p(dst)
> +                dst_fname = str(os.path.join(dst, os.path.basename(f.name)))
> +                client.download_file(builddir, f.name, dst_fname)
>  
>          if nfiles == 0:
> -            sys.exit (10)
> +            sys.exit(10)
> +
>  
>  ClientAction.register(GetFilesAction)
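The `pbuilder_only` / `matches` filtering in GetFilesAction above can be exercised in isolation; a minimal sketch (the file names are made up):

```python
import fnmatch

def filter_files(names, pbuilder_only=False, matches=None):
    # Mirror of the selection logic: optional 'pbuilder' prefix filter,
    # then an optional shell-style glob filter.
    out = []
    for name in names:
        if pbuilder_only and not name.startswith('pbuilder'):
            continue
        if matches and not fnmatch.fnmatch(name, matches):
            continue
        out.append(name)
    return out

files = ["pbuilder_1.0.deb", "image.tar.gz", "pbuilder.log"]
```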
>  
> +
>  class WaitProjectBusyAction(ClientAction):
>  
>      tag = 'wait_busy'
> @@ -426,15 +474,17 @@ class WaitProjectBusyAction(ClientAction):
>          ClientAction.__init__(self, node)
>  
>      def execute(self, client, opt, args):
> -        if len (args) != 1:
> -            print("usage: elbe control wait_busy <project_dir>", file=sys.stderr)
> +        if len(args) != 1:
> +            print(
> +                "usage: elbe control wait_busy <project_dir>",
> +                file=sys.stderr)
>              sys.exit(20)
>  
>          builddir = args[0]
>          part = 1
>  
>          while True:
> -            busy = client.service.get_project_busy (builddir, part)
> +            busy = client.service.get_project_busy(builddir, part)
>              if busy == 'FINISH':
>                  break
>              else:
> @@ -449,17 +499,24 @@ class WaitProjectBusyAction(ClientAction):
>  
>                          localtime = time.asctime(time.localtime(time.time()))
>                          try:
> -                            print(localtime + " -- " + log[1].replace('\n', ''))
> +                            print(
> +                                localtime +
> +                                " -- " +
> +                                log[1].replace(
> +                                    '\n',
> +                                    ''))
>                          except IndexError:
>                              pass
>                      else:
>                          time.sleep(1)
>                  else:
>                      print("strange part: %d (skipped)" % part)
> -                    part = part+1
> +                    part = part + 1
> +
>  
>  ClientAction.register(WaitProjectBusyAction)
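The wait_busy loop above polls numbered log parts until the service answers FINISH. A simplified sketch of that polling shape (the real protocol splits each answer into a part number and a log line; here `get_busy` just returns the line or an empty string):

```python
import time

def wait_until_finished(get_busy, delay=0.0):
    # Request log parts in sequence; retry the same part while nothing
    # new is available, stop when the service reports 'FINISH'.
    part = 1
    lines = []
    while True:
        msg = get_busy(part)
        if msg == 'FINISH':
            return lines
        if msg:
            lines.append(msg.replace('\n', ''))
            part = part + 1
        else:
            time.sleep(delay)  # nothing new for this part yet; poll again

feed = iter(['[INFO] building\n', '', '[INFO] done\n', 'FINISH'])
result = wait_until_finished(lambda part: next(feed))
```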
>  
> +
>  class SetCdromAction(ClientAction):
>  
>      tag = 'set_cdrom'
> @@ -470,25 +527,29 @@ class SetCdromAction(ClientAction):
>      def execute(self, client, opt, args):
>          size = 1024 * 1024
>  
> -        if len (args) != 2:
> -            print("usage: elbe control set_cdrom <project_dir> <cdrom file>", file=sys.stderr)
> +        if len(args) != 2:
> +            print(
> +                "usage: elbe control set_cdrom <project_dir> <cdrom file>",
> +                file=sys.stderr)
>              sys.exit(20)
>  
>          builddir = args[0]
>          filename = args[1]
>  
> -        fp = file (filename, "r")
> -        client.service.start_cdrom (builddir)
> +        fp = file(filename, "r")
> +        client.service.start_cdrom(builddir)
>          while True:
> -            bindata = fp.read (size)
> -            client.service.append_cdrom (builddir, binascii.b2a_base64 (bindata))
> -            if len (bindata) != size:
> +            bindata = fp.read(size)
> +            client.service.append_cdrom(builddir, binascii.b2a_base64(bindata))
> +            if len(bindata) != size:
>                  break
>  
> -        client.service.finish_cdrom (builddir)
> +        client.service.finish_cdrom(builddir)
> +
>  
>  ClientAction.register(SetCdromAction)
>  
> +
>  class SetOrigAction(ClientAction):
>  
>      tag = 'set_orig'
> @@ -497,27 +558,32 @@ class SetOrigAction(ClientAction):
>          ClientAction.__init__(self, node)
>  
>      def execute(self, client, opt, args):
> -        size =  1024 * 1024
> +        size = 1024 * 1024
>  
> -        if len (args) != 2:
> -            print("usage: elbe control set_orig <project_dir> <orig file>", file=sys.stderr)
> +        if len(args) != 2:
> +            print(
> +                "usage: elbe control set_orig <project_dir> <orig file>",
> +                file=sys.stderr)
>              sys.exit(20)
>  
>          builddir = args[0]
>          filename = args[1]
>  
> -        fp = file (filename, "r")
> -        client.service.start_upload_orig (builddir, os.path.basename(filename))
> +        fp = file(filename, "r")
> +        client.service.start_upload_orig(builddir, os.path.basename(filename))
>          while True:
> -            bindata = fp.read (size)
> -            client.service.append_upload_orig (builddir, binascii.b2a_base64 (bindata))
> -            if len (bindata) != size:
> +            bindata = fp.read(size)
> +            client.service.append_upload_orig(
> +                builddir, binascii.b2a_base64(bindata))
> +            if len(bindata) != size:
>                  break
>  
> -        client.service.finish_upload_orig (builddir)
> +        client.service.finish_upload_orig(builddir)
> +
>  
>  ClientAction.register(SetOrigAction)
>  
> +
>  class ShutdownInitvmAction(ClientAction):
>  
>      tag = 'shutdown_initvm'
> @@ -526,18 +592,20 @@ class ShutdownInitvmAction(ClientAction):
>          ClientAction.__init__(self, node)
>  
>      def execute(self, client, opt, args):
> -        if len (args) != 0:
> +        if len(args) != 0:
>              print("usage: elbe control shutdown_initvm", file=sys.stderr)
>              sys.exit(20)
>  
>          # if shutdown kills the daemon before it can answer the request
>          try:
> -            client.service.shutdown_initvm ()
> +            client.service.shutdown_initvm()
>          except BadStatusLine:
>              pass
>  
> +
>  ClientAction.register(ShutdownInitvmAction)
>  
> +
>  class SetPdebuilderAction(ClientAction):
>  
>      tag = 'set_pdebuild'
> @@ -548,25 +616,30 @@ class SetPdebuilderAction(ClientAction):
>      def execute(self, client, opt, args):
>          size = 1024 * 1024
>  
> -        if len (args) != 2:
> -            print("usage: elbe control set_pdebuild <project_dir> <pdebuild file>", file=sys.stderr)
> +        if len(args) != 2:
> +            print(
> +                "usage: elbe control set_pdebuild <project_dir> <pdebuild file>",
> +                file=sys.stderr)
>              sys.exit(20)
>  
>          builddir = args[0]
>          filename = args[1]
>  
> -        fp = file (filename, "r")
> -        client.service.start_pdebuild (builddir)
> +        fp = file(filename, "r")
> +        client.service.start_pdebuild(builddir)
>          while True:
> -            bindata = fp.read (size)
> -            client.service.append_pdebuild (builddir, binascii.b2a_base64 (bindata))
> -            if len (bindata) != size:
> +            bindata = fp.read(size)
> +            client.service.append_pdebuild(
> +                builddir, binascii.b2a_base64(bindata))
> +            if len(bindata) != size:
>                  break
>  
> -        client.service.finish_pdebuild (builddir)
> +        client.service.finish_pdebuild(builddir)
> +
>  
>  ClientAction.register(SetPdebuilderAction)
>  
> +
>  class BuildPbuilderAction(ClientAction):
>  
>      tag = 'build_pbuilder'
> @@ -575,15 +648,19 @@ class BuildPbuilderAction(ClientAction):
>          ClientAction.__init__(self, node)
>  
>      def execute(self, client, opt, args):
> -        if len (args) != 1:
> -            print("usage: elbe control build_pbuilder <project_dir>", file=sys.stderr)
> +        if len(args) != 1:
> +            print(
> +                "usage: elbe control build_pbuilder <project_dir>",
> +                file=sys.stderr)
>              sys.exit(20)
>  
>          builddir = args[0]
> -        client.service.build_pbuilder (builddir)
> +        client.service.build_pbuilder(builddir)
> +
>  
>  ClientAction.register(BuildPbuilderAction)
>  
> +
>  class UpdatePbuilderAction(ClientAction):
>  
>      tag = 'update_pbuilder'
> @@ -592,25 +669,32 @@ class UpdatePbuilderAction(ClientAction):
>          ClientAction.__init__(self, node)
>  
>      def execute(self, client, opt, args):
> -        if len (args) != 1:
> -            print("usage: elbe control update_pbuilder <project_dir>", file=sys.stderr)
> +        if len(args) != 1:
> +            print(
> +                "usage: elbe control update_pbuilder <project_dir>",
> +                file=sys.stderr)
>              sys.exit(20)
>  
>          builddir = args[0]
> -        client.service.update_pbuilder (builddir)
> +        client.service.update_pbuilder(builddir)
> +
>  
>  ClientAction.register(UpdatePbuilderAction)
>  
> +
>  class RepoAction(ClientAction):
>      repoactiondict = {}
> +
>      @classmethod
>      def register(cls, action):
>          cls.repoactiondict[action.tag] = action
> +
>      @classmethod
>      def print_actions(cls):
>          print("available subcommands are:", file=sys.stderr)
>          for a in cls.repoactiondict:
>              print("   %s" % a, file=sys.stderr)
> +
>      def __new__(cls, node):
>          action = cls.repoactiondict[node]
>          return object.__new__(action)
> @@ -624,12 +708,15 @@ class ListPackagesAction(RepoAction):
>          RepoAction.__init__(self, node)
>  
>      def execute(self, client, opt, args):
> -        if len (args) != 1:
> -            print("usage: elbe prjrepo list_packages <project_dir>", file=sys.stderr)
> +        if len(args) != 1:
> +            print(
> +                "usage: elbe prjrepo list_packages <project_dir>",
> +                file=sys.stderr)
>              sys.exit(20)
>  
>          builddir = args[0]
> -        print(client.service.list_packages (builddir))
> +        print(client.service.list_packages(builddir))
> +
>  
>  RepoAction.register(ListPackagesAction)
>  
> @@ -642,19 +729,24 @@ class DownloadAction(RepoAction):
>          RepoAction.__init__(self, node)
>  
>      def execute(self, client, opt, args):
> -        if len (args) != 1:
> +        if len(args) != 1:
>              print("usage: elbe prjrepo download <project_dir>", file=sys.stderr)
>              sys.exit(20)
>  
>          builddir = args[0]
>          filename = "repo.tar.gz"
> -        client.service.tar_prjrepo (builddir, filename)
> +        client.service.tar_prjrepo(builddir, filename)
>  
> -        dst_fname = os.path.join(".", "elbe-projectrepo-" + datetime.now().strftime("%Y%m%d-%H%M%S") + ".tar.gz")
> +        dst_fname = os.path.join(
> +            ".",
> +            "elbe-projectrepo-" +
> +            datetime.now().strftime("%Y%m%d-%H%M%S") +
> +            ".tar.gz")
>  
> -        client.download_file (builddir, filename, dst_fname)
> +        client.download_file(builddir, filename, dst_fname)
>          print("%s saved" % dst_fname)
>  
> +
>  RepoAction.register(DownloadAction)
>  
>  
> @@ -669,20 +761,20 @@ class UploadPackageAction(RepoAction):
>          # Uploads file f into builddir in intivm
>          size = 1024 * 1024
>          part = 0
> -        with open (f, "rb") as fp:
> +        with open(f, "rb") as fp:
>              while (True):
> -                xml_base64 = binascii.b2a_base64(fp.read (size))
> +                xml_base64 = binascii.b2a_base64(fp.read(size))
>                  # finish upload
> -                if len (xml_base64) == 1:
> -                    part = client.service.upload_file (builddir,
> -                                                       f,
> -                                                       xml_base64,
> -                                                       -1)
> +                if len(xml_base64) == 1:
> +                    part = client.service.upload_file(builddir,
> +                                                      f,
> +                                                      xml_base64,
> +                                                      -1)
>                  else:
> -                    part = client.service.upload_file (builddir,
> -                                                       f,
> -                                                       xml_base64,
> -                                                       part)
> +                    part = client.service.upload_file(builddir,
> +                                                      f,
> +                                                      xml_base64,
> +                                                      part)
>                  if part == -1:
>                      print("project busy, upload not allowed")
>                      return -1
> @@ -691,8 +783,10 @@ class UploadPackageAction(RepoAction):
>                      break
>  
>      def execute(self, client, opt, args):
> -        if len (args) != 2:
> -            print("usage: elbe prjrepo upload_pkg <project_dir> <deb/dsc file>", file=sys.stderr)
> +        if len(args) != 2:
> +            print(
> +                "usage: elbe prjrepo upload_pkg <project_dir> <deb/dsc file>",
> +                file=sys.stderr)
>              sys.exit(20)
>  
>          builddir = args[0]
> @@ -704,12 +798,12 @@ class UploadPackageAction(RepoAction):
>          print("Check files...")
>  
>          # Check filetype
> -        if filename[-3:] not in ['dsc','deb']:
> +        if filename[-3:] not in ['dsc', 'deb']:
>              print("Error: Only .dsc and .deb files allowed to upload.")
>          else:
>              filetype = filename[-4:]
>  
> -        files = [filename] # list of all files which will be uploaded
> +        files = [filename]  # list of all files which will be uploaded
>  
>          # Parse .dsc-File and append neccessary source files to files
>          if filetype == '.dsc':
> @@ -723,7 +817,8 @@ class UploadPackageAction(RepoAction):
>                  print("File %s not found." % f)
>                  abort = True
>          # Abort if one or more source files are missing
> -        if abort: sys.exit(20)
> +        if abort:
> +            sys.exit(20)
>  
>          print("Start uploading file(s)...")
>          for f in files:
> diff --git a/elbepack/templates.py b/elbepack/templates.py
> index 7bd4232a..eb817aa9 100644
> --- a/elbepack/templates.py
> +++ b/elbepack/templates.py
> @@ -13,32 +13,39 @@ from elbepack.directories import mako_template_dir, default_preseed_fname
>  from mako.template import Template
>  from mako import exceptions
>  
> -def fix_linebreak_escapes (s):
> +
> +def fix_linebreak_escapes(s):
>      return s.replace('\\\n', '${"\\\\"}\n')
>  
> +
>  def template(fname, d, linebreak=False):
>      try:
>          if linebreak:
> -            return Template(filename=fname,preprocessor=fix_linebreak_escapes).render(**d)
> +            return Template(
> +                filename=fname,
> +                preprocessor=fix_linebreak_escapes).render(
> +                **d)
>          else:
>              return Template(filename=fname).render(**d)
> -    except:
> +    except BaseException:
>          print(exceptions.text_error_template().render())
>          raise
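For context on the `fix_linebreak_escapes` preprocessor touched above: Mako treats a trailing backslash as a line continuation, so the function rewrites literal backslash-newline pairs into a Mako expression that emits the backslash. The transformation itself is plain string replacement and can be checked without Mako:

```python
def fix_linebreak_escapes(s):
    # Replace a literal backslash-newline with a Mako expression that
    # renders a single backslash, so Mako does not swallow it as a
    # line continuation.
    return s.replace('\\\n', '${"\\\\"}\n')

src = 'line one \\\nline two\n'
out = fix_linebreak_escapes(src)
```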
>  
> -def write_template( outname, fname, d, linebreak=False ):
> +
> +def write_template(outname, fname, d, linebreak=False):
>      outfile = open(outname, "w")
> -    outfile.write( template( fname, d, linebreak ) )
> +    outfile.write(template(fname, d, linebreak))
>      outfile.close()
>  
> -def write_pack_template( outname, fname, d, linebreak=False ):
> -    template_name = os.path.join( mako_template_dir, fname )
>  
> -    write_template( outname, template_name, d, linebreak )
> +def write_pack_template(outname, fname, d, linebreak=False):
> +    template_name = os.path.join(mako_template_dir, fname)
> +
> +    write_template(outname, template_name, d, linebreak)
>  
>  
> -def get_preseed( xml ):
> -    def_xml = etree( default_preseed_fname )
> +def get_preseed(xml):
> +    def_xml = etree(default_preseed_fname)
>  
>      preseed = {}
>      for c in def_xml.node("/preseed"):
> @@ -58,8 +65,9 @@ def get_preseed( xml ):
>  
>      return preseed
>  
> -def get_initvm_preseed( xml ):
> -    def_xml = etree( default_preseed_fname )
> +
> +def get_initvm_preseed(xml):
> +    def_xml = etree(default_preseed_fname)
>  
>      preseed = {}
>      for c in def_xml.node("/preseed"):
> @@ -79,10 +87,10 @@ def get_initvm_preseed( xml ):
>  
>      return preseed
>  
> -def preseed_to_text( pres ):
> +
> +def preseed_to_text(pres):
>      retval = ""
> -    for k,v in pres.items():
> +    for k, v in pres.items():
>          retval += "%s\t%s\t%s\t%s\n" % (k[0], k[1], v[0], v[1])
>  
>      return retval
> -
> diff --git a/elbepack/toolchain.py b/elbepack/toolchain.py
> index a9a30e9a..4095d132 100644
> --- a/elbepack/toolchain.py
> +++ b/elbepack/toolchain.py
> @@ -7,10 +7,11 @@ import string
>  import glob
>  import os
>  
> -arch2triple = { "armhf": "arm-linux-gnueabihf", "armel": "arm-linux-gnueabi" }
> +arch2triple = {"armhf": "arm-linux-gnueabihf", "armel": "arm-linux-gnueabi"}
> +
>  
>  class Toolchain(object):
> -    def __init__( self, path, arch, triple=None ):
> +    def __init__(self, path, arch, triple=None):
>          self.path = path
>          self.arch = arch
>          if triple:
> @@ -18,53 +19,55 @@ class Toolchain(object):
>          else:
>              self.triple = arch2triple[arch]
>  
> -
> -    def get_fullpath( self, path ):
> +    def get_fullpath(self, path):
>          replace = {"triple": self.triple}
>          tpl = string.Template(path)
> -        p = tpl.substitute( replace )
> +        p = tpl.substitute(replace)
>  
> -        return os.path.join( self.path, p )
> +        return os.path.join(self.path, p)
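`get_fullpath` above relies on `string.Template` to expand the `${triple}` placeholder used in `libc_path` / `gcc_libpath`. A self-contained sketch of the same substitution (the paths are illustrative):

```python
import os
import string

def get_fullpath(base, triple, path):
    # Expand ${triple} in the relative path, then join it under the
    # toolchain root directory.
    tpl = string.Template(path)
    p = tpl.substitute({"triple": triple})
    return os.path.join(base, p)

result = get_fullpath("/opt/toolchain", "arm-linux-gnueabihf", "${triple}/lib")
```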
>  
> -
> -    def get_files_for_pkg( self, pkgname ):
> +    def get_files_for_pkg(self, pkgname):
>          globs = self.pkg_libs[pkgname]
>  
>          files = []
>          for g in globs:
> -            gg = os.path.join( self.get_fullpath( self.gcc_libpath ), g )
> -            files += glob.glob( gg )
> +            gg = os.path.join(self.get_fullpath(self.gcc_libpath), g)
> +            files += glob.glob(gg)
>  
>          return files
>  
> +
>  class LinaroToolchain(Toolchain):
>      libc_path = "${triple}/libc"
>      gcc_libpath = "${triple}/lib"
> -    pkg_libs = { "libasan0": ["libasan.so.*"],
> -                 "libatomic1": ["libatomic.so.*"],
> -                 "libgcc1": ["libgcc_s.so.*"],
> -                 "libgfortran3": ["libgfortran.so.*"],
> -                 "libgomp1": ["libgomp.so.*"],
> -                 "libmudflap0": ["libmudflap.so.*", "libmudflapth.so.*"],
> -                 "libssp0": ["libssp.so.*"],
> -                 "libstdc++6": ["libstdc++.so.*"] }
> -
> -    pkg_deps = { "libasan0": "libc6 (>= 2.13-28), libstdc++ (>= 4.8.3), libgcc1 (>= 4.8.3)",
> -                 "libatomic1": "libc6 (>= 2.13-28)",
> -                 "libgcc1": "libc6 (>= 2.13-28)",
> -                 "libgfortran3": "libgcc1 (>= 4.8.3)",
> -                 "libgomp1": "libc6 (>= 2.13-28)",
> -                 "libmudflap0": "libc6 (>= 2.13-28)",
> -                 "libssp0": "libc6 (>= 2.13-28)",
> -                 "libstdc++6": "libc6 (>= 2.13-28), libgcc1 (>= 4.8.3)" }
> +    pkg_libs = {"libasan0": ["libasan.so.*"],
> +                "libatomic1": ["libatomic.so.*"],
> +                "libgcc1": ["libgcc_s.so.*"],
> +                "libgfortran3": ["libgfortran.so.*"],
> +                "libgomp1": ["libgomp.so.*"],
> +                "libmudflap0": ["libmudflap.so.*", "libmudflapth.so.*"],
> +                "libssp0": ["libssp.so.*"],
> +                "libstdc++6": ["libstdc++.so.*"]}
> +
> +    pkg_deps = {
> +        "libasan0": "libc6 (>= 2.13-28), libstdc++ (>= 4.8.3), libgcc1 (>= 4.8.3)",
> +        "libatomic1": "libc6 (>= 2.13-28)",
> +        "libgcc1": "libc6 (>= 2.13-28)",
> +        "libgfortran3": "libgcc1 (>= 4.8.3)",
> +        "libgomp1": "libc6 (>= 2.13-28)",
> +        "libmudflap0": "libc6 (>= 2.13-28)",
> +        "libssp0": "libc6 (>= 2.13-28)",
> +        "libstdc++6": "libc6 (>= 2.13-28), libgcc1 (>= 4.8.3)"}
> +
>  
>  class LinaroToolchainArmel(LinaroToolchain):
>      gcc_libpath = "arm-linux-gnueabihf/lib/arm-linux-gnueabi"
>  
> -def get_toolchain( typ, path, arch ):
> -    if typ=="linaro":
> +
> +def get_toolchain(typ, path, arch):
> +    if typ == "linaro":
>          return LinaroToolchain(path, arch)
> -    if typ=="linaro_armel":
> +    if typ == "linaro_armel":
>          return LinaroToolchainArmel(path, arch)
>  
>      raise Exception
> diff --git a/elbepack/treeutils.py b/elbepack/treeutils.py
> index 2f12d788..3989bced 100644
> --- a/elbepack/treeutils.py
> +++ b/elbepack/treeutils.py
> @@ -6,11 +6,12 @@
>  from __future__ import print_function
>  
>  from lxml.etree import ElementTree, SubElement, Element
> -from lxml.etree import XMLParser,parse
> +from lxml.etree import XMLParser, parse
>  import copy
>  
>  # ElementTree helpers
>  
> +
>  class eiter(object):
>      def __init__(self, it):
>          self.it = it
> @@ -31,114 +32,113 @@ class eiter(object):
>      def next(self):
>          return self.__next__()
>  
> +
>  class ebase(object):
>      def __init__(self, et):
>          self.et = et
>  
> -    def text( self, path, **args ):
> -        el = self.et.find("./"+path)
> +    def text(self, path, **args):
> +        el = self.et.find("./" + path)
>          if (el is None) and "default" not in args:
> -            raise Exception( "Cant find path %s" % path )
> +            raise Exception("Cant find path %s" % path)
>          elif (el is None) and "default" in args:
>              default = args["default"]
>              if type(default) == str:
>                  return default
>              else:
> -                return default[ args["key"] ]
> +                return default[args["key"]]
>          else:
>              return el.text
>  
>      @property
> -    def tag( self ):
> +    def tag(self):
>          return self.et.tag
>  
> -    def node( self, path ):
> -        retval = self.et.find("./"+path)
> -        if not retval is None:
> -            return elem( retval )
> +    def node(self, path):
> +        retval = self.et.find("./" + path)
> +        if retval is not None:
> +            return elem(retval)
>          else:
>              return None
>  
> -    def all( self, path ):
> +    def all(self, path):
>          return map(elem, self.et.findall(path))
>  
> -    def __iter__( self ):
> +    def __iter__(self):
>          return eiter(iter(self.et))
>  
> -    def has( self, path ):
> +    def has(self, path):
>          return not (self.et.find(path) is None)
>  
> -    def set_text( self, text ):
> +    def set_text(self, text):
>          self.et.text = text
>  
> -    def clear( self ):
> +    def clear(self):
>          self.et.clear()
>  
> -    def append_treecopy( self, other ):
> -        self.et.append( copy.deepcopy( other.et ) )
> -
> +    def append_treecopy(self, other):
> +        self.et.append(copy.deepcopy(other.et))
>  
>  
>  class elem(ebase):
> -    def __init__( self, el ):
> -        ebase.__init__( self, el )
> +    def __init__(self, el):
> +        ebase.__init__(self, el)
>  
> -    def ensure_child( self, tag ):
> -        retval = self.et.find("./"+tag)
> -        if not retval is None:
> -            return elem( retval )
> +    def ensure_child(self, tag):
> +        retval = self.et.find("./" + tag)
> +        if retval is not None:
> +            return elem(retval)
>          else:
> -            return elem( SubElement( self.et, tag ) )
> +            return elem(SubElement(self.et, tag))
>  
> -    def append( self, tag ):
> -        retval = elem( SubElement( self.et, tag ) )
> +    def append(self, tag):
> +        retval = elem(SubElement(self.et, tag))
>          retval.et.tail = '\n'
>          return retval
>  
> -    def set_child_position( self, child, pos ):
> -        self.et.remove( child.et )
> -        self.et.insert( pos, child.et )
> -
> -    def remove_child( self, child ):
> -        self.et.remove( child.et )
> +    def set_child_position(self, child, pos):
> +        self.et.remove(child.et)
> +        self.et.insert(pos, child.et)
>  
> +    def remove_child(self, child):
> +        self.et.remove(child.et)
>  
>  
>  class etree(ebase):
> -    def  __init__( self, fname ):
> -        if not fname is None:
> +    def __init__(self, fname):
> +        if fname is not None:
>              parser = XMLParser(huge_tree=True, remove_comments=False)
> -            et = parse (fname, parser=parser)
> +            et = parse(fname, parser=parser)
>          else:
> -            et = ElementTree( file=None )
> +            et = ElementTree(file=None)
>  
> -        ebase.__init__( self, et )
> +        ebase.__init__(self, et)
>  
> -    def write( self, fname, encoding=None ):
> +    def write(self, fname, encoding=None):
>          # Make sure, that we end with a newline
>          self.et.getroot().tail = '\n'
>          self.et.write(fname, encoding=encoding)
>  
> -    def tostring (self):
> -        return self.et.tostring ()
> +    def tostring(self):
> +        return self.et.tostring()
>  
> -    def ensure_child( self, tag ):
> -        retval = self.et.find("./"+tag)
> -        if not retval is None:
> -            return elem( retval )
> +    def ensure_child(self, tag):
> +        retval = self.et.find("./" + tag)
> +        if retval is not None:
> +            return elem(retval)
>          else:
> -            return elem( SubElement( self.et.getroot(), tag ) )
> +            return elem(SubElement(self.et.getroot(), tag))
>  
> -    def set_child_position( self, child, pos ):
> +    def set_child_position(self, child, pos):
>          root = self.et.getroot()
> -        root.remove( child.et )
> -        root.insert( pos, child.et )
> +        root.remove(child.et)
> +        root.insert(pos, child.et)
>  
> -    def setroot( self, tag ):
> -        retval = elem( Element (tag) )
> -        self.et._setroot( retval.et )
> +    def setroot(self, tag):
> +        retval = elem(Element(tag))
> +        self.et._setroot(retval.et)
>          return retval
>  
>      @property
> -    def root( self ):
> +    def root(self):
>          return elem(self.et.getroot())
> diff --git a/elbepack/updated.py b/elbepack/updated.py
> index b4846706..1e630c01 100644
> --- a/elbepack/updated.py
> +++ b/elbepack/updated.py
> @@ -25,7 +25,7 @@ from syslog import syslog
>  from zipfile import (ZipFile, BadZipfile)
>  
>  from elbepack.aptprogress import (ElbeInstallProgress,
> - ElbeAcquireProgress, ElbeOpProgress)
> +                                  ElbeAcquireProgress, ElbeOpProgress)
>  from elbepack.gpg import unsign_file
>  from elbepack.treeutils import etree
>  
> @@ -35,8 +35,9 @@ try:
>  except ImportError:
>      downgrade_prevention_feature_available = False
>  
> +
>  class UpdateStatus:
> -    def __init__ (self):
> +    def __init__(self):
>          self.monitor = None
>          self.observer = None
>          self.soapserver = None
> @@ -46,136 +47,140 @@ class UpdateStatus:
>          self.verbose = False
>          self.repo_dir = ""
>          self.status_file = '/var/cache/elbe/update_state.txt'
> -        with rw_access_file (self.status_file, self) as f:
> -            f.write ('ready')
> -            f.truncate ()
> +        with rw_access_file(self.status_file, self) as f:
> +            f.write('ready')
> +            f.truncate()
>  
> -    def write_status (self, msg):
> -        with rw_access_file (self.status_file, self) as f:
> -            f.seek (0)
> -            f.write (msg)
> -            f.truncate ()
> +    def write_status(self, msg):
> +        with rw_access_file(self.status_file, self) as f:
> +            f.seek(0)
> +            f.write(msg)
> +            f.truncate()
>  
> -    def set_progress (self, step, percent=''):
> +    def set_progress(self, step, percent=''):
>          self.step = step
> -        self.write_status ('in_progress\t%d\t%s' % (step, percent))
> +        self.write_status('in_progress\t%d\t%s' % (step, percent))
>  
> -    def set_finished (self, result):
> +    def set_finished(self, result):
>          self.step = 0
> -        self.write_status ('finished\t%s' % result)
> +        self.write_status('finished\t%s' % result)
>  
> -    def log (self, msg):
> -        if not msg.endswith ('\n'):
> +    def log(self, msg):
> +        if not msg.endswith('\n'):
>              msg += '\n'
>  
>          # parse progress of apt from aptprogress output
>          if self.step == 3:
>              msg_a = msg.split()
> -            self.set_progress (3, msg_a [0])
> +            self.set_progress(3, msg_a[0])
>  
>          if self.step:
> -            msg = "(" + str (self.step) + "/3) " + msg
> +            msg = "(" + str(self.step) + "/3) " + msg
>          if self.monitor:
>              try:
> -                self.monitor.service.msg (msg)
> -            except:
> +                self.monitor.service.msg(msg)
> +            except BaseException:
>                  print("logging to monitor failed, removing monitor connection")
>                  self.monitor = None
>                  print(msg)
>          try:
> -            syslog (msg)
> -        except:
> +            syslog(msg)
> +        except BaseException:
>              print(msg)
>          if self.verbose:
>              print(msg)
>  
> +
>  class UpdateApplication (Application):
>      def __init__(self, *args, **kargs):
>          Application.__init__(self, *args, **kargs)
> -        self.status = UpdateStatus ()
> +        self.status = UpdateStatus()
> +
>  
>  class UpdateService (ServiceBase):
> -    @rpc (_returns=String)
> -    def list_snapshots (self):
> +    @rpc(_returns=String)
> +    def list_snapshots(self):
>          # use comma seperated string because array of string triggers a bug in
>          # python suds :(
>          snapshots = ""
>  
> -        if os.path.isfile ("/etc/elbe_base.xml"):
> +        if os.path.isfile("/etc/elbe_base.xml"):
>              snapshots += "base_version,"
>  
> -        lists = os.listdir ("/etc/apt/sources.list.d")
> +        lists = os.listdir("/etc/apt/sources.list.d")
>  
>          for l in lists:
> -            snapshots += l[:len(l)-5] + ","
> +            snapshots += l[:len(l) - 5] + ","
>  
>          return snapshots
>  
> -    @rpc (String, _returns=String)
> -    def apply_snapshot (self, version):
> +    @rpc(String, _returns=String)
> +    def apply_snapshot(self, version):
>          if version == "base_version":
>              fname = "/etc/elbe_base.xml"
>          else:
>              fname = self.app.status.repo_dir + "/" + version + "/new.xml"
>  
>          try:
> -            apply_update (fname, self.app.status)
> +            apply_update(fname, self.app.status)
>          except Exception as err:
>              print("%s" % str(err))
> -            self.app.status.set_finished ('error')
> +            self.app.status.set_finished('error')
>              return "apply snapshot %s failed" % version
>  
> -        self.app.status.set_finished ('OK')
> +        self.app.status.set_finished('OK')
>          return "snapshot %s applied" % version
>  
> -    @rpc (String)
> -    def register_monitor (self, wsdl_url):
> -        self.app.status.monitor = Client (wsdl_url)
> -        self.app.status.log ("connection established")
> +    @rpc(String)
> +    def register_monitor(self, wsdl_url):
> +        self.app.status.monitor = Client(wsdl_url)
> +        self.app.status.log("connection established")
> +
>  
>  class rw_access_file:
> -    def __init__ (self, filename, status):
> +    def __init__(self, filename, status):
>          self.filename = filename
> -        self.rw = rw_access (filename, status)
> +        self.rw = rw_access(filename, status)
>  
> -    def __enter__ (self):
> -        self.rw.__enter__ ()
> -        self.f = open (self.filename, 'w')
> +    def __enter__(self):
> +        self.rw.__enter__()
> +        self.f = open(self.filename, 'w')
>          return self.f
>  
> -    def __exit__ (self, type, value, traceback):
> -        if os.path.isfile (self.filename):
> -            self.f.close ()
> -        self.rw.__exit__ (type, value, traceback)
> +    def __exit__(self, type, value, traceback):
> +        if os.path.isfile(self.filename):
> +            self.f.close()
> +        self.rw.__exit__(type, value, traceback)
> +
>  
>  class rw_access:
> -    def __init__ (self, directory, status):
> +    def __init__(self, directory, status):
>          self.status = status
>          self.directory = directory
> -        self.mount = self.get_mount ()
> -        self.mount_orig = self.get_mount_status ()
> +        self.mount = self.get_mount()
> +        self.mount_orig = self.get_mount_status()
>  
> -    def __enter__ (self):
> +    def __enter__(self):
>          if self.mount_orig == 'ro':
> -            self.status.log ("remount %s read/writeable" % self.mount)
> +            self.status.log("remount %s read/writeable" % self.mount)
>              cmd = "mount -o remount,rw %s" % self.mount
> -            os.system (cmd)
> +            os.system(cmd)
>  
> -    def __exit__ (self, type, value, traceback):
> +    def __exit__(self, type, value, traceback):
>          if self.mount_orig == 'ro':
> -            self.status.log ("remount %s readonly" % self.mount)
> -            os.system ("sync")
> +            self.status.log("remount %s readonly" % self.mount)
> +            os.system("sync")
>              cmd = "mount -o remount,ro %s" % self.mount
> -            ret = os.system (cmd)
> +            ret = os.system(cmd)
>  
> -    def get_mount_status (self):
> -        with open ('/etc/mtab') as mtab:
> -            mtab_lines = mtab.readlines ()
> +    def get_mount_status(self):
> +        with open('/etc/mtab') as mtab:
> +            mtab_lines = mtab.readlines()
>              # take care, to use the last mount if overlayed mountpoints are
>              # used: e.g. rootfs / rootfs rw 0 0 vs. /dev/root / ext2 ro
>              ret = 'unknown'
>              for ml in mtab_lines:
> -                mle = ml.split (' ')
> +                mle = ml.split(' ')
>                  if mle[1] == self.mount:
>                      attr_list = mle[3].split(',')
>                      for attr in attr_list:
> @@ -185,15 +190,16 @@ class rw_access:
>                              ret = 'rw'
>          return ret
>  
> -    def get_mount (self):
> -        path = os.path.realpath (os.path.abspath (self.directory))
> +    def get_mount(self):
> +        path = os.path.realpath(os.path.abspath(self.directory))
>          while path != os.path.sep:
> -            if os.path.ismount (path):
> +            if os.path.ismount(path):
>                  return path
> -            path = os.path.abspath (os.path.join (path, os.pardir))
> +            path = os.path.abspath(os.path.join(path, os.pardir))
>          return path
>  
> -def fname_replace (s):
> +
> +def fname_replace(s):
>      allowed = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"
>      allowed += "0123456789"
>      allowed += "_-."
> @@ -205,65 +211,70 @@ def fname_replace (s):
>              res += '_'
>      return res
>  
> -def update_sourceslist (xml, update_dir, status):
> +
> +def update_sourceslist(xml, update_dir, status):
>      # only create sources list entry if repo is valid
> -    if not os.path.isdir (update_dir + '/dists'):
> -        status.log ('invalid repository, not added to sources.list')
> +    if not os.path.isdir(update_dir + '/dists'):
> +        status.log('invalid repository, not added to sources.list')
>          return
>  
> -    deb =  "deb [trusted=yes] file://" + update_dir + " " + xml.text ("/project/suite")
> +    deb = "deb [trusted=yes] file://" + \
> +        update_dir + " " + xml.text("/project/suite")
>      deb += " main\n"
>      fname = "/etc/apt/sources.list.d/"
> -    fname += fname_replace (xml.text ("/project/name")) + "_"
> -    fname += fname_replace (xml.text ("/project/version"))
> +    fname += fname_replace(xml.text("/project/name")) + "_"
> +    fname += fname_replace(xml.text("/project/version"))
>      fname += ".list"
>  
> -    with rw_access_file (fname, status) as f:
> -        f.write (deb)
> +    with rw_access_file(fname, status) as f:
> +        f.write(deb)
> +
>  
> -def mark_install (depcache, pkg, version, auto, status):
> +def mark_install(depcache, pkg, version, auto, status):
>      for v in pkg.version_list:
> -        if v.ver_str == str (version):
> -            depcache.set_candidate_ver (pkg, v)
> -            depcache.mark_install (pkg, False, not auto)
> +        if v.ver_str == str(version):
> +            depcache.set_candidate_ver(pkg, v)
> +            depcache.mark_install(pkg, False, not auto)
>              return
>  
> -    status.log ("ERROR: " + pkg.name + version + " is not available in the cache")
> +    status.log("ERROR: " + pkg.name + " " + version +
> +               " is not available in the cache")
> +
>  
> -def _apply_update (fname, status):
> +def _apply_update(fname, status):
>  
>      try:
> -        xml = etree (fname)
> -    except:
> -        raise Exception ( "reading %s failed " % fname )
> +        xml = etree(fname)
> +    except BaseException:
> +        raise Exception("reading %s failed " % fname)
>  
> -    fpl = xml.node ("fullpkgs")
> +    fpl = xml.node("fullpkgs")
>  
> -    sources = apt_pkg.SourceList ()
> -    sources.read_main_list ()
> +    sources = apt_pkg.SourceList()
> +    sources.read_main_list()
>  
> -    status.log ("initialize apt")
> -    apt_pkg.init ()
> -    cache = apt_pkg.Cache (progress=ElbeOpProgress (cb=status.log))
> +    status.log("initialize apt")
> +    apt_pkg.init()
> +    cache = apt_pkg.Cache(progress=ElbeOpProgress(cb=status.log))
>  
> -    status.set_progress (1)
> -    status.log ("updating package cache")
> -    cache.update (ElbeAcquireProgress (cb=status.log), sources)
> +    status.set_progress(1)
> +    status.log("updating package cache")
> +    cache.update(ElbeAcquireProgress(cb=status.log), sources)
>      # quote from python-apt api doc: "A call to this method does not affect the
>      # current Cache object, instead a new one should be created in order to use
>      # the changed index files."
> -    cache = apt_pkg.Cache (progress=ElbeOpProgress (cb=status.log))
> -    depcache = apt_pkg.DepCache (cache)
> -    hl_cache = apt.cache.Cache (progress=ElbeOpProgress (cb=status.log))
> -    hl_cache.update (fetch_progress=ElbeAcquireProgress (cb=status.log))
> +    cache = apt_pkg.Cache(progress=ElbeOpProgress(cb=status.log))
> +    depcache = apt_pkg.DepCache(cache)
> +    hl_cache = apt.cache.Cache(progress=ElbeOpProgress(cb=status.log))
> +    hl_cache.update(fetch_progress=ElbeAcquireProgress(cb=status.log))
>  
>      # go through package cache, if a package is in the fullpkg list of the XML
>      #  mark the package for installation (with the specified version)
>      #  if it is not mentioned in the fullpkg list purge the package out of the
>      #  system.
> -    status.set_progress (2)
> -    status.log ("calculating packages to install/remove")
> -    count = len (hl_cache)
> +    status.set_progress(2)
> +    status.log("calculating packages to install/remove")
> +    count = len(hl_cache)
>      step = count / 10
>      i = 0
>      percent = 0
> @@ -271,33 +282,33 @@ def _apply_update (fname, status):
>          i = i + 1
>          if not (i % step):
>              percent = percent + 10
> -            status.log (str (percent) + "% - " + str (i) + "/" + str (count))
> -            status.set_progress (2, str (percent) + "%")
> +            status.log(str(percent) + "% - " + str(i) + "/" + str(count))
> +            status.set_progress(2, str(percent) + "%")
>  
> -        pkg = cache [p.name]
> +        pkg = cache[p.name]
>          marked = False
>          for fpi in fpl:
>              if pkg.name == fpi.et.text:
> -                mark_install (depcache, pkg,
> -                            fpi.et.get('version'),
> -                            fpi.et.get('auto'),
> -                            status)
> +                mark_install(depcache, pkg,
> +                             fpi.et.get('version'),
> +                             fpi.et.get('auto'),
> +                             status)
>                  marked = True
>  
>          if not marked:
> -            depcache.mark_delete (pkg, True)
> +            depcache.mark_delete(pkg, True)
>  
> -    status.set_progress (3)
> -    status.log ("applying snapshot")
> -    depcache.commit (ElbeAcquireProgress (cb=status.log),
> -                     ElbeInstallProgress (cb=status.log))
> +    status.set_progress(3)
> +    status.log("applying snapshot")
> +    depcache.commit(ElbeAcquireProgress(cb=status.log),
> +                    ElbeInstallProgress(cb=status.log))
>      del depcache
>      del hl_cache
>      del cache
>      del sources
>  
>      version_file = open("/etc/updated_version", "w")
> -    version_file.write( xml.text ("/project/version") )
> +    version_file.write(xml.text("/project/version"))
>      version_file.close()
>  
>  
> @@ -307,207 +318,220 @@ def mkdir_p(path):
>      except OSError as exc:
>          if exc.errno == errno.EEXIST and os.path.isdir(path):
>              pass
> -        else: raise
> +        else:
> +            raise
> +
>  
> -def execute (cmd, status):
> -    output = subprocess.check_output (cmd, stderr=subprocess.STDOUT)
> -    for o in output.rstrip ().split ('\n'):
> +def execute(cmd, status):
> +    output = subprocess.check_output(cmd, stderr=subprocess.STDOUT)
> +    for o in output.rstrip().split('\n'):
>          if o:
> -            status.log (o)
> +            status.log(o)
>  
> -def pre_sh (current_version, target_version, status):
> +
> +def pre_sh(current_version, target_version, status):
>      if os.path.isfile('/var/cache/elbe/' + 'pre.sh'):
> -        execute (
> -          ['/var/cache/elbe/' + 'pre.sh', current_version, target_version],
> -          status)
> +        execute(
> +            ['/var/cache/elbe/' + 'pre.sh', current_version, target_version],
> +            status)
> +
>  
> -def post_sh (current_version, target_version, status):
> +def post_sh(current_version, target_version, status):
>      if os.path.isfile('/var/cache/elbe/' + 'post.sh'):
> -        execute (
> -          ['/var/cache/elbe/' + 'post.sh', current_version, target_version],
> -          status)
> +        execute(
> +            ['/var/cache/elbe/' + 'post.sh', current_version, target_version],
> +            status)
> +
> +
> +def get_target_version(fname):
> +    xml = etree(fname)
> +    return xml.text("/project/version")
> +
> +
> +def get_current_version():
> +    with open("/etc/updated_version", "r") as version_file:
> +        return version_file.read()
>  
> -def get_target_version (fname):
> -    xml = etree (fname)
> -    return xml.text ("/project/version")
>  
> -def get_current_version ():
> -    with open ("/etc/updated_version", "r") as version_file:
> -        return version_file.read ()
> +def get_base_version():
> +    xml = etree("/etc/elbe_base.xml")
> +    return xml.text("/project/version")
>  
> -def get_base_version ():
> -    xml = etree ("/etc/elbe_base.xml")
> -    return xml.text ("/project/version")
>  
> -def is_downgrade (target_version, current_version, base_version):
> +def is_downgrade(target_version, current_version, base_version):
>      current = current_version
>      if current == "":
>          current = base_version
> -    return version.parse (target_version) < version.parse (current)
> +    return version.parse(target_version) < version.parse(current)
>  
> -def is_downgrade_allowed ():
> -    return os.path.isfile ("/var/cache/elbe/.downgrade_allowed")
>  
> -def reject_downgrade (status, new_xml_file):
> +def is_downgrade_allowed():
> +    return os.path.isfile("/var/cache/elbe/.downgrade_allowed")
> +
> +
> +def reject_downgrade(status, new_xml_file):
>      t_ver = get_target_version(new_xml_file)
>      b_ver = get_base_version()
>  
>      try:
>          c_ver = get_current_version()
>      except IOError as e:
> -        status.log ('get current version failed: ' + str (e))
> +        status.log('get current version failed: ' + str(e))
>          c_ver = ""
>  
> -    if is_downgrade (t_ver, c_ver, b_ver) and not is_downgrade_allowed ():
> -        status.log ('Update is a downgrade and downgrades are not allowed')
> +    if is_downgrade(t_ver, c_ver, b_ver) and not is_downgrade_allowed():
> +        status.log('Update is a downgrade and downgrades are not allowed')
>          return True
>  
>      return False
>  
> -def apply_update (fname, status):
> +
> +def apply_update(fname, status):
>      # As soon as python-apt closes its opened files on object deletion
>      # we can drop this fork workaround. As long as they keep their files
>      # open, we run the code in an own fork, than the files are closed on
>      # process termination an we can remount the filesystem readonly
>      # without errors.
> -    p = Process (target=_apply_update, args=(fname, status))
> -    with rw_access ("/", status):
> +    p = Process(target=_apply_update, args=(fname, status))
> +    with rw_access("/", status):
>          try:
>              t_ver = get_target_version(fname)
> -        except:
> -            status.log ('Reading xml-file failed!')
> +        except BaseException:
> +            status.log('Reading xml-file failed!')
>              return
>  
>          try:
>              c_ver = get_current_version()
>          except IOError as e:
> -            status.log ('get current version failed: ' + str (e))
> +            status.log('get current version failed: ' + str(e))
>              c_ver = ""
>  
> -        pre_sh (c_ver, t_ver, status)
> -        p.start ()
> -        p.join ()
> -        status.log ("cleanup /var/cache/apt/archives")
> +        pre_sh(c_ver, t_ver, status)
> +        p.start()
> +        p.join()
> +        status.log("cleanup /var/cache/apt/archives")
>          # don't use execute() here, it results in an error that the apt-cache
>          # is locked. We currently don't understand this behaviour :(
> -        os.system ("apt-get clean")
> +        os.system("apt-get clean")
>          if p.exitcode != 0:
> -            raise Exception ( "Applying update failed. See logfile for more information" )
> -        post_sh (c_ver, t_ver, status)
> +            raise Exception(
> +                "Applying update failed. See logfile for more information")
> +        post_sh(c_ver, t_ver, status)
>  
> -def action_select (upd_file, status):
>  
> -    status.log ( "updating: " + upd_file)
> +def action_select(upd_file, status):
> +
> +    status.log("updating: " + upd_file)
>  
>      try:
> -        upd_file_z = ZipFile (upd_file)
> +        upd_file_z = ZipFile(upd_file)
>      except BadZipfile:
> -        status.log ("update aborted (bad zip file: %s)" % upd_file)
> +        status.log("update aborted (bad zip file: %s)" % upd_file)
>          return
>  
> -    if not "new.xml" in upd_file_z.namelist ():
> -        status.log ("update invalid (new.xml missing)")
> +    if "new.xml" not in upd_file_z.namelist():
> +        status.log("update invalid (new.xml missing)")
>          return
>  
> -    with rw_access ("/tmp", status):
> -        upd_file_z.extract ("new.xml", "/tmp/")
> +    with rw_access("/tmp", status):
> +        upd_file_z.extract("new.xml", "/tmp/")
>  
>      # prevent downgrades (if available)
>      if downgrade_prevention_feature_available:
>          try:
> -            if reject_downgrade (status, "/tmp/new.xml"):
> +            if reject_downgrade(status, "/tmp/new.xml"):
>                  return
>          except Exception as e:
> -            status.log ('Error while reading XML files occurred: ' + str(e))
> +            status.log('Error while reading XML files occurred: ' + str(e))
>              return
>  
> -    xml = etree ("/tmp/new.xml")
> -    prefix = status.repo_dir + "/" + fname_replace (xml.text ("/project/name"))
> -    prefix += "_" + fname_replace (xml.text ("/project/version")) + "/"
> +    xml = etree("/tmp/new.xml")
> +    prefix = status.repo_dir + "/" + fname_replace(xml.text("/project/name"))
> +    prefix += "_" + fname_replace(xml.text("/project/version")) + "/"
>  
> -    status.log ("preparing update: " + prefix)
> +    status.log("preparing update: " + prefix)
>  
> -    with rw_access (prefix, status):
> -        for i in upd_file_z.namelist ():
> +    with rw_access(prefix, status):
> +        for i in upd_file_z.namelist():
>  
> -            (dirname, filename) = os.path.split (i)
> +            (dirname, filename) = os.path.split(i)
>  
>              try:
> -                zi = upd_file_z.getinfo (i)
> -                upd_file_z.extract (zi, prefix)
> -                os.chmod (prefix + '/' + i, zi.external_attr >> 16)
> +                zi = upd_file_z.getinfo(i)
> +                upd_file_z.extract(zi, prefix)
> +                os.chmod(prefix + '/' + i, zi.external_attr >> 16)
>              except OSError:
> -                status.log ("extraction failed: %s" % sys.exc_info () [1])
> +                status.log("extraction failed: %s" % sys.exc_info()[1])
>                  return
>  
> -    with rw_access ("/var/cache/elbe", status):
> +    with rw_access("/var/cache/elbe", status):
>          if os.path.isfile(prefix + '/' + 'pre.sh'):
>              try:
> -                copy (prefix + '/' + 'pre.sh', '/var/cache/elbe/' + 'pre.sh')
> +                copy(prefix + '/' + 'pre.sh', '/var/cache/elbe/' + 'pre.sh')
>              except OSError as e:
> -                status.log ('presh-copy failed: ' + str (e))
> +                status.log('presh-copy failed: ' + str(e))
>              except IOError as e:
> -                status.log ('presh-copy failed: ' + str (e))
> +                status.log('presh-copy failed: ' + str(e))
>  
>          if os.path.isfile(prefix + '/' + 'post.sh'):
>              try:
> -                copy (prefix + '/' + 'post.sh', '/var/cache/elbe/' + 'post.sh')
> +                copy(prefix + '/' + 'post.sh', '/var/cache/elbe/' + 'post.sh')
>              except OSError as e:
> -                status.log ('postsh-copy failed: ' + str (e))
> +                status.log('postsh-copy failed: ' + str(e))
>              except IOError as e:
> -                status.log ('postsh-copy failed: ' + str (e))
> +                status.log('postsh-copy failed: ' + str(e))
>  
> -    if os.path.isdir (prefix + "conf"):
> -        status.log ("copying config files:")
> -        for path, pathname, filenames in os.walk (prefix + "conf"):
> +    if os.path.isdir(prefix + "conf"):
> +        status.log("copying config files:")
> +        for path, pathname, filenames in os.walk(prefix + "conf"):
>              dst = path[len(prefix + "conf"):]
> -            with rw_access (dst, status):
> +            with rw_access(dst, status):
>                  for f in filenames:
> -                    src = os.path.join (path, f)
> -                    status.log ("cp " + src + " " + dst)
> +                    src = os.path.join(path, f)
> +                    status.log("cp " + src + " " + dst)
>                      try:
> -                        mkdir_p (dst)
> -                        copyfile (src, dst + '/' + f)
> +                        mkdir_p(dst)
> +                        copyfile(src, dst + '/' + f)
>                      except OSError as e:
> -                        status.log ('failed: ' + str (e))
> +                        status.log('failed: ' + str(e))
>                      except IOError as e:
> -                        status.log ('failed: ' + str (e))
> -        with rw_access (prefix + "conf", status):
> -            rmtree (prefix + "conf")
> +                        status.log('failed: ' + str(e))
> +        with rw_access(prefix + "conf", status):
> +            rmtree(prefix + "conf")
>  
> -    if os.path.isdir (prefix + "cmd"):
> -        status.log ("executing scripts:")
> -        for path, pathname, filenames in os.walk (prefix + "cmd"):
> +    if os.path.isdir(prefix + "cmd"):
> +        status.log("executing scripts:")
> +        for path, pathname, filenames in os.walk(prefix + "cmd"):
>              for f in filenames:
> -                cmd = os.path.join (path, f)
> -                if os.path.isfile (cmd):
> -                    status.log ('exec: ' + cmd)
> +                cmd = os.path.join(path, f)
> +                if os.path.isfile(cmd):
> +                    status.log('exec: ' + cmd)
>                      try:
> -                        execute (cmd, status)
> +                        execute(cmd, status)
>                      except OSError as e:
> -                        status.log ('exec: ' + cmd + ' - ' + str (e))
> -        with rw_access (prefix + "cmd", status):
> -            rmtree (prefix + "cmd")
> +                        status.log('exec: ' + cmd + ' - ' + str(e))
> +        with rw_access(prefix + "cmd", status):
> +            rmtree(prefix + "cmd")
>  
> -    if os.path.isdir (prefix + "repo"):
> +    if os.path.isdir(prefix + "repo"):
>          try:
> -            update_sourceslist (xml, prefix + "repo", status)
> +            update_sourceslist(xml, prefix + "repo", status)
>          except Exception as err:
> -            status.log (str (err))
> -            status.set_finished ('error')
> -            status.log ("update apt sources list failed: " + prefix)
> +            status.log(str(err))
> +            status.set_finished('error')
> +            status.log("update apt sources list failed: " + prefix)
>              return
>  
>          try:
> -            apply_update ("/tmp/new.xml", status)
> +            apply_update("/tmp/new.xml", status)
>          except Exception as err:
> -            status.log (str (err))
> -            status.set_finished ('error')
> -            status.log ("apply update failed: " + prefix)
> +            status.log(str(err))
> +            status.set_finished('error')
> +            status.log("apply update failed: " + prefix)
>              return
>  
> -        status.set_finished ('OK')
> -        status.log ("update done: " + prefix)
> +        status.set_finished('OK')
> +        status.log("update done: " + prefix)
>  
>  
>  def is_update_file(upd_file):
> @@ -516,11 +540,11 @@ def is_update_file(upd_file):
>          return True
>  
>      try:
> -        upd_file_z = ZipFile (upd_file)
> +        upd_file_z = ZipFile(upd_file)
>      except BadZipfile:
>          return False
>  
> -    if not "new.xml" in upd_file_z.namelist ():
> +    if "new.xml" not in upd_file_z.namelist():
>          return False
>  
>      return True
> @@ -528,31 +552,32 @@ def is_update_file(upd_file):
>  
>  update_lock = threading.Lock()
>  
> +
>  def handle_update_file(upd_file, status, remove=False):
>      with update_lock:
> -        status.log ("checking file: " + str(upd_file))
> +        status.log("checking file: " + str(upd_file))
>          root, extension = os.path.splitext(upd_file)
>  
>          if extension == ".gpg":
> -            fname = unsign_file (upd_file)
> +            fname = unsign_file(upd_file)
>              if remove:
> -                os.remove (upd_file)
> +                os.remove(upd_file)
>              if fname:
> -                action_select (fname, status)
> +                action_select(fname, status)
>                  if remove:
> -                    os.remove (fname)
> +                    os.remove(fname)
>              else:
> -                status.log ("checking signature failed: " + str(upd_file))
> +                status.log("checking signature failed: " + str(upd_file))
>  
>          elif status.nosign:
> -            action_select (upd_file, status)
> +            action_select(upd_file, status)
>              if remove:
> -                os.remove (upd_file)
> +                os.remove(upd_file)
>          else:
> -            status.log ("ignore file: " + str(upd_file))
> +            status.log("ignore file: " + str(upd_file))
>  
>  
> -def shutdown (signum, fname, status):
> +def shutdown(signum, fname, status):
>      status.stop = True
>      for mon in status.monitors:
>          mon.stop()
> diff --git a/elbepack/updated_monitors.py b/elbepack/updated_monitors.py
> index 8653a7ec..8026be13 100644
> --- a/elbepack/updated_monitors.py
> +++ b/elbepack/updated_monitors.py
> @@ -38,27 +38,28 @@ if udev_available:
>              try:
>                  if fields[0] == dev:
>                      return fields[1]
> -            except:
> +            except BaseException:
>                  pass
>          return None
>  
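A side note on this hunk (and the identical one further down): autopep8 only rewrites the bare `except:` into `except BaseException:`, which is behaviourally identical and still swallows KeyboardInterrupt and SystemExit. Since the only thing that can fail here is indexing a short or empty line, a follow-up patch could narrow the handler. A sketch of the same /proc/mounts lookup with the narrower except (the function name and the IndexError choice are my suggestion, not part of this patch):

```python
# Sketch of the mountpoint lookup with a narrowed exception:
# only IndexError can occur when a mounts line has no fields.
def find_mountpoint(dev, mounts_text):
    for line in mounts_text.splitlines():
        fields = line.split()
        try:
            if fields[0] == dev:
                return fields[1]
        except IndexError:  # blank line: fields[0] raises
            continue
    return None
```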
> -
>      class USBMonitor (UpdateMonitor):
>          def __init__(self, status, recursive=False):
>              super(USBMonitor, self).__init__(status)
>              self.recursive = recursive
>              self.context = pyudev.Context()
>              self.monitor = pyudev.Monitor.from_netlink(self.context)
> -            self.observer = pyudev.MonitorObserver(self.monitor, self.handle_event)
> +            self.observer = pyudev.MonitorObserver(
> +                self.monitor, self.handle_event)
>  
>          def handle_event(self, action, device):
> -            if ( action == 'add'
> -                 and device.get('ID_BUS') == 'usb'
> -                 and device.get('DEVTYPE') == 'partition' ):
> +            if (action == 'add'
> +                and device.get('ID_BUS') == 'usb'
> +                    and device.get('DEVTYPE') == 'partition'):
>  
>                  mnt = self.get_mountpoint_for_device(device.device_node)
>                  if not mnt:
> -                    self.status.log("Detected USB drive but it was not mounted.")
> +                    self.status.log(
> +                        "Detected USB drive but it was not mounted.")
>                      return
>  
>                  for (dirpath, dirnames, filenames) in os.walk(mnt):
> @@ -69,16 +70,18 @@ if udev_available:
>                      for f in filenames:
>                          upd_file = os.path.join(dirpath, f)
>                          if is_update_file(upd_file):
> -                            self.status.log("Found update file '%s' on USB-Device."
> -                                % upd_file)
> -                            handle_update_file(upd_file, self.status, remove=False)
> +                            self.status.log(
> +                                "Found update file '%s' on USB-Device." %
> +                                upd_file)
> +                            handle_update_file(
> +                                upd_file, self.status, remove=False)
>                          if self.status.stop:
>                              break
>                      if (not self.recursive) or self.status.stop:
>                          break
>  
>          def start(self):
> -            self.status.log ("monitoring USB")
> +            self.status.log("monitoring USB")
>              self.observer.start()
>  
>          def stop(self):
> @@ -93,7 +96,7 @@ if udev_available:
>                  try:
>                      if fields[0] == dev:
>                          return fields[1]
> -                except:
> +                except BaseException:
>                      pass
>              return None
>  
> @@ -101,39 +104,39 @@ if udev_available:
>  class FileMonitor (UpdateMonitor):
>  
>      class EventHandler (pyinotify.ProcessEvent):
> -        def __init__ (self, status):
> -            pyinotify.ProcessEvent.__init__ (self)
> +        def __init__(self, status):
> +            pyinotify.ProcessEvent.__init__(self)
>              self.status = status
>  
> -        def process_IN_CLOSE_WRITE (self, event):
> +        def process_IN_CLOSE_WRITE(self, event):
>              handle_update_file(event.pathname, self.status, remove=True)
>  
>      class ObserverThread (threading.Thread):
> -        def __init__ (self, status, monitor):
> -            threading.Thread.__init__ (self, name="ObserverThread")
> +        def __init__(self, status, monitor):
> +            threading.Thread.__init__(self, name="ObserverThread")
>              self.status = status
>              self.monitor = monitor
>  
> -        def run (self):
> -            self.status.log ("monitoring updated dir")
> +        def run(self):
> +            self.status.log("monitoring updated dir")
>  
>              while 1:
> -                if self.monitor.notifier.check_events (timeout=1000):
> -                    self.monitor.notifier.read_events ()
> -                    self.monitor.notifier.process_events ()
> +                if self.monitor.notifier.check_events(timeout=1000):
> +                    self.monitor.notifier.read_events()
> +                    self.monitor.notifier.process_events()
>  
>                  if self.status.stop:
>                      if self.status.soapserver:
> -                        self.status.soapserver.shutdown ()
> +                        self.status.soapserver.shutdown()
>                      return
>  
>      def __init__(self, status, update_dir):
>          super(FileMonitor, self).__init__(status)
> -        self.wm = pyinotify.WatchManager ()
> -        self.notifier = pyinotify.Notifier (self.wm)
> -        self.wm.add_watch (update_dir, pyinotify.IN_CLOSE_WRITE,
> -                           proc_fun=FileMonitor.EventHandler (self.status))
> -        self.observer = FileMonitor.ObserverThread (self.status, self)
> +        self.wm = pyinotify.WatchManager()
> +        self.notifier = pyinotify.Notifier(self.wm)
> +        self.wm.add_watch(update_dir, pyinotify.IN_CLOSE_WRITE,
> +                          proc_fun=FileMonitor.EventHandler(self.status))
> +        self.observer = FileMonitor.ObserverThread(self.status, self)
>  
>      def start(self):
>          self.observer.start()
> diff --git a/elbepack/updatepkg.py b/elbepack/updatepkg.py
> index caa423b9..cf6dd7f4 100644
> --- a/elbepack/updatepkg.py
> +++ b/elbepack/updatepkg.py
> @@ -14,26 +14,29 @@ from elbepack.dump import dump_fullpkgs
>  from elbepack.ziparchives import create_zip_archive
>  from elbepack.repomanager import UpdateRepo
>  
> +
>  class MissingData(Exception):
> -    def __init__ (self, message):
> -        Exception.__init__( self, message )
> +    def __init__(self, message):
> +        Exception.__init__(self, message)
> +
>  
> -def inlucdedir (destination, directory, source, mode=None):
> +def inlucdedir(destination, directory, source, mode=None):
>      dst = destination + '/' + directory
> -    copytree (source, dst)
> +    copytree(source, dst)
>      if mode:
>          for dp, dn, fn in os.walk(dst):
>              for f in fn:
> -                p = os.path.join (dp, f)
> -                os.chmod (p, mode)
> +                p = os.path.join(dp, f)
> +                os.chmod(p, mode)
>  
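While we are touching this hunk: the helper name "inlucdedir" looks like a typo for "includedir"; autopep8 only reformats, so the name survives unchanged. The copy-then-chmod logic itself is fine. For reference, the same walk with a corrected name and a context-free shutil import (illustrative only, not something this patch changes):

```python
import os
import shutil

# Copy `source` into destination/directory and optionally force
# `mode` on every file in the copied tree (mirrors inlucdedir()).
def include_dir(destination, directory, source, mode=None):
    dst = os.path.join(destination, directory)
    shutil.copytree(source, dst)
    if mode is not None:
        for dirpath, _dirnames, filenames in os.walk(dst):
            for fname in filenames:
                os.chmod(os.path.join(dirpath, fname), mode)
    return dst
```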
> -def gen_update_pkg (project, xml_filename, upd_filename,
> -        override_buildtype = None, skip_validate = False, debug = False,
> -        cmd_dir = None, cfg_dir=None):
> +
> +def gen_update_pkg(project, xml_filename, upd_filename,
> +                   override_buildtype=None, skip_validate=False, debug=False,
> +                   cmd_dir=None, cfg_dir=None):
>  
>      if xml_filename:
> -        xml = ElbeXML( xml_filename, buildtype=override_buildtype,
> -                skip_validate=skip_validate )
> +        xml = ElbeXML(xml_filename, buildtype=override_buildtype,
> +                      skip_validate=skip_validate)
>  
>          if not xml.has("fullpkgs"):
>              raise MissingData("Xml does not have fullpkgs list")
> @@ -46,7 +49,7 @@ def gen_update_pkg (project, xml_filename, upd_filename,
>  
>          cache = project.get_rpcaptcache()
>  
> -        instpkgs  = cache.get_installed_pkgs()
> +        instpkgs = cache.get_installed_pkgs()
>          instindex = {}
>  
>          for p in instpkgs:
> @@ -59,12 +62,12 @@ def gen_update_pkg (project, xml_filename, upd_filename,
>  
>          for p in xmlpkgs:
>              name = p.et.text
> -            ver  = p.et.get('version')
> -            md5  = p.et.get('md5')
> +            ver = p.et.get('version')
> +            md5 = p.et.get('md5')
>  
>              xmlindex[name] = p
>  
> -            if not name in instindex:
> +            if name not in instindex:
>                  print("package removed: %s" % name)
>                  continue
>  
> @@ -76,14 +79,16 @@ def gen_update_pkg (project, xml_filename, upd_filename,
>              if comp == 0:
>                  print("package ok: %s-%s" % (name, ipkg.installed_version))
>                  if debug:
> -                    fnamelist.append( pfname )
> +                    fnamelist.append(pfname)
>                  continue
>  
>              if comp > 0:
>                  print("package upgrade: %s" % pfname)
> -                fnamelist.append( pfname )
> +                fnamelist.append(pfname)
>              else:
> -                print("package downgrade: %s-%s" % (name, ipkg.installed_version))
> +                print(
> +                    "package downgrade: %s-%s" %
> +                    (name, ipkg.installed_version))
>  
>          for p in instpkgs:
>              if p.name in xmlindex:
> @@ -91,54 +96,61 @@ def gen_update_pkg (project, xml_filename, upd_filename,
>  
>              print("package %s newly installed" % p.name)
>              pfname = p.installed_deb
> -            fnamelist.append( pfname )
> -
> +            fnamelist.append(pfname)
>  
>      update = os.path.join(project.builddir, "update")
>  
> -    if os.path.exists( update ):
> -        rmtree( update )
> +    if os.path.exists(update):
> +        rmtree(update)
>  
> -    os.system( 'mkdir -p %s' % update )
> +    os.system('mkdir -p %s' % update)
>  
>      if xml_filename:
> -        repodir = os.path.join(update, "repo" )
> +        repodir = os.path.join(update, "repo")
>  
> -        repo = UpdateRepo( xml, repodir, project.log )
> +        repo = UpdateRepo(xml, repodir, project.log)
>  
>          for fname in fnamelist:
> -            path = os.path.join( project.chrootpath, "var/cache/apt/archives", fname )
> -            repo.includedeb( path )
> +            path = os.path.join(
> +                project.chrootpath,
> +                "var/cache/apt/archives",
> +                fname)
> +            repo.includedeb(path)
>  
> -        repo.finalize ()
> +        repo.finalize()
>  
>          dump_fullpkgs(project.xml, project.buildenv.rfs, cache)
>  
> -        project.xml.xml.write( os.path.join( update, "new.xml" ) )
> -        os.system( "cp %s %s" % (xml_filename, os.path.join( update, "base.xml" )) )
> +        project.xml.xml.write(os.path.join(update, "new.xml"))
> +        os.system(
> +            "cp %s %s" %
> +            (xml_filename,
> +             os.path.join(
> +                 update,
> +                 "base.xml")))
>      else:
> -        os.system( "cp source.xml update/new.xml")
> +        os.system("cp source.xml update/new.xml")
>  
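The aggressive `-a -a` mode wraps the `os.system("cp %s %s" ...)` call above across six lines, which is arguably harder to read than the original. A follow-up could drop the shell-out for shutil, which also sidesteps quoting problems with odd paths. Sketch only; the helper name is mine, not in the patch:

```python
import os
import shutil

# Copy the source XML into the update directory as base.xml,
# replacing the wrapped os.system("cp ...") shell call.
def copy_base_xml(xml_filename, update_dir):
    shutil.copyfile(xml_filename, os.path.join(update_dir, "base.xml"))
```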
>      if project.presh_file:
> -        copyfile (project.presh_file, update + '/pre.sh')
> -        os.chmod (update + '/pre.sh', 0o755)
> +        copyfile(project.presh_file, update + '/pre.sh')
> +        os.chmod(update + '/pre.sh', 0o755)
>  
>      if project.postsh_file:
> -        copyfile (project.postsh_file, update + '/post.sh')
> -        os.chmod (update + '/post.sh', 0o755)
> +        copyfile(project.postsh_file, update + '/post.sh')
> +        os.chmod(update + '/post.sh', 0o755)
>  
>      if cmd_dir:
> -        inlucdedir (update, 'cmd', cmd_dir, mode=0o755)
> +        inlucdedir(update, 'cmd', cmd_dir, mode=0o755)
>  
>      if cfg_dir:
> -        inlucdedir (update, 'conf', cfg_dir)
> +        inlucdedir(update, 'conf', cfg_dir)
>  
> -    create_zip_archive( upd_filename, update, "." )
> +    create_zip_archive(upd_filename, update, ".")
>  
>      if project.postbuild_file:
> -        project.log.h2 ("postbuild script")
> -        project.log.do (project.postbuild_file+' "%s %s %s"'%(
> +        project.log.h2("postbuild script")
> +        project.log.do(project.postbuild_file + ' "%s %s %s"' % (
>              upd_filename,
> -            project.xml.text ("project/version"),
> -            project.xml.text ("project/name")),
> -          allow_fail=True)
> +            project.xml.text("project/version"),
> +            project.xml.text("project/name")),
> +            allow_fail=True)
> diff --git a/elbepack/validate.py b/elbepack/validate.py
> index bad71398..ea53fd22 100644
> --- a/elbepack/validate.py
> +++ b/elbepack/validate.py
> @@ -5,7 +5,8 @@
>  
>  import sys
>  from lxml import etree
> -from lxml.etree import XMLParser,parse
> +from lxml.etree import XMLParser, parse
> +
>  
>  def validate_xml(fname):
>      schema_file = "https://www.linutronix.de/projects/Elbe/dbsfed.xsd"
> @@ -14,26 +15,27 @@ def validate_xml(fname):
>      schema = etree.XMLSchema(schema_tree)
>  
>      try:
> -        xml = parse(fname,parser=parser)
> +        xml = parse(fname, parser=parser)
>  
>          if schema.validate(xml):
>              return []
>      except etree.XMLSyntaxError:
>          return ["XML Parse error\n" + str(sys.exc_info()[1])]
> -    except:
> -        return ["Unknown Exception during validation\n" + str(sys.exc_info()[1])]
> +    except BaseException:
> +        return ["Unknown Exception during validation\n" +
> +                str(sys.exc_info()[1])]
>  
>      # We have errors, return them in string form...
>      errors = []
>      uses_xinclude = False
>  
>      for err in schema.error_log:
> -        errors.append ("%s:%d error %s" % (err.filename, err.line, err.message))
> +        errors.append("%s:%d error %s" % (err.filename, err.line, err.message))
>          if "http://www.w3.org/2003/XInclude" in err.message:
>              uses_xinclude = True
>  
>      if uses_xinclude:
> -        errors.append ("\nThere are XIncludes in the XML file. Run 'elbe preprocess' first!\n")
> +        errors.append(
> +            "\nThere are XIncludes in the XML file. Run 'elbe preprocess' first!\n")
>  
>      return errors
> -
> diff --git a/elbepack/version.py b/elbepack/version.py
> index c47c7a85..d038b4cd 100644
> --- a/elbepack/version.py
> +++ b/elbepack/version.py
> @@ -7,7 +7,7 @@ from elbepack.directories import pack_dir
>  from platform import linux_distribution
>  
>  elbe_version = "2.9.2"
> -running_os = linux_distribution ()
> +running_os = linux_distribution()
>  
>  if pack_dir == '/usr/lib/python2.7/dist-packages/elbepack':
>      is_devel = False
> diff --git a/elbepack/virtapt.py b/elbepack/virtapt.py
> index 92c67c3d..5963a86b 100644
> --- a/elbepack/virtapt.py
> +++ b/elbepack/virtapt.py
> @@ -9,7 +9,8 @@ import apt_pkg
>  import os
>  import sys
>  
> -# don't remove the apt import, it is really needed, due to some magic in apt_pkg
> +# don't remove the apt import, it is really needed, due to some magic in
> +# apt_pkg
>  import apt
>  
>  from tempfile import mkdtemp
> @@ -18,154 +19,150 @@ from elbepack.directories import elbe_pubkey_fname
>  
>  
>  class VirtApt:
> -    def __init__ (self, name, arch, suite, sources, prefs, keylist=[]):
> +    def __init__(self, name, arch, suite, sources, prefs, keylist=[]):
>  
>          self.projectpath = mkdtemp()
> -        self.initialize_dirs ()
> +        self.initialize_dirs()
>  
> -        self.create_apt_sources_list (sources)
> -        self.create_apt_prefs        (prefs)
> -        self.setup_gpg ()
> +        self.create_apt_sources_list(sources)
> +        self.create_apt_prefs(prefs)
> +        self.setup_gpg()
>          for k in keylist:
> -            self.add_pubkey_url (k)
> -
> -        apt_pkg.config.set ("APT::Architecture", arch)
> -        apt_pkg.config.set ("APT::Architectures", arch)
> -        apt_pkg.config.set ("Acquire::http::Proxy::127.0.0.1", "DIRECT")
> -        apt_pkg.config.set ("APT::Install-Recommends", "0")
> -        apt_pkg.config.set ("Dir", self.projectpath)
> -        apt_pkg.config.set ("APT::Cache-Limit", "0")
> -        apt_pkg.config.set ("APT::Cache-Start", "32505856")
> -        apt_pkg.config.set ("APT::Cache-Grow", "2097152")
> -        apt_pkg.config.set ("Dir::State", "state")
> -        apt_pkg.config.set ("Dir::State::status", "status")
> -        apt_pkg.config.set ("Dir::Cache", "cache")
> -        apt_pkg.config.set ("Dir::Etc", "etc/apt")
> -        apt_pkg.config.set ("Dir::Log", "log")
> -        apt_pkg.config.set ("APT::Get::AllowUnauthenticated", "0")
> +            self.add_pubkey_url(k)
> +
> +        apt_pkg.config.set("APT::Architecture", arch)
> +        apt_pkg.config.set("APT::Architectures", arch)
> +        apt_pkg.config.set("Acquire::http::Proxy::127.0.0.1", "DIRECT")
> +        apt_pkg.config.set("APT::Install-Recommends", "0")
> +        apt_pkg.config.set("Dir", self.projectpath)
> +        apt_pkg.config.set("APT::Cache-Limit", "0")
> +        apt_pkg.config.set("APT::Cache-Start", "32505856")
> +        apt_pkg.config.set("APT::Cache-Grow", "2097152")
> +        apt_pkg.config.set("Dir::State", "state")
> +        apt_pkg.config.set("Dir::State::status", "status")
> +        apt_pkg.config.set("Dir::Cache", "cache")
> +        apt_pkg.config.set("Dir::Etc", "etc/apt")
> +        apt_pkg.config.set("Dir::Log", "log")
> +        apt_pkg.config.set("APT::Get::AllowUnauthenticated", "0")
>  
>          apt_pkg.init_system()
>  
> -        self.source = apt_pkg.SourceList ()
> +        self.source = apt_pkg.SourceList()
>          self.source.read_main_list()
> -        self.cache = apt_pkg.Cache ()
> +        self.cache = apt_pkg.Cache()
>          try:
> -            self.cache.update(self,self.source)
> -        except:
> +            self.cache.update(self, self.source)
> +        except BaseException:
>              pass
>  
> -        apt_pkg.config.set ("APT::Default-Release", suite)
> +        apt_pkg.config.set("APT::Default-Release", suite)
>  
> -        self.cache = apt_pkg.Cache ()
> +        self.cache = apt_pkg.Cache()
>          try:
> -            self.cache.update(self,self.source)
> -        except:
> +            self.cache.update(self, self.source)
> +        except BaseException:
>              pass
>  
>      def __del__(self):
> -        os.system( 'rm -rf "%s"' % self.projectpath )
> +        os.system('rm -rf "%s"' % self.projectpath)
>  
> -    def start (self):
> +    def start(self):
>          pass
>  
> -    def stop (self):
> +    def stop(self):
>          pass
>  
> -    def pulse (self, obj):
> +    def pulse(self, obj):
>          return True
>  
> -    def mkdir_p (self, newdir, mode=0o755):
> +    def mkdir_p(self, newdir, mode=0o755):
>          """works the way a good mkdir -p would...
>                  - already exists, silently complete
>                  - regular file in the way, raise an exception
>                  - parent directory(ies) does not exist, make them as well
>          """
> -        if os.path.isdir (newdir):
> +        if os.path.isdir(newdir):
>              pass
> -        elif os.path.isfile (newdir):
> -            raise OSError ("a file with the same name as the desired " \
> -                    "dir, '%s', already exists." % newdir)
> +        elif os.path.isfile(newdir):
> +            raise OSError("a file with the same name as the desired "
> +                          "dir, '%s', already exists." % newdir)
>          else:
> -            os.makedirs (newdir, mode)
> +            os.makedirs(newdir, mode)
>              """ mode is not set correctly """
> -            os.system ("chmod 777 "+newdir)
> +            os.system("chmod 777 " + newdir)
>  
> -    def touch (self, file):
> -        if os.path.exists (file):
> -            os.utime (file, None)
> +    def touch(self, file):
> +        if os.path.exists(file):
> +            os.utime(file, None)
>          else:
> -            file = open (file,"w")
> -            file.close ()
> -
> -    def initialize_dirs (self):
> -        self.mkdir_p (self.projectpath + "/cache/archives/partial")
> -        self.mkdir_p (self.projectpath + "/etc/apt/preferences.d")
> -        self.mkdir_p (self.projectpath + "/etc/apt/trusted.gpg.d")
> -        self.mkdir_p (self.projectpath + "/db")
> -        self.mkdir_p (self.projectpath + "/log")
> -        self.mkdir_p (self.projectpath + "/state/lists/partial")
> -        self.touch   (self.projectpath + "/state/status")
> -
> -    def setup_gpg (self):
> +            file = open(file, "w")
> +            file.close()
> +
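Also in this hunk: touch() binds a local named `file`, shadowing the Python 2 builtin; autopep8 cannot fix that, only a manual follow-up can. A with-statement avoids the shadowing and the explicit close. Same behaviour, sketched:

```python
import os

# Create `path` if missing, otherwise bump its mtime (like touch(1)).
def touch(path):
    if os.path.exists(path):
        os.utime(path, None)
    else:
        with open(path, "w"):
            pass
```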
> +    def initialize_dirs(self):
> +        self.mkdir_p(self.projectpath + "/cache/archives/partial")
> +        self.mkdir_p(self.projectpath + "/etc/apt/preferences.d")
> +        self.mkdir_p(self.projectpath + "/etc/apt/trusted.gpg.d")
> +        self.mkdir_p(self.projectpath + "/db")
> +        self.mkdir_p(self.projectpath + "/log")
> +        self.mkdir_p(self.projectpath + "/state/lists/partial")
> +        self.touch(self.projectpath + "/state/status")
> +
> +    def setup_gpg(self):
>          ring_path = self.projectpath + "/etc/apt/trusted.gpg"
> -        if not os.path.isdir ("/etc/apt/trusted.gpg.d"):
> +        if not os.path.isdir("/etc/apt/trusted.gpg.d"):
>              print("/etc/apt/trusted.gpg.d doesn't exist")
>              print("apt-get install debian-archive-keyring may fix this problem")
> -            sys.exit (20)
> +            sys.exit(20)
>  
> -        system ('cp /etc/apt/trusted.gpg "%s"' % ring_path )
> +        system('cp /etc/apt/trusted.gpg "%s"' % ring_path)
>  
>          gpg_options = '--keyring "%s" --no-auto-check-trustdb --trust-model always --no-default-keyring --homedir "%s"' % (
> -                ring_path,
> -                self.projectpath)
> +            ring_path, self.projectpath)
>  
> -        system ('gpg %s --import "%s"' % (
> -                gpg_options,
> -                elbe_pubkey_fname))
> +        system('gpg %s --import "%s"' % (
> +            gpg_options,
> +            elbe_pubkey_fname))
>  
>          trustkeys = os.listdir("/etc/apt/trusted.gpg.d")
>          for key in trustkeys:
>              print("Import %s: " % key)
>              try:
> -                system ('gpg %s --import "%s"' % (
> +                system('gpg %s --import "%s"' % (
>                      gpg_options,
> -                    os.path.join ("/etc/apt/trusted.gpg.d", key)))
> +                    os.path.join("/etc/apt/trusted.gpg.d", key)))
>              except CommandError as e:
>                  print("adding elbe-pubkey to keyring failed")
>  
> -    def add_pubkey_url (self, url):
> +    def add_pubkey_url(self, url):
>          ring_path = self.projectpath + "/etc/apt/trusted.gpg"
>          tmpkey_path = self.projectpath + "/tmpkey.gpg"
>          gpg_options = '--keyring "%s" --no-auto-check-trustdb --trust-model always --no-default-keyring --homedir "%s"' % (
> -                ring_path,
> -                self.projectpath)
> +            ring_path, self.projectpath)
>  
>          try:
> -            system ('wget -O "%s" "%s"' % (tmpkey_path, url))
> -            system ('gpg %s --import "%s"' % (
> -                    gpg_options,
> -                    tmpkey_path))
> +            system('wget -O "%s" "%s"' % (tmpkey_path, url))
> +            system('gpg %s --import "%s"' % (
> +                gpg_options,
> +                tmpkey_path))
>          finally:
> -            system ('rm "%s"' % tmpkey_path, allow_fail=True)
> +            system('rm "%s"' % tmpkey_path, allow_fail=True)
>  
> -
> -    def create_apt_sources_list (self, mirror):
> +    def create_apt_sources_list(self, mirror):
>          filename = self.projectpath + "/etc/apt/sources.list"
>  
> -        if os.path.exists (filename):
> -            os.remove (filename)
> +        if os.path.exists(filename):
> +            os.remove(filename)
>  
> -        file = open (filename,"w")
> -        file.write (mirror)
> -        file.close ()
> +        file = open(filename, "w")
> +        file.write(mirror)
> +        file.close()
>  
> -    def create_apt_prefs (self, prefs):
> +    def create_apt_prefs(self, prefs):
>          filename = self.projectpath + "/etc/apt/preferences"
>  
> -        if os.path.exists (filename):
> -            os.remove (filename)
> -
> -        file = open (filename,"w")
> -        file.write (prefs)
> -        file.close ()
> +        if os.path.exists(filename):
> +            os.remove(filename)
>  
> +        file = open(filename, "w")
> +        file.write(prefs)
> +        file.close()
> diff --git a/elbepack/xmldefaults.py b/elbepack/xmldefaults.py
> index d85a5da3..be23f5e0 100644
> --- a/elbepack/xmldefaults.py
> +++ b/elbepack/xmldefaults.py
> @@ -12,179 +12,180 @@ import sys
>  from elbepack.kvm import find_kvm_exe
>  
>  armel_defaults = {
> -        "arch":         "armel",
> -        "mem":          "256",
> -        "interpreter":  "qemu-system-arm",
> -        "interpreterversion": "0.0.0",
> -        "userinterpr":  "qemu-arm-static",
> -        "console":      "ttyAMA0,115200n1",
> -        "machine":      "versatilepb",
> -        "nicmodel":     "smc91c111",
> -        "triplet":      "arm-linux-gnueabi"
> +    "arch": "armel",
> +    "mem": "256",
> +    "interpreter": "qemu-system-arm",
> +    "interpreterversion": "0.0.0",
> +    "userinterpr": "qemu-arm-static",
> +    "console": "ttyAMA0,115200n1",
> +    "machine": "versatilepb",
> +    "nicmodel": "smc91c111",
> +    "triplet": "arm-linux-gnueabi"
>  }
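For the xmldefaults.py hunks the change is purely indentation plus dropping the value alignment; keys and values are untouched, which is easy to verify since a table like this is only ever read by arch key. Dispatch sketch with the dict trimmed to two keys (the full table is right above; the lookup helper is illustrative, the real dispatch lives elsewhere in elbepack/xmldefaults.py):

```python
# Trimmed copy of armel_defaults plus an illustrative lookup.
armel_defaults = {
    "arch": "armel",
    "mem": "256",
}

defaults_by_arch = {"armel": armel_defaults}

def get_defaults(arch):
    # return a copy so callers cannot mutate the shared table
    return dict(defaults_by_arch[arch])
```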
>  
>  armel_linaro48_defaults = {
> -        "arch":         "armel",
> -        "mem":          "256",
> -        "interpreter":  "qemu-system-arm",
> -        "interpreterversion": "0.0.0",
> -        "userinterpr":  "qemu-arm-static",
> -        "console":      "ttyAMA0,115200n1",
> -        "machine":      "versatilepb",
> -        "nicmodel":     "smc91c111",
> -        "triplet":      "arm-linux-gnueabi",
> -        "toolchaintype":"linaro_armel",
> -        "toolchainver": "4.8.3",
> +    "arch": "armel",
> +    "mem": "256",
> +    "interpreter": "qemu-system-arm",
> +    "interpreterversion": "0.0.0",
> +    "userinterpr": "qemu-arm-static",
> +    "console": "ttyAMA0,115200n1",
> +    "machine": "versatilepb",
> +    "nicmodel": "smc91c111",
> +    "triplet": "arm-linux-gnueabi",
> +    "toolchaintype": "linaro_armel",
> +    "toolchainver": "4.8.3",
>  }
>  
>  armel_virtio_defaults = {
> -        "arch":         "armel",
> -        "mem":          "256",
> -        "interpreter":  "qemu-system-arm-virtio",
> -        "interpreterversion": "0.0.0",
> -        "userinterpr":  "qemu-arm-static",
> -        "console":      "ttyAMA0,115200n1",
> -        "machine":      "versatilepb",
> -        "nicmodel":     "smc91c111",
> -        "triplet":      "arm-linux-gnueabi"
> +    "arch": "armel",
> +    "mem": "256",
> +    "interpreter": "qemu-system-arm-virtio",
> +    "interpreterversion": "0.0.0",
> +    "userinterpr": "qemu-arm-static",
> +    "console": "ttyAMA0,115200n1",
> +    "machine": "versatilepb",
> +    "nicmodel": "smc91c111",
> +    "triplet": "arm-linux-gnueabi"
>  }
>  
>  armhf_defaults = {
> -        "arch":         "armhf",
> -        "mem":          "256",
> -        "interpreter":  "qemu-system-arm",
> -        "interpreterversion": "0.0.0",
> -        "userinterpr":  "qemu-arm-static",
> -        "console":      "ttyAMA0,115200n1",
> -        "machine":      "versatilepb -cpu cortex-a9",
> -        "nicmodel":     "smc91c111",
> -        "triplet":      "arm-linux-gnueabihf"
> +    "arch": "armhf",
> +    "mem": "256",
> +    "interpreter": "qemu-system-arm",
> +    "interpreterversion": "0.0.0",
> +    "userinterpr": "qemu-arm-static",
> +    "console": "ttyAMA0,115200n1",
> +    "machine": "versatilepb -cpu cortex-a9",
> +    "nicmodel": "smc91c111",
> +    "triplet": "arm-linux-gnueabihf"
>  }
>  
>  armhf_linaro48_defaults = {
> -        "arch":         "armhf",
> -        "mem":          "256",
> -        "interpreter":  "qemu-system-arm",
> -        "interpreterversion": "0.0.0",
> -        "userinterpr":  "qemu-arm-static",
> -        "console":      "ttyAMA0,115200n1",
> -        "machine":      "versatilepb -cpu cortex-a9",
> -        "nicmodel":     "smc91c111",
> -        "triplet":      "arm-linux-gnueabihf",
> -        "toolchaintype":"linaro",
> -        "toolchainver": "4.8.3",
> +    "arch": "armhf",
> +    "mem": "256",
> +    "interpreter": "qemu-system-arm",
> +    "interpreterversion": "0.0.0",
> +    "userinterpr": "qemu-arm-static",
> +    "console": "ttyAMA0,115200n1",
> +    "machine": "versatilepb -cpu cortex-a9",
> +    "nicmodel": "smc91c111",
> +    "triplet": "arm-linux-gnueabihf",
> +    "toolchaintype": "linaro",
> +    "toolchainver": "4.8.3",
>  }
>  
>  armhf_virtio_defaults = {
> -        "arch":         "armhf",
> -        "mem":          "256",
> -        "interpreter":  "qemu-system-arm-virtio",
> -        "interpreterversion": "0.0.0",
> -        "userinterpr":  "qemu-arm-static",
> -        "console":      "ttyAMA0,115200n1",
> -        "machine":      "versatilepb -cpu cortex-a9",
> -        "nicmodel":     "virtio",
> -        "triplet":      "arm-linux-gnueabihf"
> +    "arch": "armhf",
> +    "mem": "256",
> +    "interpreter": "qemu-system-arm-virtio",
> +    "interpreterversion": "0.0.0",
> +    "userinterpr": "qemu-arm-static",
> +    "console": "ttyAMA0,115200n1",
> +    "machine": "versatilepb -cpu cortex-a9",
> +    "nicmodel": "virtio",
> +    "triplet": "arm-linux-gnueabihf"
>  }
>  
>  aarch64_defaults = {
> -        "arch":         "arm64",
> -        "mem":          "256",
> -        "interpreter":  "qemu-system-aarch64",
> -        "interpreterversion": "0.0.0",
> -        "userinterpr":  "qemu-aarch64-static",
> -        "console":      "ttyAMA0,115200n1",
> -        "machine":      "virt -cpu cortex-a57",
> -        "nicmodel":     "virtio",
> -        "triplet":      "aarch64-linux-gnu"
> +    "arch": "arm64",
> +    "mem": "256",
> +    "interpreter": "qemu-system-aarch64",
> +    "interpreterversion": "0.0.0",
> +    "userinterpr": "qemu-aarch64-static",
> +    "console": "ttyAMA0,115200n1",
> +    "machine": "virt -cpu cortex-a57",
> +    "nicmodel": "virtio",
> +    "triplet": "aarch64-linux-gnu"
>  }
>  
>  ppc_defaults = {
> -        "arch":         "powerpc",
> -        "mem":          "256",
> -        "interpreter":  "qemu-system-ppc",
> -        "interpreterversion": "0.0.0",
> -        "userinterpr":  "qemu-ppc-static",
> -        "console":      "ttyPZ0,115200n1",
> -        "machine":      "mac99",
> -        "nicmodel":     "rtl8139",
> -        "triplet":      "powerpc-linux-gnu"
> +    "arch": "powerpc",
> +    "mem": "256",
> +    "interpreter": "qemu-system-ppc",
> +    "interpreterversion": "0.0.0",
> +    "userinterpr": "qemu-ppc-static",
> +    "console": "ttyPZ0,115200n1",
> +    "machine": "mac99",
> +    "nicmodel": "rtl8139",
> +    "triplet": "powerpc-linux-gnu"
>  }
>  
>  ppcspe_defaults = {
> -        "arch":         "powerpcspe",
> -        "mem":          "512",
> -        "interpreter":  "qemu-system-ppc",
> -        "interpreterversion": "0.0.0",
> -        "userinterpr":  "qemu-ppc-static",
> -        "console":      "ttyS0,115200n1",
> -        "machine":      "mpc8544ds",
> -        "nicmodel":     "rtl8139",
> -        "triplet":      "powerpc-linux-gnuspe"
> +    "arch": "powerpcspe",
> +    "mem": "512",
> +    "interpreter": "qemu-system-ppc",
> +    "interpreterversion": "0.0.0",
> +    "userinterpr": "qemu-ppc-static",
> +    "console": "ttyS0,115200n1",
> +    "machine": "mpc8544ds",
> +    "nicmodel": "rtl8139",
> +    "triplet": "powerpc-linux-gnuspe"
>  }
>  
>  amd64_defaults = {
> -        "arch":         "amd64",
> -        "mem":          "1024",
> -        "interpreter":  find_kvm_exe ()[0],
> -        "interpreterversion": find_kvm_exe ()[1],
> -        "console":      "ttyS0,115200n1",
> -        "machine":      "pc",
> -        "nicmodel":     "virtio",
> -        "triplet":      "x86_64-linux-gnu"
> +    "arch": "amd64",
> +    "mem": "1024",
> +    "interpreter": find_kvm_exe()[0],
> +    "interpreterversion": find_kvm_exe()[1],
> +    "console": "ttyS0,115200n1",
> +    "machine": "pc",
> +    "nicmodel": "virtio",
> +    "triplet": "x86_64-linux-gnu"
>  }
>  
>  i386_defaults = {
> -        "arch":         "i386",
> -        "mem":          "1024",
> -        "interpreter":  "kvm",
> -        "interpreterversion": "0.0.0",
> -        "console":      "ttyS0,115200n1",
> -        "machine":      "pc",
> -        "nicmodel":     "virtio",
> -        "triplet":      "i386-linux-gnu"
> +    "arch": "i386",
> +    "mem": "1024",
> +    "interpreter": "kvm",
> +    "interpreterversion": "0.0.0",
> +    "console": "ttyS0,115200n1",
> +    "machine": "pc",
> +    "nicmodel": "virtio",
> +    "triplet": "i386-linux-gnu"
>  }
>  
>  archindep_defaults = {
> -        "kinitrd":      "elbe-bootstrap",
> -        "name":         "elbe-buildenv",
> -        "size":         "20G",
> -        "img":          "qcow2",
> +    "kinitrd": "elbe-bootstrap",
> +    "name": "elbe-buildenv",
> +    "size": "20G",
> +    "img": "qcow2",
>  }
>  
> -defaults = { "armel": armel_defaults,
> -             "armel-linaro48": armel_linaro48_defaults,
> -             "armel-virtio": armel_virtio_defaults,
> -             "armhf": armhf_defaults,
> -             "armhf-linaro48": armhf_linaro48_defaults,
> -             "armhf-virtio": armhf_virtio_defaults,
> -             "aarch64": aarch64_defaults,
> -             "ppc": ppc_defaults,
> -             "ppcspe": ppcspe_defaults,
> -             "amd64": amd64_defaults,
> -             "i386": i386_defaults,
> -             "nodefaults": {}  }
> +defaults = {"armel": armel_defaults,
> +            "armel-linaro48": armel_linaro48_defaults,
> +            "armel-virtio": armel_virtio_defaults,
> +            "armhf": armhf_defaults,
> +            "armhf-linaro48": armhf_linaro48_defaults,
> +            "armhf-virtio": armhf_virtio_defaults,
> +            "aarch64": aarch64_defaults,
> +            "ppc": ppc_defaults,
> +            "ppcspe": ppcspe_defaults,
> +            "amd64": amd64_defaults,
> +            "i386": i386_defaults,
> +            "nodefaults": {}}
>  
>  xml_field_path = {
> -        "arch":         "project/buildimage/arch",
> -        "size":         "project/buildimage/size",
> -        "img":          "project/buildimage/img",
> -        "mem":          "project/buildimage/mem",
> -        "interpreter":  "project/buildimage/interpreter",
> -        "console":      "project/buildimage/console",
> -        "machine":      "project/buildimage/machine",
> -        "nicmodel":     "project/buildimage/NIC/model",
> -        "kinitrd":      "project/buildimage/kinitrd",
> -        "name":         "project/name"
> +    "arch": "project/buildimage/arch",
> +    "size": "project/buildimage/size",
> +    "img": "project/buildimage/img",
> +    "mem": "project/buildimage/mem",
> +    "interpreter": "project/buildimage/interpreter",
> +    "console": "project/buildimage/console",
> +    "machine": "project/buildimage/machine",
> +    "nicmodel": "project/buildimage/NIC/model",
> +    "kinitrd": "project/buildimage/kinitrd",
> +    "name": "project/name"
>  }
>  
> +
>  def get_random_mac():
> -    binaddr = [random.randint(0,255) for i in range(6) ]
> +    binaddr = [random.randint(0, 255) for i in range(6)]
>      binaddr[0] &= 0xfe
>      binaddr[0] |= 0x02
>      s = ["%02x" % x for x in binaddr]
>  
> -    return ':'.join (s)
> +    return ':'.join(s)
>  
>  
>  class ElbeDefaults(object):
> @@ -205,7 +206,7 @@ class ElbeDefaults(object):
>  
>          self.generic_defaults = archindep_defaults
>  
> -    def __getitem__( self, key ):
> +    def __getitem__(self, key):
>          if key in self.defaults:
>              return self.defaults[key]
>          if key in self.generic_defaults:
> diff --git a/elbepack/xmlpreprocess.py b/elbepack/xmlpreprocess.py
> index 26e27580..3b479fef 100644
> --- a/elbepack/xmlpreprocess.py
> +++ b/elbepack/xmlpreprocess.py
> @@ -5,12 +5,14 @@
>  
>  import sys
>  from lxml import etree
> -from lxml.etree import XMLParser,parse
> +from lxml.etree import XMLParser, parse
> +
>  
>  class XMLPreprocessError(Exception):
> -    def __init__ (self, message):
> +    def __init__(self, message):
>          Exception.__init__(self, message)
>  
> +
>  def xmlpreprocess(fname, output):
>      schema_file = "https://www.linutronix.de/projects/Elbe/dbsfed.xsd"
>      parser = XMLParser(huge_tree=True)
> @@ -18,17 +20,22 @@ def xmlpreprocess(fname, output):
>      schema = etree.XMLSchema(schema_tree)
>  
>      try:
> -        xml = parse(fname,parser=parser)
> +        xml = parse(fname, parser=parser)
>          xml.xinclude()
>  
>          if schema.validate(xml):
> -            xml.write(output, encoding="UTF-8", pretty_print=True, compression=9)
> +            xml.write(
> +                output,
> +                encoding="UTF-8",
> +                pretty_print=True,
> +                compression=9)
>              return
>  
>      except etree.XMLSyntaxError:
>          raise XMLPreprocessError("XML Parse error\n" + str(sys.exc_info()[1]))
> -    except:
> -        XMLPreprocessError("Unknown Exception during validation\n" + str(sys.exc_info()[1]))
> +    except BaseException:
> +        XMLPreprocessError(
> +            "Unknown Exception during validation\n" + str(sys.exc_info()[1]))
>  
>      # We have errors, return them in string form...
>      errors = []
> diff --git a/elbepack/ziparchives.py b/elbepack/ziparchives.py
> index 798d46e8..92c58866 100644
> --- a/elbepack/ziparchives.py
> +++ b/elbepack/ziparchives.py
> @@ -6,24 +6,25 @@
>  from zipfile import ZipFile, ZipInfo, ZIP_DEFLATED
>  import os
>  
> -def create_zip_archive( zipfilename, path, inarchpath ):
> -    with ZipFile( zipfilename, "w", ZIP_DEFLATED ) as zf:
> +
> +def create_zip_archive(zipfilename, path, inarchpath):
> +    with ZipFile(zipfilename, "w", ZIP_DEFLATED) as zf:
>          for root, dirs, files in os.walk(path):
> -            archpath = os.path.join( inarchpath, os.path.relpath( root, path ) )
> -            zf.write( root, archpath )
> +            archpath = os.path.join(inarchpath, os.path.relpath(root, path))
> +            zf.write(root, archpath)
>              for f in files:
> -                filename = os.path.join( root, f )
> +                filename = os.path.join(root, f)
>                  if not os.path.isfile(filename):
>                      continue
> -                archname = os.path.join( archpath, f )
> +                archname = os.path.join(archpath, f)
>                  # this hack is needed to avoid leading ./ in the archive
> -                while archname.startswith ('./'):
> +                while archname.startswith('./'):
>                      archname = archname[2:]
> -                zi = ZipInfo( archname)
> -                stat = os.stat( path + '/' + archname )
> +                zi = ZipInfo(archname)
> +                stat = os.stat(path + '/' + archname)
>                  zi.external_attr = stat.st_mode << 16
>                  # this hack is needed to use the external attributes
> -                # there is no way to set a zipinfo object directly to an archive
> -                with open (filename, 'rb') as f:
> -                    zf.writestr( zi, f.read () )
> -
> +                # there is no way to set a zipinfo object directly to an
> +                # archive
> +                with open(filename, 'rb') as f:
> +                    zf.writestr(zi, f.read())
> diff --git a/setup.py b/setup.py
> index 7a246824..413a0150 100644
> --- a/setup.py
> +++ b/setup.py
> @@ -14,13 +14,14 @@ from distutils.command.install import install
>  
>  from elbepack.version import elbe_version
>  
> +
>  def abspath(path):
>      """A method to determine absolute path
>  for a relative path inside project's directory."""
>  
>      return os.path.abspath(
> -            os.path.join(
> -                os.path.dirname(__file__), path))
> +        os.path.join(
> +            os.path.dirname(__file__), path))
>  
>  
>  class my_install(install):
> @@ -28,11 +29,11 @@ class my_install(install):
>      def run(self):
>          install.run(self)
>          if self.root:
> -            envvars = dict({"prefix": self.prefix, "DESTDIR": self.root}, **dict(os.environ))
> +            envvars = dict(
> +                {"prefix": self.prefix, "DESTDIR": self.root}, **dict(os.environ))
>          else:
>              envvars = dict({"prefix": self.prefix}, **dict(os.environ))
>  
> -
>          docs_dir = abspath("./docs/")
>  
>          output = subprocess.Popen("make install",
> @@ -42,30 +43,31 @@ class my_install(install):
>                                    env=envvars).communicate()[0]
>          print(output)
>  
> +
>  setup(name='elbe',
>        version=elbe_version,
>        description='RootFS builder',
>        author='Torben Hohn',
>        author_email='torbenh at linutronix.de',
>        url='http://elbe-rfs.org/',
> -      packages=['elbepack', \
> -                'elbepack.commands', \
> -                'elbepack.daemons', \
> -                'elbepack.daemons.soap', \
> -                'elbepack.debianize' ],
> -      package_data = {'elbepack': ["makofiles/*.mako", \
> -                                   "makofiles/debianize/*/*.mako", \
> -                                   "makofiles/debianize/*/*.install", \
> -                                   "makofiles/debianize/*/copyright", \
> -                                   "init/*.mako", \
> -                                   "init/*.xml", \
> -                                   "*.pub", \
> -                                   "default-preseed.xml", \
> -                                   "xsdtoasciidoc.mako"],
> -                          'schema': ["dbsfed.xsd", "xml.xsd"]},
> +      packages=['elbepack',
> +                'elbepack.commands',
> +                'elbepack.daemons',
> +                'elbepack.daemons.soap',
> +                'elbepack.debianize'],
> +      package_data={'elbepack': ["makofiles/*.mako",
> +                                 "makofiles/debianize/*/*.mako",
> +                                 "makofiles/debianize/*/*.install",
> +                                 "makofiles/debianize/*/copyright",
> +                                 "init/*.mako",
> +                                 "init/*.xml",
> +                                 "*.pub",
> +                                 "default-preseed.xml",
> +                                 "xsdtoasciidoc.mako"],
> +                    'schema': ["dbsfed.xsd", "xml.xsd"]},
>        scripts=['elbe'],
>        cmdclass={"install": my_install},
> -      data_files= [
> +      data_files=[
>            ('/usr/share/doc/elbe-doc/', glob.glob("docs/elbe-schema-reference*")),
>            ('/usr/share/doc/elbe-doc/', glob.glob("docs/elbeoverview-en*")),
>            ('/usr/share/doc/elbe-doc/contrib/linux', glob.glob("contrib/linux/*")),
> @@ -74,4 +76,4 @@ setup(name='elbe',
>                glob.glob("examples/includes/prod/*.xml")),
>            ('/usr/share/doc/elbe-doc/examples/includes/devel',
>                glob.glob("examples/includes/devel/*.xml"))],
> -)
> +      )
> diff --git a/test/updated.py b/test/updated.py
> index df3471e8..dc6a830b 100755
> --- a/test/updated.py
> +++ b/test/updated.py
> @@ -20,57 +20,61 @@ from spyne.server.wsgi import WsgiApplication
>  from suds.client import Client
>  from wsgiref.simple_server import make_server
>  
> +
>  class MonitorService (ServiceBase):
> -    @rpc (String)
> -    def msg (self, m):
> +    @rpc(String)
> +    def msg(self, m):
>          print(m)
>  
> +
>  class MonitorThread (threading.Thread):
> -    def __init__ (self, port):
> -        threading.Thread.__init__ (self, name="MonitorThread")
> +    def __init__(self, port):
> +        threading.Thread.__init__(self, name="MonitorThread")
>          self.port = port
>          self.server = None
>  
> -    def run (self):
> +    def run(self):
>          print("monitor ready :%s" % (self.port))
>          application = Application([MonitorService], 'monitor',
>                                    in_protocol=Soap11(validator='lxml'),
>                                    out_protocol=Soap11())
>          wsgi_application = WsgiApplication(application)
> -        self.server = make_server ("", int(self.port), wsgi_application)
> -        self.server.serve_forever ()
> +        self.server = make_server("", int(self.port), wsgi_application)
> +        self.server.serve_forever()
>  
> -def shutdown (monitor):
> +
> +def shutdown(monitor):
>      if monitor.server:
> -        monitor.server.shutdown ()
> +        monitor.server.shutdown()
> +
> +    monitor.join()
> +    sys.exit(0)
>  
> -    monitor.join ()
> -    sys.exit (0)
>  
> -oparser = OptionParser (usage="usage: %prog [options]")
> +oparser = OptionParser(usage="usage: %prog [options]")
>  
> -oparser.add_option ("--debug", dest="debug", action="store_true",
> -                    default=False,
> -                    help="run in debug mode")
> +oparser.add_option("--debug", dest="debug", action="store_true",
> +                   default=False,
> +                   help="run in debug mode")
>  
> -oparser.add_option ("--target", dest="target",
> -                    help="ip or hostname of target")
> +oparser.add_option("--target", dest="target",
> +                   help="ip or hostname of target")
>  
> -oparser.add_option ("--port", dest="port",
> -                    help="port of updated on target")
> +oparser.add_option("--port", dest="port",
> +                   help="port of updated on target")
>  
> -oparser.add_option ("--listen", dest="host",
> -                    help="interface ip")
> +oparser.add_option("--listen", dest="host",
> +                   help="interface ip")
>  
> -oparser.add_option ("--monitorport", dest="monitorport",
> -                    help="port used for update monitor")
> +oparser.add_option("--monitorport", dest="monitorport",
> +                   help="port used for update monitor")
>  
> -(opt,args) = oparser.parse_args (sys.argv)
> +(opt, args) = oparser.parse_args(sys.argv)
>  
>  if opt.debug:
>      import logging
> -    logging.basicConfig (level=logging.INFO)
> -    logging.getLogger ('suds.client').setLevel (logging.DEBUG)
> +    logging.basicConfig(level=logging.INFO)
> +    logging.getLogger('suds.client').setLevel(logging.DEBUG)
>  
>  if not opt.target:
>      target = "localhost"
> @@ -80,7 +84,7 @@ else:
>  if not opt.port:
>      port = "8088"
>  else:
> -    port = str (opt.port)
> +    port = str(opt.port)
>  
>  if not opt.monitorport:
>      monitorport = "8087"
> @@ -94,28 +98,28 @@ else:
>  
>  wsdl = "http://" + target + ":" + port + "/?wsdl"
>  try:
> -    control = Client (wsdl)
> -except:
> +    control = Client(wsdl)
> +except BaseException:
>      print(wsdl, "not reachable")
> -    sys.exit (1)
> +    sys.exit(1)
>  
> -monitor = MonitorThread (monitorport)
> -monitor.start ()
> +monitor = MonitorThread(monitorport)
> +monitor.start()
>  
> -time.sleep (1) # hack to ensure that monitor server was started
> +time.sleep(1)  # hack to ensure that monitor server was started
>  
>  try:
>      monitor_wsdl = "http://" + host + ":" + monitorport + "/?wsdl"
> -    control.service.register_monitor (monitor_wsdl)
> -except:
> +    control.service.register_monitor(monitor_wsdl)
> +except BaseException:
>      print("monitor couldn't be registered (port already in use?)")
> -    shutdown (monitor)
> +    shutdown(monitor)
>  
>  while 1:
> -    s = control.service.list_snapshots ()
> +    s = control.service.list_snapshots()
>      snapshots = []
>      try:
> -        snapshots = s.split (',')
> +        snapshots = s.split(',')
>  
>          print("select snapshot:")
>          i = 0
> @@ -123,14 +127,14 @@ while 1:
>              if s:
>                  print("  [%d] %s" % (i, s))
>              i = i + 1
> -    except:
> +    except BaseException:
>          print("no snapshots available")
>  
> -    sys.stdout.write ("% ")
> -    sys.stdout.flush ()
> +    sys.stdout.write("% ")
> +    sys.stdout.flush()
>  
>      try:
> -        n = int (input ())
> -        print(control.service.apply_snapshot (snapshots [n]))
> -    except:
> -        shutdown (monitor)
> +        n = int(input())
> +        print(control.service.apply_snapshot(snapshots[n]))
> +    except BaseException:
> +        shutdown(monitor)
> -- 
> 2.15.1
> 
> 
> _______________________________________________
> elbe-devel mailing list
> elbe-devel at linutronix.de
> https://lists.linutronix.de/mailman/listinfo/elbe-devel

-- 
Kind regards
Torben Hohn

Linutronix GmbH

Standort: Bremen

Phone: +49 7556 25 999 18; Fax.: +49 7556 25 999 99

Firmensitz / Registered Office: D-88690 Uhldingen, Bahnhofstr. 3
Registergericht / Local District Court: Amtsgericht Freiburg i. Br.; HRB
Nr. / Trade register no.: 700 806

Geschäftsführer / Managing Directors: Heinz Egger, Thomas Gleixner

A request from us: if you have received this e-mail in error, please
notify us by reply mail as soon as possible. Thank you!