[elbe-devel] elbe-testing v1.9.23

Manuel Traut manuel.traut at linutronix.de
Fri Jul 7 00:05:07 CEST 2017


Hi Lukasz,

> >> >I will test how it's working next week. One thing I noticed already
> >> >is that when I connect to the initvm using "elbe initvm attach" I do
> >> >not get the console, i.e. no login prompt, which leaves me unable to
> >> >connect or log in.
> >>
> >> The above problem of not getting the login prompt now has a workaround,
> >> suggested today by Frank Erdrich [1]. Will there be a proper solution
> >> for it?
> >
> >I just tried to reproduce the issue. I created an lxc container with
> >Debian stretch, containing the following packages:
> >
> >||/ Name                    Version          Architecture     Description
> >+++-=======================-================-================-====================================================
> >ii  seabios                 1.10.2-1         all              Legacy BIOS implementation
> >ii  qemu-system-x86         1:2.8+dfsg-6     amd64            QEMU full system emulation binaries (x86)
> >
> >$ lsb_release -a
> >No LSB modules are available.
> >Distributor ID: Debian
> >Description:    Debian GNU/Linux 9.0 (stretch)
> >Release:        9.0
> >Codename:       stretch
> >
> >I used current elbe from 'elbe-testing' 1.9.24
> >
> >But I was able to get a login prompt using 'elbe initvm attach'.
> >
> What's the difference between our setups?
> 
> I:
> 1) created a VMware VM with Debian 9.0 (stretch) from the official repo
> 2) enabled virtualization of the CPU in the VM Processor Settings (Virtualize Intel VT-x/EPT or AMD-V/RVI; see the check after this list)
> 3) installed elbe v1.9.24 from "elbe-testing"
> 4) created a new initvm instance with "elbe initvm create"
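> 
> A quick way to confirm inside the guest that step 2 took effect (no
> output from the first command means VT-x/AMD-V is not passed through):
> 
> $ egrep -o 'vmx|svm' /proc/cpuinfo | sort -u   # CPU virt flags visible?
> $ ls -l /dev/kvm                               # kvm device node present?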
> 
> My versions of the seabios and qemu-system-x86 packages are exactly the same as those shown above, and so is the output of the lsb_release command. The initvm gets created correctly (no error messages during the build), but after elbe reboots the initvm at the end of the build process:
> 
> $ elbe initvm create
> ...
> Requesting system reboot
> [  641.468543] reboot: Restarting system
> mkdir -p .stamps
> touch .stamps/stamp-install-initial-image
> *****
> 
> it does not boot correctly. The qemu-system-x86 process uses ~150% CPU (apparently "does something" in the background), but I cannot connect to the initvm: I get the tmux screen with the green bar at the bottom, but no login prompt is presented:
> 
> $ elbe initvm attach
> /usr/bin/kvm -M pc \
>         -device virtio-rng-pci \
>         -device virtio-net-pci,netdev=user.0 \
>         -drive file=buildenv.img,if=virtio,bus=1,unit=0 \
>         -no-reboot \
>         -netdev user,ipv4,id=user.0,hostfwd=tcp::5022-:22,hostfwd=tcp::7587-:7588 \
>         -m 1024 \
>         -usb \
>         -nographic \
>         -smp `nproc`
> 
> 
> 
> [ElbeInitV0:initvm*                                                   "stretch" 08:35 06-Jul-17
> 
> 
> I can't submit jobs to it either: "elbe initvm submit", "elbe control list_projects" and other commands just hang and do nothing. That also applies to "elbe initvm stop", so the only way to proceed is to kill the tmux session.
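> 
> For what it's worth, "-M pc" is an alias that resolves to the newest
> i440fx machine type of the installed qemu (pc-i440fx-2.8 on qemu 2.8);
> the mapping can be checked with:
> 
> $ qemu-system-x86_64 -M help | egrep 'alias|default'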
> 
> The issue can be resolved by editing initvm/Makefile and changing the MACHINE value from "pc" to "pc-i440fx-2.6":
> 
> < MACHINE?=pc
> ---
> > MACHINE?=pc-i440fx-2.6
> 
> With this modification the initvm starts correctly, and I can attach and submit XML files to it as usual.
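> 
> Since the Makefile assigns MACHINE with "?=", the value could also be
> overridden per make invocation instead of editing the file, e.g.
> (the exact target name depends on the generated Makefile):
> 
> $ make MACHINE=pc-i440fx-2.6 <target>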

It sounds like an issue specific to the nested VMware setup.

I'd prefer not to modify the qemu parameters, because they are related to
reproducibility.

Is there a way to detect that we are running on a nested VMware setup? Then
we could at least give a hint that there is a known bug and describe the
workaround.
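
A minimal, untested sketch of such a check (it assumes the DMI vendor
string, which reads "VMware, Inc." inside VMware guests; alternatively,
systemd-detect-virt prints "vmware" there):

#!/bin/sh
# Heuristic: the DMI system vendor is "VMware, Inc." in VMware guests.
if grep -qi '^vmware' /sys/class/dmi/id/sys_vendor 2>/dev/null; then
    echo "Note: VMware guest detected." >&2
    echo "If the initvm hangs without a login prompt, set" >&2
    echo "MACHINE?=pc-i440fx-2.6 in initvm/Makefile (known issue)." >&2
fi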

Regards,

  Manuel



