How to Install Xen on Debian 7
Debian 7 updated: 7.1 released on June 15th, 2013. The Debian project is pleased to announce the first update of its stable distribution Debian 7 (codename wheezy). This update mainly adds corrections for security problems to the stable release, along with a few adjustments for serious problems.
Citrix XenCenter is a GUI client for managing XenServer/XCP hosts remotely. Using XenCenter, you can create virtual machines (VMs), access VM consoles, and configure VM storage and networking.
As of this writing, Citrix only offers a native Windows client for XenCenter, and does not seem to plan on releasing a Linux client any time soon. So if you would like the same functionality on Linux, you need an alternative, which is what this post is about. Fortunately, there is a pretty good open-source alternative to XenCenter on Linux called OpenXenManager. It allows users to manage XenServer and Xen Cloud Platform (XCP) hosts remotely via a GUI. You can install OpenXenManager on Debian or Ubuntu as follows.
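The concrete installation commands did not survive in this copy; a typical source install on Debian or Ubuntu looks like the sketch below. The dependency package names and the repository location are assumptions based on OpenXenManager being a GTK/Python application, so check the project's own README before running them.

```
$ sudo apt-get install git python-gtk2 python-gtk-vnc python-glade2 python-configobj
$ git clone https://github.com/OpenXenManager/openxenmanager.git
$ cd openxenmanager
```

Then launch the application as described in the project's README, and point it at your XenServer/XCP host's IP address.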
Once you are connected to a remote XenServer host, you will be able to see the resources (CPU, memory, storage) available on the host and access its virtual console via the OpenXenManager GUI. The GUI also walks you through creating a guest VM and creating a new storage repository. As an open-source clone of XenCenter, OpenXenManager implements pretty much the same functionality as XenCenter. The latest OpenXenManager even supports Citrix-specific features of XenCenter, such as activating a free XenServer license and installing XenServer updates.
Introduction

KVM is a full virtualization solution for Linux on x86 (64-bit included) hardware containing virtualization extensions (Intel VT or AMD-V). It consists of a loadable kernel module, kvm.ko, that provides the core virtualization infrastructure, and a processor-specific module, kvm-intel.ko or kvm-amd.ko. Debian also ships alternative virtualization solutions to KVM.
Installation

It is possible to install only QEMU and KVM for a very minimal setup, but most users will also want management tools, or even a GUI front-end.

For Debian/stretch, jessie-backports, and newer:
# apt install qemu-kvm libvirt-clients libvirt-daemon-system

For jessie and older:
# apt-get install qemu-kvm libvirt-bin

The libvirtd daemon will start automatically at boot time and load the appropriate kvm module, kvm-amd or kvm-intel, which is shipped with the Linux kernel Debian package.
If you intend to create VMs from the command-line, install virtinst (the package that provides the virt-install command used below).

In order to be able to manage virtual machines as a regular user, that user needs to be added to some groups.

For Debian/stretch, jessie-backports, and newer:
# adduser <youruser> libvirt
# adduser <youruser> libvirt-qemu

For jessie and older:
# adduser <youruser> kvm
# adduser <youruser> libvirt

You should then be able to list your domains:
# virsh list --all

libvirt defaults to qemu:///session for non-root users, so to reach the system instance as a regular user you'll need to do:
$ virsh --connect qemu:///system list --all

You can set the LIBVIRT_DEFAULT_URI environment variable to change this.

Creating a new guest

The easiest way to create and manage a VM guest is with the GUI application Virtual Machine Manager. Alternatively, you may create a VM guest via the command line.
Below is an example that creates a Squeeze guest named squeeze-amd64 (the ISO is a local copy of debian-live-6.0.10-amd64-gnome-desktop.iso from the cdimage.debian.org archive):

virt-install --virt-type kvm --name squeeze-amd64 --memory 512 --cdrom /iso/Debian/debian-live-6.0.10-amd64-gnome-desktop.iso --disk size=4 --os-variant debiansqueeze

Since the guest has no network connection yet, you will need to use the GUI to complete the install. You can avoid pulling the ISO by using the --location option, pointed at a Debian installer tree. To obtain a text console for the installation you can also provide --extra-args 'console=ttyS0':

virt-install --virt-type kvm --name squeeze-amd64 --location <debian-installer-tree-URL> --extra-args 'console=ttyS0' -v --os-variant debiansqueeze --disk size=4 --memory 512

For a fully automated install look into preseed or debootstrap.

Setting up bridge networking

Between VM guests

By default, QEMU uses macvtap in VEPA mode to provide NAT internet access or bridged access with other guests. Unfortunately, this setup does not allow the host to communicate with the guests.
Between VM host and guests

To allow communication between the VM host and the VM guests, you may set up a macvlan bridge on top of a dummy interface, as shown below. After the configuration, select interface dummy0 (macvtap) in bridged mode as the network source in the VM guest configuration.
modprobe dummy
ip link add dummy0 type dummy
ip link add link dummy0 macvlan0 type macvlan mode bridge
ifconfig dummy0 up
ifconfig macvlan0 192.168.1.2 broadcast 192.168.1.255 netmask 255.255.255.0 up

Between VM host, guests and the world

In order to allow communication between the host, the guests, and the outside world, you may set up a network bridge on the host. For example, you may modify the network configuration file /etc/network/interfaces to attach the ethernet interface eth0 to a bridge interface br0, similar to the example below. After the configuration, select bridge interface br0 as the network connection in the VM guest configuration.

auto lo
iface lo inet loopback

# The primary network interface
auto eth0
# make sure we don't get addresses on our raw device
iface eth0 inet manual
iface eth0 inet6 manual

# set up bridge and give it a static ip
auto br0
iface br0 inet static
    address 192.168.1.2
    netmask 255.255.255.0
    network 192.168.1.0
    broadcast 192.168.1.255
    gateway 192.168.1.1
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 0
    dns-nameservers 8.8.8.8

# allow autoconf for ipv6
iface br0 inet6 auto
    accept_ra 1

Managing VMs from the command-line

You can then use the virsh command to start and stop virtual machines. VMs can be generated using virt-install. For more details, see the libvirt documentation. Virtual machines can also be controlled using the kvm command directly, in a similar fashion to QEMU.
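On the guest side, attaching a VM to the host bridge amounts to an <interface> element in the guest's libvirt XML; a minimal sketch, assuming the br0 bridge from above (the virtio model is an assumed but common choice):

```xml
<!-- Guest NIC attached to the host bridge br0; MAC is auto-generated if omitted. -->
<interface type='bridge'>
  <source bridge='br0'/>
  <model type='virtio'/>
</interface>
```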
Below are some frequently used commands.

Start a configured VM guest 'VMGUEST':
# virsh start VMGUEST

Notify the VM guest 'VMGUEST' to shut down gracefully:
# virsh shutdown VMGUEST

Force the VM guest 'VMGUEST' off in case it hangs, i.e. graceful shutdown does not work:
# virsh destroy VMGUEST

Managing VM guests with a GUI

On the other hand, if you want to use a graphical UI to manage the VMs, you can use the Virtual Machine Manager.
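The graceful-then-forced sequence above can be wrapped in a small helper script; this is a sketch, where the stop_guest name and the 30-second default timeout are this article's own choices, not part of libvirt:

```shell
#!/bin/sh
# Ask a guest to shut down cleanly; fall back to destroy after a timeout.
stop_guest() {
    guest="$1"
    timeout="${2:-30}"              # seconds to wait before forcing power-off
    virsh shutdown "$guest" || return 1
    waited=0
    # Poll until the guest disappears from the running-domain list.
    while virsh list --name | grep -qx "$guest"; do
        waited=$((waited + 1))
        if [ "$waited" -ge "$timeout" ]; then
            virsh destroy "$guest"  # force off, equivalent to pulling the plug
            break
        fi
        sleep 1
    done
}
```

Usage: `stop_guest VMGUEST 60` waits up to a minute before forcing the guest off.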
Automatic guest management on host shutdown/startup

Guest behavior on host shutdown/startup is configured in /etc/default/libvirt-guests. This file specifies whether guests should be shut down or suspended, whether they should be restarted on host startup, and so on.
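A sketch of such a file, which starts guests at boot and shuts them down cleanly with the host; the values are illustrative, and each key is documented in the file's own comments:

```
# /etc/default/libvirt-guests (excerpt)
# Start guests that were running when the host last went down.
ON_BOOT=start
# Ask guests to shut down (rather than suspend) when the host stops.
ON_SHUTDOWN=shutdown
# Seconds to wait for a guest to shut down before giving up on it.
SHUTDOWN_TIMEOUT=300
```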
The first parameter defines where to find running guests. For instance:

# URIs to check for running guests
# example: URIS='default xen:/// vbox+tcp://host/system lxc:///'
URIS=qemu:///system

Performance Tuning

Below are some options which can improve the performance of VM guests.

CPU
Assign virtual CPU cores to dedicated physical CPU cores. Edit the VM guest configuration; assume the VM guest is named 'VMGUEST' and has 4 virtual CPU cores:
# virsh edit VMGUEST
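For a 4-vCPU guest on a 4-core/8-thread host, the resulting stanza might look like this sketch; the particular core IDs follow the rule of thumb described next and are illustrative only:

```xml
<vcpu placement='static'>4</vcpu>
<cputune>
  <!-- vcpu = virtual core ID, cpuset = physical core ID it is pinned to -->
  <vcpupin vcpu='0' cpuset='0'/>
  <vcpupin vcpu='1' cpuset='4'/>
  <vcpupin vcpu='2' cpuset='1'/>
  <vcpupin vcpu='3' cpuset='5'/>
</cputune>
```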
Add vcpupin lines after the '<vcpu>' line, where vcpu is the virtual CPU core ID and cpuset is the allocated physical CPU core ID. Adjust the number of vcpupin lines to match the vcpu count, and the cpuset values to reflect the actual physical CPU core allocation. In general, the upper half of the physical CPU core IDs are the hyperthreading siblings, which cannot provide full core performance but have the benefit of increasing the memory cache hit rate. A general rule of thumb for setting cpuset:

For the first vcpu, assign a lower-half cpuset number. For example, if the system has 4 cores and 8 threads, the valid cpuset values are 0 to 7, so the lower half is 0 to 3.

For the second vcpu, and every second vcpu thereafter, assign the corresponding higher-half cpuset number. For example, if you assigned the first cpuset to 0, then the second cpuset should be set to 4.

For the third vcpu and above, you may need to determine which physical CPU cores share the memory cache with the first vcpu's core, and assign those cpuset numbers to increase the memory cache hit rate.

Disk I/O

Disk I/O is usually the performance bottleneck due to its characteristics.
Unlike CPU and RAM, the VM host may not allocate dedicated storage hardware to a VM. Worse, the disk is the slowest component of the three. There are two types of disk bottleneck: throughput and access time. A modern hard disk can sustain about 100 MB/s of throughput, which is sufficient for most systems, yet it can only provide around 60 transactions per second (tps). On the VM host, you can benchmark different disk I/O parameters to get the best tps for your disk.
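One set of parameters worth benchmarking is the libvirt disk driver's cache and I/O mode in the guest XML; a sketch, in which the LVM device path and raw format are placeholders for your own storage:

```xml
<disk type='block' device='disk'>
  <!-- cache='none' bypasses the host page cache; io='native' uses Linux AIO -->
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source dev='/dev/vg0/vmguest'/>
  <target dev='vda' bus='virtio'/>
</disk>
```

Whether cache='none' beats the default depends on the workload and the underlying storage, which is why measuring tps on your own hardware matters.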