http://xenbits.xen.org/people/andrewcoop/feature-levelling/feature-levelling-E.pdf

From IRC #xen

20:21 < sunkan> When using Linux PV domUs, are there any restrictions on CPU compatibility for live migration other than i386/x86_64?
20:22 < epretorious> sunkan: yes - both hosts must have the same set of capabilities.
20:23 < epretorious> sunkan: i.e., `cat /proc/cpuinfo`
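epretorious's `cat /proc/cpuinfo` check can be automated. A minimal sketch, assuming you have saved each host's /proc/cpuinfo to a file; the filenames and helper names here are hypothetical, not part of any Xen tooling:

```python
# Compare the CPU feature flags of two hosts to see whether their
# capability sets match (the condition for safe live migration above).
# "hostA.cpuinfo" / "hostB.cpuinfo" are hypothetical saved copies of
# each host's /proc/cpuinfo.

def parse_flags(cpuinfo_text):
    """Return the set of feature flags from /proc/cpuinfo text."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return set(line.split(":", 1)[1].split())
    return set()

def compare_hosts(text_a, text_b):
    """Return the flags each host has that the other lacks."""
    a, b = parse_flags(text_a), parse_flags(text_b)
    return a - b, b - a

if __name__ == "__main__":
    with open("hostA.cpuinfo") as fa, open("hostB.cpuinfo") as fb:
        only_a, only_b = compare_hosts(fa.read(), fb.read())
    print("only on A:", sorted(only_a))
    print("only on B:", sorted(only_b))
```

If both difference sets are empty, the hosts advertise identical feature sets; anything else is a potential migration hazard.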
20:26 < sunkan> epretorious: Ok, is it possible to "downgrade" a newer CPU to make it compatible with an older one?
20:26 < sunkan> epretorious: Like masking capabilities or something like that.
20:28 < andyhhp> in theory, yes
20:28 < andyhhp> in practice, more complicated
20:29 < andyhhp> http://xenbits.xen.org/people/andrewcoop/feature-levelling/feature-levelling-E.pdf
20:29 < andyhhp> I am literally right this moment trying to work on fixing it, but it is complicated
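The masking sunkan asks about is, in principle, exposed through the guest configuration. A hypothetical xl.cfg fragment using the libxl CPUID syntax; exact flag names and behaviour vary between Xen releases, and as the PDF above explains this does not solve feature levelling by itself:

```
# Hypothetical domU config fragment: start from the host's CPUID values
# and hide AVX and AES from the guest.  Supported flag names and syntax
# differ across Xen versions -- check xl.cfg(5) for your release.
cpuid = "host,avx=0,aes=0"
```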
20:30 < sunkan> Ok, well at this time it is for my home setup. But I think it may be needed at work some time as well.
20:31 < sunkan> For my knowledge, are there any differences between the requirements when comparing HVM/PV guests?
20:33 < sunkan> Sounds like there are some differences (reading the PDF)
20:34 < andyhhp> correct
20:35 < sunkan> I have heard that we might be going to some hybrid instead of PV guests, is that still ongoing work (or maybe completed, I'm not up to date on that)?
20:36 < andyhhp> that is PVH
20:37 < andyhhp> which falls into the "HVM" category as far as that document is concerned
20:37 < sunkan> Right, could not remember the acronym.. Is that something that is working yet?
20:37 < sunkan> I'm on Xen 4.4.1 (Debian Jessie)
20:38 < andyhhp> no - that is not working yet
20:40 < sunkan> I guess just trying some migrations doesn't guarantee anything even if they work. If the guest later runs something that uses a specific feature, it can still fail at that point?
20:41 < sunkan> I mean, when migrating between different CPUs and not masking anything.
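sunkan's worry can be phrased as a set-inclusion check: a migration is only safe if every feature the guest saw at boot also exists on the destination host, because the guest latched its view of CPUID when it started. A minimal sketch with hypothetical feature sets:

```python
# Sketch of the failure mode above: a migration can appear to "work",
# yet any feature the guest believes it has but the destination lacks
# can fault (e.g. #UD) the first time the guest actually uses it.

def unsafe_features(guest_features, dest_features):
    """Features the guest saw at boot that the destination cannot provide."""
    return set(guest_features) - set(dest_features)

# Hypothetical feature sets for illustration:
guest = {"fpu", "sse", "sse2", "avx"}   # visible via CPUID at guest boot
dest  = {"fpu", "sse", "sse2"}          # older destination host

missing = unsafe_features(guest, dest)
print("migration unsafe; guest may fault when using:", sorted(missing))
```

An empty result means the destination is a superset of what the guest has seen, which is the necessary condition; without masking, migrating from a newer CPU to an older one generally fails this check.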
20:49 < sunkan> Interesting read, but I don't understand it all. It sounds like at this time HVM is easier than PV, due to the possibility of intercepting CPUID to report different data to the domU?
20:50 < epretorious> sunkan, fwiw: i believe that vmware esx/vsphere performs cpu-masking.
20:50 < andyhhp> sunkan: HVM is easier, but is still no less broken in the current implementation
20:50 < epretorious> sunkan, i don't know if vmware esxi is still "free" or not but that might suit your environment.
20:50 < sunkan> epretorious: Yes I'm pretty sure they do, we use it in our DC but they don't have pure PV machines at all.
20:51 < epretorious> sunkan, is there a requirement that your guests be paravirtualized?
20:52 < epretorious> sunkan, or just that your cpu's don't support the virt extensions?
20:52 < epretorious> sunkan, iirc: vmware esxi doesn't require VT-x/SVM.
20:53 < sunkan> epretorious: No, but I have run PV machines for many, many years. It just feels better/smarter - but I think my opinions on that will probably have to change with PVH, as that was supposed to have some real benefits over plain PV.
20:54 < sunkan> epretorious: At first I did not have any virtualization support on the CPUs, but that changed several years ago.
20:56 < sunkan> For now I will have to live with restarting guests in my "private" environment..
20:57 < sunkan> Anybody know whether XenServer 6.2 or 6.5 does any checking before live migrating machines between pools?
20:57 < sunkan> Or is that up to the administrator to keep track of?
20:58 < andyhhp> checks are made
20:59 < andyhhp> but all the XenServer problems in that document still apply
21:00 < sunkan> Not sure I understood them perfectly, but I guess it is not completely safe to migrate HVM guests between different CPUs in 
                XenServer then?
21:01 < andyhhp> what hardware have you got in your pool?
21:01 < andyhhp> older hardware will function, newer won't
21:02 < sunkan> I don't remember off the top of my head. It's quite new Intel HP ProLiant blade servers.
21:02 < andyhhp> you will probably hit issues then
21:02 < andyhhp> depending on how different the two cpus are
21:03 < sunkan> I think we have had some issues already and did not think about this as a possible cause at the time. But I will mention it to my colleagues to think about in the future. We normally don't migrate much between pools, but since it is possible now, it is sometimes discussed as an option for some non-disruptive maintenance work.
21:05 < andyhhp> I am working to fix it in XenServer Dundee
21:05 < sunkan> Cool, we are not on Creedence yet though.. We usually have to take things a bit safe (slow)..
21:06 < andyhhp> it has been broken since forever, but doesn't show up on older hardware
21:07 < andyhhp> and now things are so bad that Xapi doesn't actually have a clue which features are actually advertised to the guest, so can't 
                 even evaluate whether a migration is safe or not
21:07 < sunkan> Ouch
21:07 < andyhhp> there are problems at all levels, including the hypervisor
21:08 < sunkan> Hope you can find and fix them all then :)
21:10 < andyhhp> I am rewriting it from scratch