15:01:16 <knesenko> #startmeeting oVirt Infra
15:01:16 <ovirtbot> Meeting started Mon Mar  3 15:01:16 2014 UTC.  The chair is knesenko. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:16 <ovirtbot> Useful Commands: #action #agreed #help #info #idea #link #topic.
15:01:25 <knesenko> eedri: ewoud here ?
15:01:29 <knesenko> bkp: here ?
15:01:29 * eedri here
15:01:33 <knesenko> #chair eedri
15:01:33 <ovirtbot> Current chairs: eedri knesenko
15:01:34 * bkp here
15:01:39 <knesenko> #chair bkp
15:01:39 <ovirtbot> Current chairs: bkp eedri knesenko
15:01:41 <eedri> doron_afk, you want to join?
15:01:55 <eedri> knesenko, obasan & dcaro are ooo
15:01:59 * doron here
15:02:06 <knesenko> #chair doron
15:02:06 <ovirtbot> Current chairs: bkp doron eedri knesenko
15:02:22 <knesenko> ewoud: hi how are you ? Joining the meeting ?
15:02:25 <knesenko> orc_orc: hey
15:02:32 <knesenko> orc_orc: want to join ?
15:02:44 <knesenko> anyone from kimchi ?
15:02:59 <doron> alitke: here?
15:03:22 <alitke> hi
15:03:45 <knesenko> #topic Hosting
15:03:48 <knesenko> ok hello all
15:03:54 <knesenko> let's start the meeting
15:04:03 <knesenko> few updates regarding the hosting ....
15:04:44 <knesenko> there was an outage in rackspace, so our servers were down for 15-20 min
15:04:54 <knesenko> but they are back and everything is ok now
15:05:26 <knesenko> rackspace said :
15:05:48 <ewoud> knesenko: yes
15:06:05 <knesenko> there was a cable issue
15:06:08 <knesenko> #chair ewoud
15:06:08 <ovirtbot> Current chairs: bkp doron eedri ewoud knesenko
15:06:38 <knesenko> #info there was an outage in rackspace. Issue fixed and everything works fine now
15:06:59 <knesenko> also we took rackspace03 and now we are using it as jenkins slave
15:07:11 <knesenko> #info rackspace03 was added as jenkins slave
15:07:15 <knesenko> eedri: anything else ?
15:07:43 <ewoud> knesenko: single slave or with virtualisation?
15:08:02 <knesenko> ewoud: regarding the outage ?
15:08:37 <ewoud> knesenko: I meant is rackspace03 a single slave or is it a virtualisation host with virtual slaves on top?
15:08:51 <knesenko> ewoud: single slave
15:08:55 <ewoud> knesenko: ok
15:09:00 <eedri> knesenko, i think we're not using it too much now
15:09:18 <eedri> knesenko, rackspace3 i mean, we tried using it for building engine, but now that we solved the build issue (open files)
15:09:25 <eedri> knesenko, we can utilize it more
15:09:32 <knesenko> eedri: ok
15:09:33 <ewoud> #info rackspace03 added as a bare metal jenkins slave
15:10:27 <knesenko> anything else on hosting ?
15:11:35 <knesenko> moving to Foreman and Puppet then
15:11:44 <knesenko> #topic Foreman and Puppet
15:11:50 <knesenko> q to ewoud
15:12:07 <knesenko> ewoud: can we easily reprovision slaves from foreman UI ?
15:13:00 <ccowley> all: hi, relative newbie here, gonna hang out and see what you're covering
15:13:51 <eedri> ccowley, hey, welcome!
15:14:06 <doron> ccowley: welcome to the infra session!
15:14:29 <knesenko> ccowley: welcome
15:14:30 <ewoud> knesenko: we should
15:14:44 <ewoud> ccowley: welcome
15:14:57 <knesenko> ewoud: did you try it ?
15:15:00 <ewoud> ccowley: any specific interest?
15:15:23 <ewoud> knesenko: not in ovirt infra foreman
15:15:28 <ewoud> knesenko: hardware or virtual?
15:15:37 <knesenko> ewoud: virtual ....
15:15:49 <knesenko> ewoud: and I assume will need both in some point
15:16:19 <knesenko> ewoud: but a pxe solution won't work for us, right, since we are talking about different networks here
15:16:34 <ccowley> ewoud: Many, Puppet and Foreman probably primarily, but I am pretty broad (and deep in many subjects).
15:16:48 <ewoud> knesenko: we could deploy a smartproxy which manages PXE there
15:16:53 <ewoud> ccowley: sounds familiar :)
15:16:59 <ccowley> ewoud: Currently consulting on an Openstack project in the day job to give you an idea
15:17:10 <eedri> ccowley, nice..
15:17:26 <ewoud> ccowley: and timezone wise?
15:17:57 <ccowley> ewoud: GMT+1 (France, but I am English)
15:18:02 <ewoud> knesenko: but wasn't there a template in rackspace ovirt to deploy?
15:18:16 <knesenko> ewoud: I have no idea ...
15:18:21 <knesenko> eedri: ^^ ?
15:18:32 * eedri reading
15:18:36 <knesenko> ok never mind, I am just asking ....
15:18:51 <eedri> ewoud, we don't have space
15:18:51 <ewoud> ccowley: I think we're mostly in that timezone as well, so that's convenient
15:18:54 <eedri> ewoud, for templates
15:18:54 <knesenko> anyway we will plan for new infra and will do it properly
15:18:59 <eedri> ewoud, that's the problem...
15:19:12 <eedri> ewoud, vms wouldn't start... due to the limitation of 10% free space
15:19:26 <eedri> ewoud, this is the reason we wanted to move to gluster storage
15:19:43 <eedri> ewoud, but then hit issues with rackspace...
15:19:46 <knesenko> eedri: ewoud also we need to create a puppet class to clean jenkins slaves workspace , right ?
15:19:59 <ewoud> knesenko: don't think so
15:20:05 <ewoud> knesenko: you may be thinking of cleaning /tmp
15:20:08 <eedri> ewoud, knesenko maybe we should revisit this issue, since it might take some time to get the new hardware
15:20:22 <eedri> ewoud, knesenko surely a couple of months at the least
15:20:31 <ewoud> knesenko: eedri what's the concrete use case of reprovision now?
15:20:47 <eedri> ewoud, what knesenko is saying is that we have rackspace vm slaves with not enough space
15:20:55 <ewoud> upgrade from f18 to f20 through reinstall?
15:20:58 <eedri> ewoud, and you can't control how many jobs will run on it
15:21:15 <eedri> ewoud, so we can add a cronjob via puppet (ugly) to clean old workspaces (3 days old?)
15:21:37 <eedri> ewoud, or if there is another way of limiting a certain slave space for workspaces via jenkins
15:21:49 <ewoud> eedri: jenkins has no built in mechanism for this?
15:21:58 <eedri> ewoud, i think till now we used isos
15:22:20 <eedri> ewoud, not sure, i think it can warn or take the slave offline if it doesn't have enough space
15:22:39 <eedri> ewoud, but i'm not sure it will actively run over data on workspace
15:22:48 <eedri> ewoud, or delete old workspaces
15:23:57 <knesenko> thoughts ?
15:24:03 <ewoud> I'm confused, do we have 2 issues now?
15:24:05 <eedri> knesenko, ewoud https://wiki.jenkins-ci.org/display/JENKINS/Workspace+Cleanup+Plugin
15:24:17 <ewoud> there was one of reprovision and another of filling up slaves?
15:24:23 <eedri> we could use this, but it will add more time per build (i.e. delete workspace after build is done)
15:24:26 <ewoud> or is it the same issue?
15:24:30 <eedri> ewoud, different issues
15:25:16 <eedri> so better to do it periodically on the slave or via the master with a groovy script
15:25:44 <eedri> ewoud, unless you have other proposal
15:26:01 <ewoud> eedri: I like the plugin with post-build cleanup
15:26:15 <eedri> ewoud, only downside is it will make builds run longer
15:26:23 <ewoud> won't be 100% failsafe, but it sounds like the easiest short term solution
15:26:28 <eedri> ewoud, yea
15:26:38 <eedri> ewoud, we can try it out and see how much time it adds
15:26:46 <ewoud> but other scripts might interfere with jenkins actually running
15:26:53 <knesenko> eedri: ewoud so we agreed on trying that plugin ?
15:26:56 <eedri> ewoud, the long term solution is adding more slaves, or scheduling reprovision of slaves
15:27:05 <ewoud> eedri: +1
15:27:13 <eedri> ewoud, not sure, if they clean only very old dirs, like a few days old
15:27:21 <eedri> ewoud, but using the plugin is safer
15:29:13 <knesenko> eedri: ewoud so plugin then ?
15:29:24 <eedri> knesenko, let's try it
15:29:30 <eedri> knesenko, add it to the todo list
15:29:56 <knesenko> #info try to use a workspace cleanup plugin for jenkins slaves
15:30:02 <knesenko> #info https://wiki.jenkins-ci.org/display/JENKINS/Workspace+Cleanup+Plugin
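[Editor's note: the cron-based alternative eedri floated above, deleting workspaces a few days old, could be sketched roughly as below. The paths and retention period are invented for the demo and are not the real oVirt slave layout.]

```shell
# Hypothetical cron cleanup for Jenkins slaves: delete top-level
# workspace directories untouched for more than N days.
cleanup_workspaces() {
    root="$1"; days="$2"
    # -mindepth/-maxdepth 1: only direct children, never the root itself
    # -mtime +N: last modified more than N*24h ago
    find "$root" -mindepth 1 -maxdepth 1 -type d -mtime +"$days" -exec rm -rf {} +
}

# demo on a throwaway tree instead of a real slave
demo=$(mktemp -d)
mkdir "$demo/old-job" "$demo/fresh-job"
touch -d '10 days ago' "$demo/old-job"   # backdate mtime (GNU touch)
cleanup_workspaces "$demo" 3
```

Run from cron on each slave, this trades the plugin's per-build delay for a small risk of deleting a workspace a rarely-run job still wants.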
15:30:11 <knesenko> anything else on puppet/foreman ?
15:30:15 <eedri> knesenko, another issue is gerrit hooks on gerrit
15:30:19 <eedri> knesenko, not sure it's related to puppet/foreman
15:30:32 <knesenko> eedri: not related ...
15:30:36 <knesenko> and dcaro is not here ...
15:30:48 <eedri> knesenko, yea, anyway it's worth adding an open issue
15:30:49 <knesenko> but we can discuss it after Jenkins topic
15:30:52 <eedri> knesenko, +1
15:30:56 <knesenko> #topic Jenkins
15:31:00 <knesenko> eedri: hello :) :)
15:31:12 <eedri> knesenko, ok, few issues i'm aware of
15:31:31 <eedri> knesenko, 1st - i changed the default behavior of gerrit trigger plugin to not fail on build failure
15:31:47 <eedri> knesenko, not sure why we didn't do it till now, it will prevent false positives on patches failing on infra issues
15:31:57 <eedri> knesenko, so now jenkins will only give -1 on unstable builds
15:32:45 <knesenko> eedri: +1
15:32:47 <eedri> knesenko, 2nd, like i said there are some open issues with new hooks installed, regarding bug-url, so dcaro should look into that once he's back
15:33:05 <eedri> knesenko, i think there should also be a wiki describing all existing hooks and their logic
15:33:14 <eedri> maybe there is one and i'm not aware of
15:33:36 <knesenko> #action dcaro create a wiki about gerrit hooks and their logic
15:33:55 <eedri> another issue was strange git failures..
15:34:13 <eedri> knesenko, which people sent to infra, not sure if all of them were caused by loop devices on rackspace vms
15:34:19 <eedri> but should also be looked into
15:34:40 <eedri> fabiand, i remember that some ovirt-node jobs were leaving open loop devices right?
15:34:46 <eedri> which forced us to reboot the slave
15:34:48 <knesenko> eedri: correct
15:34:56 <knesenko> eedri: I remember that too
15:35:19 <eedri> knesenko, there was also an selinux issue, not sure if it's resolved yet
15:35:20 <fabiand> eedri, in some circumstances that can happen yes, but also the ovirt-live job has this risk
15:35:25 <orc_orc_> I see those orphan loop devices when I get build failures ... perhaps a wrapper to do clean up is in order?
15:35:29 <eedri> knesenko, it was one of the minidells
15:35:44 <eedri> orc_orc, can it be cleaned while host is up?
15:35:46 <eedri> orc_orc, w/o reboot?
15:35:52 <orc_orc_> eedri: yes
15:36:01 <orc_orc_> sorry -- broken typing hand
15:36:02 <eedri> orc_orc, i believe that the test should handle it
15:36:09 <eedri> orc_orc, and post cleanup phase
15:36:13 <fabiand> eedri, rbarry is working on docker support, maybe that will help - in the Node case - with the loop device problem temporarily
15:36:20 <eedri> orc_orc, usually each job should be independent
15:36:24 <eedri> and not affect the slave for other jobs
15:36:28 <orc_orc_> you check to see if anything holds it open, and if not, can remove it
15:36:36 <fabiand> eedri, orc_orc - host needs to be rebooted when there are orphaned loop devices
15:36:51 <eedri> orc_orc, so each resource the job creates -> it should remove at the end
15:36:55 * fabiand had this check in some ovirt-node jobs, but back then noone was interested ..
15:37:03 <fabiand> eedri, sometimes that is just not possible
15:37:14 <eedri> fabiand, hmm
15:37:21 <fabiand> eedri, livecd-tools is quite good at removing orphans and it very often does; the problem is that in some cases it fails ..
15:37:27 <fabiand> but those cases are very hard to catch ..
15:37:30 <eedri> fabiand, so maybe the ideal solution for ovirt-node is to reinstall the vm each time it runs?
15:37:37 <eedri> fabiand, but thats not possible yet
15:37:42 <eedri> fabiand, with our infra
15:38:04 <fabiand> eedri, we can limit the number of times ovirt-node is built, that will reduce the risk of getting orphans
15:38:06 <eedri> fabiand, still need jenkins plugin for ovirt or foreman + connection to provision vms on the fly
15:38:15 <fabiand> yep, that would be great ..
15:38:39 <fabiand> eedri, on the longterm our build system will change, then the risk is mitigated ..
15:38:46 <fabiand> as we will use VMs to build node .
15:38:50 <eedri> fabiand, ok
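[Editor's note: the cleanup orc_orc suggests, detaching any loop device no process holds open, might look like the sketch below. As fabiand warns, devices a cancelled livecd-tools run leaves truly stuck may resist this and still force a reboot. The helper that parses `losetup -a` output is split out so it can be exercised without root.]

```shell
# Pull device names out of `losetup -a` style output, e.g.
#   /dev/loop0: [2049]:131 (/tmp/a.img)
list_loop_devs() {
    cut -d: -f1
}

# Detach every loop device no process is using (needs root).
# Sketch only: orphans from a killed livecd-tools build may not
# detach cleanly, which is why fabiand expects a reboot.
detach_idle_loops() {
    losetup -a | list_loop_devs | while read -r dev; do
        # fuser -s exits non-zero when nothing holds the device
        if ! fuser -s "$dev" 2>/dev/null; then
            losetup -d "$dev" && echo "detached $dev"
        fi
    done
}
```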
15:38:51 <orc_orc_> eedri: what blocker prevents spinning up a new VM per build from a gold master, and tearing down later, per build?
15:38:53 <fabiand> for now there ain't much ..
15:39:04 <eedri> orc_orc, well
15:39:09 <fabiand> eedri, we could have a dedicated vm for node building . then the VM could reboot after each build ..
15:39:11 <eedri> orc_orc_, which vm would you like to spin?
15:39:20 <eedri> orc_orc, are you talking about jenkins slave?
15:39:27 <eedri> orc_orc, or the job itself to add a vm?
15:39:50 <eedri> fabiand, that's also a possibility
15:40:03 <eedri> fabiand, we'll need to see how many vms we have, not sure current infra can support it
15:40:34 <fabiand> eedri, ack  - a global note: the orphans are more likely to appear when a job with livecdtools is canceled (ovirt-node or ovirt-live)
15:40:52 <orc_orc_> eedri: the last listed .. a job to spin up a VM and then move into it to build, with a teardown when done
15:40:52 <eedri> orc_orc, if you want the master jenkins to spin vms on demand, then you need api for the relevant cloud service
15:41:26 <eedri> orc_orc, ok, that means that we need the job to run on baremetal slave
15:41:37 <eedri> orc_orc, and spin a vm via ovirt/libvirt ?
15:41:55 <orc_orc_> eedri: yes
15:42:14 <eedri> orc_orc, needs coding to do that, i think fabiand has something with igord
15:42:16 <fabiand> eedri, orc_orc_ - once we are at that point we can also do igor testing (functional testing of node)
15:42:22 <fabiand> :)
15:42:28 <eedri> orc_orc, we can try doing that on the minidells
15:42:40 <eedri> orc_orc, since they are the only baremetal hosts we have, or on the rackspace 03
15:42:47 <fabiand> eedri, can't we hack our hosts to support nesting - should be fine if they are AMDs
15:42:57 <eedri> fabiand, we can
15:43:02 <knesenko> not sure we have AMD
15:43:02 <orc_orc_> I had forgotten igor although I did a CO ... I will try this locally
15:43:06 <knesenko> do we ?
15:43:17 <eedri> fabiand, but i'm not sure we want to do it on our minidells while network to tlv is 10mb
15:43:18 <orc_orc_> problem w nesting is performance I thought
15:43:28 <fabiand> I think it's working with intel as well, but AMD seems to be a bit more mature ..
15:43:29 <ccowley> fabiand: I do nesting on Intel too, it is no problem
15:43:48 <fabiand> orc_orc_, but IMO performance is not the critical point here ..
15:43:56 <orc_orc_> fabiand: ok
15:43:59 <eedri> fabiand, as long as its not hogging the build
15:44:03 <fabiand> eedri, agreed - that's why I wanted to bring in the nested thing ..
15:44:06 <eedri> fabiand, and causing the queue to build up
15:44:23 <eedri> fabiand, the whole infra is in kind of a "halt" status
15:44:32 <fabiand> I don't think the performance is that bad. software emulation would be bad, but nesting should be ok ..
15:44:32 <eedri> fabiand, limbo if you may call it
15:44:45 <fabiand> eedri, but yes - we can also change the node build schedule ..
15:44:47 <fabiand> :)
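[Editor's note: for the nesting fabiand and ccowley discuss, whether a host's KVM module allows nested guests is exposed by a module parameter. A small check, fed a temp file in the demo since this machine may not be a KVM host:]

```shell
# Nested KVM is controlled by a module parameter, typically
#   /sys/module/kvm_intel/parameters/nested   (or kvm_amd)
# holding "Y" or "1" when enabled. It can be enabled until reboot with
#   modprobe -r kvm_intel && modprobe kvm_intel nested=1
nested_enabled() {
    [ -r "$1" ] && grep -q '^[Y1]' "$1"
}

# demo against a stand-in file rather than the real sysfs path
fake=$(mktemp)
echo Y > "$fake"
nested_enabled "$fake" && echo "nesting on"
```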
15:44:59 <eedri> fabiand, since on the one hand we decided we might migrate out of rackspace, but we didn't get new hardware yet
15:45:05 <orc_orc_> eedri: If I set up a short term unit w 72G ram, and 6T of disk, but in low bandwidth would this be useful?
15:45:22 <eedri> orc_orc, anything will be useful for jenkins.ovirt.org :)
15:45:25 <orc_orc_> I have one spare sitting not yet in production
15:45:46 <eedri> orc_orc, doesn't have to be open for ssh also, you can connect it with jnlp
15:45:51 <eedri> like the minidells
15:46:21 <orc_orc_> eedri: it would need to locally mirror the git etc, as I could not take the load of repeated pulls
15:46:38 <orc_orc_> there is ssh access through two paths
15:46:57 <orc_orc_> but it is otherwise NAT isolated
15:47:24 <orc_orc_> C6 or F 19 or 20 base preferred?
15:47:48 <eedri> i think c6 is better
15:47:59 <orc_orc_> me too, but I am prejudiced :)
15:48:43 <orc_orc_> I cannot get to setting it up until next Monday but will do so then
15:48:56 <eedri> orc_orc, no problem
15:49:10 <orc_orc_> if RHEL 7 drops, would you prefer the beta instead?
15:49:20 <eedri> knesenko, ewoud we may need to do a meeting on infra status and what can we do in the meantime
15:49:25 <eedri> until new hardware is in place
15:49:27 <eedri> orc_orc, yea
15:49:36 <eedri> orc_orc, rhel7 might be great, there is an open ticket on it
15:49:39 <orc_orc_> eedri: ok -- I have most of that rebuild solved
15:49:50 <ewoud> eedri: +1 on meeting
15:50:14 <eedri> ewoud, maybe we should also schedule a recurrent monthly meeting
15:50:22 <eedri> to handle long term issues or tickets
15:50:36 <ewoud> eedri: sounds like a good idea
15:51:00 <orc_orc_> #info orc_orc_ to provision an isolated testing unit, preferably on rhel 7
15:51:05 <doron> eedri: or we can dedicate first or last 10 minutes of this meeting to long term issues.
15:51:25 <karimb> i buddies, i get a Caught event [NETWORK_UPDATE_VM_INTERFACE] from an other product when creating a 2nics vm. could it be caused by ovirt ?
15:51:52 <eedri> doron, yea, but past experience showed we end the meeting before we can review tickets, for e.g
15:51:56 <orc_orc_> karimb: there is a meeting active ... please stand by
15:52:40 <doron> eedri: some meetings take longer, and that's fine. getting everyone together is not an easy task.
15:53:01 <doron> so as long as we do it here, we should be able to clarify the relevant issues.
15:53:19 <eedri> doron, ok
15:54:15 <eedri> ok, let's continue
15:54:26 <eedri> knesenko, can you add info /action to what we agreed
15:54:54 <knesenko> eedri: I lost you ... was in the middle of ovirt-node update for fabiand
15:55:04 <fabiand> knesenko+
15:55:10 <knesenko> eedri: you can do it as well
15:55:14 <knesenko> eedri: #action
15:55:52 <eedri> #action orc_orc will try to add additional jenkins slave, possibly rhel7 beta
15:56:25 <eedri> #action agreed to try and think on adding nested vms or spawning vms on baremetal slaves
15:56:50 <eedri> these might be worth adding as trac ticket to follow up
15:57:18 <orc_orc_> * nod * as to trac -- I am likely to need help in getting local mirroring set up
15:57:35 <orc_orc_> I do not know how all the moving parts fit together
15:57:38 <eedri> did we agree to have a section in the meeting for long-term infra issues?
15:57:44 <eedri> or will we schedule another meeting?
15:57:49 <eedri> orc_orc, sure
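[Editor's note: the local mirroring orc_orc asks for help with is usually a bare `--mirror` clone that cron refreshes, so slaves fetch from the mirror instead of hammering the upstream. A self-contained sketch, with throwaway local repos standing in for the real gerrit-hosted ones:]

```shell
# Local git mirror sketch: upstream -> bare mirror -> slave clones.
# Paths are temp dirs for the demo; a real setup would mirror the
# upstream repos into something like /srv/mirror and refresh via cron.
src=$(mktemp -d)
mir=$(mktemp -d)

# stand-in for the upstream repository
git init -q "$src/engine"
git -C "$src/engine" -c user.email=ci@example -c user.name=ci \
    commit -q --allow-empty -m init

# one-time mirror clone; cron would then re-run the update line
git clone -q --mirror "$src/engine" "$mir/engine.git"
git -C "$mir/engine.git" remote update --prune >/dev/null

# slaves clone from the local mirror, not the upstream
git clone -q "$mir/engine.git" "$mir/checkout"
```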
15:58:09 <knesenko> eedri: I think we can do it in this meeting as well
15:58:21 <fabiand> there is already a ticket for nesting: https://fedorahosted.org/ovirt/ticket/78
15:58:34 <eedri> fabiand, +1
15:58:53 <eedri> fabiand, so it's just a matter of deciding how to push the infra, considering our current status
15:59:12 <fabiand> If I understood you correctly: yes
15:59:13 <fabiand> :)
15:59:57 <ccowley> fabiand: any particular reason to run with nesting, rather than something thinner (LXC?)
16:00:38 <fabiand> ccowley, yep - the loop device orphans - IIUIC the orphans will not vanish when we use lxc ..
16:01:41 <eedri> #action check offline jenkins slaves on rackspace and re-enable/reprovision
16:01:52 <ccowley> fabiand: OK, valid point - I am not fully up to speed with these things yet :-)
16:02:13 <fabiand> ccowley, :)
16:03:08 <eedri> ok, let's continue
16:03:31 * ewoud semi afk due to other work
16:03:34 <ewoud> ping me if needed
16:03:34 <eedri> knesenko, you want to talk about build system?
16:03:42 <knesenko> eedri: we are out of time ...
16:03:53 <knesenko> eedri: and I don't think its related
16:04:13 <eedri> knesenko, i would like to spend a few min reviewing the trac tickets
16:04:20 <eedri> knesenko, if people are willing to stay
16:04:39 <knesenko> I am here
16:04:47 <eedri> orc_orc, doron ewoud ?
16:05:02 <doron> still here
16:05:04 <ccowley> eedri: I'm here for a while longer
16:05:16 <orc_orc_> still here all day
16:05:19 <bkp> Hanging on
16:05:20 <eedri> knesenko, ok.. so let's do a quick scan
16:05:33 <knesenko> #topic Review tickets
16:06:26 <eedri> #link https://fedorahosted.org/ovirt/report/1
16:07:10 <eedri> suggestion - maybe worth doing the meeting with bluejeans/hangout?
16:07:12 <orc_orc_> a date sort, most recent first, is probably most useful to triage from?
16:07:26 <eedri> so we can review the tickets for e.g
16:07:29 <eedri> orc_orc, +1
16:08:02 <knesenko> eedri: irc is not enough ? :)
16:08:25 <orc_orc_> how about close the infra meeting, and restart as a triage meeting?
16:08:36 <ccowley> eedri: IRC is SOOOOO 90s, all the cool kids on Hangouts
16:08:47 <eedri> ccowley, or blue jeans
16:08:48 <orc_orc_> ccowley: but leaves no useful log
16:09:25 <eedri> knesenko, orc_orc looking at the list of open tickets, that might take some time
16:09:32 <ccowley> eedri: never heard of blue jeans (I am old  ... not really).
16:09:40 <orc_orc_> eedri: only one way to eat an elephant
16:09:44 <eedri> might be better to review them offline and maybe continue on the list
16:09:49 <eedri> or do a follow up meeting
16:10:03 <eedri> cause i see some are opened by dcaro
16:10:03 <ccowley> orc_orc_: true
16:10:05 <eedri> and he's not around
16:10:16 <eedri> unless there is a specific ticket anyone want to talk about?
16:10:18 <orc_orc_> http://bluejeans.com/trial/video-conferencing-from-blue-jeans?utm_source=google&utm_medium=cpc&utm_term=bluejeans&utm_campaign=Brand_-_BlueJeans_-_Exact&gclid=CKTbuZPg9rwCFbFaMgodxBcAfg   seems to be yet another non-free vidconf system
16:10:19 <eedri> and it is urgent
16:10:55 <eedri> we didn't hear from aline
16:11:03 <eedri> from kimchi
16:11:16 <eedri> on power pc hardware/vms
16:11:17 <orc_orc_> strangely there is not a priority column in that canned trac report query
16:11:28 * eedri doesn't fancy trac too much
16:11:53 <eedri> that's why i suggested a separate meeting for the tickets, seems it might take a while
16:12:10 <orc_orc_> eedri: there are worse ;)
16:12:26 <eedri> i think some can be closed
16:12:27 <eedri> for example
16:12:30 <eedri> https://fedorahosted.org/ovirt/ticket/100
16:12:31 <orc_orc_> C moved from bugzilla to Mantis, which is REALLY bad
16:12:33 <eedri> this was created
16:12:36 <eedri> JIRA is nice
16:12:38 <eedri> or redmine
16:13:21 <ccowley> eedri: Jira is great, even if you have to pay for it
16:13:25 <eedri> https://fedorahosted.org/ovirt/ticket/104 - this i think can also be closed
16:13:34 <eedri> but waiting for david
16:13:50 <orc_orc_> +1 close 100
16:13:52 <eedri> knesenko, maybe add action to send email to list? so people can review tickets and update status
16:13:58 <eedri> orc_orc, closed
16:15:26 <eedri> knesenko, can we close https://fedorahosted.org/ovirt/ticket/72
16:17:14 <orc_orc_> this looks like a better form for the canned query: but I cannot seem to save it:  https://fedorahosted.org/ovirt/query?status=assigned&status=new&status=reopened&col=id&col=summary&col=status&col=owner&col=type&col=priority&col=milestone&group=priority&order=priority
16:17:16 <knesenko> eedri: yes close
16:17:51 <k3rmat> I am running an oVirt cluster with iscsi as my shared storage. Creating, running, migrating, deleting, and HA on guests works great. As soon as I try to create a template from a powered off guest, my iscsi datacenter goes inactive for 10 minutes before the template creation fails/quits. Has anyone else experienced behavior like this?
16:17:53 <eedri> https://fedorahosted.org/ovirt/ticket/48
16:17:54 <knesenko> ok let's end the meeting. Seems like it will never end :)
16:17:57 <knesenko> #endmeeting