13:17:08 <fabiand> #startmeeting oVirt Node Weekly Meeting
13:17:08 <ovirtbot> Meeting started Tue Jul 15 13:17:08 2014 UTC.  The chair is fabiand. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:17:08 <ovirtbot> Useful Commands: #action #agreed #help #info #idea #link #topic.
13:17:16 <fabiand> #chair rbarry dougsland
13:17:22 * rbarry here
13:17:27 <fabiand> #chair rbarry dougsland apuimedo
13:17:32 <fabiand> morning rbarry
13:18:16 <rbarry> Morning, fabiand
13:18:59 <fabiand> mornin'
13:19:01 <fabiand> #topic Agenda
13:19:18 <fabiand> #info Action Item Review
13:19:29 <fabiand> #info Next Release (3.1)
13:20:51 <fabiand> #info 3.5 Feature Status
13:20:59 <fabiand> #info Other Items
13:21:17 <fabiand> #topic Action Item Review
13:21:54 <fabiand> Last meeting: http://resources.ovirt.org/meetings/ovirt/2014/ovirt.2014-07-08-13.03.txt
13:21:55 <fabiand> #link http://resources.ovirt.org/meetings/ovirt/2014/ovirt.2014-07-08-13.03.txt
13:22:12 <fabiand> #info fabiand and rbarry to test the ovirt-node iso
13:22:46 <fabiand> I've at least tested the core functionality
13:23:15 <fabiand> #info QE team discovered some issues
13:23:42 <rbarry> I didn't come up with anything in smoke testing, so I'm glad QE did
13:23:54 <rbarry> Well, not glad, but...
13:24:17 <rbarry> Happy they also tested it
13:24:30 <fabiand> #link http://lists.ovirt.org/pipermail/devel/2014-July/008142.html
13:24:43 <fabiand> rbarry, yep, I'm also happy that they did.
13:24:50 <fabiand> I think they didn't find anything serious
13:24:55 * fabiand just fwd'ed the email to devel@
13:26:04 <peetaur2> "Do you want Setup to configure the firewall? (Yes, No) [Yes]:"  what effect will this cause? Just a one time rule for the admin portal? Or is it beneficial in some other way in the future, as I'm adding services of some kind?
13:26:25 <fabiand> Let's continue …
13:26:40 <fabiand> #topic Next Release (3.1)
13:27:23 <fabiand> As nothing critical came up, I'd basically branch off
13:27:31 <fabiand> Any critical fixes must be backported.
13:29:14 <dougsland> +1
13:29:30 <fabiand> dougsland, hey - good morning as well.
13:29:42 <dougsland> fabiand, morning
13:29:55 <fabiand> #topic 3.5 Feature Status
13:30:48 <fabiand> #info generic-registration -- Needs some clarifying
13:31:03 <fabiand> #info hosted-engine-plugin -- Needs a maintainer
13:31:22 <fabiand> #info virtual-appliance -- Has a working jenkins build
13:31:39 <rbarry> Congratulations on that, fabiand
13:31:49 <fabiand> rbarry, thank you.
13:32:02 <fabiand> rbarry, it was a fight. It took six full days.
13:33:05 <fabiand> Anyhow.
13:33:32 <fabiand> We need a volunteer to take over maintainership of ovirt-node-plugin-hosted-engine.
13:33:38 <fabiand> And I'd like to speak about that next week.
13:34:20 <rbarry> I may be able to. I'll try to actually use it this week to get an idea of how it works
13:35:00 <dougsland> I haven't used this plugin yet, but if required I can do it also.
13:35:01 <fabiand> Cool, that is a nice initiative :)
13:35:07 <fabiand> Amazing!
13:35:15 * fabiand steps back ;)
13:35:26 <fabiand> Yes, let us sort it out next week.
13:35:37 <dougsland> ok o>
13:35:45 <fabiand> #topic Other Items
13:35:57 <fabiand> Besides that, I did some long-outstanding bug work
13:36:06 <fabiand> Basically updated the status of many bugs
13:37:25 <eedri> dcaro, ping
13:37:29 <dcaro> eedri: pong
13:37:41 <fabiand> If there is nothing else, rbarry dougsland I'd end this meeting for this week
13:37:47 <rbarry> Nothing from me
13:37:49 <dougsland> not from my side.
13:50:40 <peetaur2> feature request: make it wait instead of cancelling if, for whatever reason, yum is running.       ERROR ] Failed to execute stage 'Transaction setup': Existing lock /var/run/yum.pid: another copy is running as pid 26459.
13:50:51 <peetaur2> quite annoying to see that
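(A workaround sketch for the lock issue, assuming the standard /var/run/yum.pid location; this wait-and-retry wrapper is hypothetical, not an oVirt feature:)

    #!/bin/bash
    # Wait until no other yum instance holds the lock, then run engine-setup.
    # /var/run/yum.pid contains the pid of a running yum, if any.
    while [ -f /var/run/yum.pid ] && kill -0 "$(cat /var/run/yum.pid)" 2>/dev/null; do
        echo "yum is running as pid $(cat /var/run/yum.pid); waiting..."
        sleep 10
    done
    engine-setup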
14:08:10 <jhernand> derez: ping
14:40:19 <msivak> jhernand: http://fpaste.org/118115/05435206/ is this a known issue?
14:43:58 <jhernand> msivak: No, it isn't
14:44:15 <jhernand> msivak: Can you enable debug and show me the output?
14:46:14 <msivak> jhernand: a second
14:47:00 <jhernand> msivak: And what version of the SDK are you using?
14:47:30 <msivak> jhernand: that was master with metadata from the engine's master
14:47:45 <jhernand> msivak: Ok
14:50:20 <msivak> jhernand: here you go, there is some noise at the beginning, just ignore it.. http://www.fpaste.org/118120/40543576/
14:52:36 <msivak> jhernand: the same code works with 3.4 SDK
14:53:15 <jhernand> msivak: I see, there is a place where we should check for null and we don't
14:55:12 <jhernand> msivak: http://gerrit.ovirt.org/30115
14:55:23 <jhernand> msivak: Can you open a bug?
14:56:03 <msivak> jhernand: what product should I use, RHEV 3.5?
14:56:34 <jhernand> msivak: If you found this with the rhevm-sdk, then RHEV-M, if with ovirt-engine-sdk then oVirt
15:01:07 <msivak> jhernand: https://bugzilla.redhat.com/show_bug.cgi?id=1119812
15:04:48 <jhernand> msivak: Can you review the patch?
15:27:39 <peetaur2> engine-setup says "[ INFO  ] Still waiting for VDSM host to become operational..." and "[ ERROR ] Timed out while waiting for host to start. Please check the logs.". And "/etc/init.d/vdsmd status" says "VDS daemon is not running, and its watchdog is running". How do I fix this?
15:36:29 <ojorge> /etc/init.d/vdsmd start
15:36:37 <ojorge> manually start vdsmd
15:38:16 <peetaur2> it doesn't work; it just says it's not running if I run status after that
15:39:00 <peetaur2> http://bpaste.net/show/HrZnry3kru8D1naejzt7/
15:41:39 <Dick-Tracy> what's your vdsm.log say, peetaur2?
15:42:05 <peetaur2> nothing... 0 bytes
15:45:13 <Dick-Tracy> what's your distro?
15:45:29 <peetaur2> CentOS 6.5
15:45:57 <Dick-Tracy> the end of /var/log/messages then?
15:46:19 <peetaur2> Jul 15 17:46:05 bcserver12 respawn: slave '/usr/share/vdsm/vdsm --pidfile /var/run/vdsm/vdsmd.pid' died too quickly, respawning slave
15:46:21 <peetaur2> Jul 15 17:46:06 bcserver12 respawn: slave '/usr/share/vdsm/vdsm --pidfile /var/run/vdsm/vdsmd.pid' died too quickly for more than 30 seconds, master sleeping for 900 seconds
15:49:52 <peetaur2> and this is what it looks like if I run that command in quotes there:     http://bpaste.net/show/9TN4IHAvH5SPAXvPk8GL/
15:50:13 <peetaur2> what does it mean? "You must monkey_patch before importing thread or threading modules"
15:52:22 <peetaur2> I have tested oVirt version 3.3 and 3.4, and both do the same.
15:57:19 <peetaur2> seems related to this https://www.mail-archive.com/users@ovirt.org/msg19506.html      I have python-pthreading-0.1.3-2.el6.noarch which is what they listed there as the flawed one
16:09:56 <peetaur2> Dick-Tracy: so thanks for the help... the /var/log/messages line led to the command that led to finding that mail. It's installing 3.4 again now, and I will try tomorrow.
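(A sketch of the resulting check; whether a fixed python-pthreading build is available in the repos is an assumption, see the linked mail for the actual versions:)

    # Is the flawed python-pthreading build installed?
    rpm -q python-pthreading
    # Pull in a newer build if the repos carry one:
    yum update python-pthreading
    # Then retry the daemon:
    /etc/init.d/vdsmd start && /etc/init.d/vdsmd status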
17:06:07 <urthmover> I have a storage domain that reports having 863G free.  The Total Space is 913G.  Currently there are no virtual machines using this storage domain.  When I ssh into the host, du reports only 2.2M used.  How do I reclaim the missing space?
17:13:22 <Moe__> urthmover: you might want to send that question to the list so that it doesn't get lost here. Users with the answer may not be watching
17:19:53 <urthmover> Moe__: good suggestion.  The Users list?
17:29:21 <urthmover> Moe__: just submitted...thanks again
20:58:41 <urthmover> I'm getting ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand] (DefaultQuartzScheduler_Worker-96) Command GetCapabilitiesVDSCommand(HostName = jmini04, HostId = fbd6fd2c-1698-465b-9dad-9493845fce59, vds=Host[jmini04,fbd6fd2c-1698-465b-9dad-9493845fce59]) execution failed. Exception: VDSNetworkException: java.net.ConnectException: Connection refused
20:59:33 <urthmover> over and over.  Does anyone have a suggestion on what to look at or fix?  that is an error in /var/log/ovirt-engine/engine.log
20:59:55 <urthmover> ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand] (DefaultQuartzScheduler_Worker-95) Command GetCapabilitiesVDSCommand(HostName = jmini06, HostId = e4534f2b-e13b-4afa-87a2-020fae8eafbc, vds=Host[jmini06,e4534f2b-e13b-4afa-87a2-020fae8eafbc]) execution failed. Exception: VDSNetworkException: java.net.ConnectException: Connection refused
21:00:16 <urthmover> the vds=Host is changing .  I have 10 hosts.  gonna check the available storage domains
22:23:53 <urthmover> fyi I was having nfs mounting problems on boot
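(Two quick checks that fit this failure mode, assuming vdsm's default port 54321; hypothetical commands, not taken from the log:)

    # Is vdsmd listening where the engine expects it?
    netstat -tlnp | grep 54321
    # NFS mounts needed at boot should carry the _netdev option so they wait for the network:
    grep nfs /etc/fstab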
06:24:59 <thomas> Hi there!
06:32:49 <thomas> I have a little problem on hand with ovirt - over the last couple of weeks, I set up a cluster consisting of 4 servers and one engine host. Those servers have 2 system drives and 4 data store drives, bundled into individual RAIDs, and all machines have a 10Gbps network adapter. Everything worked OK and my setup is fully up and running now. In order to have GlusterFS talk to the servers via the faster card (the onboard one is used to connect to them), I
06:32:50 <thomas> have made an entry in each host's (and the engine's) hosts file and called them gluster1 - 4; that's how I also bound them to the cluster.
06:34:12 <thomas> Now, my problem is that I can not just click on a console file in order to connect to the hosts, because it will use the gluster* host names instead of the publicly available ones. I could easily avoid this by doing the hosts trick with the public host names, but how do I change a host's IP address?
06:34:17 <thomas> (aka host name)
07:12:14 <thomas> clear
07:26:34 <leaboy> jhernand: hi
07:27:43 <leaboy> jhernand: how could I change the ovirt-engine's IP?
07:29:11 <jhernand> leaboy: That used to be a problem, but I don't know if it is now. I don't have an answer. Please send a mail to users@ovirt.org
07:29:45 <leaboy> jhernand: ok, thanks
07:30:41 <thomas> jhernand: Any suggestions to the issue I posted above?
07:32:47 <jhernand> thomas: Sorry, I just entered the room, didn't see your question. Can you repeat?
07:37:04 <thomas> ok
07:37:58 <thomas> I have a little problem on hand with ovirt - over the last couple of weeks, I set up a cluster consisting of 4 servers and one engine host. Those servers have 2 system drives and 4 data store drives, bundled into individual RAIDs, and all machines have a 10Gbps network adapter. Everything worked OK and my setup is fully up and running now. In order to have GlusterFS talk to the servers via the faster card (the onboard one is used to connect to them),
07:38:05 <thomas> I have made an entry in each host's (and the engine's) hosts file and called them gluster1 - 4; that's how I also bound them to the cluster.
07:38:05 <thomas> Now, my problem is that I can not just click on a console file in order to connect to the hosts, because it will use the gluster* host names instead of the publicly available ones. I could easily avoid this by doing the hosts trick with the public host names, but how do I change a host's IP address (aka host name)?
07:39:26 <jhernand> thomas: You may create an additional network, and mark it as the "display" network, so that access to consoles will use it instead of the storage network.
07:41:17 <thomas> oh ok. I will have to try that, thanks!
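(For reference, a sketch of the hosts-file arrangement thomas describes; the addresses are made up, and the gluster* names point at the 10Gbps interfaces:)

    # /etc/hosts on each server and on the engine (hypothetical addresses)
    192.168.10.1   gluster1   # 10Gbps storage interface of server 1
    192.168.10.2   gluster2
    192.168.10.3   gluster3
    192.168.10.4   gluster4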
08:28:39 <peetaur2> does oVirt really require direct IO support?
09:36:37 <thomas> Hello! Has anybody got experience with CentOS 7 and ovirt?
09:37:34 <clarkee> not yet.
09:37:42 <thomas> hmk
09:37:42 <clarkee> you should give it a shot and report back :D
09:37:46 <thomas> I will!
09:37:57 <clarkee> i know a lot of people would be interested
09:39:59 <thomas> Do you have any idea on backup solutions for ovirt? Is there any web-interfacy way to do it or will I have to write my own scripts? What's best practice?
09:40:10 <clarkee> there's the API for backup
09:40:26 <clarkee> but there is *some* effort being put into it
09:40:56 <clarkee> this url is interesting for backing up the engine : http://www.ovirt.org/User_talk:Stkeimond/Backing_Up_And_Restoring_OVirt
09:41:59 <clarkee> and for vm backup you might want to bookmark this : http://www.ovirt.org/Features/Backup_Provider
09:43:08 <thomas> thank you very much
09:43:14 <thomas> this is just what I needed
10:44:03 <YamakasY_> I have some hosts that are non-responsive and the engine "thinks" there are vm's running on them...
10:44:13 <YamakasY_> I forgot my logic how to fix it
10:47:11 <jvandewege> clarkee: thomas There is a new supported utility that does all the work for you: engine-backup, see http://www.ovirt.org/Ovirt-engine-backup
10:47:35 <clarkee> oh yes
10:47:37 <clarkee> forgot that
10:47:58 <clarkee> thanks jvandewege  :D
10:48:02 <clarkee> thomas: hope you saw this :)
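(Basic usage of engine-backup; the file names are placeholders, and engine-backup --help lists the options your version actually supports:)

    # Back up the engine database and configuration:
    engine-backup --mode=backup --file=engine-backup.tar.bz2 --log=engine-backup.log
    # Restore onto a freshly installed engine host:
    engine-backup --mode=restore --file=engine-backup.tar.bz2 --log=engine-restore.log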
11:12:38 <YamakasY_> jvandewege: no vacation yet ?
11:13:22 <jvandewege> YamakasY_: depending on the mood and the weather it's either always vacation or not :-)
11:13:28 <peetaur2> Does oVirt really require direct IO support? It won't let me install on a file system that doesn't support direct IO... should I hack the script to skip that check? ;)
11:13:50 <jvandewege> YamakasY_: no, not yet, last 2wks of aug
11:14:05 <ojorge> peetaur2, the data storage domain's filesystem has some requirements....
11:15:34 <peetaur2> ojorge: yes it's a storage domain. But why does it need direct IO? It should use O_SYNC rather than direct for integrity, so why does it want direct IO?
11:15:52 <YamakasY_> jvandewege: ah good period indeed!
11:16:08 <ojorge> peetaur2, look at the requirements for the qemu raw images
11:17:06 <YamakasY_> we really should be able to keep the firewall from being set by the default installation
11:18:00 <YamakasY_> oh ah, I could use the rebooted function :)
11:18:51 <peetaur2> I suppose this link says that it is used for migration http://wiki.qemu.org/Migration/Storage
11:18:58 <peetaur2> I have no idea where a list of requirements like that would be though
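(A quick check for direct IO support, which is roughly what the installer's test amounts to; the target path is a placeholder:)

    # dd fails with "Invalid argument" on filesystems without O_DIRECT support (e.g. ZFS on Linux at the time):
    dd if=/dev/zero of=/path/to/storage/dd-test.img bs=4096 count=1 oflag=direct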
11:25:23 <thomas> jvandewege: Yes I saw it, thanks!
11:34:04 <ojorge> peetaur2, yeah i know, i had issues a year ago trying to use a zfs fs for a data storage domain; somewhere i found the reason but i don't remember it well.
11:36:22 <peetaur2> ojorge: the reason is that it's not implemented on zfsonlinux https://github.com/zfsonlinux/zfs/issues/224
11:36:31 <peetaur2> ojorge: what was your solution? to not use zfs?
11:37:36 <thomas> jvandewege: Is there such a nice thing for VMs?
11:38:40 <clarkee> if i change my display network from ovirtmgt, i can't start any hosts
11:38:51 <clarkee> says vmnet25 (or whatever) isn't mapped
11:40:45 <ojorge> peetaur2, no, i made zvols, then put ext4 on them
11:41:18 <YamakasY_> erm woodcrest is not "supported" anymore ?
11:42:00 <ojorge> peetaur2, somehow it works, and with enough uptime the server works even better
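(A minimal sketch of the zvol-plus-ext4 approach, reusing the zvol name that appears in the timing output below; size and mount point are made up:)

    zfs create -V 200G storage/vmpool2                  # carve a zvol out of the pool
    mkfs.ext4 /dev/zvol/storage/vmpool2                 # put ext4 on it
    mount /dev/zvol/storage/vmpool2 /export/vmpool2     # export this path as the storage domain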
11:42:22 <YamakasY_> I cannot approve a host
11:42:33 <ojorge> peetaur2, i had some bonnie results somewhere but...
11:42:34 <ojorge> /dev/zvol/storage/vmpool2:
11:42:34 <ojorge> Timing buffered disk reads: 920 MB in  3.04 seconds = 302.38 MB/sec
11:43:52 <peetaur2> ojorge: :)
11:44:18 <peetaur2> ojorge: I plan to use the basics... maybe lz4, but definitely snapshots and zfs send. What is the best strategy to do that with zvols? probably not one huge ext4 zvol, right?
11:46:19 <ojorge> peetaur2, of course not; keep a lot of free space for snapshots, and just in case you have to do something crazy
11:47:16 <peetaur2> heh yeah right... need some for snapshots
11:47:22 <YamakasY_> anyone using woodcrest cpu's ?
11:47:30 <peetaur2> but would you make one big ext4 to be simple, or would you split it on many zvols?
11:48:01 <ojorge> peetaur2, xfs is a bit better at write performance (bonnie gets you nice results; i even made a nice 3d graph of xfs and ext4 with different block sizes, etc.) and in the long term, but i had issues with the stock kernel of centos 6.4
11:48:45 <ojorge> peetaur2, ext4 just works; later, when i had moved all my vm's to ext4, i found the xfs bug patched in the centosplus kernel, too late
11:51:11 <ojorge> peetaur2, anyway YMMV ...
11:52:25 <peetaur2> I am still wondering about your opinion on one zvol + free space  vs many zvols
11:52:43 <peetaur2> it would be far easier to manage one, if it didn't have side effects, like taking super long to zfs send
11:53:00 <ojorge> peetaur2, with zfs you may add some nice ssd disks for caching (i'm in the process of doing that now), even fusion io cards if you have enough cash
11:53:40 <peetaur2> the machine I'm building it on has hwraid with bbu and does 150k iops :D so I think I don't need that. ;)
11:53:41 <ojorge> peetaur2, oh, you want to use one zvol for each vm ?
11:54:18 <peetaur2> if ovirt had built-in support for making zvols on the fly, that would be ideal... but I just mean if I make a bash script that does replication every 20 min, it'll be easier to manage one volume instead of many
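(A minimal sketch of the 20-minute replication script peetaur2 describes, assuming a single zvol and a reachable backup host; all names are hypothetical:)

    #!/bin/bash
    # Incremental zfs send of the newest snapshot pair; run from cron every 20 minutes.
    SRC=storage/vmpool2
    DST=backup/vmpool2
    HOST=backuphost
    new="rep-$(date +%Y%m%d%H%M)"
    last=$(zfs list -H -t snapshot -o name -s creation -d 1 "$SRC" | tail -n 1 | cut -d@ -f2)
    zfs snapshot "$SRC@$new"
    if [ -n "$last" ]; then
        zfs send -i "@$last" "$SRC@$new" | ssh "$HOST" zfs recv "$DST"   # incremental update
    else
        zfs send "$SRC@$new" | ssh "$HOST" zfs recv "$DST"               # first full copy
    fi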
11:55:17 <ojorge> peetaur2, your bbu can't be larger than an ssd, and when your raid card bitrots your fs slowly and painfully, eating silently through your backups, well... you will never use hardware raid again.
11:55:37 <ojorge> peetaur2, but again, YMMV
11:56:20 <peetaur2> yeah I don't like hwraid :D but the 'in flight' bitrot is likely small, and the on disk bitrot will be fixed by zfs, so I think it's fine.
11:56:30 <ojorge> peetaur2, once you have enough resources, i decided flexibility was the key in my case
11:56:43 <peetaur2> if only somebody sane built a zfs HBA ... with a BBU write cache, there would be absolutely no advantage to hwraid.
11:57:41 <YamakasY_> jvandewege: ping
11:59:13 <oved_> fabiand, what bridge should we use? do you have one?
11:59:19 <oved_> fabiand, or shall I start mine?
11:59:37 <ojorge> peetaur2, if you can, just get a couple of old intel 320s for your zil
12:00:14 <ojorge> peetaur2, just for testing ....
12:14:45 <YamakasY_> damn why are some centos mirrors so damn slow
12:15:19 <clarkee> haha
12:15:22 <clarkee> "internet"
12:16:17 <YamakasY_> clarkee: no internalnet is what we need
12:16:43 <YamakasY_> I don't want to have a large centos mirror too atm
12:17:01 <clarkee> i have one.
12:17:03 <clarkee> it's handy.
12:17:05 <clarkee> mad handies.
12:21:41 <YamakasY_> clarkee: I have one for Ubuntu... that one is about 200GB large or so :)
12:21:50 <YamakasY_> didn't check last time... I thought I reduces it to 100GB
12:21:56 <YamakasY_> *reduced
12:23:00 <jvandewege> YamakasY_: you can setup a mirror but only mirror those parts you need. Got a local mirror of centos-6.5, x86_64 os+updates+scl = ~5-6G probably less
12:24:22 <jvandewege> YamakasY_: rsync  -avSHP --delete --exclude "local*" --exclude "isos" --exclude "addons" --exclude "centosplus" --exclude "contrib" --exclude "cr" --exclude "extras" --exclude "fasttrack" --exclude "os" --exclude "xen4" --exclude "i386" --exclude "drpms" --bwlimit=1000 rsync://mirror.1000mbps.com/centos/6.5/ /var/www/repo/6.5/
12:24:26 <YamakasY_> jvandewege: yeah, that's true... will do maybe, but as I use IPA and some stuff it's wiser to have the whole mirror maybe...
12:24:36 <YamakasY_> jvandewege: ah kewl!
12:24:44 <YamakasY_> jvandewege: going to test 7 next week!
12:24:49 <clarkee> nice nice
12:24:56 <clarkee> YamakasY_: i have a 6 and 7 mirror here
12:24:57 <jvandewege> YamakasY_: I exclude 'os' here because I used the DVD for that
12:25:00 <YamakasY_> jvandewege: thanks
12:25:08 <clarkee> saves me a lot of time, bandwidth i don't care about but time i do :D
12:25:21 <YamakasY_> jvandewege: too noisy here in the serverroom :P
12:25:44 <YamakasY_> I can try to put a DVD into the SAS slot of the blade
12:25:49 <jvandewege> timing is 10min for external repos, less than 4 for internal :-)
12:25:49 <YamakasY_> less clables
12:26:05 <YamakasY_> nice indeed
12:26:12 <YamakasY_> for Ubuntu I use one... great!
12:26:35 <YamakasY_> my installer is fast, really... but sometimes with yum update it's damn slow
12:26:59 <clarkee> also yum-presto++
12:27:46 <YamakasY_> nah, we need to wait for the right katello integration, as jvandewege normally would say :)
12:30:12 <YamakasY_> damn these hosts had some package issues
12:30:20 <YamakasY_> now they are up again :)
12:30:32 * YamakasY_ is fintuning 100% every day
12:30:37 <YamakasY_> finetuning
12:30:47 <YamakasY_> nah I mean 100% of the time per day
12:30:48 <YamakasY_> hehe
12:31:01 <YamakasY_> should puppetize that part more
12:31:39 <YamakasY_> the only thing that's actually slow is the db updates for yum... the mirrors are fast most of the time
12:32:37 <YamakasY_> ewoud: are you @ xentower now ? we should ban you hehe
12:33:57 <YamakasY_> jvandewege: ok, mirrors are fast, dbupdates are slow
12:34:01 <YamakasY_> seen that more often
12:38:46 <YamakasY_> does anyone have a solution for why an intel 5180 cannot be added to the cluster? others are running there
12:41:56 <jvandewege> YamakasY_: you're sure virt is on in the bios as is nx?
12:47:29 <YamakasY_> jvandewege: that's a good question... I thought it was... but CPU's have been changed on that machine a while back as I was waiting for some other things too
12:53:40 <YamakasY> jvandewege: it's on
13:03:57 <bkp> sbonazzo Ping
13:04:06 <sbonazzo> bkp: hi
13:04:52 <bkp> sbonazzo: I got your status updates (I was proactive this week). Do you want me to just post the good bits in today's meeting to save time?
13:05:03 <bkp> Then you can add/elaborate on whatever.
13:05:21 <sbonazzo> bkp: no more news than the sent status
13:05:37 <sbonazzo> bkp: we just need a quick update on blockers
13:05:56 <bkp> sbonazzo: Which the individual teams can address.
13:06:21 <bkp> So I will post your status, and you can jump in and make comments if needed.
13:06:41 <sbonazzo> bkp: ok, thanks :-)
13:07:59 <SvenKieske> sbonazzo: just added another possible blocker to the 3.5 tracker bug
13:08:32 <SvenKieske> unfortunately I got no time to test 3.4.3 before the release :/
13:11:16 <sbonazzo> SvenKieske: yes, makes sense; not reproduced yet btw
13:11:21 <kobi> Hi, does any of you have problems with the UI on the latest?
13:18:07 <YamakasY> jvandewege: mhh, no clue why it doesn't support the cpu
13:24:16 <yzaslavs|mtg> rnori: hi
13:38:02 <YamakasY> jvandewege: fixed... some package update issue
13:39:04 <YamakasY> or package mismatch
13:40:59 <derez> :wq
13:42:32 <YamakasY> derez: you need to do that in vi!
13:50:54 <lvernia> bkp: Will be a little late!
13:51:09 <bkp> lvernia: What?
13:51:15 <bkp> Slacker!
13:51:16 <bkp> :)
13:51:25 <lvernia> You know me :)
13:52:52 <sbonazzo> msivak: hi, new build of ovirt-scheduler-proxy for 3.4.3 GA?
13:53:06 <msivak> sbonazzo: no changes afaik
13:53:33 <sbonazzo> msivak: ok, thanks
13:54:31 <msivak> sbonazzo: hmm now that I think about it.. how much time do I have? :)
13:55:05 <sbonazzo> msivak: yum repo composition is tomorrow morning, it would be nice to have a build today :-)
13:55:24 <msivak> sbonazzo: can you check what is in the nightly?
13:55:37 <sbonazzo> msivak: for 3.4?
13:55:42 <msivak> sbonazzo: yes
13:56:21 <sbonazzo> msivak: ovirt-scheduler-proxy-0.1.4-1.fc19.noarch.rpm, looks like the same version as the last 3.4.2 GA release
13:57:08 <msivak> sbonazzo: ok, I will start the build in a second
14:00:56 <bkp> Two-warning for the oVirt Weekly Sync...
14:01:06 <bkp> *Two-minute
14:03:55 <ovirtbot> bkp: Error: Can't start another meeting, one is in progress.
14:04:11 <bkp> #endmeeting
14:04:33 <bkp> fabiand You need to stop your meeting
14:04:44 <fabiand> oh ...
14:04:49 <fabiand> #endmeeting