14:59:11 #startmeeting oVirt Infra
14:59:11 Meeting started Mon Nov 11 14:59:11 2013 UTC. The chair is knesenko. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:59:11 Useful Commands: #action #agreed #help #info #idea #link #topic.
14:59:21 #chair dcaroest eedri o
14:59:21 Current chairs: dcaroest eedri knesenko o
14:59:24 #chair dcaroest eedri obasan
14:59:24 Current chairs: dcaroest eedri knesenko o obasan
15:00:07 #unchair o
15:00:07 Current chairs: dcaroest eedri knesenko obasan
15:00:18 Rydekull: here ?
15:00:26 #topic Hosting
15:00:32 let's start
15:00:41 hello all !
15:00:48 I hope you are doing well
15:00:53 so ...
15:00:53 hi! ovirt03 is still unreachable :(
15:00:57 dcaroest: :(
15:01:04 that's what I tried to ask
15:01:12 ))
15:01:24 so we are still blocked on the rackspace migration
15:01:38 knesenko, what's the status?
15:01:47 knesenko, why do we keep getting problems with the ovirt03 server?
15:02:19 knesenko, problems with the hardware there?
15:02:32 eedri: because there are some network issues there
15:03:08 yep, something is messed up there
15:03:08 i tried to configure a bridge on it, and i was disconnected
15:03:18 and since then we can't connect to the server
15:03:30 knesenko, didn't they move it to the same network as 1/2?
15:03:33 also there were some issues with the VPN connection
15:03:42 yes they are ..
15:04:02 but something went wrong with the bridge creation and we lost connectivity
15:04:24 so dcaroest didn't manage to reboot the server from PM
15:04:30 knesenko, what are they saying?
15:04:41 the rackspace guys found the issue with PM, and they fixed that
15:04:59 and still we can't connect to ovirt3
15:05:02 right dcaroest ?
15:05:27 they rebooted the machine, but after that we are not able to connect through ssh or console or anything yet
15:05:41 dcaroest, so they have an open ticket on it now?
15:06:05 I'm in the middle of writing it
15:07:33 #info we need to push more on the ovirt03 fix
15:07:50 now they have an open ticket
15:07:52 #action dcaroest reply to the rackspace ticket and ask them to fix the issue
15:07:57 dcaroest: thanks
15:08:22 obasan: anything new with monitoring ?
15:08:23 knesenko, what's the next step once it's ready?
15:08:31 knesenko, nope. had a busy week.
15:08:32 I do not know the rackspace approach -- is there a way to get an 'out of band' console?
15:08:47 eedri: add it into the setup, add gluster storage
15:08:55 and start migrating VMs to it
15:09:02 obasan: ok thanks
15:09:30 orc_orc: like a power management console ?
15:09:48 well -- like: virsh consle acme
15:09:56 console*
15:10:16 so one could self-repair a bad set of network settings
15:10:42 #chair orc_orc
15:10:42 Current chairs: dcaroest eedri knesenko obasan orc_orc
15:10:53 orc_orc: we have something, but it doesn't work
15:11:00 rackspace should fix it
15:11:07 if you rent dedicated servers you should get IPMI or a similar solution
15:11:17 knesenko: hmmm ..
15:12:20 ok what next here ?
15:12:20 orc_orc: they use DRAC for that, but we can't even reach the DRAC web
15:12:40 dcaroest: perhaps ask for a proxy to it?
15:13:48 orc_orc: let's see what they have to say
15:13:54 * nod *
15:14:11 orc_orc: well... we have the vpn and that works (I can connect to ovirt02/01) but there's a little mess with the networks and we can't reach 03 nor its DRAC interface, they will have to sort that out anyhow
15:15:35 ok anything else on hosting ?
15:16:32 #topic Puppet and Foreman
15:16:43 dcaroest: the stage is yours
15:17:33 unfortunately nothing new here :/, the r10k patch is still up in the air
15:17:40 ok
15:17:52 I noticed that we don't have an epel repo class
15:17:57 Am I right ?
15:18:58 let me see, but I believe you ;)
15:19:26 I didn't find it
15:20:41 I don't think there's any
15:21:00 nope, there isn't
15:21:25 #action knesenko create an epel repository puppet class
15:21:37 anything else here ?
15:21:39 eedri: ?
15:21:41 obasan: ?
15:21:46 knesenko, not on my end
15:21:50 hosting?
15:21:51 ... I have been working on getting ovirt nested on CentOS 6, so I can have local puppet and foreman under ovirt instances, backed by a kernel with the kvm_intel nested module enabled
15:21:51 orc_orc: Error: ".." is not a valid command.
15:21:56 ... I have been working on getting ovirt nested on CentOS 6, so I can have local puppet and foreman under ovirt instances, backed by a kernel with the kvm_intel nested module enabled
15:23:04 orc_orc, plz tell me if you have problems. I have experience with this
15:23:23 obasan: thank you -- I shall
15:23:36 ok good
15:23:44 #topic Jenkins
15:23:49 hello eedri
15:23:50 :)
15:24:40 knesenko, sorry, in parallel here
15:25:11 I have a proposal: http://ci.openstack.org/jenkins_jobs.html
15:26:02 dcaroest, this is a neat solution that we're already familiar with. it could be helpful if we scale our env
15:26:10 It's something that we use at work, it lets you define jenkins jobs in yaml files (that can include each other)
15:26:39 I changed a lot of jobs the other day to use the whitelists manually... I don't want to do that again ;)
15:28:32 dcaroest: agree
15:28:41 abjections ?
15:28:46 objections ?
15:28:50 dcaroest: +1
15:29:00 dcaroest, +1
15:29:39 #action create basic templates for jenkins jobs based on http://ci.openstack.org/jenkins_jobs.html
15:30:22 +1
15:30:43 will simplify our jobs management immensely
15:32:36 orc_orc: have you seen Barbapapa ?
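[editor's note: a minimal sketch of the jenkins-job-builder YAML format dcaroest proposed above (see http://ci.openstack.org/jenkins_jobs.html). The job name, git URL, branches, and build step are all illustrative placeholders, not actual oVirt jobs:]

```yaml
# Hypothetical jenkins-job-builder definition: a template plus a project
# that expands it once per branch. Names and URLs are illustrative only.
- job-template:
    name: 'ovirt-engine_{branch}_unit-tests'
    scm:
      - git:
          url: 'git://gerrit.ovirt.org/ovirt-engine'
          branches:
            - 'origin/{branch}'
    builders:
      - shell: 'mvn -q test'

- project:
    name: ovirt-engine
    branch:
      - master
      - '3.3'
    jobs:
      - 'ovirt-engine_{branch}_unit-tests'
```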
15:32:53 #info knesenko is working on a new upgrade job that will support parameters
15:33:14 YamakasY: Barbapapa was mentioned before -- it is unfamiliar to me, but I bookmarked it
15:33:22 The code is ready, ewoud had some comments so I need to fix them ...
15:33:44 it looked as though my grandchildren and I should watch Barbapapa together
15:34:57 orc_orc: barbapapa is kewl!
15:35:02 yesterday he was a boat
15:35:10 knesenko, will it run by default on nightlies?
15:35:16 knesenko, if it isn't given params?
15:35:22 eedri: yes
15:35:25 knesenko, +1
15:35:46 actually the plan is that the publish job will trigger this job
15:36:07 knesenko, +1 with an exception
15:36:07 eedri: but yes, there are default values
15:36:23 knesenko, after the publish job is done, there is another script that runs on resources.ovirt.org
15:36:32 knesenko, that takes around 10-15 to recreate the repos
15:36:38 hm ...
15:36:40 ok ...
15:36:48 knesenko, so we'll need to see how to make sure we're taking the latest rpms
15:37:02 knesenko, trigger by URL might be an option
15:37:17 knesenko, if we can monitor changes to the yum repo for e.g.
15:39:21 eedri: will think about it
15:39:36 eedri: watch the timestamp on the repodata directory in question and it will tell you when there is a new transaction set
15:39:53 orc_orc, yea, might be a good indication
15:41:18 eedri: that is a pull (polling) method - a push method would be to have a local select on the directory and 'curl' a rebuild request out to the scheduler
15:42:00 orc_orc: i will be glad if you will help me with that
15:42:22 knesenko: * nod * I am in channel all the time -- please ping me when you wish to work through it
15:42:30 Hi
15:42:32 orc_orc: once I finish with the job, I'll ping you and we will try to implement it
15:42:36 knesenko: inotify can do this for you
15:42:55 not sure if anyone is still here who I was talking to earlier regarding importing an image from a crashed ovirt
15:43:00 #action knesenko ping orc_orc once I finish with the upgrade_params job
15:43:18 orc_orc, you mean inotify on the server and triggering the job from the resources.ovirt.org side
15:43:35 eedri: yes -- I do that a lot as it is less loady than a poll loop
15:44:04 select is almost always a better solution than poll
15:44:17 orc_orc, agree
15:44:41 orc_orc, jenkins also has a way of triggering jobs from git/gerrit w/o polling, via hooks
15:45:24 eedri: makes sense: 'hook' is the nomenclature for a 'push' type trigger
15:46:32 orc_orc, yea.
15:46:34 knesenko, ok, let's continue
15:47:44 eedri: anything else on jenkins ?
15:48:27 #topic Other issues
15:48:37 let's review some tickets ?
15:49:42 sure!
15:49:50 knesenko, +1
15:50:08 https://fedorahosted.org/ovirt/report/1
15:52:01 dcaroest: can you take a look at this one - https://fedorahosted.org/ovirt/ticket/93
15:52:01 ?
15:52:26 eedri: still relevant ? - https://fedorahosted.org/ovirt/ticket/84
15:52:33 will it be possible to host more than 2 datacenters with local storage on 3.3.1? 3.3 doesn't let you attach more than two hosts if you are using local storage DCs.
15:52:35 knesenko: nope, I was not aware of that
15:52:40 * eedri looking
15:52:53 obasan: please, this one is for you - https://fedorahosted.org/ovirt/ticket/83
15:53:02 knesenko, i think it's relevant
15:53:19 knesenko, since mvn doesn't ship with centos afaik, unless someone knows otherwise
15:53:28 knesenko, at least it's not shipped with rhel
15:53:35 eedri: ok ... please assign it to someone
15:53:41 eedri: also this one - https://fedorahosted.org/ovirt/ticket/88
15:53:50 eedri: seems like we are missing an ubuntu slave ...
15:54:11 eedri: and I think it makes sense to install it only after the rackspace migration
15:54:14 knesenko, yes - maybe we can reinstall f19-vm03
15:54:26 knesenko, since it's already been down for some time, looks like we can handle the f19 load without it
15:54:43 eedri: +1 , please comment in the ticket
15:54:55 dcaroest: https://fedorahosted.org/ovirt/ticket/92 - how hard is that ?
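[editor's note: a minimal sketch of the repo-watch trigger orc_orc and eedri discussed above -- the "pull" variant that watches the repodata directory timestamp and fires a Jenkins remote-trigger URL when createrepo rewrites it. The path and Jenkins URL are hypothetical placeholders, not the actual resources.ovirt.org setup; a "push" variant would replace the poll loop with inotify:]

```python
import os
import time
import urllib.request

# Hypothetical values -- not the real oVirt paths or job URLs.
REPODATA = "/srv/resources/ovirt/repodata"
TRIGGER_URL = "https://jenkins.example.org/job/upgrade_params/build?token=SECRET"


def repo_changed(path, last_mtime):
    """Return (changed, current_mtime).

    createrepo rewrites the repodata/ directory, so a newer directory
    mtime indicates a new transaction set in the yum repo.
    """
    mtime = os.stat(path).st_mtime
    return mtime > last_mtime, mtime


def watch(path=REPODATA, trigger_url=TRIGGER_URL, interval=60):
    """Poll loop: check the directory mtime every `interval` seconds and
    hit the Jenkins trigger URL when it advances."""
    last = os.stat(path).st_mtime
    while True:
        time.sleep(interval)
        changed, last = repo_changed(path, last)
        if changed:
            urllib.request.urlopen(trigger_url)
```

As noted in the discussion, inotify (a push mechanism) is lighter than this poll loop; the polling version is shown only because it has no dependencies beyond the standard library.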
15:54:58 as to #84 maven -- I do not see it in EPEL either
15:55:22 orc_orc, what i proposed is to add a link from the maven dir jenkins installs to PATH
15:55:27 orc_orc, but it's quite ugly
15:55:55 orc_orc, i'm trying to think of another option - maybe to wget the mvn tar.gz and deploy it
15:56:02 ick
15:56:09 orc_orc, or build mvn ourselves
15:56:12 unversioned and un-reproducible
15:56:19 knesenko: it can be tricky, for security issues, are the slaves 100% isolated?
15:56:39 orc_orc, the problem arises when you need mvn from a 'shell cmd' job
15:56:40 dcaroest: yes ... NAT
15:56:48 orc_orc, and not via standard maven jobs
15:57:06 dcaroest: we can use the key as a parameter in foreman
15:57:22 knesenko: the problem is that if we use the public puppet repo for 'internal' hosts, those hosts should be isolated from the 'internal' networks
15:57:44 dcaroest: they are isolated
15:57:55 dcaroest: they are in the guest network ...
15:58:09 obasan: did you move the new slave you added to the guest network ?
15:58:20 knesenko: then it should be easy :)
15:58:36 dcaroest: can you take care of that ?
15:58:42 knesenko, I did not add any slaves recently
15:58:47 orc_orc: want to take something ? :)
15:58:52 reassigned
15:58:53 obasan: ok
15:59:19 orc_orc: btw, if you want permissions to machines etc., just send email to infra and we will vote
15:59:20 knesenko: I wanted to get my setup working first so I could start documenting puppet deployment recipes
15:59:28 knesenko: will do
15:59:29 orc_orc: +1
15:59:33 we have a response for ovirt03, they will investigate
15:59:45 but I really want much more to be packaged and in Fedora, and license verified
15:59:46 dcaroest: great :)
16:00:16 (my personal goal set centers on this) ... or EPEL
16:00:34 nice, I'd like to see it in fedora!
16:00:36 orc_orc: what I really want is to build all ovirt related projects in the fedora koji
16:00:37 knesenko: which bug did you have in mind?
16:00:39 build system
16:00:56 orc_orc: but it's not related to the infra
16:00:57 knesenko: a noble goal, but it needs to be 'free'
16:01:12 orc_orc: what do you mean by saying free ?
16:01:13 and some of ovirt is not under acceptable licenses, I think
16:01:38 there was a special exception on some jar as I recall, recently mentioned
16:02:17 hmmm....that's interesting
16:02:24 eedri: do you know something about that ?
16:02:37 * orc_orc looks
16:02:41 eedri: that we have some license issues with ovirt ?
16:02:52 knesenko, i'm not familiar with any license issues under ovirt
16:03:03 knesenko, best to ask on arch/infra
16:03:04 eedri: me too
16:03:08 knesenko, dneary will know
16:03:37 orc_orc, can you send an email on it to infra if you have the info?
16:03:45 eedri: I shall
16:03:51 orc_orc, +1
16:05:19 i think knesenko got disconnected
16:05:45 ok guys we are out of time
16:06:05 orc_orc: i would like to take the pkging conversation offline
16:06:12 i am interested to hear what you have to say
16:06:15 knesenko: noted
16:06:19 anything else ?
16:06:31 thanks everyone
16:06:42 please try to work on your tasks if you have time !
16:06:44 thanks !
16:06:49 #endmeeting