14:02:16 <knesenko> #startmeeting oVirt Infra
14:02:16 <ovirtbot> Meeting started Mon Oct 21 14:02:16 2013 UTC.  The chair is knesenko. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:02:16 <ovirtbot> Useful Commands: #action #agreed #help #info #idea #link #topic.
14:02:27 <knesenko> #chair obasan orc_orc dcaro_ eedri_
14:02:27 <ovirtbot> Current chairs: dcaro_ eedri_ knesenko obasan orc_orc
14:02:34 <Rydekull> o7
14:03:14 <knesenko> #chair Rydekull
14:03:14 <ovirtbot> Current chairs: Rydekull dcaro_ eedri_ knesenko obasan orc_orc
14:03:27 <knesenko> #topic Hosting
14:03:27 <ewoud> http://www.ovirt.org/Infrastructure_team_meetings should be updated more often btw
14:03:43 <knesenko> ewoud: agree
14:03:51 <knesenko> so lets start
14:03:55 <knesenko> small update
14:04:03 <knesenko> regarding the Rackspace servers
14:04:19 <knesenko> Itamar replied to my migration email ... did you see it ?
14:04:35 <knesenko> there are some issues with gluster storage DCs ... migration, libvirt, etc.
14:05:15 <clarkee> are there :/
14:05:15 <ewoud> how much of a blocker is it going to be for us?
14:05:17 * clarkee avoids it
14:05:23 <knesenko> so currently we can run slaves without a migration feature, right ?
14:05:30 <knesenko> no migration
14:05:33 <knesenko> +-
14:05:39 <knesenko> for now
14:06:00 <ewoud> it's only live migration, which we don't have now either
14:06:00 <knesenko> but you will be able to poweroff a VM and restart it on another host
14:06:05 <knesenko> ugly, but will work
14:06:11 <knesenko> right
14:06:16 <knesenko> and snapshots as well
14:06:19 <orc_orc> as I have not attempted a migration, I do not hold an opinion
14:06:25 <knesenko> we don't use snapshots for jenkins slaves
14:07:00 <knesenko> that's all I think
14:07:11 <ewoud> so what's the planning now?
14:07:21 <ewoud> in terms of time
14:07:43 <knesenko> somehow rackspace03 was installed in a different LAN
14:08:02 <knesenko> so I am waiting for the Red Hat guys to approve moving rackspace03 to the 01/02 network
14:08:10 <knesenko> so it will work faster for us
14:08:24 <knesenko> after that I will continue on a migration
14:08:45 <ewoud> ok
14:09:02 <knesenko> ok
14:09:15 <knesenko> other issues on hosting ?
14:10:04 <knesenko> #chair dcaro
14:10:04 <ovirtbot> Current chairs: Rydekull dcaro dcaro_ eedri_ knesenko obasan orc_orc
14:10:10 <ewoud> don't think so
14:10:13 <orc_orc> knesenko: perhaps slow pings to jenkins that I mentioned last Friday?
14:10:45 <knesenko> dcaro: where do we run our jenkins ?
14:10:51 <ewoud> orc_orc: I haven't caught that, but I should mention that our (very limited) icinga instance also runs at alterway so that might explain it
14:10:56 <knesenko> dcaro: is it a VM or a physical host ?
14:11:13 <ewoud> jenkins is a physical machine now
14:11:14 <orc_orc> ewoud: http://gallery.herrold.com/nagios-jenkins.ovirt.org-slow.png
14:11:27 <orc_orc> my nagios pings are consistently taking over 100 ms
14:11:39 <ewoud> orc_orc: is that all the time or just once?
14:11:45 <apuimedo> knesenko: I uploaded a first version of the patch for the net cleanup
14:11:53 <apuimedo> i'll test it later
14:11:53 <orc_orc> pretty regularly
14:11:57 <knesenko> apuimedo: put on review please
14:12:02 <apuimedo> done
14:12:02 <ewoud> orc_orc: and where is your nagios physically located because I get a consistent 18 ms here
14:12:07 * eedri here
14:12:08 <knesenko> apuimedo: thanks !
14:12:12 <knesenko> #chair eedri
14:12:12 <ovirtbot> Current chairs: Rydekull dcaro dcaro_ eedri eedri_ knesenko obasan orc_orc
14:12:21 <eedri> knesenko, topic?
14:12:25 <knesenko> hosting
14:12:27 <orc_orc> in a multi-homed DC used by local govt, between Chicago and Atlanta
14:12:48 <ewoud> orc_orc: jenkins is located in .fr so just the distance can explain that latency
14:13:09 <orc_orc> ewoud: I will dial up the acceptable latency
14:13:24 <Rydekull> That's about the expected latency for that distance
14:13:42 <apuimedo> knesenko: did you fix the python-nose yum issue?
14:13:51 <knesenko> apuimedo: yes ... should be ok now
14:14:01 <knesenko> apuimedo: please let me know if there are some issues
14:14:06 <knesenko> ok guys next ?
14:14:07 <eedri> apuimedo, we should ensure the latest python-nose with puppet then
14:14:16 <eedri> apuimedo, since it will fail on other slaves
14:14:19 <knesenko> eedri: latest comes from pip
14:14:36 <eedri> knesenko, I know, I assume we can use pip install --upgrade?
14:14:41 <apuimedo> eedri: knesenko: well, on el6 what I usually do is to have python-nose installed from yum
14:14:43 <knesenko> #topic Foreman puppet
14:14:44 <eedri> knesenko, with puppet exec {} or similar
14:14:46 <apuimedo> and overwrite it with pip
14:14:55 <apuimedo> (so I get the upstream nose version)
14:15:06 <ewoud> eedri: I think you can use provider => pip on package
14:15:09 <knesenko> I think ewoud and dcaro can help us figure out how to upgrade nose with pip via puppet
14:15:19 <apuimedo> probably
14:15:24 <knesenko> ewoud: dcaro news ?
14:15:39 <apuimedo> the alternative would be to rebuild upstream src rpm for el6, I guess
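A minimal sketch of the provider => pip idea ewoud mentions above (an alternative to eedri's exec {} suggestion); the python-pip package name and resource titles are assumptions for illustration, not taken from the actual oVirt manifests:

    # Make sure pip itself is present first (EPEL package name assumed).
    package { 'python-pip':
      ensure => installed,
    }

    # Keep nose at the newest PyPI release, so EL6 slaves get the upstream
    # version instead of the older python-nose shipped via yum.
    package { 'nose':
      ensure   => latest,
      provider => pip,
      require  => Package['python-pip'],
    }

Using a package resource instead of an exec keeps the run idempotent: puppet only invokes pip when the installed version is older than the latest release it can see.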
14:16:02 <ewoud> knesenko: orc_orc kindly reminded us that our documentation is lacking, so I hope to fix that
14:16:32 <knesenko> ewoud: which documentation ?
14:16:34 <ewoud> knesenko: other than that we should get some testing infrastructure going because again a patch introduces a duplicate package
14:16:37 <orc_orc> ewoud: I am working thru documentation in the wiki, so placing it there would cause me to 'tweak' it ;)
14:16:38 <ewoud> knesenko: exactly ;)
14:16:45 <knesenko> ewoud: haha
14:16:47 <knesenko> ewoud: +1
14:17:11 <knesenko> btw I updated some Infra info
14:17:12 <knesenko> http://www.ovirt.org/Community
14:17:19 <knesenko> added link to the infra page
14:17:30 <ewoud> orc_orc: if you've written something, please ping me on IRC or per mail and I'll gladly review it
14:17:36 <knesenko> also added some new content here - http://www.ovirt.org/Infrastructure
14:17:41 <orc_orc> ewoud: will do
14:18:05 <knesenko> ewoud: added the mapping by your request - http://www.ovirt.org/Infrastructure_oVirt_Instances
14:18:22 <knesenko> ewoud: need to map alterway servers as well
14:18:33 <knesenko> ewoud: dcaro what about r10k ?
14:19:05 <ewoud> knesenko: about the instances, I'd prefer it if we could somehow get that live from foreman eventually
14:19:45 <knesenko> ewoud: hm ...
14:19:46 <ewoud> knesenko: r10k could use a review, and a finishing touch, but mostly a review
14:19:52 <dcaro> sorry, I'm back
14:20:30 <eedri> ewoud, +1
14:20:39 <ewoud> knesenko: https://foreman.ovirt.org/hosts should in theory contain all the hosts we have
14:20:40 <eedri> ewoud, creating inventory from foreman
14:21:15 <ewoud> if we upgrade to foreman 1.3, we can link back VMs to the compute resources
14:21:36 <ewoud> so you get an easy way to see if it's physical or virtual
14:21:43 <ewoud> and if virtual, also a console + power mgmt
14:22:03 <dcaro> ewoud: +1
14:22:48 <knesenko> k
14:22:52 <ewoud> but placing it all in foreman means it's less open, and people like orc_orc will have a harder time than needed
14:23:05 <ohadlevy> foreman++
14:23:24 <ewoud> so maybe we can limit access with read only accounts at first?
14:23:39 <ewoud> ohadlevy: no surprise you're a fan :)
14:23:50 <knesenko> I'd like to have something similar to https://apps.fedoraproject.org/
14:23:55 <dcaro> ohadlevy: xd
14:24:02 <knesenko> I have a ticket on it ... and it's 99% ready
14:24:09 <orc_orc> I will clone any needed infra backend -- simply having a separate private puppet repo with keying is enough for me to be able to replicate the rest
14:24:31 <dcaro> ewoud: maybe we can use the api to generate a little html page with the data
14:24:38 <ewoud> dcaro: I was thinking the same thing
14:24:40 <orc_orc> ... or puppet pulling keying from a private git instance ...
14:25:26 <ewoud> orc_orc: since not all is in puppet yet, it may not be complete
14:25:32 * ewoud will brb
14:25:50 <orc_orc> ewoud: * nod *
14:27:07 <knesenko> dcaro: can you review r10k patch ?
14:27:21 <eedri> knesenko, we also have other puppet related tasks on trac
14:27:22 <ewoud> back
14:27:39 <ewoud> dcaro: http://gerrit.ovirt.org/19141 that is
14:28:03 <ewoud> orc_orc: btw, currently we're lacking in our monitoring so any help to improve that is welcome
14:28:42 <orc_orc> ewoud: I can expose my nagios if wanted ... presently I just have it emailing me
14:28:57 <orc_orc> I run this for the LSB effort anyways
14:29:49 <knesenko> obasan: we have monitoring.ovirt.org right ?
14:29:51 <ewoud> orc_orc: could be helpful to get us started on monitoring.ovirt.org
14:29:59 <obasan> knesenko, indeed.
14:30:02 <ewoud> knesenko: yes, but it's only monitoring a small part of our infra
14:30:02 <orc_orc> also, in setting up the nagios, I did portmapping of the targets to see what to watch, and was somewhat surprised at the listening ports
14:30:16 <knesenko> orc_orc: sync with obasan and see how can you improve it
14:30:23 <orc_orc> knesenko: will do
14:30:25 <ewoud> I'd like to use puppet exported resources to build the nagios config
14:30:27 <obasan> orc_orc, +1
14:30:42 <knesenko> #action orc_orc sync with obasan to improve monitoring.ovirt.org
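A rough illustration of the exported-resources approach ewoud suggests above, assuming PuppetDB/storeconfigs is enabled; the target path and the generic-host template are placeholders rather than the real monitoring.ovirt.org configuration:

    # On every managed node: export a nagios_host entry describing the node.
    @@nagios_host { $::fqdn:
      ensure  => present,
      address => $::ipaddress,
      use     => 'generic-host',
      target  => '/etc/nagios/conf.d/hosts.cfg',
    }

    # On the monitoring host: collect all exported host definitions.
    Nagios_host <<| |>>

Because every node exports its own entry, hosts added through foreman would show up in nagios on the next puppet run without anyone editing the monitoring config by hand.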
14:30:56 <knesenko> something else on puppet foreman ?
14:31:32 <ewoud> I'm going to prepare a 1.3 upgrade of foreman.ovirt.org
14:31:39 <knesenko> ewoud: +1
14:31:49 <obasan> ewoud, +1
14:31:57 <ewoud> but I think I'm going to upgrade $company foreman first to see how well it goes
14:32:02 <eedri> ewoud, +1
14:32:05 <ewoud> got a bit more testing infra there
14:32:12 <eedri> ewoud, we're still running 1.1 on $company :(
14:32:26 <ewoud> eedri: the upgrade is not hard at all
14:32:43 <eedri> ewoud, we had some issues upgrading to 1.2.1 on other teams
14:32:52 <eedri> ewoud, so we're doing it carefully (side by side)
14:33:45 <ewoud> I'm also preparing a blog series on $company blog on how we're managing foreman there, so when that's ready I'll send you a link as well
14:33:49 <dcaro> knesenko: I'll try+
14:34:30 <ewoud> other than that, I don't think there's anything new on puppet/foreman
14:34:42 <eedri> ewoud, there is
14:34:51 <eedri> ewoud, bare metal Power mgmt
14:35:22 <ewoud> eedri: that's possible, but I haven't used that yet
14:35:22 <eedri> ewoud, as I understood, it's supported from the newer 1.2.1 version via the API
14:36:07 <ewoud> eedri: we could at least start by setting up ipmi for that
14:36:39 <eedri> ewoud, yeah, anyhow, we don't really need it, since most of our usage is VMs
14:36:41 <obasan> ewoud, eedri I think that one of the 1.3 features is a better upgrade path
14:38:01 <eedri> obasan, +1
14:38:20 <ewoud> so anything else?
14:38:33 <orc_orc> there was mention of running out of space on one unit
14:38:43 <orc_orc> I run this, which RHEL has dropped long ago: ftp://ftp.owlriver.com/pub/mirror/ORC/diskcheck/
14:38:50 <orc_orc> which can be tuned to email alerts
14:39:25 <knesenko> #topic Jenkins
14:39:43 <knesenko> eedri: updates ?
14:40:33 <eedri> knesenko, yes
14:40:42 <eedri> knesenko, there are some new jobs
14:40:58 <eedri> knesenko, running per patch on engine 3.2 & 3.3 - create + upgrade db
14:41:30 <eedri> knesenko, we're still facing issues with the vdsm-python-cpopen conflict with the python-cpopen pkg
14:41:41 <eedri> knesenko, maybe ybronhei or danken can elaborate on it
14:41:57 <eedri> knesenko, afaik vdsm should not build the vdsm-python-cpopen pkg anymore
14:42:39 <ybronhei> knesenko: I can
14:42:58 <ewoud> eedri: did you uninstall vdsm-python-cpopen and install python-cpopen + vdsm?
14:43:17 <ybronhei> knesenko: if you used the same slave for ovirt-3.3 and master you must remove vdsm-python-cpopen first (or python-cpopen if you switch from master to ovirt-3.3)
14:43:36 <knesenko> eedri: ^^
14:43:39 <eedri> ybronhei, the problem is that we run make rpm
14:43:48 <ybronhei> eedri: knesenko: sorry about that, this is until I update the python-cpopen spec
14:43:50 <eedri> ybronhei, and make rpm builds vdsm-python-cpopen and it shouldn't build it anymore
14:43:54 <orc_orc> ybronhei: the act of removing vdsm-python-cpopen breaks another dependency
14:44:02 <ybronhei> eedri: no.. thats not the problem
14:44:11 <ybronhei> orc_orc: what do you mean?
14:44:22 <eedri> ybronhei, or you need to update the spec file to Obsolete it
14:44:22 <orc_orc> ybronhei: I posted about it last week -- looking
14:44:37 <ybronhei> orc_orc: I recall you said it requires also remove of vdsm rpm
14:44:53 <ybronhei> orc_orc: that's alright for now ... anyhow we have issues with the upgrade :P
14:45:17 <ybronhei> eedri: I know, but it's in the python-cpopen spec file ... so it'll take a bit and it doesn't relate to the build
14:45:54 <ybronhei> eedri: the upgrade issue strongly relates to the build, so this I want to fix first
14:45:55 <eedri> ybronhei, oh, you build that pkg as well?
14:46:33 <eedri> ybronhei, ok
14:46:47 <eedri> ybronhei, so the job should avoid installing vdsm-python-cpopen now
14:46:50 <eedri> ybronhei, vdsm won't require it?
14:49:06 <ewoud> unrelated, dcaro why do you have 3 accounts in gerrit?
14:49:19 <dcaro> ewoud: I do?
14:49:59 <ewoud> dcaro: if I try to add you as reviewer, I get 3 options, 2 @redhat.com and 1 @gmail.com
14:50:16 <eedri> ewoud, he cloned himself
14:50:26 <eedri> ewoud, so he can review 3 patches in parallel
14:50:30 <dcaro> ewoud: I have registered more than one email
14:50:32 <orc_orc> heh
14:50:32 <ewoud> eedri: if it gives you more time, how do I do that?
14:50:34 <dcaro> xd
14:50:51 <eedri> ewoud, you need a clonning machine
14:50:52 <ewoud> dcaro: but I can't select your @redhat.com
14:51:27 <dcaro> ewoud: that's strange
14:51:47 <knesenko> dcaro: I sent you email about that
14:53:07 <dcaro> knesenko: really? did not read it :/ I'll take a look
14:53:21 <knesenko> dcaro: k
14:53:29 <knesenko> anything else on jenkins ?
14:53:46 <eedri> knesenko, we should verify all our jobs run on master and 3.3
14:53:57 <eedri> knesenko, so we won't miss a regression like we had on 3.3.1
14:54:05 <dcaro> knesenko: found it
14:54:09 <dcaro> (the email)
14:54:16 <eedri> knesenko, also, we plan to add new upgrade jobs from stable rpms to nightly rpms
14:54:25 <ybronhei> eedri: but the job will have to, because it depends on whether you run ovirt-3.3 or master
14:54:28 <ewoud> knesenko: I created http://gerrit.ovirt.org/20366 to fix http://gerrit.ovirt.org/20319
14:54:45 <knesenko> ewoud: saw it ... dup pkg
14:54:57 <knesenko> ewoud: thanks
14:55:09 <ewoud> and http://gerrit.ovirt.org/20367 so we can automatically verify that it at least compiles
14:56:25 <knesenko> ewoud: +1
14:56:33 <eedri> knesenko, also
14:56:40 <eedri> knesenko, obasan upgraded jenkins to latest LTS
14:56:47 <eedri> obasan, any issues with that?
14:56:55 <eedri> obasan, were any plugins updated?
14:58:30 <knesenko> eedri: seems like there are no issues
14:58:37 <knesenko> ok guys we are out of time
14:58:45 <knesenko> anything else before we finish ?
14:59:46 <knesenko> thank you all
14:59:51 <knesenko> #endmeeting