14:02:31 <ewoud> #startmeeting
14:02:31 <ovirtbot> Meeting started Mon Sep 23 14:02:31 2013 UTC.  The chair is ewoud. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:02:31 <ovirtbot> Useful Commands: #action #agreed #help #info #idea #link #topic.
14:02:33 <ewoud> #chair eedri_
14:02:33 <ovirtbot> Current chairs: eedri_ ewoud
14:02:41 <ewoud> Rydekull: ping
14:02:45 * eedri_ here
14:02:46 <ewoud> dcaro: ping
14:02:51 <ewoud> obasan not here?
14:03:02 <eedri_> ewoud, he's around, will join shortly
14:03:05 <dcaro> ewoud: I'm here
14:03:26 <ewoud> #chair dcaro
14:03:26 <ovirtbot> Current chairs: dcaro eedri_ ewoud
14:03:59 <dcaro> It seems that the infra meetings page is not updated: http://www.ovirt.org/Infrastructure_team_meetings
14:04:00 <bjuanico> #chair emitor
14:04:31 <ewoud> dcaro: sounds like we should update it
14:04:38 <ewoud> eedri_: you sent some points in by mail
14:04:41 * ewoud looks at the archives
14:04:46 <eedri_> ewoud, yea i have more
14:05:08 <ewoud> http://lists.ovirt.org/pipermail/infra/2013-September/003969.html
14:05:37 <ewoud> let's keep that as agenda and then see what other points we need to discuss
14:05:50 <ewoud> #topic network functional tests
14:06:02 <ewoud> * Waiting for new job owner (toni) to provide info on how to differ from network patches
14:06:06 <ewoud> * should consider using zuul for filtering patches (http://ci.openstack.org/zuul/)
14:06:24 <ewoud> I recall having a look at zuul in the past, but then it was considered too much work
14:07:14 <dcaro> ewoud: yep, back then we didn't need a way to discriminate patches per origin/maintainer/files...
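
For reference, zuul handles this kind of filtering in its layout.yaml; a rough sketch follows, where the job name, file regex, and path are illustrative placeholders rather than real oVirt config:

    # hypothetical zuul (v2-era) layout.yaml fragment, written out for illustration
    cat >> /etc/zuul/layout.yaml <<'EOF'
    jobs:
      - name: vdsm_network_functional_tests
        # only trigger this job when a patch touches network-related files
        files:
          - '^vdsm/network/.*$'
    EOF
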
14:08:16 <ewoud> does anyone have experience with zuul?
14:09:50 <ewoud> eedri_: what would you like to discuss at this point?
14:10:37 <dcaro> we want to use it also internally, so maybe we can test it there first and see if it's worth the effort for us in ovirt
14:11:16 <ewoud> sounds good
14:11:47 <eedri> ewoud, ?
14:11:52 <dcaro> I think it's more likely to be used internally anyway
14:11:53 <eedri> ewoud, i think my irc froze
14:12:01 <eedri> ewoud, sorry
14:12:11 <eedri> ewoud, we're talking about network function tests?
14:12:15 <ewoud> eedri: yes
14:12:24 <ewoud> eedri: I was wondering what you wanted to discuss
14:12:41 <eedri> ewoud, i think that next week when vdsm moves to the stable 3.3 branch (or maybe even now), we can move it to run per gerrit patch
14:12:53 <eedri> ewoud, vdsm patch load is lower than engine, so we might be OK
14:13:06 <eedri> ewoud, anyway for 3.3, that's for sure
14:13:13 <ewoud> eedri: sounds good
14:13:24 <ewoud> dcaro suggested to try zuul first internally and then evaluate if it would be good for ovirt
14:14:24 <eedri> ewoud, that would be wise
14:14:26 <eedri> ewoud, +1
14:14:54 <ewoud> #agreed enable per patch network tests for 3.3, maybe for master as well
14:15:20 <ewoud> #agreed look at zuul after RH tried it
14:16:38 <eedri> ewoud, ok, next topic?
14:16:41 <ewoud> #topic plan migration from local storage dc to new gluster based storage with new server
14:16:48 <ewoud> * install fedora 19 on new ovirt03.redhat.com server
14:16:52 <ewoud> * migrate jenkins slaves from local storage to gluster
14:17:02 <ewoud> I assume you meant ovirt03.ovirt.org instead of redhat.com
14:17:10 <ewoud> or rackspace03?
14:17:40 <eedri> ewoud, not sure, i heard it's redhat.com, but we need to check.
14:17:57 <eedri> ewoud, of course we'll need new dns name for rackspace03.ovirt.org
14:18:06 <eedri> ewoud, all info in on rackspace ticket system
14:18:33 <ewoud> eedri: but I can't see that afaik
14:19:17 <eedri> ewoud, i don't remember who has access to the ticket system; anyway, once we reinstall it like the other servers, everyone will have access
14:19:28 <eedri> ewoud, the same way as existing rackspace servers
14:19:46 <eedri> dcaro, can you open a ticket to rackspace to install f19 on it?
14:19:50 <eedri> dcaro, or did we do that last time?
* eedri remembers some problems with console access
14:20:19 <dcaro> eedri: yep, they did it, as nobody was able to connect using the console
14:20:29 <dcaro> (and get a working keyboard)
14:21:02 <eedri> ewoud, dcaro ok, so we need to open a ticket first and request them to install f19
14:21:14 <eedri> ewoud, so we'll be able to install nested vms on it
14:22:22 <dcaro> #action dcaro to open a ticket to install f19 on ovirt03.redhat.com
14:22:35 <eedri> dcaro, +1
14:22:35 <dcaro> mm, I think that ewoud is the one that has to set the action xd
14:22:49 <eedri> dcaro, i think anyone that has chair can
14:23:10 <eedri> dcaro, what about dns entries?
14:23:24 <eedri> dcaro, is it done via rackspace or internally in redhat
14:23:25 <ewoud> sorry, had an interrupt
14:23:32 <dcaro> ovirtbot ignores me then :,(
14:23:32 <ovirtbot> dcaro: Error: "ignores" is not a valid command.
14:23:37 <ewoud> #chair obasan
14:23:37 <ovirtbot> Current chairs: dcaro eedri_ ewoud obasan
14:23:51 <ewoud> dcaro: it doesn't ignore you, just doesn't reply when you make an action item
14:24:02 <dcaro> ok, good then :)
14:24:37 <eedri> dcaro, can you get the ip of the host and open a ticket for a dns entry for rackspace01.ovirt.org?
14:24:37 <ewoud> eedri: RH manages ovirt.org, not sure about reverse dns
14:24:49 <eedri> dcaro, i think obasan did that yesterday for artifactory
14:25:02 <obasan> eedri, yes. it's done
14:25:02 <dcaro> eedri: okok
14:25:14 <eedri> dcaro, so you can give obasan the ip and he can open a ticket
14:26:31 <dcaro> ok
14:28:05 <eedri> ewoud, ok, i think we can discuss the cluster migration once we have that 3rd host
14:28:35 <ewoud> eedri: sounds good
14:29:24 <ewoud> eedri: I think we can also reinstall hosts quite fast using foreman
14:29:44 <ewoud> #agreed we look at the migration after the third host at rackspace has been installed
14:30:05 <eedri> ewoud, this means we need to make sure the dhcp next-server is foreman for all hosts
14:30:13 <eedri> ewoud, not sure rackspace will go for that, no?
14:30:15 <ewoud> #action dcaro ensure rackspace03.ovirt.org points to the new host
14:30:44 <ewoud> eedri: maybe we can use ovirt templates, maybe a DHCP server
14:30:54 <eedri> ewoud, ok
14:31:13 <ewoud> you can select a template since in ovirt you always have a template, even if it's the blank one
14:31:22 <eedri> ewoud, yea, so not use TFTP
14:31:27 <eedri> ewoud, hmm..
14:31:30 <eedri> ewoud, actually you can't
14:31:39 <eedri> ewoud, use template to reinstall bare-metal?
14:31:54 <ewoud> eedri: valid point
14:31:57 <eedri> ewoud, :)
14:32:51 <ewoud> shall I ensure we can DHCP there as well?
14:33:19 <eedri> ewoud, i think we must if we want to reinstall bare-metal
14:33:24 <dcaro> hehehe, too many virtualization levels xd
14:33:33 <eedri> ewoud, might be tricky with all the firewall rules
14:33:44 <eedri> ewoud, might need to enable tftp access from foreman to them
14:34:20 <ewoud> #action ewoud ensure we can kickstart using dhcp at rackspace
14:34:38 <dcaro> eedri: ewoud maybe the dhcp can be managed by rackspace (they set up the next-server option on their dhcp servers), but tftp must be on foreman
14:34:47 <dcaro> (foreman-proxy actually)
14:34:55 <ewoud> dcaro: you still need to make the DHCP reservations
14:35:25 <ewoud> at $employer we deploy many foreman smartproxies
14:35:28 <ewoud> so it's not that hard
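
What we would ask rackspace for is roughly an ISC dhcpd host entry whose next-server points at the foreman-proxy serving TFTP; a sketch, with the MAC and all addresses as placeholders:

    # hypothetical dhcpd host entry; MAC and IPs are placeholders
    cat >> /etc/dhcp/dhcpd.conf <<'EOF'
    host rackspace03 {
      hardware ethernet 00:11:22:33:44:55;
      fixed-address 192.0.2.13;
      next-server 192.0.2.2;       # foreman-proxy running TFTP
      filename "pxelinux.0";
    }
    EOF
    service dhcpd restart
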
14:35:45 <ewoud> next topic?
14:35:51 <eedri> ewoud, yes
14:35:58 <ewoud> #topic ovirt tools (iso uploader/image uploader) jobs
14:36:03 <ewoud> * sandro to request power user permissions
14:36:06 <ewoud> * infra to decide how to implement nfs shares
14:36:15 <eedri> indeed
14:36:17 <eedri> sbonazzo, ping
14:36:29 <ewoud> doesn't power user go through the normal flow?
14:36:39 <eedri> sbonazzo, we're talking about adding jobs on upstream to test engine tools
14:36:58 <eedri> sbonazzo, please send a request for power user permissions in order to gain access to jenkins
14:37:15 <eedri> ewoud, now we need to decide how we implement nfs share for jobs
14:37:28 <ewoud> eedri: why is nfs needed again?
14:37:30 <eedri> ewoud, and what the security implications of that are (implement via puppet?)
14:37:39 <eedri> ewoud, for testing iso-uploader
14:37:51 <eedri> ewoud, or image-uploader
14:37:58 <eedri> ewoud, the images needs to be mounted somewhere
14:38:04 <ewoud> eedri: can we just run an NFS server on localhost only?
14:38:32 <eedri> ewoud, and what will you do if the job runs on different vms
14:38:41 <eedri> ewoud, you'll waste GBs of space on each vm to store it?
14:38:55 <ewoud> eedri: if all vms have a small NFS server, it should be fine security wise
14:38:56 <eedri> ewoud, and we'll need to copy it manually each time we reinstall a vm
14:39:00 <dcaro> eedri: the images were supposed to be really small, right sbonazzo?
14:39:16 <eedri> dcaro, i think those are windows/linux images
14:39:22 <eedri> dcaro, so 500MB?
14:39:50 <ewoud> eedri: any reason it should test with full images?
14:39:55 <dcaro> last time I talked with sandro he told me that they were empty files (0s), less than 10 MB
14:40:00 <dcaro> iirc
14:40:10 <eedri> dcaro, well in that case, maybe it's better to use local nfs
14:40:25 <eedri> dcaro, ewoud and we can add a puppet class to install an nfs server on each jenkins slave
14:41:06 <dcaro> we can just set up a share with access only from localhost for the jenkins jobs, and let them create folders inside for the images
14:41:07 <ewoud> eedri: sounds good, then only allow localhost, and ensure the owner is uid 36:36
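
A minimal sketch of such a localhost-only share on an el6 slave, assuming the uid 36:36 (vdsm:kvm) requirement holds; the export path is illustrative:

    # install and export a directory to localhost only, owned by vdsm:kvm
    yum install -y nfs-utils
    mkdir -p /exports/jenkins-iso
    chown 36:36 /exports/jenkins-iso
    echo '/exports/jenkins-iso 127.0.0.1(rw,sync)' >> /etc/exports
    service rpcbind start && service nfs start
    exportfs -ra
    # jobs would then mount it locally, e.g.:
    # mount -t nfs 127.0.0.1:/exports/jenkins-iso /mnt/iso
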
14:41:57 <eedri> ewoud, ok
14:42:21 <ewoud> eedri: since sbonazzo isn't responding, mind sending a mail asking if that's a good solution?
14:42:42 <dcaro> but we have to make sure that's all that is needed (I think there were some restrictions on the uid of the folders, not sure); someone should talk with sandro and get all the requirements
14:43:01 <eedri> ewoud, yes
14:43:18 <ewoud> dcaro: IIRC it must be uid 36:36
14:43:19 <eedri> ewoud, sandro says he's joining
14:43:23 <eedri> ewoud, yes
14:43:28 <eedri> ewoud, for vdsm:kvm
14:43:38 <dcaro> okok
14:44:01 <ewoud> #action eedri verify with sbonazzo if a localhost nfs server with uid 36:36 is sufficient
14:44:10 <ewoud> next topic?
14:44:33 <eedri> yes
14:44:34 <ewoud> #topic new jenkins LTS version available
14:44:37 <ewoud> * multiple bugs fixed, should upgrade ASAP
14:44:43 <ewoud> any downsides?
14:45:09 <eedri> http://jenkins-ci.org/changelog-stable
14:45:12 <orc_orc> ewoud: what is the fallback plan if there turn out to be problems?
14:45:25 <ewoud> orc_orc: with jenkins upgrade?
14:45:26 <eedri> #info jenkins LTS has update pending, change log- http://jenkins-ci.org/changelog-stable
14:45:31 <orc_orc> ewoud: yes
14:45:45 <eedri> ewoud, orc_orc it is possible to run yum downgrade
14:45:50 <eedri> ewoud, i've done it in the past
14:46:06 <eedri> ewoud, orc_orc but this is LTS, not the latest version, so the odds of it breaking completely are low
14:46:13 * sbonazzo here
14:46:14 <orc_orc> true -- I was thinking more of a pre-update level-zero backup, and just restoring from that backup
14:46:41 <eedri> ewoud, orc_orc well.. there isn't much to backup except the jenkins.jar file
14:46:52 * ewoud also has good experiences upgrading jenkins
14:47:03 <ewoud> generally very stable
14:47:21 <ewoud> sometimes plugins break, but jenkins itself is generally fine
14:47:23 <eedri> the issues i had were with specific plugins, which we can disable if they are making jenkins unstable
14:47:39 <eedri> and could be pinpointed in jenkins.log
14:47:44 <ewoud> a while back the git plugin broke, but downgrading worked
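
The upgrade itself is small; a sketch of the planned steps, with the yum downgrade fallback eedri mentioned:

    # upgrade jenkins to the new LTS rpm, then watch the log for plugin errors
    service jenkins stop
    yum update -y jenkins
    service jenkins start
    tail -f /var/log/jenkins/jenkins.log
    # fallback if the new version misbehaves:
    # service jenkins stop && yum downgrade jenkins && service jenkins start
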
14:48:18 <eedri> ewoud, dcaro maybe schedule the upgrade for thursday? while the tlv site is on holiday
14:48:33 <ewoud> eedri: fine by me
14:48:34 <dcaro> ewoud: eedri I think we should do a backup of the jobs configuration anyhow, just in case
14:48:44 <eedri> dcaro, don't we have a jenkins job for it?
14:49:00 <ewoud> I thought we did that continuously, but it never hurts
14:49:01 <eedri> http://jenkins.ovirt.org/job/backup_jenkins_org/
14:49:09 <eedri> ewoud, yea, backup never hurts
14:49:26 * eedri recalls it backs up to alterway02
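
Roughly what a job-config backup amounts to; the destination path on alterway02 is a guess, not the actual job's target:

    # copy only the jobs' config.xml files off-box, skipping build artifacts
    rsync -az --include='*/' --include='config.xml' --exclude='*' \
      /var/lib/jenkins/jobs/ backup@alterway02.ovirt.org:/srv/backup/jenkins/jobs/
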
14:49:36 <ewoud> eedri: will you do the upgrade?
14:49:39 <eedri> we should probably migrate those to rackspace as well once we have that set up
14:49:46 <eedri> ewoud, i won't be around, unfortunately
14:49:50 <ewoud> eedri: ok
14:49:58 <dcaro> eedri: yep, it's there and running
14:49:59 <eedri> ewoud, if you want me to do it, i can do it next week perhaps
14:50:19 <eedri> obasan, can you do it?
14:50:30 <eedri> obasan, maybe on wed noon even
14:50:45 <eedri> mburns, ping
14:51:01 <ewoud> it's just running a backup + yum update, I can do it
14:51:24 <eedri> ok
14:51:29 <dcaro> I'll be there if you need help
14:51:31 <mburns> eedri: pong
14:51:44 <eedri> ewoud, we'll need to review the plugin updates later
14:51:53 <ewoud> eedri: yes, just jenkins
14:51:57 <eedri> mburns, can we move all node jobs to use centos slaves?
14:52:06 <eedri> mburns, i think the last rhel slave from amazon got offline
14:52:07 <ewoud> #action ewoud update jenkins to latest LTS on thursday
14:52:24 <mburns> eedri: yes, we can move to centos
14:52:33 <ewoud> eedri: I think it's still reporting to puppet so it's not shut down yet
14:52:44 <eedri> ewoud, strange..
14:52:55 <eedri> ewoud, we should try to login to it i guess
14:52:59 <eedri> ewoud, see why jenkins can't connect
14:53:15 <eedri> ewoud, but in general i think we should move away from amazon vms to rackspace
14:53:18 <ewoud> where do I announce? arch@, vdsm@, engine@?
14:53:34 <eedri> ewoud, usually on infra + engine devel
14:53:38 <ewoud> eedri: never mind, it's a F18 slave
14:53:48 <ewoud> eedri: https://foreman.ovirt.org/hosts/ip-10-82-253-208.ec2.internal
14:53:49 <eedri> ewoud, oh, yea, that's for testing
14:54:01 <ewoud> I think the rhel slaves were never added at all
14:54:01 <eedri> ewoud, it's online just because of the db_report_job
14:54:08 <eedri> ewoud, for puppet?
14:54:13 <ewoud> eedri: yes
14:54:52 <ewoud> so, anything else on jenkins?
14:54:59 <eedri> ewoud, yes
14:55:02 <mburns> eedri: done
14:55:22 <eedri> mburns, if stuff fails - we might need to update puppet classes to install rpms on those
14:55:31 <mburns> eedri: ack
14:55:46 <eedri> ewoud, all gerrit jobs (unit tests/findbugs) were moved to monitor stable branch ovirt-engine-3.3
14:55:49 <mburns> eedri: as long as jenkins user has sudo, it shouldn't fail
14:55:58 <mburns> our job is smart enough to setup what it needs
14:56:10 <eedri> ewoud, we should now have capacity to run them since the patch load is lower (hopefully :)
14:56:19 <eedri> mburns, nothing like magic jenkins jobs! :)
14:56:29 <ewoud> eedri: so we don't monitor master anymore?
14:56:40 <eedri> ewoud, i kept one job - checkstyle
14:56:47 <eedri> ewoud, i'm talking about the per patch jobs, right?
14:56:51 <eedri> ewoud, not the normal jobs
14:56:54 <ewoud> eedri: ah ok
14:57:08 <eedri> ewoud, also renamed them
14:57:17 <ewoud> #info jenkins per patch jobs only for 3.3 stable branch
14:57:26 <eedri> ewoud, we should decide if & which jobs we want to clone for the 3.3 branch as well
14:57:55 <ewoud> if we had sufficient capacity, I'd test it all
14:58:08 <eedri> ewoud, dcaro can a single job run on both branches?
14:58:14 <eedri> or it needs to be matrix job
14:58:22 <ewoud> eedri: yes, assuming the build instructions are the same
14:58:40 <eedri> ewoud, how do you do it
14:58:42 <ewoud> at $employer, I've done the same for gerrit + jenkins
14:58:46 * ewoud looks
14:58:49 <eedri> ewoud, use '**' in branch?
14:59:08 <eedri> ewoud, for gerrit trigger that's easy
14:59:18 <eedri> ewoud, the plugin gives you option to add new branch to monitor
14:59:35 <ewoud> eedri: yes, '**' for all branches
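
For the record, roughly how that looks in a job's config.xml once saved; element names recalled from the gerrit-trigger plugin and worth double-checking against an existing job:

    # inspect the branch stanza the gerrit trigger plugin writes; SOME_JOB is a placeholder
    grep -A3 '<branches>' /var/lib/jenkins/jobs/SOME_JOB/config.xml
    # expected output, roughly:
    #   <branch>
    #     <compareType>ANT</compareType>
    #     <pattern>**</pattern>
    #   </branch>
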
15:00:55 <ewoud> so, anything else on jenkins?
15:00:59 <eedri> ewoud, and it actually runs the code twice?
15:01:01 <eedri> ewoud, yes
15:01:24 <ewoud> eedri: it just fires a build for each change it detects
15:01:51 <eedri> ewoud, as i mentioned earlier the last rhel64 vm is currently offline, so we need to monitor and see if all jobs are working on centos instead
15:02:04 <eedri> ewoud, ok, so i don't see a reason not to change all jobs to monitor '**'
15:02:50 <ewoud> eedri: let's try it
15:03:06 <ewoud> #agreed change all jobs to monitor '**' instead of a single branch
15:03:19 <mburns> ewoud: eedri:  *all* jobs?
15:03:57 <eedri> mburns, well.. all ovirt-engine jobs
15:04:02 <mburns> eedri: ok
15:04:06 <ewoud> #undo
15:04:06 <ovirtbot> Removing item from minutes: <MeetBot.items.Agreed object at 0x9abac4c>
15:04:14 <ewoud> #agreed change all ovirt-engine jobs to monitor '**' instead of a single branch
15:05:03 <ewoud> anything else on jenkins?
15:05:08 <eedri> ewoud, next item is artifactory server
15:05:26 <ewoud> we ran out of time, but let's do it quickly
15:05:31 <ewoud> #topic artifactory server
15:05:50 <eedri> ewoud, is the vm stable enough to install on, or do we still have network issues?
15:06:00 <ewoud> #info installed a basic centos 6 on artifactory.ovirt.org, but having network issues
15:06:16 <ewoud> eedri: still network issues
15:06:38 <eedri> ewoud, i saw your email to infra, maybe it's best to add the alterway contact there
15:06:57 <ewoud> eedri: yes, I'll look through the archives to see who it exactly was
15:07:52 <ewoud> #action ewoud forward mail about network to our alterway contact
15:08:27 <ewoud> I should note that if this works well, we can use exactly the same approach to migrate resources.ovirt.org from linode01
15:08:59 <ewoud> anything else on the agenda?
15:09:29 <eedri> there are puppet and infra items too, but maybe postpone them to the next meeting? if people need to go
15:10:09 * ewoud is taking mondays off from work till the end of the year
15:10:17 <ewoud> so I should have some more time again to set that up
15:10:18 <eedri> :D
15:10:26 <eedri> ewoud, nice!
15:11:00 <ewoud> still had 25 vacation days left
15:11:16 <eedri> ewoud, wow.. you should take a long vacation
15:11:22 <ewoud> anyway, let's finish up the meeting
* eedri thinks about diving in palau
15:11:25 <eedri> ewoud, ok
15:11:41 <ewoud> going once
15:11:46 <ewoud> going twice
15:11:48 <ewoud> thanks all
15:11:49 <ewoud> #endmeeting