13:59:07 #startmeeting Infra Weekly
13:59:07 Meeting started Mon Sep 9 13:59:07 2013 UTC. The chair is knesenko. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:59:07 Useful Commands: #action #agreed #help #info #idea #link #topic.
13:59:08 (I mean, you have that right in the wiki)
13:59:12 hi
13:59:13 eedri: knesenko: that was my assumption, but wanted to ask to be sure. thanks
13:59:19 dneary, I'm not sure whether to be happy or sad ;)
13:59:19 #chair ewoud eedri dcaro
13:59:19 Current chairs: dcaro eedri ewoud knesenko
13:59:24 Rydekull: here?
13:59:40 #chair obasan
13:59:40 Current chairs: dcaro eedri ewoud knesenko obasan
13:59:50 itamar, you're welcome, I think :-)
13:59:55 thanks
13:59:57 dneary: hello ... want to join?
14:00:48 #topic Hosting and Issues
14:00:53 Hi all
14:00:57 knesenko: now I'm here :)
14:01:03 so let's start
14:01:17 last week dcaro installed new HDs on the rackspace servers.
14:01:23 dcaro: want to update us?
14:01:59 knesenko: I think Rydekull was looking at setting up gluster on it
14:02:17 knesenko, yes, the HDs have already been installed for two weeks, IIRC
14:02:18 knesenko: but he found out someone created a filesystem on it and ran into issues there
14:02:21 ewoud: yes, I know ... I tried to ping him ...
14:02:36 yep, it seems that the HD on rackspace 2 was 'connected' but it lacks the RAID configuration; I opened a ticket and we are going to make sure we can use it afterwards
14:02:52 dcaro: ok, thanks
14:03:08 so team, we need to decide what we should do with these HDs
14:03:20 dcaro: on which server do we have RAID?
14:03:32 knesenko: 01 has RAID configured
14:03:44 knesenko, as a chair, you mean?
14:03:47 ok, so I assume 01 will be used for backups
14:03:51 dneary: yes
14:04:10 but what about gluster?
14:04:17 which mode do we want to run there?
14:04:18 * dneary is not bothered about being a chair
14:05:55 knesenko: I'd say we create gluster on it and create a backup VM on top of that
14:06:12 knesenko: I'd rather not log in to the hypervisor for backups
14:06:52 ewoud: so you mean we need to change the setup that we are running on the rackspace servers right now
14:06:52 ewoud, +1
14:07:03 right now we have local storage there ...
14:07:14 knesenko: we can combine them
14:07:34 knesenko: leave the current jenkins slaves on local storage, create a gluster volume on the new storage
14:07:45 ewoud: it's impossible to add more than one storage domain to the local DC
14:07:46 knesenko: that's an extra storage pool we can create VMs from
14:07:50 knesenko: argh
14:08:05 ewoud: that's why we need to change the setup
14:08:08 knesenko, can't we create another DC?
14:08:17 we don't have hosts for that
14:08:52 I think that we should export the slaves we have right now
14:08:59 remove the setup ...
14:09:03 knesenko: I think it could be worth it to change it, but I have insufficient experience with gluster to know if our jenkins slaves will run on it
14:09:03 create a gluster volume
14:09:17 knesenko, where will you export them to?
14:09:17 and use this gluster volume for the engine setup
14:09:44 we'll create some tmp export domain or something
14:09:44 knesenko, and since almost all of our jenkins slaves are running on rackspace now, how much downtime of the slaves are we talking about?
14:09:59 hehe ... it depends ...
14:10:18 we need to prepare a plan for it
14:10:46 I think that the project can take 3-4 hours
14:10:47 the export domain can be NFS
14:12:05 I think that it should be pushed ASAP
14:12:12 +1 from me
14:12:18 +1 from me as well
14:12:37 i have a suggestion
14:12:44 eedri: yes, please
14:12:45 why not test it on a single host first?
14:12:53 eedri: test what?
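For reference, the gluster setup being discussed could look roughly like the sketch below. This is illustrative only — the host names (rackspace01/rackspace02), brick path, and volume name are assumptions, not taken from the actual servers:

```shell
# Sketch only -- host names, brick path and volume name are assumptions.
# Join the two hosts into a trusted pool (run once, e.g. on rackspace01):
gluster peer probe rackspace02

# Create a 2-way replicated volume on the new disks, then start it:
gluster volume create jenkins-store replica 2 \
    rackspace01:/srv/gluster/brick1 \
    rackspace02:/srv/gluster/brick1
gluster volume start jenkins-store
```

A replica-2 volume keeps a full copy of the data on each host, so losing one server does not lose the VM disks; that redundancy-vs-capacity trade-off is essentially the "which mode do we want to run there?" question above.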
14:12:55 so we'll still have a few vms running
14:13:03 adding gluster storage
14:13:11 eedri: sounds like a good plan
14:13:13 sorry for popping into the meeting - could you add the EL6 gluster-3.4.0 packages to the nightly repo? ybronhei asked for that in the vdsm meeting.
14:13:19 you can create a new DC
14:13:25 and move one host there
14:13:33 also - it might be worth asking rackspace for a 3rd host
14:13:36 eedri: possible ...
14:13:40 only if temporary
14:13:42 for migration
14:13:49 eedri: that's why we need a plan
14:14:02 dneary, can we request an additional host from rackspace?
14:14:15 eedri, it's a question of budget
14:14:15 eedri: given the trouble we had with getting running, would that be better this time?
14:14:16 dneary, it will reduce risk and downtime for migrating our DC to gluster storage
14:14:31 danken: I think that's possible, but is it for every EL6 host?
14:14:31 eedri, do you know how much it would cost?
14:14:36 dneary, even if only as temporary
14:14:42 dneary, no idea
14:15:00 #action knesenko create a plan for gluster and engine installation/migration for the rackspace servers
14:15:04 next ...
14:15:07 eedri, we can take it up with Red Hat IT; we could also take it up with the board
14:15:10 anything else on hosting?
14:15:16 danken, nightly builds are for projects built from jenkins
14:15:25 we'd need to document everything that's on the existing servers first, to show they're all over-subscribed
14:15:35 danken, since we're not building gluster, i don't think that's the place for it.
14:15:55 danken, users should have another repo enabled with those (we can add it to stable/testing)
14:16:11 #topic Foreman and Puppet
14:16:18 ewoud: dcaro any news here?
14:16:26 dneary, yes, any specific contact?
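The "another repo" suggestion for the gluster packages would typically be a disabled-by-default yum repo file that users opt into. A sketch, with a deliberately fake placeholder baseurl (the real location was not given here):

```shell
# Hypothetical sketch: ship a repo file that stays disabled unless the
# user enables it (e.g. yum --enablerepo=ovirt-gluster-el6 install ...).
# The baseurl below is a placeholder, not a real repo location.
cat > /etc/yum.repos.d/ovirt-gluster.repo <<'EOF'
[ovirt-gluster-el6]
name=GlusterFS 3.4.0 packages for EL6 (example)
baseurl=http://resources.example.org/gluster/3.4.0/el6/$basearch/
enabled=0
gpgcheck=0
EOF
```

Keeping `enabled=0` matches the point made above: users who want these packages explicitly turn the repo on, instead of the nightly repo shipping them to everyone.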
14:16:40 eedri, I've been dealing with Jim Strong in the past
14:16:43 ewoud: dcaro any chance we merge this - http://gerrit.ovirt.org/#/c/16907/ :) :)
14:16:45 we should CC Itamar too
14:16:46 dneary, same here
14:16:58 knesenko: not on my side, I played a little with r10k, but haven't finished anything yet
14:17:06 knesenko: same here
14:17:20 so let's merge the patch ...
14:17:21 #action eedri to query on requesting an additional server on rackspace for migration to a gluster DC
14:17:22 :) ?
14:17:28 eedri: +1
14:17:47 eedri: update me with the answer ... or cc me on the email ...
14:17:47 we could also ask on the board list
14:18:29 mburns: yes, I looked into that but I was not able to figure it out
14:18:43 mburns: some sort of screencast?
14:18:55 #topic Jenkins
14:18:57 eedri: frankly, I find this convincing. If you work with nightlies, you're probably crazy enough to download gluster yourself.
14:19:01 eedri: thanks.
14:19:09 eedri: dcaro obasan news here?
14:19:17 Yamakasi: basically, yes
14:19:34 ewoud: please merge the ntp patch ... or abandon it :)
14:19:36 danken: I am wondering, since we use the EL6 slaves for all versions, if it won't conflict
14:19:47 knesenko, we had an open issue on migrating the network functional tests to per-patch
14:19:53 knesenko: will do after the meeting
14:19:58 knesenko: we have a new job for ovirt-log-collector, it just checks pep8
14:20:28 dcaro: yes, saw it today :) and it works :)
14:20:34 danken, anyone picking up on that? i think giseppua isn't available anymore?
14:21:50 eedri: sorry for not being attentive - I need some context.
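A per-project pep8 job like the new ovirt-log-collector one usually amounts to a one-line shell build step in Jenkins. A minimal sketch, assuming the `pep8` tool is installed on the slave and the job runs from a checkout of the repo:

```shell
# Sketch of a Jenkins shell build step (assumes pep8 is on the slave):
# check every tracked Python file; a nonzero exit fails the build.
pep8 $(git ls-files '*.py')
```

Because gerrit-triggered jobs report the build result back as a verify vote, this is enough to flag style violations per patch without any extra plumbing.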
14:22:15 danken, last week giusspa (sorry for misspelling) enabled the network functional tests on jenkins
14:22:26 danken, we talked about converting it to run per patch on only network patches
14:22:47 danken, so he had an action item to provide us a way to differentiate network patches from others
14:22:57 knesenko, we have another issue on jenkins
14:23:29 knesenko, http://jenkins.ovirt.org/job/ovirt_db_report_engine/ stopped working after f18-vm02 from ec2 went offline
14:23:47 knesenko, it seems that it used a local install / files on that specific vm
14:24:11 knesenko, i'm not sure if anyone is still using that job, but we need to send an email to engine-devel and ask if it's still relevant
14:24:46 in that case, we can either start the vm and restore that data, or the owner of the job should create a puppet class to enable running that job on any vm
14:24:58 eedri: yes ...
14:25:00 eedri: I think that job was created by Libor, he also left
14:25:02 eedri: I see no reason to limit it to network patches.
14:25:07 eedri: are you talking about /var/lib/dbreport/schemaspy.sh?
14:25:12 knesenko, yes
14:25:24 eedri: ok ... can you send the email regarding the job?
14:25:29 eedri: if the test eats too much cpu we can limit its rate to once an hour
14:25:30 knesenko, not sure if it's something custom or installed
14:25:41 danken, it's not cpu i'm worried about
14:25:46 danken, it's capacity
14:25:53 #action eedri send email regarding http://jenkins.ovirt.org/job/ovirt_db_report_engine
14:26:00 danken, and the amount of patches running on jenkins
14:26:09 other issues regarding jenkins?
14:26:35 so limit this job's rate to once in 2 hours.
14:26:55 #topic Other Issues
14:26:58 danken, i'm not sure we can, dcaro ^
14:27:03 ok guys ... other issues?
14:27:05 knesenko, wait, there's another jenkins issue
14:27:17 knesenko, i've enabled another per-patch job - engine unit tests
14:27:37 knesenko, so now we have 3 verifies on each patch (findbugs, unit tests, checkstyle)
14:27:53 knesenko, actually we can disable checkstyle if the unit tests run now
14:28:02 eedri: sounds good
14:28:29 eedri: danken: I don't know of any way of limiting the number of job runs per hour, and I'm not sure it's useful either; would you queue them or just discard the ones over quota?
14:28:56 dcaro, danken i agree, i think jenkins will trigger a job once a gerrit event happens
14:29:52 anything else guys?
14:31:19 knesenko, trac tickets?
14:32:11 I don't think that anybody has touched the tickets
14:32:45 eedri: we can check the status of the assigned tickets
14:32:47 knesenko, we should probably review them, to see if some can be closed
14:32:52 knesenko, or are still relevant
14:33:00 I didn't have time to work on my tickets
14:33:24 eedri: ok
14:36:44 #endmeeting