14:00:06 <quaid> #startmeeting
14:00:06 <ovirtbot> Meeting started Tue Jul 17 14:00:06 2012 UTC.  The chair is quaid. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:06 <ovirtbot> Useful Commands: #action #agreed #help #info #idea #link #topic.
14:00:15 <quaid> #topic oVirt Infra weekly meeting
14:00:23 <quaid> #meetingname oVirt Infra weekly meeting
14:00:23 <ovirtbot> The meeting name has been set to 'ovirt_infra_weekly_meeting'
14:00:33 <quaid> #topic Roll call & howdy
14:00:36 <quaid> howdy :)
14:00:48 * mburns still here
14:01:04 * eedri1 here
14:01:26 * RobertM here
14:01:44 * mgoldboi here
14:02:47 <ovirtbot> [[Infrastructure team meetings]] ! http://wiki.ovirt.org/w/index.php?diff=3896&oldid=3887&rcid=3992 * Quaid * (+49) /* 2012-07-17 */ adding Gerrit agenda item
14:03:05 <quaid> ok, I think we've got all the items on the agenda
14:03:13 <quaid> #topic Agenda
14:03:19 <quaid> http://wiki.ovirt.org/wiki/Infrastructure_team_meetings#2012-07-17
14:03:36 <quaid> Welcome new maintainers How to Handle donated hardware Jenkins migration from EC2 - status Push on ovirt-engine RPMs sync to ovirt.org - status Enabling Gerrit patches - everyone vs. limited All other business
14:03:36 <hitvx> mburns: and the new one?
14:03:41 <quaid> ooh, bad format, sorry
14:03:56 <mburns> hitvx: patches should be applied
14:04:00 <quaid> '''Agenda'''
14:04:00 <quaid> * Welcome new maintainers
14:04:00 <quaid> * How to Handle donated hardware
14:04:00 <quaid> * Jenkins migration from EC2 - status
14:04:00 <quaid> * Push on ovirt-engine RPMs sync to ovirt.org - status
14:04:03 <quaid> * Enabling Gerrit patches - everyone vs. limited
14:04:05 <quaid> * All other business
14:04:10 <quaid> <eoagenda />
14:04:36 <quaid> anyone have anything else for the agenda before we start?
14:04:48 <ofrenkel> \me here
14:05:06 * ofrenkel here
14:06:14 <quaid> ok, moving on ...
14:06:29 <quaid> #Topic Welcome new maintainers
14:07:02 <quaid> I sent out the invitation to maintainers
14:07:33 <quaid> so far, the following folks have said "yes" in some form or other :)
14:08:09 <quaid> Ewoud, Itamar, Mike, Eyal, and myself as maintainers
14:08:31 <quaid> and Moran has agreed to be the test subject for our new process to become a maintainer :)
14:08:37 <RobertM> quaid, Add RobertM to that list
14:08:47 <quaid> RobertM: thanks, done
14:08:50 * mgoldboi bootcamp is starting...
14:09:18 <quaid> "Get down and do 20 pushups and 3 init scripts!"
14:09:54 <quaid> thanks folks for stepping up & being recognized and willing to lead within the oVirt project
14:10:04 * RobertM wonder what would take longer the 20 pushups or the 3 init scripts :)
14:11:10 <quaid> and as Eyal said, our goal is not to do everything ourselves, but to better enable other people to help where they are interested
14:11:47 <quaid> my hope is that our using modern, useful, popular developer-focused infra will be attractive to people -
14:12:55 <quaid> we're creating an infrastructure of participation that's very much like a modern dev environment one might use anywhere from a startup to a corporate dev team - lots of opportunities to learn new skills and try new ideas, all in an environment that's safe from Earth-shattering consequences when we fail
14:13:07 <quaid> </speech>
14:13:27 <quaid> and with that, let's get to some real business
14:13:41 <quaid> #topic How to handle donated hardware/VM hosts
14:14:11 <quaid> #chair mburns eedri1 RobertM mgoldboi ofrenkel
14:14:11 <ovirtbot> Current chairs: RobertM eedri1 mburns mgoldboi ofrenkel quaid
14:14:41 <eedri1> i've been testing running gerrit patches from ovirt-node and vdsm, seems like new vm donated by ewoud can handle the load
14:15:00 <eedri1> we just need to sort out the security issue (another topic)
14:15:42 <quaid> RobertM: wiki history suggests you posted this topic to the agenda, what thoughts about it did you have?
14:15:44 <eedri1> i'm assuming we can add more small oVirt projects other than ovirt-engine to that vm as well
14:16:31 <RobertM> quaid, I based this week's agenda on last week's
14:16:41 <quaid> eedri1: right, so one point here is, "How do we decide where to put what service?"
14:16:51 <eedri1> quaid, well
14:17:06 <RobertM> Also the question of security comes up as well.
14:17:10 <eedri1> i would say trial and error for starts
14:17:15 <quaid> RobertM: ok, good, it did seem relevant to me, too
14:17:34 <eedri1> for now our main goal is to add as many jobs as we can to run per gerrit patch
14:17:46 <eedri1> in order to minimize failures on commits
14:17:55 <quaid> RobertM: regarding overall security, I'd started a security audit process in the past but didn't get it all completed
14:18:29 * quaid does not see that he ever got any of that on to the wiki
14:18:48 <RobertM> quaid, We can save this for another topic, but it is clear half our infra time is going to be spent on Jenkins.
14:18:59 <quaid> eedri1: to make sure I get it, there is an on-commit hook whenever a patch is merged, and it runs related tests on Jenkins?
14:19:27 <eedri1> quaid, today all jobs running on jenkins run post-commit, so we only see the error after it was committed to git
14:19:54 <eedri1> quaid, idea is to run jobs that use a special gerrit plugin that identifies a change inside gerrit (i.e. a patch)
14:19:57 <quaid> RobertM: that makes sense, and is the sort of reason I want to use a PaaS such as OpenShift for the basics (WordPress, MediaWiki, Mailman-when-we-can) to offload the basic infra work ... so we can focus on the cool stuff such as Jenkins :0
14:20:15 <eedri1> quaid, and run a 'verify' job on that patch, pre-commit. if the job fails jenkins will add -1 to that patch
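The pre-commit verify flow eedri1 describes - a Gerrit-triggered Jenkins job that builds the candidate patch and votes Verified back on it - could look roughly like the following job shell step. This is a hedged sketch only: the host, project name, account, and the `make check` build step are placeholders, not the project's actual configuration; `$GERRIT_REFSPEC` and `$GIT_COMMIT` are the variables exported by the Gerrit Trigger plugin.

```shell
# Sketch of a Gerrit-triggered verify job step (all names are placeholders).
# $GERRIT_REFSPEC and $GIT_COMMIT are set by the Gerrit Trigger plugin.
set -e
GERRIT=gerrit.ovirt.org   # assumed Gerrit host

# fetch and check out the candidate patch
git fetch "ssh://jenkins@$GERRIT:29418/ovirt-engine" "$GERRIT_REFSPEC"
git checkout FETCH_HEAD

# build/test; vote Verified +1 on success, -1 on failure
if make check; then VOTE='--verified=+1'; else VOTE='--verified=-1'; fi
ssh -p 29418 "jenkins@$GERRIT" gerrit review $VOTE "$GIT_COMMIT"
```

This is a CI job fragment tied to remote infrastructure, so it is not runnable standalone; the key point is the final `gerrit review` vote, which is what makes a failed build show up as -1 on the patch before merge.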
14:20:41 * eedri1 had bad exp with using openshift for jenkins
14:20:45 <RobertM> quaid, I can see Wordpress and mediawiki but mailman and repo are too big to be housed on OpenShift
14:20:53 <eedri1> not that flexible, but for other usages it might be good
14:21:00 <quaid> eedri1: that sounds good, although what happens if there is a large number of patches being committed? how quickly can Jenkins run those tests and respond?
14:21:11 <eedri1> quaid, exactly!
14:21:22 <eedri1> quaid, that's why we need more vms and resources
14:21:38 <eedri1> quaid, so jenkins can handle the load. there are other ways to tackle this
14:21:54 <eedri1> quaid, like listening on a specific folder or branch per project
14:21:57 <RobertM> Also I have a concern about our current build process.  We have a large number of open slots but are seeing a lot of pending jobs.
14:22:27 <quaid> eedri1: yeah, about OpenShift and Jenkins, that makes sense not to use - the EC2-based service doesn't work for our Jenkins needs. We can reconsider if something fundamentally changes in OpenShift
14:22:34 <eedri1> RobertM, that's because jenkins limits the amount of the same job per node
14:22:48 <eedri1> RobertM, i've had problems with that in past, it needs to be tested
14:22:56 <gestahlt> Just a quick question: Should the bridge device ifcfg-ovirtmgmt be on the node?
14:22:57 <RobertM> There seem to be a lot of blockers that limit Jenkins' ability to do things in parallel
14:23:14 <eedri1> RobertM, thing is, since the amazon ec2 vms are so slow, a new commit is committed before the previous one is done testing
14:23:43 <eedri1> RobertM, it can be done via a plugin, just needs to be tested to verify it works well
14:24:26 * eedri1 talks about the 'throttle concurrent builds' option in job config
14:24:37 <RobertM> gestahlt, Yes ovirtmgmt needs to be on the node
14:24:55 <gestahlt> Okay, because i have another bridge which has the correct config (brem1)
14:25:40 <eedri1> RobertM, i think i can enable running the same job on different nodes
14:25:48 <mburns> gestahlt: vdsm-reg should handle that on startup when the node is registered to engine
14:25:55 <eedri1> RobertM, it will clear the backlog in current job load i think
14:26:05 <eedri1> quaid, can you add that to actions?
14:26:06 <RobertM> eedri1, From what I am seeing the dependency tree is more the issue.
14:26:22 <quaid> eedri1: fwiw, you can declare actions as well
14:26:26 <eedri1> RobertM, well if you mean why we run certain jobs only after ovirt-engine?
14:26:48 <adamw> anybody run vdi with ovirt?
14:26:51 <eedri1> #action eedri to test running jenkins job in parallel on different nodes
14:27:08 <quaid> eedri1: thanks, not trying to be lazy :) just enabling
14:27:37 <eedri1> RobertM, the reason for that is not running multiple jobs that might fail if ovirt-engine compilation fails
14:28:03 <eedri1> RobertM, so to avoid multiple failures and use of jenkins slaves, we only run those if ovirt-engine works
14:28:25 <RobertM> eedri1, I understand that.  But the build takes a while.  We might want to see about breaking it down into smaller pieces that can run in parallel.
14:29:02 <eedri1> RobertM, i agree with that, but that's an issue for the developers to support
14:29:11 <RobertM> My concern is we are only using half the slots on what we already have.  Adding more slots might not help the build process.
14:29:18 <mgoldboi> eedri1: RobertM: should run base compilation and then run in parallel
14:29:30 <mgoldboi> as a first step
14:29:33 <eedri1> RobertM, if we can split unit tests into smaller jobs, it's better; not sure if that's possible due to maven deps
14:29:44 <eedri1> mgoldboi, thats what happens today
14:30:15 <eedri1> mgoldboi, RobertM base ovirt-engine compilations --> triggers findbugs,unit-tests,db tests, etc... all in parallel
14:30:16 <mgoldboi> eedri1: we need the monitoring job that can trigger other jobs
14:30:47 <mgoldboi> eedri1: then we would be able to sync it all - right
14:31:04 <mgoldboi> ?
14:31:15 <eedri1> there are some improvements that can be done, like i mentioned earlier
14:31:39 <eedri1> still, while we use slow amazon ec2 slaves we'll have 3x the run time per job that we normally have
14:32:02 <eedri1> i believe that once we have stronger vms as jenkins slaves, most of the backlog issues will be resolved
14:32:26 <quaid> eedri1: how is progress on that?
14:33:35 <quaid> do we need more VMs right now than has been offered?
14:33:46 <eedri1> quaid, well.. like i said, ewoud's vm can be used for now to run some of the jobs + gerrit patches from ovirt-node + vdsm
14:33:54 <eedri1> (after we'll solve security issues)
14:34:12 <eedri1> quaid, once the vms that have been offered are put into use, i believe that will suffice
14:34:14 <eedri1> for now
14:34:35 <quaid> eedri1: gestahlt offered some others last week, too; are we going to be able to bring those up?
14:34:51 <quaid> eedri1: and if yes, do we still need to look at getting more right away?
14:34:54 <eedri1> quaid, sure, as soon as he is ready and the slaves can be accessed
14:35:11 <eedri1> quaid, it depends on how long we'll have gestahlt vms at hand
14:35:31 <eedri1> quaid, we need to be prepared that those won't be available at some point
14:35:32 <RobertM> I just ordered 16G of RAM to bring my 2 nodes up to 16G.
14:35:43 <quaid> right, we need to establish what the base ongoing need is & how to fulfill it
14:36:13 <eedri1> quaid, since we're going to use VMs, it's easy to start with a basic configuration and see where the bottlenecks are
14:36:29 <eedri1> quaid, like i said in the email thread, we can start with 3-4 VMs
14:36:49 <eedri1> quaid, and if we need more, we'll create new VMs
14:36:57 <quaid> #agreed Start new Jenkins VMs with basic config and see where the bottlenecks are
14:37:26 <eedri1> quaid, jenkins also has a monitoring plugin that allows admins to see cpu/memory/load status on each vm
14:37:31 <gestahlt> quaid: The offer still stands, as soon i get ovirt running in the cluster
14:37:51 <gestahlt> quaid: then you can have it
14:37:59 <tjikkun_work> also the vm provided by oxilion (ewoud) has munin: http://jenkins.ekohl.nl/munin/ekohl.nl/jenkins.ekohl.nl/index.html
14:38:08 <quaid> eedri1: do you think we should get a basic host from a service provider, one that can run at least one Jenkins slave as baremetal or VM?
14:38:09 <eedri1> quaid, another thing to consider is enabling puppet as soon as we can
14:38:28 <eedri1> quaid, otherwise we'll have to manually configure each of these new VMs, one by one
14:38:33 <quaid> ick
14:39:08 <eedri1> quaid, we need at least one bare-metal for automation tests which needs a hypervisor to run
14:39:17 <quaid> #action Getting Puppet running is a top priority
14:39:47 <quaid> eedri1: ah, good point, thank you - that justifies getting a baremetal host no matter what
14:39:47 <eedri1> quaid, you'll need a puppet master for it (puppet.ovirt.org?)
14:40:07 <eedri1> i.e new vm
14:40:19 <eedri1> might be worth checking if it can be hosted in openshift also
14:40:43 <quaid> #action Research for service provider for a baremetal host for automation tests and possible Jenkins VM slaves if room
14:40:43 <RobertM> puppet could run on Kitchen Sink.
14:40:57 <quaid> RobertM: was just going to ask you that, let's go with that
14:41:04 <quaid> RobertM: are you working on that? or was it someone else?
14:41:33 <RobertM> Need DNS created for puppet.ovirt.org
14:41:50 <eedri1> kitchen sick?
14:41:52 <eedri1> sink
14:41:56 <eedri1> sorry for my ignorance
14:42:13 <quaid> eedri1: linode01.ovirt.org
14:42:16 <RobertM> eedri1, linode01 - my nickname for it, although I meant "sink"
14:42:18 <tjikkun_work> probably the server that has everything but a kitchen sink ;)
14:42:37 <quaid> tjikkun_work: everything *and* the kitchen sink, in this case :)
14:42:47 <eedri1> :D
14:42:50 <eedri1> got it
14:43:26 <quaid> #action quaid to get puppet.ovirt.org DNS
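Once puppet.ovirt.org exists, bringing a new slave VM under Puppet management would look roughly like this. A sketch under assumptions only: a standard Puppet agent/master setup on a yum-based distro, with puppet.ovirt.org being the name proposed above (not yet live DNS).

```shell
# Hedged sketch: bootstrap a fresh yum-based VM against the proposed master.
# puppet.ovirt.org is the DNS name requested above, not yet live.
yum install -y puppet
puppet agent --test --server puppet.ovirt.org   # first run submits a cert request to sign
```

After the master signs the certificate, subsequent agent runs pull and apply the node's manifests, which is what removes the "configure each VM one by one" problem.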
14:44:02 <RobertM> I don't know if it makes a difference, but I can get a 1U in a local colo for about $40 a month, and I have 2 1U boxes that could be used?
14:44:40 <eedri1> do we have budget for that?
14:45:08 <quaid> RobertM: that is sort-of what I was looking in to
14:45:12 <RobertM> eedri1, I have no idea what kind of budget we have.
14:45:26 <quaid> eedri1: honestly not sure - I am going to ask Itamar, to start, and see where we go from there
14:45:54 <quaid> RobertM: yeah, the project is cash-poor :) but our large corporate sponsor friends might have some trees we can shake
14:46:34 <eedri1> quaid, maybe it's best to ask the community if someone can donate a bare-metal server?
14:46:34 <RobertM> quaid, That is what I assumed
14:46:44 <quaid> or there may be a hosting service that wants to contribute via a dedicated host, although that may take time to find
14:46:57 <quaid> eedri1: +1
14:47:04 <eedri1> quaid, RobertM there is another option
14:47:20 <eedri1> we can try using fake-qemu to run the tests on a VM
14:47:40 <quaid> #action Ask on Board and arch@ mailing lists for a baremetal host for automation tests
14:47:43 <eedri1> need to consult the guys writing the tests if it will run on a fake host
14:48:48 <RobertM> eedri1, Still learning Jenkins - what would it take to run a bare mental node?  Could it be run off a 35/35 Home connection?
14:49:09 <eedri1> it sure is mental
14:49:09 <eedri1> :
14:49:11 <eedri1> :)
14:49:12 <RobertM> What supporting stuff would need to be onsite?
14:49:52 <eedri1> from my exp, some tests require a dual- or quad-core cpu
14:49:55 * RobertM If you guys are going to make fun of my sucky spelling it will take up the entire meeting :)
14:50:26 <eedri1> 35/35?
14:50:26 <quaid> RobertM: oh, you have to let us go on at least one per meeting, when you do a great one such as "bare mental"!
14:50:51 <quaid> down/up speed?
14:50:59 <eedri1> i think that should be enough
14:51:11 <eedri1> we can test it if you want
14:51:16 <RobertM> eedri1, I have two of these that I bought to use with ovirt.  http://cgi.ebay.com/ws/eBayISAPI.dll?ViewItem&item=270990362471
14:51:21 <eedri1> that is if you have a spare @ home
14:51:49 <eedri1> nice..
14:51:53 <eedri1> we have 2 options
14:52:05 <RobertM> I just bought 16G of ram to kick them up to 16G
14:52:12 <eedri1> 1 - allow access to jenkins.ovirt.org to those hosts to manage them with SSH
14:52:13 <quaid> we can always start with @ home, whatever gets us positive changes soonest
14:52:35 <eedri1> 2 - allow only access from them to jenkins and run in headless mode (i'm still investigating that option)
14:52:52 <eedri1> option 1 will of course require setting up firewall rules
14:53:00 <eedri1> to allow only jenkins.ovirt.org
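The firewall restriction for option 1 could be sketched with iptables rules like these. Illustrative only: a real setup would likely pin the resolved IP of jenkins.ovirt.org rather than trust DNS at rule-load time, and would persist the rules in the distro's usual way.

```shell
# Hedged sketch: accept SSH on a donated slave only from jenkins.ovirt.org,
# drop all other inbound SSH.
iptables -A INPUT -p tcp --dport 22 -s jenkins.ovirt.org -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP
```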
14:53:52 <adamw> any ideas why i can't yum update ovirt? http://pastebin.com/3J8SJBk4
14:54:02 <adamw> seems like i have to reinstall?
14:55:05 <RobertM> adamw, Just uninstall ovirt-engine and reinstall.  Package format changed since beta1
14:55:13 <adamw> oh
14:55:18 <tjikkun_work> adamw, try yum distro-sync
14:55:29 <tjikkun_work> ah, or what RobertM said :0
14:55:58 <RobertM> Just make sure you run ovirt-upgrade before starting anything up or your DB could be toast
14:56:17 <quaid> hmm, ok
14:56:46 <quaid> eedri1: can we take that Jenkins config discussion to the infra@ list?
14:56:55 <quaid> sounds like we'll need more time to discuss than just here today
14:56:56 <RobertM> eedri1, quaid I bought the hardware because I was planning to use it for CentOS builds of ovirt.  The hardware is currently doing nothing.
14:57:05 <eedri1> quaid, +1
14:57:23 <RobertM> I am fine with that.
14:57:30 <eedri1> RobertM, great, jenkins warmly accepts new slaves...
14:57:45 <quaid> #action eedri1 bring discussion to infra@ about how to do distributed Jenkins
14:57:49 * RobertM Note today is my birthday and I am heading out after the meeting
14:57:58 <eedri1> congrats!
14:57:58 <quaid> RobertM: happy birthday :)
14:58:03 <eedri1> mazal tov as we say
14:58:22 <mburns> happy birthday!
14:58:39 <mburns> can i celebrate by leaving after the meeting too?
14:58:45 <eedri1> +1
14:58:49 <quaid> is there time to talk about the Gerrit patches security concern eedri1 brought up?
14:58:57 <quaid> or should we push that to a list discussion, too?
14:59:00 <eedri1> action - bring beer to next meetings
14:59:02 <RobertM> mburns, Depends would your boss be ok with that :)
14:59:34 <RobertM> quaid, +1 for Gerrit patches to list
14:59:38 <quaid> cool
14:59:41 <mburns> prob not...
14:59:45 <quaid> ok
14:59:58 <quaid> I think we ranged a bit but actually covered all the topics we needed to, save the Gerrit one
15:00:11 <eedri1> quaid, we missed the security thing
15:00:17 <RobertM> eedri1, Are you the one who was working on copying builds to a nightly repo?
15:00:20 <quaid> yeah, that's the remaining
15:00:22 <eedri1> quaid, do i have a few moments to talk about it?
15:00:25 <quaid> eedri1: can we also take that to the list?
15:00:28 <quaid> I have time
15:00:36 <eedri1> quaid, sure
15:00:38 <quaid> let's open the discussion, we can finish on list if we need
15:00:49 <adamw> maybe i'm doing something the wrong way, but each time i've installed ovirt-engine then go to add iscsi storage, it says "use host:" and has a drop down, but it's blank... any ideas?
15:00:50 <quaid> any objections?
15:00:51 <eedri1> as for the repo sync
15:01:30 <eedri1> my end is quite ready... i need someone to give me access to ovirt.org and to create repos from the files
15:01:51 <eedri1> there is a job right now that runs per commit and creates ovirt-engine rpms
15:02:10 <RobertM> eedri1, jenkins@jenkins.ovirt.org can ssh without password to jenkins@www.ovirt.org
15:02:19 <eedri1> all we need to do is add to this job an action (scp/rsync/nfs) to ovirt.org
15:02:26 <eedri1> RobertM, not at the moment afaik
15:02:39 <RobertM> eedri1, I set that up over the weekend
15:02:47 <eedri1> RobertM, great
15:02:54 <eedri1> RobertM, so i just need to know dest dir
15:03:02 <eedri1> $JENKINS_HOME/rpms?
15:03:13 <eedri1> and you'll pick the rpms from there?
15:03:33 <mburns> are we running these nightly?
15:03:38 <RobertM> eedri1, That would work, although I suggest $JENKINS_HOME/rpms/$project
15:03:38 <mburns> the sync?
15:03:40 <mburns> or per build?
15:03:43 <eedri1> i mean $JENKINS_HOME/rpms/$PROJECT_NAME/
15:03:46 * mburns thinks nightly...
15:04:00 <eedri1> right now it runs per commit
15:04:08 * eedri1 can change it to run nightly
15:04:11 <mburns> i think we should only sync nightly
15:04:18 <RobertM> We can run createrepo nightly
15:04:51 <RobertM> Since they aren't in the same DC it makes sense to do it nightly.
15:05:04 <mburns> have a job that triggers each day at midnight, and copies latest build from vdsm, engine, node to ovirt.org
15:05:07 <eedri1> so not to overload ovirt.org?
15:05:31 <mburns> then have a job on ovirt.org that picks up the new builds at 1 AM (or something like that)
15:05:34 <mburns> yes
15:05:41 <eedri1> mburns, so we'll need to change all current jobs to put all rpms to a main dir on master
15:05:50 <mburns> eedri1: no
15:06:00 <mburns> eedri1: just have them all publish rpms as artifacts
15:06:17 <quaid> #action quaid to get eedri1 sudo on linode01.ovirt.org
15:06:23 <mburns> then have publish to master job copy artifacts from each other job
15:07:06 <eedri1> mburns, but artifacts can be on different slaves, no?
15:07:22 <mburns> eedri1: pretty sure they get archived on master
15:07:28 <mburns> but jenkins handles all that...
15:07:33 <eedri1> mburns, ok, i'll have to check it
15:07:37 <RobertM> That is my understanding
15:07:42 <mburns> eedri1: look at ovirt-node-iso job
15:07:52 <mburns> it copies artifacts from ovirt-node-stable job
15:08:10 <mburns> no restriction needed on which host they run
15:08:37 <eedri1> mburns, aah.. of course, a jenkins plugin to do it :)
15:08:50 <eedri1> mburns, "copy artifacts from another project..."
15:09:14 <mburns> ;-)
15:09:17 <eedri1> mburns, nice, never used it before
15:09:30 <eedri1> mburns, then again ..theres 500+ plugins..
15:09:34 <mburns> yep
15:10:19 <mburns> eedri1: just need to pull artifacts from lastStable build, then organize them correctly
15:10:31 <mburns> and copy them over to ovirt.org
15:10:52 <mburns> ovirt.org job should cleanup the old nightly, then create new nightly directory structure
15:11:25 <mburns> consume the packages from ~jenkins/ and put them in right place
15:11:35 <mburns> then run create repo (in all correct places)
15:11:40 <mburns> create md5sum files
15:11:57 <mburns> then cleanup old ~jenkins versions
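The ovirt.org-side steps mburns lists above can be sketched as one small script. Everything here is an assumption for illustration - the real dropbox path, distro layout, and retention policy were still being decided on infra@ - but the order of operations (cleanup, rebuild layout, consume packages, createrepo, checksums) follows the lines above.

```shell
# Hedged sketch of the nightly publish job on ovirt.org:
# cleanup old nightly, rebuild the layout, consume dropbox rpms, index, checksum.
set -e

publish_nightly() {
    dropbox=$1   # e.g. ~jenkins/rpms, where the jenkins job drops builds (assumed)
    nightly=$2   # e.g. /var/www/releases/nightly (assumed)
    dest="$nightly/fedora/17"             # assumed distro layout

    rm -rf "$dest"                        # cleanup the old nightly
    mkdir -p "$dest"                      # create new nightly directory structure

    for rpm in "$dropbox"/*/*.rpm; do     # consume per-project dropbox dirs
        [ -e "$rpm" ] || continue
        cp "$rpm" "$dest/"                # put packages in the right place
    done

    if command -v createrepo >/dev/null 2>&1; then
        createrepo "$dest"                # build yum repo metadata
    fi
    ( cd "$dest" && md5sum *.rpm > MD5SUMS ) 2>/dev/null || true   # md5sum files
}

# demo run against throwaway dirs so the sketch is safe to run anywhere
tmp=$(mktemp -d)
mkdir -p "$tmp/drop/ovirt-engine"
echo fake-rpm > "$tmp/drop/ovirt-engine/ovirt-engine-3.1.rpm"
publish_nightly "$tmp/drop" "$tmp/nightly"
```

The "cleanup old ~jenkins versions" step and the separate src/iso locations mburns mentions would be additional steps on top of this.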
15:12:58 <adamw> i think i'm getting my terms confused -- if i install ovirt-engine on a system, is it an ovirt host or just the manager of hosts?
15:13:01 <RobertM> I think we should keep 2 or 3 days worth of builds
15:13:26 <RobertM> adamw, Manager of hosts.
15:13:45 <mburns> RobertM: that's not the definition of nightly
15:13:58 <adamw> ah
15:14:03 <eedri1> RobertM, you have history already in jenkins job
15:14:04 <RobertM> adamw, It doesn't actually run any VMs, it just manages nodes
15:14:11 <eedri1> RobertM, which can be accessed and downloaded
15:14:34 <eedri1> RobertM, per build
15:14:42 <RobertM> mburns, Just think about possible minors and all the other stuff I have seen out there.
15:14:59 <adamw> i see.. and i think i asked this before -- it can't do both? or -- it can't run inside a vm ?
15:15:06 <mburns> RobertM: yes, but i'm comparing to fedora rawhide
15:15:11 <mburns> only one version available
15:15:16 <mburns> though you can get old builds
15:15:36 <mburns> and there is no guarantee of stability with rawhide
15:15:40 <eedri1> mburns, ok, i'll set up the jenkins job, once it's ready we can continue this on infra@ovirt.org or here..
15:15:48 <mburns> eedri1: ack
15:15:58 <RobertM> adamw, The all-in-one plugin does that: it allows you to run both a node and engine on the same host, but it is not recommended to run the engine on a node.
15:16:17 <eedri1> #action eedri to set up nightly jenkins job to collect latest stable rpms from all ovirt projects and copy to ovirt.org/$JENKINS_HOME/rpms/$project
15:16:20 <RobertM> eedri1, Yes that is the next step
15:16:59 <mburns> RobertM: eedri1:  you need to decide on layout and format of how stuff is sent over
15:17:04 <mburns> i.e. tar.gz file?
15:17:14 <mburns> what is the directory structure?
15:17:21 <mburns> who owns putting things in the right place?
15:17:41 <mburns> might make sense to have the jenkins job do all of that
15:17:42 <eedri1> mburns, why not just put all the rpms in each $project dir
15:17:58 <eedri1> do we need another resolution?
15:18:03 <mburns> eedri1: it could be argued either way
15:18:20 <mburns> but in the end, they have to be in certain locations in ovirt.org/releases/nightly
15:18:25 <eedri1> and use rsync or scp the rpms
15:18:31 <eedri1> that's another issue
15:18:41 <mburns> i.e., all f17 rpms are under nightly/fedora/17
15:18:42 <eedri1> i'm talking about just copying from jenkins to a dropbox on ovirt.org
15:18:57 <eedri1> after that is complete, we'll discuss where to deploy the repos
15:19:03 <mburns> eedri1: yes, you could transfer using rsync/scp/etc...
15:19:35 <mburns> it's more expensive, network bandwidth-wise
15:19:40 <RobertM> One minor change: ovirt.org/$JENKINS_HOME/rpms/$project/$release
15:20:00 <mburns> and src files need to go in a separate location in the end
15:20:06 <ovirtbot> [[Screencasts]] ! http://wiki.ovirt.org/w/index.php?diff=3897&oldid=3846&rcid=3993 * DNeary * (+1128) Add first two testing/demo scenarios
15:20:12 <mburns> we should be archiving both src rpms and src tar balls
15:20:31 <mburns> and binary artifacts should also be sent across (ovirt-node iso image)
15:20:41 <mburns> and that goes in a separate location
15:20:57 <mburns> but those are details that you guys can work out
15:21:01 * eedri1 suggests bringing it to infra@ovirt.org and deciding on the directory structure
15:21:54 <mburns> ack
15:22:13 <RobertM> +1 on infra@ovirt.org for the topic
15:22:25 <eedri1> ok, i think all other issues will be discussed on infra@ovirt.org , so we can close up
15:22:27 <RobertM> Of finalizing directory structure
15:22:36 <quaid> ok!
15:22:44 <mburns> my main point is that how jenkins puts the files there (format/layout) and how the script on the ovirt.org side expects those files to be laid out must agree
15:22:45 <quaid> yeah, what is the action to the list we have?
15:23:08 <eedri1> talking about security concerns when allowing gerrit patch jobs on jenkins
15:23:12 <quaid> (it can be group owned for now)
15:23:28 <quaid> #action discuss on infra@ about security concerns when allowing gerrit patch jobs on jenkins
15:23:30 <eedri1> finalizing rpms directory structure on ovirt.org
15:23:55 <quaid> #action decide where Jenkins puts files, how the script on ovirt.org expects files to be laid out, and make those two agree
15:24:02 <quaid> #undo
15:24:02 <ovirtbot> Removing item from minutes: <MeetBot.items.Action object at 0xa297fcc>
15:24:14 <quaid> #action finalize rpms directory structure on ovirt.org, to match Jenkins/script needs/expectations
15:24:22 <quaid> better :)
15:24:27 <quaid> ok, anything else?
15:24:40 * quaid ready to close, if not
15:24:53 <eedri1> something about distributing builds
15:25:00 <eedri1> on bare-metal iirc
15:28:20 <quaid> #action discuss on infra@ how to distribute builds - baremetal, VMS, etc.
15:29:21 <quaid> #action may still need a dedicated baremetal host somewhere; quaid is asking around about budget, others can help define what we need & research hosting providers
15:29:25 <quaid> ok, I think that's it
15:29:39 * quaid will close in 10 seconds
15:29:43 * ewoud waves
15:29:51 <ewoud> what's the point of bare metal?
15:29:59 <quaid> ewoud: have a great rest of your day off
15:30:17 <RobertM> ewoud, Being able to run VM for testing VM creation
15:30:22 <eedri1> ewoud, to use as hypervisor
15:30:36 <quaid> and with that
15:30:40 <quaid> #endmeeting