Matt and I were just thinking out loud based on observations we've made over the years. Our current Hudson works. At times, it works quite well, and at other times, not so much. We are considering a totally different approach that could potentially solve (or lessen) some stability issues, as well as be extremely convenient for you, our beloved committers. Here's what we have in mind:

1. Each participating project gets a projectname-hudson user ID. This user ID could optionally be in the same group as your project, with write access to all the project's resources (Git repos, downloads area). No one would ever know the passwords for this user ID.
2. Each project is given one independent Hudson master instance with, say, 2 executors. This would be nothing more than Hudson running as the projectname-hudson user ID on some assigned TCP port.
3. Each project's Hudson instance would be accessible as https://hudson.eclipse.org/projectname
4. Project committers could optionally be admins (or partial admins) of their Hudson instance.

Here's where I think this solution will shine:

- Better build stability, since there are no slaves.
- One project's builds cannot affect another's.
- Since builds are performed as a user ID with permissions equivalent to a project committer's, the build system could tag, commit, sign and publish downloads without needing to go to shell or use crontab.
- Projects could create jobs and administer their own Hudson instance.

Some notes:

- Shared tools would remain in /shared as they are now, so all the project Hudson instances would essentially look like today's Hudson instance, minus all the slaves.
- The current Hudson instance would remain in place, perhaps indefinitely, for those projects that don't want their own instance.
- Windows and Mac test jobs would continue to run on the "shared" instance. We won't create platform-specific slaves for specific projects unless we're sufficiently convinced we're not signing our lives away.

Open issues:

- I don't know what kind of maintenance nightmare this could be. If we have 200 Hudson instances, would that be more maintenance than what we have today?
- If projects are free to install plugins and configure their Hudson as they please, what happens if their instance becomes fubar?

Comments are welcome!
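To make the proposal concrete, here is a minimal sketch of how a per-project Hudson master might be launched. The port assignment scheme, base port, WAR path, and command-line flags are illustrative assumptions, not the Foundation's actual implementation; the functions build the launch command rather than run it.

```python
BASE_PORT = 8000  # assumed starting port for per-project instances

def hudson_user(project):
    """Dedicated user ID that owns the project's Hudson instance."""
    return f"{project}-hudson"

def assign_port(project, projects):
    """Deterministic port per project: BASE_PORT + index in the sorted list."""
    return BASE_PORT + sorted(projects).index(project)

def launch_command(project, port):
    """Command line to start one Hudson master as the project's own user."""
    return [
        "sudo", "-u", hudson_user(project),
        "java", "-jar", "/shared/common/hudson/hudson.war",  # assumed WAR path
        f"--httpPort={port}",
        f"--prefix=/{project}",  # served as https://hudson.eclipse.org/<project>
    ]

cmd = launch_command("sapphire", assign_port("sapphire", ["egit", "sapphire"]))
print(" ".join(cmd))
```

Because each instance runs under its own user ID with its own executors, one project's builds cannot starve or crash another's, which is the stability argument above.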
+1 Regarding the maintenance questions, this could be handled by the projects themselves via two scripts: 1. Reboot the project's Hudson. 2. Reimage the project's Hudson. If these scripts are accessible to project committers from the portal, there should be less need for webmaster intervention than today.
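A rough sketch of what those two self-serve operations could look like. The systemd unit name, home-directory layout, and stock-image path are all hypothetical; the functions return the commands instead of executing them, so the policy is easy to inspect.

```python
STOCK_IMAGE = "/shared/common/hudson/hipp-default"  # assumed default image

def reboot_commands(project):
    """Restart the project's Hudson instance (assumed systemd template unit)."""
    return [["systemctl", "restart", f"hudson@{project}"]]

def reimage_commands(project, image=STOCK_IMAGE):
    """Wipe the instance back to the stock image, then restart it.
    Job and plugin customizations are lost: this is the escape hatch for
    an instance that has become fubar."""
    home = f"/home/{project}-hudson/hudson"  # assumed HUDSON_HOME location
    return [
        ["rm", "-rf", home],
        ["cp", "-a", image, home],
    ] + reboot_commands(project)

for cmd in reimage_commands("sapphire"):
    print(" ".join(cmd))
```

Exposing exactly these two buttons in the portal keeps the blast radius small: committers can recover their own instance without being able to touch anyone else's.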
+1 We at Eclipse Scout haven't, AFAIK, had (m)any problems concerning slaves or other projects. But I like the idea of one instance per project. The most interesting aspect would be the technical user with commit rights on the SCM, for tags and such. Thanks, Stephan
One of the things we are working on at the Hudson project for Hudson 3.1.0 is called Team Concept (http://wiki.eclipse.org/Hudson-ci/features/Team_Concept). The basic idea is to introduce multi-tenancy in Hudson, so logged-in users see only the jobs belonging to their team plus public jobs, even though other teams are hosted in the same Hudson instance. Though this won't reduce the burden on a single Hudson, it gives the ability to host multiple teams (Eclipse projects) in a single Hudson (rather than one project per Hudson, as proposed). Another important thing we are working on is reducing Hudson's memory footprint. This is also our focal point for Hudson 3.1.0. One interesting long-term plan on our TODO list is Meta-Hudson, the ability to manage multiple Hudson masters from a single umbrella. This, we believe, is extremely important for large consumers of Hudson like the Eclipse Foundation. I'm guessing all three together would be an ideal solution for this proposal.
+1 What a brilliant idea! Especially the projectname-hudson user will be a big step toward making the automation and unification of signing, deployment and release processes easier.
+1 This will help to improve the signing and publishing process and will hopefully help to avoid the frequent performance problems we have faced for a long time with the shared instance using slaves.
I've asked Thanh to start investigating this. We'll post up observations as we see them. Thanks for the comments so far.
(In reply to comment #3) > I'm guessing all three together will be an ideal solution for this proposal. This all looks very promising. I'll admit that I am drawn to the natural privilege separation of having multiple Hudson instances running as different users, though.
(In reply to comment #7) > This all looks very promising. I'll admit that I am drawn to the natural > privilege separation of having multiple Hudson instances running as different > users, though. Why I said that is that we have done some homework on memory requirements for internal Oracle purposes. Each Hudson instance takes about 128 MB of JVM overhead plus 512 MB - 1 GB for jobs and builds. Multiplying that by 200 instances puts the total in the hundreds of gigabytes. You have probably taken that into account already.
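For reference, the arithmetic behind that memory concern, using only the figures quoted in the comment (128 MB JVM overhead plus 512 MB - 1 GB per instance for jobs and builds, times 200 instances):

```python
OVERHEAD_MB = 128          # quoted JVM overhead per Hudson instance
JOBS_LOW_MB = 512          # quoted lower bound for jobs and builds
JOBS_HIGH_MB = 1024        # quoted upper bound for jobs and builds
INSTANCES = 200            # hypothetical fleet size from comment 0

low_gb = INSTANCES * (OVERHEAD_MB + JOBS_LOW_MB) / 1024
high_gb = INSTANCES * (OVERHEAD_MB + JOBS_HIGH_MB) / 1024
print(f"{low_gb:.0f} GB - {high_gb:.0f} GB")  # prints "125 GB - 225 GB"
```

So with the quoted figures the fleet-wide footprint lands in the low hundreds of gigabytes, which is substantial but spreadable across several HIPP servers.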
We've taken into account the memory requirements of this solution. Thanks.
+1 To put my two cents in: any improvement in that area would be great! And this kind of isolation seems to have the potential to solve many problems.
I was talking with Gunnar yesterday, and he pointed me to a common use-case where an instance is normally off and is brought to life only when a build runs. This can be useful for those projects that don't build nightly, or only run weekly, stable or release builds. Since the project's job information would persist in the projectname-hudson user ID's home, an instance can remain off to save RAM and be brought into service within seconds.
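A minimal sketch of that "normally off" pattern: since all job configuration persists in the project's home directory, the instance only needs to be started when a build is requested. The port check and the injected `start` callable are illustrative assumptions, not an existing HIPP mechanism.

```python
import socket

def instance_running(port, host="localhost"):
    """True if something is already listening on the instance's port."""
    try:
        with socket.create_connection((host, port), timeout=1):
            return True
    except OSError:
        return False

def ensure_running(project, port, start):
    """Start the project's Hudson only if it is not already up.
    `start` is a callable that launches the instance; it is injected so
    the policy can be exercised without a real Hudson behind it."""
    if instance_running(port):
        return "already running"
    start(project)
    return "started"
```

Wiring a trigger like this into the build-request path would let dormant instances cost nothing but disk between builds.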
I had a thought I wanted to document regarding Windows and Mac slaves. How would we handle this case? Would the projects be provided with details on how to add a Windows/Mac slave to their Hudson instance, or would we also provide Windows and Mac HIPP instances?
(In reply to comment #12) > I had a thought and wanted to document regarding Windows and Mac slaves. How > would we handle this case? Please see comment 0.
> 1. Each participating project gets a projectname-hudson user ID.

The current user naming convention is genie.tlp.projectname, since the project-specific user ID could potentially be used for other, non-Hudson tasks.

The genie.tlp.projectname home directories will be on NFS. We'll run the Hudson app from NFS, since that will allow us to "move" the project's Hudson instance from one server to another easily. The project's Hudson instance will use a local disk* for its workspace. This will resolve the file-contention issues we see with NFS. If Hudson is launched on a different server, the workspace will be created automagically. We'll have to plan a process for frequently cleaning up HIPP server workspace storage.

Konstantin's ideas of self-serve reboot and reimage are part of what we're designing, but may not be in "HIPP 1.0".

* Current HIPP servers have almost 1 TB of local storage, but in this case, local storage could be anything the machine considers "local", such as an image file on a remote system mounted with the loopback device, a GlusterFS mount, an iSCSI initiator, etc.
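The storage split described above can be sketched as a couple of path conventions: configuration on NFS (so an instance can move between servers) and the workspace on local disk (recreated automatically on a new server). The genie.tlp.projectname naming comes from the comment; the mount points and directory layout are assumptions for illustration.

```python
NFS_HOME_ROOT = "/opt/public/hipp"       # assumed NFS mount point
LOCAL_ROOT = "/opt/hudson/workspace"     # assumed local scratch disk

def genie_user(tlp, project):
    """User naming convention from the comment: genie.tlp.projectname."""
    return f"genie.{tlp}.{project}"

def hudson_home(tlp, project):
    """HUDSON_HOME on NFS: jobs and config survive moving to another server."""
    return f"{NFS_HOME_ROOT}/{genie_user(tlp, project)}/hudson"

def workspace_dir(tlp, project):
    """Workspace on local disk: cheap, contention-free, and disposable."""
    return f"{LOCAL_ROOT}/{genie_user(tlp, project)}"
```

Keeping the workspace disposable is what makes the "move the instance to another server" story work: nothing under the local path needs to be migrated.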
Now that the Kepler release is out, I'd like to start rolling this HIPP out to interested projects sometime this month. I'm thinking of rolling it out to a couple of projects at a time, to work out any issues with our Hudson image and the process. With that said, I'd like to invite any interested projects who'd like to volunteer to try it out to create a bug and link it to this one. I will look into setting up a few projects in the coming weeks.
Sapphire is interested. https://bugs.eclipse.org/bugs/show_bug.cgi?id=412123
The Hudson project is interested. Please create one for us. We would be interested in using the latest version, Hudson 3.0.1. How do we request an upgrade to the latest version when one becomes available? Thanks,
(In reply to comment #17) > How do we request an upgrade to the latest version when one becomes available? I would open a new bug stating the project and the version of Hudson.
jgit and egit are interested. We would like to use a common HIPP jointly for both projects, and we would also like to run the Gerrit verification builds on this Hudson.
ECF is also interested in using a HIPP instance. Even though ECF uses an external build machine, a HIPP instance would allow us to more easily integrate with Gerrit.
Code Recommenders and Code Recommenders Incubation would like to request a (single) HIPP too. Similar to egit, we'd like to run the gerrit verification jobs on this server including sonar report etc. (the full program I guess).
Orion is also requesting a new HIPP; opened bug 415988.
UOMo would also like to participate. Should we create a separate ticket for this request?
I created one like Orion https://bugs.eclipse.org/bugs/show_bug.cgi?id=416742
Thanh, I've incorporated HIPP into our Hudson docs: http://wiki.eclipse.org/Hudson Please feel free to add/correct as required.
(In reply to Werner Keil from comment #24) > I created one like Orion https://bugs.eclipse.org/bugs/show_bug.cgi?id=416742 There are 2 issues; I hope it's OK to comment here, as I don't want to reopen the UOMo ticket unless necessary. First, the upper/lower case of the project name is not correct on some labels: "uOmo" should be "UOMo". Besides, all Hudson instances have some cookie bug, and I see the entire UI in Danish, not English or German. I am not in Denmark at all, nor is the OS Danish, so where does Hudson remember it? Can we reset that somehow? I can't see any language in my profile, but maybe it's set to Danish and I didn't notice it ;-/ TIA, Werner
(In reply to Werner Keil from comment #26) > (In reply to Werner Keil from comment #24) > > I created one like Orion https://bugs.eclipse.org/bugs/show_bug.cgi?id=416742 > > There are 2 issues, hope it's OK to comment here, I don't want to reopen the > ticket for UOMo unless necessary. > > First, the Upper/lower case of the project name is not correct on some > labels. > "uOmo" should be "UOMo". > The language issue only occurs in Chrome; I was able to fix it in the browser (it is a known Hudson problem we also had at Maersk ;-) but the other issues remain, so I will reopen the UOMo-specific ticket.
HIPP work is complete and it seems to be successful. Closing.
Thanks for your hard work, which has made the lives of committers easier :-)
Thanks. We're not done with HIPP though; our next focus is: Bug 422507 - [HIPP] Provide a way for projects to upgrade their HIPP instance We've started working on a mechanism whereby we (webmasters) create new Hudson HIPP images and allow committers to upgrade their HIPP as their schedule allows by using the HIPP Control tools in My Account [1] [1] https://dev.eclipse.org/site_login/myaccount.php