@Override and interface

Jim Leary, my colleague at CloudBees, got me into digging into this.

The question is around putting the @Override annotation on a method that implements an interface method, like this:

public class Foo implements Runnable {
    @Override
    public void run() {}
}

As you can see in the javadoc, when @Override was originally introduced, such use was not allowed. javac 1.5 rejects this, too (I verified this in 1.5.0_22.)
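For contrast, the use that was always permitted, annotating an override of a concrete superclass method, compiles under both compilers. A minimal sketch (the class names here are made up for illustration):

```java
// Base defines a concrete method in a superclass.
class Base {
    public String name() { return "base"; }
}

// Derived overrides a *class* method, so @Override is legal
// here even under javac 1.5 / -source 1.5.
class Derived extends Base {
    @Override
    public String name() { return "derived"; }
}

public class OverrideDemo {
    public static void main(String[] args) {
        System.out.println(new Derived().name()); // prints "derived"
    }
}
```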

Sun intended to change this in 1.6. Javac 1.6 indeed changed the behaviour to allow it (verified this in 1.6.0_26), but someone forgot to update the documentation, as you can see in the Java 6 API reference.

The interesting thing is what happens if you use javac 1.6 with "-source 1.5" and/or "-target 1.5". In all three possible combinations, the above code compiles. Is this a bug, or is this correct? Interestingly, the semantics of @Override is defined in the library, not in the Java Language Specification. So an argument can be made that this is as it should be: the JLS, which governs the -source/-target switches, has nothing to do with this annotation. It's akin to your code relying on types newly introduced in Java 6; if you compile it with javac 1.6 and -source 1.5, javac won't raise an error.

But IDEs do seem to tie this to the language level. Jim said that Eclipse, when set to language level 1.5, flags the above code as an error. I verified that IntelliJ does the same (but only in the editor; the actual compilation happens via javac, so the build will succeed.)

So the end result is ugly. If you open the project in your IDE, you see all these errors, but neither your build nor your tests (nor any actual execution, for that matter) will catch the problem. Even if this were a bug in javac, I don't see it getting "fixed"; the last thing you want is a security update release of Java 6 breaking all your builds.

I guess the right thing for projects (like Jenkins) is to avoid putting @Override on methods that implement interface methods, and to remove such uses as we discover them, so that people who open the source tree in an IDE won't see those false-positive errors. This is a bummer, because it's actually useful to have @Override on interface implementations (that's why the behaviour was changed in 1.6 in the first place!) Does anyone know of a FindBugs rule or a refactoring tool to check this? Or should these be filed as bugs against IDEs, for enforcing something that's not in the JLS?

Quiz answer: memory leak in Java

I posted a little quiz yesterday, and here is the answer.

The short answer is that InputStream needs to be closed. It’s easy to see why if it’s FileInputStream because you know the file handle needs to be released. But in this case, it’s just ByteArrayInputStream. We can just let GC recycle all the memory, right?

It turns out GZIPInputStream (or more precisely the Inflater it uses internally) uses native zlib code to perform decompression, so it actually occupies a fair amount of memory (about 32K-64K depending on the compression level, I believe) on the native side, while its Java heap footprint is small. So if you allocate enough of these, you can end up eating a lot of native memory while the Java heap is still mostly idle. Even though those GZIPInputStreams are no longer referenced, they just don't create enough heap pressure to make the GC run.

And eventually you eat up all the native memory, zlib's malloc fails, and you get an OutOfMemoryError (or your system starts swapping like crazy and effectively becomes unusable first.)
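The fix is simply to close the stream as soon as you are done with it, which releases the native buffers immediately instead of waiting for the GC. Here is a sketch of the corrected method; the someNotTooBigData() helper below is a made-up stand-in that gzips a serialized string, just so the example is self-contained:

```java
import java.io.*;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class Fixed {
    // Hypothetical stand-in for the original helper:
    // returns a gzipped, serialized String.
    static byte[] someNotTooBigData() throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(new GZIPOutputStream(bos));
        oos.writeObject("hello");
        oos.close(); // finishes the gzip stream and flushes everything to bos
        return bos.toByteArray();
    }

    static Object foo() throws IOException, ClassNotFoundException {
        ObjectInputStream in = new ObjectInputStream(new GZIPInputStream(
                new ByteArrayInputStream(someNotTooBigData())));
        try {
            return in.readObject();
        } finally {
            in.close(); // frees the native zlib memory right away
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(foo()); // prints "hello"
    }
}
```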

The other interesting thing to note is that -XX:+HeapDumpOnOutOfMemoryError doesn't do anything in this case. I read the JVM source code and learned that a heap dump only happens when the OOME is raised during a few specific memory-allocation operations, such as allocating a Java object or an array, or GC saturation. There are many other code paths in the JVM that report OOME, like this zlib malloc failure, that don't trigger a heap dump. There's no question HeapDumpOnOutOfMemoryError is useful, but be aware that in some cases the dump doesn't get created.

I knew that GZIPInputStream uses native code internally, but I didn't think about it too much when I was putting the original code together. Humans can't reason about the entire transitive object graph and its implications.

The other lesson is that now I know why ps sometimes reports such a big memory footprint for the JVM while jmap reports only modest usage. The difference is native memory outside the Java heap, although unfortunately I don't think there's any easy way to check what's eating the native memory.

My colleague and friend Paul Sandoz pointed out that if GZIPInputStream were nice enough to free the native buffers at EOF, it would have saved a lot of hassle, and I think he's right. One still has to consider the case where an IOException aborts processing before EOF is reached, but it would have helped, because those abnormal cases are rare. There's no harm in doing so, and anything that makes a library more robust in the face of abuse is a good thing, especially when the failure mode is this cryptic.

Quiz time: memory leak in Java

Today I had an interesting debugging exercise, and I felt like I learned a new lesson that’s worth sharing with the rest of the world.

I had the following code, which takes a small-ish byte array and deserializes it into an object (say someNotTooBigData() returns something like new byte[]{1,5,4, ... some data... }.) Seems innocent enough, no?

Object foo() throws IOException, ClassNotFoundException {
	byte[] buf = someNotTooBigData();
	return new ObjectInputStream(new GZIPInputStream(
	    new ByteArrayInputStream(buf))).readObject();
}

But when this is executed frequently enough, as in while(true) { foo(); }, it eventually causes an OutOfMemoryError. Can you tell why? I'll post the answer tomorrow.

Ken Cavanaugh has passed away

I’ve just learned that Ken Cavanaugh has passed away. He was my colleague back at Sun, and we worked on a few small projects together.

When I joined Sun, he was already THE CORBA guy, and when I left Sun, AFAIK he was still THE CORBA guy. And I was at Sun for, like, 8 years. Not many people sustain a passion for a given field of technology for that long, and that left a rather strong impression on me. I always hoped that I could be like that when I get to his age.

Certain people emit an aura of confidence and reassurance; you can tell right away that they know what they're doing or saying. Ken was one such person for me. He obviously commanded the respect he deserved, and the guest posts his colleagues left on his website show that it's not just me. I can really only use English well enough for dry technical matters, so I can't describe the feeling very well. I'm just very sorry to hear the news.

My epic battle with findbugs-maven-plugin and how I utterly lost

It started quite innocently. I was looking at this thread in the Jenkins dev list and thought it'd be a good idea to have critical FindBugs errors fail a build. My goal was simple: run some high-priority FindBugs checkers during the build, and if they report any error, fail the build. I wanted this in a profile, so that I don't need to wait for FindBugs to finish when I just want to build.

Should be simple enough, you’d think. Nope.

I spent the entire afternoon getting this going. There were several issues in the plugin and related places that blocked my progress. In the hope that other people won't suffer the same loss, here they are:

  • Maven 2.x and Maven 3.x site plugins are totally incompatible. AFAIK there's still no reasonable way to make your project build with both Maven 2 and Maven 3 when it comes to site stuff, and that pretty much includes all the code-analysis plugins. Maven 3 breaks backward compatibility with the site configuration of Maven 2, so the POM setup that used to work no longer works with Maven 3. (So if someone tells you that Maven 3 is compatible with Maven 2, don't let them fool you.) It silently ignores everything you have in the <reporting> section and does nothing, so you have to move the reporting configuration into a new location. This used to make it impossible to share the site configuration between Maven 2 and Maven 3, but I was told that the latest Maven site plugin, version 3.0, no longer has this problem.

  • The FindBugs plugin documentation seems to offer two mojos to generate reports, findbugs:check and findbugs:findbugs. But the check mojo actually isn't capable of generating any reports. The two dozen or so configuration options for tweaking report generation that you see in its doc are totally bogus: they are unused and ignored. (Correction 8/24: what I missed is that the check mojo designates findbugs:findbugs as a prerequisite.)

  • Some people tell you that you can invoke mvn findbugs:findbugs directly to generate the report, but this is rather problematic if you actually try it. First, it generates XML but not HTML, so it's useless for human beings: it tells you how many bugs it found, but nothing that actually points you to the offending code. One is supposed to be able to work around that by running the findbugs:gui mojo, but AFAICT this mojo is utterly broken. Second, if you invoke the findbugs:findbugs mojo directly, it doesn't pick up the same configuration it uses during site generation (one picks up build/plugins/plugin, the other looks at reporting/plugins/plugin). Again, AFAIK there's no way to have those two modes of invocation use the same configuration.

  • You need to make sure that Maven has at least compiled your source code before running site. The FindBugs mojo will happily skip itself if there are no class files to work on, and unless you are smart enough to figure out what the mysterious “canGenerate=false” line means, you'll waste your time trying to figure out why the mojo isn't working, like I did.

  • Remember my use case of making the build fail on serious FindBugs issues? The documentation might make you believe that the findbugs:check mojo can do this, but there are two large pitfalls. One is what I've already described: it doesn't actually run FindBugs, and instead expects that you've already run it. The other is that if it doesn't find any trace of FindBugs having run, it happily skips itself. The consequence is that mvn clean install always completes successfully, even if your code has FindBugs violations. I still haven't figured out how to make this whole thing work. As I mentioned, FindBugs report generation itself requires that the source code be compiled, so I guess you'd have to invoke Maven like mvn clean compile site install or something. This is just ridiculous.

  • In FindBugs, you can specify which rules to enforce and which to ignore, described in a filter file. In a multi-module project, it tends to be more convenient to have a single filter file that all your modules use, rather than many similar filter definitions. But this seemingly typical use case just doesn't work with the Maven FindBugs plugin, because the path you specify in the filter-file configuration is always interpreted relative to the current Maven module, and there seems to be no way to point it at the base directory of the whole project (the ${project.basedir} property also expands to the current module's base directory, which is useless.) The documentation does discuss this and gives you a workaround. As a Maven plugin developer myself, I understand where they are coming from, but as a Git user, the assumption that requires such a cumbersome workaround (being able to check out and build modules individually, as you can in Subversion) buys me nothing, yet I still have to pay all the price. This doesn't make sense.
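For reference, the filter file itself is small; here is a sketch of an exclusion filter, where the class pattern and bug pattern are just illustrative examples:

```xml
<!-- findbugs-filter.xml: one shared filter for all modules -->
<FindBugsFilter>
  <!-- example: skip generated message bundles -->
  <Match>
    <Class name="~.*\.Messages"/>
  </Match>
  <!-- example: ignore one specific bug pattern project-wide -->
  <Match>
    <Bug pattern="EI_EXPOSE_REP"/>
  </Match>
</FindBugsFilter>
```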

I'm sorry to say this, but this is a disaster. Integrating FindBugs into an Ant project, generating an HTML report, and failing the build on significant errors is fairly straightforward, and takes maybe 10 or 20 lines at most. But here in Maven, it takes more lines in your POM, not to mention one whole Maven module just for the filter file, plus all these pitfalls. And it still doesn't attain my original goal of making critical FindBugs issues fail the build.
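For comparison, the Ant version might look something like this in its entirety; a sketch, assuming findbugs.home points at a FindBugs installation and a compile target has already produced build/classes:

```xml
<taskdef name="findbugs"
         classname="edu.umd.cs.findbugs.anttask.FindBugsTask"
         classpath="${findbugs.home}/lib/findbugs-ant.jar"/>

<target name="findbugs" depends="compile">
  <!-- run the high-priority checkers and produce a human-readable report -->
  <findbugs home="${findbugs.home}" output="html"
            outputFile="findbugs.html" reportLevel="high"
            warningsProperty="findbugs.found"
            excludeFilter="findbugs-filter.xml">
    <class location="build/classes"/>
    <sourcePath path="src"/>
  </findbugs>
  <!-- warningsProperty is set iff bugs were found, so this fails the build -->
  <fail if="findbugs.found"
        message="High-priority FindBugs issues found; see findbugs.html"/>
</target>
```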

Experiences like this make me really want to switch to Gradle, but alas, it's no longer my call alone to make changes like that. So for the time being, I'm going back to my good, trusted Maven antrun extended plugin. At least it works. And Stephen, this is why an Ant fragment is actually more maintainable than a magical combination of Maven hacks.

I’m traveling for the next two weeks

I’m at San Francisco airport now to start my first around-the-world tour!

My first stop will be Tokyo, my home town. There'll be a Jenkins user meet-up, whose 88 seats are booked solid. This time the topic is various scripting languages, and I'll be presenting on the recent Ruby/Jenkins work in the core with cowboyd. On Tuesday, I'll be delivering one of the keynotes at the Japan Java User Group Cross-Community Conference. This is a full-day event with 3 concurrent tracks, showing the high degree of interest in Java in Tokyo.

I have to make sure I won't forget to attend the Jenkins project meeting that night; it's 1am my time. Technology has made distance a non-issue, but the time-zone difference is really fragmenting our community (especially the Asian communities from the rest of the world), and I wish we could somehow fix that.

Then I'll be heading to Paris, to present at the “What's Next?” event by Zenika. While this is a for-pay conference, Zenika is also generously hosting a free Jenkins user meetup on Friday night. I'll also be speaking there, along with a number of French Jenkins developers. If you are using Jenkins or thinking about using it, this is an opportunity to learn a thing or two and get to know some of the people behind it. Then the following Saturday, we'll shift gears a bit and do a hackathon. While the meetup is more for users of Jenkins, the hackathon is more for current and wanna-be developers of Jenkins and its plugins. It's also a whole-day event, so you'll have more of a chance to really get to know people. So if you are already a plugin developer, or thinking about writing one, please join us. On 5/30, I'll be doing my last show in Paris, at the SFEIR CloudCamp, about Jenkins.

On 6/1, I'll head to London for another talk at Skillsmatter in the evening. The next day, I'll be doing a one-day training, and I believe some seats are still available. And with that, I'm back to San Francisco!

So please forgive me for any delay in responding to e-mails, and I hope to see as many of you as possible on the road.

Upcoming Webinar “Mastering Jenkins Security”

I'll be doing another Jenkins webinar, titled “Mastering Jenkins Security”, next Thursday at 10am Pacific Time. It's a free event, so please register.

After the first webinar, I got a lot of feedback about future webinar topics, so when we thought about doing the next one, this came fairly naturally. Unlike the first one, the idea this time is to pick one topic and go into it in depth. That's harder to do at a conference, so I think it's better suited to webinars.

By default, no security is enabled in Jenkins, so in an environment where stricter access control is beneficial, an administrator needs to set it up to suit their needs, and there are a lot of different ways people want to configure it. So in this webinar, I'll start by outlining the basic design of the security system in Jenkins, namely authentication and authorization, so that you can build a mental model of how it works, how the two interact, and how it can be made to fit your needs.

We'll then go through the major implementations of those two pluggability points, so that you can pick the right one for your needs. There are plugins, like the Active Directory plugin or the OpenID plugin, that integrate tightly with their respective systems and provide a great integration experience. Then there are plugins like the script security realm, which provides a general-purpose mechanism for integrating Jenkins with arbitrary systems with little effort. And there's the entirely different approach of delegating authentication outside Jenkins to a front-end reverse proxy. On the authorization side, there are fewer, but still a number of, options to choose from.

Aside from authentication/authorization, I'll discuss the security implications of running builds in Jenkins, as well as other standard webapp security considerations, such as cross-site scripting, cross-site request forgery, and other attack vectors. I think it'd be useful for those who run Jenkins for a larger team.

So once again, please register if you are interested in attending, and if you have future topic suggestions, please let me know!

Bye bye Hudson, Hello Jenkins

(This post was originally made under an incorrect location, so I’m moving it here. The contents haven’t changed since its original post on Jan 11th 2011.)

As some of you may know, toward the end of last year the Hudson project experienced a fire drill — you can see it here, here, here, and here. In short, the issue was that Oracle was asserting a trademark right to the project name “Hudson”, and that caused considerable concern in the community. Since then, key community members have been talking with Oracle, in an attempt to produce some kind of proposal for a stable structure and arrangement, which was then going to be put to the Hudson community.

And as Andrew posted on hudson-labs.org, there is an update now — the negotiation didn't work.

The central issue was that we couldn't convince Oracle to put the trademark under a neutral party's custody (such as the Software Freedom Conservancy) to level the playing field. In a project where the community makes two orders of magnitude more commits than Oracle, we felt such an arrangement was necessary to ensure that meritocracy continues to function.

Aside from this, Oracle wanted broader changes to the way the Hudson project operates, including a far more formal review process for changes, architecture, third-party dependencies and their licenses, and so on. Those policies are worth discussing on their own, but it was a very risky idea to have someone external to the project draw them up. In a normal OSS project, such processes come out of the dev community itself, based on how it has been functioning. This is where I felt that the lack of a “level playing field” I mentioned above was already affecting us. (And on that note, there's another asymmetry around the CLAs that we haven't even touched on.)

All of this still might not have been a show-stopper if we felt there was genuine trust between us, but in this case we simply failed to build such a relationship, even after a month-long conversation.

So in the end, we'd like to propose to the community that we regrettably abandon the name “Hudson” and rename the project “Jenkins” — another English-sounding butler name that doesn't collide with any software project as far as I can tell. This option was something we'd have liked to avoid, for all the obvious reasons, but I'm convinced that for the long-term health of the project, it's the only choice. It makes me sad at a personal level too, as I named this project Hudson back in 2004 and have cherished it ever since. But the storm is gathering over the horizon, and the time to act is now.

The details of the proposal are again in the posting at Hudson Labs, so I won't repeat them here. One thing I want to stress is that we'd like to move Jenkins under the umbrella of the SFC, a neutral overlord that doesn't concern itself with the daily technical matters of the project, just as Sun was. That's the model under which Hudson has grown, and I think it still fits us well.

There will be a poll to gauge the broader community consensus. Please give us your support, and please let your voice be heard.

Deadlock that you can’t avoid

A seemingly innocent user report in the Hudson IRC channel turned into an interesting “discovery” (for me, anyway) about the JVM. Namely, if you get two threads initializing classes in the opposite order, you can run into a deadlock.

For this test, I wrote the following class, so that initialization of Foo results in the initialization of Bar:

package test;
public class Foo {
    static {
        try {
            System.out.println("Initializing Foo");
            new Bar();
            System.out.println("Foo initialized");
        } catch (Exception e) {
            throw new Error(e);
        }
    }
}
I then wrote the Bar class that does the opposite:

package test;
public class Bar {
    static {
        try {
            System.out.println("Initializing Bar");
            new Foo();
            System.out.println("Bar initialized");
        } catch (Exception e) {
            throw new Error(e);
        }
    }
}

Now, if you initialize them simultaneously from the opposite direction like this:

public class App {
    public static void main(String[] args) {
        new Thread() { public void run() { new Foo(); } }.start();
        new Thread() { public void run() { new Bar(); } }.start();
    }
}

And you’ll see that it deadlocks:

"Thread-1" prio=10 tid=0x0000000040696000 nid=0x2d6e in Object.wait() [0x00007ff087ce5000]
   java.lang.Thread.State: RUNNABLE
	at test.Bar.<clinit>(Bar.java:11)
	at test.App$2.run(App.java:14)

"Thread-0" prio=10 tid=0x0000000040688000 nid=0x2d6d in Object.wait() [0x00007ff087de6000]
   java.lang.Thread.State: RUNNABLE
	at test.Foo.<clinit>(Foo.java:11)
	at test.App$1.run(App.java:8)

Obviously, in production code, the path from the initialization of class Foo to class Bar will be much longer, but you get the idea. I'm kind of surprised that this isn't a widespread problem in Java EE. Developers don't normally think about class initialization, and on the server side you tend to have a lot of threads doing random things…