I am happy to announce the acceptance of a paper of mine at the 17th IEEE International EDOC Conference. The paper is titled “Measuring the Portability of Executable Service-Oriented Processes” and the conference will take place from September 9 to 13 in Vancouver, Canada. A pre-print version of the paper can be found here.
In this post, I will summarize the paper and comment on the review process, which involved an aspect that was new to me. Let me start with the review process:
The Review Process
Regular research papers for EDOC are ten pages in IEEE double-column style, and submission is handled via EasyChair. New to me for this paper was the rebuttal phase, which took place around one and a half months after the initial paper submission. From my point of view, the rebuttal phase turned out to be quite useful and helped to improve the quality of both the paper and the reviews.

At the time of the rebuttal, five reviewers had submitted comments on my paper, and I received a notification containing all of them. This looks very much like a normal acceptance notification, except that the rebuttal notification did not contain the reviewers’ decisions concerning the paper. Instead, I was able to write a response to the reviewers, which they could take into account when finalizing their reviews. The purpose of this response is to reply to any questions posed by the reviewers and to point out factual errors in the reviews. As such, it is an opportunity for the authors to tip the scales towards acceptance.

For my paper at EDOC, I am quite happy with the quality of the reviews. All reviewers had clearly read the paper and understood its point. Each reviewer pointed out critical aspects whose clarification would improve the paper. Four of the reviews sounded fairly positive, and I assume their rating was a weak accept or accept. One reviewer was more critical and had probably assigned a rating of weak reject or neutral.
Now, I had the opportunity to reply to all of the reviewers within a limit of five hundred words. The following strategy turned out to yield positive results for my paper:
- I thanked all of the reviewers for their effort.
- I responded to every reviewer, regardless of whether the review sounded positive.
- For every issue raised, I briefly acknowledged it, outlined whether it was fixable in the paper and, if so, how I would fix it.
Naturally, most of my reply (about half of it) addressed the most critical review. Still, I think that showing respect to the effort made by every reviewer is the right thing to do here.
When the final acceptance notification came in, I was quite happy: the paper got accepted. The four rather positive reviews remained unchanged and had an overall rating of accept. The more critical reviewer had modified her review and acknowledged that, provided I changed the paper in the way I described in my response, she would upgrade her rating to weak accept. This put me in a rather nice position: thanks to the early notification and the response, I had already drafted the changes I would make to the paper for the final version and got the work done relatively quickly.
In summary, I am quite happy with the rebuttal phase and, from my point of view, the benefit you get in terms of quality outweighs the effort it adds to the review process. I hope that more conferences will include a rebuttal phase similar to EDOC’s.
The Paper
The paper is the next step in my effort to build a quality comparison framework for the portability of service-oriented processes. In it, I propose a metrics framework for quantifying the degree of portability of a process and validate the framework both theoretically and practically. The practical validation uses several libraries of BPEL processes.
The portability of a program can be computed by contrasting the effort of porting it with the effort of rewriting it from scratch. A proxy for effort in this case is lines of code: portability can be computed as the ratio of the number of lines that have to be changed for porting to the total number of lines (since, strictly speaking, all lines have to be rewritten when starting from scratch). In the paper, I refine this way of calculating portability by feeding additional data into the computation. Not every line of code is equally portable or nonportable. If many runtimes exist, as is the case for BPEL, a code element may be supported, for instance, by all but one of them, by half of them, or by only a single one. As outlined in the figure below, runtimes tend to support only a subset of the overall specification, and only this subset is truly portable. This holds not only for BPEL, but for any standards-based runtime.
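To make the baseline idea concrete, here is a minimal sketch of the lines-of-code view of portability described above. This is an illustration, not the paper's exact definition; the function name and the interpretation as a complement (lines that survive a port unchanged) are my own framing.

```python
def baseline_portability(lines_to_change: int, total_lines: int) -> float:
    """Baseline, lines-of-code view of portability (illustrative sketch).

    Rewriting from scratch means changing every line, so the ratio of
    lines that must be changed to total lines measures porting effort
    relative to a rewrite. Portability is taken as its complement:
    the share of lines that survive a port unchanged.
    """
    if total_lines == 0:
        return 1.0  # an empty process has nothing to port
    return 1.0 - lines_to_change / total_lines


# Example: a 200-line process where 30 lines must be adapted
# yields a portability of 0.85.
print(baseline_portability(30, 200))
```

The refinement proposed in the paper replaces the binary portable/nonportable judgment per line with a graded one, as discussed next.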
A portability metric should reflect this aspect. Thanks to previous work on BPEL conformance in runtimes, I can precisely compute the number of runtimes that support each element in a process definition. That way, I can not only contrast portable and nonportable lines of code, but also consider a degree of portability for every line, which results in a more precise metric value. Additionally, I use this mechanism to calculate control-flow portability (by considering only the portability of activities) and communication portability (by considering only the portability of communication activities). If you are interested in more details of the definition and validation, please refer to the paper. Finally, I also built a tool that computes all of these metrics from BPEL files; it is open source and you can find it here. A previous post describes how to use the tool. Feel free to use it or contribute, I welcome feedback!
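The weighted variant can be sketched as follows. Here, each element's degree of portability is taken simply as the fraction of runtimes that support it, and the process metric is the average of these degrees; this is a simplification of the paper's definitions, and the element names and function signatures are hypothetical, not the API of the actual tool.

```python
from typing import Callable, Iterable, Tuple

def weighted_portability(support_counts: Iterable[int], num_runtimes: int) -> float:
    """Average degree of portability over all code elements (sketch).

    Each element contributes the fraction of runtimes supporting it,
    instead of a binary portable/nonportable judgment.
    """
    counts = list(support_counts)
    if not counts:
        return 1.0
    degrees = [c / num_runtimes for c in counts]
    return sum(degrees) / len(degrees)

def filtered_portability(
    elements: Iterable[Tuple[str, int]],
    num_runtimes: int,
    keep: Callable[[str], bool],
) -> float:
    """Restrict the metric to a subset of element kinds, mimicking the
    specialized control-flow and communication variants."""
    return weighted_portability(
        (count for kind, count in elements if keep(kind)), num_runtimes
    )


# Hypothetical process with three elements and five runtimes:
# 'invoke' and 'assign' supported by all five, 'reply' by only two.
elements = [("invoke", 5), ("assign", 5), ("reply", 2)]
print(weighted_portability([c for _, c in elements], 5))  # overall
print(filtered_portability(  # communication elements only
    elements, 5, lambda k: k in {"invoke", "receive", "reply"}
))
```

Restricting the averaged set to communication activities (`invoke`, `receive`, `reply` in BPEL) gives communication portability; restricting it to all activities gives control-flow portability.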