In my work over the last years, I have dealt quite often with the Web Services Business Process Execution Language (BPEL). BPEL is an open specification of a (Web services-based) process language, and thanks to that you can write process definitions in this language without locking yourself into a specific BPEL engine. So if your engine vendor decides to raise the prices for the next release, you can just switch to one of the several open source engines available without having to modify your actual program code. That – at least – is the theory.
In practice, we have used different engines in our group over the last years, and it has always annoyed us that each engine comes with its own peculiarities, specialties, support limits, and add-ons. No engine actually supports the complete specification; each covers only varying parts, which essentially ties a process definition to the platform it was built for and makes the portability of process definitions an illusion. Furthermore, the gap between what is defined in the specification and what actually works on an engine (as opposed to what engine providers claim works) was so large for some engines that development became really frustrating from time to time. That – at least – was my impression.
Recently, I teamed up with my colleague Simon Harrer to replace this impression with some hard facts. That is, we wanted to get a comparable picture of which parts of the BPEL specification actually are supported in today’s engines. The outcome of our conspiracy is betsy, a tool for testing BPEL engines, in particular for determining their standard conformance. It is freely available on GitHub and licensed under the LGPL, so feel free to use it; we also welcome participation and improvements. In this blog post, I give a short outline of its structure and describe how it works. A more comprehensive description is available in its architectural whitepaper.
Betsy consists of a testing engine that can transform test cases written in pure standard BPEL into deployment artifacts for specific engines, execute the tests, and aggregate the results into a set of reports. On top of that, betsy provides a large set of 140 test cases for checking standard conformance against the BPEL spec.
Requirements and Execution
Betsy is written in Groovy and makes heavy use of soapUI for sending and validating SOAP messages. The build tool we use is Gradle. To install and run betsy, you need:
- JDK 1.7.0_3 (64 bit) or higher (including the setting of the JAVA_HOME environment variable)
- Ant 1.8.3 or higher (including the setting of ANT_HOME; see the example commands after this list)
- soapUI 4.5.1 (64 bit)
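On Windows, for example, the two environment variables can be set like this (the paths are only illustrative and depend on where you installed the JDK and Ant):
set JAVA_HOME=C:\Program Files\Java\jdk1.7.0_03
set ANT_HOME=C:\apache-ant-1.8.3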
In the current version of the tool, we link soapUI by its installation path (which defaults to C:\Program Files\SmartBear\soapUI-4.5.0) and use .bat scripts. Please note that this ties the tool to the Windows operating system family. However, the scripts and installation paths can be modified to work on Linux as well. Most of the scripts do nothing more than start up and shut down specific Tomcat instances, so this should not be too difficult.
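As a rough illustration (the actual script names and directory layout in the repository may differ), such a start script essentially boils down to a single call into the Tomcat instance that hosts the engine:
server\apache-tomcat\bin\startup.bat
A Linux port would invoke bin/startup.sh instead, which is why adapting the scripts should be mostly mechanical work.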
You can download the software by cloning our Git repository at GitHub. Simply use this command:
git clone git@github.com:uniba-dsg/betsy.git
On execution, you provide betsy with the name of the engine(s) you want to test and the names of the test cases you want to execute. A run of betsy works like this:
That is, betsy first sets up execution and result directories, then executes each specified test case on each specified engine, and finally aggregates the test results. Test case execution is strictly sequential, and each engine is reinstalled from scratch for every test case. This implies that a run can take quite a long time: for 140 test cases and five engines, it takes around seven hours on our testing server (an i7 with 16 GB of RAM). However, this restriction is necessary, as parallel test execution can corrupt the results – some engines turned out to not handle parallelism very well – and single test cases were capable of disabling an engine for any further use, making a reinstallation necessary. This indicates that performance testing might be worthwhile (and would likely produce outrageous results), but so far we have limited the tool to conformance testing.
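To make the lifecycle concrete, here is a heavily simplified, self-contained Groovy sketch of that loop. All class and method names are invented for illustration; the real implementation lives in the Groovy sources of the betsy repository.
// Simplified sketch of betsy's test loop; all names are invented.
class Engine {
    String name
    void install()  { println "reinstalling $name from scratch" }
    void startup()  { println "starting $name" }
    void deploy(String test) { println "deploying $test to $name" }
    void shutdown() { println "stopping $name" }
}
def engines = [new Engine(name: 'ode'), new Engine(name: 'bpelg')]
def testCases = ['SEQUENCE', 'FLOW']
engines.each { engine ->
    testCases.each { test ->
        engine.install()      // fresh installation for every single test case
        engine.startup()
        engine.deploy(test)   // engine-specific deployment artifact
        println "running soapUI tests for $test"   // send and validate SOAP messages
        engine.shutdown()
    }
}
println 'aggregating results into reports'         // html, csv and latex output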
To fire up betsy, you can use Gradle with appropriate parameters. A call would for instance be:
gradlew run -Pargs="ode SEQUENCE,FLOW"
which executes the test cases for the sequence and flow activities on Apache ODE. If you leave out the arguments and just execute
gradlew run
in the project root, all tests for all engines we support will be executed, so keep in mind that this can take a while.
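If I read the argument format correctly, several engines can presumably be combined in the same comma-separated fashion as the test names, for instance:
gradlew run -Pargs="ode,bpelg SEQUENCE"
which would run the sequence tests on both Apache ODE and bpel-g.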
Betsy natively supports five engines. All are open source and written in Java. The engines are:
- Apache ODE
- bpel-g
- OpenESB
- Orchestra
- Petals ESB
Betsy currently provides reports in HTML, CSV and LaTeX table format. The HTML reports offer the possibility to drill down to the SOAP messages exchanged for every test case and engine. Here is a rough outline of how they look (I cannot embed the generated HTML in the blog post, so it is just an image):
These are the results for all engines and three tests for certain structured activities. If you look at the complete set of all 140 test cases, the picture is not at all (repeat: not at all) that green. A discussion of the implications of our findings will be part of a future blog post. For now, I hope I have raised your interest in betsy! If you want to know more, visit the project page.