Once again TSSJS was a well-organized event with lots of
interesting content. Hot topics that I took notice of were RIA, new languages, and obviously distributed computing and scalability.
I arrived on Tuesday morning, which gave me a chance to meet John Davies, Ted Neward, Kirk Pepperdine and Holly Cummins. We found a nice spot not too far from Charles Bridge. At some point we started discussing the reasons we’re seeing a burst of new languages. The discussion about languages is thought-provoking. Ted Neward (one of my favorite presenters) seems to have been spending a lot of his time recently thinking about this topic. Over dinner (while he was completely jet-lagged!) he explained his view. I’ll try to summarize
the main points:
- One size doesn’t fit all – we shouldn’t try to force one language to do everything and expect it to be good at it all. The concept of using multiple languages in the same application is actually something we’ve been practicing for a while, where each language serves a specific purpose.
- Different semantics require different expressions, i.e., different languages. An example that was given was Scala and Erlang and the notion of parallel programming as a first-class citizen in the language (as opposed to a set of libraries and explicit APIs in Java). The argument (brought up by Kirk) is that you can’t leverage multi-core platforms without languages that were designed to do so. It reminds me that multi-threaded programming indeed wasn’t common until threading became native in the language. Now you can’t imagine writing even a simple application without threads. So I think that Ted and Kirk have a valid point.
- Usability and productivity – how many lines of code are required to express a certain idea? There are many examples showing that different use cases in Java can be relatively verbose and complex compared to the “new” languages.
- The JVM/CLR makes it easy to introduce new languages as just new views and perspectives running on the same platform. Previously, languages such as Perl and Tcl had to be built with an entire stack, typically based on C or C++, and had to be ported to various platforms and operating systems. This approach made the choice of language and language interoperability quite difficult, as the decision to choose one language over the other was considered a “Catholic marriage”. Today, the JVM in Java and the CLR in .Net enable better separation of concerns. They provide a common platform that can easily support multiple languages. This simplifies interoperability of different languages within the same application. A good example is the new support for dynamic languages in Java 6 and in .Net. This makes the language decision simpler, as the impact of this decision on our project is less drastic and less risky than it was before.
While I think that all of these points are valid, I couldn’t help thinking that we’re forgetting past experience. For example, you
could easily argue that lines-of-code is only one measurement of productivity. Another measure of productivity is maintenance, i.e., how simple it is to read
the code and understand it, transfer it to another programmer, etc. My concern
is that if the language becomes too flexible and enables each of
us to write our own extension, we’re going to find ourselves in a position
where the only person who understands the code is the person who wrote it, and even that holds true only for a certain period of time. Think about C++ templates, macros, operator overloading, multiple inheritance — a lot of “nice” features that made our code very flexible, but less readable due to the many levels of indirection we had to go through just to parse a single class.
One of the things I liked when I switched to Java from C++ was the fact
that to understand my colleagues’ code all I needed was to read their .java files. In most cases I didn’t really need documentation and it was
fairly simple to parse the code because Java restricted much of the flexibility
that I just mentioned. Trying to do the exact same thing with C++
requires parsing of header files, macros and typedefs. Another issue is that introducing multiple languages can be quite
complex and a barrier to productivity, due to limited skill sets within a
certain project, even if choosing a language is less risky than before.
I think the concurrency argument is only a
temporary one. I’d hate to choose a
different language just for that, because it’s something that I expect to see native
in Java. So far we’ve managed to deal with multi-core and parallel programming quite effectively in Java using event-driven architecture (EDA) and master/worker patterns, and to abstract a lot of the concurrent programming with things such as Futures, Remoting, etc. Surely having some of the features of Scala or Erlang as part of Java would make our lives simpler, but when I measure the value against the risk involved, I’m not sure it justifies using a new language right now in a real project.
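To give a feel for what I mean, here is a minimal sketch of the master/worker pattern in plain Java using Futures; the class name and the toy task (summing a partitioned array) are placeholders of my own, not code from the demo:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class MasterWorker {
    // The master splits the job into chunks, hands them to a pool of workers,
    // and the Futures hide most of the thread coordination from the caller.
    public static long parallelSum(int[] data, int workers) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        try {
            int chunk = (data.length + workers - 1) / workers;
            List<Future<Long>> results = new ArrayList<Future<Long>>();
            for (int start = 0; start < data.length; start += chunk) {
                final int from = start;
                final int to = Math.min(start + chunk, data.length);
                final int[] d = data;
                // Each worker task sums one partition of the array.
                results.add(pool.submit(new Callable<Long>() {
                    public Long call() {
                        long sum = 0;
                        for (int i = from; i < to; i++) sum += d[i];
                        return sum;
                    }
                }));
            }
            long total = 0;
            for (Future<Long> f : results) total += f.get(); // master aggregates
            return total;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        int[] data = new int[100];
        for (int i = 0; i < data.length; i++) data[i] = i + 1;
        System.out.println(parallelSum(data, 4)); // 1 + 2 + ... + 100 = 5050
    }
}
```

The application code still carries the scaffolding (pool, tasks, aggregation), which is exactly the library-vs-language trade-off discussed above.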
Don’t get me wrong. I’m not saying that there is anything wrong with these languages. What I’m arguing is that we need to be very careful before we
choose them and make sure that we’re
measuring the right value, rather than assuming that
any of the above arguments applies to our application without proper analysis. Ted
was able to convince me to look further into this topic – so I’m probably going
to give Scala a try and get a real feel of it.
The event started on Wednesday morning with a very good presentation by Stephan Janssen. Stephan is the founder of Parleys.com. He is also the founder of the JavaPolis
conference held annually in Belgium. He talked about his experience with a wide range of RIA platforms: DHTML, Adobe Flex/Air,
JavaFX, Google Web Toolkit (GWT) and Microsoft Silverlight. He discussed his personal experience
in using the various technologies as part of Parleys.com.
The combination of a general overview with real-life examples made the discussion quite interesting and lively. The bottom line of this part of the talk was to use Adobe Flex if you’re building a site in the short term, and JavaFX if you’re planning on launching your site in about a year’s time, due to the maturity cycle and the gaps between the two technologies. Personally, I found the fact
that there are so many options to do the same thing quite confusing. I wish we could press the fast-forward button on the maturity cycle of these technologies. Having worked with previous versions of Parleys.com, I must say that I was very impressed with the progress and the *right* use of technologies to build the new version of the site.
Another interesting and quite innovative idea that Stephan presented
was about hosting services and collaboration with academic partners. The hosting
service will enable companies like ours to host their live presentations in the
Parleys.com site. In addition, you can embed the presentation in a blog entry. You can also record your talk online, using a web-based application.
The partnership with academic institutions enables scaling not just the content, but also the bandwidth, similar to the way downloads work. IMO Parleys.com
could easily become the YouTube of online presentations. If you
missed the presentation I’d recommend watching this interview here:
My own presentation, Getting ready for the cloud, seems to have been well-received, although I had some concern that it might be too high-level for some of the audience. You can read
some of the comments posted by others here and here.
The presentation included a live demo of a web 2.0 application (displaying market
data) running live on Amazon EC2. Although the demo ran over a wireless line, it went surprisingly smoothly, and I was able to easily redeploy and relocate
instances through a simple drag & drop using our UI, which was hosted on
one of the EC2 machines. The following day, Uri Cohen gave a session in which he showed the details of what’s going on behind the scenes and reviewed the actual code and API used in the demo. If you’re interested in experiencing it yourself, you can try out the same demo on our new EC2 version.
TSSJS was a good opportunity to meet in person the winners of our OpenSpaces developer competition. I heard interesting stories about what drove them to write their projects. The
common theme was the technology challenge – they had heard about our technology and scaling patterns and wanted to get a feel for themselves of how it works.
BTW, Jason Carreira, one of the winners, has since worked on another project: a scalable Twitter-like application using
GigaSpaces and EC2 (an alpha version already exists, he is now looking for
hosting opportunities). And Leonardo Goncalves, the first prize winner, is already thinking of the next version of his project. The third winner, Kirill Ishanov, is also planning to participate in next year’s contest. At the
end of the first day we showed a video of some of the judges (John Davies
and Julian Browne were missing from the video). It’s a light-hearted video in which the judges also make fun of Joe Ottinger. :)
Two of the talks I very much enjoyed were given by John Davies, formerly the founder and CTO of C24 (which was sold to IONA), who has recently started a new venture called Incept5. I’ve worked with John for many years now and we often have excellent chats about ideas in our respective markets. John’s first talk was one I’d heard before, but as always, he updated it with new anecdotes and ideas. He talked about extreme enterprise architectures, specifically ESBs and grids in the low-latency, high-volume, complex environments of investment banking. John started by explaining the value of a millisecond to the high-end institutions, literally in terms of dollars, something like $100 million per ms. He went on to talk about compiled languages compared to Java for this sort of processing. It was interesting to see John walk through a very high-performance matching and reconciliation engine we had designed together for a client a few years ago, and it’s exciting to hear that his new company will specialize in this area. John talked about some of the clever coding patterns that had to be implemented to provide linear scalability; although master/worker was the pattern of choice for scaling, it wasn’t as simple as just writing lots of workers.
John’s second talk was new to me, and although we had discussed its ideas in the past, it was fascinating to hear them presented. The room was packed — standing room only — as it was a topic near and dear to the hearts of many developers and architects: “The Enterprise without a Database”. I thought this would just be an extension of caching, but John went on to emphasize the huge amounts of human time and energy being lost on Object-Relational Mapping (ORM). Why do we still persist our well-established object-oriented models into a relational database? While ORM is simple at the example level, it doesn’t scale given the levels of complexity in today’s messaging standards. John made this very clear by example. I got the feeling that he was holding back the solution, perhaps to be released by his new company, but it was clear that there are alternatives to ORM: from caching objects to using CLOBs in classic databases. This is obviously an area to watch, as John always has a good vision for this sort of thing.
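To make the CLOB alternative a bit more concrete, here is a minimal sketch of my own (the Trade class, the XML encoding and the table layout are illustrative assumptions, not John’s actual solution): the whole object graph is serialized into one string that would be stored in a single CLOB column instead of being mapped across tables.

```java
import java.beans.XMLDecoder;
import java.beans.XMLEncoder;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;

public class ClobPersistence {
    // A hypothetical domain object; public getters/setters so XMLEncoder can handle it.
    public static class Trade {
        private String symbol;
        private int quantity;
        public String getSymbol() { return symbol; }
        public void setSymbol(String s) { symbol = s; }
        public int getQuantity() { return quantity; }
        public void setQuantity(int q) { quantity = q; }
    }

    // Serialize the whole object graph to one XML string -- the value that
    // would go into a single CLOB column, with no per-field mapping.
    static String toClob(Object o) {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        XMLEncoder enc = new XMLEncoder(buf);
        enc.writeObject(o);
        enc.close();
        return buf.toString();
    }

    static Object fromClob(String clob) {
        XMLDecoder dec = new XMLDecoder(new ByteArrayInputStream(clob.getBytes()));
        Object o = dec.readObject();
        dec.close();
        return o;
    }

    public static void main(String[] args) {
        Trade t = new Trade();
        t.setSymbol("IONA");
        t.setQuantity(500);
        String clob = toClob(t);
        Trade back = (Trade) fromClob(clob);
        System.out.println(back.getSymbol() + " " + back.getQuantity()); // IONA 500
    }
}
```

In a real application the string would be written through a plain JDBC PreparedStatement into a hypothetical table such as `trades (id, payload)`, trading per-field queryability for the removal of the whole mapping layer.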
At the end of the day I had the chance to have a beer with different people in a nice Mexican restaurant in Prague (courtesy of
Jodie, the cameraman, and his local friend). After a few beers, mojitos and lots of peanuts (courtesy of John Davies :)), the topic of open source software (OSS) came up. I think that we
all agreed that being open is a no-brainer, and that’s the way software products
should be built. Being open doesn’t necessarily mean
free – take Jive and Atlassian, for example. They sell commercial products, but they provide customers with the source code.
Another model is the dual-license model, such as that used by Red Hat and MySQL. It’s sometimes referred to as the Fedora model. It means that you have a choice to use a free version, but if you do, you’re on your own. If you choose the supported version, you’re going to be charged a subscription, for which you’ll get extra features and support.
I argued that it is important to have a
solid business model behind a product/project. It should be as important to the
users as to the company developing the product. If a product doesn’t have a solid business model, two things might happen: the project/product is going to be abandoned at some point due to lack of funding, or the owner of the product will change the licensing model to monetize the IP and established user base. We’ve seen both scenarios happen already.
I also argued that the Fedora model is usually successful only as part
of a commoditization strategy. For example, JBoss’s strategy was to go after the lower end of WebLogic/WebSphere accounts. The same applies to MySQL. This strategy seems to work only up to a certain point. I argued that this model is not proven in an emerging product category,
where large investments in market education and innovation are required to achieve massive and sustainable adoption. In such cases, the Jive/Confluence
model seems to be a better fit. Anyway, this topic is worth a separate discussion,
so I’ll leave it at that for now.
Unfortunately I had to leave on Friday (to be at my daughter’s end-of-year party at school), so I missed Shay Banon’s presentation. Based on what I heard it went very well.
Anyway, it was a really fun event and I look forward to next year’s.