Last JavaOne I attended ‘HTTP 2.0 – What do I need to know?’, an excellent talk by Hadi Hariri. HTTP/2 solves many of the problems I face daily. Since I’m a curious guy I wanted to know what was happening at packet level in this awesomeness. I soon ran into problems decrypting the packets (in practice all HTTP/2 traffic is encrypted). Hadi mentioned that Wireshark has support to solve this problem. He made it sound very easy, but the fact that I ended up writing this article suggests it was a bit harder.
The method I’ll explain to decode HTTP/2 can also be applied to HTTP/1.1.
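For context, the usual way to let Wireshark decrypt browser traffic is to have the browser log its TLS session keys via the SSLKEYLOGFILE environment variable and point Wireshark at that log. A minimal sketch — the file path is just an assumption, any writable location works:

```shell
# Have the browser (Chrome/Firefox) append its TLS session keys to a file.
export SSLKEYLOGFILE="$HOME/tls-session-keys.log"

# Start the browser from this same shell so it inherits the variable,
# e.g. on a Mac:  open -a "Google Chrome"
# Then point Wireshark at the same file under:
# Preferences -> Protocols -> SSL -> (Pre)-Master-Secret log filename
```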
| # | Date | Event | Location | Distance | Time |
|---|------|-------|----------|----------|------|
| 1. | 12-apr-15 | ETU EK Powerman Holland Duathlon 2015 – Sprint | Horst | 5-20-2,5 km | 1:07:37 [u] |
| 2. | 29-aug-15 | Run Bike Run Borne | Borne | 6,5-30-4,8 km | 1:47:22 [u] |

- 31-dec-15 – Sylvestercross, Soest

Bold indicates a PR (personal record).
One of my complaints about Spark was that it wasn’t possible to set a dynamic maximum rate. This is a problem for many jobs, since the maximum throughput a job can sustain isn’t a fixed value. Another issue is local testing: you have to set the rate to extremely low values and experiment a lot to make a Spark job usable on a local machine.
But all these problems are a thing of the past with the introduction of backpressure (I believe it’s spelled ‘back pressure’, but I’ll stick to the Spark notation).
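For reference, enabling it is a single switch (the property name exists as of Spark 1.5):

```
# spark-defaults.conf, or pass it as a --conf flag to spark-submit
spark.streaming.backpressure.enabled   true
```

When set, the old static caps (spark.streaming.receiver.maxRate and spark.streaming.kafka.maxRatePerPartition) still act as an upper bound on whatever rate the backpressure algorithm picks.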
On my current project we run Spark on top of Yarn. Since Hadoop causes dependency problems and feels a bit ancient, I was looking for an alternative. At JPoint we have a few days a year to try things out, and this was one of them. I paired with Eelco to get things up and running.
This article will show you how to run Spark on top of Mesos on your Mac (or Linux, and probably a combination of the two).
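To give an idea of what’s coming, the local setup boils down to roughly the following — a sketch only; the Homebrew paths and Mesos flags vary per version:

```shell
# Install Mesos and start a single-node master + agent (Homebrew paths assumed)
brew install mesos
mesos-master --ip=127.0.0.1 --work_dir=/tmp/mesos-master &
mesos-slave  --master=127.0.0.1:5050 --work_dir=/tmp/mesos-agent &

# Tell Spark where the Mesos native library lives, then connect to the master
export MESOS_NATIVE_JAVA_LIBRARY=/usr/local/lib/libmesos.dylib
spark-shell --master mesos://127.0.0.1:5050
```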
After using Spark for a few months we thought we had a pretty good grip on how to use it. The Spark documentation appeared pretty decent and we had acceptable performance on most jobs. On one job, however, we kept hitting limits that were much lower than those of that job’s predecessor (built on Storm). When we did some research we found out we didn’t understand Spark as well as we thought.
My colleague Jethro pointed me to an article by Gerard Maas, and I found another great article by Michael Noll. Combined with the Spark docs and some Googlin’, I wrote this article to help you tune your Spark job. We improved our throughput by 600% (and then the Elasticsearch cluster became the new bottleneck).
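As a teaser: much of the tuning in those articles revolves around one relationship — each receiver produces one partition per block interval, so partitions per batch = batchInterval / blockInterval. A hedged sketch, with illustrative values rather than our production settings:

```
# spark-defaults.conf — parallelism of a receiver-based streaming job:
# with a 2s batch and a 200 ms block interval, each receiver yields
# 2000 / 200 = 10 partitions per batch to spread over the cores.
spark.streaming.blockInterval   200
```

Alternatively you can call repartition() on the stream to decouple the number of partitions from the block interval, at the cost of a shuffle.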
After fixing an earlier classpath problem, a new and more difficult problem showed up. Again the evil-doer is an ancient version of Jackson.
Unfortunately all classes are loaded from one big container with Spark/Hadoop/Yarn, which causes a lot of problems. The spark.files.userClassPathFirst option is still ‘experimental’, which, in our case, meant it simply didn’t work. But again we found a solution. Our system engineers also wanted a reproducible setup, so in the end it’s just a small recipe.
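One common shape for this kind of fix (not necessarily our exact recipe) is to build an uber-jar that relocates the conflicting Jackson packages with the maven-shade-plugin, so the ancient Jackson bundled by Spark/Hadoop/Yarn can never be picked up by our code:

```xml
<!-- pom.xml fragment: relocate our Jackson classes under a new package -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <relocations>
      <relocation>
        <pattern>com.fasterxml.jackson</pattern>
        <shadedPattern>shaded.com.fasterxml.jackson</shadedPattern>
      </relocation>
    </relocations>
  </configuration>
</plugin>
```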
Bootstrapping a Spring context from the classpath is quite simple, even from Mappers/Reducers in Hadoop. It gets a bit harder when you want to use environment-specific property files. In our Mappers and Reducers we rely heavily on Spring beans that perform the magic. Besides that, we load different properties per environment (e.g. for MongoDB). After several iterations we came up with a nice solution. You can also leave out the Spring part and use this solution to broadcast Properties to all your Mappers/Reducers.
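Independent of the Spring part, a plain-Hadoop way to get an environment-specific properties file in front of every Mapper/Reducer is the distributed cache via the generic -files option; the jar, class, and file names below are placeholders:

```shell
# Ship the per-environment properties file to every task's working
# directory; requires the driver to use ToolRunner/GenericOptionsParser.
hadoop jar my-job.jar com.example.MyDriver \
  -files config/production.properties \
  /input /output
```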