[{"content":"As soon as Google\u0026rsquo;s blog post \u0026ldquo;Introducing Jib — build Java Docker images better\u0026rdquo; was online, all my channels went crazy about Jib. That was a bit surprising as Jib was started over one year ago, but with this blog post the project rocketed, gaining more than 1000 new GitHub stars within one day. Crazy.\nI got asked a lot yesterday how Jib compares to fabric8io/docker-maven-plugin (d-m-p) or fabric8io/fabric8-maven-plugin which includes d-m-p.\nLet me try to shed some light on the differences and the pros and cons of both approaches.\nHow Jib works # Let\u0026rsquo;s first have a quick look at what Jib offers today and what makes it unique.\nLooking at the Jib Maven plugin, it currently supports three goals:\njib:dockerBuild for assembling an image and loading it to a Docker daemon. jib:build for assembling the image and pushing it to a Docker registry. jib:exportDockerContext for creating a Dockerfile along with all the files required for performing a build against a Docker daemon. The unique asset of Jib is that it does all of this without consulting a Docker daemon, by creating all image layers and metadata locally, conforming to either the Docker or OCI format. And it does all of that directly with plain Java code.\nAlso, Jib assumes a very opinionated project setup with a so-called flat classpath app, with the main class, dependencies and resources all in different artefacts. Compare this to fat jars popularised by Spring Boot, where the application code, resources and dependencies are all stored in the same jar file. There are some drawbacks to a flat classpath app, but one benefit is that you can organise the various files into several layers in your image, putting the ones which change less frequently (like dependencies) at the bottom of the layer stack. 
That\u0026rsquo;s what Jib does: It puts all dependency jars into one layer, all resource files (like property files to be loaded from the classpath) into another, and the application classes into a third layer. All of these layers get aggressively cached locally. That way, recreating images can be much faster than with fat jars, which can be stored only in a single layer.\nBut let\u0026rsquo;s have a look at how Jib works in detail. The steps performed by jib:dockerBuild or jib:build are:\nFetch the base image\u0026rsquo;s layers and cache them locally. By default the base image is gcr.io/distroless/java, but this can be configured.\nCreate three application image layers for\nDependencies Resources Classes Since these layers are cached, if any of them doesn\u0026rsquo;t change (which is likely for dependencies), then that layer is not recreated.\nCreate the application image locally. The two previous steps are performed asynchronously; this step continues when both of them have finished. The ENTRYPOINT of this image is fixed and set to:\njava \u0026lt;jvm-flags\u0026gt; -cp dep_dir/*:resource_dir/:class_dir/ \u0026lt;main-class\u0026gt; where \u0026lt;jvm-flags\u0026gt; can optionally be configured and \u0026lt;main-class\u0026gt; is the mandatory main class to specify. The information leading to the classpath comes from the underlying Maven or Gradle project information. Also, Java arguments can be configured to become the CMD of the image, and exposed ports (EXPOSE) can be added, too.\nFinally, the local image layers along with their metadata are tarred up and either loaded into a Docker daemon or pushed to a Docker registry.\nHow does Jib compare to d-m-p ? 
# Jib is impressive for the use case it supports and brings a fresh spin to the way Java apps can be packaged into Docker images.\nThe most significant benefits of Jib are:\nFast for incremental builds when you have a flat classpath application, resulting in three different layers for dependencies, resources and your application classes. No Docker daemon required, which reduces the build requirements and increases security because the image creation happens without root permissions. Produces reproducible images by wiping out file owners and modification dates. However, I\u0026rsquo;m not sure whether e.g. generated timestamps in resource files like properties are wiped out, too. It supports both Maven and Gradle. However, there are also some limitations. Some of them might be tackled in the future, but others might not be changed due to the unique way Jib works:\nIt can only be used for simple flat classpath Java applications. There is currently no support for fat jars (e.g. Spring Boot fat jars) or other packaging formats like WAR files. Simplistic startup of the application with a plain java call instead of using a full-featured startup script like run-java-sh. No additional files like configuration files outside the classpath or agents like the jmx_exporter agent can be added (but there is a PR pending for agents). Fixed classpath order, which e.g. doesn\u0026rsquo;t allow overriding resources from dependencies, as dependencies always come first on the classpath. Jib uses a custom XML configuration syntax instead of the plain Dockerfile syntax (which I have often heard as a major critique of d-m-p, which also supports a custom XML configuration, but as an alternative to Dockerfiles). 
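To give an idea of what that custom syntax looks like, here is a rough sketch of a Jib plugin configuration (based on my reading of the Jib docs of that time; coordinates, version and all values are placeholders to adapt):

```xml
<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  <version>0.9.7</version>
  <configuration>
    <!-- Target image, used by jib:build when pushing to a registry -->
    <to>
      <image>gcr.io/my-project/my-app</image>
    </to>
    <container>
      <!-- Mandatory main class which ends up in the fixed ENTRYPOINT -->
      <mainClass>com.example.Main</mainClass>
      <jvmFlags>
        <jvmFlag>-Xms256m</jvmFlag>
      </jvmFlags>
      <ports>
        <port>8080</port>
      </ports>
    </container>
  </configuration>
</plugin>
```

The base image defaults to gcr.io/distroless/java but could be overridden via a corresponding from section.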
d-m-p provides some additional features which are not supported by Jib, like:\nRunning containers for integration testing (that\u0026rsquo;s very likely the most prominent difference) Dockerfile support docker-compose.yml support Enhanced authentication support OpenShift and Amazon ECR support Support for watching code changes and then automatically triggering a rebuild of images and a restart of containers Support for arbitrary assembly and base images, including Spring Boot fat jars and JavaEE containers. Healthchecks And if you jump to fabric8-maven-plugin, which includes d-m-p for its image building business, you get even more high-level features, like a zero-config mode which analyses your pom.xml and selects opinionated defaults like base images and handcrafted startup scripts, depending on the type of tech stack you are using (like Spring Boot, Thorntail, Vert.x, Tomcat, \u0026hellip;)\nNext steps \u0026hellip; # This overview is only a quick glance at Jib. In one of the next posts, I plan to show some real-life examples and also measure the performance gain of using Jib.\nAlso, there are plans for d-m-p to add support for a Jib backend and other daemonless build systems like img, buildah or kaniko. The mid- to long-term plan is to enhance the build abstraction within d-m-p and offer, based on the given Java project, different ways to build images.\nBTW, if you are interested in what\u0026rsquo;s going on in the Docker image building business these days, you might find this KubeCon presentation as useful as I did. Daemonless FTW :)\nPsst, d-m-p also likes GitHub ★ ;-)\n","date":"11 July 2018","externalUrl":null,"permalink":"/jib-vs-dmp/","section":"Posts","summary":"As soon as Google’s blog post “Introducing Jib — build Java Docker images better” was online, all my channels went crazy about Jib. That was a bit surprising as Jib was started over one year ago but with this blog post this project rockets with more than 1000 new GitHub stars within one day. 
Crazy.\nI got a lot asked yesterday how Jib compares to fabric8io/docker-maven-plugin (d-m-p) or fabric8io/fabric8-maven-plugin which includes d-m-p.\nLet me try to shed some light on the differences and pro and cons of both approaches.\n","title":"First look at Jib","type":"posts"},{"content":"","date":"11 July 2018","externalUrl":null,"permalink":"/posts/","section":"Posts","summary":"","title":"Posts","type":"posts"},{"content":"","date":"11 July 2018","externalUrl":null,"permalink":"/","section":"Roland Huß","summary":"","title":"Roland Huß","type":"page"},{"content":"Yesterday a blog post Using Docker from Maven and Maven from Docker by Kostis Kapelonis was published which gives some insights on the possible relationships between Docker and Maven. The article really makes some essential points and gives an overview of the two remaining Docker Maven plugins as well as of how Codefresh recommends doing Docker multi-stage builds as the alternative. As I\u0026rsquo;m the maintainer of the fabric8io/docker-maven-plugin, I\u0026rsquo;d like to comment on this matter.\nI already commented on the original blog post (thanks for approving the comment), but I\u0026rsquo;m happy to repeat my arguments here again.\nThe article ditches the two docker-maven-plugins before promoting Docker multi-stage builds, for several reasons.\nTo be honest, I think both approaches have their benefits, but let me comment first on two arguments given concerning the fabric8io/docker-maven-plugin.\nThere have been cases in the past where Docker has broken compatibility even between its client and server, so a Maven plugin that uses the same API will instantly break as well.\nThis compatibility concern might be valid, especially if you use a typed approach to access the Docker REST API, as various Docker client libraries do. As explained in the post, fabric8 d-m-p accesses the Docker daemon directly, without any client library and without any marshalling. 
This is because it accesses only the parts required for the plugin\u0026rsquo;s feature set, which also means that JSON responses are handled in a very defensive and untyped way.\nAnd yes, there was one issue in the early days in 2014 with a backwards-incompatible API change from Docker. This issue could be fixed quite quickly because d-m-p didn\u0026rsquo;t have to wait for a client library to be updated. However, since then there has never been an issue with the core functionality that d-m-p uses.\nI think the relevance of Docker API incompatibilities is exaggerated in this blog post.\nHopefully, the fabric8 plugin also supports plain Dockerfiles. Even there, however, it has some strong opinions. It assumes that the Dockerfile of a project is in src/main/docker and also it uses the assembly syntax for actually deciding what artefact is available during the Docker build step.\nThat is simply not true. You can just put a Dockerfile on the same level as the pom.xml, refer to your artefacts in the target/ directory (with Maven property substitution), and then declare the plugin without any configuration. See my other blog post for a short description of how it works.\nBTW, the reason for its own XML syntax is a historical one. The plugin started in 2014 when Dockerfiles were entirely unknown to Java developers. But Maven plugin XML configuration was (and still is) a well-known business. As time passed and Docker became more and more popular with Java developers, the Dockerfile syntax is now well known, too. So I completely agree that you should use Dockerfiles if possible, and that\u0026rsquo;s why the plugin has supported Dockerfiles as first-class citizens since recent versions. The next step is to add similar support for docker-compose.yml files for running containers. 
There is already docker compose support included, albeit a bit hidden.\nI agree that multi-stage Docker builds are fantastic for generating reproducible builds, as the build tool (Maven) is used in a well-defined version. However, using a locally installed Maven during development has advantages, too. E.g. the local Maven repository avoids downloading artefacts over and over again, resulting in much faster build times and turnaround times. Of course, you can add caching to the mix for multi-stage builds, but then the setup gets more and more involved. Compare this to using d-m-p, for which you don\u0026rsquo;t even need a local Docker CLI installed, and you can \u0026lsquo;just start\u0026rsquo;. For CI builds this probably doesn\u0026rsquo;t matter much though (and that\u0026rsquo;s what the blog post is all about, I guess).\nOther advantages of using fabric8\u0026rsquo;s d-m-p :\nRunning all your containers (app + deps) locally without needing support from a CI system. As a side note, the custom compose-like syntax of codefresh\u0026rsquo;s CI is not so different from the custom configuration syntax of fabric8\u0026rsquo;s d-m-p. It\u0026rsquo;s custom. Extended authentication support against various registries (Amazon ECR, Google GCR, \u0026hellip;) Automatic rebuilds during development with docker:watch, which improves turnaround times tremendously Downloading support files (e.g. startup scripts) automatically by just declaring a dependency in the plugin (blog post pending) \u0026hellip; and even more stuff. 
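The hidden docker compose support is wired in via an "external" image configuration; a rough sketch (element names from my memory of the d-m-p reference manual, paths are placeholders):

```xml
<image>
  <external>
    <!-- Delegate the image/run configuration to a compose file -->
    <type>compose</type>
    <basedir>src/main/docker</basedir>
    <composeFile>docker-compose.yml</composeFile>
  </external>
</image>
```

With such an image entry in place, docker:start and docker:stop work against the services declared in the compose file.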
In the end, your mileage may vary, but having an article conclusion without really trying to compare the pros and cons of both approaches is far too biased for me.\nUpdate: Kostis replied to my comment, and an interesting discussion is going on over there\n","date":"5 July 2018","externalUrl":null,"permalink":"/dmp-not-so-bad/","section":"Posts","summary":"Yesterday a blog post Using Docker from Maven and Maven from Docker by Kostis Kapelonis was published which gives some insights on the possible relationships between Docker and Maven. The article makes some essential points really, and gives an overview for the two remaining Docker Maven plugins as well as how Codefresh recommends doing Docker multi-stage builds as the alternative. As I’m the maintainer of the fabric8io/docker-maven-plugin, I’d like to comment on this matter.\nI already commented on the original blog post (thanks for approving the comment), but I’m happy to repeat my arguments here again.\n","title":"docker-maven-plugin might be still useful","type":"posts"},{"content":"I\u0026rsquo;m a big fan of the Camel Java DSL for defining Camel routes with a RouteBuilder. This is super easy and slim. However, in this blog post I show you a nerdy trick for doing this even more elegantly.\nIf you are a Camel user, you know that defining a route for a given Camel context ctx is just a matter of implementing the configure() method of the abstract RouteBuilder class:\nctx.addRoutes(new RouteBuilder() { @Override public void configure() throws Exception { from(\u0026#34;file:data/inbox?noop=true\u0026#34;) .to(\u0026#34;file:data/outbox\u0026#34;); } }); It\u0026rsquo;s really simple, and you can use the whole Camel machinery from within your configure() method.\nHowever, this kind of configuration can be made even simpler. 
Let\u0026rsquo;s assume that you have a no-op default implementation of RouteBuilder called Routes:\npublic class Routes extends RouteBuilder { @Override public void configure() throws Exception { } } Then, the configuration can be rewritten simply as\nctx.addRoutes(new Routes() {{ from(\u0026#34;file:data/inbox?noop=true\u0026#34;) .to(\u0026#34;file:data/outbox\u0026#34;); }}); This trick just uses Java\u0026rsquo;s instance initializers, a not-so-well-known language feature. The inspiration for providing the DSL context like this comes from JMockit, which defines its mock expectations the same way. I think instance initializers are really an elegant albeit hipster way to implement DSLs.\nAlthough you can easily define the Routes class on your own, you might vote for this Camel issue or pull request if you want to have this in upstream Camel, too.\n","date":"3 July 2018","externalUrl":null,"permalink":"/camel-routes-simplified/","section":"Posts","summary":"I’m a big fan of the Camel Java DSL for defining Camel routes with a RouteBuilder. This is super easy and slim. However, in this blog post I show you a nerdy trick how this can be done even more elegant.\n","title":"Elegant Camel route configuration","type":"posts"},{"content":"As you might know, one of my Open Source babies is the one and only fabric8io/docker-maven-plugin (d-m-p). If you already use this Maven plugin, you know that it is super powerful and flexible to configure. This flexibility comes at a price: the configuration can become quite complicated. 
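To illustrate that complexity, a typical d-m-p build section with an explicit assembly looks roughly like this (a sketch; image name, base image and paths are placeholders):

```xml
<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>docker-maven-plugin</artifactId>
  <configuration>
    <images>
      <image>
        <name>myorg/${project.artifactId}</name>
        <build>
          <from>openjdk:jre</from>
          <!-- The assembly decides which artefacts end up in the image -->
          <assembly>
            <descriptorRef>artifact</descriptorRef>
          </assembly>
          <cmd>java -jar maven/${project.build.finalName}.jar</cmd>
          <ports>
            <port>8080</port>
          </ports>
        </build>
      </image>
    </images>
  </configuration>
</plugin>
```

Nothing here is hard, but it is a lot of ceremony for a single image.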
Now, if you only want to build Docker images with Maven, I have good news: Since 0.25.1 d-m-p supports a zero XML configuration mode, the so-called Simple Dockerfile Build mode.\nThe idea of this mode started with a Twitter discussion:\nAnd actually, it\u0026rsquo;s true: If all that you want is to build a single Docker image from a Dockerfile, then the initial configuration is indeed too complex.\nd-m-p has already supported plain Dockerfiles for quite some time, and even for multiple images. However, you still have to reference those Dockerfiles in the XML configuration of the plugin.\nSince 0.25.1 you can now use the so-called Simple Dockerfile Build mode (kudos go to Rohan Kumar for the initial implementation). All you have to do is add d-m-p to your pom.xml and add a Dockerfile. The smallest possible Maven project for creating a Docker image consists of this pom.xml\n\u0026lt;project\u0026gt; \u0026lt;modelVersion\u0026gt;4.0.0\u0026lt;/modelVersion\u0026gt; \u0026lt;groupId\u0026gt;fabric8\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;smallest\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;1-SNAPSHOT\u0026lt;/version\u0026gt; \u0026lt;build\u0026gt; \u0026lt;plugins\u0026gt; \u0026lt;plugin\u0026gt; \u0026lt;groupId\u0026gt;io.fabric8\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;docker-maven-plugin\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;0.26.0\u0026lt;/version\u0026gt; \u0026lt;/plugin\u0026gt; \u0026lt;/plugins\u0026gt; \u0026lt;/build\u0026gt; \u0026lt;/project\u0026gt; and a Dockerfile alongside this pom:\nFROM busybox CMD [\u0026#34;echo\u0026#34;, \u0026#34;Hello\u0026#34;, \u0026#34;world!\u0026#34;] This image does not do much. With mvn docker:build you can build it, with docker run fabric8/smallest you can test it. 
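If you would rather have the image created as part of the normal Maven lifecycle, the goal can additionally be bound to a phase; a sketch (the phase choice is up to you):

```xml
<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>docker-maven-plugin</artifactId>
  <version>0.26.0</version>
  <executions>
    <execution>
      <id>build-image</id>
      <!-- Build the Docker image whenever the jar is packaged -->
      <phase>package</phase>
      <goals>
        <goal>build</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

With that in place, a plain mvn package also produces the image.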
Or use mvn docker:run so that you don\u0026rsquo;t even have to provide the image name.\nA more realistic Dockerfile could look like\nFROM openjdk:jre ARG jar=target/app-1.0.0-SNAPSHOT.jar ADD $jar /app.jar CMD java -cp /app.jar HelloWorld where we define jar as a build arg in the Dockerfile but also as a property in the pom.xml:\n\u0026lt;properties\u0026gt; \u0026lt;jar\u0026gt;${project.build.directory}/${project.build.finalName}.jar\u0026lt;/jar\u0026gt; \u0026lt;/properties\u0026gt; You can use Maven properties in the Dockerfile, which get automatically replaced by docker:build when creating the image. But you can also use that Dockerfile without Maven with a plain docker build. You can\u0026rsquo;t use Maven properties containing a . directly as build args, as dots are not allowed in Docker build arg names; therefore we use the extra jar property. However, a Maven-less usage probably does not make much sense when you don\u0026rsquo;t also build the artefacts. The full example can be found in the dmp GitHub repo.\nIf you can forgo Docker build args, you can use predefined Maven properties directly:\nFROM openjdk:jre ADD ${project.build.directory}/${project.build.finalName}.jar /app.jar CMD java -cp /app.jar HelloWorld The image name is auto-generated, but you can also set this name yourself via the property docker.name (and you can use placeholders within this name).\nYou can even start the container with mvn docker:run, although without any additional configuration (e.g. port mappings). Also, you can docker:push the image.\nYou can still configure certain aspects like authentication or bind d-m-p goals to default lifecycle phases. Using this mode is very similar to the functionality offered by spotify/dockerfile-maven.\nIf you need more horsepower, you can gradually expand on this simple setup. 
Features which are waiting to be discovered are:\nSetup of multiple images for running integration tests Custom networks and volumes for your tests Using docker-compose files for running the containers directly from the plugin Exporting the Docker image as an archive Watching docker containers and restarting them when the code changes \u0026hellip; All of this can be configured via properties, too, and with the latest versions you can mix it with XML configuration. If you are interested in finding out more, then please have a look at the reference manual.\nI\u0026rsquo;m curious what you think about this new mode. Please use the comments below if you want to leave some feedback. In fact, there are concrete plans for d-m-p to include the generator functionality from fabric8-maven-plugin, which autodetects the tech stack used for creating opinionated Docker images.\n","date":"12 April 2018","externalUrl":null,"permalink":"/simple-dockerfile-mode-dmp/","section":"Posts","summary":"As you might know, one of my Open Source babies is the one and only fabric8io/docker-maven-plugin (d-m-p). If you already use this Maven plugin, you know, that it is super powerful and flexible to configure. This flexibility comes at a price so that the configuration can become quite complicated. Now, if you only want to build Docker images with Maven, I have good news: Since 0.25.1 d-m-p supports a zero XML configuration mode, the so-called Simple Dockerfile Build mode.\n","title":"When a Dockerfile is just good enough","type":"posts"},{"content":"Octobox is for sure one of my favourite tools in my GitHub-centred developer workflow. It is incredible for GitHub notification management and allows me to ignore all the hundreds of GitHub notification emails I get daily.\nOctobox is a Ruby-on-Rails application and can be used as SaaS at octobox.io or installed and used separately. 
Running Octobox in your own account is especially appealing for privacy reasons and for advanced features which are not enabled in the hosted version (like periodic background fetching or more information per notification).\nThis post shows how Octobox can be ported to the free \u0026ldquo;starter\u0026rdquo; tier of OpenShift Online.\nApplication setup # An Octobox installation consists of three parts:\nOctobox itself, a Rails application Redis as an ephemeral cache, used as a session store Postgresql as the backend database Naturally, this would lead to three services. However, as I\u0026rsquo;m not striving for an HA setup, and for the sake of simplicity, I decided to combine Octobox and Redis in a single pod. Since a combined lifecycle for Octobox and Redis is a reasonable choice, this reduces the number of OpenShift resource objects considerably.\nAs the persistent store for Postgres, we use a plain PersistentVolume, which is good enough for our low-footprint database requirements.\nDocker Images # To get an application onto OpenShift, you first need to package all parts of your application into Docker images which eventually become containers at runtime.\nThere are some restrictions for Docker images to be usable on OpenShift. The most important one is that all containers run under a random UID, which is part of the Unix group root. This restriction has the consequence that all directories and files to which the application process wants to write should belong to group root and must be group writable.\nOctobox is already distributed as a Docker image and has recently been updated to be OpenShift compatible. The Postgres image is directly picked up from an OpenShift-provided ImageStream, so there is no issue at all. 
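As a sketch of what such an OpenShift adaptation looks like in a Dockerfile, the writable directories are handed over to the root group (base image, path and command are placeholders, not the actual Octobox Dockerfile):

```dockerfile
FROM ruby:2.5
WORKDIR /app
COPY . /app
# OpenShift runs the container with a random UID that belongs to
# group root, so writable files must be group-owned and group-writable
RUN chgrp -R 0 /app && chmod -R g+rwX /app
USER 1001
CMD ["bundle", "exec", "rails", "server"]
```

The chgrp/chmod pattern is the commonly documented way to make an image work under an arbitrary UID.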
The Redis image is also already prepared for OpenShift. However, when using Redis from this image in ephemeral mode (so not using persistence) there is a subtle issue which prevents the Pod from starting: as the Dockerfile declares a VOLUME, we have to declare a volume in the Pod definition anyway, even though in our setup we don\u0026rsquo;t need it. Otherwise, you end up with a cryptic error message in the OpenShift console (like can't create volume ...). An emptyDir volume is perfectly good enough for this.\nTemplate # To install the application, an OpenShift Template has been created. It contains the following objects:\nDeploymentConfigs for \u0026ldquo;Octobox with Redis\u0026rdquo; and \u0026ldquo;Postgres\u0026rdquo; Services for Octobox and Postgres PersistentVolumeClaim for Postgres A route for accessing the app is created later on the OpenShift console. Please refer to these installation instructions for more details on how to use this template.\nOpenShift Online Starter # OpenShift Online Starter is the free tier of OpenShift Online, which is very useful for learning OpenShift concepts and getting one\u0026rsquo;s feet wet. However, it has some quite restrictive resource limitations:\n1 GB Memory 1 GB Storage This budget is good enough for small applications like Octobox, but if you need more horsepower then you can easily upgrade to OpenShift Online Pro.\nThe challenge is now to distribute the three parts (Octobox, Postgres, Redis) over this 1 GB. As Octobox, being a Rails application, is quite a memory hog, we want to dedicate as much memory as possible to it. For Postgres, we do not need much memory at all, so 50 to 100 MB is good enough. The same goes for Redis as an initial guess. We can always tune this later if we find out that our initial guess was wrong.\nOk, let\u0026rsquo;s start with:\n875 MB Octobox 50 MB Redis 75 MB Postgres When trying out these limits, I quickly found out that this doesn\u0026rsquo;t work. 
The reason is that OpenShift Online has a minimum size for a container, which is 100 MB. Also, you can\u0026rsquo;t choose requests and limits freely; there is a fixed ratio of 50% to calculate the request from a given limit (the request specified is always ignored). This fact not only means that you get a Burstable QoS class, but also that you have to specify 200 MB as the limit to get at least the 100 MB request required to meet the minimum.\nSo we end up with:\n600 MB Octobox 200 MB Redis 200 MB Postgres Obviously, this is not optimal, but that\u0026rsquo;s how it works for the OpenShift Online Starter tier (and probably also the Pro tier). For other OpenShift clusters it, of course, depends on the setup of that specific cluster. We could put Redis and Octobox in the same container and start two processes in the container. This change would free up another 150 MB for Octobox but would be ugly design. So we won\u0026rsquo;t do it ;-)\ntl;dr # Porting an application to OpenShift is not difficult. Especially the free OpenShift Online Starter tier is very appealing for such experiments. The challenges are mostly around creating proper Docker images and getting resource limits right. As a result, you get a decently running and managed installation.\nFor the full installation instructions, please refer to the OpenShift specific Octobox installation instructions.\n","date":"25 March 2018","externalUrl":null,"permalink":"/octobox-oso/","section":"Posts","summary":"Octobox is for sure one of my favourite tools in my GitHub centred developer workflow. It is incredible for GitHub notification management which allows me to ignore all the hundreds of GitHub notification emails I get daily.\nOctobox is a Ruby-on-Rails application and can be used as SaaS at octobox.io or installed and used separately. 
Running Octobox in an own account is especially appealing for privacy reasons and for advanced features which are not enabled in the hosted version (like periodic background fetching or more information per notification).\nThis post shows how Octobox can be ported to the free “starter” tier of OpenShift Online.\n","title":"Bringing Octobox to OpenShift Online","type":"posts"},{"content":"Since I got my first Amazon Echo at the end of last year, I love it. And although, as a typical German, I\u0026rsquo;m still a bit concerned about data privacy, in the end convenience wins (as always :). There are many things which work flawlessly, and to be honest, the most used feature for me is a simple timer. But when it comes to aggregating actions, Alexa is still quite limited. Ok, you can define your routines, but only for an insufficient set of fixed actions. What I would really love is to start the radio when I get up in the morning, but this is not possible at the moment.\nSo I remembered last year\u0026rsquo;s Amazon Dash button hacks and thought it would be cool to combine both, the Dash button and Alexa.\nAnd here it is, my weekend hack \u0026hellip;..\nIn a nutshell, the setup looks like this:\nConfigure your router to not forward packets from your Dash button. Sniff ARP requests for the Dash button\u0026rsquo;s MAC. If found, call out to a text-to-speech service to convert configured Alexa commands to audio. Play the received audio output via speakers attached to a Raspberry Pi. That\u0026rsquo;s it. You can use this sample code for doing the dirty work, but maybe you are interested in some more details.\nAmazon Dash Button # The Amazon Dash button is part of Amazon\u0026rsquo;s consumer goods ordering service. This button contains a Wifi transmitter and is quite inexpensive. Each button is specific to a brand, and you can connect it to a specific good. When you press the button, this good is ordered (e.g. 
24 cans of beer ;-)\nBut this intended use case is not the only way you can use this button. In fact, it can be used just as a plain Wifi button for any purpose.\nFirst of all, you have to buy such a button, e.g. for 5 Euro here in Germany, but you can spend these five bucks on your first order. You just configure it as described by Amazon and maybe order something to spend your credits.\nAfter this, you have to block the button in your home Wifi router from calling out to the internet. For obvious reasons, this is very important ;-) When the button is blocked, it will flash red when pressed (in contrast to flashing green when an order is placed).\nWhen you press the button, it first asks via DHCP for an IP address. The MAC address of the button is relevant, so it\u0026rsquo;s time to pick that up, e.g.\nBy looking into your DHCP server\u0026rsquo;s log By trying arp -a By checking your Wifi router\u0026rsquo;s admin UI Via Wireshark sniffing This MAC address will be watched for later by sniffing ARP packet traffic. If golang is your preferred programming language, then you can directly use rhuss/dash, which is based on top of gopacket, to watch for certain ARP packets and trigger an action when they are received.\nAmazon Polly API access # In our use case, when we detect that a button is pressed, we want to send out some fixed, text-based audio. For converting text coming from a configuration, a text-to-speech service is used.\nThere are several such services available. For our purpose, we are using Amazon Polly, which offers a free tier for the first 12 months (including 5 million characters per month, fair enough for a handful of buttons ;-) and then 4 $ per one million characters.\nA short cost calculation beyond the free tier: 100 characters for Alexa commands per button press (which is already quite a mouthful) cost ~ 0.04 cents. Or the other way round: for five bucks you can press the button 12,500 times, i.e. over 30 times a day for a year. Well, for me that\u0026rsquo;s worth the fun ;-)\nYou need an AWS account to access the speech API. The access and secret tokens can be those of your root AWS account, but you should probably create a dedicated IAM user.\nOf course, you can also use a different text-to-speech tool. Maybe even good old Unix speak will do it? I have not tried it yet, but will check that for sure very soonish. For now, the Polly voices are recognised quite well by my Echo, so I won\u0026rsquo;t change it right now.\nRaspberry Pi Audio # The final jigsaw piece is the hardware on which to run the watcher. For my use case, a Raspberry Pi 2 with some inexpensive speakers was totally good enough.\ndash2alexa command # The dash2alexa command takes a configuration file (default: ~/.dash2alexa.yml)\n# Sample configuration file for dash2alexa # Adapt and copy it to ~/.dash2alexa.yml # Access and secret for accessing the services access: \u0026#34;..........\u0026#34; secret: \u0026#34;..........\u0026#34; # Network interface to listen on for ARP requests when a Dash button is pressed interface: \u0026#34;wlan0\u0026#34; # Language (\u0026#34;de\u0026#34; or \u0026#34;en\u0026#34;) language: \u0026#34;de\u0026#34; # Gender which can be either \u0026#34;male\u0026#34; or \u0026#34;female\u0026#34; gender: \u0026#34;male\u0026#34; # Keyword to use for alexa keyword: \u0026#34;Alexa\u0026#34; # Player to use when playing an mp3 sound file player: \u0026#34;mpg123\u0026#34; # List of Dash buttons with associated Alexa commands buttons: # Symbolic name - name: \u0026#34;heineken\u0026#34; # MAC address of the Dash button mac: \u0026#34;ac:63:be:00:11:22\u0026#34; # How many seconds to wait between Alexa commands wait: 4 # Messages to speak messages: - \u0026#34;Lautstärke 4\u0026#34; - \u0026#34;Spiele Bayern 3\u0026#34; There\u0026rsquo;s not much documentation yet, but some will 
follow soon. Feel free to adapt the code, and I\u0026rsquo;m happy to integrate any pull requests. Also, as I\u0026rsquo;m still a bloody golang greenhorn, I\u0026rsquo;d be curious whether things could be done better.\ntl;dr # This little hack uses Amazon Echo via its \u0026lsquo;Audio API\u0026rsquo; to perform a specific action on a button press. It\u0026rsquo;s ideally suited for situations when it\u0026rsquo;s calm around you, like putting the button right beside the bed to get things started even when your Echo is out of sight.\nAnd finally, it\u0026rsquo;s just pure fun ;-) Enjoy!\nP.S. Let me know in the comments whether you tried it out, too, and how it works for you.\n","date":"12 March 2018","externalUrl":null,"permalink":"/dash-2-alexa/","section":"Posts","summary":"Since I got my first Amazon Echo end of last year, I love it. And although, as a typical German, I’m still a bit concerned about data privacy, in the end, convenience wins (as always :). There are many things which work flawlessly, and to be honest, the most used feature for me is a simple timer. But when it comes to aggregating actions, Alexa is still quite limited. Ok, you can define your routines, but only for an insufficient set of fixed actions. What I really would love to have is to start the radio when I get up in the morning, but this is not possible at the moment.\nSo I remembered my last year’s Amazon Dash button hacks and thought it would be cool to combine both, the Dash button and Alexa.\nAnd here it is, my weekend hack …..\n","title":"Dash2Alexa - Amazon Alexa Audio API Access","type":"posts"},{"content":"Our Ansible Playbooks for installing Kubernetes on a Raspberry Pi Cluster have been constantly updated and are now using the awesome kubeadm. The update to Kubernetes 1.6 was a bit tricky, though.\nRecently I had the luck to meet Mr. @kubernetesonarm Lucas Käldström at the DevOps Gathering where he demoed his multi-arch cluster. That was really impressive.
Lucas really squeezes out the maximum of what is possible these days with Raspberry Pis and other SOC devices on the Kubernetes platform. Please follow his Workshop on GitHub for a multi-platform setup with ingress controller, persistent volumes, custom API servers and more.\nNeedless to say that after returning home one of the first tasks was to update our Ansible playbooks for upgrading to Kubernetes 1.6 on my RasPi cluster. The goal of these playbooks is a bit different from Lucas\u0026rsquo; workshop setup: instead of living on the edge, the goal here is to provide an easy, automated and robust way to install a standard Kubernetes installation on a Raspberry Pi 3 cluster. kubeadm is a really great help and makes many things so much easier. However, there are still some additional steps to do.\nAfter following the workshop instructions it soon turned out that it was probably not the best time for the update. Kubernetes 1.6 had just been released, and it turned out that last minute pre-release changes broke kubeadm 1.6.0. Luckily these were fixed quickly with 1.6.1. However, the so-called self-hosted mode of kubeadm broke, too (and is currently still broken in 1.6.1 but should be fixed soon). So the best bet for the moment is to use a standard install (with external processes for the api-server et al.).\nAlso, this time I wanted to use Weave instead of Flannel as the overlay network. It turned out that this didn\u0026rsquo;t work on my cluster because every one of my nodes got the same virtual MAC address assigned by Weave. That\u0026rsquo;s because this address is calculated based on /etc/machine-id. And guess what: all my nodes had the same machine id 9989a26f06984d6dbadc01770f018e3b. This is what the base Hypriot 1.4.0 system decides to install (in fact it is derived by systemd-machine-id-setup from /var/lib/dbus/machine-id).
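A quick way to spot this kind of problem is to collect the machine-ids of all nodes and flag duplicates. A minimal sketch, simulated here with local files (on a real cluster you would fetch /etc/machine-id from each node, e.g. via ssh; the third id below is a made-up dummy):

```shell
# Simulate collected ids; on the cluster: ssh pi@<node> cat /etc/machine-id
mkdir -p /tmp/machine-ids
echo 9989a26f06984d6dbadc01770f018e3b > /tmp/machine-ids/n0
echo 9989a26f06984d6dbadc01770f018e3b > /tmp/machine-ids/n1
echo deadbeefdeadbeefdeadbeefdeadbeef > /tmp/machine-ids/n2   # dummy unique id

# Any count > 1 means two nodes share an identity
sort /tmp/machine-ids/* | uniq -c | awk '$1 > 1 { print "duplicate machine-id:", $2 }'
```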
And every Hypriot installation out there has this very same machine-id ;-) For me it wasn\u0026rsquo;t surprising that this happened (well, developing bugs is our daily business ;-), but I was quite puzzled that this hadn\u0026rsquo;t been a bigger issue yet, because I suspect that especially in cluster setups (be it Docker Swarm or Kubernetes) at some point the nodes need their unique id. Of course, most of the time the IP and hostname are enough. But for a more rigorous UUID, /etc/machine-id is normally a good fit.\nAfter knowing this and re-creating the UUID on my own (with dbus-uuidgen \u0026gt; /etc/machine-id) everything works smoothly again, so that I have a base Kubernetes 1.6 cluster with DNS and a proper overlay network again. Uff, that was quite a mouthful of work :)\nYou find the installation instructions and the updated playbooks at https://github.com/Project31/ansible-kubernetes-openshift-pi3. If your router is configured properly, it takes not much more than half an hour to set up the full cluster. I did it several times since last week, always starting afresh by flashing the SD cards. I can confirm that it\u0026rsquo;s reproducible and idempotent now ;-)\nThe next steps are to add persistent volumes with Rook, Træfik as ingress controller and an own internal registry.\nFeel free to give it a try and open many issues ;-)\n","date":"5 April 2017","externalUrl":null,"permalink":"/k8s-on-pi-update/","section":"Posts","summary":"Our Ansible Playbooks for installing Kubernetes on a Raspberry Pi Cluster have been constantly updated and are now using the awesome kubeadm. The update to Kubernetes 1.6 was a bit tricky, though.\n","title":"RasPi 3 Kubernetes Cluster - An Update","type":"posts"},{"content":"From time to time people come to me and say: \u0026ldquo;I really would love Jolokia if only it would be RESTful\u0026rdquo;. This post tells you why.\nI really like REST, yes I do.
If I were to create a new application on a green field, its remote access API would very likely obey the REST paradigm.1\nHowever, Jolokia is a different beast. It is a bridge to the world of JMX, providing an open-minded alternative to the rusty and Java-specific JSR-160 standard. Its protocol is based on JSON over HTTP, so in principle it could be REST. But it is not, mainly for the following two reasons:\nJMX resource naming is a mess # Jolokia doesn\u0026rsquo;t have any influence on the naming of the resources it accesses. These resources are JMX MBeans and their identifiers are ObjectNames. ObjectNames have a certain structure, but besides this they can be named arbitrarily. So if you want to provide an HTTP API for accessing these resources, this free-form addressing poses some challenges, especially for read operations with GET. For example, it is impossible to transmit a slash (/) or backslash (\\\\) as part of a URL\u0026rsquo;s path info. The reason is security related, and each application server handles this differently: Tomcat for example completely rejects such requests, whereas Wildfly / Undertow refuses to URL decode %2F (for /) and %5C (for \\\\). Jetty doesn\u0026rsquo;t care much. So in order to address a JMX MBean which contains these characters as part of its name, the typical encoding as part of a URL path doesn\u0026rsquo;t work. One could use query parameters for this kind of addressing, and in fact Jolokia supports this, too. But it\u0026rsquo;s still ugly. Also, implementers of MBeans tend to put semantic information into the MBean name, like the port of a connector or the name of a database schema. It can\u0026rsquo;t be excluded that the MBean name alone carries sensitive information. However, GET URLs are not secured via the transport protocol and tend to end up in log files.
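With a POST request, the MBean name travels in the JSON body instead of the URL path, so slashes need no encoding at all. A single read request might look like this sketch (the MBean and attribute names are merely illustrative):

```json
{
  "type": "read",
  "mbean": "jboss.jmx:alias=jmx/rmi/RMIAdaptor",
  "attribute": "State"
}
```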
So, it\u0026rsquo;s much safer to send these requests via POST, even when only performing read operations on JMX attributes.\nBulk requests # A special feature of Jolokia is Bulk Requests. These allow very efficient monitoring of multiple values with a single HTTP request. It works by sending a list of individual, JSON-encoded Jolokia requests in a single HTTP POST request. That list can contain any valid Jolokia operation: reading and writing attributes, executing operations, searching for or listing MBeans. The heterogeneous nature of this kind of request makes it hard to map them to one single HTTP verb as REST suggests. Also, the sheer length of the request parameters forbids sending a bulk request via GET, as servlet containers and other application servers impose certain restrictions on the length of a URL, which however vary from server to server.\nJolokia implements both # For every Jolokia operation, we play both: GET and POST 2. As an integration tool, which helps to bridge different worlds without really having control over these worlds, the focus is on maximal flexibility so that it can adapt to any environment where it is used. REST is only of secondary importance here, but if you think the issues described above can be solved in a more RESTful way, I\u0026rsquo;m more than open.\nI have to confess that I\u0026rsquo;m really not a REST expert, so if you don\u0026rsquo;t agree with my arguments, I\u0026rsquo;d kindly ask you to leave a comment or tweet me for corrections.\u0026#160;\u0026#x21a9;\u0026#xfe0e;\nor: \u0026ldquo;Country and Western\u0026rdquo;\u0026#160;\u0026#x21a9;\u0026#xfe0e;\n","date":"3 November 2016","externalUrl":null,"permalink":"/jolokia-is-not-rest/","section":"Posts","summary":"From time to time people come to me and say: “I really would love Jolokia if only it would be RESTful”.
This post tells you why.\n","title":"Why Jolokia is not RESTful","type":"posts"},{"content":"Now that some weeks have passed, we all had time to absorb the revised Java EE 8 proposal presented at Java One. As you know, some JSRs remained, some things were added and some stuff was dropped. Java EE Management API 2.0, supposed to be a modern successor of JSR 77, is one of the three JSRs to be dropped.\nWhat does this mean for the future of Java EE management and monitoring ?\nFirst of all it\u0026rsquo;s fair to state that JSR 373 never really took off. Since February 2015 there were no more than 86 mails on the expert group mailing list, half of them written in March 2015 during incubation. By January 2016 at the latest, it was clear that JSR 373 was not in Oracle\u0026rsquo;s focus anymore. To be honest, even we members of the expert group were not able to push this JSR further.\nHow did it come that far ? Let\u0026rsquo;s have a look back into history.\nIt all starts with JSR 3 back in 1999. This first JMX specification is the foundation of all Java resource management. As can be seen by its age, Java folks cared about Management and Monitoring from the very beginning. And even better, since J2SE 5 JMX is an integral part of Java SE, so it\u0026rsquo;s available on every JVM out there.\nOver the years, additional JSRs were added on top of this base:\nJSR 160 defines a remote protocol for JMX, which is based on RMI. This might have been a good decision in 2003, but turned out to be awful to use, especially for non-Java based monitoring systems. JSR 262 was started to overcome this by defining a \u0026ldquo;WebServices Connector for Java Management Extensions Agents\u0026rdquo;, which was mostly built around SOAP services. However, although an initial implementation existed, it was withdrawn before the final release.
It\u0026rsquo;s not completely clear why it was stopped in 2008 and later withdrawn, as the public review ballot had been approved, although it was a tight result. The biggest objections were about dependencies on \u0026ldquo;proprietary\u0026rdquo; WS-* specifications. \u0026ldquo;J2EE Management\u0026rdquo; JSR 77 was finished in 2002 and defines a hierarchy of how management and monitoring resources exposed by a Java EE server are structured. It provides a uniform interface for accessing the various Java EE resources, like web applications or connector pools. Besides this, it also defines how statistics are exposed by defining various metrics formats. However, implementing the StatisticsProvider model is not mandatory, and from my personal experience it was implemented only rarely by some vendors, and if so, not for every resource. JSR 88 complements JSR 77 and defines a common format for deploying Java EE artefacts. JSR 255 was started to be the next version of JMX and was supposed to be included in Java 7. Although it was already nearly finished and integrated, it didn\u0026rsquo;t make it into Java 7 (nor Java 8). The spec was then dormant until it was finally withdrawn in 2016. With the death of JMX 2.0 in 2009, the evolution of JMX as a standard for Java SE has stalled. But what about Java EE Management ? At least JSR 77 is still part of Java EE 7, and for Java EE 8 the successor was supposed to be JSR 373. JSR 373 tackles the problem of remote access, whereas JSR 77 still relies on RMI as the standard implementation protocol as defined in JSR 160.\nThe two major goals of JSR 373 were:\nProvide an update of the hierarchical resource and statistics structure as defined by JSR 77 Provide REST access to these resources independent of JMX In the often cited Java EE 8 Community Survey more than 60% were in favour of defining a new API for managing applications, which should be based on REST (83% pro-votes). This finally led to JSR 373.
However, as it seems in retrospect, there was not really a deep interest in this topic, which probably led to the final decision to drop JSR 373 from Java EE 8.\nSo, what is the state of monitoring and management of Java and in particular Java EE applications nowadays, and what can be expected in the future ? Let\u0026rsquo;s have a look into the crystal ball.\nJMX is here to stay. It is part of Java SE and I don\u0026rsquo;t know of any plans for removing it from future Java editions. Ok, it feels a bit rusty, but it is still rock solid and gives you deep insight into the state of your JVM. With tools like Jolokia you can overcome most of the restrictions JSR 160 imposes. (Disclaimer: since I\u0026rsquo;m the author of Jolokia, all my personal opinions given here should be evaluated in this light :) It is not clear what the Management API of Java EE 8 and beyond looks like. It does not look like JSR 77 will survive. Will there be a standard for Java EE management at all ? Probably not, and so there is the danger that vendors will push their proprietary management APIs, which already happens to some extent. Luckily, most of these proprietary APIs are also mirrored in JMX these days. On the other hand, it could also be a good thing that there is no other Management API which is not based on JMX. That\u0026rsquo;s because you will always need JMX to monitor basic aspects like heap memory usage or thread count, which are covered by Java SE. Adding a different, REST-like protocol for Java EE monitoring requires operators to access a Java EE server with two different protocols (JMX and REST), duplicating configuration efforts on the monitoring side. This can only be avoided if the Java EE resources are mirrored in JMX, too. To sum it up, I think it\u0026rsquo;s a shame that Management and Monitoring, which played a prominent role over the whole evolution of Java EE, will probably be dropped completely in Java EE 8.
As a replacement, the new Health Check API has been announced, but to be honest, that can\u0026rsquo;t be a full replacement for classical management and monitoring, where the evaluation of a system\u0026rsquo;s health is done on a dedicated monitoring platform (e.g. Nagios or Prometheus). These platforms take the plain metrics data exposed by the application and do the data evaluation on their own.\nThe good thing is still that you have JMX to the rescue, and I\u0026rsquo;m pretty sure that this technology will survive this storm, too. Especially if vendors are willing to support it for their application server metrics, too.\nEven without a Java EE standard.\n","date":"10 October 2016","externalUrl":null,"permalink":"/java-management-is-dead/","section":"Posts","summary":"Now that some weeks have passed, we all had time to absorb the revised Java EE 8 proposal presented at Java One. As you know, some JSRs remained, some things were added and some stuff was dropped. Java EE Management API 2.0, supposed to be a modern successor of JSR 77, is one of the three JSRs to be dropped.\nWhat does this mean for the future of Java EE management and monitoring ?\n","title":"Java EE Management is dead","type":"posts"},{"content":"Let\u0026rsquo;s build a Raspberry Pi Cluster running Docker and Kubernetes. There have already been a handful of good recipes, however this howto is a bit different and provides some unique features.\nMy main motivation for going the Raspberry Pi road for a Kubernetes cluster was that I wanted something fancy to show at my Kubernetes talk, shamelessly stealing the idea from others (kudos to @KurtStam, @saturnism, @ArjenWassink and @kubernetesonarm for the inspiration ;-)\nI.e. the following Pi-K8s projects already existed:\nkubernetes-installer-rpi : A set of shell scripts and precompiled ARM binaries for running Kubernetes by @KurtStam on top of the Hypriot Docker Image for Raspberry Pi.
Kubernetes on ARM : An opinionated approach by @kubernetesonarm with its own installer for setting up Kubernetes not only for the Pi but also for other ARM based platforms. K8s on Rpi : Another shell based installer for installing a Kubernetes cluster by @ArjenWassink and @saturnism When there are already multiple recipes out there, why then try yet another approach ?\nMy somewhat selfish goals were:\nUsing (and learning on the way) Ansible for not only a one-shot installation but also maintenance and upgrades. Teaching myself how to set up a Kubernetes cluster. This setup includes flannel as an overlay network, the SkyDNS extension and soon also a registry. Using Ansible helps me to incrementally add on top of things already installed. Wanting to use WiFi for connecting the cluster. See below for the reason. Getting OpenShift Origin running and being able to switch between Kubernetes and OpenShift via Ansible. Creating a demonstration platform for my favourite development and integration platform fabric8. As it turns out, the whole experience was very enlightening to me. It\u0026rsquo;s one thing to start Kubernetes on a single node within a VM (because multiple VM-based nodes soon kill your machine resource-wise); it\u0026rsquo;s quite another to have a small bare metal cluster which blinks red and green and where you can plug wires at will. Not to mention the geek factor :)\nShopping List # Here\u0026rsquo;s my shopping list for a Raspberry Pi 3 cluster, along with (non-affiliate) links to (German) shops, but I\u0026rsquo;m sure you can find the parts elsewhere, too.\n| Amount | Part | Price |\n| 4 | Raspberry Pi 3 | 4 * 38 EUR |\n| 4 | Micro SD Card 32 GB | 4 * 11 EUR |\n| 1 | WLAN Router | 22 EUR |\n| 4 | USB wires | 9 EUR |\n| 1 | Power Supply | 30 EUR |\n| 1 | Case | 10 EUR |\n| 3 | Intermediate Case Plate | 3 * 7 EUR |\nAll in all, a 4 node Pi cluster for 288 EUR (as of April 2016).
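As a quick sanity check, the listed prices do add up to the quoted total:

```shell
# 4 Pis, 4 SD cards, router, USB wires, power supply, case, 3 case plates
echo $(( 4*38 + 4*11 + 22 + 9 + 30 + 10 + 3*7 )) EUR
```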
Not so bad.\nSome remarks:\nUsing WiFi for the connection has the big advantage that the Raspberry Pi 3\u0026rsquo;s integrated BCM43438 WiFi chip doesn\u0026rsquo;t go over USB and saves valuable bandwidth used for IO in general. That way you are able to get ~ 25 MB/s for disk IO and network traffic, respectively. And also fewer cables, of course. You can always plug in the power wire for demos, too ;-) A class 10 Micro SD is recommended, but it doesn\u0026rsquo;t have to be the fastest in the world as the USB bus only allows around 35 MB/s anyway. Initial Pi Setup # Most of the installation is automated by using Ansible. However, the initial setup is a bit more involved. It certainly can be improved (e.g. automatic filesystem expanding of the initial Raspbian setup). If you have ideas how to improve this, please open issues and PRs on Project31/ansible-kubernetes-openshift-pi3. Several base distributions have been tried out. It turned out that the most stable setup is based on a stock Raspbian. Unfortunately it doesn\u0026rsquo;t provide a headless WLAN setup as is possible with the latest Hypriot images, but for the moment it is much more stable (I had strange kernel panics and 200% CPU load issues with the Hypriot image for no obvious reasons). Since this is a one time effort, let\u0026rsquo;s use Raspbian. If you want to try out the Hypriot image, there\u0026rsquo;s an experimental branch of the Ansible playbooks which can be used with Hypriot. I will retry Hypriot OS for sure sometime later.\nDownload the latest Raspbian image and store it as raspbian.zip :\ncurl -L https://downloads.raspberrypi.org/raspbian_lite_latest \\ -o raspbian.zip Install Hypriot\u0026rsquo;s flash installer script. Follow the directions on the installation page.\nInsert your Micro SD card into your desktop computer (possibly via an adapter) and run\nflash raspbian.zip You will be asked to which device to write.
Check this carefully, otherwise you could destroy your desktop OS by selecting the wrong device. Typically it\u0026rsquo;s something like /dev/disk2 on OS X, but this depends on the number of hard drives you have.\nInsert the Micro SD card into your Raspberry Pi and connect it to a monitor and keyboard. Boot up. Log in with pi / raspberry. Then:\nraspi-config --expand-rootfs vi /etc/wpa_supplicant/wpa_supplicant.conf and then add your WLAN credentials\nnetwork={ ssid=\u0026quot;MySSID\u0026quot; psk=\u0026quot;s3cr3t\u0026quot; } Reboot\nRepeat steps 2 to 5 for each Micro SD card.\nNetwork Setup # It is now time to configure your WLAN router. This of course depends on which router you use. The following instructions are based on a TP-Link TL-WR802N, which is quite inexpensive but still absolutely ok for our purposes since it sits very close to the cluster and my notebook anyway.\nFirst of all you need to set up the SSID and password. Use the same credentials with which you have configured your images.\nMy setup is that I span a private network 192.168.23.0/24 for the Pi cluster, which my MacBook also joins via its integrated WiFi.\nThe addresses I have chosen are :\n| 192.168.23.1 | WLAN Router |\n| 192.168.23.100 | MacBook\u0026rsquo;s WLAN |\n| 192.168.23.200 \u0026hellip; 192.168.23.203 | Raspberry Pis |\nThe MacBook is set up for NAT and forwarding from this private network to the internet. This script helps in setting up the forwarding and NAT rules on OS X.\nIn order to configure your WLAN router you need to connect to it according to its setup instructions. The router is set up in Access Point mode with DHCP enabled. As soon as the MACs of the Pis are known (which you can see as soon as they connect for the first time via WiFi), I configured them to always get the same DHCP lease. For the TL-WR802N this can be done in the configuration section DHCP -\u0026gt; Address Reservation.
In the DHCP -\u0026gt; DHCP-Settings the default gateway is set to 192.168.23.100, which is my notebook\u0026rsquo;s WLAN IP.\nStart up all nodes; you should be able to ping every node in your cluster. I added n0 \u0026hellip; n3 to my notebook\u0026rsquo;s /etc/hosts pointing to 192.168.23.200 \u0026hellip; 192.168.23.203 for convenience.\nYou should be able to ssh into every Pi with user pi and password raspberry. Also, if you set up the forwarding on your desktop properly, you should be able to ping the outside world from within the Pis.\nAnsible Playbooks # After this initial setup is done, the next step is to initialize the base system with Ansible. You will need Ansible 2 installed on your desktop (e.g. brew install ansible when running on OS X)\nAnsible Configuration # Check out the Ansible playbooks:\ngit clone https://github.com/Project31/ansible-kubernetes-openshift-pi3.git k8s-pi cd k8s-pi Copy over hosts.example and adapt it to your needs\ncp hosts.example hosts vi hosts There are three Ansible groups which are referred to in the playbooks:\n| pis | All cluster nodes | n0, n1, n2, n3 |\n| master | Master node | n0 |\n| nodes | All nodes which are not master | n1, n2, n3 |\nCopy over the configuration and adapt it.\ncp config.yml.example config.yml vi config.yml You should at least put in your WLAN credentials, but you are also free to adapt the other values.\nBasic Node Setup # If you have already created a cluster with these playbooks and want to start afresh, please be sure to clean the old host keys out of your ~/.ssh/known_hosts. You should be able to ssh into each of the nodes without warnings. Also, you must be able to reach the internet from the nodes.\nIn the next step the basic setup (without Kubernetes) is performed. This is done by\nansible-playbook -k -i hosts setup.yml When you are prompted for the password, use raspberry.
You will probably also need to confirm the SSH authenticity for each host with yes.\nThe following steps will be applied by this command (which may take a bit):\nDocker will be installed from the Hypriot repositories Your public SSH key is copied over to pi\u0026rsquo;s authorized_keys and the user\u0026rsquo;s password will be taken from config.yml Some extra tools are installed for your convenience and some benchmarking: hdparm iperf mtr The hostname is set to the name of the node configured. Also, /etc/hosts is set up to contain all nodes with their short names. A swapfile is enabled (just in case) With this basic setup you already have a working Docker environment.\nNow it\u0026rsquo;s time to reboot the whole cluster since some required boot params have been added. Plug the wire.\nKubernetes Setup # The final step for a working Kubernetes cluster is to run\nansible-playbook -i hosts kubernetes.yml This will install one master at n0 and three additional nodes n1, n2, n3.\nThe following features are enabled:\netcd, flanneld and kubelet are installed as systemd services on the master kubelet and flanneld are installed as systemd services on the nodes Docker is configured to use the Flannel overlay network kubectl is installed (and an alias k) If there are some issues when restarting services on the master, don\u0026rsquo;t worry. However, you should best restart the master node n0 when this happens, because when setting up the other nodes they would fail if not all services are running on the master.\nAfter an initial installation it may take a bit until all infrastructure docker images have been loaded. Eventually you should be able to use kubectl get nodes from e.g. n0. When this works but you see only one node, please reboot the cluster since some services may not have been started on the nodes (plug the cables when n0 is ready).\nInstall SkyDNS # For service discovery via DNS you should finally install the SkyDNS addon, but only when the cluster is running, i.e.
the master must be up and listening. For this final step call:\nansible-playbook -i hosts skydns.yml Wrap Up # This has become a rather long recipe. I re-did everything from scratch within 60 minutes, so this could be considered a lower bound (because I have already done it several times :). The initial setup might be a bit flaky, but should be easy to fix. I\u0026rsquo;d love to hear your feedback on this, and maybe we get it more stable afterwards. Remember, that\u0026rsquo;s my first Ansible playbook :)\nNow go out, buy and set up your Kubernetes cluster and have fun :-)\n","date":"27 April 2016","externalUrl":null,"permalink":"/kubernetes-on-raspberry-pi3/","section":"Posts","summary":"Let’s build a Raspberry Pi Cluster running Docker and Kubernetes. There have already been a handful of good recipes, however this howto is a bit different and provides some unique features.\n","title":"A Raspberry Pi 3 Kubernetes Cluster","type":"posts"},{"content":"rhuss/docker-maven-plugin is dead, long live fabric8io/docker-maven-plugin !\nIf you follow the Docker Maven Plugin Scene1, you probably noticed that there has been quite some progress in the last year. Started as a personal research experiment in early 2014, rhuss/docker-maven-plugin (d-m-p) has taken off a little bit. With more than 300 GitHub stars it\u0026rsquo;s now the second most popular docker-maven-plugin. With 38 contributors we were able to do 36 releases. It is really fantastic to see so many people contributing to a rather niche product. Many kudos go out to Jae for his many contributions and continued support in fixing and answering issues. Thanks also for always being very patient with my sometimes quite opinionated and picky code reviews :)\nHowever, it is now time to ignite the next stage and bring this personal \u0026lsquo;pet\u0026rsquo; project into a wider context.
And what is better suited here than the fabric8 community ?\nFabric8 is a next generation DevOps and integration platform for Docker based applications, with a focus on Kubernetes and OpenShift as orchestration and build infrastructure. It\u0026rsquo;s a collection of multiple interrelated projects, including Maven tooling for interacting with Kubernetes and OpenShift. d-m-p is already included as the foundation for creating Docker application images.\nI\u0026rsquo;m very happy that d-m-p has now found its place in this ecosystem where it will continue to flourish even faster.\nThe fabric8 community is very open and has established multiple communication channels on which you will now find d-m-p, too:\n#fabric8 on irc.freenode.net is an IRC channel with a lot of helpful hands (including myself) A mailing list for more in-depth discussions Issues are still tracked with GitHub issues d-m-p specific blog posts will go out on the fabric8 blog in the future. So, what changed ?\nrhuss/docker-maven-plugin has been transferred to fabric8io/docker-maven-plugin The Maven group id has changed from org.jolokia to io.fabric8 for all releases 0.14.0 and later. CI and release management will be done on the fabric8 platform. And what will not change ?\nd-m-p will always be usable with plain Docker, speaking either to a remote or local Docker daemon. No Kubernetes, no OpenShift required. I\u0026rsquo;ll continue to work on d-m-p ;-) Thanks so much for all the fruitful feedback and pull requests.
Keep on rocking ;-)\nwith more than 15 docker-maven-plugins it\u0026rsquo;s probably fair to call it a \u0026ldquo;scene\u0026rdquo; ;-)\u0026#160;\u0026#x21a9;\u0026#xfe0e;\n","date":"24 February 2016","externalUrl":null,"permalink":"/dmp-moves-on/","section":"Posts","summary":"rhuss/docker-maven-plugin is dead, long live fabric8io/docker-maven-plugin !\n","title":"docker-maven-plugin moves on","type":"posts"},{"content":"Dealing with multiple Docker registries is hard, mostly because the meta information about where an image is located is part of the Docker image\u0026rsquo;s name, which is typically used as an identifier, too.\nLet\u0026rsquo;s see how the rhuss/docker-maven-plugin deals with this peculiarity.\nWhen setting up a Maven build for creating Docker images out of your Java application, the classical way to specify the registry to which the final Docker image is pushed is to bake it into the image\u0026rsquo;s name. The drawback, however, is that you couple your build to this particular registry, so that it is not possible to push your image to another registry when building the image.\nPull and Push # The docker-maven-plugin (d-m-p in short) interacts1 with Docker registries in two use cases:\nPulling base images from a registry when building images with docker:build or starting images with docker:start Pushing built images to a registry with docker:push In both cases you can define your build agnostic of any registry by omitting the registry part in your image names2 and specifying it externally as meta information. This can be done in various ways:\nAdding it to the plugin configuration as a \u0026lt;registry\u0026gt; element. This can be easily put into a Maven profile (either directly in the pom.xml or also in ~/.m2/settings.xml). Using a system property docker.registry when running Maven As a final fallback an environment variable DOCKER_REGISTRY can be used, too.
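For the first variant, such a registry element inside a Maven profile might look like this sketch (the group id and version depend on the release you use; the registry hostname here is just a placeholder):

```xml
<!-- Sketch: select the target registry via a Maven profile -->
<profile>
  <id>internal-registry</id>
  <build>
    <plugins>
      <plugin>
        <groupId>org.jolokia</groupId>
        <artifactId>docker-maven-plugin</artifactId>
        <configuration>
          <registry>myregistry.domain.com:5000</registry>
        </configuration>
      </plugin>
    </plugins>
  </build>
</profile>
```

Activating the profile then switches the registry without touching any image name.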
For example,\nmvn -Ddocker.registry=myregistry.domain.com:5000 docker:push When you combine build and push steps in a single call like in\nmvn package docker:build docker:push both a pull operation for a base image and a push operation can happen. To allow different registries in this situation, the properties docker.pull.registry and docker.push.registry are supported, too (with the corresponding configuration elements \u0026lt;pullRegistry\u0026gt; and \u0026lt;pushRegistry\u0026gt;, respectively).\nWhen pushing an image this way, the following happens behind the scenes (assuming an image named user/myimage and target registry myregistry:5000):\nThe image user/myimage is temporarily tagged as myregistry:5000/user/myimage in the Docker daemon. The image myregistry:5000/user/myimage is pushed. The tag is removed again. Authentication # That\u0026rsquo;s all fine, but how does d-m-p deal with authentication ? Again, there are several possibilities how authentication can be performed against a registry:\nUsing an \u0026lt;authConfig\u0026gt; section in the plugin configuration with \u0026lt;username\u0026gt; and \u0026lt;password\u0026gt; elements. Providing system properties docker.username and docker.password when running Maven Using a \u0026lt;server\u0026gt; configuration in ~/.m2/settings.xml with a possibly encrypted password. That\u0026rsquo;s the most maven-ish way of doing authentication. Logging into the registry with docker login. The plugin will pick up the credentials from ~/.docker/config.json There are again variants to distinguish between authentication for pulling and pushing images (e.g. docker.push.username and docker.push.password). All the details can be found in the reference manual.\nUsing the OpenShift Registry # OpenShift is an awesome PaaS platform on top of Kubernetes. It comes with its own Docker registry which can be used by d-m-p, too.
However, there are some things to watch out for.\nFirst of all, the registry needs to be exposed to the outside so that a Docker daemon outside the OpenShift cluster can talk with the registry:\noc expose service/docker-registry --hostname=docker-registry.mydomain.com The hostname provided should be resolved by your host to the OpenShift API server\u0026rsquo;s IP (this happens automatically if you use the fabric8 OpenShift Vagrant image for a one-node developer installation of OpenShift).\nNext, it is important to know, that the OpenShift registry use the regular OpenShift SSO authentication, so you have to login into OpenShift before you can push to the registry. The access token obtained from the login is then used as the password for accessing the registry:\n# Login to OpenShift. Credentials are stored in ~/.kube/config.json: oc login # Use user and access token for authentication: mvn docker:push -Ddocker.registry=docker-registry.mydomain.com \\ -Ddocker.username=$(oc whoami) \\ -Ddocker.password=$(oc whoami -t) The last step can be simplified by using -Ddocker.useOpenShiftAuth which does the user and token lookup transparently.\nmvn docker:push -Ddocker.registry=docker-registry.mydomain.com \\ -Ddocker.useOpenShiftAuth The configuration option useOpenShiftAuth again comes in multiple flavours: a default one, and dedicated for push and pull operations (docker.pull.useOpenShiftAuth and docker.push.useOpenShiftAuth).\ntl;dr # Among all the many docker maven plugins, rhuss/docker-maven-plugin provides the most flexible options for accessing Docker registries and authentication. 
The gory details can be found in the reference manual which documents registry handling and authentication in detail.\nThe interaction always happens indirectly via the Docker daemon, since a Docker client like d-m-p only talks with the Docker daemon directly.\u0026#160;\u0026#x21a9;\u0026#xfe0e;\nOf course you can include the registry part in your image names in which case this registry always has the highest priority.\u0026#160;\u0026#x21a9;\u0026#xfe0e;\n","date":"21 January 2016","externalUrl":null,"permalink":"/registry-magic-with-dmp/","section":"Posts","summary":"Dealing with multiple Docker registries is hard, mostly because the meta information where an image is located is part of a Docker image’s name, which is typically used as an identifier, too.\nLet’s see how the rhuss/docker-maven-plugin deals with this peculiarity.\n","title":"Registry Magic with docker-maven-plugin","type":"posts"},{"content":"This screencast gives a live demo of the forthcoming JMX notification support in Jolokia 2.0.\nJolokia currently supports two notification modes. In all modes, the Jolokia agent itself subscribes to a JMX notification locally and then dispatches the notifications to its clients.\nPull Mode : Here, the agent keeps the notifications received for a client in memory and sends them back on a JMX request to a Jolokia specific MBean. A client typically queries this notification MBean periodically. SSE Mode : Server Sent Events are a W3C standard for pushing events from an HTTP server to a client. With this mode the Jolokia agent directly pushes any notification it receives to the client. The advantage is of course a much lower latency compared to the pull mode, but SSE is not available for Internet Explorer, including 11. What a pity.
The Jolokia protocol has been extended with the top level action notification and these subcommands:\nregister / unregister : Register / unregister a notification client add / remove : Add / remove a listener subscription list : list all subscriptions for a client ping : Keep a subscription alive open : Used for creating a back channel. E.g. the SSE mode keeps this GET request open for pushing back an event stream. Currently only the new Jolokia JavaScript client supports JMX notifications. If you are interested in having it in other clients (e.g. Java), too, please let me know. I would be more than happy about coders jumping on the Jolokia bandwagon since there is still quite some stuff to do for 2.0.\nThe source code to this demo and the new Jolokia JavaScript client is on GitHub: https://github.com/jolokia-org/jolokia-client-javascript.\n","date":"13 January 2016","externalUrl":null,"permalink":"/jolokia-notifications/","section":"Posts","summary":"This screencast gives a live demo of the forthcoming JMX notification support in Jolokia 2.0.\n","title":"Jolokia 2.0 - JMX Notifications","type":"posts"},{"content":"I hope you all had a good start into 2016 and have charged all your batteries during the time of stillness.\nJolokia had a good start, too. During the holiday season I took the opportunity to continue to work on version 2.0, which is now taking shape. If you have followed the history of Jolokia you know that work on 2.0 started early 2013 but advanced quite slowly for multiple reasons.\nNow it\u0026rsquo;s time to go out on a limb with announcing Jolokia 2.0 for 2016. A bit of pressure sometimes really helps ;-)\nHere are the major themes for Jolokia 2.0:\nJolokia 2.0 will be backwards compatible on the protocol level. This is a design goal. There might be some changes in default values, however these should be easy to fix. Any such change will be announced prominently (like artefact renaming).
So, all your clients will be usable with 2.0 with minor changes.\nJMX Notification support is here. Yeah, this was quite some work. The extensions to the Jolokia 2.0 protocol are able to push notifications in various ways. Currently the agents support two modes:\nPull mode will collect JMX notifications on the server (agent) side, which can be fetched by a client with an HTTP request, which typically happens periodically. This introduces some latency but is the most robust way to transmit notifications to a client. SSE mode uses Server Sent Events for pushing JMX notifications immediately with very low latency. This is the preferred mode if a client supports it (Internet Explorer does not). The notification support is nearly complete on the agent side, and the Jolokia JavaScript client already supports both modes. In the future more modes like WebSockets or Web-Hooks should be easy to add. The next post will give a demo of the notification support.\nNamespaces extend Jolokia beyond JMX, which means you can access other entities than JMX MBeans with the very same protocol. This feature is still in the conceptual state but one can easily imagine accessing\nSpring Beans CDI Objects JNDI Directories Zookeeper Directories \u0026hellip;. the same way as JMX. The namespace is selected as part of the (MBean) name. More on this in this design document. Since this feature would extend the usage pattern of Jolokia quite a bit, I\u0026rsquo;m not 100% sure whether to include it in 2.0 since it feels a bit against my Unix based education (\u0026ldquo;do one thing and do it well\u0026rdquo;).\nWith the addition of even more features, modularization becomes even more important. Jolokia was and is always picky about its footprint, which is currently 430k for the WAR agent with all features included. Jolokia 2.0 introduces various internal services which can be picked and chosen by repackaging the agent. Or the agent can be extended with your own functionality, too.
A way for easily packaging and creating agents will be provided either by a Web-UI or by a CLI tool (or both).\nIn addition there are also some non-functional changes to polish Jolokia a bit:\nNon-agent additions like client libraries, integration tests and JBoss Forge support are extracted into extra GitHub repositories. All this will happen within the GitHub organization jolokia-org. The first project here is the JavaScript client which already moved to a dedicated jolokia-client-javascript repository. The website will get a face-lift. Documentation will switch from Docbook to a Markdown or AsciiDoc based format. Finally some stuff will get dropped. This happens because of limited resources (Jolokia, to be frank, still doesn\u0026rsquo;t have a big community, so that most of the work is done by a single person. \u0026lsquo;would like to change that, though) and because I think these features never took off:\nMule agent. I never got much feedback from the Mule community so I\u0026rsquo;m really not sure whether this agent is really used or needed. Jolokia 1.x will continue to support the Mule agent, however there will be no stock Jolokia 2.0 Mule agent. That said, you are always free to adapt Jolokia 2.0 to the Mule management platform. Considering the extra code included in Jolokia 1.3 for Mule support this should be fairly trivial. I\u0026rsquo;m happy to support anyone doing the port. Also, there is always the alternative to use the JVM agent for attaching Jolokia to Mule, which is the preferred way for 2.0 to monitor Mule with Jolokia. Spring Roo Support will be dropped for much the same reasons. I never received an issue on the Jolokia Spring Roo support, which is a clear sign that nobody is using it. It might pop up as an extra project. So, what\u0026rsquo;s the roadmap ? Here\u0026rsquo;s the plan:\nMilestone 2.0.0-M1 is here. You find the JVM and WAR agents in Maven central. Every month, a new milestone will be released.
Final release is aligned to Red Hat Summit / DevNation. July 1st. Isn\u0026rsquo;t this a nice new year\u0026rsquo;s resolution ? ;-)\nIn the next post I will demo JMX notifications and how you can use them in your JavaScript projects.\n","date":"6 January 2016","externalUrl":null,"permalink":"/jolokia-in-2016/","section":"Posts","summary":"I hope you all had a good start into 2016 and have charged all your batteries during the time of stillness.\nJolokia had a good start, too. During the holiday season I took the opportunity to continue to work on version 2.0 which now takes on form. If you have followed the history of Jolokia you know that work on 2.0 started early 2013 but advanced quite slowly for multiple reasons.\nNow its time to go out on a limb with announcing Jolokia 2.0 for 2016. A bit of pressure sometimes really helps ;-)\n","title":"Welcome to 2016 - the year Jolokia 2.0 will see the light of day","type":"posts"},{"content":"When I had to create multiple Docker base images which only differ slightly for some minor variations I couldn\u0026rsquo;t avoid to feel quite dirty because of all the copying \u0026amp; pasting of Dockerfile fragments. We all know how this smells, but unfortunately Docker has only an answer for inheritance but not for composition of Docker images. Luckily there is now fish-pepper, a multi-dimensional docker build generator, which steps into the breach.\nFor example consider a Java base image: Some users might require Java 7, some want Java 8. For running Microservices a JRE might be sufficient. In other use cases you need a full JDK. These four variants are all quite similar with respect to documentation, Dockerfiles and support files like startup scripts. 
Copy-and-paste might seem to work for the initial setup but there are severe drawbacks considering image evolution or introduction of even more parameters.\nWith fish-pepper you can use flexible templates which are filled with variations of the base image (like 'version' : ['java7', 'java8'], 'type': ['jdk', 'jre']) and which will create multiple, similar Dockerfile builds.\nThe main configuration of an image family is images.yml which defines the possible parameters. For the example above it is\nfish-pepper: params: - \u0026#34;version\u0026#34; - \u0026#34;type\u0026#34; The possible values for these parameters are given in a dedicated config section:\nconfig: version: openjdk7: java: \u0026#34;java:7u79\u0026#34; fullVersion: \u0026#34;OpenJDK 1.7.0_79\u0026#34; openjdk8: java: \u0026#34;java:8u45\u0026#34; fullVersion: \u0026#34;OpenJDK 1.8.0_45\u0026#34; type: jre: extension: \u0026#34;-jre\u0026#34; jdk: extension: \u0026#34;-jdk\u0026#34; Given this configuration, four builds will be generated when calling fish-pepper, one for each combination of version (\u0026ldquo;openjdk7\u0026rdquo; and \u0026ldquo;openjdk8\u0026rdquo;) and type (\u0026ldquo;jre\u0026rdquo; and \u0026ldquo;jdk\u0026rdquo;) parameter values.\nThese value can now be filled into templates which are stored in a templates/ directory. The Dockerfile in this directory can refer to this configuration through a context object fp:\nFROM {{ \u0026#34;{{= fp.config.version.java + fp.config.type.extension \u0026#34; }}}} ..... Templates use DoT.js as template engine, so that the full expressiveness of JavaScript is available. The fish-pepper context object fp holds the configuration and more.\nThe given configuration will lead to four Docker build directories:\nimages/ +---- openjdk7 | +--- jre -- Dockerfile, ... | +--- jdk -- Dockerfile, ... | +---- opendjk8 +--- jre -- Dockerfile, ... +--- jdk -- Dockerfile, ... 
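The four build directories above are simply the cartesian product of the two parameter value lists. The following loop is an illustration only (fish-pepper does this enumeration internally; it is not part of the tool):

```shell
# Enumerate the builds fish-pepper generates for the example config above:
# every combination of the "version" and "type" parameter values.
for version in openjdk7 openjdk8; do
  for type in jre jdk; do
    echo "images/${version}/${type}"
  done
done
```

Adding a third parameter value would simply double the number of generated build directories, which is exactly the maintenance problem the templating solves.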
The generated build files can also be used directly to create the images with fish-pepper build. This will reach out to a Docker daemon and create the images java-openjdk7-jre, java-openjdk7-jdk, java-openjdk8-jre and java-openjdk8-jdk.\nAlternatively these builds can be used as content for automated Docker Hub builds when checked into GitHub. The full example can be found on GitHub.\nBut wait, there is more:\nBlocks can be used to reuse Dockerfile snippets and files to include across images. Blocks can be stored locally or referenced via a remote Git repository. Examples for blocks are generic startup scripts or other value-add functionality like enabling agents such as agent bond. Flexible file mappings allow multiple alternative templates. Defaults allow shared configuration between multiple parameter values. fish-pepper can be seen in its full beauty in fabric8io/base-images where more than twenty-five base images are maintained with fish-pepper.\nWith node.js you can install fish-pepper super easily with\nnpm -g install fish-pepper In the following blog posts I will show more usage examples, especially how \u0026ldquo;blocks\u0026rdquo; can be easily reused and shared.\n","date":"7 September 2015","externalUrl":null,"permalink":"/fish-pepper-announcement/","section":"Posts","summary":"When I had to create multiple Docker base images which only differ slightly for some minor variations I couldn’t avoid to feel quite dirty because of all the copying \u0026 pasting of Dockerfile fragments. We all know how this smells, but unfortunately Docker has only an answer for inheritance but not for composition of Docker images. Luckily there is now fish-pepper, a multi-dimensional docker build generator, which steps into the breach.\n","title":"fish-pepper - Docker on Capsaicin","type":"posts"},{"content":"As you might know, Jmx4Perl is the mother of Jolokia. But what might be not so well known is that Jmx4Perl provides a set of nice CLI tools for accessing Jolokia agents.
However, installing Jmx4Perl manually is cumbersome because of its many Perl and also native dependencies.\nBut if you are a Docker user there is now a super easy way to benefit from these gems.\nEven if Perl is not your cup of tea, you might like the following tools (for which of course no Perl knowledge is required at all):\njmx4perl is a command line tool for one-shot querying of Jolokia agents. It is perfectly suited for shell scripts. j4psh is a readline based JMX shell with coloring and command line completion. You can navigate the JMX namespace like directories with cd and ls, read JMX attributes with cat and execute operations with exec. jolokia is an agent management tool which helps you in downloading Jolokia agents of various types (war, jvm, osgi, mule) and versions. It also knows how to repackage agents e.g. for enabling security for the war agent by in-place modification of the web.xml descriptor. check_jmx4perl is a full featured Nagios plugin. How can you now use these tools ? All you need is a running Docker installation. The tools mentioned above are all included within the Docker image jolokia/jmx4perl which is available from Docker Hub.\nSome examples:\n# Get some basic information of the server docker run --rm -it jolokia/jmx4perl \\ jmx4perl http://localhost:8080/jolokia # Download the current jolokia.war agent docker run --rm -it -v `pwd`:/jolokia jolokia/jmx4perl \\ jolokia # Start an interactive JMX shell # server \u0026#34;tomcat\u0026#34; is defined in ~/.j4p/jmx4perl.config docker run --rm -it -v ~/.j4p:/root/.j4p jolokia/jmx4perl \\ j4psh tomcat In these examples we mounted some volumes:\nIf you put your server definitions into ~/.j4p/jmx4perl.config you can use them by mounting this directory as a volume with -v ~/.j4p:/root/.j4p. For the management tool jolokia it is recommended to mount the local directory with -v $(pwd):/jolokia so that downloaded artefacts are stored in the current host directory.
(Note for boot2docker users: This works only when you are in a directory below you home directory) It is recommended to use aliases as abbreviations:\nalias jmx4perl=\u0026#34;docker run --rm -it -v ~/.j4p:/root/.j4p jolokia/jmx4perl jmx4perl\u0026#34; alias jolokia=\u0026#34;docker run --rm -it -v `pwd`:/jolokia jolokia/jmx4perl jolokia\u0026#34; alias j4psh=\u0026#34;docker run --rm -it -v ~/.j4p:/root/.j4p jolokia/jmx4perl j4psh\u0026#34; As an additional benefit of using Jmx4Perl that way, you can access servers which are not directly reachable by you. The Jolokia agent must be reachable by the Docker daemon only. For example, you can communicate with a SSL secured Docker daemon running in a DMZ only. From there you can easily reach any other server with a Jolokia agent installed, so there is no need to open access to all servers from your local host directly.\nFinally, here\u0026rsquo;s a short appetiser with an (older) demo showing j4psh in action.\n","date":"28 July 2015","externalUrl":null,"permalink":"/jmx4perl-docker/","section":"Posts","summary":"As you might know, Jmx4Perl is the mother of Jolokia. But what might be not so known is, that Jmx4Perl provides a set of nice CLI tools for accessing Jolokia agents. However, installing Jmx4Perl manually is cumbersome because of its many Perl and also native dependencies.\nHowever, if you are a Docker user there is now a super easy way to benefit from this gems.\n","title":"Jmx4Perl for everyone","type":"posts"},{"content":"Ok, you know Docker. And since you are a Java developer you want to know how you can use this in your daily development workflow. You probably also heard about the docker-maven-plugin which seamlessly creates Docker images, starts and stops Docker containers and more all with a concise configuration syntax.\nAnd now there is this new goal docker:watch.\nWe developers are lazy, right ? We want our code to compile fast, we want the servers to start up fast. And we want to test changes quickly. 
That\u0026rsquo;s why we love OSGi1 and JRebel. And we want this for Docker containers, too.\nGood news. docker-maven-plugin will support hot rebuild of Docker images and hot restart of containers with a new Maven goal docker:watch. It will be released with version 0.12.1. For the brave coder 0.12.1-SNAPSHOT is already out there, the documentation can be found here.\nBut before losing more words, here\u0026rsquo;s a sneak preview.\nThat\u0026rsquo;s of course not entirely true. We love OSGi \u0026hellip; or we hate it with passion. But even the haters don\u0026rsquo;t hate it for its hot deployment abilities.\u0026#160;\u0026#x21a9;\u0026#xfe0e;\n","date":"30 June 2015","externalUrl":null,"permalink":"/maven-docker-watch/","section":"Posts","summary":"Ok, you know Docker. And since you are a Java developer you want to know how you can use this in your daily development workflow. You probably also heard about the docker-maven-plugin which seamlessly creates Docker images, starts and stops Docker containers and more all with a concise configuration syntax.\nAnd now there is this new goal docker:watch.\n","title":"docker:watch","type":"posts"},{"content":"Building a Docker wormhole is easy.\nA wormhole is a special type of structure that some scientists think might exist, connecting parts of space and time that are not usually connected\n\u0026mdash; Cambridge Dictionaries Online\nIn Docker universe we have several uses cases, which require a Docker installation within a Docker container. For example the OpenShift Builds use images whose container\u0026rsquo;s are meant to create application images. They include a whole development environment including possibly a compiler and a build tool. During the build a Docker daemon is accessed for creating the final application image.\nThe question is now, how a build can access the Docker daemon ? 
In general, there are two possibilities:\nDocker in Docker is a project which allows you to run a Docker daemon within a Docker container. Technically this is quite tricky and there seems to be some issues with this approach, especially because you have to run this container in privileged mode. This is the Matryoshka doll pattern. Or you use the Wormhole pattern described in this post. The idea is to get access to the Docker daemon running a container from within the container. As you know a Docker host can be configured to be accessible by two alternative methods: Via a Unix socket or via a TCP socket.\nUsing the Unix socket of the surrounding docker daemon is easy: Simply share the path to the unix socket as a volume:\n# Map local unix socket into the container docker run -it -v /var/run/docker.sock:/var/run/docker.sock ... Then within the container you can use the Docker CLI or any tool that uses the Unix socket at usual.\nRunning over the TCP socket is a bit more tricky because you have to find out the address of your Docker daemon host. This can best be done by examining the routing table within the container:\n# Lookup and parse routing table host=$(ip route show 0.0.0.0/0 | \\ grep -Eo 'via \\S+' | \\ awk '{print $2}'); export DOCKER_HOST=tcp://${host}:2375 This works fine as long you are not using SSL. With SSL in place you need have access to the SSL client certificates. Of course this is achieved again with a volume mount. Assuming that you are using boot2docker this could look like\n# Mount certs into the container docker run -ti -v ~/.boot2docker/certs/boot2docker-vm/:/certs .... This will mount your certs at /certs within the container and can be used to set the DOCKER_HOST variable.\nif [ -f /certs/key.pem ]; then # If certs are mounted, use SSL ... export DOCKER_CERT_PATH=/certs export DOCKER_TLS_VERIFY=1 export DOCKER_HOST=tcp://${host}:2376 else # ... 
otherwise use plain http export DOCKER_TLS_VERIFY=0 export DOCKER_HOST=tcp://${host}:2375 fi There is one final gotcha: the server certificate cannot be verified because it doesn\u0026rsquo;t contain the docker host IP as seen from the container. See this issue for details. As a workaround you have to unset DOCKER_TLS_VERIFY for the moment when using the docker client.\nBoth ways are useful and are leaner and possibly more secure than the Matryoshka doll approach.\nFinally there is still the question, why on earth wormhole pattern ? Like in a wormhole (also known as an Einstein-Rosen Bridge) you can reach through the wormhole a point (the outer docker daemon) in spacetime which is normally not reachable (because a container is supposed to be its \u0026ldquo;own\u0026rdquo; world). Another fun fact: If you create a container through a wormhole, this container is not your daughter, it\u0026rsquo;s your sister. Feels a bit freaky, doesn\u0026rsquo;t it ? Alternatively you could also call it the Münchhausen pattern because you create something with exactly the same means by which you were created yourself (like in the Münchhausen trilemma).\nOr feel free to call it what you like ;-)\n","date":"17 June 2015","externalUrl":null,"permalink":"/docker-wormhole-pattern/","section":"Posts","summary":"Building a Docker wormhole is easy.\n","title":"The Docker Wormhole Pattern","type":"posts"},{"content":"The HTTP-JMX Bridge Jolokia allows easy access to JMX. It exposes all JMX information and operations via a REST-like interface and has tons of nifty features. Jmx4Perl on the other hand is a client for Jolokia, which besides Perl access modules also provides quite some nice CLI tools for accessing and installing Jolokia. This post explains how to install these tools on OS X.\nJmx4Perl provides some nice CLI commands:\njmx4perl is a simple access tool which is useful for quick queries and ideal for inclusion in shell scripts.
j4psh is a powerful interactive, readline based JMX shell with tab completion and syntax highlighting. jolokia is a tool for managing Jolokia agents (downloading, changing init properties etc.) All these tools are very helpful for exploring the JMX namespace and installing the agent. They are all fairly well documented and each of them probably deserves its own blog post.\nHowever, the installation of Perl modules and programs is a bit tedious. Although cpan helps here and also resolves transitive dependencies, it\u0026rsquo;s still a lengthy process which fails from time to time. Native Linux packages are planned, but don\u0026rsquo;t hold your breath ;-).\nOS X users with Homebrew can install Jmx4Perl quite easily, though:\n$ brew install cpanm $ cpanm --sudo install JMX::Jmx4Perl This will do all the heavy lifting for you and at the end all the fine Jmx4Perl tools are installed and available under /usr/local/bin.\nj4psh uses libreadline for the input handling. For the best user experience GNU ReadLine is recommended. Unfortunately, OS X doesn\u0026rsquo;t ship with a true libreadline but with libedit which is a stripped down version of libreadline. In order to use GNU readline, some tricks are needed which are described in this recipe. For me, the following steps worked (but are probably a bit \u0026ldquo;dirty\u0026rdquo;):\n$ brew install readline $ brew link --force readline $ sudo mv /usr/lib/libreadline.dylib /tmp/libreadline.dylib $ cpanm --sudo Term::ReadLine::Gnu $ sudo mv /tmp/libreadline.dylib /usr/lib/libreadline.dylib $ brew unlink readline These steps are really only necessary if you need advanced readline functionality (or a coloured prompt in j4psh ;-).
Jmx4Perl on the other side is a client for Jolokia, which beside Perl access modules also provides quite some nice CLI tools for accessing and installing Jolokia. This post explains how install these tools on OS X.\n","title":"Jmx4Perl on OS X","type":"posts"},{"content":"A health check is a useful technique for determining the overall operational state of a system in a consolidated form. It provides some kind of internal monitoring which collects metrics, evaluates them against some thresholds and provides a unified result. Health checks are now coming to Jolokia. This post explains the strategy to include health checks into Jolokia without blowing up the agents to much.\nHealth checks are different to classical monitoring solutions like Nagios, where external systems collect metrics and evaluate them against some threshold on their own. While monitoring with Nagios was and is always possible with Jolokia (and in fact was the original motivation for creating it), intrinsic health checks were avoided for the vanilla agent up to now because of the extra complexity they introduce into the agent. One of the major design goals of Jolokia is to keep it small and focussed.\nThe upcoming release 1.3.0 (scheduled for the end of this month) will introduce a simple plugin architecture into Jolokia which allows to hook into the agent\u0026rsquo;s lifecycle. A so called MBeanPlugin in Jolokia also allows access to the agent configuration and to the JMX system. Currently it is supported for the WAR and JVM agent, where plugins are created via a simple class path lookup. For the OSGi agent it is planned that it will pick up plugins as OSGi services.\nHaving this new infrastructure in place, extra functionality like health checks can be added easily. The GitHub repository jolokia-extra was created to host various extensions to the Jolokia agent, also to keep the original agent as lean as possible. 
Besides the new health checks there is already an extension jsr77 for simplifying the access to JSR-77 compliant JEE Servers like WebSphere.\nThe new health addon in jolokia-extra has just been started. Currently it contains not much more than a proof-of-concept with some hardcoded health checks, but it already illustrates the concept: An MBeanPlugin registers a certain SampleHealthCheckMBean during startup which exposes the health checks as JMX operations (and which can be executed as usual with Jolokia). These operations have access to JMX via the MBeanPluginContext and can query any MBean in the system.\nBut that is only the beginning. There are still a lot of design decisions to take:\nWhat should a health check specification look like ? Should it be done via JSON or should a more expressive DSL, based e.g. on Groovy, be used ? How are the health checks stored on the agent side ? Looking them up in the filesystem (from a configurable path with a sane default like ~/.jolokia_healthchecks) Baking them into the agent jar Uploading them via an MBean operation (and then storing them in the filesystem as well) What kind of meta data should be provided so that consoles like hawt.io can dynamically create their health check views ? What should the parameters and return values for the health checks look like ? If you would like to participate, the discussion about the implementation details will take place in issue #1 and the current working state is summarized in this wiki page.\n","date":"17 January 2015","externalUrl":null,"permalink":"/health-checks/","section":"Posts","summary":"A health check is a useful technique for determining the overall operational state of a system in a consolidated form. It provides some kind of internal monitoring which collects metrics, evaluates them against some thresholds and provides a unified result. Health checks are now coming to Jolokia. 
This post explains the strategy to include health checks into Jolokia without blowing up the agents too much.\n","title":"Health Checks with Jolokia","type":"posts"},{"content":"A local Maven repository serves as a cache for artifacts and dependencies, we all know this. This helps in speeding up things but can cause subtle problems when doing releases. Docker can help here a bit for avoiding caching issues.\nBefore doing a release I typically move ~/.m2/repository away to be really sure that everybody else can build the source as well and that any dependencies are also on the remote Maven repository. This is a bit tedious, because it is a manual process and you can forget to move the old directory back, which will waste a LOT of disk space over time.\nDocker can help here a bit: Since yesterday there is an official Maven image which can be used to build your project. The nice thing for doing releases with this image is that it always starts afresh with an empty local Maven repository.\nAssuming you are currently located in the top-level directory holding your pom.xml you can use this single command for running a real clean build:\ndocker run -it --rm \\ -v \u0026quot;$(pwd)\u0026quot;:/usr/src/mymaven \\ -w /usr/src/mymaven \\ maven:3.2-jdk-7 \\ mvn clean install With this call you mount your project directory into /usr/src/mymaven in the container, change to this directory in the container and call mvn clean install. At the end, your container will be removed (--rm) so there is no chance that you might forget to clean up afterwards.\nOf course it will download all the artifacts each time, so it is not a good idea to use this approach for your daily developer business (especially if you are using Maven central as remote Maven repository).\nYou can also play around with various versions of Maven by changing the image tag so at the end you can be really sure that your project will build everywhere.
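Varying the Maven (or JDK) version boils down to changing the image tag. The following dry-run sketch only prints the resulting commands for two tags; the tags are examples that existed on Docker Hub at the time and may differ today:

```shell
# Dry run: print the clean-build command for two different image tags.
# The tags below are examples; check Docker Hub for the available ones.
for tag in 3.2-jdk-7 3.2-jdk-8; do
  echo "docker run -it --rm -v \"\$(pwd)\":/usr/src/mymaven -w /usr/src/mymaven maven:${tag} mvn clean install"
done
```

Dropping the echo runs the builds for real, giving one fully isolated build per Maven/JDK combination.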
Please refer to the Docker Hub page for details.\nUpdate: As pointed out by Noah Zucker on Twitter, you can of course temporarily redirect the local repository to a new location via -Dmaven.repo.local=/tmp/clean-repo. This is admittedly much simpler, and I would prefer it if you don\u0026rsquo;t need to check with different JDKs or Maven versions. Sometimes you don\u0026rsquo;t see the forest for the trees if you come from the wrong direction (e.g. looking for a use case for a specific docker image).\n","date":"7 November 2014","externalUrl":null,"permalink":"/clean-maven-builds-with-docker/","section":"Posts","summary":"A local Maven repository serves as a cache for artifacts and dependencies, we all know this. It helps speed things up but can cause subtle problems when doing releases. Docker can help here a bit for avoiding caching issues.\n","title":"Real clean Maven builds with Docker","type":"posts"},{"content":"My docker-maven-plugin is undergoing a major refactoring. This post explains the motivation behind this and also what you can expect in the very near future. The configuration syntax becomes much cleaner and implicit behavior was removed.\nOriginally, I needed a docker-maven-plugin for a very specific use case: To test Jolokia, the HTTP-JMX bridge, in all the JEE and non-JEE servers out there. This was a very manual process: Fire up the VirtualBox image with all those servers installed, start a server, deploy Jolokia, run integration tests, stop the server, start the next server \u0026hellip;. It easily takes half a day or more to do the tests before each release. That\u0026rsquo;s not the kind of QA you are looking for, really. 
But with docker there is finally the opportunity to automate all this: Deploy a single application on multiple different servers while controlling the lifecycle of these servers from within the build.\nEarly this year when I searched the Web, I couldn\u0026rsquo;t find a good Docker build integration, so I decided to write my own docker-maven-plugin to back my use case. Today you find nearly a dozen Maven plugins for your Docker business. However, only four plugins (alexec, wouterd, spotify and rhuss) are still actively maintained. A later blog post will present a detailed comparison between those four plugins (or come to my W-JAX session), but this post is about the evolution of the rhuss plugin.\nIt turned out that the plugin works quite well, and people liked and starred it on GitHub. It also provides some unique features like creating Docker images from assembly descriptors.\nBut I was not so happy.\nThe reason is that I started from a very special, probably very uncommon use case: A single application, multiple different servers for multiple tests. A much more common scenario is to have a fixed application server brand for the application, running with multiple linked backend containers like databases. My plugin didn\u0026rsquo;t work well when running multiple containers at once. Or to put it differently: The plugin was not prepared for the orchestration of multiple docker containers.\nAlso, there was too much happening magically behind the scenes: When pushing a data image, it was implicitly built. When starting a container for an integration test, the data container was also built beforehand.\nTwo operational modes were supported: One with images holding the server and data separately in two containers (linked via volumes) and one so-called merged image, holding both the application and the server together in one image. This is perfect for creating micro services. 
The mode is determined only by a configuration flag (mergeData), but it is not really clear how many and which Docker images are created. And it was hard to document, which is always a very bad smell.\nSo I changed the configuration syntax completely.\nIt is now much more explicit, and you will know merely by looking at the configuration which and how many containers will be started during integration testing and what the container with the application will look like. I don\u0026rsquo;t want to go into much detail here; the post is already too long. Instead, here is an example of the new syntax:\n\u0026lt;plugin\u0026gt; \u0026lt;groupId\u0026gt;org.jolokia\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;docker-maven-plugin\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;0.10.1\u0026lt;/version\u0026gt; \u0026lt;configuration\u0026gt; \u0026lt;images\u0026gt; \u0026lt;image\u0026gt; \u0026lt;name\u0026gt;consol/tomcat-7.0\u0026lt;/name\u0026gt; \u0026lt;run\u0026gt; \u0026lt;volumes\u0026gt; \u0026lt;from\u0026gt;jolokia/docker-jolokia-demo\u0026lt;/from\u0026gt; \u0026lt;/volumes\u0026gt; \u0026lt;ports\u0026gt; \u0026lt;port\u0026gt;jolokia.port:8080\u0026lt;/port\u0026gt; \u0026lt;/ports\u0026gt; \u0026lt;wait\u0026gt; \u0026lt;url\u0026gt;http://localhost:${jolokia.port}/jolokia\u0026lt;/url\u0026gt; \u0026lt;time\u0026gt;10000\u0026lt;/time\u0026gt; \u0026lt;/wait\u0026gt; \u0026lt;/run\u0026gt; \u0026lt;/image\u0026gt; \u0026lt;image\u0026gt; \u0026lt;name\u0026gt;jolokia/docker-jolokia-demo\u0026lt;/name\u0026gt; \u0026lt;build\u0026gt; \u0026lt;assemblyDescriptor\u0026gt;src/main/assembly.xml\u0026lt;/assemblyDescriptor\u0026gt; \u0026lt;/build\u0026gt; \u0026lt;/image\u0026gt; \u0026lt;/images\u0026gt; \u0026lt;/configuration\u0026gt; \u0026lt;/plugin\u0026gt; This example creates and starts two containers during docker:start, linked together via the volumes directive. 
The \u0026lt;run\u0026gt; configuration section is used to describe the runtime behavior for docker:start and docker:stop, and \u0026lt;build\u0026gt; is for specifying how images are built up during docker:build.\nAlternatively, a single image could be created:\n\u0026lt;plugin\u0026gt; \u0026lt;groupId\u0026gt;org.jolokia\u0026lt;/groupId\u0026gt; \u0026lt;artifactId\u0026gt;docker-maven-plugin\u0026lt;/artifactId\u0026gt; \u0026lt;version\u0026gt;0.10.1\u0026lt;/version\u0026gt; \u0026lt;configuration\u0026gt; \u0026lt;images\u0026gt; \u0026lt;image\u0026gt; \u0026lt;name\u0026gt;jolokia/docker-jolokia-combined-demo\u0026lt;/name\u0026gt; \u0026lt;build\u0026gt; \u0026lt;baseImage\u0026gt;consol/tomcat-7.0\u0026lt;/baseImage\u0026gt; \u0026lt;assemblyDescriptor\u0026gt;src/main/assembly.xml\u0026lt;/assemblyDescriptor\u0026gt; \u0026lt;/build\u0026gt; \u0026lt;run\u0026gt; \u0026lt;ports\u0026gt; \u0026lt;port\u0026gt;jolokia.port:8080\u0026lt;/port\u0026gt; \u0026lt;/ports\u0026gt; \u0026lt;wait\u0026gt; \u0026lt;url\u0026gt;http://localhost:${jolokia.port}/jolokia\u0026lt;/url\u0026gt; \u0026lt;/wait\u0026gt; \u0026lt;/run\u0026gt; \u0026lt;/image\u0026gt; \u0026lt;/images\u0026gt; \u0026lt;/configuration\u0026gt; \u0026lt;/plugin\u0026gt; Here consol/tomcat-7.0 is used as the base for the image to build, and the data referenced in the assembly descriptor is copied into the image. So there is no need to volume-link them together.\nI won\u0026rsquo;t repeat the old, more confusing syntax for both of these use cases here; you can find it in the current online documentation.\nThat said, and since rhuss/docker-maven-plugin is still pre-1.0, I am taking the liberty of changing it without much thought about backwards compatibility (you can easily update old configurations). The new syntax is available since 0.10.1; the old syntax will still be used in the 0.9.x line. 
Everybody is encouraged to upgrade to 0.10.x, although the documentation still reflects the old syntax (this will be fixed soon). Please refer to the examples on the new-config branch for more details. An upgrade path will be available soon, too.\nThere will be a 1.0.0 release before the end of this year.\nPlease let me know your feedback on the new syntax and what features you would like to see. Everything is moving before the 1.0.0 freeze. You can open an issue for any suggestion or feature request.\n","date":"13 October 2014","externalUrl":null,"permalink":"/docker-maven-plugin-rewrite/","section":"Posts","summary":"My docker-maven-plugin is undergoing a major refactoring. This post explains the motivation behind this and also what you can expect in the very near future. The configuration syntax becomes much cleaner and implicit behavior was removed.\n","title":"Docker maven plugin rewrite","type":"posts"},{"content":"While on the way of transforming the Jolokia integration test suite from a tedious, manual, half-a-day procedure to a fully automated process I ran into and fell in love with Docker. As a byproduct, a java-jolokia docker repository emerged, which can easily be used as a Java base image for enabling a Jolokia JVM agent during startup for any Java application.\nThese images are variants of the official java Docker image. In order to use the Jolokia agent, a child image should call the script jolokia_opts (which is in the path). 
This will echo all relevant startup options that should be included as arguments to the Java startup command.\nHere is a simple example for creating a Tomcat 7 image which starts Jolokia along with Tomcat:\nFROM jolokia/java-jolokia:7 ENV TOMCAT_VERSION 7.0.55 ENV TC apache-tomcat-${TOMCAT_VERSION} EXPOSE 8080 8778 RUN wget http://archive.apache.org/dist/tomcat/tomcat-7/v${TOMCAT_VERSION}/bin/${TC}.tar.gz RUN tar xzf ${TC}.tar.gz -C /opt CMD env CATALINA_OPTS=$(jolokia_opts) /opt/${TC}/bin/catalina.sh run (Don\u0026rsquo;t forget to use $(jolokia_opts), or backticks, but not ${jolokia_opts})\nThe configuration of the Jolokia agent can be influenced with various environment variables which can be given when starting the container:\nJOLOKIA_OFF : If set, disables activation of Jolokia. By default, Jolokia is enabled. JOLOKIA_CONFIG : If set, uses this file (including path) as Jolokia JVM agent properties (as described in Jolokia\u0026rsquo;s reference manual). By default this is /opt/jolokia/jolokia.properties. If this file exists, it will automatically be taken as the configuration JOLOKIA_HOST : Host address to bind to (Default: 0.0.0.0) JOLOKIA_PORT : Port to use (Default: 8778) JOLOKIA_USER : User for authentication. By default authentication is switched off. JOLOKIA_PASSWORD : Password for authentication. By default authentication is switched off. 
So, if you start your tomcat with docker run -e JOLOKIA_OFF no agent will be started.\nCurrently this image is available from Docker Hub for the latest versions of Java 6, 7 and 8, respectively, as they are provided by the official Docker java image.\nOther base images can easily be added by using the configuration and templates from a super simple Node-based build system.\nAll appserver images from ConSol/docker-appserver (Docker Hub) are now based on this image, so Jolokia will always be by your side ;-)\n","date":"9 October 2014","externalUrl":null,"permalink":"/jolokia-docker-image/","section":"Posts","summary":"While on the way of transforming the Jolokia integration test suite from a tedious, manual, half-a-day procedure to a fully automated process I ran into and fell in love with Docker. As a byproduct, a java-jolokia docker repository emerged, which can easily be used as a Java base image for enabling a Jolokia JVM agent during startup for any Java application.\n","title":"Spicy Docker Java Images with Jolokia","type":"posts"},{"content":"NSEnter is a nice way to connect to a running Docker container. This post presents a script to simplify the usage of nsenter together with Boot2Docker.\nThere is still quite some dust around Docker and after gaining more and more experience, new patterns and anti-patterns are emerging.\nOne of those anti-patterns is the usage of an SSH daemon inside an image for debugging, backup and troubleshooting purposes. Jérôme Petazzoni\u0026rsquo;s Blog Post explains this nicely. In addition it provides proper solutions for common use cases for which SSH is currently used.\nNevertheless I still have this irresistible urge to log into a container. Even if it is only for looking around and checking out the environment (call me old-fashioned, that\u0026rsquo;s ok ;-)\nLuckily Jérôme provides a perfect solution to satisfy this thirst: nsenter. This allows you to enter container namespaces. 
On the GitHub page you find the corresponding recipe for installing and using nsenter on a Linux host.\nIf you want to use it from OS X with e.g. Boot2Docker you need to log into the VM hosting the Docker daemon and then connect to a running container.\nAs described in the NSenter README you can use a simple alias for doing this transparently:\ndocker-enter() { boot2docker ssh \u0026#39;[ -f /var/lib/boot2docker/nsenter ] || docker run --rm -v /var/lib/boot2docker/:/target jpetazzo/nsenter\u0026#39; boot2docker ssh -t sudo /var/lib/boot2docker/docker-enter \u0026#34;$@\u0026#34; } For a bit more comfort with usage information and error checking you can convert this to a small shell script like docker-enter which needs to be installed within the path (on OS X). As arguments it expects a container id or name and optionally a command (with args) to execute in the container. This script will also automatically install nsenter on the boot2docker VM if not already present (just like the shell function above):\n10:20 [~] $ docker ps -q 5bf8a161cceb 10:20 [~] $ docker-enter 5bf8a161cceb bash Unable to find image \u0026#39;jpetazzo/nsenter\u0026#39; locally Pulling repository jpetazzo/nsenter Installing nsenter to /target Installing docker-enter to /target root@5bf8a161cceb:/# If you want even more comfort with bash completion you can add the small Bash completion script docker-enter_commands (inspired by and copied from Docker\u0026rsquo;s bash completion) to your ~/.bash_completion_scripts/ directory (or wherever your completion scripts are located, e.g. /usr/local/etc/bash_completion.d if you installed bash-completion via brew). This setup completes container names and ids as arguments for docker-enter. Alternatively you can put the commands together with the shell function code above directly into your ~/.bashrc, too.\nP.S. 
After writing this post, I found out that this topic had already been covered in an earlier blog post by Lajos Papp. That\u0026rsquo;s also where the shell function definition in the nsenter README originates from. Credit where credit is due.\n","date":"1 September 2014","externalUrl":null,"permalink":"/nsenter-with-boot2docker/","section":"Posts","summary":"NSEnter is a nice way to connect to a running Docker container. This post presents a script to simplify the usage of nsenter together with Boot2Docker.\n","title":"Using NSEnter with Boot2Docker","type":"posts"},{"content":"Recently I gave a Meetup talk for the Docker Munich Meetup Group which explained how Docker can help developers to improve integration tests and to ship applications.\nThe slides are online as well as the demo project.\nDuring the demo I used Butterfly for an in-browser shell, which was quite cool, I guess ;-) (This is obviously not enabled in the online slides).\nI\u0026rsquo;m going to continue to celebrate my Docker-♡ with another two talks in autumn:\nBoosting your developer toolbox with Docker at JBoss One Day Talk, September 29 in Germering (Munich, Germany) Docker für Java Entwickler (in German) at W-JAX 14, November 3-7 in Munich And there is a slight chance (since the CFP has not yet been declined ;-) to talk at Devoxx about Docker. 
JavaZone unfortunately declined my CFP; I guess there are already too many riding the docker horse (which is a good thing).\nNevertheless I will attend both conferences with talks about Jolokia, and I\u0026rsquo;m really looking forward to it.\n\u0026lsquo;guess it will become a hot autumn (hotter than this German 2014 summer for sure) \u0026hellip;.\n","date":"25 August 2014","externalUrl":null,"permalink":"/docker-for-developers/","section":"Posts","summary":"Recently I gave a Meetup talk for the Docker Munich Meetup Group which explained how Docker can help developers to improve integration tests and to ship applications.\n","title":"Docker for (Java) Developers","type":"posts"},{"content":"Jolokia has configurable CORS support so that it plays nicely together with the Browser world when it comes to cross origin requests. However, Jolokia’s CORS support is not without gotchas. This post explains how Jolokia’s CORS support works, what the issues are, and how I plan to solve them.\ntldr; Jolokia CORS support is configured via jolokia-access.xml but has issues with authenticated requests which are tackled for the next release 1.3.0\nCORS Primer # CORS (Cross Origin Resource Sharing) is a specification for browsers to allow controlled access for JavaScript code to locations which are different from the origin of the JavaScript code itself.\nIn simple cases, it works more or less like this:\nJavaScript code (coming from the original location http://a.com) requests HTTP access via XMLHttpRequest to http://b.com Since the origin of the script and the target URL of the request differ, the browser adds some extra checking on the response of this request. The request to b.com contains a header Origin: http://a.com. The server at b.com answering the request has to decide, based on this header, whether it wants to allow this request. The server’s decision is contained in the response header Access-Control-Allow-Origin The value of this header can be either a literal URL (e.g. 
http://a.com) or a wildcard like in Access-Control-Allow-Origin: * which allows access from any origin. The browser finally decides whether it returns the response to the JavaScript based on the returned access control header. If not, it throws an exception before handing out the response data. This is it for simple requests. A simple request has the following characteristics:\nHTTP method is either GET, HEAD or POST The request contains only the following headers Accept Accept-Language or Content-Language Content-Type with the value application/x-www-form-urlencoded, multipart/form-data or text/plain If these criteria are not met for a request (e.g. because it uses a different method or additional headers), a so-called preflight request is sent to the server before the actual request is performed. The preflight is an HTTP request with method OPTIONS and contains the headers Origin (http://a.com in our case), Access-Control-Request-Method for the HTTP method requested and Access-Control-Request-Headers with a comma-separated list of additional header names. The server in turn answers with the allowed request methods and headers, whether an authenticated request is allowed and how long the client might cache this answer. An important point is that a preflight request must not be authenticated.\nAnd in fact, browsers never send an authentication header with the preflight request even when already authenticated against the target server. More on this later.\nJolokia CORS Support # By default, Jolokia allows any CORS request. For the preflight the agent answers with\nAccess-Control-Allow-Origin: http://a.com Access-Control-Allow-Headers: accept, authorization, content-type The allowed headers returned are exactly the same headers as requested. 
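The simple-request criteria listed above can be captured in a small predicate. This is a sketch for illustration only; the class and method names are my own invention, not part of Jolokia or any browser API:

```java
import java.util.List;
import java.util.Locale;
import java.util.Set;

public class CorsCheck {

    // HTTP methods that qualify for a simple request
    private static final Set<String> SIMPLE_METHODS = Set.of("GET", "HEAD", "POST");

    // Header names allowed on a simple request (compared case-insensitively)
    private static final Set<String> SIMPLE_HEADERS =
            Set.of("accept", "accept-language", "content-language", "content-type");

    // Content types allowed for a simple request
    private static final Set<String> SIMPLE_CONTENT_TYPES =
            Set.of("application/x-www-form-urlencoded", "multipart/form-data", "text/plain");

    // Returns true if a request with these properties needs no preflight
    static boolean isSimpleRequest(String method, List<String> headerNames, String contentType) {
        if (!SIMPLE_METHODS.contains(method)) {
            return false;
        }
        for (String header : headerNames) {
            if (!SIMPLE_HEADERS.contains(header.toLowerCase(Locale.ROOT))) {
                return false;
            }
        }
        return contentType == null || SIMPLE_CONTENT_TYPES.contains(contentType);
    }

    public static void main(String[] args) {
        // A plain GET is simple; an Authorization header or a JSON POST is not
        System.out.println(isSimpleRequest("GET", List.of("Accept"), null));                       // true
        System.out.println(isSimpleRequest("GET", List.of("Authorization"), null));                // false
        System.out.println(isSimpleRequest("POST", List.of("Content-Type"), "application/json")); // false
    }
}
```

Note that a GET with an Authorization: header is never simple, which is exactly why the preflight issues described below arise for authenticated requests.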
For the real request with an origin header Origin: http://a.com the answer is\nAccess-Control-Allow-Origin: http://a.com Access-Control-Allow-Credentials: true For best compatibility Jolokia always answers with the provided Origin: which is extracted from the request (except when the origin is null, in which case the wildcard * is returned).\nThis behavior can be tuned by adapting the jolokia-access.xml policy as described in the reference manual:\n\u0026lt;cors\u0026gt; \u0026lt;allow-origin\u0026gt;http://www.jolokia.org\u0026lt;/allow-origin\u0026gt; \u0026lt;allow-origin\u0026gt;*://*.jmx4perl.org\u0026lt;/allow-origin\u0026gt; \u0026lt;strict-checking/\u0026gt; \u0026lt;/cors\u0026gt; If a \u0026lt;cors\u0026gt; section is present in jolokia-access.xml then only those hosts declared in this section are allowed. The Origin URLs to match against can be specified either literally or as a pattern containing the wildcard *. The optional declaration \u0026lt;strict-checking/\u0026gt; is not really connected to CORS but helps in defending against Cross-Site-Request-Forgery (CSRF). If this option is given, then the given patterns are used for every request to compare against the Origin: or Referer: header (not only for CORS requests).\nCORS and Authentication # Since Authorization: is not a simple header in the CORS sense, preflight checking is always applied when authentication is used. However, there is often a catch-22:\nThe preflight check using the OPTIONS HTTP Method must not be authenticated as explained above, so browsers don’t send the appropriate authentication headers when doing the preflight. The Jolokia agent is typically secured completely no matter which HTTP method is used. The preflight check fails, the request fails. 
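The wildcard semantics of the allow-origin patterns shown earlier (literal URLs, or patterns like *://*.jmx4perl.org) can be illustrated in a few lines. This is my own sketch of the matching idea, not Jolokia’s actual implementation:

```java
import java.util.regex.Pattern;

public class OriginMatcher {

    // Match an Origin header value against an allow-origin pattern, where '*'
    // stands for any (possibly empty) sequence of characters.
    static boolean originMatches(String pattern, String origin) {
        if (pattern.equals("*")) {
            return true; // sole wildcard: any origin is allowed
        }
        // Quote the literal parts (\Q...\E) and turn each '*' into the regex '.*'
        String regex = "\\Q" + pattern.replace("*", "\\E.*\\Q") + "\\E";
        return Pattern.matches(regex, origin);
    }

    public static void main(String[] args) {
        System.out.println(originMatches("http://www.jolokia.org", "http://www.jolokia.org")); // true
        System.out.println(originMatches("*://*.jmx4perl.org", "https://www.jmx4perl.org"));   // true
        System.out.println(originMatches("*://*.jmx4perl.org", "http://evil.example.com"));    // false
    }
}
```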
The only clean solution is to set up Jolokia authentication in such a way that OPTIONS requests are not secured.\nLet’s have a look at the individual Agents:\nJVM Agent # Since the JVM agent does all the security stuff on its own, it is not a big deal to introduce this specific behavior. Next one.\nWAR Agent # The WAR agent uses authentication and authorization as defined in the Servlet Specification, i.e. the appropriate \u0026lt;security-constraint\u0026gt; must be added manually to the web.xml (jolokia is a CLI tool which helps in this repackaging). Unfortunately there is no way to secure the same \u0026lt;url-pattern\u0026gt; differently for different HTTP Methods (i.e. secured with an \u0026lt;auth-constraint\u0026gt; for GET and POST, but accessible for everybody for OPTIONS). I tried hard by providing multiple \u0026lt;security-constraint\u0026gt; but failed miserably (if you know how to do this, please let me know).\nThe only solution is to switch over to checking a given role on our own without relying on the declarative JEE security mechanism. Since we can check the role programmatically (HttpServletRequest.isUserInRole()) this should not be that big a deal. But it’s still some work ….\nOSGi Agent # When using an OSGi HttpService, adding this behavior should not be difficult since security is handled programmatically here as well (HttpContext.handleSecurity()).\nOther Agent variants # This is the dark matter, because I don’t know where and how Jolokia is integrated directly into a bigger context. I know that ActiveMQ, Karaf and Spring Boot use Jolokia internally. In order to support authenticated CORS access they probably need to be changed to allow unauthenticated OPTIONS access for everybody. Since this is not under my control I have no idea when, or even whether, it will ever happen. General-purpose consoles like hawt.io rely in some setups on CORS access, so it would be really cool if we could get this out there. 
Help with this is highly appreciated ;-)\nRoadmap # Since 1.2.2 is already finished and about to be published today, the stuff I can do as described above will go into a 1.3.0. Looking back at my release history, this will probably be ready around the end of August.\n","date":"18 August 2014","externalUrl":null,"permalink":"/jolokia-cors/","section":"Posts","summary":"Jolokia has configurable CORS support so that it plays nicely together with the Browser world when it comes to cross origin requests. However, Jolokia’s CORS support is not without gotchas. This post explains how Jolokia’s CORS support works, what the issues are, and how I plan to solve them.\n","title":"Jolokia and CORS","type":"posts"},{"content":"If you have ever sent or received mail messages via Java, chances are high that you have used JavaMail for this task. Most of the time JavaMail does an excellent job, and a lot of use cases are described in the JavaMail FAQ. But there are still some additional quirks you should be aware of when doing advanced mail operations like adding or removing attachments (or “Parts”) from existing mails retrieved from some IMAP or POP3 store. This post gives a showcase for how to remove an attachment, at an arbitrary level, from a mail which has been obtained from an IMAP store.\nIt points to the pitfalls which are waiting and shows some possible solutions. The principles laid out here are important for adding new attachments to a mail as well, but that’s yet another story.\nJavaMail objects # Before we start manipulating mail messages it is important to understand how these are represented in the JavaMail world.\nThe starting point is the Message. It has a content and a content type. The content can be any Java object representing the mail content, like plain text (String) or raw image data. But it can also be a Multipart object: this is the case when a message’s content consists of more than a single item. 
A Multipart object is a container which holds one or more BodyPart objects. These BodyParts, like a Message, have a content and a content type (in fact, both Message and BodyPart implement the same interface Part which carries these properties).\nBesides plain content, a BodyPart can contain another Multipart or even another Message, a so-called nested message (e.g. a message forwarded as attachment) with content type message/rfc822.\nAs you can see, the structure of a Message can be rather heterogeneous, a tree with nodes of different types. The following picture illustrates the tree structure for a sample message.\nThis object tree can be navigated in both directions:\ngetContent() on Parts like Message or BodyPart to get to the child of this node. The return type is a java.lang.Object and in case of a plain BodyPart can be quite huge. Before calling Part.getContent() be sure to check whether it contains a container by checking for its content type via Part.isMimeType(\u0026quot;multipart/*\u0026quot;) or Part.isMimeType(\u0026quot;message/rfc822\u0026quot;) getParent() on Multipart or BodyPart returns the parent node, which is of type BodyPart. Note that there is no way to get from a nested Message to its parent BodyPart. If you need to traverse the tree upwards with nested messages on the way, you first have to extract the path to this node from the top down. E.g. while identifying the part to remove you could store the parent BodyParts on a stack. First approach # Back to our use case of removing an attachment at an arbitrary level within a mail. First, a Message from the IMAP Store needs to be obtained, e.g. 
by looking it up in an IMAPFolder via its UID:\nSession session = Session.getDefaultInstance(new Properties()); Store store = session.getStore(\u0026#34;imap\u0026#34;); store.connect(\u0026#34;imap.example.com\u0026#34;,-1,\u0026#34;user\u0026#34;,\u0026#34;password\u0026#34;); IMAPFolder folder = (IMAPFolder) store.getFolder(\u0026#34;INBOX\u0026#34;); IMAPMessage originalMessage = (IMAPMessage) folder.getMessageByUID(42L); Next, the fetched message is copied over to a fresh MimeMessage since the IMAPMessage obtained from the store is marked as read-only and can\u0026rsquo;t be modified:\nMessage message = new MimeMessage(originalMessage); // Mark original message for a later expunge originalMessage.setFlag(Flags.Flag.DELETED, true); Now the part to be removed needs to be identified. The detailed code is not shown here, but it is straightforward: You need to traverse the cloned Message top down to identify the Part, e.g. by its part number (a positional index) or by its content id. Be careful, though, not to call getContent() except for BodyParts of type multipart/* or message/rfc822, since this would trigger a lazy fetch of the part\u0026rsquo;s content into memory. Probably not something you want to do while looking up a part. I think I already said this. ;-)\nMimePart partToRemove = partExtractor.getPartByPartNr(message,\u0026#34;2.1\u0026#34;); It\u0026rsquo;s time to remove the body part from its parent in the hierarchy and store the changed message back into the store. You can mark the original message as DELETED and expunge it on the folder. 
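Such a part extractor has to remember the ancestors of the part it finds, since, as noted above, a nested Message has no parent link. The idea can be sketched with a plain stand-in tree; the Node type and all names below are invented for illustration and are not JavaMail classes:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

public class PartLookup {

    // Minimal stand-in for the Part/Multipart tree: an id plus child nodes
    record Node(String id, List<Node> children) {}

    // Depth-first search which records every ancestor of the match on a stack,
    // so the caller can later walk upwards even across nested-message nodes.
    static boolean find(Node node, String id, Deque<Node> parents) {
        if (node.id().equals(id)) {
            return true;
        }
        parents.push(node);
        for (Node child : node.children()) {
            if (find(child, id, parents)) {
                return true;
            }
        }
        parents.pop(); // dead end: this node is not an ancestor of the match
        return false;
    }

    public static void main(String[] args) {
        Node tree = new Node("message", List.of(
                new Node("multipart", List.of(
                        new Node("text", List.of()),
                        new Node("attachment", List.of())))));
        Deque<Node> parents = new ArrayDeque<>();
        find(tree, "attachment", parents);
        System.out.println(parents.peek().id()); // multipart (nearest ancestor on top)
    }
}
```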
If you have the UIDEXTENSION available on your IMAP store, you can selectively delete this single message, otherwise your only choice is to remove all messages marked as deleted at once (\u0026ldquo;Empty Trash\u0026rdquo;).\nMultipart parent = partToRemove.getParent(); parent.removeBodyPart(partToRemove); // Update headers and append new message to folder message.saveChanges(); folder.appendMessages(new Message[] { message }); // Mark as deleted and expunge originalMessage.setFlag(Flags.Flag.DELETED, true); folder.expunge(new Message[]{ originalMessage }); We are done now.\nBut wait, that\u0026rsquo;s not good enough \u0026hellip; # If you try the code above, you will probably be a bit surprised. If you fetch back the newly saved message from the folder, you will find that the attachment has not been removed at all.\nDragons are waiting here.\nThe problem is that a JavaMail Part does heavy internal caching: it keeps a so-called content stream until new content is set for it. So even if you modify the hierarchy of objects as described above, the original content is kept until you update the content of the parents yourself and the cache is thrown away. Our part has not been removed because the cached content stream has not yet been invalidated. The solution is to get rid of the cached content stream (aka \u0026lsquo;refresh the message\u0026rsquo;). You could set the content directly via Part.setContent(oldPart.getContent(),oldPart.getContentType()), but this is dangerous insofar as it will load the part content into memory. (Did I already mention this?) That\u0026rsquo;s really not something you are keen on if you want to remove that Britney Spears video to save some IMAP space. The alternative is to work on the wrapped DataHandler only. A DataHandler (defined in Java Activation) is not much more than a reference to the content stream. 
Setting the DataHandler on a Part via Part.setDataHandler() also causes it to invalidate its cached content, so a later Part.writeTo() will stream out the new content. Unfortunately, this has to be done on every parent up to the root. A brute force solution is to start from the top and refresh every content with\n// Recursively go through and save all changes if (message.isMimeType(\u0026#34;multipart/*\u0026#34;)) { refreshRecursively((Multipart) message.getContent()); } Multipart part = (Multipart) message.getContent(); message.setContent(part); message.saveChanges(); ... void refreshRecursively(Multipart pPart) throws MessagingException, IOException { for (int i=0;i\u0026lt;pPart.getCount();i++) { MimeBodyPart body = (MimeBodyPart) pPart.getBodyPart(i); if (body.isMimeType(\u0026#34;message/rfc822\u0026#34;)) { // Refresh a nested message Message nestedMsg = (Message) body.getContent(); if (nestedMsg.isMimeType(\u0026#34;multipart/*\u0026#34;)) { Multipart mPart = (Multipart) body.getContent(); refreshRecursively(mPart); nestedMsg.setContent(mPart); } nestedMsg.saveChanges(); } else if (body.isMimeType(\u0026#34;multipart/*\u0026#34;)) { Multipart mPart = (Multipart) body.getContent(); refreshRecursively(mPart); } body.setDataHandler(body.getDataHandler()); } } However, we can be smarter here: Since we already identified the part to remove, we can make our way upwards to the root message via the getParent() method on Multipart and BodyPart (which, by the way, are not connected via any interface or inheritance relationship).\nBodyPart bodyParent = null; Multipart multipart = parent; do { if (multipart.getParent() instanceof BodyPart) { bodyParent = (BodyPart) multipart.getParent(); bodyParent.setDataHandler(bodyParent.getDataHandler()); multipart = bodyParent.getParent(); } else { // It\u0026#39;s a Message, probably the toplevel message // but could be a nested message, too (in which // case we have to stop here, too) bodyParent = null; } } while (bodyParent != null); Finally you need to update the uppermost message headers, too, with a\nMimeMessage.saveChanges() As you might have noticed, this works as long as there is no nested message in the chain of BodyParts up to the root. Since a Message doesn\u0026rsquo;t have any parent, we need some other means to get the BodyPart which is the parent of an enclosed Message. One way is to keep track of the chain of parent BodyParts when identifying the part to remove, e.g. by extending the part extractor to support a stack of parent BodyParts which will be filled in:\nStack\u0026lt;MimeBodyPart\u0026gt; parentBodys = new Stack\u0026lt;MimeBodyPart\u0026gt;(); MimePart partToRemove = partExtractor.getPartByPartNr(message,\u0026#34;2.1\u0026#34;,parentBodys); .... This example could be extended to remove multipart containers on the fly, replacing a multipart with its last remaining child if only one part is left after removal, or removing an empty multipart altogether when its last child has been removed.\nSummary # Hopefully, I could sketch out that there are several points to take care of when manipulating existing JavaMail Messages (it\u0026rsquo;s not that difficult if you build up one from scratch). The code shown above is only a starting point, but it hopefully saves you some time when you start wondering why on earth your nicely trimmed message isn\u0026rsquo;t stored correctly on the IMAP store.\n","date":"29 March 2010","externalUrl":null,"permalink":"/removing-attachments-with-javamail/","section":"Posts","summary":"If you have ever sent or received mail messages via Java, chances are high that you have used JavaMail for this task. Most of the time JavaMail does an excellent job, and a lot of use cases are described in the JavaMail FAQ. But there are still some additional quirks you should be aware of when doing advanced mail operations like adding or removing attachments (or “Parts”) from existing mails retrieved from some IMAP or POP3 store. 
This post shows how to remove an attachment at an arbitrary nesting level from a mail obtained from an IMAP store.\n","title":"Removing attachments with JavaMail","type":"posts"},{"content":"This site contains my thoughts and articles on computer-related stuff. The content is probably biased a bit towards Kubernetes, cloud-native, and developer tooling, but this might shift over time. My goal is to write regularly. Hard enough.\nAbout me # I am Roland Huß, a developer and engineer, coding for over two decades (mostly in Java and Go), living and working in Franconia, loving chilis.\nI do Open Source. Among my projects are Jolokia, the JSON/HTTP bridge to JMX, and a docker-maven-plugin.\nFor information about my professional career, find me on LinkedIn.\nContact # Roland Huß, blog@ro14nd.de\nLegal # Privacy Policy License (CC BY 4.0) ","externalUrl":null,"permalink":"/about/","section":"Roland Huß","summary":"This site contains my thoughts and articles on computer-related stuff. The content is probably biased a bit towards Kubernetes, cloud-native, and developer tooling, but this might shift over time. My goal is to write regularly. Hard enough.\nAbout me # I am Roland Huß, a developer and engineer, coding for over two decades (mostly in Java and Go), living and working in Franconia, loving chilis.\n","title":"About","type":"page"},{"content":"","externalUrl":null,"permalink":"/categories/","section":"Categories","summary":"","title":"Categories","type":"categories"},{"content":" Content License # Unless otherwise noted, the content on this blog is licensed under the Creative Commons Attribution 4.0 International License (CC BY 4.0).\nYou are free to # Share — copy and redistribute the material in any medium or format Adapt — remix, transform, and build upon the material for any purpose, even commercially Under the following terms # Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. 
Code Examples # Code snippets and examples in blog posts are provided under the Apache License 2.0 unless otherwise stated.\n","externalUrl":null,"permalink":"/license/","section":"Roland Huß","summary":"Content License # Unless otherwise noted, the content on this blog is licensed under the Creative Commons Attribution 4.0 International License (CC BY 4.0).\nYou are free to # Share — copy and redistribute the material in any medium or format Adapt — remix, transform, and build upon the material for any purpose, even commercially Under the following terms # Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. Code Examples # Code snippets and examples in blog posts are provided under the Apache License 2.0 unless otherwise stated.\n","title":"License","type":"page"},{"content":" 1. Data controller # Roland Huß, blog@ro14nd.de\n2. Hosting # This website is hosted on GitHub Pages (GitHub Inc., USA). GitHub may process technical data such as IP addresses. For more information, see the GitHub Privacy Statement.\n3. Comments # Comments are powered by Giscus, which is based on GitHub Discussions. A GitHub account is required to participate. The GitHub Privacy Statement applies.\n4. Analytics # This website does not use cookies or tracking. No personal data is collected for analytics purposes.\n5. Your rights # You have the rights of access, rectification, erasure, restriction of processing, data portability, and objection. Contact me at the email address above.\n6. Social media links # This website contains links to LinkedIn, Mastodon, and Bluesky. These are simple hyperlinks with no tracking or data transfer to these services when visiting this website.\n","externalUrl":null,"permalink":"/datenschutz/","section":"Roland Huß","summary":"1. Data controller # Roland Huß, blog@ro14nd.de\n2. Hosting # This website is hosted on GitHub Pages (GitHub Inc., USA). GitHub may process technical data such as IP addresses. 
For more information, see the GitHub Privacy Statement.\n3. Comments # Comments are powered by Giscus, which is based on GitHub Discussions. A GitHub account is required to participate. The GitHub Privacy Statement applies.\n","title":"Privacy Policy","type":"page"},{"content":"","externalUrl":null,"permalink":"/series/","section":"Series","summary":"","title":"Series","type":"series"},{"content":"","externalUrl":null,"permalink":"/tags/","section":"Tags","summary":"","title":"Tags","type":"tags"}]