<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Kubernetes on Roland Huß</title><link>https://ro14nd.de/tags/kubernetes/</link><description>Recent content in Kubernetes on Roland Huß</description><generator>Hugo -- gohugo.io</generator><language>en</language><copyright>© 2026 Roland Huß</copyright><lastBuildDate>Sat, 04 Apr 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://ro14nd.de/tags/kubernetes/index.xml" rel="self" type="application/rss+xml"/><item><title>Blog winter is over</title><link>https://ro14nd.de/blog-winter-is-over/</link><pubDate>Sat, 04 Apr 2026 00:00:00 +0000</pubDate><guid>https://ro14nd.de/blog-winter-is-over/</guid><description>&lt;p&gt;The last post on this blog was about &lt;a href="https://github.com/GoogleContainerTools/jib" target="_blank" rel="noreferrer"&gt;Jib&lt;/a&gt;, Google&amp;rsquo;s daemonless Java image builder. That was July 2018. Almost eight years ago. Anybody remember when that was the latest hotness?&lt;/p&gt;
&lt;p&gt;Before that, I wrote about Docker when Docker was still exciting and built a &lt;a href="https://ro14nd.de/kubernetes-on-raspberry-pi3/" &gt;Kubernetes cluster on Raspberry Pi 3&lt;/a&gt; nodes when that was still a weekend adventure. I spent way too many words on Jolokia and JMX. 27 posts between 2010 and 2018, then silence. If you&amp;rsquo;ve been reading tech blogs long enough, you know how that goes.&lt;/p&gt;
&lt;p&gt;So what breaks eight years of silence?&lt;/p&gt;

&lt;h2 class="relative group"&gt;Why the silence
 &lt;div id="why-the-silence" class="anchor"&gt;&lt;/div&gt;
 
 &lt;span
 class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100 select-none"&gt;
 &lt;a class="text-primary-300 dark:text-neutral-700 !no-underline" href="#why-the-silence" aria-label="Anchor"&gt;#&lt;/a&gt;
 &lt;/span&gt;
 
&lt;/h2&gt;
&lt;p&gt;There&amp;rsquo;s no dramatic story here, no burnout or life crisis.
I just stopped at some point because my writing energy went elsewhere.
I co-authored &lt;a href="https://learning.oreilly.com/library/view/kubernetes-patterns-2nd/9781098131678/" target="_blank" rel="noreferrer"&gt;Kubernetes Patterns&lt;/a&gt; with Bilgin Ibryam (first edition in 2019, second in 2023). More recently, I finished &lt;a href="https://learning.oreilly.com/library/view/generative-ai-on/9781098171919/" target="_blank" rel="noreferrer"&gt;Generative AI on Kubernetes&lt;/a&gt; with Daniele Zonca in 2026.&lt;/p&gt;
&lt;img src="https://ro14nd.de/images/blog-winter-is-over/books-k8s-patterns-genai.png" alt="Kubernetes Patterns and Generative AI on Kubernetes" style="max-width: 350px; margin: 1em auto; display: block;" /&gt;
&lt;p&gt;Writing books is a strange experience.
You pour months into a manuscript, and when it ships, you&amp;rsquo;re proud and drained at the same time.
But once the last book was out in March 2026, something shifted. The pressure was gone, the gap between having something to say and sitting down to write it started closing, and I wanted to write shorter, more opinionated pieces again.&lt;/p&gt;

&lt;h2 class="relative group"&gt;What changed
 &lt;div id="what-changed" class="anchor"&gt;&lt;/div&gt;
 
 &lt;span
 class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100 select-none"&gt;
 &lt;a class="text-primary-300 dark:text-neutral-700 !no-underline" href="#what-changed" aria-label="Anchor"&gt;#&lt;/a&gt;
 &lt;/span&gt;
 
&lt;/h2&gt;
&lt;p&gt;The tech world of 2026 looks nothing like 2018. AI-first engineering isn&amp;rsquo;t a conference buzzword anymore. It&amp;rsquo;s how I work every day. The way I write code, design systems, and think about developer experience has shifted more in the last year than in the decade before. Last month I scaffolded a complete operator in two days that would have taken two weeks in 2018, and most of my work was writing the specification, not the code.&lt;/p&gt;
&lt;p&gt;Context engineering became something I care about deeply. Not in the abstract &amp;ldquo;prompt engineering&amp;rdquo; sense, but in the practical &amp;ldquo;how do you structure specifications so that AI agents produce useful output&amp;rdquo; sense. I&amp;rsquo;ve been deep into Spec Driven Development, particularly &lt;a href="https://github.com/github/spec-kit" target="_blank" rel="noreferrer"&gt;spec-kit&lt;/a&gt; from GitHub. I have opinions about it, even as the whole field is still taking shape.&lt;/p&gt;
&lt;p&gt;When you find yourself explaining the same ideas in conversations, Slack threads, and pull request descriptions over and over again, that&amp;rsquo;s usually a sign you should write them down properly.&lt;/p&gt;

&lt;h2 class="relative group"&gt;What&amp;rsquo;s coming
 &lt;div id="whats-coming" class="anchor"&gt;&lt;/div&gt;
 
 &lt;span
 class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100 select-none"&gt;
 &lt;a class="text-primary-300 dark:text-neutral-700 !no-underline" href="#whats-coming" aria-label="Anchor"&gt;#&lt;/a&gt;
 &lt;/span&gt;
 
&lt;/h2&gt;
&lt;p&gt;The topic I keep coming back to is &lt;strong&gt;context engineering and Spec Driven Development&lt;/strong&gt;: how I structure specifications for agentic coding workflows, what works, what fails, and why the &amp;ldquo;just give it a prompt&amp;rdquo; approach misses the point. This will probably be a recurring theme, because the field is moving fast and there&amp;rsquo;s a lot to figure out in public.&lt;/p&gt;
&lt;p&gt;Close behind is &lt;strong&gt;AgentOps on Kubernetes&lt;/strong&gt;. Running agentic workloads on Kubernetes and OpenShift is a different beast than classic web services. Agents are unpredictable, long-running, and resource-hungry. They talk to the outside world in ways that make security teams nervous. I&amp;rsquo;m ramping up on this topic professionally, figuring out how to operate these workloads in a secure and scalable way. Expect posts about the particular demands of AI agents and why your existing Deployment patterns won&amp;rsquo;t cut it.&lt;/p&gt;
&lt;p&gt;Beyond those two, expect posts about AI-first engineering in daily practice (the surprising wins, the things that still don&amp;rsquo;t work), agentic coding projects and tools, the home K3s cluster that&amp;rsquo;s been running on five Raspberry Pi 4 nodes for over five years, book-adjacent Kubernetes patterns, and whatever else catches my attention.&lt;/p&gt;

&lt;h2 class="relative group"&gt;On AI and this blog
 &lt;div id="on-ai-and-this-blog" class="anchor"&gt;&lt;/div&gt;
 
 &lt;span
 class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100 select-none"&gt;
 &lt;a class="text-primary-300 dark:text-neutral-700 !no-underline" href="#on-ai-and-this-blog" aria-label="Anchor"&gt;#&lt;/a&gt;
 &lt;/span&gt;
 
&lt;/h2&gt;
&lt;p&gt;Since AI changed how I work, it naturally changed how I write too. I use AI tools for ideation, polishing, and managing the publishing workflow. The thinking and the opinions stay mine. Every post carries an &lt;a href="https://aiattribution.github.io" target="_blank" rel="noreferrer"&gt;AI attribution tag&lt;/a&gt; at the bottom so you know exactly what role AI played. I apply the same principle I advocate in the &lt;a href="https://vibe-coding-manifesto.com/" target="_blank" rel="noreferrer"&gt;Responsible Vibe Coding Guide&lt;/a&gt;: use AI as a tool, but own the result. I&amp;rsquo;ll write more about the process in a future post.&lt;/p&gt;

&lt;h2 class="relative group"&gt;Let&amp;rsquo;s see
 &lt;div id="lets-see" class="anchor"&gt;&lt;/div&gt;
 
 &lt;span
 class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100 select-none"&gt;
 &lt;a class="text-primary-300 dark:text-neutral-700 !no-underline" href="#lets-see" aria-label="Anchor"&gt;#&lt;/a&gt;
 &lt;/span&gt;
 
&lt;/h2&gt;
&lt;p&gt;I&amp;rsquo;m not going to promise a posting schedule. That kind of commitment didn&amp;rsquo;t work last time, and I see no reason to repeat it. But I have more things to write about now than at any point during the old blog&amp;rsquo;s run. Eight years ago, Jib was the latest hotness. The next post won&amp;rsquo;t take that long.&lt;/p&gt;
&lt;div class="ai-attribution"&gt;
&lt;p&gt;&lt;a href="https://aiattribution.github.io/statements/AIA-HAb-CeNc-Hin-R-?model=Claude%20Opus%204.6-v1.0" target="_blank" rel="noreferrer"&gt;AIA HAb CeNc Hin R Claude Opus 4.6 v1.0&lt;/a&gt;&lt;/p&gt;
&lt;/div&gt;</description></item><item><title>Bringing Octobox to OpenShift Online</title><link>https://ro14nd.de/octobox-oso/</link><pubDate>Sun, 25 Mar 2018 00:00:00 +0000</pubDate><guid>https://ro14nd.de/octobox-oso/</guid><description>&lt;p&gt;&lt;a href="https://github.com/octobox/octobox" target="_blank" rel="noreferrer"&gt;Octobox&lt;/a&gt; is for sure one of my favourite tools in my GitHub centred developer workflow.
It is excellent at managing GitHub notifications and allows me to ignore the hundreds of GitHub notification emails I get daily.&lt;/p&gt;
&lt;p&gt;Octobox is a Ruby-on-Rails application and can be used as SaaS at &lt;a href="https://octobox.io" target="_blank" rel="noreferrer"&gt;octobox.io&lt;/a&gt; or installed and used separately.
Running Octobox in your own account is especially appealing for privacy reasons and for advanced features that are not enabled in the hosted version (like periodic background fetching or more information per notification).&lt;/p&gt;
&lt;p&gt;This post shows how Octobox can be ported to the free &amp;ldquo;starter&amp;rdquo; tier of &lt;a href="https://www.openshift.com/pricing/index.html" target="_blank" rel="noreferrer"&gt;OpenShift Online&lt;/a&gt;.&lt;/p&gt;
&lt;img src="https://ro14nd.de/images/octobox-oso/octobox.png" style="margin: auto;"/&gt;

&lt;h2 class="relative group"&gt;Application setup
 &lt;div id="application-setup" class="anchor"&gt;&lt;/div&gt;
 
 &lt;span
 class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100 select-none"&gt;
 &lt;a class="text-primary-300 dark:text-neutral-700 !no-underline" href="#application-setup" aria-label="Anchor"&gt;#&lt;/a&gt;
 &lt;/span&gt;
 
&lt;/h2&gt;
&lt;p&gt;An Octobox installation consists of three parts:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Octobox itself, a Rails application&lt;/li&gt;
&lt;li&gt;Redis as an ephemeral cache, used as a session store&lt;/li&gt;
&lt;li&gt;Postgresql as the backend database&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Naturally, this would lead to three services.
However, since I&amp;rsquo;m not striving for an HA setup, and for the sake of simplicity, I decided to combine Octobox and Redis in a single pod.
A shared lifecycle for Octobox and Redis is a reasonable trade-off, and it reduces the number of OpenShift resource objects considerably.&lt;/p&gt;
&lt;p&gt;As the persistent store for Postgres, we use a plain &lt;code&gt;PersistentVolume&lt;/code&gt;, which is good enough for our low-footprint database requirements.&lt;/p&gt;

&lt;h2 class="relative group"&gt;Docker Images
 &lt;div id="docker-images" class="anchor"&gt;&lt;/div&gt;
 
 &lt;span
 class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100 select-none"&gt;
 &lt;a class="text-primary-300 dark:text-neutral-700 !no-underline" href="#docker-images" aria-label="Anchor"&gt;#&lt;/a&gt;
 &lt;/span&gt;
 
&lt;/h2&gt;
&lt;p&gt;To get an application onto OpenShift, you first need to package all parts of your application into Docker images, which eventually become containers at runtime.&lt;/p&gt;
&lt;p&gt;There are some &lt;a href="https://docs.openshift.org/latest/creating_images/guidelines.html" target="_blank" rel="noreferrer"&gt;restrictions&lt;/a&gt; for Docker images to be usable on OpenShift.
The most important one is that all containers run under a random UID which belongs to the Unix group &lt;code&gt;root&lt;/code&gt;.
As a consequence, all directories and files that the application process wants to write to should belong to group &lt;code&gt;root&lt;/code&gt; and must be group writable.&lt;/p&gt;
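&lt;p&gt;As a sketch of what this means for an image, the following hypothetical Dockerfile fragment shows the typical adjustments (the base image and the &lt;code&gt;/app&lt;/code&gt; path are placeholders for illustration, not Octobox&amp;rsquo;s actual Dockerfile):&lt;/p&gt;

```dockerfile
# Hypothetical example of making an image OpenShift friendly.
# Base image and /app path are assumptions for illustration only.
FROM ruby:2.5

COPY . /app
WORKDIR /app

# OpenShift starts the container with a random UID that belongs to
# group "root", so everything the process writes to must be owned by
# group root and be group writable
RUN chgrp -R 0 /app
RUN chmod -R g+w /app

# Declare a numeric, non-root user so the image passes OpenShift's checks
USER 1001
```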
&lt;p&gt;Octobox is already distributed as a &lt;a href="https://github.com/octobox/octobox/blob/bd6c2cbc4745363240482f36210509830d0c4bc1/Dockerfile" target="_blank" rel="noreferrer"&gt;Docker image&lt;/a&gt; and has recently been updated to be OpenShift compatible.
The &lt;a href="https://docs.openshift.com/container-platform/3.7/using_images/db_images/postgresql.html" target="_blank" rel="noreferrer"&gt;Postgres image&lt;/a&gt; is directly picked up from an OpenShift provided ImageStream, so there is no issue at all.
The &lt;a href="https://hub.docker.com/r/centos/redis-32-centos7/" target="_blank" rel="noreferrer"&gt;Redis Imagee&lt;/a&gt; is also already prepared for OpenShift
However, when using Redis from this image in an ephemeral mode (so not using persistence) there is a subtle issue which prevents starting the Pod:
As the Dockerfile declares a &lt;a href="https://github.com/sclorg/redis-container/blob/7689bf310dc29f363f0cf7e0e74a457cda5a3f6e/3.2/Dockerfile#L73" target="_blank" rel="noreferrer"&gt;VOLUME&lt;/a&gt; and even though in our setup we don&amp;rsquo;t need it, we &lt;strong&gt;have&lt;/strong&gt; to declare a volume in the Pod definition anyway.
Otherwise, you end up with a cryptic error message in the OpenShift console (like &lt;code&gt;can't create volume ...&lt;/code&gt;).
An &lt;code&gt;emptyDir&lt;/code&gt; volume as perfectly good enough for this.&lt;/p&gt;
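&lt;p&gt;For illustration, the relevant part of the Pod template could look like the following sketch (the container and volume names are made up; the mount path is the one the Redis image declares as &lt;code&gt;VOLUME&lt;/code&gt;):&lt;/p&gt;

```yaml
# Sketch: satisfy the VOLUME declared in the Redis Dockerfile with a
# throwaway emptyDir volume (names here are illustrative assumptions)
spec:
  containers:
  - name: redis
    image: centos/redis-32-centos7
    volumeMounts:
    - name: redis-data
      mountPath: /var/lib/redis/data   # the path declared as VOLUME
  volumes:
  - name: redis-data
    emptyDir: {}
```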

&lt;h2 class="relative group"&gt;Template
 &lt;div id="template" class="anchor"&gt;&lt;/div&gt;
 
 &lt;span
 class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100 select-none"&gt;
 &lt;a class="text-primary-300 dark:text-neutral-700 !no-underline" href="#template" aria-label="Anchor"&gt;#&lt;/a&gt;
 &lt;/span&gt;
 
&lt;/h2&gt;
&lt;p&gt;To install the application, an &lt;a href="https://github.com/octobox/octobox/blob/master/openshift/octobox-template.yml" target="_blank" rel="noreferrer"&gt;OpenShift Template&lt;/a&gt; has been created.
It contains the following objects:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;DeploymentConfig&lt;/code&gt;s for &amp;ldquo;Octobox with Redis&amp;rdquo; and &amp;ldquo;Postgres&amp;rdquo;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Service&lt;/code&gt;s for Octobox and Postgres&lt;/li&gt;
&lt;li&gt;&lt;code&gt;PersistentVolumeClaim&lt;/code&gt; for Postgres&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;A route for accessing the app is created later in the OpenShift console.
Please refer to these &lt;a href="https://github.com/octobox/octobox/blob/master/openshift/OPENSHIFT_INSTALLATION.md" target="_blank" rel="noreferrer"&gt;installation instructions&lt;/a&gt; for more details on how to use this template.&lt;/p&gt;

&lt;h2 class="relative group"&gt;OpenShift Online Starter
 &lt;div id="openshift-online-starter" class="anchor"&gt;&lt;/div&gt;
 
 &lt;span
 class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100 select-none"&gt;
 &lt;a class="text-primary-300 dark:text-neutral-700 !no-underline" href="#openshift-online-starter" aria-label="Anchor"&gt;#&lt;/a&gt;
 &lt;/span&gt;
 
&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://www.openshift.com/pricing/index.html" target="_blank" rel="noreferrer"&gt;OpenShift Online Starter&lt;/a&gt; is the free tier of OpenShift online which is very useful for learning OpenShift concept and get one&amp;rsquo;s feet wet.
However, it has some quite restrictive resource limitations:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;1 GB Memory&lt;/li&gt;
&lt;li&gt;1 GB Storage&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This budget is good enough for small applications like Octobox, but if you need more horsepower, you can easily upgrade to OpenShift Online Pro.&lt;/p&gt;
&lt;p&gt;The challenge now is to distribute the three parts (Octobox, Postgres, Redis) over this 1 GB.
As Octobox, being a Rails application, is quite a memory hog, we want to dedicate as much memory as possible to it.
Postgres does not need much memory at all, so 50 to 100 MB is good enough.
The same goes for Redis as an initial guess.
We can always tune this later if our initial guess turns out to be wrong.&lt;/p&gt;
&lt;p&gt;Ok, let&amp;rsquo;s start with:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;875 MB Octobox&lt;/li&gt;
&lt;li&gt;50 MB Redis&lt;/li&gt;
&lt;li&gt;75 MB Postgres&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;When trying out these limits, I quickly found out that this doesn&amp;rsquo;t work.
The reason is that OpenShift Online has a &lt;strong&gt;minimum&lt;/strong&gt; container size of 100 MB.
Also, you can&amp;rsquo;t choose &lt;a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/" target="_blank" rel="noreferrer"&gt;requests and limits&lt;/a&gt; freely: there is a fixed ratio of 50% for calculating the &lt;code&gt;request&lt;/code&gt; from a given &lt;code&gt;limit&lt;/code&gt; (the &lt;code&gt;request&lt;/code&gt; you specify is always ignored).
This not only means that you get a &lt;a href="https://medium.com/google-cloud/quality-of-service-class-qos-in-kubernetes-bb76a89eb2c6" target="_blank" rel="noreferrer"&gt;&lt;em&gt;Burstable&lt;/em&gt;&lt;/a&gt; QoS class, but also that you &lt;strong&gt;have&lt;/strong&gt; to specify a &lt;code&gt;limit&lt;/code&gt; of 200 MB to get the &lt;code&gt;request&lt;/code&gt; of at least 100 MB needed to meet the minimum.&lt;/p&gt;
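&lt;p&gt;In the container spec, this translates into a resource declaration like the following sketch; with the fixed 50% ratio, the request is then derived automatically:&lt;/p&gt;

```yaml
# Sketch: with OpenShift Online's fixed 50% request/limit ratio,
# a 200 MB memory limit yields the minimal allowed 100 MB request
resources:
  limits:
    memory: 200Mi
```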
&lt;p&gt;So we end up with:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;600 MB Octobox&lt;/li&gt;
&lt;li&gt;200 MB Redis&lt;/li&gt;
&lt;li&gt;200 MB Postgres&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Obviously, this is not optimal, but that&amp;rsquo;s how it works on the OpenShift Online Starter tier (and probably also the Pro tier).
For other OpenShift clusters it depends, of course, on the setup of the specific cluster.
We could put Redis and Octobox into the same container and start two processes in it.
This change would free up another 150 MB for Octobox, but it is ugly design.
So we won&amp;rsquo;t do it ;-)&lt;/p&gt;

&lt;h2 class="relative group"&gt;tl;dr
 &lt;div id="tldr" class="anchor"&gt;&lt;/div&gt;
 
 &lt;span
 class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100 select-none"&gt;
 &lt;a class="text-primary-300 dark:text-neutral-700 !no-underline" href="#tldr" aria-label="Anchor"&gt;#&lt;/a&gt;
 &lt;/span&gt;
 
&lt;/h2&gt;
&lt;p&gt;Porting an application to OpenShift is not difficult.
The free &lt;a href="https://www.openshift.com/pricing/index.html" target="_blank" rel="noreferrer"&gt;OpenShift Online Starter&lt;/a&gt; tier in particular is very appealing for such experiments.
The challenges are mostly around creating proper Docker images and getting resource limits right.
As a result, you get a decently running, managed installation.&lt;/p&gt;
&lt;p&gt;For the full installation instructions, please refer to the OpenShift specific Octobox &lt;a href="https://github.com/octobox/octobox/blob/master/openshift/OPENSHIFT_INSTALLATION.md" target="_blank" rel="noreferrer"&gt;installation instructions&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>RasPi 3 Kubernetes Cluster - An Update</title><link>https://ro14nd.de/k8s-on-pi-update/</link><pubDate>Wed, 05 Apr 2017 00:00:00 +0000</pubDate><guid>https://ro14nd.de/k8s-on-pi-update/</guid><description>&lt;p&gt;Our Ansible Playbooks for installing Kubernetes on a Raspberry Pi Cluster have been constantly updated and are now using the awesome &lt;a href="https://github.com/kubernetes/kubeadm" target="_blank" rel="noreferrer"&gt;kubeadm&lt;/a&gt;. The update to Kubernetes 1.6. was a bit tricky, though.&lt;/p&gt;
&lt;p&gt;Recently I had the luck to meet Mr. &lt;a href="https://twitter.com/kubernetesonarm" target="_blank" rel="noreferrer"&gt;@kubernetesonarm&lt;/a&gt; Lucas Käldström at the &lt;a href="https://devops-gathering.io/" target="_blank" rel="noreferrer"&gt;DevOps Gathering&lt;/a&gt;, where he demoed his multi-arch cluster. That was really impressive. Lucas squeezes out the maximum of what is possible these days with Raspberry Pis and other SOC devices on the Kubernetes platform. Please follow his &lt;a href="https://github.com/luxas/kubeadm-workshop" target="_blank" rel="noreferrer"&gt;workshop&lt;/a&gt; on GitHub for a multi-platform setup with ingress controller, persistent volumes, custom API servers and more.&lt;/p&gt;
&lt;p&gt;Needless to say, after returning home one of the first tasks was to update our &lt;a href="https://github.com/Project31/ansible-kubernetes-openshift-pi3" target="_blank" rel="noreferrer"&gt;Ansible playbooks&lt;/a&gt; to Kubernetes 1.6 on my RasPi cluster. The goal of these playbooks is a bit different from Lucas&amp;rsquo;s workshop setup: instead of living on the edge, the goal here is to provide an easy, automated and robust way to install a standard Kubernetes installation on a Raspberry Pi 3 cluster. &lt;code&gt;kubeadm&lt;/code&gt; is a really great help and makes many things so much easier. However, there are still some additional steps to do.&lt;/p&gt;
&lt;p&gt;After following the &lt;a href="https://github.com/luxas/kubeadm-workshop/blob/master/README.md" target="_blank" rel="noreferrer"&gt;workshop instructions&lt;/a&gt;, it soon turned out that it was probably not the best time for the update. Kubernetes 1.6 had just been released, and last-minute pre-release changes &lt;a href="https://github.com/kubernetes/kubeadm/issues/212" target="_blank" rel="noreferrer"&gt;broke kubeadm 1.6.0&lt;/a&gt;. Luckily, these issues were fixed quickly with 1.6.1. However, the so-called &lt;em&gt;self-hosted&lt;/em&gt; mode of kubeadm broke, too (and is currently still &lt;a href="https://github.com/luxas/kubeadm-workshop/issues/8" target="_blank" rel="noreferrer"&gt;broken&lt;/a&gt; in 1.6.1, but should be fixed soon). So the best bet for the moment is a standard install (with external processes for the api-server et al.).&lt;/p&gt;
&lt;p&gt;This time I also wanted to use &lt;a href="https://github.com/weaveworks/weave" target="_blank" rel="noreferrer"&gt;Weave&lt;/a&gt; instead of Flannel as the overlay network. It turned out that this didn&amp;rsquo;t work on my cluster because every one of my nodes got the same virtual MAC address assigned by Weave. That&amp;rsquo;s because this address is &lt;a href="https://github.com/weaveworks/weave/blob/916ff7aa3979fced84fceef1635ab8c868d71e25/net/uuid.go#L26" target="_blank" rel="noreferrer"&gt;calculated&lt;/a&gt; based on &lt;code&gt;/etc/machine-id&lt;/code&gt;. And guess what: all my nodes had the &lt;em&gt;same machine id&lt;/em&gt; &lt;code&gt;9989a26f06984d6dbadc01770f018e3b&lt;/code&gt;. This is what the base Hypriot 1.4.0 system decides to install (in fact, it is derived by &lt;code&gt;systemd-machine-id-setup&lt;/code&gt; from &lt;code&gt;/var/lib/dbus/machine-id&lt;/code&gt;), and every Hypriot installation out there has this very same machine-id ;-) For me it wasn&amp;rsquo;t surprising that this happened (well, developing bugs is our daily business ;-), but I was quite puzzled that this hasn&amp;rsquo;t become a bigger &lt;a href="https://github.com/hypriot/image-builder-rpi/issues/167" target="_blank" rel="noreferrer"&gt;issue&lt;/a&gt; yet, because I suspect that especially in cluster setups (be it Docker Swarm or Kubernetes) the nodes need a unique id at some point. Of course, most of the time the IP and hostname are enough. But for a more rigorous UUID, &lt;code&gt;/etc/machine-id&lt;/code&gt; is normally a good fit.&lt;/p&gt;
&lt;p&gt;After learning this and re-creating the UUID myself (with &lt;code&gt;dbus-uuidgen &amp;gt; /etc/machine-id&lt;/code&gt;), everything works smoothly again, so I have a base Kubernetes 1.6 cluster with DNS and a proper overlay network once more. Phew, that was quite a mouthful of work :)&lt;/p&gt;
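&lt;p&gt;To apply the fix across all cluster nodes in one go, a loop like the following can be used (the node hostnames and the ssh user are assumptions for this sketch; adapt them to your cluster):&lt;/p&gt;

```shell
# Regenerate a unique /etc/machine-id on every node.
# Hostnames n0..n3 and user "pirate" are illustrative assumptions.
for node in n0 n1 n2 n3; do
  ssh "pirate@$node" "sudo sh -c 'dbus-uuidgen > /etc/machine-id'"
done
```

&lt;p&gt;A reboot of the nodes afterwards makes sure every service picks up the new id.&lt;/p&gt;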
&lt;p&gt;You can find the installation instructions and the updated playbooks at &lt;a href="https://github.com/Project31/ansible-kubernetes-openshift-pi3" target="_blank" rel="noreferrer"&gt;https://github.com/Project31/ansible-kubernetes-openshift-pi3&lt;/a&gt;. If your router is configured properly, it takes not much more than half an hour to &lt;a href="https://github.com/Project31/ansible-kubernetes-openshift-pi3#ansible-playbooks" target="_blank" rel="noreferrer"&gt;set up the full cluster&lt;/a&gt;. I have done it several times since last week, always starting afresh by flashing the SD cards. I can confirm that it&amp;rsquo;s reproducible and idempotent now ;-)&lt;/p&gt;
&lt;p&gt;The next steps are to add persistent volumes with &lt;a href="https://github.com/rook/rook" target="_blank" rel="noreferrer"&gt;Rook&lt;/a&gt;, &lt;a href="https://traefik.io/" target="_blank" rel="noreferrer"&gt;Træfik&lt;/a&gt; as ingress controller, and our own internal registry.&lt;/p&gt;
&lt;p&gt;Feel free to give it a try and open many &lt;a href="https://github.com/Project31/ansible-kubernetes-openshift-pi3/issues/new" target="_blank" rel="noreferrer"&gt;issues&lt;/a&gt; ;-)&lt;/p&gt;</description></item><item><title>A Raspberry Pi 3 Kubernetes Cluster</title><link>https://ro14nd.de/kubernetes-on-raspberry-pi3/</link><pubDate>Wed, 27 Apr 2016 00:00:00 +0000</pubDate><guid>https://ro14nd.de/kubernetes-on-raspberry-pi3/</guid><description>&lt;p&gt;Let&amp;rsquo;s build a Raspberry Pi cluster running Docker and Kubernetes. There have already been a handful of good recipes; however, this howto is a bit different and provides some unique features.&lt;/p&gt;
&lt;img src="https://ro14nd.de/images/kubernetes-on-raspberry-pi3/pi_cluster.jpg" style="float:right; margin: 50px 0px 20px 30px"/&gt;
&lt;p&gt;My main motivation for going the Raspberry Pi road for a Kubernetes cluster was that I wanted something fancy for my &lt;a href="https://github.com/rhuss/jax-kubernetes-2016" target="_blank" rel="noreferrer"&gt;Kubernetes talk&lt;/a&gt; to show, shamelessly stealing the idea &lt;a href="https://opensource.com/life/16/2/build-a-kubernetes-cloud-with-raspberry-pi" target="_blank" rel="noreferrer"&gt;from&lt;/a&gt; &lt;a href="https://www.youtube.com/watch?time_continue=4&amp;amp;v=AAS5Mq9EktI" target="_blank" rel="noreferrer"&gt;others&lt;/a&gt; (kudos to &lt;code&gt;@KurtStam&lt;/code&gt;, &lt;code&gt;@saturnism&lt;/code&gt;, &lt;code&gt;@ArjenWassink&lt;/code&gt; and &lt;code&gt;@kubernetesonarm&lt;/code&gt; for the inspiration ;-)&lt;/p&gt;
&lt;p&gt;In particular, the following Pi-K8s projects already existed:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href="https://github.com/Project31/kubernetes-installer-rpi" target="_blank" rel="noreferrer"&gt;kubernetes-installer-rpi&lt;/a&gt;&lt;/strong&gt; : A set up shell scripts and precompiled ARM binaries for running Kubernetes by &lt;a href="https://twitter.com/KurtStam" target="_blank" rel="noreferrer"&gt;@KurtStam&lt;/a&gt; on top of the Hypriot Docker Image for Raspberry Pi.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href="https://github.com/luxas/kubernetes-on-arm" target="_blank" rel="noreferrer"&gt;Kubernetes on ARM&lt;/a&gt;&lt;/strong&gt; : An opinionated approach by &lt;a href="https://twitter.com/kubernetesonarm" target="_blank" rel="noreferrer"&gt;@kubernetesonarm&lt;/a&gt; with an own installer for setting up Kubernetes no only for the Pi but also for other ARM based platforms.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href="https://github.com/awassink/k8s-on-rpi" target="_blank" rel="noreferrer"&gt;K8s on Rpi&lt;/a&gt;&lt;/strong&gt; : Another shell based installer for installing a Kubernetes cluster by &lt;a href="https://twitter.com/ArjenWassink" target="_blank" rel="noreferrer"&gt;@ArjenWassink&lt;/a&gt; and &lt;a href="https://twitter.com/saturnism" target="_blank" rel="noreferrer"&gt;@saturnism&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;When there are already multiple recipes out there, why try yet another approach?&lt;/p&gt;
&lt;p&gt;My somewhat selfish goals were:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Using (and learning along the way) Ansible, not only for a one-shot installation but also for maintenance and upgrades.&lt;/li&gt;
&lt;li&gt;Teaching myself how to set up a Kubernetes cluster. This setup includes Flannel as an overlay network, the SkyDNS extension, and soon also a registry. Using Ansible helps me to incrementally add on top of what is already installed.&lt;/li&gt;
&lt;li&gt;Using WiFi for connecting the cluster. See below for the reason.&lt;/li&gt;
&lt;li&gt;Getting &lt;a href="https://github.com/openshift/origin" target="_blank" rel="noreferrer"&gt;OpenShift Origin&lt;/a&gt; running and being able to switch between Kubernetes and OpenShift via Ansible.&lt;/li&gt;
&lt;li&gt;Creating a demonstration platform for my favourite development and integration platform &lt;a href="http://fabric8.io" target="_blank" rel="noreferrer"&gt;fabric8&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;As it turns out, the whole experience was very enlightening for me. It&amp;rsquo;s one thing to start Kubernetes on a single node within a VM (because multiple VM-based nodes soon kill your machine resource-wise); it&amp;rsquo;s another to have a small bare-metal cluster which blinks red and green and where you can pull wires at will. Not to mention the geek factor :)&lt;/p&gt;

&lt;h2 class="relative group"&gt;Shopping List
 &lt;div id="shopping-list" class="anchor"&gt;&lt;/div&gt;
 
 &lt;span
 class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100 select-none"&gt;
 &lt;a class="text-primary-300 dark:text-neutral-700 !no-underline" href="#shopping-list" aria-label="Anchor"&gt;#&lt;/a&gt;
 &lt;/span&gt;
 
&lt;/h2&gt;
&lt;p&gt;Here&amp;rsquo;s my shopping list for a Raspberry Pi 3 cluster, along with (non-affiliate) links to (German) shops, but I&amp;rsquo;m sure you can find the parts elsewhere, too.&lt;/p&gt;
&lt;table&gt;
 &lt;thead&gt;
 &lt;tr&gt;
 &lt;th&gt;Amount&lt;/th&gt;
 &lt;th&gt;Part&lt;/th&gt;
 &lt;th&gt;Price&lt;/th&gt;
 &lt;/tr&gt;
 &lt;/thead&gt;
 &lt;tbody&gt;
 &lt;tr&gt;
 &lt;td&gt;4&lt;/td&gt;
 &lt;td&gt;&lt;a href="http://www.watterott.com/de/Raspberry-Pi-3" target="_blank" rel="noreferrer"&gt;Raspberry Pi 3&lt;/a&gt;&lt;/td&gt;
 &lt;td&gt;4 * 38 EUR&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;4&lt;/td&gt;
 &lt;td&gt;&lt;a href="http://www.amazon.de/dp/B013UDL5RU" target="_blank" rel="noreferrer"&gt;Micro SD Card 32 GB&lt;/a&gt;&lt;/td&gt;
 &lt;td&gt;4 * 11 EUR&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;1&lt;/td&gt;
 &lt;td&gt;&lt;a href="http://www.amazon.de/dp/B00XPUIDFQ" target="_blank" rel="noreferrer"&gt;WLAN Router&lt;/a&gt;&lt;/td&gt;
 &lt;td&gt;22 EUR&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;4&lt;/td&gt;
 &lt;td&gt;&lt;a href="http://www.amazon.de/dp/B016BEVNK4" target="_blank" rel="noreferrer"&gt;USB wires&lt;/a&gt;&lt;/td&gt;
 &lt;td&gt;9 EUR&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;1&lt;/td&gt;
 &lt;td&gt;&lt;a href="http://www.amazon.de/dp/B00PTLSH9G" target="_blank" rel="noreferrer"&gt;Power Supply&lt;/a&gt;&lt;/td&gt;
 &lt;td&gt;30 EUR&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;1&lt;/td&gt;
 &lt;td&gt;&lt;a href="http://www.amazon.de/dp/B00NB1WPEE" target="_blank" rel="noreferrer"&gt;Case&lt;/a&gt;&lt;/td&gt;
 &lt;td&gt;10 EUR&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;3&lt;/td&gt;
 &lt;td&gt;&lt;a href="http://www.amazon.de/dp/B00NB1WQZW" target="_blank" rel="noreferrer"&gt;Intermediate Case Plate&lt;/a&gt;&lt;/td&gt;
 &lt;td&gt;3 * 7 EUR&lt;/td&gt;
 &lt;/tr&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;All in all, a 4 node Pi cluster for &lt;strong&gt;288 EUR&lt;/strong&gt; (as of April 2016). Not so bad.&lt;/p&gt;
&lt;p&gt;Some remarks:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Using WiFi for the connection has the big advantage that the Raspberry Pi 3&amp;rsquo;s integrated BCM43438 WiFi chip doesn&amp;rsquo;t go over USB, which saves valuable bandwidth for IO in general. That way you are able to get ~25 MB/s for disk IO and network traffic, respectively. And fewer cables, of course: for demos you only need to plug in the power wires ;-)&lt;/li&gt;
&lt;li&gt;A class 10 Micro SD card is recommended, but it doesn&amp;rsquo;t have to be the fastest in the world since the USB bus only allows around 35 MB/s anyway.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 class="relative group"&gt;Initial Pi Setup
 &lt;div id="initial-pi-setup" class="anchor"&gt;&lt;/div&gt;
 
 &lt;span
 class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100 select-none"&gt;
 &lt;a class="text-primary-300 dark:text-neutral-700 !no-underline" href="#initial-pi-setup" aria-label="Anchor"&gt;#&lt;/a&gt;
 &lt;/span&gt;
 
&lt;/h2&gt;
&lt;p&gt;Most of the installation is automated with &lt;a href="https://www.ansible.com/" target="_blank" rel="noreferrer"&gt;Ansible&lt;/a&gt;. However, the initial setup is a bit more involved. It can certainly be improved (e.g. automatic filesystem expansion of the initial Raspbian setup); if you have ideas how to improve this, please open issues and PRs on &lt;a href="https://github.com/Project31/ansible-kubernetes-openshift-pi3" target="_blank" rel="noreferrer"&gt;Project31/ansible-kubernetes-openshift-pi3&lt;/a&gt;. Several base distributions have been tried out, and it turned out that the most stable setup is based on a stock Raspbian. Unfortunately it doesn&amp;rsquo;t provide a headless WLAN setup as is possible with the latest &lt;a href="https://github.com/hypriot/image-builder-rpi/releases/latest" target="_blank" rel="noreferrer"&gt;Hypriot&lt;/a&gt; images, but for the moment it is much more stable (I had strange kernel panics and 200% CPU load issues with the Hypriot image for no obvious reason). Since this is a one-time effort, let&amp;rsquo;s use Raspbian. If you want to try out the Hypriot image, there&amp;rsquo;s an &lt;a href="https://github.com/Project31/ansible-kubernetes-openshift-pi3/tree/hypriot" target="_blank" rel="noreferrer"&gt;experimental branch&lt;/a&gt; of the Ansible playbooks which can be used with Hypriot. I will certainly retry Hypriot OS sometime later.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Download the latest Raspbian image and store it as &lt;code&gt;raspbian.zip&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt; curl -L https://downloads.raspberrypi.org/raspbian_lite_latest \
 -o raspbian.zip
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Install Hypriot&amp;rsquo;s &lt;a href="https://github.com/hypriot/flash" target="_blank" rel="noreferrer"&gt;flash&lt;/a&gt; installer script. Follow the directions on the installation page.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Insert your Micro SD card into your desktop computer (possibly via an adapter) and run&lt;/p&gt;
&lt;pre&gt;&lt;code&gt; flash raspbian.zip
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You will be asked which device to write to. Check this carefully: if you select the wrong device, you could destroy your desktop OS. Typically it&amp;rsquo;s something like &lt;code&gt;/dev/disk2&lt;/code&gt; on OS X, but this depends on the number of hard drives you have.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Insert the Micro SD card into your Raspberry Pi and connect it to a monitor and keyboard. Boot up. Log in with &lt;em&gt;pi&lt;/em&gt; / &lt;em&gt;raspberry&lt;/em&gt;. Then:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt; raspi-config --expand-rootfs
 vi /etc/wpa_supplicant/wpa_supplicant.conf
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;and add your WLAN credentials:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt; network={
 ssid=&amp;quot;MySSID&amp;quot;
 psk=&amp;quot;s3cr3t&amp;quot;
 }
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Reboot&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Repeat steps 2 to 5 for each Micro SD card.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
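&lt;p&gt;Once all Pis are flashed and on the WLAN, a quick ping sweep from the desktop confirms they came up. The addresses below are the DHCP reservations used in my setup (an assumption; adapt them to your own network):&lt;/p&gt;

```shell
#!/bin/sh
# Ping each expected Pi address once; assumes the 192.168.23.0/24 network
# described in this post. Adjust the addresses for your own setup.
sweep() {
  for i in 0 1 2 3; do
    ip="192.168.23.20$i"
    if ping -c 1 -W 1 "$ip" >/dev/null 2>&1; then
      echo "$ip up"
    else
      echo "$ip down"
    fi
  done
}
sweep
```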

&lt;h2 class="relative group"&gt;Network Setup
 &lt;div id="network-setup" class="anchor"&gt;&lt;/div&gt;
 
 &lt;span
 class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100 select-none"&gt;
 &lt;a class="text-primary-300 dark:text-neutral-700 !no-underline" href="#network-setup" aria-label="Anchor"&gt;#&lt;/a&gt;
 &lt;/span&gt;
 
&lt;/h2&gt;
&lt;p&gt;It is now time to configure your WLAN router. This of course depends on which router you use. The following instructions are based on a &lt;a href="http://www.tp-link.com/en/products/details/cat-9_TL-WR802N.html" target="_blank" rel="noreferrer"&gt;TP-Link TL-WR802N&lt;/a&gt;, which is quite inexpensive but absolutely fine for our purposes since it sits very close to the cluster and my notebook anyway.&lt;/p&gt;
&lt;p&gt;First of all you need to setup the SSID and password. Use the same credentials with which you have configured your images.&lt;/p&gt;
&lt;p&gt;In my setup, I span a private network &lt;code&gt;192.168.23.0/24&lt;/code&gt; for the Pi cluster, which my MacBook also joins via its integrated WiFi.&lt;/p&gt;
&lt;p&gt;The addresses I have chosen are:&lt;/p&gt;
&lt;table&gt;
 &lt;tbody&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;code&gt;192.168.23.1&lt;/code&gt;&lt;/td&gt;
 &lt;td&gt;WLAN Router&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;code&gt;192.168.23.100&lt;/code&gt;&lt;/td&gt;
 &lt;td&gt;MacBook&amp;rsquo;s WLAN&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;code&gt;192.168.23.200&lt;/code&gt; &amp;hellip; &lt;code&gt;192.168.23.203&lt;/code&gt;&lt;/td&gt;
 &lt;td&gt;Raspberry Pis&lt;/td&gt;
 &lt;/tr&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;The MacBook is setup for NAT and forwarding from this private network to the internet. This &lt;a href="https://github.com/Project31/ansible-kubernetes-openshift-pi3/blob/master/tools/setup_nat_on_osx.sh" target="_blank" rel="noreferrer"&gt;script&lt;/a&gt; helps in setting up the forwarding and NAT rules on OS X.&lt;/p&gt;
&lt;p&gt;In order to configure your WLAN router you need to connect to it according to its setup instructions. The router is set up in &lt;strong&gt;Access Point&lt;/strong&gt; mode with DHCP enabled. As soon as the MACs of the Pis are known (you can see them as soon as the Pis connect for the first time via WiFi), I configured them to always get the same DHCP lease. For the TL-WR802N this can be done in the configuration section &lt;em&gt;DHCP -&amp;gt; Address Reservation&lt;/em&gt;. In &lt;em&gt;DHCP -&amp;gt; DHCP Settings&lt;/em&gt; the default gateway is set to &lt;code&gt;192.168.23.100&lt;/code&gt;, which is my notebook&amp;rsquo;s WLAN IP.&lt;/p&gt;
&lt;p&gt;Start up all nodes; you should be able to ping every node in your cluster. For convenience I added &lt;code&gt;n0&lt;/code&gt; &amp;hellip; &lt;code&gt;n3&lt;/code&gt; to my notebook&amp;rsquo;s &lt;code&gt;/etc/hosts&lt;/code&gt;, pointing to &lt;code&gt;192.168.23.200&lt;/code&gt; &amp;hellip; &lt;code&gt;192.168.23.203&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;You should be able to ssh into every Pi with user &lt;em&gt;pi&lt;/em&gt; and password &lt;em&gt;raspberry&lt;/em&gt;. Also, if you set up the forwarding on your desktop properly, you should be able to ping the outside world from within each Pi.&lt;/p&gt;
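&lt;p&gt;Both steps can be scripted from the desktop. The sketch below generates the &lt;code&gt;/etc/hosts&lt;/code&gt; entries and then tries each node over SSH; the names and IPs are the ones from my setup (assumptions, adapt them to yours):&lt;/p&gt;

```shell
#!/bin/sh
# Map n0..n3 to 192.168.23.200..203 (the DHCP reservations from above).
pi_hosts() {
  for i in 0 1 2 3; do
    printf '192.168.23.20%d n%d\n' "$i" "$i"
  done
}
pi_hosts   # review the output, then append it: pi_hosts | sudo tee -a /etc/hosts

# Smoke test: every node should answer over SSH (password: raspberry).
for h in n0 n1 n2 n3; do
  ssh -o ConnectTimeout=3 "pi@$h" hostname 2>/dev/null \
    || echo "cannot reach $h"
done
```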

&lt;h2 class="relative group"&gt;Ansible Playbooks
 &lt;div id="ansible-playbooks" class="anchor"&gt;&lt;/div&gt;
 
 &lt;span
 class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100 select-none"&gt;
 &lt;a class="text-primary-300 dark:text-neutral-700 !no-underline" href="#ansible-playbooks" aria-label="Anchor"&gt;#&lt;/a&gt;
 &lt;/span&gt;
 
&lt;/h2&gt;
&lt;p&gt;After this initial setup is done, the next step is to initialize the base system with Ansible. You will need Ansible 2 installed on your desktop (e.g. &lt;code&gt;brew install ansible&lt;/code&gt; when running on OS X).&lt;/p&gt;

&lt;h3 class="relative group"&gt;Ansible Configuration
 &lt;div id="ansible-configuration" class="anchor"&gt;&lt;/div&gt;
 
 &lt;span
 class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100 select-none"&gt;
 &lt;a class="text-primary-300 dark:text-neutral-700 !no-underline" href="#ansible-configuration" aria-label="Anchor"&gt;#&lt;/a&gt;
 &lt;/span&gt;
 
&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Checkout the Ansible playbooks:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt; git clone https://github.com/Project31/ansible-kubernetes-openshift-pi3.git k8s-pi
 cd k8s-pi
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Copy over &lt;code&gt;hosts.example&lt;/code&gt; and adapt it to your needs:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt; cp hosts.example hosts
 vi hosts
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;There are three Ansible groups which are referred to in the playbooks:&lt;/p&gt;
&lt;table&gt;
 &lt;tbody&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;strong&gt;pis&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;All cluster nodes&lt;/td&gt;
 &lt;td&gt;&lt;code&gt;n0&lt;/code&gt;, &lt;code&gt;n1&lt;/code&gt;, &lt;code&gt;n2&lt;/code&gt;, &lt;code&gt;n3&lt;/code&gt;&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;strong&gt;master&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;Master node&lt;/td&gt;
 &lt;td&gt;&lt;code&gt;n0&lt;/code&gt;&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;strong&gt;nodes&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;All nodes which are not the master&lt;/td&gt;
 &lt;td&gt;&lt;code&gt;n1&lt;/code&gt;, &lt;code&gt;n2&lt;/code&gt;, &lt;code&gt;n3&lt;/code&gt;&lt;/td&gt;
 &lt;/tr&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Copy over the configuration and adapt it.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt; cp config.yml.example config.yml
 vi config.yml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You should at least put in your WLAN credentials, but you are also free to adapt the other values.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
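&lt;p&gt;For illustration, an inventory matching those three groups might look like the sketch below. This is only a guess at the shape; &lt;code&gt;hosts.example&lt;/code&gt; in the repository is the authoritative template and may carry additional variables:&lt;/p&gt;

```ini
# Hypothetical Ansible inventory for the four Pis -- check hosts.example
# from the repository for the actual format and variables it expects.
[pis]
n0
n1
n2
n3

[master]
n0

[nodes]
n1
n2
n3
```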

&lt;h3 class="relative group"&gt;Basic Node Setup
 &lt;div id="basic-node-setup" class="anchor"&gt;&lt;/div&gt;
 
 &lt;span
 class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100 select-none"&gt;
 &lt;a class="text-primary-300 dark:text-neutral-700 !no-underline" href="#basic-node-setup" aria-label="Anchor"&gt;#&lt;/a&gt;
 &lt;/span&gt;
 
&lt;/h3&gt;
&lt;p&gt;If you have already created a cluster with these playbooks and want to start afresh, please be sure to remove the old host keys from your &lt;code&gt;~/.ssh/known_hosts&lt;/code&gt;. You should be able to ssh into each of the nodes without warnings. Also, you must be able to reach the internet from the nodes.&lt;/p&gt;
&lt;p&gt;In the next step the basic setup (without Kubernetes) is performed. This is done by&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;ansible-playbook -k -i hosts setup.yml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;When you are prompted for the password, use &lt;em&gt;raspberry&lt;/em&gt;. You will probably also need to confirm the SSH authenticity for each host with &lt;em&gt;yes&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;The following steps will be applied by this command (which may take a bit):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Docker will be installed from the Hypriot repositories&lt;/li&gt;
&lt;li&gt;Your public SSH key is copied over to &lt;em&gt;pi&amp;rsquo;s&lt;/em&gt; &lt;code&gt;authorized_keys&lt;/code&gt; and the user&amp;rsquo;s password is taken from &lt;code&gt;config.yml&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Some extra tools are installed for your convenience and some benchmarking:
&lt;ul&gt;
&lt;li&gt;hdparm&lt;/li&gt;
&lt;li&gt;iperf&lt;/li&gt;
&lt;li&gt;mtr&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;The hostname is set to the configured node name. Also, &lt;code&gt;/etc/hosts&lt;/code&gt; is set up to contain all nodes with their short names.&lt;/li&gt;
&lt;li&gt;A swapfile is enabled (just in case)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;With this basic setup you already have a working Docker environment.&lt;/p&gt;
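&lt;p&gt;A quick way to confirm the Docker part worked is to query the daemon on every node over SSH (a sketch; it assumes your SSH key was installed by the playbook and the &lt;code&gt;n0&lt;/code&gt; &amp;hellip; &lt;code&gt;n3&lt;/code&gt; names from above):&lt;/p&gt;

```shell
#!/bin/sh
# Ask each node for its Docker version; one line of output per node.
check_docker() {
  for h in n0 n1 n2 n3; do
    ssh -o BatchMode=yes -o ConnectTimeout=3 "pi@$h" 'docker -v' 2>/dev/null \
      || echo "docker not reachable on $h"
  done
}
check_docker
```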
&lt;p&gt;&lt;strong&gt;Now it&amp;rsquo;s time to reboot the whole cluster since some required boot params have been added. Pull the power wires and plug them in again.&lt;/strong&gt;&lt;/p&gt;

&lt;h3 class="relative group"&gt;Kubernetes Setup
 &lt;div id="kubernetes-setup" class="anchor"&gt;&lt;/div&gt;
 
 &lt;span
 class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100 select-none"&gt;
 &lt;a class="text-primary-300 dark:text-neutral-700 !no-underline" href="#kubernetes-setup" aria-label="Anchor"&gt;#&lt;/a&gt;
 &lt;/span&gt;
 
&lt;/h3&gt;
&lt;p&gt;The final step for a working Kubernetes cluster is to run&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;ansible-playbook -i hosts kubernetes.yml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will install one master on &lt;code&gt;n0&lt;/code&gt; and three additional nodes &lt;code&gt;n1&lt;/code&gt;, &lt;code&gt;n2&lt;/code&gt;, &lt;code&gt;n3&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;The following features are enabled:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;etcd&lt;/code&gt;, &lt;code&gt;flanneld&lt;/code&gt; and &lt;code&gt;kubelet&lt;/code&gt; are installed as systemd services on the master&lt;/li&gt;
&lt;li&gt;&lt;code&gt;kubelet&lt;/code&gt; and &lt;code&gt;flanneld&lt;/code&gt; are installed as systemd services on the nodes&lt;/li&gt;
&lt;li&gt;Docker is configured to use the Flannel overlay network&lt;/li&gt;
&lt;li&gt;&lt;code&gt;kubectl&lt;/code&gt; is installed (and an alias &lt;code&gt;k&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If there are some issues when restarting services on the master, don&amp;rsquo;t worry. However, it is best to restart the master node &lt;code&gt;n0&lt;/code&gt; when this happens, because setting up the other nodes would fail if not all services are running on the master.&lt;/p&gt;
&lt;p&gt;After an initial installation it may take a while until all infrastructure Docker images have been loaded. Eventually you should be able to use &lt;code&gt;kubectl get nodes&lt;/code&gt; from e.g. &lt;code&gt;n0&lt;/code&gt;. If this works but you see only one node, please reboot the cluster, since some services may not have been started on the nodes (plug in the power cables again when &lt;code&gt;n0&lt;/code&gt; is ready).&lt;/p&gt;
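&lt;p&gt;Those checks can be bundled into a small script run from the desktop (a sketch; the unit names follow the list above, and SSH key access to the nodes is assumed):&lt;/p&gt;

```shell
#!/bin/sh
# Check the Kubernetes systemd units on each node, then ask the master for
# its view of the cluster; kubectl get nodes should eventually list all four.
check_cluster() {
  for h in n0 n1 n2 n3; do
    status=$(ssh -o BatchMode=yes -o ConnectTimeout=3 "pi@$h" \
      'systemctl is-active flanneld kubelet' 2>/dev/null) \
      || status="unreachable"
    echo "$h: $status"
  done
  ssh -o BatchMode=yes -o ConnectTimeout=3 pi@n0 'kubectl get nodes' 2>/dev/null \
    || echo "master: kubectl not ready yet"
}
check_cluster
```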

&lt;h3 class="relative group"&gt;Install SkyDNS
 &lt;div id="install-skydns" class="anchor"&gt;&lt;/div&gt;
 
 &lt;span
 class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100 select-none"&gt;
 &lt;a class="text-primary-300 dark:text-neutral-700 !no-underline" href="#install-skydns" aria-label="Anchor"&gt;#&lt;/a&gt;
 &lt;/span&gt;
 
&lt;/h3&gt;
&lt;p&gt;For service discovery via DNS you should finally install the SkyDNS addon, but only when the cluster is running, i.e. the master must be up and listening. For this final step call:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;ansible-playbook -i hosts skydns.yml
&lt;/code&gt;&lt;/pre&gt;
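&lt;p&gt;To verify the addon afterwards, you can check from the master that the DNS pod is running and that cluster-internal names resolve (a sketch; the namespace and the record name are the conventional ones, so adapt them if your setup differs):&lt;/p&gt;

```shell
#!/bin/sh
# Check SkyDNS from the master node: list the kube-system pods and try to
# resolve the kubernetes service name via the cluster DNS.
skydns_check() {
  ssh -o BatchMode=yes -o ConnectTimeout=3 pi@n0 \
    'kubectl get pods --namespace=kube-system && nslookup kubernetes.default.svc.cluster.local' \
    2>/dev/null || echo "SkyDNS not reachable yet"
}
skydns_check
```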

&lt;h2 class="relative group"&gt;Wrap Up
 &lt;div id="wrap-up" class="anchor"&gt;&lt;/div&gt;
 
 &lt;span
 class="absolute top-0 w-6 transition-opacity opacity-0 -start-6 not-prose group-hover:opacity-100 select-none"&gt;
 &lt;a class="text-primary-300 dark:text-neutral-700 !no-underline" href="#wrap-up" aria-label="Anchor"&gt;#&lt;/a&gt;
 &lt;/span&gt;
 
&lt;/h2&gt;
&lt;p&gt;This has become a rather long recipe. I re-did everything from scratch within 60 minutes, so this could be considered a lower bound (because I had already done it several times :). The initial setup might be a bit flaky, but should be easy to fix. I&amp;rsquo;d love to hear your feedback, and maybe we can make it more stable afterwards. Remember, this is my first Ansible playbook :)&lt;/p&gt;
&lt;p&gt;Now go out, buy the parts, set up your own Kubernetes cluster and have fun :-)&lt;/p&gt;