
Announcing a new, Docker-based Hanlon-Microkernel

For several months now, we’ve been working hard on a new version of the Hanlon-Microkernel project with the goal of switching from our original Tiny-Core Linux based Microkernel to something that would be simpler to maintain and extend over time. This change was driven in part by the need to provide a simpler path for users who wanted to construct custom Hanlon Microkernels and in part by our own experience over time supporting the ‘standard’ Hanlon Microkernel. This post describes the changes that we have made to this end, as well as the corresponding changes that we had to make to the Hanlon project to support this new Microkernel type.

Why change?

When we were writing Razor (the tool that would become Hanlon) a few years ago, we searched long and hard for an in-memory Linux kernel that we could use for node discovery. Using a Linux kernel for node discovery gave us the ability to take advantage of tools that were already out there in the Linux/UNIX community (lshw, lscpu, facter, etc.) to discover the capabilities of the nodes being managed by Hanlon. Using an in-memory Linux kernel meant that we could iPXE-boot any node into our Microkernel without any side-effects that might damage anything that might already be on the node — an important consideration if we were to manage nodes that had already been provisioned with an operating system in an existing datacenter. As we have discussed previously, we eventually settled on Tiny-Core Linux as the base OS for our Microkernel.

Tiny-Core Linux (TCL) had several advantages over the other in-memory alternatives that were available at the time, including the fact that it was very small (the ISO for the ‘Core’ version of TCL weighed in at a mere 7 megabytes) and that, out of the box, it provided pre-built versions of most of the packages that we needed to run our Ruby-based discovery agent in the form of Tiny-Core Extensions (TCEs). All that was left was to construct a shell-script-based approach that would make it simpler for the typical user, with limited knowledge of Linux or UNIX system administration, to remaster the standard TCL ISO in order to build a ‘Microkernel ISO’ suitable for use with Hanlon. Things went quite well initially, but over time we started to notice issues with the approach we had chosen for building our Microkernel, and those issues became harder and harder to resolve.

Those issues boiled down to a few limitations in the way we were building our Microkernel — remastering a standard TCL ISO in order to construct a Microkernel ISO that included all of the dependencies needed for our discovery agent to run — and the static nature of that process. In short, those issues can be broken down into a few key areas of weakness in that approach:

  • Hardware support: when users started trying to use our Microkernel with some of the newer servers coming out on the market, they discovered that those nodes, when booted into the pre-packaged Microkernel that we had posted online, were not able to check in and register with the Hanlon server. When we dug deeper, we realized that the kernel modules for the NICs on those servers weren’t included in our pre-built Microkernel. We spent some time developing a mechanism that would give users the ability to add kernel modules to the Microkernel during the remastering process so that they could build a custom Microkernel that worked with their hardware, but that meant they had to use our remastering process to create their own custom ISOs (specific to their hardware). In spite of our efforts to make that process as simple as possible, we found it wasn’t easy (to say the least) for an inexperienced user to follow.
  • Customizing the TCL DHCP client: Things got a bit worse when we started trying to define a scale-out strategy for Razor. The team we were working with wanted to set up a hardware load-balancer in front of a set of Razor servers and then route requests to the various Razor servers using a round-robin algorithm. Unfortunately, the hardware load-balancer that was chosen wasn’t capable of running a PXE-boot server locally, and as a result our Microkernel was not able to discover the location of the Razor server using the next-server parameter it received back from the DHCP server (which pointed to the PXE-boot server, not the hardware load-balancer). We knew we could get around this by customizing the DHCP client in our Microkernel to parse additional options from the reply it received back from the DHCP server, but because TCL’s DHCP client is provided by BusyBox, that meant we would have to build our own customized version of the BusyBox binary and replace the one embedded in the standard TCL ISO with our new, customized build during our remastering process. While we were able to modify the remastering process to support this change fairly quickly, rebuilding BusyBox itself is not an exercise for the faint of heart since it requires cross-compilation on a separate Linux machine.
  • Updating our Microkernel to support newer versions of TCL: At the same time, we started to find bugs in Razor that were the result of known issues in a few of the TCEs that were maintained by the TCL community. Because we were using an older version of TCL, the TCEs we were downloading during the remastering process were built from older versions of the packages they contained. We resolved many of these issues by moving to a newer version of the TCL kernel, but that process wasn’t an easy one since it required significant changes to the remastering process itself to support changes in the boot process that had occurred between TCL 4.x and TCL 5.x (a process that took several weeks to get right).
  • Building custom TCEs: Not all of the issues we had with TCEs from the standard TCE repositories could be resolved by updating the TCL version we were basing our Microkernel on, and we also found ourselves wanting to include packages that we couldn’t find pre-built in those repositories. As a result, we quickly found ourselves in the business of building our own TCEs, then modifying our remastering process to allow for bundling of these locally-built TCEs into the remastered Microkernel ISOs. As was the case with rebuilding a customized version of BusyBox for our Microkernel, this was not an easy process for an inexperienced user to follow, and it led to even more time being spent on things that were not related to development of the Microkernel itself.

So, we knew we needed to make a change to how we built our Microkernel, and that left us with the question of what we should use as the basis for our new Microkernel platform. We knew we didn’t want to lose the features that had initially led us to choose TCL (a small, in-memory Linux kernel that provided us with a repository of the tools we needed for node discovery), but what, really, was our best alternative?

Times had changed

Fortunately for us, several technologies had come to the forefront in the two or three years since we conducted our original search. After giving the problem some thought, we realized that one of the easiest solutions, particularly from the point of view of a casual user of the Hanlon Microkernel, might actually be to convert our Microkernel Controller (the Ruby-based daemon running in the Microkernel that communicated with the Hanlon server) from a service running directly in a dynamically provisioned, in-memory Linux kernel to a service running in a Docker container within that same sort of dynamically provisioned, in-memory Linux kernel. By converting our Microkernel to a Docker image and running our Microkernel Controller in a Docker container based on that image, it would be very simple for a user to build their own version of the Hanlon Microkernel, customized for use in their environment. Plus, it would be even simpler for us to define an Automated Build for the Hanlon-Microkernel project in our cscdock organization on DockerHub so that users who wanted to use the standard Hanlon Microkernel could do so via simple ‘docker pull’ and ‘docker save’ commands.

With that thought in mind, we started looking more deeply at how much work it would be to convert our Microkernel Controller to something that could be run in a Docker container. The answer, as it turned out, was “not much”. The Microkernel Controller was already set up to run as a daemon process in a Linux environment and it didn’t have any significant dependencies on other, external services, so getting it to run inside a Docker container was a very simple task. The most difficult part of the process was setting things up so that facter could discover and report ‘facts’ about the host operating system instance rather than the ‘facts’ associated with the container environment it was running in. The solution involved a bit of sed-magic run against the facter gem after it was installed during the docker build process (so that it would look for the facts it reports in a non-standard location), cross-mounting the /proc, /dev, and /sys filesystems from the host as local directories in the Docker container’s filesystem, starting the container in privileged mode, and setting the container’s network to host mode so that the details of the host’s network were visible from within the container.
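
To make that setup a little more concrete, the sketch below shows the kind of ‘docker run’ invocation this implies. It is purely illustrative: the flags and the /host-proc, /host-dev, and /host-sys mount points come straight from the description above (and from the sed commands visible in the build output later in this post), but the exact way the container is launched on a node is handled for you by Hanlon and RancherOS.

# illustrative only: run the Microkernel Controller container in privileged,
# host-networking mode with the host's /proc, /dev, and /sys cross-mounted
# where the patched facter expects to find them
$ docker run -d --privileged --net=host \
    -v /proc:/host-proc -v /dev:/host-dev -v /sys:/host-sys \
    cscdock/hanlon-microkernel:3.0.0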

With those changes in place, we had a working instance of our Microkernel Controller running in a Docker container. All that remained was to determine which Docker image we wanted to base our Docker Microkernel Image off of and which operating system we wanted to use for the host operating system that the node would be iPXE-booted into.

It actually took a bit of digging to answer both of these questions, but the first was easier to answer than the second. As was the case with our initial analysis, we had some criteria in mind when making this decision:

  • The Docker image should be smaller than 256MB in size (to speed up delivery of the image to the node); smaller was considered better
  • Only Docker images that were being actively developed were considered
  • The Docker image should be based on a relatively recent Linux kernel so that we could be fairly confident that it would support the newer hardware we knew we would find in many modern data-centers
  • Since we knew we would be using facter as part of the node discovery process, the distribution that the Docker image was based on needed to include a standard package for a relatively recent release of Ruby
  • The distribution should also provide standard packages for the other tools needed for the node discovery process (lshw, lscpu, dmidecode, ipmitool, etc.) and provide access to tools that could be used to discover the network topology around the node using the Link Layer Discovery Protocol (LLDP)
  • The distribution that the Docker image was based on should be distributed under a commercial-friendly open-source license in order to support development of commercial versions of any extensions that might be developed moving forward

After looking at several of the alternatives available to us, we eventually settled on the GliderLabs Alpine Linux Docker image, which is:

  • very small (weighing in at a mere 5.25MB in size)
  • actively being developed (the most recent release was made about three months ago at the time this was being written)
  • based on a recent release of the Linux kernel (v3.18.20)
  • distributed under a relatively commercial-friendly GPLv2 license, a license that allows for development of commercial extensions of our Microkernel so long as those extensions are not bundled directly into the ISO.

Additionally, it provides pre-built packages for all of the tools needed by our Microkernel Controller (including recent versions of ruby, lshw, lscpu, dmidecode and ipmitool) through its apk package management tool.

For those interested in more details regarding this image, the GitHub page for the project used to build this image can be found here, and the README.md file on that page includes links to additional pages and documentation on the project.

Of course, we still needed an operating system

Now that we had a strategy for migrating our Microkernel Controller from a service running in an operating system to a service running in a Docker container, we were left with the question of which operating system we should use as the base for the new Hanlon Microkernel. Of course, we still had to consider the criteria we mentioned above (small, under active development, distributed under a commercial-friendly license, etc.) when choosing the Linux distribution to use as an operating system for our Microkernel container. Not only that, but we wanted a standard, in-memory distribution that could be used to iPXE-boot a node, with no modifications to the ISO necessary to run our Microkernel container.

With those constraints in mind, we started looking at alternatives. Initially, we felt CoreOS would provide us with the best small platform for our Microkernel (small being a relative concept here; even though a CoreOS ISO weighs in at 190MB, that’s still much smaller than the 450+MB LiveCD images of most major distributions). When we mentioned our search for a suitable, small OS that could run Docker containers to Aaron Huslage (@huslage) from Docker, he recommended we take a look at a relatively recent entry amongst small, in-memory Linux distributions: RancherOS. While it is still in beta, it is significantly smaller than the other distributions we were looking at (weighing in at a mere 22MB), it runs Docker natively (even the system services are run in their own Docker containers in RancherOS), and it’s distributed under a very commercial-friendly Apache v2 (APLv2) license. Given these advantages, we decided to use RancherOS rather than CoreOS as the base operating system for our Microkernel.

Building a new Microkernel

With the new platform selected, it was time to modify our Microkernel Controller so that it could be run in a Docker container. Since all of the tools required by our Microkernel Controller were available out of the box under Alpine Linux, this was really more of an exercise in getting rid of the code in the Microkernel that we didn’t need (mostly code that was specific to the work we had to do in the past to initialize the TCL platform) than any real modification of the Microkernel Controller itself.

Specifically we:

  • Removed the code that was associated with the process of building the ‘bundle file’ and replaced it with a Dockerfile (see the sketch just after this list)
  • Removed the code that was used to configure the old, TCL-based Microkernel during the boot process (this code was replaced by a cloud-config that was returned to the new Microkernel by Hanlon during the iPXE-boot process)
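
For reference, that Dockerfile boils down to the handful of steps you can see echoed in the build output later in this post; roughly (with the long RUN step wrapped here for readability):

# start from the GliderLabs Alpine Linux base image
FROM gliderlabs/alpine

# install the tools needed by the Microkernel Controller, then patch the facter
# gem so that it reads the host's /proc, /dev, and /sys via the cross-mounted
# /host-proc, /host-dev, and /host-sys paths
RUN apk update && \
    apk add bash sed dmidecode ruby ruby-irb open-lldp util-linux open-vm-tools sudo && \
    apk add lshw ipmitool --update-cache \
        --repository http://dl-3.alpinelinux.org/alpine/edge/testing/ --allow-untrusted && \
    echo "install: --no-rdoc --no-ri" > /etc/gemrc && \
    gem install facter json_pure daemons && \
    find /usr/lib/ruby/gems/2.2.0/gems/facter-2.4.4 -type f -exec sed -i 's:/proc/:/host-proc/:g' {} + && \
    find /usr/lib/ruby/gems/2.2.0/gems/facter-2.4.4 -type f -exec sed -i 's:/dev/:/host-dev/:g' {} + && \
    find /usr/lib/ruby/gems/2.2.0/gems/facter-2.4.4 -type f -exec sed -i 's:/host-dev/null:/dev/null:g' {} + && \
    find /usr/lib/ruby/gems/2.2.0/gems/facter-2.4.4 -type f -exec sed -i 's:/sys/:/host-sys/:g' {} +

# add the Microkernel Controller scripts and their supporting library files
ADD hnl_mk*.rb /usr/local/bin/
ADD hanlon_microkernel/*.rb /usr/local/lib/ruby/hanlon_microkernel/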

Overall, when these changes were made, we were able to reduce the size of the Hanlon-Microkernel codebase by more than 1400 lines of code. Not only that, but there were a few unexpected benefits, including:

  • Removing the need to use custom parameters in the DHCP response to pass parameters into our Microkernel so that it could check in with the Hanlon server. Because RancherOS (like CoreOS) supports the use of a cloud-config (passed to the kernel as a URL during the iPXE-boot process), we could pass all of the parameters that we used to pass to the Microkernel via DHCP directly to the Microkernel from the Hanlon server as part of that same cloud-config (see the sketch just after this list).
  • Configuring the Microkernel Controller correctly from the start. Again, we are able to pass the configuration of the Microkernel directly from the Hanlon server using that same cloud-config, so the Microkernel Controller is correctly configured from the start. Previously, we burned a default configuration into every Microkernel instance and then updated that configuration after the Microkernel checked in with Hanlon for the first time. Being able to pass the initial configuration to the Microkernel directly from the Hanlon server makes it much simpler to debug any issues that might arise prior to first check-in, since the log-level of the Microkernel controller can be set to Logger::DEBUG from the start, not just after the first check-in succeeds.
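
To make that hand-off a little more concrete, the sketch below shows roughly what the iPXE stage looks like. The URLs are placeholders (Hanlon generates the real iPXE script and cloud-config for each node), and the rancher.cloud_init.datasources kernel parameter shown is the mechanism RancherOS provides for pointing a booting node at a cloud-config URL.

#!ipxe
# illustrative only -- the actual script, paths, and cloud-config contents are
# generated by the Hanlon server for each node
kernel http://<hanlon-server>/<path-to>/rancheros/vmlinuz rancher.cloud_init.datasources=[url:http://<hanlon-server>/<path-to>/cloud-config]
initrd http://<hanlon-server>/<path-to>/rancheros/initrd
boot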

Not only that, but the shift from an ISO-based Microkernel to a Docker container-based Microkernel also simplified distribution of new releases of the Hanlon-Microkernel project. Since the Hanlon-Microkernel project is now built as a Docker image, we can now set up an Automated Build on DockerHub (under our cscdock organization in the cscdock/hanlon-microkernel repository) that will trigger whenever we merge changes into the master branch of the Hanlon-Microkernel project. In fact, we’ve already set up a build there, and obtaining a local copy of the Hanlon Microkernel image that is suitable for use with the Hanlon server is as simple as running the following pair of commands:

$ docker pull cscdock/hanlon-microkernel
Using default tag: latest
latest: Pulling from cscdock/hanlon-microkernel
3857f5237e43: Pull complete
9606ec958876: Pull complete
42b186ff3b3c: Pull complete
4d46659c683d: Pull complete
Digest: sha256:19dcb9c0f5d4e55202c46eaff7f4b3cc5ac1d2e90e033ae1e81412665ab6a240
Status: Downloaded newer image for cscdock/hanlon-microkernel:latest
$ docker save cscdock/hanlon-microkernel > new_mk_image.tar

The result of that docker save command will be a tarfile that you can use as one of the inputs (along with a RancherOS ISO) when adding a Microkernel to Hanlon (more on this, below).

We are also creating standard Docker images from the Hanlon-Microkernel project (starting with the v3.0.0 release) under that same repository on DockerHub. To retrieve a specific build of the Docker Microkernel Image, you’d simply modify the commands shown above to include the tag for that version. The tags we use for these version-specific builds in the DockerHub repository will be the same as those in the GitHub repository, but without the ‘v’ prefix, so the commands to retrieve the build from the v3.0.0 Hanlon-Microkernel release (and save that image in a form usable with the Hanlon server) would look like the following:

$ docker pull cscdock/hanlon-microkernel:3.0.0
3.0.0: Pulling from cscdock/hanlon-microkernel
3857f5237e43: Pull complete
40806b4dc54b: Pull complete
ed09cd42dec4: Pull complete
d346b8255728: Pull complete
Digest: sha256:45206e7407251a18db5ddd88b1d1198106745c43e92cd989bae6d38263b43665
Status: Downloaded newer image for cscdock/hanlon-microkernel:3.0.0
$ docker save cscdock/hanlon-microkernel:3.0.0 > new_mk_image-3.0.0.tar

As was the case in the previous example, the output of the docker save command will be a tarfile suitable for use as one of the arguments (along with a RancherOS ISO) when adding a Microkernel instance to a Hanlon server.

Building your own (Docker-based) Hanlon Microkernel

As we mentioned earlier, one of our goals in shifting from an ISO-based Hanlon Microkernel to a Docker container-based Hanlon Microkernel was to drastically simplify the process for users who were interested in creating their own, custom Microkernel images. In short, after a few weeks of experience with the new process ourselves, we think we’ve met, and hopefully even surpassed, that goal with the new Hanlon-Microkernel release.

Customizing the Microkernel is now as simple as cloning a copy of the Hanlon-Microkernel project to a local directory (using a git clone command), making your modifications to the codebase, and then running a ‘docker build’ command to build your new, custom version of the standard Hanlon-Microkernel. The changes you make might be changes to the source code for the Microkernel Controller itself (to fix a bug or add additional capabilities) or they might involve modifications to the Dockerfile (e.g. to add additional kernel modules needed for some specialized hardware that is only used locally), but no longer will users have to understand all of the details of remastering a Tiny-Core Linux ISO to build their own version of the Hanlon-Microkernel. Now, building a new custom version of the Microkernel is as simple as the following:

$ docker build -t hanlon-mk-image:3.0.0 .
Sending build context to Docker daemon 57.51 MB
Step 0 : FROM gliderlabs/alpine
---> 2cc966a5578a
Step 1 : RUN apk update && apk add bash sed dmidecode ruby ruby-irb open-lldp util-linux open-vm-tools sudo && apk add lshw ipmitool --update-cache --repository http://dl-3.alpinelinux.org/alpine/edge/testing/ --allow-untrusted && echo "install: --no-rdoc --no-ri" > /etc/gemrc && gem install facter json_pure daemons && find /usr/lib/ruby/gems/2.2.0/gems/facter-2.4.4 -type f -exec sed -i 's:/proc/:/host-proc/:g' {} + && find /usr/lib/ruby/gems/2.2.0/gems/facter-2.4.4 -type f -exec sed -i 's:/dev/:/host-dev/:g' {} + && find /usr/lib/ruby/gems/2.2.0/gems/facter-2.4.4 -type f -exec sed -i 's:/host-dev/null:/dev/null:g' {} + && find /usr/lib/ruby/gems/2.2.0/gems/facter-2.4.4 -type f -exec sed -i 's:/sys/:/host-sys/:g' {} +
---> Running in 4bfa520b64f9
fetch http://alpine.gliderlabs.com/alpine/v3.2/main/x86_64/APKINDEX.tar.gz
v3.2.3-105-ge9ebe94 [http://alpine.gliderlabs.com/alpine/v3.2/main]
OK: 5290 distinct packages available
(1/35) Installing ncurses-terminfo-base (5.9-r3)
(2/35) Installing ncurses-libs (5.9-r3)
(3/35) Installing readline (6.3.008-r0)
(4/35) Installing bash (4.3.33-r0)
(5/35) Installing dmidecode (2.12-r0)
(6/35) Installing libconfig (1.4.9-r1)
(7/35) Installing libnl (1.1.4-r0)
(8/35) Installing open-lldp (0.9.45-r2)
(9/35) Installing fuse (2.9.4-r0)
(10/35) Installing libgcc (4.9.2-r5)
(11/35) Installing libffi (3.2.1-r0)
(12/35) Installing libintl (0.19.4-r1)
(13/35) Installing glib (2.44.0-r1)
(14/35) Installing libstdc++ (4.9.2-r5)
(15/35) Installing icu-libs (55.1-r1)
(16/35) Installing libproc (3.3.9-r0)
(17/35) Installing libcom_err (1.42.13-r0)
(18/35) Installing krb5-conf (1.0-r0)
(19/35) Installing keyutils-libs (1.5.9-r1)
(20/35) Installing libverto (0.2.5-r0)
(21/35) Installing krb5-libs (1.13.1-r1)
(22/35) Installing libtirpc (0.3.0-r1)
(23/35) Installing open-vm-tools (9.4.6_p1770165-r4)
Executing open-vm-tools-9.4.6_p1770165-r4.pre-install
(24/35) Installing gdbm (1.11-r0)
(25/35) Installing yaml (0.1.6-r1)
(26/35) Installing ruby-libs (2.2.2-r0)
(27/35) Installing ruby (2.2.2-r0)
(28/35) Installing ruby-irb (2.2.2-r0)
(29/35) Installing sed (4.2.2-r0)
(30/35) Installing sudo (1.8.15-r0)
(31/35) Installing libuuid (2.26.2-r0)
(32/35) Installing libblkid (2.26.2-r0)
(33/35) Installing libmount (2.26.2-r0)
(34/35) Installing ncurses-widec-libs (5.9-r3)
(35/35) Installing util-linux (2.26.2-r0)
Executing busybox-1.23.2-r0.trigger
Executing glib-2.44.0-r1.trigger
OK: 63 MiB in 50 packages
fetch http://dl-3.alpinelinux.org/alpine/edge/testing/x86_64/APKINDEX.tar.gz
fetch http://alpine.gliderlabs.com/alpine/v3.2/main/x86_64/APKINDEX.tar.gz
(1/2) Installing ipmitool (1.8.13-r0)
(2/2) Installing lshw (02.17-r1)
Executing busybox-1.23.2-r0.trigger
OK: 70 MiB in 52 packages
Successfully installed facter-2.4.4
Successfully installed json_pure-1.8.3
Successfully installed daemons-1.2.3
3 gems installed
---> e7a8344fda5a
Removing intermediate container 4bfa520b64f9
Step 2 : ADD hnl_mk*.rb /usr/local/bin/
---> c963bb236983
Removing intermediate container 0a42b371b2e9
Step 3 : ADD hanlon_microkernel/*.rb /usr/local/lib/ruby/hanlon_microkernel/
---> ac4cdf004a25
Removing intermediate container 1b66c3efd788
Successfully built ac4cdf004a25
$ docker save hanlon-mk-image:3.0.0 > hanlon-mk-image.tar

As was the case in the examples shown previously, the result of the ‘docker save’ command will be a tarfile suitable for use as one of the inputs required when adding a new Microkernel instance to a Hanlon server.

One final note on building your own Microkernel…it is critical that any Microkernel image you build be tagged with a version compatible with the semantic versioning used internally by Hanlon. In the example shown above, you can see that we actually tagged the Docker image we built using a fixed string (3.0.0) for the version.

Of course, instead of using a fixed string you could use the git describe command, combined with a few awk or sed commands, to generate a string that would be quite suitable for use as a tag in a docker build command. Here is an example of just such a command pipeline:

git describe --tags --dirty --always | sed -e 's@-@_@' | sed -e 's/^v//'

This command pipeline returns a string that includes information from the most recent git tag, the number of commits since that tag, the most recent commit ID for the repository, and a ‘-dirty’ suffix if there are currently uncommitted changes in the repository. For example, if this command pipeline returns the following string:

2.0.1_13-g3eade33-dirty

that would indicate that the repository is 13 commits ahead of the commit that is tagged as ‘v2.0.1’, that the abbreviated ID of the latest commit is ‘3eade33’ (the ‘g’ prefix added by git describe simply indicates a git hash), and that there are currently uncommitted changes in the repository. Of course, if you use the same command in a repository that has just been tagged as v3.0.0, then the output of that command pipeline would be much simpler:

3.0.0

So, the ‘git describe’ command pipeline shown above provides us with a mechanism for generating a semantic version compatible tag for images that are built using a ‘docker build’ command. Here’s an example:

docker build -t hanlon-mk-image:`git describe --tags --dirty --always | sed -e 's@-@_@' | sed -e 's/^v//'` .

Using our new Microkernel with Hanlon

So now we've got a tarfile containing our new Docker Microkernel Image; what's the next step? How exactly do we build a Hanlon Microkernel image containing our Microkernel Controller? This is where the changes to Hanlon (v3.0.0) come in, so perhaps a brief description of those changes is in order.

The first thing we had to change in Hanlon was its concept of exactly what a Microkernel image was. Prior to this release, an image in Hanlon always consisted of one and only one input file: the ISO that represented the image in question. A Hanlon image was built from a single ISO, regardless of whether it was an OS image, an ESX image, a Xen-server image, or a Microkernel image. The only difference as far as Hanlon was concerned was that the contents of the ISO (e.g. the location of the kernel and ramdisk files) would change from one type of ISO to another, but up until the latest release a Hanlon image was built from a single ISO, period.

With this new release, a Microkernel image is significantly different from the other image types defined in Hanlon. A Microkernel image now consists of two input files, the RancherOS ISO containing the boot image for a node and the Docker image file containing the Microkernel Controller. So, while the command to add a Microkernel in previous versions of Hanlon (v2.x and older) looked like this:

hanlon image add -t mk -p ~/iso-build/v2.0.1/hnl_mk_debug-image.2.0.1.iso

(note the single argument, passed using the -p flag, that provides Hanlon with the path on the local filesystem where Hanlon can find the Microkernel ISO), the new version of Hanlon requires an additional argument:

hanlon image add -t mk -p /tmp/rancheros-v0.4.1.iso -d /tmp/cscdock-mk-image.tar.bz2

In this example, you can see that not only must the user provide the path on the local filesystem where Hanlon can find an instance of a RancherOS ISO (using the -p flag) when adding a new Microkernel instance to Hanlon, but they must also provide the path to a tarfile containing an instance of the Docker Microkernel Image file that we saved previously (using the -d flag). These two files, together, constitute a Hanlon Microkernel in the new version of Hanlon, and both pieces must be provided to successfully add a Microkernel instance to a Hanlon server.

So, what does the future hold?

Hopefully, it’s apparent that our shift from an ISO-based Hanlon Microkernel to a Docker container-based Hanlon Microkernel has successfully resolved the issues we set out to resolve. It is now much simpler for even an inexperienced Hanlon user to rebuild a standard Docker Microkernel Image locally or to build their own custom Docker Microkernel Images. Not only that, but it is now much easier to extend or update the existing Microkernel (e.g. moving the Microkernel to a newer Alpine Linux build in order to support newer hardware). Finally, shifting over to a modern OS that can be configured at boot time using a cloud-config URL and that can run our Microkernel Controller in a Docker container has meant that we could significantly simplify the codebase in our Hanlon-Microkernel project.

This same, modern platform may also provide us with opportunities to extend the behavior of the Hanlon Microkernel at runtime, something that we previously could only imagine. For example, there have been a number of ideas for the Microkernel that we have discussed over the past two or three years that we really couldn’t imagine implementing, given the static nature of the ISO-based Microkernel we were using. Now that we’re working with a much more dynamic platform for our Microkernel, perhaps it’s time to revisit some of those ideas — e.g. creating Microkernel ‘stacks’ so that a Microkernel can behave differently, but only for a single boot or a finite sequence of boots.

Only time will tell, but it’s a brave new world for Hanlon and the Hanlon Microkernel…
