Why you should help me create the next operating system (using containers)
People can hold roughly seven units of information at a time in working memory. Computers can take on complex tasks and relieve cognitive load; however, they can also tax our working memory when we have to think through how to perform each step of a problem on an operating system. It's worth considering how much working memory you have left over when you're on a computer. What about tablets, phones, virtual reality headsets, or optical displays?
Why are people so fascinated by Mac OS over Windows, or Linux? Why do some prefer Android over iPhones? Would everyone buy Apple if they could afford it? What components of an operating system make them so personal that people have a preference? How are they different? Do you operate the system, or does it operate you?
Operating systems should always be evolving, yet any significant change takes years.
These are the types of things I research and think about often. As a result, I've begun to develop an operating system that applies these principles. In this post, you can read the conceptual design plans.
Linux is a freely available operating system that already exists in many flavors. macOS is based on Unix and BSD, Android is built on the Linux kernel, and Windows has its own kernel; but none of them are good enough, and I'll explain why.
What is an operating system?
Operating systems provide a platform for developers to create the applications you use. Additionally, there are development kits that make it easier to create software through APIs and libraries, but developers often use what they were taught and don't understand the limitations a specific kit or language may have. If you build an application for Windows, you'll face significant challenges when you want to run it on Mac. The same goes for creating an iPhone app and then wanting to run it on Android.
If you don't plan ahead, you'll end up developing two independent applications, and changes to one will not be reflected in the other. Windows 8 received a lot of criticism because Microsoft merged its codebase to support all of its devices but, in the process, removed many of the features desktop users had come to expect. It didn't scale for business productivity and instead became more of a glamorized tablet.
This is all part of something called user experience. Without synergy, you'll quickly grow frustrated when something isn't meeting your needs; in this case, the need to work efficiently with the most free working memory and the least cognitive load. Mac has more synergy because Apple developed a menu system that is consistent between applications and a decent toolkit for building similar-looking applications. Ubuntu Touch uses the same menus between phones, tablets, and desktops, yet each device looks different and provides a unique experience. How?
User interface markup languages, such as QML, allow you to create windows and menus that use the same code no matter what device you’re on. That means that you can expect the same options and features whether you’re using a wearable display or desktop computer, yet each provides a vastly different user experience.
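As a taste of what that looks like, here is a minimal QML sketch of a menu defined once and rendered natively by the toolkit on whatever device it runs on. The window title and menu entries are placeholders of my own, not part of any existing project:

```qml
// Sketch: one menu definition, adapted by the toolkit per device.
// On a desktop this renders as a traditional menu bar; a mobile
// shell can present the same items in its own style.
import QtQuick 2.12
import QtQuick.Controls 2.12

ApplicationWindow {
    visible: true
    title: "Adaptive menu"

    menuBar: MenuBar {
        Menu {
            title: "File"
            MenuItem { text: "Open"; onTriggered: console.log("open") }
            MenuItem { text: "Save"; onTriggered: console.log("save") }
        }
    }
}
```

The point is that the application declares *what* options exist; *how* they are presented is left to the platform.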
Graphical User Interfaces
The graphical user interface (GUI) and windowing system provide the designs you're accustomed to. Linux has several you can use, although my focus is on Qt, since it compiles natively across multiple platforms and is the most efficient actively developed open-source framework I know of to date.
What if we could style applications the way we style web pages? Anyone familiar with website development has heard of CSS. Certainly, developers can implement their own interfaces; however, wouldn't it make more sense to let the OS and users change the styling based on preference and the device you're on?
You can, and that is what I want to do. Imagine creating an application without having to worry about design: you simply create the options and functions, and everything else is already built in. Now envision creating a graphical interface for a command-line tool using a simple editor and basic logic. How about using the same editor to extend a graphical application that already exists? Now imagine being able to upload and manage your designs so that anyone can use them, rate them, provide feedback, and contribute.
If you're in tech already, then you've probably heard of virtual machines; maybe you've used VMware, VirtualBox, or Parallels. You've certainly heard about cloud technology. If you're lucky, you've heard about containers. Containers are the most efficient way to run and manage secure applications without performance degradation, and the daemon does much more than that; we'll get into that later. My goal is to use a containerized user-space Wayland compositor and build a QML interface on top of it that provides a highly optimized user experience, one capable of rendering augmented and holographic menus while also providing an optimized experience on desktop computers.
Did you know that virtual machine functionality is built into the Linux kernel? That you can use it to run 3D games in Windows on Linux? That you can run macOS on Linux? The only issue is that there is no single tool; no graphical application ties this functionality together. There is an amazing tool called virsh, but it is command-line based. The solution I'm proposing will change this and make it easy to create a whole slew of new graphical tools and menus (or combinations thereof). You could manage your Amazon EC2 instances using their native tool with an interface that a user built, then add functionality to back them up using another application from a menu entry that you built.
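For a taste of what virsh can already do from the command line, here is a short illustrative session. The domain name "win10" is a placeholder; the commands themselves are standard virsh subcommands:

```shell
# List all defined virtual machines, running or not
virsh list --all

# Start a VM and attach to its console ("win10" is a placeholder name)
virsh start win10
virsh console win10

# Dump the VM's hardware definition as XML, the same data a
# graphical front end would read and edit
virsh dumpxml win10
```

Everything a graphical tool would need is already exposed here; what's missing is the interface layer on top.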
Building a Linux distribution
Traditionally, when you build a Linux distribution, you either use packages that already exist or create the packages yourself by compiling from source code. Developers don't always document their process; an application may depend on a specific version of another; and you have to use different compilation tools depending on the programming language.
Almost all Linux distributions use something called shared libraries, which let packages be updated independently of the common code they depend on. Unfortunately, this has a downside: the libraries themselves are treated like software packages, so a system update can break multiple applications at once. A typical Linux distribution carries hundreds, if not thousands, of packages that exist only as dependencies of other applications.
Fortunately, there is a better approach: static binaries. When you compile software, you can link its libraries statically, and a few Linux distributions already take this approach. I want to automate building software by creating a system that always pulls the latest stable source code, uses tools to detect and automatically test its requirements, and cross-compiles a single executable that can later be distributed inside a container. I also believe we can improve the process for developers so that they no longer have to manage dependencies.
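A minimal sketch of that idea today is a multi-stage Dockerfile that compiles a statically linked binary and ships it in an otherwise empty image. The program and image names here are placeholders, and Go is chosen only because static linking is easy to demonstrate with it:

```dockerfile
# Stage 1: compile a fully static binary
FROM golang:1.21 AS build
WORKDIR /src
COPY main.go .
# CGO_ENABLED=0 avoids linking against shared C libraries
RUN CGO_ENABLED=0 go build -o /app main.go

# Stage 2: ship just the binary; "scratch" is an empty image,
# so the result contains no shared libraries to break or patch
FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

The final image is only as large as the binary itself, which is how an application can be distributed with zero external dependencies.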
In the enterprise and server markets, containers and Docker are the buzz, and rightfully so: they are the future of how we build cloud environments. What if we built a desktop operating system using the same tools? What if it could sync your applications across all your devices simultaneously? What if the container daemon became the core process that managed everything?
Now, let's say we can do that while improving privacy and increasing security so that viruses become obsolete. Moreover, it will provide the best tools for designers and developers to create a robust yet consistent experience. It will eliminate the need to install applications, as they will simply be run, eliminating leftover data from installations. Once an application is downloaded for the first time, the software will use a predictable path and prompt you if it needs access to something else based on permissions. The operating system I want to create will do all this and more.
A container daemon is like an app store. You run the software, and if it hasn't already been downloaded, it will be automatically. You pick the version and can run multiple builds of the same application simultaneously. You determine how to run it. You are in control. Remember the QML interface from before? Now imagine that interface being a gateway for managing and changing the environment in which your containerized applications run, or which commands are executed inside the containers.
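Docker's CLI already demonstrates this model. The commands below are real Docker flags, but "someapp" and the tags are placeholder names of my own:

```shell
# First run pulls the image automatically, like an app store download
docker run -d --name editor-stable someapp:2.1

# Tags let two versions of the same application run side by side
docker run -d --name editor-beta someapp:3.0-beta

# Environment variables change how the containerized app behaves;
# a graphical gateway would expose settings like this as menus
docker run -d -e THEME=dark someapp:2.1
```

What I'm proposing is essentially a user-facing layer over exactly these primitives.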
Packaging software into containers can be automated; not just software, but there are even ways to package kernel drivers. The Linux kernel is built to be modular, yet most operating systems are built to support a wide range of hardware and every possible combination of file systems and software. This adds overhead and increases the attack surface. Apple realized a long time ago that supporting legacy hardware was inefficient, so they started building their own. By using a container daemon, you can do the same by loading only the modules you need.
Additionally, by automating the kernel build process across devices, you can always have access to the latest stable kernel builds and hardware support. You could use the real-time Linux patches to run this on smartwatches, embedded IoT devices, and even spacecraft.
Most desktop operating systems are at least 1GB in size, often closer to 4GB. What if I could build you one under 20MB? What if you could install a new version, and then rollback to an old version, without affecting your applications or data? What if your operating system software was unbreakable?
My goal is to give you access to the latest application releases and kernel features in a secure and stable way. I have laid out virtually all the components to make this possible. There is a decent amount of technical jargon and detail I'm leaving out, but I'm willing to share more if you can help. The app store already exists in the form of the Docker Registry (see Docker Hub), although it needs more work. The biggest challenge is creating the engine that will manage and monitor your running containers, configurations, permissions, and device settings, and alert you to potential issues.
If you're an investor, or know of an investor who may be interested, please reach out. I have worked in Linux engineering for over 10 years, and I've also studied ontology engineering and machine learning. I want to improve the way we think as we develop, and to evolve tools that automatically create software for us. This will help advance science in all sectors. I also have plans to use crowdfunding to bring this goal to fruition, but I need to be able to finance that as well.
If you have suggestions, include them in the comments below.