
· 4 min read
Juraj Karadža

why-are-clis-important-cover

Command Line Interfaces (CLIs) seem old-fashioned in the age of graphical user interfaces (GUIs) and touchscreens, but they remain essential tools for developers. You may not realize it, but they are used far more often than you might think. For example, if you use git commands over the terminal, you're likely engaging with a CLI on a daily basis.

To give you some background, I work on Cyclops, a tool that provides a graphical user interface for developers to configure and deploy their applications in Kubernetes. Given my employment, why would I, of all people, emphasize the importance of command-line interfaces?

Support us 🙏

We know that Kubernetes can be difficult. That is why we created Cyclops, a truly developer-oriented Kubernetes platform. Abstract the complexities of Kubernetes, and deploy and manage your applications through a UI. Because of its platform nature, the UI itself is highly customizable - you can change it to fit your needs.

We're developing Cyclops as an open-source project. If you're keen to give it a try, here's a quick start guide available on our repository. If you like what you see, consider showing your support by giving us a star ⭐

github-stars

Why CLIs 🤔

Command-line interfaces (CLIs) are incredibly fast for getting things done. They let you perform actions with simple, direct commands, which is especially useful for repetitive tasks. For example, you can search for files or oversee the various programs and services running on your computer with just a few commands. However, this speed is only accessible to those who know the necessary commands.

CLIs also enable fast remote access to machines and servers. Using tools like SSH, you can connect to remote systems from anywhere, allowing you to maintain servers that aren’t physically nearby. Plus, because CLIs are lightweight, they use minimal bandwidth, making remote operations more efficient and reliable.

These are great benefits of having a CLI, but why would Cyclops be interested in developing one? The answer is automation.

Automation and CI/CD ⚙️

When I talk about automation, I mean writing scripts to handle repetitive tasks, ensuring they are done the same way every time, saving a lot of time. You can automate everything from simple file operations to complex deployments. Automations boost efficiency and reduce the chance of human error since the scripts perform tasks consistently every time they run.

Automation is a standard practice in the software development lifecycle (just think of GitHub Actions). A CLI would allow Cyclops to integrate into larger systems for deploying applications, like CI/CD pipelines.

You could use Cyclops’s UI to make it easier for developers to configure and deploy their applications in Kubernetes clusters, and a CLI would allow you to automate any part of that process.

For example, once you create a template and publish it on GitHub, a GitHub Actions workflow could automatically connect the template to your Cyclops instance using our CLI. This would give your developers instant access to each new template, and even to every update the template receives. More on automation in future blog posts… 😉
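To make the idea concrete, here is a rough sketch of what such a workflow could look like. This is a hedged example, not an official Cyclops workflow: the cyctl installation step, subcommand, and flags below are illustrative assumptions, so check the cyctl documentation for the real interface.

# .github/workflows/register-template.yaml (illustrative sketch)

name: Register template with Cyclops

on:
  push:
    branches: [main]

jobs:
  register:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Assumption: cyctl is installable via the Homebrew package mentioned above
      - name: Install cyctl
        run: brew install cyctl

      # Hypothetical subcommand and flags - shown only to illustrate the automation step
      - name: Point Cyclops at the new/updated template
        run: cyctl create template my-template --repo ${{ github.server_url }}/${{ github.repository }}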

Cyclops CLI - cyctl 🔅

We didn't want to miss out on the advantages of a CLI, but initially, we struggled to find the time to develop it. However, our community has grown significantly over the past few months, and since we are an open-source project, we began opening issues to kickstart the development of our CLI.

Thanks to our amazing community, we realized that our CLI was closer to realization than we had thought. And just a couple of days ago, Homebrew received a new package - our very own cyctl!

cyctl

Open Source Magic ✨

We are very proud to say that cyctl is a community-driven project. While the Cyclops team has been overseeing every change and reviewing every PR, most of the features have been developed by our contributors!

If you're excited about making Kubernetes easier for developers and want to contribute to our CLI or any other part of our project, we’d love to have you on board. Come join us on Discord to connect, collaborate, and be a part of something great!

And if you enjoyed this post, we would be grateful if you could star our repo ⭐ 😊

· 6 min read
Juraj Karadža

k8s-easy-start-cover

They say the first step is always the hardest. And when that step is in the direction of Kubernetes, it can feel even more intimidating. You can sometimes feel "paralyzed" by the sheer number of things you don't understand. Or better said, you don’t understand yet.

But once the first step is conquered, the rest feels more attainable. So, let’s take the first step together. Oh, and we'll be using a couple of tools to help us out. After all, we are trying to make this as simple as possible 😉

Support us 🙏

We know that Kubernetes can be difficult. That is why we created Cyclops, a truly developer-oriented Kubernetes platform. Abstract the complexities of Kubernetes, and deploy and manage your applications through a UI. Because of its platform nature, the UI itself is highly customizable - you can change it to fit your needs.

We're developing Cyclops as an open-source project. If you're keen to give it a try, here's a quick start guide available on our repository. If you like what you see, consider showing your support by giving us a star ⭐

github-stars

A Cluster 🌐

The first thing you need to get started with Kubernetes is a cluster. Typically, a cluster represents the pool of servers on which you run your apps. But for now, you don’t need a pool of servers. Let’s start with something simple: a local playground that will help us get the hang of it.

One of the easiest ways to quickly get a cluster running is with Minikube. Minikube is a tool that sets up a local Kubernetes cluster on your computer. It’s perfect for development and testing purposes. To get started, you need to:

  1. Install Docker: Docker will allow us to run Minikube in an isolated environment. You can find out how to download it here.
  2. Install Minikube: Follow the instructions on the Minikube site to install it on your machine.
  3. Start Minikube: Once installed, you can start your local cluster with the command:
minikube start

This will set up a single-node (single-server) Kubernetes cluster on your machine.
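Before moving on, it's worth a quick sanity check that the cluster is actually up:

# check that the cluster is running
minikube status

# you should see a single node in the Ready state
kubectl get nodes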

The Configuration 🗄️

This is usually the trickiest part of working with Kubernetes. For Kubernetes to run your application, you must create a configuration file that tells Kubernetes how to handle your application. These files are traditionally written in YAML and follow their own syntax and rules.
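To give you a feel for it, here is a minimal example of such a file - a Deployment that runs three replicas of an Nginx container (the names and values are just placeholders):

# deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx:1.14.2
          ports:
            - containerPort: 80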

But here’s the good news: Cyclops lets you skip this process entirely. Cyclops is an open-source tool that provides a user-friendly interface for configuring your applications to run in Kubernetes.

The UI that Cyclops provides is highly customizable when it comes to defining your configurations through its templates feature. It also comes with a couple of predefined templates to get you started on your journey.

Cyclops is simple to set up and requires just two commands:

kubectl apply -f https://raw.githubusercontent.com/cyclops-ui/cyclops/v0.6.2/install/cyclops-install.yaml &&
kubectl apply -f https://raw.githubusercontent.com/cyclops-ui/cyclops/v0.6.2/install/demo-templates.yaml

and secondly:

kubectl port-forward svc/cyclops-ui 3000:3000 -n cyclops

Just wait a few seconds between these commands to let your Kubernetes cluster start Cyclops.
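If you want to make sure Cyclops is ready before opening the UI, you can list the pods in its namespace and wait until both of them are Running:

kubectl get pods -n cyclops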

Now, head over to localhost:3000, and you should be all set!

cyclops-modules-screen

Running your App inside the Cluster 🏃

Once inside Cyclops, you’ll be greeted with a screen that says “No Modules Found.” Modules are Cyclops’s way of saying applications. The next step is running your application (module) in the Kubernetes cluster, or in Kubernetes terms, “deploying your application.”

Start by clicking on the Add module button at the top right. This will take you to a new screen where the configuration file from the last step will be generated.

Cyclops uses templates to generate the configuration file (find more about it here). You can create your own, but Cyclops comes with a few predefined templates that are perfect for getting started.

At the top of the screen, choose the demo-template. You’ll notice the screen changes, and new fields appear! Switching to another template will change the fields on the screen, but let’s stick to the demo-template for now.

You can leave the inputs in the fields as they are or change them to your liking, but you must give your module a name!

If you have a Docker image of an application that you created and want to run in Kubernetes, you can do that too! Just put the name of your image in the image field and its version in the version field.

cyclops-config-screen

Once you’re satisfied with these fields, click Save at the bottom, and voilà, your application is deployed!

Anatomy of an App 🫀

One of the challenges with Kubernetes is the variety of resources it uses. However, Cyclops makes it easy by displaying all the resources your modules have created. This visual representation really helps you understand the anatomy of your applications.

With our demo-template and the inputs we entered, we created a simple Kubernetes configuration consisting of a service and deployment, as shown on the screen. These are the two most common resources you will encounter and are a good entry point into the whole system.

cyclops-module-overview

The Cyclops interface displays all these components in a clear, organized manner, making it easy to understand how your application is structured and how the different pieces fit together.

For instance, you can see that the application named "my-app" is running on Minikube, with one replica of the Nginx container (version 1.14.2). You can view logs or modify settings right from this interface.

This visual approach helps bridge the gap between developers and Kubernetes' underlying infrastructure, making it easier to manage and understand your applications.

Next steps 👣

Now that you’ve broken the ice, Kubernetes feels a lot less scary. I suggest you play around with the other templates Cyclops provides and see how different templates create different resources.

The journey to mastering Kubernetes is long and can be tedious. However, you don’t have to walk that path alone! Join our Discord community and connect with others who can help you if you ever feel lost!

If you enjoyed this post, remember to support us by giving us a star on our repo

· 6 min read
Juraj Karadža

k8s-database

When you touch on containerized apps today, Kubernetes usually comes up as their orchestrator. Sure, Kubernetes is great for managing your containers on a fleet of servers and ensuring those are running over time. But today, Kubernetes is more than that.

Kubernetes allows you to extend its functionality with your logic. You can build upon existing mechanisms baked into Kubernetes and build dev tooling like never before - enter Custom Resource Definitions (CRDs).

Support us 🙏

We know that Kubernetes can be difficult. That is why we created Cyclops, a truly developer-oriented Kubernetes platform. Abstract the complexities of Kubernetes, and deploy and manage your applications through a UI. Because of its platform nature, the UI itself is highly customizable - you can change it to fit your needs.

We're developing Cyclops as an open-source project. If you're keen to give it a try, here's a quick start guide available on our repository. If you like what you see, consider showing your support by giving us a star ⭐

gh-stars

Kubernetes components

Before diving into CRDs, let's take a step back and look at the Kubernetes control plane components, specifically the Kubernetes API and its etcd database. We made a blog on each of those components previously, so feel free to check it out for more details.

You will likely talk to your Kubernetes cluster using the command-line tool kubectl. This tool allows you to create, read, and delete resources in your Kubernetes clusters. When I say “talk” to a Kubernetes cluster, I mean making requests against the API. Kubernetes API is the only component we, as users, ever interact with.

Each time we create or update a K8s resource, the Kubernetes API stores it in its database — etcd. etcd is a distributed key-value store used to store all of your resource configurations, such as deployments, services, and so on. A neat feature of etcd is that you can subscribe to changes in some keys in the database, which is used by other Kubernetes mechanisms.
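The same watch mechanism is exposed through the Kubernetes API, so you can see it in action from the client side as well; for example, kubectl can keep streaming changes to a resource instead of returning a one-off list:

# keep streaming changes to services in the current namespace
kubectl get services --watch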

Kubernetes control plane

What happens when we create a new K8s resource? Let's go through the flow by creating a service. To create it, we need a file called service.yaml

# service.yaml

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376

and apply it to the cluster using kubectl:

kubectl apply -f service.yaml

service/my-service created

kubectl read our file and created a request against the Kubernetes API. The API then made sure our service configuration was valid (e.g., all the necessary fields were there and of the correct types) and stored it in etcd. Now etcd can utilize the watch feature mentioned previously and notify controllers about the newly created service.

CRDs and how to create one

With the basic flow covered, we can now extend it. We can apply the same process of validating, storing, and watching resources to custom objects. To define those objects, we will use Kubernetes’ Custom Resource Definitions (CRD).

A CRD is typically a YAML file containing the schema of our new object - which fields our custom object has and how to validate them. It instructs the Kubernetes API on how to handle a new type of resource.

Let’s say your company is in the fruit business, and you are entrusted with the task of automating the deployment of apples to your Kubernetes cluster. The example, of course, has nothing to do with a real-life scenario; it is here to show that you can extend the Kubernetes API however you see fit.

Apples have a color that can be either green, red, or yellow, and each apple has its weight. Let’s create a YAML to reflect that on our Kubernetes API:

# apple-crd.yaml

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: apples.my-fruit.com
spec:
  group: my-fruit.com
  names:
    kind: Apple
    listKind: ApplesList
    plural: apples
    singular: apple
  scope: Namespaced
  versions:
    - name: v1alpha1
      schema:
        openAPIV3Schema:
          properties:
            apiVersion:
              type: string
            kind:
              type: string
            metadata:
              type: object
            spec:
              properties:
                color:
                  enum:
                    - green
                    - red
                    - yellow
                  type: string
                weightInGrams:
                  type: integer
              type: object
          type: object
      served: true
      storage: true

We defined two properties for version v1alpha1 under .properties.spec:

  • color (which can take one of the values in the enum)
  • weightInGrams

To tell the Kubernetes API there is a new type of object, we can just apply the previous file to the cluster:

kubectl apply -f apple-crd.yaml

customresourcedefinition.apiextensions.k8s.io/apples.my-fruit.com created

The Kubernetes API is now ready to receive Apples, validate them, and store them in etcd.

Don’t take my word for it, you can create a Kubernetes object that satisfies the schema from the CRD above:

# green-apple.yaml

apiVersion: my-fruit.com/v1alpha1
kind: Apple
metadata:
  name: green-apple
spec:
  color: green
  weightInGrams: 200

and apply it to the cluster:

kubectl apply -f green-apple.yaml

apple.my-fruit.com/green-apple created

Now, your cluster can handle one more type of resource, and you can store and handle your custom data inside the same Kubernetes cluster. This is now a completely valid command:

kubectl get apples

NAME          AGE
green-apple   6s
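Since Apples are now ordinary API objects, the rest of the usual kubectl tooling works on them, too; for example, you can read back the stored spec of the apple we just created:

kubectl get apple green-apple -o yaml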

Can I then use Kubernetes as a database?

Now that we know we can store any type of object in our Kubernetes database and manage it through the K8s API, we should probably draw a line on how far we want to abuse this concept.

Obviously, your application data (like fruits in the example) would fall into the misuse category when talking about CRDs. You should develop stand-alone APIs with separate databases for such cases.

CRDs are a great fit if you need your objects to be accessible through kubectl and the API for the object is declarative. Another great use case for extending the Kubernetes API is when you are implementing the Kubernetes Operator pattern, but more on that in future blog posts 😄

On the other hand, if you decide to go the CRD route, you are very much dependent on how K8s API works with resources, and you can get restricted because of its API groups and namespaces.

Kubernetes CRDs are a powerful tool and can help you build new developer platforms and tools. We at Cyclops develop on top of our own CRDs, so feel free to check them out on our repository.

· 4 min read
Juraj Karadža

k8s-myths-cover

Kubernetes has risen to fame over the past couple of years, transforming from a geeky buzzword into a cornerstone of modern software development.

Yet, with its rise to fame, Kubernetes has also become shrouded in myths and misconceptions, which can often deter potential users - “Kubernetes is only for large companies” or “I need microservices for Kubernetes”…

While most of these misconceptions could be said about other container orchestrators as well, in this article I focus on Kubernetes, as it is the most popular and the first one that comes to mind when people think of container orchestration.

That being said, let’s dive into five of the most common misconceptions about Kubernetes and set the record straight!

Support us 🙏

We know that Kubernetes can be difficult. That is why we created Cyclops, a truly developer-oriented Kubernetes platform. Abstract the complexities of Kubernetes, and deploy and manage your applications through a UI. Because of its platform nature, the UI itself is highly customizable - you can change it to fit your needs.

We're developing Cyclops as an open-source project. If you're keen to give it a try, here's a quick start guide available on our repository. If you like what you see, consider showing your support by giving us a star ⭐

gh-stars

1. Kubernetes is Only for Large Enterprises

💨 Misconception: Kubernetes is too complex and resource-intensive for small to medium-sized businesses and is only practical for large enterprises.

🔍  Reality: Kubernetes is highly scalable and can be used by organizations of any size. With managed services like EKS, GKE, and AKS, even small teams can set up and manage Kubernetes clusters. These services take care of much of the heavy lifting, so smaller organizations can easily benefit from the features Kubernetes provides without needing a ton of infrastructure or a big DevOps team.

For example, a startup can easily set up a Kubernetes cluster on AWS using EKS. This takes advantage of AWS’s infrastructure and managed services, making it simpler and more cost-effective to manage.
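As an illustration, with eksctl (the official CLI for Amazon EKS) a basic managed cluster can be created with a single command - the exact flags will depend on your account, region, and budget:

eksctl create cluster --name my-startup-cluster --region eu-central-1 --nodes 2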

2. Kubernetes Replaces Docker

💨 Misconception: Kubernetes is a replacement for Docker.

🔍  Reality: Kubernetes and Docker serve different, yet complementary, purposes. Docker is a platform for containerizing applications, while Kubernetes is an orchestration system for managing those containers across multiple hosts/servers. Docker handles the creation and running of containers, but Kubernetes takes on the orchestration tasks, such as scaling, load balancing, and self-healing.

In practice, Docker containers are often used as the building blocks that Kubernetes orchestrates. For instance, developers use Docker to create container images, which are then deployed and managed by Kubernetes.

docker-and-k8s

3. Kubernetes Manages Everything Automatically

💨 Misconception: Kubernetes fully automates all aspects of container management without the need for manual intervention.

🔍 Reality: Kubernetes takes care of many tasks like container deployment, scaling, and failover, but it still needs proper setup and ongoing management. You’ll have to configure things like network policies, resource limits, and storage. Plus, keeping an eye on the cluster with regular monitoring and updates is crucial for its health and security.

For example, deploying applications requires configuration files in which you describe declaratively what you want Kubernetes to do with your application. Tools like Cyclops help manage these configurations by providing a user-friendly interface.

4. Kubernetes is Only for Microservices

💨 Misconception: Kubernetes is not suitable for monolithic applications.

🔍  Reality: While Kubernetes excels at managing microservices, it is equally capable of handling monolithic applications. Kubernetes provides features like horizontal pod autoscaling (HPA), which can be utilized by both microservices and monolithic applications. HPA allows applications to scale based on the load automatically, ensuring your application will not crash under high pressure.
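As a rough sketch, an HPA that keeps a Deployment between 2 and 10 replicas based on CPU usage looks something like this (the deployment name is a placeholder):

# hpa.yaml

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80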

For example, a monolithic application can be containerized and deployed in a Kubernetes cluster. With proper configuration, Kubernetes can manage the application's scaling, deployment, and monitoring just as effectively as it does with microservices.

no-monoliths-allowed

5. Kubernetes is Too Complex to Learn and Implement

💨 Misconception: Kubernetes is too complicated for individuals and small teams to learn and implement.

🔍  Reality: While Kubernetes does have a steep learning curve, there are numerous tools, tutorials, and resources available to simplify the learning and implementation process. Platforms like Minikube allow developers to run Kubernetes locally, providing a sandbox environment to experiment and learn.

Online courses, documentation, and community support also significantly contribute to making Kubernetes more accessible. With the right tools, even small teams can effectively use Kubernetes for their projects.

Thanks for reading 🙌

I hope this article clarified some common myths about Kubernetes. If you enjoyed the read, consider supporting us by giving us a star on our repo!

· 6 min read
Juraj Karadža

telescope-cover-image

What started as a frustration with not being able to get in touch with our users, quickly developed into a redesign of the flow of our platform.

My team and I are developing an open-source platform that helps developers deploy and manage their applications in Kubernetes. We have been working hard to expand our user base, and the efforts were starting to show results.

The rising number of installations was satisfying to see. However, that was the only thing we were able to observe. We wanted to know more. We wanted to know what users are doing with our platform and what they are struggling with.

The following short story could be considered a #building-in-public entry of our startup, but I just found it interesting and wanted to share it with you.

Support us 🙏

We know that Kubernetes can be difficult. That is why we created Cyclops, a truly developer-oriented Kubernetes platform. Abstract the complexities of Kubernetes, and deploy and manage your applications through a UI. Because of its platform nature, the UI itself is highly customizable - you can change it to fit your needs.

github-stars.gif

We're developing Cyclops as an open-source project. If you're keen to give it a try, here's a quick start guide available on our repository. If you like what you see, consider showing your support by giving us a star ⭐

User Feedback 🗣️

Since the beginning, we have been trying to talk to our users and gather as much feedback as we can. However, that turned out to be sort of a problem. We knew that people were downloading Cyclops; on our DockerHub, we could see the number of pulled images getting larger by the day.

The problem was that we had no way of contacting our users. We could only see the number of pulls, not who pulled them.

In an attempt to get in touch with our users, we created a Discord server. Discord is a great way to keep your community close to you, and because of it, we have a way of getting to know our users.

So we started talking to them. The feedback wasn’t always constructive…

Unsatisfied User

… but most of it was really positive. However, there is a caveat; a lot of the positive feedback we were getting was from 1-on-1 meetings with our users. In these meetings, we could demonstrate Cyclops's capabilities better than users could on their own. This turned out to be a bigger issue than we thought.

Recently, we implemented telemetry to better understand how our users utilize Cyclops. As soon as the statistics started to come in, boy, were we surprised.

The Problem ❗

We were really pleased with the number of installations of Cyclops. As it turns out, we were correct in thinking that Cyclops is pretty simple and straightforward to install. But when it came to starting to use it, more than 60% of our users got lost.

So what was the problem?

The thing is, when you want to deploy an application to your Kubernetes cluster, you must provide a template in the form of a Helm chart. We have created a few examples of such charts and published them on our open repository. In all our documentation and blogs, we pointed people toward that repository when starting out with Cyclops. However, it seems that it didn’t catch on. The number of deployed applications was still much lower than the number of started instances of Cyclops.

A Theory 🧑‍🔬

Here is a fun fact for you, dear reader: the majority of online readers spend less than 15 seconds on a web page (source). Knowing this, could it be that most of our users skimmed over the blogs and documentation and missed out on the reference to our template repository?

We wanted to test this theory. In our last blog, we did another tutorial on Cyclops showcasing its benefits. However, for this specific article, we created a special version of Cyclops. What was so special about this version? We added a default value for the template when creating new modules.

Small Change

After gathering statistics for some time, the results were in.

The Results 📊

With a simple change, we saw an improvement in our users' behavior: they no longer got lost at the very first step of using our platform! However, it wasn’t as big of an improvement as we had initially hoped for, but it was certainly a step in the right direction. We asked ourselves how to further improve on this issue. And we think we got it 🙌

Since our most recent version (v0.3.0), we have reworked the platform's flow. Choosing a template is no longer an input field but a dropdown. Every instance of Cyclops comes with a couple of premade templates (stored in our templates repository), which you are free to use and abuse. We feel like this will go a long way in showcasing the customizable nature of Cyclops to our users.

v0.3.0

But an important part of Cyclops is its ability to use your own templates, and we weren’t ready to compromise on that! That is why we added a new Templates tab where you can add new templates and manage the existing ones. Once added, your new templates will be shown in the dropdown the next time you find yourself deploying an application.

Credits 🦔

We released v0.3.0 earlier this week, so it is still too early to say how much of an impact it had on our users, but we have high expectations! We might share the statistics once enough time passes, so make sure to follow us to find out!

It would be a shame not to mention PostHog as the telemetry provider we are using, since it turned out to be extremely useful. Because it is hard to find people who will talk with you about your product, gathering statistics gave us a much greater insight into our users.

If you are one of the few readers who gave this article more than the previously mentioned 15 seconds, I hope you found it amusing at least 😁

If you are interested in contributing to our project, whether through coding or providing feedback, join our Discord community and come talk to us!

· 6 min read
Juraj Karadža

Developers Perspective

We perceive things by the way we interact with and understand them. To an infrastructure team, Kubernetes is a great way to scale and manage applications, but for a frontend/backend developer, it may seem complicated and stressful.

Kubernetes introduces many concepts and terminologies (deployments, services, pods…) that take time to grasp and even more time to master. But devs sometimes have to interact with them weekly, if not daily.

Through our experience and research, we identified the three most common ways in which developers engage with Kubernetes.

So, what does Kubernetes look like through the lens of a developer?

Support us 🙏

GitHub stars

Before we start, we would love it if you starred our open-source repository and helped us get our tool in front of other developers ⭐

Touchpoints

Depending on the structure and development workflow of the company, developers either do not interact with Kubernetes at all because automated processes and scripts are doing it, or they interact with Kubernetes through three touchpoints:

  1. configuration files of their application
  2. deploying their application in the K8s cluster
  3. monitoring their applications

Configuration files

yaml

Configuration files tell Kubernetes how to handle an application. They are declarative, meaning you write the result you want to see rather than the steps to get there.

Most commonly written in YAML, these files are large and complex to read and understand. And being written in YAML comes with its own challenges (and quirks), since it is yet another language that devs need to learn.
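A classic example of such a quirk: older YAML parsers treat unquoted values like no, on, or off as booleans rather than strings, which can silently change the meaning of a config:

# YAML 1.1 parsers read unquoted NO as a boolean, not a string
country: NO            # ends up as false
fixed_country: "NO"    # quoting it keeps the literal string "NO"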

Mistakes in configurations are hard to spot and rectify but are a BIG deal. Even if you coded your app perfectly with no bugs whatsoever (hypothetically 😅), a mistake in the config could mean that your app just won't run.

Deploying

Deploying your application is the process of starting your app in the Kubernetes cluster. The native way of doing this is with kubectl.

kubectl is the command line tool for Kubernetes. Being a command line tool/interface is something we should take into consideration. Senior devs should be accustomed to CLIs, but junior devs might still find them intimidating. Getting accustomed to kubectl adds another layer to the learning curve.

However, deploying requires more than knowledge of the available commands of kubectl; it also requires an understanding of the context and how it manipulates Kubernetes objects.
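In practice, deploying "the native way" boils down to a handful of kubectl commands, each of which assumes you are pointed at the right cluster and namespace (the context); the file and resource names below are placeholders:

# see which cluster and namespace kubectl is currently talking to
kubectl config current-context

# apply your application's configuration files
kubectl apply -f deployment.yaml -f service.yaml

# check that the rollout finished
kubectl rollout status deployment/my-app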

Monitoring

Now that our devs have configured and deployed their application, they are done, right? Well, no. The application probably has its fair share of bugs and issues, and they are expected to be able to monitor them and fix them when needed.

Finding out what is wrong with your app inside the Kubernetes cluster is not straightforward. Devs need to understand the objects that are in the cluster and how they interact with each other.

For instance, to fetch your app's logs, you need to know that it runs in a Pod. That Pod is managed by a ReplicaSet, which is itself managed by a Deployment. You must understand these relationships while rummaging through your K8s resources with kubectl.
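The drill-down typically looks something like this (resource names are placeholders):

# find the pods created by your deployment
kubectl get pods -l app=my-app

# fetch the logs of one specific pod
kubectl logs my-app-7d9c6b5b4f-x2x7q

# or let kubectl pick a pod from the deployment for you
kubectl logs deployment/my-app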

Developer Platforms

This is just the tip of the iceberg when it comes to Kubernetes. Its complexity drastically elongates onboarding time for new devs, and all the additional steps introduced slow down development cycles.

Companies usually have a dedicated team to deal with infrastructure and Kubernetes. When developers encounter issues or need help, they turn to these teams. But you can imagine that when dealing with something as complex as Kubernetes, needing help is a common occurrence.

This is why we are seeing the rise of developer platforms, which reduce friction between developer and infrastructure teams. A good platform makes the development cycle of creating, updating, and deploying as smooth and straightforward as possible.

Cyclops - Kubernetes platform made for developers

Cyclops is a fantastic open-source developer tool that abstracts Kubernetes's complexities behind a simple graphical user interface. We call it a platform because your infrastructure teams can customize the UI to suit your specific needs and wants.

So, how does Cyclops solve the issues mentioned above?

  1. With Cyclops, your developers never directly interact with configuration files. You create a template, which is then rendered for the developers as a form. This avoids the need to learn languages like YAML and has the added benefit of allowing you to add validations to their input, making it harder for developers to make mistakes.
  2. Once the developers have specified what they need with Cyclops' help, deploying their application is as simple as clicking a button.
  3. Now that they have deployed their app, Cyclops makes it easy to monitor its state via its user interface. The application's details are easily accessible and include all the resources (and logs) it uses.

Let's see it in action!

The installation is just a two-step process, but there are a few prerequisites, with the main thing being a Kubernetes cluster.

So once you have your cluster up and running, you can install Cyclops with the command below:

kubectl apply -f https://raw.githubusercontent.com/cyclops-ui/cyclops/blogs-demo/install/cyclops-install.yaml

And once it's successfully installed (which could take a minute or two), you can access it with:

kubectl port-forward svc/cyclops-ui 3000:3000 -n cyclops

Cyclops should now be accessible in your browser at http://localhost:3000.

Now, by clicking on the Add Module button, you will be taken to this screen:

New Module in Cyclops

We created a default template for you, but this screen is highly customizable (you can try out your own Helm charts!). Now, instead of reading and writing YAML, developers can simply fill out these fields; by clicking the save button, their application will be deployed!

App Overview in Cyclops

This next screen represents the detailed view of your newly deployed application. Here, you can see all the specified resources (and easily access the logs 😉). If anything goes wrong, it will be visible here!

If you wish to change something in the configuration (like adding more replicas of your app), click on the Edit button. You will be taken to a screen similar to the first one, where you can make those changes, again not even knowing that you are dealing with YAML underneath.

Wrapping up

We hope this article did a good job of showcasing some of the ways developers are struggling with Kubernetes and a potential solution that you could consider next time you or your devs engage with it.

If you are running into issues with Kubernetes or have devs who are pestering you with K8s-related questions, consider supporting us by giving us a star on our GitHub repository

· 6 min read
Juraj Karadža

contributing-to-os

Have you ever thought of contributing to open source? If you are here, you probably did 😄

For a beginner, it might seem confusing, and I can relate - I've been there myself. However, you found the willpower to push onwards and learn more about this process and I hope this article will show you that it is not as complicated as it may seem.

Most repositories that accept contributions have a CONTRIBUTING.md file that you should look out for. Since not all repositories are the same, this file will tell you more about the process of contributing to that specific repository.

However, some general rules can be applied to most repositories, and we will go over them in this article.

Support us 🙏🏻

GitHub stars

Before we start, we would love it if you starred our open-source repository and helped us get our tool in front of other developers ⭐

Where to contribute?

The first question that comes to mind is: Where to contribute?

Well, you should start with the projects you are already using. Maybe some library needs updating, or some tool has a bug?

You may want to contribute to some project within your domain of expertise or a project that uses the tech stack you are comfortable with.

These are great contenders, and you should look into them.

If you don't know any projects but still want to contribute, browse on GitHub or go to sites like Quine, where many open-source repositories are looking for contributors.

For this article, we will use our open-source repository - Cyclops.

How to know what needs improving?

Whether you are looking for something to do or already know of a bug that needs fixing, all contributions start in the same place - the Issues tab.

issues.png

If you are new to the project, you can look for the “good first issue” label that most repositories have. As the name suggests, they are a good entry point to get involved in a project. All of the issues should have a description of the problem.

If you know of a problem or bug that is not listed here or you want to see a new feature get introduced, open a new issue! Once you open the issue, the maintainers will decide what to do next, and you should wait for their response before you start coding.

Pro Tip: If you are opening a bug issue, be sure to write down the steps on how to reproduce the bug!

How to contribute?

Okay, we found a repo, an issue we want to work on, talked with the maintainers, and were given the green light to work on the issue. Let's finally start coding!

1. Fork the repo

The first step is forking the repository. This will make a copy of the project and add it to your GitHub account.

fork.png

2. Clone the repo

Now go to your repositories and find the forked repo. Click on the < > Code button and select one of the options (HTTPS / SSH / GitHub CLI).

clone.png

Copy the content in the box. Now open your terminal and position yourself where you want to save the project locally. Once you have positioned yourself, type the following command in your terminal:

git clone <paste the copied content>

After a few moments, you should have the project locally on your PC!

3. Create a new branch

Now, go to your local folder and create a new branch. Be sure to check the project's CONTRIBUTING.md to see if the maintainers want you to follow some rules for the naming of branches!
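Assuming no specific convention is required, creating and switching to a new branch is a single command (the branch name is just an example):

git checkout -b fix/issue-123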

4. Commit and push your changes

Once you have your branch ready, you can start changing the codebase. After you finish, commit your changes and push them to your forked repository. Be sure to follow the commit message conventions if the repository has them in place (check the CONTRIBUTING.md).
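The usual sequence looks like this - note that you push to origin, which now points to your fork:

# stage and commit your changes
git add .
git commit -m "Fix: short description of the change"

# push the branch to your fork
git push origin fix/issue-123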

5. Open a pull request

Now that you have pushed your changes and want to merge them to the main repository, it is time to create a pull request! Once again, you should check the CONTRIBUTING.md rules to see if the maintainers would like you to stick to a naming convention when creating PRs and what they like to see in the description.

opening a PR.png

❗Be sure to set the base repository to the original repository you forked from❗

I created a PR, now what?

You are satisfied with your changes and successfully created a pull request to the main repository. What now? Now, you wait.

Depending on the urgency of the problem your PR is fixing and the schedule of the maintainers, you will have to wait for somebody to review your pull request. Be prepared to explain why and what you did (if you didn't do a good job in the PR description) and to make changes if necessary.

Don't take any requests for changes personally. All of you are here for the betterment of the project and nobody is ill-willed. If you disagree with the opinions of the reviewers, tell them! A healthy discussion has never been a bad thing.

Why fork? 🍴

You are probably wondering why we had to fork the repository at all. Why not just clone the original and work on a separate branch? Basically, remove step #1 - isn't the rest the same?

Well, you could try, but once you push your changes, you will realize you don't have the authorization to do so! By forking the repo, you become the owner of the copied repository with the rights to change the codebase. This is a neat system to ensure only the changes approved by the original maintainers go through.

Go contribute!

Now that you are armed with this information, you are ready to go and make your mark in the open-source world! Go help the myriad of projects you have and haven't heard of, and join this growing community. Your help will be greatly appreciated, I'm sure of it 😉

· 3 min read
Juraj Karadža

OS-problematic

If you have been in the open-source community lately, you know what I am talking about. The story goes something like this: There were loads of videos/blogs/events hyping up open-source contributions, mainly as a good gateway to land your dream software engineering job. And to some extent, it is true.

However, this trend has also brought a flood of pull requests (PRs) that contribute little to nothing or, worse, add clutter to the codebase.

And this is why, lately, you can find a wave of blogs and videos on the theme of “Why you should NOT contribute to Open-Source.”

This article will show how bad it can get with the latest surge of unsavory PRs.

Support us

GitHub stars

Before we start, we would love it if you starred our open-source repository and helped us get our tool in front of other developers ⭐
(The irony is not lost on us here... 😄)

ExpressJS

The latest drama happened in the expressjs GitHub repo. As you can see, there were loads of “Update Readme.md” pull requests.

List of closed PRs

This doesn’t immediately sound bad; perhaps the Readme was riddled with typos? It's a long shot, but let's investigate. Unfortunately, that wasn’t the case. Looking closely at some of these PRs, we will see the drama's root cause.

So let us take a look…

PR-hello

Maybe it’s just one bad apple? Well, let’s look at some others…

PR-hehe

PR-demo-collage

As you can see, these PRs are not trying to better the project they contribute to. Although somewhat comedic, having lots and lots of such PRs is a nightmare for the project's maintainers.

And the last one, I think, tells the bigger picture in this story. It seems that lots of these PRs were a learning experience (assuming “Collage” was a mistype of “College”). Although, that is an assumption made in good faith.

Some of them could have been done as a sort of shortcut for bolstering resumes, which is a far more alarming intent.

The missing puzzle piece

Newcomers don’t understand that contributing is not all about the code. It is about investing your time in understanding the project and the issues it is trying to solve, being a part of the community and the discussion, and wanting to better the project because you want it to thrive.

And that is what is praised about open source contributions, the will to learn and the will to help. In the process, you demonstrate that you can be proactive and solve complex issues. That is what employers are really looking for.

One look at contributions like these, and you can be sure that you will be ignored by potential employers.

Final thoughts

Now, there were some external actors in the latest PR nightmare that I won’t be naming here because I doubt that they acted with ill intent. It’s the latest buzz, and I am sure that you can find them with a single Google search if you wanted to.

It’s important to mention that when considering contributing to open-source, start by looking at the projects you already use and are familiar with.

Alternatively, focus on projects where you have domain expertise, as sharing that knowledge can be a valuable resource for the maintainers.

Have you had any bad experiences contributing to open-source?

· 10 min read
Juraj Karadža

Docker Ship

If you are using Kubernetes, there's a fair chance you are using Helm or at least considered to. This article will guide you on how to publish your Helm charts in a less conventional way - using OCI-based registries.

First of all, we will briefly cover what OCI-based registries are and how they can help us, and after some theory, we will create a Helm chart, push it to the OCI registry, and actually deploy something using it.

Show us your support 🙏🏻

ProductHunt Launch

Before we start, we want to mention that we scheduled our first release on Product Hunt! Click the notify me button to be alerted when we are out and ready to receive your feedback 🔔

We would love it if you starred our repository and helped us get our tool in front of other developers ⭐

Helm OCI-based registries

Helm repositories are used to store Helm charts. Using them, we can publish and version our applications as packaged Helm charts others can install into their cluster. They also allow for easier versioning and rollback of resources. All in all, a single, centralized place to store your Helm charts.

Under the hood, a Helm repository is a simple HTTP server that serves an index.yaml file that contains information about charts stored in that repository, like versions, descriptions, and URLs on where to download chart contents (usually on the same server). Read more about index.yaml file here.
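For illustration, an abridged index.yaml for a repository hosting a single chart looks roughly like this (several fields trimmed for brevity, names are placeholders):

apiVersion: v1
entries:
  my-chart:
    - name: my-chart
      version: 1.0.0
      description: An example application
      urls:
        - https://charts.example.com/my-chart-1.0.0.tgz
generated: "2024-01-01T00:00:00Z"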

OCI stands for Open Container Initiative, and its goal as an organization is to define a specification for container formats and runtime.

At first glance, it does not seem related to the Helm repositories we just mentioned; at least for me it wasn’t. OCI registries mostly host container images, but we can store different types of content there. One of the types we can host is Helm charts!

With such a registry, you can host all your images and Helm charts in the same place. On top of that, you don’t need to maintain a Helm repository index.yaml file, which makes the management of your chart easier.

You can serve the exact same chart on a Helm repository and a container (OCI) registry; the only difference between those two approaches is how you maintain the charts.

You can use many different container registries to store Helm charts; in this article, we will use Docker Hub.

Getting our hands dirty

Now that we have our basics down, let's see those OCI charts in practice and use them to deploy our applications. We will create a chart, push it to DockerHub, and then use it to deploy our apps in a Kubernetes cluster. In order to deploy resources from the chart we defined into a Kubernetes cluster, we are going to use Cyclops.

Creating a Helm chart on OCI registry

Firstly, we are going to create a Helm chart. To create a chart, create a new directory

mkdir oci-demo

and add the files listed below to the created directory. Feel free to customize the chart to fit your needs, but for the sake of this demo, we are going to create a basic one with the following structure:

.
└── oci-demo
    ├── Chart.yaml            # YAML file containing information about the chart
    ├── templates             # Directory of templates that, when combined with values, will generate valid Kubernetes manifest files
    │   ├── deployment.yaml   # K8s resources are separated into multiple files. Feel free to add more or change existing
    │   └── service.yaml
    ├── values.schema.json    # JSON Schema for imposing a structure on the values.yaml file
    └── values.yaml           # The default configuration values for this chart

You can find out more about each of those files/directories on Helm's official docs.

Chart.yaml

Let's start with Chart.yaml:

# Chart.yaml

apiVersion: v1
name: oci-demo
version: 0.0.0

Not to go into detail, I'm going to 302 you to the Helm docs.

Templates folder

The next step is defining what Kubernetes resources our packaged application needs. We define those in the /templates folder. As seen in the chart structure from earlier, we are going to add only a deployment and a service to our application.

Contents of those files are below:

# templates/deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: {{ .Values.name }}
  name: {{ .Values.name }}
spec:
  replicas: {{ .Values.replicas }}
  selector:
    matchLabels:
      app: {{ .Values.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.name }}
    spec:
      containers:
        - image: {{ .Values.image -}}:{{ .Values.version }}
          name: {{ .Values.name }}
          ports:
            - containerPort: 80
              name: http

and

# templates/service.yaml

{{- if .Values.service }}
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.name }}
  labels:
    app: {{ .Values.name }}
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
  selector:
    app: {{ .Values.name }}
{{- end }}

Values definition

In the /templates folder we defined, obviously, just the templates. It would be a good idea to define default values for those. We are going to use values.yaml for that:

name: demo
replicas: 3

image: nginx
version: 1.14.2

service: true

These are default values, and anybody using your chart will most probably want to change those. But what happens if someone using your chart messes up values by providing invalid data? For example, setting replicas: two or service: no. Another thing that can get messed up is the name of the value, so somebody might use instance: 3 instead of replicas: 3.

Both of these examples seem pretty obvious and something you wouldn’t mess up, but as your chart grows, so does your values.yaml file. A great example is the Redis chart by Bitnami. I encourage you to scroll through its values file. See you in a minute!

Now that you are back, you probably understand why validating values and defining their structure makes sense. Let’s do the same for our chart.

But first, let’s define what are the rules of this validation:

  • service is type boolean
  • name, image and version are strings
  • replicas is an integer
  • replicas is ≥ 0
  • allow only certain values for version; let those be 1.14.1, 1.14.2, or 1.15.0

There are quite a few rules packed in for such a short values file! However, having them in place gives us the confidence to deploy the chart without worrying about making mistakes.

Now that we have those defined, our JSON schema will look like the following:

{
  "properties": {
    "name": {
      "description": "Application name",
      "type": "string"
    },
    "replicas": {
      "description": "Number of replicas",
      "type": "integer",
      "minimum": 0
    },
    "image": {
      "description": "Container Image",
      "type": "string"
    },
    "version": {
      "description": "Container image version",
      "type": "string",
      "enum": ["1.14.1", "1.14.2", "1.15.0"]
    },
    "service": {
      "description": "Expose your application",
      "type": "boolean"
    }
  },
  "order": ["name", "replicas", "image", "version", "service"],
  "title": "Values",
  "type": "object"
}

You can find more on how to write a JSON schema for a Helm chart in the Helm docs.
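Helm applies values.schema.json automatically when you run helm install, helm upgrade, helm lint, or helm template, so you can check your guardrails locally before pushing the chart anywhere; for example, an out-of-range replica count should make the render fail:

# should be rejected by the schema (replicas must be an integer >= 0)
helm template ./oci-demo --set replicas=-1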

Pushing to Docker Hub

If you don’t already have a Docker Hub account, you should create one to host your charts.

To push our chart to an OCI registry, we will need to package the chart into a tarball with the following command:

helm package oci-demo

There should now be a tarball file called oci-demo-0.0.0.tgz.

Next, you will need to sign in to Docker Hub using Helm:

helm registry login registry-1.docker.io -u {username}

And finally, push your chart to the remote registry:

helm push oci-demo-0.0.0.tgz oci://registry-1.docker.io/{username}

Check your artifacts on Docker Hub, and you should see your newly created Helm chart. If you click on the chart, you’ll see more info about the chart and its versions.

Docker Hub tags
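If you prefer to verify from the terminal, Helm can talk to the OCI registry directly - you can inspect the chart or pull it back down:

# print the chart metadata straight from the registry
helm show chart oci://registry-1.docker.io/{username}/oci-demo --version 0.0.0

# or download the packaged chart locally
helm pull oci://registry-1.docker.io/{username}/oci-demo --version 0.0.0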

Using OCI charts

We now have our Helm chart locked and loaded, so let’s use it. First of all, let's spin up a Kubernetes cluster. If you already have a running Kubernetes cluster, feel free to use it and skip this step.

Create a minikube Cluster

When I want to play around with a new Kubernetes tool, I try it out on a minikube cluster. Minikube is basically a Kubernetes cluster you can run on your own machine and easily tear down once you are done.

If you are using a mac, you can install it via brew:

brew install minikube

Check their installation guide here → https://minikube.sigs.k8s.io/docs/start/

To actually run the cluster, just hit:

minikube start

A quick check that everything is ok; let's list all the namespaces:

kubectl get ns

NAME              STATUS   AGE
default           Active   11s
kube-node-lease   Active   13s
kube-public       Active   13s
kube-system       Active   13s

Deploy OCI chart into a Kubernetes cluster

You can deploy your newly created Helm chart using pure Helm, but let’s take it a step further. Chances are we will want to edit that deployment with our specific values and change those over time, so let's make it more user-friendly.

We can use Cyclops to help us with that! It can help you deploy and visualize your applications by giving you a simple UI where you get your chart deployed in just a couple of clicks. Let’s install Cyclops into our cluster and deploy that newly created chart!

You can install Cyclops with a single command:

kubectl apply -f https://raw.githubusercontent.com/cyclops-ui/cyclops/v0.2.0/install/cyclops-install.yaml

It will create a new namespace called cyclops and start a Cyclops deployment inside it.

Check that pods are up and running:

kubectl get pods -n cyclops

NAME                           READY   STATUS    RESTARTS   AGE
cyclops-ctrl-d6fd877d8-tpdqd   1/1     Running   0          62s
cyclops-ui-5c858b44d4-dhn2c    1/1     Running   0          62s

Once those are up, you need to port-forward both deployments:

kubectl port-forward svc/cyclops-ui 3000:3000 -n cyclops

and in a separate window, run:

kubectl port-forward svc/cyclops-ctrl 8080:8080 -n cyclops

You can now access Cyclops at http://localhost:3000

When you open your local Cyclops, you can hit Add module in the upper right corner.

This is where we get to use our OCI chart! You are prompted for the repository, path, and version of the template. You can fill those out with the OCI chart you created earlier, like I did below.

Repository: oci://registry-1.docker.io/{username}
Path: oci-demo
Version: 0.0.0

From here, you can just hit load and let Cyclops render a form based on your chart.

Do you remember that schema file we added to our chart earlier? This is what Cyclops uses to render this form for you. All the fields and validations you set in the values.schema.json file are taken into account so you can get a completely custom UI for your applications.

You can now fill those fields out and hit save, and Cyclops will do the rest for you. Firstly, it will inject those form values into the template and then deploy each of the resources into the cluster.

Any last words?

We started from scratch and, in a couple of minutes, deployed an application with our own Helm chart and a custom UI tailored specifically for our application. On top of that, we did it using an OCI-based registry and didn’t go through setting up a Helm repository to serve our chart.

Hope you had fun throughout the article and found the information and steps we did useful. Thank you for checking out our article!

· 7 min read
Juraj Karadža

Kubernetes Enjoyer

For the uninitiated, K8s stands for Kubernetes, with the number 8 representing the eight letters between K and s. Kubernetes has become pretty much unavoidable in the current tech landscape but remains uninviting because of its complexity and steep learning curve.

The terminal-based interaction has a part to play in this story. If you ever had the privilege of watching a seasoned DevOps engineer work their way around a Kubernetes cluster, you might look at them like you would a seasoned martial artist showcasing their fighting skills. That is because everything done through a terminal always looks more frightening and seems like it requires years and years of training. 🥋

Now the question stands: how can we make such a complex issue (one that even had its name beautified) more enjoyable? Well, in the same way we make everything more enjoyable → make it easier and make it prettier! 🎀 And how would you do that, you might ask. With a graphical user interface, or GUI for short! Let’s take a look at five tools that provide you with a user interface when dealing with Kubernetes.

Show us your support 🙏🏻

ProductHunt Launch

Before we start, we want to mention that we scheduled our first release on Product Hunt! Click the notify me button to be alerted when we are out and ready to receive your feedback 🔔

And we would love it if you starred our repository and helped us get our tool in front of other developers ⭐

Kubernetes Dashboard

Let's dive into the quintessential tool for Kubernetes management - the Kubernetes Dashboard. Easily added to your cluster (and bundled as an add-on in distributions like Minikube), it delivers a graphical overview of your Kubernetes environment. You can use it to get an overview of applications running on a cluster, deploy containerized applications to a Kubernetes cluster, and manage cluster resources.

The Kubernetes Dashboard not only offers an overview but also helps with troubleshooting. It provides insights into the health of Kubernetes resources, spotlighting any operational errors.

Through it, you can deploy applications as well. You can do it with a manifest that you wrote or through a form that you just fill in. However, it's worth noting that the form, while user-friendly, lacks the flexibility for customization beyond basic examples.

While the K8s dashboard is a jack-of-all-trades, many find it to be a generalist, lacking in-depth features. This limitation encourages us to explore more tools, each designed for specific purposes, and so we embark on our journey through the list of tools we’ve explored.

K8s Dashboard

K9s

K9s is your best friend (get it? 🐶) when exploring your cluster via the terminal. It shares commonality with Vim in its interaction style, using shortcuts and starting commands with a colon (:), but don’t let that discourage you. K9s keeps a vigilant eye on Kubernetes activities, providing real-time information and intuitive commands for resource interaction.

It can almost replace the standard kubectl and doesn’t require you to have a “cheat sheet” next to you when interacting with Kubernetes. You traverse through your resources just by selecting them and drilling down to the lowest level. This allows for easy log extraction and access to its shell.

K9s gives you the ability to see the manifest of each of your resources and to edit and apply changes. As I mentioned, it almost replaces kubectl. One of the differentiators is that you cannot deploy new resources via K9s.

K9s comes with the ability to filter out your resources and search them with the / command, making it easier to locate the ones you are looking for in the sea of resources or filter through the logs of a specific pod.

A nice touch is the list of commands and shortcuts available to you at any given moment at the top of the screen, and its customization with skins and plugins gives you room for additional utility.

K9s UI

Cyclops

If you are having difficulties fighting with manifest files, Cyclops is the tool for you! Cyclops removes the clutter and complexity when dealing with manifests by transforming them into a structured web-based form, eliminating the need for manual configuration and command-line interactions.

This makes the deployment process more accessible to individuals with varying levels of technical expertise.

Within the architecture of Cyclops, a central component is the Helm engine. Helm is very popular within the Kubernetes community; chances are you have already run into it. The popularity of Helm plays to Cyclops's strength because of its straightforward integration.

Cyclops Form

With Cyclops, you're not boxed into a one-size-fits-all approach. You can customize the form to suit your unique needs. For instance, a team member can generate a Helm chart, allowing others to define necessary values using Cyclops for painless application deployment.

Once you have declared the desired state of your application, deploying it is as straightforward as clicking a button. Furthermore, once your application is deployed, its desired state remains easily changeable through Cyclops.

In Cyclops, every application lays out a detailed list of resources it uses - deployments, services, pods, and others, all in plain view. You can easily track their status, helping you quickly spot and fix any hiccups in your application. It's like having a clear roadmap to navigate and troubleshoot any issues that pop up.

Cyclops Resources

DevSpace

Consider the convenience and time savings of your local server refreshing automatically with every code save, providing real-time visualization of your code changes.

Imagine taking this smooth experience a step further into Kubernetes clusters; DevSpace makes that possible. With DevSpace, you can deploy applications in real time during the coding process, facilitating swift iteration.

DevSpace streamlines the process by automatically applying changes to your K8s cluster without needing the entire image building and deployment pipeline. It builds the image locally without pushing it to a registry, although the option to automatically push images is available for those who require it during development.

Moreover, DevSpace features a user interface that, while somewhat limited, offers a quick overview of all pods in your cluster. It allows you to easily access pod logs and even execute commands directly within them, enhancing your development workflow.

Although I have focused on local development, DevSpace is used for creating workflows as well. All your workflows are saved in one file, making it easy to reproduce environments on any machine with a single devspace deploy command.

DevSpace UI

Kubevious

Unlike the other tools mentioned in this post, Kubevious has no way of changing the cluster state. It is intended solely as an observability tool, focusing on potential issues in your cluster. It highlights potential threats and risks for every resource you may run.

The graphical views offer insights into containers, networking, exposure, RBAC, and Helm charts for intuitive troubleshooting.

Kubevious has a rule engine that helps with the detection and prevention of misconfigurations. It comes with rules out of the box, but it allows you to create custom rules as well (for example, “don’t allow images to be on the latest tag”).

It also comes with the cool time machine feature that allows users to travel back in time, audit applications, root cause outages, and recover manifests, ensuring a complete understanding of cluster history.

And I have to mention the full-text search it provides! You can search for any resource without knowing the specific name of it. One great example is searching for any resources that use a specific port by just typing in “port 3000,” and Kubevious will find your resource.

Kubevious Dashboard

Final thoughts

In our quest to enhance the Kubernetes experience, we've unwrapped five delightful tools, each offering its unique charm to make your journey smoother and more enjoyable.

These are not the only tools that provide a UI for Kubernetes, but we wanted to shine a spotlight on some, maybe lesser-known ones.

All of these tools are open-source, so give them a go; they're free!

I want to end this post with a question directed to you, the reader: What are your thoughts on graphical representations of Kubernetes? Is it needed, or does kubectl reign supreme?