thelolzmaster

In short: k3s is a distribution of K8s, and for most purposes it is basically the same; all skills transfer. My suggestion, as someone who learned this way, is to buy three surplus workstations (Dell OptiPlex or similar; Raspberry Pis also work) and install Kubernetes on them, either with k3s or using kubeadm. You can run a cluster on VMs or in containers as well, but I think having physical nodes and figuring out the networking on a physical system is a good exercise. It will also force you to image them with Linux and configure them; perhaps you’ll write an Ansible playbook to do this for you. My suggestion is to have an end goal in mind: build a cluster from scratch with the goal of deploying something in it, for example a personal website.
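For reference, a minimal sketch of the k3s route described above, per the k3s install script and its `K3S_URL`/`K3S_TOKEN` variables (the server hostname and the token placeholder are illustrative, not real values):

```shell
# On the first node: install k3s as the control-plane (server) node.
curl -sfL https://get.k3s.io | sh -

# Grab the join token the server generated.
sudo cat /var/lib/rancher/k3s/server/node-token

# On each additional node: install k3s as an agent, pointing at the server.
# Replace the hostname and <node-token> with your own values.
curl -sfL https://get.k3s.io | K3S_URL=https://my-server:6443 \
    K3S_TOKEN=<node-token> sh -

# Back on the server: confirm all nodes joined.
sudo k3s kubectl get nodes
```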


ok_if_you_say_so

You really don't need separate physical nodes, and honestly, doing it this way you are much more likely to run into setup-related issues that only act as a distraction from learning and troubleshooting your actual Kubernetes workloads. In the real world you would basically never have to stitch together a cluster from separate, disparate nodes; they would always be part of a cloud-managed offering or a VM scale set or something like that. I would just use k3d. It uses Docker + k3s to create a multi-node setup right on your single machine. You can practice multi-node concepts without needing separate physical machines and get all the benefits and experience with none of the distractions.
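A quick sketch of what that looks like with k3d (the cluster name `lab` is arbitrary; flags per the k3d v5 CLI):

```shell
# Create a cluster with one server and two agent nodes,
# all running as Docker containers on the local machine.
k3d cluster create lab --servers 1 --agents 2

# k3d updates your kubeconfig automatically; list the "nodes".
kubectl get nodes

# Tear it all down when finished.
k3d cluster delete lab
```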


thelolzmaster

I agree it’s not a necessity; I’m just relaying that those other issues helped my learning experience. In a world of abstractions, I find that the more you know about the underlying technology, the better off you are. I use Kubernetes at work every day and know from experience that container-based Kubernetes clusters like kind, minikube, and k3d can come with their own set of problems. Running physical nodes at home is also a cheap way to run lots of workloads. I find that once you’re done installing the “basics” like a monitoring stack, cert-manager, Harbor, Longhorn, etc., you’re pushing the limits of what you want to run on a single machine, let alone your personal workstation.


ok_if_you_say_so

Yeah, I guess what I'm saying is that in terms of real-world use cases, the process of setting up the nodes and fixing all of the inconsistencies between them doesn't translate to any skill you'd ever really use. You would just scale up the VMSS and be done with it. That's the part that is a distraction. The skills you want to translate to real-world Kubernetes use all come after the nodes within the cluster have been set up and maintained, which you don't get to spend time on if you're building everything from scratch. And I'm not sure if you've used k3d, but it's literally just k8s; there's nothing for it to really screw up. In my years of experience using it I have never once "noticed" it in the stack; it behaves exactly like my VMware Tanzu clusters and exactly like my Azure AKS clusters. Minikube and kind have both done weird non-standard things that, I agree, can end up being distractions of their own. k3d doesn't seem to suffer from that issue.


niceman1212

I want to disagree with you but I really can’t. I do it myself and I enjoy the distractions, partly because I want to self host as much as possible. Though in the grand scheme of things, it is a distraction. Most managed cloud providers provide all needed infrastructure out of the box. And what comes after infra (deploying apps, securing them, troubleshooting them) arguably is a lot more valuable to learn.


thelolzmaster

I tend to disagree. If you’re doing this for learning purposes with no time limit, I think you’d be doing yourself a disservice to skip over Linux fundamentals and jump straight to Kubernetes, particularly when it’s time to fix the inevitable problems that crop up. I’m sure I’ve seen the meme around here of the kid skipping a bunch of steps going up a staircase.


ok_if_you_say_so

Linux fundamentals are something else entirely from bare-metal k8s installation fundamentals. The latter isn't something a real business should really be doing. I definitely agree that learning and keeping up with Linux skills is valuable, given how those fundamentals translate directly inside your running containers. But as far as doing surgery on a node to dissect a one-off issue goes... you wouldn't do that in the real world. You would cordon and drain the node, delete it, and provision a new one from the golden image. Your k8s distro provider would be responsible for cutting those golden images, and you would rely on them for the deep-dive details. I manage many clusters that serve huge amounts of traffic and support dozens of developer teams; this idea that you're going to be digging through a node performing surgery in your day-to-day is just not realistic in the business world.
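For illustration, the replace-don't-repair flow described here boils down to a few standard kubectl commands (the node name `worker-3` is hypothetical):

```shell
# Mark the node unschedulable, then evict its workloads.
kubectl cordon worker-3
kubectl drain worker-3 --ignore-daemonsets --delete-emptydir-data

# Remove the node object; the platform layer (VMSS, MachineSet,
# etc.) provisions a replacement from the golden image.
kubectl delete node worker-3
```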


glotzerhotze

You do work in a cloud environment, don't you? If so, lucky for you that you don't have to comply with regulations forcing you to stay out of the cloud. Furthermore, I think "root-cause analysis" is more valuable for the overall stability and resilience of a distributed computing system. There is also value in automation and in having a short mean time to repair/recover. But relying only on "rebuild on failure" without ever fixing the root cause of the problems will inevitably accumulate technical debt, and future you will pay the interest. And on a personal note: if I wanted to do the above anyway, I could run Windows and schedule weekly reboot maintenance windows. Same thing.


ok_if_you_say_so

What I said is true of on-prem as well, where you would also use an organized offering such as OpenShift or VMware Tanzu. It's the same idea regardless of where you run it. Virtually no business should be building its own k8s from scratch; it's a huge distraction from what the business is trying to focus on. I work in the highly regulated healthcare industry, and yes, we do have a primarily on-prem presence.


glotzerhotze

Why would I pay for a highly opinionated version of Kubernetes to run on-prem if I could just use vanilla Kubernetes and be done with it? I don't think running Kubernetes is more complicated than running a large VMware or OpenStack setup: the same distributed-system problems, just a different tech stack. I also believe in teaching people rather than dumping large amounts of cash onto a vendor. But opinions might differ on the subject at hand. PS: you did read my concerns about the stability and resilience of distributed systems, did you not?


ok_if_you_say_so

There is no such thing as vanilla kubernetes, every distro and installation is opinionated. You're going to pay for it regardless. You can pay for it with people-hours, longer project timelines, and hiring expertise that you need, or you can pay for it by just paying for it. If it is not a core competency of your business (i.e. unless your business is literally selling a k8s platform to other businesses) then it's almost universally just going to be a huge time sink and a distraction to roll your own. But I understand that many engineers suffer from Not Invented Here syndrome and feel the need to do everything from scratch. By the way, do you use Linux From Scratch? It seems like if you're following this policy, you wouldn't want to use a pre-existing linux distribution either.


krupptank

Well, I disagree with the part about it being a VM scale set. I'm operating multiple bare-metal clusters and run the sparse VMs on top of that. This paradigm shift from machines to VMs has of course happened before. However, one must ask oneself whether one wants to learn platform engineering or application development/deployment on top of k8s.


ok_if_you_say_so

I am referring to what is standard practice within the real world in the industry. There is really no business at scale that should be spending their time focusing on bare metal installations like that. It's a solved issue and a distraction from the business's core competencies. If it's something that is a personal interest of yours, by all means experiment away. I am primarily focused on this from the perspective of a real world business case.


krupptank

Really? So what are hypervisors installed on? Traditional hypervisors (the VMware/Proxmox approach) are becoming obsolete...


ok_if_you_say_so

That's beyond the scope of the discussion. If you are talking about using k8s, then how you manage the lifecycle of your VMs is out of scope; it's a separate blast radius, and still another skill that doesn't translate for someone who is posting in /r/kubernetes asking where to start. It just isn't something you need to be worried about in your real-world day-to-day job as a Kubernetes admin or app developer.


Nicolasayudame

Thank you! My idea was to integrate this cluster with a Jenkins pipeline so I can deploy some of my applications and see how the cluster handles possible failures. Maybe use Helm (or kubectl in the beginning) to deploy the applications. I already use Ansible in my homelab; I didn’t know I could use it to configure cluster nodes, thank you!


MuscleLazy

You should use ArgoCD instead of Jenkins. My cluster is deployed onto 8 Raspberry Pis; see this example: https://github.com/axivo/k3s-cluster
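If you go the ArgoCD route, the standard install from the project's manifests looks roughly like this (namespace and secret names are the project's conventions; check the current Argo CD docs before running):

```shell
# Install Argo CD into its conventional namespace.
kubectl create namespace argocd
kubectl apply -n argocd \
    -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# Retrieve the generated initial admin password.
kubectl -n argocd get secret argocd-initial-admin-secret \
    -o jsonpath="{.data.password}" | base64 -d
```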


Nicolasayudame

Cool, thank you so much! I will definitely take a look at it!


thelolzmaster

You can even run Jenkins on your cluster using the Helm chart. Happy homelabbing!
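A sketch of that Helm install (release and namespace names are arbitrary; the password command follows the chart's post-install notes at the time of writing, so verify against its output):

```shell
# Add the official Jenkins chart repo and install into its own namespace.
helm repo add jenkins https://charts.jenkins.io
helm repo update
helm install jenkins jenkins/jenkins \
    --namespace jenkins --create-namespace

# Fetch the generated admin password.
kubectl exec --namespace jenkins svc/jenkins -c jenkins -- \
    /bin/cat /run/secrets/additional/chart-admin-password && echo
```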


Nicolasayudame

Thanks!


raw65

Yes, k3s is basically a lightweight Kubernetes deployment. What you learn on k3s will be helpful in any Kubernetes environment. Because k3s is optimized for resource-constrained environments you may not be able to explore all Kubernetes capabilities, but it will be enough to keep you busy for a long time. I highly recommend the [Kubernetes Networking Series](https://www.youtube.com/playlist?list=PLSAko72nKb8QWsfPpBlsw-kOdMBD7sra-) from [The Learning Channel](https://www.youtube.com/@TheLearningChannel-Tech) YouTube channel. Those videos are long and slow but have really, really good content if you want to truly understand Kubernetes (and containers in general). You can absolutely run a cluster on VMs, and I would argue it is the best way to learn because it's easy to spin up and shut down clusters to test various scaling and failure scenarios. More memory would certainly be helpful, though. For learning purposes I wouldn't expect cores to be the limiting factor. You could mix in Pis with VMs if that's what you have access to. My personal preference, though, would be to add as much memory as possible to your Proxmox machine and use that.


Nicolasayudame

Thank you so much! You gave me a ton of information; I’ll spend a lot of free time on that! :) I didn’t imagine I could mix VMs and physical devices. I think I’ll take a look in this direction; I want to know how the cluster behaves when there are different devices/architectures, maybe with different resources.


anthonybrice

k3s is great. Learn everything you can with it. Everything you learn with k3s will be worthwhile, and there will still be plenty more to learn when you're done with k3s. Beyond the official documentation, the best resources for learning about full stack development on Kubernetes are Google's SRE books. [https://sre.google/books/](https://sre.google/books/) Hardware-wise, I like Zimaboards and that's what I use at home. However it doesn't really matter. I could just as easily be running k3s on hetzner for cheap. glhf


Aggravating-Body2837

One question no one is asking: what are you aiming to learn? To be a k8s user or a k8s admin? Usually a dev (like yourself) will be a k8s user, so if what you want is to get better at your job, create a managed k8s cluster with one of the cloud providers. For simplicity's sake I'd recommend GKE Autopilot. Then start deploying stuff onto the cluster with all types of features: internet-exposed vs. internal apps, stateful vs. stateless apps, scaling apps, injecting configuration and secrets, etc. If you want to get your feet wet with managing a k8s cluster, I'd advise playing with "Kubernetes the Hard Way", with cloud VMs, VirtualBox VMs, or physical machines, whatever. Do it once in zombie mode, without much thinking, just following the steps. Then dive deep into the steps. Either way, I'd recommend starting with the first approach, so at least you know how you can test/use your cluster.
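The "k8s user" exercises suggested here can all be done with plain kubectl; a hedged sketch (image, names, and literal values are placeholders):

```shell
# A stateless app, scaled to three replicas.
kubectl create deployment web --image=nginx --replicas=3

# Internal-only exposure: a ClusterIP service.
kubectl expose deployment web --port=80 --type=ClusterIP

# Internet-facing: on a managed cloud this provisions a load balancer.
kubectl expose deployment web --port=80 --type=LoadBalancer \
    --name=web-public

# Inject configuration and secrets for the app to consume.
kubectl create configmap web-config --from-literal=MODE=prod
kubectl create secret generic web-secret --from-literal=API_KEY=changeme
```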


Nicolasayudame

That’s a good question lol. I would like to be both; I know it might be hard. I think the best approach for me may be a mix of the two. I can go into “zombie mode”, set up a bunch of VMs, try to deploy something, and then go deeper. Thanks!


Aggravating-Body2837

Yep, that's good too. You'll see, while you're at it, questions will start to pop up left and right. Feel free to PM me if you're having trouble with the journey; glad to help.


ReNTsU51

I personally prefer to use minikube. It doesn't have the power of k3s, nor does it have the same purpose, but for a learning environment it is probably the best you will get.


Nicolasayudame

Thank you! I tried minikube a long time ago but I felt like I was missing out on something. I learned how to deploy a basic application on it, but I couldn’t completely understand how fault tolerance works or how nodes are created. Maybe I’ll give it another chance :)


Rain-And-Coffee

Hey bud, I did this a few months ago. I learned Proxmox and Kubernetes at the same time, but doing one at a time is also fine. I started by just learning basic Kubernetes concepts, played with minikube locally, then stepped up to running my own cluster. I used 3 mini PCs and ran MicroK8s on them. There are a ton of blogs that walk you through the process; a Google search should turn them up. If you get stuck, just Google that particular thing.


Discomfited8812

I’ve recently gone through Kubernetes the Hard Way using VirtualBox on my home Windows 11 machine. It was interesting to say the least, but it worked. Now I’m going through the kubernetes.io guide using kubeadm and Ubuntu servers set up in VirtualBox. So far the control plane is running; the next step is to join the worker nodes. I’m finding that using WSL to SSH into my VMs is a better solution than working directly in the VMs in VirtualBox. For my job I work in OpenShift daily, so this has been a fun experience, using kubectl and contexts versus the oc command line. I’ll be following this thread for ideas if I branch out to something more serious for a home lab too.


Bloodrose_GW2

k3s is perfect for learning. Rancher as a commercial product is based on the same thing. Raspberry pi is a usable alternative. I run a cluster made of 4 Pi4 boards.


Independent-Chef9421

1. https://github.com/kelseyhightower/kubernetes-the-hard-way if you really want to learn
2. People (sorry guys) using RPis for this are mad. Buy some second-hand thin clients like Dell/Wyse if you want hardware; otherwise run lxc/lxd (it doesn't need the same overheads as VMs)


Speeddymon

Straight answers to your questions:

> Software-side:
>
> * Is there any difference between k8s and k3s? Or is k3s just a lightweight version of k8s?

No to the first, yes to the second.

> * If I learn in a k3s environment, can I use those skills on k8s? Or will I miss out on something?

Yes to the first, no to the second.

> * Do you have some learning resources (YouTube, books, video courses) besides the official documentation?

Sure! On Udemy, look up Kubernetes for the Absolute Beginner. This is a course taught by Mumshad Mannambeth, a highly respected instructor and member of the team at the KodeKloud learning academy. [Here](https://www.youtube.com/watch?v=yH1LkWLocVo) is a link to a YouTube video about the course. After you complete the course, you will have the very basics for deploying and managing your app in Kubernetes. Next you will want to look for their CKAD course (Certified Kubernetes Application Developer). This will teach you enough to troubleshoot your app running in Kubernetes on a managed cloud.

> Hardware-side: I own a homelab with Proxmox running on a quad-core and 16 GB RAM.
>
> * Can I run a "virtual" cluster using LXC containers/VMs as nodes? Can this work?

Absolutely! You could run it in Vagrant or VMware or Hyper-V, or whatever VM solution you fancy.

> * Are some Raspberry Pi/Zero boards a better alternative?

Define "better"? What will this homelab be running? What requirements do you have?


Nicolasayudame

Hi, thanks so much for all the info and the course! I’ll start in a virtual environment and then, maybe, move to physical devices. I dropped the idea of Pis due to other comments; I’ll probably use some Dell OptiPlexes (or similar).


Kooky_Comparison3225

To get started with Kubernetes in a homelab, I recommend using MicroK8s due to its simplicity and ease of setup, which makes it perfect for learning. MicroK8s can run efficiently on your Proxmox setup using VMs or even on Raspberry Pi devices, providing a flexible and powerful environment to master Kubernetes. For a detailed step-by-step guide, check out my article on creating a local multi-node K8s cluster with MicroK8s and Multipass: [Creating a Local Multi-Node K8s Cluster with MicroK8s and Multipass](https://devoriales.com/post/267/creating-a-local-multi-node-k8s-cluster-with-microk8s-and-multipass).
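A rough outline of the MicroK8s + Multipass setup described above (VM names and sizes are arbitrary; flags per recent Multipass versions, so check `multipass launch --help` on yours):

```shell
# Launch two Ubuntu VMs with Multipass.
multipass launch --name node1 --memory 4G --disk 20G
multipass launch --name node2 --memory 4G --disk 20G

# Install MicroK8s in each VM.
multipass exec node1 -- sudo snap install microk8s --classic
multipass exec node2 -- sudo snap install microk8s --classic

# On node1, print a join command, then run the printed
# "microk8s join ..." line inside node2 to form the cluster.
multipass exec node1 -- sudo microk8s add-node
```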


fuzzy812

Minikube


Bright_Mobile_7400

Have a look at JimsGarage on YouTube; he has great content on Kubernetes. You can also look at TechnoTim, who has an Ansible playbook to create a cluster.


Nicolasayudame

Thank you, I will!


drosmi

Try minikube. If you already know docker then try kind.


crazymanushya

I have a comparison worth referring to of what K3s is capable of compared with Talos; you can find it here: [https://www.cloudraft.io/blog/k3s-vs-talos-linux](https://www.cloudraft.io/blog/k3s-vs-talos-linux). I think it covers most of your questions!


Nicolasayudame

Thanks, I’ll take a look at it!


SJrX

I guess a different perspective from other people, but I try to make projects that are useful in the long term and run indefinitely. If you have an idea of things you want to host on it, that might help.

I'd recommend automating it from the start (and I'd recommend Ansible). Automating both the setup and the teardown makes it about as easy as VMs, but also lets you start fresh and repair things much more easily. I know a few people who have built their own clusters, and the problem they have is that the clusters get out of date quickly; then you are basically redoing the work, which they never do. Exactly one year after I spun up my cluster, all the default certs expired and the cluster was dead. It was running Home Assistant and a few other things, and I didn't have time to troubleshoot, but I just blew away the cluster and reinstalled, and 10 minutes later things were back where they were (the cluster is stateless). I think my cluster started on 1.24, and I upgraded it a few weeks ago; it was so painless with the automation. I think it took an hour.

I went with, and would recommend, a physical cluster: 6x Raspberry Pi 4s. I've heard that small Intel NUC/mini PCs have better price and performance. The reason I would go with a physical cluster is mostly an aesthetic choice. I think it makes sense, when you have a cluster and want to run containers on it, to use Kubernetes, and it simplifies a lot of things in this case. I don't think it makes sense, if you have one server and want to run a container, to run a number of VMs and use that, when you can just use docker-compose. That's just my opinion though.

One final piece of advice: don't let perfection or complexity prevent you from getting something useful. There is a saying, which I will butcher, that if you don't regret anything about a project, the project was too easy and you didn't learn anything.


Nicolasayudame

Thanks! I think I will start with VMs so I can make a backup of the base Linux image and refresh the whole environment if shit happens. I will take a look at Ansible too, to easily keep the whole project updated.


OkAcanthocephala1450

"I Am a FuLL-sTAck deVeLOpEr" - said the one that knows the backend and a little frontend.