Microsoft doubles down on Kubernetes for Azure (dxc.technology)
200 points by CrankyBear on Nov 26, 2017 | 60 comments



Edit: Thanks to the replies explaining that AKS does not, in fact, charge for the master! I'm leaving the rest of this comment standing because it's an honest reply to the article as posted - Microsoft isn't being disingenuous; this article is just wrong.

> Be that as it may, Microsoft is offering AKS for free. You will only pay for the virtual machines (VM) that you use for managing your Kubernetes cluster. Microsoft says, “Unlike other cloud providers who charge an hourly rate for the management infrastructure, with AKS you will pay nothing for the management of your Kubernetes cluster, ever. After all, the cloud should be about only paying for what you consume.”

This is disingenuous. We're talking about k8s, so GKE is the comparison point. Yes GKE charges a flat hourly fee for cluster management, regardless of the number or size of the nodes[0]. But GKE doesn't charge for the master instances (which are hidden away behind the GKE product). So AKS will not charge for the cluster management, but will charge for the master instances. Neither pricing model is more "for free" than the other.

Also, if we have to think about the master instance sizes on AKS because we're paying for them, then that's one thing we have to manage on AKS that we don't have to manage on GKE - GKE adds more value by more completely managing our cluster.

[0] Pedantic: there's actually no fee for clusters of fewer than 6 nodes.


> Also, if we have to think about the master instance sizes on AKS because we're paying for them, then that's one thing we have to manage on AKS that we don't have to manage on GKE

This is entirely premised on your speculation which turned out to be false. Curious that you're willing to call this blog post disingenuous when you're the one spreading falsehoods?


"You will only pay for the virtual machines (VM) that you use for managing your Kubernetes cluster."


Statements like Microsoft's are horrible; "the cloud should be about paying for what you consume"? Well, Kubernetes consumes at least one master node in order to orchestrate a cluster.

But Google/GKE does charge for management of the cluster: a constant fee of $109/mo for 6+ nodes (the fee is billed at $0.15 per cluster per hour, which over a ~730-hour month works out to roughly $109.50). It's not technically a charge for the master node, but that cost to Google plus other things is likely where that $109/mo comes from.


GKE does charge for the masters for clusters of 6 or more nodes. See https://cloud.google.com/kubernetes-engine/pricing. Microsoft's offering does not seem to charge you for the masters ever, so that does seem like a differentiator.


You write "Microsoft's offering does not seem to charge you for the masters ever". The article says "You will only pay for the virtual machines (VM) that you use for managing your Kubernetes cluster". Do you have a source that disagrees with the article?

Edit: to those replying with corrections to the article, if you were to post them as top-level comments on this thread, we could upvote them and better counter the misinformation. I have edited my own comment to explain what you have dug up. Thanks for your research!


Yes, I have played with AKS. You do not pay for the Kubernetes masters, just the nodes. In the prior version of the product (ACS) you would specify a cluster size as a number of masters and a number of workers/nodes. In AKS, you only manage the latter and Azure abstracts away the Kubernetes API.

https://azure.microsoft.com/en-us/pricing/details/container-...


> Yes, I have played with AKS. You do not pay for the Kubernetes masters, just the nodes. In the prior version of the product (ACS) you would specify a cluster size as a number of masters and a number of workers/nodes. In AKS, you only manage the latter and Azure abstracts away the Kubernetes API.

On GKE, if I run a Kubernetes cluster of size 1, I'm still charged for a single VM. You're saying that if I do the equivalent on Azure, I won't get charged at all?


Based on the comments in this thread, GKE charges you for masters on clusters with 6 or more nodes, while MS AKS will only charge you for the nodes.

As with anything cloud related, though: it's all about relevant cost calculations for the specific solution...


https://docs.microsoft.com/gl-es/azure/aks/intro-kubernetes

> "you pay only for the agent nodes within your clusters, not for the masters."


Which is to be read as "the price of the master is incorporated into the amortised per-unit-time price of the agent nodes". Microsoft is not giving you something for nothing here.


To be fair to MS here, the agent price is the same as the stand-alone VM pricing, so there really does not appear to be a premium/hidden cost here.


Either that, or everyone is paying some kind of premium to subsidise the service. I would be extremely surprised if it's not rolled into some cost-of-goods-sold metric; if there's one thing Microsoft is not, it's bad at passing on costs.


Google and GKE team member here: The AKS announcement was either poorly worded or intentionally vague. I choose to grant the benefit of the doubt.

For small clusters (1-5 nodes) GKE does not currently charge anything "extra" beyond your own node VMs. That is LITERALLY a $0 master.

Beyond 5 nodes we currently charge a flat rate that covers your zonal master(s) regardless of how many or how big they need to be.


...and as of today there is no fee for masters on GKE.

https://cloudplatform.googleblog.com/2017/11/Cutting-Cluster...


This is a good thing; Microsoft will increase competition in this space by applying its expertise in dev tools to Kubernetes.

I just hope that MS doesn't add in too many platform-specific pieces that would encourage vendor lock-in.

For example, k8s ingress right now is based on controllers, and the GCE controller supports (or doesn't support) a wide variety of things compared to the nginx ingress... these areas of Kubernetes make me worry about potential future switching costs.


Note that Microsoft Azure was a launch partner this month for the Certified Kubernetes conformance program, which is explicitly designed to reduce the chance of forking and lock-in by defining the standard Kubernetes APIs that all platforms and distributions must support.

https://www.cncf.io/certification/software-conformance/

(Disclosure: I'm executive director of CNCF and helped lead the conformance efforts.)


So, on a tangent, what does your directorship in CNCF entail and how does one end up in a position like this?


It mainly involves replying to HN threads on the weekend.

More seriously, this slide deck gives an overview of all the areas in which CNCF is involved, from the Certified Kubernetes program to providing marketing and other services to our projects, to offering training and certification.

https://docs.google.com/presentation/d/1BoxFeENJcINgHbKfygXp...

We're also running KubeCon + CloudNativeCon in a week, with over 4,000 people expected.

http://events.linuxfoundation.org/events/kubecon-and-cloudna...

As to how to get into it, you can take a look at my background and also look at opportunities with our parent, the Linux Foundation.

https://www.dankohn.com https://www.linuxfoundation.org/about/careers/


I am familiar, ironically. I spend my downtime listening to SE Daily, The New Stack Makers, and other things to make car rides less boring. CNCF comes up a non-trivial amount. I'll check out your personal site. Thanks.


You might be interested in a Changelog podcast I did that was recorded a few days ago.

https://changelog.com/podcast/276

(Disclosure: I'm still the executive director of CNCF)


Fortunately there's a bunch of ingress controllers you can use: Traefik, Voyager, HAProxy, and probably several others. And it's surprisingly trivial to write your own.

So ingresses are not currently a weak point where vendor lock-in is concerned. And Kubernetes already supports plenty of non-Google tech; as an open source project, Kubernetes is refreshingly non-Google-focused (there are a bunch of players, notably Red Hat and Microsoft, ensuring this).
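To make that concrete, here's a minimal sketch of how a controller gets selected (the names and hostname are made up; the annotation-based class selection reflects roughly how the current 1.8-era API works). The same Ingress object can be handed to nginx, Traefik or a cloud controller just by changing the class annotation:

    # Hedged sketch: which controller picks this Ingress up is decided by the
    # kubernetes.io/ingress.class annotation, so the object itself stays portable.
    apiVersion: extensions/v1beta1     # Ingress API group as of Kubernetes 1.8
    kind: Ingress
    metadata:
      name: example-app                # hypothetical name
      annotations:
        kubernetes.io/ingress.class: "nginx"   # swap for "gce", "traefik", etc.
    spec:
      rules:
      - host: app.example.com          # hypothetical hostname
        http:
          paths:
          - path: /
            backend:
              serviceName: example-app
              servicePort: 80

The controller-specific differences people are describing in this thread show up mostly as extra annotations on top of this, which is where portability starts to erode.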

As an aside, the current ingress controllers (including the Nginx one and Google's own GLB one) all have annoying deficiencies, with subpar support for TLS and per-route timeout settings among the biggest. Ingress support is, and has been for a long time, Kubernetes' weakest point. For example, GLBs max out at 20 TLS certs (ridiculous if you're hosting many customers on a SaaS solution) and default to a timeout of 30 seconds, which doesn't work for big streaming requests and WebSockets (you can't control this using ingress annotations; you have to manually go in and edit the backends via the API or UI). These are also fairly trivial problems compared to the complex ones being solved by big new features in Kubernetes proper, so it's a bit surprising that the ingress implementations are lagging to this extent.


>> As an aside, the current ingresses (including the Nginx one and Google's own GLB one) all have annoying deficiencies

Agree, this is along the lines of what I meant. For example, the GCE ingress allows you to reference a global static IP, while this is not possible with the nginx ingress alone due to limitations of the TCP load balancer. There are separate ingress annotations for GKE/nginx/haproxy, etc. If I want to use the global-static-ip GCE annotation, that will make it a little harder to move to Azure.
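For anyone curious, a minimal sketch of what that looks like (the static IP name and service are hypothetical; the annotations belong to the GCE ingress controller):

    # Hedged sketch: the GCE/GKE ingress controller can bind a pre-reserved global
    # static IP via an annotation; the nginx controller has no direct equivalent.
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: example-app
      annotations:
        kubernetes.io/ingress.class: "gce"
        kubernetes.io/ingress.global-static-ip-name: "example-static-ip"   # name of a reserved global address (made up)
    spec:
      backend:
        serviceName: example-app       # hypothetical service
        servicePort: 80

Moving this to Azure (or to the nginx controller) means finding a different mechanism to pin the IP, which is exactly the sort of small friction being described.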


Totally fair points, and vendor lock-in is something to be highly vigilant of.

For some context, though: the ingress functionality is fairly recent in Kubernetes and is still being fleshed out as demands take shape in the market and across cloud providers. Another year or two will do a lot for feature parity and saner networking solutions for small-scale and bespoke K8s deployments.

It came as a bit of a shock that I had to set up an ingress controller on each of my nodes (on our VMware cluster), but after going through the nuts and bolts of it I can see why it's lagging a bit behind the rest of the product. I think the devs are being smart in letting the ecosystem mature a little instead of pushing out something half-baked and prematurely limiting.
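For reference, the "controller on every node" pattern usually ends up looking roughly like this sketch (image tag, names and args are illustrative only, assuming the nginx ingress controller and a pre-existing default backend):

    # Hedged sketch: run the nginx ingress controller on every node as a DaemonSet,
    # bound to the host network so each node accepts HTTP(S) traffic directly
    # without a cloud load balancer in front.
    apiVersion: extensions/v1beta1     # DaemonSet group/version in common use circa 1.8
    kind: DaemonSet
    metadata:
      name: nginx-ingress-controller
    spec:
      template:
        metadata:
          labels:
            app: nginx-ingress
        spec:
          hostNetwork: true            # expose ports 80/443 on every node
          containers:
          - name: nginx-ingress-controller
            image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0   # illustrative tag
            args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend   # assumes a default backend is deployed
            env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace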


You can always layer these. Have a simple global ingress pointing to a service or daemon set of nginx/haproxy.
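A minimal sketch of that layering (all names hypothetical): the cloud controller owns one dumb Ingress whose only backend is a NodePort Service selecting the in-cluster nginx/haproxy pods, and all the interesting routing rules live in the nginx layer.

    # Hedged sketch of the two-layer setup: a GCE Ingress as the global entry point,
    # forwarding everything to an in-cluster nginx ingress layer.
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: edge
      annotations:
        kubernetes.io/ingress.class: "gce"
    spec:
      backend:
        serviceName: nginx-ingress     # hand all traffic to the nginx layer
        servicePort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-ingress
    spec:
      type: NodePort                   # the GCE controller expects NodePort backends
      selector:
        app: nginx-ingress             # matches the nginx controller pods (e.g. a DaemonSet like the sketch above)
      ports:
      - port: 80
        targetPort: 80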


The problem with Ingress "lagging" is that it was designed to be a lowest-common-denominator API - it only absorbs logic that exists in the majority of realistic implementations. For better or worse, cloud LBs are vastly more limited in feature-set than Nginx or Envoy, so Ingress is too.

This is a big topic for debate, and will be on the agenda at KubeCon in O(days).


This is understandable, of course. In hindsight, I suspect the current concept of an abstract, one-size-fits-all ingress was, and is, going in the wrong direction.

With CRDs, we could have each ingress controller provide its own, native ingress object ("nginx-ingress") that had the exact features it supported (with schema validation). The ingress controller would then create or delete cloud-specific CRDs ("google-loadbalancer") based on what flavour of cloud you're running under, which Kubernetes could pick up and use. Or something like that.
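As a purely hypothetical illustration of that idea, a controller could register its own typed ingress object via a CRD (the group, kind and fields below are all made up):

    # Hypothetical sketch: a controller-specific "NginxIngress" type registered as a
    # CRD, so its spec exposes exactly the features nginx actually supports.
    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: nginxingresses.ingress.example.com   # made-up group and name
    spec:
      group: ingress.example.com
      version: v1alpha1
      scope: Namespaced
      names:
        plural: nginxingresses
        singular: nginxingress
        kind: NginxIngress
      validation:                      # OpenAPI schema validation for CRDs (beta in 1.9)
        openAPIV3Schema:
          properties:
            spec:
              properties:
                tlsTimeoutSeconds:     # made-up field, just to show a controller-specific knob
                  type: integer

The controller would then translate those objects into whatever cloud-specific resources exist underneath, as described above.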

But as you say, some of the friction exists because cloud LBs are limited in the first place. The arbitrary cert limit on GCP is particularly egregious. We run a SaaS solution with about 100 vendor domains, which means we've been forced to use the Nginx ingress controller and terminate TLS there, instead of at the GLB level where it arguably belongs. (We could run 10 GLBs, but that would require splitting our ingresses into 10 separate ingresses, with the duplication and potential for copy/paste errors that would ensue.)

But thirdly, it's also true that several of the ingress implementations are just a bit sloppy. Traefik, Voyager and haproxy-ingress all have issues with using TLS certs (all of them have open issues about serving both HTTP and HTTPS at the same time, I believe). A lot of today's ergonomic problems could be solved by polishing up these projects.


First, Ingress had a purpose to serve, and it has served that purpose - it is relatively easy to handle low-complexity apps with generic Ingress.

But low-complexity apps don't stay that way.

What you're describing is very much the way my brain has gone. In my experience, most users end up using at least one non-portable annotation on Ingress. The logical conclusion then, is that people care about features MORE than portability in this facet of the API.

This is not surprising to me, given how religious the debate tends to be...


Microsoft should write an ingress controller for Azure? I'm not sure how you can get into a vendor lock-in situation with k8s. Maybe Microsoft extends the API with its own black-box controllers and forces those instead of kube primitives?


I seriously doubt it’s a deliberate strategy on their part in this case, but Microsoft has been known to pull this type of maneuver in the past:

Step one: “We love Kubernetes! Run it on Azure! It’s great”

Step two: “Kubernetes experience on Azure is best enjoyed using Microsoft Enterprise Ingress, Microsoft Custom Resources for Business, and developing in Visual Studio. Use those!”

Step three: “Oh, the CTO wants to migrate workloads to AWS instead? Too bad you used so many of our custom and proprietary add-ons; that's going to make migrating incredibly expensive. Just stay on Azure and everything will be just fine...”


Plausible, but it's not as if AWS doesn't promote its own proprietary lock-in services.

K8S does help a bit in this regard, but all three of the big cloud players have reasons, products, policies, and strategies to lock you in.


Yup, just explaining how K8s is not a magic salve you can rub on everything to prevent vendor lock-in.


In fact MS acquired Deis sometime in March this year.

Draft.sh, Helm, etc. are under MS now, though they do remain OSS.


And Deis Workflow was marked EOL later on, something like July of this year, ostensibly so those devs could devote more time to Azure and other OSS offerings.

A lot of the complexity of getting serious deployments running on Kubernetes is neatly abstracted away by Deis, so much so, IMHO, that I'm now not sure how to get my team to take up Kubernetes without encouraging them to use Deis. To be clear, we are using Deis, but in a very limited capacity, in large part because of the perceived risk associated with those yellow construction triangles[1]. (We are so lightly invested in K8S that I don't think we really have another part of the plan yet. The plan is to spend 6 months on ECS and not explore Kubernetes until after; I can't wait to hear what news comes this week from re:Invent, or whether this strategy will even be realistically possible to follow once the "EKS" news hits.)

Some of us really felt that MS making this move shortly after acquiring the Deis team was a bit like throwing out the baby with the bathwater, but those devs have assured us they are not just taking directions from corporate, and that they actually are going where the demand is (that there's not enough demand for Deis Workflow specifically for it to be strategically important for Microsoft, but there is plenty of demand for Kubernetes at large and for more Kube-native tooling around K8S issues).

I'm not sure how closely you've been following this story, so apologies if I'm telling you things you already know. And I'm not one to complain about how my free stuff is delivered, but this change really came out of nowhere for me and has thrown a massive wrench into my own Kubernetes adoption strategy. Some of us are trying to make sure that the OSS Deis Workflow tools are not lost to bit rot, and development of the project now continues under the name "Hephy"[2].

Also, just a minor nit: I think that although Helm was originally made by Deis, Helm[3] the package manager has not been "taken" under the Microsoft umbrella, as it was adopted as part of Kubernetes proper before the sale of Deis.

[1]: https://github.com/deis/workflow/

[2]: https://github.com/teamhephy/workflow

[3]: https://github.com/kubernetes/helm


> I just hope that MS doesn't add in too many platform-specific pieces that would encourage vendor lock-in.

I watched one of their videos demonstrating the product, and they clearly understood that people want 100% compatibility with open-source k8s and no special Azure/MS features, so this was one of their selling points, which is good.


> I just hope that MS doesn't add in too many platform-specific pieces that would encourage vendor lock-in.

That's Microsoft's main strategy though. Example: Users can't even install Firefox or Chrome on Windows 10 S.

Microsoft announced: "Apps that browse the web must use the appropriate HTML and JavaScript engines provided by the Windows Platform."

I wouldn't be the slightest bit surprised to see similar things happening to their cloud platform as soon as they are able to do so without users noticing.


That's a different product, with different goals. You're right about 10 S, but it's kinda unrelated to Azure.


I've lived through the experience of using non-MS products since the early 2000s, and all of those things are related. It has been one of their core strategies and still is, though the techniques are shifting.


> "Apps that browse the web must use the appropriate HTML and JavaScript engines provided by the Windows Platform."

This seems like a thing that platform owners are doing more and more (iOS has a similar clause, and ChromeOS, well, uses Chrome).

Is this just purely a move to stifle competition from web apps that could threaten control of the platform? Or perhaps I am missing some nuance behind these kinds of decisions? I'm honestly struggling to find any room to give them benefit of the doubt here.


A JavaScript JIT requires memory that is both writable and executable. W^X (write xor execute) is a security feature and is required for the security model of walled gardens; otherwise you could pull down unapproved code at runtime and render the walled garden useless.

They don’t want shitty JavaScript interpreters making their platform look bad, so they force you to use their system JavaScript engine.


Yeah, the excuse for taking away freedoms usually is "security".

Walled gardens are one of the problems. They should be rendered useless.

It's a company that caused many people to suffer through Internet Explorer for 20 years. I don't think that security[1] or making the platform look bad[1] is the primary motivation behind blocking competing browsers. MS saw that other companies got away with it using different tactics, so they just changed their approach.

[1] http://www.debugcomputer.net/uploads/images/ie%201.jpg


> This seems like a thing that platform owners are doing more and more (iOS has a similar clause, and ChromeOS, well, uses Chrome).

Shame on all of them. If it continues, the next generation will not know what it's like to have technology freedom like we currently do (in the US, for the most part).


Whenever I look at Kubernetes, I can't help but think that it's overcomplicated, with too many moving parts. Almost as if to incentivise using the hosted solutions instead of rolling your own on some VPSs. I can deploy an HA Swarm Mode cluster anywhere in minutes, while I'm not even sure where to start with Kubernetes. Do I use juju (the recommended Ubuntu way), or do I use kubeadm? Which network add-on do I use? How do I upgrade (the procedure seems to be different with each version)?


If you’re not prepared to dedicate a small team to running Kubernetes, I’d say it’s too much for your needs.

Read the Borg paper for an idea of what k8s is meant to solve. If you have hundreds of services spread across 10s/100s of nodes, then the complexity of k8s starts to become amortized. Similarly if you have 100s of devs without a consistent way to do deployments, k8s starts to make sense.

Source: tech lead for a team managing a large on-prem and cloud k8s cluster.


Well...at least it's not OpenStack!

Out of the couple of stacks I tried, kops was the easiest to set up. Juju and kubeadm were so-so.


It doesn't help that most of the open source solutions for deploying a cluster are either poorly engineered (e.g. kubespray) or poorly maintained (e.g. kubeadm). You are on to something... organizations and people that have the capability and resources to build the tooling to manage Kubernetes setup complexity seem to think they're much better off charging for it.


There's a way of setting up a bare-metal (or VM-based) cluster using Ansible. It may be the sort of 'easy in' you're looking for:

https://kubernetes.io/docs/getting-started-guides/fedora/fed...

If you manage to automate deployment of the VMs or physical nodes (e.g., PXE), you've got a cherry on top.


+1

Highly agreed. I'm actually surprised that most people don't try Docker SwarmKit. It is unbelievably good for what it does.


I really, really hope AWS joins the game. re:Invent is around the corner.


Well, you can use something like kops, although it's a little more difficult to get started, I guess. This new tool from Azure is basically the same thing.

It's a little interesting since they already have ECS, which isn't that bad either, but it seems like everyone just wants Kubernetes. Will they do both, or provide a migration path?


ECS is okay; I've used it quite a bit recently. It definitely has some quirks, like having to use CloudWatch events to figure out the port given to a container when using bridge networking. Maybe if the service discovery story were a little better defined, ECS would be easier to use. There are lots of little manual things one has to do around it to make it work as smoothly as other available solutions.

With regard to k8s on AWS, I don't have a problem running it myself, but some backing from AWS would definitely help establish k8s as "the solution" for running containerized workloads. Here's hoping.


One of the re:Invent talk descriptions referenced something called "Elastic Container Service for Kubernetes (EKS)". Make of that what you will.


Didn’t AWS just join the CNCF (the foundation behind Kubernetes) earlier this year? If so, we’ll see what comes of it.

One thing ECS does is fully integrate ALBs and NLBs with containerized services, and it makes the plumbing and scaling pretty easy to set up (at least IMO).

If they can swap out the custom ECS service that manages containers with K8s, that makes sense to me and would mean all the major cloud providers are centralizing on K8s... which in turn means we could stop building a new container orchestration layer every month and move on to some new problem :)


I'm almost sure they will. They don't have much of an NIH problem (look at how many DBs they support; anyone remember SimpleDB?), ECS hasn't exactly taken off, and they're generally very good at giving their customers whatever they want.


Fun fact: SimpleDB still exists and has customers.

It's just very very very hard to find a link to it from AWS :)

https://aws.amazon.com/simpledb/


I wrote about SimpleDB earlier this year - it still has some use cases that aren't readily served by other database providers.

https://medium.com/zendesk-engineering/resurrecting-amazon-s...


So restaurants give you ketchup for "free" too, but it turns out you're paying for it.

It's stupid to say the management node is free; you're just paying for it in another way. Now, if they said they were able to run the node so cost-effectively that it's simply included in the hourly node cost, then that would be a plus.


For SMBs and smaller dev shops kicking off their forays into container development, not to mention customer deployments, the simplified cost model makes it more management-friendly and more understandable to non-IT actors.

Whether it is absolutely cheaper than GCE or whatever, and whether master management is a requirement, is a separate comparison that depends on the specific solution. Cutting out the management node costs removes a barrier to entry and makes for a more streamlined product.

Oh, it also makes their Kubernetes services usable within the free credits they give out on Azure via their MSDN subscriptions... so suddenly millions of devs can use AKS for demos and temp-hosting "for free", which is huge in creating mindshare in the Enterprise...


I'm sure you're right, but wow, I don't understand economics at that level.


I hope they don't embrace, extend, extinguish.



