Docker Raises $95M Series D Round for Its Container Platform (techcrunch.com)
126 points by carlchenet on April 14, 2015 | 86 comments



I don't mean to be overly dismissive or anything, but what does Docker make that customers will pay for? This is an awful lot of money to throw at a company that doesn't have a business plan, let alone technology that can't be readily replicated.


Customers will pay for Docker Hub potentially. The type of customer who would pay them for Docker Hub is roughly the type of customer who would pay for Github (with market size constrained by Docker adoption.) That is, customers who are willing to pay a bit for private hosting of some part of their development or deployment process and are willing to do that using SaaS.

My tiny company pays them for private repositories as part of our deployment process. (Push to Github, Docker Hub makes automated build, built Docker image gets sent to AWS ElasticBeanstalk). It's worth the $7 a month for the convenience of not doing the builds and hosting the images ourselves.

Not saying it's going to turn into a business model that will justify that valuation, just that it does at minimum provide a value I'm willing to pay a small amount for.


From the article: "Among the respondents, Docker recorded the strongest buying intention score the researchers recorded in the six years they’ve run this survey. Messina also noted that about 50 percent of the companies in the current Docker Hub beta are Fortune 100 companies."


Are those Fortune 100 companies purchasers of the Docker Hub service, or mere users? If such a company downloaded even a single Docker image from the Hub, does that count?


I assume by the "current Docker Hub beta" they're referring to Docker Hub Enterprise, so they'll have set up their own registries.


This reminds me of Steve Klabnik's article, "Is npm worth $2.6MM?" [1]. Even if the company is barely able to create a sustainable business model or generate any significant profits, will it help create an ecosystem that in turn produces billion-dollar companies?

[1]: http://words.steveklabnik.com/is-npm-worth-26mm


Yes, but over $100M on that sort of bet is very different from $3M. $3M is often around 1% of the total raised by a billion-dollar company, and could be justified purely as a bet to sustain the ecosystem that company depends on.

Betting $100M+ needs real return. xD


No worries. NPM just raised another $8MM today, too.


The point was that you can't compare things that differ by an order of magnitude like that. $11M vs. Docker's raise is still in that category.


Docker Hub is basically a compile farm and software distribution mechanism. It's not that hard to see Docker moving towards providing continuous integration, containerized cluster hosting, app store like stuff, and so on. With expertise and funding, they could become a very interesting platform.


I read a few months ago about a startup I'd not even heard of that was valued in the billion-dollar range: Snapchat. I was like, they let you take a picture and upload it while chatting, and that's worth billions? So yeah, Docker should be worth some decent cash.


For me it seems to fall into the same sort of setup as Mongo or MySQL in the long term, and MySQL did alright ($1bn).


I fail to see how they are in any way comparable. MySQL and Mongo both have made a lot of money selling into the enterprise with the decades-old per-server licensing model. Now Mongo is starting to make inroads into the data warehouse space with their partnerships with Teradata, SAP etc. They aren't selling a PaaS or web application that will just be used by a development team here and there.

Docker seems to be much closer to Atlassian. Which could well be their biggest competitor one day.


It seems that Docker could follow a very similar path to MongoDB -- be the preeminent supporter of Docker and supplier of turnkey + validated solutions. Building an ecosystem of add-ons (closed or open source) around the core product puts them in an even more central role.


Though Docker's current valuation wasn't disclosed, it was rumored at $400m for the Series C round, and it has probably at least doubled since then, given the amount of capital raised now and the fact that they haven't spent all of the Series B. I seriously doubt that a $1bn exit would make Docker's investors happy.


MySQL was installed at virtually every hosting provider and was 1/4 of the stack for a massive number of developers. I'm not even sure what Docker does.


[deleted]


hehe i see what you did there


It seems to me that Docker is heading down the same path as MySQL, whether knowingly or not. Perhaps they are trying to get everyone on board, then sell to a BigCo a la Oracle, telling them "here's the whole Docker ecosystem, go ahead and extract money from it." Who is Docker's actual customer? The user or BigCo?


I thought MySQL AB was already extracting money, and lots of it, when it sold? Like $100 million in revenue?


Amazing news. Decided 6 months ago to go with Docker (both in dev and in production) for a financial application. It was a risk, and I got lots of resistance from the other devs and the security expert on the project.

Docker is simply amazing. I'm currently using Grunt to manage the images/containers, but Docker Compose will replace some of that in the future.

That being said, here's what you should expect:

1. It's not a VM. What works in DEV will not necessarily work in PRODUCTION. Be ready for some nasty debugging.

2. It's better to use the same OS (e.g. Centos) for both DEV and PRODUCTION.

3. Image sizes are not really small. Docker is consuming around 4GB of space for 6 containers.

4. Sometimes it breaks. I just need to destroy the whole thing and start clean. But that's just a Grunt ReRun, and it works nicely.

5. I can update the code base, data, the applications, and the whole OS structure with Grunt ReRun. It takes a couple of minutes. (A rough sketch of that destroy-and-rebuild flow is below.)
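To give a sense of what one of those Grunt tasks boils down to, here's a rough equivalent in plain docker commands. This is only a sketch; the image and container names are invented, not the actual project's:

  # destroy the running containers and rebuild from scratch
  docker rm -f app db                  # remove the existing containers
  docker build -t example/app .        # rebuild the app image from its Dockerfile
  docker run -d --name db postgres     # bring the database back up
  docker run -d --name app --link db:db -p 80:8000 example/app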


I have similar experiences to what you listed... and it's why I'm going to be moving away from docker in the mid-future. That 'nasty debugging' consumes a ton of time. I'm probably going to head back to .debs. I see a place for docker, but not on my production servers. It consumes gobs of disk space in practice and requires a lot of hand-holding; an extra layer of stuff to work around and to debug.

I was resistant to it at first as they'd just started using it in the company when I arrived, but the devs kept talking about how wow it was. Fast forward to today, and not a single dev has it installed on their machine. None of them are interested.


Interesting. What do they do now?


Generally what they were already doing, as only one or two actually installed docker in the first place. They use Gulp to manage their dev systems, but I'm not sure if they use any other tools.


Right, so it never really took. My experience is that it doesn't happen by itself, and needs focus from someone to make it work for you, much like you can't just install Jenkins and declare CI "done".


I'm really excited to see which business model will be chosen to get an ROI. Docker is very popular, but since most existing enterprise vendors adopt or integrate it, I'm not yet convinced that Docker Inc has some unique asset to make money from.


I went to a couple of events in Europe where Docker was a topic. The general feeling is that the solution looks great, but every single time a speaker/organizer asks for a show of hands from anyone using it in production, nobody raises one. I mean, it's definitely a technology with momentum, but I have yet to find someone who heavily uses it in production.


That's because it isn't ready for production yet, and certainly not at scale. You're in for a world of hurt if you try to deploy hundreds of containers at once from a central registry. And there's not yet a way to bind containers to physical interfaces that DHCP themselves at runtime.

Docker solves problems for developers, and soon it might solve problems in production. But for large deployments, that day is not today.


Well, we're currently running 80+ instances in my company, using Docker to deploy every app. I'm responsible for the push to this Docker-only approach, given that we had a bunch of legacy apps running in Chef-managed nodes (which I also implemented). So I don't think it's a developers-only tool; it's already pretty mature, and I've not yet needed to bind a container to some physical interface or whatever. The virtual interface has been great thus far.


We're running it in production as well, very heavily (100+ servers running many more containers). The parent is right. Docker is not really ready for production yet.

We've had to write (and continue to write) a lot of code to work around limitations and bugs. We've replaced the registry with custom code, written a lot of code to "manage" the Docker daemon, had to do some funny business to make logging work, are constantly integrating other components (such as weave) to work around Docker limitations, and my God the bugs...

I can deal with the limitations. It's the fundamentals that are the real problem (bugs, memory leaks, descriptor problems, aufs issues, the fucking registry). Docker, Inc. keep chasing new features over fixing the fundamentals. That's their choice, but it's unfortunate for us because, well, I have to go restart a few Docker daemons now because of that memory leak, yet again.

I expect these issues to be resolved eventually and there isn't a better alternative (the ones that exist have their own issues) so we take the pain and wait patiently for things to get better.


You're higher on the Docker product and team than I am, but with your discussion of new-and-shiny over working-and-bugfree you've pretty much nailed why I'm not happy with Docker and am hoping Rocket (or something out of left field) becomes a thing. While I'm kind of terrified in general about the current drive towards VC-backed companies providing core infrastructure, I think CoreOS's team cares much more about stability and correctness and I think I can at least come to grips with them.

The usability of Docker gets so much worse as soon as you get outside the happy examples, too. Like the one that most recently bit us: Docker is considered a "1.x product", yet I have to hardcode a registry URI into the Dockerfile instead of something sane like a host-level search path (like all package managers have done forever now), and the suggested option from a Docker developer (and I'm not slagging him, he tried to help and I really appreciate that, it's not his fault this is stupid) was to muck around with DNS to redirect a single hostname to different places within my environment.

Couple the constant friction of usage with the security questions throughout the system (single giant garbage executable that needs root for absolutely everything instead of principle of least privilege, and--oh!--looks like user namespaces slipped to 1.7, awesome!) and I'm not feeling confident in my use of Docker. I feel like they'd rather market than fix.

- I am reminded of this article, and the dismissive replies by the author (a Docker employee and committer) here on HN: https://blog.jessfraz.com/posts/docker-containers-on-the-des... https://news.ycombinator.com/item?id=9086751


Every couple months I clone Rocket, build it, play around with it for a few days, and end up back with Docker. We wrote Python scripts that export Docker containers to a format that is very similar to Rocket's spec so I'm very interested in using Rocket as a replacement for that part of our infrastructure.

That said, this is a hard problem and it is going to take some time for all the issues to shake themselves out. Rocket also has a long way to go before they are production ready. I'm hopeful they'll provide a nice alternative in the long run, but today their platform is just as sketchy.

When I say the bugs will be fixed, I mean they will be fixed, but not necessarily by Docker, Inc. The larger community will have its way as it always does.


You could be right, of course. I generally don't trust a meaningful community of contributors to form around a modern-era VC-backed company (which is to say, "there are no Red Hats here"), so I'm much less hopeful.


How large are your images? How long does it take you to fetch them from the registry?

Binding to a physical interface is required if any of your apps will share port numbers on a host. If you don't have network ACLs that control traffic between VLANs, and you have a discovery service to manage rendezvous, then maybe you can get away without it. But many of us cannot.


Our images range from 600MiB to 2GiB in general. When you say "share port numbers", what exactly are you referring to? App X listening on port M on eth0 and app Y listening on port M on eth1? We don't really have this use case, so for us it has been pretty simple.

I think your use case is a lot more complex than ours, or I'm misunderstanding some part. Are you using bare metal servers in your own data center? I'm trying to grasp what your architecture is like so I can see what parallels I can draw to ours.


You didn't mention how long deploys take. 2GiB x 80 servers = 160GiB; assuming a 1Gb/s interface on the repository, it'll take a minimum of 22 minutes for all the servers to get their new bits.


The nice part about Docker is the images are composed of layers, so you only have to re-download a layer when it changes. In practice, we pull all of our base layers when we bake our AMIs, so spinning up a new instance is pretty quick and each new deploy only has to update the top layer that contains our code.
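A rough sketch of that pattern (the image names and tags here are invented for illustration, not our actual setup):

  # fat, rarely-changing layers get pulled once when the AMI is baked
  docker pull ubuntu:14.04
  docker pull example/app-base:1.0      # hypothetical shared runtime image
  # at deploy time only the thin top layer with the new code actually transfers
  docker pull example/app:build-1234    # hypothetical per-deploy tag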

Honestly, the only issue when it comes to image size is the initial developer pull either when they onboard to a team or when they bork their boot2docker install (which, disturbingly, happens way too often).


Many applications are very large (consider a Rails app with all of its prerequisite Gems). Granted, this isn't Docker's problem, but it presents practical distribution bandwidth constraints that Docker doesn't currently solve well.


Not every server runs the same applications, we mostly do deploys on 5-6 servers at a time, it takes around 3 minutes to deploy our biggest apps, most of them deploy in under 30 seconds.


600 MiB? That's more than a complete OS install. Isn't an image supposed to be one application only? Do you store data in them, or why are they so big?


Not sure about sharing port numbers - you can do something like:

  docker run -d -p 80:8000 repo/container
Isn't this good enough? Can you give a more detailed example?


How does that help? All this does is establish a local PAT; you can still only bind one application to port 80 on the host. If you have more than one service that needs to listen on port 80 and be reached by clients, you're still out of luck.
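Concretely (the image names are made up):

  docker run -d -p 80:8000 example/app-a    # binds host port 80
  docker run -d -p 80:8000 example/app-b    # fails, host port 80 is already allocated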


This is a pretty easy solve...haproxy + consul + consul-template allows you to spin up a new container, register it with consul and have the consul-template watcher regenerate the haproxy config to route to the new port.

It's a little overkill if you're running it on a single machine, but in a real production environment, it's a pretty lightweight solution.
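Very roughly, the moving parts look something like this. This is a sketch, not our exact setup: service names, paths and ports are placeholders, and it assumes a local Consul agent plus an image that exposes port 8000:

  # start the container on a random host port and discover which one it got
  docker run -d -P --name web-1 example/web
  PORT=$(docker port web-1 8000 | cut -d: -f2)
  # register the instance with the local consul agent
  curl -s -X PUT -d "{\"Name\": \"web\", \"Port\": $PORT}" \
      http://localhost:8500/v1/agent/service/register
  # consul-template rewrites haproxy.cfg whenever the "web" service changes, then reloads haproxy
  consul-template -template "haproxy.ctmpl:/etc/haproxy/haproxy.cfg:service haproxy reload"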


Easy in theory, but in practice, you have to configure and monitor it all. Adding components and orchestration logic is not cost-free.


Well, it's not so hard if there are well-documented steps for it. It's really about getting Consul working right.


Did you have to change anything during the implementation? What about habits, compared to deploying on VMs/DSs? What's your experience with long-running deployments? Did you face any issues?


We had to change how we managed services, and I think it was for the better; we were too reliant on instances being available all the time, and although I always tried to think with an HA/redundancy mindset, most of our team failed to do that. We're now using Docker with CoreOS (and fleet), and it has not been without headaches (mostly from CoreOS, btrfs, and fleet scheduling going afoul), but we're much more resilient.

Compared to VMs, we adopted a lot more of a "this is disposable" mindset. We persist any data that must not be destroyed in a failure to an external volume and treat the instance and service as something unreliable; pretty much it's always "Game Day". Before deploying something, we've already tried to cover the basics of HA (what happens if this service suddenly stops? How does the rest of the system behave? How do we make it automatically restart or reprovision?).

As I said, most of our issues regarded CoreOS (especially running with btrfs; running out of space for metadata was a pain in the ass for months), but right now, after moving to ext4 + overlayfs, we've been running without issues for a combined 28,800 hours of instance time (roughly NO issues in half a month in an environment running 30+ deploys a day).

I've not claimed that we're perfect, very far from it, but Docker itself has not been a pain in production. Our private registry has been running on an Ubuntu-based instance for more than 6 months non-stop (and we push/pull a lot of data from it every day).


> Compared to VMs we adopted a lot more of a "this is disposable" mindset

There is nothing particularly magical about Docker that means you can't consider VMs equally as disposable. It's a mindset change about how you build your product, not something inherent to Docker. It's the original concept behind AWS's EC2 - treat your VMs as disposable, and store state elsewhere.


You are correct, of course, but in my experience with Docker so far it's much easier to get into this mindset the more ephemeral your "disposable unit" is. VMs tend to stick around longer than Docker containers, and occasionally you might find yourself saying "this is OK for now"; your barrier for that is higher when you have to deal with things that are more often in flux.


> And there's not yet a way to bind containers to physical interfaces that DHCP themselves at runtime

Sure there is. Systemd will actually do this for you.

Docker recently eschewed LXC, and I'm still not sure I understand exactly why, but even after the move to libcontainer, systemd is capable of handling this.


I thought we were talking about Docker, not systemd. systemd is pretty powerful, but you have to write the glue Docker is providing yourself.

If you mean a scenario where Docker launches systemd, we don't do that. In my view Docker is just a packaging system; each container's pid 1 is the app we're deploying.


> If you mean a scenario where Docker launches systemd, we don't do that. In my view Docker is just a packaging system; each container's pid 1 is the app we're deploying.

No, systemd is actually capable of running Docker containers directly (even without Docker).

I definitely wouldn't recommend running systemd inside a container unless you're using systemd to launch the container (instead of Docker), since Docker doesn't handle init systems very well, whereas systemd has this built in. You can also go with the one-app-process-per-container approach with systemd, which works too.


If you have any citations that describe how to make systemd run Docker containers without Docker, I'd like to see them.


Starting with systemd 219, you can download a Docker image from a registry with "machinectl pull-dkr", and run it with "machinectl start":

http://www.freedesktop.org/software/systemd/man/machinectl.h...
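An untested sketch from reading that man page (the image name is arbitrary, and whether a given Docker image actually boots under nspawn depends on what's inside it):

  machinectl pull-dkr library/redis   # fetch the image from the Docker index
  machinectl start redis              # run it as an nspawn machine
  machinectl list                     # confirm it's running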


There is a lot of community effort being put into Docker extensions to enable production use cases like the ones you just mentioned.


It is going down the same path that AWS did: people start with test and CI type work, then move towards production. Europe is behind Silicon Valley in terms of moving towards production.


I'm using it for a financial application in production.

Granted I can go without it, but currently it makes lots of sense to use it since I need the ability to nicely handle dependencies.

I'm using Grunt for handling the creation, destruction and running of the containers. Docker Compose didn't exist back then.


It's a startup from San Francisco. They don't need to make money!


  he told me that the company still hasn’t spent most of its Series B funding yet
They raised a $15M Series B that they haven't even burned through, and yet they've raised a $40M Series C and another $95M D round? That is crazy.


"Why didn't anyone tell me I could raise less?" - last SV episode could be relevant here.


Why is it crazy? If you expect to need a longer runway, it seems prudent to secure it before you are at the end of what you have.


Docker is either going to find a way to be a major player in the VM/container market, or it's going to crash and burn.

Raising money when the VCs are willing to throw it at you is a good strategy when you know you'll burn through a lot.


> Raising money when the VCs are willing to throw it at you is a good strategy when you know you'll burn through a lot.

Yes. Doesn't it also indicate that you are giving a lot of the company up at the current valuation? ie: you are betting that a (near-term) future valuation is lower, so you want to lock in the money now.


> ie: you are betting that a (near-term) future valuation is lower, so you want to lock in the money now.

Or, you know, you think the money now affects the long-term valuation such that, if the current owners are giving up x proportion of the company in the proposed round, with v0 as the anticipated future value of the company if the round doesn't happen and v1 as the anticipated future value if it does, the owners think that v0 < v1×(1-x).
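To put made-up numbers on it: if the owners give up x = 20% in the round, taking the money makes sense whenever v1 × (1 − 0.2) > v0, i.e. whenever the round is expected to lift the future value of the company by more than 25%.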


Well, you are hedging your bets. It is a lot like buying a house, imo.


Raise when you can, not when you have to.


OK. I'm calling market top. Love docker, but to date I've given them zero dollars and do not anticipate that changing. The Microsoft strategy looks cloudy at best.


For that to be more relevant, you might want to tell us what role you're speaking as, what you know about Docker's paid services and why you're uninterested, what other similar services you do pay for, etc.


My role is data scientist for a venture backed marketing startup, but I also wrote/designed all the tech used for ETL and reporting. I use docker in production to sandbox models that are called by the client via a web app. I admin a private registry that is used by other groups at the company.

All of this was free. Setting up a private registry takes about thirty minutes. Orchestration via ansible is also free.
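For reference, the whole registry setup is roughly something along these lines (the host path is just an example, using the stock registry image):

  # run the stock registry image, persisting layer data on the host
  docker run -d -p 5000:5000 \
      -v /srv/registry:/tmp/registry \
      --restart=always --name registry registry
  # then point pushes at it
  docker tag example/app localhost:5000/example/app
  docker push localhost:5000/example/app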

I'm "calling top" in the most cheeky way possible, so be need to get your back up. I love docker and use it heavily. It is a brilliant product, but the brilliance to me is also in the fact that you really don't need much else beyond the basics. And I realize that they are moving incredibly fast and there is potential to see something that will blow my mind and cause me to open my wallet.

If you're an enterprise user coming from Microsoft world, maybe paying for docker services makes sense. I prefer to just take an hour and do it myself for the cost of infrastructure.


As someone who self-hosts Dev, Git, etc. at a decent-sized company... I wouldn't pay for Docker either. I get what they are trying to do, but it honestly doesn't provide enough improvement over existing technology to be worth spending serious amounts of money on.


Since Docker is not ready to officially be used in production:

- Those of you using it in production, how are you handling security?

- How secure has Docker proven to be in general?


Containers per se do not reduce security. Most of the security issues are related to other changes; e.g. if you switch from VMs to Docker you are removing the VMs that may have been providing security, and that's not really a Docker issue per se. If you run stuff with increased privileges in containers (i.e. root, or extra capabilities) then you may decrease security too, but your applications should not be doing this.

You can argue that the docker daemon itself (which is a big monolithic process running as root) is a security issue (Red Hat certainly do). You can also argue that it makes security processes harder, as you don't have workflows in place to audit containers, make sure they are updated, etc., but that's a different issue.
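For instance, Docker already lets you tighten the defaults further at run time; a sketch, with a made-up image name:

  # drop all default capabilities, then add back only what the app actually needs
  docker run -d --cap-drop=ALL --cap-add=NET_BIND_SERVICE example/app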


Eeehhh, I'm going to go out on a limb and say that Docker containers, as of 1.6, do reduce security. Without user namespaces, container root is system root, and the number of images that run as container root is incredibly high (where they might otherwise be running as system non-root). 1.7 should change this, as we'll finally get user namespaces, though I don't know how many people will end up using them; let's hope they're on by default, I guess.


But if you run stuff as root you've already lost, container or not. Why are so many people running containers as root? Is it because they are running whole distros that expect that?

User namespaces are going to be really complicated to work with; I'm not really convinced they are the panacea people expect. Not running as root is much easier!
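And staying off root is usually a one-liner; a sketch with a made-up image name (images can also bake this in with a USER directive in their Dockerfile):

  # run the container's main process as an unprivileged uid instead of container root
  docker run -d -u 1000 example/app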


Sure. I'm just saying that running stuff as root is the de-facto standard in the container world; it isn't outside of the docker ecosystem. Take, for example, the docker registry image[0]. Runs as root. In what world, other than the docker world, would that be considered acceptable?

[0] https://github.com/docker/docker-registry/blob/0.9.1/Dockerf...


The container root certainly has UID 0, but it has reduced system capabilities. The irony of user namespaces is that they provide additional capabilities, but give you a fake UID-0 while "actually" running as a higher UID.

This distinction is important because it's why, in practice, Docker's current approach has proven to be more secure than user namespaces. User namespaces allow various operations that would be outright denied under a Docker container's very-real-but-very-limited root user.

However, yes, user namespaces are coming to Docker. It's a highly desired feature and eventually userns will mature.


I'd be interested to see how the current approach has been proven to be more secure than userns, though I can well believe it. I do agree that dropping capabilities is good; however, even with the reduced capabilities provided, the kernel is still a very large target, and so I would sleep a lot easier with userns active (in addition to capability (and maybe even seccomp ;)) filtering) on all my containers. It just seems like such a free win (and I appreciate how that must sound to the people who are implementing it).


By your logic that means all operating systems are insecure because users can run things as root. I've grown tired of hearing this argument against containers (not just Docker). It applies to security in general.


Related discussion from Daniel Walsh at Redhat on security in Docker: https://www.youtube.com/watch?v=zWGFqMuEHdw


> How secure has Docker proven to be in general?

Not very, compared to hardware-assisted virtualization like KVM. Unfortunately LXC containers are not designed with security in mind and, together with the rest of the kernel, provide a big attack surface.


Other than privilege escalation through the kernel, what attack surface do you see exposed?

Might just be limited to my use case for Docker, but so far it's security agnostic.


We are using docker in production. We have had many issues but none related to security.

It's an interesting phrase: "to officially be used in production". Who is the official who officially signs off on Docker being ready? ;-)


Well, Docker Inc called Docker production ready on the 1.0 release and took down all the "do not use in production" warnings.


What are your security concerns regarding Docker? Or what do you perceive to be common security flaws?

So far I've perceived Docker to be like virtualenv for python - useful but orthogonal to any security practices.


I don't think people will put private application Dockerfiles on the Hub, even with a paid plan.


Please explain to me the value of these articles. I understand that Hacker News is run by venture capitalists, but I still feel that we get minimal value from just ogling the absurd amounts of money that startups are getting.


Docker is a YC graduate so it makes sense for there to be extra interest here.



