[ale] FW: SUSE Linux Days 2013 kicks off this May!
Chris Ricker
chris.ricker at gmail.com
Fri May 17 08:55:16 EDT 2013
On 5/14/13 9:22 PM, Scott Plante wrote:
> A lot was made about how hard OpenStack is to do on your own--has
> anyone on the list done raw OpenStack? What was your experience like?
> We tend toward making the free, open source versions of things work
> for us. Then again, maybe we'd make more money as a company if I
> didn't spend so much time playing admin and concentrated on my paid
> programming work ;-).
I've deployed internal OpenStack clouds at my last job (deployed via
packages and our in-house puppetry) and these days I work for a vendor
doing OpenStack. So, some experience with it from both sides of the
checkbook as it were...
When the vendors tell you OpenStack is "hard", what they're referring to
is a combination of several things:
- OpenStack is really a framework more than a single product. It's a
family of several loosely coupled products that can be set up in a
myriad of different combinations -- and every user's install is going to
be unique to some extent. Some users will need object storage (S3 in
Amazon parlance) while others have no use for it, for example; pretty
much every even quasi-relevant hypervisor is supported; the number
of possible networking configurations is enormous; etc
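For orientation, here's my own rough map of the Grizzly-era pieces (a summary for this discussion, not an official or exhaustive list):

```shell
# Rough map of the loosely coupled OpenStack services circa Grizzly.
# Written to a scratch file and printed; this is a summary, not an
# official list.
cat > /tmp/openstack_components.txt <<'EOF'
keystone  identity / authentication
nova      compute (VM scheduling and lifecycle)
glance    image registry
swift     object storage (the S3 analogue mentioned above)
cinder    block storage
quantum   networking
horizon   web dashboard
EOF
cat /tmp/openstack_components.txt
```

Any given deployment picks some subset of these -- which is a big part of why every install ends up unique.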
- OpenStack is under very rapid development. The community and vendors
try to keep up, but documentation inevitably trails, and on the vendor
side things like training programs are only just now starting to appear
- The problem space OpenStack tries to handle is large. The dream is
that it's orchestration / abstraction for the entire data center. So
you're dealing with storage, servers, switches, routers, firewalls, load
balancers, etc -- and just about every possible vendor of each type of
component
- Production deployments typically span dozens if not hundreds of nodes,
so the complexities of managing at scale also factor in
- Because of the above items, OpenStack is severely over-configurable.
There are literally hundreds of configuration parameters scattered
across a couple dozen configuration files. Although you won't have to
set all of them for any one deployment, even a relatively vanilla
deployment touches many parameters in lots of places, which means that
if you're doing this by hand you will likely make mistakes and may get
to spend some quality time in Python figuring out where you went wrong.
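To give a flavor of what that looks like, here's a sketch of one section of one of those files; the option names are Grizzly-era nova.conf ones, and the host names and values are placeholders, not a recommendation:

```ini
[DEFAULT]
# message queue and database wiring -- "controller" and the
# credentials here are placeholders
rabbit_host = controller
sql_connection = mysql://nova:NOVA_DBPASS@controller/nova
my_ip = 10.0.0.11
# ...and this is one section of one of a couple dozen such files
```

Multiply that across nova, glance, keystone, cinder, and the rest, and it's easy to see where the hand-editing mistakes come from.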
It's not really THAT scary to get going, though. In fact, some of the
large deployments of OpenStack track trunk continually, so it's
certainly possible to deploy in production straight from source.
If you want to see a bit more, there are a couple of relatively easy
ways to try it:
- devstack is a shell script that launches a complete OpenStack
environment. It's primarily meant to give developers a quick way of
deploying and testing, but it's also fine for getting your feet wet and
learning the different components of OpenStack and how they interact.
http://devstack.org/ is the home page and has some examples to get you
going.
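As a sketch of how little it takes to kick off a devstack run (file name and variable names follow the 2013-era devstack conventions; the passwords are obviously placeholders):

```shell
# Minimal localrc for a single-node devstack run (2013-era layout).
# Drop this in the devstack checkout; all values are placeholders.
cat > localrc <<'EOF'
ADMIN_PASSWORD=devpassword
MYSQL_PASSWORD=devpassword
RABBIT_PASSWORD=devpassword
SERVICE_PASSWORD=devpassword
SERVICE_TOKEN=devtoken
EOF
# Then ./stack.sh brings everything up, and ./unstack.sh tears it
# back down when you're done experimenting.
```

Expect it to download and install a lot, and run it on a throwaway machine or VM -- it reconfigures the host it runs on.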
- if you have some spare servers there are lots of guides to deploying
OpenStack in a relatively manual way over 2-4 nodes for a more realistic
test. Here's a reasonably good one from the community:
https://github.com/mseknibilel/OpenStack-Grizzly-Install-Guide/blob/OVS_MultiNode/OpenStack_Grizzly_Install_Guide.rst
In addition, most of the Linux distro vendors that are doing OpenStack
have guides for their own distro, like the Fedora one here:
http://fedoraproject.org/wiki/Getting_started_with_OpenStack_on_Fedora_17
- there are lots of vendors putting out "distros" of OpenStack which
simplify install. Many are open source, or have try-before-buy versions
available