If you are evaluating what it's like to upgrade a DC/OS cluster or automate cluster installation using configuration management, then you should use the advanced installation method, which exposes all the configuration options available in DC/OS. This quick start guide walks you through one variation of advanced installation that results in a cluster big enough to test all of Mesosphere's certified services. For brevity, full explanations and some details have been excluded, so if you're setting up a production cluster please refer to the documentation.
Why use Advanced Installation?
There are many ways to install DC/OS, all of which are codified variations of advanced installation. These methods include the GUI and CLI installers, and the cloud templates for AWS and Azure. They all have many of DC/OS' configuration options already decided. However, some functionality (upgrading the DC/OS version, for example) is unavailable using these methods, and configuration management software like Puppet, Chef, Ansible, and Salt can't easily automate them. If you are only interested in what it's like to run a workload on DC/OS, one of the codified methods might be right for you; but if you're evaluating what it's like to operate DC/OS itself in production, you'll want to start with an advanced installation.
The cluster you'll get by following these instructions will have the following nodes:
- One bootstrap node
- One master node
- One public agent node
- Three private agent nodes
Installing DC/OS can be separated into three phases: server configuration, install script configuration, and finally DC/OS installation on the cluster nodes.
Server Configuration
Before installing DC/OS, your servers must be properly configured. You can install DC/OS on almost any hardware, in the cloud or on premises, but you might want to check the documented hardware requirements just in case. One thing to keep in mind is that you'll need to run DC/OS masters and agents on CentOS/RHEL 7.2 or 7.4, or CoreOS 1235.9.0 or higher.
Below we describe how to install package dependencies, configure groups and users, and do some additional system administration. For all cluster nodes (for the cluster described above, that would be six servers), complete the following:
- Enable the overlay kernel module for Docker in /etc/modules-load.d/overlay.conf:
    sudo tee /etc/modules-load.d/overlay.conf <<-'EOF'
    overlay
    EOF
- Ensure the time and date are synced across all cluster nodes. If NTP is not running, we recommend configuring it.
- To set up the following requirements and optional configuration (CentOS only), you can run this script on all your nodes, including the bootstrap node (to set up Docker). The script is handy, but it is not an official, Mesosphere-supported part of the install process. After running the script, restart all nodes. The script installs and configures:
- Required Packages: tar, xz, unzip, curl, ipset
- Required Software: Docker 1.11 or higher
- Required Configuration: SELinux disabled, and the nogroup and docker groups created
- Finally, check that the dependencies included in the script were installed correctly; a quick verification sketch follows this list.
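The official docs don't prescribe exact verification commands, so the following is a minimal hand-rolled sketch for CentOS/RHEL 7 that spot-checks the items above: the required packages, the Docker version, the SELinux mode, the groups, the overlay module, and time synchronization. Command availability varies by distribution, so treat it as a starting point rather than an official preflight check.

    # Run on each node to spot-check the prerequisites described above.
    for pkg in tar xz unzip curl ipset; do
      rpm -q "$pkg" >/dev/null || echo "missing package: $pkg"
    done

    docker version --format '{{.Server.Version}}'  # expect 1.11 or higher
    getenforce                                     # expect Permissive or Disabled
    getent group nogroup docker                    # both groups should exist
    lsmod | grep overlay                           # the overlay module should be loaded
    timedatectl status | grep -i 'synchronized'    # the clock should report synchronized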
Create the DC/OS Install Script
- Use cURL to download the DC/OS package onto the bootstrap node: curl -O https://downloads.dcos.io/dcos/stable/dcos_generate_config.sh
- Make the genconf directory: mkdir genconf
- Create an ip-detect script in the genconf directory on the bootstrap node.
- vim genconf/ip-detect
    #!/bin/sh
    # Example ip-detect script using an external authority
    # Uses the AWS Metadata Service to get the node's internal
    # ipv4 address
    curl -fsSL http://169.254.169.254/latest/meta-data/local-ipv4
- The ip-detect script reports each node's IP address, which enables the nodes to communicate across the cluster. The example above assumes your nodes are on AWS; a non-AWS variant is sketched after this list.
- Create config.yaml in the `genconf` directory on the bootstrap node. This file configures your DC/OS cluster and determines the cluster's name, list of master nodes, security level, and other settings, many of which cannot be changed after installation. Review our documentation for a full list.
    bootstrap_url: http://<bootstrap-ip>:80
    cluster_name: 'MyCluster'
    exhibitor_storage_backend: static
    master_discovery: static
    master_list:
    - <master-private-ip>
    resolvers:
    - 8.8.4.4
    - 8.8.8.8
    security: disabled
- The dcos_generate_config.sh package and the genconf directory should be in the same directory. Generate the install files by running sudo bash dcos_generate_config.sh on the bootstrap node. After the command completes, you should have a .tar file similar to dcos-genconf.e38ab2aa282077c8eb-4d92536e7381176206.tar (the two hashes are the commit ID and bootstrap ID for your version of DC/OS), and a new genconf/serve directory with your installation files, including the installer script, which is called dcos_install.sh.
- Pull and run an nginx Docker container to serve the installer script to the master and agent nodes:

    sudo docker pull nginx
    sudo docker run -d -p 80:80 -v ${PWD}/genconf/serve:/usr/share/nginx/html:ro nginx
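A note on the ip-detect step above: the example script only works on AWS, because it relies on the EC2 metadata service. If your nodes are elsewhere, a minimal interface-based variant like the sketch below can work instead. It assumes the cluster-facing network interface is eth0, which you'll need to adjust for your environment; it is an illustration, not part of the official quick start.

    #!/usr/bin/env bash
    # Example ip-detect script that prints the first IPv4 address
    # assigned to eth0 (adjust the interface name for your servers).
    set -o errexit -o nounset -o pipefail
    ip -4 addr show dev eth0 | awk '/inet /{split($2, a, "/"); print a[1]; exit}'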
Install DC/OS on Cluster Nodes
You can install all nodes simultaneously; however, the agent nodes might report some failures while waiting for the master.
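Before running the installs, it can be worth confirming that each node can reach the bootstrap node's HTTP server. This is an optional sanity check rather than an official step; <bootstrap-ip> and <your_port> are the same placeholders used below (the port is 80 if you used the nginx command above).

    # From any master or agent node: the first response line should report a 200.
    curl -sI http://<bootstrap-ip>:<your_port>/dcos_install.sh | head -n 1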
- On each node (while SSH'd into the node itself), download the DC/OS installer script: curl -O http://<bootstrap-ip>:<your_port>/dcos_install.sh
- Still on the node, install DC/OS with sudo bash dcos_install.sh master, sudo bash dcos_install.sh slave, or sudo bash dcos_install.sh slave_public, depending on the node type. To get the cluster described above, install one master, one public agent, and three private agents.
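If you'd rather drive the installs from the bootstrap node than log in to each server, a short loop over SSH does the job. The sketch below is illustrative only: the IP addresses and the centos SSH user are hypothetical placeholders for your own inventory and login, and it assumes key-based SSH and passwordless sudo on every node.

    # Hypothetical inventory; replace with your own node IPs and SSH user.
    BOOTSTRAP="10.0.0.5"
    MASTER="10.0.0.10"
    PUBLIC_AGENT="10.0.0.20"
    PRIVATE_AGENTS="10.0.0.30 10.0.0.31 10.0.0.32"

    install_node() {  # $1 = node IP, $2 = role (master, slave, or slave_public)
      ssh centos@"$1" "curl -O http://${BOOTSTRAP}:80/dcos_install.sh && sudo bash dcos_install.sh $2"
    }

    install_node "$MASTER" master
    install_node "$PUBLIC_AGENT" slave_public
    for ip in $PRIVATE_AGENTS; do install_node "$ip" slave; done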
Now that you have a DC/OS cluster, it is time to explore the DC/OS Dashboard. Simply open the master node's public IP address in a web browser and log in with the default username and password: the username is bootstrapuser and the password is deleteme. In the DC/OS Dashboard you can monitor your nodes, deploy applications, and check system health.
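If the dashboard doesn't load right away, the master may still be starting its services. A quick, optional check from your workstation (where <master-public-ip> is that same public master address, assumed reachable on port 80):

    # A 200 (or a redirect status code) means the web UI is responding.
    curl -s -o /dev/null -w '%{http_code}\n' http://<master-public-ip>/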
Once you have set up a test cluster, you can use the same installation method, with a bit of extra configuration, to install DC/OS on a production cluster. Pay special attention to the hardware requirements for a production cluster.
Did you run into trouble following these instructions? Let us know and get help by joining the DC/OS community Slack workspace and sending a message in the #install channel.