In a previous post I touched on the various installation methods for SaltStack Config. One of those is the so-called standard install, which is also the recommended way of installing SaltStack Config on-prem. VMware describes the standard installation on their documentation site as well, but a few bits might be unclear, especially if you are new to the subject. In this post I will go through the standard install and explain the steps in a bit more detail. Throughout the process I will be using terms like salt, master, eAPI, Redis, etc. If these are unclear, please check out my previous post.
Operating Systems
There are a lot of operating systems that Salt can run on, even Windows, but please be aware that not all components of SaltStack Config can run on all of them. The recommended OS is RedHat or CentOS; these are the most broadly supported operating systems from Salt’s perspective. At the time of writing RedHat and CentOS 7.x are supported (but version 8 is around the corner).
SaltStack Config System Architecture
The VMware site has a lot of information on the SaltStack Config system architecture and scale limits. Keep in mind that these numbers are general guidance; there is no reliable way to say up front how many minions a SaltStack Config installation can support. A lot of factors play a role in this number, such as how much Pillar data you have or how many jobs you need to run per hour. In any case it is always a good idea to build in 10% of headroom and monitor your SaltStack Config installation.
Pre-installation
Before we dive into the installation itself there are some things to consider. For a complete list check out the VMware site. Most are self-explanatory, but I want to highlight some of them here.
- Licensing: You do not need a license during the installation, but there is only a grace period afterwards, so configure your license in time.
- Internet access: The standard install assumes internet access is available. If this is not possible think of alternatives like a proxy, access through allow lists or a local repository before going for the air-gapped install.
- Firewall: There are a few firewall ports that need to be opened up front (see the sketch after this list). Also think about the local firewall on the systems that will host the SaltStack Config components: either set it up correctly, or disable the local firewall if that fits your security policy.
- Python: It is good to understand that SaltStack Config comes with its own packaged Python. So there is no need to install your own.
- DNS/IP: Before we can start we need DNS records (A and PTR) for the VMs that will host SaltStack Config. If you plan to use a load balancer for the RaaS component later, you might want to set up the load balancer DNS CNAME record already and point it to the first RaaS node. When you then use this CNAME during the installation, the correct name is already added to the Salt config files, so you don’t need to update the config once you add a second RaaS and a load balancer.
- Dependencies: Before installation a few dependencies are required. If you include these dependencies in a VM template you can re-use that template later when you need more instances of, let’s say, a Salt Master. So, in your RedHat/CentOS system make sure the following packages are also installed:
- OpenSSL – sudo yum install openssl
- Extra Packages for Enterprise Linux (EPEL) – sudo yum install epel-release
- Python cryptography – sudo yum install python36-cryptography
- Python OpenSSL library – sudo yum install python36-pyOpenSSL
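To save some typing when building the template, the firewall ports and the four dependencies above can be handled in one go. A minimal sketch, assuming firewalld is in use and remembering that EPEL has to be in place before the Python packages it provides:
sudo firewall-cmd --permanent --add-port=4505-4506/tcp   # Salt Minions reach the Salt Master on TCP 4505/4506
sudo firewall-cmd --reload
sudo yum install -y openssl epel-release                 # EPEL first...
sudo yum install -y python36-cryptography python36-pyOpenSSL   # ...then the Python libraries it provides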
Installing SaltStack Config
Now that we have covered the prerequisites it is time to install SaltStack Config. What I like about the SaltStack Config installation is that it uses Salt itself to install the components on all VMs. How’s that for eating your own dog food? As we go through the installation steps we will see how Salt does this.
Since we are doing a standard install we need four VMs with RedHat/CentOS plus the dependencies described before. Once the four VMs are available and the firewall has been opened we can start the actual installation.
Salt Master
On the VM that will be the Salt Master we install the Salt Master and Salt Minion services.
First we need to add the Salt Project repository and its key, and clear the package manager cache.
sudo yum install https://repo.saltstack.com/py3/redhat/salt-py3-repo-latest.el7.noarch.rpm
sudo yum clean expire-cache
Now we can install the Salt Master and Minion.
sudo yum install salt-master
sudo yum install salt-minion
After this is done we need to create a master configuration file so the minion component knows where its master is. The way configuration files work with Salt is that you can have multiple *.conf files with various configuration lines in the *.d directory; when the master or minion service starts, all *.conf files are read and the configuration is applied. In this case we create a master.conf file in the /etc/salt/minion.d directory and point the minion to itself, because this VM is also the master for the environment. Note that we pipe through sudo tee here: a plain sudo echo with a > redirect would fail for a non-root user, because the redirect is handled by your own shell, not by sudo.
sudo echo "master: localhost" > /etc/salt/minion.d/master.conf
Now we can enable and start our Salt Master and Salt Minion.
sudo systemctl enable salt-master
sudo systemctl start salt-master
sudo systemctl enable salt-minion
sudo systemctl start salt-minion
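As an optional sanity check (my own habit, not part of the official steps), verify that both services came up before moving on:
sudo systemctl status salt-master salt-minion --no-pager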
More Salt Minions
After we have installed the Salt Master (which, indeed, also runs a Salt Minion) we move our attention to the other three VMs: the RaaS, PostgreSQL and Redis nodes. Since we used a template to deploy these VMs, all prerequisites should already be installed.
We can safely install the Salt Minion on these VMs. On each VM run the following commands.
sudo yum install https://repo.saltstack.com/py3/redhat/salt-py3-repo-latest.el7.noarch.rpm
sudo yum clean expire-cache
sudo yum install salt-minion
Like before, we need to let the freshly installed Salt Minions know where the Salt Master is located. To do this we create a master.conf in the /etc/salt/minion.d directory of each VM and point it to the previously deployed Salt Master. After that we enable and start the Salt Minions.
sudo echo "master: <your_master_ip>" > /etc/salt/minion.d/master.conf
sudo systemctl enable salt-minion
sudo systemctl start salt-minion
Accept the keys
One of the nice things about Salt is that security is pretty good: there is no way to have an endpoint do something without mutual trust first. Salt implements this with public/private key pairs. Each Salt Minion that reports to a Salt Master shares its public key with that master, and if an admin decides the endpoint should be managed from that master, the key is accepted. By now our four Salt Minions have been knocking on the master’s door and it is time to accept their keys.
Go to the Salt Master and run the following commands:
salt-key -A
If everything went as planned you should see all four nodes listed when you run salt-key -L on the master node.
[root@salt-01a ~]# salt-key -L
Accepted Keys:
postgr-01a.corp.local
raas-01a.corp.local
redis-01a.corp.local
salt-01a.corp.local
Denied Keys:
Unaccepted Keys:
Rejected Keys:
If not, go back to each node and check that the minion service is running and healthy, and make sure all required firewall ports are open: the Salt Master should be reachable by the Salt Minions on TCP 4505/4506.
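A few quick checks for a minion that does not show up (my own go-to commands, not from the VMware docs):
sudo systemctl status salt-minion --no-pager      # on the minion: is the service running?
sudo journalctl -u salt-minion --no-pager -n 20   # on the minion: recent log lines
sudo ss -tlnp | grep -E '4505|4506'               # on the master: are the ports listening?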
Also note that the minion IDs are the same as the FQDN of the VM. This is the default behaviour of the Salt Minion. In most cases this is fine, but if you want to change a minion ID you can edit the /etc/salt/minion_id file and restart the minion service. The minion then presents itself under the new ID, so on the master you need to remove the old key with salt-key -d <old-id> and accept the new one with salt-key -a <new-id>.
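Put together, a rename looks something like this sketch, using the Redis node as an example (the new ID redis-01a is just an illustration, pick your own):
sudo systemctl stop salt-minion                 # on the minion
echo "redis-01a" | sudo tee /etc/salt/minion_id
sudo systemctl start salt-minion
sudo salt-key -d redis-01a.corp.local           # on the master: drop the old key
sudo salt-key -a redis-01a                      # on the master: accept the new one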
Download installation files
After we have the prerequisites ready we need to download the installation files. But didn’t we just install Salt already? Why do we need more installation files? The Salt Master and Salt Minion installations are basically the Salt Open part of the complete environment. Now we need the SaltStack Config part, which consists of the RaaS, PostgreSQL and Redis components. VMware packaged this into a single tar.gz file, and that is what we use to complete the SaltStack Config standard install.
Log in to VMware Customer Connect and find the VMware vRealize Automation SaltStack Config Automated Installer. It should be located under All Products > vRealize Automation. Make sure you pick the tar.gz file and download it. After the file is downloaded, copy it to the Salt Master server; I usually use SCP for this. Then verify with sha256sum that the SHA-256 hash matches the hash on the VMware site.
If the file is uploaded and the sha256 hash matches, you can extract the files.
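For example, a sketch of the verify-and-extract step; the file name below is a placeholder, as the exact name depends on the version you downloaded:
sha256sum vRA_SaltStack_Config-8.x.tar.gz   # compare against the hash on Customer Connect
tar -xzvf vRA_SaltStack_Config-8.x.tar.gz   # extracts into the sse-installer directory
cd sse-installer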
Importing the asc keys
One step that is easy to take for granted is importing the *.asc key files. This step needs to be done on all nodes that are part of the installation (Salt Master, RaaS, PostgreSQL, Redis). The asc keys make sure that the local packaging system trusts the RPM packages we are about to install on these systems.
Use FTP or rsync to copy the just-extracted files in the sse-installer/keys directory to all nodes. After that run the following command on all nodes (make sure you are in the correct directory).
sudo rpmkeys --import keys/*.asc
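Since this has to happen on three more nodes, a small loop saves some round trips. A sketch, assuming you run it from the sse-installer directory and have SSH access as root to each node (host names match the lab in this post):
for node in postgr-01a raas-01a redis-01a; do
  rsync -av keys/ root@${node}.corp.local:~/keys/
  ssh root@${node}.corp.local "rpmkeys --import keys/*.asc"
done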
Setting up the install files
There are a few files in the downloaded tar.gz that we need to copy to a specific location and edit according to our environment. The information we need to provide consists of DNS names and Minion IDs. Important in this step: DNS names are NOT case sensitive, but Minion IDs ARE. So when editing the files, be conscious of whether you are adding a DNS/FQDN or a Minion ID; different parameters require different inputs. As said before, the default for a Minion ID is the FQDN. So, if you didn’t change the default, just use the FQDN exactly as it is and you should be fine.
Assuming we are still working from the Salt Master, first we copy the required files to the correct location, so navigate to the sse-installer directory and run the following commands.
sudo mkdir /srv/salt
sudo cp -r salt/sse /srv/salt/
sudo mkdir /srv/pillar
sudo cp -r pillar/sse /srv/pillar/
sudo cp -r pillar/top.sls /srv/pillar/
sudo cp -r salt/top.sls /srv/salt/
The directories /srv/salt and /srv/pillar are actually special: they are the default file_roots and pillar_roots for the Salt base environment.
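For reference, this is what those defaults look like in the master configuration; there is nothing to change here, it just explains why we copy the files to these locations:
file_roots:
  base:
    - /srv/salt
pillar_roots:
  base:
    - /srv/pillar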
First we edit the /srv/pillar/top.sls file to include our minion IDs for the Salt Master, RaaS, PostgreSQL and Redis. Example:
{# Pillar Top File #}
{# Define SSE Servers #}
{% load_yaml as sse_servers %}
  - postgr-01a.corp.local
  - raas-01a.corp.local
  - redis-01a.corp.local
  - salt-01a.corp.local
{% endload %}
base:
  {# Assign Pillar Data to SSE Servers #}
  {% for server in sse_servers %}
  '{{ server }}':
    - sse
  {% endfor %}
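After Jinja rendering, this top file resolves to plain YAML that assigns the sse pillar to each of the four nodes, along these lines:
base:
  'postgr-01a.corp.local':
    - sse
  'raas-01a.corp.local':
    - sse
  'redis-01a.corp.local':
    - sse
  'salt-01a.corp.local':
    - sse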
The next file is /srv/pillar/sse/sse_settings.yaml. This is a more elaborate file with various sections we need to update. Let’s go through it section by section.
Section 1
We start with 4 variables to update (Salt Master, RaaS, PostgreSQL and Redis), and this file again expects Minion IDs.
# Section 1: Define servers in the SSE deployment by minion id
servers:
  # PostgreSQL Server (Single value)
  pg_server: postgr-01a.corp.local
  # Redis Server (Single value)
  redis_server: redis-01a.corp.local
  # SaltStack Enterprise Servers (List one or more)
  eapi_servers:
    - raas-01a.corp.local
  # Salt Masters (List one or more)
  salt_masters:
    - salt-01a.corp.local
Note that the bottom two entries state ‘list one or more’; that is because this installer can also install additional RaaS or Master servers if you are doing an HA install. HA is not in scope for this walkthrough.
Section 2
This section is used to configure the PostgreSQL part; these are mainly the connection details. In this section we enter the FQDN, not the minion ID.
# Section 2: Define PostgreSQL settings
pg:
  # Set the PostgreSQL endpoint and port
  # (defines how SaltStack Enterprise services will connect to PostgreSQL)
  pg_endpoint: postgr-01a.corp.local
  pg_port: 5432
  # Set the PostgreSQL Username and Password for SSE
  pg_username: salteapi
  pg_password: <create your own password here>
  # Specify if PostgreSQL Host Based Authentication by IP and/or FQDN
  # (allows SaltStack Enterprise services to connect to PostgreSQL)
  pg_hba_by_ip: True
  pg_hba_by_fqdn: False
  pg_cert_cn: localhost
  pg_cert_name: localhost
Section 3
This section is basically the same as section 2, but for Redis.
# Section 3: Define Redis settings
redis:
  # Set the Redis endpoint and port
  # (defines how SaltStack Enterprise services will connect to Redis)
  redis_endpoint: redis-01a.corp.local
  redis_port: 6379
  # Set the Redis Username and Password for SSE
  redis_username: saltredis
  redis_password: <create your own password here>
Section 4
This is the RaaS (or eAPI) section. Normally we only update the eapi_endpoint parameter with the FQDN of the RaaS server and create a new eAPI key with openssl rand -hex 32, unless you are doing a more advanced installation, which we are not doing at this time.
Note that the username and password should be left at their defaults at this time; if changed, the installation will not work. Change them AFTER deployment through the RaaS GUI.
# Section 4: eAPI Server settings
eapi:
  # Set the credentials for the SaltStack Enterprise service
  # - The default for the username is "root"
  #   and the default for the password is "salt"
  # - You will want to change this after a successful deployment
  eapi_username: root
  eapi_password: salt
  # Set the endpoint for the SaltStack Enterprise service
  eapi_endpoint: raas-01a.corp.local
  # Set if SaltStack Enterprise will use SSL encrypted communication (HTTPS)
  eapi_ssl_enabled: True
  # Set if SaltStack Enterprise will use SSL validation (verified certificate)
  eapi_ssl_validation: False
  # Set if SaltStack Enterprise (PostgreSQL, eAPI Servers, and Salt Masters)
  # will all be deployed on a single "standalone" host
  eapi_standalone: False
  # Set if SaltStack Enterprise will regard multiple masters as "active" or "failover"
  # - No impact to a single master configuration
  # - "active" (set below as False) means that all minions connect to each master (recommended)
  # - "failover" (set below as True) means that each minion connects to one master at a time
  eapi_failover_master: False
  # Set the encryption key for SaltStack Enterprise
  # (this should be a unique value for each installation)
  # To generate one, run: "openssl rand -hex 32"
  #
  # Note: Specify "auto" to have the installer generate a random key at installation time
  # ("auto" is only suitable for installations with a single SaltStack Enterprise server)
  eapi_key: 1226cf48e438c8a2b300d1a52c6ffee9462a3ea993208a14902cb057f9dd3a7a
  eapi_server_cert_cn: localhost
  eapi_server_cert_name: localhost
Section 5
This is the last bit we need to change. Here it is important to randomise the customer ID with cat /proc/sys/kernel/random/uuid. Changing the cluster ID is optional but recommended, especially if you expect more Salt Masters managing minions in different parts of your environment.
# Section 5: Identifiers
ids:
  # Appends a customer-specific UUID to the namespace of the raas database
  # (this should be a unique value for each installation)
  # To generate one, run: "cat /proc/sys/kernel/random/uuid"
  customer_id: 696766b6-b951-4855-8283-d4fc37889d4d
  # Set the Cluster ID for the master (or set of masters) that will manage
  # the SaltStack Enterprise infrastructure
  # (additional sets of masters may be easily managed with a separate installer)
  cluster_id: saltmaster_cluster_1
Now that we have updated the files we are ready to complete the SaltStack Config Standard Install.
Running the Highstate
After setting up the top.sls and sse_settings.yaml files with the correct information we need to run a highstate against all minions (again: our Salt Master, RaaS, PostgreSQL and Redis nodes). A highstate simply means running a series of state files against a specific target; when you run a highstate, multiple init.sls files get applied to the targeted minion. This is how SaltStack uses its own engine to deploy and configure its own components.
Before we run the highstate we need to make sure all Salt Minions have the correct data (pillar data from the sse_settings.yaml) and that the Salt Master has up-to-date information about the Minions (the so-called grains).
Run the following command to get all grain data from the Minions. This command assumes the only minions we have are the four nodes we defined as our installation base.
sudo salt \* saltutil.refresh_grains
Note that using \* means all minions in this case. You could also use ‘*’ to achieve the same. It is also possible to target one or more minions directly, as the sketch below shows.
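For example, test.ping is a handy no-op to verify your targeting before running anything real:
sudo salt '*' test.ping                                            # every accepted minion
sudo salt -L 'redis-01a.corp.local,raas-01a.corp.local' test.ping  # an explicit list
sudo salt 'raas-01a.corp.local' test.ping                          # a single minion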
Next we run the command that makes each minion refresh its pillar data.
sudo salt \* saltutil.refresh_pillar
If everything went ok, we should be able to read the pillar data from all our minions.
sudo salt \* pillar.items
This is an example of the output for the Redis node; you would see similar output for each node:
redis-01a.corp.local:
    ----------
    sse_cluster_id:
        saltmaster_cluster_1
    sse_customer_id:
        696766b6-b951-4855-8283-d4fc37889d4d
    sse_eapi_endpoint:
        raas-01a.corp.local
    sse_eapi_failover_master:
        False
    sse_eapi_key:
        1226cf48e438c8a2b300d1a52c6ffee9462a3ea993208a14902cb057f9dd3a7a
    sse_eapi_num_processes:
        6
    sse_eapi_password:
        salt
    sse_eapi_server_cert_cn:
        localhost
    sse_eapi_server_cert_name:
        localhost
    sse_eapi_server_fqdn_list:
        - raas-01a.corp.local
    sse_eapi_server_ipv4_list:
        - 192.168.71.101
    sse_eapi_servers:
        - raas-01a.corp.local
    sse_eapi_ssl_enabled:
        True
    sse_eapi_ssl_validation:
        False
    sse_eapi_standalone:
        False
    sse_eapi_username:
        root
    sse_pg_cert_cn:
        localhost
    sse_pg_cert_name:
        localhost
    sse_pg_endpoint:
        postgr-01a.corp.local
    sse_pg_fqdn:
        postgr-01a.corp.local
    sse_pg_hba_by_fqdn:
        False
    sse_pg_hba_by_ip:
        True
    sse_pg_ip:
        192.168.109.2
    sse_pg_password:
        <your pg password>
    sse_pg_port:
        5432
    sse_pg_server:
        postgr-01a.corp.local
    sse_pg_username:
        salteapi
    sse_redis_endpoint:
        redis-01a.corp.local
    sse_redis_password:
        <your redis password>
    sse_redis_port:
        6379
    sse_redis_server:
        redis-01a.corp.local
    sse_redis_username:
        saltredis
    sse_salt_master_fqdn_list:
        - salt-01a.corp.local
    sse_salt_master_ipv4_list:
        - 192.168.71.103
    sse_salt_masters:
        - salt-01a.corp.local
Once this is completed and you have checked that all pillar data is in order, it is time to run the highstates.
sudo salt postgr-01a.corp.local state.highstate
sudo salt redis-01a.corp.local state.highstate
sudo salt raas-01a.corp.local state.highstate
sudo salt salt-01a.corp.local state.highstate
If these highstates run without errors, the initial SaltStack Config installation is done. If you do see errors, there is a troubleshooting page at VMware that covers the most common ones.
Next Steps
There are a few things that you still need to do, some of which are optional. I’ll list them here but won’t go into detail, as it has been a very long read already. There is a Post Install section in the VMware docs that is pretty good for getting these steps done.
Must-dos
- Install license key
- On the RaaS node, create a license file /etc/raas/ssc_license and enter the license key in this file. Just make sure the file name ends with _license, and restart the raas service (see the sketch after this list).
- Log in to the GUI and change the root password from the default ‘salt’ to your own.
- This is done under Administration->Local Users->Select Root->Enter new password and save.
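As a sketch of the license step mentioned above (the key itself is a placeholder):
echo "<your_license_key>" | sudo tee /etc/raas/ssc_license
sudo systemctl restart raas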
Optional
- Set up signed certificates
- Not 100% necessary, but for any production-grade environment I would say it is a must.
- Set up AD/LDAP integration and RBAC
- While you could work with local users, I recommend integrating with AD or something similar, so you can assign specific roles to groups defined by the governance team.
Conclusion
It has been one heck of a ride, but this concludes the SaltStack Config standard install. I hope this has been informative and that the additional information on how to install this product is helpful. There is a lot more ground to cover when it comes to SaltStack Config. Think about installing it with high availability or disaster recovery in mind, or adding load balancers into the mix. And how do you connect the just-installed environment with your vRealize Automation environment? It doesn’t happen automagically unless you use the vRealize Suite Lifecycle Manager installation. So, there is plenty of material for upcoming blog posts.