The Core Distinction
Salt has two mechanisms for data: grains and pillars. They're often mentioned together and they serve complementary purposes, but they're fundamentally different in where data lives, who controls it, and what it's for. Mixing them up costs you in debugging time and occasionally in security.
Grains are facts about the minion. They live on the minion, they describe the minion, and they're available to anyone who can query that minion. Some are discovered automatically at startup. Others you set yourself. The key thing: grains are stored and cached locally on the minion.
Pillars are configuration data delivered from the master to specific minions. They're rendered on the master, encrypted in transit, and never written to disk on the minion. Pillars are for things you want to control centrally and deliver selectively — database passwords, API keys, environment-specific config values.
The rule I use: if it describes what the machine IS, it's a grain. If it describes what the machine NEEDS TO KNOW, it's a pillar. A server's role is a grain. Its database password is a pillar.
Grains in Practice
Built-in grains you'll use constantly:
salt '*' grains.item os # 'CentOS', 'Amazon', etc.
salt '*' grains.item osrelease # '7', '2015.09', etc.
salt '*' grains.item fqdn
salt '*' grains.item ip_interfaces
salt '*' grains.item mem_total
salt '*' grains.item num_cpus
salt '*' grains.items # dump everything
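Nested grains like ip_interfaces can be drilled into with grains.get, which accepts a colon-delimited path (the interface name here is an assumption; substitute your own):

```shell
# Fetch just the eth0 addresses from the nested ip_interfaces grain
salt '*' grains.get 'ip_interfaces:eth0'
```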
Custom grains are where grains become a real inventory system. Set a role grain on each server at provisioning time:
salt 'web01' grains.setval role webserver
salt 'db01' grains.setval role database
salt 'lb01' grains.setval role loadbalancer
You can also set a list value — quote it so the shell passes the argument through intact:
salt 'web01' grains.setval roles "['webserver', 'app']"
Now target by grain anywhere in Salt:
salt -G 'role:webserver' state.highstate
salt -G 'os:CentOS' cmd.run 'yum update -y'
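Grain matches also combine with other target types through compound matchers (-C), which is where the G@ prefix comes from:

```shell
# Match minions that have BOTH the role:webserver grain and the os:CentOS grain
salt -C 'G@role:webserver and G@os:CentOS' test.ping
```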
In your top file:
base:
  'role:webserver':
    - match: grain
    - nginx
    - app
  'role:database':
    - match: grain
    - postgresql
    - backup
Note the - match: grain line — without it, the top file treats the target as a glob against minion IDs and the grain match silently fails.
This is the approach I settled on by late 2015. Instead of maintaining a separate inventory file that drifts out of sync with reality, the role assignment travels with the minion. When you decommission a server and provision a new one, you set the grain during bootstrap and the top file handles the rest automatically.
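One way to handle the bootstrap step (a sketch — adapt to your provisioning tooling) is to write the static grains file before the minion starts, since /etc/salt/grains is read at startup:

```yaml
# /etc/salt/grains -- written by the provisioning script before salt-minion starts
role: webserver
environment: production   # hypothetical extra grain, not required
```

This gets the same result as grains.setval without needing the master to reach the minion first.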
Using grains in Jinja2 templates inside states:
server_name: {{ grains['fqdn'] }}
cpu_count: {{ grains['num_cpus'] }}
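Grains also drive conditionals inside state files themselves. A minimal sketch using the built-in os_family grain to pick the right package name:

```yaml
{% if grains['os_family'] == 'RedHat' %}
httpd:
  pkg.installed: []
{% else %}
apache2:
  pkg.installed: []
{% endif %}
```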
Pillars: Secure Data from the Master
Pillars live in /srv/pillar/ on the master by default. The structure mirrors the salt fileserver: a top file maps minions to pillar files, and pillar files are YAML with optional Jinja2.
/srv/pillar/top.sls:
base:
  '*':
    - common
  'role:database':
    - match: grain
    - database
/srv/pillar/database.sls:
db_password: hunter2-but-actually-a-real-password
db_replica_user: replication
db_replica_password: another-real-password
/srv/pillar/common.sls:
ntp_servers:
  - 0.pool.ntp.org
  - 1.pool.ntp.org
log_level: info
The database pillar is only delivered to minions with the role:database grain. Web servers never see db_password. This is the actual security model — pillar targeting controls data access.
Verify what a minion sees:
salt 'web01' pillar.items # should NOT show db_password
salt 'db01' pillar.items # should show db_password
In a state file, reference pillar data like this:
/etc/app/database.conf:
  file.managed:
    - source: salt://app/database.conf.j2
    - template: jinja
In the template:
[database]
password = {{ pillar['db_password'] }}
host = {{ pillar.get('db_host', 'localhost') }}
Use pillar.get('key', 'default') when a value might not be present — it's safer than a bare pillar['key'], which raises an error if the key is missing.
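For nested pillar structures, the execution-module form salt['pillar.get'] takes a colon-delimited path and pairs well with a default (the app:db:host key here is hypothetical):

```jinja
host = {{ salt['pillar.get']('app:db:host', 'localhost') }}
```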
Jinja2 with Both
You'll frequently use grains and pillars together in the same template:
[app]
hostname = {{ grains['fqdn'] }}
environment = {{ pillar['environment'] }}
api_key = {{ pillar['app_api_key'] }}
log_dir = /var/log/{{ grains['id'] }}
The pattern I use: grains for identity and topology facts, pillars for credentials and environment-specific values. The template reads clearly — you can tell at a glance what's machine-specific versus what came from a central config.
The Mine Function
The mine is an underused feature that solves a real problem: how do minions share data with each other? The classic case is web servers that need to know the current list of database IPs.
Configure the mine in /etc/salt/minion or via a state:
mine_functions:
  network.ip_addrs:
    - eth0
This publishes the minion's IP on eth0 to the mine. Other minions can then query it:
{% set db_hosts = salt['mine.get']('role:database', 'network.ip_addrs', 'grain') %}
upstream backend {
{% for host, addrs in db_hosts.items() %}
    server {{ addrs[0] }}:5432;
{% endfor %}
}
Two details matter here: with the 'grain' expression form the target is the bare 'role:database' (the G@ prefix belongs to compound matching), and the for loop goes inside the upstream block so you get one block with many servers, not many blocks with the same name.
The mine is refreshed on a schedule controlled by mine_interval in the minion config (default 60 — note that's minutes, not seconds). It's not real-time, but for building config files it's more than fast enough and it eliminates a whole class of hardcoded IP address problems.
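When you can't wait for the interval — say, right after bringing up a new database server — you can force an update from the targeted minions:

```shell
# Push fresh mine data from the database minions to the master immediately
salt -G 'role:database' mine.update
```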
The Pillar Cache Gotcha
This one got me more than once. After you modify a pillar file on the master, minions don't automatically see the change. Their pillar data is cached. If you run state.highstate immediately after changing a pillar, minions may apply states using the old values.
The fix:
salt '*' saltutil.refresh_pillar
Run this before any state run that depends on pillars you've just changed. I eventually added it as a step in our deployment scripts. You can also target it narrowly:
salt -G 'role:webserver' saltutil.refresh_pillar
The same applies to grains, to a lesser extent. If you've set a grain and it's not being picked up in targeting, run:
salt '*' saltutil.sync_grains
Security Boundary
To be explicit: pillars are NOT stored on the minion. They're rendered per-minion on the master and delivered over the encrypted ZeroMQ channel, so each minion only ever receives its own pillar data. If you ssh into a minion and look in /etc/salt/, you won't find pillar values sitting in a file.
Grains ARE stored on the minion, in /etc/salt/grains. Don't put secrets in grains. Don't put API keys or passwords in grains. Beyond the security issue, grains are meant to be facts, not secrets. Keeping the distinction clean makes your Salt setup easier to reason about as it grows.
By the end of 2015 I had grain-based targeting covering about 60 minions and pillar data handling four different environments. The setup was clean enough that onboarding a new server was a single grains.setval command followed by a state.highstate. That's the goal — remove the manual per-server configuration steps and make role assignment the only thing you have to do explicitly.