So Jow Forums. I've been getting into configuration management a bit. Manually configuring systems is fun, but it just doesn't scale all that well, and current configuration management systems are more manageable than dozens of shell scripts.

I've tried Ansible for a bit but I don't really like the YAML files that much. What other configuration management systems have you tried? Currently I'd like to try SaltStack, but I'm open to others as well, be it Chef, Puppet, Nix or something else.

Attached: Devopstools.jpg (700x500, 66K)

I use SaltStack and Ansible at work. Salt has some very nice features like beacons and the event bus, which are very handy if you want to automate self-healing tasks on top of using it for configuration management. However, it does have a steeper learning curve than Ansible and also uses YAML files.
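
To give a rough idea of what the self-healing side looks like: a beacon in the minion config fires events onto the bus, and a reactor on the master maps those events to states or commands. This is only a sketch with made-up paths and service names, and the exact beacon/reactor syntax varies between Salt versions, so check the docs for yours.

# /etc/salt/minion.d/beacons.conf -- watch a service and emit events when it changes state
beacons:
  service:
    - services:
        nginx: {}

# /etc/salt/master.d/reactor.conf -- map those events to a reactor sls (tag pattern is an assumption)
reactor:
  - 'salt/beacon/*/service/*':
    - /srv/reactor/restart_nginx.sls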

If you want to get away from YAML, try Chef. The cookbooks are written in Ruby.

manually configuring systems sucks ass. i'm currently ansibling all my servers, because it's just way faster and easier to run a playbook than trying to debug why some vm died during the update and figure out how to revive it again.
also, playbooks serve as documentation.

i'm still trying to figure out how to ansible my ldap servers though, can't find any modules or roles to set up an ldap server, just to add stuff.
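
One possible starting point (untested sketch, not a real role): install and preseed slapd with the apt and debconf modules, then use ldap_entry for the actual entries. The debconf question name and the ldap values below are assumptions, so verify them with debconf-get-selections and the module docs.

- name: preseed the slapd domain (question name is an assumption)
  debconf:
    name: slapd
    question: slapd/domain
    value: example.org
    vtype: string

- name: install slapd and the ldap utilities
  apt:
    name:
      - slapd
      - ldap-utils
    state: present

- name: add an example ou once the server is up
  ldap_entry:
    dn: ou=people,dc=example,dc=org
    objectClass: organizationalUnit
    bind_dn: cn=admin,dc=example,dc=org
    bind_pw: "{{ ldap_admin_pw }}"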

Looks like I should just suck it up, get more used to YAML and learn Ruby while I'm at it. It's only two different ways to write management files across four configuration management systems.

Since you use both Ansible and Saltstack could you highlight what you like and dislike about both? Also I'm learning from the docs but is there a better way to learn them?

This is basically where I want to start. I have a lot of configuration for lots of different systems and it would be an absolute pita to reproduce the same configuration if anything happens. From there on I'd like to try to achieve way more with configuration management. It also seems like a nice skill to have on the job market once I graduate from uni.

Hey, can you guys answer this for me? I have been messing around with setting up some of these configuration management techs. I set up a puppet master server, but first had to set up a DNS server, and after I set that up I realized the default Windows Azure DNS was boning my ass by overwriting all of my addressing, so I'm back to square one trying to figure out whether to start over on AWS or DO.

Anyway, are these configuration automation tools only for use with containerization instances? I was setting up static servers on my cloud instance and looking to use puppet, etc. to do config management on those. Does that even make any sense? I'm still trying to wrap my head around all of these sysops/devops concepts.

puppet.com/docs/puppetserver/5.3/services_master_puppetserver.html

no, doing it with static servers is fine

stick with ansible, puppet is overkill. if you must use puppet use masterless puppet

this sort of stuff is not that bad. what's your problem with it (ansible/yml)

- name: copy caddy
  copy:
    src: caddy
    dest: /usr/local/bin/
    mode: 0755
    owner: root
    group: root

- name: Set cap_net_bind_service+eip on /usr/local/bin/caddy
  capabilities:
    path: /usr/local/bin/caddy
    capability: cap_net_bind_service+eip
    state: present

- name: Creates /etc/caddy directory
  file:
    path: /etc/caddy
    state: directory
    owner: root
    group: www-data
    mode: 0775
    recurse: yes

- name: Creates /etc/ssl/caddy directory
  file:
    path: /etc/ssl/caddy
    state: directory
    owner: www-data
    group: root
    mode: 0770
    recurse: yes

- name: Creates /var/www directory
  file:
    path: /var/www
    state: directory
    owner: www-data
    group: www-data
    mode: 0775
    recurse: yes

- name: copy Caddyfile
  copy:
    src: Caddyfile
    dest: /etc/caddy/Caddyfile
    mode: 0775
    owner: root
    group: www-data
  register: caddyconf

- name: Install caddy systemd service definition
  copy:
    src: caddy.service
    dest: /etc/systemd/system/caddy.service
    mode: 0644
    owner: root
    group: root

- name: keep caddy running
  systemd:
    state: started
    enabled: yes
    name: caddy
    daemon_reload: yes

- name: restart caddy on Caddyfile change
  systemd:
    state: restarted
    enabled: yes
    daemon_reload: yes
    name: caddy
  when: caddyconf.changed
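
No real problem with it, one nitpick though: the register/when pair at the end works, but the more idiomatic Ansible way is a handler, so the restart only fires when the Caddyfile copy actually reports a change. Roughly:

- name: copy Caddyfile
  copy:
    src: Caddyfile
    dest: /etc/caddy/Caddyfile
    mode: 0775
    owner: root
    group: www-data
  notify: restart caddy

# and in the play's or role's handlers section:
handlers:
  - name: restart caddy
    systemd:
      name: caddy
      state: restarted
      daemon_reload: yes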

Ah, ok. I was looking at puppet just because it seemed to be one of the more complex / low-level solutions. Figured I'd learn the 'hardest' shit first so I have the fundamentals and concepts in place, rather than just letting an ez-mode tool do all the work without me understanding it.

I used Udemy, YouTube, and Linux Academy to learn both of them. I'm a slow reader, but SaltStack has a Vagrant setup that you can use to quickly spin up a test environment.

Ansible:
I like Ansible's playbook setup: you don't have to define separate formulas and whatnot to target specific hosts. It's easy to use, with tons of tutorials out there. I also like that there are ESXi and AWS modules which let you spin up instances on the fly, kind of like Terraform and CloudFormation.
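
For example, spinning up an instance from a playbook looks roughly like this. The module name depends on your Ansible version (newer releases use amazon.aws.ec2_instance, older ones had the plain ec2 module), and the AMI id, key pair and region here are just placeholders:

- name: launch a throwaway instance
  amazon.aws.ec2_instance:
    name: test-box
    region: eu-west-1
    instance_type: t3.micro
    image_id: ami-0123456789abcdef0   # placeholder AMI
    key_name: my-keypair              # placeholder key pair
    state: running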

What I don't like is how it uses SSH to connect to individual hosts, which can be slow at times depending on how many hosts you're interacting with.


Saltstack:
Feature rich right out of the box. Like I mentioned earlier, the event bus and beacons give you some automation options depending on what your instance is doing.
It's faster than Ansible because it doesn't use SSH to interact with individual hosts. You can use SSH with SaltStack if you want, but ZeroMQ is the default option.

For the cons: steeper learning curve, especially once you get into building state templates and start playing with the advanced features. The Jinja syntax confused the shit out of me. Make sure you understand basic Python, because grabbing data off the event bus means digging through Python dictionaries.
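
For reference, a state template with Jinja in it looks something like this (file layout and names are just an example):

# /srv/salt/webserver/init.sls
{% set web_pkg = 'apache2' if grains['os_family'] == 'Debian' else 'httpd' %}

install_webserver:
  pkg.installed:
    - name: {{ web_pkg }}

webserver_running:
  service.running:
    - name: {{ web_pkg }}
    - enable: True
    - require:
      - pkg: install_webserver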

the only things i don't ansible (or haven't yet) are the cluster my VMs are running on, and my ansible VM.
all these VMs are pretty much static and don't change often (dnsmasq, firewall, ldap, nfs, samba, mail, openvpn, .. each service has its own VM)

Damn dude, fucking goals. You're a god. I could probably learn so much from you - setting up a fully fleshed out environment like that is a goal of mine, but it feels so far out from where I am now. Thanks for the response(s).

took me ~1 year. i took over a completely broken IT environment without any documentation, where no server survived a fucking update, so instead of trying to fix that shit i tried to rebuild it using ansible.

Thank you for the breakdown. I think SaltStack sounds more interesting to me but I don't have any Python knowledge yet. Do you think it's better to start with Ansible and try SaltStack later or is it doable to go balls deep with SaltStack first?

Well I hope I can reach that level within a decent timeframe. I like working with infrastructure more than full time developing. However with all this DevOps development going on I really need to step up my game and broaden my skillset.

>PalletOps
What did they mean by this?

Go with Ansible, it's much easier to learn than SaltStack. Once you get a handle on YAML and configuration management principles, then you can start looking at SaltStack. Both are Python-based anyway.

Does anyone here have experience with Nix? If yes, how is it?

ptdel, I know you're in this thread.

As a normal systems developer (mostly firmware/embedded), what can this stuff do for me? I kind of understand docker, and I use GitLab CI for my personal projects.
The devops stuff at work is pretty weak: one guy, a server running Jenkins, and an SVN server, all halfway across the planet from my office.

Configuration management offers you a way to write a configuration and push it to other devices. This way you can deploy reproducible results across your infrastructure.

Just take a look at this guy.
If his cluster crashes he only needs to build a new cluster, after which he can instruct his configuration manager to recreate all the VMs he mentioned with the exact same configuration they had before.

Configuration management is also great for managing multiple devices at once. When you have a change you want to roll out to 10 servers, you can just write a file for the desired change and send it to all 10 servers at once, after which they all apply the change in exactly the same way. This way you save the time and effort needed to perform the change while making sure your servers don't end up with small differences in their configuration.
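
With Ansible, for example, "write a file for the desired change" is just a small playbook run against an inventory group (all the names here are made up):

# push_motd.yml -- run with: ansible-playbook -i inventory push_motd.yml
- hosts: webservers      # the group of 10 servers from your inventory
  become: yes
  tasks:
    - name: same motd on every host
      copy:
        content: "managed by ansible -- do not edit by hand\n"
        dest: /etc/motd
        owner: root
        group: root
        mode: 0644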

Docker is more of a container that keeps one application in its desired state across multiple machines. This is also pretty nice, but it doesn't make sure the hosts it's running on are configured properly. I don't know how useful configuration management would be for you, but maybe it would be nice to have a script that can create a reproducible test environment.

Most of the configuration management stuff has the same problems:

* no semantic checking lets syntactically valid errors pass into your deployment

* you end up calling shell scripts all the time anyways

* small misconfigurations can proliferate into huge problems.

* something has to manage state somewhere

* desired state is a large attack surface compared to immutable state.

* if you're not doing headless deployments, networking issues will fuck up your deploy roll-outs

Chef, Puppet, Salt, Ansible, W/E, you'll find they all have these issues.

If I _have_ to recommend one it'd be Propellor. propellor.branchable.com/ It leverages Haskell's type system to make "provable" deployments. It avoids a lot of the errors inherent in other configuration management systems, and Joey Hess (the developer) is a man of notable contributions.

For embedded development, I'm not sure how much config management really gets you, as opposed to just automated builds. Jenkins FWIW isn't configuration management, but you can use it to interface with those types of tools

>deploy reproducible results on your infrastructure
but my "infrastructure" (i.e. my product) is tens of thousands of control blocks deployed in factories around the world. They don't run an operating system that has any concept of a VM or a container. Most of them don't even know what a filesystem is.

Doesn't Nix avoid most of these problems too due to being a purely functional package manager?

Well it probably has no use for you in that case.

I suppose. It's just a package manager though, I don't think anything about it is going to orchestrate sets of configurations on multiple hosts. I haven't looked at it much though so I could be wrong.

Nix has something called NixOps, which can be used to create, orchestrate and remove NixOS hosts. Nix also handles the configuration of your own system or of other systems you let it manage. Let's just say Docker manages applications, other configuration managers can configure hosts, and NixOS is a host that can configure itself.

that's pretty cool. Is it a third-party tool using Nix or is it part of their core? These days I do more application development so I haven't checked out anything up and coming. Would this require that all hosts it interfaces with are NixOS hosts? The one thing I liked about Propellor was that it was host agnostic.

To me the ideal setup is using configuration management to create images and then just deploying those images.
You can use a config management tool + Hashicorp's packer to do this, or just use docker if you don't need VMs.

I don't know if there is a way to get just the package manager by itself running on other distros. However, I'd say NixOS is also a pretty decent distro. It would be a good pick for hypervisors, for example.

Go with Salt. You'll start to hit the speed limitations of Ansible quite quickly. Salt on the other hand is fast af. Both have a learning curve and Salt's isn't much steeper.

>Well it probably has no use for you in that case
that's unfortunate. I think embedded and industrial automation could really benefit from better tooling. My company is relatively innovative compared to our competitors, and even our stuff is pretty damn primitive when it comes to the management/deployment tools.

Well, you can't exactly use configuration management on devices that don't have a network stack (or any other way to communicate) and enough userspace tools to interpret the commands of the master. Maybe you could develop a standard way to change the configuration of devices, given you have a way of communicating with them: some sort of network interface to connect to, a couple of tools that can read data and stats, and a tool that can write new configuration. However, I haven't really done any real embedded work, so I don't know what kind of tools/equipment you have access to.

>devices that don't have a network stack
oh they have a network stack, a very sophisticated one at that. Besides all the weird industrial protocols, they also have an HTTP server, SNMP, and a few other common user-facing protocols. They also have a runtime system for user code, and can talk to each other via Modbus. The problem is that Modbus is not meant to support a massive distributed network of communicating devices. I think something almost like the EC2 API would be an insanely cool thing for our devices to have.

Maybe if you can implement a listening server (like maybe ssh) which you can connect to for administrative tasks, you could make some sort of configuration manager for it. You probably want an agent that can send data back so you can read stats and the current configuration, plus the ability to write new config files, or to revert all configuration and push a fresh configuration to make sure no garbage is left behind.

On the master side you want it to be able to take a file that describes what you want to configure, specify all devices you want to push the config to, interpret the file into the necessary commands to realise the config, and then push it to all the devices at once.

There might be a way to realise a decent configuration management system in your sector but it might be just a bit harder given the difference between available tools on standard servers and embedded systems.

>Maybe if you can implement a listening server (like maybe ssh) which you can connect to for administrative tasks
maybe it could just be integrated into the existing HTTP server? just service requests given in some easy-to-process format, e.g. JSON. Then, provide a client-side GUI application that just makes those requests. That would allow more processing to be offloaded to the client machine, as opposed to the devices. A full-blown shell would be a lot for these things to handle.

They all suck ass.
Ansible is probably the least bad since it's using some sane technologies like Python and SSH (with some retarded stuff like YAML).

Salt has a concept called proxy minions for machines that can't run standard config management tools. This is what you want to use. You can hook up embedded hardware with weird network stacks and control them all with a salt master.
docs.saltstack.com/en/latest/topics/proxyminion/index.html
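
The setup is basically a pillar entry telling a salt-proxy process how to reach the device, and then running salt-proxy somewhere that can talk to it. A rough sketch based on the tutorial (rest_sample is the demo proxy module, real hardware needs its own proxy module, and ids/urls here are placeholders):

# /srv/pillar/p8000.sls
proxy:
  proxytype: rest_sample
  url: http://192.168.1.50:8000

# then, on a box that can reach the device:
#   salt-proxy --proxyid=p8000 -d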

What do you use to keep loads of machines in check then? They could all improve of course, but it's better than managing loads of machines manually.

Well, it would seem easy enough to use configuration management then, if user develops a standard interface for all the embedded devices.

I guess I'll have a look at how to write an interface too, seems useful for some network devices.

>Ctrl+F
>stow
>0 results
Is it a viable alternative?

I have used Ansible. It's decent, but playbooks will fail unless you specifically say that the tasks shouldn't fail. There is some testing tooling (Molecule), but it sucks and only works with Docker. Also the dependency on Python sucks.
stow literally just manages config files.
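
On the "tasks fail unless you say they shouldn't" point, the usual knobs are ignore_errors and failed_when, something like this (the commands and return codes are just an example):

- name: poke a health endpoint, but don't abort the play if it's down
  command: curl -sf http://localhost:8080/health
  register: health
  ignore_errors: yes
  changed_when: false

- name: grep returns 1 on "no match", so only treat rc > 1 as a real failure
  command: grep -q '^Listen 443' /etc/apache2/ports.conf
  register: listen_check
  failed_when: listen_check.rc > 1
  changed_when: false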

>stow literally just manages config files
Isn't that the point? I'd do the rest with scripts and cron jobs. But I'm not a systems autist so please correct me if I'm wrong.

>Isn't that the point
It isn't. Configuration management isn't just about config files, it's about every aspect of the system. Installed packages, IP configuration, hostname, services, etc.
There's a key difference between something like Ansible and bash scripts. Ansible describes how the system configuration *should be*. Theoretically if you run the same playbooks twice, nothing on the system should change. However, bash scripts are 'dumb' and just complete specific tasks. They also aren't as powerful as Ansible because Ansible has a huge number of modules to hook into stuff. Not to mention that bash scripts become spaghetti very quickly.
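
A concrete example of that difference: a shell script that appends a line adds it again on every run, while the equivalent Ansible task describes the end state and is a no-op the second time (sketch, the file and line are just the usual sshd example):

# bash: appends a duplicate line every time you run it
#   echo 'PermitRootLogin no' >> /etc/ssh/sshd_config

# ansible: converges the file to the desired line and then stops changing it
- name: disallow root login over ssh
  lineinfile:
    path: /etc/ssh/sshd_config
    regexp: '^#?PermitRootLogin'
    line: 'PermitRootLogin no'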

i've used puppet a bit. it's pretty cool, but the modules sometimes lack documentation and you have to dig into the code to try to figure out what to do. it can also be pretty complicated, some modules can be huge, and it uses its own gaylord language/syntax now. it's also ruby, which i'm not a big fan of and which is generally a dying language. ansible is much easier in comparison, though from what i read, the modules are also worse.

puppet is structured well though, certain configurations go in specific places, it has composition, inheritance etc. you can do pretty much everything with it, outside of provisioning the machines and bootstrapping them (you need to create the machine, configure the network, and install the puppet client on them), which you can do with some other tool.

also it's so cool/magical once you set up the machine to do everything you want. when my coworkers saw it, they had that kind of wonder you'd only see in the 90s when people were presented with really groundbreaking technology.

pic somewhat related

Attached: big anime tiddie grab.jpg (1253x1309, 495K)

Thanks, I see how this is useful in a business environment now. For my ricing and homelab needs, stow, python and execline scripts are enough though. I don't use bash.

Any tips for ansible? I want to learn it mainly because i'd rather not configure hundreds of devices by hand, even though i already have a bash script to do it for me. Also, how can i make my current linux install a bootable distribution? as of right now i just have a live cd.