Distribution Init scripts (sys-v, systemd, etc.)
Terry Burton via Argus-info
argus-info at lists.andrew.cmu.edu
Wed May 25 11:51:23 EDT 2016
On 25 May 2016 at 15:33, Carter Bullard <carter at qosient.com> wrote:
> Anything that makes deployment easier is good for me. I do like this as a starting point, but I believe that whatever we do, it should be applicable to argus, radium and other argus stream processes, as well as argus repository development and maintenance, whether they are on a single box, or distributed among a number of boxes.
>
> For many sites, they build complex radium collection nodes, which can have dozens of radii, collecting and distributing argus data streams. These are generally related to the ra client programs that consume the streams to get work done. Would you want these to be sub-directories of the radium branch, or separate?
<...snip...>
> We can put argus.conf and radium.conf files in the instance subdirectories, as well as any configuration files for rasplit, or ralabel, or rawhatever. Lock files can go into the instance directories ???
Hi,
I'm purposefully not trying to impose any structure or methodology
with this directory-driven approach, so I envision a simple flat
structure (no hierarchy) that in essence just maps instance names to
actual daemon commands. The names radium/rasplit in my example are
merely instance names and would likely have more specific identifiers
in a real deployment. I wouldn't intend any differentiated treatment
of argus vs radium vs other-raclient, etc. - the user might even
create an instance of some useful non-argus command such as a netcat
connector or whatever.
The directories might indeed be useful for storing instance-specific
configuration; however, I would recommend that any lock files land in
/var/run/, named something like argus-$INSTANCE.lock.
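For example, a minimal sketch of the naming I have in mind (the
$INSTANCE value here is just a hypothetical instance-directory name,
not a settled convention):

# sketch only: per-instance lock file naming under /var/run/
INSTANCE=eth1-192.168.0.1              # hypothetical instance name
LOCKFILE=/var/run/argus-$INSTANCE.lock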
The idea is to let users easily specify what they think they need in
terms of argus components and then turn that thinking into real
init-system-managed daemons very quickly, making the journey from
prototype to production quick, fun and easy to iterate on. At the
moment I suspect most people start by running a number of foreground
instances of the tools to assemble what they want, and those users
who are not seasoned sysadmins may then struggle to find the most
appropriate way to "properly install" this "prototype", suffering
robustness issues as a result.
Specifically, the user's thought process might be "I know I need an
argus collector named 'argus123' running with -J -A -R -U 80, a
rastream called 'radiumLabelCNflows', a radium called
'radiumAggregatorAndFilterVLAN123', ..." and they then configure the
daemons simply by creating/amending instance directories, without
touching a single init script/service file, etc. Finally they kick
their init system to install/restart those daemons - this could
potentially be automated, or is at least straightforward to document
for each major init system.
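As a rough illustration (the instance name, command line and systemd
unit name below are assumptions for the sake of example, not settled
conventions), that workflow might look like:

# create an instance directory and describe the daemon to run
mkdir -p /etc/argus/instances/argus123
cat > /etc/argus/instances/argus123/env <<'EOF'
CMD=argus -u argus -J -A -R -U 80 -B 127.0.0.1 -P 564
EOF
# then kick the init system (systemd shown; other init systems differ)
systemctl enable argus@argus123
systemctl start argus@argus123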
> With that brief response, not sure about the naming convention inf-ipaddr for instance directories.
So my idea is that the instance names are arbitrary. The inf-ipaddr
form just happens to reflect the way I label my argus instances at
present in my setup (several collector boxes feeding a fanout network
of radii that select data to forward towards flow-storage and
flow-analysis VMs).
> Sandboxing for Mac OS X may work well in this as long as initd-like facilities work with these directories. But if a person wanted to launch from this structure, it may not be straightforward in some security settings.
I've no familiarity with this, so nothing to add here.
> So, I like it, not sure of the portability to other init systems, and need to work on the conventional use of the directory structure.
Great. I'll write a sys-v init script that accomplishes the above and
that should give us more opportunity to discuss the details.
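Something along the lines of the following untested sketch (the
naming, paths and process handling here are assumptions to be refined
in review, not a finished implementation):

#!/bin/sh
# Untested sketch: a single instance-aware sys-v init script, intended
# to be symlinked once per instance, e.g.
#   ln -s /etc/init.d/argus /etc/init.d/argus@eth1-192.168.0.1
NAME=$(basename "$0")                  # e.g. argus@eth1-192.168.0.1
INSTANCE=${NAME#argus@}                # e.g. eth1-192.168.0.1
ENVFILE=/etc/argus/instances/$INSTANCE/env
LOCKFILE=/var/run/argus-$INSTANCE.lock
[ -r "$ENVFILE" ] || { echo "$NAME: missing $ENVFILE" >&2; exit 1; }
# read CMD without sourcing, so unquoted values containing spaces work
CMD=$(sed -n 's/^CMD=//p' "$ENVFILE")
case "$1" in
  start)
    echo "Starting $NAME"
    $CMD &
    echo $! > "$LOCKFILE"
    ;;
  stop)
    echo "Stopping $NAME"
    [ -r "$LOCKFILE" ] && kill "$(cat "$LOCKFILE")" && rm -f "$LOCKFILE"
    ;;
  restart)
    "$0" stop
    "$0" start
    ;;
  *)
    echo "Usage: $NAME {start|stop|restart}" >&2
    exit 2
    ;;
esac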
All the best,
Terry
>> On May 24, 2016, at 7:26 AM, Terry Burton via Argus-info <argus-info at lists.andrew.cmu.edu> wrote:
>>
>> Hi Carter,
>>
>> I hope you are well.
>>
>> It's been about a year since I last looked at Debian packaging and,
>> having recently upgraded our platform to Jessie (which defaults to
>> systemd), I have been giving the init script a little thought.
>>
>> Currently argus supplies supporting init scripts that set up a basic
>> argus -> radium -> rasplit pipeline for logging the "em2" interface
>> to the local filesystem. The distribution-specific init scripts
>> follow this lead, resulting in an installation that is mostly broken
>> out of the box for the majority of users and is unlikely to match
>> their expectations.
>>
>> Getting things up and running requires the user to duplicate and
>> customise the provided init scripts (or .service files) per
>> argus/raclient instance - taking care to avoid lock file clashes and
>> such in the sys-v case. As this process is very specific to their
>> environment it is non-trivial to document centrally - and
>> distribution maintainers are usually too lazy to write specific
>> documentation ;-)
>>
>> What if instead we were to provide a framework that makes it easier
>> to configure individual instances of the argus/raclient daemons and
>> that aims to be somewhat platform/init-system-agnostic?
>>
>> For example, the installation/packaging could create
>> /etc/argus/instances beneath which the user creates a subdirectory
>> with an arbitrary instance name for each argus/radium/rasplit
>> instance, etc.
>>
>> $ tree /etc/argus/instances/
>> /etc/argus/instances/
>> ├── eth1-192.168.0.1
>> │   └── env
>> ├── eth3-123.123.1.1
>> │   └── env
>> ├── radium
>> │   └── env
>> └── rasplit
>>     └── env
>> (Initially this could be pre-populated with the current simple "em2" example.)
>>
>> Each instance contains a file, env, that contains environment
>> variables. Currently there is only one such variable, CMD, which
>> determines the command that is run.
>>
>> Here's a basic example to illustrate the point:
>>
>> $ cat /etc/argus/instances/eth1-192.168.0.1/env
>> CMD=argus -u argus -i eth1/192.168.0.1 -B 127.0.0.1 -P 564
>>
>> $ cat /etc/argus/instances/eth3-123.123.1.1/env
>> CMD=argus -u argus -i eth3/123.123.1.1 -B 127.0.0.1 -P 565
>>
>> $ cat /etc/argus/instances/radium/env
>> CMD=radium -u argus -S 127.0.0.1:564 -S 127.0.0.1:565 -B 127.0.0.1 -P 569
>>
>> $ cat /etc/argus/instances/rasplit/env
>> CMD=rasplit -u argus -S 127.0.0.1:569 -M time 5m -w /var/log/argus/%Y-%m-%d/$srcid-%H:%M:%S.arg
>>
>> systemd has an "instances" feature that makes using this trivial. It
>> expands references to "%i" with the part after the @ symbol in the
>> .service filename, so you can have a single template service file as
>> follows:
>>
>> $ cat /lib/systemd/system/argus@.service
>> [Unit]
>> Description=UoL Argus instance %i
>> After=network.target
>>
>> [Service]
>> Type=simple
>> EnvironmentFile=/etc/argus/instances/%i/env
>> ExecStart=/usr/bin/env $CMD
>>
>> [Install]
>> WantedBy=multi-user.target
>>
>> Which can be enabled per argus/raclient instance as follows:
>>
>> systemctl enable argus@eth1-192.168.0.1
>> systemctl enable argus@eth3-123.123.1.1
>> systemctl enable argus@radium
>> systemctl enable argus@rasplit
>>
>> This scheme should be harmonious for both systemd and sys-v, since
>> you could have a common sys-v init script (say /etc/init.d/argus)
>> that is enabled via per-instance symlinks in the rcN.d directories,
>> e.g. /etc/rc2.d/argus@eth1-192.168.0.1. The init script inspects its
>> $0 to obtain its instance-specific name, sources
>> /etc/argus/instances/$0 (-ish), and invokes the daemon contained in
>> $CMD.
>>
>> The benefit is that for most setups the user never needs to do
>> anything other than configure the /etc/argus/instances
>> subdirectories and then enable the services according to the
>> specifics of their init system.
>>
>> Do you think that this would be a useful abstraction, and can it be
>> easily extended to other init systems that you are familiar with in
>> the argus ecosystem? If so then I'm happy to update the systemd and
>> sys-v init configuration in the Debian packaging, which you can then
>> adapt for CentOS, the BSDs, etc.
>>
>>
>> All the best,
>>
>> Terry