Daemon tasks are actions executed in the background by the daemon.
They're started from the GUI (server) via a protobuf Notification of
type TASK_START.
Once received by the daemon, the TaskManager starts the task in the
background.
Tasks may run at intervals (every 5s, every 2 days, etc.), until they
finish an operation, until a timeout, etc.
Each task has its own configuration options, which customize the
behaviour of its operations.
In this version, if the GUI is closed, the daemon stops all the
running tasks.
Each Task has a flag to ignore this behaviour, for example if it needs
to run until it finishes and only send a notification to the GUI,
instead of streaming data continuously to the GUI (server).
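
A minimal sketch of what such a task could look like, assuming a
hypothetical Task interface and IntervalTask type (names and
signatures are illustrative, not the daemon's actual API):

    package tasks

    import (
        "context"
        "time"
    )

    // Task is anything the TaskManager can run in the background.
    type Task interface {
        // Start runs the task until ctx is cancelled (e.g. the GUI disconnects).
        Start(ctx context.Context, results chan<- string) error
        Stop() error
        // StopOnDisconnect reports whether the task must stop when the GUI closes.
        StopOnDisconnect() bool
    }

    // IntervalTask runs an operation periodically until its context is cancelled.
    type IntervalTask struct {
        Interval time.Duration
        Op       func() string
    }

    func (t *IntervalTask) Start(ctx context.Context, results chan<- string) error {
        ticker := time.NewTicker(t.Interval)
        defer ticker.Stop()
        for {
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-ticker.C:
                results <- t.Op()
            }
        }
    }

    func (t *IntervalTask) Stop() error            { return nil }
    func (t *IntervalTask) StopOnDisconnect() bool { return true }

Cancelling the context from the TaskManager when the GUI disconnects
gives the "stop all running tasks" behaviour described above, unless
the task's flag says otherwise.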
- Up until now we only had one task that could be initiated from the
  GUI: the process monitor dialog. It has been migrated to a Task{}.
- go.mod bumped to Go 1.20, to use the unsafe string functions.
- go.sum updated accordingly.
- When reloading rules from a path: stop the existing (domains, ips,
  regexp) list monitors, stop the rules watcher and start watching the
  new dir for changes, delete the existing rules from memory, etc.
- Previously, cli parameters (queue number, log file, etc.) were taken
  into account before loading the configuration.
  Now the configuration file (default-config.json) is loaded first, and
  if any cli parameter has been specified, it overwrites the
  configuration loaded from the file (see the sketch below).
  This means, for example, that if you use "-process-monitor-method proc"
  and "ebpf" is configured in default-config.json, "ebpf" will be
  configured first, and "proc" afterwards.
  (for now the -queue-num option must match the config option
  cfg.FwOptions.QueueNumber)
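
A minimal sketch of that precedence, assuming illustrative flag and
field names (flag.Visit only walks the flags the user actually set):

    package main

    import (
        "encoding/json"
        "flag"
        "fmt"
        "os"
    )

    type Config struct {
        ProcMonitorMethod string `json:"ProcMonitorMethod"`
        LogFile           string `json:"LogFile"`
    }

    func main() {
        method := flag.String("process-monitor-method", "", "proc, audit, ebpf, ...")
        logFile := flag.String("log-file", "", "path to the log file")
        flag.Parse()

        // 1) load the defaults from the configuration file first.
        var cfg Config
        if raw, err := os.ReadFile("/etc/opensnitchd/default-config.json"); err == nil {
            json.Unmarshal(raw, &cfg)
        }

        // 2) only the flags explicitly passed on the cli overwrite the file values.
        flag.Visit(func(f *flag.Flag) {
            switch f.Name {
            case "process-monitor-method":
                cfg.ProcMonitorMethod = *method
            case "log-file":
                cfg.LogFile = *logFile
            }
        })

        fmt.Printf("%+v\n", cfg)
    }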
continuation of previous commit bde5d34deb
- Allow reconfiguring stats limits (how many events we keep on the
  daemon, number of workers, ...)
- Allow reconfiguring loggers.
- Added the cli option -config-file to specify an alternate path to the
  config file.
- Allow configuring the rules path from the configuration file (the cli
  option takes precedence).
- The default paths are now /etc/opensnitchd/rules and
  /etc/opensnitchd/default-config.json. Previously the default rules
  directory was "rules" (relative path).
Closes #449
Now you can create rules to filter processes by checksum (see the
sketch below). Only md5 is available at the moment.
There's a global configuration option to enable or disable this
feature, from the config file or from the Preferences dialog.
As part of this feature there have been more changes:
- New proc monitor method (PROCESS CONNECTOR) that listens for
  exec/exit events from the kernel.
  This feature depends on the CONFIG_PROC_EVENTS kernel option.
- Only one cache of active processes for the ebpf and proc monitor
  methods.
More info and details: #413.
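
The idea behind the checksum matching, as a minimal sketch (hashing the
binary behind /proc/<pid>/exe; the daemon's real implementation and
caching differ):

    package main

    import (
        "crypto/md5"
        "fmt"
        "io"
        "os"
    )

    // checksumPID returns the md5 of the binary a process is running.
    func checksumPID(pid int) (string, error) {
        f, err := os.Open(fmt.Sprintf("/proc/%d/exe", pid))
        if err != nil {
            return "", err
        }
        defer f.Close()
        h := md5.New()
        if _, err := io.Copy(h, f); err != nil {
            return "", err
        }
        return fmt.Sprintf("%x", h.Sum(nil)), nil
    }

    func main() {
        sum, err := checksumPID(os.Getpid())
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println(sum)
    }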
- Allow using SSL certificates to secure unix socket communications.
- Allow using abstract unix sockets for the server and the nodes.
  Go gRPC doesn't seem to understand unix socket addresses that start
  with "unix-abstract:", and python gRPC doesn't seem to understand
  "unix:@" addresses.
  Therefore, on the server (python gRPC) we use the format "unix:@" to
  specify the address the server will listen on, and rewrite it to
  "unix-abstract:" before starting the server.
  Note about certs and abstract unix sockets:
  When creating the SSL certificates, you'll have to specify the
  address of the unix socket as the Common Name of the certificates:
    Address: "unix:@my-abstract-socket"
    Common Name: @my-abstract-socket
Allow encrypting channel communications with certificates.
There are 3 authentication types: simple, tls-simple and tls-mutual.
- 'simple' doesn't encrypt communications.
- 'tls-simple' uses a server key and certificate for the server, and a
  common CA certificate or the server certificate to authenticate all
  nodes.
- 'tls-mutual' uses a server key and certificate for the server, and a
  client key and certificate per node.
There are 2 options to control how gRPC validates credentials (see the
sketch after the example below):
- SkipVerify: https://pkg.go.dev/crypto/tls#Config
- ClientAuthType: https://pkg.go.dev/crypto/tls#ClientAuthType
Example configuration:
"Server": {
"Address": "127.0.0.1:12345",
"Authentication": {
"Type": "tls-simple",
"TLSOptions": {
"CACert": "/etc/opensnitchd/auth/ca-cert.pem",
"ServerCert": "/etc/opensnitchd/auth/server-cert.pem",
"ClientCert": "/etc/opensnitchd/auth/client-cert.pem",
"ClientKey": "/etc/opensnitchd/auth/client-key.pem",
"SkipVerify": false,
"ClientAuthType": "req-and-verify-cert"
}
}
}
More info: https://github.com/evilsocket/opensnitch/wiki/Nodes
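
A rough sketch of how those options could map onto Go's crypto/tls
types. Assumptions: the "req-cert" value is illustrative (only
"req-and-verify-cert" appears above), and the daemon's real wiring
differs:

    package auth

    import "crypto/tls"

    // clientAuthType maps a ClientAuthType config value onto crypto/tls.
    func clientAuthType(name string) tls.ClientAuthType {
        switch name {
        case "req-and-verify-cert":
            return tls.RequireAndVerifyClientCert
        case "req-cert": // hypothetical value, for illustration only
            return tls.RequireAnyClientCert
        default:
            return tls.NoClientCert
        }
    }

    // serverTLSConfig builds the TLS config from the options above.
    func serverTLSConfig(cert tls.Certificate, skipVerify bool, authType string) *tls.Config {
        return &tls.Config{
            Certificates:       []tls.Certificate{cert},
            InsecureSkipVerify: skipVerify,
            ClientAuth:         clientAuthType(authType),
        }
    }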
- Configuring system firewall rules from the GUI is not supported for
  iptables. Up until now only a warning was displayed, encouraging the
  user to change the fw type manually.
  Now if the configured fw type is iptables (default-config.json,
  Firewall:) and the user opens the fw dialog, we'll ask the user to
  change it from the GUI.
- Add fw rules before connecting to the GUI. Otherwise we send an
  invalid fw state to the GUI.
Up until now some error and warning messages were only logged to the
system, not letting the user know what was happening under the hood.
Now the following events are notified:
- eBPF related errors.
- netfilter queue errors.
- configuration errors.
WIP, we'll keep improving it and building new features on top of this
one.
Now you can send events to syslog, local or remote.
This feature was requested in #638.
It allows you to integrate opensnitch with your SIEM. Take a look at
the discussion above to see examples with
syslog-ng+promtail+loki+grafana.
There's only one logger implemented (syslog), but it should be easy to
add more types of loggers (elastic, etc.). The event format can be CSV
or RFC5424, and it should also be easy to add more formats.
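
A minimal sketch of emitting one CSV-formatted event to syslog with
Go's log/syslog package (the daemon's logger abstraction and the exact
CSV columns are assumptions here):

    package main

    import (
        "fmt"
        "log/syslog"
        "os"
        "time"
    )

    func main() {
        // an empty network dials the local syslog daemon; use e.g.
        // "udp" and "siem.example.com:514" for a remote SIEM.
        w, err := syslog.Dial("", "", syslog.LOG_NOTICE|syslog.LOG_DAEMON, "opensnitchd")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer w.Close()

        // one intercepted connection as a CSV record (illustrative columns).
        event := fmt.Sprintf("%s,allow,/usr/bin/curl,tcp,443,example.com",
            time.Now().Format(time.RFC3339))
        w.Info(event)
    }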
- Allow configuring the number of stats workers. They were hardcoded
  to 4.
- Added the ability to add a description to the rules.
- Display the description field on the Rules view, and remove the
  internal fields (operator, operator_data, etc).
- Added DB migrations.
- Improved the tooltip of the rules' executable path field (#661).
Closes #652 #466
Previous behaviour:
1) Before version 1.0.0b the daemon kept a list of processes that had
   established connections. The list was displayed on the GUI as is, so
   the maximum number of connections displayed was 100 (hardcoded).
2) When the intercepted connections reached 100, the last entry of the
   list was removed, and a new one was inserted at the top.
After v1.0.0 we started saving connections to a DB on the GUI side, to
get rid of the hardcoded connections limit. However, point 2) was
still present and caused some problems:
- When the backlog was full we kept inserting and deleting connections
  from it continuously, one by one.
- If there was a burst of connections we could end up missing some of
  them.
New behaviour:
- The statistics are deleted from the daemon every time we send them to
  the GUI, because we don't need them on the daemon anymore.
- If the GUI is not connected, the connections are added to the
  backlog as in point 2).
- When the backlog reaches the limit, it keeps deleting the last
  entry in order to insert a new one.
- The number of connections to keep in the backlog is configurable.
- If the statistics configuration is missing, the default values are
  150 (maxEvents) and 25 (maxStats).
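
A sketch of that eviction policy (assuming a hypothetical Backlog type;
here the oldest entry is the one dropped when the limit is hit):

    package main

    import "fmt"

    type Backlog struct {
        max    int
        events []string
    }

    // Add inserts a new event, evicting one old entry when the backlog is full.
    func (b *Backlog) Add(ev string) {
        if len(b.events) >= b.max {
            b.events = b.events[1:] // full: drop the oldest entry
        }
        b.events = append(b.events, ev)
    }

    func main() {
        b := &Backlog{max: 150} // default maxEvents
        for i := 0; i < 200; i++ {
            b.Add(fmt.Sprintf("conn-%d", i))
        }
        fmt.Println(len(b.events)) // 150: the first 50 events were evicted
    }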
Notes:
If the GUI is saving the data to memory (the default), there won't be
any noticeable side effect.
If the GUI is configured to save the connections to a DB on disk, and
the daemon sends the whole backlog at once, the GUI may experience a
delay and a high CPU spike. This can occur on connecting to the daemon
(because the backlog will be full), or when an app opens too many
connections per second (like nmap).
Before this change, we tried to determine which firewall to use based
on the version of iptables (if -V reported legacy -> nftables,
otherwise iptables).
This caused problems (#455), and as there's no support yet for nftables
system firewall rules, it couldn't be configured to work around these
errors.
Now the default firewall is iptables.
If it's not available (installed), can't be used, or the configuration
option is empty/missing, we'll use nftables.
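
A minimal sketch of that fallback logic (illustrative, not the daemon's
actual code):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // selectFirewall prefers iptables (the new default) and falls back to
    // nftables when iptables is missing or no fw type is configured.
    func selectFirewall(configured string) string {
        if configured == "" {
            configured = "iptables"
        }
        if configured == "iptables" {
            if _, err := exec.LookPath("iptables"); err != nil {
                return "nftables" // iptables not installed: fall back
            }
        }
        return configured
    }

    func main() {
        fmt.Println(selectFirewall(""))
    }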
Prior to the v1.4.x versions, when a pop-up asked the user to allow or
deny a connection, the rest of the network traffic was dropped until an
action was taken.
We fixed that, but then, while a pop-up was asking to allow or deny a
new connection, we let it pass if the daemon's DefaultAction option was
set to allow, even if the user hadn't taken an action on it yet.
It also caused some confusion when users had configured the pop-up's
DefaultAction to deny: they expected the connection not to be allowed
until they had decided what to do.
Now the previous behaviour has been restored, with these usage
scenarios (sketched below):
- If the GUI is connected + daemon DefaultAction set to allow or deny.
  Result:
  1. Prompt the user to allow or deny the new connection.
  2. Deny the new connection until the user takes an action on it.
  3. Allow the rest of the traffic, allowing known connections and
     denying new ones, until the active pop-up is closed and we can
     prompt the user again.
- GUI disconnected.
  Result:
  1. Apply the daemon's DefaultAction from the configuration file
     default-config.json.
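
The decision logic, as a minimal sketch (the names and the string-based
modelling are illustrative, not the daemon's API):

    package main

    import "fmt"

    // actionFor models the scenarios above for a new connection.
    func actionFor(guiConnected, popupBusy bool, defaultAction string) string {
        if !guiConnected {
            return defaultAction // GUI disconnected: apply the daemon's DefaultAction
        }
        if popupBusy {
            return "deny" // hold new connections until the active pop-up is answered
        }
        return "ask" // prompt the user; known connections keep flowing meanwhile
    }

    func main() {
        fmt.Println(actionFor(true, true, "allow"))   // deny
        fmt.Println(actionFor(false, false, "allow")) // allow
    }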
closes: #392
problem:
- after losing network connectivity between node<->server, the node
  didn't restore the connection. In reality, the connection with the
  server was not closed, but the notifications channel was closed due
  to inactivity after 20s.
  Set the inactivity timeouts to 20s on both the node and the server.
  The previous timeouts were 2h for the main connection and 20s for the
  streaming channels (notifications).
- get rid of the logic to determine whether the server is alive based
  on sending pings.
  Instead, use the connection events emitted when a node
  connects/disconnects (Subscribe).
  The Ping call is still used to send the statistics.
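
On the node (Go) side, such keepalive settings can be expressed with
grpc-go's keepalive package; the exact values and options used by the
daemon may differ (a sketch):

    package node

    import (
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/keepalive"
    )

    // dialServer pings the server after 20s of inactivity, so a dead link is
    // detected instead of the streams silently timing out.
    func dialServer(addr string) (*grpc.ClientConn, error) {
        return grpc.Dial(addr,
            grpc.WithInsecure(),
            grpc.WithKeepaliveParams(keepalive.ClientParameters{
                Time:                20 * time.Second, // ping after 20s without activity
                Timeout:             20 * time.Second, // close the connection if unanswered
                PermitWithoutStream: true,             // ping even with no active streams
            }))
    }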
other:
- fixed an exception when updating the status of a node.
The server address and the log file were hardcoded into the
opensnitchd.service file, making them almost impossible to change.
Soon we'll be able to change them from the UI.
If a rule has the priority flag set and it matches a connection, no
other rules will be checked.
So if you name the rule 000-allow-xx and set the priority flag, it
will be the only rule checked whenever it matches a connection.
See #36 to learn more about this feature.
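
A sketch of that checking order (priority rules first, sorted by name;
the types here are illustrative):

    package rules

    import "sort"

    type Rule struct {
        Name     string
        Priority bool
        Action   string
        Matches  func(conn string) bool
    }

    // FindAction evaluates priority rules first (e.g. 000-allow-xx); the
    // first rule that matches wins and stops the evaluation.
    func FindAction(list []Rule, conn string) (string, bool) {
        sort.Slice(list, func(i, j int) bool {
            if list[i].Priority != list[j].Priority {
                return list[i].Priority // priority rules come first
            }
            return list[i].Name < list[j].Name
        })
        for _, r := range list {
            if r.Matches(conn) {
                return r.Action, true
            }
        }
        return "", false
    }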
When the daemon is stopped, we need to close the opened netfilter
resources. Otherwise we can end up in a situation where we leave
NFQUEUE queues open, which causes opensnitch to not run anymore until a
system restart or a manual intervention, because there's already a
NFQUEUE queue created with the same ID.
This is what was happening as a side effect of #41.
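
The shape of the fix, as a sketch (the queue type is a stand-in for the
real netfilter queue handles):

    package main

    import (
        "fmt"
        "os"
        "os/signal"
        "syscall"
    )

    type queue struct{ id int }

    func (q *queue) Close() { fmt.Printf("destroying NFQUEUE %d\n", q.id) }

    func main() {
        queues := []*queue{{id: 0}, {id: 1}}

        sig := make(chan os.Signal, 1)
        signal.Notify(sig, syscall.SIGINT, syscall.SIGTERM)
        <-sig

        // without this, the queues stay registered with the same IDs and the
        // daemon can't recreate them until a reboot or a manual cleanup.
        for _, q := range queues {
            q.Close()
        }
    }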
(1/2)
We start receiving notifications from the UI, which allow us to change
configurations and perform actions on the daemon.
The concept of a Node has also been introduced, which identifies every
daemon (client) connected to the UI (server).
These options have been added:
- Enable/Disable firewall interception (for all nodes).
- Change the daemons' (clients') configuration, globally or per node.
- Change the prompt dialog options.
We have fixed some bugs along the way:
- Close the audit client connection gracefully.
- Exclude our own connections from being intercepted.
- Better handling of the client connection status with the UI.
We have probably also introduced some other bugs (not listed here).
Added ProcMonitorMethod, which can be "proc", "ftrace" or "audit".
Parameters passed on the command line take precedence over the default
configuration.
Breaking changes: config options changed from xx_yy to XxYy.
Config example:
    {
        "DefaultAction": "allow",
        "DefaultDuration": "once",
        "InterceptUnknown": true,
        "ProcMonitorMethod": "audit"
    }
If we can't communicate with the server (UI), apply the configured
default action: for example, if the UI is doing too much work and
reaches the timeout, or if there's a programming error (a python
exception, for instance).
If the file /etc/opensnitchd/default-config.json exists, read it and
apply its options to the default rule when there's no client connected.
If it doesn't exist, just apply the default rule: allow connections
once.
Config example: {"default_action": "deny", "default_duration": "once"}