Added option to configure multiple nfqueues.
Post with detailed information about the performance:
https://github.com/evilsocket/opensnitch/discussions/1104
After using -queues 1:6, you need to configure the rules manually:
(for TCP)
nft insert rule inet mangle output tcp flags syn / fin,syn,rst,ack queue to numgen inc mod 6
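For illustration, a minimal sketch of consuming several queues from Go.
It assumes the third-party github.com/florianl/go-nfqueue bindings, which
is not necessarily what the daemon uses internally:

  package main

  import (
      "context"
      "log"

      nfqueue "github.com/florianl/go-nfqueue"
  )

  func main() {
      ctx := context.Background()
      // One handler per queue configured with -queues 1:6.
      for id := uint16(1); id <= 6; id++ {
          q, err := nfqueue.Open(&nfqueue.Config{
              NfQueue:      id,
              MaxPacketLen: 0xffff,
              MaxQueueLen:  0xff,
              Copymode:     nfqueue.NfQnlCopyPacket,
          })
          if err != nil {
              log.Fatalf("queue %d: %v", id, err)
          }
          defer q.Close()
          // Accept everything; a real handler would hold the packet
          // until a verdict is decided.
          err = q.RegisterWithErrorFunc(ctx,
              func(a nfqueue.Attribute) int {
                  q.SetVerdict(*a.PacketID, nfqueue.NfAccept)
                  return 0
              },
              func(e error) int { return 0 })
          if err != nil {
              log.Fatalf("queue %d: %v", id, err)
          }
      }
      select {} // block; the handlers run in the background
  }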
TODO:
- Configure queues in the fw automatically based on the queues defined.
- Investigate if we need to use runtime.LockOSThread() in NewQueue().
- Allow running multiple instances of the daemon:
* One daemon acts as the main daemon, connected to the server (UI) and
managing the rules and notifications.
* The other daemons only intercept and apply verdicts on packets, with
the rules loaded from a central directory (/etc/opensnitchd/rules).
FIXME:
- There's a deadlock when repeating the packets while a connection is
waiting for approval.
- Investigate the high mem consumption under heavy load.
If the pop-up's target is to filter by cmdline, but the typed/launched
command is not an absolute path or it starts with /proc, also filter by
the absolute path to the binary.
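A sketch of the check involved, with a hypothetical helper name:

  import (
      "path/filepath"
      "strings"
  )

  // isReliableCmdline (hypothetical) tells whether the typed command is
  // a stable pop-up target: an absolute path that isn't under /proc.
  func isReliableCmdline(cmd string) bool {
      return filepath.IsAbs(cmd) && !strings.HasPrefix(cmd, "/proc")
  }

When it returns false, the rule should also match the resolved path to
the binary.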
We were not handling configuration upgrades properly on rpm-based
systems.
Now local changes to default-config.json and system-fw.json are kept,
and if the distributed files change in the future, new files will be
created with the .rpmnew extension.
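This is standard RPM %config(noreplace) behavior; a sketch of the
relevant spec file lines, assuming the stock install paths:

  %config(noreplace) /etc/opensnitchd/default-config.json
  %config(noreplace) /etc/opensnitchd/system-fw.json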
We were not reacting to common exit signals, only to kill/interrupt
signals, so the DNS uprobes were never properly removed. Each uprobe
has the PID of the daemon in the identifier, so in theory, there
shouldn't be conflicts, but it's better to clean up our probes on exit.
Prior to this commit, with the daemon running
(and lots of starts/stops):
~ # cat /sys/kernel/debug/tracing/uprobe_events |wc -l
367
after stopping the daemon:
~ # cat /sys/kernel/debug/tracing/uprobe_events |wc -l
364
~ # > /sys/kernel/debug/tracing/uprobe_events
~ # cat /sys/kernel/debug/tracing/uprobe_events |wc -l
0
~ # cp opensnitchd-new /usr/bin/opensnitchd ; service opensnitchd start
~ # cat /sys/kernel/debug/tracing/uprobe_events |wc -l
3
~ # service opensnitchd stop
~ # cat /sys/kernel/debug/tracing/uprobe_events |wc -l
0
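A minimal Go sketch of the broader signal handling, with a hypothetical
cleanup function:

  package main

  import (
      "os"
      "os/signal"
      "syscall"
  )

  func cleanupProbes() {
      // hypothetical: remove our entries from
      // /sys/kernel/debug/tracing/uprobe_events, close maps, etc.
  }

  func main() {
      sig := make(chan os.Signal, 1)
      // React to the common exit signals, not just kill/interrupt.
      signal.Notify(sig, syscall.SIGINT, syscall.SIGTERM, syscall.SIGQUIT)
      <-sig
      cleanupProbes()
  }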
In 68c2c8ae1a we excluded failed execve*
calls from being delivered to userspace, in order to get the binary that
was executed and avoid errors/confusion.
But on aarch64, it seems that we fail to save the exec event to a map,
so the event is never delivered to userspace.
So for the time being, send the exec events as soon as they arrive on
aarch64, without checking if the call failed.
In the previous commit we just disabled DNS uprobes on armhf/i386 to
avoid loading errors. A better fix is to initialize the structs used.
On armhf it still fails after loading, when attaching to the uprobes
(offsets?), and on i386 it doesn't seem to send anything to userspace
(more analysis needed).
- Increased the number of IPs associated with a domain that are
delivered to userspace (getfedora.org returns 30 IPv4+IPv6 addresses).
- Fixed getting the aliases of a domain when using gethostbyname().
Added the path to the libc as well as the calculated offset for the
uprobe.
Don't return on the first error found while loading a uprobe; instead,
try all the uprobes and only return an error if none were loaded.
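A sketch of the changed logic, with hypothetical attach/probe names:

  import (
      "errors"
      "log"
  )

  func attach(probe string) error { return nil } // hypothetical loader

  // attachAll tries every uprobe and only fails if none loaded.
  func attachAll(probes []string) error {
      loaded := 0
      for _, p := range probes {
          if err := attach(p); err != nil {
              log.Printf("skipping uprobe %s: %v", p, err)
              continue
          }
          loaded++
      }
      if loaded == 0 {
          return errors.New("no uprobes could be loaded")
      }
      return nil
  }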
The opensnitch-dns module was not loading on i386/arm architectures.
With the following changes it loads, and some uprobes are attached.
For-loop unrolling still doesn't work on i386/armhf, though (help
needed).
And on armhf the perf_output channel fails to load for some uprobes.
If the path of a process starts with /tmp/.mount_*, which is the common
path for AppImages, use it as the default target on the pop-ups.
Previously it was only added to the list of targets, but preselecting it
will help users create rules for AppImages.
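The pop-up logic lives in the Python GUI, but the check itself is
simple; a Go-flavored sketch for illustration:

  import "strings"

  // defaultTarget (hypothetical) preselects the binary path for
  // AppImages, which mount themselves under /tmp/.mount_*.
  func defaultTarget(procPath string) string {
      if strings.HasPrefix(procPath, "/tmp/.mount_") {
          return procPath
      }
      return "" // fall back to the regular target list
  }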
There's a long-running task that monitors established connections every
~2s.
When a connection is not found via eBPF or proc, sometimes it's found
there, so we can use the inode to search for the process.
However, on some systems the netlink call to dump the sockets may fail
continuously, wasting resources. It'll also fail if you block connections
to port 0 (a common case for ICMP packets).
So if there are too many errors dumping the sockets, stop this task in
these cases.
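A sketch of the stop condition, with hypothetical names and threshold:

  import "time"

  func dumpSockets() error { return nil } // hypothetical netlink dump

  // monitorSockets stops itself after too many consecutive failures.
  func monitorSockets(stop <-chan struct{}) {
      const maxErrors = 50 // hypothetical threshold
      failures := 0
      tick := time.NewTicker(2 * time.Second)
      defer tick.Stop()
      for {
          select {
          case <-stop:
              return
          case <-tick.C:
              if err := dumpSockets(); err != nil {
                  failures++
                  if failures >= maxErrors {
                      return // give up for this session
                  }
                  continue
              }
              failures = 0
          }
      }
  }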
- When discovering the hierarchy of a process, reuse components of
the tree if they're already in the cache, to improve speed and reduce
mem allocs.
- When building the tree of a process, rebuild the tree if the first
component doesn't have pid 1. Otherwise reuse the tree.
Simplify the cache of connections by storing only the PID of a process,
instead of the Process object.
We can obtain the Process object from the cache of processes by PID.
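A sketch of the simplification; Process and processByPID stand in for
the daemon's real cache types:

  // Before: the connections cache stored *Process values directly.
  // After: it stores just the PID, and the Process is looked up in the
  // existing by-PID cache, avoiding duplicated objects.
  var connCache = map[string]int{} // conn key -> PID

  func processForConn(key string) *Process {
      pid, ok := connCache[key]
      if !ok {
          return nil
      }
      return processByPID(pid) // hypothetical lookup in the PID cache
  }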
Added config option to set how often the garbage collector runs.
For example:
"Internal": {
"GCPercent": 75
},
If this option is not specified in the config file, or the value
is 0, then the GC percentage is not configured.
More info:
https://pkg.go.dev/runtime/debug#SetGCPercent
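On the Go side this maps to runtime/debug.SetGCPercent; a minimal
sketch:

  import "runtime/debug"

  // applyGCPercent only touches the GC when the option is set
  // to a non-zero value.
  func applyGCPercent(pct int) {
      if pct != 0 {
          debug.SetGCPercent(pct)
      }
  }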
We track new process executions by intercepting the enter and exit of
the exec functions, but sometimes the exit hook is not called, so the
corresponding entry was not removed from the map.
In this situation the map becomes full and accepts no new entries.
Now the entry is deleted from the map once the process exits, if it
still exists in the map.
By default, load the system fw config file from
/etc/opensnitchd/system-fw.json.
There are two options to specify the file to load:
- via the cli option -fw-config-file
- by writing it in the default-config.json file:
"FwOptions": { "ConfigPath": "..." }
If both options are empty, then the default one is used.
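A sketch of that precedence:

  // resolveFwConfigPath: cli flag first, then FwOptions.ConfigPath,
  // then the built-in default (hypothetical helper).
  func resolveFwConfigPath(cliPath, cfgPath string) string {
      if cliPath != "" {
          return cliPath
      }
      if cfgPath != "" {
          return cfgPath
      }
      return "/etc/opensnitchd/system-fw.json"
  }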
FIXME:
When the cli option is used to load the fw configuration, and the main
preferences are saved, the fw is reloaded but the path to the fw config
is lost.
In this test we assumed that stats read for our own process
(/proc/self) would always be present, but in restricted environments
that might not always be the case. Anyway, a value of 0 is not an error
in itself.
Closes #1075
We were not deleting DNS entries from the hash map, so when it reached
the maximum capacity (12k entries), we couldn't allocate new entries,
resulting in events not being sent to userspace.
New option to save and display alerts/events received from the daemon,
like system fw errors or eBPF module errors.
Until now we only displayed a desktop message, making it difficult to
review the message in detail or take other actions.
Now it's possible to configure the eBPF modules path from the
default-config.json file:
"Ebpf": {
    "ModulesPath": "..."
}
If the option is not provided, or if it's empty, we'll keep loading from
the default directories:
- /usr/local/lib/opensnitchd/ebpf
- /usr/lib/opensnitchd/ebpf
- /etc/opensnitchd/ebpf (deprecated, will be removed in the future).
Closes #928
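A sketch of the lookup order described above:

  // moduleSearchDirs honors Ebpf.ModulesPath when set, otherwise
  // falls back to the default directories, in order.
  func moduleSearchDirs(configured string) []string {
      if configured != "" {
          return []string{configured}
      }
      return []string{
          "/usr/local/lib/opensnitchd/ebpf",
          "/usr/lib/opensnitchd/ebpf",
          "/etc/opensnitchd/ebpf", // deprecated
      }
  }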
- Allow configuring the system firewall configuration file path:
* via cli (-fw-config-file).
* via the global configuration file.
- Allow configuring the fw rules check interval.
The system fw config file contains regular iptables/nftables rules.
Previously it was hardcoded to /etc/opensnitchd/system-fw.json
The interval to check if the interception rules were added was also
hardcoded to 10 seconds. Now it's possible to configure it.
A value of "0s" disables the interval, while "" defaults to 10 seconds.
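A sketch of parsing that interval:

  import "time"

  // fwCheckInterval: "" means the 10s default, "0s" disables the
  // periodic check (hypothetical helper).
  func fwCheckInterval(s string) (time.Duration, bool) {
      if s == "" {
          return 10 * time.Second, true
      }
      d, err := time.ParseDuration(s)
      if err != nil {
          return 10 * time.Second, true // fall back to the default
      }
      if d == 0 {
          return 0, false // disabled
      }
      return d, true
  }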
Up until now, the GUI was refreshed if:
- it was not minimized or hidden.
- there were new events (events received from the daemon were filtered
out if they were duplicates).
But still, there were scenarios where refreshing the views every second
(more or less) was too much, like when monitoring multiple machines.
Now it's possible to configure the views' refresh interval, regardless
of what the daemon sends.
Asked here: #1073
- Added cli option -config-file to specify an alternate path to the
config file.
- Allow configuring the rules path from the configuration file (the cli
option takes precedence).
- Default paths are now /etc/opensnitchd/rules and
/etc/opensnitchd/default-config.json. Previously the default rules
directory was "rules" (a relative path).
Closes #449
- Fixed several leaks.
- Cache of events reorganized and improved.
* items are added faster.
* proc details are rebuilt if needed (checksums, proc tree, etc.)
* the proc's tree is reused if we've got the parent in the cache.
rel: #413