New feature to collect and display bytes sent/received per process.
- It only works with the 'ebpf' monitor method.
- The information is collected in kernel space and sent to the daemon:
  - when the connection socket is closed.
  - every 2s on large transfers; in this case the bytes are
    accumulated.
The daemon sends the events to the server (GUI), where the information
is added to the DB.
The information is displayed in the GUI:
- on the statusbar in real-time (based on the configured refresh interval).
- on the Applications tab.
By right-clicking on the Applications tab headers, the user can reset
the Tx/Rx stats, and toggle grouping bytes per unit (default) or not.
Finally, the Rx/Tx stats are deleted based on the preferences options.
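As a rough, hypothetical sketch (not the daemon's actual types) of how the user-space side could accumulate these per-PID counters before they reach the DB:

// Hypothetical aggregation of per-connection byte counters received
// from kernel space (either on socket close or every ~2s).
package main

import (
	"fmt"
	"sync"
)

// TxRxEvent is an illustrative event as it could arrive from eBPF.
type TxRxEvent struct {
	Pid     uint32
	TxBytes uint64
	RxBytes uint64
	Closed  bool // true when the socket was closed
}

type trafficStats struct {
	mu     sync.Mutex
	perPid map[uint32][2]uint64 // pid -> {tx, rx}
}

func (t *trafficStats) add(ev TxRxEvent) {
	t.mu.Lock()
	defer t.mu.Unlock()
	c := t.perPid[ev.Pid]
	// Partial events on large transfers are already accumulated in
	// kernel space, so here we simply add them up per PID.
	c[0] += ev.TxBytes
	c[1] += ev.RxBytes
	t.perPid[ev.Pid] = c
}

func main() {
	stats := &trafficStats{perPid: make(map[uint32][2]uint64)}
	stats.add(TxRxEvent{Pid: 1234, TxBytes: 1500, RxBytes: 64000})
	stats.add(TxRxEvent{Pid: 1234, TxBytes: 300, RxBytes: 12000, Closed: true})
	fmt.Println(stats.perPid[1234]) // [1800 76000]
}
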
We were not reacting to common exit signals, only to the kill/interrupt
signals, so the DNS uprobes were never properly removed. Each uprobe
has the PID of the daemon in its identifier, so in theory there
shouldn't be conflicts, but it's better to clean up our probes on exit.
Previous to this commit, with the daemon running
(and a lot of starts/stops):
~ # cat /sys/kernel/debug/tracing/uprobe_events |wc -l
367
after stopping the daemon:
~ # cat /sys/kernel/debug/tracing/uprobe_events |wc -l
364
~ # > /sys/kernel/debug/tracing/uprobe_events
~ # cat /sys/kernel/debug/tracing/uprobe_events |wc -l
0
~ # cp opensnitchd-new /usr/bin/opensnitchd ; service opensnitchd start
~ # cat /sys/kernel/debug/tracing/uprobe_events |wc -l
3
~ # service opensnitchd stop
~ # cat /sys/kernel/debug/tracing/uprobe_events |wc -l
0
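A minimal sketch of the kind of signal handling this implies; detachUprobes is a made-up placeholder for the real cleanup routine:

package main

import (
	"log"
	"os"
	"os/signal"
	"syscall"
)

// detachUprobes stands in for whatever routine removes our DNS uprobes
// from /sys/kernel/debug/tracing/uprobe_events.
func detachUprobes() { log.Println("uprobes removed") }

func main() {
	sigs := make(chan os.Signal, 1)
	// React to the common exit signals, not only to interrupt, so we
	// always get the chance to clean up our probes on exit.
	signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM, syscall.SIGQUIT, syscall.SIGHUP)
	<-sigs
	detachUprobes()
}
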
Added the path to the libc as well as the calculated offset for the
uprobe.
Don't return on the first error found when loading a uprobe; instead,
try to load all the uprobes and only return an error if none of them
could be loaded.
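Roughly, the loading strategy looks like this (attachUprobe and the probe names are illustrative, not the daemon's actual API):

package main

import (
	"errors"
	"fmt"
)

// attachUprobe is a placeholder for the real attach logic.
func attachUprobe(name string) error {
	if name == "bad_probe" {
		return errors.New("symbol not found")
	}
	return nil
}

// loadUprobes tries every probe instead of bailing out on the first
// failure, and only errors out when none of them could be attached.
func loadUprobes(names []string) error {
	loaded := 0
	for _, n := range names {
		if err := attachUprobe(n); err != nil {
			fmt.Printf("warning: could not attach %s: %v\n", n, err)
			continue
		}
		loaded++
	}
	if loaded == 0 {
		return errors.New("no uprobes could be loaded")
	}
	return nil
}

func main() {
	fmt.Println(loadUprobes([]string{"getaddrinfo", "bad_probe", "gethostbyname"}))
}
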
There's a long-running task that monitors established connections every
~2s.
When a connection is not found via ebpf or proc, sometimes it's found
there, so we can use the inode to search for the process.
However, on some systems the netlink call to dump the sockets may fail
continuously, wasting resources. It'll also fail if you block connections
to port 0 (a common case for ICMP packets).
So if there are too many errors dumping the sockets, stop this task in
these cases.
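A sketch of that back-off logic, assuming a hypothetical dumpSockets helper and an arbitrary error threshold:

package main

import (
	"errors"
	"log"
	"time"
)

const maxDumpErrors = 50 // illustrative threshold

// dumpSockets stands in for the netlink call that dumps established sockets.
func dumpSockets() error { return errors.New("netlink: operation not permitted") }

func monitorEstablished(stop chan struct{}) {
	errCount := 0
	ticker := time.NewTicker(2 * time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-stop:
			return
		case <-ticker.C:
			if err := dumpSockets(); err != nil {
				errCount++
				if errCount >= maxDumpErrors {
					log.Println("too many errors dumping sockets, stopping task")
					return
				}
				continue
			}
			errCount = 0
		}
	}
}

func main() {
	stop := make(chan struct{})
	go monitorEstablished(stop)
	time.Sleep(5 * time.Second)
	close(stop)
}
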
- When discovering the hierarchy of a process, reuse components of
  the tree if they're already in the cache, to improve speed and reduce
  memory allocations.
- When building the tree of a process, rebuild the tree if the first
  component doesn't have PID 1; otherwise reuse the tree (see the
  sketch below).
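A minimal sketch of the reuse check, with invented cache/node types:

package main

import "fmt"

// procEntry is an illustrative node of the parent hierarchy.
type procEntry struct {
	PID    int
	Parent *procEntry
}

// treeCache maps a PID to an already-built hierarchy.
var treeCache = map[int]*procEntry{}

// buildTree reuses the cached hierarchy unless its first component is
// not PID 1 (i.e. the cached tree is incomplete), in which case it's rebuilt.
func buildTree(pid int) *procEntry {
	if t, ok := treeCache[pid]; ok {
		root := t
		for root.Parent != nil {
			root = root.Parent
		}
		if root.PID == 1 {
			return t // complete tree, reuse it
		}
	}
	// Rebuild by walking the PPid entries of /proc/<pid>/status (omitted here).
	t := &procEntry{PID: pid, Parent: &procEntry{PID: 1}}
	treeCache[pid] = t
	return t
}

func main() {
	fmt.Println(buildTree(4321).Parent.PID) // 1
}
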
Simplify the cache of connections by storing only the PID of a process,
instead of the Process object.
We can obtain the Process object from the cache of processes by PID.
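Schematically (types and keys invented for the example):

package main

import "fmt"

type Process struct {
	PID  int
	Path string
}

// connCache now maps a connection key to just the PID...
var connCache = map[string]int{}

// ...and the full Process object lives in a single cache indexed by PID.
var procCache = map[int]*Process{}

func processForConn(key string) *Process {
	if pid, ok := connCache[key]; ok {
		return procCache[pid]
	}
	return nil
}

func main() {
	procCache[4321] = &Process{PID: 4321, Path: "/usr/bin/curl"}
	connCache["192.168.1.10:53123-10.0.0.2:443"] = 4321
	fmt.Println(processForConn("192.168.1.10:53123-10.0.0.2:443").Path)
}
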
Added a config option to set the garbage collection target percentage,
i.e. how aggressively the garbage collector runs.
For example:
"Internal": {
"GCPercent": 75
},
If this option is not specified in the config file, or the value
is 0, then the GC percentage is not configured.
More info:
https://pkg.go.dev/runtime/debug#SetGCPercent
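On the daemon side this option presumably boils down to something like the following (the config struct is illustrative):

package main

import (
	"fmt"
	"runtime/debug"
)

type internalConfig struct {
	GCPercent int `json:"GCPercent"`
}

func applyInternalConfig(cfg internalConfig) {
	// Only touch the runtime if the option is present and non-zero;
	// otherwise keep Go's default (GOGC=100).
	if cfg.GCPercent > 0 {
		old := debug.SetGCPercent(cfg.GCPercent)
		fmt.Printf("GC percent changed: %d -> %d\n", old, cfg.GCPercent)
	}
}

func main() {
	applyInternalConfig(internalConfig{GCPercent: 75})
}
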
By default, load the system fw config file from
/etc/opensnitchd/system-fw.json.
There are two options to specify the file to load:
- via the cli option -fw-config-file
- writing it in the default-config.json file:
  "FwOptions": { "ConfigPath": "..." }
If both options are empty, then the default path is used.
FIXME:
When the cli option is used to load the fw configuration, and the main
preferences are saved, the fw is reloaded but the path to the fw config
is lost.
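The resolution order described above could be sketched like this (function and parameter names are made up, and it assumes the cli flag wins over the config file option):

package main

import "fmt"

const defaultFwConfigFile = "/etc/opensnitchd/system-fw.json"

// resolveFwConfigPath applies the precedence described above:
// cli option first, then FwOptions.ConfigPath, then the default path.
func resolveFwConfigPath(cliPath, configPath string) string {
	if cliPath != "" {
		return cliPath
	}
	if configPath != "" {
		return configPath
	}
	return defaultFwConfigFile
}

func main() {
	fmt.Println(resolveFwConfigPath("", ""))
}
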
In this test we assumed that there would always be read stats for our
own process (/proc/self), but in restricted environments that might not
always be the case. Anyway, a value of 0 is not an error in itself.
Closes #1075
Now it's possible to configure the eBPF modules path from the
default-config.json file:
"Ebpf": {
"ModulesPath": "..."
}
If the option is not provided, or if it's empty, we'll keep loading from
the default directories:
- /usr/local/lib/opensnitchd/ebpf
- /usr/lib/opensnitchd/ebpf
- /etc/opensnitchd/ebpf (deprecated, will be removed in the future).
Closes #928
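A sketch of the lookup order, with a hypothetical findModule helper and module name:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

var defaultModulesDirs = []string{
	"/usr/local/lib/opensnitchd/ebpf",
	"/usr/lib/opensnitchd/ebpf",
	"/etc/opensnitchd/ebpf", // deprecated
}

// findModule returns the first directory containing the given eBPF
// object, honouring the configured ModulesPath when it's not empty.
func findModule(modulesPath, module string) (string, error) {
	dirs := defaultModulesDirs
	if modulesPath != "" {
		dirs = []string{modulesPath}
	}
	for _, d := range dirs {
		p := filepath.Join(d, module)
		if _, err := os.Stat(p); err == nil {
			return p, nil
		}
	}
	return "", fmt.Errorf("%s not found", module)
}

func main() {
	fmt.Println(findModule("", "opensnitch.o"))
}
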
- Allow configuring the system firewall configuration file path:
  * via the cli (-fw-config-file).
  * via the global configuration file.
- Allow configuring the fw rules check interval.
The system fw config file contains regular iptables/nftables rules.
Previously it was hardcoded to /etc/opensnitchd/system-fw.json.
The interval to check if the interception rules were added was also
hardcoded to 10 seconds. Now it's possible to configure it.
A value of "0s" disables the interval, while "" defaults to 10 seconds.
- Added cli option -config-file to specify an alternate path to the
config file.
- Allow configuring the rules path from the configuration file (the cli
  option takes precedence; see the sketch below).
- Default paths are now /etc/opensnitchd/rules and
  /etc/opensnitchd/default-config.json. Previously the default rules
  directory was "rules" (a relative path).
Closes #449
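A minimal sketch of these flags and the precedence rule (flag and field names are illustrative of the described behaviour, not necessarily the daemon's exact ones):

package main

import (
	"flag"
	"fmt"
)

type daemonConfig struct {
	RulesPath string `json:"RulesPath"` // illustrative field name
}

func main() {
	configFile := flag.String("config-file", "/etc/opensnitchd/default-config.json", "path to the daemon configuration file")
	rulesPath := flag.String("rules-path", "", "path to the rules directory (overrides the config file)")
	flag.Parse()

	// In reality the config would be loaded from *configFile.
	cfg := daemonConfig{RulesPath: "/etc/opensnitchd/rules"}

	// The cli option takes precedence over the configuration file.
	effectiveRules := cfg.RulesPath
	if *rulesPath != "" {
		effectiveRules = *rulesPath
	}
	fmt.Println(*configFile, effectiveRules)
}
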
- Fixed several leaks.
- Cache of events reorganized and improved.
* items are added faster.
* proc details are rebuilt if needed (checksums, proc tree, etc.).
* the proc's tree is reused if we've got the parent in the cache.
rel: #413
Sometimes we receive /proc/self/exe as the path of the process (Electron
apps).
Since a couple of systemd versions ago, some processes spawned by
systemd are reported as /proc/self/fd/<number>.
In these cases, reading the symbolic link /proc/<pid>/exe points to the
file on disk.
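A small sketch of that resolution (resolveExePath is a made-up helper):

package main

import (
	"fmt"
	"os"
	"strings"
)

// resolveExePath follows /proc/<pid>/exe when the reported path is one
// of the special /proc/self/* forms, so we end up with the file on disk.
func resolveExePath(pid int, reported string) string {
	if reported == "/proc/self/exe" || strings.HasPrefix(reported, "/proc/self/fd/") {
		if real, err := os.Readlink(fmt.Sprintf("/proc/%d/exe", pid)); err == nil {
			return real
		}
	}
	return reported
}

func main() {
	fmt.Println(resolveExePath(os.Getpid(), "/proc/self/exe"))
}
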
In b93051026e we disabled sending/parsing
list operators as JSON strings. Instead, now they're sent/parsed as a
protobuf Rule, and saved to disk as a JSON array, which eases the task of
manually creating new rules if needed.
This change was missing in the previous commit.
Previously when creating a new rule we followed these steps:
- Create a new protobuf Rule object from the ruleseditor or the
pop-ups.
- If the rule contained more than one operator, we converted the
list of operators to a JSON string.
- This JSON string was sent back to the daemon, and saved to the
DB.
- The list of operators was never expanded on the GUI, i.e., the
  operators were not saved as a list of protobuf Operator objects.
- Once received in the daemon, the JSON string was parsed and
  converted to a list of protobuf Operator objects.
Both the JSON string and the list of protobuf Operator objects were
saved to disk, but the JSON string was ignored when loading the
rules.
Saving the list of operators as a JSON string was a problem if you
wanted to create or modify rules without the GUI.
Now when creating or modifying rules from the GUI, the list of operators
is no longer converted to a JSON string. Instead, the list is sent to the
daemon as a list of protobuf Operators, and saved as JSON objects.
Notes:
- The JSON string is no longer saved to disk as part of the rules.
- The list of operators is still saved as a JSON string to the DB.
- About rules that are not enabled:
  Previously, not-enabled rules only had the list of operators as a JSON
  string, with the field list ([]) empty.
  Now the list of operators is saved as JSON objects, but if the rule
  is not enabled, it won't be parsed/loaded.
Closes #1047
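To illustrate the on-disk shape, a minimal sketch that unmarshals a rule whose operators are stored as a plain JSON array (the structs are simplified stand-ins, not the full protobuf Rule/Operator definitions):

package main

import (
	"encoding/json"
	"fmt"
)

// Simplified stand-ins for the protobuf Operator/Rule messages.
type Operator struct {
	Type    string     `json:"type"`
	Operand string     `json:"operand"`
	Data    string     `json:"data"`
	List    []Operator `json:"list"`
}

type Rule struct {
	Name     string   `json:"name"`
	Enabled  bool     `json:"enabled"`
	Action   string   `json:"action"`
	Operator Operator `json:"operator"`
}

func main() {
	// With the operators stored as a JSON array it's easy to write
	// or edit rules by hand, without the GUI.
	raw := `{
	  "name": "allow-firefox-dns",
	  "enabled": true,
	  "action": "allow",
	  "operator": {
	    "type": "list",
	    "operand": "list",
	    "list": [
	      {"type": "simple", "operand": "process.path", "data": "/usr/bin/firefox"},
	      {"type": "simple", "operand": "dest.port", "data": "53"}
	    ]
	  }
	}`
	var r Rule
	if err := json.Unmarshal([]byte(raw), &r); err != nil {
		panic(err)
	}
	fmt.Println(r.Name, len(r.Operator.List))
}
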
- Obtain the process's parent hierarchy.
- Display the hierarchy on the pop-ups and the process dialog.
- [pop-ups] Added a Detailed view with all the metadata of the
process.
- [cache-events] Improved the cache of processes.
- [ruleseditor] Fixed enabling md5 checksum widget.
Related: #413, #406
Now you can create rules to filter processes by checksum. Only md5 is
available at the moment.
There's a global configuration option that you can use to enable or
disable this feature, from the config file or from the Preferences
dialog.
As part of this feature there have been more changes:
- New proc monitor method (PROCESS CONNECTOR) that listens for
  exec/exit events from the kernel.
  This feature depends on the CONFIG_PROC_EVENTS kernel option.
- Only one cache of active processes, shared by the ebpf and proc
  monitor methods.
More info and details: #413.
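To illustrate the checksum matching mentioned above, a minimal sketch that computes the md5 a rule would be compared against (checksumOfPid is a made-up helper):

package main

import (
	"crypto/md5"
	"fmt"
	"io"
	"os"
)

// checksumOfPid computes the md5 of the binary behind /proc/<pid>/exe,
// which is the value a checksum rule would be matched against.
func checksumOfPid(pid int) (string, error) {
	f, err := os.Open(fmt.Sprintf("/proc/%d/exe", pid))
	if err != nil {
		return "", err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return fmt.Sprintf("%x", h.Sum(nil)), nil
}

func main() {
	sum, err := checksumOfPid(os.Getpid())
	fmt.Println(sum, err)
}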