- daemon: Allow dumping XDP sockets from the kernel.
- ui: Added options to filter by RAW protocol and AF_XDP family.
- Bumped vishvananda/netlink version to v1.3.0.
- Updated go.mod and go.sum.
By default, when adding the interception rules we killed all
existing connections, to force them to go through the netfilter queue.
However, in some environments this is not acceptable, so now it's configurable.
Besides, we were only doing this for nftables; now it also works for
iptables.
Now you can create rules to filter processes by checksum. Only md5 is
available at the moment.
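For illustration, a minimal sketch of computing such a checksum for a running process, assuming the hash is taken over the binary behind /proc/<pid>/exe (the exact source the daemon hashes may differ):
```
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// md5OfProcess returns the md5 hex digest of the binary a PID is running,
// by hashing the file behind /proc/<pid>/exe.
func md5OfProcess(pid int) (string, error) {
	f, err := os.Open(fmt.Sprintf("/proc/%d/exe", pid))
	if err != nil {
		return "", err
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	sum, err := md5OfProcess(os.Getpid())
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("md5:", sum)
}
```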
There's a global configuration option that you can use to enable or
disable this feature, from the config file or from the Preferences
dialog.
As part of this feature there have been more changes:
- New proc monitor method (PROCESS CONNECTOR) that listens for
exec/exit events from the kernel (a minimal subscription sketch is
included at the end of this entry).
This feature depends on the CONFIG_PROC_EVENTS kernel option.
- Only one cache of active processes for ebpf and proc monitor
methods.
More info and details: #413.
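For reference, a rough sketch of subscribing to the process connector with golang.org/x/sys/unix; the constants and byte offsets follow linux/connector.h and linux/cn_proc.h, event parsing is simplified, and a little-endian host is assumed (this is not the daemon's actual implementation):
```
package main

import (
	"encoding/binary"
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

const (
	cnIdxProc         = 0x1        // CN_IDX_PROC
	cnValProc         = 0x1        // CN_VAL_PROC
	procCnMcastListen = 1          // PROC_CN_MCAST_LISTEN
	procEventExec     = 0x00000002 // PROC_EVENT_EXEC
	procEventExit     = 0x80000000 // PROC_EVENT_EXIT
)

func main() {
	fd, err := unix.Socket(unix.AF_NETLINK, unix.SOCK_DGRAM, unix.NETLINK_CONNECTOR)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer unix.Close(fd)

	// Join the proc-events multicast group.
	local := &unix.SockaddrNetlink{Family: unix.AF_NETLINK, Groups: cnIdxProc, Pid: uint32(os.Getpid())}
	if err := unix.Bind(fd, local); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}

	// nlmsghdr(16) + cn_msg(20) + op(4): ask the kernel to start multicasting events.
	msg := make([]byte, 40)
	binary.LittleEndian.PutUint32(msg[0:], 40)                   // nlmsg_len
	binary.LittleEndian.PutUint16(msg[4:], unix.NLMSG_DONE)      // nlmsg_type
	binary.LittleEndian.PutUint32(msg[12:], uint32(os.Getpid())) // nlmsg_pid
	binary.LittleEndian.PutUint32(msg[16:], cnIdxProc)           // cn_msg.id.idx
	binary.LittleEndian.PutUint32(msg[20:], cnValProc)           // cn_msg.id.val
	binary.LittleEndian.PutUint16(msg[32:], 4)                   // cn_msg.len (size of op)
	binary.LittleEndian.PutUint32(msg[36:], procCnMcastListen)   // op
	kernel := &unix.SockaddrNetlink{Family: unix.AF_NETLINK}
	if err := unix.Sendto(fd, msg, 0, kernel); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}

	buf := make([]byte, 4096)
	for {
		n, _, err := unix.Recvfrom(fd, buf, 0)
		if err != nil {
			return
		}
		if n < 60 { // too short for an exec/exit event
			continue
		}
		ev := buf[16+20:]                          // skip nlmsghdr + cn_msg => struct proc_event
		what := binary.LittleEndian.Uint32(ev[0:]) // proc_event.what
		pid := binary.LittleEndian.Uint32(ev[16:]) // first pid of the event union
		switch what {
		case procEventExec:
			fmt.Println("exec:", pid)
		case procEventExit:
			fmt.Println("exit:", pid)
		}
	}
}
```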
When we start to intercept connections, we flush the conntrack
table to force already established connections to reconnect, so we
can intercept them and let the user choose whether to allow or deny them.
Since we no longer use conntrack states to intercept TCP connections, we
now close existing connections, leaving it to the applications to
reestablish them.
Local connections are excluded, because closing them may cause problems
with some local servers.
Both options interfere with established connections, so you may
experience occasional network interruptions when enabling
interception for the first time.
Discussion: #995
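For reference, a minimal sketch of the conntrack-flush option using github.com/vishvananda/netlink, assuming its ConntrackTableFlush helper is available in the version in use (the daemon's actual code path may differ):
```
package main

import (
	"fmt"
	"os"

	"github.com/vishvananda/netlink"
)

func main() {
	// Flush the main conntrack table so established flows lose their
	// conntrack state and have to be re-evaluated (requires root).
	if err := netlink.ConntrackTableFlush(netlink.ConntrackTable); err != nil {
		fmt.Fprintln(os.Stderr, "conntrack flush:", err)
		os.Exit(1)
	}
	fmt.Println("conntrack table flushed")
}
```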
For the eBPF monitoring method, we listed and stored local addresses
every second, so that we could later check whether the source IP of an
outbound connection was local, because sometimes we received
outbound connections like:
443:1.1.1.1 -> 192.168.1.123:12345
This may have already been solved by e090833, so maybe
we no longer need this code.
- Now we subscribe to local address events, to receive add/remove
events asynchronously, without having to list local addresses
every second, alleviating CPU usage (see the sketch after this list).
- Fixed creating the context object used to cancel subroutines. It was not
working properly when switching between proc monitor methods.
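A minimal sketch of the subscription, using AddrSubscribe from github.com/vishvananda/netlink (not the daemon's exact code):
```
package main

import (
	"fmt"
	"os"

	"github.com/vishvananda/netlink"
)

func main() {
	updates := make(chan netlink.AddrUpdate)
	done := make(chan struct{})
	defer close(done)

	// Receive RTM_NEWADDR/RTM_DELADDR events asynchronously instead of
	// listing local addresses every second.
	if err := netlink.AddrSubscribe(updates, done); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}

	for up := range updates {
		if up.NewAddr {
			fmt.Println("address added:", up.LinkAddress.IP)
		} else {
			fmt.Println("address removed:", up.LinkAddress.IP)
		}
	}
}
```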
Under certain conditions, when we dumped inodes via netlink, we were
linking network connections to the wrong applications.
- To improve this situation:
1) Use netfilter's UID by default:
Sometimes the UID reported via netlink was different from the one
reported by libnetfilter. The libnetfilter UID is always correct.
If you had a rule that filtered by UID, this problem could cause you to
be prompted again to allow the connection.
2) Use the netlink entry that exactly matches the properties of an
outgoing connection:
There are some in-kernel sockets that don't match outgoing
connections 1:1 (daemon/netlink/socket.go#L22).
In order to identify the applications that initiate these network
connections we use a workaround. But under certain conditions
(source port reuse), we were associating connections with the wrong
applications.
So, to avoid this problem, if there's a 1:1 match we use that
netlink entry; if not, we fall back to the workaround (see the
sketch after this list).
- misc: added more logs to better debug these issues.
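The selection logic could be sketched roughly as follows; the Entry type and its fields are hypothetical simplifications, not OpenSnitch's actual structures:
```
package main

import (
	"fmt"
	"net"
)

// Entry is a hypothetical, simplified view of a socket entry returned by a
// netlink sock_diag dump.
type Entry struct {
	SrcIP   net.IP
	SrcPort uint16
	DstIP   net.IP
	DstPort uint16
	Inode   uint32
}

// pickEntry returns the entry that matches the outgoing connection 1:1;
// if there is none, it returns the first candidate so the caller can apply
// the previous workaround to it.
func pickEntry(candidates []Entry, srcIP net.IP, srcPort uint16, dstIP net.IP, dstPort uint16) (Entry, bool) {
	for _, e := range candidates {
		if e.SrcPort == srcPort && e.DstPort == dstPort &&
			e.SrcIP.Equal(srcIP) && e.DstIP.Equal(dstIP) {
			return e, true // exact match: trust this inode
		}
	}
	if len(candidates) > 0 {
		return candidates[0], false // fall back to the workaround
	}
	return Entry{}, false
}

func main() {
	entries := []Entry{
		{net.ParseIP("0.0.0.0"), 47344, net.ParseIP("0.0.0.0"), 0, 111},
		{net.ParseIP("192.168.1.106"), 47344, net.ParseIP("151.101.65.140"), 443, 222},
	}
	e, exact := pickEntry(entries, net.ParseIP("192.168.1.106"), 47344, net.ParseIP("151.101.65.140"), 443)
	fmt.Println("inode:", e.Inode, "exact match:", exact)
}
```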
When blocking a connection via libnetfilter-queue using NF_DROP, the
connection is discarded. If the blocked connection is a DNS query, the app
that initiated it will wait until it times out, which is ~30s.
This behaviour can, for example, cause slowdowns when loading web pages: #481
This change adds the option to reject connections by killing the socket
that initiated them.
Denying:
$ time telnet 1.1.1.1 22
Trying 1.1.1.1...
telnet: Unable to connect to remote host: Connection timed out
real 2m10,039s
Rejecting:
$ time telnet 1.1.1.1 22
Trying 1.1.1.1...
telnet: Unable to connect to remote host: Software caused connection abort
real 0m0,005s
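A sketch of the rejecting idea: killing the local socket so the application gets an immediate error. It assumes github.com/vishvananda/netlink exposes a SocketDestroy helper built on SOCK_DESTROY; if the version in use lacks it, the same request can be issued with a raw inet_diag message. Addresses below are placeholders:
```
package main

import (
	"fmt"
	"net"
	"os"

	"github.com/vishvananda/netlink"
)

func main() {
	// Hypothetical example: kill the local TCP socket behind a blocked
	// connection so the application gets an immediate error instead of
	// waiting for a timeout. Assumes the library provides SocketDestroy
	// (SOCK_DESTROY); requires root and CONFIG_INET_DIAG_DESTROY=y.
	local := &net.TCPAddr{IP: net.ParseIP("192.168.1.106"), Port: 47344}
	remote := &net.TCPAddr{IP: net.ParseIP("1.1.1.1"), Port: 22}

	if err := netlink.SocketDestroy(local, remote); err != nil {
		fmt.Fprintln(os.Stderr, "socket destroy:", err)
	}
}
```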
One of the steps of PID discovery is finding the socket inode
of a connection. The first attempt is to dump the active connections in the
kernel, using NETLINK_SOCK_DIAG via netlink.
Sometimes, when a source port was reused, the kernel could return multiple
entries with the same source port, leading us to associate connections with
the wrong application.
This change fixes this problem, while still allowing us to discover other
apps.
More information:
https://github.com/evilsocket/opensnitch/issues/387#issuecomment-888663121
Note: this problem shouldn't occur with the eBPF proc monitor method.
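For reference, a single-socket query with github.com/vishvananda/netlink looks roughly like this; SocketGet issues an inet_diag request for one TCP tuple and returns the inode and UID used for PID discovery (a sketch, not the daemon's code):
```
package main

import (
	"fmt"
	"net"
	"os"

	"github.com/vishvananda/netlink"
)

func main() {
	// Ask the kernel for the socket that matches this exact TCP tuple;
	// the returned entry carries the inode and UID used for PID discovery.
	local := &net.TCPAddr{IP: net.ParseIP("192.168.1.106"), Port: 47344}
	remote := &net.TCPAddr{IP: net.ParseIP("151.101.65.140"), Port: 443}

	s, err := netlink.SocketGet(local, remote)
	if err != nil {
		fmt.Fprintln(os.Stderr, "socket query:", err)
		return
	}
	fmt.Printf("inode: %d, uid: %d\n", s.INode, s.UID)
}
```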
When enabling the eBPF monitor method we dump the active connections,
but in some cases there are no active connections, and because of this
we were failing to enable this monitor method.
If there are no connections established, netlink returns 0 entries. It's
not clear whether in some cases this is an indication of error or the
expected result.
Either way:
- fail only if we're unable to load the eBPF module.
- dump TCP IPv6 connections only if IPv6 is enabled in the system (a
possible check is sketched after this list).
- De/Serialize IPv6 connections.
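A possible IPv6 check, reading the kernel's disable_ipv6 sysctl via procfs (the daemon may use a different test):
```
package main

import (
	"fmt"
	"os"
	"strings"
)

// ipv6Enabled reports whether the kernel has IPv6 enabled, by checking
// net.ipv6.conf.all.disable_ipv6; if the file doesn't exist, IPv6 support
// is not compiled in or the module isn't loaded.
func ipv6Enabled() bool {
	data, err := os.ReadFile("/proc/sys/net/ipv6/conf/all/disable_ipv6")
	if err != nil {
		return false
	}
	return strings.TrimSpace(string(data)) == "0"
}

func main() {
	if ipv6Enabled() {
		fmt.Println("IPv6 enabled: dump TCP6 sockets too")
	} else {
		fmt.Println("IPv6 disabled: skip TCP6 dump")
	}
}
```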
- Added SocketsDump() to list all sockets currently in the kernel.
- [proc details] Resolve all the sockets an application has opened
and translate them to network data (a sketch of this translation
follows after this list), e.g.:
```
ls -l /proc/1234/fd/
0 ... 25 -> socket[12345678]
```
to
```
0 .... 25 -> socket[12345678] - 54321:10.0.2.2 -> github.com:443,
state: established
```
- Dump connections from kernel querying by source port + protocol.
- Prioritize responses which match the outgoing connection.
- If we don't get any response, apply the default action configured in
/etc/opensnitchd/default-config.json
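The /proc side of that translation could be sketched like this: list a PID's file descriptors, keep the socket ones, and extract their inodes so they can be joined with a kernel socket dump. Names are illustrative:
```
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
	"strings"
)

// socketInodes returns the inodes of all sockets a process has open,
// by reading the "socket:[inode]" symlinks under /proc/<pid>/fd/.
func socketInodes(pid int) ([]uint64, error) {
	fdDir := fmt.Sprintf("/proc/%d/fd", pid)
	entries, err := os.ReadDir(fdDir)
	if err != nil {
		return nil, err
	}
	var inodes []uint64
	for _, e := range entries {
		target, err := os.Readlink(filepath.Join(fdDir, e.Name()))
		if err != nil || !strings.HasPrefix(target, "socket:[") {
			continue
		}
		raw := strings.TrimSuffix(strings.TrimPrefix(target, "socket:["), "]")
		if n, err := strconv.ParseUint(raw, 10, 64); err == nil {
			inodes = append(inodes, n)
		}
	}
	return inodes, nil
}

func main() {
	inodes, err := socketInodes(os.Getpid())
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Each inode can then be matched against a kernel socket dump to get
	// the connection tuple and state.
	fmt.Println("socket inodes:", inodes)
}
```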
--
A connection can be considered unique given:
protocol + source port + source IP + destination IP + destination port
We can be quite sure that only one process created that connection.
However, many times, querying the kernel for the connection details by
all these parameters results in no response.
A regular query and normal response would be:
query: TCP:47344:192.168.1.106 -> 151.101.65.140:443
response: 47344:192.168.1.106 -> 151.101.65.140:443, inode: 1234567, ...
But in other cases, the details of the outgoing connection differ
from the kernel response, or the entry doesn't even exist.
However, if we query by protocol + source port, we can get more entries, and
somewhat guess which program opened the outgoing connection.
Some examples of querying by outgoing connection and response from
kernel:
query: 8612:192.168.1.5 -> 192.168.1.255:8612
response: 8612:192.168.1.105 -> 0.0.0.0:0
query: 123:192.168.1.5 -> 217.144.138.234:123
response: 123:0.0.0.0 -> 0.0.0.0:0
query: 45015:127.0.0.1 -> 239.255.255.250:1900
response: 45015:127.0.0.1 -> 0.0.0.0:0
query: 50416:fe80::9fc2:ddcf:df22:aa50 -> fe80::1:53
response: 50416:254.128.0.0 -> 254.128.0.0:53
query: 51413:192.168.1.106 -> 103.224.182.250:1337
response: 51413:0.0.0.0 -> 0.0.0.0:0
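Matching such responses against the outgoing connection can be sketched as follows: require the source port to match and treat unspecified addresses/ports in the response as wildcards. Types and names are illustrative:
```
package main

import (
	"fmt"
	"net"
)

// wildcardMatch reports whether a kernel sock_diag response can correspond
// to the outgoing connection: the source port must match, and unspecified
// fields in the response (0.0.0.0, ::, port 0) are treated as wildcards.
func wildcardMatch(qSrcIP net.IP, qSrcPort uint16, qDstIP net.IP, qDstPort uint16,
	rSrcIP net.IP, rSrcPort uint16, rDstIP net.IP, rDstPort uint16) bool {
	if qSrcPort != rSrcPort {
		return false
	}
	if !rSrcIP.IsUnspecified() && !rSrcIP.Equal(qSrcIP) {
		return false
	}
	if !rDstIP.IsUnspecified() && !rDstIP.Equal(qDstIP) {
		return false
	}
	if rDstPort != 0 && rDstPort != qDstPort {
		return false
	}
	return true
}

func main() {
	// query: 123:192.168.1.5 -> 217.144.138.234:123
	// response: 123:0.0.0.0 -> 0.0.0.0:0  => still a plausible match
	ok := wildcardMatch(
		net.ParseIP("192.168.1.5"), 123, net.ParseIP("217.144.138.234"), 123,
		net.ParseIP("0.0.0.0"), 123, net.ParseIP("0.0.0.0"), 0)
	fmt.Println("match:", ok)
}
```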
man sock_diag:
"If the nlmsg_flags field of the struct nlmsghdr header has the
NLM_F_DUMP flag set, it means that a list of sockets is being
requested; otherwise it is a query about an individual socket."