opensnitch/daemon/default-config.json


{
    "Server":
    {
        "Address": "unix:///tmp/osui.sock",
        "LogFile": "/var/log/opensnitchd.log"
    },
    "DefaultAction": "allow",
    "DefaultDuration": "once",
    "InterceptUnknown": false,
    "ProcMonitorMethod": "ebpf",
    "LogLevel": 2,
    "LogUTC": true,
    "LogMicro": false,
    "Firewall": "nftables",
    "Rules": {
        "EnableChecksums": false
    },
    "Stats": {
        "MaxEvents": 150,
        "MaxStats": 25,
        "Workers": 6
    }
}

statistics: fixed missed connections (2021-08-13 12:18:10 +02:00)

Previous behaviour:
1) Before version 1.0.0b the daemon kept a list of processes that had established connections. The list was displayed in the GUI as-is, so the maximum number of connections displayed was 100 (hardcoded).
2) When the intercepted connections reached 100, the last entry of the list was removed and a new one was inserted at the top.

After v1.0.0 we started saving connections to a DB on the GUI side, to get rid of the hardcoded connections limit. However, point 2) was still present, which caused some problems:
- When the backlog was full we kept inserting and deleting connections from it continuously, one by one.
- If there was a burst of connections we could end up missing some of them.

New behaviour:
- The statistics are deleted from the daemon every time we send them to the GUI, because we no longer need them on the daemon.
- If the GUI is not connected, the connections are added to the backlog as in point 2).
- When the backlog reaches the limit, it keeps deleting the last entry in order to insert a new one.
- The number of connections to keep in the backlog is configurable.
- If the statistics configuration is missing, the default values are 150 (maxEvents) and 25 (maxStats).

Notes: If the GUI is saving the data to memory (the default), there won't be any noticeable side effect. If the GUI is configured to save the connections to a DB on disk and the daemon sends the whole backlog at once, the GUI may experience a delay and a high CPU spike. This can occur on connecting to the daemon (because the backlog will be full), or when an app opens too many connections per second (like nmap).
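The commit note above says that when the "Stats" section is missing from the config, the daemon falls back to 150 for MaxEvents and 25 for MaxStats. A minimal Go sketch of that fallback is below; the struct and function names (`Config`, `StatsConfig`, `loadConfig`) are illustrative assumptions, not the daemon's actual types.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// StatsConfig mirrors the "Stats" section of default-config.json.
// Field names here are assumptions for illustration; the real daemon
// structs may differ.
type StatsConfig struct {
	MaxEvents int
	MaxStats  int
	Workers   int
}

// Config mirrors a subset of the top-level keys in default-config.json.
type Config struct {
	DefaultAction     string
	DefaultDuration   string
	InterceptUnknown  bool
	ProcMonitorMethod string
	Firewall          string
	Stats             *StatsConfig
}

// loadConfig parses the JSON and, when the "Stats" section is absent,
// applies the documented defaults: 150 MaxEvents and 25 MaxStats.
func loadConfig(raw []byte) (*Config, error) {
	var c Config
	if err := json.Unmarshal(raw, &c); err != nil {
		return nil, err
	}
	if c.Stats == nil {
		c.Stats = &StatsConfig{MaxEvents: 150, MaxStats: 25}
	}
	return &c, nil
}

func main() {
	// A config with no "Stats" section: the defaults must kick in.
	cfg, err := loadConfig([]byte(`{"DefaultAction": "allow", "Firewall": "nftables"}`))
	if err != nil {
		panic(err)
	}
	fmt.Println(cfg.Stats.MaxEvents, cfg.Stats.MaxStats) // prints "150 25"
}
```

Using a pointer (`*StatsConfig`) rather than a value lets the loader distinguish "section missing" from "section present with zero values", which is what makes the fallback unambiguous.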