The current behavior of priority rules can be non-intuitive, with
higher priority rules completely overriding lower priority rules even in
permissions not held in common. This behavior does have use cases but
it can be very confusing, and does not match normal policy behavior.
Eg.
priority=0 allow r /**,
priority=1 deny w /**,
will result in no allowed permissions even though the deny rule is
only removing the w permission, because the higher priority rule
completely overrides the lower priority permission set (including
permissions not shared between the rules).
Instead move to tracking the priority at a per-permission level. This
allows the w permission to still be overridden at priority 1, while the
read permission is allowed at priority 0.
The final constructed state will still drop the priority for the final
permission set on the state.
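A rough sketch of the per-permission tracking idea (the struct and
function below are purely illustrative, not the parser's actual
accept/permission types): each permission bit carries its own priority,
so a higher priority rule only wins for the bits it actually sets.
```
#include <climits>
#include <cstdint>

/* hypothetical illustration only: one priority per permission bit */
struct perm_prio {
	uint32_t allow, deny;
	int prio[32];			/* priority recorded per bit */

	perm_prio() : allow(0), deny(0)
	{
		for (int i = 0; i < 32; i++)
			prio[i] = INT_MIN;	/* unset */
	}
};

/* merge a rule's allow/deny bits at priority p; only the bits the rule
 * actually sets, and only at an equal or higher priority, override the
 * existing decision for that bit
 */
static void merge_rule(perm_prio &s, uint32_t allow, uint32_t deny, int p)
{
	for (int i = 0; i < 32; i++) {
		uint32_t bit = 1u << i;
		if (!((allow | deny) & bit) || p < s.prio[i])
			continue;
		s.prio[i] = p;
		s.allow = (allow & bit) ? (s.allow | bit) : (s.allow & ~bit);
		s.deny = (deny & bit) ? (s.deny | bit) : (s.deny & ~bit);
	}
}
```
With this scheme the example above behaves as described: "priority=1
deny w" only claims the w bit, so the read permission allowed at
priority 0 survives.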
Note: this patch updates the equality tests for the cases where
the complete override behavior was being tested for.
The complete override behavior will be reintroduced in a future
patch with a keyword extension, enabling that behavior to be used
for ordered blocks etc.
Signed-off-by: John Johansen <john.johansen@canonical.com>
The priority field is only used during state construction, and can
even prevent later optimizations like minimization. The parser already
explicitly clears the state's priority field as the last thing done
during construction so it doesn't prevent minimization optimizations.
This means the state priority not only wastes storage, because it is
unused post construction, but if used it could introduce regressions
or other issues.
The change to the minimization tests just removes the check for the
priority field that is no longer reported.
Signed-off-by: John Johansen <john.johansen@canonical.com>
As was done for the other MatchFlags, switch to using a node type
instead of dynamic_cast, as this will result in a performance
improvement.
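An illustrative sketch of the pattern (the class and enum names are
hypothetical, not the parser's actual expr-tree types): a stored
node-type tag lets the node's kind be tested with an integer compare
instead of a dynamic_cast RTTI lookup.
```
#include <cstddef>

enum NodeType { NODE_GENERIC, NODE_MATCHFLAGS };

struct Node {
	NodeType type;
	explicit Node(NodeType t = NODE_GENERIC) : type(t) { }
	virtual ~Node() { }
	bool is_type(NodeType t) const { return type == t; }
};

struct MatchFlagsNode: public Node {
	MatchFlagsNode() : Node(NODE_MATCHFLAGS) { }
};

static MatchFlagsNode *as_matchflags(Node *n)
{
	/* replaces: dynamic_cast<MatchFlagsNode *>(n) */
	return n->is_type(NODE_MATCHFLAGS) ?
		static_cast<MatchFlagsNode *>(n) : NULL;
}
```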
Signed-off-by: John Johansen <john.johansen@canonical.com>
Add a basic overview of the ordering of the compiler backend and which
stages the specific dump info lines up with.
Signed-off-by: John Johansen <john.johansen@canonical.com>
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1470
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
Add a basic overview of the ordering of the compiler backend and which
stages the specific dump info lines up with.
Signed-off-by: John Johansen <john.johansen@canonical.com>
Construction of the chfa can reorder states from the numbering given
during hfa construction, because of reordering for better compression,
dead state removal to ensure better packing, etc.
This however means the dfa dump is difficult (though possible, using
multiple dumps) to match up to the chfa that the kernel is using. Make
this easier by allowing the dfa dump to take the remapping as input,
and provide an option to dump the chfa equivalent hfa.
Renumbered states will show up as {new == {orig}} in the dump
Eg.
```
-D dfa-states
{1} <== priority (allow/deny/prompt/audit/quiet)
{5} 0 (0x 4/0//0/0/0)
{1} perms: none
0x2 -> {5} 0 (0x 4/0//0/0/0)
0x4 -> {5} 0 (0x 4/0//0/0/0)
\a 0x7 -> {5} 0 (0x 4/0//0/0/0)
\t 0x9 -> {5} 0 (0x 4/0//0/0/0)
\n 0xa -> {5} 0 (0x 4/0//0/0/0)
\ 0x20 -> {5} 0 (0x 4/0//0/0/0)
4 0x34 -> {3}
{3} perms: none
0x0 -> {6}
{6} perms: none
1 0x31 -> {5} 0 (0x 4/0//0/0/0)
```
```
-D dfa-compressed-states
{1} <== priority (allow/deny/prompt/audit/quiet)
{2 == {5}} 0 (0x 4/0//0/0/0)
{1} perms: none
0x2 -> {2 == {5}} 0 (0x 4/0//0/0/0)
0x4 -> {2 == {5}} 0 (0x 4/0//0/0/0)
\a 0x7 -> {2 == {5}} 0 (0x 4/0//0/0/0)
\t 0x9 -> {2 == {5}} 0 (0x 4/0//0/0/0)
\n 0xa -> {2 == {5}} 0 (0x 4/0//0/0/0)
\ 0x20 -> {2 == {5}} 0 (0x 4/0//0/0/0)
4 0x34 -> {3}
{3} perms: none
0x0 -> {4 == {6}}
{4 == {6}} perms: none
1 0x31 -> {2 == {5}} 0 (0x 4/0//0/0/0)
```
Signed-off-by: John Johansen <john.johansen@canonical.com>
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1474
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: John Johansen <john@jjmx.net>
Construction of the chfa can reorder states from the numbering given
during hfa construction, because of reordering for better compression,
dead state removal to ensure better packing, etc.
This however means the dfa dump is difficult (though possible, using
multiple dumps) to match up to the chfa that the kernel is using. Make
this easier by allowing the dfa dump to take the remapping as input,
and provide an option to dump the chfa equivalent hfa.
Renumbered states will show up as {new == {orig}} in the dump
Eg.
```
-D dfa-states
{1} <== priority (allow/deny/prompt/audit/quiet)
{5} 0 (0x 4/0//0/0/0)
{1} perms: none
0x2 -> {5} 0 (0x 4/0//0/0/0)
0x4 -> {5} 0 (0x 4/0//0/0/0)
\a 0x7 -> {5} 0 (0x 4/0//0/0/0)
\t 0x9 -> {5} 0 (0x 4/0//0/0/0)
\n 0xa -> {5} 0 (0x 4/0//0/0/0)
\ 0x20 -> {5} 0 (0x 4/0//0/0/0)
4 0x34 -> {3}
{3} perms: none
0x0 -> {6}
{6} perms: none
1 0x31 -> {5} 0 (0x 4/0//0/0/0)
```
```
-D dfa-compressed-states
{1} <== priority (allow/deny/prompt/audit/quiet)
{2 == {5}} 0 (0x 4/0//0/0/0)
{1} perms: none
0x2 -> {2 == {5}} 0 (0x 4/0//0/0/0)
0x4 -> {2 == {5}} 0 (0x 4/0//0/0/0)
\a 0x7 -> {2 == {5}} 0 (0x 4/0//0/0/0)
\t 0x9 -> {2 == {5}} 0 (0x 4/0//0/0/0)
\n 0xa -> {2 == {5}} 0 (0x 4/0//0/0/0)
\ 0x20 -> {2 == {5}} 0 (0x 4/0//0/0/0)
4 0x34 -> {3}
{3} perms: none
0x0 -> {4 == {6}}
{4 == {6}} perms: none
1 0x31 -> {2 == {5}} 0 (0x 4/0//0/0/0)
```
Signed-off-by: John Johansen <john.johansen@canonical.com>
Currently states are added to the reachable set when they are popped
from the workqueue. This however can result in states being
added to the workqueue multiple times and reprocessed.
Eg. if state 2 has the following transitions, and 9 is not in the reachable set
a -> 9
b -> 9
c -> 9
d -> 9
e -> 3
then 9 will get pushed onto the workqueue 4 times. Even worse, other
states on the workqueue may also add state 9 to the workqueue because
it has not been added to the reachable set.
Instead add states to the reachable set when they are added to the
workqueue. The first encounter with a state will result in it being
reachable, and all other encounters will see that it is already in the
set and not add it to the workqueue.
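A minimal sketch of the change (the types are illustrative, not the
parser's): a state is marked reachable at the moment it is queued, so
it can be pushed at most once no matter how many transitions target it.
```
#include <cstddef>
#include <queue>
#include <set>
#include <vector>

struct State {
	std::vector<State *> next;	/* outgoing transition targets */
};

static void walk_reachable(State *start, std::set<State *> &reachable)
{
	std::queue<State *> work;

	reachable.insert(start);	/* mark on push, not on pop */
	work.push(start);
	while (!work.empty()) {
		State *s = work.front();
		work.pop();
		for (std::size_t i = 0; i < s->next.size(); i++) {
			/* insert().second is true only on first encounter */
			if (reachable.insert(s->next[i]).second)
				work.push(s->next[i]);
		}
	}
}
```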
Signed-off-by: John Johansen <john.johansen@canonical.com>
The dfa goes through several stages during the build. Allow dumping it
at the various stages instead of only at the end.
Signed-off-by: John Johansen <john.johansen@canonical.com>
MatchFlags convert the output to hex but don't restore it after
outputting the flag, resulting in following numbers being hex encoded.
This results in dumps that can be confusing, eg.
rule: \d2 -> \x2 priority=1001 (0x4/0)< 0x4>
rule: \d7 -> \a priority=3e9 (0x4/0)< 0x4>
rule: \d10 -> \n priority=3e9 (0x4/0)< 0x4>
rule: \d9 -> \t priority=3e9 (0x4/0)< 0x4>
rule: \d14 -> \xe priority=1001 (0x4/0)< 0x4>
where priority=3e9 is the hex encoded priority 1001.
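A hedged sketch of the fix pattern (the function and its arguments are
made up for illustration): save the stream's format flags before
switching to hex and restore them afterwards, so later numbers such as
the priority print in decimal again.
```
#include <iostream>

static void dump_flag(std::ostream &os, unsigned int flag, int priority)
{
	std::ios_base::fmtflags saved = os.flags();

	os << "0x" << std::hex << flag;
	os.flags(saved);		/* restore, so later output is decimal */
	os << " priority=" << priority;	/* prints 1001, not 3e9 */
	os << "\n";
}
```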
Signed-off-by: John Johansen <john.johansen@canonical.com>
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1419
Approved-by: Maxime Bélair <maxime.belair@canonical.com>
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
Fix libapparmor_re/Makefile so it works correctly with rebuilds and improve state machine dump information, to aid with debugging of permission handling during the compile.
Signed-off-by: John Johansen <john.johansen@canonical.com>
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1410
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
This is simple enough to fix even though weld_file_to_policy isn't used
in practice, and the compat layer that uses it is a target for deletion.
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
MatchFlags convert the output to hex but don't restore it after
outputting the flag, resulting in following numbers being hex encoded.
This results in dumps that can be confusing, eg.
rule: \d2 -> \x2 priority=1001 (0x4/0)< 0x4>
rule: \d7 -> \a priority=3e9 (0x4/0)< 0x4>
rule: \d10 -> \n priority=3e9 (0x4/0)< 0x4>
rule: \d9 -> \t priority=3e9 (0x4/0)< 0x4>
rule: \d14 -> \xe priority=1001 (0x4/0)< 0x4>
where priority=3e9 is the hex encoded priority 1001.
Signed-off-by: John Johansen <john.johansen@canonical.com>
The mapping of AA_CONT_MATCH was being dropped resulting in the
tcp tests failing because they would only match up to the first conditional
match check in the layout.
Bug: https://gitlab.com/apparmor/apparmor/-/issues/462
Fixes: e29f5ce5f ("parser: if extended perms are supported by the kernel build a permstable")
Signed-off-by: John Johansen <john.johansen@canonical.com>
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1409
Approved-by: Ryan Lee <rlee287@yahoo.com>
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
The parser recently changed how/where deny information is applied.
commit 1fa45b7c1 ("parser: dfa minimization prepare for extended
permissions") removed the implicit filtering of explicit denies during
the minimization pass. The implicit clear allowed the explicit
information to be carried into the minimization pass and merged with
implicit denies. The end result being a minimized dfa with the explicit
deny information available to be applied post minimization, and
then dropped later at permission encoding in the accept entries.
Extended permissions however enable carrying explicit deny information
into the kernel to fix certain bugs, like complain mode not being
able to distinguish between implicit and explicit deny rules (ie.
deny rules get ignored in complain mode). However keeping explicit
deny information when unnecessary results in a larger state machine
than necessary and slower compiles.
commit 179c1c1ba ("parser: fix minimization check for filtering_deny")
moved the explicit apply_and_clear_deny() pass to before minimization
to restore minimization's ability to create a minimized dfa with
explicit and implicit deny information merged but this also cleared
the explicit deny information that used to be carried through
minimization. This meant that when the deny information was applied
post minimization it resulted in the audit and quiet information
being cleared.
This resulted in the query_label tests failing, as they are checking
for the expected audit information in the permissions.
Fixes: 179c1c1ba ("parser: fix minimization check for filtering_deny")
Bug: https://gitlab.com/apparmor/apparmor/-/issues/461
Signed-off-by: John Johansen <john.johansen@canonical.com>
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1408
Approved-by: Ryan Lee <rlee287@yahoo.com>
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
The mapping of AA_CONT_MATCH was being dropped resulting in the
tcp tests failing because they would only match up to the first conditional
match check in the layout.
Bug: https://gitlab.com/apparmor/apparmor/-/issues/462
Fixes: e29f5ce5f ("parser: if extended perms are supported by the kernel build a permstable")
Signed-off-by: John Johansen <john.johansen@canonical.com>
Instead of encoding permissions in the accept and accept2 tables
extended perms uses a permissions table and accept becomes an index
into the table.
Add the ability to dump the permissions table so that it can be
compared and debugged.
Signed-off-by: John Johansen <john.johansen@canonical.com>
The chfa dump is missing information about the accept2 entry. The
accept2 information is necessary to help with debugging state machine
builds as accept2 is used to store quiet and audit information in the
old format or conditional information in the extended perms format.
Signed-off-by: John Johansen <john.johansen@canonical.com>
The Makefile is missing some of its .h dependencies, causing compiles
to either fail or, worse, result in bad builds when rebuilding in an
already built tree.
Move the header dependencies into a variable and use it for each
target. While some targets don't need every include as a dependency
and this will result in unnecessary rebuilds in some cases, it makes
the Makefile cleaner, easier to maintain and makes sure a dependency
isn't accidentally missed.
Signed-off-by: John Johansen <john.johansen@canonical.com>
The parser recently changed how/where deny information is applied.
commit 1fa45b7c1 ("parser: dfa minimization prepare for extended
permissions") removed the implicit filtering of explicit denies during
the minimization pass. The implicit clear allowed the explicit
information to be carried into the minimization pass and merged with
implicit denies. The end result being a minimized dfa with the explicit
deny information available to be applied post minimization, and
then dropped later at permission encoding in the accept entries.
Extended permission however enable carrying explicit deny information
into the kernel to fix certain bugs like complain mode not being
able to distinguish between implicit and explicit deny rules (ie.
deny rules get ignored in complain mode). However keeping explicit
deny information when unnecessary result in a larger state machine
than necessary and slower compiles.
commit 179c1c1ba ("parser: fix minimization check for filtering_deny")
Moved the explicit apply_and_clear_deny() pass to before minimization
to restore mnimization's ability to create a minimized dfa with
explicit and implicit deny information merged but this also cleared
the explicit deny information that used to be carried through
minimization. This meant that when the deny information was applied
post minimization it resulted in the audit and quiet information
being cleared.
This resulted in the query_label tests failing as they are checking
for the expected audit infomation in the permissions.
Fixes: 179c1c1ba ("parser: fix minimization check for filtering_deny")
Bug: https://gitlab.com/apparmor/apparmor/-/issues/461
Signed-off-by: John Johansen <john.johansen@canonical.com>
When the find fails but the insertion also fails, we leak the new node
that we generated. Delete the new node in this case to avoid leaking
memory.
The question remains, however, as to whether we should implement `operator==` in addition to `operator<` so that they are consistent with each other and `find` works correctly.
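A hypothetical illustration of the leak and its fix (Node here is a
stand-in, not the parser's class): when the freshly allocated node
turns out to be a duplicate of one already in the set, it must be freed
instead of leaked.
```
#include <set>
#include <utility>

struct Node {
	int key;
	Node(int k) : key(k) { }
};

struct NodeLess {
	bool operator()(const Node *a, const Node *b) const
	{
		return a->key < b->key;
	}
};

static Node *intern(std::set<Node *, NodeLess> &cache, Node *candidate)
{
	std::pair<std::set<Node *, NodeLess>::iterator, bool> res =
		cache.insert(candidate);
	if (!res.second) {
		delete candidate;	/* equivalent node already present */
		return *res.first;
	}
	return candidate;
}
```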
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1399
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: John Johansen <john@jjmx.net>
When the find fails but the insertion also fails, we leak the new node
that we generated. Delete the new node in this case to avoid leaking
memory.
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
commit 1fa45b7c1 ("parser: dfa minimization prepare for extended
permissions") removed implicit filtering of explicit denies in the
minimization pass (the information was ignored in building the set of
final accept states).
The filtering of explicit denies reduces the size of the produced
dfa. Since we need to be smarter about when explicit denies are
kept (eg. during complain mode), and most dfas are limited to 65k
states, we currently need to filter explicit deny perms by default.
To compensate, commit 2737cb2c2 ("parser: minimization - remove
unnecessary second minimization pass") moved
apply_and_clear_deny() to before minimization. However its check for
applying the removal of denials before minimization is broken. Stop
minimization from triggering apply_and_clear_deny() and just set the
FILTER_DENY flag by default, until we have better selection of the
rules/conditions where explicit deny information should be carried
through to the backend.
Fixes: 2737cb2c2 ("parser: minimization - remove unnecessary second minimization pass")
Signed-off-by: John Johansen <john.johansen@canonical.com>
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1397
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: John Johansen <john@jjmx.net>
There is an integer overflow when comparing priorities when cmp is
used, because it uses subtraction to find less than, equal, and greater
than in one operation.
But INT_MAX and INT_MIN are being used by priorities, and this results
in INT_MAX - INT_MIN and INT_MIN - INT_MAX, which both overflow,
causing an incorrect comparison result and the selection of the wrong
rule permission.
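A sketch of the overflow-safe comparison: explicit relational tests
instead of returning a - b, which overflows when the operands span
INT_MIN..INT_MAX as the priorities here do.
```
static int cmp_int(int a, int b)
{
	if (a < b)
		return -1;
	if (a > b)
		return 1;
	return 0;
}
```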
Closes: https://gitlab.com/apparmor/apparmor/-/issues/452
Fixes: e3fca60d1 ("parser: add the ability to specify a priority prefix to rules")
Signed-off-by: John Johansen <john.johansen@canonical.com>
commit 1fa45b7c1 ("parser: dfa minimization prepare for extended
permissions") removed implicit filtering of explicit denies in the
minimization pass (the information was ignored in building the set of
final accept states).
The filtering of explicit denies reduces the size of the produced
dfa. Since we need to be smarter about when explicit denies are
kept (eg. during complain mode), and most dfas are limited to 65k
states, we currently need to filter explicit deny perms by default.
To compensate, commit 2737cb2c2 ("parser: minimization - remove
unnecessary second minimization pass") moved
apply_and_clear_deny() to before minimization. However its check for
applying the removal of denials before minimization is broken. Stop
minimization from triggering apply_and_clear_deny() and just set the
FILTER_DENY flag by default, until we have better selection of the
rules/conditions where explicit deny information should be carried
through to the backend.
Fixes: 2737cb2c2 ("parser: minimization - remove unnecessary second minimization pass")
Signed-off-by: John Johansen <john.johansen@canonical.com>
Moving apply_and_clear_deny() before the first minimization pass, which
was necessary to properly support building accept information for
older non-extended permission dfas, also allows us to get rid of the
second minimization pass if we want to force clearing explicit deny
info from extended permission tables.
Signed-off-by: John Johansen <john.johansen@canonical.com>
Instead of compressing the permission set into 128 bits and using that
as the index in the permission map, just use the permissions directly
as the index into the permission map.
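A hedged sketch of the idea (the struct and field names are
illustrative, not the parser's): key the permission map on the
permission set itself instead of a hand-packed 128-bit value.
```
#include <cstddef>
#include <cstdint>
#include <map>

struct perm_set {
	uint32_t allow, deny, audit, quiet;

	bool operator<(const perm_set &o) const
	{
		if (allow != o.allow) return allow < o.allow;
		if (deny != o.deny) return deny < o.deny;
		if (audit != o.audit) return audit < o.audit;
		return quiet < o.quiet;
	}
};

/* permission set -> index in the permission table */
typedef std::map<perm_set, std::size_t> perm_map;
```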
Note: this will break equality and minimization tests. Because deny
is not being cleared it will result in more partitions in the initial
setup. This will be addressed and the tests will be fixed in a follow
on patch.
Signed-off-by: John Johansen <john.johansen@canonical.com>
Hash minimization was removed in
f0b154528 ("Fix dfa minimization")
however some remnants of it remained: a comment and the use of the
hash, but only as a 0 value. Drop this dead code and comment.
Signed-off-by: John Johansen <john.johansen@canonical.com>
If the state machine does not require more than 2^16 states, use the
dfa16 encoding for the next/check tables to keep the dfa size small.
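A minimal illustration of the selection (the helper name is
hypothetical): pick the 16-bit next/check encoding only when every
state index fits in a 16-bit field.
```
#include <cstddef>

static bool use_dfa16(std::size_t state_count)
{
	/* states are indexed 0..state_count-1, so 2^16 states still fit */
	return state_count <= (1u << 16);
}
```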
Bug: https://gitlab.com/apparmor/apparmor/-/issues/419
Signed-off-by: John Johansen <john.johansen@canonical.com>
The hfa stores next/check transitions in 16 bit fields to reduce memory
usage. However this means the state machine can only contain 2^16
states.
Allow the next/check tables to be 32 bit. This theoretically could allow
for 2^32 states; however, the base table uses the top 8 bits as flags,
giving us only 24 bits to index into the next/check tables. With
most states having at least 1 transition this effectively caps the
number of states at 2^24.
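An illustrative layout (the macro names are hypothetical): with 32-bit
next/check entries the base table still reserves its top 8 bits for
flags, leaving 24 bits of index and therefore roughly 2^24 addressable
states.
```
#include <cstdint>

#define BASE_FLAG_MASK  0xff000000U	/* top 8 bits: per-state flags */
#define BASE_INDEX_MASK 0x00ffffffU	/* low 24 bits: next/check offset */

static inline uint32_t base_index(uint32_t base) { return base & BASE_INDEX_MASK; }
static inline uint32_t base_flags(uint32_t base) { return base & BASE_FLAG_MASK; }
```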
To obtain 2^32 possible states a flags table needs to be added. Add
a skeleton around supporting a flags table, so we can note the remaining
work that needs to be done. This patch will only allow for 2^24 states.
Bug: https://gitlab.com/apparmor/apparmor/-/issues/419
Signed-off-by: John Johansen <john.johansen@canonical.com>
__uint128 is not supported by gcc on 32-bit architectures, so rework
the 128-bit map key to be a pair of 64-bit numbers.
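A minimal sketch of the rework: represent the 128-bit map key as a pair
of 64-bit words, high word first; std::pair's operator< already orders
by .first then .second, which matches comparing the high word before
the low word.
```
#include <cstdint>
#include <utility>

typedef std::pair<uint64_t, uint64_t> key128;	/* {high 64 bits, low 64 bits} */

static inline key128 make_key128(uint64_t hi, uint64_t lo)
{
	return key128(hi, lo);
}
```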
Signed-off-by: John Johansen <john.johansen@canonical.com>
Switch permission bits to use the perm32_t type. This is just
annotating the code, as it is no different than uint32_t at this time.
We do not convert the accept values as they may be mapped permission
bits or they may be an index value.
Signed-off-by: John Johansen <john.johansen@canonical.com>
The use of xbits can not pass verification so we need to leave them
off; this makes the profile a leaf profile.
Signed-off-by: John Johansen <john.johansen@canonical.com>
v1 of permstable32 has some broken verification checks. By using two
copies of a merged dfa and an xtable the same size as the permstable
we can work around the issue.
Signed-off-by: John Johansen <john.johansen@canonical.com>
If extended permissions are supported use them. We need to build a
permission table and set the accept state of the chfa up as an index
into the table.
For now map the front end permission layout into the old format and
then convert that to the perms table just as the kernel does.
Signed-off-by: John Johansen <john.johansen@canonical.com>
The kernel does not expect a name, and it is not used even within the
parser, so drop it. Correct the padding calculation:
sizeof(th_version) includes the trailing \0 in the count, so we should
not be adding it explicitly. Doing so made it seem like we were writing
an extra byte and messing things up, because the string write below did
not include the \0, which we had to add explicitly.
Switch to writing the th_version using sizeof() bytes, as is used in
the pad calculation, to avoid confusion around the header padding.
Signed-off-by: John Johansen <john.johansen@canonical.com>
This reverts commit 78ae956087.
Commit 78ae956087 causes policy not to conform to the protocol as
determined by the kernel. Technically the reverted patch is correct
and the kernel is wrong, but we can not change 15 years of history.
The reason it breaks policy in the kernel is that the kernel does not
use the name field, and does not expect it. It just expects the size
with a single trailing 0. Writing the extra 0 doesn't break anything
because this section is all padded to 64 bytes, so the extra 0 is
effectively just manually adding to the padding.
Signed-off-by: John Johansen <john.johansen@canonical.com>
Update the state machine readme to better reflect how the chfa is
encoded and works. It still needs a lot more but fixes several errors
in the doc and adds some info about state differential encoding, oobs,
and comb compression.
In addition fix an off by one error during chfa encoding. This has
likely never triggered as it gets hidden by being in a section that
is padded to an 8 byte boundary.
Signed-off-by: John Johansen <john.johansen@canonical.com>
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1244
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
Expression simplification can get into an infinite loop due to eps
pairs hiding behind an alternation that can't be caught by
normalize_eps() (which exists in the first place to stop a similar
loop).
The loop in question happens in AltNode::normalize when a subtree has
the following structure.
1. elseif (child[dir]->is_type(ALT_NODE)) rotate_node to
alt
/\
/ \
/ \
eps alt
/\
/ \
/ \
alt eps
/\
/ \
/ \
eps eps
2. if (normalize_eps(dir)) results in
alt
/\
/ \
/ \
alt eps
/\
/ \
/ \
alt eps
/\
/ \
/ \
eps eps
3. elseif (child[dir]->is_type(ALT_NODE)) rotate_node to
alt
/\
/ \
/ \
alt alt
/\ /\
/ \ / \
/ \ / \
eps eps eps eps
4. elseif (child[dir]->is_type(ALT_NODE)) rotate_node to
alt
/\
/ \
/ \
eps alt
/\
/ \
/ \
eps alt
/\
/ \
/ \
eps eps
5. if (normalize_eps(dir)) results in
alt
/\
/ \
/ \
alt eps
/\
/ \
/ \
eps alt
/\
/ \
/ \
eps eps
6. elseif (child[dir]->is_type(ALT_NODE)) rotate_node to
alt
/\
/ \
/ \
eps alt
/\
/ \
/ \
alt eps
/\
/ \
/ \
eps eps
back to beginning of cycle
Fix this by detecting the creation of an eps_pair in rotate_node(),
that pair can be immediately eliminated by simplifying the tree in that
step.
In the above cycle the pair creation is caught at step 3 resulting
in
3. elseif (child[dir]->is_type(ALT_NODE)) rotate_node to
alt
/\
/ \
/ \
alt eps
/\
/ \
/ \
eps eps
4. elseif (child[dir]->is_type(ALT_NODE)) rotate_node to
alt
/\
/ \
/ \
eps alt
/\
/ \
/ \
eps eps
which gets reduced to
alt
/\
/ \
/ \
eps eps
breaking the normalization loop. The degenerate alt node will be caught
in turn when its parent is dealt with.
This needs to be backported to all releases
Closes: https://gitlab.com/apparmor/apparmor/-/issues/398
Fixes: 846cee506 ("Split out parsing and expression trees from regexp.y")
Reported-by: Christian Boltz <apparmor@cboltz.de>
Signed-off-by: John Johansen <john.johansen@canonical.com>
The parser writes
sizeof(th) + th_version + (char)0 + name + (char)0;
but the padding currently is computed as
sizeof(th) + th_version + name + (char)0;
missing the internal (char)0. Add 1 to the pad and fill to ensure
this is correct.
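A sketch of the corrected computation under the description above (the
function name and the 64-byte alignment are assumptions for
illustration): the pad must be computed from the full length actually
written, including the internal (char)0.
```
#include <cstddef>
#include <string>

static std::size_t header_pad(std::size_t th_size, std::size_t version_size,
			      const std::string &name, std::size_t align = 64)
{
	std::size_t len = th_size + version_size
			  + 1		/* internal (char)0 after th_version */
			  + name.size()
			  + 1;		/* trailing (char)0 after name */
	return (align - (len % align)) % align;
}
```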
Reported-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
Signed-off-by: John Johansen <john.johansen@canonical.com>
Update the state machine readme to better reflect how the chfa is
encoded and works. It still needs a lot more but fixes several errors
in the doc and adds some info about state differential encoding, oobs,
and comb compression.
Signed-off-by: John Johansen <john.johansen@canonical.com>
When the regex parser failed, the Chars objects created/used in the
rules charset and cset_chars would not be cleaned up properly and would
leak.
Closes #361
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>
Speed up and reduce memory usage of dfa generation
A variety of changes to improve dfa generation
- By switching to Nodevec instead of Node sets we can reduce memory usage slightly and reduce code
- By using charsets for chars we reduce code and increase chances of node merging/reduction which reduces memory usage slightly
- By merging charsets we reduce the number of nodes
Signed-off-by: John Johansen <john.johansen@canonical.com>
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1066
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
Instead of having multiple tables, since we have room after the split
of optimization and dump flags, just move all the optimization and
dump flags into a common table.
We can, if needed, switch the flag entry size to a long in the future.
Signed-off-by: John Johansen <john.johansen@canonical.com>
In preparation for more flags (not all of them backend dfa based),
rework the optimization and dump flag handling, which has been
exclusively centered on the dfa up to this point.
- split dfa control and dump flags into separate fields. This gives more
room for new flags in the existing DFA set
- rename DFA_DUMP and DFA_CONTROL to DUMP_DFA and CONTROL_DFA, as
this will provide more uniform naming for non-dfa flags
- group dump and control flags into a structure so they can be passed
together (see the sketch below).
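A minimal sketch of the reorganization (the structure name and flag
values are illustrative): separate control and dump flag words, grouped
so they can be passed around together.
```
#include <cstdint>

#define CONTROL_DFA_MINIMIZE	(1 << 0)	/* illustrative control flag */
#define DUMP_DFA_STATES		(1 << 0)	/* illustrative dump flag */

struct optflags {
	uint32_t control;	/* optimization / behavior controls */
	uint32_t dump;		/* which intermediate info to dump */
};
```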
Signed-off-by: John Johansen <john.johansen@canonical.com>