On a test system without bison installed, make setup fails with:
/bin/sh: 1: bison: not found
/bin/sh: 1: test: -ge: unexpected operator
Signed-off-by: Zygmunt Krynicki <zygmunt.krynicki@canonical.com>
Match Flags convert the output to hex but don't restore it after
outputting the flag, resulting in the following numbers being hex
encoded. This results in dumps that can be confusing, eg.
rule: \d2 -> \x2 priority=1001 (0x4/0)< 0x4>
rule: \d7 -> \a priority=3e9 (0x4/0)< 0x4>
rule: \d10 -> \n priority=3e9 (0x4/0)< 0x4>
rule: \d9 -> \t priority=3e9 (0x4/0)< 0x4>
rule: \d14 -> \xe priority=1001 (0x4/0)< 0x4>
where priority=3e9 is the hex encoded priority 1001.
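A minimal sketch of the fix pattern, with a hypothetical helper name rather than the actual dump code: save the stream's format flags before switching to hex and restore them afterwards so later fields print in decimal again.

```cpp
#include <iostream>
#include <ostream>

// Hypothetical dump helper illustrating the fix: remember the stream's
// format flags before switching to hex and restore them afterwards, so
// the next rule's priority prints as 1001 rather than 3e9.
static void dump_rule(std::ostream &os, int priority, unsigned int flags)
{
	os << "priority=" << priority << " ";
	std::ios_base::fmtflags saved = os.flags();
	os << "(0x" << std::hex << flags << "/0)";
	os.flags(saved);	// without this, later integers stay hex encoded
	os << "\n";
}

int main()
{
	dump_rule(std::cout, 1001, 0x4);
	dump_rule(std::cout, 1001, 0x4);
	return 0;
}
```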
Signed-off-by: John Johansen <john.johansen@canonical.com>
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1419
Approved-by: Maxime Bélair <maxime.belair@canonical.com>
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
The old code implicitly initialized it to 0 by overwriting a
zero-initialized array terminator. Now that we construct the new entry
from scratch, we need to do this manually.
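A rough sketch of the difference, using a hypothetical entry type (the real structure and field names are not shown in this commit):

```cpp
// Hypothetical entry type; the real structure has more fields.
struct entry {
	const char *name;
	int flags;
};

// Old pattern: the new entry overwrote a zero-initialized array
// terminator, so any field left untouched was implicitly 0.
//
// New pattern: the entry is constructed from scratch, so fields that
// are expected to start at 0 must be initialized explicitly.
static struct entry make_entry(const char *name)
{
	struct entry e = {};	// explicit zero-init replaces the implicit one
	e.name = name;
	return e;
}
```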
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
Fix libapparmor_re/Makefile so it works correctly on rebuilds, and improve the state machine dump information to aid in debugging permission handling during the compile.
Signed-off-by: John Johansen <john.johansen@canonical.com>
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1410
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
Unfortunately, meaningfully parallelizing parser testing is a giant task:
- Parser equality testing is a shell script based framework where adding parallelism would be a major rework.
- Parser testing using Python’s unittest framework also needs a different test runner to enable parallelism.
- Parser testing using Perl's prove framework already supports parallelism, but adding -j to prove does not result in speedups. I therefore suspect that most of the overhead is in spawning the processes, and that speeding this part up will require making the parser a library and testing it that way.
The commit in this MR passes a `-j` parallelism flag to Perl's prove framework, but local testing has shown that this does not create speedups, and Gitlab CI shows only a very modest improvement: 11 minutes 16 seconds for the parser testing stage without `-j $(nproc)` vs. 10 minutes 51 seconds with `-j $(nproc)`. Instead of passing `-j $(nproc)`, pass a fixed `-j 2` to gain some speedup, as the overhead of `-j $(nproc)` on a system with more than 2 cores eats up any time gains that parallelism would have brought.
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1416
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
This function was broken all this time: instead of duplicating each entry in the list, it would duplicate the first entry n times. Since this function is currently not used anywhere, delete it instead of fixing it.
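The bug pattern, sketched with made-up names and a standard container rather than the parser's own list type:

```cpp
#include <cstddef>
#include <list>
#include <string>

// Buggy shape (sketch): iterates n times but always copies the first
// entry, so the result is the head duplicated n times.
static std::list<std::string> dup_list_buggy(const std::list<std::string> &in)
{
	std::list<std::string> out;
	for (std::size_t i = 0; i < in.size(); i++)
		out.push_back(in.front());	// BUG: never advances through the list
	return out;
}

// What a correct duplication would look like: copy each entry in turn.
static std::list<std::string> dup_list(const std::list<std::string> &in)
{
	std::list<std::string> out;
	for (const std::string &e : in)
		out.push_back(e);
	return out;
}
```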
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1421
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: John Johansen <john@jjmx.net>
Besides transitioning towards C++, this also eliminates the linear scan that the functions using these arrays performed.
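A rough illustration of the kind of change, with made-up names: a C array that was searched linearly becomes a C++ associative container with logarithmic lookup.

```cpp
#include <map>
#include <string>

// Before (sketch): a static C array plus a strcmp() loop in every lookup.
// After (sketch): a std::map keyed by name, so callers no longer
// open-code the scan and lookups are O(log n).
static const std::map<std::string, int> table = {
	{ "foo", 1 },
	{ "bar", 2 },
};

static int lookup(const std::string &name, int not_found = -1)
{
	auto it = table.find(name);
	return it == table.end() ? not_found : it->second;
}
```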
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1420
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
This function was broken all this time: instead of duplicating each entry
in the list, it would duplicate the first entry n times. Since this
function is currently not used anywhere, delete it instead of fixing it.
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
This is simple enough to fix even though weld_file_to_policy isn't used
in practice, given that the compat layer that uses it is a target for deletion.
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
There is a null check before storing invflags into inv, but not before initializing the value at inv to 0.
Assuming the null check is needed, it should be there in both places.
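A sketch of the pattern being fixed, with hypothetical names: when the output pointer may be null, both writes through it need the guard, not just the second one.

```cpp
/* Sketch with hypothetical names: 'inv' is an optional output
 * parameter, so the zero-initialization needs the same null check
 * that the later store of invflags already has. */
static void parse_flags(unsigned int flags, unsigned int *inv)
{
	if (inv)
		*inv = 0;		/* previously written unconditionally */

	unsigned int invflags = flags & 0xffU;	/* placeholder computation */

	if (inv)
		*inv = invflags;	/* this store was already guarded */
}
```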
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
Match Flags convert the output to hex but don't restore it after
outputting the flag, resulting in the following numbers being hex
encoded. This results in dumps that can be confusing, eg.
rule: \d2 -> \x2 priority=1001 (0x4/0)< 0x4>
rule: \d7 -> \a priority=3e9 (0x4/0)< 0x4>
rule: \d10 -> \n priority=3e9 (0x4/0)< 0x4>
rule: \d9 -> \t priority=3e9 (0x4/0)< 0x4>
rule: \d14 -> \xe priority=1001 (0x4/0)< 0x4>
where priority=3e9 is the hex encoded priority 1001.
Signed-off-by: John Johansen <john.johansen@canonical.com>
The mapping of AA_CONT_MATCH was being dropped, resulting in the
tcp tests failing because they would only match up to the first
conditional match check in the layout.
Bug: https://gitlab.com/apparmor/apparmor/-/issues/462
Fixes: e29f5ce5f ("parser: if extended perms are supported by the kernel build a permstable")
Signed-off-by: John Johansen <john.johansen@canonical.com>
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1409
Approved-by: Ryan Lee <rlee287@yahoo.com>
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
The parser recently changed how/where deny information is applied.
commit 1fa45b7c1 ("parser: dfa minimization prepare for extended
permissions") removed the implicit filtering of explicit denies during
the minimization pass. The implicit clear allowed the explicit
information to be carried into the minimization pass and merged with
implicit denies. The end result was a minimized dfa with the explicit
deny information available to be applied post minimization, and
then dropped later at permission encoding in the accept entries.
Extended permissions, however, enable carrying explicit deny
information into the kernel to fix certain bugs, like complain mode
not being able to distinguish between implicit and explicit deny
rules (ie. deny rules get ignored in complain mode). However, keeping
explicit deny information when it is unnecessary results in a larger
state machine than necessary and slower compiles.
commit 179c1c1ba ("parser: fix minimization check for filtering_deny")
moved the explicit apply_and_clear_deny() pass to before minimization
to restore minimization's ability to create a minimized dfa with
explicit and implicit deny information merged, but this also cleared
the explicit deny information that used to be carried through
minimization. This meant that when the deny information was applied
post minimization it resulted in the audit and quiet information
being cleared.
This resulted in the query_label tests failing, as they check
for the expected audit information in the permissions.
Fixes: 179c1c1ba ("parser: fix minimization check for filtering_deny")
Bug: https://gitlab.com/apparmor/apparmor/-/issues/461
Signed-off-by: John Johansen <john.johansen@canonical.com>
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1408
Approved-by: Ryan Lee <rlee287@yahoo.com>
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
Besides transitioning towards C++, this also eliminates the linear scan that the functions using these arrays performed.
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
Because the main overhead of the parser_sanity test suite is process
spawning, parallelizing too much could end up hurting performance instead
of helping it. Thus, use a fixed value of 2 instead of $(nproc).
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
The mapping of AA_CONT_MATCH was being dropped, resulting in the
tcp tests failing because they would only match up to the first
conditional match check in the layout.
Bug: https://gitlab.com/apparmor/apparmor/-/issues/462
Fixes: e29f5ce5f ("parser: if extended perms are supported by the kernel build a permstable")
Signed-off-by: John Johansen <john.johansen@canonical.com>
Instead of encoding permissions in the accept and accept2 tables,
extended perms uses a permissions table, and accept becomes an index
into the table.
Add the ability to dump the permissions table so that it can be
compared and debugged.
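A rough sketch of the relationship and of what such a dump contains (structure and field names here are illustrative, not the parser's actual ones):

```cpp
#include <cstdio>
#include <vector>

// Illustrative only: with extended perms an accept entry no longer
// carries permission bits itself; it is an index into a permission table.
struct perms_entry {
	unsigned int allow;
	unsigned int deny;
	unsigned int audit;
	unsigned int quiet;
};

static void dump_perms_table(const std::vector<perms_entry> &table)
{
	for (std::size_t i = 0; i < table.size(); i++)
		std::printf("%zu: allow=0x%x deny=0x%x audit=0x%x quiet=0x%x\n",
			    i, table[i].allow, table[i].deny,
			    table[i].audit, table[i].quiet);
}

// A state's permissions are then found via its accept value:
//   const perms_entry &p = table[accept[state]];
```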
Signed-off-by: John Johansen <john.johansen@canonical.com>
The chfa dump is missing information about the accept2 entry. The
accept2 information is necessary to help with debugging state machine
builds as accept2 is used to store quiet and audit information in the
old format or conditional information in the extended perms format.
Signed-off-by: John Johansen <john.johansen@canonical.com>
The Makefile is missing some of its .h dependencies, causing compiles
to either fail or, worse, result in bad builds when rebuilding in an
already built tree.
Move the header dependencies into a variable and use it for each
target. While some targets don't need every include as a dependency,
and this will result in unnecessary rebuilds in some cases, it makes
the Makefile cleaner and easier to maintain, and makes sure a
dependency isn't accidentally missed.
Signed-off-by: John Johansen <john.johansen@canonical.com>
The parser recently changed how/where deny information is applied.
commit 1fa45b7c1 ("parser: dfa minimization prepare for extended
permissions") removed the implicit filtering of explicit denies during
the minimization pass. The implicit clear allowed the explicit
information to be carried into the minimization pass and merged with
implicit denies. The end result was a minimized dfa with the explicit
deny information available to be applied post minimization, and
then dropped later at permission encoding in the accept entries.
Extended permissions, however, enable carrying explicit deny
information into the kernel to fix certain bugs, like complain mode
not being able to distinguish between implicit and explicit deny
rules (ie. deny rules get ignored in complain mode). However, keeping
explicit deny information when it is unnecessary results in a larger
state machine than necessary and slower compiles.
commit 179c1c1ba ("parser: fix minimization check for filtering_deny")
moved the explicit apply_and_clear_deny() pass to before minimization
to restore minimization's ability to create a minimized dfa with
explicit and implicit deny information merged, but this also cleared
the explicit deny information that used to be carried through
minimization. This meant that when the deny information was applied
post minimization it resulted in the audit and quiet information
being cleared.
This resulted in the query_label tests failing, as they check
for the expected audit information in the permissions.
Fixes: 179c1c1ba ("parser: fix minimization check for filtering_deny")
Bug: https://gitlab.com/apparmor/apparmor/-/issues/461
Signed-off-by: John Johansen <john.johansen@canonical.com>
Numbered as 1 because I expect to find and fix more things like this as I continue to dig into the parser code.
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1400
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
When the find fails but the insertion also fails, we leak the new node
that we generated. Delete the new node in this case to avoid leaking
memory.
The question remains, however, as to whether we should implement `operator==` in addition to `operator<` so that they are consistent with each other and `find` works correctly.
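A minimal sketch of the fix pattern; the container and node types are stand-ins for the parser's own, and in the real code find and insert can disagree (hence the operator== question), which is why the defensive delete matters:

```cpp
#include <set>

struct Node {
	int key;	// stand-in for the real node contents
};

struct NodeCmp {
	bool operator()(const Node *a, const Node *b) const {
		return a->key < b->key;
	}
};

static Node *intern(std::set<Node *, NodeCmp> &cache, int key)
{
	Node probe{key};
	auto it = cache.find(&probe);
	if (it != cache.end())
		return *it;		// reuse the existing node

	Node *n = new Node{key};
	auto res = cache.insert(n);
	if (!res.second) {		// insert also failed: an equivalent node exists
		delete n;		// free the new node instead of leaking it
		return *res.first;
	}
	return n;
}
```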
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1399
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: John Johansen <john@jjmx.net>
When the find fails but the insertion also fails, we leak the new node
that we generated. Delete the new node in this case to avoid leaking
memory.
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
commit 1fa45b7c1 ("parser: dfa minimization prepare for extended
permissions") removed implicit filtering of explicit denies in the
minimization pass (the information was ignored in building the set of
final accept states).
The filtering of explicit denies reduces the size of the produced
dfa. Since we need to be smarter about when explicit denies are
kept (eg. during complain mode), and most dfas are limited to 65k
states, we currently need to filter explicit deny perms by default.
To compensate, commit 2737cb2c2 ("parser: minimization - remove
unnecessary second minimization pass") moved apply_and_clear_deny()
to before minimization. However, its check for applying the removal
of denials before minimization is broken. Stop having minimization
trigger apply_and_clear_deny() and just set the FILTER_DENY flag
by default, until we have a better selection of rules/conditions
where explicit deny information should be carried through to the
backend.
Fixes: 2737cb2c2 ("parser: minimization - remove unnecessary second minimization pass")
Signed-off-by: John Johansen <john.johansen@canonical.com>
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1397
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: John Johansen <john@jjmx.net>
There is an integer overflow when comparing priorities when cmp is
used, because it uses subtraction to find less than, equal, and
greater than in one operation.
But INT_MAX and INT_MIN are both used by priorities, and this results
in INT_MAX - INT_MIN and INT_MIN - INT_MAX, which both overflow,
causing an incorrect comparison result and selection of the wrong
rule permission.
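A sketch of the problem and one overflow-safe alternative (illustrative, not the parser's exact code):

```cpp
#include <climits>

// Buggy pattern: subtraction overflows when priorities span the full
// int range; INT_MAX - INT_MIN is undefined behavior and can yield a
// result with the wrong sign.
static int cmp_priority_buggy(int a, int b)
{
	return a - b;
}

// Overflow-safe three-way comparison: compare directly instead of
// subtracting, so INT_MIN and INT_MAX priorities order correctly.
static int cmp_priority(int a, int b)
{
	if (a < b)
		return -1;
	if (a > b)
		return 1;
	return 0;
}
```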
Closes: https://gitlab.com/apparmor/apparmor/-/issues/452
Fixes: e3fca60d1 ("parser: add the ability to specify a priority prefix to rules")
Signed-off-by: John Johansen <john.johansen@canonical.com>
commit 1fa45b7c1 ("parser: dfa minimization prepare for extended
permissions") removed implicit filtering of explicit denies in the
minimization pass (the information was ignored in building the set of
final accept states).
The filtering of explicit denies reduces the size of the produced
dfa. Since we need to be smarter about when explicit denies are
kept (eg. during complain mode), and most dfas are limited to 65k
states, we currently need to filter explicit deny perms by default.
To compensate, commit 2737cb2c2 ("parser: minimization - remove
unnecessary second minimization pass") moved apply_and_clear_deny()
to before minimization. However, its check for applying the removal
of denials before minimization is broken. Stop having minimization
trigger apply_and_clear_deny() and just set the FILTER_DENY flag
by default, until we have a better selection of rules/conditions
where explicit deny information should be carried through to the
backend.
Fixes: 2737cb2c2 ("parser: minimization - remove unnecessary second minimization pass")
Signed-off-by: John Johansen <john.johansen@canonical.com>
As for %s when the pointer is null: even if gcc prints (null), this is still undefined behavior, so we should do this explicitly.
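A sketch of the explicit handling (the function name is illustrative):

```cpp
#include <cstdio>

// Passing a null pointer to %s is undefined behavior even when a
// particular toolchain happens to print "(null)"; substitute the
// placeholder string explicitly instead of relying on that.
static void print_name(const char *name)
{
	std::printf("name=%s\n", name ? name : "(null)");
}
```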
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
The parser and binutils pot files have not been refreshed recently. Update them to the current code and add the missing pot files for aa_load and aa_status. Also give aa_status base support for translations to populate its pot file.
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1318
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
... without all the profiles generated by the gen-*.py scripts.
This target is meant for local manual testing, especially when working
on additional simple_tests profiles.
It makes local testing much faster (15 seconds for ~2k profiles vs.
several minutes for the additional ~70k profiles generated by gen-*.py).
Needless to say, the CI should continue to use the parser_sanity
target that includes all the generated profiles.
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1325
Approved-by: John Johansen <john@jjmx.net>
Merged-by: John Johansen <john@jjmx.net>
... without all the profiles generated by the gen-*.py scripts.
This target is meant for local manual testing, especially when working
on additional simple_tests profiles.
It makes local testing much faster (15 seconds for ~2k profiles vs.
several minutes for the additional ~70k profiles generated by gen-*.py).
Needless to say, the CI should continue to use the parser_sanity
target that includes all the generated profiles.
To run the network port range equality tests, we need to check if the
kernel supports the network_v8/af_inet feature. Also, a new file
features.af_inet is needed containing the af_inet feature.
Signed-off-by: Georgia Garcia <georgia.garcia@canonical.com>
The parser pot file should have been updated before beta. Make
sure it is up to date with the current code.
Signed-off-by: John Johansen <john.johansen@canonical.com>
The wording of "scrub the environment" with respect to execution modes is misleading, because a quick read of it could imply that it removes all environment variables. However, it actually enables ld.so's secure-execution mode, which removes a very limited subset of them. This MR rewords the relevant documentation and prompts. If proper environment variable filtering is added later, the documentation can be updated again then.
Synchronizes-with:
- Wiki page update, which I can do after this MR is approved
- Kernel patch to update wording of debug logs (patch submitted to the Apparmor mailing list [here](https://lists.ubuntu.com/archives/apparmor/2024-August/013339.html))
Things that may need updating first:
- Translations: attempting to update `utils/po/apparmor-utils.pot` resulted in a bunch of unrelated changes, so I'd like to ask about translation statuses before making a commit that updates that file properly.
- Adding info on which libcs actually behave differently based on AT_SECURE: glibc and musl libc both do, but they may do subtly different things. I don't know about other libcs.
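For reference, secure-execution mode is what a process observes through AT_SECURE; a minimal check using the standard glibc getauxval API, shown only to clarify the terminology being reworded:

```cpp
#include <cstdio>
#include <sys/auxv.h>

int main()
{
	// AT_SECURE is non-zero when ld.so runs the program in
	// secure-execution mode: the dynamic loader then ignores a limited
	// set of environment variables such as LD_PRELOAD and
	// LD_LIBRARY_PATH, rather than removing all environment variables.
	unsigned long secure = getauxval(AT_SECURE);
	std::printf("secure-execution mode: %s\n", secure ? "yes" : "no");
	return 0;
}
```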
Signed-off-by: Ryan Lee <ryan.lee@canonical.com>
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1315
Approved-by: John Johansen <john@jjmx.net>
Merged-by: Ryan Lee <rlee287@yahoo.com>
io_uring and userns mediation are encoding permissions on the class
byte. This is a mistake that should never have been allowed.
With the addition of rule priorities, the class byte mediates rule,
which ensures the kernel can determine a class is being mediated, is
given the highest priority possible so that class mediation can not
be removed by a deny rule. See
61b7568e1 ("parser: bug fix mediates_X stub rules.")
for details.
Unfortunately this breaks rule classes that encode permissions on the
class byte, because those rules will always have a lower priority and
the class mediates rule will always be selected over them, resulting
in only the class mediates permission being on the rule class state.
Fix this by adding the mediates class rules for these rule classes
with the lowest priority possible. This means that any rule mediating
the class will wipe out the mediates class rule, so add a new mediates
class rule at the same priority as the rule being added.
This is a naive implementation and does result in more mediates rules
being added than necessary. The rule class could keep track of the
highest priority rule that had been added, and use that to reduce the
number of mediates rules it adds for the class.
Technically we could also get away with not adding the rules for allow
rules, as the kernel doesn't actually check the encoded permission but
whether the class state is not the trap state. But it is required with
deny rules to ensure the deny rule doesn't result in permissions being
removed from the class, resulting in the kernel thinking it is
unmediated. We also want to ensure that mediation is encoded for other
rule types like prompt, and in the future the kernel could check the
permission, so we do want to guarantee that the class state has the
MAY_READ permission on it.
Note: there is another set of classes (file, mqueue, dbus, ...) which
encode a default rule permission as
class .* <perm>
This encoding is unfortunate in that it also adds the permission to
the class byte, but it also sets up the following states with the
permission. Thankfully, while this accepts anything, including
nothing, the nothing case generally isn't valid (eg. a file without
any absolute name). For this set of classes, the high priority
mediates rule just ensures that the null match case does not have
permission.
Fixes: 61b7568e1 ("parser: bug fix mediates_X stub rules.")
Signed-off-by: John Johansen <john.johansen@canonical.com>
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/1307
Approved-by: Georgia Garcia <georgia.garcia@canonical.com>
Merged-by: John Johansen <john@jjmx.net>
io_uring and userns mediation are encoding permissions on the class
byte. This is a mistake that should never have been allowed.
With the addition of rule priorities, the class byte mediates rule,
which ensures the kernel can determine a class is being mediated, is
given the highest priority possible so that class mediation can not
be removed by a deny rule. See
61b7568e1 ("parser: bug fix mediates_X stub rules.")
for details.
Unfortunately this breaks rule classes that encode permissions on the
class byte, because those rules will always have a lower priority and
the class mediates rule will always be selected over them, resulting
in only the class mediates permission being on the rule class state.
Fix this by adding the mediates class rules for these rule classes
with the lowest priority possible. This means that any rule mediating
the class will wipe out the mediates class rule, so add a new mediates
class rule at the same priority as the rule being added.
This is a naive implementation and does result in more mediates rules
being added than necessary. The rule class could keep track of the
highest priority rule that had been added, and use that to reduce the
number of mediates rules it adds for the class.
Technically we could also get away with not adding the rules for allow
rules, as the kernel doesn't actually check the encoded permission but
whether the class state is not the trap state. But it is required with
deny rules to ensure the deny rule doesn't result in permissions being
removed from the class, resulting in the kernel thinking it is
unmediated. We also want to ensure that mediation is encoded for other
rule types like prompt, and in the future the kernel could check the
permission, so we do want to guarantee that the class state has the
MAY_READ permission on it.
Note: there is another set of classes (file, mqueue, dbus, ...) which
encode a default rule permission as
class .* <perm>
This encoding is unfortunate in that it also adds the permission to
the class byte, but it also sets up the following states with the
permission. Thankfully, while this accepts anything, including
nothing, the nothing case generally isn't valid (eg. a file without
any absolute name). For this set of classes, the high priority
mediates rule just ensures that the null match case does not have
permission.
Fixes: 61b7568e1 ("parser: bug fix mediates_X stub rules.")
Signed-off-by: John Johansen <john.johansen@canonical.com>
The ix portion of file rules causes x conflicts at regular priority.
The long term goal is to fix this by using dominance for x rules, but
in the meantime we can fix it by giving the ix portion of the rule a
reduced priority.
Signed-off-by: John Johansen <john.johansen@canonical.com>