Cache files will be written out even if the '--skip-bad-cache' option
is given and the cached features file differs from the features of
the currently running kernel. The patch below fixes the regression.
From: John Johansen <john.johansen@canonical.com>
Acked-by: Steve Beattie <steve@nxnw.org>
This patch addresses a bunch of the compiler string conversion warnings
that were introduced with the C++-ification patch.
Signed-off-by: Steve Beattie <steve@nxnw.org>
Acked-by: Tyler Hicks <tyhicks@canonical.com>
Drop support for the old subdomainfs mountpoint and use the function
exported by libapparmor.
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Seth Arnold <seth.arnold@canonical.com>
When we are writing a new cache .features file, the cache dir should be
cleared out.
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Tyler Hicks <tyhicks@canonical.com>
Fix this and unify creation and use of cacheloc so that we can hopefully
avoid these bugs.
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Tyler Hicks <tyhicks@canonical.com>
--cache-loc= is specified. This results in using the .features file from
/etc/apparmor.d/cache or always recompiling policy.
The former case is particularly bad as the .features file in
/etc/apparmor.d/cache/ may not correspond to the file in the specified
cache location.
bug: launchpad.net/bugs/1229393
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Tyler Hicks <tyhicks@canonical.com>
change_hat 1.4 was an experiment in more directly controlling change_hat
by adding hat rules to the profile. It has not been used since the
original experiment (4 years), so remove it.
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Seth Arnold <seth.arnold@canonical.com>
Acked-by: Steve Beattie <steve@nxnw.org>
This conversion is nothing more than what is required to get it to
compile. Further improvements will come as the code is refactored.
Unfortunately due to C++ not supporting designated initializers, the auto
generation of af names needed to be reworked, and "netlink" and "unix"
domain socket keywords leaked in. Since these were going to be added in
separate patches I have not bothered to do the extra work to replace them
with a temporary placeholder.
Signed-off-by: John Johansen <john.johansen@canonical.com>
[tyhicks: merged with dbus changes and memory leak fixes]
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
Acked-by: Seth Arnold <seth.arnold@canonical.com>
Acked-by: Steve Beattie <steve@nxnw.org>
Remove use of AARE_DFA as the alternate pcre matching engine was removed
years ago.
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Seth Arnold <seth.arnold@canonical.com>
Acked-by: Steve Beattie <steve@nxnw.org>
This patch fixes problems in the handling of both the final cache
name location and the temporary cache file when an alternate location
is specified.
The first issue is that if the alternate cache directory location was
specified, the alternate directory name would be used as the final location for
the cache file, rather than the alternate directory + the basename of
the profile.
The second issue is that the temporary file used to stage the cache file
would be generated in [basedir]/cache even if an alternate cache
location was specified on the command line. This causes a problem
if [basedir]/cache is on a different device than the alternate cache
location, because the rename() of the tempfile into the final location
would fail (and the parser did not check rename()'s return code).
This patch fixes the above by incorporating the basename into the cache
file name if the alternate cache location has been specified, bases the
temporary cache file name on the destination cache name (such that they
end up in the same directory), and finally detects if the rename fails
and unlinks the temporary file if that happens (rather than leave it
around). It has also been updated to add a couple of testcases to verify
that writing to and reading from an alternate cache location work.
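As a rough sketch (hypothetical helper, not the parser's actual code), the
fixed staging logic looks something like the following: the temp file is
created next to the destination so rename(2) never crosses a filesystem
boundary, and it is unlinked if the rename fails.

  #include <cstdio>
  #include <cstdlib>
  #include <string>
  #include <vector>
  #include <unistd.h>

  static bool write_cache(const std::string &cachefile,
                          const char *data, size_t len)
  {
      // Create the mkstemp template next to the destination instead of
      // in a fixed [basedir]/cache directory.
      std::string tmpl = cachefile + ".tmpXXXXXX";
      std::vector<char> path(tmpl.begin(), tmpl.end());
      path.push_back('\0');

      int fd = mkstemp(path.data());
      if (fd == -1)
          return false;

      bool ok = write(fd, data, len) == (ssize_t)len;
      close(fd);

      // Check the rename result and don't leave the tempfile around.
      if (!ok || rename(path.data(), cachefile.c_str()) != 0) {
          unlink(path.data());
          return false;
      }
      return true;
  }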
Patch history:
v1: first draft of patch
v2: add testcases, convert PERROR() to pwarn() if rename() fails for
placing cachefile into place.
Signed-off-by: Steve Beattie <sbeattie@ubuntu.com>
Acked-by: John Johansen <john.johansen@canonical.com>
Using --subdomainfs without an argument triggers a segfault. This was due
to the long option missing the "has_arg" flag.
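Illustrative only (table contents are hypothetical): the getopt_long()
entry needs has_arg set to required_argument; with no_argument, optarg is
left NULL and the later dereference segfaults.

  #include <getopt.h>

  static struct option long_options[] = {
      /* was: { "subdomainfs", no_argument, NULL, 'f' } -- optarg stayed NULL */
      { "subdomainfs", required_argument, NULL, 'f' },
      { NULL, 0, NULL, 0 }
  };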
Signed-off-by: Kees Cook <kees@ubuntu.com>
Acked-by: John Johansen <john.johansen@canonical.com>
file is larger than the feature buffer used for cache version comparison.
Ideally this would be dynamically allocated, but for 2.8 just bumping the
buffer size is the quick fix.
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Steve Beattie <sbeattie@ubuntu.com>
The apparmor_parser has 3 different directory walking routines. Abstract
them out and use a single common routine.
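A sketch of the shape such a common routine could take (hypothetical
signature, not the parser's actual interface): one walker that hands every
directory entry to a caller-supplied callback.

  #include <dirent.h>
  #include <cstring>

  static int dir_for_each(const char *path,
                          int (*cb)(const char *dir, struct dirent *ent,
                                    void *data),
                          void *data)
  {
      DIR *d = opendir(path);
      if (!d)
          return -1;

      int rc = 0;
      struct dirent *ent;
      while (rc == 0 && (ent = readdir(d)) != NULL) {
          // Skip the directory self-references.
          if (strcmp(ent->d_name, ".") == 0 ||
              strcmp(ent->d_name, "..") == 0)
              continue;
          rc = cb(path, ent, data);
      }
      closedir(d);
      return rc;
  }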
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-By: Steve Beattie <sbeattie@ubuntu.com>
Fix the apparmor_parser's -N command (which dumps the list of profile
names found in a policy file) to be available without privilege, and
also make it be recognized as a command instead of an option so that
it is treated as conflicting with -a, -r, -R, -S and -o.
Currently it can be specified alongside these commands but will cause the
parser to short-circuit, just dumping the names and not doing the actual
profile compile or load.
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-By: Steve Beattie <sbeattie@ubuntu.com>
Add the ability to clear out the binary profile cache. This removes the
need to have a separate script to handle the logic of checking and
removing the cache if it is out of date.
The parser already does all the checking to determine cache validity,
so it makes sense to allow the parser to clear out an inconsistent cache
when it has been instructed to update the cache.
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-By: Steve Beattie <sbeattie@ubuntu.com>
serious flaw. The test for the network flag was being applied against both
the kernel flags and the cache flags. This means that if either the kernel
or the cache did not have the flag set then network mediation would be
turned off.
Thus if a kernel was booted without the flag, and a cache was generated
based on that kernel and then the system was rebooted into a kernel with
the network flag present, the parser on generating the new policy would
detect the old cache did not support network and turn it off for the
new policy as well.
This can be fixed by either removing the old cache first or regenerating
the cache twice, as the first generation will write that networking is
supported in the cache (even though the policy will have it disabled), and
the second generation will then generate the correct policy.
The following patch moves the test so that it is only applied to the kernel
flags set.
Signed-off-by: John Johansen <john.johansen@canonical.com>
compatibility interface. Previously it was assuming that if the compatibility
interface was present then network rules were also present; this is not
necessarily true and causes apparmor to break when only the compatibility
patch is applied.
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Kees Cook <kees@ubuntu.com>
http://bugs.launchpad.net/bugs/968956
The parser is incorrectly generating network rules for kernels that cannot
support them. This occurs on kernels with the new features directory
but not the compatibility patches applied.
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-By: Steve Beattie <sbeattie@ubuntu.com>
The cache test is failing because it assumes that kernel features are
stored in a file instead of a directory.
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-By: Steve Beattie <sbeattie@ubuntu.com>
On newer kernels the features directory causes the creation of a
cache/.features file that contains newline characters. This causes the
feature comparison to fail, because get_flags_string() uses fgets(),
which stops reading the features file after the first newline.
This causes the features comparison to compare a single line of the
file against the full kernel feature directory, resulting in caching
failure.
Worse, this also means the cache won't get updated, as the parser doesn't
change what gets cached after the .features file has been created.
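A minimal illustration of the difference (hypothetical helper name):
fgets() returns at the first '\n', truncating a multi-line features file,
while fread() collects the whole file for comparison.

  #include <cstdio>
  #include <string>

  static std::string read_features(const char *path)
  {
      std::string features;
      FILE *f = fopen(path, "r");
      if (!f)
          return features;

      char buf[4096];
      size_t n;
      // fread() keeps reading past newlines; the old fgets() call only
      // ever returned the first line of the file.
      while ((n = fread(buf, 1, sizeof(buf), f)) > 0)
          features.append(buf, n);
      fclose(f);
      return features;
  }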
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-By: Steve Beattie <sbeattie@ubuntu.com>
Newer versions of AppArmor use a features directory instead of a file;
update the parser to use this to determine the features and match string.
This is just a first pass to get things up quickly. A much
more comprehensive rework that can parse and use the full information
set is needed.
Signed-off-by: John Johansen <john.johansen@canonical.com>
The removal of deny information is a one-way operation that can result
in a smaller dfa, but also results in a dfa that should not be used in
future operations, because the deny rules from the precomputed dfa would
not get applied.
For now, default the filtering out of deny information to off, as it takes
extra time and seldom results in further state reduction.
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Kees Cook <kees@ubuntu.com>
Previously permission information was thrown away early and permissions
were packed to their CHFA form at the start of DFA construction. Because
of this, permission hashing to set up the initial DFA partitions was
required, as x transition conflicts, etc. could not be resolved.
Move the mapping of permissions to CHFA construction, and track the full
permission set through DFA construction. This allows removal of the
perm_hashing hack, which prevented a full minimization from happening
in some DFAs. It could also result in x conflicts not being correctly
detected, and deny rules not being fully applied in some situations.
E.g.
pre full minimization
Created dfa: states 33451
Minimized dfa: final partitions 17033
with full minimization
Created dfa: states 33451
Minimized dfa: final partitions 9550
Dfa minimization no states removed: partitions 9550
The tracking of deny rules through to the completed DFA construction creates
a new class of states: states that are marked as accepting
(they carry permission information) but are in fact non-accepting, as they
only carry deny information. We add a second minimization pass where such
states have their permission information cleared and are thus moved into the
non-accepting partition.
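A toy sketch of that second pass (hypothetical types, not the parser's
internals): any "accepting" state whose permissions consist of deny bits
only is cleared, making it indistinguishable from a non-accepting state
when the partitions are formed.

  #include <cstdint>
  #include <vector>

  struct Perms {
      uint32_t allow;
      uint32_t deny;
  };

  struct State {
      Perms perms;
  };

  // An "accepting" state carrying only deny information grants nothing,
  // so clear it and let minimization fold it into the non-accepting
  // partition.
  static void clear_deny_only_states(std::vector<State> &states)
  {
      for (State &s : states) {
          if (s.perms.allow == 0 && s.perms.deny != 0)
              s.perms = Perms{0, 0};
      }
  }

  int main()
  {
      std::vector<State> states = { { {0, 0x4} }, { {0x2, 0} } };
      clear_deny_only_states(states);
      // The first state had deny bits only, so it is no longer accepting.
      return (states[0].perms.allow | states[0].perms.deny) ? 1 : 0;
  }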
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Kees Cook <kees@ubuntu.com>
Profile loads when specifying namespaces currently conflict with caching.
If the profile (ignoring the specified namespace) is in the cache, then
the cached profile will be loaded, replacing the profile in the current
namespace instead of loading the profile to the new namespace.
Fix this by disabling caching when a namespace is specified, forcing the
profile to be compiled.
NOTE: this will not affect profiles loaded from within a namespace using
either the same or a separate directory as the base to load a namespace
from. This only affects loading profiles directly into a child
namespace.
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Kees Cook <kees@ubuntu.com>
Currently the cache location is fixed and links are needed to move it.
Add an option that can be set in the apparmor_parser.conf file so distros
can locate the cache wherever makes sense for them.
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Kees Cook <kees@ubuntu.com>
requested and fails to be created. Also don't make the
warning output conditional on the showcache flag, as we
should be showing warnings/errors by default.
Signed-off-by: John Johansen <john.johansen@canonical.com>
to /etc/apparmor/parser.conf (NOTE: an option to allow changing this is not
currently provided).
Signed-off-by: John Johansen <john.johansen@canonical.com>
Allow dumping out which states were dropped during partition minimization
and which state became the partition's representative state.
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-By: Steve Beattie <sbeattie@ubuntu.com>
This is a rather large rearrangement of how a subset of the parser global
variables are defined. Right now, there are unit tests built without
linking against parser_main.c. As a result, none of the globals defined in
parser_main.c could be used in the code that is built for unit tests
(misc, regex, symtab, variable). To get a clean build, either stubs needed
to be added to "#ifdef UNIT_TEST" blocks in each .c file, or we had to
depend on link-time optimizations that would throw out the unused routines.
First, this is a problem because all the compile-time warnings had to be
explicitly silenced, so reviewing the build logs becomes difficult on
failures, and we can potentially (in really unlucky situations) test
something that isn't actually part of the "real" parser.
Second, not all compilers will allow this kind of linking (e.g. mips gcc),
and the missing symbols at link time will fail the entire build even though
they're technically not needed.
To solve all of this, I've moved all of the global variables used in lex,
yacc, and main to parser_common.c, and adjusted the .h files. On top of
this, I made sure to fully link the tst builds so all symbols are resolved
(including the aare lib) and removed only the tst build-log silencing (for now,
deferring to a future patchset to consolidate the build silencing).
Signed-off-by: Kees Cook <kees.cook@canonical.com>
If the apparmor_parser is updated (outside of current packaging), when
doing profile loads it will use the existing cache of compiled profiles,
instead of forcing a recompile on profiles.
This can cause apparmor to load bad policy if the parser contains a bug
fix for the previous version of the parser.
This can be worked around in packaging by invalidating the cache and
forcing a profile reload when the parser is upgraded.
Signed-off-by: John Johansen <john.johansen@canonical.com>
Older versions of the apparmor kernel patches didn't handle receiving
network tables of a larger size than expected.
Allow the parser to detect the kernel version and override the AF_MAX
value for those kernels.
This also replaces the hack using a hardcoded limit of 36 for kernels
missing the features flag.
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Kees Cook <kees.cook@canonical.com>
When doing permission merging in the dfa minimization phase, the information
about whether a rule is dominant or not has been lost, so the merge of
xtransitions cannot be handled correctly.
When two conflicting x transitions are merged the results are unpredictable
and not currently detected. So default dfa minimization to set up its
initial partitions with permission hashing; this ensures that dfa states
with different xtransitions will never be merged during minimization and
thus will not result in a conflict.
x permission checking is still enforced at the dfa creation phase, where
the original information is available to check whether the conflicting
permissions came from exact match or re rules, so that conflict resolution
can be properly applied.
The end result is that dfa minimization does not result in a truly minimal
dfa (though the minimization phase is also slightly faster).
Signed-off-by: John Johansen <john.johansen@canonical.com>
Add short options to turn on all stats and all progress indicators,
and also allow adding a "no-" prefix to dump options to allow subtracting
individual options when the short options are used.
E.g.
-D stats -D no-expr-simplify
Move the -O and -D options into tables that keep the option and its
description. This will help keep the options consistent and the descriptions
up to date, as all the information is now in one place.
Previously the options and descriptions kept getting out of sync, as all the
relevant parts were spread out.
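A sketch of what such a table could look like (hypothetical names and flag
bits; only "stats" and "expr-simplify" are taken from the example above):
each entry keeps the name, flag bit, and description together, and a "no-"
prefix clears the bit instead of setting it.

  #include <cstdio>
  #include <cstring>

  struct dump_option {
      const char *name;
      unsigned int flag;        // bit in the dump mask
      const char *description;  // printed by the help text
  };

  static const dump_option dump_options[] = {
      { "stats",         1u << 0, "show all compile statistics" },
      { "expr-simplify", 1u << 1, "dump expression simplification info" },
  };

  // Apply one -D argument: a "no-" prefix subtracts the option instead
  // of adding it, as in "-D stats -D no-expr-simplify".
  static void apply_dump_arg(const char *arg, unsigned int &mask)
  {
      bool clear = strncmp(arg, "no-", 3) == 0;
      const char *name = clear ? arg + 3 : arg;

      for (const dump_option &opt : dump_options) {
          if (strcmp(name, opt.name) == 0) {
              if (clear)
                  mask &= ~opt.flag;
              else
                  mask |= opt.flag;
              return;
          }
      }
      fprintf(stderr, "unknown dump option: %s\n", arg);
  }

  int main()
  {
      unsigned int mask = 0;
      apply_dump_arg("stats", mask);
      apply_dump_arg("no-expr-simplify", mask);
      printf("dump mask: 0x%x\n", mask);  // prints 0x1
      return 0;
  }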
improves minimization performance, it can slow down total creation time and
result in larger compressed dfas.
This is because it results in the dfa not being completely minimized, which
with the current O(n^2) dfa table compression algorithm can result in slower
compressed dfa generation.
The first hash does hashing based on just state transitions, which always
results in a performance improvement.
The second does hashing based on the accept permissions, which can create
more initial states but can result in not being able to achieve a true
minimum dfa. This can also lead to slowing down total dfa creation, because
while minimization is faster, compression can take longer if the dfa isn't
completely minimized.
Permission hashing is currently required, as minimization does not accumulate
redundant Node permissions.