Commit graph

100 commits

Kees Cook
4eb5fa017d fix missing long opt arg value
Using --subdomainfs without an argument triggers a segfault. This was due
to the long option missing the "has_arg" flag.

Signed-off-by: Kees Cook <kees@ubuntu.com>
Acked-by: John Johansen <john.johansen@canonical.com>
2013-06-26 11:26:43 -07:00
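
As an illustration of the has_arg issue above (a minimal sketch, not the
parser's actual option table): a long option that takes a value must be
declared with required_argument, otherwise optarg is left NULL and using
it can segfault.

  #include <getopt.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      /* "subdomainfs" takes a value, so has_arg must be
       * required_argument; with no_argument, optarg stays NULL. */
      static struct option long_opts[] = {
          { "subdomainfs", required_argument, NULL, 'f' },
          { NULL, 0, NULL, 0 }
      };
      int c;

      while ((c = getopt_long(argc, argv, "f:", long_opts, NULL)) != -1) {
          if (c == 'f')
              printf("subdomainfs = %s\n", optarg);
      }
      return 0;
  }

Invoked as ./a.out --subdomainfs=/sys/kernel/security, this prints the value
instead of crashing.
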
John Johansen
b643a42dfd This is a minimal fix to apparmor 2.8 for cache failures when the feature
file is larger than the feature buffer used for cache version comparison.

Ideally this would be dynamically allocated but for 2.8 just bumping the
buffer size is the quick fix.

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Steve Beattie <sbeattie@ubuntu.com>
2013-05-02 11:32:56 -07:00
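
For illustration, a rough sketch of the dynamic-allocation approach the
message says would be ideal (function name and error handling are
assumptions, not the 2.8 code): size the buffer from the file itself rather
than using a fixed array.

  #include <stdio.h>
  #include <stdlib.h>

  /* Illustrative only: read a features file into a heap buffer sized to
   * the file, rather than a fixed-size buffer that can be too small. */
  static char *read_features(const char *path)
  {
      FILE *f = fopen(path, "r");
      char *buf = NULL;
      long len;

      if (!f)
          return NULL;
      if (fseek(f, 0, SEEK_END) == 0 && (len = ftell(f)) >= 0) {
          rewind(f);
          buf = malloc(len + 1);
          if (buf) {
              size_t n = fread(buf, 1, len, f);
              buf[n] = '\0';
          }
      }
      fclose(f);
      return buf;
  }
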
John Johansen
c0b5035b1a apparmor: abstract out the directory walking routine
The apparmor_parser has 3 different directory walking routines. Abstract
them out and use a single common routine.

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-By: Steve Beattie <sbeattie@ubuntu.com>
2012-08-16 16:26:03 -07:00
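
A minimal sketch of what such a common walking routine can look like (the
name and callback signature here are assumptions, not the parser's actual
interface):

  #include <dirent.h>

  /* Illustrative: visit every non-hidden entry in dir, handing its name
   * to a caller-supplied callback. */
  static int walk_dir(const char *dir, void *data,
                      int (*cb)(const char *name, void *data))
  {
      DIR *d = opendir(dir);
      struct dirent *ent;
      int rc = 0;

      if (!d)
          return -1;
      while (rc == 0 && (ent = readdir(d)) != NULL) {
          if (ent->d_name[0] == '.')
              continue;    /* skip ".", ".." and hidden files */
          rc = cb(ent->d_name, data);
      }
      closedir(d);
      return rc;
  }
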
John Johansen
9c42360b34 apparmor: correct apparmor_parser -N command privilege
Fix the apparmor_parser's -N command (which dumps the list of profile
names found in a policy file) to be available without privilege, and
also make it be recognized as a command instead of an option so that
it can conflict with -a, -r, -R, -S and -o.

Currently it can be specified with these commands but will cause the
parser to short circuit, just dumping the names and not doing the actual
profile compile or load.

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-By: Steve Beattie <sbeattie@ubuntu.com>
2012-08-13 16:59:00 -07:00
John Johansen
55d6f869fc apparmor: add clearing the profile cache when inconsistent
Add the ability to clear out the binary profile cache. This removes the
need to have a separate script to handle the logic of checking and
removing the cache if it is out of date.

The parser already does all the checking to determine cache validity,
so it makes sense to allow the parser to clear out an inconsistent cache
when it has been instructed to update the cache.

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-By: Steve Beattie <sbeattie@ubuntu.com>
2012-08-13 16:58:33 -07:00
John Johansen
d64d860c93 The previous patch to fix policy compilation around the network flag had a
serious flaw. The test for the network flag was being applied against both
the kernel flags and the cache flags. This means that if either the kernel
or the cache did not have the flag set then network mediation would be
turned off.

Thus if a kernel was booted without the flag, and a cache was generated
based on that kernel and then the system was rebooted into a kernel with
the network flag present, the parser on generating the new policy would
detect the old cache did not support network and turn it off for the
new policy as well.

This can be fixed by either removing the old cache first or regenerating
the cache twice, as the first generation will write that networking is
supported in the cache (even though the policy will have it disabled),
and the second generation will generate the correct policy.

The following patch moves the test so that it is applied only to the
kernel's flag set.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2012-07-17 16:03:32 -07:00
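
A hedged sketch of the corrected check (variable and feature names are
illustrative): only the kernel's flag set decides whether network mediation
is compiled in; the flags recorded in an old cache no longer factor in.

  #include <stdbool.h>
  #include <string.h>

  /* Illustrative only: base the decision on the kernel feature string,
   * not on the flags recorded in an old cache. */
  static bool kernel_supports_network(const char *kernel_flags)
  {
      return kernel_flags && strstr(kernel_flags, "network") != NULL;
  }
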
John Johansen
3d4a98bed9 Fix the parser so it checks for the presence of the network feature in the
compatibility interface. Previously it was assuming that if the compatibility
interface was present then network rules were also present; this is not
necessarily true and causes apparmor to break when only the compatibility
patch is applied.

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Kees Cook <kees@ubuntu.com>
2012-07-01 01:36:37 -07:00
John Johansen
6f27ba3abb Fix protocol error when loading policy to kernels without compat patches
http://bugs.launchpad.net/bugs/968956

The parser is incorrectly generating network rules for kernels that cannot
support them.  This occurs on kernels that have the new features directory
but do not have the compatibility patches applied.

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-By: Steve Beattie <sbeattie@ubuntu.com>
2012-04-11 16:03:21 -07:00
John Johansen
c8e134930f Fix the "Kernel features are written to cache:" test
The cache test is failing because it assumes that kernel features are
stored in a file instead of a directory.

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-By: Steve Beattie <sbeattie@ubuntu.com>
2012-03-09 04:25:03 -08:00
John Johansen
3876299fa0 Fix caching when used with a newer kernel with the feature directory
On newer kernels the features directory causes the creation of a
cache/.feature file that contains newline characters.  This causes the
feature comparison to fail, because get_flags_string() uses fgets, which
stops reading the feature file after the first newline.

This causes the features comparison to compare a single line of the
file against the full kernel feature directory, resulting in caching
failure.

Worse, this also means the cache won't get updated, as the parser doesn't
change what set gets cached after the .feature file gets created.

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-By: Steve Beattie <sbeattie@ubuntu.com>
2012-03-09 04:24:20 -08:00
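
The mechanics of the bug, as a minimal illustration rather than the parser's
code: fgets() returns at the first newline, so only the first line of a
multi-line features file is ever read for the comparison.

  #include <stdio.h>
  #include <string.h>

  /* fgets() stops at the first '\n', so with a multi-line features file
   * buf holds just one line, and a comparison against the full kernel
   * feature set can never succeed. */
  static size_t read_first_line(FILE *f, char *buf, int len)
  {
      if (!fgets(buf, len, f))
          return 0;
      return strlen(buf);    /* at most one line, newline included */
  }
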
John Johansen
e61b7b9241 Update the copyright dates for the apparmor_parser
Signed-off-by: John Johansen <john.johansen@canonical.com>
2012-02-24 04:21:59 -08:00
John Johansen
df46234c55 Generate the features list from the features directory
Newer versions of AppArmor use a features directory instead of a file;
update the parser to use this to determine the features and match string.

This is just a first pass at this to get things up quickly.  A much
more comprehensive rework that can parse and use the full information
set is needed.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2012-02-24 04:18:45 -08:00
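
A rough sketch, with assumed names, of deriving a single match string from
the entries of the features directory (the real match-string format is more
involved than this):

  #include <dirent.h>
  #include <string.h>

  /* Illustrative: flatten the names found in the features directory into
   * one string that can be used for feature matching. */
  static void features_from_dir(const char *dir, char *out, size_t outlen)
  {
      DIR *d = opendir(dir);
      struct dirent *ent;

      out[0] = '\0';
      if (!d)
          return;
      while ((ent = readdir(d)) != NULL) {
          if (ent->d_name[0] == '.')
              continue;
          strncat(out, ent->d_name, outlen - strlen(out) - 1);
          strncat(out, " ", outlen - strlen(out) - 1);
      }
      closedir(d);
  }
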
John Johansen
e7c550243c Make second minimization pass optional
The removal of deny information is a one-way operation that can result
in a smaller dfa, but also results in a dfa that should not be used in
future operations, because the deny rules from the precomputed dfa would
not get applied.

For now, default the filtering out of deny information to off, as it takes
extra time and seldom results in further state reduction.

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Kees Cook <kees@ubuntu.com>
2012-02-16 07:43:02 -08:00
John Johansen
6f95ff5637 Track full permission set through all stages of DFA construction.
Previously permission information was thrown away early and permissions
were packed to their CHFA form at the start of DFA construction.  Because
of this, permission hashing to set up the initial DFA partitions was
required, as x transition conflicts, etc. could not be resolved.

Move the mapping of permissions to CHFA construction, and track the full
permission set through DFA construction.  This allows removal of the
perm_hashing hack, which prevented a full minimization from happening
in some DFAs.  It also could result in x conflicts not being correctly
detected, and deny rules not being fully applied in some situations.

Eg.
 pre full minimization
   Created dfa: states 33451
   Minimized dfa: final partitions 17033

 with full minimization
   Created dfa: states 33451
   Minimized dfa: final partitions 9550
   Dfa minimization no states removed: partitions 9550

The tracking of deny rules through to the completed DFA construction creates
a new class of states: states that are marked as accepting (they carry
permission information) but are in fact non-accepting, as they only carry
deny information.  We add a second minimization pass where such states have
their permission information cleared and are thus moved into the
non-accepting partition.

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Kees Cook <kees@ubuntu.com>
2012-02-16 07:41:40 -08:00
John Johansen
62a7934ea6 Disable caching when a namespace is specified
Profile loads when specifying namespaces currently conflict with caching.
If the profile (ignoring the specified namespace) is in the cache, then
the cached profile will be loaded, replacing the profile in the current
namespace instead of loading the profile to the new namespace.

Fix this by disabling caching when a namespace is specified, forcing the
profile to be compiled.

NOTE: this will not affect profiles loaded from within a namespace using
      either the same or a separate directory as the base to load a namespace
      from.  This only affects loading profiles directly into a child
      namespace.

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Kees Cook <kees@ubuntu.com>
2012-01-11 17:26:51 +01:00
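
A minimal sketch of the guard being described (names are assumptions): an
explicitly specified namespace simply bypasses the cache.

  #include <stdbool.h>
  #include <stddef.h>

  /* Illustrative: the cache is keyed without the namespace, so a profile
   * destined for an explicit namespace must always be recompiled. */
  static bool may_use_cache(bool caching_enabled, const char *profile_ns)
  {
      return caching_enabled && profile_ns == NULL;
  }
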
John Johansen
5fdf33c689 Add an option to allow setting the cache's location.
Currently the cache location is fixed and links are needed to move it.
Add an option that can be set in the apparmor_parser.conf file so distros
can locate the cache wherever makes sense for them.

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Kees Cook <kees@ubuntu.com>
2012-01-11 17:25:18 +01:00
John Johansen
12a98135bf Provide a more user-friendly error message when the cache is
requested and fails to be created.  Also, don't make the
warning output conditional on the showcache flag, as we
should be showing warnings/errors by default.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2011-10-07 14:42:55 -07:00
John Johansen
4dec6cab65 Add the ability for the parser to have a basic conf file that defaults
to /etc/apparmor/parser.conf (NOTE: an option to allow changing this is not
currently provided).

Signed-off-by: John Johansen <john.johansen@canonical.com>
2011-08-09 06:52:43 -07:00
Kees Cook
e66e56b020 Add pending local commits. 2011-05-23 11:30:11 -07:00
John Johansen
627638a6cf Add debugging dump for DFA partition minimization
Allow dumping out which states were dropped during partition minimization
and which state became the partition's representative state.

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-By: Steve Beattie <sbeattie@ubuntu.com>
2011-05-20 09:26:44 -07:00
Kees Cook
6a68aa2ecb [v2: added clean-ups, backed off on some of the build silencing]
This is a rather large rearrangement of how a subset of the parser global
variables are defined. Right now, there are unit tests built without
linking against parser_main.c. As a result, none of the globals defined in
parser_main.c could be used in the code that is built for unit tests
(misc, regex, symtab, variable). To get a clean build, either stubs needed
to be added to "#ifdef UNIT_TEST" blocks in each .c file, or we had to
depend on link-time optimizations that would throw out the unused routines.

First, this is a problem because all the compile-time warnings had to be
explicitly silenced, so reviewing the build logs becomes difficult on
failures, and we can potentially (in really unlucky situations) test
something that isn't actually part of the "real" parser.

Second, not all compilers will allow this kind of linking (e.g. mips gcc),
and the missing symbols at link time will fail the entire build even though
they're technically not needed.

To solve all of this, I've moved all of the global variables used in lex,
yacc, and main to parser_common.c, and adjusted the .h files. On top of
this, I made sure to fully link the tst builds so all symbols are resolved
(including the aare lib) and removed only the tst build-log silencing (for now,
deferring to another future patchset to consolidate the build silencing).

Signed-off-by: Kees Cook <kees.cook@canonical.com>
2011-05-13 02:12:49 -07:00
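
A hedged illustration of the two patterns (file and variable names here are
examples only): the per-file UNIT_TEST stub being removed, and the single
shared definition that replaces it.

  /* Before: each tested .c file carried stubs so the unit-test binary
   * would link, e.g.
   *
   *   #ifdef UNIT_TEST
   *   int preprocess_only;   // stub, never the real flag
   *   #endif
   *
   * After: one real definition lives in a common file and everything
   * else just declares it. */

  /* parser_common.c (illustrative) */
  int preprocess_only = 0;

  /* parser.h (illustrative) */
  extern int preprocess_only;
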
Kees Cook
abcf66292d Description: adjust for missing or incorrect includes.
Author: Kees Cook <kees@ubuntu.com>
Acked-By: Steve Beattie <sbeattie@ubuntu.com>
2011-05-02 13:34:58 -07:00
John Johansen
55bad42088 apparmor_parser doesn't use its time stamp when determining if cache is stale
If the apparmor_parser is updated (outside of current packaging), then when
doing profile loads it will use the existing cache of compiled profiles
instead of forcing a recompile of the profiles.

This can cause apparmor to load bad policy if the parser contains a bug
fix for the previous version of the parser.

This can be worked around in packaging by invalidating the cache and
forcing a profile reload when the parser is upgraded.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2011-03-08 14:49:03 -08:00
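
A minimal sketch of the missing check being described here (paths and names
are illustrative): a cached profile should also count as stale when the
parser binary is newer than it.

  #include <stdbool.h>
  #include <sys/stat.h>

  /* Illustrative: the cache entry is fresh only if it is newer than the
   * parser binary that would otherwise recompile it. */
  static bool newer_than_parser(const char *cache_file, const char *parser_bin)
  {
      struct stat c, p;

      if (stat(cache_file, &c) != 0 || stat(parser_bin, &p) != 0)
          return false;
      return c.st_mtime > p.st_mtime;
  }
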
John Johansen
174c89f772 override AF_MAX for kernels that don't support proper masking
Older versions of the apparmor kernel patches didn't handle receiving
network tables of a larger size than expected.

Allow the parser to detect the kernel version and override the AF_MAX
value for those kernels.

This also replaces the hack using a hardcoded limit of 36 for kernels
missing the features flag.

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Kees Cook <kees.cook@canonical.com>
2011-03-03 15:45:10 -08:00
John Johansen
9df0a29e9e Update the copyright message in apparmor_parser --version 2011-02-22 14:58:49 -08:00
Kees Cook
723a20ba7d as ACKed on IRC, drop the unused $Id$ tags everywhere 2010-12-20 12:29:10 -08:00
John Johansen
283abda83c Default permission-hashing for dfa creation to on, to fix a bug
When doing permission merging in the dfa minimization phase, the information
about whether a rule is dominant or not has been lost, so the merge of
xtransitions cannot be handled correctly.

When two conflicting x transitions are merged the results are unpredictable
and not currently detected.  So default dfa minimization to set up its
initial partitions with permission hashing; this ensures that dfa states
that have different xtransitions in the minimization stage will never
be merged and thus will not result in a conflict.

x permission checking is still enforced at the dfa creation phase, where
the original information is available to check whether the conflicting
permissions came from exact match or re rules, so that conflict resolution
can be properly applied.

The end result is that dfa minimization does not result in a truly minimal
dfa (the minimization phase is also slightly faster).

Signed-off-by: John Johansen <john.johansen@canonical.com>
2010-12-20 11:58:44 -08:00
John Johansen
14e7d94701 Add ability to dump unique permission sets 2010-11-09 11:56:28 -08:00
John Johansen
318351376c Add the ability to dump NodeSet to dfa state mapping 2010-11-09 11:55:40 -08:00
John Johansen
f1a3f66515 Add -D stats and -D progress options
Add short options to turn on all stats and all progress indicators; also
allow adding a "no-" prefix to dump options, to allow subtracting
individual options when the short options are used.

eg.
  -D stats -D no-expr-simplify
2010-11-09 11:53:38 -08:00
John Johansen
6b4dff4bee Move -O and -D options and documentation into tables
Move the -O and -D options into tables that keep the option and its
description together.  This will help keep the options consistent and the
descriptions up to date, as all the information is now in one place.

Previously the options and descriptions kept getting out of sync, as all the
relevant parts were spread out.
2010-11-09 11:52:38 -08:00
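
A hedged sketch of the table shape being described (field names and entries
are assumptions; "expr-simplify" is borrowed from the -D example earlier in
this log):

  /* Illustrative: keep each -D/-O flag, its help text and its bit in one
   * row, so parsing and the usage output cannot drift apart. */
  struct flagtable {
      const char *name;    /* e.g. "expr-simplify" */
      const char *desc;    /* one-line description for the help text */
      int flag;            /* bit set when the option is enabled */
  };

  static struct flagtable optflag_table[] = {
      { "expr-simplify", "simplify expression trees before DFA build", 0x1 },
      { "minimize",      "minimize the generated DFA",                 0x2 },
      { NULL, NULL, 0 }
  };
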
John Johansen
de2dec2bec Reduce the number of -O flag options by factoring out the no- prefix
Factor out the "no-" prefix so that optimization flags and their no-
counterparts are handled by the same code.
2010-11-09 11:50:13 -08:00
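
A minimal sketch of the shared "no-" handling this refers to (illustrative,
not the parser's code):

  #include <string.h>

  /* Illustrative: strip a leading "no-" and report the inverted sense,
   * so each flag needs only one table entry. */
  static int split_no_prefix(const char *arg, const char **name)
  {
      int invert = strncmp(arg, "no-", 3) == 0;

      *name = invert ? arg + 3 : arg;
      return invert;
  }
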
John Johansen
fae7cac15c Rename trans-XXXX transition to compress- compression
trans- isn't a very good name for this phase of compilation.  It is the
compression phase, so rename trans- to compress- to reflect this.
2010-11-09 11:49:18 -08:00
John Johansen
8972e4f577 Generic cleanup pass of -D and -O options 2010-11-09 11:48:53 -08:00
John Johansen
a84844cea5 Do not use permission hashing for minimization by default. While this
improves minimization performance, it can slow down total creation time and
result in larger compressed dfas.

This is because it results in the dfa not being completely minimized, which
with the current O(n^2) dfa table compression algorithm can result in slower
compressed dfa generation.
2010-11-09 11:27:36 -08:00
John Johansen
a949b075b4 The dfa flags currently are a weird mix of positive and negative assertions.
It's cleaner just to have them all assert one way and let the command line
options apply them correctly.
2010-11-09 11:23:45 -08:00
John Johansen
36e99af7fb Split dfa minimizing hashing into two separately controllable hashes.  The
first hash does hashing on just the state transitions, which always results
in a performance improvement.

The second does hashing based off of accept permissions, which can create
more initial states but can result in not being able to achieve a true
minimum dfa.  This can also lead to slowing down total dfa creation because,
while minimization is faster, compression can take longer if the dfa isn't
completely minimized.

Permission hashing is currently required, as minimization does not accumulate
redundant Node permissions.
2010-11-09 11:22:54 -08:00
Kees Cook
485df894ab This fixes a few typos in documentation that lintian noticed. 2010-11-04 14:27:30 -07:00
Steve Beattie
60b014667a When loading without the 2.4 compatibility patch, the parser needs the
following patch or it will explode when it can't find the "features"
file.

Bug: https://bugs.launchpad.net/ubuntu/+source/apparmor/+bug/626984
From: Kees Cook <kees@ubuntu.com>
2010-09-16 10:24:50 -07:00
Kees Cook
862836548d Fix write_cache to not be a privileged operation so that the caching tests
can be added to the build. Update caching tests to detect non-ns-resolution
filesystems and back off on the timing test.
2010-09-14 12:45:34 -07:00
Kees Cook
feb70284bc Effectively revert revno 1471, and fix the misdetected error condition
so that caching will work again without needing kernel_load.
2010-09-14 12:38:38 -07:00
Kees Cook
3a1fbb49f4 fix up typo and add extern for update_mru_tstamp 2010-09-14 12:37:59 -07:00
John Johansen
02e86864da This patch changes how cache validation is done, by moving it to post
parsing and pre-compilation of policy.  This allows finding the most
recent text timestamp during parsing, and this is then compared to
the cache file timestamp.

While this is slightly slower than the cache file check that only
validated against the profile file, it fixes the bug where abstraction
updates do not cause the cache file to become invalid.
2010-09-14 12:22:02 -07:00
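
A rough sketch of the most-recent-timestamp tracking described here
(update_mru_tstamp is mentioned by name in a nearby commit; this signature
and layout are assumptions):

  #include <sys/stat.h>
  #include <time.h>

  /* Illustrative: remember the newest mtime seen across the profile and
   * every abstraction parsed for it; the cache file is valid only if it
   * is newer than this value. */
  static time_t mru_tstamp;

  static void update_mru_tstamp(const char *path)
  {
      struct stat st;

      if (stat(path, &st) == 0 && st.st_mtime > mru_tstamp)
          mru_tstamp = st.st_mtime;
  }

Calling this for each file touched during parsing yields the newest source
timestamp to compare against the cache file's mtime.
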
John Johansen
8762c1dcfb The upstream 2.6.36 version of apparmor doesn't support network rules.
Add a flag to the parser controlling the output of network rules,
and warn per profile when network rules are not going to be enforced.
2010-08-26 10:37:46 -07:00
John Johansen
1f1a303457 The upstream 2.6.36 version of apparmor is missing the match file,
so the parser doesn't set matching options correctly.

Set minimal defaults that will allow the parser to load policy
on 2.6.36 kernels.
2010-08-26 10:36:45 -07:00
John Johansen
d72422b369 When doing debugging/building of dfa graphs I generally use -QT; however,
this results in

Unable to open output file - Success

being output to standard error.

This occurs because, despite specifying kernel_load = 0, the kernel load
parts are still being done, and failing.
2010-08-17 08:03:07 -07:00
John Johansen
5c8051994b Make -q quiet the "can not update cache" warnings 2010-08-04 09:52:54 -07:00
John Johansen
b5c780d2a1 Remove pcre and update tests where necessary 2010-07-31 16:00:52 -07:00
Kees Cook
624aee531a Fix many compile-time warnings.
Start replacing RPM with lsb-release.
Drop old references to CVE.
Remove unused code.
2010-07-26 09:22:45 -07:00
John Johansen
4be07c3265 This adds a basic debug dump for the conversion of each rule in a profile to its expression
tree.  It is limited in that it doesn't currently handle the permissions of a rule.

Conversion output presents an aare -> pcre conversion followed by one or more
expression tree rules, governed by what the rule does.
eg.
  aare: /**   ->   /[^/\x00][^\x00]*
  rule: /[^/\x00][^\x00]*  ->  /[^\0000/]([^\0000])*

eg.
echo "/foo { /** rwlkmix, } " | ./apparmor_parser -QT -D rule-exprs -D expr-tree

aare: /foo   ->   /foo
aare: /**   ->   /[^/\x00][^\x00]*
rule: /[^/\x00][^\x00]*  ->  /[^\0000/]([^\0000])*

rule: /[^/\x00][^\x00]*\x00/[^/].*  ->  /[^\0000/]([^\0000])*\0000/[^/](.)*


DFA: Expression Tree
(/[^\0000/]([^\0000])*(((((((((((((<513>|<2>)|<4>)|<8>)|<16>)|<32>)|<64>)|<8404992>)|<32768>)|<65536>)|<131072>)|<262144>)|<524288>)|<1048576>)|/[^\0000/]([^\0000])*\0000/[^/](.)*((<16>|<32>)|<262144>))


This simple example shows many things:
1. The profile name undergoes pcre conversion.  But since no regular expressions were found
   it doesn't generate any expr rules.
2. /** is converted into the pcre expression /[^\0000/]([^\0000])*
3. The pcre expression /[^\0000/]([^\0000])* is converted into two rules that are then
   converted into expression trees.

   The reason for this cannot be seen from the output, as it is actually triggered by
   permission separation for the rule.  In this case the link permission is separated
   into what is shown as the second rule: statement.
4. DFA: Expression Tree dump shows how these rules are combined together

You will notice that the rule conversion statement is fairly redundant currently, as it just
shows pcre to expression tree pcre.  This will change when direct aare parsing occurs,
but it currently serves to verify the pcre conversion step.


It is not the prettiest patch, as it's touching some ugly code that is scheduled to be cleaned
up/replaced, e.g. convert_aaregex_to_pcre is going to be replaced with native parse conversion
from an aare straight to the expression tree, and dfaflag passing will become part of the
rule set.
2010-07-23 13:29:35 +02:00