Commit graph

24 commits

Author SHA1 Message Date
Eric Chiang
cc09794fbd parser: determine xmatch priority based on smallest DFA match
The length of an xmatch is used to prioritize multiple profiles that
match the same path, with the intent that the more specific match wins.
Currently, the length of an xmatch is computed as the position of the
first regex character.

While trying to work around issues with no_new_privs by combining
profiles, we noticed that the xmatch length computation doesn't work as
expected for multiple regexes. Consider the following two profiles:

    profile all /** { }
    profile bins /{,usr/,usr/local/}bin/** { }

xmatch_len is currently computed as "1" for both profiles, even though
"bins" is clearly more specific.

When determining the length of a regex, compute the smallest possible
match and use that for xmatch priority instead of the position of the
first regex character.
2019-02-08 13:51:02 -08:00
Steve Beattie
768f11b497 parser: revert changes from commit rev 3248
The changes to the parser made in commit rev 3248 were accidental and
not intended to be committed.
2015-10-14 13:49:26 -07:00
John Johansen
99322d3978 Add LSS presentations about apparmor security model 2015-10-13 15:39:17 -07:00
John Johansen
8efb5850f2 Move rule simplification into the tree construction phase
The current rule simplification algorithm has issues that need to be
addressed in a rewrite, but it is still often a win, especially for
larger profiles.

However, doing rule simplification as a single pass limits what it can
do. We default to right simplification first because this has
historically shown the most benefit, for two reasons:
  1. It allowed better grouping of the split-out accept nodes that we
     used to do (changed in previous patches).
  2. Trailing regexes like
       /foo/**,
       /foo/**.txt,
     can be combined, and they are the largest source of node set
     explosion.

However, the move to unique node sets eliminates reason 1 and forces
reason 2 to work only within a single unique permission set during the
right-side factoring pass, which still incurs the penalty of walking
the whole tree looking for potential nodes to factor.

Moving tree simplification into the construction phase removes the need
for the right-side factoring pass to walk other node sets that will
never combine, and since simplification now happens during construction
it can run before the cat and permission nodes are added, reducing the
set of nodes to examine by another two.

We do lose the ability to combine nodes from different sets during the
left-factoring pass, but experimentation shows that doing
simplification only within the unique permission sets achieves most of
the factoring that a single global pass would.
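
For illustration, a minimal sketch of the kind of rewrite the
right-factoring pass performs, using hypothetical expr classes (the
parser's real ones differ):

    // Right factoring hoists a shared trailing subtree out of an
    // alternation:  (a.c | b.c)  ==>  (a|b).c
    struct Expr { virtual ~Expr() {} };
    struct Cat : Expr { Expr *l, *r; Cat(Expr *l, Expr *r) : l(l), r(r) {} };
    struct Alt : Expr { Expr *l, *r; Alt(Expr *l, Expr *r) : l(l), r(r) {} };

    bool same(const Expr *a, const Expr *b);  // structural equality, assumed

    Expr *right_factor(Alt *alt) {
        Cat *lc = dynamic_cast<Cat *>(alt->l);
        Cat *rc = dynamic_cast<Cat *>(alt->r);
        if (lc && rc && same(lc->r, rc->r))
            return new Cat(new Alt(lc->l, rc->l), lc->r);  // share the suffix
        return alt;  // nothing to factor
    }

Doing this during construction means the pass only ever sees the nodes
of one unique permission set, instead of walking the whole tree.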

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Steve Beattie <steve@nxnw.org>
2015-06-25 16:38:04 -06:00
John Johansen
832455de2c Change expr tree construction so that rules are grouped by perms
Currently rules are added to the expression tree in order, and then
tree simplification and factoring are done. This forces simplification
to "search" through the tree to find rules with the same permissions
during right factoring, and depending on the order of factoring it may
not be able to group all rules with the same permissions.

Instead of having tree factoring do the work to regroup rules with the
same permissions, pregroup them as part of expr tree construction, and
only build the full tree when the dfa is constructed.
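
A minimal sketch of the pregrouping, with hypothetical types (perms_t,
Expr and friends stand in for the parser's real classes):

    #include <cstdint>
    #include <map>

    typedef uint32_t perms_t;            // hypothetical permission bitmask
    struct Expr { virtual ~Expr() {} };
    struct Alt : Expr { Expr *l, *r; Alt(Expr *l, Expr *r) : l(l), r(r) {} };
    struct Cat : Expr { Expr *l, *r; Cat(Expr *l, Expr *r) : l(l), r(r) {} };
    struct Accept : Expr { perms_t perms; Accept(perms_t p) : perms(p) {} };

    std::map<perms_t, Expr *> groups;    // one subtree per unique perm set

    void add_rule(Expr *rule, perms_t perms) {
        Expr *&slot = groups[perms];
        slot = slot ? new Alt(slot, rule) : rule;  // grouped as we build
    }

    // The full tree is only assembled when the dfa is constructed.
    Expr *build_tree() {
        Expr *tree = nullptr;
        for (auto &g : groups) {
            Expr *e = new Cat(g.second, new Accept(g.first));
            tree = tree ? new Alt(tree, e) : e;
        }
        return tree;
    }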

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Steve Beattie <steve@nxnw.org>
2015-06-25 16:38:02 -06:00
John Johansen
5a9300c91c Move the permission map into the rule set
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Steve Beattie <steve@nxnw.org>
2015-06-25 15:54:15 -06:00
John Johansen
292f3be438 switch away from doing an individual accept node for each perm bit
Accept nodes per perm bit were done from the very beginning in the
false belief that they would help produce minimized dfas, because nfa
states could share partially overlapping permissions.

In reality they make tree factoring harder, result in longer nfa state
sets during dfa construction, and do not produce a minimized dfa.

Moving to unique permission sets allows us to minimize the number of
node sets, and helps avoid recreating each set type multiple times
during dfa construction.
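
Reusing the hypothetical sketch types from the pregrouping example
above (path and the AA_MAY_* bits are likewise placeholders), the
change is roughly:

    // old: one accept node per permission bit of the rule
    //   "/foo rw"  ->  path . (accept(AA_MAY_READ) | accept(AA_MAY_WRITE))
    Expr *old_style = new Cat(path, new Alt(new Accept(AA_MAY_READ),
                                            new Accept(AA_MAY_WRITE)));

    // new: a single accept node carrying the unique permission set
    //   "/foo rw"  ->  path . accept(AA_MAY_READ | AA_MAY_WRITE)
    Expr *new_style = new Cat(path, new Accept(AA_MAY_READ | AA_MAY_WRITE));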

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Steve Beattie <steve@nxnw.org>
2015-06-25 14:08:55 -06:00
John Johansen
19c942e5c2 parser: split accept perm processing from rule parsing
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Steve Beattie <steve@nxnw.org>
Acked-by: Seth Arnold <seth.arnold@canonical.com>
2014-09-03 14:40:08 -07:00
John Johansen
ee7bf1dc28 parser: Refactor rule accumulation to use some helper functions
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Steve Beattie <steve@nxnw.org>
Acked-by: Seth Arnold <seth.arnold@canonical.com>
2014-09-03 14:24:37 -07:00
John Johansen
f7e12a9bc5 Convert aare_rules into a class
This cleans things up a bit and fixes a bug where not all rules were
being properly counted, so that the addition of policy_mediation rules
failed to generate the policy dfa in some cases.

Because the policy dfa is now being generated correctly, we need to
fix some tests to use the new -M flag to specify the expected feature
set of the test.

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Steve Beattie <steve@nxnw.org>
Acked-by: Seth Arnold <seth.arnold@canonical.com>
2014-04-23 10:57:16 -07:00
John Johansen
22855508e8 Add Differential State Compression to the DFA
Differential state compression encodes a state's transitions as the
difference between the state and its default state (the state it is
relative to).

This reduces the number of transitions that need to be stored in the
transition table, hence reducing the size of the dfa.  There is a
trade-off in that a single input character may have to traverse more
than one state.  This is somewhat offset by reduced table sizes providing
better locality and caching properties.

With careful encoding we can still make constant match-time guarantees.
This patch guarantees that a differentially encoded state will do at
most 3m state traversals to match an input of length m (as opposed to
exactly m state traversals for a non-differentially compressed dfa).
In practice the actual number of extra traversals is lower, because
we selectively choose which states are differentially encoded.

In addition to reducing the size of the dfa by reducing the number of
transitions that have to be stored, differential encoding reduces the
number of transitions that need to be considered by comb compression.
This can result in tighter packing, due to a reduction in sparseness,
and also reduces the time spent in comb compression, which currently
uses an O(n^2) algorithm.

Differential encoding will always result in a DFA that is smaller than
or equal in size to the unencoded DFA, and will usually improve
compilation times, with the performance improvements increasing as the
DFA gets larger.
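
A sketch of the lookup on a differentially encoded state (hypothetical
layout; the real chfa tables are comb-compressed arrays, not maps):

    #include <map>

    struct State {
        std::map<unsigned char, State *> diff;  // transitions that differ
        State *base;                            // default state, or nullptr
    };

    State *next_state(State *s, unsigned char c) {
        // A differentially encoded state stores only the transitions
        // that differ from its default state; everything else is
        // inherited by chasing the default pointer.  The encoder bounds
        // the chain length so matching input of length m stays O(m).
        while (s) {
            auto it = s->diff.find(c);
            if (it != s->diff.end())
                return it->second;
            s = s->base;
        }
        return nullptr;  // no transition: treat as the non-matching state
    }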

Eg. Given an example DFA that created 8991 states after minimization.
* If only comb compression (current default) is used

 52057 transitions are packed into a table of 69591 entries, achieving an
 efficiency of about 75% (an average of about 7.74 table entries per state),
 with a resulting compressed dfa16 size of 404238 bytes and a run time for
 the dfa compilation of
   real 0m9.037s
   user 0m8.893s
   sys  0m0.036s

* If differential encoding + comb compression is used, 8292 of the 8991
  states are differentially encoded, with 31557 transitions removed,
  resulting in

  20500 transitions packed into a table of 20675 entries, achieving an
  efficiency of about 99.2% (an average of about 2.3 table entries per
  state), with a resulting compressed dfa16 size of 207874 bytes (about
  48.6% reduction) and a run time for the dfa compilation of
   real 0m5.416s (about 40% faster)
   user 0m5.280s
   sys  0m0.040s

Repeating with a larger DFA that has 17033 states after minimization.
* If only comb compression (current default) is used

 102992 transitions are packed into a table of 137987 entries, achieving
 an efficiency of about 75% (an average of about 8.10 entries per state),
 with a resultant compressed dfa16 size of 790410 bytes and a run time
 for dfa compilation of
  real  0m28.153s
  user  0m27.634s
  sys   0m0.120s

* With differential encoding
 39374 transitions are packed into a table of 39594 entries, achieving an
 efficiency of about 99.4% (an average of about 2.32 entries per state),
 with a resultant compressed dfa16 size of 396838 bytes (about 50%
 reduction) and a run time for dfa compilation of
  real  0m11.804s (about 58% faster)
  user  0m11.657s
  sys   0m0.084s

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Seth Arnold <seth.arnold@canonical.com>
2014-01-09 16:55:55 -08:00
Steve Beattie
9c50ff9fb3 parser - terminate search early if wildcards are discovered
This patch is a very minor optimization to the search to determine
whether a given rule is an exact match or not. If a wildcard rule
(i.e.  an inexact match) is discovered, exact_match is set to 0,
so we don't need to continue the tree traversal.
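
A minimal sketch of the short-circuit (hypothetical node layout):

    struct Node { bool wildcard; Node *left, *right; };

    int exact_match(const Node *n) {
        if (!n)
            return 1;
        if (n->wildcard)
            return 0;  // inexact: no need to walk the rest of the tree
        // && short-circuits, so the right subtree is skipped as soon
        // as a wildcard is found on the left
        return exact_match(n->left) && exact_match(n->right);
    }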

Signed-off-by: Steve Beattie <steve@nxnw.org>
Acked-by: John Johansen <john.johansen@canonical.com>
2013-10-14 14:36:05 -07:00
Steve Beattie
cf57476d6b parser - Fix const char warnings
This patch addresses a bunch of the compiler string conversion warnings
that were introduced with the C++-ification patch.

Signed-off-by: Steve Beattie <steve@nxnw.org>
Acked-by: Tyler Hicks <tyhicks@canonical.com>
2013-10-01 10:59:04 -07:00
John Johansen
a34059b1e5 Convert the parser to C++
This conversion is nothing more than what is required to get it to
compile. Further improvements will come as the code is refactored.

Unfortunately, due to C++ not supporting designated initializers, the
auto generation of af names needed to be reworked, and the "netlink" and
"unix" domain socket keywords leaked in. Since these were going to be
added in separate patches, I have not bothered to do the extra work to
replace them with a temporary placeholder.

Signed-off-by: John Johansen <john.johansen@canonical.com>
[tyhicks: merged with dbus changes and memory leak fixes]
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
Acked-by: Seth Arnold <seth.arnold@canonical.com>
Acked-by: Steve Beattie <steve@nxnw.org>
2013-09-27 16:13:22 -07:00
John Johansen
66717a2aec temp fix using the 2.8 patch until the 3.0 patch is ready to land
Fix a nasty little bug that can surface in apparmor 2.8 when
hats/children profiles are used.

The matchflags in the dfa backend are not getting properly reset, which
results in a previously processed profile's match flags being used. This
is not a problem for most permissions but can result in x conflict
errors.

Note: this should not result in profiles with the wrong x transitions
being loaded, as it causes compilation to fail with an x conflict.

This is a minimal patch targeted at the 2.8 release. As such I have just
updated the delete_ruleset routine to clear the flags, as it is already
being properly called for every rule set.

Apparmor 2.9/3.0 will have a different approach where it is not possible
to reuse the flags.

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Steve Beattie <sbeattie@ubuntu.com>
2012-12-10 17:08:19 -08:00
John Johansen
37f446dd79 Fix/cleanup the permission reporting for the dfa dumps
The permission reporting did not include the full set of permission
flags and was inconsistent between the dump routines.

Report permissions as the quad (allow/deny/audit/quiet) in hex.
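
A sketch with hypothetical field names (the parser's real permission
struct differs):

    #include <cstdio>

    struct perms_t { unsigned allow, deny, audit, quiet; };

    // one consistent format for every dump routine:
    // the (allow/deny/audit/quiet) quad, in hex
    void dump_perms(FILE *os, const perms_t &p) {
        fprintf(os, "(0x%x/0x%x/0x%x/0x%x)",
                p.allow, p.deny, p.audit, p.quiet);
    }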

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-By: Steve Beattie <sbeattie@ubuntu.com>
2012-03-09 04:17:47 -08:00
John Johansen
e61b7b9241 Update the copyright dates for the apparmor_parser
Signed-off-by: John Johansen <john.johansen@canonical.com>
2012-02-24 04:21:59 -08:00
John Johansen
662ad60cd7 Extend the information dumped by -D rule-exprs to include permissions
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Kees Cook <kees@ubuntu.com>
2012-02-24 04:17:19 -08:00
John Johansen
e7c550243c Make second minimization pass optional
The removal of deny information is a one-way operation that can result
in a smaller dfa, but it also results in a dfa that should not be used
in future operations, because the deny rules from the precomputed dfa
would not get applied.

For now, filtering out deny information defaults to off, as it takes
extra time and seldom results in further state reduction.

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Kees Cook <kees@ubuntu.com>
2012-02-16 07:43:02 -08:00
John Johansen
6f95ff5637 Track full permission set through all stages of DFA construction.
Previously, permission information was thrown away early and
permissions were packed into their CHFA form at the start of DFA
construction. Because of this, permission hashing to set up the initial
DFA partitions was required, as x transition conflicts, etc. could not
otherwise be resolved.

Move the mapping of permissions to CHFA construction, and track the
full permission set through DFA construction. This allows removal of
the perm_hashing hack, which prevented a full minimization from
happening in some DFAs. It could also result in x conflicts not being
correctly detected, and deny rules not being fully applied in some
situations.

Eg.
 pre full minimization
   Created dfa: states 33451
   Minimized dfa: final partitions 17033

 with full minimization
   Created dfa: states 33451
   Minimized dfa: final partitions 9550
   Dfa minimization no states removed: partitions 9550

Tracking deny rules through to the completed DFA construction creates
a new class of states: states that are marked as accepting (they carry
permission information) but are in fact non-accepting, as they carry
only deny information. We add a second minimization pass where such
states have their permission information cleared and are thus moved
into the non-accepting partition.
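
A minimal sketch of that second pass (hypothetical types):

    #include <cstdint>
    #include <vector>

    struct perms_t { uint32_t allow, deny, audit, quiet; };
    struct State { perms_t perms; };

    void clear_deny_only(std::vector<State> &states) {
        for (State &s : states) {
            // marked accepting, but the only permission information is
            // deny/quiet: clear it so minimization can fold the state
            // into the non-accepting partition
            if (!s.perms.allow && !s.perms.audit &&
                (s.perms.deny || s.perms.quiet)) {
                s.perms.deny = 0;
                s.perms.quiet = 0;
            }
        }
        // ... followed by re-running dfa minimization
    }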

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Kees Cook <kees@ubuntu.com>
2012-02-16 07:41:40 -08:00
John Johansen
9d374d4726 Rename compressed_hfa.{c,h} and TransitionTable within them to chfa
This is done to make clear what TransitionTable is, as we will then add
matching capabilities. Renaming the files is just to make them
consistent with the class in the file.

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Kees Cook <kees@ubuntu.com>
2011-12-15 05:06:32 -08:00
John Johansen
84c0bba1ef Lindent + hand cleanups aare_rules
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-By: Steve Beattie <sbeattie@ubuntu.com>
2011-03-13 05:53:08 -07:00
John Johansen
6aad970d1c Split out compressed dfa "transition table" compression
Split hfa into hfa and compressed_hfa files.  The hfa portion focuses
on creating and manipulating hfas, while compressed_hfa is used for
creating compressed hfas that can be used/reused at run time with much
less memory usage than the full-blown hfa.

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-By: Steve Beattie <sbeattie@ubuntu.com>
2011-03-13 05:50:34 -07:00
John Johansen
298a36bffb Split out aare_rules which are used to encapsulate creating the dfa
Split out the aare_rule bits that encapsulate the conversion of
apparmor rules into the final compressed dfa.

This patch will not compile because it needs hfa to export an
interface, but hfa is going to be split, so just delay until hfa and
transtable are split and they can each export their own interface.

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-By: Steve Beattie <sbeattie@ubuntu.com>
2011-03-13 05:49:15 -07:00