Commit graph

184 commits

Author SHA1 Message Date
Seth Arnold
1285d81547 parser: Fix delete after new[] -- patch from Oleg Strikov <oleg.strikov@gmail.com> 2017-03-21 12:09:59 -07:00
Seth Arnold
5d99b5fdb5 Fix Coverity issue 56025 -- Uninitialized scalar field
Signed-off-by: Seth Arnold <seth.arnold@canonical.com>
Acked-by: John Johansen <john.johansen@canonical.com>
2016-01-19 15:07:04 -08:00
Steve Beattie
768f11b497 parser: revert changes from commit rev 3248
The changes to the parser made in commit rev 3248 were accidental and
not intended to be committed.
2015-10-14 13:49:26 -07:00
John Johansen
99322d3978 Add LSS presentations about apparmor security model 2015-10-13 15:39:17 -07:00
John Johansen
8efb5850f2 Move rule simplification into the tree construction phase
The current rule simplification algorithm has issues that need to be
addressed in a rewrite, but it is still often a win, especially for
larger profiles.

However, doing rule simplification as a single pass limits what it can
do. We default to right simplification first because this has historically
shown the most benefit, for two reasons:
  1. It allowed better grouping of the split out accept nodes that we
     used to do (changed in previous patches)
  2. because trailing regexes like
       /foo/**,
       /foo/**.txt,
     can be combined and they are the largest source of node set
     explosion.

However, the move to unique node sets eliminates 1, and forces 2 to
work within only the single unique permission set on the right-side
factoring pass, but it still incurs the penalty of walking the whole
tree looking for potential nodes to factor.

Moving tree simplification into the construction phase gets rid of
the need for the right-side factoring pass to walk other node sets
that will never combine, and since we are doing simplification during
construction we can do it before the cat and permission nodes are added,
reducing the set of nodes to look at by another two.

We do lose the ability to combine nodes from different sets during
the left factoring pass, but experimentation shows that doing
simplification only within the unique permission sets achieves most of
the factoring that a single global pass would achieve.

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Steve Beattie <steve@nxnw.org>
2015-06-25 16:38:04 -06:00
John Johansen
832455de2c Change expr tree construction so that rules are grouped by perms
Currently rules are added to the expression tree in order, and then
tree simplification and factoring are done. This forces simplification
to "search" through the tree to find rules with the same permissions
during right factoring, which, depending on the ordering of factoring, may
not be able to group all rules with the same permissions.

Instead of having tree factoring do the work to regroup rules with the
same permissions, pregroup them as part of the expr tree construction.
And only build the full tree when the dfa is constructed.

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Steve Beattie <steve@nxnw.org>
2015-06-25 16:38:02 -06:00
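
A minimal sketch of the pregrouping idea described in the commit above, assuming rules are keyed by their accept permission set; the class and type names here are illustrative, not the parser's real aare_rules interface:

  // Illustrative sketch only -- names do not match the parser's real classes.
  #include <map>

  struct Node { virtual ~Node() {} };
  struct AltNode : Node {                       // alternation: left | right
      Node *left, *right;
      AltNode(Node *l, Node *r) : left(l), right(r) {}
  };

  typedef unsigned int perms_t;                 // stand-in for an accept perm set

  class RuleSet {
      std::map<perms_t, Node *> grouped;        // one sub-tree per unique perm set
  public:
      void add_rule(Node *expr, perms_t perms) {
          std::map<perms_t, Node *>::iterator it = grouped.find(perms);
          if (it == grouped.end())
              grouped[perms] = expr;                      // first rule for this perm set
          else
              it->second = new AltNode(it->second, expr); // or the new rule into its group
      }
      // The groups are only joined into one full tree (with cat and accept
      // nodes appended per group) when the dfa is actually constructed.
  };
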
John Johansen
5a9300c91c Move the permission map into the rule set
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Steve Beattie <steve@nxnw.org>
2015-06-25 15:54:15 -06:00
John Johansen
292f3be438 switch away from doing an individual accept node for each perm bit
Accept nodes per perm bit were done from the very beginning in the
false belief that they would help produce minimized dfas, because
nfa states could share partially overlapping permissions.

In reality they make tree factoring harder, result in longer nfa
state sets during dfa construction, and do not result in a minimized
dfa.

Moving to unique permission sets allows us to minimize the number
of node sets, and helps reduce recreating each set type multiple
times during the dfa construction.

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Steve Beattie <steve@nxnw.org>
2015-06-25 14:08:55 -06:00
Tyler Hicks
afb3cd0b06 parser: Honor USE_SYSTEM make variable in libapparmor_re
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
Acked-by: Seth Arnold <seth.arnold@canonical.com>
Acked-by: Steve Beattie <steve@nxnw.org>
2015-03-25 17:09:25 -05:00
John Johansen
d22a867723 Fix compilation of audit modifiers
This fixes the incorrect compilation of audit modifiers for exec and
pivot_root as detailed in

https://launchpad.net/bugs/1431717
https://launchpad.net/bugs/1432045

The permission accumulation routine on the backend was incorrectly setting
the audit mask based off of the exec type bits (info about the exec) and
not the actual exec permission.

This bug could also have caused permission issues around overlapping generic
exec and exact-match exec rules, except that the encoding of EXEC_MODIFIERS
ensured that the
  exact_match_allow & AA_USER/OTHER_EXEC_TYPE
test would never fail for a permission accumulation with the exec permission
set.

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Steve Beattie <steve@nxnw.org>
2015-03-18 10:05:55 -07:00
Steve Beattie
c2f7e5ff80 bison grammars: use api.pure directive instead of pure-parser variants
This patch adjusts the bison grammar in libapparmor and the parser
to use the %define api.pure directive instead of the deprecated
%pure_parser and %pure-parser keywords.  Bison had been warning about
the former:

  libraries/libapparmor/src/grammar.y:71.1-12: warning: deprecated directive, use ‘%pure-parser’ [-Wdeprecated]
  %pure_parser
  ^^^^^^^^^^^^

Signed-off-by: Steve Beattie <steve@nxnw.org>
Acked-by: Seth Arnold <seth.arnold@canonical.com>
2014-09-04 11:37:33 -07:00
John Johansen
19c942e5c2 parser: split accept perm processing from rule parsing
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Steve Beattie <steve@nxnw.org>
Acked-by: Seth Arnold <seth.arnold@canonical.com>
2014-09-03 14:40:08 -07:00
John Johansen
fb53ec793b parser: Refactor add_new_state into two versions
Refactor add_new_state into two versions, one that splits anodes from
nnodes, and one for use when anodes and nnodes are presplit

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Steve Beattie <steve@nxnw.org>
Acked-by: Seth Arnold <seth.arnold@canonical.com>
2014-09-03 14:36:08 -07:00
John Johansen
df961a3e02 parser: Refactor the process_work_queue code into its own fn
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Steve Beattie <steve@nxnw.org>
Acked-by: Seth Arnold <seth.arnold@canonical.com>
2014-09-03 14:32:52 -07:00
John Johansen
e86f850d59 parser: Refactor accept nodes to be common to a shared node type
The shared node type will be used in the future to add new capabilities

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Steve Beattie <steve@nxnw.org>
2014-09-03 14:29:35 -07:00
John Johansen
ee7bf1dc28 parser: Refactor rule accumulation to use some helper functions
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Steve Beattie <steve@nxnw.org>
Acked-by: Seth Arnold <seth.arnold@canonical.com>
2014-09-03 14:24:37 -07:00
John Johansen
73c74d044d parser: Move nodeset caching into expr-tree.h
We need to rework permission type mapping to nodesets, which means we
need to move the nodeset computations earlier in the dfa creation
process, instead of as a post step of follow(), so move the nodeset
caching into expr-tree.

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Steve Beattie <steve@nxnw.org>
Acked-by: Seth Arnold <seth.arnold@canonical.com>
2014-09-03 14:21:18 -07:00
John Johansen
7f29e7edee Fix: backend processing was not treating ${} as a special pcre character
Also, for characters that are not recognized as a valid escape sequence,
make sure that the character is emitted.

Previously
  \$ resulted in \
where it should have been \$ if $ wasn't recognized.

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Steve Beattie <steve@nxnw.org>
2014-06-19 13:49:00 -07:00
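
A rough sketch of the corrected escape handling described above, assuming a character-at-a-time converter; the helper names are hypothetical, not the parser's actual functions:

  #include <string>

  // Characters pcre treats as special; '$', '{' and '}' must stay escaped.
  static bool is_pcre_special(char c) {
      return std::string("$^.[]()*+?{}|\\").find(c) != std::string::npos;
  }

  // Convert the character following a backslash. Unrecognized escapes keep
  // the character (and the backslash when pcre would otherwise interpret it)
  // instead of silently dropping it, so "\$" stays "\$" rather than "\".
  std::string convert_escape(char c) {
      switch (c) {
      case 'n': return "\n";
      case 't': return "\t";
      default:
          return is_pcre_special(c) ? std::string("\\") + c
                                    : std::string(1, c);
      }
  }
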
John Johansen
f7e12a9bc5 Convert aare_rules into a class
This cleans things up a bit and fixes a bug where not all rules were
getting properly counted, so the addition of policy_mediation
rules failed to generate the policy dfa in some cases.

Because the policy dfa is being generated correctly now we need to
fix some tests to use the new -M flag to specify the expected features
set of the test.

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Steve Beattie <steve@nxnw.org>
Acked-by: Seth Arnold <seth.arnold@canonical.com>
2014-04-23 10:57:16 -07:00
John Johansen
94632cdca5 Unify escape sequence processing into a set of library fns.
Fix the octal escape sequence handling, which was broken such that short
escapes like \0, \00, and \xa didn't work and actually resulted in some
encoding bugs.

Also, we were missing support for the decimal number conversion \d123.

Incorporate and update Steve Beattie's unit tests of escape sequences
patch

v2
- unify escape sequence processing, creating lib fns.
- address Steve Beattie's feedback
- incorporate Steve Beattie's feedback 
v3
- address Seth's feedback
- add missing strn_escseq tests
- expand strn_escseq to take a 3rd parameter to allow specifying chars to
  convert straight across, e.g. "+" will cause it to convert \+ as +
- fix libapparmor/parse.y failed escape pass through to match processunqoted

Unit tests by Steve Beattie

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Seth Arnold <seth.arnold@canonical.com>
2014-04-15 14:59:41 -07:00
Steve Beattie
fdd89f1da5 parser: eliminate bison warning
This patch eliminates the bison warning about "%name-prefix =" being
deprecated.

Signed-off-by: Steve Beattie <steve@nxnw.org>
Acked-by: John Johansen <john.johansen@canonical.com>
2014-01-24 10:19:59 -08:00
Steve Beattie
9fcbd8af1c parser: fix compilation failure on 32 bit systems
std::max in C++ requires that both arguments be the same type. The
previous fix added std::max comparisons between unsigned long numeric
constants and size_t; this fix casts the numeric constants to size_t.

Signed-off-by: Steve Beattie <steve@nxnw.org>
Acked-by: John Johansen <john.johansen@canonical.com>
2014-01-10 11:02:59 -08:00
John Johansen
92eae9d2d9 Fix dump output of expr tree
Make the accept information dump output be in hexadecimal like the
other dumps so it's easier to cross-reference between them.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2014-01-09 17:30:00 -08:00
John Johansen
7ba571395e Fixes that were dropped from the diff-encode patch
This diff is part of the diff-encode patch but was dropped when it was
applied to bzr. I have no idea why, as status showed a clean tree.

Signed-off-by: John Johansen <john.johansen@canonical.com>
2014-01-09 17:24:40 -08:00
John Johansen
3fb0689b84 Fix policy generation for small dfas
So there are multiple bugs in policy generation for small dfas.
- A bug where dfas reduced to only have a non-accepting state
  drop the start state from the accept tables in the chfa encoding

  eg. deny audit dbus,

  The accept and accept2 tables are resized to 1, but the chfa format
  requires at least 2: one for the non-accepting state and one for the
  start state.
  The kernel check that the accept table sizes == other state table sizes
  caught this and rejected the policy.

- the next/check table needs to be padded to the largest base position
  used + 256 so no input can ever overflow the next/check table
  (next/check[base+c]).

  This is normally handled by inserting a transition, which resizes
  the table. However, in this case there were no transitions being
  inserted into the dfa, resulting in a next/check table size of
  2 with a base pos of 0, meaning the table needed to be padded
  to 256.

- there is an alignment bug for dfas within the container (see below).
  What follows is a hexdump of the generated policy, with the
  different parts broken out. There are 2 dfas (policy and file) and
  it is the second (file) dfa that is out of alignment.

  The aadfa blob wrapper should be making sure that the start of the actual
  dfa is in alignment, but this is not happening. In this example:


00000000  04 08 00 76 65 72 73 69  6f 6e 00 02 05 00 00 00  |...version......|
00000010  04 08 00 70 72 6f 66 69  6c 65 00 07 05 40 00 2f  |...profile...@./|
00000020  68 6f 6d 65 2f 75 62 75  6e 74 75 2f 62 7a 72 2f  |home/ubuntu/bzr/|
00000030  61 70 70 61 72 6d 6f 72  2f 74 65 73 74 73 2f 72  |apparmor/tests/r|
00000040  65 67 72 65 73 73 69 6f  6e 2f 61 70 70 61 72 6d  |egression/apparm|
00000050  6f 72 2f 71 75 65 72 79  5f 6c 61 62 65 6c 00 04  |or/query_label..|
00000060  06 00 66 6c 61 67 73 00  07 02 00 00 00 00 02 00  |..flags.........|
00000070  00 00 00 02 00 00 00 00  08 02 00 00 00 00 02 00  |................|
00000080  00 00 00 02 00 00 00 00  02 00 00 00 00 04 07 00  |................|
00000090  63 61 70 73 36 34 00 07  02 00 00 00 00 02 00 00  |caps64..........|
000000a0  00 00 02 00 00 00 00 02  00 00 00 00 08 04 09 00  |................|
000000b0  70 6f 6c 69 63 79 64 62  00 07

begin of policy dfa blob wrapper
000000b0                                 04 06 00 61 61 64  |policydb.....aad|
000000c0  66 61 00 06

size of the following blob (in little endian) so 0x80
000000c0              80 00 00 00  

begin of actual policy dfa, notice alignment on 8 byte boundary
000000c0                           1b 5e 78 3d 00 00 00 18  |fa.......^x=....|
000000d0  00 00 00 80 00 00 6e 6f  74 66 6c 65 78 00 00 00  |......notflex...|
000000e0  00 01 00 04 00 00 00 00  00 00 00 01 00 00 00 00  |................|
000000f0  00 07 00 04 00 00 00 00  00 00 00 01 00 00 00 00  |................|
00000100  00 02 00 04 00 00 00 00  00 00 00 02 00 00 00 00  |................|
00000110  00 00 00 00 00 00 00 00  00 04 00 02 00 00 00 00  |................|
00000120  00 00 00 02 00 00 00 00  00 08 00 02 00 00 00 00  |................|
00000130  00 00 00 02 00 00 00 00  00 03 00 02 00 00 00 00  |................|
00000140  00 00 00 02 00 00 00 00  08

dfa blob wrapper
00000140                              04 06 00 61 61 64 66  |............aadf|
00000150  61 00 06

size of the following blob (in little endian) so 0x4c8
00000150          c8 04 00 00

begin of file dfa, notice alignment NOT on 8 byte boundary
                               1b  5e 78 3d 00 00 00 18 00  |a.......^x=.....|
00000160  00 04 c8 00 00 6e 6f 74  66 6c 65 78 00 00 00 00  |.....notflex....|
00000170  01 00 04 00 00 00 00 00  00 00 06 00 00 00 00 00  |................|
00000180  00 00 00 00 9f c2 7f 00  00 00 00 00 00 00 00 00  |................|
00000190  04 00 30 00 00 00 00 00  07 00 04 00 00 00 00 00  |..0.............|
000001a0  00 00 06 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
000001b0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
000001c0  02 00 04 00 00 00 00 00  00 00 06 00 00 00 00 00  |................|
000001d0  00 00 00 00 00 00 01 00  00 00 01 00 00 00 02 00  |................|
000001e0  00 00 00 00 00 00 00 00  04 00 02 00 00 00 00 00  |................|
000001f0  00 00 06 00 00 00 00 00  02 00 00 00 05 00 05 00  |................|
00000200  08 00 02 00 00 00 00 00  00 01 02 00 00 00 03 00  |................|
00000210  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00000260  00 00 00 00 00 00 00 00  00 00 02 00 04 00 00 00  |................|
00000270  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00000410  03 00 02 00 00 00 00 00  00 01 02 00 00 00 02 00  |................|
00000420  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00000470  00 00 00 00 00 00 00 00  00 00 01 00 03 00 04 00  |................|
00000480  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00000610  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00

end of container
00000610                                                08  |................|
00000620

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Steve Beattie <steve@nxnw.org>
2014-01-09 17:09:54 -08:00
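
A minimal sketch of the next/check padding rule described in the commit above; the table layout and the choice of 0 as the non-matching state are assumptions, not the actual chfa code:

  #include <cstddef>
  #include <vector>

  // Pad the next/check arrays to max_base + 256 so that next[base + c] can
  // never index past the end of the tables for any input byte c.
  void pad_tables(std::vector<unsigned> &next, std::vector<unsigned> &check,
                  std::size_t max_base) {
      std::size_t needed = max_base + 256;   // worst case: base + 0xff
      if (next.size() < needed) {
          next.resize(needed, 0);            // assumed: unused slots point at
          check.resize(needed, 0);           // the non-matching state 0
      }
  }
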
John Johansen
f0b154528d Fix dfa minimization
So DFA minimization has a bug and feature that keeps it from minimizing
some dfas completely. This feature/bug did not result in incorrect dfas,
it just failed to result in full minimization.

The same mappings comparison is wrong. Or more correctly it is right when
transitions are not remapped to minimization partitions, but it may be
wrong when states are remapped. This means it will cause excess
partitioning (not removing all the states it should).

The trans hashing does a "guess" at partition splitting as a performance
enhancement. Basically it leverages the information that states that have
different transitions, or transitions on different characters, are not the
same. However this isn't always the case, because minimization can cause
some of those transitions to be altered. In previous testing this was
always a win, with only a few extra states being added sometimes. However
this changes when the same mappings are fixed, as the hashing that was
done was based on the same flawed mapping as the broken same mappings.

If the same mappings are fixed and the hashing is not removed then there
is little to no change. However with both changes applied some dfas see
significant improvements. These improvements often result in performance
improvements despite minimization doing more work, because it means less
work to be done in the chfa comb compression.

eg. test case that raised the issue (thanks tyler)
  /t { mount fstype=ext2, mount, }

  used to be minimized to
   {1} <== (allow/deny/audit/quiet)
   {6} (0x 2/0/0/0)

   {1} -> {2}: 0x7
   {2} -> {3}: 0x0
   {2} -> {2}: []
   {3} -> {4}: 0x0
   {3} -> {3}: []
   {4} -> {6}: 0x0
   {4} -> {7}: 0x65 e
   {4} -> {5}: []
   {5} -> {6}: 0x0
   {5} -> {5}: []
   {6}  (0x 2/0/0/0) -> {6}: [^\0x0]
   {7} -> {6}: 0x0
   {7} -> {8}: 0x78 x
   {7} -> {5}: []
   {8} -> {6}: 0x0
   {8} -> {5}: 0x74 t
   {8} -> {5}: []

  with the patch it is now properly minimized to
    {1} <== (allow/deny/audit/quiet)
    {6} (0x 2/0/0/0)

    {1} -> {2}: 0x7
    {2} -> {3}: 0x0
    {2} -> {2}: []
    {3} -> {4}: 0x0
    {3} -> {3}: []
    {4} -> {6}: 0x0
    {4} -> {4}: []
    {6}  (0x 2/0/0/0) -> {6}: [^\0x0]


The evince profile set sees some significant improvements. Picking a couple
of examples from its "minimized" dfas (it has 12), we see a reduction from 9720
states to 6232 states, and from 6537 states to 3653 states. All told, the
performance/profile size goes from
  2.8 parser: 4.607s 1007267 bytes
  dev head:   3.48s  1007267 bytes
  min fix:    2.68s  549603 bytes

of course evince is an extreme example so a few more

firefox
   2.066s   404549 bytes
 to
   1.336s   250585 bytes


cupsd
   0.365s   90834 bytes
 to
   0.293s   58855 bytes

dnsmasq
   0.118s   35689 bytes
 to
   0.112s   27992 bytes


smbd
   0.187s   40897 bytes
 to
   0.162s   33665 bytes


weather applet profile from ubuntu touch
   0.618s   105673 bytes
 to
   0.432s   89300 bytes


I have not seen a case where the parser regresses on performance, but it is
possible. This patch will not cause a regression in generated policy size;
at worst it will result in policy that is the same size.

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Tyler Hicks <tyhicks@canonical.com>
Acked-by: Steve Beattie <steve@nxnw.org>
2014-01-09 17:06:48 -08:00
John Johansen
22855508e8 Add Differential State Compression to the DFA
Differential state compression encodes a state's transitions as the
difference between the state and its default state (the state it is
relative to).

This reduces the number of transitions that need to be stored in the
transition table, hence reducing the size of the dfa.  There is a
trade off in that a single input character may have to traverse more
than one state.  This is somewhat offset by reduced table sizes providing
better locality and caching properties.

With careful encoding we can still make constant match time guarantees.
This patch guarantees that a state that is differentially encoded will do at
most 3m state traversals to match an input of length m (as opposed to a
non-differentially compressed dfa doing exactly m state traversals).
In practice the actual number of extra traversals is less than this because
we selectively choose which states are differentially encoded.

In addition to reducing the size of the dfa by reducing the number of
transitions that have to be stored, differential encoding reduces the
number of transitions that need to be considered by comb compression,
which can result in tighter packing, due to a reduction in sparseness, and
also reduces the time spent in comb compression, which currently uses an
O(n^2) algorithm.

Differential encoding will always result in a DFA that is smaller than or
equal in size to the non-differentially encoded DFA, and will usually improve
compilation times,
with the performance improvements increasing as the DFA gets larger.

Eg. Given an example DFA that created 8991 states after minimization.
* If only comb compression (current default) is used

 52057 transitions are packed into a table of 69591 entries. Achieving an
 efficiency of about 75% (an average of about 7.74 table entries per state).
 With a resulting compressed dfa16 size of 404238 bytes and a run time for
 the dfa compilation of
   real 0m9.037s
   user 0m8.893s
   sys  0m0.036s

* If differential encoding + comb compression is used, 8292 of the 8991
  states are differentially encoded, with 31557 trans removed.  Resulting in

  20500 transitions are packed into a table of 20675 entries.  Achieving an
  efficiency of about 99.2% (an average of about 2.3 table entries per state).
  With a resulting compressed dfa16 size of 207874 bytes (about 48.6%
  reduction) and a run time for the dfa compilation of
   real 0m5.416s (about 40% faster)
   user 0m5.280s
   sys  0m0.040s

Repeating with a larger DFA that has 17033 states after minimization.
* If only comb compression (current default) is used

 102992 transitions are packed into a table of 137987 entries.  Achieving
 an efficiency of about 75% (an average of about 8.10 entries per state).
 With a resultant compressed dfa16 size of 790410 bytes and a run time for
 dfa compilation of
  real  0m28.153s
  user  0m27.634s
  sys   0m0.120s

* with differential encoding
 39374 transitions are packed into a table of 39594 entries. Achieving an
 efficiency of about 99.4% (an average of about 2.32 entries per state).
 With a resultant compressed dfa16 size of 396838 bytes (about 50% reduction)
 and a run time for dfa compilation of
  real  0m11.804s (about 58% faster)
  user  0m11.657s
  sys   0m0.084s

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Seth Arnold <seth.arnold@canonical.com>
2014-01-09 16:55:55 -08:00
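
A sketch of how a differentially encoded lookup can work, to illustrate the bounded extra traversals mentioned above; the field and table names are illustrative, not the real chfa structures:

  // Each state stores only the transitions that differ from its default
  // state; anything else is resolved by deferring to that default state.
  struct dstate {
      unsigned base;    // offset of this state's transitions in next/check
      unsigned deflt;   // state this one is encoded relative to
  };

  unsigned next_state(const dstate *states, const unsigned *next,
                      const unsigned *check, unsigned s, unsigned char c) {
      // Walk the default-state chain until a state owns a transition on c.
      // Careful selection of which states are differentially encoded keeps
      // this chain short, giving the ~3m traversal bound for input length m.
      // (Assumes the chain always ends at a state that owns every input.)
      for (;;) {
          unsigned pos = states[s].base + c;
          if (check[pos] == s)
              return next[pos];     // this state stores the transition itself
          s = states[s].deflt;      // otherwise defer to its default state
      }
  }
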
Steve Beattie
7a42de3eae parser: add build option for coverage (v3)
This patch adds a parser make variable and a make target for building
the compiler with coverage compilation flags. With this, coverage
information can be generated by running tests/test suites against the
built parser and run through tools like gcovr.

Patch History:
  v1: initial version
  v2: refreshed/no change
  v3: address feedback from sarnold:
      - mark coverage target as phony
      - correct missing '.' typo in clean target
      - make coverage extensions consistent in clean targets

Signed-off-by: Steve Beattie <steve@nxnw.org>
Acked-by: Seth Arnold <seth.arnold@canonical.com>
2013-12-06 05:31:11 -08:00
Steve Beattie
0e09546391 parser - push normalize_tree() ops into expr-tree classes
This patch tries to reduce the number of dynamic_cast<>s needed
during normalization by pushing the operations of normalize_tree()
into the expr-tree classes themselves rather than perform it as
an external function. This eliminates the need for dynamic_cast<>
checks on the current object under inspection and reduces the number
of checks needing to be performed on child Nodes as well.

In non-strict benchmarking, doing the dynamic_cast<> reduction
for just the tree normalization operation resulted in a ~10-15%
improvement in overall time on a couple of different hosts (amd64,
armel), as measured against apparmor_parser -Q.  Valgrind's callgrind
tool indicated a reduction in the number of calls to dynamic_cast<>
on the tst/simple_tests/vars/dbus_vars_9.sd test profile from ~19
million calls to ~12 million.

In comparisons with dumped expr trees over both the entire
tst/simple_tests/ tree and from 1000 randomly generated profiles via
stress.rb, the generated trees were identical.

Patch history:
  v1: initial version of patch
  v2: update patch to take into account the infinite loop fix in
      trunk rev 1975 and refresh against current code.
  v3: no change

Signed-off-by: Steve Beattie <steve@nxnw.org>
Acked-by: Seth Arnold <seth.arnold@canonical.com>
Acked-by: John Johansen <john.johansen@canonical.com>
2013-11-28 00:43:35 -08:00
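
The shape of the refactor described above can be sketched as a virtual method on the node classes, so each node normalizes itself without external dynamic_cast<> type checks; this is illustrative only and the real expr-tree classes differ:

  struct Node {
      virtual ~Node() {}
      virtual void normalize(int dir) {}      // leaf nodes: nothing to do
  };

  struct TwoChildNode : Node {
      Node *child[2];
      void normalize(int dir) override {
          // Each subclass knows its own eps-elimination and rotation rules,
          // so no dynamic_cast is needed to discover the node's type here.
          child[dir]->normalize(dir);
          child[!dir]->normalize(dir);
      }
  };
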
Steve Beattie
151fb20972 parser: convert array into unordered map
This patch converts the problematic-with-g++ 4.6 state_names array
into a C++ unordered_map type. Using this depends on using the c++0x
(aka c++11) standard, and as we have gnuisms elsewhere (using the
typeof builtin), the patch also adds/converts to using -std=gnu++0x
in the build rules (which conveniently eliminates some other warnings
we had due to other c++11-isms).

Signed-off-by: Steve Beattie <steve@nxnw.org>
Acked-By: Seth Arnold <seth.arnold@canonical.com>
2013-11-18 16:23:23 -08:00
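
For reference, the converted table has roughly this shape; the entries below are hypothetical, as the real state_names table maps the parser's internal state/node types to printable names:

  #include <string>
  #include <unordered_map>

  static const std::unordered_map<int, std::string> state_names = {
      { 0, "none" },      // hypothetical entries for illustration
      { 1, "accept" },
  };
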
John Johansen
1c86517e79 The apparmor parser build fails when bison 3 is used. The following
patch is needed to fix the build.

patch from: Jan Rękorajski <baggins@pld-linux.org>
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Steve Beattie <steve@nxnw.org>
2013-11-05 14:30:01 -08:00
Steve Beattie
9c50ff9fb3 parser - terminate search early if wildcards are discovered
This patch is a very minor optimization to the search to determine
whether a given rule is an exact match or not. If a wildcard rule
(i.e.  an inexact match) is discovered, exact_match is set to 0,
so we don't need to continue the tree traversal.

Signed-off-by: Steve Beattie <steve@nxnw.org>
Acked-by: John Johansen <john.johansen@canonical.com>
2013-10-14 14:36:05 -07:00
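
A minimal sketch of the early-exit traversal described above (illustrative node layout, not the parser's actual expr classes):

  struct Node { Node *child[2]; bool wildcard; };

  bool exact_match(const Node *t) {
      if (!t)
          return true;
      if (t->wildcard)
          return false;             // inexact match: stop walking the tree
      return exact_match(t->child[0]) && exact_match(t->child[1]);
  }
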
Steve Beattie
cf57476d6b parser - Fix const char warnings
This patch addresses a bunch of the compiler string conversion warnings
that were introduced with the C++-ification patch.

Signed-off-by: Steve Beattie <steve@nxnw.org>
Acked-by: Tyler Hicks <tyhicks@canonical.com>
2013-10-01 10:59:04 -07:00
John Johansen
a34059b1e5 Convert the parser to C++
This conversion is nothing more than what is required to get it to
compile. Further improvements will come as the code is refactored.

Unfortunately due to C++ not supporting designated initializers, the auto
generation of af names needed to be reworked, and "netlink" and "unix"
domain socket keywords leaked in. Since these were going to be added in
separate patches I have not bothered to do the extra work to replace them
with a temporary placeholder.

Signed-off-by: John Johansen <john.johansen@canonical.com>
[tyhicks: merged with dbus changes and memory leak fixes]
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
Acked-by: Seth Arnold <seth.arnold@canonical.com>
Acked-by: Steve Beattie <steve@nxnw.org>
2013-09-27 16:13:22 -07:00
John Johansen
66717a2aec temp fix using the 2.8 patch until the 3.0 patch is ready to land
fix a nasty little bug that can surface in apparmor 2.8 when
Hats/children profiles are used.
  
the matchflags in the dfa backend are not getting properly reset, which
results in a previously processed profile's match flags being used. This is
not a problem for most permissions but can result in x conflict errors.

Note: this should not result in profiles with the wrong x transitions loaded,
as it causes compilation to fail with an x conflict.
  
This is a minimal patch targeted at the 2.8 release. As such I have just
updated the delete_ruleset routine to clear the flags as it is already
being properly called for every rule set.

Apparmor 2.9/3.0 will have a different approach where it is not possible
to reuse the flags.

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Steve Beattie <sbeattie@ubuntu.com>
2012-12-10 17:08:19 -08:00
John Johansen
41b454f2e5 Older C++ compilers complain about the use of a class with a non-trivial
constructor in a union.  Change the ProtoState class to use an init fn
instead of a constructor.
2012-05-30 14:31:41 -07:00
John Johansen
f4240fcc74 Rename and invert logic of is_null to is_accept to better reflect its use
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-By: Steve Beattie <sbeattie@ubuntu.com>
2012-03-22 13:21:55 -07:00
John Johansen
3c9cdfb841 rework the is_null test to not include deny
The deny information is not used as valid accept state information,
so remove it from the is_null test.  This does not change the dfa
generated but does result in the dumped information changing,
as states that don't have any accept information are no longer
reported as accepting. This is what changes the number of states
reported in the minimize tests.

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-By: Steve Beattie <sbeattie@ubuntu.com>
2012-03-22 07:55:00 -07:00
John Johansen
e7f6e0f9f1 Fix dfa minimization around the nonmatching state
The same mappings routine had two bugs in it, that in practice haven't
manifested because of partition ordering during minimization.  The
result is that some states may fail comparison and split, resulting
in them not being eliminated when they could be.

The first is that direct comparison to the nonmatching state should
not be done, as it is a candidate for elimination; instead its partition
should be compared against.  This simplifies the first test.


The other error is the comparison
  if (rep->otherwise != nonmatching)

Again this is wrong because nonmatching should not be directly
compared against, and again can result in the current rep->otherwise
not being eliminated/replaced by the partition, again resulting in
extra trap states.

These tests were originally done the way they were because
->otherwise could be null, which was used to represent nonmatching.
The code was cleaned up a while ago to remove this; ->otherwise is
always a valid pointer now.

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-By: Steve Beattie <sbeattie@ubuntu.com>
2012-03-22 07:50:35 -07:00
John Johansen
7fcbd543d7 Factor all the permissions dump code into a single perms method
Also make sure the perms method properly switches to hex and back to dec
as some of the previous perm dump code did not.

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-By: Steve Beattie <sbeattie@ubuntu.com>
2012-03-22 07:49:43 -07:00
John Johansen
3a1b7bb54c Fix infinite loop bug in normalization.
There are some rare occasions, when lots of alternations are used, where
tree simplification can result in an expression of
  (E | (E | E)) or (E . (E . E))   where E is the epsnode

both of these expressions will lead to an infinite loop in normalize_tree
as the epsnode test

  if ((&epsnode == t->child[dir]) &&
      (&epsnode != t->child[!dir]) &&
      dynamic_cast<TwoChildNode *>(t)) {

and the tree node rotation test
  } else if ((dynamic_cast<AltNode *>(t) &&
              dynamic_cast<AltNode *>(t->child[dir])) ||
             (dynamic_cast<CatNode *>(t) &&
              dynamic_cast<CatNode *>(t->child[dir]))) {

end up undoing each others work, ie.

                eps flip                 rotate
  (E | (E | E)) --------> ((E | E) | E) -------> (E | (E | E))

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-By: Steve Beattie <sbeattie@ubuntu.com>
2012-03-09 04:22:42 -08:00
John Johansen
5e361a4a05 Fix dfa minimization to deal with exec conflicts
Minimization was failing because it was too aggressive.  It was minimizing
as if there was only 1 accept condition.  This allowed it to remove more
states, but at the cost of losing unique permission sets; they were
being combined into single cumulative perms.  This means that audit,
deny, xtrans, ... info on one path would be applied to all other paths
that it was combined with during minimization.

This means that we need to retain the unique accept states, not allowing
them to be combined into a single state.  To do this we put each unique
permission set into its own partition at the start of minimization.

The states within a partition have the same permissions and can be combined
with the other states in the partition, as the loss of unique path
information will not result in a conflict.

This is similar to what perm hashing used to do but deny information is
still being correctly applied and carried.

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-By: Steve Beattie <sbeattie@ubuntu.com>
2012-03-09 04:20:19 -08:00
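
A minimal sketch of seeding the initial partitions by unique permission set, as described above; the types are illustrative, not the real dfa/minimization code:

  #include <list>
  #include <map>
  #include <tuple>

  struct State { unsigned allow, deny, audit, quiet; };
  typedef std::list<State *> Partition;
  typedef std::tuple<unsigned, unsigned, unsigned, unsigned> PermKey;

  // Each unique accept permission set starts in its own partition, so
  // minimization can never merge states with different audit/deny/xtrans
  // information, only states within the same permission set.
  std::map<PermKey, Partition> seed_partitions(std::list<State *> &states) {
      std::map<PermKey, Partition> parts;
      for (State *s : states)
          parts[PermKey(s->allow, s->deny, s->audit, s->quiet)].push_back(s);
      return parts;
  }
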
John Johansen
cf5f7ef9c2 Fix the x intersection consistency test
The x intersection consistency test for minimization was failing because
it was screening off the AA_MAY_EXEC permission before passing the exec
information to the consistency test fn.  This resulted in the consistency
test fn not testing the consistency because it treated the permission set
as not having x permissions.

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-By: Steve Beattie <sbeattie@ubuntu.com>
2012-03-09 04:19:24 -08:00
John Johansen
811d8aefa3 Fix transition character reporting of dfa dumps
Make them report hex value strings instead of the default C++
\vvvvv

Make them consistent:
- make the dump report the default transition and what isn't transitioned
  on it.


Signed-off-by: John Johansen <john.johansen@canonical.com>
2012-03-09 04:18:35 -08:00
John Johansen
37f446dd79 Fix/cleanup the permission reporting for the dfa dumps
The permission reporting was not reporting the full set of permission
flags and was inconsistent between the dump routines.

Report permissions as the quad (allow/deny/audit/quiet) in hex.

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-By: Steve Beattie <sbeattie@ubuntu.com>
2012-03-09 04:17:47 -08:00
John Johansen
1a01b5c296 Fix/cleanup the dfa dump routines output to provide state label
Fix the transitions states output so that they output the state label
instead of the state address.  That is
  {1} -> 0x10831a0:  /
now becomes
  {1} -> {2}:  /

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-By: Steve Beattie <sbeattie@ubuntu.com>
2012-03-09 04:14:34 -08:00
John Johansen
e61b7b9241 Update the copyright dates for the apparmor_parser
Signed-off-by: John Johansen <john.johansen@canonical.com>
2012-02-24 04:21:59 -08:00
John Johansen
954dc6f694 Fix hexdigit conversion in the pcre parser
The pcre parser in the dfa backend is not correctly converting escaped
hex strings like
  \0x0d

This is the minimal patch to fix, and we should investigate just using
the C/C++ conversion routines here.

I also nominated this for the 2.7 series.

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Seth Arnold <seth.arnold@gmail.com>
2012-02-24 04:20:46 -08:00
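
A small sketch of doing the hex conversion with the standard C routines mentioned above; this is an illustration, not the actual parser fix:

  #include <cctype>
  #include <cstdlib>
  #include <string>

  // Convert up to two hex digits following "\x" into a character value.
  // Returns -1 if no valid hex digit follows (caller keeps the raw text).
  int hex_escape_to_char(const std::string &seq, std::string::size_type pos) {
      char buf[3] = { 0, 0, 0 };
      int n = 0;
      while (n < 2 && pos + n < seq.size() &&
             std::isxdigit((unsigned char)seq[pos + n])) {
          buf[n] = seq[pos + n];
          ++n;
      }
      return n ? (int)std::strtol(buf, NULL, 16) : -1;
  }
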
John Johansen
662ad60cd7 Extend the information dumped by -D rule-exprs to include permissions
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Kees Cook <kees@ubuntu.com>
2012-02-24 04:17:19 -08:00
John Johansen
e7c550243c Make second minimization pass optional
The removal of deny information is a one-way operation that can result
in a smaller dfa, but also results in a dfa that should not be used in
future operations because the deny rules from the precomputed dfa would
not get applied.

For now default filtering out of deny information to off, as it takes
extra time and seldom results in further state reduction.

Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Kees Cook <kees@ubuntu.com>
2012-02-16 07:43:02 -08:00