std::max in C++ requires that both arguments be the same type. The
previous fix added std::max comparisons between unsigned long numeric
constants and size_t; this fix casts the numeric constants to size_t.
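For illustration, a minimal sketch of the issue (the function and the
constant here are hypothetical, not the actual call sites):

#include <algorithm>
#include <cstddef>

size_t clamp_min(size_t len)
{
    /* std::max is template <class T> const T &max(const T &, const T &),
     * so both arguments must deduce to the same T.  On targets where
     * size_t is not unsigned long, this fails to compile:
     *     return std::max(128UL, len);
     * Casting the constant makes both arguments size_t: */
    return std::max((size_t)128, len);
}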
Signed-off-by: Steve Beattie <steve@nxnw.org>
Acked-by: John Johansen <john.johansen@canonical.com>
Make the accept information dump output be in hexadecimal like the
other dumps so it's easier to cross-reference between them.
Signed-off-by: John Johansen <john.johansen@canonical.com>
This diff is part of the diffencode patch but was dropped when it was
applied to bzr. I have no idea why, as status showed a clean tree.
Signed-off-by: John Johansen <john.johansen@canonical.com>
So DFA minimization has a bug and feature that keeps it from minimizing
some dfas completely. This feature/bug did not result in incorrect dfas;
it just failed to achieve full minimization.
The same mappings comparison is wrong. Or more correctly, it is right
when transitions are not remapped to minimization partitions, but it may
be wrong when states are remapped. This means it will cause excess
partitioning (not removing all the states it should).
The trans hashing does a "guess" at partition splitting as a performance
enhancement. Basically it leverages the information that states with
different transitions, or transitions on different characters, are not
the same. However this isn't always the case, because minimization can
cause some of those transitions to be altered. In previous testing this
was always a win, with only a few extra states being added sometimes.
However this changes when the same mappings are fixed, as the hashing
was based on the same flawed mapping as the broken same mappings
comparison.
If the same mappings are fixed and the hashing is not removed then there
is little to no change. However with both changes applied some dfas see
significant improvements. These improvements often result in performance
improvements despite minimization doing more work, because they mean less
work to be done in the chfa comb compression.
E.g. the test case that raised the issue (thanks Tyler):
/t { mount fstype=ext2, mount, }
used to be minimized to
{1} <== (allow/deny/audit/quiet)
{6} (0x 2/0/0/0)
{1} -> {2}: 0x7
{2} -> {3}: 0x0
{2} -> {2}: []
{3} -> {4}: 0x0
{3} -> {3}: []
{4} -> {6}: 0x0
{4} -> {7}: 0x65 e
{4} -> {5}: []
{5} -> {6}: 0x0
{5} -> {5}: []
{6} (0x 2/0/0/0) -> {6}: [^\0x0]
{7} -> {6}: 0x0
{7} -> {8}: 0x78 x
{7} -> {5}: []
{8} -> {6}: 0x0
{8} -> {5}: 0x74 t
{8} -> {5}: []
with the patch it is now properly minimized to
{1} <== (allow/deny/audit/quiet)
{6} (0x 2/0/0/0)
{1} -> {2}: 0x7
{2} -> {3}: 0x0
{2} -> {2}: []
{3} -> {4}: 0x0
{3} -> {3}: []
{4} -> {6}: 0x0
{4} -> {4}: []
{6} (0x 2/0/0/0) -> {6}: [^\0x0]
The evince profile set sees some significant improvements. Picking a
couple of examples from its "minimized" dfas (it has 12), we see a
reduction from 9720 states to 6232 states, and from 6537 states to 3653
states. All told, the performance/profile size goes from
2.8 parser: 4.607s 1007267 bytes
dev head: 3.48s 1007267 bytes
min fix: 2.68s 549603 bytes
Of course evince is an extreme example, so a few more:
firefox
2.066s 404549 bytes
to
1.336s 250585 bytes
cupsd
0.365s 90834 bytes
to
0.293s 58855 bytes
dnsmasq
0.118s 35689 bytes
to
0.112s 27992 bytes
smbd
0.187s 40897 bytes
to
0.162s 33665 bytes
weather applet profile from ubuntu touch
0.618s 105673 bytes
to
0.432s 89300 bytes
I have not seen a case where the parser regresses on performance, but it
is possible. This patch will not cause a regression in generated policy
size; at worst it will result in policy that is the same size.
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Tyler Hicks <tyhicks@canonical.com>
Acked-by: Steve Beattie <steve@nxnw.org>
Differential state compression encodes a state's transitions as the
difference between the state and its default state (the state it is
relative to).
This reduces the number of transitions that need to be stored in the
transition table, hence reducing the size of the dfa. There is a
trade off in that a single input character may have to traverse more
than one state. This is somewhat offset by reduced table sizes providing
better locality and caching properties.
With careful encoding we can still make constant match time guarantees.
This patch guarantees that a state that is differentially encoded will do
at most 3m state traversals to match an input of length m (as opposed to
a non-differentially compressed dfa doing exactly m state traversals).
In practice the actual number of extra traversals is less than this
because we selectively choose which states are differentially encoded.
In addition to reducing the size of the dfa by reducing the number of
transitions that have to be stored, differential encoding reduces the
number of transitions that need to be considered by comb compression,
which can result in tighter packing due to a reduction in sparseness, and
also reduces the time spent in comb compression, which currently uses an
O(n^2) algorithm.
Differential encoding will always result in a DFA that is smaller or equal
in size to the encoded DFA, and will usually improve compilation times,
with the performance improvements increasing as the DFA gets larger.
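A minimal sketch of the idea (hypothetical types, not the parser's actual
CHFA structures): a differentially encoded state stores only the
transitions that differ from its default state, and lookup chains through
the defaults:

#include <map>

struct State {
    State *def;        /* default state this state is encoded relative to */
    std::map<unsigned char, State *> trans;  /* only differing transitions */
};

/* Follow the chain of defaults until a state owns a transition on c.
 * This sketch assumes the base state of every chain owns a complete
 * transition set, so the loop terminates; selectively choosing which
 * states get diff encoded is what bounds the chain length and gives
 * the at-most-3m traversal guarantee described above. */
State *next(State *s, unsigned char c)
{
    for (;;) {
        std::map<unsigned char, State *>::iterator i = s->trans.find(c);
        if (i != s->trans.end())
            return i->second;
        s = s->def;
    }
}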
E.g. given an example DFA that created 8991 states after minimization.
* If only comb compression (current default) is used
52057 transitions are packed into a table of 69591 entries. Achieving an
efficiency of about 75% (an average of about 7.74 table entries per state).
With a resulting compressed dfa16 size of 404238 bytes and a run time for
the dfa compilation of
real 0m9.037s
user 0m8.893s
sys 0m0.036s
* If differential encoding + comb compression is used, 8292 of the 8991
states are differentially encoded, with 31557 trans removed, resulting in
20500 transitions packed into a table of 20675 entries. Achieving an
efficiency of about 99.2% (an average of about 2.3 table entries per
state). With a resulting compressed dfa16 size of 207874 bytes (about
48.6% reduction) and a run time for the dfa compilation of
real 0m5.416s (about 40% faster)
user 0m5.280s
sys 0m0.040s
Repeating with a larger DFA that has 17033 states after minimization.
* If only comb compression (current default) is used
102992 transitions are packed into a table of 137987 entries. Achieving
an efficiency of about 75% (an average of about 8.10 entries per state).
With a resultant compressed dfa16 size of 790410 bytes and a run time
for dfa compilation of
real 0m28.153s
user 0m27.634s
sys 0m0.120s
* With differential encoding
39374 transitions are packed into a table of 39594 entries. Achieving an
efficiency of about 99.4% (an average of about 2.32 entries per state).
With a resultant compressed dfa16 size of 396838 bytes (about 50%
reduction) and a run time for dfa compilation of
real 0m11.804s (about 58% faster)
user 0m11.657s
sys 0m0.084s
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Seth Arnold <seth.arnold@canonical.com>
This patch adds a parser make variable and a make target for building
the compiler with coverage compilation flags. With this, coverage
information can be generated by running tests/test suites against the
built parser and running the results through tools like gcovr.
Patch History:
v1: initial version
v2: refreshed/no change
v3: address feedback from sarnold:
- mark coverage target as phony
- correct missing '.' typo in clean target
- make coverage extensions consistent in clean targets
Signed-off-by: Steve Beattie <steve@nxnw.org>
Acked-by: Seth Arnold <seth.arnold@canonical.com>
This patch tries to reduce the number of dynamic_cast<>s needed
during normalization by pushing the operations of normalize_tree()
into the expr-tree classes themselves rather than performing them as
an external function. This eliminates the need for dynamic_cast<>
checks on the current object under inspection and reduces the number
of checks needing to be performed on child Nodes as well.
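Schematically, the change looks something like this (simplified,
hypothetical class shapes, not the actual expr-tree interface):

class Node {
public:
    virtual ~Node() { }
    /* each node type normalizes itself; no dynamic_cast<> is needed
     * to discover what the current object is */
    virtual void normalize(int dir) { }
};

class TwoChildNode: public Node {
public:
    Node *child[2];
    TwoChildNode(Node *a, Node *b) { child[0] = a; child[1] = b; }
    virtual void normalize(int dir)
    {
        /* before: an external normalize_tree() had to probe with
         * dynamic_cast<TwoChildNode *>(t) before touching children */
        child[dir]->normalize(dir);
        child[!dir]->normalize(dir);
    }
};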
In non-strict benchmarking, doing the dynamic_cast<> reduction
for just the tree normalization operation resulted in a ~10-15%
improvement in overall time on a couple of different hosts (amd64,
armel), as measured against apparmor_parser -Q. Valgrind's callgrind
tool indicated a reduction in the number of calls to dynamic_cast<>
on the tst/simple_tests/vars/dbus_vars_9.sd test profile from ~19
million calls to ~12 million.
In comparisons with dumped expr trees over both the entire
tst/simple_tests/ tree and from 1000 randomly generated profiles via
stress.rb, the generated trees were identical.
Patch history:
v1: initial version of patch
v2: update patch to take into account the infinite loop fix in
trunk rev 1975 and refresh against current code.
v3: no change
Signed-off-by: Steve Beattie <steve@nxnw.org>
Acked-by: Seth Arnold <seth.arnold@canonical.com>
Acked-by: John Johansen <john.johansen@canonical.com>
This patch converts the problematic-with-g++ 4.6 state_names array
into a C++ unordered_map type. Using this depends on using the c++0x
(aka c++11) standard, and as we have gnuisms elsewhere (using the
typeof builtin), the patch also adds/converts to using -std=gnu++0x
in the build rules (which conveniently eliminates some other warnings
we had due to other c++11-isms).
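The converted table is roughly of this shape (names and values here are
illustrative, not the actual state_names contents):

#include <string>
#include <unordered_map>

/* C++11 initializer-list construction replaces the old array that
 * g++ 4.6 choked on */
static std::unordered_map<int, std::string> state_names = {
    { 0, "start" },
    { 1, "accept" },
};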
Signed-off-by: Steve Beattie <steve@nxnw.org>
Acked-By: Seth Arnold <seth.arnold@canonical.com>
This patch is needed to fix the build.
patch from: Jan Rękorajski <baggins@pld-linux.org>
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Steve Beattie <steve@nxnw.org>
This patch is a very minor optimization to the search to determine
whether a given rule is an exact match or not. If a wildcard rule
(i.e. an inexact match) is discovered, exact_match is set to 0,
so we don't need to continue the tree traversal.
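A sketch of the pruned traversal (hypothetical node classes, not the
actual rule tree code):

struct Node {
    Node *child[2];
    Node() { child[0] = child[1] = 0; }
    virtual ~Node() { }
};
struct StarNode: public Node { };      /* a wildcard match */

static void check_exact(Node *t, int &exact_match)
{
    if (!t || !exact_match)
        return;                /* already inexact: no need to keep walking */
    if (dynamic_cast<StarNode *>(t)) {
        exact_match = 0;       /* wildcard rule seen */
        return;
    }
    check_exact(t->child[0], exact_match);
    check_exact(t->child[1], exact_match);
}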
Signed-off-by: Steve Beattie <steve@nxnw.org>
Acked-by: John Johansen <john.johansen@canonical.com>
This patch addresses a bunch of the compiler string conversion warnings
that were introduced with the C++-ification patch.
Signed-off-by: Steve Beattie <steve@nxnw.org>
Acked-by: Tyler Hicks <tyhicks@canonical.com>
This conversion is nothing more than what is required to get it to
compile. Further improvements will come as the code is refactored.
Unfortunately, due to C++ not supporting designated initializers, the
auto generation of af names needed to be reworked, and "netlink" and
"unix" domain socket keywords leaked in. Since these were going to be
added in separate patches, I have not bothered to do the extra work to
replace them with a temporary placeholder.
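For reference, the language issue (table entries here are illustrative):
C99 accepts designated array initializers, C++ does not, so the generated
af name table has to be emitted positionally:

/* C99 could generate the table indexed by AF constant:
 *     static const char *af_names[] = {
 *         [AF_UNIX] = "unix",
 *         [AF_INET] = "inet",
 *     };
 * C++ rejects designated initializers, so the generator must emit
 * positional entries (padding any gaps) instead: */
static const char *af_names[] = {
    "unspec",      /* 0 == AF_UNSPEC */
    "unix",        /* 1 == AF_UNIX */
    "inet",        /* 2 == AF_INET */
};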
Signed-off-by: John Johansen <john.johansen@canonical.com>
[tyhicks: merged with dbus changes and memory leak fixes]
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
Acked-by: Seth Arnold <seth.arnold@canonical.com>
Acked-by: Steve Beattie <steve@nxnw.org>
Fix a nasty little bug that can surface in apparmor 2.8 when
Hats/children profiles are used.
The matchflags in the dfa backend are not getting properly reset, which
results in a previously processed profile's match flags being used. This
is not a problem for most permissions but can result in x conflict errors.
Note: this should not result in profiles with the wrong x transitions
loaded, as it causes compilation to fail with an x conflict.
This is a minimal patch targeted at the 2.8 release. As such I have just
updated the delete_ruleset routine to clear the flags as it is already
being properly called for every rule set.
Apparmor 2.9/3.0 will have a different approach where it is not possible
to reuse the flags.
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Steve Beattie <sbeattie@ubuntu.com>
The deny information is not used as valid accept state information,
so remove it from the is_null test. This does not change the dfa
generated but does result in the dumped information changing,
as states that don't have any accept information are no longer
reported as accepting. This is what changes the number of states
reported in the minimize tests.
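The shape of the change, as a hedged sketch (field names follow the
allow/deny/audit/quiet quad used in the dumps, not necessarily the real
struct):

#include <stdint.h>

struct perms_t {
    uint32_t allow, deny, audit, quiet;

    /* deny carries no accept information, so it is excluded here: a
     * state with only deny info is treated as non-accepting */
    bool is_null(void) const { return !(allow | audit | quiet); }
};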
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-By: Steve Beattie <sbeattie@ubuntu.com>
The same mappings routine had two bugs in it that in practice haven't
manifested because of partition ordering during minimization. The
result is that some states may fail comparison and split, resulting
in them not being eliminated when they could be.
The first is that direct comparison to the nonmatching state should
not be done, as it is a candidate for elimination; instead its partition
should be compared against. This simplifies the first test.
The other error is the comparison
if (rep->otherwise != nonmatching)
Again this is wrong because nonmatching should not be directly
compared against, and again this can result in the current
rep->otherwise not being eliminated/replaced by the partition, again
resulting in extra trap states.
These tests were originally done the way they were because
->otherwise could be null, which was used to represent nonmatching.
The code was cleaned up a while ago to remove this; ->otherwise is
always a valid pointer now.
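As a sketch (member names hypothetical), the corrected comparison goes
through the partitions so that states destined to merge, including those
merging with nonmatching's partition, compare equal:

struct Partition;
struct State {
    Partition *partition;      /* minimization partition of this state */
    State *otherwise;          /* default transition target */
};

static bool same_otherwise(State *s1, State *s2)
{
    /* wrong: s->otherwise != nonmatching compares raw states;
     * right: compare the partitions the targets belong to */
    return s1->otherwise->partition == s2->otherwise->partition;
}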
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-By: Steve Beattie <sbeattie@ubuntu.com>
Also make sure the perms method properly switches to hex and back to
dec, as some of the previous perm dump code did not.
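With ostream-style output the base change is sticky, so it has to be
restored explicitly; a minimal sketch (function name hypothetical):

#include <iostream>
#include <stdint.h>

void dump_perms(std::ostream &os, uint32_t allow, uint32_t deny,
                uint32_t audit, uint32_t quiet)
{
    os << std::hex << "(0x " << allow << "/" << deny << "/"
       << audit << "/" << quiet << ")" << std::dec;
    /* back to decimal so later dump output is not silently hex */
}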
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-By: Steve Beattie <sbeattie@ubuntu.com>
There are some rare occasions, when lots of alternations are used, where
tree simplification can result in an expression of
(E | (E | E)) or (E . (E . E)) where E is the epsnode
Both of these expressions will lead to an infinite loop in
normalize_tree, as the epsnode test
if ((&epsnode == t->child[dir]) &&
(&epsnode != t->child[!dir]) &&
dynamic_cast<TwoChildNode *>(t)) {
and the tree node rotation test
} else if ((dynamic_cast<AltNode *>(t) &&
dynamic_cast<AltNode *>(t->child[dir])) ||
(dynamic_cast<CatNode *>(t) &&
dynamic_cast<CatNode *>(t->child[dir]))) {
end up undoing each other's work, i.e.
eps flip rotate
(E | (E | E)) --------> ((E | E) | E) -------> (E | (E | E))
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-By: Steve Beattie <sbeattie@ubuntu.com>
Minimization was failing because it was too aggressive. It was
minimizing as if there was only 1 accept condition. This allowed it to
remove more states, but at the cost of losing unique permission sets;
they were being combined into single cumulative perms. This means that
audit, deny, xtrans, ... info on one path would be applied to all other
paths that it was combined with during minimization.
This means that we need to retain the unique accept states, not allowing
them to be combined into a single state. To do this we put each unique
permission set into its own partition at the start of minimization.
The states within a partition have the same permissions and can be
combined with the other states in the partition, as the loss of unique
path information will not result in a conflict.
This is similar to what perm hashing used to do but deny information is
still being correctly applied and carried.
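Schematically, the seeding described above looks something like this
(types hypothetical, not the actual minimization code):

#include <list>
#include <map>
#include <stdint.h>

struct perms_t {
    uint32_t allow, deny, audit, quiet;
    bool operator<(const perms_t &o) const
    {
        if (allow != o.allow) return allow < o.allow;
        if (deny != o.deny) return deny < o.deny;
        if (audit != o.audit) return audit < o.audit;
        return quiet < o.quiet;
    }
};
struct State { perms_t perms; };
typedef std::list<State *> Partition;

/* one partition per unique permission set, so states carrying
 * different audit/deny/xtrans info can never be merged */
std::map<perms_t, Partition> seed(std::list<State *> &states)
{
    std::map<perms_t, Partition> parts;
    for (std::list<State *>::iterator i = states.begin();
         i != states.end(); ++i)
        parts[(*i)->perms].push_back(*i);
    return parts;
}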
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-By: Steve Beattie <sbeattie@ubuntu.com>
The x intersection consistency test for minimization was failing because
it was screening off the AA_MAY_EXEC permission before passing the exec
information to the consistency test fn. This resulted in the consistency
test fn not testing consistency, because it treated the permission set
as not having x permissions.
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-By: Steve Beattie <sbeattie@ubuntu.com>
Make them report hex value strings instead of the default C++
\vvvvv escaping.
Make them consistent:
- Dump reports the default transition and what isn't transitioned
on it.
Signed-off-by: John Johansen <john.johansen@canonical.com>
The permission reporting was not reporting the full set of permission
flags and was inconsistent between the dump routines.
Report permissions as the quad (allow/deny/audit/quiet) in hex.
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-By: Steve Beattie <sbeattie@ubuntu.com>
Fix the transition states output so that it outputs the state label
instead of the state address. That is
{1} -> 0x10831a0: /
now becomes
{1} -> {2}: /
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-By: Steve Beattie <sbeattie@ubuntu.com>
The pcre parser in the dfa backend is not correctly converting escaped
hex strings like
\0x0d
This is the minimal patch to fix it; we should investigate just using
the C/C++ conversion routines here.
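A minimal sketch of what using the C conversion routines might look like
(helper name hypothetical, not the actual parser code):

#include <stdlib.h>

/* convert the two hex digits of an escape like \0x0d to the byte 0x0d */
static int hex_escape(const char *digits)
{
    char buf[3] = { digits[0], digits[1], 0 };
    return (int)strtol(buf, NULL, 16);
}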
I also nominated this for the 2.7 series.
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Seth Arnold <seth.arnold@gmail.com>
The removal of deny information is a one-way operation that can result
in a smaller dfa, but it also results in a dfa that should not be used in
future operations, because the deny rules from the precomputed dfa would
not get applied.
For now, default filtering out of deny information to off, as it takes
extra time and seldom results in further state reduction.
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Kees Cook <kees@ubuntu.com>
Previously, permission information was thrown away early and permissions
were packed into their CHFA form at the start of DFA construction. Because
of this, permission hashing to set up the initial DFA partitions was
required, as x transition conflicts, etc. could not be resolved.
Move the mapping of permissions to CHFA construction, and track the full
permission set through DFA construction. This allows removal of the
perm_hashing hack, which prevented a full minimization from happening
in some DFAs. It also could result in x conflicts not being correctly
detected, and deny rules not being fully applied in some situations.
Eg.
pre full minimization
Created dfa: states 33451
Minimized dfa: final partitions 17033
with full minimization
Created dfa: states 33451
Minimized dfa: final partitions 9550
Dfa minimization no states removed: partitions 9550
The tracking of deny rules through to completed DFA construction creates
a new class of states: states that are marked as accepting (carry
permission information) but are in fact non-accepting, as they only
carry deny information. We add a second minimization pass where such
states have their permission information cleared and are thus moved into
the non-accepting partition.
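Roughly, the second pass is (sketch with hypothetical members):

#include <list>
#include <stdint.h>

struct perms_t { uint32_t allow, deny, audit, quiet; };
struct State { perms_t perms; };

/* a state whose only permission info is deny is really non-accepting:
 * clear it so another minimization pass can fold it into the
 * non-accepting partition */
void clear_deny_only(std::list<State *> &accepting)
{
    for (std::list<State *>::iterator i = accepting.begin();
         i != accepting.end(); ++i) {
        perms_t &p = (*i)->perms;
        if (!(p.allow | p.audit | p.quiet))
            p.deny = 0;
    }
}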
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Kees Cook <kees@ubuntu.com>
Delay the packing of audit and quiet permissions until chfa construction,
and track deny and quiet perms during DFA construction, so that we will
be able to do full minimization.
Signed-off-by: John Johansen <john.johansen@canonical.com>
Currently hfa::match calls hfa::match_len to do matching. However, this
requires walking the input string twice. Instead, provide a match routine
for input that is supposed to terminate at a given input character.
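Something along these lines (hypothetical state layout): the walk stops
at the terminating character directly rather than computing the length
first:

#include <map>

struct State {
    std::map<unsigned char, State *> trans;
    State *otherwise;          /* default transition */
    State *next(unsigned char c)
    {
        std::map<unsigned char, State *>::iterator i = trans.find(c);
        return i == trans.end() ? otherwise : i->second;
    }
};

/* single pass over the input; match_len(str, strlen(str)) walks it twice */
State *match_until(State *start, const char *str, char term)
{
    State *s = start;
    while (*str != term)
        s = s->next(*str++);
    return s;
}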
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-By: Steve Beattie <sbeattie@ubuntu.com>
Add the ability to match strings directly from the hfa instead of needing
to build a chfa.
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-By: Steve Beattie <sbeattie@ubuntu.com>
instead of a NodeSet.
We need to store sets of Nodes to compute the dfa, but the C++ set is
not the most efficient way to do this, as it has a lot of overhead just
to store a single pointer.
Instead we can use an array of tightly packed pointers plus some header
information. We can do this because once the set is finalized it will
not change; we just need to be able to reference and compare to it.
We don't use C++ vectors as they have more overhead than a plain array
and we don't need their additional functionality.
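A sketch of the packed layout (hypothetical and simplified, not the
actual NodeVec definition): a small fixed header followed by the node
pointers packed inline:

#include <assert.h>
#include <stdlib.h>
#include <string.h>

struct Node;

/* immutable once built: header plus tightly packed pointers, with no
 * per-element red-black tree overhead as in std::set */
struct NodeVec {
    unsigned long hash;        /* precomputed for fast comparison */
    unsigned long len;
    Node *nodes[1];            /* allocated with len entries */
};

NodeVec *make_nodevec(Node **sorted, unsigned long n)
{
    assert(n > 0);
    NodeVec *v = (NodeVec *)malloc(sizeof(NodeVec) +
                                   (n - 1) * sizeof(Node *));
    v->len = n;
    memcpy(v->nodes, sorted, n * sizeof(Node *));
    v->hash = 0;               /* fill in from the node pointers as needed */
    return v;
}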
We only replace the use of hashedNodeSets for non-accepting states as
these sets are only used in the dfa construction, and dominate the memory
usage. The accepting states still may need to be modified during
minimization and there are only a small number of entries (20-30), so
it does not make sense to convert them.
Also introduce a NodeVec cache that serves the same purpose as the NodeSet
cache that was introduced earlier.
This is not abstracted out as nicely as might be desired, but avoiding
the use of a custom iterator and directly iterating on the Node array
allows for a small performance gain on larger sets.
This patch reduces the amount of heap memory used by dfa creation by
about 4x - overhead. So for small dfas the savings are only 2-3x, but on
larger dfas the savings become more and more pronounced.
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Kees Cook <kees@ubuntu.com>
non-accepting, and have the proto-state use them.
To reduce memory overhead, each set gains its own "cache" that makes
sure there is only a single instance of each NodeSet generated. And since
we have a cache abstraction, move relevant stats into it.
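In essence the cache is a canonicalizing set (sketch with a hypothetical
comparator): inserting returns the already stored instance when an equal
NodeSet exists:

#include <set>
#include <utility>

struct Node;
typedef std::set<Node *> NodeSet;

struct deref_less {
    bool operator()(const NodeSet *a, const NodeSet *b) const
    {
        return *a < *b;        /* compare contents, not pointers */
    }
};
typedef std::set<NodeSet *, deref_less> Cache;

/* return the single canonical instance for these contents */
NodeSet *cache_insert(Cache &cache, NodeSet *n)
{
    std::pair<Cache::iterator, bool> res = cache.insert(n);
    if (!res.second)
        delete n;              /* duplicate; reuse the cached copy */
    return *res.first;
}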
Also refactor code slightly to make caches and work_queue etc, DFA member
variables instead of passing them as parameters.
The split + caching results in a small reduction in memory use, as the
cost of ProtoState + caching is less than the redundancy that is
eliminated. However this results in a small decrease in performance.
Sorry, I know this really should have been split into multiple patches,
but the patch evolved and I got lazy and decided to just not bother
splitting it.
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Kees Cook <kees@ubuntu.com>
It is the functional equivalent of ProtoState. We do this to provide a
new level of abstraction that ProtoState can leverage, when the node types
are split.
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Kees Cook <kees@ubuntu.com>
Create a new ProtoState class that will encapsulate the split, but for
this patch it will just contain what was done previously with NodeSet.
Signed-off-by: John Johansen <john.johansen@canonical.com>
is done to be clear what TransitionTable is, as we will then add matching
capabilities. Renaming the files is just to make them consistent with
the class in the file.
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-by: Kees Cook <kees@ubuntu.com>
Allow dumping out which states were dropped during partition minimization
and which state became the partition's representative state.
Signed-off-by: John Johansen <john.johansen@canonical.com>
Acked-By: Steve Beattie <sbeattie@ubuntu.com>
The dfa graph dump was broken by previous dfa cleanups, so that the
graph transition target was output as a pointer instead of the dfa state
number.
Signed-off-by: John Johansen <john.johansen@canonical.com>
32bit arch, due to size_t objects being passed to fprintf with format
strings expecting longs. It does this by adjusting the fprintf rules
to expect size_t objects.
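For reference, the %zu length modifier is the portable way to print a
size_t (illustrative call, not the actual dump sites):

#include <stdio.h>

void report(size_t states)
{
    /* %lu with a cast also works, but %zu matches size_t on both
     * 32 and 64 bit targets without a cast */
    fprintf(stderr, "states: %zu\n", states);
}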
the default compilation rules when compiling C++ files, so that things
like CFLAGS et al will be honored. Without this, doing 'make DEBUG=y'
in the parser/ tree will not have its added -pg flag honored, breaking
profiling of the parser.