treewide: spelling/typo fixes in comments and docs

With the exception of the documentation fixes, these should all be
invisible to users.

Signed-off-by: Steve Beattie <steve.beattie@canonical.com>
Acked-by: Christian Boltz <apparmor@cboltz.de>
MR: https://gitlab.com/apparmor/apparmor/-/merge_requests/687
This commit is contained in:
Steve Beattie 2020-11-19 12:30:04 -08:00
parent 7c88f02d6a
commit 461d9c2294
74 changed files with 131 additions and 131 deletions

@@ -412,7 +412,7 @@ register_hooks(unused_ apr_pool_t *p)
 module AP_MODULE_DECLARE_DATA apparmor_module = {
     STANDARD20_MODULE_STUFF,
-    aa_create_dir_config,    /* dir config creater */
+    aa_create_dir_config,    /* dir config creator */
     NULL,                    /* dir merger --- default is to override */
     /* immunix_merge_dir_config, */    /* dir merger --- default is to override */
     aa_create_srv_config,    /* server config */

@@ -66,8 +66,8 @@ under src/jni_src.
 cp dist/libJNIChangeHat.so /usr/lib
 [Note: you must ensure that the target directory is passed to tomcat via the
-java.library.path propert. This can be accomplished by setting the JAVA_OPTS
-enviroment variable, export JAVA_OPTS=-Djava.library.path, or set via the
+java.library.path property. This can be accomplished by setting the JAVA_OPTS
+environment variable, export JAVA_OPTS=-Djava.library.path, or set via the
 env variable LD_LIBRARY_PATH to include this directory so that tomcat can
 find this library at startup]
@@ -108,13 +108,13 @@ under src/jni_src.
 Once the installation steps above have been started you are ready to begin
 creating a profile for your application. The profile creation tool genprof will
 guide you through generating a profile and its support for change_hat will
-prompt you create discrete hats as requested byt the changeHatValve during
+prompt you create discrete hats as requested by the changeHatValve during
 tomcat execution.
 1. Create a basic profile for the tomcat server.
    - Run the command "genprof PATH_TO_CATALINA.SH"
-   - In a seperate window start tomcat and then stop tomcat
+   - In a separate window start tomcat and then stop tomcat
   - In the genprof window press "S" to scan for events
   - Answer the questions about the initial profile for tomcat
@@ -124,7 +124,7 @@ tomcat execution.
   - Stop the tomcat server
   - Deploy your WAR file or equivalent files under the container.
   - execute "genprof PATH_TO_CATALINA.SH"
-  - In a seperate window start tomcat and then exercise your web application
+  - In a separate window start tomcat and then exercise your web application
   - In the genprof window press "S" to scan for events
 During the prompting you will be asked questions similar to:
@@ -180,7 +180,7 @@ all subsequent resource requests will be mediated in this hew hat (or security
 context).
 If you choose to use the default hat: genprof will mediate all resource
 requests in the default hat for the duration of processing this request.
-When the request processng is complete the valve will change_hat back to the
+When the request processing is complete the valve will change_hat back to the
 parent context.

@@ -66,8 +66,8 @@ under src/jni_src.
 cp dist/libJNIChangeHat.so /usr/lib
 [Note: you must ensure that the target directory is passed to tomcat via the
-java.library.path propert. This can be accomplished by setting the JAVA_OPTS
-enviroment variable, export JAVA_OPTS=-Djava.library.path, or set via the
+java.library.path property. This can be accomplished by setting the JAVA_OPTS
+environment variable, export JAVA_OPTS=-Djava.library.path, or set via the
 env variable LD_LIBRARY_PATH to include this directory so that tomcat can
 find this library at startup]
@@ -108,13 +108,13 @@ under src/jni_src.
 Once the installation steps above have been started you are ready to begin
 creating a profile for your application. The profile creation tool genprof will
 guide you through generating a profile and its support for change_hat will
-prompt you create discrete hats as requested byt the changeHatValve during
+prompt you create discrete hats as requested by the changeHatValve during
 tomcat execution.
 1. Create a basic profile for the tomcat server.
    - Run the command "genprof PATH_TO_CATALINA.SH"
-   - In a seperate window start tomcat and then stop tomcat
+   - In a separate window start tomcat and then stop tomcat
   - In the genprof window press "S" to scan for events
   - Answer the questions about the initial profile for tomcat
@@ -124,7 +124,7 @@ tomcat execution.
  - Stop the tomcat server
  - Deploy your WAR file or equivalent files under the container.
  - execute "genprof PATH_TO_CATALINA.SH"
- - In a seperate window start tomcat and then exercise your web application
+ - In a separate window start tomcat and then exercise your web application
  - In the genprof window press "S" to scan for events
 During the prompting you will be asked questions similar to:
@@ -180,7 +180,7 @@ all subsequent resource requests will be mediated in this hew hat (or security
 context).
 If you choose to use the default hat: genprof will mediate all resource
 requests in the default hat for the duration of processing this request.
-When the request processng is complete the valve will change_hat back to the
+When the request processing is complete the valve will change_hat back to the
 parent context.

@@ -6,7 +6,7 @@
 # the source tree
 # =====================
-# It doesn't make sence for AppArmor to mediate PF_UNIX, filter it out. Search
+# It doesn't make sense for AppArmor to mediate PF_UNIX, filter it out. Search
 # for "PF_" constants since that is what is required in bits/socket.h, but
 # rewrite as "AF_".

@@ -125,7 +125,7 @@ layer. Binary policy cache files will be located in the directory
 returned by this function.
 The aa_policy_cache_dir_levels() function provides access to the number
-of directories that are being overlayed to create the policy cache.
+of directories that are being overlaid to create the policy cache.
 =head1 RETURN VALUE

@@ -373,7 +373,7 @@ key: TOK_KEY_OPERATION TOK_EQUALS TOK_QUOTED_STRING
 	| TOK_KEY_CAPABILITY TOK_EQUALS TOK_DIGITS
 	{ /* need to reverse map number to string, need to figure out
 	   * how to get auto generation of reverse mapping table into
-	   * autotools Makefile. For now just drop assumming capname is
+	   * autotools Makefile. For now just drop assuming capname is
 	   * present which it should be with current kernels */
 	}
 	| TOK_KEY_CAPNAME TOK_EQUALS TOK_QUOTED_STRING
@@ -381,7 +381,7 @@ key: TOK_KEY_OPERATION TOK_EQUALS TOK_QUOTED_STRING
 	  ret_record->name = $3;
 	}
 	| TOK_KEY_OFFSET TOK_EQUALS TOK_DIGITS
-	{ /* offset is used for reporting where an error occured unpacking
+	{ /* offset is used for reporting where an error occurred unpacking
 	   * loaded policy. We can just drop this currently
 	   */
 	}

@@ -1101,9 +1101,9 @@ int aa_query_link_path_len(const char *label, size_t label_len,
 	query[pos] = 0;
 	query[++pos] = AA_CLASS_FILE;
 	memcpy(query + pos + 1, link, link_len);
-	/* The kernel does the query in two parts we could similate this
+	/* The kernel does the query in two parts; we could simulate this
 	 * doing the following, however as long as policy is compiled
-	 * correctly this isn't requied, and it requires and extra round
+	 * correctly this isn't required, and it requires an extra round
 	 * trip to the kernel and adds a race on policy replacement between
 	 * the two queries.
 	 *

@@ -90,7 +90,7 @@ static int write_buffer(int fd, const char *buffer, int size)
 /**
  * write_policy_buffer - load compiled policy into the kernel
- * @fd: kernel iterface to write to
+ * @fd: kernel interface to write to
  * @atomic: whether to load all policy in buffer atomically (true)
  * @buffer: buffer of policy to load
  * @size: the size of the data in the buffer
@@ -205,7 +205,7 @@ static int write_policy_file_to_iface(aa_kernel_interface *kernel_interface,
  * @apparmorfs: path to the apparmor directory of the mounted securityfs (can
  *              be NULL and the path will be auto discovered)
  *
- * Returns: 0 on success, -1 on error with errnot set and *@kernel_interface
+ * Returns: 0 on success, -1 on error with errno set and *@kernel_interface
  *          pointing to NULL
  */
 int aa_kernel_interface_new(aa_kernel_interface **kernel_interface,

@@ -63,7 +63,7 @@ struct ignored_suffix_t {
 };
 static struct ignored_suffix_t ignored_suffixes[] = {
-	/* Debian packging files, which are in flux during install
+	/* Debian packaging files, which are in flux during install
 	   should be silently ignored. */
 	{ ".dpkg-new", 9, 1 },
 	{ ".dpkg-old", 9, 1 },
@@ -147,7 +147,7 @@ int _aa_is_blacklisted(const char *name)
 	return 0;
 }
-/* automaticly free allocated variables tagged with autofree on fn exit */
+/* automatically free allocated variables tagged with autofree on fn exit */
 void _aa_autofree(void *p)
 {
 	void **_p = (void**)p;

@@ -1,5 +1,5 @@
-# Runs all tests with the extention "multi" for several times.
-# Each testprogram <programname>.multi has an own subdirectory
+# Runs all tests with the extension "multi" for several times.
+# Each test program <programname>.multi has its own subdirectory
 # <programmname> in which several testcases are defined for this program
 # Each testcase has 3 files:
 #

@@ -37,7 +37,7 @@ static struct supported_cond supported_conds[] = {
 	{ "type", true, false, false, local_cond },
 	{ "protocol", false, false, false, local_cond },
 	{ "label", true, false, false, peer_cond },
-	{ NULL, false, false, false, local_cond },	/* eol sentinal */
+	{ NULL, false, false, false, local_cond },	/* eol sentinel */
 };
 bool af_rule::cond_check(struct supported_cond *conds, struct cond_entry *ent,

@@ -29,7 +29,7 @@
 #include "profile.h"
 #include "af_unix.h"
-/* See unix(7) for autobind address definiation */
+/* See unix(7) for autobind address definition */
 #define autobind_address_pattern "\\x00[0-9a-f][0-9a-f][0-9a-f][0-9a-f][0-9a-f]";
 int parse_unix_mode(const char *str_mode, int *mode, int fail)
@@ -40,7 +40,7 @@ int parse_unix_mode(const char *str_mode, int *mode, int fail)
 static struct supported_cond supported_conds[] = {
 	{ "addr", true, false, false, either_cond },
-	{ NULL, false, false, false, local_cond },	/* sentinal */
+	{ NULL, false, false, false, local_cond },	/* sentinel */
 };
 void unix_rule::move_conditionals(struct cond_entry *conds)
@@ -351,7 +351,7 @@ int unix_rule::gen_policy_re(Profile &prof)
 		/* local label option */
 		if (!write_label(tmp, label))
 			goto fail;
-		/* seperator */
+		/* separator */
 		tmp << "\\x00";
 	buf = tmp.str();
@@ -372,7 +372,7 @@ int unix_rule::gen_policy_re(Profile &prof)
 		/* local label option */
 		if (!write_label(buffer, label))
 			goto fail;
-		/* seperator */
+		/* separator */
 		buffer << "\\x00";
 	/* create already masked off */

@@ -8,7 +8,7 @@ chfa.{h,cc} - code to build a highly compressed runtime readonly version
 	of an hfa.
 aare_rules.{h,cc} - code to that binds parse -> expr-tree -> hfa generation
 	-> chfa generation into a basic interface for converting
-	rules to a runtime ready statemachine.
+	rules to a runtime ready state machine.
 Regular Expression Scanner Generator
 ====================================
@@ -19,12 +19,12 @@ Notes in the scanner File Format
 The file format used is based on the GNU flex table file format
 (--tables-file option; see Table File Format in the flex info pages and
 the flex sources for documentation). The magic number used in the header
-is set to 0x1B5E783D insted of 0xF13C57B1 though, which is meant to
+is set to 0x1B5E783D instead of 0xF13C57B1 though, which is meant to
 indicate that the file format logically is not the same: the YY_ID_CHK
 (check) and YY_ID_DEF (default) tables are used differently.
 Flex uses state compression to store only the differences between states
-for states that are similar. The amount of compresion influences the parse
+for states that are similar. The amount of compression influences the parse
 speed.
 The following two states could be stored as in the tables outlined

@@ -23,7 +23,7 @@
  * it can be factored so that the set of important nodes is smaller.
  * Having a reduced set of important nodes generally results in a dfa that
  * is closer to minimum (fewer redundant states are created). It also
- * results in fewer important nodes in a the state set during subset
+ * results in fewer important nodes in the state set during subset
  * construction resulting in less memory used to create a dfa.
  *
  * Generally it is worth doing expression tree simplification before dfa
@@ -150,7 +150,7 @@ void Node::dump_syntax_tree(ostream &os)
 }
 /*
- * Normalize the regex parse tree for factoring and cancelations. Normalization
+ * Normalize the regex parse tree for factoring and cancellations. Normalization
  * reorganizes internal (alt and cat) nodes into a fixed "normalized" form that
 * simplifies factoring code, in that it produces a canonicalized form for
 * the direction being normalized so that the factoring code does not have
@@ -172,10 +172,10 @@ void Node::dump_syntax_tree(ostream &os)
 *    dir to !dir. Until no dir direction node meets the criterial.
 *    Then recurse to the children (which will have a different node type)
 *    to make sure they are normalized.
- *    Normalization of a child node is guarenteed to not affect the
+ *    Normalization of a child node is guaranteed to not affect the
 *    normalization of the parent.
 *
- * For cat nodes the depth first traverse order is guarenteed to be
+ * For cat nodes the depth first traverse order is guaranteed to be
 * maintained. This is not necessary for altnodes.
 *
 * Eg. For left normalization

@@ -651,13 +651,13 @@ void DFA::minimize(dfaflags_t flags)
 	list<Partition *> partitions;
 	/* Set up the initial partitions
-	 * minimium of - 1 non accepting, and 1 accepting
+	 * minimum of - 1 non accepting, and 1 accepting
 	 * if trans hashing is used the accepting and non-accepting partitions
 	 * can be further split based on the number and type of transitions
 	 * a state makes.
 	 * If permission hashing is enabled the accepting partitions can
 	 * be further divided by permissions. This can result in not
-	 * obtaining a truely minimized dfa but comes close, and can speedup
+	 * obtaining a truly minimized dfa but comes close, and can speedup
 	 * minimization.
 	 */
 	int accept_count = 0;
@@ -753,7 +753,7 @@ void DFA::minimize(dfaflags_t flags)
 	/* Remap the dfa so it uses the representative states
 	 * Use the first state of a partition as the representative state
-	 * At this point all states with in a partion have transitions
+	 * At this point all states with in a partition have transitions
 	 * to states within the same partitions, however this can slow
 	 * down compressed dfa compression as there are more states,
 	 */
@@ -813,7 +813,7 @@ void DFA::minimize(dfaflags_t flags)
 	}
 	/* Now that the states have been remapped, remove all states
-	 * that are not the representive states for their partition, they
+	 * that are not the representative states for their partition, they
 	 * will have a label == -1
 	 */
 	for (Partition::iterator i = states.begin(); i != states.end();) {
@@ -875,7 +875,7 @@ static int diff_partition(State *state, Partition &part, int max_range, int uppe
 /**
  * diff_encode - compress dfa by differentially encoding state transitions
- * @dfa_flags: flags controling dfa creation
+ * @dfa_flags: flags controlling dfa creation
  *
 * This function reduces the number of transitions that need to be stored
 * by encoding transitions as the difference between the state and a
@@ -889,7 +889,7 @@ static int diff_partition(State *state, Partition &part, int max_range, int uppe
 * - The number of state transitions needed to match an input of length
 *   m will be 2m
 *
- * To guarentee this the ordering and distance calculation is done in the
+ * To guarantee this the ordering and distance calculation is done in the
 * following manner.
 * - A DAG of the DFA is created starting with the start state(s).
 * - A state can only be relative (have a differential encoding) to

@@ -189,7 +189,7 @@ struct DiffDag {
 * accept: the accept permissions for the state
 * trans: set of transitions from this state
 * otherwise: the default state for transitions not in @trans
- * parition: Is a temporary work variable used during dfa minimization.
+ * partition: Is a temporary work variable used during dfa minimization.
 *   it can be replaced with a map, but that is slower and uses more
 *   memory.
 * proto: Is a temporary work variable used during dfa creation. It can

@@ -76,7 +76,7 @@ static inline Chars* insert_char_range(Chars* cset, transchar a, transchar b)
 %%
 /* FIXME: Does not parse "[--]", "[---]", "[^^-x]". I don't actually know
-   which precise grammer Perl regexs use, and rediscovering that
+   which precise grammar Perl regexs use, and rediscovering that
    is proving to be painful. */
 regex : /* empty */ { *root = $$ = &epsnode; }

@@ -206,7 +206,7 @@
 * AppArmor mount rule encoding
 *
 * TODO:
- *   add semantic checking of options against specified filesytem types
+ *   add semantic checking of options against specified filesystem types
 *   to catch mount options that can't be covered.
 *
 *

@@ -1,7 +1,7 @@
 # parser.conf is a global AppArmor config file for the apparmor_parser
 #
 # It can be used to specify the default options for the parser, which
-# can then be overriden by options passed on the command line.
+# can then be overridden by options passed on the command line.
 #
 # Leading whitespace is ignored and lines that begin with # are treated
 # as comments.
@@ -43,7 +43,7 @@
 #skip-read-cache
-#### Set Optimizaions. Multiple Optimizations can be set, one per line ####
+#### Set Optimizations. Multiple Optimizations can be set, one per line ####
 # For supported optimizations see
 #   apparmor_parser --help=O

@@ -23,7 +23,7 @@
 We support 2 types of includes
-#include <name> which searches for the first occurance of name in the
+#include <name> which searches for the first occurrence of name in the
 apparmor directory path.
 #include "name" which will search for a relative or absolute pathed
@@ -60,7 +60,7 @@
 static char *path[MAX_PATH] = { NULL };
 static int npath = 0;
-/* default base directory is /etc/apparmor.d, it can be overriden
+/* default base directory is /etc/apparmor.d, it can be overridden
    with the -b option. */
 const char *basedir;

@@ -359,7 +359,7 @@ void sd_serialize_xtable(std::ostringstream &buf, char **table)
 		int len = strlen(table[i]) + 1;
 		/* if its a namespace make sure the second : is overwritten
-		 * with 0, so that the namespace and name are \0 seperated
+		 * with 0, so that the namespace and name are \0 separated
 		 */
 		if (*table[i] == ':') {
 			char *tmp = table[i] + 1;

@@ -433,7 +433,7 @@ int arg_pass(int c) {
 	return LATE_ARG;
 }
-/* process a single argment from getopt_long
+/* process a single argument from getopt_long
 * Returns: 1 if an action arg, else 0
 */
 #define DUMP_HEADER " variables \tDump variables\n" \
@@ -1258,7 +1258,7 @@ do { \
 * from work_spawn and work_sync. We could throw a C++ exception, is it
 * worth doing it to avoid the exit here.
 *
- * atm not all resources maybe cleanedup at exit
+ * atm not all resources may be cleaned up at exit
 */
 int last_error = 0;
 void handle_work_result(int retval)
@@ -1288,7 +1288,7 @@ static long compute_jobs(long n, long j)
 static void setup_parallel_compile(void)
 {
-	/* jobs and paralell_max set by default, config or args */
+	/* jobs and parallel_max set by default, config or args */
 	long n = sysconf(_SC_NPROCESSORS_ONLN);
 	long maxn = sysconf(_SC_NPROCESSORS_CONF);
 	if (n == -1)

@@ -534,7 +534,7 @@ static int process_profile_name_xmatch(Profile *prof)
 		int len;
 		tbuf.clear();
 		/* prepend \x00 to every value. This is
-		 * done to separate the existance of the
+		 * done to separate the existence of the
 		 * xattr from a null value match.
 		 *
 		 * if an xattr exists, a single \x00 will

@@ -112,7 +112,7 @@ static const char *const sig_names[MAXMAPPED_SIG + 1] = {
 	"lost",
 	"unused",
-	"exists",	/* always last existance test mapped to MAXMAPPED_SIG */
+	"exists",	/* always last existence test mapped to MAXMAPPED_SIG */
 };

@@ -240,7 +240,7 @@ and may grant confined processes specific mount operations.
 The security model of the various versions of NFS is that files are
 looked up by name as usual, but after that lookup, each file is only
-identified by a file handle in successive acesses. The file handle at a
+identified by a file handle in successive accesses. The file handle at a
 minimum includes some sort of filesystem identifier and the file's inode
 number. In Linux, the file handles used by most filesystems also
 include the inode number of the parent directory; this may change in the
@@ -816,7 +816,7 @@ one (this option may be used even if no profile by that name exists):
 \subsection{Anatomy of a Profile}
-AppArmor profiles use a simple declaritive language, fully described in
+AppArmor profiles use a simple declarative language, fully described in
 the apparmor.d(5) manual page. By convention, profiles are stored in
 /etc/{\H}apparmor.d/. The AppArmor parser supports a simple cpp-style
 include mechanism to allow sharing pieces of policy. A simple profile

@@ -10,7 +10,7 @@ against a different parser, or use a different set of profiles for the
 simple.pl test, you can change those settings in 'uservars.conf'.
 You can also override which parser is used through make by specifying
-the PARSER veriable. For example, to run the tests on the system parser,
+the PARSER variable. For example, to run the tests on the system parser,
 run 'make PARSER=/sbin/apparmor_parser'.
 Adding to the testsuite
@@ -61,7 +61,7 @@ The simple script looks for a few special comments in the profile,
   expected parse result of PASS.
 - #=TODO -- marks the test as being for a future item to implement and
-  thus are expected testsuite failures and hsould be ignored.
+  thus are expected testsuite failures and should be ignored.
 - #=DISABLED -- skips the test, and marks it as a failed TODO task.
   Useful if the particular testcase causes the parser to infinite

View file

@ -568,7 +568,7 @@ verify_binary_equality "set rlimit memlock <= 2GB" \
# Unfortunately we can not just compare an empty profile and hat to a # Unfortunately we can not just compare an empty profile and hat to a
# ie. "/t { ^test { /f r, }}" # ie. "/t { ^test { /f r, }}"
# to the second profile with the equivalent rule inserted manually # to the second profile with the equivalent rule inserted manually
# because policy write permission "w" actually expands to mutiple permissions # because policy write permission "w" actually expands to multiple permissions
# under the hood, and the parser is not adding those permissions # under the hood, and the parser is not adding those permissions
# to the rules it auto generates # to the rules it auto generates
# So we insert the rule with "append" permissions, and rely on the parser # So we insert the rule with "append" permissions, and rely on the parser

View file

@ -5,7 +5,7 @@ APPARMOR_PARSER="${APPARMOR_PARSER:-../apparmor_parser}"
# Format of -D dfa-states # Format of -D dfa-states
# dfa-states output is split into 2 parts: # dfa-states output is split into 2 parts:
# the accept state infomation # the accept state information
# {state} (allow deny audit XXX) ignore XXX for now # {state} (allow deny audit XXX) ignore XXX for now
# followed by the transition table information # followed by the transition table information
# {Y} -> {Z}: 0xXX Char #0xXX is the hex dump of Char # {Y} -> {Z}: 0xXX Char #0xXX is the hex dump of Char
@ -43,7 +43,7 @@ APPARMOR_PARSER="${APPARMOR_PARSER:-../apparmor_parser}"
# These tests currently only look at the accept state permissions # These tests currently only look at the accept state permissions
# #
# To view any of these DFAs as graphs replace -D dfa-states with -D dfa-graph # To view any of these DFAs as graphs replace -D dfa-states with -D dfa-graph
# strip of the test stuff around the parser command and use the the dot # strip off the test stuff around the parser command and use the dot
# command to convert # command to convert
# Eg. # Eg.
# echo "/t { /a r, /b w, /c a, /d l, /e k, /f m, deny /** w, }" | ./apparmor_parser -QT -O minimize -D dfa-graph --quiet 2>min.graph # echo "/t { /a r, /b w, /c a, /d l, /e k, /f m, deny /** w, }" | ./apparmor_parser -QT -O minimize -D dfa-graph --quiet 2>min.graph
@ -100,7 +100,7 @@ fi
echo "ok" echo "ok"
# same test as above except with deny 'w' perm added to /**, this does not # same test as above except with deny 'w' perm added to /**, this does not
# elimnates the states with 'w' and 'a' because the quiet information is # eliminate the states with 'w' and 'a' because the quiet information is
# being carried # being carried
# #
# {1} <== (allow/deny/audit/quiet) # {1} <== (allow/deny/audit/quiet)
@ -119,7 +119,7 @@ fi
echo "ok" echo "ok"
# same test as above except with audit deny 'w' perm added to /**, with the # same test as above except with audit deny 'w' perm added to /**, with the
# parameter this elimnates the states with 'w' and 'a' because # parameter this eliminates the states with 'w' and 'a' because
# the quiet information is NOT being carried # the quiet information is NOT being carried
# #
# {1} <== (allow/deny/audit/quiet) # {1} <== (allow/deny/audit/quiet)
@ -139,7 +139,7 @@ echo "ok"
# The x transition test profile is set up so that there are 3 conflicting x # The x transition test profile is set up so that there are 3 conflicting x
# permissions, two are on paths that won't collide during dfa creation. The # permissions, two are on paths that won't collide during dfa creation. The
# 3rd is a generic permission that should be overriden during dfa creation. # 3rd is a generic permission that should be overridden during dfa creation.
# #
# This should result in a dfa that specifies transitions on 'a' and 'b' to # This should result in a dfa that specifies transitions on 'a' and 'b' to
# unique states that store the alternate accept information. However # unique states that store the alternate accept information. However
@ -190,7 +190,7 @@ fi
echo "ok" echo "ok"
# now try audit + denying x and make sure perms are cleared # now try audit + denying x and make sure perms are cleared
# notice that the deny info is being carried, by an artifical trap state # notice that the deny info is being carried, by an artificial trap state
# {1} <== (allow/deny/audit/quiet) # {1} <== (allow/deny/audit/quiet)
# {3} (0x 0/fe17f85/0/0) # {3} (0x 0/fe17f85/0/0)

View file

@ -1,5 +1,5 @@
# #
#=DESCRIPTION validate some uses of capabilties. #=DESCRIPTION validate some uses of capabilities.
#=EXRESULT PASS #=EXRESULT PASS
# vim:syntax=subdomain # vim:syntax=subdomain
# Last Modified: Sun Apr 17 19:44:44 2005 # Last Modified: Sun Apr 17 19:44:44 2005

View file

@ -1,5 +1,5 @@
# #
#=DESCRIPTION validate some uses of capabilties. #=DESCRIPTION validate some uses of capabilities.
#=EXRESULT PASS #=EXRESULT PASS
# vim:syntax=subdomain # vim:syntax=subdomain
# Last Modified: Sun Apr 17 19:44:44 2005 # Last Modified: Sun Apr 17 19:44:44 2005

View file

@ -1,5 +1,5 @@
# #
#=DESCRIPTION validate some uses of capabilties. #=DESCRIPTION validate some uses of capabilities.
#=EXRESULT PASS #=EXRESULT PASS
# vim:syntax=subdomain # vim:syntax=subdomain
# Last Modified: Sun Apr 17 19:44:44 2005 # Last Modified: Sun Apr 17 19:44:44 2005

View file

@ -1,5 +1,5 @@
# #
#=DESCRIPTION validate some uses of capabilties. #=DESCRIPTION validate some uses of capabilities.
#=EXRESULT FAIL #=EXRESULT FAIL
# vim:syntax=subdomain # vim:syntax=subdomain
# Last Modified: Sun Apr 17 19:44:44 2005 # Last Modified: Sun Apr 17 19:44:44 2005

View file

@ -1,4 +1,4 @@
#=DESCRIPTION conditional else in invlaid locations #=DESCRIPTION conditional else in invalid locations
#=EXRESULT FAIL #=EXRESULT FAIL
$BAR = false $BAR = false

View file

@ -1,5 +1,5 @@
# #
#=DESCRIPTION m and [upi]x do not conflict, seperate rules #=DESCRIPTION m and [upi]x do not conflict, separate rules
#=EXRESULT PASS #=EXRESULT PASS
# vim:syntax=apparmor # vim:syntax=apparmor
# #

View file

@ -1,5 +1,5 @@
# #
#=DESCRIPTION m and [upi]x do not conflict, seperate rules #=DESCRIPTION m and [upi]x do not conflict, separate rules
#=EXRESULT PASS #=EXRESULT PASS
# #
/usr/bin/foo { /usr/bin/foo {

View file

@ -1,5 +1,5 @@
# #
#=DESCRIPTION m and [upi]x do not conflict, seperate rules #=DESCRIPTION m and [upi]x do not conflict, separate rules
#=EXRESULT PASS #=EXRESULT PASS
# #
/usr/bin/foo { /usr/bin/foo {

View file

@ -1,5 +1,5 @@
# #
#=DESCRIPTION netdomain tcp connect w/multiple from statments #=DESCRIPTION netdomain tcp connect w/multiple from statements
#=EXRESULT FAIL #=EXRESULT FAIL
/tmp/tcp/tcp_client { /tmp/tcp/tcp_client {
tcp_connect from 10.0.0.17/16:50-100 from 127.0.0.1 via eth1, tcp_connect from 10.0.0.17/16:50-100 from 127.0.0.1 via eth1,

View file

@ -1,6 +1,6 @@
# #
# $Id$ # $Id$
#=DESCRIPTION Basic namespace test wit named profile, duplicate mode bits #=DESCRIPTION Basic namespace test with named profile, duplicate mode bits
#=EXRESULT PASS #=EXRESULT PASS
# vim:syntax=subdomain # vim:syntax=subdomain
# Last Modified: Sun Apr 17 19:44:44 2005 # Last Modified: Sun Apr 17 19:44:44 2005

View file

@ -1,5 +1,5 @@
# #
#=DESCRIPTION simple max virtual memory szie rlimit test #=DESCRIPTION simple max virtual memory size rlimit test
#=EXRESULT PASS #=EXRESULT PASS
profile rlimit { profile rlimit {

View file

@ -50,7 +50,7 @@ else
LOGPROF?=LD_LIBRARY_PATH=$(LD_LIBRARY_PATH) PYTHONPATH=$(PYTHONPATH) $(PYTHON) ../utils/aa-logprof --configdir ../utils/test/ LOGPROF?=LD_LIBRARY_PATH=$(LD_LIBRARY_PATH) PYTHONPATH=$(PYTHONPATH) $(PYTHON) ../utils/aa-logprof --configdir ../utils/test/
endif endif
# $(PWD) is wrong when using "make -C profiles" - explicitely set it here to get the right value # $(PWD) is wrong when using "make -C profiles" - explicitly set it here to get the right value
PWD=$(shell pwd) PWD=$(shell pwd)
.PHONY: test-dependencies .PHONY: test-dependencies

View file

@ -29,8 +29,8 @@
# include <abstractions/ubuntu-browsers> # include <abstractions/ubuntu-browsers>
# include <abstractions/ubuntu-email> # include <abstractions/ubuntu-email>
# #
# # Add if accesibility access is considered as required # # Add if accessibility access is considered as required
# # (for message boxe in case exo-open fails) # # (for message box in case exo-open fails)
# include <abstractions/dbus-accessibility> # include <abstractions/dbus-accessibility>
# #
# # < add additional allowed applications here > # # < add additional allowed applications here >

View file

@ -29,8 +29,8 @@
# include <abstractions/ubuntu-browsers> # include <abstractions/ubuntu-browsers>
# include <abstractions/ubuntu-email> # include <abstractions/ubuntu-email>
# #
# # Add if accesibility access is considered as required # # Add if accessibility access is considered as required
# # (for message boxe in case exo-open fails) # # (for message box in case exo-open fails)
# include <abstractions/dbus-accessibility> # include <abstractions/dbus-accessibility>
# #
# # Add if audio support for message box is # # Add if audio support for message box is

View file

@ -14,7 +14,7 @@
# it is intended to be included in profiles for svnserve/apache2 and maybe # it is intended to be included in profiles for svnserve/apache2 and maybe
# some repository viewers like trac/viewvc # some repository viewers like trac/viewvc
# no hooks exec by default; please define whatever you need explicitely. # no hooks exec by default; please define whatever you need explicitly.
/srv/svn/**/conf/* r, /srv/svn/**/conf/* r,
/srv/svn/**/format r, /srv/svn/**/format r,

View file

@ -41,7 +41,7 @@
include <abstractions/base> include <abstractions/base>
# for openin with `exo-open` # for opening with `exo-open`
include <abstractions/exo-open> include <abstractions/exo-open>
# for opening with `gio open <uri>` # for opening with `gio open <uri>`

View file

@ -112,8 +112,8 @@ argument or the end of the argument list will be included within this hat.
Support for multiple profiles within a single load (for example for Support for multiple profiles within a single load (for example for
tests that want to domain transition to another profile) is supported by tests that want to domain transition to another profile) is supported by
the "image' argument to genprofile. This keyword preceeded by a '--' the "image" argument to genprofile. This keyword, preceded by a '--'
seperator terminates the previous profile and creates a new profile for separator, terminates the previous profile and creates a new profile for
the specified executable image. the specified executable image.
Together, 'image' and 'hat:' allow complex profiles including subhats and Together, 'image' and 'hat:' allow complex profiles including subhats and
@ -184,7 +184,7 @@ requiring signal passing)
<check its output, it is expected to FAIL> <check its output, it is expected to FAIL>
runchecktest "EXEC no x" fail $file runchecktest "EXEC no x" fail $file
<Thats it. Exit status $rc is automatically returned by epilogue.inc> <That's it. Exit status $rc is automatically returned by epilogue.inc>
Supporting files Supporting files
================ ================

View file

@ -8,7 +8,7 @@
#=NAME at_secure #=NAME at_secure
#=DESCRIPTION #=DESCRIPTION
# Verifies the AT_SECURE flag in the auxillary vector after an exec transition # Verifies the AT_SECURE flag in the auxiliary vector after an exec transition
#=END #=END
pwd=`dirname $0` pwd=`dirname $0`

View file

@ -13,7 +13,7 @@
# capability processing for confined processes) and no others allows successful # capability processing for confined processes) and no others allows successful
# access. For every syscall in the test, we iterate over each capability # access. For every syscall in the test, we iterate over each capability
# individually (plus no capabilities) in order to verify that only the expected # individually (plus no capabilities) in order to verify that only the expected
# capability grants access to the priviledged operation. The same is repeated # capability grants access to the privileged operation. The same is repeated
# for capabilities within hats. # for capabilities within hats.
#=END #=END

View file

@ -61,7 +61,7 @@ echo -n "${testexec}//${subtest3}" >/sys/kernel/security/apparmor/.remove
# Should put us into a null-profile # Should put us into a null-profile
# NOTE: As of AppArmor 2.1 (opensuse 10.3) this test now passes as # NOTE: As of AppArmor 2.1 (opensuse 10.3) this test now passes as
# the change_hat failes but it no longer entires the null profile # the change_hat fails but it no longer enters the null profile
genprofile $file:$okperm hat:$subtest $subfile:$okperm hat:$subtest2 $subfile:$okperm genprofile $file:$okperm hat:$subtest $subfile:$okperm hat:$subtest2 $subfile:$okperm
runchecktest "CHANGEHAT (access parent file 3)" pass $subtest3 $file runchecktest "CHANGEHAT (access parent file 3)" pass $subtest3 $file

View file

@ -9,7 +9,7 @@
#=NAME clone #=NAME clone
#=DESCRIPTION #=DESCRIPTION
# Verifies that clone is allowed under AppArmor, but that CLONE_NEWNS is # Verifies that clone is allowed under AppArmor, but that CLONE_NEWNS is
# restriced. # restricted.
#=END #=END
pwd=`dirname $0` pwd=`dirname $0`

View file

@ -21,7 +21,7 @@
/* A test to validate that we are properly handling the kernel appending /* A test to validate that we are properly handling the kernel appending
* (deleted) in d_path lookup. * (deleted) in d_path lookup.
* To acheive this the file is opened (the read/write of the file is just to * To achieve this the file is opened (the read/write of the file is just to
* make sure everything is working as expected), deleted without closing the * make sure everything is working as expected), deleted without closing the
* file reference, and doing a changehat. * file reference, and doing a changehat.
* The file is then used inside of the changehat. This forces the file * The file is then used inside of the changehat. This forces the file

View file

@ -42,7 +42,7 @@ extern char **environ;
(void)execve(argv[1], &argv[1], environ); (void)execve(argv[1], &argv[1], environ);
/* exec failed, kill outselves to flag parent */ /* exec failed, kill ourselves to flag parent */
(void)kill(getpid(), SIGKILL); (void)kill(getpid(), SIGKILL);
} }

View file

@ -119,7 +119,7 @@ genprofile $test2:rix signal:receive:peer=unconfined -- image=$test2 $file:$file
local_runchecktest "enforce ix case3" fail $test1 $test2 $file local_runchecktest "enforce ix case3" fail $test1 $test2 $file
# case 4: parent profile grants access # case 4: parent profile grants access
# missing child profile (irrelvant) # missing child profile (irrelevant)
# expected behaviour: child should be able to access resource # expected behaviour: child should be able to access resource
genprofile $test2:rix $file:$fileperm signal:receive:peer=unconfined genprofile $test2:rix $file:$fileperm signal:receive:peer=unconfined
@ -139,7 +139,7 @@ genprofile $test2:ux signal:receive:peer=unconfined
local_runchecktest "enforce ux case1" pass "unconfined" $test2 $file local_runchecktest "enforce ux case1" pass "unconfined" $test2 $file
# confined parent, exec child with conflicting exec qualifiers # confined parent, exec child with conflicting exec qualifiers
# that overlap in such away that px is prefered (ix is glob, px is exact # that overlap in such a way that px is preferred (ix is glob, px is exact
# match). Other overlap tests should be in the parser. # match). Other overlap tests should be in the parser.
# case 1: # case 1:
# expected behaviour: exec of child passes # expected behaviour: exec of child passes

View file

@ -50,7 +50,7 @@
#define MAX_PERM_LEN 10 #define MAX_PERM_LEN 10
/* Set up permission subset test as a seperate binary to reduce the time /* Set up permission subset test as a separate binary to reduce the time
* as the shell-based versions take forever * as the shell-based versions take forever
*/ */

View file

@ -12,7 +12,7 @@
# processes. # processes.
#=END #=END
# I made this a seperate test script because of the need to make a # I made this a separate test script because of the need to make a
# loopfile before the tests run. # loopfile before the tests run.
pwd=`dirname $0` pwd=`dirname $0`

View file

@ -10,7 +10,7 @@
#=DESCRIPTION #=DESCRIPTION
# This test verifies that subdomain file access checks function correctly # This test verifies that subdomain file access checks function correctly
# for named pipes (nodes in the filesystem created with mknod). The test # for named pipes (nodes in the filesystem created with mknod). The test
# creates a parent/child process relationship which attempt to rendevous via # creates a parent/child process relationship which attempts to rendezvous via
# the named pipe. The tests are attempted for unconfined and confined # the named pipe. The tests are attempted for unconfined and confined
# processes and also for subhats. # processes and also for subhats.
#=END #=END

View file

@ -11,17 +11,17 @@
# #
# This file should be included by each test case # This file should be included by each test case
# It does a lot of hidden 'magic', Downside is that # It does a lot of hidden 'magic', Downside is that
# this magic makes debugging fauling tests more difficult. # this magic makes debugging failing tests more difficult.
# Running the test with the '-r' option can help. # Running the test with the '-r' option can help.
# #
# Userchangeable variables (tmpdir etc) should be specified in # User changeable variables (tmpdir etc) should be specified in
# uservars.inc # uservars.inc
# #
# Cleanup is automatically performed by epilogue.inc # Cleanup is automatically performed by epilogue.inc
# #
# For this file, functions are first, entry point code is at end, see "MAIN" # For this file, functions are first, entry point code is at end, see "MAIN"
#use $() to retreive the failure message or "true" if success #use $() to retrieve the failure message or "true" if success
# kernel_features_istrue() - test whether boolean files are true # kernel_features_istrue() - test whether boolean files are true
# $@: path(s) to test if true # $@: path(s) to test if true

View file

@ -87,7 +87,7 @@
#define AA_MAY_LINK 0x40000 #define AA_MAY_LINK 0x40000
#endif #endif
#ifndef AA_LINK_SUBSET /* overlayed perm in pair */ #ifndef AA_LINK_SUBSET /* overlaid perm in pair */
#define AA_LINK_SUBSET AA_MAY_LOCK #define AA_LINK_SUBSET AA_MAY_LOCK
#endif #endif

View file

@ -111,7 +111,7 @@ static int reexec(int pair[2], int argc, char **argv)
return 0; return 0;
/** /**
* Save off the first <CHANGE_ONEXEC> arg and then shift all preceeding * Save off the first <CHANGE_ONEXEC> arg and then shift all preceding
* args by one to effectively pop off the first <CHANGE_ONEXEC> * args by one to effectively pop off the first <CHANGE_ONEXEC>
*/ */
new_profile = argv[3]; new_profile = argv[3];

View file

@ -13,7 +13,7 @@
# unconfined processes can call these syscalls but confined processes cannot. # unconfined processes can call these syscalls but confined processes cannot.
#=END #=END
# I made this a seperate test script because of the need to make a # I made this a separate test script because of the need to make a
# swapfile before the tests run. # swapfile before the tests run.
pwd=`dirname $0` pwd=`dirname $0`

View file

@ -148,7 +148,7 @@ test_sysctl_proc()
# check if the kernel supports CONFIG_SYSCTL_SYSCALL # check if the kernel supports CONFIG_SYSCTL_SYSCALL
# generally we want to encourage kernels to disable it, but if it's # generally we want to encourage kernels to disable it, but if it's
# enabled we want to test against it # enabled we want to test against it
# In addition test that sysctl exists in the kernel headers, if it does't # In addition test that sysctl exists in the kernel headers, if it doesn't
# then we can't even build the syscall_sysctl test # then we can't even build the syscall_sysctl test
if echo "#include <sys/sysctl.h>" | cpp -dM >/dev/null 2>/dev/null ; then if echo "#include <sys/sysctl.h>" | cpp -dM >/dev/null 2>/dev/null ; then
settest syscall_sysctl settest syscall_sysctl

View file

@ -33,7 +33,7 @@ do_test()
local bad_p_addr="${13}" # optional local bad_p_addr="${13}" # optional
local desc="AF_UNIX $addr_type socket ($type);" local desc="AF_UNIX $addr_type socket ($type);"
local l_access # combind local perms: local bound and local unbound local l_access # combined local perms: local bound and local unbound
local c_access # combined perms: local bound, local unbound, and peer local c_access # combined perms: local bound, local unbound, and peer
local access # used as an iterator local access # used as an iterator
local u_rule # rule for pre-bind accesses local u_rule # rule for pre-bind accesses

View file

@ -14,8 +14,8 @@
# security: get r, set w + CAP_SYS_ADMIN # security: get r, set w + CAP_SYS_ADMIN
# system: (acl's etc.) fs and kernel dependent (CAP_SYS_ADMIN) # system: (acl's etc.) fs and kernel dependent (CAP_SYS_ADMIN)
# trusted: CAP_SYS_ADMIN # trusted: CAP_SYS_ADMIN
# user: for subdomain the relevent file must be in the profile, with r perm # user: for subdomain the relevant file must be in the profile, with r perm
# to get xattr, w perm to set or remove xattr. The appriate cap must be # to get xattr, w perm to set or remove xattr. The appropriate cap must be
# present in the profile as well # present in the profile as well
#=END #=END
@ -58,7 +58,7 @@ mkdir $dir
add_attrs() add_attrs()
{ {
#set the xattr for thos that passed above again so we can test removing it #set the xattr for those that passed above again so we can test removing it
setfattr -h -n security.sdtest -v hello "$1" setfattr -h -n security.sdtest -v hello "$1"
setfattr -h -n trusted.sdtest -v hello "$1" setfattr -h -n trusted.sdtest -v hello "$1"
if [ "$1" != $link ] ; then if [ "$1" != $link ] ; then

View file

@ -67,7 +67,7 @@ those processes are set to run under their proper profiles.
=head2 Responding to AppArmor Events =head2 Responding to AppArmor Events
B<aa-logprof> will generate a list of suggested profile changes that B<aa-logprof> will generate a list of suggested profile changes that
the user can choose from, or they can create their own, to modifiy the the user can choose from, or they can create their own, to modify the
permission set of the profile so that the generated access violation permission set of the profile so that the generated access violation
will not re-occur. will not re-occur.

View file

@ -253,7 +253,7 @@ def reopen_logfile_if_needed(logfile, logdata, log_inode, log_size):
while retry: while retry:
try: try:
# Reopen file if inode has chaneged, e.g. rename by logrotate # Reopen file if inode has changed, e.g. rename by logrotate
if os.stat(logfile).st_ino != log_inode: if os.stat(logfile).st_ino != log_inode:
debug_logger.debug('Logfile was renamed, reload to read the new file.') debug_logger.debug('Logfile was renamed, reload to read the new file.')
logdata = open(logfile, 'r') logdata = open(logfile, 'r')
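The hunk above detects a logrotate rename by comparing the log file's inode against the one recorded when the file was opened. A minimal sketch of that idea (hypothetical helper name, not the utility's actual API):

```python
import os
import tempfile

def needs_reopen(path, open_inode):
    # The file at `path` was replaced (e.g. renamed and recreated by
    # logrotate) if its current inode differs from the one we opened.
    return os.stat(path).st_ino != open_inode

d = tempfile.mkdtemp()
p = os.path.join(d, 'log')
open(p, 'w').close()
ino = os.stat(p).st_ino
assert not needs_reopen(p, ino)     # same file, same inode
os.rename(p, p + '.1')              # simulate logrotate
open(p, 'w').close()                # new file at the old path
assert needs_reopen(p, ino)
```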
@ -572,7 +572,7 @@ def main():
n.show() n.show()
# When notification is sent, raise privileged back to root if the # When notification is sent, raise privileged back to root if the
# original effective user id was zero (to be ableo to read AppArmor logs) # original effective user id was zero (to be able to read AppArmor logs)
raise_privileges() raise_privileges()
elif args.since_last: elif args.since_last:

View file

@ -70,7 +70,7 @@ from apparmor.rule import quote_if_needed
from apparmor.translations import init_translation from apparmor.translations import init_translation
_ = init_translation() _ = init_translation()
# Setup logging incase of debugging is enabled # Setup logging in case debugging is enabled
debug_logger = DebugLogger('aa') debug_logger = DebugLogger('aa')
# The database for severity # The database for severity
@ -568,7 +568,7 @@ def autodep(bin_name, pname=''):
# bin_full = bin_name # bin_full = bin_name
#if not bin_full.startswith('/'): #if not bin_full.startswith('/'):
#return None #return None
# Return if exectuable path not found # Return if executable path not found
if not bin_full: if not bin_full:
return None return None
else: else:
@ -881,7 +881,7 @@ def ask_exec(hashlog):
q.headers += [_('Profile'), combine_name(profile, hat)] q.headers += [_('Profile'), combine_name(profile, hat)]
# to_name should not exist here since, transitioning is already handeled # to_name should not exist here since transitioning is already handled
q.headers += [_('Execute'), exec_target] q.headers += [_('Execute'), exec_target]
q.headers += [_('Severity'), severity] q.headers += [_('Severity'), severity]
@ -1087,7 +1087,7 @@ def ask_the_questions(log_dict):
if not aa[profile].get(hat, {}).get('file'): if not aa[profile].get(hat, {}).get('file'):
if aamode != 'merge': if aamode != 'merge':
# Ignore log events for a non-existing profile or child profile. Such events can occour # Ignore log events for a non-existing profile or child profile. Such events can occur
# after deleting a profile or hat manually, or when processing a foreign log. # after deleting a profile or hat manually, or when processing a foreign log.
# (Checking for 'file' is a simplified way to check if it's a ProfileStorage.) # (Checking for 'file' is a simplified way to check if it's a ProfileStorage.)
debug_logger.debug("Ignoring events for non-existing profile %s" % combine_name(profile, hat)) debug_logger.debug("Ignoring events for non-existing profile %s" % combine_name(profile, hat))
@ -1583,14 +1583,14 @@ def collapse_log(hashlog, ignore_null_profiles=True):
if '//null-' in hashlog[aamode][full_profile]['final_name'] and ignore_null_profiles: if '//null-' in hashlog[aamode][full_profile]['final_name'] and ignore_null_profiles:
# ignore null-* profiles (probably nested children) # ignore null-* profiles (probably nested children)
# otherwise we'd accidently create a null-* hat in the profile which is worse # otherwise we'd accidentally create a null-* hat in the profile which is worse
# XXX drop this once we support nested childs # XXX drop this once we support nested children
continue continue
profile, hat = split_name(hashlog[aamode][full_profile]['final_name']) # XXX limited to two levels to avoid an Exception on nested child profiles or nested null-* profile, hat = split_name(hashlog[aamode][full_profile]['final_name']) # XXX limited to two levels to avoid an Exception on nested child profiles or nested null-*
# TODO: support nested child profiles # TODO: support nested child profiles
# used to avoid to accidently initialize aa[profile][hat] or calling is_known_rule() on events for a non-existing profile # used to avoid accidentally initializing aa[profile][hat] or calling is_known_rule() on events for a non-existing profile
hat_exists = False hat_exists = False
if aa.get(profile) and aa[profile].get(hat): if aa.get(profile) and aa[profile].get(hat):
hat_exists = True hat_exists = True
@ -2112,7 +2112,7 @@ def parse_profile_data(data, file, do_include):
if lastline: if lastline:
# lastline gets merged into line (and reset to None) when reading the next line. # lastline gets merged into line (and reset to None) when reading the next line.
# If it isn't empty, this means there's something unparseable at the end of the profile # If it isn't empty, this means there's something unparsable at the end of the profile
raise AppArmorException(_('Syntax Error: Unknown line found in file %(file)s line %(lineno)s:\n %(line)s') % { 'file': file, 'lineno': lineno + 1, 'line': lastline }) raise AppArmorException(_('Syntax Error: Unknown line found in file %(file)s line %(lineno)s:\n %(line)s') % { 'file': file, 'lineno': lineno + 1, 'line': lastline })
# Below is not required I'd say # Below is not required I'd say

View file

@ -55,13 +55,13 @@ class CleanProf(object):
for inc in includes: for inc in includes:
if not self.profile.include.get(inc, {}).get(inc, False): if not self.profile.include.get(inc, {}).get(inc, False):
apparmor.load_include(inc) apparmor.load_include(inc)
if self.other.aa[program].get(hat): # carefully avoid to accidently initialize self.other.aa[program][hat] if self.other.aa[program].get(hat): # carefully avoid accidentally initializing self.other.aa[program][hat]
deleted += apparmor.delete_all_duplicates(self.other.aa[program][hat], inc, apparmor.ruletypes) deleted += apparmor.delete_all_duplicates(self.other.aa[program][hat], inc, apparmor.ruletypes)
#Clean duplicate rules in other profile #Clean duplicate rules in other profile
for ruletype in apparmor.ruletypes: for ruletype in apparmor.ruletypes:
if not self.same_file: if not self.same_file:
if self.other.aa[program].get(hat): # carefully avoid to accidently initialize self.other.aa[program][hat] if self.other.aa[program].get(hat): # carefully avoid accidentally initializing self.other.aa[program][hat]
deleted += self.other.aa[program][hat][ruletype].delete_duplicates(self.profile.aa[program][hat][ruletype]) deleted += self.other.aa[program][hat][ruletype].delete_duplicates(self.profile.aa[program][hat][ruletype])
else: else:
deleted += self.other.aa[program][hat][ruletype].delete_duplicates(None) deleted += self.other.aa[program][hat][ruletype].delete_duplicates(None)

View file

@ -251,7 +251,7 @@ def convert_regexp(regexp):
new_reg = new_reg.replace('**', multi_glob) new_reg = new_reg.replace('**', multi_glob)
#print(new_reg) #print(new_reg)
# Match atleast one character if * or ** after / # Match at least one character if * or ** after /
# ?< is the negative lookbehind operator # ?< is the negative lookbehind operator
new_reg = new_reg.replace('*', '(((?<=/)[^/\000]+)|((?<!/)[^/\000]*))') new_reg = new_reg.replace('*', '(((?<=/)[^/\000]+)|((?<!/)[^/\000]*))')
new_reg = new_reg.replace(multi_glob, '(((?<=/)[^\000]+)|((?<!/)[^\000]*))') new_reg = new_reg.replace(multi_glob, '(((?<=/)[^\000]+)|((?<!/)[^\000]*))')
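The comments above describe how AppArmor globs map to regexps: '*' stays within one path component, '**' crosses components, and the real converter uses a lookbehind so a glob right after '/' must match at least one character. A simplified sketch of the core substitution, without that lookbehind refinement (hypothetical helper, not the actual convert_regexp):

```python
import re

def convert_glob(glob):
    # Protect '**' with a placeholder so the '*' substitution
    # does not clobber it, then expand both forms.
    multi = '__MULTI_GLOB__'
    pat = re.escape(glob).replace(r'\*\*', multi).replace(r'\*', '[^/]*')
    pat = pat.replace(multi, '.*')   # '**' crosses '/' boundaries
    return '^' + pat + '$'

assert re.match(convert_glob('/etc/*.conf'), '/etc/foo.conf')
assert not re.match(convert_glob('/etc/*.conf'), '/etc/a/b.conf')  # '*' stops at '/'
assert re.match(convert_glob('/etc/**.conf'), '/etc/a/b.conf')
```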

View file

@ -214,7 +214,7 @@ def valid_profile_name(s):
return True return True
# profile name does not specify path # profile name does not specify path
# alpha-numeric and Debian version, plus '_' # alphanumeric and Debian version, plus '_'
if re.search(r'^[a-zA-Z0-9][a-zA-Z0-9_\+\-\.:~]+$', s): if re.search(r'^[a-zA-Z0-9][a-zA-Z0-9_\+\-\.:~]+$', s):
return True return True
return False return False
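For the non-path branch shown above, the quoted regex accepts a leading alphanumeric followed by alphanumerics, '_', and Debian-version characters. A standalone sketch of just that branch (the real valid_profile_name also accepts quoted names and paths, not reproduced here):

```python
import re

def valid_name(s):
    # Non-path profile names: leading alphanumeric, then
    # alphanumerics, '_' and Debian-version characters.
    return bool(re.search(r'^[a-zA-Z0-9][a-zA-Z0-9_\+\-\.:~]+$', s))

assert valid_name('firefox-3.6')
assert valid_name('libreoffice_soffice')
assert not valid_name('/usr/bin/foo')   # paths handled by a separate branch
```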

View file

@ -334,7 +334,7 @@ class ReadLog:
} }
def op_type(self, event): def op_type(self, event):
"""Returns the operation type if known, unkown otherwise""" """Returns the operation type if known, unknown otherwise"""
if ( event['operation'].startswith('file_') or event['operation'].startswith('inode_') or event['operation'] in self.OP_TYPE_FILE_OR_NET ): if ( event['operation'].startswith('file_') or event['operation'].startswith('inode_') or event['operation'] in self.OP_TYPE_FILE_OR_NET ):
# file or network event? # file or network event?

View file

@ -15,7 +15,7 @@
@{asdf} = foo "" @{asdf} = foo ""
/usr/bin/a/simple/cleanprof/test/profile { /usr/bin/a/simple/cleanprof/test/profile {
# Just for the heck of it, this comment wont see the day of light # Just for the heck of it, this comment won't see the light of day
#include <abstractions/base> #include <abstractions/base>
#include if exists <foo> #include if exists <foo>

View file

@ -52,7 +52,7 @@
/usr/lib/YaST2/servers_non_y2/ag_genprof = u /usr/lib/YaST2/servers_non_y2/ag_genprof = u
/usr/lib/YaST2/servers_non_y2/ag_logprof = u /usr/lib/YaST2/servers_non_y2/ag_logprof = u
# these ones shouln't have their own profiles # these ones shouldn't have their own profiles
/bin/awk = icn /bin/awk = icn
/bin/cat = icn /bin/cat = icn
/bin/chmod = icn /bin/chmod = icn

View file

@@ -1085,7 +1085,7 @@ class FileGetPermsForPath_2(AATest):
         (('/foo/bar', False, True ), {'allow': {'all': set(), 'owner': set() }, 'deny': {'all': FileRule.ALL, 'owner': set() }, 'paths': set() }),
         (('/etc/foo/dovecot-deny.conf', False, True ), {'allow': {'all': set(), 'owner': set() }, 'deny': {'all': FileRule.ALL, 'owner': set() }, 'paths': {'/etc/foo/dovecot-deny.conf' } }),
         (('/etc/foo/foo.conf', False, True ), {'allow': {'all': set(), 'owner': set() }, 'deny': {'all': FileRule.ALL, 'owner': set() }, 'paths': set() }),
-        # (('/etc/foo/owner.conf', False, True ), {'allow': {'all': set(), 'owner': {'w'} }, 'deny': {'all': FileRule.ALL, 'owner': set() }, 'paths': {'/etc/foo/owner.conf' } }), # XXX doen't work yet
+        # (('/etc/foo/owner.conf', False, True ), {'allow': {'all': set(), 'owner': {'w'} }, 'deny': {'all': FileRule.ALL, 'owner': set() }, 'paths': {'/etc/foo/owner.conf' } }), # XXX doesn't work yet
     ]

     def _run_test(self, params, expected):


@@ -296,7 +296,7 @@ def find_test_multi(log_dir):
     return tests

-# if a logfile is given as parameter, print the resulting profile and exit (with $? = 42 to make sure tests break if the caller accidently hands over a parameter)
+# if a logfile is given as parameter, print the resulting profile and exit (with $? = 42 to make sure tests break if the caller accidentally hands over a parameter)
 if __name__ == '__main__' and len(sys.argv) == 2:
     print(logfile_to_profile(sys.argv[1])[1])
     exit(42)


@@ -109,7 +109,7 @@ syn match sdError /^.*$/ contains=sdComment "highlight all non-valid lines as er
 " TODO: the sdGlob pattern is not anchored with ^ and $, so it matches all lines matching ^@{...}.*
 " This allows incorrect lines also and should be checked better.
-" This also (accidently ;-) includes variable definitions (@{FOO}=/bar)
+" This also (accidentally ;-) includes variable definitions (@{FOO}=/bar)
 " TODO: make a separate pattern for variable definitions, then mark sdGlob as contained
 syn match sdGlob /\v\?|\*|\{.*,.*\}|[[^\]]\+\]|\@\{[a-zA-Z][a-zA-Z0-9_]*\}/
@@ -121,7 +121,7 @@ syn cluster sdEntry contains=sdEntryWriteExec,sdEntryR,sdEntryW,sdEntryIX,sdEntr
 " TODO: support audit and deny keywords for all rules (not only for files)
-" TODO: higlight audit and deny keywords everywhere
+" TODO: highlight audit and deny keywords everywhere
 " Capability line