Mirror of https://github.com/xonsh/xonsh.git
Synced 2025-03-04 16:34:47 +01:00
Commit f41c4d5e32
15 changed files with 297 additions and 75 deletions
@@ -6,6 +6,9 @@ Current Developments
 ====================
 **Added:**
 
+* ``and``, ``or``, ``&&``, ``||`` have been added as subprocess logical operators,
+  by popular demand!
+* Subprocesses may be negated with ``not`` and grouped together with parentheses.
 * Added a new shell type ``'none'``, used to avoid importing ``readline`` or
   ``prompt_toolkit`` when running scripts or running a single command.
 * New: ``sudo`` functionality on Windows through an alias
@@ -16,18 +19,17 @@ Current Developments
 * Added an option to update ``os.environ`` every time the xonsh environment changes.
   This is disabled by default but can be enabled by setting ``$UPDATE_OS_ENVIRON`` to
   True.
 * Added Windows 'cmd.exe' as a foreign shell. This gives xonsh the ability to source
   Windows Batch files (.bat and .cmd). Calling ``source-cmd script.bat`` or the
   alias ``source-bat script.bat`` will call the bat file and changes to the
   environment variables will be reflected in xonsh.
 * Added an alias for the conda environment activate/deactivate batch scripts when
   running the Anaconda python distribution on Windows.
 * Added a menu entry to launch xonsh when installing xonsh from a conda package
 * Added a new ``which`` alias that supports both regular ``which`` and also searches
   through xonsh aliases
 * Add support for prompt_toolkit 1.0.0
 
 
 **Changed:**
 
 * Running scripts through xonsh (or running a single command with ``-c``) no
@@ -41,9 +43,8 @@ Current Developments
 * Regexpath matching with backticks, now returns an empty list in python mode.
 * Pygments added as a dependency for the conda package
 * PLY is no longer an external dependency but is bundled in xonsh/ply. Xonsh can
   therefore run without any external dependencies, although having prompt-toolkit
   is recommended.
 
 **Deprecated:** None
 
@@ -56,14 +57,15 @@ Current Developments
   the line.
 * Aliases will now evaluate environment variables and other expansions
   at execution time rather than passing through a literal string.
 * Fixed environment variables from os.environ not being loaded when running
   a script
 * Fixed bug that prevented `source-alias` from working.
 * Fixed deadlock on Windows when running a subprocess that generates enough output
   to fill the OS pipe buffer
 
 **Security:** None
 
 
 v0.2.7
 ====================
 **Added:**
 
@@ -110,7 +112,6 @@ v0.2.7
   argument to be deleted.
 
 
 **Removed:**
 
 * The ``xonsh.tools.TERM_COLORS`` mapping has been axed, along with all
@@ -31,6 +31,12 @@ will help you put a finger on how to do the equivalent task in xonsh.
 * - ``set -x``
   - ``trace on``
   - Turns on tracing of source code lines during execution.
+* - ``&&``
+  - ``and`` or ``&&``
+  - Logical-and operator for subprocesses.
+* - ``||``
+  - ``or`` as well as ``||``
+  - Logical-or operator for subprocesses.
 * - ``$?``
   - ``__xonsh_history__.rtns[-1]``
   - Returns the exit code, or status, of the previous command.
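For readers coming from Python rather than bash, the ``$?`` row above corresponds to the ``returncode`` attribute in the standard library. A minimal sketch in plain Python (not xonsh), using the interpreter itself as the child command for portability:

```python
import subprocess
import sys

# Run commands and inspect their exit status -- the value bash exposes
# as $? and xonsh exposes as __xonsh_history__.rtns[-1].
ok = subprocess.run([sys.executable, '-c', 'pass'])
assert ok.returncode == 0

bad = subprocess.run([sys.executable, '-c', 'raise SystemExit(2)'])
assert bad.returncode == 2
```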
@@ -79,7 +79,7 @@ name (``ls`` above) is in the present Python context. If it is, then it takes
 the line to be valid xonsh as written. If the left-most name cannot be found,
 then xonsh assumes that the left-most name is an external command. It thus
 attempts to parse the line after wrapping it in an uncaptured subprocess
-call ``$[]``. If wrapped version successfully parses, the ``$[]`` version
+call ``![]``. If the wrapped version successfully parses, the ``![]`` version
 stays. Otherwise, the original line is retained.
 
 All of the context sensitive parsing occurs as an AST transformation prior to
@@ -88,12 +88,12 @@ before failing.
 
 It is critical to note that the context sensitive parsing is a convenience
 meant for humans. If ambiguity remains or exactness is required, simply
-manually use the ``$[]`` or ``$()`` operators on your code.
+manually use the ``![]``, ``!()``, ``$[]`` or ``$()`` operators on your code.
 
 
 5. Context-sensitive parsing is gross
 --------------------------------------
 Yes, context-sensitive parsing is gross. But the point of xonsh is that it uses context-sensitive parsing and
 is ultimately a lot less gross than other shell languages, such as BASH.
 Furthermore, its use is heavily limited here.
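The fallback strategy described above (check the left-most name, wrap the line in ``![]`` when the name is unknown) can be sketched in a few lines of plain Python. ``ctx_transform`` and ``leftmost_name`` are invented helper names for illustration, not xonsh's API:

```python
import re

def leftmost_name(line):
    """Extract the left-most identifier of a source line."""
    m = re.match(r'\s*([A-Za-z_][A-Za-z0-9_]*)', line)
    return m.group(1) if m else ''

def ctx_transform(line, ctx):
    """Toy model of xonsh's context-sensitive fallback: if the left-most
    name is defined in the Python context `ctx`, keep the line; otherwise
    assume an external command and wrap it in an uncaptured subprocess."""
    if leftmost_name(line) in ctx:
        return line
    return '![' + line + ']'

# 'ls' is not a Python name here, so the line gets wrapped;
# 'print' is, so the line is left alone.
assert ctx_transform('ls -l', {'print'}) == '![ls -l]'
assert ctx_transform('print("hi")', {'print'}) == 'print("hi")'
```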
@@ -503,6 +503,59 @@ Python operator.
 If you are unsure of what pipes are, there are many great references out there.
 You should be able to find information on StackOverflow or Google.
 
+Logical Subprocess And
+=======================
+
+Subprocess-mode also allows you to use the ``and`` operator to chain together
+subprocess commands. The truth value of a command is evaluated as whether
+its return code is zero (i.e. ``proc.returncode == 0``). Like in Python,
+if the command evaluates to ``False``, subsequent commands will not be executed.
+For example, suppose we want to list files that may or may not exist:
+
+.. code-block:: xonshcon
+
+    >>> touch exists
+    >>> ls exists and ls doesnt
+    exists
+    /bin/ls: cannot access doesnt: No such file or directory
+
+However, if you list the file that doesn't exist first,
+you would have only seen the error:
+
+.. code-block:: xonshcon
+
+    >>> ls doesnt and ls exists
+    /bin/ls: cannot access doesnt: No such file or directory
+
+Also, don't worry. Xonsh directly translates the ``&&`` operator into ``and``
+for you. It is less Pythonic, of course, but it is your shell!
+
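The ``and`` short-circuit described above can be modeled with Python's standard ``subprocess`` module. A minimal sketch in plain Python, not xonsh's implementation:

```python
import subprocess
import sys

def run(cmd):
    """Run a Python snippet as a subprocess; truthy iff its exit code is 0."""
    return subprocess.run([sys.executable, '-c', cmd]).returncode == 0

# `a and b` semantics: b runs only when a succeeded.
ran = []
if run('pass'):                    # exit code 0 -> truthy
    ran.append('second')
assert ran == ['second']

ran = []
if run('raise SystemExit(1)'):     # exit code 1 -> falsy, short-circuits
    ran.append('second')
assert ran == []
```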
+Logical Subprocess Or
+=======================
+
+Much like with ``and``, you can use the ``or`` operator to chain together
+subprocess commands. The difference, to be certain, is that
+subsequent commands will be executed only if the return code is
+non-zero (i.e. a failure). Using the file example from above:
+
+.. code-block:: xonshcon
+
+    >>> ls exists or ls doesnt
+    exists
+
+This doesn't even try to list a non-existent file!
+However, if you list the file that doesn't exist first,
+you will see the error and then the file that does exist:
+
+.. code-block:: xonshcon
+
+    >>> ls doesnt or ls exists
+    /bin/ls: cannot access doesnt: No such file or directory
+    exists
+
+Never fear! Xonsh also directly translates the ``||`` operator into ``or``,
+too. Your muscle memory is safe now, here with us.
+
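The ``or`` fallback behavior works the same way with the truth value inverted; a plain-Python sketch of the semantics (again not xonsh's implementation):

```python
import subprocess
import sys

def succeeded(cmd):
    """Truth value of a command: exit code 0 means success."""
    return subprocess.run([sys.executable, '-c', cmd]).returncode == 0

# `a or b` semantics: b runs only when a failed.
ran = []
if not succeeded('pass'):               # first command succeeds...
    ran.append('fallback')
assert ran == []                        # ...so the fallback never runs

ran = []
if not succeeded('raise SystemExit(3)'):  # first command fails...
    ran.append('fallback')
assert ran == ['fallback']
```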
 Input/Output Redirection
 ====================================
 
@@ -32,7 +32,7 @@ def check_exec(input):
     EXECER.exec(input)
 
 def check_eval(input):
-    with mock_xonsh_env(None):
+    with mock_xonsh_env({'AUTO_CD': False}):
         EXECER.debug_level = DEBUG_LEVEL
         EXECER.eval(input)
 
@@ -57,12 +57,14 @@ else:
 def test_bin_ls():
     yield check_eval, '/bin/ls -l'
 
 def test_ls_dashl():
-    yield check_eval, 'ls -l'
+    yield check_parse, 'ls -l'
 
 def test_which_ls():
-    yield check_eval, 'which ls'
+    yield check_parse, 'which ls'
 
 def test_echo_hello():
     yield check_parse, 'echo hello'
 
 def test_simple_func():
     code = ('def prompt():\n'
@@ -137,6 +137,18 @@ def test_multiline():
+def test_and():
+    yield check_token, 'and', ['AND', 'and', 0]
+
 def test_ampersand():
     yield check_token, '&', ['AMPERSAND', '&', 0]
 
+def test_doubleamp():
+    yield check_token, '&&', ['AND', 'and', 0]
+
 def test_pipe():
     yield check_token, '|', ['PIPE', '|', 0]
 
+def test_doublepipe():
+    yield check_token, '||', ['OR', 'or', 0]
+
 def test_single_quote_literal():
     yield check_token, "'yo'", ['STRING', "'yo'", 0]
 
@@ -1672,6 +1672,30 @@ def test_two_cmds_one_pipe():
 def test_three_cmds_two_pipes():
     yield check_xonsh_ast, {}, '$(ls | grep wakka | grep jawaka)', False
 
+def test_two_cmds_one_and_brackets():
+    yield check_xonsh_ast, {}, '![ls me] and ![grep wakka]', False
+
+def test_three_cmds_two_ands():
+    yield check_xonsh_ast, {}, '![ls] and ![grep wakka] and ![grep jawaka]', False
+
+def test_two_cmds_one_doubleamps():
+    yield check_xonsh_ast, {}, '![ls] && ![grep wakka]', False
+
+def test_three_cmds_two_doubleamps():
+    yield check_xonsh_ast, {}, '![ls] && ![grep wakka] && ![grep jawaka]', False
+
+def test_two_cmds_one_or():
+    yield check_xonsh_ast, {}, '![ls] or ![grep wakka]', False
+
+def test_three_cmds_two_ors():
+    yield check_xonsh_ast, {}, '![ls] or ![grep wakka] or ![grep jawaka]', False
+
+def test_two_cmds_one_doublepipe():
+    yield check_xonsh_ast, {}, '![ls] || ![grep wakka]', False
+
+def test_three_cmds_two_doublepipe():
+    yield check_xonsh_ast, {}, '![ls] || ![grep wakka] || ![grep jawaka]', False
+
 def test_one_cmd_write():
     yield check_xonsh_ast, {}, '$(ls > x.py)', False
 
@@ -19,89 +19,89 @@ LEXER.build()
 INDENT = '    '
 
 def test_subproc_toks_x():
-    exp = '$[x]'
+    exp = '![x]'
     obs = subproc_toks('x', lexer=LEXER, returnline=True)
     assert_equal(exp, obs)
 
 def test_subproc_toks_ls_l():
-    exp = '$[ls -l]'
+    exp = '![ls -l]'
     obs = subproc_toks('ls -l', lexer=LEXER, returnline=True)
     assert_equal(exp, obs)
 
 def test_subproc_toks_git():
     s = 'git commit -am "hello doc"'
-    exp = '$[{0}]'.format(s)
+    exp = '![{0}]'.format(s)
     obs = subproc_toks(s, lexer=LEXER, returnline=True)
     assert_equal(exp, obs)
 
 def test_subproc_toks_git_semi():
     s = 'git commit -am "hello doc"'
-    exp = '$[{0}];'.format(s)
+    exp = '![{0}];'.format(s)
     obs = subproc_toks(s + ';', lexer=LEXER, returnline=True)
     assert_equal(exp, obs)
 
 def test_subproc_toks_git_nl():
     s = 'git commit -am "hello doc"'
-    exp = '$[{0}]\n'.format(s)
+    exp = '![{0}]\n'.format(s)
     obs = subproc_toks(s + '\n', lexer=LEXER, returnline=True)
     assert_equal(exp, obs)
 
 def test_subproc_toks_indent_ls():
     s = 'ls -l'
-    exp = INDENT + '$[{0}]'.format(s)
+    exp = INDENT + '![{0}]'.format(s)
     obs = subproc_toks(INDENT + s, mincol=len(INDENT), lexer=LEXER,
                        returnline=True)
     assert_equal(exp, obs)
 
 def test_subproc_toks_indent_ls_nl():
     s = 'ls -l'
-    exp = INDENT + '$[{0}]\n'.format(s)
+    exp = INDENT + '![{0}]\n'.format(s)
     obs = subproc_toks(INDENT + s + '\n', mincol=len(INDENT), lexer=LEXER,
                        returnline=True)
     assert_equal(exp, obs)
 
 def test_subproc_toks_indent_ls_no_min():
     s = 'ls -l'
-    exp = INDENT + '$[{0}]'.format(s)
+    exp = INDENT + '![{0}]'.format(s)
     obs = subproc_toks(INDENT + s, lexer=LEXER, returnline=True)
     assert_equal(exp, obs)
 
 def test_subproc_toks_indent_ls_no_min_nl():
     s = 'ls -l'
-    exp = INDENT + '$[{0}]\n'.format(s)
+    exp = INDENT + '![{0}]\n'.format(s)
     obs = subproc_toks(INDENT + s + '\n', lexer=LEXER, returnline=True)
     assert_equal(exp, obs)
 
 def test_subproc_toks_indent_ls_no_min_semi():
     s = 'ls'
-    exp = INDENT + '$[{0}];'.format(s)
+    exp = INDENT + '![{0}];'.format(s)
     obs = subproc_toks(INDENT + s + ';', lexer=LEXER, returnline=True)
     assert_equal(exp, obs)
 
 def test_subproc_toks_indent_ls_no_min_semi_nl():
     s = 'ls'
-    exp = INDENT + '$[{0}];\n'.format(s)
+    exp = INDENT + '![{0}];\n'.format(s)
     obs = subproc_toks(INDENT + s + ';\n', lexer=LEXER, returnline=True)
     assert_equal(exp, obs)
 
 def test_subproc_toks_ls_comment():
     s = 'ls -l'
     com = '  # lets list'
-    exp = '$[{0}]{1}'.format(s, com)
+    exp = '![{0}]{1}'.format(s, com)
     obs = subproc_toks(s + com, lexer=LEXER, returnline=True)
     assert_equal(exp, obs)
 
 def test_subproc_toks_ls_42_comment():
     s = 'ls 42'
     com = '  # lets list'
-    exp = '$[{0}]{1}'.format(s, com)
+    exp = '![{0}]{1}'.format(s, com)
     obs = subproc_toks(s + com, lexer=LEXER, returnline=True)
     assert_equal(exp, obs)
 
 def test_subproc_toks_ls_str_comment():
     s = 'ls "wakka"'
     com = '  # lets list'
-    exp = '$[{0}]{1}'.format(s, com)
+    exp = '![{0}]{1}'.format(s, com)
     obs = subproc_toks(s + com, lexer=LEXER, returnline=True)
     assert_equal(exp, obs)
 
@@ -109,7 +109,7 @@ def test_subproc_toks_indent_ls_comment():
     ind = '    '
     s = 'ls -l'
     com = '  # lets list'
-    exp = '{0}$[{1}]{2}'.format(ind, s, com)
+    exp = '{0}![{1}]{2}'.format(ind, s, com)
     obs = subproc_toks(ind + s + com, lexer=LEXER, returnline=True)
     assert_equal(exp, obs)
 
@@ -117,7 +117,7 @@ def test_subproc_toks_indent_ls_str():
     ind = '    '
     s = 'ls "wakka"'
     com = '  # lets list'
-    exp = '{0}$[{1}]{2}'.format(ind, s, com)
+    exp = '{0}![{1}]{2}'.format(ind, s, com)
     obs = subproc_toks(ind + s + com, lexer=LEXER, returnline=True)
     assert_equal(exp, obs)
 
@@ -125,7 +125,7 @@ def test_subproc_toks_ls_l_semi_ls_first():
     lsdl = 'ls -l'
     ls = 'ls'
     s = '{0}; {1}'.format(lsdl, ls)
-    exp = '$[{0}]; {1}'.format(lsdl, ls)
+    exp = '![{0}]; {1}'.format(lsdl, ls)
     obs = subproc_toks(s, lexer=LEXER, maxcol=6, returnline=True)
     assert_equal(exp, obs)
 
@@ -133,7 +133,7 @@ def test_subproc_toks_ls_l_semi_ls_second():
     lsdl = 'ls -l'
     ls = 'ls'
     s = '{0}; {1}'.format(lsdl, ls)
-    exp = '{0}; $[{1}]'.format(lsdl, ls)
+    exp = '{0}; ![{1}]'.format(lsdl, ls)
     obs = subproc_toks(s, lexer=LEXER, mincol=7, returnline=True)
     assert_equal(exp, obs)
 
@@ -141,7 +141,7 @@ def test_subproc_hello_mom_first():
     fst = "echo 'hello'"
     sec = "echo 'mom'"
     s = '{0}; {1}'.format(fst, sec)
-    exp = '$[{0}]; {1}'.format(fst, sec)
+    exp = '![{0}]; {1}'.format(fst, sec)
    obs = subproc_toks(s, lexer=LEXER, maxcol=len(fst)+1, returnline=True)
     assert_equal(exp, obs)
 
@@ -149,7 +149,7 @@ def test_subproc_hello_mom_second():
     fst = "echo 'hello'"
     sec = "echo 'mom'"
     s = '{0}; {1}'.format(fst, sec)
-    exp = '{0}; $[{1}]'.format(fst, sec)
+    exp = '{0}; ![{1}]'.format(fst, sec)
     obs = subproc_toks(s, lexer=LEXER, mincol=len(fst), returnline=True)
     assert_equal(exp, obs)
 
@@ -158,6 +158,36 @@ def test_subproc_toks_comment():
     obs = subproc_toks('# I am a comment', lexer=LEXER, returnline=True)
     assert_equal(exp, obs)
 
+def test_subproc_toks_not():
+    exp = 'not ![echo mom]'
+    obs = subproc_toks('not echo mom', lexer=LEXER, returnline=True)
+    assert_equal(exp, obs)
+
+def test_subproc_toks_paren():
+    exp = '(![echo mom])'
+    obs = subproc_toks('(echo mom)', lexer=LEXER, returnline=True)
+    assert_equal(exp, obs)
+
+def test_subproc_toks_paren_ws():
+    exp = '(![echo mom]) '
+    obs = subproc_toks('(echo mom) ', lexer=LEXER, returnline=True)
+    assert_equal(exp, obs)
+
+def test_subproc_toks_not_paren():
+    exp = 'not (![echo mom])'
+    obs = subproc_toks('not (echo mom)', lexer=LEXER, returnline=True)
+    assert_equal(exp, obs)
+
+def test_subproc_toks_and_paren():
+    exp = 'True and (![echo mom])'
+    obs = subproc_toks('True and (echo mom)', lexer=LEXER, returnline=True)
+    assert_equal(exp, obs)
+
+def test_subproc_toks_paren_and_paren():
+    exp = '(![echo a]) and (echo b)'
+    obs = subproc_toks('(echo a) and (echo b)', maxcol=9, lexer=LEXER, returnline=True)
+    assert_equal(exp, obs)
+
 def test_subproc_toks_semicolon_only():
     exp = None
     obs = subproc_toks(';', lexer=LEXER, returnline=True)
xonsh/ast.py
@@ -42,8 +42,10 @@ def leftmostname(node):
         rtn = leftmostname(node.value)
     elif isinstance(node, Call):
         rtn = leftmostname(node.func)
     elif isinstance(node, (BinOp, Compare)):
         rtn = leftmostname(node.left)
+    elif isinstance(node, UnaryOp):
+        rtn = leftmostname(node.operand)
+    elif isinstance(node, BoolOp):
+        rtn = leftmostname(node.values[0])
     elif isinstance(node, Assign):
         rtn = leftmostname(node.targets[0])
     elif isinstance(node, (Str, Bytes)):
@@ -72,6 +74,13 @@ def max_col(node):
     return col
 
 
+def isdescendable(node):
+    """Determines whether or not a node is worth visiting. Currently only
+    UnaryOp and BoolOp nodes are visited.
+    """
+    return isinstance(node, (UnaryOp, BoolOp))
+
+
 class CtxAwareTransformer(NodeTransformer):
     """Transforms a xonsh AST to use subprocess calls when
     the first name in an expression statement is not known in the context.
@@ -131,13 +140,13 @@ class CtxAwareTransformer(NodeTransformer):
                 ctx.remove(value)
                 break
 
-    def try_subproc_toks(self, node):
+    def try_subproc_toks(self, node, strip_expr=False):
         """Tries to parse the line of the node as a subprocess."""
         line = self.lines[node.lineno - 1]
         if self.mode == 'eval':
             mincol = len(line) - len(line.lstrip())
             maxcol = None
         else:
             mincol = min_col(node)
             maxcol = max_col(node) + 1
         spline = subproc_toks(line,
@@ -145,6 +154,8 @@ class CtxAwareTransformer(NodeTransformer):
                               maxcol=maxcol,
                               returnline=False,
                               lexer=self.parser.lexer)
+        if spline is None:
+            return node
         try:
             newnode = self.parser.parse(spline, mode=self.mode)
             newnode = newnode.body
@@ -155,6 +166,8 @@ class CtxAwareTransformer(NodeTransformer):
             newnode.col_offset = node.col_offset
         except SyntaxError:
             newnode = node
+        if strip_expr and isinstance(newnode, Expr):
+            newnode = newnode.value
         return newnode
 
     def is_in_scope(self, node):
@@ -169,8 +182,14 @@ class CtxAwareTransformer(NodeTransformer):
                 break
         return inscope
 
+    #
+    # Replacement visitors
+    #
+
     def visit_Expression(self, node):
         """Handle visiting an expression body."""
+        if isdescendable(node.body):
+            node.body = self.visit(node.body)
         body = node.body
         inscope = self.is_in_scope(body)
         if not inscope:
@@ -179,6 +198,8 @@ class CtxAwareTransformer(NodeTransformer):
 
     def visit_Expr(self, node):
         """Handle visiting an expression."""
+        if isdescendable(node.value):
+            node.value = self.visit(node.value)  # this allows diving into BoolOps
         if self.is_in_scope(node):
             return node
         else:
@@ -192,6 +213,31 @@ class CtxAwareTransformer(NodeTransformer):
             newnode.max_col = node.max_col
         return newnode
 
+    def visit_UnaryOp(self, node):
+        """Handle visiting a unary operand, like not."""
+        if isdescendable(node.operand):
+            node.operand = self.visit(node.operand)
+        operand = node.operand
+        inscope = self.is_in_scope(operand)
+        if not inscope:
+            node.operand = self.try_subproc_toks(operand, strip_expr=True)
+        return node
+
+    def visit_BoolOp(self, node):
+        """Handle visiting boolean operands, like and/or."""
+        for i in range(len(node.values)):
+            val = node.values[i]
+            if isdescendable(val):
+                val = node.values[i] = self.visit(val)
+            inscope = self.is_in_scope(val)
+            if not inscope:
+                node.values[i] = self.try_subproc_toks(val, strip_expr=True)
+        return node
+
     #
     # Context aggregator visitors
     #
 
     def visit_Assign(self, node):
         """Handle visiting an assignment statement."""
         ups = set()
@@ -308,6 +354,6 @@ def pdump(s, **kwargs):
     if '(' in post or '[' in post or '{' in post:
         post = pdump(post)
     return pre + mid + post
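The new ``visit_UnaryOp``/``visit_BoolOp`` visitors follow the standard ``ast.NodeTransformer`` pattern: descend into the operands of ``not``/``and``/``or`` and rewrite the ones whose names are unknown. A self-contained toy illustration (not xonsh's transformer; the class and the rewrite rule here are invented) that replaces unknown names inside a ``BoolOp`` with string constants:

```python
import ast

class BoolOpVisitor(ast.NodeTransformer):
    """Toy transformer: replace any Name not in `known` with a string
    constant, descending into BoolOp values the way xonsh descends
    into `and`/`or` chains."""
    def __init__(self, known):
        self.known = known

    def visit_BoolOp(self, node):
        for i, val in enumerate(node.values):
            node.values[i] = self.visit(val)
        return node

    def visit_Name(self, node):
        if node.id in self.known:
            return node
        return ast.copy_location(ast.Constant(value=node.id), node)

tree = ast.parse('ls and cp', mode='eval')
tree = BoolOpVisitor({'ls'}).visit(tree)
ast.fix_missing_locations(tree)
# 'ls' was known and stays a Name; 'cp' became the string 'cp',
# so evaluating with ls=True yields 'cp'.
assert eval(compile(tree, '<test>', 'eval'), {'ls': True}) == 'cp'
```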
@@ -1,5 +1,6 @@
 # -*- coding: utf-8 -*-
 """Implements the xonsh executer."""
+import re
 import types
 import inspect
 import builtins
@@ -7,10 +8,12 @@ from collections import Mapping
 
 from xonsh import ast
 from xonsh.parser import Parser
-from xonsh.tools import subproc_toks
+from xonsh.tools import subproc_toks, END_TOK_TYPES
 from xonsh.built_ins import load_builtins, unload_builtins
 
 
+RE_END_TOKS = re.compile('(;|and|\&\&|or|\|\||\))')
+
 class Execer(object):
     """Executes xonsh code in a context."""
 
@@ -64,7 +67,7 @@ class Execer(object):
         # tokens for all of the Python rules. The lazy way implemented here
         # is to parse a line a second time with a $() wrapper if it fails
         # the first time. This is a context-free phase.
-        tree = self._parse_ctx_free(input, mode=mode)
+        tree, input = self._parse_ctx_free(input, mode=mode)
         if tree is None:
             return None
 
@@ -126,12 +129,13 @@ class Execer(object):
     def _find_next_break(self, line, mincol):
         if mincol >= 1:
             line = line[mincol:]
-        if ';' not in line:
+        if RE_END_TOKS.search(line) is None:
             return None
         maxcol = None
         self.parser.lexer.input(line)
         for tok in self.parser.lexer:
-            if tok.type == 'SEMI':
+            if tok.type in END_TOK_TYPES or \
+                    (tok.type == 'ERRORTOKEN' and ')' in tok.value):
                 maxcol = tok.lexpos + mincol + 1
                 break
         return maxcol
@@ -187,7 +191,7 @@ class Execer(object):
                                   returnline=True,
                                   maxcol=maxcol,
                                   lexer=self.parser.lexer)
-            if sbpline.lstrip().startswith('$[$['):
+            if sbpline.lstrip().startswith('![!['):
                 # if we have already wrapped this in subproc tokens
                 # and it still doesn't work, adding more won't help
                 # anything
@@ -203,4 +207,4 @@ class Execer(object):
             lines[idx] = sbpline
             last_error_col += 3
             input = '\n'.join(lines)
-        return tree
+        return tree, input
@@ -35,6 +35,7 @@ _op_map = {
     '~': 'TILDE', '^': 'XOR', '<<': 'LSHIFT', '>>': 'RSHIFT',
     '<': 'LT', '<=': 'LE', '>': 'GT', '>=': 'GE', '==': 'EQ',
     '!=': 'NE', '->': 'RARROW',
+    '&&': 'AND', '||': 'OR',
     # assignment operators
     '=': 'EQUALS', '+=': 'PLUSEQUAL', '-=': 'MINUSEQUAL',
     '*=': 'TIMESEQUAL', '@=': 'ATEQUAL', '/=': 'DIVEQUAL', '%=': 'MODEQUAL',
@@ -95,6 +96,14 @@ def handle_name(state, token, stream):
         else:
             state['last'] = n
             yield _new_token('IOREDIRECT', string, token.start)
+    elif token.string == 'and':
+        yield _new_token('AND', token.string, token.start)
+        if n is not None:
+            yield from handle_token(state, n, stream)
+    elif token.string == 'or':
+        yield _new_token('OR', token.string, token.start)
+        if n is not None:
+            yield from handle_token(state, n, stream)
     else:
         yield _new_token('NAME', token.string, token.start)
         if n is not None:
@@ -146,8 +155,33 @@ def _make_special_handler(token_type, extra_check=lambda x: True):
 handle_number = _make_special_handler('NUMBER')
 """Function for handling number tokens"""
 
-handle_ampersand = _make_special_handler('AMPERSAND')
-"""Function for handling ampersand tokens"""
+def handle_ampersands(state, token, stream):
+    """Function for generating PLY tokens for single and double ampersands."""
+    n = next(stream, None)
+    if n is not None and n.type == tokenize.OP and \
+            n.string == '&' and n.start == token.end:
+        state['last'] = n
+        yield _new_token('AND', 'and', token.start)
+    else:
+        state['last'] = token
+        if state['pymode'][-1][0]:
+            yield _new_token('AMPERSAND', token.string, token.start)
+        if n is not None:
+            yield from handle_token(state, n, stream)
+
+
+def handle_pipes(state, token, stream):
+    """Function for generating PLY tokens for single and double pipes."""
+    n = next(stream, None)
+    if n is not None and n.type == tokenize.OP and \
+            n.string == '|' and n.start == token.end:
+        state['last'] = n
+        yield _new_token('OR', 'or', token.start)
+    else:
+        state['last'] = token
+        yield _new_token('PIPE', token.string, token.start)
+        if n is not None:
+            yield from handle_token(state, n, stream)
 
 
 def handle_dollar(state, token, stream):
@@ -381,7 +415,10 @@ special_handlers = {
     tokenize.NAME: handle_name,
     tokenize.NUMBER: handle_number,
     tokenize.ERRORTOKEN: handle_error_token,
-    (tokenize.OP, '&'): handle_ampersand,
+    (tokenize.OP, '|'): handle_pipes,
+    (tokenize.OP, '||'): handle_pipes,
+    (tokenize.OP, '&'): handle_ampersands,
+    (tokenize.OP, '&&'): handle_ampersands,
     (tokenize.OP, '@'): handle_at,
     (tokenize.OP, '('): handle_lparen,
     (tokenize.OP, ')'): handle_rparen,
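The one-token lookahead in ``handle_ampersands``/``handle_pipes`` (peek at the next token; if it is an adjacent ``&`` or ``|``, emit a single logical token instead of two operator tokens) can be sketched without PLY. A simplified stand-alone version over plain token strings (invented helper, not the xonsh lexer):

```python
def merge_doubles(tokens, pairs={'&': 'and', '|': 'or'}):
    """Merge adjacent identical '&' or '|' tokens into one logical token,
    mimicking the lookahead used by handle_ampersands/handle_pipes."""
    it = iter(tokens)
    tok = next(it, None)
    while tok is not None:
        nxt = next(it, None)          # peek one token ahead
        if tok in pairs and nxt == tok:
            yield pairs[tok]          # '&','&' -> 'and'; '|','|' -> 'or'
            tok = next(it, None)      # both tokens consumed
        else:
            yield tok                 # emit as-is, keep the lookahead
            tok = nxt

assert list(merge_doubles(['ls', '&', '&', 'pwd'])) == ['ls', 'and', 'pwd']
assert list(merge_doubles(['a', '|', 'b'])) == ['a', '|', 'b']
```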
@@ -15,6 +15,7 @@ except ImportError:
 from xonsh import __version__
 from xonsh.shell import Shell
 from xonsh.pretty import pprint, pretty
+from xonsh.proc import HiddenCompletedCommand
 from xonsh.jobs import ignore_sigtstp
 from xonsh.tools import HAVE_PYGMENTS, setup_win_unicode_console, print_color, ON_WINDOWS
 
@@ -131,16 +132,17 @@ def undo_args(args):
         au[k](args)
 
 def _pprint_displayhook(value):
-    if value is not None:
-        builtins._ = None  # Set '_' to None to avoid recursion
-        if HAVE_PYGMENTS:
-            s = pretty(value)  # color case
-            lexer = pyghooks.XonshLexer()
-            tokens = list(pygments.lex(s, lexer=lexer))
-            print_color(tokens)
-        else:
-            pprint(value)  # black & white case
-        builtins._ = value
+    if value is None or isinstance(value, HiddenCompletedCommand):
+        return
+    builtins._ = None  # Set '_' to None to avoid recursion
+    if HAVE_PYGMENTS:
+        s = pretty(value)  # color case
+        lexer = pyghooks.XonshLexer()
+        tokens = list(pygments.lex(s, lexer=lexer))
+        print_color(tokens)
+    else:
+        pprint(value)  # black & white case
+    builtins._ = value
 
 class XonshMode(enum.Enum):
     single_command = 0
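The early-return guard added to ``_pprint_displayhook`` is the usual shape of a custom display hook: bail out on values that should not be echoed, and manage ``_`` manually to avoid recursion. A minimal generic sketch, where ``Hidden`` is an invented stand-in for ``HiddenCompletedCommand``:

```python
import builtins

class Hidden:
    """Stand-in for a value type the hook should never echo."""

shown = []

def quiet_displayhook(value):
    if value is None or isinstance(value, Hidden):
        return                      # nothing to display
    builtins._ = None               # avoid recursion through '_'
    shown.append(repr(value))       # the real hook pretty-prints here
    builtins._ = value

quiet_displayhook(42)
quiet_displayhook(None)
quiet_displayhook(Hidden())
assert shown == ['42']              # only the plain value was echoed
assert builtins._ == 42
```

Installing such a function as ``sys.displayhook`` would apply it to every value echoed at an interactive prompt.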
@@ -83,12 +83,12 @@ except ImportError:
         codec = lookup(encoding)
     except LookupError:
         # This behaviour mimics the Python interpreter
-        raise SyntaxError("unknown encoding: " + encoding)
+        raise SyntaxError("unknown encoding: " + encoding, filename='<file>')
 
     if bom_found:
         if codec.name != 'utf-8':
             # This behaviour mimics the Python interpreter
-            raise SyntaxError('encoding problem: utf-8')
+            raise SyntaxError('encoding problem: utf-8', filename='<file>')
         encoding += '-sig'
     return encoding
@@ -1,12 +1,12 @@
 # -*- coding: utf-8 -*-
 """Implements the base xonsh parser."""
 from collections import Iterable, Sequence, Mapping
 
 try:
     from ply import yacc
 except ImportError:
     from xonsh.ply import yacc
 
 from xonsh import ast
 from xonsh.lexer import Lexer, LexToken
 from xonsh.tools import VER_3_5_1, VER_FULL
@@ -95,10 +95,12 @@ class DefaultNotGivenType(object):
 
 DefaultNotGiven = DefaultNotGivenType()
 
+BEG_TOK_SKIPS = frozenset(['WS', 'INDENT', 'NOT', 'LPAREN'])
+END_TOK_TYPES = frozenset(['SEMI', 'AND', 'OR', 'RPAREN'])
 
 def subproc_toks(line, mincol=-1, maxcol=None, lexer=None, returnline=False):
     """Encapsulates tokens in a source code line in an uncaptured
-    subprocess $[] starting at a minimum column. If there are no tokens
+    subprocess ![] starting at a minimum column. If there are no tokens
     (ie in a comment line) this returns None.
     """
     if lexer is None:
@@ -111,13 +113,15 @@ def subproc_toks(line, mincol=-1, maxcol=None, lexer=None, returnline=False):
     end_offset = 0
     for tok in lexer:
         pos = tok.lexpos
-        if tok.type != 'SEMI' and pos >= maxcol:
+        if tok.type not in END_TOK_TYPES and pos >= maxcol:
             break
-        if len(toks) == 0 and tok.type in ('WS', 'INDENT'):
+        if len(toks) == 0 and tok.type in BEG_TOK_SKIPS:
             continue  # handle indentation
-        elif len(toks) > 0 and toks[-1].type == 'SEMI':
-            toks.clear()
+        elif len(toks) > 0 and toks[-1].type in END_TOK_TYPES:
+            if pos < maxcol and tok.type not in ('NEWLINE', 'DEDENT', 'WS'):
+                toks.clear()
+                if tok.type in BEG_TOK_SKIPS:
+                    continue
+            else:
+                break
         if pos < mincol:
@@ -140,21 +144,22 @@ def subproc_toks(line, mincol=-1, maxcol=None, lexer=None, returnline=False):
             tok.lexpos = len(line)
             break
     else:
-        if len(toks) > 0 and toks[-1].type == 'SEMI':
+        if len(toks) > 0 and toks[-1].type in END_TOK_TYPES:
             toks.pop()
         if len(toks) == 0:
             return  # handle comment lines
         tok = toks[-1]
         pos = tok.lexpos
         if isinstance(tok.value, string_types):
-            end_offset = len(tok.value)
+            end_offset = len(tok.value.rstrip())
         else:
             el = line[pos:].split('#')[0].rstrip()
             end_offset = len(el)
     if len(toks) == 0:
         return  # handle comment lines
     beg, end = toks[0].lexpos, (toks[-1].lexpos + end_offset)
-    rtn = '$[' + line[beg:end] + ']'
+    end = len(line[:end].rstrip())
+    rtn = '![' + line[beg:end] + ']'
     if returnline:
         rtn = line[:beg] + rtn + line[end:]
     return rtn
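The net effect of the ``subproc_toks`` changes above is that trailing comments, semicolons, and the new end tokens are kept outside the wrapped span before ``![...]`` is inserted. A toy string-based approximation of that wrapping (no lexer; ``wrap_subproc`` is an invented name, not xonsh's function):

```python
def wrap_subproc(line):
    """Crude model of subproc_toks: wrap the command portion of a line
    in ![...], leaving indentation, trailing comments, and a trailing
    semicolon outside the brackets."""
    body, sep, comment = line.partition('#')
    core = body.rstrip()            # command plus optional ';'
    pad = body[len(core):]          # whitespace before any comment
    semi = ''
    if core.endswith(';'):
        core, semi = core[:-1].rstrip(), ';'
    lead = len(core) - len(core.lstrip())
    cmd = core.lstrip()
    if not cmd:
        return None                 # comment-only or empty line
    return core[:lead] + '![' + cmd + ']' + semi + pad + sep + comment

assert wrap_subproc('ls -l') == '![ls -l]'
assert wrap_subproc('ls -l;') == '![ls -l];'
assert wrap_subproc('# just a comment') is None
assert wrap_subproc('ls -l  # lets list') == '![ls -l]  # lets list'
```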