manual merge

Bob Hyman 2016-08-31 20:09:14 -04:00
commit 13912e90b9
81 changed files with 3071 additions and 425 deletions

23
.circle.yml Normal file

@ -0,0 +1,23 @@
machine:
  environment:
    PATH: /home/ubuntu/miniconda/envs/test_env/bin:/home/ubuntu/miniconda/bin:$PATH
  post:
    - pyenv global 3.4.4 3.5.1
dependencies:
  pre:
    - wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda.sh
    - bash miniconda.sh -b -p $HOME/miniconda
    - conda config --set always_yes yes --set changeps1 no
    - conda update -q conda
    - bash .circle_miniconda.sh
    - rm -rf ~/.pyenv
    - rm -rf ~/virtualenvs
  post:
    - case $CIRCLE_NODE_INDEX in 0) python setup.py install ;; 1) python setup.py install ;; esac:
        parallel: true
test:
  override:
    - case $CIRCLE_NODE_INDEX in 0) py.test --timeout=10 ;; 1) py.test --timeout=10 ;; esac:
        parallel: true

10
.circle_miniconda.sh Normal file

@ -0,0 +1,10 @@
if [[ $CIRCLE_NODE_INDEX == 0 ]]
then
conda create -q -n test_env python=3.4 pygments prompt_toolkit ply pytest pytest-timeout psutil
fi
if [[ $CIRCLE_NODE_INDEX == 1 ]]
then
conda create -q -n test_env python=3.5 pygments prompt_toolkit ply pytest pytest-timeout psutil
fi

1
.gitignore vendored

@ -19,6 +19,7 @@ xonsh.egg-info/
docs/_build/
docs/envvarsbody
docs/xontribsbody
docs/eventsbody
xonsh/dev.githash
**/__pycache__/


@ -4,12 +4,6 @@ matrix:
- os: linux
python: "nightly"
env: RUN_COVERAGE=true
- os: linux
language: generic
env: PYTHON="3.4" MINICONDA_OS="Linux"
- os: linux
language: generic
env: PYTHON="3.5" MINICONDA_OS="Linux"
- os: osx
language: generic
env: PYTHON="3.4" MINICONDA_OS="MacOSX"

1
circle.yml Symbolic link

@ -0,0 +1 @@
.circle.yml

42
docs/advanced_events.rst Normal file

@ -0,0 +1,42 @@
.. _events:
********************
Advanced Events
********************
If you haven't already, go read the `events tutorial <tutorial_events.rst>`_ first. This document covers the messy
details of the event system.
You may also find the `events API reference <api/events.html>`_ useful.
Why Unordered?
==============
Yes, handler call order is not guaranteed. Please don't file bugs about this.
This was chosen because the order of handler registration is dependent on load order, which is
stable in a release but not something generally reasoned about. In addition, xontribs mean that we
don't know what handlers could be registered. So even an "ordered" event system would be unable to
make guarantees about ordering because of the larger system.
Because of this, the event system is not ordered; this is a form of abstraction. Order-dependent
semantics are not encouraged by the built-in methods.
So how do I handle results?
===========================
``Event.fire()`` returns a list of the handlers' return values. You should merge this list in an
appropriate way.
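For example, if each handler answers a yes/no question, a natural merge is ``any()`` or ``all()``.
A minimal sketch (the ``on_spam_detected`` event and its boolean protocol are hypothetical)::

    results = events.on_spam_detected.fire(can)
    eaten = any(r for r in results if r is not None)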
What are Species?
=================
In xonsh, events come in species. Each one may look like an event and quack like an event, but they
behave differently.
This was done because load hooks look like events and quack like events, but they have different
semantics. See `LoadEvent <api/events.html#xonsh.events.LoadEvent>`_ for details.
In order to turn an event from the default ``Event`` into another species, you must transmogrify it using
``events.transmogrify()``. The class the event is turned into must be a subclass of ``AbstractEvent``.
(Under the hood, transmogrify creates a new instance and copies the handlers and docstring from the
old instance to the new one.)
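For example, a minimal sketch (the ``on_myplugin_loaded`` event name is hypothetical; ``LoadEvent``
and ``events`` come from ``xonsh.events``)::

    from xonsh.events import events, LoadEvent

    events.doc('on_myplugin_loaded', """
    on_myplugin_loaded() -> None

    Fired once my plugin has finished loading.
    """)
    events.transmogrify('on_myplugin_loaded', LoadEvent)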

11
docs/api/events.rst Normal file

@ -0,0 +1,11 @@
.. _xonsh_events:
********************************************************************************
Events (``xonsh.events``)
********************************************************************************
.. automodule:: xonsh.events
:members:
:undoc-members:
:inherited-members:


@ -49,6 +49,7 @@ For those of you who want the gritty details.
.. toctree::
:maxdepth: 1
events
tools
platform
lazyjson


@ -10,12 +10,14 @@
import os
import sys
import builtins
import inspect
os.environ['XONSH_DEBUG'] = '1'
from xonsh import __version__ as XONSH_VERSION
from xonsh.environ import DEFAULT_DOCS, Env
from xonsh.xontribs import xontrib_metadata
from xonsh.events import events
sys.path.insert(0, os.path.dirname(__file__))
@ -340,8 +342,45 @@ def make_xontribs():
f.write(s)
def make_events():
names = sorted(vars(events).keys())
s = ('.. list-table::\n'
' :header-rows: 0\n\n')
table = []
ncol = 3
row = ' {0} - :ref:`{1} <{2}>`'
for i, var in enumerate(names):
star = '*' if i%ncol == 0 else ' '
table.append(row.format(star, var, var.lower()))
table.extend([' -']*((ncol - len(names)%ncol)%ncol))
s += '\n'.join(table) + '\n\n'
s += ('Listing\n'
'-------\n\n')
sec = ('.. _{low}:\n\n'
'``{title}``\n'
'{under}\n'
'{docstr}\n\n'
'-------\n\n')
for name in names:
event = getattr(events, name)
title = name
docstr = inspect.getdoc(event)
if docstr.startswith(name):
# Assume the first line is a signature
title, docstr = docstr.split('\n', 1)
docstr = docstr.strip()
under = '.' * (len(title) + 4)
s += sec.format(low=name.lower(), title=title, under=under,
docstr=docstr)
s = s[:-9]
fname = os.path.join(os.path.dirname(__file__), 'eventsbody')
with open(fname, 'w') as f:
f.write(s)
make_envvars()
make_xontribs()
make_events()
builtins.__xonsh_history__ = None
builtins.__xonsh_env__ = {}

5
docs/events.rst Normal file

@ -0,0 +1,5 @@
Core Events
===========
The following events are defined by xonsh itself.
.. include:: eventsbody


@ -74,9 +74,10 @@ Python variables, this could be transformed to the equivalent (Python)
expressions ``ls - l`` or ``ls-l``. Neither of which are valid listing
commands.
What xonsh does to overcome such ambiguity is to check if the left-most
name (``ls`` above) is in the present Python context. If it is, then it takes
the line to be valid xonsh as written. If the left-most name cannot be found,
What xonsh does to overcome such ambiguity is to check if the names in the
expression (``ls`` and ``l`` above) are in the present Python context. If they are,
then it takes
the line to be valid xonsh as written. If one of the names cannot be found,
then xonsh assumes that the left-most name is an external command. It thus
attempts to parse the line after wrapping it in an uncaptured subprocess
call ``![]``. If the wrapped version successfully parses, the ``![]`` version


@ -49,6 +49,7 @@ the xonsh shell
"The carcolh will catch you!",
"People xonshtantly mispronounce these things",
"WHAT...is your favorite shell?",
"Conches for the xonsh god!",
"Exploiting the workers and hanging on to outdated imperialist dogma since 2015."
];
document.write(taglines[Math.floor(Math.random() * taglines.length)]);
@ -98,7 +99,9 @@ Contents
tutorial
tutorial_hist
tutorial_macros
tutorial_xontrib
tutorial_events
tutorial_completers
bash_to_xsh
python_virtual_environments
@ -114,6 +117,7 @@ Contents
envvars
aliases
xontribs
events
**News & Media:**
@ -132,6 +136,7 @@ Contents
:maxdepth: 1
api/index
advanced_events
devguide/
previous/index
faq


@ -31,20 +31,37 @@ the following from the source directory,
$ python setup.py install
Arch Linux users can install xonsh from the Arch User Repository with e.g.
``yaourt``, ``aura``, ``pacaur``, ``PKGBUILD``, etc...:
Debian/Ubuntu users can install xonsh from the repository with:
**apt:**
.. code-block:: console
$ apt install xonsh
Fedora users can install xonsh from the repository with:
**dnf:**
.. code-block:: console
$ dnf install xonsh
Arch Linux users can install xonsh from the Arch User Repository with:
**yaourt:**
.. code-block:: console
$ yaourt -Sa xonsh # yaourt will call sudo when needed
$ yaourt -Sa xonsh
**aura:**
.. code-block:: console
$ sudo aura -A xonsh
$ aura -A xonsh
**pacaur:**
@ -52,6 +69,7 @@ Arch Linux users can install xonsh from the Arch User Repository with e.g.
$ pacaur -S xonsh
Note that some of these may require ``sudo``.
If you run into any problems, please let us know!
.. include:: add_to_shell.rst
@ -139,3 +157,20 @@ you can try to replace a command for this action by the following:
In order to do this, go to ``Edit > Configure custom actions...``,
then choose ``Open Terminal Here`` and click on ``Edit currently selected action`` button.
Unable to use utf-8 characters inside xonsh
===========================================
If you are unable to use UTF-8 (i.e. non-ASCII) characters in xonsh, for example if you get the following output:
.. code-block:: xonsh
echo "ßðđ"
xonsh: For full traceback set: $XONSH_SHOW_TRACEBACK = True
UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-2: ordinal not in range(128)
The problem might be:
- Your locale is not set to UTF-8; to check this, inspect the content of the environment variable ``LC_CTYPE``.
- Your locale is set correctly, but only **after** xonsh has started. This is typically the case if you set ``LC_CTYPE`` inside your ``.xonshrc`` and xonsh is your default/login shell. To fix this, consult your operating system's documentation on how to set environment variables before the shell starts (``~/.pam_environment``, for example).
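One quick way to confirm the diagnosis is to ask Python which encoding it picked up (a sketch using
only the standard library; the exact output varies by system):

.. code-block:: xonshcon

    >>> import locale
    >>> locale.getpreferredencoding()
    'ANSI_X3.4-1968'

An ASCII answer like the one above means the locale was not set to UTF-8 when xonsh started.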


@ -267,11 +267,11 @@ have also been written as ``ls - l`` or ``ls-l``. So how does xonsh know
that ``ls -l`` is meant to be run in subprocess-mode?
For any given line that only contains an expression statement (expr-stmt,
see the Python AST docs for more information), if the left-most name cannot
be found as a current variable name xonsh will try to parse the line as a
subprocess command instead. In the above, if ``ls`` is not a variable,
then subprocess mode will be attempted. If parsing in subprocess mode fails,
then the line is left in Python-mode.
see the Python AST docs for more information), if not all of the names can
be found as current variables, xonsh will try to parse the line as a
subprocess command instead. In the above, if ``ls`` and ``l`` are not
variables, then subprocess mode will be attempted. If parsing in subprocess
mode fails, then the line is left in Python-mode.
In the following example, we will list the contents of the directory
with ``ls -l``. Then we'll make new variable names ``ls`` and ``l`` and then
@ -284,7 +284,7 @@ the directories again.
>>> ls -l
total 0
-rw-rw-r-- 1 snail snail 0 Mar 8 15:46 xonsh
>>> # set an ls variable to force python-mode
>>> # set ls and l variables to force python-mode
>>> ls = 44
>>> l = 2
>>> ls -l
@ -1132,7 +1132,7 @@ with keyword arguments:
Removing an alias is as easy as deleting the key from the alias dictionary:
.. code-block:: xonshcon
>>> del aliases['banana']
.. note::

49
docs/tutorial_events.rst Normal file

@ -0,0 +1,49 @@
.. _tutorial_events:
************************************
Tutorial: Events
************************************
What's the best way to keep informed in xonsh? Subscribe to an event!
Overview
========
Simply put, events are a way for various pieces of xonsh to tell each other what's going on. They're
fired when something of note happens, e.g. the current directory changes or a command is about to be
executed.
While xonsh has its own event system, it is not dissimilar to other event systems. If you already know
events, this should be easy to understand. If not, this document is for you.
Show me the code!
=================
Fine, fine!
This will add a line to a file every time the current directory changes (due to ``cd``, ``pushd``,
or several other commands)::
    @events.on_chdir
    def add_to_file(newdir):
        with open(g`~/.dirhist`[0], 'a') as dh:
            print(newdir, file=dh)
The exact arguments passed and the return values expected vary from event to event; see the
`event list <events.html>`_ for details.
Can I use this, too?
====================
Yes! It's even easy! In your xontrib, you just have to do something like::
    events.doc('myxontrib_on_spam', """
    myxontrib_on_spam(can: Spam) -> bool?

    Fired in case of spam. Return ``True`` if it's been eaten.
    """)
This will enable users to call ``help(events.myxontrib_on_spam)`` and get useful output.
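When the moment arrives, your xontrib fires the event and merges whatever the handlers returned
(a sketch, reusing the hypothetical ``myxontrib_on_spam`` event from above)::

    eaten = any(events.myxontrib_on_spam.fire(can))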
Under the Hood
==============
If you want to know more about the gory details of what makes events tick, see
`Advanced Events <advanced_events.html>`_.

433
docs/tutorial_macros.rst Normal file

@ -0,0 +1,433 @@
.. _tutorial_macros:
************************************
Tutorial: Macros
************************************
Bust out your DSLRs, people. It is time to closely examine macros!
What are macro instructions?
============================
In generic terms, a programming macro is a special kind of syntax that
replaces a smaller amount of code with a larger expression, syntax tree,
code object, etc after the macro has been evaluated.
In practice, macros pause the normal parsing and evaluation of the code
that they contain. This is so that they can perform their expansion with
their complete inputs. Roughly, the algorithm for executing a macro is:
1. Macro start, pause or skip normal parsing
2. Gather macro inputs as strings
3. Evaluate macro with inputs
4. Resume normal parsing and execution.
Is this meta-programming? You betcha!
When and where are macros used?
===============================
Macros are a practicality-beats-purity feature of many programming
languages. Because they allow you to break out of the normal parsing
cycle, depending on the language, you can do some truly wild things with
them. However, macros are really there to reduce the amount of boilerplate
code that users and developers have to write.
In C and C++ (and Fortran), the C Preprocessor ``cpp`` is a macro evaluation
engine. For example, every time you see an ``#include`` or ``#ifdef``, this is
the ``cpp`` macro system in action.
In these languages, the macros are technically outside of the definition
of the language at hand. Furthermore, because ``cpp`` must function with only
a single pass through the code, the sorts of macros that can be written with
``cpp`` are relatively simple.
Rust, on the other hand, has a first-class notion of macros that look and
feel a lot like normal functions. Macros in Rust are capable of pulling off
type information from their arguments and preventing their return values
from being consumed.
Other languages like Lisp, Forth, and Julia also provide their own macro systems.
Even reStructuredText (rST) directives could be considered macros.
Haskell and other more purely functional languages do not need macros (since
evaluation is lazy anyway), and so do not have them.
If these seem unfamiliar to the Python world, note that Jupyter and IPython
magics ``%`` and ``%%`` are macros!
Function Macros
===============
Xonsh supports Rust-like macros that are based on normal Python callables.
Macros do not require a special definition in xonsh. However, like in Rust,
they must be called with an exclamation point ``!`` between the callable
and the opening parentheses ``(``. Macro arguments are split on the top-level
commas ``,``, like normal Python functions. For example, say we have the
functions ``f`` and ``g``. We could perform a macro call on these functions
with the following:
.. code-block:: xonsh
# No macro args
f!()
# Single arg
f!(x)
g!([y, 43, 44])
# Two args
f!(x, x + 42)
g!([y, 43, 44], f!(z))
Not so bad, right? So what actually happens to the arguments when used
in a macro call? Well, that depends on the definition of the function. In
particular, each argument in the macro call is matched up with the corresponding
parameter annotation in the callable's signature. For example, say we have
an ``identity()`` function that annotates its sole argument as a string:
.. code-block:: xonsh
    def identity(x : str):
        return x
If we call this normally, we'll just get whatever object we put in back out,
even if that object is not a string:
.. code-block:: xonshcon
>>> identity('me')
'me'
>>> identity(42)
42
>>> identity(identity)
<function __main__.identity>
However, if we perform macro calls instead we are now guaranteed to get
the string of the source code that is in the macro call:
.. code-block:: xonshcon
>>> identity!('me')
"'me'"
>>> identity!(42)
'42'
>>> identity!(identity)
'identity'
Also note that each macro argument is stripped prior to passing it to the
macro itself. This is done for consistency.
.. code-block:: xonshcon
>>> identity!(42)
'42'
>>> identity!( 42 )
'42'
Importantly, because we are capturing and not evaluating the source code,
a macro call can contain input that is beyond the usual syntax. In fact, that
is sort of the whole point. Here are some cases to start your gears turning:
.. code-block:: xonshcon
>>> identity!(import os)
'import os'
>>> identity!(if True:
>>> pass)
'if True:\n pass'
>>> identity!(std::vector<std::string> x = {"yoo", "hoo"})
'std::vector<std::string> x = {"yoo", "hoo"}'
You do you, ``identity()``.
Calling Function Macros
=======================
There are a couple of points to consider when calling macros. The first is
that passing in arguments by name will not behave as expected. This is because
the ``<name>=`` is captured by the macro itself. Using the ``identity()``
function from above:
.. code-block:: xonshcon
>>> identity!(x=42)
'x=42'
Performing a macro call uses only argument order to pass in values.
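One way to see why: roughly speaking (a sketch based on the ``call_macro`` helper exercised by
xonsh's tests; the real machinery differs in detail), the call above behaves like::

    from xonsh.built_ins import call_macro

    # each macro argument arrives as its raw source text
    call_macro(identity, ('x=42',), globals(), locals())

Since the entire ``x=42`` text is captured as a single string argument, there is no keyword left to bind.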
Additionally, macro calls split arguments only on the top-level commas.
The top-level commas are not included in any argument.
This behaves analogously to normal Python function calls. For instance,
say we have the following ``g()`` function that accepts two arguments:
.. code-block:: xonsh
    def g(x : str, y : str):
        print('x = ' + repr(x))
        print('y = ' + repr(y))
Then you can see the splitting and stripping behavior on each macro
argument:
.. code-block:: xonshcon
>>> g!(42, 65)
x = '42'
y = '65'
>>> g!(42, 65,)
x = '42'
y = '65'
>>> g!( 42, 65, )
x = '42'
y = '65'
>>> g!(['x', 'y'], {1: 1, 2: 3})
x = "['x', 'y']"
y = '{1: 1, 2: 3}'
Sometimes you may only want to pass in the first few arguments as macro
arguments and you want the rest to be treated as normal Python arguments.
By convention, xonsh's macro caller will look for a lone ``*`` argument
in order to split the macro arguments and the regular arguments. So for
example:
.. code-block:: xonshcon
>>> g!(42, *, 65)
x = '42'
y = 65
>>> g!(42, *, y=65)
x = '42'
y = 65
In the above, note that ``x`` is still captured as a macro argument. However,
everything after the ``*``, namely ``y``, is evaluated as if it were passed
in to a normal function call. This can be useful for large interfaces where
only a handful of args are expected as macro arguments.
Hopefully, now you see the big picture.
Writing Function Macros
=======================
Though any function (or callable) can be used as a macro, this functionality
is probably most useful if the function was *designed* as a macro. There
are two main aspects of macro design to consider: argument annotations and
call site execution context.
Macro Function Argument Annotations
-----------------------------------
There are six kinds of annotations that macros are able to interpret:
.. list-table:: Kinds of Annotation
:header-rows: 1
* - Category
- Object
- Flags
- Modes
- Returns
* - String
- ``str``
- ``'s'``, ``'str'``, or ``'string'``
-
- Source code of argument as string.
* - AST
- ``ast.AST``
- ``'a'`` or ``'ast'``
- ``'eval'`` (default), ``'exec'``, or ``'single'``
- Abstract syntax tree of argument.
* - Code
- ``types.CodeType`` or ``compile``
- ``'c'``, ``'code'``, or ``'compile'``
- ``'eval'`` (default), ``'exec'``, or ``'single'``
- Compiled code object of argument.
* - Eval
- ``eval`` or ``None``
- ``'v'`` or ``'eval'``
-
- Evaluation of the argument, *default*.
* - Exec
- ``exec``
- ``'x'`` or ``'exec'``
- ``'exec'`` (default) or ``'single'``
- Execs the argument and returns None.
* - Type
- ``type``
- ``'t'`` or ``'type'``
-
- The type of the argument after it has been evaluated.
These annotations allow you to hook into whichever stage of the compilation
that you desire. It is important to note that the string form of the arguments
is split and stripped (as described above) prior to conversion to the
annotation type.
Each argument may be annotated with its own individual type. Annotations
may be provided as either objects or as the string flags seen in the above
table. String flags are case-insensitive.
If an argument does not have an annotation, ``eval`` is selected.
This makes the macro call behave like a normal function call for
arguments whose annotations are unspecified. For example,
.. code-block:: xonsh
    def func(a, b : 'AST', c : compile):
        pass
In a macro call of ``func!()``,
* ``a`` will be evaluated with ``eval`` since no annotation was provided,
* ``b`` will be parsed into a syntax tree node, and
* ``c`` will be compiled into code object since the builtin ``compile()``
function was used as the annotation.
Additionally, certain kinds of annotations have different modes that
affect the parsing, compilation, and execution of its argument. While a
sensible default is provided, you may also supply your own. This is
done by annotating with a (kind, mode) tuple. The first element can
be any valid object or flag. The second element must be a corresponding
mode as a string. For instance,
.. code-block:: xonsh
    def gunc(d : (exec, 'single'), e : ('c', 'exec')):
        pass
Thus in a macro call of ``gunc!()``,
* ``d`` will be exec'd in single-mode (rather than exec-mode), and
* ``e`` will be compiled in exec-mode (rather than eval-mode).
For more information on the differences between the exec, eval, and single
modes please see the Python documentation.
Macro Function Execution Context
--------------------------------
Equally important as having the macro arguments is knowing the execution
context of the macro call itself. Rather than mucking around with frames,
macros provide both the globals and locals of the call site. These are
accessible as the ``macro_globals`` and ``macro_locals`` attributes of
the macro function itself while the macro is being executed.
For example, consider a macro which replaces all literal ``1`` digits
with the literal ``2``, evaluates the modification, and returns the results.
To eval, the macro will need to pull off its globals and locals:
.. code-block:: xonsh
    def one_to_two(x : str):
        s = x.replace('1', '2')
        glbs = one_to_two.macro_globals
        locs = one_to_two.macro_locals
        return eval(s, glbs, locs)
Running this with a few different inputs, we see:
.. code-block:: xonshcon
>>> one_to_two!(1 + 1)
4
>>> one_to_two!(11)
22
>>> x = 1
>>> one_to_two!(x + 1)
3
Of course, many other more sophisticated options are available depending on the
use case.
Subprocess Macros
=================
Like with function macros above, subprocess macros allow you to pause the parser
until you are ready to exit subprocess mode. Unlike function macros, there
is only a single macro argument and its macro type is always a string. This
is because it (usually) doesn't make sense to pass non-string arguments to a
command. And when it does, there is the ``@()`` syntax!
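(As an aside, here is what that non-macro escape hatch looks like; ``@()`` evaluates a Python
expression and splices the result into the command.)

.. code-block:: xonshcon

    >>> x = 'xonsh'
    >>> echo @(x + ' party')
    xonsh party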
In the simplest case, subprocess macros look like the equivalent of their
function macro counterparts:
.. code-block:: xonshcon
>>> echo! I'm Mr. Meeseeks.
I'm Mr. Meeseeks.
Again, note that everything to the right of the ``!`` is passed down to the
``echo`` command as the final, single argument. This is space preserving,
like wrapping with quotes:
.. code-block:: xonshcon
# normally, xonsh will split on whitespace,
# so each argument is passed in separately
>>> echo x y z
x y z
# usually space can be preserved with quotes
>>> echo "x y z"
x y z
# however, subprocess macros will pause and then strip
# all input after the exclamation point
>>> echo! x y z
x y z
However, the macro will pause everything, including path and environment variable
expansion, that might be present even with quotes. For example:
.. code-block:: xonshcon
# without macros, environment variables are expanded
>>> echo $USER
lou
# inside of a macro, all additional munging is turned off.
>>> echo! $USER
$USER
Everything to the right of the exclamation point, except the leading and trailing
whitespace, is passed into the command directly as written. This allows certain
commands to function in cases where quoting or piping might be more burdensome.
The ``timeit`` command is a great example where simple syntax will often fail,
but will be easily executable as a macro:
.. code-block:: xonshcon
# fails normally
>>> timeit "hello mom " + "and dad"
xonsh: subprocess mode: command not found: hello
# macro success!
>>> timeit! "hello mom " + "and dad"
100000000 loops, best of 3: 8.24 ns per loop
All expressions to the left of the exclamation point are passed in normally and
are not treated as the special macro argument. This allows the mixing of
simple and complex command line arguments. For example, sometimes you might
really want to write some code in another language:
.. code-block:: xonshcon
# don't worry, it is temporary!
>>> bash -c ! export var=42; echo $var
42
# that's better!
>>> python -c ! import os; print(os.path.abspath("/"))
/
Compared to function macros, subprocess macros are relatively simple.
However, they can still be very expressive!
Take Away
=========
Hopefully, at this point, you see that a few well-placed macros can be extremely
convenient and valuable to any project.

16
news/allnames.rst Normal file

@ -0,0 +1,16 @@
**Added:** None
**Changed:**
* Context-sensitive AST transformation now checks that all names in an
expression are in scope. If they are, then Python mode is retained. However,
if even one is missing, subprocess wrapping is attempted. Previously, only the
left-most name was examined for being within scope.
**Deprecated:** None
**Removed:** None
**Fixed:** None
**Security:** None


@ -0,0 +1,13 @@
**Added:** None
**Changed:** None
**Deprecated:** None
**Removed:** None
**Fixed:**
* Fixed parsing for tuples of tuples (like ``(),()``).
**Security:** None


@ -0,0 +1,17 @@
**Added:** None
**Changed:**
* ``$COMPLETIONS_BRACKETS`` is now available to determine whether or not to
include opening brackets in Python completions
**Deprecated:** None
**Removed:** None
**Fixed:**
* ``sys.stdin``, ``sys.stdout``, ``sys.stderr`` no longer complete with
opening square brackets
**Security:** None


@ -0,0 +1,14 @@
**Added:**
* New option ``COMPLETIONS_CONFIRM``. When set, ``<Enter>`` is used to confirm
completion instead of running command while completion menu is displayed.
**Changed:** None
**Deprecated:** None
**Removed:** None
**Fixed:** None
**Security:** None

14
news/emptysh.rst Normal file

@ -0,0 +1,14 @@
**Added:** None
**Changed:** None
**Deprecated:** None
**Removed:** None
**Fixed:**
* Foreign shell functions that are mapped to empty filenames no longer
receive aliases since they can't be found to source later.
**Security:** None

13
news/events.rst Normal file

@ -0,0 +1,13 @@
**Added:**
* A new `event subsystem <http://xon.sh/tutorial_events.html>`_ has been added.
**Changed:** None
**Deprecated:** None
**Removed:** None
**Fixed:** None
**Security:** None

14
news/expandvar.rst Normal file

@ -0,0 +1,14 @@
**Added:** None
**Changed:** None
**Deprecated:** None
**Removed:** None
**Fixed:**
* Environment variables in subprocess mode were not being expanded
unless they were in a string. They are now expanded properly.
**Security:** None


@ -0,0 +1,13 @@
**Added:** None
**Changed:**
* New implementation of bash completer with better performance and compatibility.
**Deprecated:** None
**Removed:** None
**Fixed:** None
**Security:** None

13
news/foreign-aliases.rst Normal file

@ -0,0 +1,13 @@
**Added:** None
**Changed:**
* Foreign aliases that match xonsh builtin aliases are now ignored with a warning.
**Deprecated:** None
**Removed:** None
**Fixed:** None
**Security:** None

21
news/history-api.rst Normal file

@ -0,0 +1,21 @@
**Added:**
* ``History`` methods ``__iter__`` and ``__getitem__``
* ``tools.get_portions`` that yields parts of an iterable
**Changed:**
* ``_curr_session_parser`` now iterates over ``History``
**Deprecated:** None
**Removed:**
* ``History`` method ``show``
* ``_hist_get_portion`` in favor of ``tools.get_portions``
**Fixed:** None
**Security:** None

13
news/linux-guide.rst Normal file

@ -0,0 +1,13 @@
**Added:**
* How-to install sections for Debian/Ubuntu and Fedora.
**Changed:** None
**Deprecated:** None
**Removed:** None
**Fixed:** None
**Security:** None

16
news/m.rst Normal file

@ -0,0 +1,16 @@
**Added:**
* Macro function calls are now available. These use a Rust-like
``f!(arg)`` syntax.
* Macro subprocess calls are now available with the ``echo! x y z``
syntax.
**Changed:** None
**Deprecated:** None
**Removed:** None
**Fixed:** None
**Security:** None

15
news/module-file-attr.rst Normal file

@ -0,0 +1,15 @@
**Added:** None
**Changed:**
* ``create_module`` implementation on ``XonshImportHook``
**Deprecated:** None
**Removed:** None
**Fixed:**
* Imported xonsh modules now have the ``__file__`` attribute
**Security:** None


@ -0,0 +1,13 @@
**Added:** None
**Changed:** None
**Deprecated:** None
**Removed:** None
**Fixed:**
* xonsh now properly handles syntax error messages arising from using values in inappropriate contexts (e.g., ``del 7``).
**Security:** None

13
news/netbsd.rst Normal file

@ -0,0 +1,13 @@
**Added:**
* NetBSD is now supported.
**Changed:** None
**Deprecated:** None
**Removed:** None
**Fixed:** None
**Security:** None


@ -0,0 +1,14 @@
**Added:** None
**Changed:**
* ``prompt_toolkit`` completions now only show the rightmost portion
of a given completion in the dropdown
**Deprecated:** None
**Removed:** None
**Fixed:** None
**Security:** None

16
news/syncp.rst Normal file

@ -0,0 +1,16 @@
**Added:** None
**Changed:**
* ``yacc_debug=True`` now loads the parser on the same thread that the
  ``Parser`` instance is created on. ``setup.py`` now uses this synchronous
  form because the asynchronous form was causing the parser table to be missed by some package
managers.
**Deprecated:** None
**Removed:** None
**Fixed:** None
**Security:** None

13
news/test_xsh.rst Normal file

@ -0,0 +1,13 @@
**Added:**
* Added a py.test plugin to collect ``test_*.xsh`` files and run ``test_*()`` functions.
**Changed:** None
**Deprecated:** None
**Removed:** None
**Fixed:** None
**Security:** None

14
news/tilde.rst Normal file

@ -0,0 +1,14 @@
**Added:** None
**Changed:**
* Tilde expansion for the home directory now has the same semantics as Bash.
Previously it only matched leading tildes.
**Deprecated:** None
**Removed:** None
**Fixed:** None
**Security:** None


@ -5,7 +5,7 @@ flake8-ignore =
*.py E402
tests/tools.py E128
xonsh/ast.py F401
xonsh/built_ins.py F821
xonsh/built_ins.py F821 E721
xonsh/commands_cache.py F841
xonsh/history.py F821
xonsh/pyghooks.py F821


@ -70,7 +70,7 @@ def build_tables():
sys.path.insert(0, os.path.dirname(__file__))
from xonsh.parser import Parser
Parser(lexer_table='lexer_table', yacc_table='parser_table',
outputdir='xonsh')
outputdir='xonsh', yacc_debug=True)
sys.path.pop(0)
@ -310,7 +310,8 @@ def main():
skw['entry_points'] = {
'pygments.lexers': ['xonsh = xonsh.pyghooks:XonshLexer',
'xonshcon = xonsh.pyghooks:XonshConsoleLexer'],
}
'pytest11': ['xonsh = xonsh.pytest_plugin']
}
skw['cmdclass']['develop'] = xdevelop
setup(**skw)


@ -1,11 +1,16 @@
import glob
import builtins
import pytest
from tools import DummyShell, sp
import xonsh.built_ins
from xonsh.built_ins import ensure_list_of_strs
from xonsh.execer import Execer
from xonsh.tools import XonshBlockError
import glob
from xonsh.events import events
from tools import DummyShell, sp
@pytest.fixture
@ -38,6 +43,9 @@ def xonsh_builtins():
builtins.execx = None
builtins.compilex = None
builtins.aliases = {}
# Unlike all the other stuff, this has to refer to the "real" events instance, because other
# modules fire their events on the global instance.
builtins.events = events
yield builtins
del builtins.__xonsh_env__
del builtins.__xonsh_ctx__
@ -56,3 +64,4 @@ def xonsh_builtins():
del builtins.execx
del builtins.compilex
del builtins.aliases
del builtins.events


@ -2,7 +2,7 @@
import ast as pyast
from xonsh import ast
from xonsh.ast import Tuple, Name, Store, min_line
from xonsh.ast import Tuple, Name, Store, min_line, Call, BinOp
import pytest
@ -27,12 +27,31 @@ def test_gather_names_tuple():
obs = ast.gather_names(node)
assert exp == obs
def test_multilline_num():
code = ('x = 1\n'
'ls -l\n') # this second line wil be transformed
@pytest.mark.parametrize('line1', [
# this second line will be transformed into a subprocess call
'x = 1',
# this second line will be transformed into a subprocess call even though
# ls is defined.
'ls = 1',
# the second line will still be transformed even though l exists.
'l = 1',
])
def test_multilline_num(line1):
code = line1 + '\nls -l\n'
tree = check_parse(code)
lsnode = tree.body[1]
assert 2 == min_line(lsnode)
assert isinstance(lsnode.value, Call)
def test_multilline_no_transform():
# no subprocess transformations happen here since all variables are known
code = 'ls = 1\nl = 1\nls -l\n'
tree = check_parse(code)
lsnode = tree.body[2]
assert 3 == min_line(lsnode)
assert isinstance(lsnode.value, BinOp)
@pytest.mark.parametrize('inp', [


@ -3,13 +3,16 @@
from __future__ import unicode_literals, print_function
import os
import re
import builtins
import types
from ast import AST
import pytest
from xonsh import built_ins
from xonsh.built_ins import reglob, pathsearch, helper, superhelper, \
ensure_list_of_strs, list_of_strs_or_callables, regexsearch, \
globsearch
globsearch, expand_path, convert_macro_arg, macro_context, call_macro
from xonsh.environ import Env
from tools import skip_if_on_windows
@ -18,6 +21,10 @@ from tools import skip_if_on_windows
HOME_PATH = os.path.expanduser('~')
@pytest.fixture(autouse=True)
def xonsh_execer_autouse(xonsh_execer):
return xonsh_execer
@pytest.mark.parametrize('testfile', reglob('test_.*'))
def test_reglob_tests(testfile):
assert (testfile.startswith('test_'))
@ -113,3 +120,161 @@ f = lambda x: 20
def test_list_of_strs_or_callables(exp, inp):
obs = list_of_strs_or_callables(inp)
assert exp == obs
@pytest.mark.parametrize('s', [
'~',
'~/',
'x=~/place',
'one:~/place',
'one:~/place:~/yo',
'~/one:~/place:~/yo',
])
def test_expand_path(s, home_env):
if os.sep != '/':
s = s.replace('/', os.sep)
if os.pathsep != ':':
s = s.replace(':', os.pathsep)
assert expand_path(s) == s.replace('~', HOME_PATH)
@pytest.mark.parametrize('kind', [str, 's', 'S', 'str', 'string'])
def test_convert_macro_arg_str(kind):
raw_arg = 'value'
arg = convert_macro_arg(raw_arg, kind, None, None)
assert arg is raw_arg
@pytest.mark.parametrize('kind', [AST, 'a', 'Ast'])
def test_convert_macro_arg_ast(kind):
raw_arg = '42'
arg = convert_macro_arg(raw_arg, kind, {}, None)
assert isinstance(arg, AST)
@pytest.mark.parametrize('kind', [types.CodeType, compile, 'c', 'code',
'compile'])
def test_convert_macro_arg_code(kind):
raw_arg = '42'
arg = convert_macro_arg(raw_arg, kind, {}, None)
assert isinstance(arg, types.CodeType)
@pytest.mark.parametrize('kind', [eval, None, 'v', 'eval'])
def test_convert_macro_arg_eval(kind):
# literals
raw_arg = '42'
arg = convert_macro_arg(raw_arg, kind, {}, None)
assert arg == 42
# exprs
raw_arg = 'x + 41'
arg = convert_macro_arg(raw_arg, kind, {}, {'x': 1})
assert arg == 42
@pytest.mark.parametrize('kind', [exec, 'x', 'exec'])
def test_convert_macro_arg_exec(kind):
# at global scope
raw_arg = 'def f(x, y):\n return x + y'
glbs = {}
arg = convert_macro_arg(raw_arg, kind, glbs, None)
assert arg is None
assert 'f' in glbs
assert glbs['f'](1, 41) == 42
# at local scope
raw_arg = 'def g(z):\n return x + z\ny += 42'
glbs = {'x': 40}
locs = {'y': 1}
arg = convert_macro_arg(raw_arg, kind, glbs, locs)
assert arg is None
assert 'g' in locs
assert locs['g'](1) == 41
assert 'y' in locs
assert locs['y'] == 43
@pytest.mark.parametrize('kind', [type, 't', 'type'])
def test_convert_macro_arg_type(kind):
# literals
raw_arg = '42'
arg = convert_macro_arg(raw_arg, kind, {}, None)
assert arg is int
# exprs
raw_arg = 'x + 41'
arg = convert_macro_arg(raw_arg, kind, {}, {'x': 1})
assert arg is int
def test_macro_context():
def f():
pass
with macro_context(f, True, True):
assert f.macro_globals
assert f.macro_locals
assert not hasattr(f, 'macro_globals')
assert not hasattr(f, 'macro_locals')
@pytest.mark.parametrize('arg', ['x', '42', 'x + y'])
def test_call_macro_str(arg):
def f(x : str):
return x
rtn = call_macro(f, [arg], None, None)
assert rtn is arg
@pytest.mark.parametrize('arg', ['x', '42', 'x + y'])
def test_call_macro_ast(arg):
def f(x : AST):
return x
rtn = call_macro(f, [arg], {}, None)
assert isinstance(rtn, AST)
@pytest.mark.parametrize('arg', ['x', '42', 'x + y'])
def test_call_macro_code(arg):
def f(x : compile):
return x
rtn = call_macro(f, [arg], {}, None)
assert isinstance(rtn, types.CodeType)
@pytest.mark.parametrize('arg', ['x', '42', 'x + y'])
def test_call_macro_eval(arg):
def f(x : eval):
return x
rtn = call_macro(f, [arg], {'x': 42, 'y': 0}, None)
assert rtn == 42
@pytest.mark.parametrize('arg', ['if y:\n pass',
'if 42:\n pass',
'if x + y:\n pass'])
def test_call_macro_exec(arg):
def f(x : exec):
return x
rtn = call_macro(f, [arg], {'x': 42, 'y': 0}, None)
assert rtn is None
@pytest.mark.parametrize('arg', ['x', '42', 'x + y'])
def test_call_macro_raw_arg(arg):
def f(x : str):
return x
rtn = call_macro(f, ['*', arg], {'x': 42, 'y': 0}, None)
assert rtn == 42
@pytest.mark.parametrize('arg', ['x', '42', 'x + y'])
def test_call_macro_raw_kwarg(arg):
def f(x : str):
return x
rtn = call_macro(f, ['*', 'x=' + arg], {'x': 42, 'y': 0}, None)
assert rtn == 42
@pytest.mark.parametrize('arg', ['x', '42', 'x + y'])
def test_call_macro_raw_kwargs(arg):
def f(x : str):
return x
rtn = call_macro(f, ['*', '**{"x" :' + arg + '}'], {'x': 42, 'y': 0}, None)
assert rtn == 42


@ -67,4 +67,25 @@ def test_cdpath_expansion(xonsh_builtins):
tuple(os.rmdir(_) for _ in test_dirs if os.path.exists(_))
raise e
def test_cdpath_events(xonsh_builtins, tmpdir):
xonsh_builtins.__xonsh_env__ = Env(CDPATH=PARENT, PWD=os.getcwd())
target = str(tmpdir)
ev = None
@xonsh_builtins.events.on_chdir
def handler(old, new):
nonlocal ev
ev = old, new
old_dir = os.getcwd()
try:
dirstack.cd([target])
except:
raise
else:
assert (old_dir, target) == ev
finally:
# Use os.chdir() here so dirstack.cd() doesn't fire events (or fail again)
os.chdir(old_dir)

117
tests/test_events.py Normal file

@ -0,0 +1,117 @@
"""Event tests"""
import inspect
import pytest
from xonsh.events import EventManager, Event, LoadEvent
@pytest.fixture
def events():
return EventManager()
def test_event_calling(events):
called = False
@events.on_test
def _(spam):
nonlocal called
called = spam
events.on_test.fire("eggs")
assert called == "eggs"
def test_event_returns(events):
called = 0
@events.on_test
def on_test():
nonlocal called
called += 1
return 1
@events.on_test
def second():
nonlocal called
called += 1
return 2
vals = events.on_test.fire()
assert called == 2
assert set(vals) == {1, 2}
def test_validator(events):
called = None
@events.on_test
def first(n):
nonlocal called
called += 1
return False
@first.validator
def v(n):
return n == 'spam'
@events.on_test
def second(n):
nonlocal called
called += 1
return False
called = 0
events.on_test.fire('egg')
assert called == 1
called = 0
events.on_test.fire('spam')
assert called == 2
def test_eventdoc(events):
docstring = "Test event"
events.doc('on_test', docstring)
assert inspect.getdoc(events.on_test) == docstring
def test_transmogrify(events):
docstring = "Test event"
events.doc('on_test', docstring)
@events.on_test
def func():
pass
assert isinstance(events.on_test, Event)
assert len(events.on_test) == 1
assert inspect.getdoc(events.on_test) == docstring
events.transmogrify('on_test', LoadEvent)
assert isinstance(events.on_test, LoadEvent)
assert len(events.on_test) == 1
assert inspect.getdoc(events.on_test) == docstring
def test_transmogrify_by_string(events):
docstring = "Test event"
events.doc('on_test', docstring)
@events.on_test
def func():
pass
assert isinstance(events.on_test, Event)
assert len(events.on_test) == 1
assert inspect.getdoc(events.on_test) == docstring
events.transmogrify('on_test', 'LoadEvent')
assert isinstance(events.on_test, LoadEvent)
assert len(events.on_test) == 1
assert inspect.getdoc(events.on_test) == docstring
def test_typos(xonsh_builtins):
for name, ev in vars(xonsh_builtins.events).items():
if 'pytest' in name:
continue
assert inspect.getdoc(ev)


@ -68,24 +68,24 @@ def test_cmd_field(hist, xonsh_builtins):
assert None == hist.outs[-1]
cmds = ['ls', 'cat hello kitty', 'abc', 'def', 'touch me', 'grep from me']
CMDS = ['ls', 'cat hello kitty', 'abc', 'def', 'touch me', 'grep from me']
@pytest.mark.parametrize('inp, commands, offset', [
('', cmds, (0, 1)),
('-r', list(reversed(cmds)), (len(cmds)- 1, -1)),
('0', cmds[0:1], (0, 1)),
('1', cmds[1:2], (1, 1)),
('-2', cmds[-2:-1], (len(cmds) -2 , 1)),
('1:3', cmds[1:3], (1, 1)),
('1::2', cmds[1::2], (1, 2)),
('-4:-2', cmds[-4:-2], (len(cmds) - 4, 1))
('', CMDS, (0, 1)),
('-r', list(reversed(CMDS)), (len(CMDS)- 1, -1)),
('0', CMDS[0:1], (0, 1)),
('1', CMDS[1:2], (1, 1)),
('-2', CMDS[-2:-1], (len(CMDS) -2 , 1)),
('1:3', CMDS[1:3], (1, 1)),
('1::2', CMDS[1::2], (1, 2)),
('-4:-2', CMDS[-4:-2], (len(CMDS) - 4, 1))
])
def test_show_cmd_numerate(inp, commands, offset, hist, xonsh_builtins, capsys):
"""Verify that CLI history commands work."""
base_idx, step = offset
xonsh_builtins.__xonsh_history__ = hist
xonsh_builtins.__xonsh_env__['HISTCONTROL'] = set()
for ts,cmd in enumerate(cmds): # populate the shell history
for ts,cmd in enumerate(CMDS): # populate the shell history
hist.append({'inp': cmd, 'rtn': 0, 'ts':(ts+1, ts+1.5)})
exp = ('{}: {}'.format(base_idx + idx * step, cmd)
@ -185,3 +185,19 @@ def test_parser_show(args, exp):
'timestamp': False}
ns = _hist_parse_args(shlex.split(args))
assert ns.__dict__ == exp_ns
@pytest.mark.parametrize('index, exp', [
(-1, 'grep from me'),
('hello', 'cat hello kitty'),
((-1, -1), 'me'),
(('hello', 0), 'cat'),
((-1, slice(0,2)), 'grep from'),
(('kitty', slice(1,3)), 'hello kitty')
])
def test_history_getitem(index, exp, hist, xonsh_builtins):
xonsh_builtins.__xonsh_env__['HISTCONTROL'] = set()
for ts,cmd in enumerate(CMDS): # populate the shell history
hist.append({'inp': cmd, 'rtn': 0, 'ts':(ts+1, ts+1.5)})
assert hist[index] == exp


@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
"""Testing xonsh import hooks"""
import os
import pytest
from xonsh import imphooks
@ -38,3 +39,18 @@ def test_relative_import():
def test_sub_import():
from xpack.sub import sample
assert ('hello mom jawaka\n' == sample.x)
TEST_DIR = os.path.dirname(__file__)
def test_module_dunder_file_attribute():
import sample
exp = os.path.join(TEST_DIR, 'sample.xsh')
assert os.path.abspath(sample.__file__) == exp
def test_module_dunder_file_attribute_sub():
from xpack.sub import sample
exp = os.path.join(TEST_DIR, 'xpack', 'sub', 'sample.xsh')
assert os.path.abspath(sample.__file__) == exp


@ -4,10 +4,11 @@ import os
import sys
import ast
import builtins
import itertools
import pytest
from xonsh.ast import pdump
from xonsh.ast import pdump, AST
from xonsh.parser import Parser
from tools import VER_FULL, skip_if_py34, nodes_equal
@ -42,16 +43,17 @@ def check_stmts(inp, run=True, mode='exec'):
inp += '\n'
check_ast(inp, run=run, mode=mode)
def check_xonsh_ast(xenv, inp, run=True, mode='eval'):
def check_xonsh_ast(xenv, inp, run=True, mode='eval', debug_level=0,
return_obs=False):
__tracebackhide__ = True
builtins.__xonsh_env__ = xenv
obs = PARSER.parse(inp)
obs = PARSER.parse(inp, debug_level=debug_level)
if obs is None:
return # comment only
bytecode = compile(obs, '<test-xonsh-ast>', mode)
if run:
exec(bytecode)
return True
return obs if return_obs else True
def check_xonsh(xenv, inp, run=True, mode='exec'):
__tracebackhide__ = True
@ -437,6 +439,16 @@ def test_tuple_three():
def test_tuple_three_comma():
check_ast('(1, 42, 65,)')
def test_bare_tuple_of_tuples():
check_ast('(),')
check_ast('((),),(1,)')
check_ast('(),(),')
check_ast('[],')
check_ast('[],[]')
check_ast('[],()')
check_ast('(),[],')
check_ast('((),[()],)')
def test_set_one():
check_ast('{42}')
@ -1631,6 +1643,9 @@ def test_uncaptured_sub():
def test_hiddenobj_sub():
check_xonsh_ast({}, '![ls]', False)
def test_slash_envarv_echo():
check_xonsh_ast({}, '![echo $HOME/place]', False)
def test_bang_two_cmds_one_pipe():
check_xonsh_ast({}, '!(ls | grep wakka)', False)
@ -1782,3 +1797,289 @@ def test_redirect_error_to_output(r, o):
assert check_xonsh_ast({}, '$[echo "test" {} {}> test.txt]'.format(r, o), False)
assert check_xonsh_ast({}, '$[< input.txt echo "test" {} {}> test.txt]'.format(r, o), False)
assert check_xonsh_ast({}, '$[echo "test" {} {}> test.txt < input.txt]'.format(r, o), False)
def test_macro_call_empty():
assert check_xonsh_ast({}, 'f!()', False)
MACRO_ARGS = [
'x', 'True', 'None', 'import os', 'x=10', '"oh no, mom"', '...', ' ... ',
'if True:\n pass', '{x: y}', '{x: y, 42: 5}', '{1, 2, 3,}', '(x,y)',
'(x, y)', '((x, y), z)', 'g()', 'range(10)', 'range(1, 10, 2)', '()', '{}',
'[]', '[1, 2]', '@(x)', '!(ls -l)', '![ls -l]', '$(ls -l)', '${x + y}',
'$[ls -l]', '@$(which xonsh)',
]
@pytest.mark.parametrize('s', MACRO_ARGS)
def test_macro_call_one_arg(s):
f = 'f!({})'.format(s)
tree = check_xonsh_ast({}, f, False, return_obs=True)
assert isinstance(tree, AST)
args = tree.body.args[1].elts
assert len(args) == 1
assert args[0].s == s.strip()
@pytest.mark.parametrize('s,t', itertools.product(MACRO_ARGS[::2],
MACRO_ARGS[1::2]))
def test_macro_call_two_args(s, t):
f = 'f!({}, {})'.format(s, t)
tree = check_xonsh_ast({}, f, False, return_obs=True)
assert isinstance(tree, AST)
args = tree.body.args[1].elts
assert len(args) == 2
assert args[0].s == s.strip()
assert args[1].s == t.strip()
@pytest.mark.parametrize('s,t,u', itertools.product(MACRO_ARGS[::3],
MACRO_ARGS[1::3],
MACRO_ARGS[2::3]))
def test_macro_call_three_args(s, t, u):
f = 'f!({}, {}, {})'.format(s, t, u)
tree = check_xonsh_ast({}, f, False, return_obs=True)
assert isinstance(tree, AST)
args = tree.body.args[1].elts
assert len(args) == 3
assert args[0].s == s.strip()
assert args[1].s == t.strip()
assert args[2].s == u.strip()
@pytest.mark.parametrize('s', MACRO_ARGS)
def test_macro_call_one_trailing(s):
f = 'f!({0},)'.format(s)
tree = check_xonsh_ast({}, f, False, return_obs=True)
assert isinstance(tree, AST)
args = tree.body.args[1].elts
assert len(args) == 1
assert args[0].s == s.strip()
@pytest.mark.parametrize('s', MACRO_ARGS)
def test_macro_call_one_trailing_space(s):
f = 'f!( {0}, )'.format(s)
tree = check_xonsh_ast({}, f, False, return_obs=True)
assert isinstance(tree, AST)
args = tree.body.args[1].elts
assert len(args) == 1
assert args[0].s == s.strip()
SUBPROC_MACRO_OC = [
('!(', ')'),
('$(', ')'),
('![', ']'),
('$[', ']'),
]
@pytest.mark.parametrize('opener, closer', SUBPROC_MACRO_OC)
@pytest.mark.parametrize('body', ['echo!', 'echo !', 'echo ! '])
def test_empty_subprocbang(opener, closer, body):
tree = check_xonsh_ast({}, opener + body + closer, False, return_obs=True)
assert isinstance(tree, AST)
cmd = tree.body.args[0].elts
assert len(cmd) == 2
assert cmd[1].s == ''
@pytest.mark.parametrize('opener, closer', SUBPROC_MACRO_OC)
@pytest.mark.parametrize('body', ['echo!x', 'echo !x', 'echo !x', 'echo ! x'])
def test_single_subprocbang(opener, closer, body):
tree = check_xonsh_ast({}, opener + body + closer, False, return_obs=True)
assert isinstance(tree, AST)
cmd = tree.body.args[0].elts
assert len(cmd) == 2
assert cmd[1].s == 'x'
@pytest.mark.parametrize('opener, closer', SUBPROC_MACRO_OC)
@pytest.mark.parametrize('body', ['echo -n!x', 'echo -n!x', 'echo -n !x',
'echo -n ! x'])
def test_arg_single_subprocbang(opener, closer, body):
tree = check_xonsh_ast({}, opener + body + closer, False, return_obs=True)
assert isinstance(tree, AST)
cmd = tree.body.args[0].elts
assert len(cmd) == 3
assert cmd[2].s == 'x'
@pytest.mark.parametrize('opener, closer', SUBPROC_MACRO_OC)
@pytest.mark.parametrize('body', [
'echo!x + y',
'echo !x + y',
'echo !x + y',
'echo ! x + y',
'timeit! bang! and more',
'timeit! recurse() and more',
'timeit! recurse[] and more',
'timeit! recurse!() and more',
'timeit! recurse![] and more',
'timeit! recurse$() and more',
'timeit! recurse$[] and more',
'timeit! recurse!() and more',
'timeit!!!!',
'timeit! (!)',
'timeit! [!]',
'timeit!!(ls)',
'timeit!"!)"',
])
def test_many_subprocbang(opener, closer, body):
tree = check_xonsh_ast({}, opener + body + closer, False, return_obs=True,
debug_level=100,
)
assert isinstance(tree, AST)
cmd = tree.body.args[0].elts
assert len(cmd) == 2
assert cmd[1].s == body.partition('!')[-1].strip()
# test invalid expressions
def test_syntax_error_del_literal():
with pytest.raises(SyntaxError):
PARSER.parse('del 7')
def test_syntax_error_del_constant():
with pytest.raises(SyntaxError):
PARSER.parse('del True')
def test_syntax_error_del_emptytuple():
with pytest.raises(SyntaxError):
PARSER.parse('del ()')
def test_syntax_error_del_call():
with pytest.raises(SyntaxError):
PARSER.parse('del foo()')
def test_syntax_error_del_lambda():
with pytest.raises(SyntaxError):
PARSER.parse('del lambda x: "yay"')
def test_syntax_error_del_ifexp():
with pytest.raises(SyntaxError):
PARSER.parse('del x if y else z')
@pytest.mark.parametrize('exp', ['[i for i in foo]',
'{i for i in foo}',
'(i for i in foo)',
'{k:v for k,v in d.items()}'])
def test_syntax_error_del_comps(exp):
with pytest.raises(SyntaxError):
PARSER.parse('del {}'.format(exp))
@pytest.mark.parametrize('exp', ['x + y',
'x and y',
'-x'])
def test_syntax_error_del_ops(exp):
with pytest.raises(SyntaxError):
PARSER.parse('del {}'.format(exp))
@pytest.mark.parametrize('exp', ['x > y',
'x > y == z'])
def test_syntax_error_del_cmp(exp):
with pytest.raises(SyntaxError):
PARSER.parse('del {}'.format(exp))
def test_syntax_error_lonely_del():
with pytest.raises(SyntaxError):
PARSER.parse('del')
def test_syntax_error_assign_literal():
with pytest.raises(SyntaxError):
PARSER.parse('7 = x')
def test_syntax_error_assign_constant():
with pytest.raises(SyntaxError):
PARSER.parse('True = 8')
def test_syntax_error_assign_emptytuple():
with pytest.raises(SyntaxError):
PARSER.parse('() = x')
def test_syntax_error_assign_call():
with pytest.raises(SyntaxError):
PARSER.parse('foo() = x')
def test_syntax_error_assign_lambda():
with pytest.raises(SyntaxError):
PARSER.parse('lambda x: "yay" = y')
def test_syntax_error_assign_ifexp():
with pytest.raises(SyntaxError):
PARSER.parse('x if y else z = 8')
@pytest.mark.parametrize('exp', ['[i for i in foo]',
'{i for i in foo}',
'(i for i in foo)',
'{k:v for k,v in d.items()}'])
def test_syntax_error_assign_comps(exp):
with pytest.raises(SyntaxError):
PARSER.parse('{} = z'.format(exp))
@pytest.mark.parametrize('exp', ['x + y',
'x and y',
'-x'])
def test_syntax_error_assign_ops(exp):
with pytest.raises(SyntaxError):
PARSER.parse('{} = z'.format(exp))
@pytest.mark.parametrize('exp', ['x > y',
'x > y == z'])
def test_syntax_error_assign_cmp(exp):
with pytest.raises(SyntaxError):
PARSER.parse('{} = a'.format(exp))
def test_syntax_error_augassign_literal():
with pytest.raises(SyntaxError):
PARSER.parse('7 += x')
def test_syntax_error_augassign_constant():
with pytest.raises(SyntaxError):
PARSER.parse('True += 8')
def test_syntax_error_augassign_emptytuple():
with pytest.raises(SyntaxError):
PARSER.parse('() += x')
def test_syntax_error_augassign_call():
with pytest.raises(SyntaxError):
PARSER.parse('foo() += x')
def test_syntax_error_augassign_lambda():
with pytest.raises(SyntaxError):
PARSER.parse('lambda x: "yay" += y')
def test_syntax_error_augassign_ifexp():
with pytest.raises(SyntaxError):
PARSER.parse('x if y else z += 8')
@pytest.mark.parametrize('exp', ['[i for i in foo]',
'{i for i in foo}',
'(i for i in foo)',
'{k:v for k,v in d.items()}'])
def test_syntax_error_augassign_comps(exp):
with pytest.raises(SyntaxError):
PARSER.parse('{} += z'.format(exp))
@pytest.mark.parametrize('exp', ['x + y',
'x and y',
'-x'])
def test_syntax_error_augassign_ops(exp):
with pytest.raises(SyntaxError):
PARSER.parse('{} += z'.format(exp))
@pytest.mark.parametrize('exp', ['x > y',
'x > y +=+= z'])
def test_syntax_error_augassign_cmp(exp):
with pytest.raises(SyntaxError):
PARSER.parse('{} += a'.format(exp))


@ -24,7 +24,7 @@ from xonsh.tools import (
to_dynamic_cwd_tuple, to_logfile_opt, pathsep_to_set, set_to_pathsep,
is_string_seq, pathsep_to_seq, seq_to_pathsep, is_nonstring_seq_of_strings,
pathsep_to_upper_seq, seq_to_upper_pathsep, expandvars, is_int_as_str, is_slice_as_str,
ensure_timestamp,
ensure_timestamp, get_portions
)
from xonsh.commands_cache import CommandsCache
from xonsh.built_ins import expand_path
@ -74,6 +74,20 @@ def test_subproc_toks_git_nl():
assert (exp == obs)
def test_bash_macro():
s = 'bash -c ! export var=42; echo $var'
exp = '![{0}]\n'.format(s)
obs = subproc_toks(s + '\n', lexer=LEXER, returnline=True)
assert (exp == obs)
def test_python_macro():
s = 'python -c ! import os; print(os.path.abspath("/"))'
exp = '![{0}]\n'.format(s)
obs = subproc_toks(s + '\n', lexer=LEXER, returnline=True)
assert (exp == obs)
def test_subproc_toks_indent_ls():
s = 'ls -l'
exp = INDENT + '![{0}]'.format(s)
@ -328,6 +342,8 @@ def test_subexpr_from_unbalanced_parens(inp, exp):
('(ls) && echo a', 1, 4),
('not ls && echo a', 0, 8),
('not (ls) && echo a', 0, 8),
('bash -c ! export var=42; echo $var', 0, 35),
('python -c ! import os; print(os.path.abspath("/"))', 0, 51),
])
def test_find_next_break(line, mincol, exp):
obs = find_next_break(line, mincol=mincol, lexer=LEXER)
@ -816,6 +832,7 @@ def test_bool_or_int_to_str(inp, exp):
@pytest.mark.parametrize('inp, exp', [
(42, slice(42, 43)),
(0, slice(0, 1)),
(None, slice(None, None, None)),
(slice(1,2), slice(1,2)),
('-1', slice(-1, None, None)),
@ -835,6 +852,21 @@ def test_ensure_slice(inp, exp):
assert exp == obs
@pytest.mark.parametrize('inp, exp', [
((range(50), slice(25, 40)),
list(i for i in range(25,40))),
(([1,2,3,4,5,6,7,8,9,10], [slice(1,4), slice(6, None)]),
[2, 3, 4, 7, 8, 9, 10]),
(([1,2,3,4,5], [slice(-2, None), slice(-5, -3)]),
[4, 5, 1, 2]),
])
def test_get_portions(inp, exp):
obs = get_portions(*inp)
assert list(obs) == exp
@pytest.mark.parametrize('inp', [
'42.3',
'3:asd5:1',


@ -14,9 +14,23 @@ def test_crud(xonsh_builtins, tmpdir):
Creates a virtual environment, gets it, enumerates it, and then deletes it.
"""
xonsh_builtins.__xonsh_env__['VIRTUALENV_HOME'] = str(tmpdir)
last_event = None
@xonsh_builtins.events.vox_on_create
def create(name):
nonlocal last_event
last_event = 'create', name
@xonsh_builtins.events.vox_on_delete
def delete(name):
nonlocal last_event
last_event = 'delete', name
vox = Vox()
vox.create('spam')
assert stat.S_ISDIR(tmpdir.join('spam').stat().mode)
assert last_event == ('create', 'spam')
env, bin = vox['spam']
assert env == str(tmpdir.join('spam'))
@ -27,6 +41,7 @@ def test_crud(xonsh_builtins, tmpdir):
del vox['spam']
assert not tmpdir.join('spam').check()
assert last_event == ('delete', 'spam')
@skip_if_on_conda
@ -37,12 +52,27 @@ def test_activate(xonsh_builtins, tmpdir):
xonsh_builtins.__xonsh_env__['VIRTUALENV_HOME'] = str(tmpdir)
# I consider the case that the user doesn't have a PATH set to be unreasonable
xonsh_builtins.__xonsh_env__.setdefault('PATH', [])
last_event = None
@xonsh_builtins.events.vox_on_activate
def activate(name):
nonlocal last_event
last_event = 'activate', name
@xonsh_builtins.events.vox_on_deactivate
def deactivate(name):
nonlocal last_event
last_event = 'deactivate', name
vox = Vox()
vox.create('spam')
vox.activate('spam')
assert xonsh_builtins.__xonsh_env__['VIRTUAL_ENV'] == vox['spam'].env
assert last_event == ('activate', 'spam')
vox.deactivate()
assert 'VIRTUAL_ENV' not in xonsh_builtins.__xonsh_env__
assert last_event == ('deactivate', 'spam')
@skip_if_on_conda

18
tests/test_xonsh.xsh Normal file

@ -0,0 +1,18 @@
def test_simple():
assert 1 + 1 == 2
def test_environment():
$USER = 'snail'
x = 'USER'
assert x in ${...}
assert ${'U' + 'SER'} == 'snail'
def test_xonsh_party():
x = 'xonsh'
y = 'party'
out = $(echo @(x + ' ' + y))
assert out == 'xonsh party\n'


@ -1,7 +1,7 @@
__version__ = '0.4.5'
# amalgamate exclude jupyter_kernel parser_table parser_test_table pyghooks
# amalgamate exclude winutils wizard
# amalgamate exclude winutils wizard pytest_plugin
import os as _os
if _os.getenv('XONSH_DEBUG', ''):
pass
@ -43,8 +43,8 @@ else:
_sys.modules['xonsh.contexts'] = __amalgam__
diff_history = __amalgam__
_sys.modules['xonsh.diff_history'] = __amalgam__
dirstack = __amalgam__
_sys.modules['xonsh.dirstack'] = __amalgam__
events = __amalgam__
_sys.modules['xonsh.events'] = __amalgam__
foreign_shells = __amalgam__
_sys.modules['xonsh.foreign_shells'] = __amalgam__
lexer = __amalgam__
@ -55,14 +55,16 @@ else:
_sys.modules['xonsh.proc'] = __amalgam__
xontribs = __amalgam__
_sys.modules['xonsh.xontribs'] = __amalgam__
commands_cache = __amalgam__
_sys.modules['xonsh.commands_cache'] = __amalgam__
environ = __amalgam__
_sys.modules['xonsh.environ'] = __amalgam__
dirstack = __amalgam__
_sys.modules['xonsh.dirstack'] = __amalgam__
history = __amalgam__
_sys.modules['xonsh.history'] = __amalgam__
inspectors = __amalgam__
_sys.modules['xonsh.inspectors'] = __amalgam__
commands_cache = __amalgam__
_sys.modules['xonsh.commands_cache'] = __amalgam__
environ = __amalgam__
_sys.modules['xonsh.environ'] = __amalgam__
base_shell = __amalgam__
_sys.modules['xonsh.base_shell'] = __amalgam__
replay = __amalgam__

View file

@ -6,7 +6,7 @@ import shlex
import inspect
import argparse
import builtins
import collections.abc as abc
import collections.abc as cabc
from xonsh.lazyasd import lazyobject
from xonsh.dirstack import cd, pushd, popd, dirs, _get_cwd
@ -15,7 +15,7 @@ from xonsh.foreign_shells import foreign_shell_data
from xonsh.jobs import jobs, fg, bg, clean_jobs
from xonsh.history import history_main
from xonsh.platform import (ON_ANACONDA, ON_DARWIN, ON_WINDOWS, ON_FREEBSD,
scandir)
ON_NETBSD, scandir)
from xonsh.proc import foreground
from xonsh.replay import replay_main
from xonsh.timings import timeit_alias
@ -27,7 +27,7 @@ from xonsh.xoreutils import _which
import xonsh.completers._aliases as xca
class Aliases(abc.MutableMapping):
class Aliases(cabc.MutableMapping):
"""Represents a location to hold and look up aliases."""
def __init__(self, *args, **kwargs):
@ -45,7 +45,7 @@ class Aliases(abc.MutableMapping):
val = self._raw.get(key)
if val is None:
return default
elif isinstance(val, abc.Iterable) or callable(val):
elif isinstance(val, cabc.Iterable) or callable(val):
return self.eval_alias(val, seen_tokens={key})
else:
msg = 'alias of {!r} has an inappropriate type: {!r}'
@ -94,7 +94,7 @@ class Aliases(abc.MutableMapping):
"""
word = line.split(' ', 1)[0]
if word in builtins.aliases and isinstance(self.get(word),
abc.Sequence):
cabc.Sequence):
word_idx = line.find(word)
expansion = ' '.join(self.get(word))
line = line[:word_idx] + expansion + line[word_idx+len(word):]
@ -593,6 +593,10 @@ def make_default_aliases():
default_aliases['egrep'] = ['egrep', '--color=auto']
default_aliases['fgrep'] = ['fgrep', '--color=auto']
default_aliases['ls'] = ['ls', '-G']
elif ON_NETBSD:
default_aliases['grep'] = ['grep', '--color=auto']
default_aliases['egrep'] = ['egrep', '--color=auto']
default_aliases['fgrep'] = ['fgrep', '--color=auto']
else:
default_aliases['grep'] = ['grep', '--color=auto']
default_aliases['egrep'] = ['egrep', '--color=auto']

View file

@ -226,12 +226,13 @@ class CtxAwareTransformer(NodeTransformer):
def is_in_scope(self, node):
"""Determines whether or not the current node is in scope."""
lname = leftmostname(node)
if lname is None:
return node
names = gather_names(node)
if not names:
return True
inscope = False
for ctx in reversed(self.contexts):
if lname in ctx:
names -= ctx
if not names:
inscope = True
break
return inscope

View file

@ -8,16 +8,19 @@ import os
import re
import sys
import time
import types
import shlex
import signal
import atexit
import inspect
import tempfile
import builtins
import itertools
import subprocess
import contextlib
import collections.abc as abc
import collections.abc as cabc
from xonsh.ast import AST
from xonsh.lazyasd import LazyObject, lazyobject
from xonsh.history import History
from xonsh.inspectors import Inspector
@ -35,6 +38,7 @@ from xonsh.tools import (
XonshCalledProcessError, XonshBlockError
)
from xonsh.commands_cache import CommandsCache
from xonsh.events import events
import xonsh.completers.init
@ -99,7 +103,14 @@ def expand_path(s):
"""Takes a string path and expands ~ to home and environment vars."""
if builtins.__xonsh_env__.get('EXPAND_ENV_VARS'):
s = expandvars(s)
return os.path.expanduser(s)
# expand ~ according to Bash unquoted rules "Each variable assignment is
# checked for unquoted tilde-prefixes immediately following a ':' or the
# first '='". See the following for more details.
# https://www.gnu.org/software/bash/manual/html_node/Tilde-Expansion.html
pre, char, post = s.partition('=')
s = pre + char + os.path.expanduser(post) if char else s
s = os.pathsep.join(map(os.path.expanduser, s.split(os.pathsep)))
return s
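# Illustrative sketch of the expansion rules above (assumes a home directory of
# /home/me and a POSIX os.pathsep of ':'; not authoritative):
#   expand_path('~/code')      -> '/home/me/code'
#   expand_path('PATH=~/bin')  -> 'PATH=/home/me/bin'
#   expand_path('~/a:~/b')     -> '/home/me/a:/home/me/b'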
def reglob(path, parts=None, i=None):
@ -661,7 +672,7 @@ def ensure_list_of_strs(x):
"""Ensures that x is a list of strings."""
if isinstance(x, str):
rtn = [x]
elif isinstance(x, abc.Sequence):
elif isinstance(x, cabc.Sequence):
rtn = [i if isinstance(i, str) else str(i) for i in x]
else:
rtn = [str(x)]
@ -672,13 +683,204 @@ def list_of_strs_or_callables(x):
"""Ensures that x is a list of strings or functions"""
if isinstance(x, str) or callable(x):
rtn = [x]
elif isinstance(x, abc.Sequence):
elif isinstance(x, cabc.Sequence):
rtn = [i if isinstance(i, str) or callable(i) else str(i) for i in x]
else:
rtn = [str(x)]
return rtn
@lazyobject
def MACRO_FLAG_KINDS():
return {
's': str,
'str': str,
'string': str,
'a': AST,
'ast': AST,
'c': types.CodeType,
'code': types.CodeType,
'compile': types.CodeType,
'v': eval,
'eval': eval,
'x': exec,
'exec': exec,
't': type,
'type': type,
}
def _convert_kind_flag(x):
"""Puts a kind flag (string) a canonical form."""
x = x.lower()
kind = MACRO_FLAG_KINDS.get(x, None)
if kind is None:
raise TypeError('{0!r} not a recognized macro type.'.format(x))
return kind
def convert_macro_arg(raw_arg, kind, glbs, locs, *, name='<arg>',
macroname='<macro>'):
"""Converts a string macro argument based on the requested kind.
Parameters
----------
raw_arg : str
The string representation of the macro argument.
kind : object
A flag or type representing how to convert the argument.
glbs : Mapping
The globals from the call site.
locs : Mapping or None
The locals from the call site.
name : str, optional
The macro argument name.
macroname : str, optional
The name of the macro itself.
Returns
-------
The converted argument.
"""
# munge kind and mode to start
mode = None
if isinstance(kind, cabc.Sequence) and not isinstance(kind, str):
# have (kind, mode) tuple
kind, mode = kind
if isinstance(kind, str):
kind = _convert_kind_flag(kind)
if kind is str:
return raw_arg  # short circuit since there is nothing else to do
# select from kind and convert
execer = builtins.__xonsh_execer__
filename = macroname + '(' + name + ')'
if kind is AST:
ctx = set(dir(builtins)) | set(glbs.keys())
if locs is not None:
ctx |= set(locs.keys())
mode = mode or 'eval'
arg = execer.parse(raw_arg, ctx, mode=mode, filename=filename)
elif kind is types.CodeType or kind is compile:
mode = mode or 'eval'
arg = execer.compile(raw_arg, mode=mode, glbs=glbs, locs=locs,
filename=filename)
elif kind is eval or kind is None:
arg = execer.eval(raw_arg, glbs=glbs, locs=locs, filename=filename)
elif kind is exec:
mode = mode or 'exec'
if not raw_arg.endswith('\n'):
raw_arg += '\n'
arg = execer.exec(raw_arg, mode=mode, glbs=glbs, locs=locs,
filename=filename)
elif kind is type:
arg = type(execer.eval(raw_arg, glbs=glbs, locs=locs,
filename=filename))
else:
msg = ('kind={0!r} and mode={1!r} was not recognized for macro '
'argument {2!r}')
raise TypeError(msg.format(kind, mode, name))
return arg
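# A rough sketch of how the kind flags above behave (assumes the xonsh execer
# has been loaded; glbs/locs are the caller's globals/locals):
#   convert_macro_arg('x + 1', 's', glbs, locs)  -> 'x + 1' (raw string, untouched)
#   convert_macro_arg('x + 1', 'a', glbs, locs)  -> a parsed AST node
#   convert_macro_arg('x + 1', 'c', glbs, locs)  -> a compiled code object
#   convert_macro_arg('x + 1', 'v', glbs, locs)  -> the evaluated value of x + 1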
@contextlib.contextmanager
def macro_context(f, glbs, locs):
"""Attaches macro globals and locals temporarily to function as a
context manager.
Parameters
----------
f : callable object
The function that is called as f(*args).
glbs : Mapping
The globals from the call site.
locs : Mapping or None
The locals from the call site.
"""
prev_glbs = getattr(f, 'macro_globals', None)
prev_locs = getattr(f, 'macro_locals', None)
f.macro_globals = glbs
f.macro_locals = locs
yield
if prev_glbs is None:
del f.macro_globals
else:
f.macro_globals = prev_glbs
if prev_locs is None:
del f.macro_locals
else:
f.macro_locals = prev_locs
def call_macro(f, raw_args, glbs, locs):
"""Calls a function as a macro, returning its result.
Parameters
----------
f : callable object
The function that is called as f(*args).
raw_args : tuple of str
The string representations of the arguments that were passed into the
macro. These strings will be parsed, compiled, evaluated, or left as
strings depending on the annotations of f.
glbs : Mapping
The globals from the call site.
locs : Mapping or None
The locals from the call site.
"""
sig = inspect.signature(f)
empty = inspect.Parameter.empty
macroname = f.__name__
i = 0
args = []
for (key, param), raw_arg in zip(sig.parameters.items(), raw_args):
i += 1
if raw_arg == '*':
break
kind = param.annotation
if kind is empty or kind is None:
kind = eval
arg = convert_macro_arg(raw_arg, kind, glbs, locs, name=key,
macroname=macroname)
args.append(arg)
reg_args, kwargs = _eval_regular_args(raw_args[i:], glbs, locs)
args += reg_args
with macro_context(f, glbs, locs):
rtn = f(*args, **kwargs)
return rtn
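# Sketch of a macro consumer (hypothetical function f; the f!(...) call syntax
# is wired to __xonsh_call_macro__ by the parser changes later in this merge):
#
#     def f(x: 'eval', tree: 'ast', raw: 'str'):
#         ...
#
# A call like f!(2 + 2, y > 3, some text) would pass 4, the parsed AST of
# "y > 3", and the raw string 'some text', respectively.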
@lazyobject
def KWARG_RE():
return re.compile('([A-Za-z_]\w*=|\*\*)')
def _starts_as_arg(s):
"""Tests if a string starts as a non-kwarg string would."""
return KWARG_RE.match(s) is None
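# For instance, _starts_as_arg('x + 1') -> True, while _starts_as_arg('color="red"')
# and _starts_as_arg('**opts') -> False, since they begin the keyword-argument section.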
def _eval_regular_args(raw_args, glbs, locs):
if not raw_args:
return [], {}
arglist = list(itertools.takewhile(_starts_as_arg, raw_args))
kwarglist = raw_args[len(arglist):]
execer = builtins.__xonsh_execer__
if not arglist:
args = arglist
kwargstr = 'dict({})'.format(', '.join(kwarglist))
kwargs = execer.eval(kwargstr, glbs=glbs, locs=locs)
elif not kwarglist:
argstr = '({},)'.format(', '.join(arglist))
args = execer.eval(argstr, glbs=glbs, locs=locs)
kwargs = {}
else:
argstr = '({},)'.format(', '.join(arglist))
kwargstr = 'dict({})'.format(', '.join(kwarglist))
both = '({}, {})'.format(argstr, kwargstr)
args, kwargs = execer.eval(both, glbs=glbs, locs=locs)
return args, kwargs
def load_builtins(execer=None, config=None, login=False, ctx=None):
"""Loads the xonsh builtins into the Python builtins. Sets the
BUILTINS_LOADED variable to True.
@ -714,6 +916,7 @@ def load_builtins(execer=None, config=None, login=False, ctx=None):
builtins.__xonsh_ensure_list_of_strs__ = ensure_list_of_strs
builtins.__xonsh_list_of_strs_or_callables__ = list_of_strs_or_callables
builtins.__xonsh_completers__ = xonsh.completers.init.default_completers()
builtins.__xonsh_call_macro__ = call_macro
# public built-ins
builtins.XonshError = XonshError
builtins.XonshBlockError = XonshBlockError
@ -721,6 +924,7 @@ def load_builtins(execer=None, config=None, login=False, ctx=None):
builtins.evalx = None if execer is None else execer.eval
builtins.execx = None if execer is None else execer.exec
builtins.compilex = None if execer is None else execer.compile
builtins.events = events
# sneak the path search functions into the aliases
# Need this inline/lazy import here since we use locate_binary that relies on __xonsh_env__ in default aliases
@ -779,6 +983,7 @@ def unload_builtins():
'__xonsh_execer__',
'__xonsh_commands_cache__',
'__xonsh_completers__',
'__xonsh_call_macro__',
'XonshError',
'XonshBlockError',
'XonshCalledProcessError',

View file

@ -1,14 +1,14 @@
# -*- coding: utf-8 -*-
import os
import builtins
import collections.abc as abc
import collections.abc as cabc
from xonsh.dirstack import _get_cwd
from xonsh.platform import ON_WINDOWS
from xonsh.tools import executables_in
class CommandsCache(abc.Mapping):
class CommandsCache(cabc.Mapping):
"""A lazy cache representing the commands available on the file system.
The keys are the command names and the values a tuple of (loc, has_alias)
where loc is either a str pointing to the executable on the file system or

View file

@ -1,16 +1,11 @@
# -*- coding: utf-8 -*-
"""A (tab-)completer for xonsh."""
import builtins
import collections.abc as abc
import xonsh.completers.bash as compbash
import collections.abc as cabc
class Completer(object):
"""This provides a list of optional completions for the xonsh shell."""
def __init__(self):
compbash.update_bash_completion()
def complete(self, prefix, line, begidx, endidx, ctx=None):
"""Complete the string, given a possible execution context.
@ -41,7 +36,7 @@ class Completer(object):
out = func(prefix, line, begidx, endidx, ctx)
except StopIteration:
return set(), len(prefix)
if isinstance(out, abc.Sequence):
if isinstance(out, cabc.Sequence):
res, lprefix = out
else:
res = out

View file

@ -1,97 +1,42 @@
import os
import re
import shlex
import pickle
import hashlib
import pathlib
import builtins
import subprocess
import xonsh.lazyasd as xl
import xonsh.platform as xp
from xonsh.completers.path import _quote_paths
RE_DASHF = xl.LazyObject(lambda: re.compile(r'-F\s+(\w+)'),
globals(), 'RE_DASHF')
INITED = False
BASH_COMPLETE_HASH = None
BASH_COMPLETE_FUNCS = {}
BASH_COMPLETE_FILES = {}
CACHED_HASH = None
CACHED_FUNCS = None
CACHED_FILES = None
BASH_COMPLETE_SCRIPT = """source "{filename}"
BASH_COMPLETE_SCRIPT = r"""
{sources}
if (complete -p "{cmd}" 2> /dev/null || echo _minimal) | grep --quiet -e "_minimal"
then
declare -f _completion_loader > /dev/null && _completion_loader "{cmd}"
fi
_func=$(complete -p {cmd} | grep -o -e '-F \w\+' | cut -d ' ' -f 2)
COMP_WORDS=({line})
COMP_LINE={comp_line}
COMP_POINT=${{#COMP_LINE}}
COMP_COUNT={end}
COMP_CWORD={n}
{func} {cmd} {prefix} {prev}
$_func {cmd} {prefix} {prev}
for ((i=0;i<${{#COMPREPLY[*]}};i++)) do echo ${{COMPREPLY[i]}}; done
"""
def update_bash_completion():
global BASH_COMPLETE_FUNCS, BASH_COMPLETE_FILES, BASH_COMPLETE_HASH
global CACHED_FUNCS, CACHED_FILES, CACHED_HASH, INITED
completers = builtins.__xonsh_env__.get('BASH_COMPLETIONS', ())
BASH_COMPLETE_HASH = hashlib.md5(repr(completers).encode()).hexdigest()
datadir = builtins.__xonsh_env__['XONSH_DATA_DIR']
cachefname = os.path.join(datadir, 'bash_completion_cache')
if not INITED:
if os.path.isfile(cachefname):
# load from cache
with open(cachefname, 'rb') as cache:
CACHED_HASH, CACHED_FUNCS, CACHED_FILES = pickle.load(cache)
BASH_COMPLETE_HASH = CACHED_HASH
BASH_COMPLETE_FUNCS = CACHED_FUNCS
BASH_COMPLETE_FILES = CACHED_FILES
else:
# create initial cache
_load_bash_complete_funcs()
_load_bash_complete_files()
CACHED_HASH = BASH_COMPLETE_HASH
CACHED_FUNCS = BASH_COMPLETE_FUNCS
CACHED_FILES = BASH_COMPLETE_FILES
with open(cachefname, 'wb') as cache:
val = (CACHED_HASH, CACHED_FUNCS, CACHED_FILES)
pickle.dump(val, cache)
INITED = True
invalid = ((not os.path.isfile(cachefname)) or
BASH_COMPLETE_HASH != CACHED_HASH or
_completions_time() > os.stat(cachefname).st_mtime)
if invalid:
# update the cache
_load_bash_complete_funcs()
_load_bash_complete_files()
CACHED_HASH = BASH_COMPLETE_HASH
CACHED_FUNCS = BASH_COMPLETE_FUNCS
CACHED_FILES = BASH_COMPLETE_FILES
with open(cachefname, 'wb') as cache:
val = (CACHED_HASH, BASH_COMPLETE_FUNCS, BASH_COMPLETE_FILES)
pickle.dump(val, cache)
def complete_from_bash(prefix, line, begidx, endidx, ctx):
"""Completes based on results from BASH completion."""
update_bash_completion()
sources = _collect_completions_sources()
if not sources:
return set()
if prefix.startswith('$'): # do not complete env variables
return set()
splt = line.split()
cmd = splt[0]
func = BASH_COMPLETE_FUNCS.get(cmd, None)
fnme = BASH_COMPLETE_FILES.get(cmd, None)
if func is None or fnme is None:
return set()
idx = n = 0
prev = ''
for n, tok in enumerate(splt):
if tok == prefix:
idx = line.find(prefix, idx)
@ -105,9 +50,11 @@ def complete_from_bash(prefix, line, begidx, endidx, ctx):
prefix = shlex.quote(prefix)
script = BASH_COMPLETE_SCRIPT.format(
filename=fnme, line=' '.join(shlex.quote(p) for p in splt),
comp_line=shlex.quote(line), n=n, func=func, cmd=cmd,
end=endidx + 1, prefix=prefix, prev=shlex.quote(prev))
sources='\n'.join(sources), line=' '.join(shlex.quote(p) for p in splt),
comp_line=shlex.quote(line), n=n, cmd=cmd,
end=endidx + 1, prefix=prefix, prev=shlex.quote(prev),
)
try:
out = subprocess.check_output(
[xp.bash_command()], input=script, universal_newlines=True,
@ -119,59 +66,6 @@ def complete_from_bash(prefix, line, begidx, endidx, ctx):
return rtn
def _load_bash_complete_funcs():
global BASH_COMPLETE_FUNCS
BASH_COMPLETE_FUNCS = bcf = {}
inp = _collect_completions_sources()
if not inp:
return
inp.append('complete -p\n')
out = _source_completions(inp)
for line in out.splitlines():
head, _, cmd = line.rpartition(' ')
if len(cmd) == 0 or cmd == 'cd':
continue
m = RE_DASHF.search(head)
if m is None:
continue
bcf[cmd] = m.group(1)
def _load_bash_complete_files():
global BASH_COMPLETE_FILES
inp = _collect_completions_sources()
if not inp:
BASH_COMPLETE_FILES = {}
return
if BASH_COMPLETE_FUNCS:
inp.append('shopt -s extdebug')
bash_funcs = set(BASH_COMPLETE_FUNCS.values())
inp.append('declare -F ' + ' '.join([f for f in bash_funcs]))
inp.append('shopt -u extdebug\n')
out = _source_completions(inp)
func_files = {}
for line in out.splitlines():
parts = line.split()
if xp.ON_WINDOWS:
parts = [parts[0], ' '.join(parts[2:])]
func_files[parts[0]] = parts[-1]
BASH_COMPLETE_FILES = {
cmd: func_files[func]
for cmd, func in BASH_COMPLETE_FUNCS.items()
if func in func_files
}
def _source_completions(source):
try:
return subprocess.check_output(
[xp.bash_command()], input='\n'.join(source),
universal_newlines=True, env=builtins.__xonsh_env__.detype(),
stderr=subprocess.DEVNULL)
except FileNotFoundError:
return ''
def _collect_completions_sources():
sources = []
completers = builtins.__xonsh_env__.get('BASH_COMPLETIONS', ())
@ -183,9 +77,3 @@ def _collect_completions_sources():
for _file in (x for x in path.glob('*') if x.is_file()):
sources.append('source "{}"'.format(_file.as_posix()))
return sources
def _completions_time():
compfiles = builtins.__xonsh_env__.get('BASH_COMPLETIONS', ())
compfiles = [os.stat(x).st_mtime for x in compfiles if os.path.exists(x)]
return max(compfiles) if compfiles else 0

View file

@ -186,12 +186,11 @@ def _splitpath(path):
def _splitpath_helper(path, sofar=()):
folder, path = os.path.split(path)
if path == "":
if path:
sofar = sofar + (path, )
if not folder or folder == xt.get_sep():
return sofar[::-1]
elif folder == "":
return (sofar + (path, ))[::-1]
else:
return _splitpath_helper(folder, sofar + (path, ))
return _splitpath_helper(folder, sofar)
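# For instance, on POSIX (where the separator is '/'):
#   _splitpath_helper('/usr/local/bin') -> ('usr', 'local', 'bin')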
def subsequence_match(ref, typed, csc):

View file

@ -4,7 +4,7 @@ import sys
import inspect
import builtins
import importlib
import collections.abc as abc
import collections.abc as cabc
import xonsh.tools as xt
import xonsh.lazyasd as xl
@ -95,10 +95,13 @@ def attr_complete(prefix, ctx, filter_func):
except: # pylint:disable=bare-except
continue
a = getattr(val, opt)
if callable(a):
rpl = opt + '('
elif isinstance(a, abc.Iterable):
rpl = opt + '['
if builtins.__xonsh_env__['COMPLETIONS_BRACKETS']:
if callable(a):
rpl = opt + '('
elif isinstance(a, (cabc.Sequence, cabc.Mapping)):
rpl = opt + '['
else:
rpl = opt
else:
rpl = opt
# note that prefix[:prelen-len(attr)] != prefix[:-len(attr)]

View file

@ -8,7 +8,11 @@ import subprocess
from xonsh.lazyasd import lazyobject
from xonsh.tools import get_sep
from xonsh.platform import ON_WINDOWS
from xonsh.events import events
DIRSTACK = []
"""A list containing the currently remembered directories."""
@ -109,6 +113,13 @@ def _unc_unmap_temp_drive(left_drive, cwd):
subprocess.check_output(['NET', 'USE', left_drive, '/delete'], universal_newlines=True)
events.doc('on_chdir', """
on_chdir(olddir: str, newdir: str) -> None
Fires when the current directory is changed for any reason.
""")
def _get_cwd():
try:
return os.getcwd()
@ -120,19 +131,27 @@ def _change_working_directory(newdir):
env = builtins.__xonsh_env__
old = env['PWD']
new = os.path.join(old, newdir)
absnew = os.path.abspath(new)
try:
os.chdir(os.path.abspath(new))
os.chdir(absnew)
except (OSError, FileNotFoundError):
if new.endswith(get_sep()):
new = new[:-1]
if os.path.basename(new) == '..':
env['PWD'] = new
return
if old is not None:
env['OLDPWD'] = old
if new is not None:
env['PWD'] = os.path.abspath(new)
else:
if old is not None:
env['OLDPWD'] = old
if new is not None:
env['PWD'] = absnew
# Fire event if the path actually changed
if old != env['PWD']:
events.on_chdir.fire(old, env['PWD'])
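# A handler can subscribe to the event declared above, e.g. in a xonshrc file
# (illustrative sketch; `events` is also installed as a builtin by load_builtins):
#
#     @events.on_chdir
#     def _log_cd(olddir, newdir):
#         print('cd: {} -> {}'.format(olddir, newdir))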
def _try_cdpath(apath):

View file

@ -17,7 +17,7 @@ import itertools
import contextlib
import subprocess
import collections
import collections.abc as abc
import collections.abc as cabc
from xonsh import __version__ as XONSH_VERSION
from xonsh.jobs import get_next_task
@ -101,6 +101,8 @@ def DEFAULT_ENSURERS():
re.compile('\w*DIRS$'): (is_env_path, str_to_env_path, env_path_to_str),
'COLOR_INPUT': (is_bool, to_bool, bool_to_str),
'COLOR_RESULTS': (is_bool, to_bool, bool_to_str),
'COMPLETIONS_BRACKETS': (is_bool, to_bool, bool_to_str),
'COMPLETIONS_CONFIRM': (is_bool, to_bool, bool_to_str),
'COMPLETIONS_DISPLAY': (is_completions_display_value,
to_completions_display_value, str),
'COMPLETIONS_MENU_ROWS': (is_int, int, str),
@ -244,6 +246,8 @@ def DEFAULT_VALUES():
'CDPATH': (),
'COLOR_INPUT': True,
'COLOR_RESULTS': True,
'COMPLETIONS_BRACKETS': True,
'COMPLETIONS_CONFIRM': False,
'COMPLETIONS_DISPLAY': 'multi',
'COMPLETIONS_MENU_ROWS': 5,
'DIRSTACK_SIZE': 20,
@ -355,19 +359,20 @@ def DEFAULT_DOCS():
'shell.\n\nPressing the right arrow key inserts the currently '
'displayed suggestion. Only usable with $SHELL_TYPE=prompt_toolkit.'),
'BASH_COMPLETIONS': VarDocs(
'This is a list (or tuple) of strings that specifies where the BASH '
'completion files may be found. The default values are platform '
'This is a list (or tuple) of strings that specifies where the '
'`bash_completion` script may be found. For better performance, '
'bash-completion v2.x is recommended since it lazy-loads individual '
'completion scripts. Paths or directories of individual completion '
'scripts (like `.../completes/ssh`) do not need to be included here. '
'The default values are platform '
'dependent, but sane. To specify an alternate list, do so in the run '
'control file.', default=(
"Normally this is:\n\n"
" ('/etc/bash_completion',\n"
" '/usr/share/bash-completion/completions/git')\n\n"
" ('/etc/bash_completion', )\n\n"
"But, on Mac it is:\n\n"
" ('/usr/local/etc/bash_completion',\n"
" '/opt/local/etc/profile.d/bash_completion.sh')\n\n"
" ('/usr/local/etc/bash_completion', )\n\n"
"And on Arch Linux it is:\n\n"
" ('/usr/share/bash-completion/bash_completion',\n"
" '/usr/share/bash-completion/completions/git')\n\n"
" ('/usr/share/bash-completion/bash_completion', )\n\n"
"Other OS-specific defaults may be added in the future.")),
'CASE_SENSITIVE_COMPLETIONS': VarDocs(
'Sets whether completions should be case sensitive or case '
@ -377,6 +382,9 @@ def DEFAULT_DOCS():
'with Bash, xonsh always prefer an existing relative path.'),
'COLOR_INPUT': VarDocs('Flag for syntax highlighting interactive input.'),
'COLOR_RESULTS': VarDocs('Flag for syntax highlighting return values.'),
'COMPLETIONS_BRACKETS': VarDocs(
'Flag to enable/disable inclusion of square brackets and parentheses '
'in Python attribute completions.', default='True'),
'COMPLETIONS_DISPLAY': VarDocs(
'Configure if and how Python completions are displayed by the '
'prompt_toolkit shell.\n\nThis option does not affect Bash '
@ -393,6 +401,10 @@ def DEFAULT_DOCS():
"writing \"$COMPLETIONS_DISPLAY = None\" and \"$COMPLETIONS_DISPLAY "
"= 'none'\" are equivalent. Only usable with "
"$SHELL_TYPE=prompt_toolkit"),
'COMPLETIONS_CONFIRM': VarDocs(
'While the tab-completion menu is displayed, press <Enter> to confirm '
'the completion instead of running the command. This only affects the '
'prompt-toolkit shell.'),
'COMPLETIONS_MENU_ROWS': VarDocs(
'Number of rows to reserve for tab-completions menu if '
"$COMPLETIONS_DISPLAY is 'single' or 'multi'. This only affects the "
@ -654,7 +666,7 @@ def DEFAULT_DOCS():
# actual environment
#
class Env(abc.MutableMapping):
class Env(cabc.MutableMapping):
"""A xonsh environment, whose variables have limited typing
(unlike BASH). Most variables are, by default, strings (like BASH).
However, the following rules also apply based on variable-name:
@ -695,7 +707,7 @@ class Env(abc.MutableMapping):
@staticmethod
def detypeable(val):
return not (callable(val) or isinstance(val, abc.MutableMapping))
return not (callable(val) or isinstance(val, cabc.MutableMapping))
def detype(self):
if self._detyped is not None:
@ -811,8 +823,8 @@ class Env(abc.MutableMapping):
else:
e = "Unknown environment variable: ${}"
raise KeyError(e.format(key))
if isinstance(val, (abc.MutableSet, abc.MutableSequence,
abc.MutableMapping)):
if isinstance(val, (cabc.MutableSet, cabc.MutableSequence,
cabc.MutableMapping)):
self._detyped = None
return val
@ -1416,7 +1428,7 @@ def load_static_config(ctx, config=None):
with open(config, 'r', encoding=encoding, errors=errors) as f:
try:
conf = json.load(f)
assert isinstance(conf, abc.Mapping)
assert isinstance(conf, cabc.Mapping)
ctx['LOADED_CONFIG'] = True
except Exception as e:
conf = {}

247
xonsh/events.py Normal file
View file

@ -0,0 +1,247 @@
"""
Events for xonsh.
In all likelihood, you want ``builtins.events``.
The best way to "declare" an event is something like::
events.doc('on_spam', "Comes with eggs")
"""
import abc
import collections.abc
from xonsh.tools import print_exception
class AbstractEvent(collections.abc.MutableSet, abc.ABC):
"""
A given event that handlers can register against.
Acts as a ``MutableSet`` for registered handlers.
Note that ordering is never guaranteed.
"""
def __call__(self, handler):
"""
Registers a handler. It's suggested to use this as a decorator.
A ``validator()`` decorator method is added to the handler. If a validator
function is added, it filters whether the handler will be considered. The
validator takes the same arguments as the handler. If it returns False,
the handler will not be called or considered, as if it were not registered
at all.
Parameters
----------
handler : callable
The handler to register
Returns
-------
rtn : callable
The handler
"""
# Using Python's "private" name mangling to minimize hypothetical collisions
handler.__validator = None
self.add(handler)
def validator(vfunc):
"""
Adds a validator function to a handler to limit when it is considered.
"""
handler.__validator = vfunc
handler.validator = validator
return handler
def _filterhandlers(self, handlers, *pargs, **kwargs):
"""
Helper method for implementing classes. Generates the handlers that pass validation.
"""
for handler in handlers:
if handler.__validator is not None and not handler.__validator(*pargs, **kwargs):
continue
yield handler
@abc.abstractmethod
def fire(self, *pargs, **kwargs):
"""
Fires an event, calling registered handlers with the given arguments.
Parameters
----------
*pargs :
Positional arguments to pass to each handler
**kwargs :
Keyword arguments to pass to each handler
"""
class Event(AbstractEvent):
"""
An event species for notify and scatter-gather events.
"""
# Wish I could just pull from set...
def __init__(self):
self._handlers = set()
def __len__(self):
return len(self._handlers)
def __contains__(self, item):
return item in self._handlers
def __iter__(self):
yield from self._handlers
def add(self, item):
"""
Add an element to a set.
This has no effect if the element is already present.
"""
self._handlers.add(item)
def discard(self, item):
"""
Remove an element from a set if it is a member.
If the element is not a member, do nothing.
"""
self._handlers.discard(item)
def fire(self, *pargs, **kwargs):
"""
Fires an event, calling registered handlers with the given arguments. A non-unique iterable
of the results is returned.
Each handler is called immediately. Exceptions raised by handlers are printed and otherwise ignored.
Parameters
----------
*pargs :
Positional arguments to pass to each handler
**kwargs :
Keyword arguments to pass to each handler
Returns
-------
vals : iterable
Return values of each handler. If multiple handlers return the same value, it will
appear multiple times.
"""
vals = []
for handler in self._filterhandlers(self._handlers, *pargs, **kwargs):
try:
rv = handler(*pargs, **kwargs)
except Exception:
print_exception("Exception raised in event handler; ignored.")
else:
vals.append(rv)
return vals
class LoadEvent(AbstractEvent):
"""
An event species where each handler is called exactly once, shortly after either the event is
fired or the handler is registered (whichever is later).
"""
def __init__(self):
self._fired = set()
self._unfired = set()
self._hasfired = False
def __len__(self):
return len(self._fired) + len(self._unfired)
def __contains__(self, item):
return item in self._fired or item in self._unfired
def __iter__(self):
yield from self._fired
yield from self._unfired
def add(self, item):
"""
Add an element to a set.
This has no effect if the element is already present.
"""
self._fired.add(item)
def discard(self, item):
"""
Remove an element from a set if it is a member.
If the element is not a member, do nothing.
"""
self._fired.discard(item)
self._unfired.discard(item)
def fire(self, *pargs, **kwargs):
raise NotImplementedError("See #1550")
class EventManager:
"""
Container for all events in a system.
Meant to be a singleton, but doesn't enforce that itself.
Each event is just an attribute. They're created dynamically on first use.
"""
def doc(self, name, docstring):
"""
Applies a docstring to an event.
Parameters
----------
name : str
The name of the event, eg "on_precommand"
docstring : str
The docstring to apply to the event
"""
type(getattr(self, name)).__doc__ = docstring
def transmogrify(self, name, klass):
"""
Converts an event from one species to another, preserving handlers and docstring.
Please note: Some species maintain specialized state. This is lost on transmogrification.
Parameters
----------
name : str
The name of the event, eg "on_precommand"
klass : subclass of AbstractEvent
The type to turn the event into.
"""
if isinstance(klass, str):
klass = globals()[klass]
if not issubclass(klass, AbstractEvent):
raise ValueError("Invalid event class; must be a subclass of AbstractEvent")
oldevent = getattr(self, name)
newevent = type(name, (klass,), {'__doc__': type(oldevent).__doc__})()
setattr(self, name, newevent)
for handler in oldevent:
newevent.add(handler)
def __getattr__(self, name):
"""Get an event, if it doesn't already exist."""
# This is only called if the attribute doesn't exist, so create the Event...
# (A little bit of magic to enable docstrings to work right)
e = type(name, (Event,), {'__doc__': None})()
# ... and save it.
setattr(self, name, e)
# Now it exists, and we won't be called again.
return e
# Not lazy because:
# 1. Initialization of EventManager can't be much cheaper
# 2. It's expected to be used at load time, negating any benefit of using a lazy object
events = EventManager()
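# Minimal usage sketch (illustrative only; assumes this module is importable
# as xonsh.events):
#
#     from xonsh.events import events
#
#     events.doc('on_spam', "on_spam(msg: str) -> str\nFired when spam arrives.")
#
#     @events.on_spam
#     def shout(msg):
#         return msg.upper()
#
#     @shout.validator
#     def _only_nonempty(msg):
#         return bool(msg)
#
#     events.on_spam.fire("hello")   # -> ['HELLO']
#     events.on_spam.fire("")        # -> [] (the validator filtered the handler out)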

View file

@ -5,7 +5,7 @@ import types
import inspect
import builtins
import warnings
import collections.abc as abc
import collections.abc as cabc
from xonsh.ast import CtxAwareTransformer
from xonsh.parser import Parser
@ -45,13 +45,15 @@ class Execer(object):
if self.unload:
unload_builtins()
def parse(self, input, ctx, mode='exec', transform=True):
def parse(self, input, ctx, mode='exec', filename=None, transform=True):
"""Parses xonsh code in a context-aware fashion. For context-free
parsing, please use the Parser class directly or pass in
transform=False.
"""
if filename is None:
filename = self.filename
if not transform:
return self.parser.parse(input, filename=self.filename, mode=mode,
return self.parser.parse(input, filename=filename, mode=mode,
debug_level=(self.debug_level > 1))
# Parsing actually happens in a couple of phases. The first is a
@ -68,7 +70,7 @@ class Execer(object):
# tokens for all of the Python rules. The lazy way implemented here
# is to parse a line a second time with a $() wrapper if it fails
# the first time. This is a context-free phase.
tree, input = self._parse_ctx_free(input, mode=mode)
tree, input = self._parse_ctx_free(input, mode=mode, filename=filename)
if tree is None:
return None
@ -80,7 +82,7 @@ class Execer(object):
# it also is valid as a subprocess line.
if ctx is None:
ctx = set()
elif isinstance(ctx, abc.Mapping):
elif isinstance(ctx, cabc.Mapping):
ctx = set(ctx.keys())
tree = self.ctxtransformer.ctxvisit(tree, input, ctx, mode=mode)
return tree
@ -97,7 +99,8 @@ class Execer(object):
glbs = frame.f_globals if glbs is None else glbs
locs = frame.f_locals if locs is None else locs
ctx = set(dir(builtins)) | set(glbs.keys()) | set(locs.keys())
tree = self.parse(input, ctx, mode=mode, transform=transform)
tree = self.parse(input, ctx, mode=mode, filename=filename,
transform=transform)
if tree is None:
return None # handles comment only input
if transform:
@ -110,45 +113,53 @@ class Execer(object):
return code
def eval(self, input, glbs=None, locs=None, stacklevel=2,
transform=True):
filename=None, transform=True):
"""Evaluates (and returns) xonsh code."""
if isinstance(input, types.CodeType):
code = input
else:
if filename is None:
filename = self.filename
code = self.compile(input=input,
glbs=glbs,
locs=locs,
mode='eval',
stacklevel=stacklevel,
filename=filename,
transform=transform)
if code is None:
return None # handles comment only input
return eval(code, glbs, locs)
def exec(self, input, mode='exec', glbs=None, locs=None, stacklevel=2,
transform=True):
filename=None, transform=True):
"""Execute xonsh code."""
if isinstance(input, types.CodeType):
code = input
else:
if filename is None:
filename = self.filename
code = self.compile(input=input,
glbs=glbs,
locs=locs,
mode=mode,
stacklevel=stacklevel,
filename=filename,
transform=transform)
if code is None:
return None # handles comment only input
return exec(code, glbs, locs)
def _parse_ctx_free(self, input, mode='exec'):
def _parse_ctx_free(self, input, mode='exec', filename=None):
last_error_line = last_error_col = -1
parsed = False
original_error = None
if filename is None:
filename = self.filename
while not parsed:
try:
tree = self.parser.parse(input,
filename=self.filename,
filename=filename,
mode=mode,
debug_level=(self.debug_level > 1))
parsed = True
@ -163,7 +174,7 @@ class Execer(object):
if (e.loc is None) or (last_error_line == e.loc.lineno and
last_error_col in (e.loc.column + 1,
e.loc.column)):
raise original_error
raise original_error from None
last_error_col = e.loc.column
last_error_line = e.loc.lineno
idx = last_error_line - 1

View file

@ -4,14 +4,15 @@ import os
import re
import json
import shlex
import sys
import tempfile
import builtins
import subprocess
import warnings
import functools
import collections.abc as abc
import collections.abc as cabc
from xonsh.lazyasd import LazyObject
from xonsh.lazyasd import lazyobject
from xonsh.tools import to_bool, ensure_string
from xonsh.platform import ON_WINDOWS, ON_CYGWIN
@ -78,8 +79,11 @@ else
fi
echo ${namefile}"""
# mapping of shell name alises to keys in other lookup dictionaries.
CANON_SHELL_NAMES = LazyObject(lambda: {
@lazyobject
def CANON_SHELL_NAMES():
return {
'bash': 'bash',
'/bin/bash': 'bash',
'zsh': 'zsh',
@ -87,55 +91,79 @@ CANON_SHELL_NAMES = LazyObject(lambda: {
'/usr/bin/zsh': 'zsh',
'cmd': 'cmd',
'cmd.exe': 'cmd',
}, globals(), 'CANON_SHELL_NAMES')
}
DEFAULT_ENVCMDS = LazyObject(lambda: {
@lazyobject
def DEFAULT_ENVCMDS():
return {
'bash': 'env',
'zsh': 'env',
'cmd': 'set',
}, globals(), 'DEFAULT_ENVCMDS')
}
DEFAULT_ALIASCMDS = LazyObject(lambda: {
@lazyobject
def DEFAULT_ALIASCMDS():
return {
'bash': 'alias',
'zsh': 'alias -L',
'cmd': '',
}, globals(), 'DEFAULT_ALIASCMDS')
}
DEFAULT_FUNCSCMDS = LazyObject(lambda: {
@lazyobject
def DEFAULT_FUNCSCMDS():
return {
'bash': DEFAULT_BASH_FUNCSCMD,
'zsh': DEFAULT_ZSH_FUNCSCMD,
'cmd': '',
}, globals(), 'DEFAULT_FUNCSCMDS')
}
DEFAULT_SOURCERS = LazyObject(lambda: {
@lazyobject
def DEFAULT_SOURCERS():
return {
'bash': 'source',
'zsh': 'source',
'cmd': 'call',
}, globals(), 'DEFAULT_SOURCERS')
}
DEFAULT_TMPFILE_EXT = LazyObject(lambda: {
@lazyobject
def DEFAULT_TMPFILE_EXT():
return {
'bash': '.sh',
'zsh': '.zsh',
'cmd': '.bat',
}, globals(), 'DEFAULT_TMPFILE_EXT')
}
DEFAULT_RUNCMD = LazyObject(lambda: {
@lazyobject
def DEFAULT_RUNCMD():
return {
'bash': '-c',
'zsh': '-c',
'cmd': '/C',
}, globals(), 'DEFAULT_RUNCMD')
}
DEFAULT_SETERRPREVCMD = LazyObject(lambda: {
@lazyobject
def DEFAULT_SETERRPREVCMD():
return {
'bash': 'set -e',
'zsh': 'set -e',
'cmd': '@echo off',
}, globals(), 'DEFAULT_SETERRPREVCMD')
}
DEFAULT_SETERRPOSTCMD = LazyObject(lambda: {
@lazyobject
def DEFAULT_SETERRPOSTCMD():
return {
'bash': '',
'zsh': '',
'cmd': 'if errorlevel 1 exit 1',
}, globals(), 'DEFAULT_SETERRPOSTCMD')
}
@functools.lru_cache()
@ -261,12 +289,16 @@ def foreign_shell_data(shell, interactive=True, login=False, envcmd=None,
return env, aliases
ENV_RE = LazyObject(lambda: re.compile('__XONSH_ENV_BEG__\n(.*)'
'__XONSH_ENV_END__', flags=re.DOTALL),
globals(), 'ENV_RE')
ENV_SPLIT_RE = LazyObject(lambda: re.compile('^([^=]+)=([^=]*|[^\n]*)$',
flags=re.DOTALL | re.MULTILINE),
globals(), 'ENV_SPLIT_RE')
@lazyobject
def ENV_RE():
return re.compile('__XONSH_ENV_BEG__\n(.*)'
'__XONSH_ENV_END__', flags=re.DOTALL)
@lazyobject
def ENV_SPLIT_RE():
return re.compile('^([^=]+)=([^=]*|[^\n]*)$',
flags=re.DOTALL | re.MULTILINE)
def parse_env(s):
@ -280,10 +312,11 @@ def parse_env(s):
return env
ALIAS_RE = LazyObject(lambda: re.compile('__XONSH_ALIAS_BEG__\n(.*)'
'__XONSH_ALIAS_END__',
flags=re.DOTALL),
globals(), 'ALIAS_RE')
@lazyobject
def ALIAS_RE():
return re.compile('__XONSH_ALIAS_BEG__\n(.*)'
'__XONSH_ALIAS_END__',
flags=re.DOTALL)
def parse_aliases(s):
@ -312,10 +345,11 @@ def parse_aliases(s):
return aliases
FUNCS_RE = LazyObject(lambda: re.compile('__XONSH_FUNCS_BEG__\n(.+)\n'
'__XONSH_FUNCS_END__',
flags=re.DOTALL),
globals(), 'FUNCS_RE')
@lazyobject
def FUNCS_RE():
return re.compile('__XONSH_FUNCS_BEG__\n(.+)\n'
'__XONSH_FUNCS_END__',
flags=re.DOTALL)
def parse_funcs(s, shell, sourcer=None):
@ -343,8 +377,8 @@ def parse_funcs(s, shell, sourcer=None):
else sourcer
funcs = {}
for funcname, filename in namefiles.items():
if funcname.startswith('_'):
continue # skip private functions
if funcname.startswith('_') or not filename:
continue # skip private functions and invalid files
if not os.path.isabs(filename):
filename = os.path.abspath(filename)
wrapper = ForeignShellFunctionAlias(name=funcname, shell=shell,
@ -414,16 +448,18 @@ class ForeignShellFunctionAlias(object):
return args, True
VALID_SHELL_PARAMS = LazyObject(lambda: frozenset([
@lazyobject
def VALID_SHELL_PARAMS():
return frozenset([
'shell', 'interactive', 'login', 'envcmd',
'aliascmd', 'extra_args', 'currenv', 'safe',
'prevcmd', 'postcmd', 'funcscmd', 'sourcer',
]), globals(), 'VALID_SHELL_PARAMS')
])
def ensure_shell(shell):
"""Ensures that a mapping follows the shell specification."""
if not isinstance(shell, abc.MutableMapping):
if not isinstance(shell, cabc.MutableMapping):
shell = dict(shell)
shell_keys = set(shell.keys())
if not (shell_keys <= VALID_SHELL_PARAMS):
@ -444,9 +480,9 @@ def ensure_shell(shell):
shell['extra_args'] = tuple(map(ensure_string, shell['extra_args']))
if 'currenv' in shell_keys and not isinstance(shell['currenv'], tuple):
ce = shell['currenv']
if isinstance(ce, abc.Mapping):
if isinstance(ce, cabc.Mapping):
ce = tuple([(ensure_string(k), v) for k, v in ce.items()])
elif isinstance(ce, abc.Sequence):
elif isinstance(ce, cabc.Sequence):
ce = tuple([(ensure_string(k), v) for k, v in ce])
else:
raise RuntimeError('unrecognized type for currenv')
@ -540,9 +576,15 @@ def load_foreign_aliases(shells=None, config=None, issue_warning=True):
"""
shells = _get_shells(shells=shells, config=config, issue_warning=issue_warning)
aliases = {}
xonsh_aliases = builtins.aliases
for shell in shells:
shell = ensure_shell(shell)
_, shaliases = foreign_shell_data(**shell)
if shaliases:
aliases.update(shaliases)
for alias in set(shaliases) & set(xonsh_aliases):
del shaliases[alias]
print('aliases: alias {!r} of shell {!r} '
'tries to override xonsh alias, '
'xonsh wins!'.format(alias, shell['shell']),
file=sys.stderr)
aliases.update(shaliases)
return aliases

View file

@ -8,17 +8,16 @@ import time
import uuid
import argparse
import builtins
import collections
import datetime
import functools
import itertools
import threading
import collections
import collections.abc as abc
import collections.abc as cabc
from xonsh.lazyasd import lazyobject
from xonsh.lazyjson import LazyJSON, ljdump, LJNode
from xonsh.tools import (ensure_slice, to_history_tuple,
expanduser_abs_path, ensure_timestamp)
from xonsh.tools import (ensure_slice, to_history_tuple, is_string,
get_portions, expanduser_abs_path, ensure_timestamp)
from xonsh.diff_history import _dh_create_parser, _dh_main_action
@ -178,7 +177,7 @@ class HistoryFlusher(threading.Thread):
ljdump(hist, f, sort_keys=True)
class CommandField(abc.Sequence):
class CommandField(cabc.Sequence):
"""A field in the 'cmds' portion of history."""
def __init__(self, field, hist, default=None):
@ -262,7 +261,7 @@ def _all_xonsh_parser(**kwargs):
"""
Returns all history as found in XONSH_DATA_DIR.
return format: (name, start_time, index)
return format: (cmd, start_time, index)
"""
data_dir = builtins.__xonsh_env__.get('XONSH_DATA_DIR')
data_dir = expanduser_abs_path(data_dir)
@ -285,14 +284,11 @@ def _all_xonsh_parser(**kwargs):
def _curr_session_parser(hist=None, **kwargs):
"""
Take in History object and return command list tuple with
format: (name, start_time, index)
format: (cmd, start_time, index)
"""
if hist is None:
hist = builtins.__xonsh_history__
start_times = (start for start, end in hist.tss)
names = (name.rstrip() for name in hist.inps)
for ind, (c, t) in enumerate(zip(names, start_times)):
yield (c, t, ind)
return iter(hist)
def _zsh_hist_parser(location=None, **kwargs):
@ -394,20 +390,6 @@ def _hist_create_parser():
return p
def _hist_get_portion(commands, slices):
"""Yield from portions of history commands."""
if len(slices) == 1:
s = slices[0]
try:
yield from itertools.islice(commands, s.start, s.stop, s.step)
return
except ValueError: # islice failed
pass
commands = list(commands)
for s in slices:
yield from commands[s]
def _hist_filter_ts(commands, start_time, end_time):
"""Yield only the commands between start and end time."""
for cmd in commands:
@ -439,7 +421,7 @@ def _hist_get(session='session', *, slices=None, datetime_format=None,
if slices:
# transform/check all slices
slices = [ensure_slice(s) for s in slices]
cmds = _hist_get_portion(cmds, slices)
cmds = get_portions(cmds, slices)
if start_time or end_time:
if start_time is None:
start_time = 0.0
@ -488,6 +470,37 @@ def _hist_show(ns, *args, **kwargs):
class History(object):
"""Xonsh session history.
Indexing
--------
History object acts like a sequence that can be indexed in a special way
that adds extra functionality. At the moment only history from the
current session can be retrieved. Note that the most recent command
is the last item in history.
The index acts as a filter with two parts, command and argument,
separated by a comma. Based on the type of each part, different
filtering can be achieved:
for the command part:
- an int returns the command in that position.
- a slice returns a list of commands.
- a string returns the most recent command containing the string.
for the argument part:
- an int returns the argument of the command in that position.
- a slice returns a part of the command based on the argument
position.
The argument part of the filter can be omitted but the command part is
required.
Command arguments are separated by white space.
If the filtering produces only one result, it is
returned as a string; otherwise a list of strings is returned.
Attributes
----------
rtns : sequence of ints
@ -605,16 +618,66 @@ class History(object):
self.buffer.clear()
return hf
def show(self, *args, **kwargs):
"""Return shell history as a list
def __iter__(self):
"""Get current session history.
Valid options:
`session` - returns xonsh history from current session
`xonsh` - returns xonsh history from all sessions
`zsh` - returns all zsh history
`bash` - returns all bash history
Yields
------
tuple
``tuple`` of the form (cmd, start_time, index).
"""
return list(_hist_get(*args, **kwargs))
start_times = (start for start, end in self.tss)
names = (name.rstrip() for name in self.inps)
for ind, (c, t) in enumerate(zip(names, start_times)):
yield (c, t, ind)
def __getitem__(self, item):
"""Retrieve history parts based on filtering rules,
see ``History`` docs for more info. Accepts one of
int, string, slice or tuple of length two.
"""
if isinstance(item, tuple):
cmd_pat, arg_pat = item
else:
cmd_pat, arg_pat = item, None
cmds = (c for c, *_ in self)
cmds = self._cmd_filter(cmds, cmd_pat)
if arg_pat is not None:
cmds = self._args_filter(cmds, arg_pat)
cmds = list(cmds)
if len(cmds) == 1:
return cmds[0]
else:
return cmds
@staticmethod
def _cmd_filter(cmds, pat):
if isinstance(pat, (int, slice)):
s = ensure_slice(pat)
yield from get_portions(cmds, s)
elif is_string(pat):
for command in reversed(list(cmds)):
if pat in command:
yield command
else:
raise TypeError('Command filter must be '
'string, int or slice')
@staticmethod
def _args_filter(cmds, pat):
args = None
if isinstance(pat, (int, slice)):
s = ensure_slice(pat)
for command in cmds:
yield ' '.join(command.split()[s])
else:
raise TypeError('Argument filter must be '
'int or slice')
return args
def __setitem__(self, *args):
raise PermissionError('You cannot change history! '
'you can create new though.')
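# Illustrative indexing sketch (assumes a session whose only commands so far
# were 'ls -l' and then 'echo hi'):
#
#     hist = builtins.__xonsh_history__
#     hist[0]           # -> 'ls -l'    (command at position 0)
#     hist[0, 0]        # -> 'ls'       (its argument at position 0)
#     hist['echo']      # -> 'echo hi'  (most recent command containing 'echo')
#     hist['echo', 1]   # -> 'hi'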
def _hist_info(ns, hist):

View file

@ -3,9 +3,10 @@
This module registers the hooks it defines when it is imported.
"""
import builtins
import os
import sys
import builtins
import types
from importlib.machinery import ModuleSpec
from importlib.abc import MetaPathFinder, SourceLoader
@ -60,6 +61,14 @@ class XonshImportHook(MetaPathFinder, SourceLoader):
#
# SourceLoader methods
#
def create_module(self, spec):
"""Create a xonsh module with the appropriate attributes."""
mod = types.ModuleType(spec.name)
mod.__file__ = self.get_filename(spec.name)
mod.__loader__ = self
mod.__package__ = spec.parent or ''
return mod
def get_filename(self, fullname):
"""Returns the filename for a module's fullname."""
return self._filenames[fullname]
@ -70,7 +79,7 @@ class XonshImportHook(MetaPathFinder, SourceLoader):
def get_code(self, fullname):
"""Gets the code object for a xonsh file."""
filename = self._filenames.get(fullname, None)
filename = self.get_filename(fullname)
if filename is None:
msg = "xonsh file {0!r} could not be found".format(fullname)
raise ImportError(msg)
@ -92,6 +101,8 @@ def install_hook():
Can safely be called many times, will be no-op if a xonsh import hook is
already present.
"""
if XonshImportHook not in {type(hook) for hook in sys.meta_path}:
for hook in sys.meta_path:
if isinstance(hook, XonshImportHook):
break
else:
sys.meta_path.append(XonshImportHook())

View file

@ -8,7 +8,7 @@ import builtins
import threading
import importlib
import importlib.util
import collections.abc as abc
import collections.abc as cabc
__version__ = '0.1.1'
@ -128,7 +128,7 @@ def lazyobject(f):
return LazyObject(f, f.__globals__, f.__name__)
class LazyDict(abc.MutableMapping):
class LazyDict(cabc.MutableMapping):
def __init__(self, loaders, ctx, name):
"""Dictionary like object that lazily loads its values from an initial

View file

@ -4,7 +4,7 @@ import io
import json
import weakref
import contextlib
import collections.abc as abc
import collections.abc as cabc
def _to_json_with_size(obj, offset=0, sort_keys=False):
@ -12,7 +12,7 @@ def _to_json_with_size(obj, offset=0, sort_keys=False):
s = json.dumps(obj)
o = offset
n = size = len(s.encode()) # size in bytes
elif isinstance(obj, abc.Mapping):
elif isinstance(obj, cabc.Mapping):
s = '{'
j = offset + 1
o = {}
@ -35,7 +35,7 @@ def _to_json_with_size(obj, offset=0, sort_keys=False):
n = len(s)
o['__total__'] = offset
size['__total__'] = n
elif isinstance(obj, abc.Sequence):
elif isinstance(obj, cabc.Sequence):
s = '['
j = offset + 1
o = []
@ -94,7 +94,7 @@ def ljdump(obj, fp, sort_keys=False):
fp.write(s)
class LJNode(abc.Mapping, abc.Sequence):
class LJNode(cabc.Mapping, cabc.Sequence):
"""A proxy node for JSON nodes. Acts as both sequence and mapping."""
def __init__(self, offsets, sizes, root):
@ -110,8 +110,8 @@ class LJNode(abc.Mapping, abc.Sequence):
self.offsets = offsets
self.sizes = sizes
self.root = root
self.is_mapping = isinstance(self.offsets, abc.Mapping)
self.is_sequence = isinstance(self.offsets, abc.Sequence)
self.is_mapping = isinstance(self.offsets, cabc.Mapping)
self.is_sequence = isinstance(self.offsets, cabc.Sequence)
def __len__(self):
# recall that for maps, the '__total__' key is added and for
@ -137,7 +137,7 @@ class LJNode(abc.Mapping, abc.Sequence):
f.seek(self.root.dloc + offset)
s = f.read(size)
val = json.loads(s)
elif isinstance(offset, (abc.Mapping, abc.Sequence)):
elif isinstance(offset, (cabc.Mapping, cabc.Sequence)):
val = LJNode(offset, size, self.root)
else:
raise TypeError('incorrect types for offset node')
@ -204,8 +204,8 @@ class LazyJSON(LJNode):
self._f = open(f, 'r', newline='\n')
self._load_index()
self.root = weakref.proxy(self)
self.is_mapping = isinstance(self.offsets, abc.Mapping)
self.is_sequence = isinstance(self.offsets, abc.Sequence)
self.is_mapping = isinstance(self.offsets, cabc.Mapping)
self.is_sequence = isinstance(self.offsets, cabc.Sequence)
def __del__(self):
self.close()

View file

@ -145,7 +145,9 @@ def handle_error_token(state, token):
Function for handling error tokens
"""
state['last'] = token
if not state['pymode'][-1][0]:
if token.string == '!':
typ = 'BANG'
elif not state['pymode'][-1][0]:
typ = 'NAME'
else:
typ = 'ERRORTOKEN'
@ -340,6 +342,7 @@ class Lexer(object):
if self._tokens is None:
t = tuple(token_map.values()) + (
'NAME', # name tokens
'BANG', # ! tokens
'WS', # whitespace in subprocess mode
'LPAREN', 'RPAREN', # ( )
'LBRACKET', 'RBRACKET', # [ ]

View file

@ -17,6 +17,7 @@ from xonsh.lexer import Lexer, LexToken
from xonsh.platform import PYTHON_VERSION_INFO
from xonsh.tokenize import SearchPath
from xonsh.lazyasd import LazyObject
from xonsh.parsers.context_check import check_contexts
RE_SEARCHPATH = LazyObject(lambda: re.compile(SearchPath), globals(),
'RE_SEARCHPATH')
@ -219,6 +220,11 @@ class BaseParser(object):
self.lexer = lexer = Lexer()
self.tokens = lexer.tokens
self._lines = None
self.xonsh_code = None
self._attach_nocomma_tok_rules()
self._attach_nocloser_base_rules()
opt_rules = [
'newlines', 'arglist', 'func_call', 'rarrow_test', 'typedargslist',
'equals_test', 'colon_test', 'tfpdef', 'comma_tfpdef_list',
@ -233,7 +239,8 @@ class BaseParser(object):
'op_factor_list', 'trailer_list', 'testlist_comp',
'yield_expr_or_testlist_comp', 'dictorsetmaker',
'comma_subscript_list', 'test', 'sliceop', 'comp_iter',
'yield_arg', 'test_comma_list']
'yield_arg', 'test_comma_list',
'macroarglist', 'any_raw_toks']
for rule in opt_rules:
self._opt_rule(rule)
@ -247,11 +254,11 @@ class BaseParser(object):
'pm_term', 'op_factor', 'trailer', 'comma_subscript',
'comma_expr_or_star_expr', 'comma_test', 'comma_argument',
'comma_item', 'attr_period_name', 'test_comma',
'equals_yield_expr_or_testlist']
'equals_yield_expr_or_testlist', 'comma_nocomma']
for rule in list_rules:
self._list_rule(rule)
tok_rules = ['def', 'class', 'return', 'number', 'name',
tok_rules = ['def', 'class', 'return', 'number', 'name', 'bang',
'none', 'true', 'false', 'ellipsis', 'if', 'del',
'assert', 'lparen', 'lbrace', 'lbracket', 'string',
'times', 'plus', 'minus', 'divide', 'doublediv', 'mod',
@ -259,7 +266,8 @@ class BaseParser(object):
'for', 'colon', 'import', 'except', 'nonlocal', 'global',
'yield', 'from', 'raise', 'with', 'dollar_lparen',
'dollar_lbrace', 'dollar_lbracket', 'try',
'bang_lparen', 'bang_lbracket']
'bang_lparen', 'bang_lbracket', 'comma', 'rparen',
'rbracket']
for rule in tok_rules:
self._tok_rule(rule)
@ -273,9 +281,12 @@ class BaseParser(object):
if outputdir is None:
outputdir = os.path.dirname(os.path.dirname(__file__))
yacc_kwargs['outputdir'] = outputdir
self.parser = None
YaccLoader(self, yacc_kwargs)
# self.parser = yacc.yacc(**yacc_kwargs)
if yacc_debug:
# create parser on main thread
self.parser = yacc.yacc(**yacc_kwargs)
else:
self.parser = None
YaccLoader(self, yacc_kwargs)
# Keeps track of the last token given to yacc (the lookahead token)
self._last_yielded_token = None
@ -284,6 +295,8 @@ class BaseParser(object):
"""Resets for clean parsing."""
self.lexer.reset()
self._last_yielded_token = None
self._lines = None
self.xonsh_code = None
def parse(self, s, filename='<code>', mode='exec', debug_level=0):
"""Returns an abstract syntax tree of xonsh code.
@ -309,6 +322,8 @@ class BaseParser(object):
while self.parser is None:
time.sleep(0.01) # block until the parser is ready
tree = self.parser.parse(input=s, lexer=self.lexer, debug=debug_level)
if tree is not None:
check_contexts(tree)
# hack for getting modes right
if mode == 'single':
if isinstance(tree, ast.Expression):
@ -318,7 +333,7 @@ class BaseParser(object):
return tree
def _lexer_errfunc(self, msg, line, column):
self._parse_error(msg, self.currloc(line, column), self.xonsh_code)
self._parse_error(msg, self.currloc(line, column))
def _yacc_lookahead_token(self):
"""Gets the next-to-last and last token seen by the lexer."""
@ -400,15 +415,33 @@ class BaseParser(object):
return self.token_col(t)
return 0
def _parse_error(self, msg, loc, line=None):
if line is None:
@property
def lines(self):
if self._lines is None and self.xonsh_code is not None:
self._lines = self.xonsh_code.splitlines(keepends=True)
return self._lines
def source_slice(self, start, stop):
"""Gets the original source code from two (line, col) tuples in
source-space (i.e., lineno starts at 1).
"""
bline, bcol = start
eline, ecol = stop
bline -= 1
lines = self.lines[bline:eline]
lines[-1] = lines[-1][:ecol]
lines[0] = lines[0][bcol:]
return ''.join(lines)
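# For example, with self.lines == ['abcd\n', 'efgh\n'],
# source_slice((1, 1), (2, 2)) returns 'bcd\nef'.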
def _parse_error(self, msg, loc):
if self.xonsh_code is None or loc is None:
err_line_pointer = ''
else:
col = loc.column + 1
lines = line.splitlines()
lines = self.lines
i = loc.lineno - 1
if 0 <= i < len(lines):
err_line = lines[i]
err_line = lines[i].rstrip()
err_line_pointer = '\n{}\n{: >{}}'.format(err_line, '^', col)
else:
err_line_pointer = ''
@ -423,7 +456,8 @@ class BaseParser(object):
('left', 'EQ', 'NE'), ('left', 'GT', 'GE', 'LT', 'LE'),
('left', 'RSHIFT', 'LSHIFT'), ('left', 'PLUS', 'MINUS'),
('left', 'TIMES', 'DIVIDE', 'DOUBLEDIV', 'MOD'),
('left', 'POW'), )
('left', 'POW'),
)
#
# Grammar as defined by BNF
@ -474,7 +508,8 @@ class BaseParser(object):
def p_eval_input(self, p):
"""eval_input : testlist newlines_opt
"""
p[0] = ast.Expression(body=p[1])
p1 = p[1]
p[0] = ast.Expression(body=p1, lineno=p1.lineno, col_offset=p1.col_offset)
def p_func_call(self, p):
"""func_call : LPAREN arglist_opt RPAREN"""
@ -1638,9 +1673,19 @@ class BaseParser(object):
lineno=leader.lineno,
col_offset=leader.col_offset)
elif isinstance(trailer, Mapping):
# call normal functions
p0 = ast.Call(func=leader,
lineno=leader.lineno,
col_offset=leader.col_offset, **trailer)
elif isinstance(trailer, (ast.Tuple, tuple)):
# call macro functions
l, c = leader.lineno, leader.col_offset
gblcall = xonsh_call('globals', [], lineno=l, col=c)
loccall = xonsh_call('locals', [], lineno=l, col=c)
if isinstance(trailer, tuple):
trailer, arglist = trailer
margs = [leader, trailer, gblcall, loccall]
p0 = xonsh_call('__xonsh_call_macro__', margs, lineno=l, col=c)
elif isinstance(trailer, str):
if trailer == '?':
p0 = xonsh_help(leader, lineno=leader.lineno,
@ -1675,6 +1720,7 @@ class BaseParser(object):
# empty container atom
p0 = ast.Tuple(elts=[], ctx=ast.Load(), lineno=self.lineno,
col_offset=self.col)
p0._real_tuple = True
elif isinstance(p2, ast.AST):
p0 = p2
p0._lopen_lineno, p0._lopen_col = p1_tok.lineno, p1_tok.lexpos
@ -1767,13 +1813,60 @@ class BaseParser(object):
def p_atom_fistful_of_dollars(self, p):
"""atom : dollar_lbrace_tok test RBRACE
| dollar_lparen_tok subproc RPAREN
| bang_lparen_tok subproc RPAREN
| dollar_lparen_tok subproc RPAREN
| bang_lbracket_tok subproc RBRACKET
| dollar_lbracket_tok subproc RBRACKET
"""
p[0] = self._dollar_rules(p)
def p_atom_bang_empty_fistful_of_dollars(self, p):
"""atom : bang_lparen_tok subproc bang_tok RPAREN
| dollar_lparen_tok subproc bang_tok RPAREN
| bang_lbracket_tok subproc bang_tok RBRACKET
| dollar_lbracket_tok subproc bang_tok RBRACKET
"""
p3 = p[3]
node = ast.Str(s='', lineno=p3.lineno, col_offset=p3.lexpos + 1)
p[2][-1].elts.append(node)
p[0] = self._dollar_rules(p)
def p_atom_bang_fistful_of_dollars(self, p):
"""atom : bang_lparen_tok subproc bang_tok nocloser rparen_tok
| dollar_lparen_tok subproc bang_tok nocloser rparen_tok
| bang_lbracket_tok subproc bang_tok nocloser rbracket_tok
| dollar_lbracket_tok subproc bang_tok nocloser rbracket_tok
"""
p3, p5 = p[3], p[5]
beg = (p3.lineno, p3.lexpos + 1)
end = (p5.lineno, p5.lexpos)
s = self.source_slice(beg, end).strip()
node = ast.Str(s=s, lineno=beg[0], col_offset=beg[1])
p[2][-1].elts.append(node)
p[0] = self._dollar_rules(p)
def _attach_nocloser_base_rules(self):
toks = set(self.tokens)
toks -= {'LPAREN', 'RPAREN', 'LBRACE', 'RBRACE',
'LBRACKET', 'RBRACKET', 'AT_LPAREN', 'BANG_LPAREN',
'BANG_LBRACKET', 'DOLLAR_LPAREN', 'DOLLAR_LBRACE',
'DOLLAR_LBRACKET', 'ATDOLLAR_LPAREN'}
ts = '\n | '.join(sorted(toks))
doc = 'nocloser : ' + ts + '\n'
self.p_nocloser_base.__func__.__doc__ = doc
def p_nocloser_base(self, p):
# see the attachment function above
pass
def p_nocloser_any(self, p):
"""nocloser : any_nested_raw"""
pass
def p_nocloser_many(self, p):
"""nocloser : nocloser nocloser"""
pass
def p_string_literal(self, p):
"""string_literal : string_tok"""
p1 = p[1]
@ -1820,6 +1913,34 @@ class BaseParser(object):
"""trailer : LPAREN arglist_opt RPAREN"""
p[0] = [p[2] or dict(args=[], keywords=[], starargs=None, kwargs=None)]
def p_trailer_bang_lparen(self, p):
"""trailer : bang_lparen_tok macroarglist_opt rparen_tok
| bang_lparen_tok nocomma comma_tok rparen_tok
| bang_lparen_tok nocomma comma_tok WS rparen_tok
| bang_lparen_tok macroarglist comma_tok rparen_tok
| bang_lparen_tok macroarglist comma_tok WS rparen_tok
"""
p1, p2, p3 = p[1], p[2], p[3]
begins = [(p1.lineno, p1.lexpos + 2)]
ends = [(p3.lineno, p3.lexpos)]
if p2:
begins.extend([(x[0], x[1] + 1) for x in p2])
ends = p2 + ends
elts = []
for beg, end in zip(begins, ends):
s = self.source_slice(beg, end).strip()
if not s:
if len(begins) == 1:
break
else:
msg = 'empty macro arguments not allowed'
self._parse_error(msg, self.currloc(*beg))
node = ast.Str(s=s, lineno=beg[0], col_offset=beg[1])
elts.append(node)
p0 = ast.Tuple(elts=elts, ctx=ast.Load(), lineno=p1.lineno,
col_offset=p1.lexpos)
p[0] = [p0]
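# Illustrative sketch (hypothetical input): for a macro call such as
# f!(x + y, z), this rule records the spans between '!(', the top-level
# commas, and ')', so that source_slice() recovers the raw argument strings
# 'x + y' and 'z', which become ast.Str elements of the resulting Tuple.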
def p_trailer_p3(self, p):
"""trailer : LBRACKET subscriptlist RBRACKET
| PERIOD NAME
@ -1832,6 +1953,85 @@ class BaseParser(object):
"""
p[0] = [p[1]]
def _attach_nocomma_tok_rules(self):
toks = set(self.tokens)
toks -= {'COMMA', 'LPAREN', 'RPAREN', 'LBRACE', 'RBRACE', 'LBRACKET',
'RBRACKET', 'AT_LPAREN', 'BANG_LPAREN', 'BANG_LBRACKET',
'DOLLAR_LPAREN', 'DOLLAR_LBRACE', 'DOLLAR_LBRACKET',
'ATDOLLAR_LPAREN'}
ts = '\n | '.join(sorted(toks))
doc = 'nocomma_tok : ' + ts + '\n'
self.p_nocomma_tok.__func__.__doc__ = doc
# The following grammar rules are no-ops because we don't need to glue the
# source code back together piece-by-piece. Instead, we simply look for
# top-level commas and record their positions. With these positions and
# the bounding parentheses !() positions we can use the source_slice()
# method. This does a much better job of capturing exactly the source code
# that was provided. The tokenizer & lexer can be a little lossy, especially
# with respect to whitespace.
def p_nocomma_tok(self, p):
# see the attachment function above for the docstring
pass
def p_any_raw_tok(self, p):
"""any_raw_tok : nocomma
| COMMA
"""
pass
def p_any_raw_toks_one(self, p):
"""any_raw_toks : any_raw_tok"""
pass
def p_any_raw_toks_many(self, p):
"""any_raw_toks : any_raw_toks any_raw_tok"""
pass
def p_nocomma_part_tok(self, p):
"""nocomma_part : nocomma_tok"""
pass
def p_any_nested_raw(self, p):
"""any_nested_raw : LPAREN any_raw_toks_opt RPAREN
| LBRACE any_raw_toks_opt RBRACE
| LBRACKET any_raw_toks_opt RBRACKET
| AT_LPAREN any_raw_toks_opt RPAREN
| BANG_LPAREN any_raw_toks_opt RPAREN
| BANG_LBRACKET any_raw_toks_opt RBRACKET
| DOLLAR_LPAREN any_raw_toks_opt RPAREN
| DOLLAR_LBRACE any_raw_toks_opt RBRACE
| DOLLAR_LBRACKET any_raw_toks_opt RBRACKET
| ATDOLLAR_LPAREN any_raw_toks_opt RPAREN
"""
pass
def p_nocomma_part_any(self, p):
"""nocomma_part : any_nested_raw"""
pass
def p_nocomma_base(self, p):
"""nocomma : nocomma_part"""
pass
def p_nocomma_append(self, p):
"""nocomma : nocomma nocomma_part"""
pass
def p_comma_nocomma(self, p):
"""comma_nocomma : comma_tok nocomma"""
p1 = p[1]
p[0] = [(p1.lineno, p1.lexpos)]
def p_macroarglist_single(self, p):
"""macroarglist : nocomma"""
p[0] = []
def p_macroarglist_many(self, p):
"""macroarglist : nocomma comma_nocomma_list"""
p[0] = p[2]
def p_subscriptlist(self, p):
"""subscriptlist : subscript comma_subscript_list_opt comma_opt"""
p1, p2 = p[1], p[2]
@ -1894,15 +2094,16 @@ class BaseParser(object):
"""testlist : test"""
p1 = p[1]
if isinstance(p1, ast.Tuple) and (hasattr(p1, '_real_tuple') and
p1._real_tuple):
p1._real_tuple and p1.elts):
p1.lineno, p1.col_offset = lopen_loc(p1.elts[0])
p[0] = p1
def p_testlist_single(self, p):
"""testlist : test COMMA"""
p1 = p[1]
if isinstance(p1, ast.Tuple) and (hasattr(p1, '_real_tuple') and
p1._real_tuple):
if isinstance(p1, ast.List) or (isinstance(p1, ast.Tuple) and
hasattr(p1, '_real_tuple') and
p1._real_tuple):
lineno, col = lopen_loc(p1)
p[0] = ast.Tuple(elts=[p1], ctx=ast.Load(),
lineno=p1.lineno, col_offset=p1.col_offset)
@ -1914,8 +2115,9 @@ class BaseParser(object):
| test comma_test_list
"""
p1 = p[1]
if isinstance(p1, ast.Tuple) and (hasattr(p1, '_real_tuple') and
p1._real_tuple):
if isinstance(p1, ast.List) or (isinstance(p1, ast.Tuple) and
hasattr(p1, '_real_tuple') and
p1._real_tuple):
lineno, col = lopen_loc(p1)
p1 = ast.Tuple(elts=[p1], ctx=ast.Load(),
lineno=p1.lineno, col_offset=p1.col_offset)
@ -2247,15 +2449,6 @@ class BaseParser(object):
p0._cliarg_action = 'append'
p[0] = p0
def p_subproc_atom_dollar_name(self, p):
"""subproc_atom : DOLLAR_NAME"""
p0 = self._envvar_getter_by_name(p[1][1:], lineno=self.lineno,
col=self.col)
p0 = xonsh_call('__xonsh_ensure_list_of_strs__', [p0],
lineno=self.lineno, col=self.col)
p0._cliarg_action = 'extend'
p[0] = p0
def p_subproc_atom_re(self, p):
"""subproc_atom : SEARCHPATH"""
p0 = xonsh_pathsearch(p[1], pymode=False, lineno=self.lineno,
@ -2317,6 +2510,7 @@ class BaseParser(object):
| STRING
| COMMA
| QUESTION
| DOLLAR_NAME
"""
# Many tokens cannot be part of this list, such as $, ', ", ()
# Use a string atom instead.
@ -2339,11 +2533,9 @@ class BaseParser(object):
else:
self._parse_error(p.value,
self.currloc(lineno=p.lineno,
column=p.lexpos),
self.xonsh_code)
column=p.lexpos))
else:
msg = 'code: {0}'.format(p.value)
self._parse_error(msg,
self.currloc(lineno=p.lineno,
column=p.lexpos),
self.xonsh_code)
column=p.lexpos))

View file

@ -0,0 +1,84 @@
import ast
import keyword
import collections
_all_keywords = frozenset(keyword.kwlist)
def _not_assignable(x, augassign=False):
"""
If ``x`` represents a value that can be assigned to, return ``None``.
Otherwise, return a string describing the object. For use in generating
meaningful syntax errors.
"""
if augassign and isinstance(x, (ast.Tuple, ast.List)):
return 'literal'
elif isinstance(x, (ast.Tuple, ast.List)):
if len(x.elts) == 0:
return '()'
for i in x.elts:
res = _not_assignable(i)
if res is not None:
return res
elif isinstance(x, (ast.Set, ast.Dict, ast.Num, ast.Str, ast.Bytes)):
return 'literal'
elif isinstance(x, ast.Call):
return 'function call'
elif isinstance(x, ast.Lambda):
return 'lambda'
elif isinstance(x, (ast.BoolOp, ast.BinOp, ast.UnaryOp)):
return 'operator'
elif isinstance(x, ast.IfExp):
return 'conditional expression'
elif isinstance(x, ast.ListComp):
return 'list comprehension'
elif isinstance(x, ast.DictComp):
return 'dictionary comprehension'
elif isinstance(x, ast.SetComp):
return 'set comprehension'
elif isinstance(x, ast.GeneratorExp):
return 'generator expression'
elif isinstance(x, ast.Compare):
return 'comparison'
elif isinstance(x, ast.Name) and x.id in _all_keywords:
return 'keyword'
elif isinstance(x, ast.NameConstant):
return 'keyword'
_loc = collections.namedtuple('_loc', ['lineno', 'column'])
def check_contexts(tree):
c = ContextCheckingVisitor()
c.visit(tree)
if c.error is not None:
e = SyntaxError(c.error[0])
e.loc = _loc(c.error[1], c.error[2])
raise e
class ContextCheckingVisitor(ast.NodeVisitor):
def __init__(self):
self.error = None
def visit_Delete(self, node):
for i in node.targets:
err = _not_assignable(i)
if err is not None:
msg = "can't delete {}".format(err)
self.error = msg, i.lineno, i.col_offset
break
def visit_Assign(self, node):
for i in node.targets:
err = _not_assignable(i)
if err is not None:
msg = "can't assign to {}".format(err)
self.error = msg, i.lineno, i.col_offset
break
def visit_AugAssign(self, node):
err = _not_assignable(node.target, True)
if err is not None:
msg = "illegal target for augmented assignment: {}".format(err)
self.error = msg, node.target.lineno, node.target.col_offset
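# A minimal usage sketch (the tree below is built by hand because this checker
# runs on xonsh's own parser output, which skips CPython's built-in context
# checks; names and values here are illustrative only):
#
#     bad = ast.Module(body=[ast.Delete(
#         targets=[ast.Num(n=1, lineno=1, col_offset=4)],
#         lineno=1, col_offset=0)])
#     check_contexts(bad)   # raises SyntaxError("can't delete literal")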

View file

@ -46,6 +46,12 @@ ON_POSIX = LazyBool(lambda: (os.name == 'posix'), globals(), 'ON_POSIX')
ON_FREEBSD = LazyBool(lambda: (sys.platform.startswith('freebsd')),
globals(), 'ON_FREEBSD')
"""``True`` if on a FreeBSD operating system, else ``False``."""
ON_NETBSD = LazyBool(lambda: (sys.platform.startswith('netbsd')),
globals(), 'ON_NETBSD')
"""``True`` if on a NetBSD operating system, else ``False``."""
ON_BSD = LazyBool(lambda: ON_FREEBSD or ON_NETBSD,
globals(), 'ON_BSD')
"""``True`` if on a BSD operating system, else ``False``."""
#
@ -265,22 +271,15 @@ def BASH_COMPLETIONS_DEFAULT():
"""
if ON_LINUX or ON_CYGWIN:
if linux_distro() == 'arch':
bcd = (
'/usr/share/bash-completion/bash_completion',
'/usr/share/bash-completion/completions')
bcd = ('/usr/share/bash-completion/bash_completion', )
else:
bcd = ('/usr/share/bash-completion',
'/usr/share/bash-completion/completions')
bcd = ('/usr/share/bash-completion', )
elif ON_DARWIN:
bcd = ('/usr/local/etc/bash_completion',
'/opt/local/etc/profile.d/bash_completion.sh')
bcd = ('/usr/local/share/bash-completion/bash_completion', # v2.x
'/usr/local/etc/bash_completion') # v1.x
elif ON_WINDOWS and git_for_windows_path():
bcd = (os.path.join(git_for_windows_path(),
'usr\\share\\bash-completion'),
os.path.join(git_for_windows_path(),
'usr\\share\\bash-completion\\completions'),
os.path.join(git_for_windows_path(),
'mingw64\\share\\git\\completion\\git-completion.bash'))
'usr\\share\\bash-completion'), )
else:
bcd = ()
return bcd

View file

@ -17,7 +17,7 @@ import functools
import threading
import subprocess
import collections
import collections.abc as abc
import collections.abc as cabc
from xonsh.platform import ON_WINDOWS, ON_LINUX, ON_POSIX
from xonsh.tools import (redirect_stdout, redirect_stderr, fallback,
@ -358,7 +358,7 @@ def wrap_simple_command(f, args, stdin, stdout, stderr):
cmd_result = 0
if isinstance(r, str):
stdout.write(r)
elif isinstance(r, abc.Sequence):
elif isinstance(r, cabc.Sequence):
if r[0] is not None:
stdout.write(r[0])
if r[1] is not None:

View file

@ -4,7 +4,7 @@ import os
import builtins
from prompt_toolkit.layout.dimension import LayoutDimension
from prompt_toolkit.completion import Completer, Completion
from prompt_toolkit.completion import Completer, Completion, _commonprefix
class PromptToolkitCompleter(Completer):
@ -40,8 +40,13 @@ class PromptToolkitCompleter(Completer):
pass
elif len(os.path.commonprefix(completions)) <= len(prefix):
self.reserve_space()
c_prefix = _commonprefix([a.strip('\'/').rsplit('/', 1)[0]
for a in completions])
for comp in completions:
yield Completion(comp, -l)
if comp.endswith('/') and not c_prefix.startswith('/'):
c_prefix = ''
display = comp[len(c_prefix):].lstrip('/')
yield Completion(comp, -l, display=display)
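# Illustrative example (hypothetical completions): for ['docs/a.rst',
# 'docs/b.rst'] the common prefix 'docs' is stripped from the pop-up, so the
# menu shows 'a.rst' and 'b.rst' while the full paths are still inserted;
# completions ending in '/' reset the prefix and are displayed unshortened.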
def reserve_space(self):
cli = builtins.__xonsh_shell__.shell.prompter.cli

View file

@ -107,6 +107,15 @@ class EndOfLine(Filter):
return bool(at_end and not last_line)
class ShouldConfirmCompletion(Filter):
"""
Check if completion needs confirmation
"""
def __call__(self, cli):
return (builtins.__xonsh_env__.get('COMPLETIONS_CONFIRM')
and cli.current_buffer.complete_state)
# Copied from prompt-toolkit's key_binding/bindings/basic.py
@Condition
def ctrl_d_condition(cli):
@ -173,6 +182,16 @@ def load_xonsh_bindings(key_bindings_manager):
b = event.cli.current_buffer
carriage_return(b, event.cli)
@handle(Keys.ControlJ, filter=ShouldConfirmCompletion())
def enter_confirm_completion(event):
"""Ignore <enter> (confirm completion)"""
event.current_buffer.complete_state = None
@handle(Keys.Escape, filter=ShouldConfirmCompletion())
def esc_cancel_completion(event):
"""Use <ESC> to cancel completion"""
event.cli.current_buffer.cancel_completion()
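# Usage note (assumes the env var named in the filter above): with
# ``$COMPLETIONS_CONFIRM = True`` in .xonshrc, the first <enter> only accepts
# the highlighted completion and a second <enter> runs the command, while
# <ESC> dismisses the completion menu.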
@handle(Keys.Left, filter=BeginningOfLine())
def wrap_cursor_back(event):
"""Move cursor to end of previous line unless at beginning of document"""

66
xonsh/pytest_plugin.py Normal file
View file

@ -0,0 +1,66 @@
# -*- coding: utf-8 -*-
"""Pytest plugin for testing xsh files."""
import sys
import importlib
from traceback import format_list, extract_tb
import pytest
from xonsh.imphooks import install_hook
def pytest_configure(config):
install_hook()
def pytest_collection_modifyitems(items):
items.sort(key=lambda x: 0 if isinstance(x, XshFunction) else 1)
def _limited_traceback(excinfo):
""" Return a formatted traceback with all the stack
from this frame (i.e __file__) up removed
"""
tb = extract_tb(excinfo.tb)
try:
idx = [__file__ in e for e in tb].index(True)
return format_list(tb[idx+1:])
except ValueError:
return format_list(tb)
def pytest_collect_file(parent, path):
if path.ext.lower() == ".xsh" and path.basename.startswith("test_"):
return XshFile(path, parent)
class XshFile(pytest.File):
def collect(self):
sys.path.append(self.fspath.dirname)
mod = importlib.import_module(self.fspath.purebasename)
sys.path.pop()
tests = [t for t in dir(mod) if t.startswith('test_')]
for test_name in tests:
obj = getattr(mod, test_name)
if hasattr(obj, '__call__'):
yield XshFunction(name=test_name, parent=self,
test_func=obj, test_module=mod)
class XshFunction(pytest.Item):
def __init__(self, name, parent, test_func, test_module):
super().__init__(name, parent)
self._test_func = test_func
self._test_module = test_module
def runtest(self):
self._test_func()
def repr_failure(self, excinfo):
""" called when self.runtest() raises an exception. """
formatted_tb = _limited_traceback(excinfo)
formatted_tb.insert(0, "xonsh execution failed\n")
formatted_tb.append('{}: {}'.format(excinfo.type.__name__, excinfo.value))
return "".join(formatted_tb)
def reportinfo(self):
return self.fspath, 0, "xonsh test: {}".format(self.name)
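# A minimal sketch of a file this plugin would collect (a hypothetical
# tests/test_example.xsh; any callable whose name starts with 'test_' is run):
#
#     def test_echo():
#         out = $(echo hello)
#         assert out.strip() == 'hello'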

View file

@ -57,9 +57,14 @@ def setup_readline():
pass
else:
break
if readline is None:
print("No readline implementation available. Skipping setup.")
print("""Skipping setup. Because no `readline` implementation available.
Please install a backend (`readline`, `prompt-toolkit`, etc) to use
`xonsh` interactively.
See https://github.com/xonsh/xonsh/issues/1170""")
return
import ctypes
import ctypes.util
uses_libedit = readline.__doc__ and 'libedit' in readline.__doc__

View file

@ -2,7 +2,7 @@
"""Tools to replay xonsh history files."""
import time
import builtins
import collections.abc as abc
import collections.abc as cabc
from xonsh.tools import swap
from xonsh.lazyjson import LazyJSON
@ -68,7 +68,7 @@ class Replayer(object):
new_env.update(re_env)
elif e == 'native':
new_env.update(builtins.__xonsh_env__)
elif isinstance(e, abc.Mapping):
elif isinstance(e, cabc.Mapping):
new_env.update(e)
else:
raise TypeError('Type of env not understood: {0!r}'.format(e))

View file

@ -19,12 +19,13 @@ Implementations:
"""
import builtins
import collections
import collections.abc as abc
import collections.abc as cabc
import contextlib
import ctypes
import datetime
import functools
import glob
import itertools
import os
import pathlib
import re
@ -243,6 +244,9 @@ def find_next_break(line, mincol=0, lexer=None):
elif tok.type == 'ERRORTOKEN' and ')' in tok.value:
maxcol = tok.lexpos + mincol + 1
break
elif tok.type == 'BANG':
maxcol = mincol + len(line) + 1
break
return maxcol
@ -259,11 +263,17 @@ def subproc_toks(line, mincol=-1, maxcol=None, lexer=None, returnline=False):
lexer.input(line)
toks = []
lparens = []
saw_macro = False
end_offset = 0
for tok in lexer:
pos = tok.lexpos
if tok.type not in END_TOK_TYPES and pos >= maxcol:
break
if tok.type == 'BANG':
saw_macro = True
if saw_macro and tok.type not in ('NEWLINE', 'DEDENT'):
toks.append(tok)
continue
if tok.type in LPARENS:
lparens.append(tok.type)
if len(toks) == 0 and tok.type in BEG_TOK_SKIPS:
@ -313,6 +323,8 @@ def subproc_toks(line, mincol=-1, maxcol=None, lexer=None, returnline=False):
end_offset = len(el)
if len(toks) == 0:
return # handle comment lines
elif saw_macro:
end_offset = len(toks[-1].value.rstrip()) + 1
beg, end = toks[0].lexpos, (toks[-1].lexpos + end_offset)
end = len(line[:end].rstrip())
rtn = '![' + line[beg:end] + ']'
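# Illustrative example (hypothetical line): subproc_toks('echo hi') would
# return '![echo hi]', while returnline=True would return the original line
# with only the subprocess portion wrapped.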
@ -928,14 +940,14 @@ def SLICE_REG():
def ensure_slice(x):
"""Try to convert an object into a slice, complain on failure"""
if not x:
if not x and x != 0:
return slice(None)
elif isinstance(x, slice):
elif is_slice(x):
return x
try:
x = int(x)
if x != -1:
s = slice(x, x+1)
s = slice(x, x + 1)
else:
s = slice(-1, None, None)
except ValueError:
@ -954,6 +966,28 @@ def ensure_slice(x):
return s
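# Illustrative examples (inputs are hypothetical): ensure_slice(None) returns
# slice(None), ensure_slice(0) returns slice(0, 1), and ensure_slice(-1)
# returns slice(-1, None, None) so that only the last element is kept.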
def get_portions(it, slices):
"""Yield from portions of an iterable.
Parameters
----------
it: iterable
slices: a slice or a list of slice objects
"""
if is_slice(slices):
slices = [slices]
if len(slices) == 1:
s = slices[0]
try:
yield from itertools.islice(it, s.start, s.stop, s.step)
return
except ValueError: # islice failed
pass
it = list(it)
for s in slices:
yield from it[s]
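# Illustrative examples (inputs are hypothetical):
# list(get_portions(range(10), slice(2, 5))) == [2, 3, 4], and
# list(get_portions('abcdef', [slice(0, 2), slice(-2, None)])) == ['a', 'b', 'e', 'f'].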
def is_slice_as_str(x):
"""
Test if string x is a slice. If not a string return False.
@ -980,7 +1014,7 @@ def is_int_as_str(x):
def is_string_set(x):
"""Tests if something is a set of strings"""
return (isinstance(x, abc.Set) and
return (isinstance(x, cabc.Set) and
all(isinstance(a, str) for a in x))
@ -1016,7 +1050,7 @@ def set_to_pathsep(x, sort=False):
def is_string_seq(x):
"""Tests if something is a sequence of strings"""
return (isinstance(x, abc.Sequence) and
return (isinstance(x, cabc.Sequence) and
all(isinstance(a, str) for a in x))
@ -1024,7 +1058,7 @@ def is_nonstring_seq_of_strings(x):
"""Tests if something is a sequence of strings, where the top-level
sequence is not a string itself.
"""
return (isinstance(x, abc.Sequence) and not isinstance(x, str) and
return (isinstance(x, cabc.Sequence) and not isinstance(x, str) and
all(isinstance(a, str) for a in x))
@ -1058,7 +1092,7 @@ def seq_to_upper_pathsep(x):
def is_bool_seq(x):
"""Tests if an object is a sequence of bools."""
return isinstance(x, abc.Sequence) and all(isinstance(y, bool) for y in x)
return isinstance(x, cabc.Sequence) and all(isinstance(y, bool) for y in x)
def csv_to_bool_seq(x):
@ -1177,7 +1211,7 @@ HISTORY_UNITS = LazyObject(lambda: {
def is_history_tuple(x):
"""Tests if something is a proper history value, units tuple."""
if (isinstance(x, abc.Sequence) and
if (isinstance(x, cabc.Sequence) and
len(x) == 2 and
isinstance(x[0], (int, float)) and
x[1].lower() in CANON_HISTORY_UNITS):
@ -1224,7 +1258,7 @@ RE_HISTORY_TUPLE = LazyObject(
def to_history_tuple(x):
"""Converts to a canonincal history tuple."""
if not isinstance(x, (abc.Sequence, float, int)):
if not isinstance(x, (cabc.Sequence, float, int)):
raise ValueError('history size must be given as a sequence or number')
if isinstance(x, str):
m = RE_HISTORY_TUPLE.match(x.strip().lower())

View file

@ -27,9 +27,9 @@
"url": "http://xon.sh",
"description": ["Python virtual environment manager for xonsh."]
},
{"name": "prompt_ret_code",
"package": "xontrib-prompt-ret-code",
"url": "https://github.com/Siecje/xontrib-prompt-ret-code",
{"name": "prompt_ret_code",
"package": "xonsh",
"url": "http://xon.sh",
"description": ["Adds return code info to the prompt"]
},
{"name": "xo",
@ -50,11 +50,6 @@
"url": "https://github.com/xsteadfastx/xonsh-docker-tabcomplete",
"description": ["Adds tabcomplete functionality to docker inside of xonsh."]
},
{"name": "pacman_tabcomplete",
"package": "xonsh-pacman-tabcomplete",
"url": "https://github.com/gforsyth/xonsh-pacman-tabcomplete",
"description": ["Adds tabcomplete functionality to pacman inside of xonsh."]
},
{"name": "scrapy_tabcomplete",
"package": "xonsh-scrapy-tabcomplete",
"url": "https://github.com/Granitas/xonsh-scrapy-tabcomplete",
@ -125,13 +120,6 @@
"pip": "pip install xonsh-docker-tabcomplete"
}
},
"xonsh-pacman-tabcomplete": {
"license": "MIT",
"url": "https://github.com/gforsyth/xonsh-pacman-tabcomplete",
"install": {
"pip": "pip install xonsh-pacman-tabcomplete"
}
},
"xonsh-scrapy-tabcomplete": {
"license": "GPLv3",
"url": "https://github.com/Granitas/xonsh-scrapy-tabcomplete",

View file

@ -29,7 +29,7 @@ import sys
import stat
import getopt
import builtins
import collections.abc as abc
import collections.abc as cabc
r"""Find the full path to commands.
@ -199,7 +199,7 @@ def whichgen(command, path=None, verbose=0, exts=None):
break
else:
exts = ['.COM', '.EXE', '.BAT', '.CMD']
elif not isinstance(exts, abc.Sequence):
elif not isinstance(exts, cabc.Sequence):
raise TypeError("'exts' argument must be a sequence or None")
else:
if exts is not None:

View file

@ -0,0 +1,35 @@
from xonsh.tools import ON_WINDOWS as _ON_WINDOWS
def _ret_code_color():
if __xonsh_history__.rtns:
color = 'blue' if __xonsh_history__.rtns[-1] == 0 else 'red'
else:
color = 'blue'
if _ON_WINDOWS:
if color == 'blue':
return '{BOLD_INTENSE_CYAN}'
elif color == 'red':
return '{BOLD_INTENSE_RED}'
else:
if color == 'blue':
return '{BOLD_BLUE}'
elif color == 'red':
return '{BOLD_RED}'
def _ret_code():
if __xonsh_history__.rtns:
return_code = __xonsh_history__.rtns[-1]
if return_code != 0:
return '[{}]'.format(return_code)
return ''
$PROMPT = $PROMPT.replace('{prompt_end}{NO_COLOR}',
'{ret_code_color}{ret_code}{prompt_end}{NO_COLOR}')
$FORMATTER_DICT['ret_code_color'] = _ret_code_color
$FORMATTER_DICT['ret_code'] = _ret_code
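# Usage sketch: enable this xontrib from an interactive session or .xonshrc
# with ``xontrib load prompt_ret_code``; the return-code segment only appears
# after a command exits nonzero.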

View file

@ -1,4 +1,13 @@
"""API for Vox, the Python virtual environment manager for xonsh."""
"""
API for Vox, the Python virtual environment manager for xonsh.
Vox defines several events related to the life cycle of virtual environments:
* ``vox_on_create(env: str) -> None``
* ``vox_on_activate(env: str) -> None``
* ``vox_on_deactivate(env: str) -> None``
* ``vox_on_delete(env: str) -> None``
"""
import os
import venv
import shutil
@ -7,16 +16,45 @@ import collections.abc
from xonsh.platform import ON_POSIX, ON_WINDOWS, scandir
# This is because builtins aren't globally created during testing.
# FIXME: Is there a better way?
from xonsh.events import events
events.doc('vox_on_create', """
vox_on_create(env: str) -> None
Fired after an environment is created.
""")
events.doc('vox_on_activate', """
vox_on_activate(env: str) -> None
Fired after an environment is activated.
""")
events.doc('vox_on_deactivate', """
vox_on_deactivate(env: str) -> None
Fired after an environment is deactivated.
""")
events.doc('vox_on_delete', """
vox_on_delete(env: str) -> None
Fired after an environment is deleted (through vox).
""")
VirtualEnvironment = collections.namedtuple('VirtualEnvironment', ['env', 'bin'])
class EnvironmentInUse(Exception):
pass
"""The given environment is currently activated, and the operation cannot be performed."""
class NoEnvironmentActive(Exception):
pass
"""No environment is currently activated, and the operation cannot be performed."""
class Vox(collections.abc.Mapping):
@ -60,6 +98,7 @@ class Vox(collections.abc.Mapping):
env_path,
system_site_packages=system_site_packages, symlinks=symlinks,
with_pip=with_pip)
events.vox_on_create.fire(name)
def upgrade(self, name, *, symlinks=False, with_pip=True):
"""Create a virtual environment in $VIRTUALENV_HOME with python3's ``venv``.
@ -159,7 +198,7 @@ class Vox(collections.abc.Mapping):
env_path = builtins.__xonsh_env__['VIRTUAL_ENV']
if env_path.startswith(self.venvdir):
name = env_path[len(self.venvdir):]
if name[0] == '/':
if name[0] in '/\\':
name = name[1:]
return name
else:
@ -185,6 +224,8 @@ class Vox(collections.abc.Mapping):
if 'PYTHONHOME' in env:
type(self).oldvars['PYTHONHOME'] = env.pop('PYTHONHOME')
events.vox_on_activate.fire(name)
def deactivate(self):
"""
Deactivate the active virtual environment. Returns its name.
@ -203,6 +244,7 @@ class Vox(collections.abc.Mapping):
env.pop('VIRTUAL_ENV')
events.vox_on_deactivate.fire(env_name)
return env_name
def __delitem__(self, name):
@ -222,3 +264,5 @@ class Vox(collections.abc.Mapping):
# No current venv, ... fails
pass
shutil.rmtree(env_path)
events.vox_on_delete.fire(name)