resolved conflicts from master

This commit is contained in:
Anthony Scopatz 2016-05-08 12:05:18 -04:00
commit ef64a8001a
37 changed files with 6354 additions and 321 deletions


@ -17,8 +17,18 @@ Current Developments
$INTENSIFY_COLORS_ON_WIN environment variable.
* Added ``Ellipsis`` lookup to ``__xonsh_env__`` to allow environment variable checks, e.g. ``'HOME' in ${...}``
* Added an option to update ``os.environ`` every time the xonsh environment changes.
This disabled by default, but can be enabled by setting ``$UPDATE_OS_ENVIRON`` to
This is disabled by default but can be enabled by setting ``$UPDATE_OS_ENVIRON`` to
True.
* Added Windows 'cmd.exe' as a foreign shell. This gives xonsh the ability to source
Windows Batch files (.bat and .cmd). Calling ``source-cmd script.bat`` or the
alias ``source-bat script.bat`` will call the bat file, and changes to the
environment variables will be reflected in xonsh.
* Added an alias for the conda environment activate/deactivate batch scripts when
running the Anaconda python distribution on Windows.
* Added a menu entry to launch xonsh when installing xonsh from a conda package
* Added a new ``which`` alias that supports regular ``which`` behavior and
  also searches through xonsh aliases
* Added support for prompt_toolkit 1.0.0
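The ``Ellipsis`` lookup entry above can be illustrated with a toy model (names here are hypothetical, not xonsh's real ``Env`` class): indexing the environment with ``...`` returns the mapping itself, which is what makes membership tests like ``'HOME' in ${...}`` work.

```python
class EnvSketch(dict):
    """Toy sketch of the ``${...}`` mechanism, not xonsh's actual Env."""
    def __getitem__(self, key):
        if key is Ellipsis:
            # env[...] hands back the whole mapping, so membership
            # tests like `'HOME' in env[...]` just work.
            return self
        return super().__getitem__(key)

env = EnvSketch(HOME='/home/user')
```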
**Changed:**
@ -31,6 +41,10 @@ Current Developments
* Left and Right arrows in the ``prompt_toolkit`` shell now wrap in multiline
environments
* Regexpath matching with backticks now returns an empty list in Python mode.
* Pygments added as a dependency for the conda package
* PLY is no longer an external dependency but is bundled in xonsh/ply. Xonsh can
  therefore run without any external dependencies, although having prompt-toolkit
  is recommended.
**Deprecated:** None
@ -39,7 +53,15 @@ Current Developments
**Fixed:**
* Fixed bug with loading prompt-toolkit shell < v0.57.
* Fixed bug with prompt-toolkit completion when the cursor is not at the end of the line
* Fixed bug with prompt-toolkit completion when the cursor is not at the end of
the line.
* Aliases will now evaluate environment variables and other expansions
  at execution time rather than passing through a literal string.
* Fixed environment variables from os.environ not being loaded when running
  a script
* Fixed bug that prevented `source-alias` from working.
* Fixed deadlock on Windows when running a subprocess that generates enough
  output to fill the OS pipe buffer
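The pipe-buffer deadlock mentioned above can be demonstrated and avoided in plain Python; this is a standard-library sketch of the failure mode, not xonsh's actual fix. A child writing more than the OS pipe buffer (typically ~64 KiB) blocks unless the parent drains the pipe while waiting, which `communicate()` does.

```python
import subprocess
import sys

# Child produces far more output than a typical OS pipe buffer can hold.
child = "import sys; sys.stdout.write('x' * 200000)"
proc = subprocess.Popen([sys.executable, '-c', child], stdout=subprocess.PIPE)
# communicate() reads stdout while waiting for exit, so the child never
# blocks on a full pipe (calling proc.wait() first could deadlock here).
out, _ = proc.communicate()
```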
**Security:** None


@ -37,3 +37,6 @@ will help you put a finger on how to do the equivalent task in xonsh.
* - ``||``
- ``or`` as well as ``||``
- Logical-or operator for subprocesses.
* - ``$?``
- ``__xonsh_history__.rtns[-1]``
- Returns the exit code, or status, of the previous command.
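The ``$?`` equivalent in the table rows above can be mimicked in plain Python; in this sketch `rtns` is a stand-in for ``__xonsh_history__.rtns``, a session-wide list of subprocess exit codes.

```python
import subprocess
import sys

rtns = []  # stand-in for __xonsh_history__.rtns: exit codes, oldest first
for code in (0, 3):
    # Run a trivial child that exits with the given status.
    proc = subprocess.run([sys.executable, '-c',
                           'import sys; sys.exit({})'.format(code)])
    rtns.append(proc.returncode)
# rtns[-1] now plays the role of Bash's $? (status of the last command)
```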


@ -5,7 +5,7 @@ Xonsh currently has the following external dependencies,
*Run Time:*
#. Python v3.4+
#. PLY
#. PLY (optional, included with xonsh)
#. prompt-toolkit (optional)
#. Jupyter (optional)
#. setproctitle (optional)


@ -96,3 +96,15 @@ manually use the ``![]``, ``!()``, ``$[]`` or ``$()`` operators on your code.
Yes, context-sensitive parsing is gross. But the point of xonsh is that it uses context-sensitive parsing and
is ultimately a lot less gross than other shell languages, such as BASH.
Furthermore, its use is heavily limited here.
6. Gotchas
----------
There are a few gotchas when using xonsh across multiple versions of Python,
since some behavior can differ depending on the underlying Python version.
For example, double-star globbing ``**`` will only work on Python 3.5+ (i.e., not on 3.4),
as recursive globbing is `new in Python 3.5 <https://docs.python.org/3/library/glob.html#glob.glob>`_.
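The 3.5+ behavior can be shown with the standard library alone (the directory layout below is purely illustrative); ``glob.glob`` only honors ``**`` when the 3.5+ ``recursive`` keyword is passed.

```python
import glob
import os
import tempfile

with tempfile.TemporaryDirectory() as root:
    # Build a nested layout: <root>/a/b/f.txt
    os.makedirs(os.path.join(root, 'a', 'b'))
    open(os.path.join(root, 'a', 'b', 'f.txt'), 'w').close()
    # recursive ** matching is the Python 3.5+ feature in question
    hits = glob.glob(os.path.join(root, '**', '*.txt'), recursive=True)
```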


@ -221,7 +221,7 @@ Xonsh currently has the following external dependencies,
*Run Time:*
#. Python v3.4+
#. PLY
#. PLY (optional, included with xonsh)
#. prompt-toolkit (optional)
#. Jupyter (optional)
#. setproctitle (optional)


@ -1,4 +0,0 @@
python setup.py install --conda
copy "%RECIPE_DIR%\xonsh_shortcut.json" "%PREFIX%\Menu\xonsh_shortcut.json"
copy "%RECIPE_DIR%\xonsh.ico" "%PREFIX%\Menu\xonsh.ico"


@ -1,2 +0,0 @@
#!/bin/bash
$PYTHON setup.py install --conda


@ -6,7 +6,9 @@ source:
git_url: ../
build:
script: python setup.py install --single-version-externally-managed --record=record.txt
number: {{ environ.get('GIT_DESCRIBE_NUMBER', 0) }}
skip: True # [py2k]
entry_points:
- xonsh = xonsh.main:main
@ -21,13 +23,14 @@ requirements:
- ply
- prompt_toolkit
- setproctitle
- pygments
app:
entry: xonsh
icon: ../docs/_static/ascii_conch_part_color.png
about:
home: http://xon.sh/
license: BSD
summary: xonsh is a Python-ish, BASHwards-facing shell.
# Removed temporarily until this is better supported by the Anaconda launcher
#app:
# entry: xonsh
# icon: icon.png


@ -1,2 +1 @@
numpydoc==0.5
ply==3.4


@ -36,11 +36,6 @@ from xonsh import __version__ as XONSH_VERSION
TABLES = ['xonsh/lexer_table.py', 'xonsh/parser_table.py']
CONDA = ("--conda" in sys.argv)
if CONDA:
sys.argv.remove("--conda")
def clean_tables():
"""Remove the lexer/parser modules that are dynamically created."""
for f in TABLES:
@ -71,23 +66,21 @@ def install_jupyter_hook(root=None):
"language": "xonsh",
"codemirror_mode": "shell",
}
if CONDA:
d = os.path.join(sys.prefix + '/share/jupyter/kernels/xonsh/')
os.makedirs(d, exist_ok=True)
with TemporaryDirectory() as d:
os.chmod(d, 0o755) # Starts off as 700, not user readable
if sys.platform == 'win32':
# Ensure that conda-build detects the hard coded prefix
spec['argv'][0] = spec['argv'][0].replace(os.sep, os.altsep)
with open(os.path.join(d, 'kernel.json'), 'w') as f:
json.dump(spec, f, sort_keys=True)
else:
with TemporaryDirectory() as d:
os.chmod(d, 0o755) # Starts off as 700, not user readable
with open(os.path.join(d, 'kernel.json'), 'w') as f:
json.dump(spec, f, sort_keys=True)
print('Installing Jupyter kernel spec...')
KernelSpecManager().install_kernel_spec(
d, 'xonsh', user=('--user' in sys.argv), replace=True,
prefix=root)
if 'CONDA_BUILD' in os.environ:
root = sys.prefix
if sys.platform == 'win32':
root = root.replace(os.sep, os.altsep)
print('Installing Jupyter kernel spec...')
KernelSpecManager().install_kernel_spec(
d, 'xonsh', user=('--user' in sys.argv), replace=True,
prefix=root)
class xinstall(install):
@ -146,8 +139,6 @@ def main():
cmdclass={'install': xinstall, 'sdist': xsdist},
)
if HAVE_SETUPTOOLS:
skw['setup_requires'] = ['ply']
skw['install_requires'] = ['ply']
skw['entry_points'] = {
'pygments.lexers': ['xonsh = xonsh.pyghooks:XonshLexer',
'xonshcon = xonsh.pyghooks:XonshConsoleLexer',

3
tests/batch.bat Normal file

@ -0,0 +1,3 @@
echo on
set ENV_TO_BE_ADDED=Hallo world
set ENV_TO_BE_REMOVED=


@ -8,7 +8,7 @@ from nose.plugins.skip import SkipTest
from nose.tools import assert_equal
import xonsh.built_ins as built_ins
from xonsh.built_ins import Aliases
from xonsh.aliases import Aliases
from xonsh.environ import Env
from xonsh.tools import ON_WINDOWS
@ -35,13 +35,16 @@ def test_imports():
})
def test_eval_normal():
assert_equal(ALIASES.get('o'), ['omg', 'lala'])
with mock_xonsh_env({}):
assert_equal(ALIASES.get('o'), ['omg', 'lala'])
def test_eval_self_reference():
assert_equal(ALIASES.get('ls'), ['ls', '- -'])
with mock_xonsh_env({}):
assert_equal(ALIASES.get('ls'), ['ls', '- -'])
def test_eval_recursive():
assert_equal(ALIASES.get('color_ls'), ['ls', '- -', '--color=true'])
with mock_xonsh_env({}):
assert_equal(ALIASES.get('color_ls'), ['ls', '- -', '--color=true'])
def test_eval_recursive_callable_partial():
if ON_WINDOWS:


@ -8,6 +8,7 @@ import nose
from nose.plugins.skip import SkipTest
from nose.tools import assert_equal, assert_true, assert_false
from xonsh.tools import ON_WINDOWS
from xonsh.foreign_shells import foreign_shell_data, parse_env, parse_aliases
def test_parse_env():
@ -54,5 +55,26 @@ def test_foreign_bash_data():
yield assert_equal, expval, obsaliases.get(key, False)
def test_foreign_cmd_data():
if not ON_WINDOWS:
raise SkipTest
env = (('ENV_TO_BE_REMOVED', 'test'),)
batchfile = os.path.join(os.path.dirname(__file__), 'batch.bat')
source_cmd = 'call "{}"\necho off'.format(batchfile)
try:
obsenv, _ = foreign_shell_data('cmd', prevcmd=source_cmd,
currenv=env,
interactive=False,
sourcer='call', envcmd='set',
use_tmpfile=True,
safe=False)
except (subprocess.CalledProcessError, FileNotFoundError):
raise SkipTest
assert_true('ENV_TO_BE_ADDED' in obsenv)
assert_true(obsenv['ENV_TO_BE_ADDED'] == 'Hallo world')
assert_true('ENV_TO_BE_REMOVED' not in obsenv)
if __name__ == '__main__':
nose.runmodule()
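The environment scraping this test exercises ultimately means parsing `set`-style `KEY=VALUE` output from the foreign shell; here is a hedged sketch of that step (an illustrative helper, not xonsh's actual `parse_env`). A line ending in a bare `=`, like `ENV_TO_BE_REMOVED=` in the batch file above, signals an unset variable and is dropped.

```python
def parse_set_output(text):
    """Sketch: parse `cmd /c set`-style KEY=VALUE lines into a dict."""
    env = {}
    for line in text.splitlines():
        key, sep, val = line.partition('=')
        if sep and val:  # a trailing '=' with no value means "unset"
            env[key] = val
    return env
```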


@ -9,7 +9,11 @@ from pprint import pformat
import nose
from ply.lex import LexToken
try:
from ply.lex import LexToken
except ImportError:
from xonsh.ply.lex import LexToken
from xonsh.lexer import Lexer


@ -7,10 +7,11 @@ import nose
from nose.tools import assert_equal, assert_true, assert_false
from xonsh.lexer import Lexer
from xonsh.tools import subproc_toks, subexpr_from_unbalanced, is_int, \
always_true, always_false, ensure_string, is_env_path, str_to_env_path, \
env_path_to_str, escape_windows_title_string, is_bool, to_bool, bool_to_str, \
ensure_int_or_slice, is_float, is_string, check_for_partial_string
from xonsh.tools import (subproc_toks, subexpr_from_unbalanced, is_int,
always_true, always_false, ensure_string, is_env_path, str_to_env_path,
env_path_to_str, escape_windows_cmd_string, is_bool, to_bool, bool_to_str,
ensure_int_or_slice, is_float, is_string, check_for_partial_string,
argvquote)
LEXER = Lexer()
LEXER.build()
@ -187,6 +188,11 @@ def test_subproc_toks_paren_and_paren():
obs = subproc_toks('(echo a) and (echo b)', maxcol=9, lexer=LEXER, returnline=True)
assert_equal(exp, obs)
def test_subproc_toks_semicolon_only():
exp = None
obs = subproc_toks(';', lexer=LEXER, returnline=True)
assert_equal(exp, obs)
def test_subexpr_from_unbalanced_parens():
cases = [
('f(x.', 'x.'),
@ -303,19 +309,37 @@ def test_ensure_int_or_slice():
obs = ensure_int_or_slice(inp)
yield assert_equal, exp, obs
def test_escape_windows_title_string():
def test_escape_windows_cmd_string():
cases = [
('', ''),
('foo', 'foo'),
('foo&bar', 'foo^&bar'),
('foo$?-/_"\\', 'foo$?-/_"\\'),
('foo$?-/_"\\', 'foo$?-/_^"\\'),
('^&<>|', '^^^&^<^>^|'),
('this /?', 'this /.')
]
for st, esc in cases:
obs = escape_windows_title_string(st)
obs = escape_windows_cmd_string(st)
yield assert_equal, esc, obs
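The escaping rules these cases exercise can be sketched as a tiny standalone function (an illustrative reimplementation consistent with the cases above, not xonsh's actual `escape_windows_cmd_string`): cmd.exe metacharacters are prefixed with `^`, and the help trigger `/?` is neutralized.

```python
def escape_cmd_string_sketch(s):
    """Sketch of cmd.exe string escaping, matching the test cases above."""
    for ch in '^&<>|"':  # escape '^' first so added carets are not re-escaped
        s = s.replace(ch, '^' + ch)
    return s.replace('/?', '/.')  # '/?' would trigger cmd.exe's help output
```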
def test_argvquote():
cases = [
('', '""'),
('foo', 'foo'),
(r'arg1 "hallo, "world"" "\some\path with\spaces")',
r'"arg1 \"hallo, \"world\"\" \"\some\path with\spaces\")"'),
(r'"argument"2" argument3 argument4',
r'"\"argument\"2\" argument3 argument4"'),
(r'"\foo\bar bar\foo\" arg',
r'"\"\foo\bar bar\foo\\\" arg"')
]
for st, esc in cases:
obs = argvquote(st)
yield assert_equal, esc, obs
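The cases above follow the Windows `CommandLineToArgvW` quoting convention: backslash runs immediately before a `"` are doubled, the quote itself is escaped as `\"`, and a trailing run is doubled before the closing quote. A sketch that reproduces these cases (illustrative, not xonsh's actual `argvquote`):

```python
def argvquote_sketch(arg, force=False):
    """Quote an argument per the CommandLineToArgvW rules (sketch)."""
    if not force and arg and not any(c in arg for c in ' \t\n\v"'):
        return arg  # nothing to quote
    out = '"'
    n_bs = 0  # length of the backslash run just emitted
    for ch in arg:
        if ch == '\\':
            out += ch
            n_bs += 1
        elif ch == '"':
            # double the preceding run, then escape the quote itself
            out += '\\' * n_bs + '\\"'
            n_bs = 0
        else:
            out += ch
            n_bs = 0
    # double a trailing backslash run so the closing quote survives
    return out + '\\' * n_bs + '"'
```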
_leaders = ('', 'not empty')
_r = ('r', '')
_b = ('b', '')


@ -56,6 +56,7 @@ def mock_xonsh_env(xenv):
builtins.__xonsh_exit__ = False
builtins.__xonsh_superhelp__ = lambda x: x
builtins.__xonsh_regexpath__ = lambda x: []
builtins.__xonsh_expand_path__ = lambda x: x
builtins.__xonsh_subproc_captured__ = sp
builtins.__xonsh_subproc_uncaptured__ = sp
builtins.__xonsh_ensure_list_of_strs__ = ensure_list_of_strs
@ -72,6 +73,7 @@ def mock_xonsh_env(xenv):
del builtins.__xonsh_exit__
del builtins.__xonsh_superhelp__
del builtins.__xonsh_regexpath__
del builtins.__xonsh_expand_path__
del builtins.__xonsh_subproc_captured__
del builtins.__xonsh_subproc_uncaptured__
del builtins.__xonsh_ensure_list_of_strs__


@ -1,20 +1,142 @@
# -*- coding: utf-8 -*-
"""Aliases for the xonsh shell."""
import builtins
import os
import shlex
import builtins
import sys
import subprocess
from argparse import ArgumentParser
from collections.abc import MutableMapping, Iterable, Sequence
from xonsh.dirstack import cd, pushd, popd, dirs, _get_cwd
from xonsh.jobs import jobs, fg, bg, kill_all_jobs
from xonsh.proc import foreground
from xonsh.timings import timeit_alias
from xonsh.tools import ON_MAC, ON_WINDOWS, XonshError, to_bool
from xonsh.tools import ON_MAC, ON_WINDOWS, XonshError, to_bool, string_types
from xonsh.history import main as history_alias
from xonsh.replay import main as replay_main
from xonsh.environ import locate_binary
from xonsh.foreign_shells import foreign_shell_data
from xonsh.vox import Vox
from xonsh.tools import argvquote, escape_windows_cmd_string
class Aliases(MutableMapping):
"""Represents a location to hold and look up aliases."""
def __init__(self, *args, **kwargs):
self._raw = {}
self.update(*args, **kwargs)
def get(self, key, default=None):
"""Returns the (possibly modified) value. If the key is not present,
then `default` is returned.
If the value is callable, it is returned without modification. If it
is an iterable of strings it will be evaluated recursively to expand
other aliases, resulting in a new list or a "partially applied"
callable.
"""
val = self._raw.get(key)
if val is None:
return default
elif isinstance(val, Iterable) or callable(val):
return self.eval_alias(val, seen_tokens={key})
else:
msg = 'alias of {!r} has an inappropriate type: {!r}'
raise TypeError(msg.format(key, val))
def eval_alias(self, value, seen_tokens=frozenset(), acc_args=()):
"""
"Evaluates" the alias `value`, by recursively looking up the leftmost
token and "expanding" if it's also an alias.
A value like ["cmd", "arg"] might transform like this:
> ["cmd", "arg"] -> ["ls", "-al", "arg"] -> callable()
where `cmd=ls -al` and `ls` is an alias with its value being a
callable. The resulting callable will be "partially applied" with
["-al", "arg"].
"""
# Beware of mutability: default values for keyword args are evaluated
# only once.
if callable(value):
if acc_args: # Partial application
def _alias(args, stdin=None):
args = list(acc_args) + args
return value(args, stdin=stdin)
return _alias
else:
return value
else:
expand_path = builtins.__xonsh_expand_path__
token, *rest = map(expand_path, value)
if token in seen_tokens or token not in self._raw:
# ^ Making sure things like `egrep=egrep --color=auto` works,
# and that `l` evals to `ls --color=auto -CF` if `l=ls -CF`
# and `ls=ls --color=auto`
rtn = [token]
rtn.extend(rest)
rtn.extend(acc_args)
return rtn
else:
seen_tokens = seen_tokens | {token}
acc_args = rest + list(acc_args)
return self.eval_alias(self._raw[token], seen_tokens, acc_args)
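The recursion in `eval_alias` can be distilled into a standalone sketch over a plain dict (a hypothetical helper that omits the callable/partial-application and path-expansion cases): keep expanding the leftmost token while it names another alias, with `seen` guarding against cycles such as `ls=ls --color=auto`.

```python
def eval_alias_sketch(aliases, value, seen=frozenset(), acc=()):
    """Sketch of recursive alias expansion over a plain dict of lists."""
    token, *rest = value
    if token in seen or token not in aliases:
        # Cycle guard or plain command: stop expanding here.
        return [token, *rest, *acc]
    # Expand the leftmost token; its trailing args accumulate on the right.
    return eval_alias_sketch(aliases, aliases[token],
                             seen | {token}, tuple(rest) + tuple(acc))
```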
def expand_alias(self, line):
"""Expands any aliases present in line if alias does not point to a
builtin function and if alias is only a single command.
"""
word = line.split(' ', 1)[0]
if word in builtins.aliases and isinstance(self.get(word), Sequence):
word_idx = line.find(word)
expansion = ' '.join(self.get(word))
line = line[:word_idx] + expansion + line[word_idx+len(word):]
return line
#
# Mutable mapping interface
#
def __getitem__(self, key):
return self._raw[key]
def __setitem__(self, key, val):
if isinstance(val, string_types):
self._raw[key] = shlex.split(val)
else:
self._raw[key] = val
def __delitem__(self, key):
del self._raw[key]
def update(self, *args, **kwargs):
for key, val in dict(*args, **kwargs).items():
self[key] = val
def __iter__(self):
yield from self._raw
def __len__(self):
return len(self._raw)
def __str__(self):
return str(self._raw)
def __repr__(self):
return '{0}.{1}({2})'.format(self.__class__.__module__,
self.__class__.__name__, self._raw)
def _repr_pretty_(self, p, cycle):
name = '{0}.{1}'.format(self.__class__.__module__,
self.__class__.__name__)
with p.group(0, name + '(', ')'):
if cycle:
p.text('...')
elif len(self):
p.break_()
p.pretty(dict(self))
def exit(args, stdin=None): # pylint:disable=redefined-builtin,W0622
@ -27,6 +149,7 @@ def exit(args, stdin=None): # pylint:disable=redefined-builtin,W0622
_SOURCE_FOREIGN_PARSER = None
def _ensure_source_foreign_parser():
global _SOURCE_FOREIGN_PARSER
if _SOURCE_FOREIGN_PARSER is not None:
@ -63,8 +186,12 @@ def _ensure_source_foreign_parser():
help='code to find locations of all native functions '
'in the shell language.')
parser.add_argument('--sourcer', default=None, dest='sourcer',
help='the source command in the target shell language, '
'default: source.')
help='the source command in the target shell '
'language, default: source.')
parser.add_argument('--use-tmpfile', type=to_bool, default=False,
help='whether the commands for source shell should be '
'written to a temporary file.',
dest='use_tmpfile')
_SOURCE_FOREIGN_PARSER = parser
return parser
@ -77,16 +204,23 @@ def source_foreign(args, stdin=None):
pass # don't change prevcmd if given explicitly
elif os.path.isfile(ns.files_or_code[0]):
# we have filename to source
ns.prevcmd = '{0} "{1}"'.format(ns.sourcer, '" "'.join(ns.files_or_code))
ns.prevcmd = '{} "{}"'.format(ns.sourcer, '" "'.join(ns.files_or_code))
elif ns.prevcmd is None:
ns.prevcmd = ' '.join(ns.files_or_code) # code to run, no files
foreign_shell_data.cache_clear() # make sure that we don't get prev src
fsenv, fsaliases = foreign_shell_data(shell=ns.shell, login=ns.login,
interactive=ns.interactive, envcmd=ns.envcmd,
aliascmd=ns.aliascmd, extra_args=ns.extra_args,
safe=ns.safe, prevcmd=ns.prevcmd,
postcmd=ns.postcmd, funcscmd=ns.funcscmd,
sourcer=ns.sourcer)
interactive=ns.interactive,
envcmd=ns.envcmd,
aliascmd=ns.aliascmd,
extra_args=ns.extra_args,
safe=ns.safe, prevcmd=ns.prevcmd,
postcmd=ns.postcmd,
funcscmd=ns.funcscmd,
sourcer=ns.sourcer,
use_tmpfile=ns.use_tmpfile)
if fsenv is None:
return (None, 'xonsh: error: Source failed: '
'{}\n'.format(ns.prevcmd), 1)
# apply results
env = builtins.__xonsh_env__
denv = env.detype()
@ -94,6 +228,11 @@ def source_foreign(args, stdin=None):
if k in denv and v == denv[k]:
continue # no change from original
env[k] = v
# Remove any env-vars that were unset by the script.
for k in denv:
if k not in fsenv:
env.pop(k, None)
# Update aliases
baliases = builtins.aliases
for k, v in fsaliases.items():
if k in baliases and v == baliases[k]:
@ -101,30 +240,36 @@ def source_foreign(args, stdin=None):
baliases[k] = v
def source_bash(args, stdin=None):
"""Simple Bash-specific wrapper around source-foreign."""
args = list(args)
args.insert(0, 'bash')
args.append('--sourcer=source')
return source_foreign(args, stdin=stdin)
def source_zsh(args, stdin=None):
"""Simple zsh-specific wrapper around source-foreign."""
args = list(args)
args.insert(0, 'zsh')
args.append('--sourcer=source')
return source_foreign(args, stdin=stdin)
def source_alias(args, stdin=None):
"""Executes the contents of the provided files in the current context.
If sourced file isn't found in cwd, search for file along $PATH to source instead"""
If sourced file isn't found in cwd, search for file along $PATH to source
instead"""
for fname in args:
if not os.path.isfile(fname):
fname = locate_binary(fname)
with open(fname, 'r') as fp:
execx(fp.read(), 'exec', builtins.__xonsh_ctx__)
builtins.execx(fp.read(), 'exec', builtins.__xonsh_ctx__)
def source_cmd(args, stdin=None):
"""Simple cmd.exe-specific wrapper around source-foreign."""
args = list(args)
fpath = locate_binary(args[0])
args[0] = fpath if fpath else args[0]
if not os.path.isfile(args[0]):
return (None, 'xonsh: error: File not found: {}\n'.format(args[0]), 1)
prevcmd = 'call '
prevcmd += ' '.join([argvquote(arg, force=True) for arg in args])
prevcmd = escape_windows_cmd_string(prevcmd)
prevcmd += '\n@echo off'
args.append('--prevcmd={}'.format(prevcmd))
args.insert(0, 'cmd')
args.append('--interactive=0')
args.append('--sourcer=call')
args.append('--envcmd=set')
args.append('--postcmd=if errorlevel 1 exit 1')
args.append('--use-tmpfile=1')
return source_foreign(args, stdin=stdin)
def xexec(args, stdin=None):
@ -137,9 +282,10 @@ def xexec(args, stdin=None):
try:
os.execvpe(args[0], args, denv)
except FileNotFoundError as e:
return 'xonsh: ' + e.args[1] + ': ' + args[0] + '\n'
return (None, 'xonsh: exec: file not found: {}: {}'
'\n'.format(e.args[1], args[0]), 1)
else:
return 'xonsh: exec: no args specified\n'
return (None, 'xonsh: exec: no args specified\n', 1)
_BANG_N_PARSER = None
@ -171,6 +317,52 @@ def bang_bang(args, stdin=None):
return bang_n(['-1'])
def which_version():
"""Returns output from system `which -v`"""
_ver = subprocess.run(['which', '-v'], stdout=subprocess.PIPE)
return _ver.stdout.decode('utf-8')
def which(args, stdin=None):
"""
Checks if the argument is a xonsh alias, then if it is an executable, and
finally raises an error if neither is found.
If the '-a' flag is passed, runs both checks to return both the xonsh
match and the `which` match.
"""
desc = "Parses arguments to which wrapper"
parser = ArgumentParser('which', description=desc)
parser.add_argument('arg', type=str,
help='The executable or alias to search for')
parser.add_argument('-a', action='store_true', dest='all',
help='Show all matches in $PATH and xonsh.aliases')
parser.add_argument('-s', '--skip-alias', action='store_true',
help='Do not search in xonsh.aliases', dest='skip')
parser.add_argument('-v', '-V', '--version', action='version',
version='{}'.format(which_version()))
pargs = parser.parse_args(args)
# Skip the alias check if the user asks to skip it.
if pargs.arg in builtins.aliases and not pargs.skip:
match = pargs.arg
print('{} -> {}'.format(match, builtins.aliases[match]))
if pargs.all:
try:
subprocess.run(['which'] + args,
stderr=subprocess.PIPE,
check=True)
except subprocess.CalledProcessError:
pass
else:
try:
subprocess.run(['which'] + args,
stderr=subprocess.PIPE,
check=True)
except subprocess.CalledProcessError:
raise XonshError('{} not in {} or xonsh.builtins.aliases'
.format(args[0], ':'.join(__xonsh_env__['PATH'])))
def xonfig(args, stdin=None):
"""Runs the xonsh configuration utility."""
from xonsh.xonfig import main # lazy import
@ -189,6 +381,7 @@ def vox(args, stdin=None):
vox = Vox()
return vox(args, stdin=stdin)
@foreground
def mpl(args, stdin=None):
"""Hooks to matplotlib"""
@ -196,82 +389,96 @@ def mpl(args, stdin=None):
show()
DEFAULT_ALIASES = {
'cd': cd,
'pushd': pushd,
'popd': popd,
'dirs': dirs,
'jobs': jobs,
'fg': fg,
'bg': bg,
'EOF': exit,
'exit': exit,
'quit': exit,
'xexec': xexec,
'source': source_alias,
'source-zsh': source_zsh,
'source-bash': source_bash,
'source-foreign': source_foreign,
'history': history_alias,
'replay': replay_main,
'!!': bang_bang,
'!n': bang_n,
'mpl': mpl,
'trace': trace,
'timeit': timeit_alias,
'xonfig': xonfig,
'scp-resume': ['rsync', '--partial', '-h', '--progress', '--rsh=ssh'],
'ipynb': ['jupyter', 'notebook', '--no-browser'],
'vox': vox,
}
if ON_WINDOWS:
# Borrow builtin commands from cmd.exe.
WINDOWS_CMD_ALIASES = {
'cls',
'copy',
'del',
'dir',
'erase',
'md',
'mkdir',
'mklink',
'move',
'rd',
'ren',
'rename',
'rmdir',
'time',
'type',
'vol'
def make_default_aliases():
"""Creates a new default aliases dictionary."""
default_aliases = {
'cd': cd,
'pushd': pushd,
'popd': popd,
'dirs': dirs,
'jobs': jobs,
'fg': fg,
'bg': bg,
'EOF': exit,
'exit': exit,
'quit': exit,
'xexec': xexec,
'source': source_alias,
'source-zsh': ['source-foreign', 'zsh', '--sourcer=source'],
'source-bash': ['source-foreign', 'bash', '--sourcer=source'],
'source-cmd': source_cmd,
'source-foreign': source_foreign,
'history': history_alias,
'replay': replay_main,
'!!': bang_bang,
'!n': bang_n,
'mpl': mpl,
'trace': trace,
'timeit': timeit_alias,
'xonfig': xonfig,
'scp-resume': ['rsync', '--partial', '-h', '--progress', '--rsh=ssh'],
'ipynb': ['jupyter', 'notebook', '--no-browser'],
'vox': vox,
'which': which,
}
if ON_WINDOWS:
# Borrow builtin commands from cmd.exe.
windows_cmd_aliases = {
'cls',
'copy',
'del',
'dir',
'erase',
'md',
'mkdir',
'mklink',
'move',
'rd',
'ren',
'rename',
'rmdir',
'time',
'type',
'vol'
}
for alias in windows_cmd_aliases:
default_aliases[alias] = ['cmd', '/c', alias]
default_aliases['call'] = ['source-cmd']
default_aliases['source-bat'] = ['source-cmd']
# Add aliases specific to the Anaconda python distribution.
if 'Anaconda' in sys.version:
def source_cmd_keep_prompt(args, stdin=None):
p = builtins.__xonsh_env__.get('PROMPT')
source_cmd(args, stdin=stdin)
builtins.__xonsh_env__['PROMPT'] = p
default_aliases['source-cmd-keep-prompt'] = source_cmd_keep_prompt
default_aliases['activate'] = ['source-cmd-keep-prompt',
'activate.bat']
default_aliases['deactivate'] = ['source-cmd-keep-prompt',
'deactivate.bat']
if not locate_binary('sudo'):
import xonsh.winutils as winutils
for alias in WINDOWS_CMD_ALIASES:
DEFAULT_ALIASES[alias] = ['cmd', '/c', alias]
def sudo(args, stdin=None):
if len(args) < 1:
print('You need to provide an executable to run as '
'Administrator.')
return
cmd = args[0]
if locate_binary(cmd):
return winutils.sudo(cmd, args[1:])
elif cmd.lower() in windows_cmd_aliases:
args = ['/D', '/C', 'CD', _get_cwd(), '&&'] + args
return winutils.sudo('cmd', args)
else:
msg = 'Cannot find the path for executable "{0}".'
print(msg.format(cmd))
DEFAULT_ALIASES['which'] = ['where']
default_aliases['sudo'] = sudo
elif ON_MAC:
default_aliases['ls'] = ['ls', '-G']
else:
default_aliases['grep'] = ['grep', '--color=auto']
default_aliases['ls'] = ['ls', '--color=auto', '-v']
return default_aliases
if not locate_binary('sudo'):
import xonsh.winutils as winutils
def sudo(args, sdin=None):
if len(args) < 1:
print('You need to provide an executable to run as Administrator.')
return
cmd = args[0]
if locate_binary(cmd):
return winutils.sudo(cmd, args[1:])
elif cmd.lower() in WINDOWS_CMD_ALIASES:
return winutils.sudo('cmd', ['/D', '/C', 'CD', _get_cwd(), '&&'] + args)
else:
print('Cannot find the path for executable "{0}".'.format(cmd))
DEFAULT_ALIASES['sudo'] = sudo
elif ON_MAC:
DEFAULT_ALIASES['ls'] = ['ls', '-G']
else:
DEFAULT_ALIASES['grep'] = ['grep', '--color=auto']
DEFAULT_ALIASES['ls'] = ['ls', '--color=auto', '-v']


@ -6,7 +6,7 @@ import sys
import time
import builtins
from xonsh.tools import XonshError, escape_windows_title_string, ON_WINDOWS, \
from xonsh.tools import XonshError, escape_windows_cmd_string, ON_WINDOWS, \
print_exception, HAVE_PYGMENTS
from xonsh.completer import Completer
from xonsh.environ import multiline_prompt, format_prompt
@ -214,7 +214,7 @@ class BaseShell(object):
return
t = format_prompt(t)
if ON_WINDOWS and 'ANSICON' not in env:
t = escape_windows_title_string(t)
t = escape_windows_cmd_string(t)
os.system('title {}'.format(t))
else:
os.write(1, "\x1b]2;{0}\x07".format(t).encode())


@ -17,7 +17,7 @@ import tempfile
from glob import glob, iglob
from subprocess import Popen, PIPE, STDOUT, CalledProcessError
from contextlib import contextmanager
from collections import Sequence, MutableMapping, Iterable
from collections import Sequence, Iterable
from xonsh.tools import (
suggest_commands, XonshError, ON_POSIX, ON_WINDOWS, string_types,
@ -29,6 +29,7 @@ from xonsh.jobs import add_job, wait_for_active_job
from xonsh.proc import (ProcProxy, SimpleProcProxy, ForegroundProcProxy,
SimpleForegroundProcProxy, TeePTYProc,
CompletedCommand, HiddenCompletedCommand)
from xonsh.aliases import Aliases, make_default_aliases
from xonsh.history import History
from xonsh.foreign_shells import load_foreign_aliases
@ -55,118 +56,6 @@ def resetting_signal_handle(sig, f):
signal.signal(sig, newh)
class Aliases(MutableMapping):
"""Represents a location to hold and look up aliases."""
def __init__(self, *args, **kwargs):
self._raw = {}
self.update(*args, **kwargs)
def get(self, key, default=None):
"""Returns the (possibly modified) value. If the key is not present,
then `default` is returned.
If the value is callable, it is returned without modification. If it
is an iterable of strings it will be evaluated recursively to expand
other aliases, resulting in a new list or a "partially applied"
callable.
"""
val = self._raw.get(key)
if val is None:
return default
elif isinstance(val, Iterable) or callable(val):
return self.eval_alias(val, seen_tokens={key})
else:
msg = 'alias of {!r} has an inappropriate type: {!r}'
raise TypeError(msg.format(key, val))
def eval_alias(self, value, seen_tokens, acc_args=[]):
"""
"Evaluates" the alias `value`, by recursively looking up the leftmost
token and "expanding" if it's also an alias.
A value like ["cmd", "arg"] might transform like this:
> ["cmd", "arg"] -> ["ls", "-al", "arg"] -> callable()
where `cmd=ls -al` and `ls` is an alias with its value being a
callable. The resulting callable will be "partially applied" with
["-al", "arg"].
"""
# Beware of mutability: default values for keyword args are evaluated
# only once.
if callable(value):
if acc_args: # Partial application
def _alias(args, stdin=None):
args = [expand_path(i) for i in (acc_args + args)]
return value(args, stdin=stdin)
return _alias
else:
return value
else:
token, *rest = value
if token in seen_tokens or token not in self._raw:
# ^ Making sure things like `egrep=egrep --color=auto` works,
# and that `l` evals to `ls --color=auto -CF` if `l=ls -CF`
# and `ls=ls --color=auto`
return value + acc_args
else:
return self.eval_alias(self._raw[token],
seen_tokens | {token},
rest + acc_args)
def expand_alias(self, line):
"""Expands any aliases present in line if alias does not point to a
builtin function and if alias is only a single command.
"""
word = line.split(' ', 1)[0]
if word in builtins.aliases and isinstance(self.get(word), Sequence):
word_idx = line.find(word)
expansion = ' '.join(self.get(word))
line = line[:word_idx] + expansion + line[word_idx+len(word):]
return line
#
# Mutable mapping interface
#
def __getitem__(self, key):
return self._raw[key]
def __setitem__(self, key, val):
if isinstance(val, string_types):
self._raw[key] = shlex.split(val)
else:
self._raw[key] = val
def __delitem__(self, key):
del self._raw[key]
def update(self, *args, **kwargs):
for key, val in dict(*args, **kwargs).items():
self[key] = val
def __iter__(self):
yield from self._raw
def __len__(self):
return len(self._raw)
def __str__(self):
return str(self._raw)
def __repr__(self):
return '{0}.{1}({2})'.format(self.__class__.__module__,
self.__class__.__name__, self._raw)
def _repr_pretty_(self, p, cycle):
name = '{0}.{1}'.format(self.__class__.__module__,
self.__class__.__name__)
with p.group(0, name + '(', ')'):
if cycle:
p.text('...')
elif len(self):
p.break_()
p.pretty(dict(self))
def helper(x, name=''):
"""Prints help about, and then returns that variable."""
INSPECTOR.pinfo(x, oname=name, detail_level=0)
@ -278,19 +167,26 @@ def regexpath(s, pymode=False):
def globpath(s, ignore_case=False):
"""Simple wrapper around glob that also expands home and env vars."""
s = expand_path(s)
if ignore_case:
s = expand_case_matching(s)
o = glob(s)
o, s = _iglobpath(s, ignore_case=ignore_case)
o = list(o)
return o if len(o) != 0 else [s]
def iglobpath(s, ignore_case=False):
"""Simple wrapper around iglob that also expands home and env vars."""
def _iglobpath(s, ignore_case=False):
s = expand_path(s)
if ignore_case:
s = expand_case_matching(s)
return iglob(s)
if sys.version_info > (3, 5):
if '**' in s and '**/*' not in s:
s = s.replace('**', '**/*')
# `recursive` is only a 3.5+ kwarg.
return iglob(s, recursive=True), s
else:
return iglob(s), s
def iglobpath(s, ignore_case=False):
"""Simple wrapper around iglob that also expands home and env vars."""
return _iglobpath(s, ignore_case)[0]
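The `**` handling in `_iglobpath` can be isolated as a small helper (a sketch with a hypothetical name): a trailing bare `**` is rewritten to `**/*` so recursive globbing also matches files below the matched directories, and the rewrite only applies on Python 3.5+ where `glob` understands `**`.

```python
import sys

def normalize_double_star(s):
    """Sketch of the _iglobpath-style '**' -> '**/*' rewrite."""
    if sys.version_info >= (3, 5) and '**' in s and '**/*' not in s:
        s = s.replace('**', '**/*')
    return s
```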
RE_SHEBANG = re.compile(r'#![ \t]*(.+?)$')
@ -774,8 +670,7 @@ def load_builtins(execer=None, config=None, login=False):
builtins.compilex = None if execer is None else execer.compile
# Need this inline/lazy import here since we use locate_binary that relies on __xonsh_env__ in default aliases
from xonsh.aliases import DEFAULT_ALIASES
builtins.default_aliases = builtins.aliases = Aliases(DEFAULT_ALIASES)
builtins.default_aliases = builtins.aliases = Aliases(make_default_aliases())
if login:
builtins.aliases.update(load_foreign_aliases(issue_warning=False))
# history needs to be started after env and aliases


@ -887,7 +887,7 @@ def get_hg_branch(cwd=None, root=None):
with open(branch_path, 'r') as branch_file:
branch = branch_file.read()
else:
branch = call_hg_command(['branch'], cwd)
branch = 'default'
if os.path.exists(bookmark_path):
with open(bookmark_path, 'r') as bookmark_file:
@@ -1004,11 +1004,14 @@ def _current_job():
def env_name(pre_chars='(', post_chars=') '):
"""Extract the current environment name from $VIRTUAL_ENV."""
env_path = __xonsh_env__.get('VIRTUAL_ENV', '')
"""Extract the current environment name from $VIRTUAL_ENV or
$CONDA_DEFAULT_ENV if that is set
"""
env_path = builtins.__xonsh_env__.get('VIRTUAL_ENV', '')
if len(env_path) == 0 and 'Anaconda' in sys.version:
pre_chars, post_chars = '[', '] '
env_path = builtins.__xonsh_env__.get('CONDA_DEFAULT_ENV', '')
env_name = os.path.basename(env_path)
return pre_chars + env_name + post_chars if env_name else ''
@@ -1032,6 +1035,7 @@ FORMATTER_DICT = dict(
current_job=_current_job,
env_name=env_name,
)
DEFAULT_VALUES['FORMATTER_DICT'] = dict(FORMATTER_DICT)
_FORMATTER = string.Formatter()
@@ -1269,15 +1273,13 @@ def windows_env_fixes(ctx):
def default_env(env=None, config=None, login=True):
"""Constructs a default xonsh environment."""
# in order of increasing precedence
ctx = dict(BASE_ENV)
ctx.update(os.environ)
if login:
ctx = dict(BASE_ENV)
ctx.update(os.environ)
conf = load_static_config(ctx, config=config)
ctx.update(conf.get('env', ()))
ctx.update(load_foreign_envs(shells=conf.get('foreign_shells', DEFAULT_SHELLS),
issue_warning=False))
else:
ctx = {}
if ON_WINDOWS:
windows_env_fixes(ctx)
# finalize env


@@ -8,6 +8,7 @@ import builtins
import subprocess
from warnings import warn
from functools import lru_cache
from tempfile import NamedTemporaryFile
from collections import MutableMapping, Mapping, Sequence
from xonsh.tools import to_bool, ensure_string
@@ -85,6 +86,7 @@ DEFAULT_ENVCMDS = {
'zsh': 'env',
'/bin/zsh': 'env',
'/usr/bin/zsh': 'env',
'cmd': 'set',
}
DEFAULT_ALIASCMDS = {
'bash': 'alias',
@@ -92,6 +94,7 @@ DEFAULT_ALIASCMDS = {
'zsh': 'alias -L',
'/bin/zsh': 'alias -L',
'/usr/bin/zsh': 'alias -L',
'cmd': '',
}
DEFAULT_FUNCSCMDS = {
'bash': DEFAULT_BASH_FUNCSCMD,
@@ -99,6 +102,7 @@ DEFAULT_FUNCSCMDS = {
'zsh': DEFAULT_ZSH_FUNCSCMD,
'/bin/zsh': DEFAULT_ZSH_FUNCSCMD,
'/usr/bin/zsh': DEFAULT_ZSH_FUNCSCMD,
'cmd': '',
}
DEFAULT_SOURCERS = {
'bash': 'source',
@@ -106,13 +110,32 @@ DEFAULT_SOURCERS = {
'zsh': 'source',
'/bin/zsh': 'source',
'/usr/bin/zsh': 'source',
'cmd': 'call',
}
DEFAULT_TMPFILE_EXT = {
'bash': '.sh',
'/bin/bash': '.sh',
'zsh': '.zsh',
'/bin/zsh': '.zsh',
'/usr/bin/zsh': '.zsh',
'cmd': '.bat',
}
DEFAULT_RUNCMD = {
'bash': '-c',
'/bin/bash': '-c',
'zsh': '-c',
'/bin/zsh': '-c',
'/usr/bin/zsh': '-c',
'cmd': '/C',
}
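These lookup tables determine how a command is handed to each foreign shell. A minimal sketch of that assembly, with `build_shell_argv` as a hypothetical helper name and a trimmed copy of the table so the snippet is self-contained:

```python
# Trimmed copy of the DEFAULT_RUNCMD table above, for a self-contained sketch.
DEFAULT_RUNCMD = {'bash': '-c', 'zsh': '-c', 'cmd': '/C'}

def build_shell_argv(shell, command, runcmd=None):
    """Hypothetical helper: assemble the argv used to hand `command`
    to a foreign shell, falling back to '-c' for unknown shells."""
    runcmd = DEFAULT_RUNCMD.get(shell, '-c') if runcmd is None else runcmd
    return [shell, runcmd, command]
```

For example, `build_shell_argv('cmd', 'set')` yields `['cmd', '/C', 'set']`, matching the Windows `cmd.exe` entry added in this diff.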
@lru_cache()
def foreign_shell_data(shell, interactive=True, login=False, envcmd=None,
aliascmd=None, extra_args=(), currenv=None,
safe=True, prevcmd='', postcmd='', funcscmd=None,
sourcer=None):
sourcer=None, use_tmpfile=False, tmpfile_ext=None,
runcmd=None):
"""Extracts data from a foreign (non-xonsh) shells. Currently this gets
the environment, aliases, and functions but may be extended in the future.
@@ -152,6 +175,11 @@ def foreign_shell_data(shell, interactive=True, login=False, envcmd=None,
How to source a foreign shell file for purposes of calling functions
in that shell. If this is None, a default value will attempt to be
looked up based on the shell name.
    use_tmpfile : bool, optional
        This specifies whether the commands are written to a temporary
        file before being run, rather than passed directly to the shell.
    tmpfile_ext : str or None, optional
        If use_tmpfile is True, this specifies the file extension used.
Returns
-------
@@ -167,13 +195,24 @@ def foreign_shell_data(shell, interactive=True, login=False, envcmd=None,
cmd.append('-i')
if login:
cmd.append('-l')
cmd.append('-c')
envcmd = DEFAULT_ENVCMDS.get(shell, 'env') if envcmd is None else envcmd
aliascmd = DEFAULT_ALIASCMDS.get(shell, 'alias') if aliascmd is None else aliascmd
funcscmd = DEFAULT_FUNCSCMDS.get(shell, 'echo {}') if funcscmd is None else funcscmd
tmpfile_ext = DEFAULT_TMPFILE_EXT.get(shell, 'sh') if tmpfile_ext is None else tmpfile_ext
runcmd = DEFAULT_RUNCMD.get(shell, '-c') if runcmd is None else runcmd
command = COMMAND.format(envcmd=envcmd, aliascmd=aliascmd, prevcmd=prevcmd,
postcmd=postcmd, funcscmd=funcscmd).strip()
cmd.append(command)
cmd.append(runcmd)
if not use_tmpfile:
cmd.append(command)
else:
tmpfile = NamedTemporaryFile(suffix=tmpfile_ext, delete=False)
tmpfile.write(command.encode('utf8'))
tmpfile.close()
cmd.append(tmpfile.name)
if currenv is None and hasattr(builtins, '__xonsh_env__'):
currenv = builtins.__xonsh_env__.detype()
elif currenv is not None:
@@ -186,7 +225,10 @@ def foreign_shell_data(shell, interactive=True, login=False, envcmd=None,
except (subprocess.CalledProcessError, FileNotFoundError):
if not safe:
raise
return {}, {}
return None, None
finally:
if use_tmpfile:
os.remove(tmpfile.name)
env = parse_env(s)
aliases = parse_aliases(s)
funcs = parse_funcs(s, shell=shell, sourcer=sourcer)
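The tmpfile branch above follows a write/run/always-clean-up pattern. A self-contained sketch under stated assumptions (`run_via_tmpfile` is a hypothetical name, and a `sh` interpreter is assumed for the POSIX example):

```python
import os
import subprocess
from tempfile import NamedTemporaryFile

def run_via_tmpfile(argv_prefix, command, suffix='.sh'):
    """Sketch of the tmpfile pattern above: write the commands to a
    temporary script (delete=False so another process can open it,
    which matters on Windows), run it, and always clean up."""
    tmpfile = NamedTemporaryFile(suffix=suffix, delete=False)
    try:
        tmpfile.write(command.encode('utf8'))
        tmpfile.close()
        return subprocess.check_output(argv_prefix + [tmpfile.name],
                                       universal_newlines=True)
    finally:
        # Mirrors the `finally: os.remove(tmpfile.name)` in the diff.
        os.remove(tmpfile.name)
```

On a POSIX system, `run_via_tmpfile(['sh'], 'echo hi')` runs the script through `sh`; on Windows the prefix would be `['cmd', '/C']` with a `.bat` suffix, as in the `DEFAULT_TMPFILE_EXT` and `DEFAULT_RUNCMD` tables.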
@@ -196,6 +238,7 @@ def foreign_shell_data(shell, interactive=True, login=False, envcmd=None,
ENV_RE = re.compile('__XONSH_ENV_BEG__\n(.*)__XONSH_ENV_END__', flags=re.DOTALL)
def parse_env(s):
"""Parses the environment portion of string into a dict."""
m = ENV_RE.search(s)
@@ -210,13 +253,14 @@ def parse_env(s):
ALIAS_RE = re.compile('__XONSH_ALIAS_BEG__\n(.*)__XONSH_ALIAS_END__',
flags=re.DOTALL)
def parse_aliases(s):
"""Parses the aliases portion of string into a dict."""
m = ALIAS_RE.search(s)
if m is None:
return {}
g1 = m.group(1)
items = [line.split('=', 1) for line in g1.splitlines() if \
items = [line.split('=', 1) for line in g1.splitlines() if
line.startswith('alias ') and '=' in line]
aliases = {}
for key, value in items:
@@ -236,9 +280,10 @@ def parse_aliases(s):
return aliases
FUNCS_RE = re.compile('__XONSH_FUNCS_BEG__\n(.*)__XONSH_FUNCS_END__',
FUNCS_RE = re.compile('__XONSH_FUNCS_BEG__\n(.+)\n__XONSH_FUNCS_END__',
flags=re.DOTALL)
def parse_funcs(s, shell, sourcer=None):
"""Parses the funcs portion of a string into a dict of callable foreign
function wrappers.
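The tightened `FUNCS_RE` above (requiring `(.+)` plus an explicit newline) no longer matches an empty funcs section. A quick check of the old pattern against the new one:

```python
import re

# The previous and updated patterns from the diff above.
OLD_FUNCS_RE = re.compile('__XONSH_FUNCS_BEG__\n(.*)__XONSH_FUNCS_END__',
                          flags=re.DOTALL)
NEW_FUNCS_RE = re.compile('__XONSH_FUNCS_BEG__\n(.+)\n__XONSH_FUNCS_END__',
                          flags=re.DOTALL)

empty = '__XONSH_FUNCS_BEG__\n__XONSH_FUNCS_END__'
full = '__XONSH_FUNCS_BEG__\n{"spam": 1}\n__XONSH_FUNCS_END__'

assert OLD_FUNCS_RE.search(empty) is not None  # old: spurious empty match
assert NEW_FUNCS_RE.search(empty) is None      # new: no match on empty output
assert NEW_FUNCS_RE.search(full).group(1) == '{"spam": 1}'
```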
@@ -429,7 +474,8 @@ def load_foreign_envs(shells=None, config=None, issue_warning=True):
for shell in shells:
shell = ensure_shell(shell)
shenv, _ = foreign_shell_data(**shell)
env.update(shenv)
if shenv:
env.update(shenv)
return env
@@ -458,5 +504,6 @@ def load_foreign_aliases(shells=None, config=None, issue_warning=True):
for shell in shells:
shell = ensure_shell(shell)
_, shaliases = foreign_shell_data(**shell)
aliases.update(shaliases)
if shaliases:
aliases.update(shaliases)
return aliases


@@ -6,6 +6,7 @@ import time
import signal
import builtins
from subprocess import TimeoutExpired
from io import BytesIO
from xonsh.tools import ON_WINDOWS
@@ -41,13 +42,16 @@ if ON_WINDOWS:
return
while obj.returncode is None:
try:
obj.wait(0.01)
outs, errs = obj.communicate(timeout=0.01)
except TimeoutExpired:
pass
except KeyboardInterrupt:
obj.kill()
outs, errs = obj.communicate()
if obj.poll() is not None:
builtins.__xonsh_active_job__ = None
obj.stdout = BytesIO(outs)
obj.stderr = BytesIO(errs)
else:
def _continue(obj):


@@ -8,7 +8,10 @@ import tokenize
from io import BytesIO
from keyword import kwlist
from ply.lex import LexToken
try:
from ply.lex import LexToken
except ImportError:
from xonsh.ply.lex import LexToken
from xonsh.tools import VER_3_5, VER_MAJOR_MINOR


@@ -52,12 +52,12 @@ parser.add_argument('-c',
dest='command',
required=False,
default=None)
parser.add_argument('-i',
parser.add_argument('-i', '--interactive',
help='force running in interactive mode',
dest='force_interactive',
action='store_true',
default=False)
parser.add_argument('-l',
parser.add_argument('-l', '--login',
help='run as a login shell',
dest='login',
action='store_true',
@@ -105,7 +105,7 @@ def arg_undoers():
'-h': (lambda args: setattr(args, 'help', False)),
'-V': (lambda args: setattr(args, 'version', False)),
'-c': (lambda args: setattr(args, 'command', None)),
'-i': (lambda args: setattr(args, 'force_interactive', Fals)),
'-i': (lambda args: setattr(args, 'force_interactive', False)),
'-l': (lambda args: setattr(args, 'login', False)),
'-c': (lambda args: setattr(args, 'command', None)),
'--config-path': (lambda args: delattr(args, 'config_path')),
@@ -115,6 +115,9 @@
}
au['--help'] = au['-h']
au['--version'] = au['-V']
au['--interactive'] = au['-i']
au['--login'] = au['-l']
return au
def undo_args(args):


@@ -2,7 +2,10 @@
"""Implements the base xonsh parser."""
from collections import Iterable, Sequence, Mapping
from ply import yacc
try:
from ply import yacc
except ImportError:
from xonsh.ply import yacc
from xonsh import ast
from xonsh.lexer import Lexer, LexToken

xonsh/ply/LICENSE

@@ -0,0 +1,34 @@
PLY (Python Lex-Yacc) Version 3.8
Copyright (C) 2001-2015,
David M. Beazley (Dabeaz LLC)
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of the David Beazley or Dabeaz LLC may be used to
endorse or promote products derived from this software without
specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
More information about PLY can be obtained on the PLY webpage at:
http://www.dabeaz.com/ply

xonsh/ply/__init__.py

@@ -0,0 +1,5 @@
# PLY package
# Author: David Beazley (dave@dabeaz.com)
__version__ = '3.7'
__all__ = ['lex','yacc']

xonsh/ply/cpp.py

@@ -0,0 +1,908 @@
# -----------------------------------------------------------------------------
# cpp.py
#
# Author: David Beazley (http://www.dabeaz.com)
# Copyright (C) 2007
# All rights reserved
#
# This module implements an ANSI-C style lexical preprocessor for PLY.
# -----------------------------------------------------------------------------
from __future__ import generators
# -----------------------------------------------------------------------------
# Default preprocessor lexer definitions. These tokens are enough to get
# a basic preprocessor working. Other modules may import these if they want
# -----------------------------------------------------------------------------
tokens = (
'CPP_ID','CPP_INTEGER', 'CPP_FLOAT', 'CPP_STRING', 'CPP_CHAR', 'CPP_WS', 'CPP_COMMENT1', 'CPP_COMMENT2', 'CPP_POUND','CPP_DPOUND'
)
literals = "+-*/%|&~^<>=!?()[]{}.,;:\\\'\""
# Whitespace
def t_CPP_WS(t):
r'\s+'
t.lexer.lineno += t.value.count("\n")
return t
t_CPP_POUND = r'\#'
t_CPP_DPOUND = r'\#\#'
# Identifier
t_CPP_ID = r'[A-Za-z_][\w_]*'
# Integer literal
def CPP_INTEGER(t):
r'(((((0x)|(0X))[0-9a-fA-F]+)|(\d+))([uU][lL]|[lL][uU]|[uU]|[lL])?)'
return t
t_CPP_INTEGER = CPP_INTEGER
# Floating literal
t_CPP_FLOAT = r'((\d+)(\.\d+)(e(\+|-)?(\d+))? | (\d+)e(\+|-)?(\d+))([lL]|[fF])?'
# String literal
def t_CPP_STRING(t):
r'\"([^\\\n]|(\\(.|\n)))*?\"'
t.lexer.lineno += t.value.count("\n")
return t
# Character constant 'c' or L'c'
def t_CPP_CHAR(t):
r'(L)?\'([^\\\n]|(\\(.|\n)))*?\''
t.lexer.lineno += t.value.count("\n")
return t
# Comment
def t_CPP_COMMENT1(t):
r'(/\*(.|\n)*?\*/)'
ncr = t.value.count("\n")
t.lexer.lineno += ncr
# replace with one space or a number of '\n'
t.type = 'CPP_WS'; t.value = '\n' * ncr if ncr else ' '
return t
# Line comment
def t_CPP_COMMENT2(t):
r'(//.*?(\n|$))'
    # replace with '\n'
    t.type = 'CPP_WS'; t.value = '\n'
    return t
def t_error(t):
t.type = t.value[0]
t.value = t.value[0]
t.lexer.skip(1)
return t
import re
import copy
import time
import os.path
# -----------------------------------------------------------------------------
# trigraph()
#
# Given an input string, this function replaces all trigraph sequences.
# The following mapping is used:
#
# ??= #
# ??/ \
# ??' ^
# ??( [
# ??) ]
# ??! |
# ??< {
# ??> }
# ??- ~
# -----------------------------------------------------------------------------
_trigraph_pat = re.compile(r'''\?\?[=/\'\(\)\!<>\-]''')
_trigraph_rep = {
'=':'#',
'/':'\\',
"'":'^',
'(':'[',
')':']',
'!':'|',
'<':'{',
'>':'}',
'-':'~'
}
def trigraph(input):
return _trigraph_pat.sub(lambda g: _trigraph_rep[g.group()[-1]],input)
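The trigraph table can be exercised on its own. This sketch simply repeats the pattern and mapping from above so it is self-contained:

```python
import re

# Same pattern and mapping as in the source above.
_trigraph_pat = re.compile(r'''\?\?[=/\'\(\)\!<>\-]''')
_trigraph_rep = {'=': '#', '/': '\\', "'": '^', '(': '[', ')': ']',
                 '!': '|', '<': '{', '>': '}', '-': '~'}

def trigraph(input):
    # Each '??x' sequence is replaced by the single character it encodes.
    return _trigraph_pat.sub(lambda g: _trigraph_rep[g.group()[-1]], input)

assert trigraph('??=define ARR??(0??)') == '#define ARR[0]'
```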
# ------------------------------------------------------------------
# Macro object
#
# This object holds information about preprocessor macros
#
# .name - Macro name (string)
# .value - Macro value (a list of tokens)
# .arglist - List of argument names
# .variadic - Boolean indicating whether or not variadic macro
# .vararg - Name of the variadic parameter
#
# When a macro is created, the macro replacement token sequence is
# pre-scanned and used to create patch lists that are later used
# during macro expansion
# ------------------------------------------------------------------
class Macro(object):
def __init__(self,name,value,arglist=None,variadic=False):
self.name = name
self.value = value
self.arglist = arglist
self.variadic = variadic
if variadic:
self.vararg = arglist[-1]
self.source = None
# ------------------------------------------------------------------
# Preprocessor object
#
# Object representing a preprocessor. Contains macro definitions,
# include directories, and other information
# ------------------------------------------------------------------
class Preprocessor(object):
def __init__(self,lexer=None):
if lexer is None:
lexer = lex.lexer
self.lexer = lexer
self.macros = { }
self.path = []
self.temp_path = []
# Probe the lexer for selected tokens
self.lexprobe()
tm = time.localtime()
self.define("__DATE__ \"%s\"" % time.strftime("%b %d %Y",tm))
self.define("__TIME__ \"%s\"" % time.strftime("%H:%M:%S",tm))
self.parser = None
# -----------------------------------------------------------------------------
# tokenize()
#
# Utility function. Given a string of text, tokenize into a list of tokens
# -----------------------------------------------------------------------------
def tokenize(self,text):
tokens = []
self.lexer.input(text)
while True:
tok = self.lexer.token()
if not tok: break
tokens.append(tok)
return tokens
# ---------------------------------------------------------------------
# error()
#
# Report a preprocessor error/warning of some kind
# ----------------------------------------------------------------------
def error(self,file,line,msg):
print("%s:%d %s" % (file,line,msg))
# ----------------------------------------------------------------------
# lexprobe()
#
# This method probes the preprocessor lexer object to discover
# the token types of symbols that are important to the preprocessor.
# If this works right, the preprocessor will simply "work"
# with any suitable lexer regardless of how tokens have been named.
# ----------------------------------------------------------------------
def lexprobe(self):
# Determine the token type for identifiers
self.lexer.input("identifier")
tok = self.lexer.token()
if not tok or tok.value != "identifier":
print("Couldn't determine identifier type")
else:
self.t_ID = tok.type
# Determine the token type for integers
self.lexer.input("12345")
tok = self.lexer.token()
if not tok or int(tok.value) != 12345:
print("Couldn't determine integer type")
else:
self.t_INTEGER = tok.type
self.t_INTEGER_TYPE = type(tok.value)
# Determine the token type for strings enclosed in double quotes
self.lexer.input("\"filename\"")
tok = self.lexer.token()
if not tok or tok.value != "\"filename\"":
print("Couldn't determine string type")
else:
self.t_STRING = tok.type
# Determine the token type for whitespace--if any
self.lexer.input(" ")
tok = self.lexer.token()
if not tok or tok.value != " ":
self.t_SPACE = None
else:
self.t_SPACE = tok.type
# Determine the token type for newlines
self.lexer.input("\n")
tok = self.lexer.token()
if not tok or tok.value != "\n":
self.t_NEWLINE = None
print("Couldn't determine token for newlines")
else:
self.t_NEWLINE = tok.type
self.t_WS = (self.t_SPACE, self.t_NEWLINE)
# Check for other characters used by the preprocessor
chars = [ '<','>','#','##','\\','(',')',',','.']
for c in chars:
self.lexer.input(c)
tok = self.lexer.token()
if not tok or tok.value != c:
print("Unable to lex '%s' required for preprocessor" % c)
# ----------------------------------------------------------------------
# add_path()
#
# Adds a search path to the preprocessor.
# ----------------------------------------------------------------------
def add_path(self,path):
self.path.append(path)
# ----------------------------------------------------------------------
# group_lines()
#
# Given an input string, this function splits it into lines. Trailing whitespace
# is removed. Any line ending with \ is grouped with the next line. This
# function forms the lowest level of the preprocessor---grouping into text into
# a line-by-line format.
# ----------------------------------------------------------------------
def group_lines(self,input):
lex = self.lexer.clone()
lines = [x.rstrip() for x in input.splitlines()]
        for i in range(len(lines)):
j = i+1
while lines[i].endswith('\\') and (j < len(lines)):
lines[i] = lines[i][:-1]+lines[j]
lines[j] = ""
j += 1
input = "\n".join(lines)
lex.input(input)
lex.lineno = 1
current_line = []
while True:
tok = lex.token()
if not tok:
break
current_line.append(tok)
if tok.type in self.t_WS and '\n' in tok.value:
yield current_line
current_line = []
if current_line:
yield current_line
# ----------------------------------------------------------------------
# tokenstrip()
#
# Remove leading/trailing whitespace tokens from a token list
# ----------------------------------------------------------------------
def tokenstrip(self,tokens):
i = 0
while i < len(tokens) and tokens[i].type in self.t_WS:
i += 1
del tokens[:i]
i = len(tokens)-1
while i >= 0 and tokens[i].type in self.t_WS:
i -= 1
del tokens[i+1:]
return tokens
# ----------------------------------------------------------------------
# collect_args()
#
# Collects comma separated arguments from a list of tokens. The arguments
# must be enclosed in parenthesis. Returns a tuple (tokencount,args,positions)
# where tokencount is the number of tokens consumed, args is a list of arguments,
# and positions is a list of integers containing the starting index of each
# argument. Each argument is represented by a list of tokens.
#
# When collecting arguments, leading and trailing whitespace is removed
# from each argument.
#
# This function properly handles nested parenthesis and commas---these do not
# define new arguments.
# ----------------------------------------------------------------------
def collect_args(self,tokenlist):
args = []
positions = []
current_arg = []
nesting = 1
tokenlen = len(tokenlist)
# Search for the opening '('.
i = 0
while (i < tokenlen) and (tokenlist[i].type in self.t_WS):
i += 1
if (i < tokenlen) and (tokenlist[i].value == '('):
positions.append(i+1)
else:
self.error(self.source,tokenlist[0].lineno,"Missing '(' in macro arguments")
return 0, [], []
i += 1
while i < tokenlen:
t = tokenlist[i]
if t.value == '(':
current_arg.append(t)
nesting += 1
elif t.value == ')':
nesting -= 1
if nesting == 0:
if current_arg:
args.append(self.tokenstrip(current_arg))
positions.append(i)
return i+1,args,positions
current_arg.append(t)
elif t.value == ',' and nesting == 1:
args.append(self.tokenstrip(current_arg))
positions.append(i+1)
current_arg = []
else:
current_arg.append(t)
i += 1
# Missing end argument
self.error(self.source,tokenlist[-1].lineno,"Missing ')' in macro arguments")
return 0, [],[]
# ----------------------------------------------------------------------
# macro_prescan()
#
# Examine the macro value (token sequence) and identify patch points
# This is used to speed up macro expansion later on---we'll know
# right away where to apply patches to the value to form the expansion
# ----------------------------------------------------------------------
def macro_prescan(self,macro):
macro.patch = [] # Standard macro arguments
macro.str_patch = [] # String conversion expansion
macro.var_comma_patch = [] # Variadic macro comma patch
i = 0
while i < len(macro.value):
if macro.value[i].type == self.t_ID and macro.value[i].value in macro.arglist:
argnum = macro.arglist.index(macro.value[i].value)
# Conversion of argument to a string
if i > 0 and macro.value[i-1].value == '#':
macro.value[i] = copy.copy(macro.value[i])
macro.value[i].type = self.t_STRING
del macro.value[i-1]
macro.str_patch.append((argnum,i-1))
continue
# Concatenation
elif (i > 0 and macro.value[i-1].value == '##'):
macro.patch.append(('c',argnum,i-1))
del macro.value[i-1]
continue
elif ((i+1) < len(macro.value) and macro.value[i+1].value == '##'):
macro.patch.append(('c',argnum,i))
i += 1
continue
# Standard expansion
else:
macro.patch.append(('e',argnum,i))
elif macro.value[i].value == '##':
if macro.variadic and (i > 0) and (macro.value[i-1].value == ',') and \
((i+1) < len(macro.value)) and (macro.value[i+1].type == self.t_ID) and \
(macro.value[i+1].value == macro.vararg):
macro.var_comma_patch.append(i-1)
i += 1
macro.patch.sort(key=lambda x: x[2],reverse=True)
# ----------------------------------------------------------------------
# macro_expand_args()
#
# Given a Macro and list of arguments (each a token list), this method
# returns an expanded version of a macro. The return value is a token sequence
# representing the replacement macro tokens
# ----------------------------------------------------------------------
def macro_expand_args(self,macro,args):
# Make a copy of the macro token sequence
rep = [copy.copy(_x) for _x in macro.value]
# Make string expansion patches. These do not alter the length of the replacement sequence
str_expansion = {}
for argnum, i in macro.str_patch:
if argnum not in str_expansion:
str_expansion[argnum] = ('"%s"' % "".join([x.value for x in args[argnum]])).replace("\\","\\\\")
rep[i] = copy.copy(rep[i])
rep[i].value = str_expansion[argnum]
# Make the variadic macro comma patch. If the variadic macro argument is empty, we get rid
comma_patch = False
if macro.variadic and not args[-1]:
for i in macro.var_comma_patch:
rep[i] = None
comma_patch = True
# Make all other patches. The order of these matters. It is assumed that the patch list
# has been sorted in reverse order of patch location since replacements will cause the
# size of the replacement sequence to expand from the patch point.
expanded = { }
for ptype, argnum, i in macro.patch:
# Concatenation. Argument is left unexpanded
if ptype == 'c':
rep[i:i+1] = args[argnum]
# Normal expansion. Argument is macro expanded first
elif ptype == 'e':
if argnum not in expanded:
expanded[argnum] = self.expand_macros(args[argnum])
rep[i:i+1] = expanded[argnum]
# Get rid of removed comma if necessary
if comma_patch:
rep = [_i for _i in rep if _i]
return rep
# ----------------------------------------------------------------------
# expand_macros()
#
# Given a list of tokens, this function performs macro expansion.
# The expanded argument is a dictionary that contains macros already
# expanded. This is used to prevent infinite recursion.
# ----------------------------------------------------------------------
def expand_macros(self,tokens,expanded=None):
if expanded is None:
expanded = {}
i = 0
while i < len(tokens):
t = tokens[i]
if t.type == self.t_ID:
if t.value in self.macros and t.value not in expanded:
# Yes, we found a macro match
expanded[t.value] = True
m = self.macros[t.value]
if not m.arglist:
# A simple macro
ex = self.expand_macros([copy.copy(_x) for _x in m.value],expanded)
for e in ex:
e.lineno = t.lineno
tokens[i:i+1] = ex
i += len(ex)
else:
# A macro with arguments
j = i + 1
while j < len(tokens) and tokens[j].type in self.t_WS:
j += 1
if tokens[j].value == '(':
tokcount,args,positions = self.collect_args(tokens[j:])
if not m.variadic and len(args) != len(m.arglist):
self.error(self.source,t.lineno,"Macro %s requires %d arguments" % (t.value,len(m.arglist)))
i = j + tokcount
elif m.variadic and len(args) < len(m.arglist)-1:
if len(m.arglist) > 2:
self.error(self.source,t.lineno,"Macro %s must have at least %d arguments" % (t.value, len(m.arglist)-1))
else:
self.error(self.source,t.lineno,"Macro %s must have at least %d argument" % (t.value, len(m.arglist)-1))
i = j + tokcount
else:
if m.variadic:
if len(args) == len(m.arglist)-1:
args.append([])
else:
args[len(m.arglist)-1] = tokens[j+positions[len(m.arglist)-1]:j+tokcount-1]
del args[len(m.arglist):]
# Get macro replacement text
rep = self.macro_expand_args(m,args)
rep = self.expand_macros(rep,expanded)
for r in rep:
r.lineno = t.lineno
tokens[i:j+tokcount] = rep
i += len(rep)
del expanded[t.value]
continue
elif t.value == '__LINE__':
t.type = self.t_INTEGER
t.value = self.t_INTEGER_TYPE(t.lineno)
i += 1
return tokens
# ----------------------------------------------------------------------
# evalexpr()
#
# Evaluate an expression token sequence for the purposes of evaluating
# integral expressions.
# ----------------------------------------------------------------------
def evalexpr(self,tokens):
# tokens = tokenize(line)
# Search for defined macros
i = 0
while i < len(tokens):
if tokens[i].type == self.t_ID and tokens[i].value == 'defined':
j = i + 1
needparen = False
result = "0L"
while j < len(tokens):
if tokens[j].type in self.t_WS:
j += 1
continue
elif tokens[j].type == self.t_ID:
if tokens[j].value in self.macros:
result = "1L"
else:
result = "0L"
if not needparen: break
elif tokens[j].value == '(':
needparen = True
elif tokens[j].value == ')':
break
else:
self.error(self.source,tokens[i].lineno,"Malformed defined()")
j += 1
tokens[i].type = self.t_INTEGER
tokens[i].value = self.t_INTEGER_TYPE(result)
del tokens[i+1:j+1]
i += 1
tokens = self.expand_macros(tokens)
for i,t in enumerate(tokens):
if t.type == self.t_ID:
tokens[i] = copy.copy(t)
tokens[i].type = self.t_INTEGER
tokens[i].value = self.t_INTEGER_TYPE("0L")
elif t.type == self.t_INTEGER:
tokens[i] = copy.copy(t)
# Strip off any trailing suffixes
tokens[i].value = str(tokens[i].value)
while tokens[i].value[-1] not in "0123456789abcdefABCDEF":
tokens[i].value = tokens[i].value[:-1]
expr = "".join([str(x.value) for x in tokens])
expr = expr.replace("&&"," and ")
expr = expr.replace("||"," or ")
expr = expr.replace("!"," not ")
try:
result = eval(expr)
        except Exception:
self.error(self.source,tokens[0].lineno,"Couldn't evaluate expression")
result = 0
return result
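The operator rewriting near the end of `evalexpr` can be seen in isolation. Note, as an aside not stated in the source, that the bare `!` replacement also mangles `!=`, so expressions using `!=` would fail to evaluate:

```python
# C-style boolean operators are textually rewritten to Python, then eval'd,
# mirroring the replace() calls in evalexpr() above.
expr = "1 && !0 || 0"
expr = expr.replace("&&", " and ")
expr = expr.replace("||", " or ")
expr = expr.replace("!", " not ")
assert eval(expr) == True
```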
# ----------------------------------------------------------------------
# parsegen()
#
    # Parse an input string.
# ----------------------------------------------------------------------
def parsegen(self,input,source=None):
# Replace trigraph sequences
t = trigraph(input)
lines = self.group_lines(t)
if not source:
source = ""
self.define("__FILE__ \"%s\"" % source)
self.source = source
chunk = []
enable = True
iftrigger = False
ifstack = []
for x in lines:
for i,tok in enumerate(x):
if tok.type not in self.t_WS: break
if tok.value == '#':
# Preprocessor directive
# insert necessary whitespace instead of eaten tokens
for tok in x:
if tok.type in self.t_WS and '\n' in tok.value:
chunk.append(tok)
dirtokens = self.tokenstrip(x[i+1:])
if dirtokens:
name = dirtokens[0].value
args = self.tokenstrip(dirtokens[1:])
else:
name = ""
args = []
if name == 'define':
if enable:
for tok in self.expand_macros(chunk):
yield tok
chunk = []
self.define(args)
elif name == 'include':
if enable:
for tok in self.expand_macros(chunk):
yield tok
chunk = []
oldfile = self.macros['__FILE__']
for tok in self.include(args):
yield tok
self.macros['__FILE__'] = oldfile
self.source = source
elif name == 'undef':
if enable:
for tok in self.expand_macros(chunk):
yield tok
chunk = []
self.undef(args)
elif name == 'ifdef':
ifstack.append((enable,iftrigger))
if enable:
if not args[0].value in self.macros:
enable = False
iftrigger = False
else:
iftrigger = True
elif name == 'ifndef':
ifstack.append((enable,iftrigger))
if enable:
if args[0].value in self.macros:
enable = False
iftrigger = False
else:
iftrigger = True
elif name == 'if':
ifstack.append((enable,iftrigger))
if enable:
result = self.evalexpr(args)
if not result:
enable = False
iftrigger = False
else:
iftrigger = True
elif name == 'elif':
if ifstack:
if ifstack[-1][0]: # We only pay attention if outer "if" allows this
if enable: # If already true, we flip enable False
enable = False
elif not iftrigger: # If False, but not triggered yet, we'll check expression
result = self.evalexpr(args)
if result:
enable = True
iftrigger = True
else:
self.error(self.source,dirtokens[0].lineno,"Misplaced #elif")
elif name == 'else':
if ifstack:
if ifstack[-1][0]:
if enable:
enable = False
elif not iftrigger:
enable = True
iftrigger = True
else:
self.error(self.source,dirtokens[0].lineno,"Misplaced #else")
elif name == 'endif':
if ifstack:
enable,iftrigger = ifstack.pop()
else:
self.error(self.source,dirtokens[0].lineno,"Misplaced #endif")
else:
# Unknown preprocessor directive
pass
else:
# Normal text
if enable:
chunk.extend(x)
for tok in self.expand_macros(chunk):
yield tok
chunk = []
# ----------------------------------------------------------------------
# include()
#
# Implementation of file-inclusion
# ----------------------------------------------------------------------
def include(self,tokens):
# Try to extract the filename and then process an include file
if not tokens:
return
if tokens:
if tokens[0].value != '<' and tokens[0].type != self.t_STRING:
tokens = self.expand_macros(tokens)
if tokens[0].value == '<':
# Include <...>
i = 1
while i < len(tokens):
if tokens[i].value == '>':
break
i += 1
else:
print("Malformed #include <...>")
return
filename = "".join([x.value for x in tokens[1:i]])
path = self.path + [""] + self.temp_path
elif tokens[0].type == self.t_STRING:
filename = tokens[0].value[1:-1]
path = self.temp_path + [""] + self.path
else:
print("Malformed #include statement")
return
for p in path:
iname = os.path.join(p,filename)
try:
data = open(iname,"r").read()
dname = os.path.dirname(iname)
if dname:
self.temp_path.insert(0,dname)
for tok in self.parsegen(data,filename):
yield tok
if dname:
del self.temp_path[0]
break
except IOError:
pass
else:
print("Couldn't find '%s'" % filename)
# ----------------------------------------------------------------------
# define()
#
# Define a new macro
# ----------------------------------------------------------------------
def define(self,tokens):
        if isinstance(tokens,str):
tokens = self.tokenize(tokens)
linetok = tokens
try:
name = linetok[0]
if len(linetok) > 1:
mtype = linetok[1]
else:
mtype = None
if not mtype:
m = Macro(name.value,[])
self.macros[name.value] = m
elif mtype.type in self.t_WS:
# A normal macro
m = Macro(name.value,self.tokenstrip(linetok[2:]))
self.macros[name.value] = m
elif mtype.value == '(':
# A macro with arguments
tokcount, args, positions = self.collect_args(linetok[1:])
variadic = False
for a in args:
if variadic:
print("No more arguments may follow a variadic argument")
break
astr = "".join([str(_i.value) for _i in a])
if astr == "...":
variadic = True
a[0].type = self.t_ID
a[0].value = '__VA_ARGS__'
variadic = True
del a[1:]
continue
elif astr[-3:] == "..." and a[0].type == self.t_ID:
variadic = True
del a[1:]
# If, for some reason, "." is part of the identifier, strip off the name for the purposes
# of macro expansion
if a[0].value[-3:] == '...':
a[0].value = a[0].value[:-3]
continue
if len(a) > 1 or a[0].type != self.t_ID:
print("Invalid macro argument")
break
else:
mvalue = self.tokenstrip(linetok[1+tokcount:])
i = 0
while i < len(mvalue):
if i+1 < len(mvalue):
if mvalue[i].type in self.t_WS and mvalue[i+1].value == '##':
del mvalue[i]
continue
elif mvalue[i].value == '##' and mvalue[i+1].type in self.t_WS:
del mvalue[i+1]
i += 1
m = Macro(name.value,mvalue,[x[0].value for x in args],variadic)
self.macro_prescan(m)
self.macros[name.value] = m
else:
print("Bad macro definition")
except LookupError:
print("Bad macro definition")
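# The '##' handling in define() above deletes whitespace tokens adjacent to the
# paste operator so macro expansion sees its operands directly. The same deletion
# loop, reduced to plain strings (a sketch, with ' ' standing in for a
# whitespace token):

```python
def strip_ws_around_paste(toks):
    # toks: list of token strings; ' ' stands in for a whitespace token.
    i = 0
    while i < len(toks):
        if i + 1 < len(toks):
            if toks[i] == ' ' and toks[i + 1] == '##':
                del toks[i]      # drop whitespace before '##'
                continue
            elif toks[i] == '##' and toks[i + 1] == ' ':
                del toks[i + 1]  # drop whitespace after '##'
        i += 1
    return toks
```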
# ----------------------------------------------------------------------
# undef()
#
# Undefine a macro
# ----------------------------------------------------------------------
def undef(self,tokens):
id = tokens[0].value
try:
del self.macros[id]
except LookupError:
pass
# ----------------------------------------------------------------------
# parse()
#
# Parse input text.
# ----------------------------------------------------------------------
def parse(self,input,source=None,ignore={}):
self.ignore = ignore
self.parser = self.parsegen(input,source)
# ----------------------------------------------------------------------
# token()
#
# Method to return individual tokens
# ----------------------------------------------------------------------
def token(self):
try:
while True:
tok = next(self.parser)
if tok.type not in self.ignore: return tok
except StopIteration:
self.parser = None
return None
if __name__ == '__main__':
import ply.lex as lex
lexer = lex.lex()
# Run a preprocessor
import sys
    with open(sys.argv[1]) as f:
        input = f.read()
p = Preprocessor(lexer)
p.parse(input,sys.argv[1])
while True:
tok = p.token()
if not tok: break
print(p.source, tok)

133
xonsh/ply/ctokens.py Normal file

@@ -0,0 +1,133 @@
# ----------------------------------------------------------------------
# ctokens.py
#
# Token specifications for symbols in ANSI C and C++. This file is
# meant to be used as a library in other tokenizers.
# ----------------------------------------------------------------------
# Reserved words
tokens = [
# Literals (identifier, integer constant, float constant, string constant, char const)
'ID', 'TYPEID', 'INTEGER', 'FLOAT', 'STRING', 'CHARACTER',
# Operators (+,-,*,/,%,|,&,~,^,<<,>>, ||, &&, !, <, <=, >, >=, ==, !=)
'PLUS', 'MINUS', 'TIMES', 'DIVIDE', 'MODULO',
'OR', 'AND', 'NOT', 'XOR', 'LSHIFT', 'RSHIFT',
'LOR', 'LAND', 'LNOT',
'LT', 'LE', 'GT', 'GE', 'EQ', 'NE',
# Assignment (=, *=, /=, %=, +=, -=, <<=, >>=, &=, ^=, |=)
'EQUALS', 'TIMESEQUAL', 'DIVEQUAL', 'MODEQUAL', 'PLUSEQUAL', 'MINUSEQUAL',
'LSHIFTEQUAL','RSHIFTEQUAL', 'ANDEQUAL', 'XOREQUAL', 'OREQUAL',
# Increment/decrement (++,--)
'INCREMENT', 'DECREMENT',
# Structure dereference (->)
'ARROW',
# Ternary operator (?)
'TERNARY',
# Delimiters ( ) [ ] { } , . ; :
'LPAREN', 'RPAREN',
'LBRACKET', 'RBRACKET',
'LBRACE', 'RBRACE',
'COMMA', 'PERIOD', 'SEMI', 'COLON',
# Ellipsis (...)
'ELLIPSIS',
]
# Operators
t_PLUS = r'\+'
t_MINUS = r'-'
t_TIMES = r'\*'
t_DIVIDE = r'/'
t_MODULO = r'%'
t_OR = r'\|'
t_AND = r'&'
t_NOT = r'~'
t_XOR = r'\^'
t_LSHIFT = r'<<'
t_RSHIFT = r'>>'
t_LOR = r'\|\|'
t_LAND = r'&&'
t_LNOT = r'!'
t_LT = r'<'
t_GT = r'>'
t_LE = r'<='
t_GE = r'>='
t_EQ = r'=='
t_NE = r'!='
# Assignment operators
t_EQUALS = r'='
t_TIMESEQUAL = r'\*='
t_DIVEQUAL = r'/='
t_MODEQUAL = r'%='
t_PLUSEQUAL = r'\+='
t_MINUSEQUAL = r'-='
t_LSHIFTEQUAL = r'<<='
t_RSHIFTEQUAL = r'>>='
t_ANDEQUAL = r'&='
t_OREQUAL = r'\|='
t_XOREQUAL = r'\^='
# Increment/decrement
t_INCREMENT = r'\+\+'
t_DECREMENT = r'--'
# ->
t_ARROW = r'->'
# ?
t_TERNARY = r'\?'
# Delimiters
t_LPAREN = r'\('
t_RPAREN = r'\)'
t_LBRACKET = r'\['
t_RBRACKET = r'\]'
t_LBRACE = r'\{'
t_RBRACE = r'\}'
t_COMMA = r','
t_PERIOD = r'\.'
t_SEMI = r';'
t_COLON = r':'
t_ELLIPSIS = r'\.\.\.'
# Identifiers
t_ID = r'[A-Za-z_][A-Za-z0-9_]*'
# Integer literal
t_INTEGER = r'\d+([uU]|[lL]|[uU][lL]|[lL][uU])?'
# Floating literal
t_FLOAT = r'((\d+)(\.\d+)(e(\+|-)?(\d+))? | (\d+)e(\+|-)?(\d+))([lL]|[fF])?'
# String literal
t_STRING = r'\"([^\\\n]|(\\.))*?\"'
# Character constant 'c' or L'c'
t_CHARACTER = r'(L)?\'([^\\\n]|(\\.))*?\''
# Comment (C-Style)
def t_COMMENT(t):
r'/\*(.|\n)*?\*/'
t.lexer.lineno += t.value.count('\n')
return t
# Comment (C++-Style)
def t_CPPCOMMENT(t):
r'//.*\n'
t.lexer.lineno += 1
return t
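These token specifications are plain regular-expression strings; ply compiles them into one master pattern with `re.VERBOSE`, which is why the embedded spaces in `t_FLOAT` are harmless. They can be sanity-checked directly with the `re` module (the `matches` helper is illustrative, not part of ply):

```python
import re

# Patterns copied from ctokens.py above; ply compiles its master regex with
# re.VERBOSE, so the spaces inside t_FLOAT are ignored there as well.
t_INTEGER = r'\d+([uU]|[lL]|[uU][lL]|[lL][uU])?'
t_FLOAT = r'((\d+)(\.\d+)(e(\+|-)?(\d+))? | (\d+)e(\+|-)?(\d+))([lL]|[fF])?'

def matches(pattern, text):
    # fullmatch ensures the whole lexeme is consumed, not just a prefix.
    return re.fullmatch(pattern, text, re.VERBOSE) is not None
```

For example, `42ul` and `1e10` match INTEGER and FLOAT respectively, while a bare `42` does not match FLOAT (both alternatives require a decimal point or an exponent).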

1097
xonsh/ply/lex.py Normal file

File diff suppressed because it is too large

3471
xonsh/ply/yacc.py Normal file

File diff suppressed because it is too large

74
xonsh/ply/ygen.py Normal file

@@ -0,0 +1,74 @@
# ply: ygen.py
#
# This is a support program that auto-generates different versions of the YACC parsing
# function with different features removed for the purposes of performance.
#
# Users should edit the method LRParser.parsedebug() in yacc.py. The source code
# for that method is then used to create the other methods. See the comments in
# yacc.py for further details.
import os.path
import shutil
def get_source_range(lines, tag):
srclines = enumerate(lines)
start_tag = '#--! %s-start' % tag
end_tag = '#--! %s-end' % tag
for start_index, line in srclines:
if line.strip().startswith(start_tag):
break
for end_index, line in srclines:
if line.strip().endswith(end_tag):
break
return (start_index + 1, end_index)
def filter_section(lines, tag):
filtered_lines = []
include = True
tag_text = '#--! %s' % tag
for line in lines:
if line.strip().startswith(tag_text):
include = not include
elif include:
filtered_lines.append(line)
return filtered_lines
def main():
dirname = os.path.dirname(__file__)
shutil.copy2(os.path.join(dirname, 'yacc.py'), os.path.join(dirname, 'yacc.py.bak'))
with open(os.path.join(dirname, 'yacc.py'), 'r') as f:
lines = f.readlines()
parse_start, parse_end = get_source_range(lines, 'parsedebug')
parseopt_start, parseopt_end = get_source_range(lines, 'parseopt')
parseopt_notrack_start, parseopt_notrack_end = get_source_range(lines, 'parseopt-notrack')
# Get the original source
orig_lines = lines[parse_start:parse_end]
# Filter the DEBUG sections out
parseopt_lines = filter_section(orig_lines, 'DEBUG')
# Filter the TRACKING sections out
parseopt_notrack_lines = filter_section(parseopt_lines, 'TRACKING')
# Replace the parser source sections with updated versions
lines[parseopt_notrack_start:parseopt_notrack_end] = parseopt_notrack_lines
lines[parseopt_start:parseopt_end] = parseopt_lines
lines = [line.rstrip()+'\n' for line in lines]
with open(os.path.join(dirname, 'yacc.py'), 'w') as f:
f.writelines(lines)
print('Updated yacc.py')
if __name__ == '__main__':
main()
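The toggle semantics of `filter_section()` are easy to miss: each `#--! TAG` marker line flips whether subsequent lines are kept, and the marker lines themselves never survive. A self-contained demo (the function is duplicated from ygen.py above so the snippet runs on its own):

```python
def filter_section(lines, tag):
    # Duplicated from ygen.py: marker lines toggle inclusion and are dropped.
    filtered_lines = []
    include = True
    tag_text = '#--! %s' % tag
    for line in lines:
        if line.strip().startswith(tag_text):
            include = not include
        elif include:
            filtered_lines.append(line)
    return filtered_lines

src = [
    'x = parse()\n',
    '#--! DEBUG\n',
    'log(x)\n',
    '#--! DEBUG\n',
    'return x\n',
]
# The log(x) line between the paired DEBUG markers is removed,
# along with both marker lines.
result = filter_section(src, 'DEBUG')
```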


@@ -6,6 +6,8 @@ from prompt_toolkit.shortcuts import (create_prompt_application,
from xonsh.shell import prompt_toolkit_version_info
import builtins
class Prompter(object):
def __init__(self, cli=None, *args, **kwargs):
@@ -69,6 +71,15 @@ class Prompter(object):
if self.major_minor <= (0, 57):
kwargs.pop('get_rprompt_tokens', None)
kwargs.pop('get_continuation_tokens', None)
# VI_Mode handling changed in prompt_toolkit v1.0
if self.major_minor >= (1, 0):
from prompt_toolkit.enums import EditingMode
if builtins.__xonsh_env__.get('VI_MODE'):
editing_mode = EditingMode.VI
else:
editing_mode = EditingMode.EMACS
kwargs['editing_mode'] = editing_mode
cli = CommandLineInterface(
application=create_prompt_application(message, **kwargs),
eventloop=eventloop,


@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
"""Hooks for pygments syntax highlighting."""
import os
import re
import string
import builtins
@@ -539,6 +540,29 @@ if hasattr(pygments.style, 'ansicolors'):
Color.WHITE: '#ansiwhite',
Color.YELLOW: '#ansiyellow',
}
elif ON_WINDOWS and 'CONEMUANSI' not in os.environ:
# These colors must match the color specification
# in prompt_toolkit, so the colors are converted
# correctly when using cmd.exe
DEFAULT_STYLE = {
Color.BLACK: '#000000',
Color.BLUE: '#0000AA',
Color.CYAN: '#00AAAA',
Color.GREEN: '#00AA00',
Color.INTENSE_BLACK: '#444444',
Color.INTENSE_BLUE: '#4444FF',
Color.INTENSE_CYAN: '#44FFFF',
Color.INTENSE_GREEN: '#44FF44',
Color.INTENSE_PURPLE: '#FF44FF',
Color.INTENSE_RED: '#FF4444',
Color.INTENSE_WHITE: '#888888',
Color.INTENSE_YELLOW: '#FFFF44',
Color.NO_COLOR: 'noinherit',
Color.PURPLE: '#AA00AA',
Color.RED: '#AA0000',
Color.WHITE: '#FFFFFF',
Color.YELLOW: '#AAAA00',
}
else:
DEFAULT_STYLE = {
Color.BLACK: '#000000',


@@ -94,8 +94,8 @@ class Shell(object):
from xonsh.base_shell import BaseShell as shell_class
elif shell_type == 'prompt_toolkit':
vptk = prompt_toolkit_version()
minor = int(vptk.split('.')[1])
if minor < 57 or vptk == '<0.57': # TODO: remove in future
major,minor = [int(x) for x in vptk.split('.')[:2]]
if (major,minor) < (0, 57) or vptk == '<0.57': # TODO: remove in future
msg = ('prompt-toolkit version < v0.57 and may not work as '
'expected. Please update.')
warn(msg, RuntimeWarning)
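The change above matters once prompt_toolkit 1.0 is released: comparing only the minor version component classifies '1.0.0' as older than 0.57. A minimal illustration of the before/after comparison logic (the function names are illustrative):

```python
def old_check(vptk):
    # Pre-fix: looks only at the minor component, so '1.0.0' -> 0 < 57.
    minor = int(vptk.split('.')[1])
    return minor < 57

def new_check(vptk):
    # Post-fix: compares (major, minor) lexicographically as a tuple.
    major, minor = [int(x) for x in vptk.split('.')[:2]]
    return (major, minor) < (0, 57)
```

With the old check, prompt_toolkit 1.0.0 would have triggered the "please update" warning; the tuple comparison handles it correctly.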


@@ -145,10 +145,10 @@ def subproc_toks(line, mincol=-1, maxcol=None, lexer=None, returnline=False):
tok.lexpos = len(line)
break
else:
if len(toks) > 0 and toks[-1].type in END_TOK_TYPES:
toks.pop()
if len(toks) == 0:
return # handle comment lines
if toks[-1].type in END_TOK_TYPES:
toks.pop()
tok = toks[-1]
pos = tok.lexpos
if isinstance(tok.value, string_types):
@@ -416,17 +416,44 @@ def suggestion_sort_helper(x, y):
return lendiff + inx + iny
def escape_windows_title_string(s):
"""Returns a string that is usable by the Windows cmd.exe title
    builtin. The escaping is based on details here and empirical testing:
def escape_windows_cmd_string(s):
"""Returns a string that is usable by the Windows cmd.exe.
    The escaping is based on details here and empirical testing:
http://www.robvanderwoude.com/escapechars.php
"""
for c in '^&<>|':
for c in '()%!^<>&|"':
s = s.replace(c, '^' + c)
s = s.replace('/?', '/.')
return s
def argvquote(arg, force=False):
""" Returns an argument quoted in such a way that that CommandLineToArgvW
on Windows will return the argument string unchanged.
This is the same thing Popen does when supplied with an list of arguments.
Arguments in a command line should be separated by spaces; this
function does not add these spaces. This implementation follows the
suggestions outlined here:
https://blogs.msdn.microsoft.com/twistylittlepassagesallalike/2011/04/23/everyone-quotes-command-line-arguments-the-wrong-way/
"""
if not force and len(arg) != 0 and not any([c in arg for c in ' \t\n\v"']):
return arg
else:
n_backslashes = 0
cmdline = '"'
for c in arg:
if c == '"':
cmdline += (n_backslashes * 2 + 1) * '\\'
else:
cmdline += n_backslashes * '\\'
if c != '\\':
cmdline += c
n_backslashes = 0
else:
n_backslashes += 1
return cmdline + n_backslashes * 2 * '\\' + '"'
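To make the quoting rules concrete, here are both helpers reproduced standalone (so the examples run without importing xonsh). Note in particular that an empty argument must still come back as a quoted pair, which is exactly what the `len(arg) != 0` guard ensures:

```python
def escape_windows_cmd_string(s):
    # Caret-escape cmd.exe metacharacters, as in the diff above.
    for c in '()%!^<>&|"':
        s = s.replace(c, '^' + c)
    s = s.replace('/?', '/.')
    return s

def argvquote(arg, force=False):
    # Quote one argument so CommandLineToArgvW recovers it unchanged.
    if not force and len(arg) != 0 and not any([c in arg for c in ' \t\n\v"']):
        return arg
    n_backslashes = 0
    cmdline = '"'
    for c in arg:
        if c == '"':
            # Backslashes before a quote are doubled, plus one backslash
            # to escape the quote itself.
            cmdline += (n_backslashes * 2 + 1) * '\\'
        else:
            cmdline += n_backslashes * '\\'
        if c != '\\':
            cmdline += c
            n_backslashes = 0
        else:
            n_backslashes += 1
    # Trailing backslashes are doubled so the closing quote stays a quote.
    return cmdline + n_backslashes * 2 * '\\' + '"'
```

For example, `argvquote('two words')` yields `"two words"` and `argvquote('')` yields `""`, while arguments without whitespace or quotes pass through untouched.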
def on_main_thread():
"""Checks if we are on the main thread or not."""
return threading.current_thread() is threading.main_thread()


@@ -9,7 +9,10 @@ import itertools
from pprint import pformat
from argparse import ArgumentParser
import ply
try:
import ply
except ImportError:
from xonsh import ply
from xonsh import __version__ as XONSH_VERSION
from xonsh import tools