| Commit message | Author | Age | Files | Lines |
... | |
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* eval.c (eval_init): nrot, rot intrinsics registered.
* lib.c (nrot, rot): New functions.
* lib.h (nrot, rot): Declared.
* tests/012/seq.tl: New test cases.
* txr.1: Documented.
* stdlib/doc-syms.tl: Updated.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* stdlib/op.tl (ret, aret): Simplify implementation, without
progn or @rest, or interpolation of multiple args.
We use identity* to allow the resulting function to
allow and ignore multiple arguments.
* txr.1: Strangely, an edit in commit 99131c676,
on Sep 26, 2014, reverted the more accurate equivalence
(ret x) <--> (op identity (progn @rest x))
back to the original documentation
(ret x) <--> (op identity x)
which matched an older implementation. Anyway, that's moot
now; the documentation is updated to give the new equivalence
via identity*.
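For illustration (example values are mine, not part of this
commit), the new equivalence means a function produced by ret
tolerates and ignores surplus arguments:
  (let ((f (ret 42)))
    (list [f] [f 1 2 3]))   ;; -> (42 42)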
|
|
|
|
| |
* stdlib/quips.tl (%quips%): New one.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* eval.c (eval_init): Register tuples* intrinsic.
* lib.c (tuples_star_func): New static function.
(tuples_star): New function.
* lib.h (tuples_star): Declared.
* tests/012/seq.tl: New test cases.
* txr.1: Documented.
* stdlib/doc-syms.tl: Updated.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* lib.c (make_like): In the COBJ case, recognize an iterator
object. Pull out the underlying object and recurse on it.
This is needed in tuples_func, where make_like will now be
called on the abstract iterator, rather than the actual
sequence object.
(tuples_func): The incoming object is now an iterator, and not
a sequence; we need to handle it with iter_more, iter_item and
iter_step.
(tuples): Instead of nullify, begin iteration with
iter_begin, and use iter_more to test for empty.
In the non-empty case, propagate the iterator through the lazy
cons car field, rather than the sequence.
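As an aside, the same protocol is visible at the Lisp level via the
correspondingly named iter-begin, iter-more, iter-item and iter-step
intrinsics; a rough sketch of the loop shape that tuples_func now
follows (not the actual C code):
  (let ((iter (iter-begin '(1 2 3 4 5))))
    (while (iter-more iter)
      (prinl (iter-item iter))
      (set iter (iter-step iter))))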
|
|
|
|
| |
* tests/012/seq.tl: Numerous test cases for tuples.
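For context, a hedged illustration of the behavior being tested
(example values are mine, not taken from the test file; the trailing
0 in the second call is the optional fill value):
  (tuples 3 '(1 2 3 4 5 6 7 8))     ;; -> ((1 2 3) (4 5 6) (7 8))
  (tuples 3 '(1 2 3 4 5 6 7 8) 0)   ;; -> ((1 2 3) (4 5 6) (7 8 0))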
|
|
|
|
|
|
|
| |
* lib.c (tuples): Check that the n argument giving the tuple size
is a positive integer.
* tests/012/seql.tl: Test case added.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This is related to the pattern in the previous commit. When we have a
situation like this:
  lab1
  mov tn nil
  lab2
  ifq tn nil lab4
  lab3
  gcall tn ...
We know that if lab1 is entered, then lab2 will necessarily
fall through: the lab4 branch is not taken because tn is nil.
But then, tn is clobbered immediately in lab3 by the gcall tn.
In other words, the value stored into tn by lab1 is never used.
Therefore, we can remove the "mov tn nil" instruction and
move the lab1 label:
  lab2
  ifq tn nil lab4
  lab1
  lab3
  gcall tn ...
There are 74 hits for this pattern in stdlib.
* stdlib/optimize.tl (basic-blocks late-peephole): Implement the
above pattern.
|
|
|
|
|
|
|
|
|
|
|
| |
* stdlib/optimize.tl (basic-blocks late-peephole): This pattern doesn't
match any more because of code removed by the previous commit.
If we shorten it by removing the lab1 block, then it matches.
Because the pattern is shorter, the reduction being performed
by the replacement is no longer needed; it has already been done.
The remaining value is that it threads the jump from lab3 to lab4.
That missing threading is what I noticed when evaluating the
effects of the previous patch; this change restores it.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* stdlib/optimize.tl (basic-blocks merge-jump-thunks): For each
group of candidate jump-blocks, search the entire basic block
list for one more jump block which is identical to the
others, except that it doesn't end in a jmp, but rather
falls through to the same target that the group jumps to.
That block is then included in the group, and also becomes the
default leader since it is pushed to the front.
(basic-blocks late-peephole): Remove the peephole pattern
which tried to attack the same problem. The new approach is
much more effective: when compiling stdlib, 77 instances occur in which
such a block is identified and added! The peephole pattern only
matched six times.
|
|
|
|
| |
* stdlib/quips.tl (sys:%quips%): New entries.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
There are six hits for this in stdlib, two of them in
optimize.tl itself. The situation is like:
  label1
  (instruction ...)
  (jmp label3)
  label2
  (instruction ...)
  label3
where (instruction ...) is identical in both places.
label1 and label2 are functionally identical blocks, which
means that the pattern can be rewritten as:
  label1
  label2
  (instruction ...)
  label3
When the label1 path is taken it's faster due to the
elimination of the jmp, and code size is reduced by two
instructions.
This pattern may possibly be the result of an imperfection in the
design of the basic-blocks method merge-jump-thunks.
The label1 and label2 blocks are functionally identical.
But merge-jump-thunks looks strictly for blocks that end in a
jmp instruction. It's possible that there was a jmp
instruction at the end of the label2 block, which got
eliminated before merge-jump-thunks, which is done late, just
before late-peephole.
* stdlib/optimize.tl (basic-blocks late-peephole): New rule
for the above pattern.
|
|
|
|
|
|
|
|
| |
* stdlib/getput.tl (file-get-buf, command-get-buf): If the
number of bytes to read is specified, we use an unbuffered
stream. A buffered stream can read more bytes in order to
fill a buffer, which is undesirable when dealing with a
device or pipe.
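A hedged usage sketch (the device path is hypothetical, and the
argument order shown is my assumption):
  (file-get-buf "/dev/hypothetical" 16)  ;; exactly 16 bytes, no read-ahead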
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* hash.c (randbox, hash_c_str, hash_buf): Separate
implementation for 64 bit pointers, using 64 bit random
values, and producing a 64 bit hash, taking in a 64 bit seed.
(gen_hash_seed): Use time_sec_nsec to get nanoseconds.
On 64 bit, put together the seed differently to generate
a wider value.
* tests/009/json.txr: Change from hash tables to lists,
so the order of the output doesn't change between 64 and 32
bits, due to the different string hashing.
* tests/009/json.expected: Updated.
* txr.1: Documented that seeds are up to 64 bits, but
with possibly only the lower 32 bits being used.
|
|
|
|
|
| |
* txr.1: Fix a bunch of instances of formatting like << foo)
which should be << foo ).
|
|
|
|
|
|
|
|
|
|
|
|
| |
* RELNOTES: Updated.
* configure (txr_ver): Bumped version.
* stdlib/ver.tl (lib-version): Bumped.
* txr.1: Bumped version and date.
* txr.vim, tl.vim: Regenerated.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
I've noticed a wasteful instruction pattern in the compiled
code for the sys:awk-code-move-check function:
  7: 2C020007 movsr t2 t7
  8: 3800000E if t7 14
  9: 00000007
  10: 20050002 gcall t2 1 t9 d1 t8 t6 t7
  11: 00090001
  12: 00080401
  13: 00070006
  14: 10000002 end t2
Here, the t2 register can be replaced with t7 in the gcall
and end instructions, and the movsr t2 t7 instruction can
be eliminated.
It looks like something that could somehow be targeted more generally
with a clever peephole pattern assisted by data-flow information,
but for now I'm sticking in a dumb late-peephole pattern which just
looks for this very specific pattern.
* stdlib/optimize.tl (basic-blocks late-peephole): Add new
pattern for eliminating the move, as described above.
There are several hits for this in the standard library in addition to
the awk module: in the path-test, each-prod and getopts files.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
In situations when the compiler evaluates a constant expression in order
to make some code generating decision, we don't just want to be using
safe-const-eval. While that prevents the compiler from blowing up, and
issues a diagnostic, it causes incorrect code to be generated: code
which does not incorporate the unsafe expression. Concrete example:
(if (sqrt -1) (foo) (bar))
if we simply evaluate (sqrt -1) with safe-const-eval, we get a
diagnostic, and the value nil comes out. The compiler will thus
constant-fold this to (bar). Though the diagnostic was emitted,
executing the compiled code does not produce the exception from
(sqrt -1) any more, but just calls bar.
In certain cases where the compiler relies on the evaluation of a
constant expression, we should bypass those cases when the expression is
unsafe.
In cases where the expression will be integrated into the output
code, we can test with constantp. The same is true in some other
mitigating circumstances. For instance if we test with constantp,
and then require safe-const-eval to produce an integer, we are
okay, because a throwing evaluation will not produce an integer.
* stdlib/compiler.tl (safe-constantp): New function.
(compiler (comp-if, comp-ift, lambda-apply-transform)): Use
safe-constantp rather than constantp for determining whether
an expression is suitable for compile-time evaluation.
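A minimal sketch of the idea (hypothetical name and shape, not the
actual stdlib/compiler.tl code): a form qualifies for compile-time
treatment only if it is constantp and a trial evaluation does not
throw.
  (defun sketch-safe-constantp (form)
    (and (constantp form)
         (ignerr (progn (eval form) t))))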
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
When the compiler evaluates constant expressions, it's
possible that they throw, for instance (/ 1 0).
We now handle it better; the compiler warns about it
and is able to keep working, avoiding constant-folding
the expression.
* stdlib/compiler.tl (eval-cache-entry): New struct type.
(%eval-cache%): New hash table variable.
(compiler (comp-arith-form, comp-fun-form)): Add some missing
rlcp calls to track locations for rewritten arithmetic
expressions, so we usefully diagnose a (sys:b/ ...) and such.
(compiler (comp-if, comp-ift, comp-arith-form,
comp-apply-call, reduce-constant, lambda-apply-transform)):
Replace instances of eval of constantp expressions with
safe-const-eval, and instances of the result of eval being
quoted with safe-const-reduce.
(orig-form, safe-const-reduce, safe-const-eval,
eval-cache-emit-warnings): New functions.
(compile-top-level, with-compilation-unit): Call
eval-emit-cache-warnings to warn about constant expressions
that threw.
squash! compiler: handle constant expressions that throw.
|
|
|
|
|
| |
* hash.c (hash_print_op): Only set the need_space flag if
some leading item is printed.
|
|
|
|
|
| |
* txr.1: The syntax synopsis for the hash function neglects
to mention the :weak-or and :weak-and symbols.
|
|
|
|
|
|
| |
* txr.1: The hash construction keyword is :weak-vals;
the keyword :weak-values is not recognized, yet mentioned
in three places in the documentation.
|
|
|
|
|
| |
* txr.1: The gensym function's argument doesn't have to be a
string. Plus other wording fixes.
|
|
|
|
|
|
|
|
| |
* txr.1: Fix spelling errors that have crept in due to
read-once and the quasiliteral fixes to matching.
* stdlib/doc-syms.tl: Forgotten refresh, needed by the
fix to the wrong random-float-inc name.
|
|
|
|
|
|
| |
* stdlib/compiler.tl (compiler comp-arith-neg-form): Instead
of the length check on the form, we can use a tree case to
require three arguments.
|
|
|
|
|
| |
* stdlib/compiler.tl (compiler comp-arith-neg-form): Remove
algebraically incorrect transformation.
|
|
|
|
|
|
|
| |
* stdlib/compiler.tl (compiler comp-arith-form): There is no
need here to pass the form through reduce-constant, since
we are about to divide up its arguments and individually reduce
them, much like what that function does.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Commit c8f12ee44d226924b89cdd764b65a5f6a4030b81 tried to fix
an aspect of this problem. I ran into an issue where the try
code produced a D register as its output, and this was
clobbered by the catch code. In fact, the catch code simply
must not clobber the try fragment's output register. No matter
what register that is, it is not safe. A writable T register
could hold a variable.
For instance, this infinitely looping code is miscompiled
such that it terminates:
  (let ((x 42))
    (while (eql x 42)
      (catch
        (progn (throw 'foo)
               x)
        (foo () 0))))
When the exception is caught by the (foo () 0) clause
x is overwritten with that 0 value.
The variable x is assigned to a register like t13,
and since the progn form returns x as its value, it
compiles to a fragment (tfrag) which indicates t13
as its output register.
The catch code wrongly borrows this register as its own output
register, placing the 0 value into it.
* stdlib/compiler.tl (compiler comp-catch): Get rid of the
coreg local variable, replacing all its uses with oreg.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The compiler is lifting top-level lambdas, such as those
generated by defun, using the load-time mechanism. This has
the undesirable effect of unnecessarily placing the lambdas
into a D register.
* stdlib/compiler.tl (*top-level*): New special variable.
This indicates that the compiler is compiling code that
is outside of any lambda.
(compiler comp-lambda-impl): Bind *top-level* to nil when
compiling lambda, so its interior is no longer at the
top level.
(compiler comp-lambda): Suppress the unnecessary lifting
optimization if the lambda expression is in the top-level,
outside of any other lambda, indicated by *top-level* being
true.
(compile-toplevel): Bind *top-level* to t.
|
|
|
|
|
|
|
|
| |
* stdlib/compiler.tl (compile-toplevel): Recently, I removed
the binding of *load-time* to t from this function. That is
not quite right; we want to positively bind it to nil. A new
top-level compile starts out in non-load-time. Suppose that
some compile-time evaluation recurses into the compiler; without
an explicit nil binding, that nested compile would wrongly inherit
a true *load-time* from the outer dynamic scope.
|
|
|
|
|
|
|
|
|
|
|
| |
* lib.c (less): We cannot directly access right->s.package
because the right operand can be nil. This causes a crash.
Furthermore, the separate NIL case is wrong. If the left
object is nil, the same logic must be carried out as for SYM.
The opposite operand might have the same name, and so packages
have to be compared. We simply merge the two cases, and make
sure we use the proper accessors symbol_name and
symbol_package to avoid blowing up on nil.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The checks for native Windows are incorrect, plus there
are some issues in the path-volume function.
We cannot check for native Windows at macro-expansion time simply by
calling (find #\\ path-sep-chars) because we compile on Cygwin where
that is false. What we must do is check for being on Windows at
macro-expansion time, and then in the "yes" branch of that decision, the
code must perform the path-sep-char test at run-time. In the "no"
branch, we can output smaller code that doesn't deal with Windows.
* stdlib/copy-file.tl (if-windows, if-native-windows): New macros, which
give clear syntax to the above-described testing.
(path-split): Use if-native-windows.
(path-volume): Use if-native-windows. In addition, fix some broken
tests. The tests for a UNC path "//whatever" cannot just test that the
first components are "", because that also matches the path "/".
It has to be that the first two components are "", and there are more
components. A similar issue occurs in the situation when there is
a drive letter. We cannot conclude that if the component after the
drive letter is "", then it's a drive absolute path, because that
situation occurs in a path like "c:" which is relative.
We also destructively manipulate the path to splice out the volume
part and turn it into a simple relative or absolute path. This is
because the path-simplify function doesn't deal with the volume prefix;
its logic, like eliminating .. navigations from the root, does not work if the
prefix component is present.
(rel-path): We handle a missing error case here: one path has volume
prefix and the other doesn't. Also the error cases that can only occur
on Windows are wrapped with if-windows to remove them at compile time.
|
|
|
|
|
|
|
|
| |
* stdlib/match.tl (compile-match): Handle
the (sys:expr (sys:quasi ...)) case by recursing on
the (sys:quasi ...) part, thus making them equivalent.
This fixes the newly introduced broken test cases, and meets
the newly documented requirements.
|
|
|
|
|
|
| |
* tests/011/patmatch.tl: Add failing test cases.
* txr.1: Document desired requirements.
|
|
|
|
|
|
|
| |
* stdlib/pic.tl (insert-commas): Use ifa to bind the
anaphoric variable it to [num (pred i)]. With the new
ifa behavior involving read-place, this now prevents
two accesses to the array.
|
|
|
|
|
|
|
|
|
|
| |
* stdlib/ifa.tl (ifa): When the form bound to the it
anaphoric variable is a place, such that we use placelet,
wrap the place in (read-once ...) so that multiple
evaluations of it don't cause multiple accesses of the
place.
* txr.1: Documented.
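A hedged example of the effect (vec and i are hypothetical):
  (ifa (plusp [vec i])
    (prinl it))
Here it denotes the place [vec i]; with the read-once wrapping, the
element is fetched once, rather than once for the plusp test and
again for the prinl.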
|
|
|
|
|
|
|
|
|
|
|
| |
* lisplib.c (place_set_entries): Trigger autoload on
read-once.
* stdlib/place.tl (read-once): New function and place.
* txr.1: Documented.
* stdlib/doc-syms.tl: Updated.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The following behavior is observed. When we clean the compiled files
using "make clean-tlo", then autoloading during completion does not
work reliably for some symbols like disassemble and compile.
The symbols don't complete, and afterward, the functions remain
undefined, and no longer autoload.
The root cause is that when some modules are loaded from source,
deferred warnings occur, due to code referring to symbols that
are defined later. But the provide_completions function installs
a catch for all exceptions, including deferred warnings. It
thereby abruptly terminates loads which trigger deferred warnings,
leaving them half-complete.
The fix is to catch only errors.
* parser.c (catch_error): New global variable.
(load_rcfile): Use catch_error from now on, instead of consing
this list locally.
(provide_completions): Use catch_error instead of catch_all.
(parse_init): gc-protect catch_error and initialize it.
|
|
|
|
|
|
|
|
|
|
|
|
| |
This function includes the 1.0 value excluded by random-float.
* rand.c (random_float_incl): New static function.
(rand_init): Register random_float_incl intrinsic.
* txr.1: Document, and add discussion about uniformity requirements
and what they mean and do not mean.
* stdlib/doc-syms.tl: Updated.
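Hedged usage note (assuming it accepts the same optional
random-state argument as random-float):
  (random-float-incl)   ;; -> a double in the inclusive range [0.0, 1.0]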
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* ffi.c (ffi_cptr_in, ffi_carray_in): New static functions.
(ffi_type_compile): Wire in new functions for dynamically
compiled cptr and carray types.
(ffi_init_types): Also, wire in ffi_cptr_in function for
the non-parametrized cptr type.
(carray_set_ptr): New function.
* ffi.h (carray_set_ptr): Declared.
* txr.1: Documented.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* stdlib/compiler.tl (compiler optimize): After the dataflow-driven
peephole optimization, call elim-dead-code again.
* stdlib/optimize.tl (basic-blocks check-bypass-empty): New method.
(basic-blocks elim-dead-code): After eliminating unreachable blocks
from the list, we use check-bypass-empty to squeeze out any
empty blocks: blocks that have no instructions in their list,
other than the leading label. This helps elim-next-jmp
to find more opportunities to eliminate a wasteful jump, because
sometimes these jumps straddle over empty blocks.
Furthermore, elim-next-jmp can generate more empty blocks itself;
so we check for this situation, delete the blocks and iterate.
|
|
|
|
|
|
|
|
|
|
|
| |
* stdlib/optimize.tl (basic-blocks elim-dead-code): When
clearing the links before recalculating the graph, also clear
the next field of every block, because link-graph only sets
this if necessary, assuming that the value is already nil.
Thus by not resetting it, we risk leaving stale values in
these .next fields. The code reachability calculation relies
on next fields, so if they falsely point to dead blocks, those
blocks could be falsely retained.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The parser wrongly reads #(#; abc) as (nil) instead of (), and
related cases derived from this one are all likewise wrong.
A number of tests added in the previous commit target this
and fail. They are hereby fixed.
* parser.y (listacc): In the productions that begin with
HASH_SEMI, do not produce a (nil . nil) leading cons, but a
(nao . nil) leading cons; so the fact that the first item is
commented out is represented by a nao in the car field of the
leading cons.
(n_exprs): If the first element of the list produced by the
listacc grammar symbol is nao, then pop it off. Thereby, we
lose the spurious nil that we previously had there left by the
commented-out item.
* y.tab.c.shipped: Updated.
|
|
|
|
| |
* tests/012/syntax.tl: New tests, some of which fail.
|
|
|
|
|
|
|
|
|
|
|
| |
* stdlib/compiler.tl (usr:compile-toplevel): Do not
bind *load-time* to t at the top level. The idea behind this
binding was to treat load-time as a transparent form that does
nothing if it occurs in the top-level since the top-level is
already at load-time. However, this is problematic because it
breaks the expectation that load-time calculations are
factored out of a form and done prior to its evaluation, even
if that form is top-level.
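A hedged illustration of that expectation (the function names are
hypothetical):
  (use-table (load-time (compute-table)))
Even when this call is itself a top-level form, (compute-table) is
expected to run before the surrounding form is evaluated, rather
than being folded in as an ordinary inline subexpression.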
|
|
|
|
|
|
| |
Add three tests; the first and third fail.
* tests/019/load-time.tl: New file.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The immediate problem is that with-dyn-lib creates a defvarl,
but deffi uses load-time forms to refer to that. In compiled
code, these load-time evaluations will occur before the
defvarl exists. The conceptual problem is that with-dyn-lib
might not be a top-level form. It can be conditionally
executed, as it happens in stdlib/doc-syms.tl, which is now
broken. Let's not use load-time, but straight lexical
environments.
* stdlib/ffi.tl (with-dyn-lib): Translate to a simple let
which binds sys:ffi-lib as a lexical variable.
(sys:with-dyn-lib-check): Use lexical-var-p to test whether
sys:ffi-lib is lexically bound as a variable.
(deffi, sys:deffi-cb-expander): Instead of global defvarl
variables, bind the needed pieces to lexical variables,
placing the generated defun into that scope.
|
|
|
|
|
|
|
|
|
| |
* ffi.c (align_sw_get, align_sw_end, align_sw_put_end,
align_sw_put): On Intel, PowerPC and also on ARM if certain
compiler options are in effect (set by the user building TXR,
not us), define these macros to do nothing. This shrinks and
speeds up all the functions which use these macros for
handling unaligned accesses.
|
|
|
|
|
|
|
|
|
| |
* stdlib/copy-file.tl (path-simplify): If the incoming path's
first component is "", it is absolute; in that case swallow
any components that go above.
* tests/018/path-equal.tl: Uncomment two previously failing
tests.
|
|
|
|
|
|
| |
* txr.1: Document the parenthesized pattern notation for
obtaining a negative number. Also, the escape syntax is placed
first, because it's a short section.
|