* lib.c (less_tab_init): Add missing initialization for VEC,
with a priority above CONS: all vectors are greater than
conses. The BUF priority is bumped to 7.
* test/012/less.tl: New file.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
* tree.c (tree_min_node, tree_min, tree_del_min_node,
tree_del_min): New functions.
(tree_init): tree-min-node, tree-min, tree-del-min-node,
tree-del-min: New intrinsics registered.
* tree.h (tree_min_node, tree_min, tree_del_min_node,
tree_del_min): Declared.
* txr.1: Documented.
* tests/010/tree.tl: New tests.
* stdlib/doc-syms.tl: Updated.
When duplicate keys are inserted in the default way with
replacement, the tree size must not be incremented.
* tree.c (tr_insert): Increment the tr->size and maintain
tr->max_size here. In the case of replacing an existing node,
do not touch the count.
(tree_insert_node): Remove unconditional size increment.
* tests/010/tree.tl: Add test cases covering duplicate
insertion and tree-count.
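The counting rule can be pictured with a minimal sketch (a plain unbalanced BST with hypothetical names, not TXR's actual tree code): the size is bumped only at the point where a fresh node is linked in, and the replacement branch leaves the count alone.

```c
#include <stddef.h>
#include <stdlib.h>

/* Minimal sketch, NOT TXR's tree implementation: insert replaces the
   value on a duplicate key.  Size bookkeeping lives where we know
   whether a fresh node was linked in, mirroring the fix of moving the
   increment into tr_insert and skipping it on replacement. */

struct node { int key, val; struct node *left, *right; };
struct tree { struct node *root; size_t size, max_size; };

static void tr_insert(struct tree *tr, struct node **pn, int key, int val)
{
    struct node *n = *pn;

    if (n == NULL) {
        n = malloc(sizeof *n);
        n->key = key; n->val = val;
        n->left = n->right = NULL;
        *pn = n;
        if (++tr->size > tr->max_size)   /* fresh node: count it */
            tr->max_size = tr->size;
    } else if (key < n->key) {
        tr_insert(tr, &n->left, key, val);
    } else if (key > n->key) {
        tr_insert(tr, &n->right, key, val);
    } else {
        n->val = val;                    /* replacement: size untouched */
    }
}

static void tree_insert(struct tree *tr, int key, int val)
{
    tr_insert(tr, &tr->root, key, val);
}
```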
* tree.c (tr_insert): New argument for allowing duplicates.
If it is true, suppresses the case of replacing a node,
causing the logic to fall through to traversing right, so the
duplicate key effectively looks like it is greater than the
existing duplicates, and gets inserted as the rightmost
duplicate.
(tr_do_delete_specific, tr_delete_specific): New static functions.
(tree_insert_node): New parameter, passed to tr_insert.
(tree_insert): New parameter, passed to tree_insert_node.
(tree_delete_specific_node): New function.
(tree): New parameter to allow duplicate keys in the elements
sequence.
(tree_construct): Pass t to tree to allow duplicate elements.
(tree_init): Update registrations of tree, tree-insert and
tree-insert-node. Register tree-delete-specific-node function.
* tree.h (tree, tree_insert_node, tree_insert): Declarations
updated.
(tree_delete_specific_node): Declared.
* lib.c (seq): Pass t argument to tree_insert, allowing
duplicates.
* parser.c (circ_backpatch): Likewise.
* parser.y (tree): Pass t to new argument of tree, so
duplicates are preserved in the element list of the #T
literal.
* y.tab.c.shipped: Updated.
* tests/010/tree.tl: Test cases for duplicate keys.
* txr.1: Documented.
* stdlib/doc-syms.tl: Updated.
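The fall-through trick can be illustrated with a toy unbalanced BST (hypothetical names, not the real balanced-tree code): when dup_ok is true, the key-equal case is suppressed, so an equal key behaves as if greater and lands to the right of all existing duplicates.

```c
#include <stdlib.h>

/* Sketch of the duplicate-insertion idea, NOT TXR's tree code: with
   dup_ok true, the key-equal branch is suppressed and the comparison
   falls through to the right subtree, so the new key is inserted as
   the rightmost duplicate. */

struct node { int key, val; struct node *left, *right; };

static struct node *tr_insert(struct node *n, int key, int val, int dup_ok)
{
    if (n == NULL) {
        struct node *fresh = malloc(sizeof *fresh);
        fresh->key = key; fresh->val = val;
        fresh->left = fresh->right = NULL;
        return fresh;
    }
    if (key < n->key)
        n->left = tr_insert(n->left, key, val, dup_ok);
    else if (key > n->key || dup_ok)     /* equal + dup_ok: act as greater */
        n->right = tr_insert(n->right, key, val, dup_ok);
    else
        n->val = val;                    /* default: replace */
    return n;
}

static int count_key(struct node *n, int key)
{
    if (n == NULL)
        return 0;
    return (n->key == key) + count_key(n->left, key) + count_key(n->right, key);
}
```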
* tree.c (tree_count): New function.
(tree_init): tree-count intrinsic registered.
* tree.h (tree_count): Declared.
* lib.c (length): Support search tree argument via tree_count.
* tests/010/tree.tl: Test cases for tree-count, indirectly via len.
* txr.1: Documented.
* lib.c (iter_reset): When we reinitialize the iterator, it can allocate
a new secondary object, e.g. using hash_begin, which is stored into the
iterator. This is potentially a wrong-way assignment in terms of GC
generations and so we must call mut(iter) to indicate that the object
has been suspiciously mutated. We only do this if the iterator has a
mark function. If it doesn't have one, then it isn't wrapping a heap
object, and so doesn't have this issue.
(seq_reset): This has the same issue, and the fix is the same. Since this
function is obsolescent, we don't bother doing the si->ops->mark check;
we optimize for code size instead.
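The hazard being fixed is the classic generational-GC "wrong-way" store. A toy model (hypothetical names; not TXR's gc) shows what a mut()-style call buys: a mature object that receives a pointer to a freshly allocated young object must be re-registered so the minor collection scans it again.

```c
#include <stddef.h>

/* Illustrative model of the generational hazard, NOT TXR's gc.  A
   minor collection scans only young objects, so a mature object that
   comes to point at a young one must be put on a remembered list;
   that is what a mut()-style call accomplishes.  (A real barrier
   would also check the youngness of the stored pointer.) */

enum gen { GEN_YOUNG, GEN_MATURE };

struct obj {
    enum gen gen;
    struct obj *ref;    /* wrapped heap object, e.g. hash_begin state */
};

#define REMEMBERED_MAX 64
static struct obj *remembered[REMEMBERED_MAX];
static size_t n_remembered;

/* mut(): declare that this object may now hold younger pointers */
static void mut(struct obj *o)
{
    if (o->gen == GEN_MATURE && n_remembered < REMEMBERED_MAX)
        remembered[n_remembered++] = o;
}

/* A reinit in the style of iter_reset: the iterator may receive a
   freshly allocated (young) state object, so the store is followed
   by mut() in case the iterator itself is mature. */
static void iter_reset_sketch(struct obj *iter, struct obj *fresh_state)
{
    iter->ref = fresh_state;   /* potentially a wrong-way store */
    mut(iter);                 /* make the minor gc scan iter again */
}
```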
Issue 1: the seq_iter_init_with_info function potentially allocates
an object via hash_begin or tree_begin and installs it into the
iterator. The problem is that under iter_begin, the iterator is
a heaped object; this extra allocation can trigger gc which pushes the
iterator into the mature generation; yet the assignment in
seq_iter_init_with_info is just a plain assignment without using the set
macro.
Issue 2: when gc is triggered in the above situations, it crashes
due to the struct seq_iter being incompletely initialized. The
mark function tries to dereference the si->ops pointer. Alas, this
is initialized in the wrong order inside seq_iter_init_with_info.
Concretely, tree_begin is called first, and then the
it->ops = &si_tree_ops assignment is performed, which means that if
the garbage collector runs under tree_begin, it sees a null it->ops
pointer. However, this issue cannot just be fixed here by rearranging
the code because that leaves Issue 1 unsolved. Also, this initialization
order is not an issue for stack-allocated struct seq_iters.
The fix for Issue 1 and Issue 2 is to reorder things in iter_begin.
Initialize the iterator structure first, and then create the iterator
cobj. Now, of course, that goes against the usual correct protocol for
object initialization. If we just do this re-ordering naively,
we have Issue 3: the familiar problem that the cobj() call triggers gc,
and the iterator object (e.g. from tree_iter) that has been stored into
the seq_iter structure is not visible to the GC, and is reclaimed.
* lib.c (iter_begin): Reorder the calls so that seq_iter_init_with_info
is called first, and then the cobj to create from it the heap-allocated
iterator, taking care of Issue 1 and Issue 2. To avoid Issue 3,
after initializing the structure, we pull out the vulnerable iterator
object into a local variable, and pass it to gc_hint(), to ensure that
the variable is spilled into the stack, thereby protecting it from
reclamation.
(seq_begin): This function has exactly the same issue, fixed in the
same way.
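The reordered protocol can be sketched as follows (hypothetical names suffixed _sk; not lib.c verbatim): fill in the structure completely first, keep the vulnerable sub-object alive in a stack slot across any allocating call, and only then allocate the heap-visible iterator from the finished structure.

```c
#include <stddef.h>
#include <stdlib.h>

/* Sketch of the reordered iter_begin protocol, hypothetical names. */

struct seq_iter_sk { const void *ops; void *state; };

static const int si_tree_ops_sk;               /* stand-in ops table */

static void gc_hint_sk(void *p) { (void) p; }  /* forces a stack spill */

static void *tree_begin_sk(void) { return malloc(16); } /* may run gc */

static struct seq_iter_sk *iter_begin_sk(void)
{
    struct seq_iter_sk si;

    /* 1. fully initialize on the stack, ops pointer first, so a gc
          inside the allocating sub-call never sees a half-built
          iterator */
    si.ops = &si_tree_ops_sk;
    si.state = tree_begin_sk();

    /* 2. keep the freshly made sub-object reachable from the stack
          while the next allocation can trigger gc */
    gc_hint_sk(si.state);

    /* 3. only now allocate the heap-visible iterator from the
          finished structure */
    struct seq_iter_sk *heaped = malloc(sizeof *heaped);
    *heaped = si;
    return heaped;
}
```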
* stdlib/compiler.tl (compile): The symbol-function function
returns true for lambda and that's where we are handling
lambda expressions. However, the (set (symbol-function ...) ...)
then fails: that requires a function name that designates
a mutable function location. Let's restructure the code with
match-case, and handle the lambda pattern separately via
compile-toplevel.
* stdlib/optimize.tl (basic-blocks late-peephole): In one
pattern, an instruction that is recognized as (jend ...)
is inadvertently rewritten to (end ...). Since this is the
last optimization stage, currently, and end and jend are
synonyms for the same opcode, it doesn't matter.
But it could turn into a bug; let's fix it.
* stdlib/optimize.tl (basic-block print): Print the label of the
next block, rather than the block itself. This reduces the
verbosity during debugging.
This work addresses the following issues, which cause compiled
functions to use more stack space than necessary.
Firstly, the compiler doesn't allocate registers tightly.
Every closure's registers can start at t2, but this isn't
done. Secondly, data flow optimizations eliminate registers,
leaving gaps in the register allocation. Code ends up strange:
you may see register t63 used in a function where the next
highest register below that is t39: evidence that a large
number of temporary variables got mapped to registers and
eliminated.
In this change, an optimization is introduced, active at
*opt-level* 6, which compacts the t registers for every
closure: every closure's registers are renumbered starting
from t2. Then, each closure's generating close instruction
is also updated to indicate the accurate number of registers,
ensuring no space is wasted on the stack when the closure
is prepared for execution.
* stdlib/compiler.tl (compiler optimize): At optimization
level 6, insert a call to basic-blocks compact-tregs
just before the instructions are pulled out and put through
the late peephole pass.
* stdlib/optimize.tl (basic-block): New slot, closer.
This is pronounced "clozer", as in one who closes.
For any basic block that is the head of a closure (the entry
point into the closure code), this slot is set to point
to the previous block: the one which ends in the close
instruction which creates this closure: the closer. This is
important because the close instruction can use t registers
for arguments, and those registers belong to the closure.
Those argument registers must be included in the renaming.
(basic-blocks): New slots closures and cl-hash. The former
lists the closure head basic blocks; all the basic blocks
which are the entry blocks of a closure. The cl-hash
associates each head block with a list of all the blocks
(including the head block).
(basic-blocks identify-closures): New method. This scans the
list of blocks, identifying the closure heads, and associating
them with their closers, to establish the bb.closures list.
Then for each closure head in the list, the graph is searched
to find all the blocks of a closure, and these lists are put
into the bb.cl-hash.
(basic-block fill-treg-compacting-map): This method scans
a basic block, ferreting out all of its t registers, and
adds renaming entries for them into a hash that is passed in.
If the block is the head of a closure, then the close
instruction of the block's closer is also scanned for t
registers: but only the arguments, not the destination
register.
(basic-block apply-treg-compacting-map): This method
renames the t register of a block using the renaming map.
It follows the registers in the same way as
fill-treg-compacting-map, and consequently also goes into
the close instruction to rename the argument registers, as
necessary. When tweaking the close instruction, it also
updates the instruction's number-of-tregs field with the
newly calculated number, which comes from the map size.
(basic-blocks compact-tregs): This method ties it together,
using the above methods to identify the basic blocks belonging
to closures, build register renaming maps for them, and then
apply the renaming.
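The renaming itself is a simple dense map. A C sketch (illustrative only; the real implementation is in TXR Lisp) shows registers renumbered from t2 in first-seen order, with the final map size serving as the closure's number-of-tregs:

```c
#include <stddef.h>

/* Sketch of t-register compaction: build a dense renaming that
   starts at t2, preserving first-seen order, then apply it to a
   stream of register operands.  m->n afterwards is the count that
   would go into the close instruction's number-of-tregs field. */

#define MAP_MAX 256

struct treg_map { int from[MAP_MAX]; int n; };

static int rename_treg(struct treg_map *m, int reg)
{
    for (int i = 0; i < m->n; i++)
        if (m->from[i] == reg)
            return i + 2;            /* compacted numbering starts at t2 */
    m->from[m->n++] = reg;
    return m->n - 1 + 2;
}

static void compact_tregs(struct treg_map *m, int *regs, size_t n)
{
    for (size_t i = 0; i < n; i++)
        regs[i] = rename_treg(m, regs[i]);
}
```

So a block using t63, t39 and t7 with nothing in between comes out using t2, t3 and t4.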
I discovered this by chance while searching for occurrences
of (let ,(zip ...) ...) or (let (,*(zip ...)) ...) in the
code base, noticing an incorrect one.
* stdlib/place.tl (sys:register-simple-accessor): Remove
spurious list around ,(zip temps args).
* tests/012/defset.tl: Test cases for define-accessor added.
* txr.1: Adding the missing requirement that each-match
and the other macros in that family must have an implicit
anonymous block around the body forms. This is a requirements
bug, effectively: the programmer expects these operators to be
consistent with the each operator, as part of the same family.
* match.tl (each-match-expander): Implement the requirement.
Since we are using mapping functions, we must use temporary
variables: the evaluation of the expressions which produce the
sequence argument values to the mapping functions must be
outside of the anonymous block. The block must surround only
the function call.
* tests/011/patmatch.tl: Add small test case covering this.
* eval.c (me_case): Reduce (key) to key only if key is
an atom. Otherwise we reduce ((a b c)), which
is a single list-valued key to (a b c), which looks like
three keys. This was introduced on Oct 25, 2017 in
commit b72c9309c8d8f1af320dce616a69412510531b48,
making it a regression.
* tests/012/case.tl: New file. The last test
case fails without this bugfix. The others pass either way.
* eval.c (eval_init): nrot, rot intrinsics registered.
* lib.c (nrot, rot): New functions.
* lib.h (nrot, rot): Declared.
* tests/012/seq.tl: New test cases.
* txr.1: Documented.
* stdlib/doc-syms.tl: Updated.
* stdlib/op.tl (ret, aret): Simplify implementation, without
progn or @rest, or interpolation of multiple args.
We use identity* to allow the resulting function to
allow and ignore multiple arguments.
* txr.1: Strangely, an edit in commit 99131c676,
on Sep 26, 2014, reverted the more accurate equivalence
(ret x) <--> (op identity (progn @rest x))
back to the original documentation
(ret x) <--> (op identity x)
which matched an older implementation. Anyway, that's moot
now; the documentation is updated to give the new equivalence
via identity*.
* stdlib/quips.tl (%quips%): New one.
* eval.c (eval_init): Register tuples* intrinsic.
* lib.c (tuples_star_func): New static function.
(tuples_star): New function.
* lib.h (tuples_star): Declared.
* tests/012/seq.tl: New test cases.
* txr.1: Documented.
* stdlib/doc-syms.tl: Updated.
* lib.c (make_like): In the COBJ case, recognize an iterator
object. Pull out the underlying object and recurse on it.
This is needed in tuples_func, where make_like will now be
called on the abstract iterator, rather than the actual
sequence object.
(tuples_func): The incoming object is now an iterator, and not
a sequence; we need to handle it with iter_more, iter_item and
iter_step.
(tuples): Instead of nullify, begin iteration with
iter_begin, and use iter_more to test for empty.
In the non-empty case, propagate the iterator through the lazy
cons car field, rather than the sequence.
* tests/012/seq.tl: Numerous test cases for tuples.
* lib.c (tuples): Check that the n argument giving the tuple size
is a positive integer.
* tests/012/seql.tl: Test case added.
This is related to the pattern in the previous commit. When we have a
situation like this:
lab1
mov tn nil
lab2
ifq tn nil lab4
lab3
gcall tn ...
We know that if lab1 is entered, then lab2 will necessarily
fall through: the lab4 branch is not taken because tn is nil.
But then, tn is clobbered immediately in lab3 by the gcall tn.
In other words, the value stored into tn by lab1 is never used.
Therefore, we can remove the "mov tn nil" instruction and
move the l1 label.
lab2
ifq tn nil lab4
lab1
lab3
gcall tn ...
There are 74 hits for this pattern in stdlib.
* stdlib/optimize.tl (basic-blocks late-peephole): Implement the
above pattern.
* stdlib/optimize.tl (basic-blocks late-peephole): This pattern doesn't
match any more because of code removed by the previous commit.
If we shorten it by removing the lab1 block, then it matches.
Because the pattern is shorter, the reduction being performed
by the replacement is no longer needed; it has already been done.
The remaining value is that it threads the jump from lab3 to lab4.
This missing threading is what I noticed when evaluating the
effects of the previous patch; this restores it.
* stdlib/optimize.tl (basic-blocks merge-jump-thunks): For each
group of candidate jump-blocks, search the entire basic block
list for one more jump block which is identical to the
others, except that it doesn't end in a jmp, but rather
falls through to the same target that the group jumps to.
That block is then included in the group, and also becomes the
default leader since it is pushed to the front.
(basic-blocks late-peephole): Remove the peephole pattern
which tried to attack the same problem. The new approach is
much more effective: when compiling stdlib, 77 instances occur in which
such a block is identified and added! The peephole pattern only
matched six times.
* stdlib/quips.tl (sys:%quips%): New entries.
There are six hits for this in stdlib, two of them in
optimize.tl itself. The situation is like:
label1
(instruction ...)
(jmp label3)
label2
(instruction ...)
label3
where (instruction ...) is identical in both places.
label1 and label2 are functionally identical blocks, which
means that the pattern can be rewritten as:
label1
label2
(instruction ...)
label3
When the label1 path is taken it's faster due to the
elimination of the jmp, and code size is reduced by two
instructions.
This pattern may possibly be the result of an imperfection in the
design of the basic-blocks method merge-jump-thunks.
The label1 and label2 blocks are functionally identical.
But merge-jump-thunks looks strictly for blocks that end in a
jmp instruction. It's possible that there was a jmp
instruction at the end of the label2 block, which got
eliminated before merge-jump-thunks, which is done late, just
before late-peephole.
* stdlib/optimize.tl (basic-blocks late-peephole): New rule
for the above pattern.
* stdlib/getput.tl (file-get-buf, command-get-buf): If the
number of bytes to read is specified, we use an unbuffered
stream. A buffered stream can read more bytes in order to
fill a buffer, which is undesirable when dealing with a
device or pipe.
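The underlying point is that an unbuffered reader issues read() calls for no more than the bytes still wanted, so nothing beyond the requested count is consumed from a pipe or device. A minimal sketch:

```c
#include <stddef.h>
#include <unistd.h>

/* Read exactly n bytes (or up to EOF/error) from fd.  Each read()
   asks for no more than the bytes still missing, so unlike a
   buffered stream, no extra input is pulled from a pipe or device
   to fill a buffer. */
static ssize_t read_exactly(int fd, void *buf, size_t n)
{
    size_t got = 0;
    while (got < n) {
        ssize_t r = read(fd, (char *) buf + got, n - got);
        if (r <= 0)
            break;              /* EOF or error: return short count */
        got += r;
    }
    return (ssize_t) got;
}
```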
* hash.c (randbox, hash_c_str, hash_buf): Separate
implementation for 64 bit pointers, using 64 bit random
values, and producing a 64 bit hash, taking in a 64 bit seed.
(gen_hash_seed): Use time_sec_nsec to get nanoseconds.
On 64 bit, put together the seed differently to generate
a wider value.
* tests/009/json.txr: Change from hash tables to lists,
so the order of the output doesn't change between 64 and 32
bits, due to the different string hashing.
* tests/009/json.expected: Updated.
* txr.1: Documented that seeds are up to 64 bits, but
with possibly only the lower 32 bits being used.
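The wider seed can be illustrated like this (a hypothetical mixing layout, not hash.c verbatim): on 64-bit targets the seconds occupy the high half while the nanoseconds perturb the low bits, so two startups in the same second still get different seeds.

```c
#include <stdint.h>

/* Illustrative 64-bit seed construction from seconds + nanoseconds.
   The exact mixing is a made-up example; the point is that the seed
   is wider than a 32-bit seconds-only value, and varies with the
   nanosecond component. */
static uint64_t gen_hash_seed_sketch(uint64_t sec, uint64_t nsec)
{
    return (sec << 32) ^ (nsec << 10) ^ nsec;
}
```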
* txr.1: Fix a bunch of instances of formatting like << foo)
which should be << foo ).
* RELNOTES: Updated.
* configure (txr_ver): Bumped version.
* stdlib/ver.tl (lib-version): Bumped.
* txr.1: Bumped version and date.
* txr.vim, tl.vim: Regenerated.
I've noticed a wasteful instruction pattern in the compiled
code for the sys:awk-code-move-check function:
7: 2C020007 movsr t2 t7
8: 3800000E if t7 14
9: 00000007
10: 20050002 gcall t2 1 t9 d1 t8 t6 t7
11: 00090001
12: 00080401
13: 00070006
14: 10000002 end t2
Here, the t2 register can be replaced with t7 in the gcall
and end instructions, and the movsr t2 t7 instruction can
be eliminated.
It looks like something that could somehow be targeted more generally
with a clever peephole pattern assisted by data-flow information,
but for now I'm sticking in a dumb late-peephole pattern which just
looks for this very specific pattern.
* stdlib/optimize.tl (basic-blocks late-peephole): Add new
pattern for eliminating the move, as described above.
There are several hits for this in the standard library in addition to
the awk module: in the path-test, each-prod and getopts files.
In situations when the compiler evaluates a constant expression in order
to make some code generating decision, we don't just want to be using
safe-const-eval. While that prevents the compiler from blowing up, and
issues a diagnostic, it causes incorrect code to be generated: code
which does not incorporate the unsafe expression. Concrete example:
(if (sqrt -1) (foo) (bar))
if we simply evaluate (sqrt -1) with safe-const-eval, we get a
diagnostic, and the value nil comes out. The compiler will thus
constant-fold this to (bar). Though the diagnostic was emitted,
executing the compiled code does not produce the exception from
(sqrt -1) any more, but just calls bar.
In certain cases where the compiler relies on the evaluation of a
constant expression, we should bypass those cases when the expression is
unsafe.
In cases where the expression will be integrated into the output
code, we can test with constantp. The same is true in some other
mitigating circumstances. For instance if we test with constantp,
and then require safe-const-eval to produce an integer, we are
okay, because a throwing evaluation will not produce an integer.
* stdlib/compiler.tl (safe-constantp): New function.
(compiler (comp-if, comp-ift, lambda-apply-transform)): Use
safe-constantp rather than constantp for determining whether
an expression is suitable for compile-time evaluation.
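The rule boils down to a tiny model (illustrative C, not the Lisp compiler): folding is attempted only when evaluation is known to succeed; a constant expression that would throw, like (/ 1 0), reports failure and is left in the output code so the compiled program still raises the error.

```c
#include <stdbool.h>

/* Toy constant evaluator over two-operand expressions.  A would-be
   fold that cannot be evaluated safely (division by zero here,
   standing in for a throwing Lisp form) reports failure, and the
   caller leaves the expression in the emitted code. */

struct expr { int op; int a, b; };     /* op: '+' or '/' */

static bool safe_const_eval(const struct expr *e, int *out)
{
    switch (e->op) {
    case '+':
        *out = e->a + e->b;
        return true;
    case '/':
        if (e->b == 0)
            return false;              /* would throw: do not fold */
        *out = e->a / e->b;
        return true;
    default:
        return false;                  /* not a constant form we know */
    }
}
```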
When the compiler evaluates constant expressions, it's
possible that they throw, for instance (/ 1 0).
We now handle it better; the compiler warns about it
and is able to keep working, avoiding constant-folding
the expression.
* stdlib/compiler.tl (eval-cache-entry): New struct type.
(%eval-cache%): New hash table variable.
(compiler (comp-arith-form, comp-fun-form)): Add some missing
rlcp calls to track locations for rewritten arithmetic
expressions, so we usefully diagnose a (sys:b/ ...) and such.
(compiler (comp-if, comp-ift, comp-arith-form,
comp-apply-call, reduce-constant, lambda-apply-transform)):
Replace instances of eval of constantp expressions with
safe-const-eval, and instances of the result of eval being
quoted with safe-const-reduce.
(orig-form, safe-const-reduce, safe-const-eval,
eval-cache-emit-warnings): New functions.
(compile-top-level, with-compilation-unit): Call
eval-cache-emit-warnings to warn about constant expressions
that threw.
* hash.c (hash_print_op): Only set the need_space flag if
some leading item is printed.
* txr.1: The syntax synopsis for the hash function neglects
to mention the :weak-or and :weak-and symbols.
* txr.1: The hash construction keyword is :weak-vals;
the keyword :weak-values is not recognized, yet mentioned
in three places in the documentation.
* txr.1: The gensym function's argument doesn't have to be a
string. Plus other wording fixes.
* txr.1: Fix spelling errors that have crept in due to
read-once and the quasiliteral fixes to matching.
* stdlib/doc-syms.tl: Forgotten refresh, needed by the
fix to the wrong random-float-inc name.
* stdlib/compiler.tl (compiler comp-arith-neg-form): Instead
of the length check on the form, we can use a tree case to
require three arguments.
* stdlib/compiler.tl (compiler comp-arith-neg-form): Remove
algebraically incorrect transformation.
* stdlib/compiler.tl (compiler comp-arith-form): There is no
need here to pass the form through reduce-constant, since
we are about to divide up its arguments and individually reduce
them, much like what that function does.
Commit c8f12ee44d226924b89cdd764b65a5f6a4030b81 tried to fix
an aspect of this problem. I ran into an issue where the try
code produced a D register as its output, and this was
clobbered by the catch code. In fact, the catch code simply
must not clobber the try fragment's output register. No matter
what register that is, it is not safe. A writable T register
could hold a variable.
For instance, this infinitely looping code is miscompiled
such that it terminates:
(let ((x 42))
(while (eql x 42)
(catch
(progn (throw 'foo)
x)
(foo () 0))))
When the exception is caught by the (foo () 0) clause
x is overwritten with that 0 value.
The variable x is assigned to a register like t13,
and since the progn form returns x as its value, it
compiles to a fragment (tfrag) which indicates t13
as its output register.
The catch code wrongly borrows this as its own output
register, placing the 0 value into it.
* stdlib/compiler.tl (compiler comp-catch): Get rid of the
coreg local variable, replacing all its uses with oreg.
The compiler is lifting top-level lambdas, such as those
generated by defun, using the load-time mechanism. This has
the undesirable effect of unnecessarily placing the lambdas
into a D register.
* stdlib/compiler.tl (*top-level*): New special variable.
This indicates that the compiler is compiling code that
is outside of any lambda.
(compiler comp-lambda-impl): Bind *top-level* to nil when
compiling lambda, so its interior is no longer at the
top level.
(compiler comp-lambda): Suppress the unnecessary lifting
optimization if the lambda expression is in the top-level,
outside of any other lambda, indicated by *top-level* being
true.
(compile-toplevel): Bind *top-level* to t.
* stdlib/compiler.tl (compile-toplevel): Recently, I removed
the binding of *load-time* to t from this function. That is
not quite right; we want to positively bind it to nil. A new
top-level compile starts out in non-load-time. Suppose that
some compile-time evaluation recurses into the compiler.
* lib.c (less): We cannot directly access right->s.package
because the right operand can be nil. This causes a crash.
Furthermore, the separate NIL case is wrong. If the left
object is nil, the same logic must be carried out as for SYM.
The opposite operand might have the same name, and so packages
have to be compared. We simply merge the two cases, and make
sure we use the proper accessors symbol_name and
symbol_package to avoid blowing up on nil.
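The shape of the fix is the null-safe accessor pattern, sketched here with hypothetical plain-string structures (not lib.c's actual symbol representation): the accessors absorb the nil case, and comparison orders by name first with the package as tie-breaker.

```c
#include <stddef.h>
#include <string.h>

/* Toy model of null-safe symbol comparison.  A NULL struct sym *
   stands in for the nil symbol; the accessors absorb that case so
   the comparison never dereferences a null pointer. */

struct sym { const char *name; const char *package; };

static const char *symbol_name(const struct sym *s)
{
    return s ? s->name : "nil";
}

static const char *symbol_package(const struct sym *s)
{
    return s && s->package ? s->package : "";
}

/* less: order by name; equal names are disambiguated by package */
static int sym_less(const struct sym *l, const struct sym *r)
{
    int c = strcmp(symbol_name(l), symbol_name(r));
    if (c != 0)
        return c < 0;
    return strcmp(symbol_package(l), symbol_package(r)) < 0;
}
```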
The checks for native Windows are incorrect, plus there
are some issues in the path-volume function.
We cannot check for native Windows at macro-expansion time simply by
calling (find #\\ path-sep-chars) because we compile on Cygwin where
that is false. What we must do is check for being on Windows at
macro-expansion time, and then in the "yes" branch of that decision, the
code must perform the path-sep-char test at run-time. In the "no"
branch, we can output smaller code that doesn't deal with Windows.
* stdlib/copy-file.tl (if-windows, if-native-windows): New macros,
which give a clear syntax to the above described testing.
(path-split): Use if-native-windows.
(path-volume): Use if-native-windows. In addition, fix some broken
tests. The tests for a UNC path "//whatever" cannot just test that the
first components are "", because that also matches the path "/".
It has to be that the first two components are "", and there are more
components. A similar issue occurs in the situation when there is
a drive letter. We cannot conclude that if the component after the
drive letter is "", then it's a drive absolute path, because that
situation occurs in a path like "c:" which is relative.
We also destructively manipulate the path to splice out the volume
part and turn it into a simple relative or absolute path. This is
because the path-simplify function doesn't deal with the volume prefix;
its logic, such as eliminating .. navigations from the root, does not
work if the prefix component is present.
(rel-path): We handle a missing error case here: one path has volume
prefix and the other doesn't. Also the error cases that can only occur
on Windows are wrapped with if-windows to remove them at compile time.
* stdlib/match.tl (compile-match): Handle
the (sys:expr (sys:quasi ...)) case by recursing on
the (sys:quasi ...) part, thus making them equivalent.
This fixes the newly introduced broken test cases, and meets
the newly documented requirements.
* tests/011/patmatch.tl: Add failing test cases.
* txr.1: Document desired requirements.
* stdlib/pic.tl (insert-commas): Use ifa to bind the
anaphoric variable it to [num (pred i)]. With the new
ifa behavior involving read-place, this now prevents
two accesses to the array.
* stdlib/ifa.tl (ifa): When the form bound to the it
anaphoric variable is a place, such that we use placelet,
wrap the place in (read-once ...) so that multiple
evaluations of it don't cause multiple accesses of the
place.
* txr.1: Documented.
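The bug class is the same one C programmers meet with function-like macros: an argument mentioned twice is evaluated twice. A small C analogy (hypothetical code, unrelated to ifa's implementation) of what wrapping the place in read-once achieves:

```c
/* An argument used twice in a macro body is accessed twice; copying
   it into a temporary first -- the moral equivalent of wrapping the
   place in (read-once ...) -- accesses it exactly once. */

static int accesses;

static int read_place(void)          /* stand-in for accessing `it` */
{
    return ++accesses, 42;
}

#define NAIVE_IT_OR_ZERO(x)  ((x) ? (x) : 0)

static int naive(void)
{
    return NAIVE_IT_OR_ZERO(read_place());   /* two accesses */
}

static int fixed(void)
{
    int it = read_place();           /* read once, reuse the copy */
    return it ? it : 0;
}
```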