|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* struct.tl (defstruct): When generating the lambda that
initializes slots from boa arguments, use (set
(qref obj slot) val) rather than slotset. The qref macro will
diagnose the use of nonexistent slots. Thus warnings are produced
for, say:
  (defstruct (point x y) nil)
where x and y have not been defined as slots, using the
imperfect checking of the qref implementation, which is better
than nothing.
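A hedged illustration of the effect (point and point2 are made-up names; the second definition is the well-formed counterpart):

  ;; draws warnings: boa parameters x and y name no slots
  (defstruct (point x y) nil)

  ;; no warnings: the slots exist
  (defstruct (point2 x y) nil
    x y)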
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
When a defmacro form is compiled, the entire form is retained
as a literal in the output. This is wasteful and gives away
the source code. In spite of that, errors in using the
macro are incorrectly reported against defmacro, because
that is the first symbol in the form. These issues stem from
which arguments are passed as the first two parameters of the
compiler's expand-bind-mac-params function, and from what
exactly it does with them. We make a tweak to that function,
as well as some tweaks to all of its calls.
* stdlib/compiler.tl (expand-bind-mac-params): There is
a mix-up here in that both the ctx-form and err-form
arguments end up in the compiled output. Let's
have only the first argument, ctx-form, go into the
compiled output. Thus that is what is inserted into
the sys:bind-mach-check call that is generated.
Secondly, ctx-form should not be passed to the constructor
for mac-param-parser. ctx-form is a to-be-evaluated
expression which might just be a gensym; we cannot use
it at compile time for error reporting. Here we must
use the second argument. Thus the second argument is now
used for only two purposes: copying the source location info
to the output code, and error reporting in
the mac-param-parser class. The second purpose is minor,
because the code has been passed through the macro expander
before being compiled, which has already caught such errors.
Thus the argument is renamed to rlcp-form, reflecting its
principal use.
(comp-tree-bind, comp-tree-case): Calculate a simplified
version of the tree-bind or tree-case form for error reporting
and pass that as the ctx-form argument of
expand-bind-mac-params. Just pass form as the second argument.
(comp-mac-param-bind, comp-mac-env-param-bind):
Just pass form as the second argument of
expand-bind-mac-params.
|
|
|
|
|
|
|
|
|
|
|
| |
* stdlib/optimize.tl (basic-blocks late-peephole):
The test of whether lab2 is used is bogus, and will
never be true. The correct test is simply whether
the block has two or more rlinks. This makes no
difference in the standard library images. When
the bug strikes, the manifestation is that
a needed label is deleted, resulting in an exception
from the assembler.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
2022-09-13 commit 6e354e1c2d5d64d18f527d52db75e344a9223d95,
subject "compiler: bugfixes in dead code elimination",
introduced a problem. By allowing the closure body blocks to
be included in the links of the previous basic block that ends
in the close instruction, it caused liveness info to flow
out of close blocks into the close instruction, which is
wrong. Thus registers used inside a closure, which are
entirely private, wrongly appear live outside of the closure,
interfering with optimizations like the elimination of dead
registers.
We can't simply roll back the commit, because the bug it
fixes would reappear. The fix is to pair the next field
with a prev field, and maintain both; don't rely on
the rlinks to point to the previous block.
* stdlib/optimize.tl (basic-block): New slot, prev.
(back-block join-block): As we delete the next block,
we must update that block's next block's prev link.
(basic-blocks link-graph): Build the prev links.
Fix the bug in handling the close instruction:
do not list the close body code among the links,
only the branch target of the close.
(basic-blocks do-peephole-block): In the few cases in
which we set bl.next to nil, we also set
bl.next.prev to nil, if bl.next exists.
(basic-blocks elim-dead-code): Also reset the bl.prev
of every block.
(basic-block check-bypass-empty): Here, we no longer
depend on rlinks containing the previous block;
prev gives it to us. So we move that fixup out
of the loop, and also fix up the next block's prev
pointer.
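A minimal sketch of the doubly-linked invariant being maintained, with illustrative names (the real basic-block struct carries more slots and the real code patches links in several places):

  (defstruct block nil
    insns next prev)

  ;; splicing a block out must patch both neighbours' links
  (defun unlink-block (bl)
    (when bl.prev (set bl.prev.next bl.next))
    (when bl.next (set bl.next.prev bl.prev)))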
|
|
|
|
|
|
|
| |
* tests/012/sort.tl: The larger input tests are
testing only lists, thus covering neither
quicksort nor the array binary merge. Cases
added.
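A hedged sketch of the flavor of case added (not the literal test code): a large vector input exercises quicksort via sort and the array binary merge via ssort.

  (let ((sorted (vec-list (range 1 1000)))
        (v (vec-list (shuffle (range 1 1000)))))
    (assert (equal (sort v) sorted))
    (assert (equal (ssort v) sorted)))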
|
|
|
|
|
|
|
|
| |
* lib.c (quicksort): Avoid calls to keyfun when
it's known to be identity.
(mergesort): Likewise. Also, avoid redundant
accesses to the vector when merging, for the index
which has not moved between iterations.
|
|
|
|
|
|
|
|
| |
* tests/010/sort.tl: File moved to tests/012.
The reason is that the 010 tests run under the
--gc-debug torture testing. This test case takes
far too long under that regime, because of the
testing of many permutations and whatnot.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* gc.c (prot_array): Add self pointer; arr member
becomes flexible array.
(prot_array_mark): We now check the handle itself for
null, because the whole thing is freed.
(prot_array_free): Function removed.
(prot_array_ops): Wire cobj_destroy_free_op in place
of prot_array_free. This fixes a memory leak because
prot_array_free was not freeing the handle, only
the array.
(gc_prot_array_alloc): Fix to allocate everything
in one swoop and store the self-pointer in the
named member rather than arr[-1]. The self argument
is not required; we drop it. The size argument cannot
be anywhere near INT_PTR_MAX, because such an array
wouldn't fit into virtual memory, so it is always
safe to add a small value to the size.
(prot_array_free): Obtain the self-pointer, and
free the handle, replacing it with a null pointer.
* gc.h (gc_prot_array_alloc): Declaration updated.
* lib.c (ssort_vec): Don't pass self to gc_prot_array_alloc.
* lib.h (container): New macro.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
For array-like objects, these functions use an
array-based merge sort, using an auxiliary array
equal in size to the original array.
To provide the auxiliary array, a new kind of very simple
vector-like object is introduced into the gc module: the
protected array. This looks like a raw dynamic C array of val
type, returned as a val *. Under the hood, there is a heap
object there, which makes the array traversable by the garbage
collector.
The whole point of this exercise is to make the new mergesort
function safe even if the caller-supplied functions misbehave
in such a way that the auxiliary array holds the only
references to heap objects.
* gc.c (struct prot_array): New struct.
(prot_array_cls): New static variable.
(gc_late_init): Register COBJ class, retaining it in
prot_array_cls.
(prot_array_mark, prot_array_free): New static functions.
(prot_array_ops): New static structure.
(prot_array_alloc, prot_array_free): New functions.
* gc.h (prot_array_alloc, prot_array_free): Declared.
* lib.c (mergesort, ssort_vec): New static functions.
(snsort, ssort): New functions.
* lib.h (snsort, ssort): Declared.
* tests/010/sort.tl: Cover ssort.
* txr.1: Documented.
* stdlib/doc-syms.tl: Updated.
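A rough usage sketch, assuming ssort takes the same optional lessfun and keyfun arguments as sort (snsort being the destructive counterpart):

  (ssort #(3 1 2 1))               ;; -> #(1 1 2 3)
  (ssort '("bb" "a" "ccc") : len)  ;; -> ("a" "bb" "ccc"), keyed by length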
|
|
|
|
|
|
| |
* lib.c (sort_vec): Take self argument instead of assuming
that we are sort; this can be called by nsort.
(nsort, sort): Pass self to sort_vec.
|
|
|
|
|
|
|
|
| |
* tests/010/sort.tl: Add some test cases with larger lists.
The exhaustive permutation tests are good, but only go
up to a relatively short length, where the median-of-three
doesn't even kick in. We also cover choosing an alternative
less function.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
I'm seeing about the same performance on a
sorted vector of integers, and 21% faster on a vector of N
random integers in the range [0, N).
Also, Hoare's original algorithm handles well the case
of an array consisting of a repeated value; the code we
are replacing degrades to quadratic time on such input.
* lib.c (med_of_three, middle_pivot): We don't use
the return value, so don't calculate and return one.
(quicksort): Revise to the Hoare scheme: scan from both ends
of the array, exchanging elements.
* tests/010/sort.tl: New file. We test sort with
lists and vectors of lengths zero through eight, over all
permutations.
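A hedged sketch of the exhaustive style of testing described above (the actual file differs):

  (each ((n (range 1 8)))
    (let ((sorted (range 1 n)))
      (each ((p (perm sorted)))
        (assert (equal (sort (copy p)) sorted))
        (assert (equal (sort (vec-list p)) (vec-list sorted))))))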
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
We don't have a function in the hash table module which can
create a populated hash table in one step without requiring
the caller to create auxiliary lists. This new function fills
that gap, albeit with some limitations.
* hash.c (hash_props): New function.
(hash_init): Register hash-props intrinsic.
* tests/010/hash.tl: New tests.
* txr.1: Documented.
* stdlib/doc-syms.tl: Updated.
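A hedged usage sketch, assuming the argument convention is alternating keys and values:

  (hash-props 'a 1 'b 2)   ;; -> hash mapping a -> 1 and b -> 2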
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* time.c (time_str_local, time_str_utc): New static functions.
(time_fields_local, time_fields_utc, time_struct_local,
time_struct_utc): The time argument becomes optional,
defaulting to the current time.
(time_init): Use time_s symbol instead of interning
twice. Register new time-str-local and time-str-utc
intrinsics. Fix registration of functions that take
optional args.
* txr.1: New functions documented; optional arguments
documented; existing documentation revised.
* stdlib/doc-syms.tl: Updated.
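A hedged usage sketch, assuming the new functions take a strftime-style format string followed by the now-optional time argument:

  (time-str-local "%Y-%m-%d %H:%M:%S")     ;; current local time
  (time-str-utc "%Y-%m-%dT%H:%M:%SZ" 0)    ;; -> "1970-01-01T00:00:00Z"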
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Quasiquote patterns not containing unquotes are not
working, because the parser transforms them into
quoted objects. For instance ^#S(time) becomes
the form (quote #S(time)) and not the
form (sys:qquote (sys:struct-lit time)).
The pattern matching compiler doesn't treat quote
specially, only sys:qquote.
* parser.y (unquotes_occur): Function removed.
(vector, hash, struct, tree, json_vals, json_pairs):
Remove use of unquotes_occur. Thus vector, hash,
struct, tree and JSON syntax occurring within a
backquote will be turned into a special literal
whether or not it contains unquotes.
* lib.c (obj_print_impl): Do not print the
form (sys:hash-lit) as #Hnil, but as #H().
* stdlib/match.tl (transform-qquote): Add a case
which handles ^#H(), as if it were ^#H(()).
Also a bugfix in the ^#H(() ...) case: the use of @(coll)
means it fails to match the empty syntax when
no key/value pairs are specified, whereas
@(all) respects vacuous truth.
* tests/011/patmatch.tl: A few new tests.
* y.tab.shipped, y.tab.h.shipped: Updated.
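A hedged example of the kind of pattern being fixed: a quasi hash pattern with no unquotes, which should match vacuously (if-match usage as in the pattern-matching documentation):

  (if-match ^#H() (hash) :matched :no-match)   ;; -> :matched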
|
|
|
|
|
|
|
|
| |
* stdlib/optimize.tl (basic-blocks local-liveness): Just
store the mask of defined registers into each live-info.
Do not propagate the defined mask from the next instruction
backwards. The way the defined mask is used in calc-liveness,
this makes no difference, and is simpler and faster.
|
|
|
|
|
|
|
|
| |
* stdlib/compiler.tl (compiler comp-call-impl): We can no longer
free the temporary registers as we go, based on whether the
argument expression frag uses them as its output register.
Let's just put them all into the aoregs list to be freed
afterward.
|
|
|
|
|
|
|
|
| |
* stdlib/optimize.tl (basic-blocks rename): When we stop
the renaming due to an end instruction and the src
being a v-reg, we can still do the rename in that end
instruction itself. If the v-reg becomes invalid, that
doesn't happen until after the instruction.
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* stdlib/optimize.tl (subst-preserve): Rename list param
to insn for clarity.
(careful-subst-preserve): New function. This is like
subst-preserve, but used only for instructions that
have destination registers. It performs a rewrite
such that those destination positions are avoided.
(basic-blocks rename): When the instruction has src
or dst as a target, don't just stop before that
insn. Do the substitution in the source operands using
careful-subst-preserve.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* stdlib/optimize.tl (basic-blocks do-peephole-block):
Remove the local function only-locally-used-treg.
This is unnecessary, because the optimization is valid
even if the treg is used in downstream basic blocks.
It was necessary in the old version of this optimization,
in which we deleted the first instruction that sets the
treg's value; we now depend on the treg being identified
as a dead register.
Also, move the rule to the end. The reason is that there
are cases in which the pattern matches, but it just
returns insns. That causes the rewrite macro to
march down to the next instruction, skipping other
patterns. This could be bad, unless the pattern is the
last one tried before the @else fallback.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Instead of the conservative strategy in compiler comp-var of
loading variables into t-registers, and relying on optimization
to remove them, let's just go back to the old way: variables
are just registers. For function calls, we can detect mutated
variables and generate the conservative code.
* stdlib/compiler.tl (frag): New slots vbin and alt-oreg.
When a variable access is compiled, the binding is recorded
in vbin, and the desired output register in alt-oreg.
(simplify-var-spy): New struct type, used for detecting
mutated lexical variables when we compile a function argument
list.
(compiler comp-var): Revert to the old compilation strategy
for lexicals: the code fragment is empty, and the output
register is just the v-reg. However, we record the variable
binding and remember the caller's desired register in the
new frag fields.
(compiler comp-setq): Also revert the strategy here.
Here we get our frag from a recursive compilation, so
we just annotate it.
(compiler comp-call-impl): Use the simplify-var-spy to
obtain a list of the lexical variables that were mutated.
This is used for rewriting the frags, if necessary.
(handle-mutated-var-args): New function. If the mutated-vars
list is non-empty, it rewrites the frag list. Every element
in the frag which is a compiled reference to a lexical
variable which is mutated over the evaluation of the arg list
is substituted with a conservative frag which loads the
variable into a temporary register. That register thus
samples the value of the variable at the correct point in the
left-to-right evaluation, so the function is called with
the correct values.
|
|
|
|
|
|
|
|
|
|
|
|
| |
This change is now possible due to the previous bugfix.
* stdlib/optimize.tl (basic-blocks rename): If
the source register is a v-reg, do not allow
the propagation past an end instruction. This
is a precaution because the end instruction
could be the end of the frame in which the
v-register is valid; we don't want to propagate
it outside of that frame.
|
|
|
|
|
|
|
|
|
| |
* stdlib/optimize.tl (basic-blocks rename): When
we encounter a close instruction, we must leave
it alone. The registers named in the argument area
of the instruction do not belong to the current
instruction stream or basic block; they belong to
the function body.
|
|
|
|
|
|
|
|
| |
* stdlib/optimize.tl (basic-blocks local-liveness): Handle all
instructions explicitly, with no catch-all behavior. Make a
copy of the live-info even for instructions that have no
source or destination operands, so that they aren't mistakenly
marked as having defs or refs.
|
|
|
|
|
|
|
|
|
|
|
| |
* stdlib/optimize.tl (live-info): Slot def replaced by def0
and def1.
(basic-blocks local-liveness): The local function def becomes
defs: it can take two defs. These become def0 and def1. In the
catch instruction case, we use both arguments, capture the
resulting live-info and use it to call refs.
(basic-blocks rename): Check whether either def0 or def1 is
the source or destination.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* stdlib/optimize.tl (basic-blocks local-liveness): When
processing a pure def, don't copy the live-info
unconditionally; that is wasteful, since if the destination
register is a t-reg, we will invoke (new live-info) to
make yet another live-info. Instead, let's destructively
mutate the incoming live-info from the instruction below,
and return a copy that is made before that mutation.
In the def-ref case, the local copy is entirely superfluous,
because in all cases we return a new object.
We also eliminate redundant (set [bb.li-hash insn] li)
evaluations.
|
|
|
|
|
|
|
|
|
| |
* stdlib/optimize.tl (basic-blocks local-liveness):
The exception symbol and argument registers in the
catch instruction are clobbers, not references.
We must treat them as defs. Unfortunately, the
instruction has two clobbers but live-info has
only one def slot, which should be fixed.
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* optimize.tl (rename): Instead of a mapping operation,
we perform the substitution only until we hit an
instruction that defines either the src or dst register.
(basic-blocks do-peephole-block): Drop the conditions
for doing the rename: that neither register can be
defined somewhere in the rest of the block. This
restriction is too limiting. We have to be careful now;
we cannot delete the first instruction, and must only
set the recalc flag and add to the rescan list if the
substitution did something, to avoid looping.
|
|
|
|
|
|
|
|
|
|
|
| |
* stdlib/optimize.tl (basic-blocks do-peephole-block): In the
unnecessary copying t-reg case, let's just stay away from
doing it if the source operand is a v-reg. It breaks under the
recent "eval order of variables" commit, indicating that the
conditions that it uses for replacing a v-reg with the t-reg
are not correct. The most likely reason is that the v-reg
can be assigned, but this doesn't show up in the liveness
info which tracks only t-regs.
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* stdlib/build.tl (sys:list-builder-flets, sys:build-expander,
build, buildn): Move to top of file. This resolves a circular
dependency triggered by the defstruct macro: it autoloads
struct.tl which autoloads other things, some of which depend
on the build macro. If we provide the build macro at the top,
everything is cool. The compiled version of build.tl doesn't
have this problem, because macro-time dependencies don't
affect compiled code. With this change, it's possible to
run the tests/012/compile.tl test case without stdlib
being compiled.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The purpose of this change is to help with the situation
in which an uncompiled stdlib is being used (.tl files, no .tlo)
and txr is invoked with -C <num> to select compatibility
with an old version. The problem with compatibility is
that it can break the library, due to the different
behavior of some macros like caseql.
Some test cases in the test suite use backwards compatibility,
and sometimes it is necessary to run the test suite against
the uncompiled library when debugging compiler work: situations
in which the compiler is too broken to build the library.
* autoload.c (autoload_try): Temporarily set the opt_compat
option to 0 (disabled) around autoload processing. Thus
the loading of library code in source form will not be
adversely affected by any syntax or macro level backward
compatibility hacks.
|
|
|
|
|
|
| |
* stdlib/optimize.tl (basic-blocks do-peephole-block): Use
pushnew instead of push in one peephole case, so the block
isn't pushed onto the tryjoin and rescan lists twice.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
We have the following problem: when function call argument
expressions mutate some of the variables that are being
passed as arguments, the left-to-right semantics isn't
obeyed. The problem is that the function call simply refers
to the registers that hold the variables, rather than to
the evaluated values. For instance (fun a (inc a)) will
translate to something like (gcall <n> (v 3) (v 3)),
which is incorrect: both argument positions refer to the
current value of a, whereas we need the left argument
to refer to the value before the increment.
* stdlib/compiler.tl (compiler comp-var): Do not announce the
variable's register as the output register, with null code.
Instead, indicate that the value is in the caller's output
register, and if necessary generate the move.
(compiler comp-setq): When compiling the right-hand side,
use the original output register, so that we don't end
up reporting the variable as the result location.
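A hedged illustration of the semantics at stake (not taken from the test suite): the first argument must see the value of a prior to the increment.

  (let ((a 1))
    (list a (inc a)))   ;; must yield (1 2), not (2 2)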
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* stdlib/compiler.tl (compiler): Remove discards slot.
(compile-in-toplevel, compile-with-fresh-tregs):
Do not save and restore discards.
(compiler maybe-mov): Method removed. It doesn't
require the compiler object so it can just be a function.
(maybe-mov): New function.
(compiler alloc-discard-treg): Method removed.
(compiler free-treg): No need to do anything with discards.
(compiler maybe-alloc-treg): No need to check discards.
(compiler comp-setq, comp-if, comp-ift, comp-switch,
comp-block, comp-catch, comp-let, comp-fbind,
comp-lambda-impl, comp-or, comp-tree-case,
comp-load-time-lit): Use the maybe-mov function instead of the method.
(compiler comp-progn): Use alloc-treg rather than
alloc-discard-treg, and use maybe-mov function.
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* stdlib/optimize.tl (basic-blocks num-blocks): New
method.
* stdlib/compiler.tl (compiler optimize): At optimization
level 6, instead of performing one extra pass of
jump threading, dead-code elimination and peephole
optimizations, keep iterating on these until the number
of basic blocks stays the same.
* txr.1: Documented.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* stdlib/optimize.tl (basic-blocks peephole-block): Drop the
code argument, and operate on bl.insns, which is stored
back. Perform the renames in the rename list after the
peephole pass.
(basic-blocks rename): New method.
(basic-blocks do-peephole-block): Implementation of
peephole-block, under a new name. The local function called
rename is removed; calls to it go to the new rename method.
(basic-blocks peephole): Simplify code around calls to
peephole-block; we no longer have to pass bl.insns to it,
capture the return value and store it back into bl.insns.
* stdlib/compiler.tl (*opt-level*): Initial
value changes from 6 to 7.
(compiler optimize): At optimization level 6,
we now do another jump threading pass, and
peephole, like at levels 4 and 5. The peephole
optimizations at level 5 make it possible
to coalesce some basic blocks in some cases,
and that opens up the possibility for more
reductions. The previously level 6 optimizations
are moved to level 7.
* txr.1: Updated documentation of optimization levels,
and default value of *opt-level*.
* stdlib/doc-syms.tl: Updated.
|
|
|
|
|
|
|
| |
* stdlib/optimize.tl (basic-blocks peephole-block): Move local
rename function into main labels block, so other optimizations
will be able to use it. Remove an unused argument, and change
the recursion to a mapcar, since that's what it's doing.
|
|
|
|
|
|
|
|
|
|
|
| |
Contrary to the documentation, the later clauses of a condlet
have the earlier clauses' variables in scope.
* stdlib/ifa.tl (sys:if-to-cond): Change to a different,
non-nesting expansion strategy. We lose the cond-oper
parameter.
(conda, condlet): Drop the second argument from calls
to if-to-cond.
|
|
|
|
|
|
|
|
| |
* unwind.c (uw_rthrow): Only issue the
"invalid re-entry of exception handling logic" diagnostic
and abort if the exception being processed is an error.
Warnings can occur during the execution of error
diagnosis.
|
|
|
|
|
| |
* stdlib/quips.tl (%quips%): Two new entries punning
on carcdr.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
In most places in the hash module, we reduce a hash
code into the power-of-two sized table using
h & (hash->modulus - 1). In some places we wastefully
perform the modulo operation h % hash->modulus. Let's
replace the modulus with a mask, so we can just do
h & hash->mask everywhere.
* hash.c (struct hash_ops): Replace modulus member with
mask, which has a value one less.
(hash_mark, hash_grow, do_make_hash, make_similar_hash,
copy_hash, gethash_c, gethash_e, remhash, clearhash,
hash_iter_next_impl, hash_iter_peek, do_weak_tables):
Work with mask rather than modulus, preserving existing
behavior.
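A small illustration of the identity being relied upon, for a power-of-two modulus (values arbitrary):

  (let ((h #x9e3779b9) (m 1024))
    (eql (mod h m) (logand h (pred m))))   ;; -> t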
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* RELNOTES: Updated.
* configure (txr_ver): Bumped version.
* stdlib/ver.tl (lib-version): Bumped.
* txr.1: Bumped version and date.
* txr.vim, tl.vim: Regenerated.
* time.c (struct tm_wrap): Fix for platforms without
HAVE_TM_ZONE. We still need tm_wrap defined, just
not the zone member. Out of the platforms I build
releases for, Solaris is the only one like this.
|
|
|
|
|
|
|
| |
* txr.1: Fix compiler-opts, *compiler-opts* and
with-compiler-opts to the correct names based on "compile":
compile-opts, *compile-opts* and with-compile-opts.
* stdlib/doc-syms.tl: Updated.
|
|
|
|
| |
* tests/010/range.tl: New file.
|
|
|
|
|
|
|
|
| |
* genvim.txr (txr_num): Somehow, in spite of all the complexity
and years of maintenance on this file, it generates syntax
files that fail to recognize decimal integer tokens and color
them the way floating-point, hex and octal tokens are colored.
We now add (back?) the rule for that.
|
|
|
|
|
|
| |
* genvim.txr (tl_ident): Remove one rule, and make
sure the other matches an optional : or #:
(txr_braced_ident): Match optional : or #: prefix.
|
|
|
|
|
|
|
|
|
|
| |
* txr.1: Update the range and range* documentation
to describe the new features. It turns out the documentation
is horrible. It says that the functions work with integers,
and doesn't mention that step can be a function, which has
been supported from the beginning. I'm also changing wording
which refers to the output being a lazy sequence to call it
what it is: a lazy list.
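A hedged sketch of the function-step behavior the documentation now covers (assuming, as described, that the step function maps each element to the next):

  (take 5 (range 1 : (op * 2)))   ;; -> (1 2 4 8 16)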
|
|
|
|
|
|
|
| |
* eval.c (range, range_star): Instead of a type switch,
use generic arithmetic. This includes user-defined
arithmetic objects. For that reason, in range_star, use
equal instead of eql.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* lib.h (arithp): Declared.
(plus_s): Existing symbol declared.
* arith.c (arithp): New function.
* struct.h (special_slot): New enum member plus_m.
* struct.c (special_sym): Register plus_s as the [plus_m]
entry of the array.
* tests/016/arith.tl, tests/016/ud-arith.tl: Tests for arithp.
* txr.1: Documented.
* stdlib/doc-syms.tl: Updated.
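A rough sketch of the intended usage; the exact set of objects that arithp accepts is as documented in txr.1:

  (arithp 1)       ;; -> t
  (arithp 2.0)     ;; -> t
  (arithp "abc")   ;; -> nil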
|
|
|
|
|
|
|
|
| |
* eval.c (range_func_fstep, range_func_fstep_inf,
range_func_iter, range_star_func_fstep,
range_star_func_iter): New static functions.
(range, range*): Analyze inputs and use the new functions
for non-numeric ranges.
|