* hash: fix broken copy_hash. (Kaz Kylheku, 2017-10-23; 1 file, -1/+15)

  Impact assessment: this bug affects the correctness of all programs which rely on copying hash tables. Direct reliance means the use of copy-hash, or using the generic copy function on hash objects. Indirect reliance occurs through hash-diff, which uses copy-hash. Nothing in TXR itself calls hash-diff. The listener's Tab completion relies on copy-hash for package-sensitive symbol visibility calculation. Since that is an interactive feature, the impact is low.

  * hash.c (copy_hash_chain): New static function.
  (copy_hash): Use copy_hash_chain instead of copy_alist, since the pairs are hash conses and not regular conses: they have a hash value field that must be copied.
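  The fix above hinges on hash chains being made of "hash conses" that carry a cached hash value alongside the key/value pair. Below is a minimal, self-contained C sketch of a chain copy that preserves that extra field; the struct layout and names are hypothetical stand-ins, not TXR's actual representation in hash.c.

    #include <stdlib.h>

    struct hash_cons {
        void *key;
        void *value;
        unsigned long hash;           /* cached hash code: must be copied too */
        struct hash_cons *next;
    };

    static struct hash_cons *copy_hash_chain(const struct hash_cons *chain)
    {
        struct hash_cons *copy = NULL, **tail = &copy;

        for (; chain != NULL; chain = chain->next) {
            struct hash_cons *node = malloc(sizeof *node);
            if (node == NULL)
                abort();              /* out of memory: bail in this sketch */
            node->key = chain->key;
            node->value = chain->value;
            node->hash = chain->hash; /* the field a plain alist copy would drop */
            node->next = NULL;
            *tail = node;
            tail = &node->next;
        }
        return copy;
    }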
* hash: remove pointless nullify ops. (Kaz Kylheku, 2017-10-23; 1 file, -4/+0)

  * hash.c (hash_assoc, hash_assql): Remove useless nullify calls. These are copy and paste leftovers, since these functions were based on assoc and assql, which handle sequences other than lists.
* New variant of op: lop. (Kaz Kylheku, 2017-10-19; 3 files, -6/+107)

  * lisplib.c (op_set_entries): Add lop to auto-load list.

  * share/txr/stdlib/op.tl (sys:op-expand): Recognize lop and implement its transformation.
  (lop): New macro.

  * txr.1: Documented.
* find_max: convert to use seq_info. (Kaz Kylheku, 2017-10-13; 1 file, -20/+17)

  * lib.c (find_max): Sequence classification rewritten to use seq_info. The cases are almost the same, but refer to si.obj rather than seq. Some care is taken in the list case to not hold a reference to the list head.

* rfind: rewrite to be like find. (Kaz Kylheku, 2017-10-13; 1 file, -11/+48)

  * lib.c (rfind): Instead of treating the sequence as a list, classify with seq_info just like find. Basically the whole function is replaced with an altered copy of find.

* find: convert to seq_info classification. (Kaz Kylheku, 2017-10-13; 1 file, -44/+36)

  * lib.c (find): Convert switch statement to use the seq_info function to classify the sequence. For SEQ_VECLIKE, we still check whether the original object is a literal or regular string to treat it specially.
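  The seq_info pattern that this group of conversions follows can be illustrated with a generic C sketch. Only SEQ_VECLIKE and si.obj are named in the log above; every other type, enum member and helper below is invented for illustration and should not be read as TXR's actual API.

    #include <stddef.h>

    struct cons { int car; struct cons *cdr; };
    struct vec  { size_t len; int *elem; };

    enum seq_kind { SEQ_NIL, SEQ_LISTLIKE, SEQ_VECLIKE, SEQ_NOTSEQ };

    struct seq_info {
        enum seq_kind kind;
        void *obj;                       /* underlying list or vector */
    };

    /* find-like walk: one switch on the classification, then a loop
       suited to that representation. */
    static int *find_sketch(struct seq_info si, int key)
    {
        switch (si.kind) {
        case SEQ_LISTLIKE: {
            /* advance a single cursor; no re-walking from the head */
            for (struct cons *it = si.obj; it; it = it->cdr)
                if (it->car == key)
                    return &it->car;
            return NULL;
        }
        case SEQ_VECLIKE: {
            struct vec *v = si.obj;
            for (size_t i = 0; i < v->len; i++)
                if (v->elem[i] == key)
                    return &v->elem[i];
            return NULL;
        }
        case SEQ_NIL:
        default:                         /* SEQ_NOTSEQ: real code signals an error */
            return NULL;
        }
    }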
* tprint and -t option: handle infinite list. (Kaz Kylheku, 2017-10-12; 2 files, -14/+41)

  Test case: txr -t '(gun "foo")' must run in constant memory.

  * eval.c (tprint): Rewritten to iterate over lists using an open-coded loop rather than mapdo. Classification of the sequence is done using the new seq_info, as it must be for all new sequence functions.

  * txr.c (txr_main): The implementation of -t, -p and -P captures the result of the expression in a variable whose value is zapped when it is passed to the function. A gc_hint is added so that this isn't optimized away. Thus, this code won't hold on to the original pointer to a lazy, infinite list.
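  The txr.c part of this change is essentially the "zap the local before the call" idiom. A small compilable sketch follows; eval_expr, tprint_fn and the gc_hint stub are invented stand-ins that only mimic the purpose described above, so treat this as an illustration of the idea rather than the actual code.

    typedef void *val;

    /* stand-ins for the real evaluator, printing function and GC hint */
    static val eval_expr(const char *src) { (void) src; return 0; }
    static void tprint_fn(val obj) { (void) obj; }
    static void gc_hint(val *slot) { (void) slot; }

    static val zap(val *slot)
    {
        val tmp = *slot;
        *slot = 0;               /* the caller's variable no longer pins the list head */
        return tmp;
    }

    static void run_t_option(const char *src)
    {
        val result = eval_expr(src);

        /* hand the only live reference to the printing function; with the
           local zapped (and the hint defeating dead-store elimination),
           the already-printed prefix of a lazy list can be collected as
           the callee walks it */
        tprint_fn(zap(&result));
        gc_hint(&result);
    }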
* Fixes in partition, partition*, split and split*. (Kaz Kylheku, 2017-09-29; 2 files, -116/+111)

  Bunch of issues here: broken pre-171 compatibility, non-termination on lazy infinite lists of indices, doc issues.

  * lib.c (partition_func, split_func, split_star_func): Do the check for negative index values here, with the compat handling for 170 or older.
  (partition_split_common): Remove code that tries to adjust negative indices, and delete zeros or indices that are still negative after adjustment. The code consumes the entire list of prefixes, so chokes on lazy lists. Also in the compat case, there is complete breakage: the loop doesn't execute, and so out is just nil, and it is taken as the index list.
  (partition_star_func): Similar change to the one in partition_func.
  (partition_star): Similarly to partition_split_common, take out the bogus loop. Also take out the loop that tries to remove leading negatives: we cannot do that because we haven't normalized them.

  * txr.1: Revised doc. Condensed by describing the index-list argument in detail under partition. For the other functions, we refer to that one. Conditions for safely handling an infinite list of indices are spelled out.
* Makefile: clean temporary file used in testing. (Kaz Kylheku, 2017-09-28; 1 file, -0/+1)

  * Makefile (clean): Remove $(TESTS_TMP) if it exists.

* Makefile: print failing command in condensed mode. (Kaz Kylheku, 2017-09-28; 1 file, -63/+82)

  When make output is condensed, showing a summary of each build step in the style "CC txr.c -> opt/txr.o", as is the case by default, the failing build command is now shown. Previously, a failed build had to be re-invoked with make VERBOSE=y to show the failing command.

  * Makefile (SH): New macro.
  (COMPILE_C, COMPILE_C_WITH_DEPS, LINK_PROG, WINDRES, INSTALL): These macros now invoke commands via SH rather than directly.
  (lex.yy.c, y.tab.h, y.tab.c, install-tests, %): Recipes for these targets use the SH macro for executing shell commands rather than specifying them directly.
  (tst/%.out, %.ok, %.expected): These test-related pattern rules also use SH.

* cleanup: remove unnecessary header includes. (Kaz Kylheku, 2017-09-19; 4 files, -7/+0)

  * eval.c: doesn't need rand.h.

  * filter.c: doesn't need gc.h.

  * parser.l: doesn't need eval.h.

  * parser.y: doesn't need utf8.h, stream.h, args.h or cadr.h.

* Version 186. (tag: txr-186; Kaz Kylheku, 2017-09-16; 6 files, -472/+522)

  * RELNOTES: Updated.

  * configure, txr.1: Bumped version and date.

  * share/txr/stdlib/ver.tl: Likewise.

  * txr.vim, tl.vim: Regenerated.
* places: use Lisp-1 macroexpansion where needed. (Kaz Kylheku, 2017-09-15; 1 file, -2/+2)

  A test case for this very subtle bug is this:

    (let ((v (list 1 2 3)))
      (symacrolet ((x v))
        (flet ((x () 42))
          (set [x 0] 0))))

  Because x is being evaluated in the DWIM brackets which flatten the two namespaces into one, it must be treated as a reference to the flet, and so [x 0] denotes the function call. The assignment is erroneous.

  The incorrect behavior being fixed is that the places code macro-expands x in the Lisp-2 style, under which the symacrolet is not shadowed by the flet. The substitution of v takes place, and the assignment assigns to [v 0].

  * share/txr/stdlib/place.tl (sys:l1-setq, sys:l1-val): Use macroexpand-lisp1 rather than macroexpand.
* doc: issues in qquote example. (Kaz Kylheku, 2017-09-14; 1 file, -2/+2)

  * txr.1: fix flaw in comment next to ^(qquote (unquote ,x)). Clarify accompanying text.

* doc: improve example under regsub (Kaz Kylheku, 2017-09-14; 1 file, -1/+1)

  * txr.1: instead of (op r^ ...) we can use (fr^ ...).

* doc: grammar under make-zstruct. (Kaz Kylheku, 2017-09-14; 1 file, -1/+1)

  * txr.1: singularize inappropriate plural.

* doc: move away from "text processing". (Kaz Kylheku, 2017-09-14; 1 file, -7/+6)

  * txr.1: Change title and heading to just "programming language". Opening paragraph explains TXR as being a programming language supporting multiple paradigms.

* doc: mention FFI early. (Kaz Kylheku, 2017-09-14; 1 file, -0/+3)

  * txr.1: Introductory paragraphs mention FFI.
* bugfix: fixnum crackdown. (Kaz Kylheku, 2017-09-13; 5 files, -26/+54)

  The purpose of this commit is to address certain situations in which code is wrongly relying on a cnum value being in the fixnum range (NUM_MIN to NUM_MAX), so that num_fast can safely be used on it.

  One wrong pattern is that c_num is applied to some Lisp value, and that value (or one derived from it arithmetically) is then passed to num_fast. The problem is that c_num succeeds on integers outside of the fixnum range. Some bignum values convert to a cnum successfully. Thus either num has to be used instead of num_fast, or else the original c_num attempt must be replaced with something that will fail if the original value isn't a fixnum. (In the latter case, any arithmetic on the fixnum cannot produce a value outside of that range.) A sketch of such a checked conversion appears after this entry.

  * buf.c (buf_put_bytes): The size argument here is not guaranteed to be in fixnum range: use num.

  * combi.c (perm_init_common): Throw if the sequence length isn't a fixnum. Thus the num_fast in perm_while_fun is correct, since the ci value is bounded by k, which is bounded by n.

  * hash.c (hash_grow): Remove dubious assertion which aborts the run-time if the hash table doubling overflows. Simply don't allow the modulus to grow beyond NUM_MAX. If doubling it makes it larger than NUM_MAX, then just don't grow the table. We need the modulus to be in fixnum range, so that uses of num_fast on the modulus value elsewhere are correct.
  (group_by, group_reduce): Use c_fixnum rather than c_num to extract a value that is later assumed to be a fixnum.

  * lib.c (c_fixnum): New function.
  (nreverse, reverse, remove_if, less, window_map_list, sort_vec, unique): Use c_fixnum rather than c_num to extract a value that is later assumed to be a fixnum.
  (string_extend): Use c_fixnum rather than c_num to extract a value that is later assumed to be a fixnum. Cap the string allocation size to fixnum range rather than INT_PTR_MAX.
  (cmp_str): The wcscmp function could return values outside of the fixnum range, so we must use num, not num_fast.

  * lib.h (c_fixnum): Declared.
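  A rough sketch of what a checked conversion in the spirit of c_fixnum might look like, assuming hypothetical NUM_MIN/NUM_MAX bounds and a simple abort in place of throwing a Lisp exception. It only shows the distinction between "fits in a cnum" and "is in fixnum range"; the real function operates on Lisp values.

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>

    typedef intptr_t cnum;

    #define NUM_MAX (INTPTR_MAX / 4)      /* placeholder fixnum bounds */
    #define NUM_MIN (INTPTR_MIN / 4)

    static cnum c_fixnum_sketch(cnum raw)
    {
        /* a c_num-style conversion accepts any integer that fits in a cnum,
           including values only a bignum can represent in fixnum terms;
           here we reject anything outside the fixnum range */
        if (raw < NUM_MIN || raw > NUM_MAX) {
            fprintf(stderr, "value %jd is out of fixnum range\n", (intmax_t) raw);
            abort();                      /* real code throws a Lisp exception */
        }
        return raw;
    }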
* regex: bugfix: squash duplicates in move set. (Kaz Kylheku, 2017-09-13; 1 file, -2/+1)

  * regex.c (nfa_move_closure): The move set calculation is wrongly assuming that all of the states are new and not testing their visited color. This could result in the same state being added twice. Though harmless, it wastefully inflates the set size.
* regex: factor out repeated visit-coloring pattern. (Kaz Kylheku, 2017-09-13; 1 file, -13/+15)

  * regex.c (nfa_test_set_visited): New inline function.
  (nfa_map_states, nfa_thread_epsilons, nfa_closure, nfa_move_closure): Use the function instead of open-coding the pattern which tests the state and sets the visited member.
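  A hedged sketch of the test-and-set helper described above, on a toy state struct; the real nfa_test_set_visited in regex.c may differ in signature and return convention. The idea is that each traversal uses a fresh counter value as its color, so visited marks never need to be cleared between passes.

    #include <stdbool.h>

    struct nfa_state_sketch {
        unsigned visited;            /* color of the last traversal that saw this state */
        /* ... transitions, state kind, etc. ... */
    };

    static bool nfa_test_set_visited_sketch(struct nfa_state_sketch *s,
                                            unsigned visited)
    {
        if (s->visited == visited)
            return true;             /* already part of this traversal */
        s->visited = visited;
        return false;                /* first visit: caller processes the state */
    }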
* regex: re-introduce nfa_accept states. (Kaz Kylheku, 2017-09-13; 1 file, -13/+14)

  The nfa_accept state label is re-introduced. This state type has the same representation as nfa_empty; essentially, this replaces the flag. This makes the state type smaller, and we don't have to access the flag to tell if a state is an acceptance state.

  * regex.c (nfa_kind_t): New enum label, nfa_accept.
  (struct nfa_state_empty): Member accept removed.
  (nfa_accept_state_p): Macro tests only for nfa_accept type.
  (nfa_empty_state_p): New macro.
  (nfa_state_accept): Set type of new state to nfa_accept; do not set accept flag.
  (nfa_state_empty): Do not set accept flag.
  (nfa_state_empty_convert): Do not clear accept flag.
  (nfa_map_states): Handle nfa_accept in switch, in the same case as nfa_empty.
  (nfa_thread_epsilons): Don't test for accept state in nfa_empty case; it would be always false now. Add nfa_accept case to switch which only arranges for a traversal of the two transitions. (Though these are expected to be null at the stage of the graph when this function is applied.)
  (nfa_fold_accept): Switch type to nfa_accept rather than setting accept flag.
  (nfa_closure, nfa_move_closure): Use new macro for testing whether a state is empty.

* doc: more notes on regex % operator syntax. (Kaz Kylheku, 2017-09-12; 1 file, -0/+34)

  * txr.1: The dual precedence of % leads to surprises; when parentheses are used around % expressions, they don't behave symmetrically on both sides.

* regex: retain unoptimized form for printing. (Kaz Kylheku, 2017-09-12; 1 file, -5/+1)

  * regex.c (regex_compile): Take the source code to be the original code, rather than the version with AST-level optimizations and expansions related to the nongreedy operator.

* regex: bug printing #/abc(def|ghi)/ (Kaz Kylheku, 2017-09-12; 1 file, -1/+1)

  This was broken by the July 16 commit "regex: don't print superfluous parens around classes", 2411f779f47c441659720ad0ddcabf91df1d2529.

  * regex.c (print_rec): If an (or ...) appears as a compound element, it must be rendered in parentheses; or_s must be handled here just like and_s. Prior to the faulty commit, this was implicitly true because the logic was inverted and wasn't ruling out or_s.
* regex: accept-folding optimization. (Kaz Kylheku, 2017-09-12; 1 file, -0/+23)

  In this optimization, we identify places in the NFA graph where empty states transition to accept states. We eliminate these transitions and turn the empty states into accept states. Accept states which thereby become unreachable from the start state are pruned away.

  * regex.c (nfa_fold_accept): New static function.
  (nfa_optimize): Add a pass which applies nfa_fold_accept to all states.
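  Here is a toy, deliberately conservative C sketch of the folding step (fold only when every outgoing epsilon edge of the empty state reaches an accept state); the types are invented, and the pruning of unreachable accept states is left to a separate pass, as the entry above says.

    #include <stddef.h>

    enum kind_sketch { KIND_EMPTY, KIND_ACCEPT, KIND_OTHER };

    struct state_sketch {
        enum kind_sketch kind;
        struct state_sketch *eps0, *eps1;   /* epsilon edges; either may be NULL */
    };

    static void fold_accept_sketch(struct state_sketch *s)
    {
        int has_edge = (s->eps0 != NULL) || (s->eps1 != NULL);
        int all_accept = (s->eps0 == NULL || s->eps0->kind == KIND_ACCEPT)
                      && (s->eps1 == NULL || s->eps1->kind == KIND_ACCEPT);

        if (s->kind == KIND_EMPTY && has_edge && all_accept) {
            /* reaching this state already implies a match, so make it an
               accept state and drop the now-redundant epsilon hops; accept
               states left unreachable get pruned elsewhere */
            s->kind = KIND_ACCEPT;
            s->eps0 = s->eps1 = NULL;
        }
    }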
* regex: eliminate nfa_accept state type. (Kaz Kylheku, 2017-09-12; 1 file, -20/+26)

  Acceptance states are instead represented by adding an accept flag to nfa_empty states. This will support an optimization which eliminates accept states that are referenced only by empty states.

  * regex.c (nfa_kind_t): Enum member nfa_accept removed.
  (struct nfa_state_accept): Renamed to nfa_state_any.
  (struct nfa_state_empty): New member, accept.
  (union nfa_state): Member a changes from struct nfa_state_accept to struct nfa_state_any.
  (nfa_accept_state_p): New macro.
  (nfa_state_accept): Now makes an nfa_empty type state, with no transitions out and the accept flag set.
  (nfa_state_empty): Initialize accept flag to zero.
  (nfa_state_empty_convert): Set the accept flag to zero.
  (nfa_state_merge): Use new macro in assertion.
  (nfa_map_states): Remove nfa_accept switch case.
  (nfa_thread_epsilons): We must not eliminate an empty state which is an acceptance state, even if it meets the other conditions.
  (nfa_closure): Use a local variable to eliminate repetition of the set[i] expression. Test for accept state using new macro.
  (nfa_move_closure): Test for accept state using new macro.
* regex: epsilon-threading optimization on NFA graph. (Kaz Kylheku, 2017-09-12; 1 file, -1/+78)

  This is something done for the sake of faster NFA simulation. I think it's glossed over in regex literature because it makes no difference in an NFA to DFA transformation. Fewer states in the graph means less work in the nfa_move_closure function at regex run time.

  In the NFA graph, there can occur empty states which are useless. These are states which hold only one epsilon transition. Those states can be replaced by whatever state their epsilon transition points to.

  The terminology is inspired by "jump threading" in code optimization, whereby if a branch takes place to an instruction which holds an unconditional branch, the original branch can be retargeted to go to the ultimate target. I.e. we turn:

          x        e
      S0 ---> S1 ----> S2

  where e denotes an epsilon transition, and x represents any kind of transition, into:

          x
      S0 ---> S2

  We can also eliminate empty states which have no transition; those can be replaced by a null pointer. I suspect these do not actually occur. (A toy sketch of this retargeting appears after this entry.)

  * regex.c (nfa_thread_epsilons, nfa_noop, nfa_optimize): New static functions.
  (regex_compile): Invoke nfa_optimize on the results of nfa_compile_regex.
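  A toy C sketch of the retargeting step on an invented state representation; TXR's nfa_thread_epsilons operates on its own structures and uses visited marking rather than the crude hop limit used here.

    #include <stddef.h>

    enum tkind { T_EMPTY, T_OTHER };

    struct tstate {
        enum tkind kind;
        struct tstate *eps0, *eps1;       /* epsilon edges (empty states only) */
    };

    /* follow chains of single-epsilon empty states to their final target;
       the hop limit is a crude stand-in for the visited marking a real
       implementation would use to stay safe on cyclic graphs */
    static struct tstate *thread_target(struct tstate *s)
    {
        int hops = 0;

        while (s && s->kind == T_EMPTY && s->eps0 && !s->eps1 && hops++ < 1000)
            s = s->eps0;
        return s;
    }

    /* retarget one outgoing edge; callers apply this to every edge of
       every state in the graph */
    static void thread_edge(struct tstate **edge)
    {
        *edge = thread_target(*edge);
    }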
* regex: elide needless increments of visited counter. (Kaz Kylheku, 2017-09-12; 1 file, -5/+5)

  * regex.c (nfa_run): Start with the existing value of the visited counter and pre-increment it when calling nfa_closure and nfa_move_closure. Thus it is incremented only as many times as those functions are called: one fewer than before.
  (regex_machine_reset): regm->n.visited is already incremented, since 1 was added to the prior value; there is no need to increment it again.
  (nfa_handle_wraparound): More prudent approach here. If we get within 8 increments of wraparound, we fast forward to UINT_MAX and then 0.
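  A very rough sketch of the wraparound arithmetic described for nfa_handle_wraparound; the re-coloring of the graph's states is only indicated by a comment, and the exact threshold and return convention are guesses rather than the actual code.

    #include <limits.h>

    static unsigned handle_wraparound_sketch(unsigned visited)
    {
        if (visited >= UINT_MAX - 8) {
            /* ... re-color every state in the graph to UINT_MAX here, so no
               state is left holding a soon-to-be-reused counter value ... */
            return 0;       /* fast-forward: the next traversal starts fresh */
        }
        return visited;
    }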
* regex: bugfix: incorrect use of nfa_move_closure. (Kaz Kylheku, 2017-09-12; 1 file, -1/+1)

  This goes back to the November 2, 2009 commit "Start of implementation for freestyle matching", 6191fbb2ca7a9ac339dd3994bdea8273ceb0a24d. It is exposed if we perform an epsilon threading optimization on the NFA graph.

  * regex.c (regex_machine_feed): We must pass the new, incremented value of the visited counter to nfa_move_closure. States have already been tagged with the old value before the call to regex_machine_feed, so we risk failing to visit some states and include them in the closure.
* regex: new function, regex-prefix-match. (Kaz Kylheku, 2017-09-11; 3 files, -0/+118)

  This new function allows a program to determine whether a given string is the prefix of any of the strings denoted by a regular expression; or, in other words, whether a given string is the prefix of a possibly longer string which matches a regular expression.

  * regex.c (regex_machine_infer_init_state): New static function.
  (regex_prefix_match): New function.
  (regex_init): regex-prefix-match intrinsic registered.

  * regex.h (regex_prefix_match): Declared.

  * txr.1: Documented.
* regex: remove nfa_reject representation. (Kaz Kylheku, 2017-09-11; 1 file, -40/+36)

  Reject states are no longer explicitly represented in the NFA graph. Any transition that previously would have led to a reject state is just a null pointer now: no transition. This just means we have to check one place for a null pointer.

  This change fixes a flaw in nfa_closure: the function was propagating nfa_reject states to the closure set. When the input set consisted only of reject states (or rather exactly one reject state in that special case), it would add it to the output set and return 1, making it seem as if the regex is one that can potentially match something.

  * regex.c (nfa_kind_t): Enum member nfa_reject removed.
  (nfa_state_reject): Static function removed.
  (nfa_compile_regex): Compile a t regex (match nothing) to an NFA graph with no transition at all into any start state; effectively an empty graph with no state nodes.
  (nfa_map_states): Remove nfa_reject case from switch. Also, stylistic change; state pointers are tested as Booleans elsewhere rather than by comparison to null pointer.
  (nfa_count_states, nfa_handle_wraparound): Handle null state pointer.
  (nfa_move_closure): Test the mov transition for null; anything can potentially now have a null transition, not only epsilon nodes.
  (nfa_run, regex_machine_reset): Handle nfa.start being null, indicating an empty NFA graph. In that case we don't have to calculate the closure; we just have an empty set of states and set nfa.nclos to zero. The visited disambiguation counter is irrelevant since there are no states to visit.
  (regex_machine_feed): Don't try to propagate the visited counter stored in the regex machine to an empty NFA graph which has no states.
* parser: fix precedence of DOTDOT. (Kaz Kylheku, 2017-09-07; 3 files, -2/+34)

  The problem is that a.b .. c.d parses as (qref a b..c d), which is useless and counterintuitive. Let's fix it, but with a backward compatibility switch to give more leeway to any hapless people out there whose code happens to depend on this unfortunate situation.

  We basically use two token numbers for the .. token: OLD_DOTDOT, and DOTDOT. Both are wired into the grammar. In backward compatibility mode, the lexer pumps out OLD_DOTDOT. Otherwise DOTDOT.

  * parser.l (grammar): When .. is scanned, return OLD_DOTDOT when in compatibility with 185 or earlier. Otherwise DOTDOT.

  * parser.y (OLD_DOTDOT): New terminal symbol; introduced at the same high precedence previously occupied by DOTDOT.
  (DOTDOT): Changes precedence to lower than '.' and UREFDOT.
  (n_expr): Two productions added involving OLD_DOTDOT. These are copy and paste of the existing productions involving DOTDOT; the only difference is that OLD_DOTDOT replaces DOTDOT.
  (yybadtoken): Handle OLD_DOTDOT.

  * txr.1: Compat notes added.
* linenoise: visual feedback for incomplete entry. (Kaz Kylheku, 2017-09-07; 1 file, -0/+25)

  When a line of input is incomplete and the cursor is at the end of that line, hitting Enter causes an uncomfortable ambiguity. Although the cursor moves to the next line, it is not clear whether that is because the input is being accepted, or whether the expression which was entered is executing. For instance, these appear to behave the same way:

    > (while t[Enter][Enter]...
    > (while t)[Enter][Enter]...

  One is just waiting for more input; the other is sitting in an infinite loop just echoing the newline characters.

  To partially address this issue, we introduce a visual feedback mechanism. When Enter is issued at the end of an incomplete line, then immediately after the insertion of Enter, the character ! is flashed twice, alerting the user that the line is incomplete. In other situations, there isn't any feedback. An infinite loop or lengthy calculation like (while t) looks the same as code which is reading input like (get-line).

  * linenoise/linenoise.c (LINNOISE_FLASH_DELAY): New macro.
  (flash): New static function.
  (edit): Call flash to flash the ! character when Enter is issued at the end of an incomplete line, and we are not in paste mode.
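  The flash mechanism amounts to writing the character, pausing, and erasing it with a backspace-space-backspace sequence. A small POSIX C sketch follows; the delay constant and function names are illustrative, not the actual linenoise code.

    #include <unistd.h>
    #include <time.h>

    #define FLASH_DELAY_NS (100 * 1000 * 1000L)   /* 100 ms, illustrative value */

    static void short_pause(void)
    {
        struct timespec ts = { 0, FLASH_DELAY_NS };
        nanosleep(&ts, NULL);
    }

    static void flash_sketch(int fd, char ch, int times)
    {
        for (int i = 0; i < times; i++) {
            (void) write(fd, &ch, 1);             /* show the character */
            short_pause();
            (void) write(fd, "\b \b", 3);         /* erase it again */
            short_pause();
        }
    }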
* linenoise: bugfix: cancel extended mode on Enter. (Kaz Kylheku, 2017-09-07; 1 file, -0/+1)

  * linenoise/linenoise.c (edit): If Enter is processed while in Ctrl-X extended command mode, that mode must be explicitly canceled by resetting the extended local flag. Not doing this became an issue when the Enter callback mechanism was introduced to detect incomplete lines. At that point, entering Ctrl-X Enter on an incomplete line caused linenoise to insert a newline, but stay in extended mode.
* txr -i honored despite parse-time exception. (Kaz Kylheku, 2017-09-06; 3 files, -9/+38)

  If an error is thrown while parsing a .txr file, or while reading and evaluating the forms of a .tl file, the -i option is now still honored: instead of exiting, TXR goes interactive.

  * parser.y (parse_once, parse): Wording change in message when exception is caught. Only exceptions derived from error are caught.

  * txr.c (parse_once_noerr, read_eval_stream_noerr): New static functions.
  (txr_main): Use parse_once_noerr and read_eval_stream_noerr instead of parse_once and read_eval_stream. Don't exit if a TXR file has parser errors; in that situation, exit only if interactive mode is not requested, otherwise go interactive. Make sure *self-path* is registered to the name of the input source in this case also.

  * unwind.h (ignerr_func_body): New macro.
* New functions for polynomial evaluation. (Kaz Kylheku, 2017-09-05; 3 files, -0/+156)

  * arith.c (poly, rpoly): New functions.
  (arith_init): Registered intrinsics poly and rpoly.

  * arith.h (poly, rpoly): Declared.

  * txr.1: Documented.
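  Polynomial evaluation of this kind is conventionally done with Horner's method: one multiply-add per coefficient. The sketch below uses plain doubles and a guessed coefficient order, whereas TXR's arith.c operates on Lisp numbers, so take it as an illustration of the technique rather than the implementation.

    #include <stddef.h>

    /* Evaluate c[0]*x^(n-1) + c[1]*x^(n-2) + ... + c[n-1], i.e. with the
       coefficients given from the highest power down. */
    static double poly_sketch(double x, const double *c, size_t n)
    {
        double acc = 0.0;

        for (size_t i = 0; i < n; i++)
            acc = acc * x + c[i];     /* one multiply-add per coefficient */
        return acc;
    }

    /* rpoly-style variant: coefficients from the constant term upward. */
    static double rpoly_sketch(double x, const double *c, size_t n)
    {
        double acc = 0.0;

        while (n-- > 0)
            acc = acc * x + c[n];
        return acc;
    }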
* new: macroexpand-lisp1 and macroexpand-1-lisp1. (Kaz Kylheku, 2017-08-31; 2 files, -4/+67)

  * eval.c (do_macroexpand_1, do_macroexpand): New static functions; take symbol macro lookup function pointer as argument.
  (macroexpand_1): Reimplemented as wrapper around do_macroexpand_1.
  (macroexpand): Reimplemented as wrapper around do_macroexpand.
  (macroexpand_1_lisp1, macroexpand_lisp1): New static functions.
  (eval_init): Registered intrinsics macroexpand-1-lisp1 and macroexpand-lisp1.

  * txr.1: Documented.
* New function: inverse of cumulative normal dist. (Kaz Kylheku, 2017-08-31; 4 files, -0/+45)

  * arith.c (inv_cum_norm): New function.

  * arith.h (inv_cum_norm): Declared.

  * eval.c (eval_init): Register inv-cum-norm intrinsic.

  * txr.1: Documented.
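  For reference, the inverse cumulative normal can be pinned down numerically by bisecting the forward CDF, which is expressible through erfc. TXR's inv_cum_norm almost certainly uses a closed-form approximation instead; the sketch below only demonstrates the mathematical relationship, not the actual algorithm.

    #include <math.h>

    /* standard normal CDF: Phi(x) = 0.5 * erfc(-x / sqrt(2)) */
    static double cum_norm(double x)
    {
        return 0.5 * erfc(-x / sqrt(2.0));
    }

    /* find x such that Phi(x) == p, for 0 < p < 1 */
    static double inv_cum_norm_sketch(double p)
    {
        double lo = -40.0, hi = 40.0;   /* Phi is numerically 0 and 1 out here */

        for (int i = 0; i < 200; i++) { /* bisection: roughly one bit per step */
            double mid = 0.5 * (lo + hi);
            if (cum_norm(mid) < p)
                lo = mid;
            else
                hi = mid;
        }
        return 0.5 * (lo + hi);
    }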
* Version 185. (tag: txr-185; Kaz Kylheku, 2017-08-30; 7 files, -738/+783)

  * RELNOTES: Updated.

  * configure, txr.1: Bumped version and date.

  * share/txr/stdlib/ver.tl: Likewise.

  * txr.vim, tl.vim: Regenerated.

* bugfix: places: handling of lisp1 contexts. (Kaz Kylheku, 2017-08-30; 1 file, -3/+3)

  * share/txr/stdlib/place.tl (sys:l1-val): Use the expanded version of the place in the resulting form, because some of the operators like sys:lisp1-value will not expand it. Previously we were getting away with (sys:l1-val @1) expanding to (sys:lisp1-value @1) because the op expander would traverse through this blindly and replace @1.
* doc: wrong symbol under unique function. (Kaz Kylheku, 2017-08-29; 1 file, -1/+1)

  * txr.1: Fix the syntax synopsis for unique, which was identifying the function as uniq.
* doc: formatting under expand-left (Kaz Kylheku, 2017-08-29; 1 file, -1/+1)

  * txr.1: Fix run-on period included in .meta formatting.
* Rewriting op/do macros in Lisp. (Kaz Kylheku, 2017-08-29; 4 files, -28/+157)

  The new implementation treats the @1, @2 ... @rest op arguments as local macros, leveraging the power of the macro expander to perform the substitution which renames these to gensyms. As a result, the implementation is correct. The old implementation blindly walks the tree structure doing the substitution, so that @1 is substituted even though it is in a quoted literal:

    [(op list '(@1)) 42]  ->  ((#:arg-01-0166))

  Under the new implementation, '(@1) is left alone:

    [(op list '(@1)) 42]  ->  ((@1) 42)

  * eval.c (expand_quasi): Because the new op macro doesn't rudely reach into quasi forms to substitute sys:var elements, relying on macro expansion, we must now macro-expand sys:var elements. The sys:var macro created by op is smart enough to skip the compound ones that have modifiers; they are handled via the inner expansion of the symbol. That is to say, `@@1` contains the structure (sys:var (sys:var 1)). The sys:var macro ignores the outer sys:var. But existing behavior in expand_quasi expands the inner (sys:var 1), so the substitution takes place.
  (eval_init): Do not register the hacky old op and do macros, except in compatibility mode with 184 or older.

  * lisplib.c (op_set_entries, op_instantiate): New functions.
  (dlt_register): Register auto-loads for op and do macros via new functions, except when in compatibility mode with 184 or older, in which case we want the old built-in hacky op to be used.

  * share/txr/stdlib/op.tl: New file.

  * txr.1: Fixed or removed no-longer-true text which hints at special hacks implemented in the op expander. Added compatibility notes for all new compat-switched op behaviors.
* doc: remove wrong/outdated claim under op macro. (Kaz Kylheku, 2017-08-29; 1 file, -12/+1)

  * txr.1: the quasiliterals `rest: @rest` and `rest: @@rest` in fact produce the same result. Remove text which claims that the second one is erroneous.
* Allow sys:var and sys:expr redefinition. (Kaz Kylheku, 2017-08-28; 1 file, -0/+2)

  * eval.c (builtin_reject_test): Suppress warning if the symbol is sys:var or sys:expr.

* expander: do dot-to-apply for meta-expressions. (Kaz Kylheku, 2017-08-28; 2 files, -9/+65)

  The dot-to-apply transformation is now applied when meta-expressions like @foo and @(bar) apparently occur in the dot position. This change is made in anticipation of a rewrite of the op macro, in which the @1, @2, and @rest arguments will be implemented as macrolets, rather than the ad-hoc, hacky code walk currently performed in the transform_op function.

  * eval.c (dot_meta_list_p): New static function.
  (dot_to_apply): Detect the presence of a sys:var or sys:expr argument in a form. If found, then turn it and the remaining forms into a single compound form which replaces them.

  * txr.1: Update doc under Dot Position in Function Calls.
* trie filtering: ensure we don't misuse string_extend. (Kaz Kylheku, 2017-08-24; 1 file, -0/+2)

  * filter.c (trie_filter_string): If a trie node is associated with a filter substitute object that isn't a character or string, then convert that to a string using tostringp. It could be an integer; if we feed an integer to string_extend, it is taken as a number of characters by which to extend the string rather than as text to append.
* bugfix: replace_str uses string_extend incorrectly. (Kaz Kylheku, 2017-08-24; 1 file, -1/+1)

  One test case for this is that (append "ABC" "DEF") returns an "ABCDEF" string whose length reports as 9 instead of the correct 6. This will wreak various havoc.

  The bug was introduced in the very first version of replace_str, in commit d011fda9b6b078f09027eb65d500c8beffc99414 on January 26, 2012. In the same commit, string_extend acquired the behavior of supporting an integer value specifying the number of characters by which to extend the string. This feature of string_extend is used in replace_str, but wrongly.

  * lib.c (replace_str): Pass just the size delta to string_extend; do not add the old length to the delta such that the total size is wrongly passed.
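  The arithmetic of the bug is easy to model: string_extend's integer argument means "grow by this many characters", so folding the old length into it double-counts the existing text. A toy, self-contained demonstration (not TXR's code):

    #include <assert.h>
    #include <stddef.h>

    /* string_extend-style growth: delta means "grow by this many characters" */
    static size_t extend_by(size_t len, size_t delta)
    {
        return len + delta;
    }

    int main(void)
    {
        size_t len = 3;                              /* "ABC" */
        size_t delta = 3;                            /* appending "DEF" */

        size_t buggy = extend_by(len, len + delta);  /* total wrongly passed as delta */
        size_t fixed = extend_by(len, delta);        /* just the delta */

        assert(buggy == 9);                          /* the reported-length-9 symptom */
        assert(fixed == 6);
        return 0;
    }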
* Version 184. (tag: txr-184; Kaz Kylheku, 2017-08-23; 7 files, -217/+317)

  * RELNOTES: Updated.

  * configure, txr.1: Bumped version and date.

  * share/txr/stdlib/ver.tl: Likewise.

  * txr.vim, tl.vim, protsym.c: Regenerated.