* lisplib.c (getput_set_entries): Register autoload for
get-jsons, put-jsons, file-get-json, file-put-json,
file-append-json, file-get-jsons, file-put-jsons,
file-append-jsons, command-get-json, command-put-json,
command-get-jsons and command-put-jsons.
* share/txr/stdlib/getput.tl (get-jsons, put-jsons,
file-get-json, file-put-json, file-append-json,
file-get-jsons, file-put-jsons, file-append-jsons,
command-get-json, command-put-json, command-get-jsons and
command-put-jsons): New functions.
Let's restore the previous commit and fix it differently.
The real problem is that the tn_flatten function does not need
to use the set macro.
When the flattened tree is rebuilt with tn_build_tree, every
left and right link of every node is set at that time. That
function uses the set macro, so all is well.
The dummy node is not pulled into the rebuild, so it doesn't
cause any problem; the dummy node is only mutated in
tn_flatten.
This is a better fix, and not only because tn_flatten is now
more efficient. It is important for the dummy node to look
like a generation-zero object. The dummy node ends up as the
pseudo-root of the tree, so if it looks like it is in
generation 1, all generation-0 nodes below it will be
wastefully checked by the garbage collector.
* tree.c (tn_flatten): Do not use the set macro. Just use
straight assignment to convert the tree into a linked list.
(tr_rebuild): Restore dummy node to be all zeros.
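
A rough sketch of the idea (the struct and field names here
are assumed for illustration, not taken from tree.c): the
flattening pass can thread the tree into a right-linked list
with plain stores, precisely because tn_build_tree later
reassigns every link through the set macro.

    /* Hypothetical node layout, for illustration only. */
    struct node { struct node *left, *right; };

    /* Flatten t in-order into a list chained through the right
       pointers, followed by rest.  Plain assignments suffice:
       no write barrier, and the dummy stays generation zero. */
    static struct node *flatten(struct node *t, struct node *rest)
    {
        struct node *head;

        if (t == 0)
            return rest;
        t->right = flatten(t->right, rest);
        head = flatten(t->left, t);
        t->left = 0;
        return head;
    }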
* tree.c (tr_rebuild): The dummy on-stack node we use in the rebuilding
process must have its generation field set to 1 rather than 0, so it looks
like a mature heap object rather than a baby object. Otherwise
when its pointer is assigned to another node which happens to
be gen 1, the on-stack dummy will be added to the checkobj array,
and gc will try to mark it.
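
The hazard, in minimal sketch form (names and layout are
assumed, not TXR's actual gc code): a generational write
barrier records baby objects that become reachable from mature
objects, so a gen-0 dummy living on the stack would be
recorded and later marked.

    struct obj { int gen; };
    void checkobj_add(struct obj *); /* registers obj for marking */

    /* Hypothetical barrier around a pointer store. */
    void barrier_store(struct obj *parent, struct obj **field,
                       struct obj *child)
    {
        *field = child;
        if (parent->gen == 1 && child->gen == 0)
            checkobj_add(child); /* gc will try to mark child
                                    later; fatal if child is an
                                    on-stack dummy */
    }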
* eval.c (eval_init): Register put-json and put-jsonl
intrinsics.
* lib.c (out_json_str): Do not output the U+DC01 to U+DCFF
code points by masking them and using put_byte. This is
unnecessary; if we just send them as-is to the text stream,
the UTF-8 encoder does that for us.
(put_json, put_jsonl): New functions.
* lib.h (put_json, put_jsonl): Declared.
* txr.1: Documented. The bulk of tojson is moved under the
descriptions of these new functions, and elsewhere where the
document pointed to tojson for more information, it now points
to put-json. More detailed description of character treatment
is given.
* share/txr/stdlib/doc-syms.tl: Updated.
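
A sketch of the out_json_str simplification (the helper names
and shapes are assumed here): instead of masking the code
point down to a byte and calling put_byte, the character is
sent to the text stream unchanged.

    /* inside the per-character loop of out_json_str */
    if (ch >= 0xDC01 && ch <= 0xDCFF)
        put_char(chr(ch), out); /* undecodable-byte code point:
                                   the UTF-8 encoder reproduces
                                   the raw byte; was masking +
                                   put_byte */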
* lib.c (out_json_rec): In the CONS case, if ctx is null, bail
and report an invalid object. This lets us call the function
with a null context.
(tojson): Do not support (json ...) syntax. Instead of
obj_print, pass the object directly to out_json_rec.
* txr.1: Do not mention handling of json macro syntax.
Common leading text is factored out of the bulleted
paragraphs.
* lib.c (out_json_str): When the < character is seen, if the
lookahead character is /, output the < and a backslash to
escape the /.
* txr.1: Moved description of special JSON output handling
under tojson, and described the above escaping there also.
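
In sketch form (stream helpers assumed), the escaping is a
one-character lookahead:

    /* "</" becomes "<\/" so that JSON text embedded in an HTML
       <script> element cannot terminate the script. */
    if (ch == '<' && next_ch == '/') {
        put_char(chr('<'), out);
        put_char(chr('\\'), out); /* the '/' is emitted next,
                                     now escaped */
    }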
* share/txr/stdlib/compiler.tl (compiler comp-catch): The
tfrag's output register may be (t 0), in which case the
maybe-mov in the cfrag code generates a mov into (t 0), which
is not allowed. Instead of using tfrag.oreg directly, we
create an abstraction, coreg, which can be either toreg or
oreg, and use it consistently.
* parser.c (repl): Syntax error exceptions carry some text as
an argument, which we now print. This distinguishes
end-of-stream syntax errors from real ones.
* parser.c (lisp_parse_impl): Access pi->eof directly instead
of going through parser_eof. We are using pi->errors in the
same expression! This is the only place pi->eof is needed.
(read_file_common): Don't call parser_eof. Assume that if
error_val emerges and errors is zero, it must be eof.
(read_eval_ret_last): Simplify: the code actually does nothing
special for eof versus errors, so we just bail if error_val
emerges.
(get_parser): Function removed.
(parse_errors): This was the only caller of get_parser; it
now just calls gethash to fetch the parser.
(parser_eof): Function removed.
(parse_init): sys:get-parser, sys:parser-errors and
sys:parser-eof intrinsics removed. The C function
parser_errors still exists; it is used in a few places.
* parser.h (get_parser, parser_eof): Declarations removed.
* share/txr/stdlib/compiler.tl (compile-file-conditionally):
Use the new parse-errors function instead of the removed
sys:parser-errors. Since that works on the stream, we don't
have to call sys:get-parser. Because it returns nil if there
are no errors, we drop the > test.
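
The inference in read_file_common, as a minimal sketch
(variable names beyond those mentioned above are assumed):

    obj = lisp_parse_impl(/* ... */);

    if (obj == error_val) {
        if (parser.errors == 0)
            break;      /* no diagnostics counted: plain eof */
        goto failed;    /* a genuine syntax error occurred */
    }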
* parser.c (parse_errors): New function.
* parser.h (parse_errors): Declared.
* txr.1: Documented.
* share/txr/stdlib/doc-syms.tl: Updated.
* eval.c (eval_init): get-json intrinsic registered.
* parser.c (prime_parser): Handle prime_json.
(lisp_parse_impl): Take enum prime_parser argument
directly instead of the interactive flag.
(lisp_parse, nread, iread): Pass appropriate prime_parser
value instead of the original flag.
(get_json): New function. Like nread, but passes prime_json.
* parser.h (enum prime_parser): New constant, prime_json.
(get_json): Declared.
* parser.l (prime_scanner): Handle prime_json.
* parser.y (SECRET_ESCAPE_J): New terminal symbol.
(spec): New productions around SECRET_ESCAPE_J for parsing
JSON.
* lex.yy.c.shipped, y.tab.c.shipped, y.tab.h.shipped: Updated.
* txr.1: Documented.
* share/txr/stdlib/doc-syms.tl: Updated.
* txr.1: Add missing .IP after .RE to return the indentation.
* eval.c (eval_init): tojson intrinsic registered.
* lib.c (tojson): New function.
* lib.h (tojson): Declared.
* txr.1: Documented.
* share/txr/stdlib/doc-syms.tl: Updated.
* parser.y (json): Add forgotten call to end_of_json in
the '^' production.
(json_val): Pass zero to vector, not 0, which is nil.
* y.tab.c.shipped: Updated.
* lib.c (out_json_rec): Save, establish and restore
indentation when printing [ ] and { } notation.
Enforce line breaks, and force a line break after the
object if one occurred in the object.
(out_json): Turn on indentation if it is off (but
not if it is forced off). Restore after doing the object.
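
A sketch of the bracketing logic (the indentation API shape is
assumed): save the current indentation, indent while printing
the members, then restore, propagating a line-break flag to
the enclosing object.

    int broke = 0;
    cnum save = get_indent(out);      /* assumed accessor */

    put_char(chr('['), out);
    set_indent(out, save + 2);
    /* ... print elements; set broke if any element broke ... */
    set_indent(out, save);
    put_char(chr(']'), out);
    if (broke)
        force_break(out);             /* break after the object */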
First cut, without line breaks or indentation.
* lib.c (out_json_str, out_json_rec, out_json): New static
functions.
(obj_print_impl): Hook in json printing via out_json.
* txr.1: Add notes about the extensions in JSON output.
The JSON null will map to the Lisp null symbol. I thought
about using : but that could cause surprises; for instance,
when it is passed to a function as an optional argument, it
triggers the default value.
* parser.l (JSON): Add rules for producing null keyword.
* txr.1: Documented.
* lex.yy.c.shipped: Updated.
* parser.l <JLIT>: Convert the \u0000 sequence to the U+DC00
code point, the pseudo-null. Also include JLIT
in the rule for catching bad bytes that are not
matched by {UANYN}.
* txr.1: Document this treatment as extensions to JSON.
* lex.yy.c.shipped: Updated.
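
The mapping itself is tiny; in sketch form (variable name
assumed):

    /* A real NUL cannot travel through C wide strings, so
       JSON's \u0000 maps to U+DC00, the pseudo-null. */
    if (code == 0)
        code = 0xDC00;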
There are two problems.
One is that in #J{~foo:"bar"} the foo expression is parsed
while the scanner is in Lisp mode and the : token is grabbed
by the parser as a lookahead token in the same mode. Thus it
comes back as a SYMTOK (the colon symbol).
We fix this by recognizing the colon symbol as a synonym of
the colon character. But only if it is spelled ":" in
the syntax: we look at the lexeme.
The second problem is that even if we fix the above, ~foo
produces a sys:unquote form which is rejected because it is
not a string. We fix this in the most straightforward way:
by deleting the code which restricts keys to strings,
an extension I already thought about making anyway.
* parser.y (json_col): New non-terminal. This recognizes
either a SYMTOK or a ':' token, with the semantic restriction
that the SYMTOK's lexeme must be ":".
(json_vals): Use yybadtoken instead of a generic error.
(json_pairs): Do not require keys to be strings. Use json_col
instead of ':', and also yybadtoken instead of a generic
error.
* y.tab.c.shipped: Updated.
* genvim.txr (dig19, bvar, dir, list): New variables.
(txr_bracevar, tl_bracevar, tl_directive, txr_list,
txr_bracket, txr_mlist, txr_mbracket): Use variables to
specify common contents. JSON stuff added.
(txr_ign_tok): Specify contents using @list.
(txr_jkeyword, txr_jerr, txr_jpunc, txr_jesc, txr_juesc,
txr_jnum): New matches.
(txr_junqlist, txr_jstring, txr_jarray, txr_jhash): New
regions.
|
* txr.1: Documented #J, #J^ and json macro.
* share/txr/stdlib/doc-syms.tl: Updated.
* parser.y (json_val): Handle json_vals being a list,
indicating quasiquoting. We must nreverse it and turn it into
a sys:vector-lit form.
* y.tab.c.shipped: Updated.
Because #J<json> produces the (json ...) form that translates
into quote, ^#J<json> yields a quasiquote around a quote. This
has some disadvantages, because it requires an explicit eval
in some situations to do what the programmer wants.
Here, we introduce an alternative: the syntax #J^<json> will
produce a quasiquote instead of a quote.
The new translation scheme is
#J X -> (json quote <X>)
#J^ X -> (json sys:qquote <X>)
where <X> denotes the Lisp object translation of JSON syntax X.
* parser.c (me_json): The quote symbol is now already in the
json form, so all that is left to do here is to take the cdr
to pop off the json symbol.
* parser.l (JPUNC, NJPUNC): Allow ^ to be a punctuator in
JSON mode.
* parser.y (json): For regular #J, generate the new (json
quote ...) syntax. Implement #J^, which sets up the nonzero
quasi_level around the processing of the JSON syntax, so that
everything is in a quasiquote, finally producing the
(json sys:qquote ...) syntax.
* lex.yy.c.shipped, y.tab.c.shipped: Updated.
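
With the quote now embedded by the parser, the macro expander
reduces to something like this sketch (exact shape assumed):

    /* (json quote X)      -> (quote X)
       (json sys:qquote X) -> (sys:qquote X) */
    static val me_json(val form, val menv)
    {
        (void) menv;
        return cdr(form);  /* pop off the leading json symbol */
    }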
* parser.h (end_of_json_unquote): Declared.
* parser.l (JPUNC, NJPUNC): Add ~ and * characters to set of
JSON punctuators.
(grammar): Allow closing brace character in NESTED, SPECIAL
and QSPECIAL states to be a token. This is because it occurs
as a lookahead character in this situation #J{"foo":~expr}.
The lexer switches from the JSON to the NESTED start state
when it scans the ~ token, so that expr is treated as Lisp.
But then } is consumed as a lookahead token by the parser in
that same mode; when we pop back to JSON mode, the } token has
already been scanned in NESTED mode.
We add two new rules in JSON mode to the lexer to recognize
the ~ unquote and ~* splicing unquote. Both have to push the
NESTED start condition.
(end_of_json_unquote): New function.
* parser.y (JSPLICE): New token.
(json_val): Logic for unquoting. The array and hash rules must
now be prepared to deal with json_vals and json_pairs now
producing a list object instead of a hash or vector. That is
the signal that the data contains active quasiquotes and must
be translated to the special literal syntax for quasiquoted
vectors and hashes. Here we also add the rules for ~ and ~*
unquoting syntax, including managing the lexer's transition
back to the JSON start condition.
(json_vals, json_pairs): We add the logic here to recognize
unquotes in quasiquoting state. This is more clever than the
way it is done in the Lisp areas of the grammar. If no
quasiquotes occur, we construct a vector or hash,
respectively, and add to it. If unquotes occur and if we
are nested in a quasiquote, we switch the object to a list,
and continue it that way.
(yybadtoken): Handle JSPLICE.
* lex.yy.c.shipped, y.tab.c.shipped, y.tab.h.shipped: Updated.
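
The lexer-side handoff might look like this sketch (the flex
start-condition calls are assumed):

    /* Called by the parser once the Lisp expression after ~ or
       ~* has been consumed, so scanning resumes in JSON mode. */
    void end_of_json_unquote(scanner_t *scanner)
    {
        yy_pop_state(scanner);  /* leave the NESTED Lisp state,
                                   back to the JSON condition */
    }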
* parser.l (HASH_N_EQUALS, HASH_N_HASH): Recognize these
tokens in the JSON start state also.
* parser.y (json_val): Add the circular syntax, exactly like
it is done for n_expr and i_expr. And it works!
* lex.yy.c.shipped, y.tab.c.shipped, y.tab.h.shipped: Updated.
(needs buffer literal error message cleanup)
* parser.c (json_s): New symbol variable.
(is_balanced_line): Follow braces out of initial state.
This concession allows the listener to accept input
like #J{"a":"b"}.
(me_json): New static function (macro expander). The #J X
syntax produces a (json Y) form, with the JSON syntax X
translated to a Lisp object Y. If that is evaluated,
this macro translates it to (quote Y).
(parse_init): initialize json_s variable with interned symbol,
and register the json macro.
* parser.h (json_s): Declared.
(end_of_json): Declared.
* parser.l (num_esc): Treat u escape sequences in the same way
as x. This function can then be used for handling the \u
escapes in JSON string literals.
(DIG19, JNUM, JPUNC, NJPUNC): New lex named patterns.
(JSON, JLIT): New lex start conditions.
(grammar): Recognize #J syntax, mapping to HASH_J token,
which transitions into JSON start state.
In JSON start state, handle all the elements: numbers,
keywords, arrays and objects. Transition into JLIT state.
In JLIT start state, handle all the elements of JSON string
literals, including surrogate pair escapes.
JSON literals share the {UANY} fallback pattern with
other literals.
(end_of_json): New function.
* parser.y (HASH_J, JSKW): New token symbols.
(json, json_val, json_vals, json_pairs): New nonterminal
symbols, and rules.
(i_expr, n_expr): Generate json nonterminal, to hook the
stuff into the grammar.
(yybadtoken): Handle JSKW and HASH_J tokens.
* lex.yy.c.shipped, y.tab.c.shipped, y.tab.h.shipped:
Updated.
* parser.l (BUFLIT): When reporting a bad character, do not
show it in the form of an escape sequence.
* lex.yy.c.shipped: Updated.
* RELNOTES: Updated.
* configure, txr.1: Bumped version and date.
* share/txr/stdlib/ver.tl: Bumped.
* txr.vim, tl.vim: Regenerated.
* tests/common.tl (vtest): Only if the expected expression
is :error or (quote :error) do we wrap the expansion and
evaluation of the test expression with exception handling,
because only then do we expect an error. When the test
expression is anything else, we don't intercept any errors,
and so problems in test cases are easier to debug now.
* tests/012/struct.tl: In one case we must initialize
the *gensym-counter* to 4 to compensate for the change
in vtest to get the same gensym numbers in the output.
* parser.c (find_matching_syms): Apply De Morgan's law to the
symbol binding tests, using a sum of positive tests instead
of a product of negations. Check for struct and FFI type
bindings
in the default case. The default and '[' cases are
rearranged so that the '[' case omits these, so as not to
complete on a struct or FFI typedef after a [.
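
The transformation is plain De Morgan; schematically (the
predicate names are placeholders):

    /* before: a product of negations */
    if (!(!is_fun && !is_var && !is_mac))
        complete(sym);

    /* after: a sum of positive tests, easy to extend with the
       struct and FFI typedef checks */
    if (is_fun || is_var || is_mac || is_struct || is_ffi_type)
        complete(sym);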
* tests/012/seq.tl: New tests.
* txr.1: Improve documentation of window-map's :wrap
and :reflect. Add examples.
* txr.1: Fix heading that repeats identity instead of listing
identity and identity*.
* share/txr/stdlib/doc-syms.tl: Regenerated.
* txr.1: Fix numerous instances of text which uses the wording
that arguments are applied to a function. A few of the changes
repair wording that was entirely botched.
* txr.1: Improve wording, eliminate superfluous comma.
* lib.c (window_map_list): Rewrite :wrap and :reflect support.
The main issue with these is that they only sample items from
the front of the input list and generate both flanks of the
boundary from that prefix; :reflect is additionally buggy due
to applying nreverse to a sub which can return the original
sequence.
* tests/012/seq.tl: Some test coverage for window-map.
The @(hash ...) operator now allows key-only patterns
like (42) or (@x), where x could be bound or unbound.
This has separate semantics from when a value is present.
* share/txr/stdlib/match.tl (compile-hash-match): Implement.
* tests/011/patmatch.tl: Test.
* txr.1: Document.
* share/txr/stdlib/match.tl (compile-hash-match): Fix
unquoting comma that had been strangely moved to the previous
line.
* parser.y (parse): When issuing the diagnostic indicating the
likely starting line of the unterminated expression, instead
of mentioning that line in the diagnostic text, let's just
issue the diagnostic against that line. The programmer's
text editor can then jump to that line.
* y.tab.c.shipped: Updated.
Because with-compilation-unit is keyed off *load-recursive*,
when compilation is happening in the context of a load (a
top-level form in a loaded file calls compile-file or
compile-update-file), warnings are deferred until the end of
the load. That might never occur if the load doesn't complete,
because, say, the image quits for some reason.
If the following is the content of a file which is loaded:
(compile-file "foo")
(exit 0)
then warnings during the compilation are not issued when
compile-file terminates, and will never be issued because of
the termination due to the exit call.
* share/txr/stdlib/compiler.tl (*in-compilation-unit*): New
special variable.
(with-compilation-unit): Use *in-compilation-unit* to
determine when a compilation unit has ended, and dump all the
deferred warnings then. We bind *load-recursive*, because
that is required for deferring warnings.
* ffi.c (make_ffi_type_pointer): Set the by_value_in flag only if the in
function has been specified. Otherwise tft->in is a null pointer and
will be used if this pointer type appears as an argument.
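
In sketch form (field names from the entry, surrounding code
assumed):

    /* Claim by-value in-semantics only when an in handler was
       specified; otherwise tft->in is null and would be
       called through. */
    tft->by_value_in = (in != 0);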
* eval.c (me_case): When we evaluate the keys of a caseq*,
caseql* or casequal* construct, we must use expand_eval.
I ran into this problem trying to use constants defined
as symbol macros as keys.
* txr.1: Round out the documentation with various missing
details, or details that bear repeating in more than one
place in the document.
* txr.1: The arguments after the script-file are not
necessarily data files; they can have any meaning.
* txr.c (help): Also adjust the help text.
* txr.c (help): Refer to "script-file" rather than
"query-file", just like the documentation.
* share/txr/stdlib/quips.tl (%quips%): Entries added.
* mpi/mpi.c (s_mp_in_big_range): If the value is negative,
extend the range. This is exactly the same fix as what was
applied to mp_in_range in 2019 in commit
11b5c567124a61d8e8249a0fbcce47f2688573c6. This function should
have been fixed at the same time. The corresponding
test cases now pass.
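
The underlying arithmetic, shown as a plain C predicate rather
than the mpi code: an n-bit two's complement field admits one
more magnitude on the negative side.

    /* Does v fit in an n-bit signed field?  (0 < n < 64.) */
    int fits_signed(long long v, int n)
    {
        long long lim = 1LL << (n - 1);
        return v >= -lim && v <= lim - 1;
    }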
* tests/016/arith.tl: Test providing coverage for the most
negative two's complement integer, #x-800...00 in various
sizes. The 64 bit cases are failing.
* mpi/mpi.c (mp_get_uintptr, mp_get_double_uintptr): Fix loops
which shift and mask the bignum digits together in the wrong
way. The post-iteration shift is also wrong. We are fine in
mp_get_uintptr because the affected code is in an #if that
doesn't actually occur: bignum digits are pointer-sized.
mp_get_double_uintptr affects the conversion of bignums to
64 bits on 32 bit platforms.
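
The intended accumulation, sketched with assumed names: each
digit is shifted into its own position, and no post-iteration
shift is needed.

    #define MP_DIGIT_BIT 32  /* illustrative digit width */

    /* Combine digits into a 64-bit value, least significant
       digit first. */
    unsigned long long join_digits(const unsigned *digit, int used)
    {
        unsigned long long acc = 0;
        int i, shift;

        for (i = 0, shift = 0; i < used && shift < 64;
             i++, shift += MP_DIGIT_BIT)
            acc |= (unsigned long long) digit[i] << shift;

        return acc;
    }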
* mpi/mpi.c (mp_in_range, s_mp_in_big_range): The ptrnd
calculation here is wrong; it adds together dissimilar units:
bits and bytes. In the case of mp_in_range, we are okay by
fluke, because the calculation works out to 1 anyway. We would
not be okay if an mp_digit were half the size of a pointer.
In s_mp_in_big_range we have a problem. On 32 bit platforms,
ptrnd is wrongly calculated as 1 rather than 2, and so values
perfectly in range are rejected.
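
A dimensionally consistent version of the calculation (macro
names assumed): convert the pointer size to bits before
dividing by the digit width, rounding up.

    /* bignum digits needed to cover a pointer-sized value */
    ptrnd = (SIZEOF_PTR * CHAR_BIT + MP_DIGIT_BIT - 1)
            / MP_DIGIT_BIT;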
* tests/016/arith.tl: Add tests covering the fixnum/bignum
knee, and ffi operations of various sizes that provide
coverage of various conversion routines.