* txr.1: Fix duplicate 4. bullet; mention that the control
character U+007F is also rendered as an escape.
* stream.c (mkdtemp_wrap): Rename template argument to prefix,
because template is a C++ keyword.
(mkstemp_wrap): Rename local variable from template to templ.
* stream.h (mkdtemp_wrap): Rename template argument.
* genvim.txr (list): Add txr_junqbkt: unquoted bracket.
(txr_junqlist): Support optional # for vector syntax.
(txr_junqbkt): New region for unquoted bracket expressions.
* genvim.txr (ws, jlist, jsonkw, jerr, jpunc, jesc, juesc,
jnum): New variables.
(txr_circ): Move down; this somehow clashes with JSON regions
beginning with #, so that even if we include txr_circ in JSON
regions, it doesn't work properly.
(txr_jerr, txr_jpunc, txr_jesc, txr_juesc, txr_jnum): Define
using variables.
(txr_jkeyword): Switch to regex match instead of keyword. Vim
8.0 does not recognize keywords when they are glued to #J, as
in #Jtrue, even though #J is excluded from the region.
(txr_jatom): New region.
(txr_jarray, txr_jhash): Define using jlist variable for
contained items.
(txr_jarray_in, txr_jhash_in): New regions for the inner
parts without the #J.
* lib.c (chr_iscntrl): Don't use iswcntrl; it fails to report
0x80-0x9F as control characters. A bit of hand-crafted logic
does the job (a sketch follows below).
* txr.1: Redocumented.
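
A minimal sketch of the kind of hand-crafted test meant here;
illustrative, not the verbatim TXR code:

    #include <wchar.h>

    /* Unicode control characters: the C0 range, DEL (U+007F),
       and the C1 range U+0080-U+009F, which iswcntrl fails to
       report on some platforms. */
    static int is_cntrl_sketch(wchar_t ch)
    {
      return ch < 0x20 || (ch >= 0x7F && ch <= 0x9F);
    }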
* Makefile (SHIPPED): New variable, holds names of files that
get copied to a .shipped suffix and committed to the repo.
(ABBREV3SH): New macro, version of ABBREV3 that doesn't assume
it is expanding an entire recipe line, and thus can be
embedded into shell commands.
(%.shipped): Rule removed.
(shipped): New rule that prints what it is copying.
Non-maintainer version of rule errors out.
* Makefile (%.shipped): Call the ABBREV macro to produce
output when "make *.shipped" is invoked; otherwise, unless
VERBOSE=1 is used, the rule works silently. This is only in
maintainer mode. The rule in regular mode has the ABBREV
call. (But should we have that rule at all outside of
maintainer mode?)
* sysif.c (wrap_utimes, wrap_lutimes): Rename static functions
to utimes_wrap and lutimes_wrap, the convention used
everywhere for wrappers of library functions.
(sysif_init): Follow rename.
* lib.c (put_json): Turn on indentation if the flat argument
doesn't suppress it, and the stream is not in forced-off
indentation mode.
(tojson): Retarget to put_json, so we don't have to
repeat this logic (see the sketch below).
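
A minimal sketch of the retargeting idea, assuming a flat
argument and this shape for put_json; the string-stream
helpers are the usual TXR C API names, but the exact
signatures here are illustrative:

    val tojson_sketch(val obj, val flat)
    {
      val ss = make_string_output_stream();
      put_json(obj, ss, flat);  /* indent logic lives in one place */
      return get_string_from_stream(ss);
    }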
* share/txr/stdlib/getput.tl (get-jsons): If the s parameter
is a string, convert it to a byte input stream so that the
JSON can be read from it.
(put-jsons): Add missing t return value.
(file-put-json, file-append-json, file-put-jsons,
file-append-jsons, command-put-json, command-put-jsons): Add
missing object argument to all these functions, and a missing
"w" open-file mode to several of them.
* stream.c (mkstemp_wrap): Calculate the length of suff, the
defaulted argument, not the raw suffix argument.
* tests/010/json.tl: New file, providing tests that touch every
area of the new JSON functionality.
* tests/common.tl (mstest, with-temp-file): New macros.
* txr.1: Document that get-jsons takes a source which could be
a string.
The parser maintains some token objects from one parse job to
the next in the tok_pushback and recent_tok arrays. When the
recent_tok is assigned into the parser, it could be a
wrong-way assignment: the parser is a gen 1 object,
yet the token's semantic value of type val is a gen 0.
* parser.c (lisp_parse_impl, txr_parse): Before re-enabling
gc, indicate that the parser object may have been mutated by a
wrong-way assignment, using the mut macro (sketched below).
* txr.c (txr_main): Likewise.
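
The shape of the fix, as a hedged sketch: gc_state and mut are
the TXR primitives this log names, while the function and its
context are illustrative rather than verbatim:

    static void parse_done_sketch(val parser_obj, int gc)
    {
      gc_state(gc);     /* re-enable gc, as the real code does */
      mut(parser_obj);  /* parser may now hold references to
                           younger (gen 0) token values stored
                           by plain assignment during the parse */
    }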
The big comment I added above end_of_json_unquote summarizes
the issue, which was uncovered by some test cases in a JSON
test suite that is not yet committed.
* parser.l <JMARKER>: New start condition. Used as a reliable
marker in the start condition stack, based on which
end_of_json_quasiquote can intelligently fix-up the stack.
(JSON): In the transitions to the quasiquote scanning NESTED
state, push the JMARKER start condition before NESTED.
(JMARKER): The lexer should never read input in the JMARKER
state. If we don't put in a rule to catch this and it ever
happens, the lexer will just copy the source code to standard
output. I ran into this during debugging.
(end_of_json_unquote): Rewrite the start condition stack
intelligently based on what the Lisp lookahead token has done
to it, so parsing can smoothly continue.
* lex.yy.c.shipped: Regenerated.
The recursive JSON printer must check for circularity-related
conditions and emit #n= and #n# notations.
* lib.c (circle_print_eligible): Function moved before
out_json_rec to avoid a forward declaration.
(check_emit_circle): New static function. This is a block of
code for doing the circular logic, taken out of
obj_print_impl, because we would like to use it in
out_json_rec.
(out_json_rec): Call check_emit_circle to emit any #n= or #n#
notation (see the sketch below). The function returns 1 if it
has emitted #n#, in which case we are done.
(obj_print_impl): Replace moved block of code with call to
check_emit_circle.
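
Schematically, the sharing looks like this; the signature of
check_emit_circle is assumed, and out_json_nested is a
hypothetical stand-in for the rest of the printing logic:

    static void out_json_rec_sketch(val obj, val out,
                                    struct strm_ctx *ctx)
    {
      /* prints #n= before a shared object, or #n# in place
         of a repeated one; returns 1 in the #n# case */
      if (circle_print_eligible(obj) &&
          check_emit_circle(obj, out, ctx))
        return;                        /* back-reference done */
      out_json_nested(obj, out, ctx);  /* hypothetical recursion */
    }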
* configure: Check for mkstemp and mkdtemp.
* stream.c (stdio_set_prop): Implement setting the :name
property. We need this in mkstemp_wrap in order to punch in
the temporary name, so that the application can retrieve it.
(mkdtemp_wrap, mkstemp_wrap): New functions (a standalone
example of the underlying POSIX pattern follows below).
(stream_init): Register mkdtemp and mkstemp intrinsics.
* stream.h (mkdtemp_wrap, mkstemp_wrap): Declared.
* txr.1: Documented.
* share/txr/stdlib/doc-syms.tl: Updated.
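
The underlying POSIX pattern the wrapper builds on, as a
standalone example (not TXR code): mkstemp mutates its
template in place, and that generated name is what
mkstemp_wrap records via the :name property.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
      char templ[] = "/tmp/txr-exampleXXXXXX"; /* prefix + XXXXXX */
      int fd = mkstemp(templ);
      if (fd < 0)
        return 1;
      printf("temporary file: %s\n", templ);   /* the punched-in name */
      close(fd);
      return 0;
    }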
* stream.c (tmpfile_wrap): New static function.
(stream_init): Register tmpfile intrinsic.
* stream.h (tmpfile_wrap): Declared.
* txr.1: Documented.
* share/txr/stdlib/doc-syms.tl: Updated.
* txr.1: Documented file-get-json, file-put-json,
file-append-json, file-get-jsons, file-put-jsons,
file-append-jsons, command-get-json, command-put-json,
command-get-jsons, command-put-jsons.
* share/txr/stdlib/doc-syms.tl: Updated.
* txr.1: Document return value of put-json and put-jsonl.
* lisplib.c (getput_set_entries): Register autoload for
get-jsons, put-jsons, file-get-json, file-put-json,
file-append-json, file-get-jsons, file-put-jsons,
file-append-jsons, command-get-json, command-put-json,
command-get-jsons and command-put-jsons.
* share/txr/stdlib/getput.tl (get-jsons, put-jsons,
file-get-json, file-put-json, file-append-json,
file-get-jsons, file-put-jsons, file-append-jsons,
command-get-json, command-put-json, command-get-jsons and
command-put-jsons): New functions.
Let's restore the previous commit and fix it differently.
The real problem is that the tn_flatten function does not need
to use the set macro.
When the flattened tree is rebuilt with tn_build_tree, every
left and right link of every node is set at that time. That
function uses the set macro, so all is well.
The dummy node is not pulled into the rebuild, so it doesn't
cause any problem; the dummy node is only mutated in
tn_flatten.
This is a better fix not only because tn_flatten is now more
efficient. It is important for the dummy node to look like it
is a generation zero object. The dummy node ends up as the
pseudo-root of the tree, which means that if it looks like
it is in generation 1, all generation 0 objects below it will
be wastefully checked by the garbage collector.
* tree.c (tn_flatten): Do not use the set macro. Just use
straight assignment to convert the tree into a linked list
(see the sketch below).
(tr_rebuild): Restore dummy node to be all zeros.
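
A sketch of the flattening idea under an assumed node type:
plain stores build a right-spine list, and stale left links
are harmless because tn_build_tree rewrites every link through
the set macro afterward.

    struct tnode { struct tnode *left, *right; };

    /* Flatten the tree rooted at t into a right-linked list
       ending in end; in order, using plain assignment only. */
    static struct tnode *tn_flatten_sketch(struct tnode *t,
                                           struct tnode *end)
    {
      if (t == 0)
        return end;
      t->right = tn_flatten_sketch(t->right, end);
      return tn_flatten_sketch(t->left, t);
    }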
* tree.c (tr_rebuild): The dummy on-stack node we use in the rebuilding
process must have its generation field set to 1 rather than 0, so it looks
like a mature heap object rather than a baby object. Otherwise
when its pointer is assigned to another node which happens to
be gen 1, the on-stack dummy will be added to the checkobj array,
and gc will try to mark it.
* eval.c (eval_init): Register put-json and put-jsonl
intrinsics.
* lib.c (out_json_str): Do not output the U+DC01 to U+DCFF
code points by masking them and using put_byte. This is
unnecessary; if we just send them as-is to the text stream,
the UTF-8 encoder does that for us.
(put_json, put_jsonl): New functions.
* lib.h (put_json, put_jsonl): Declared.
* txr.1: Documented. The bulk of tojson is moved under the
descriptions of these new functions, and elsewhere where the
document pointed to tojson for more information, it now points
to put-json. More detailed description of character treatment
is given.
* share/txr/stdlib/doc-syms.tl: Updated.
* lib.c (out_json_rec): In the CONS case, if ctx is null, bail
and report an invalid object. This lets us call the function
with a null context.
(tojson): Do not support (json ...) syntax. Instead of
obj_print, pass the object directly to out_json_rec.
* txr.1: Do not mention handling of the json macro syntax.
Common leading text is factored out of the bulleted paragraphs.
* lib.c (out_json_str): When the < character is seen, if the
lookahead character is /, output the < and a backslash to
escape the / (see the illustration below).
* txr.1: Moved description of special JSON output handling
under tojson, and described the above escaping there also.
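
A self-contained illustration of the lookahead rule in plain C
(not the TXR function): "</" is emitted as "<\/", so JSON text
embedded in an HTML <script> element cannot close the tag.

    #include <stdio.h>

    static void put_json_chars_sketch(const char *s, FILE *out)
    {
      for (; *s; s++) {
        if (s[0] == '<' && s[1] == '/')
          fputs("<\\", out);  /* '<' plus backslash; '/' prints next */
        else
          fputc(*s, out);     /* real code also escapes ", \, etc. */
      }
    }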
* share/txr/stdlib/compiler.tl (compiler comp-catch): The
tfrag's output register may be (t 0), in which case the
maybe-mov in the cfrag code generates a mov into (t 0), which
is not allowed. Instead of tfrag.oreg we create an abstraction
coreg, which could be either toreg or oreg, and consistently
use it.
* parser.c (repl): Syntax error exceptions carry some text as
an argument, which we now print. This distinguishes
end-of-stream syntax errors from real ones.
* parser.c (lisp_parse_impl): Access pi->eof directly instead
of going through parser_eof. We are using pi->errors in the
same expression! This is the only place pi->eof is needed.
(read_file_common): Don't call parser_eof. Assume that if
error_val emerges, and errors is zero, it must be eof.
(read_eval_ret_last): Simplify: the code actually does nothing
special for eof versus errors, so we just bail if error_val
emerges.
(get_parser): Function removed.
(parse_errors): This is the only caller of get_parser; it
now just calls gethash to fetch the parser.
(parser_eof): Function removed.
(parse_init): sys:get-parser, sys:parser-errors and
sys:parser-eof intrinsics removed. The C function
parser_errors still exists; it is used in a few places.
* parser.h (get_parser, parser_eof): Declarations removed.
* share/txr/stdlib/compiler.tl (compile-file-conditionally):
Use the new parse-errors function instead of the removed
sys:parser-errors. Since that works on the stream, we don't
have to call sys:get-parser. Because it returns nil if there
are no errors, we drop the > test.
* parser.c (parse_errors): New function.
* parser.h (parse_errors): Declared.
* txr.1: Documented.
* share/txr/stdlib/doc-syms.tl: Updated.
* eval.c (eval_init): get-json intrinsic registered.
* parser.c (prime_parser): Handle prime_json.
(lisp_parse_impl): Take enum prime_parser argument
directly instead of the interactive flag.
(lisp_parse, nread, iread): Pass appropriate prime_parser
value instead of the original flag.
(get_json): New function. Like nread, but passes prime_json.
* parser.h (enum prime_parser): New constant, prime_json.
(get_json): Declared.
* parser.l (prime_scanner): Handle prime_json.
* parser.y (SECRET_ESCAPE_J): New terminal symbol.
(spec): New productions around SECRET_ESCAPE_J for parsing
JSON.
* lex.yy.c.shipped, y.tab.c.shipped, y.tab.h.shipped: Updated.
* txr.1: Documented.
* share/txr/stdlib/doc-syms.tl: Updated.
* txr.1: Add missing .IP after .RE to return the indentation.
* eval.c (eval_init): tojson intrinsic registered.
* lib.c (tojson): New function.
* lib.h (tojson): Declared.
* txr.1: Documented.
* share/txr/stdlib/doc-syms.tl: Updated.
* parser.y (json): Add forgotten call to end_of_json in
the '^' production.
(json_val): Pass zero to vector, not 0 which is nil.
* y.tab.c.shipped: Updated.
* lib.c (out_json_rec): Save, establish and restore
indentation when printing [ ] and { } notation.
Enforce line breaks, and force a line break after the
object if one occurred in the object.
(out_json): Turn on indentation if it is off (but
not if it is forced off). Restore after doing the object.
First cut, without line breaks or indentation.
* lib.c (out_json_str, out_json_rec, out_json): New static
functions.
(obj_print_impl): Hook in json printing via out_json.
* txr.1: Add notes about output to the JSON extensions section.
The JSON null will map to the Lisp null symbol. I thought
about using : but that could cause surprises; for instance,
when it is passed to functions as an optional argument, it
will trigger the default value.
* parser.l (JSON): Add rules for producing null keyword.
* txr.1: Documented.
* lex.yy.c.shipped: Updated.
* parser.l <JLIT>: Convert the \u0000 sequence to the U+DC00
code point, the pseudo-null. Also include JLIT
in the rule for catching bad bytes that are not
matched by {UANYN}.
* txr.1: Document this treatment as extensions to JSON.
* lex.yy.c.shipped: Updated.
There are two problems.
One is that in #J{~foo:"bar"} the foo expression is parsed
while the scanner is in Lisp mode and the : token is grabbed
by the parser as a lookahead token in the same mode. Thus it
comes back as a SYMTOK (the colon symbol).
We fix this by recognizing the colon symbol as a synonym of
the colon character, but only if it is spelled ":" in
the syntax: we look at the lexeme.
The second problem is that even if we fix the above, ~foo
produces a sys:unquote form which is rejected because it is
not a string. We fix this in the most straightforward way:
by deleting the code which restricts keys to strings,
an extension I already thought about making anyway.
* parser.y (json_col): New non-terminal. This recognizes
either a SYMTOK or a ':' token, with the semantic restriction
that the SYMTOK's lexeme must be ":".
(json_vals): Use yybadtok instead of generic error.
(json_pairs): Do not require keys to be strings. Use json_col
instead of ':', and also yybadtok instead of generic error.
* y.tab.c.shipped: Updated.
* genvim.txr (dig19, bvar, dir, list): New variables.
(txr_bracevar, tl_bracevar, tl_directive, txr_list,
txr_bracket, txr_mlist, txr_mbracket): Use the variables to
specify common contents. JSON stuff added.
(txr_ign_tok): Specify contents using @list.
(txr_jkeyword, txr_jerr, txr_jpunc, txr_jesc, txr_juesc,
txr_jnum): New matches.
(txr_junqlist, txr_jstring, txr_jarray, txr_jhash): New
regions.
* txr.1: Documented #J, #J^ and json macro.
* share/txr/stdlib/doc-syms.tl: Updated.
* parser.y (json_val): Handle json_vals being a list,
indicating quasiquoting. We must nreverse it and turn it into
a sys:vector-lit form.
* y.tab.c.shipped: Updated.
Because #J<json> produces the (json ...) form that translates
into quote, ^#J<json> yields a quasiquote around a quote. This
has some disadvantages, because it requires an explicit eval
in some situations to do what the programmer wants.
Here, we introduce an alternative: the syntax #J^<json> will
produce a quasiquote instead of a quote.
The new translation scheme is
  #J X  -> (json quote <X>)
  #J^ X -> (json sys:qquote <X>)
where <X> denotes the Lisp object translation of JSON syntax X.
* parser.c (me_json): The quote symbol is now already in the
json form, so all that is left to do here is to take the cdr
to pop off the json symbol (see the sketch below).
* parser.l (JPUNC, NJPUNC): Allow ^ to be a punctuator in
JSON mode.
* parser.y (json): For regular #J, generate the new (json
quote ...) syntax. Implement #J^, which sets up the nonzero
quasi_level around the processing of the JSON syntax, so that
everything is in a quasiquote, finally producing the
(json sys:qquote ...) syntax.
* lex.yy.c.shipped, y.tab.c.shipped: Updated.
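
With the quote or sys:qquote symbol already inside the form,
the expander reduces to a one-liner; a sketch, with only the
names taken from this log:

    /* (json quote X) -> (quote X)
       (json sys:qquote X) -> (sys:qquote X) */
    static val me_json_sketch(val form, val menv)
    {
      (void) menv;
      return cdr(form);
    }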
* parser.h (end_of_json_unquote): Declared.
* parser.l (JPUNC, NJPUNC): Add ~ and * characters to set of
JSON punctuators.
(grammar): Allow the closing brace character in the NESTED,
SPECIAL and QSPECIAL states to be a token, because it occurs
as a lookahead character in this situation: #J{"foo":~expr}.
The lexer switches from the JSON to the NESTED start state
when it scans the ~ token, so that expr is treated as Lisp.
But then } is consumed as a lookahead token by the parser in
that same mode; when we pop back to JSON mode, the } token has
already been scanned in NESTED mode.
We add two new rules in JSON mode to the lexer to recognize
the ~ unquote and ~* splicing unquote. Both have to push the
NESTED start condition.
(end_of_json_unquote): New function.
* parser.y (JSPLICE): New token.
(json_val): Logic for unquoting. The array and hash rules must
now be prepared to deal with json_vals and json_pairs now
producing a list object instead of a hash or vector. That is
the signal that the data contains active quasiquotes and must
be translated to the special literal syntax for quasiquoted
vectors and hashes. Here we also add the rules for ~ and ~*
unquoting syntax, including managing the lexer's transition
back to the JSON start condition.
(json_vals, json_pairs): We add the logic here to recognize
unquotes in quasiquoting state. This is more clever than the
way it is done in the Lisp areas of the grammar. If no
quasiquotes occur, we construct a vector or hash,
respectively, and add to it. If unquotes occur and if we
are nested in a quasiquote, we switch the object to a list,
and continue it that way.
(yybadtoken): Handle JSPLICE.
* lex.yy.c.shipped, y.tab.c.shipped, y.tab.h.shipped: Updated.
* parser.l (HASH_N_EQUALS, HASH_N_HASH): Recognize these
tokens in the JSON start state also.
* parser.y (json_val): Add the circular syntax, exactly like
it is done for n_expr and i_expr. And it works!
* lex.yy.c.shipped, y.tab.c.shipped, y.tab.h.shipped: Updated.
(needs buffer literal error message cleanup)
* parser.c (json_s): New symbol variable.
(is_balanced_line): Follow braces out of initial state.
This concession allows the listener to accept input
like #J{"a":"b"}.
(me_json): New static function (macro expander). The #J X
syntax produces a (json Y) form, with the JSON syntax X
translated to a Lisp object Y. If that is evaluated,
this macro translates it to (quote Y) (sketched below).
(parse_init): initialize json_s variable with interned symbol,
and register the json macro.
* parser.h (json_s): Declared.
(end_of_json): Declared.
* parser.l (num_esc): Treat u escape sequences in the same way
as x. This function can then be used for handling the \u
escapes in JSON string literals.
(DIG19, JNUM, JPUNC, NJPUNC): New lex named patterns.
(JSON, JLIT): New lex start conditions.
(grammar): Recognize #J syntax, mapping to HASH_J token,
which transitions into JSON start state.
In JSON start state, handle all the elements: numbers,
keywords, arrays and objects. Transition into JLIT state.
In JLIT start state, handle all the elements of JSON string
literals, including surrogate pair escapes.
JSON literals share the {UANY} fallback pattern with
other literals.
(end_of_json): New function.
* parser.y (HASH_J, JSKW): New token symbols.
(json, json_val, json_vals, json_pairs): New nonterminal
symbols, and rules.
(i_expr, n_expr): Generate json nonterminal, to hook the
stuff into the grammar.
(yybadtoken): Handle JSKW and HASH_J tokens.
* lex.yy.c.shipped, y.tab.c.shipped, y.tab.h.shipped:
Updated.
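
A sketch of the expander described above; second, list, nao
and quote_s are the usual TXR C API names, while the body is
illustrative rather than verbatim:

    /* (json Y) -> (quote Y), so evaluating #J syntax
       yields the translated object itself */
    static val me_json_sketch(val form, val menv)
    {
      (void) menv;
      return list(quote_s, second(form), nao);
    }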
* parser.l (BUFLIT): When reporting a bad character, do not
show it in the form of an escape sequence.
* lex.yy.c.shipped: Updated.
* RELNOTES: Updated.
* configure, txr.1: Bumped version and date.
* share/txr/stdlib/ver.tl: Bumped.
* txr.vim, tl.vim: Regenerated.
* tests/common.tl (vtest): Only if the expected expression
is :error or (quote :error) do we wrap the expansion and
evaluation of the test expression with exception handling,
because only then do we expect an error. When the test
expression is anything else, we don't intercept any errors,
and so problems in test cases are easier to debug now.
* tests/012/struct.tl: In one case we must initialize
the *gensym-counter* to 4 to compensate for the change
in vtest to get the same gensym numbers in the output.
* parser.c (find_matching_syms): Apply De Morgan's law to the
symbol binding tests, to use a sum of positive tests instead
of a product of negations (see the schematic below). Check
for struct and FFI type bindings in the default case. The
default and '[' cases are rearranged so that the '[' case
omits these, so as not to complete on a struct or FFI typedef
after a [.
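
Schematic before/after of the rewrite; all identifiers here
are illustrative, not the actual parser.c code:

    /* before: a negated product of negative tests */
    if (!(fb == nil && vb == nil && mb == nil))
      add_completion(sym);

    /* after: a sum of positive tests, which the default case
       extends with struct and FFI typedef checks; the '['
       case simply omits the last two */
    if (fb != nil || vb != nil || mb != nil ||
        find_struct_type(sym) || ffi_typedef_p(sym))
      add_completion(sym);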
* tests/012/seq.tl: New tests.
* txr.1: Improve documentation of window-map's :wrap
and :reflect. Add examples.
* txr.1: Fix heading repeating identity instead of listing
identity and identity*.
* share/txr/stdlib/doc-syms.tl: Regenerated.
* txr.1: Fix numerous instances of text which says that
arguments are applied to a function. A few of the changes
repair wording that was entirely botched.