When a function ends like this:
jmp label
...
label: end t42
The jmp can just be replaced by "end t42".
* share/txr/stdlib/optimize.tl (basic-blocks peephole-block):
Implement the pattern to replace a jump to a one or two
instruction sequence ending in a jend with that sequence.
Including the extra instruction helps if the target has not
been peepholed yet. In the pattern matching Ackermann case,
two instructions are promoted, one of which is eliminated.
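To illustrate the transformation, here is a rough Python sketch of the idea, not the actual optimize.tl code; the instruction tuples, the label-index table and the plain "end" opcode are invented for the example:

  # Replace "jmp L" by the short instruction sequence found at L,
  # when that sequence (one or two instructions) terminates the function.
  def promote_jump_to_end(insns, label_index):
      out = []
      for insn in insns:
          if insn[0] == 'jmp':
              target = label_index[insn[1]]      # instructions following the label
              if len(target) <= 2 and target[-1][0] == 'end':
                  out.extend(target)             # copy the tail in place of the jmp
                  continue
          out.append(insn)
      return out

  # Example: a block ending in "jmp L1" where L1 is just "end t42".
  insns = [('mov', 't42', 'd3'), ('jmp', 'L1')]
  label_index = {'L1': [('end', 't42')]}
  print(promote_jump_to_end(insns, label_index))  # the jmp becomes ('end', 't42')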
During peephole optimization, new cases can arise where a dreg
is tested.
* share/txr/stdlib/optimize.tl (basic-blocks peephole-block):
Test for (if dreg ...) and (ifq dreg nil), eliminating the
instruction or rewriting it to jmp. It could be worth doing
another round of jump threading and dead code elimination.
Though the assembler accepts nil as a synonym of (t 0), we have
to be consistent.
* share/txr/stdlib/compiler.tl (compiler comp-or): Don't use
nil operand in ifq instruction: use (t 0).
* share/txr/stdlib/optimize.tl (basic-blocks
thread-jumps-block): Don't match ifq pattern with nil operand;
match (t 0).
* share/txr/stdlib/optimize.tl (basic-blocks
thread-jumps-block): The two accidental occurrences of @reg
will never match each other: rename one to @dn.
* share/txr/stdlib/compiler.tl (compiler optimize): Perform
the control flow optimizations of jump threading and dead code
elimination first. Then, do the data-flow-assisted
optimizations on the re-calculated control flow graph.
With this, two more useless instructions are shaved off the
pattern matching Ackermann:
(defun ack (:match)
((0 @n) (+ n 1))
((@m 0) (ack (- m 1) 1))
((@m @n) (ack (- m 1) (ack m (- n 1)))))
It now compiles down to 16 instructions instead of 18;
and the speedup of (ack 3 7) versus interpreted goes
from 36.0 to 37.3 times.
The reason that the new reduction in the code is possible
is that previously, the code being analyzed was in separate
basic blocks, which are now merged together in the dead code
elimination pass.
* share/txr/stdlib/compiler.tl (compiler optimize): Move
calc-liveness and peephole after elim-dead-code.
After eliminating dead code and useless forward jumps, there
can be blocks which unconditionally proceed to subsequent
blocks, which have no other predecessor. These blocks can be
merged together. This currently does nothing to alter the
generated code. The advantage will be obvious in a subsequent
commit.
* share/txr/stdlib/optimize.tl (struct basic-block): New slot,
rlinks: reverse links.
(basic-blocks join-block): New method.
(basic-blocks link-graph): Populate rlinks slots.
(basic-blocks join-blocks): New method.
(basic-blocks elim-dead-code): Reset rlinks also before
re-calculating the graph connectivity with link-graph.
Call join-blocks to merge together consecutive blocks.
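A minimal Python sketch of the joining step; the block layout with name, insns, preds and falls_through fields is invented for the example, whereas the real code works on basic-block structs and the new rlinks slot:

  def join_blocks(blocks):
      out = [blocks[0]]
      for blk in blocks[1:]:
          prev = out[-1]
          # absorb blk when prev unconditionally falls through to it
          # and blk has no other predecessor
          if prev['falls_through'] and blk['preds'] == [prev['name']]:
              prev['insns'] += blk['insns']
              prev['falls_through'] = blk['falls_through']
          else:
              out.append(blk)
      return out

  b0 = {'name': 'B0', 'insns': [('mov', 't1', 'd0')], 'preds': [], 'falls_through': True}
  b1 = {'name': 'B1', 'insns': [('end', 't1')], 'preds': ['B0'], 'falls_through': False}
  print(len(join_blocks([b0, b1])))   # 1: B1 is merged into B0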
The optimizer relies too much on labels. The basic block graph
is linked via labels instead of directly, and the blocks
are maintained in order via a label list. Let's get rid
of this.
* share/txr/stdlib/optimize.tl (struct basic-blocks): Member
slot removed. Oh look, we have a list slot that is not
utilized at all; that will be used instead.
(basic-blocks :postinit): Don't calculate the bb.labels.
(basic-blocks get-insns): Loop over list instead of labels,
and get the .insns directly without label hash lookup.
(basic-blocks (cut-block, next-block)): Operate with and
return blocks instead of labels.
(basic-blocks link-graph): Link graph using direct pointers
rather than indirection through labels.
(basic-blocks calc-liveness): Get rid of small amount of label
indirection here.
(basic-blocks thread-jumps-block): Drop unused label argument.
Do some needed label hash lookups now to produce block
argument for cut-block and next-block methods.
(basic-blocks peephole-block): Several rules need to do some
hash lookup to work with cut-block and next-block, and
retrieve a needed label from a block.
(basic-blocks (peephole, thread-jumps)): Loops simplified, no
dealing with labels.
(basic-blocks elim-next-jump): Drop label argument,
use bl in call to next-block.
(basic-blocks elim-dead-code): Loops and recursion simplified
without label indirection.
* configure (yacc_is_newer_bison): Configuration variable
removed.
(gen_config_make): Generation of yacc_is_newer_bison make
variable removed.
Over two years ago, I disabled parallel building with a
.NOTPARALLEL: directive in the Makefile, due to some rules
that depended on order of updates of their prerequisites.
Today the situation is better. It looks like there are just a
few rules that have to be fixed, all in the area of executing
tests.
Let us not enable that by default, though, to protect
downstream users from themselves. Some operating system
distribution builds optimistically run make -j N on all the
packages, and then have to deal with sporadic breakages when
some of those packages have parallel build bugs. We can
prevent TXR from ever being implicated in such a breakage by
turning off parallel building even if make -j requests it.
There will now be a ./configure --parallelmake option
to allow parallel builds, off by default. The --maintainer
configuration mode will flip that default; configuring in
maintainer mode will enable parallel builds, unless explicitly
disabled with --no-parallelmake.
* Makefile (.NOTPARALLEL): Conditionally assert, if
parallelmake is false.
(tst/tests/014/dgram-stream.ok,
tst/tests/014/socket-basic.ok): Enforce order between these
two tests. They clash in their use of network ports so cannot
run at the same time.
(test.clean): test.clean must finish executing before tests;
enforce order.
* configure (parallelmake, parallelmake_given): New variables.
(help text): Add parallelmake option to help.
(gen_config_make): Generate the parallelmake make variable
into config.make.
New script section, defaulting parallelmake to y in maintainer
mode.
* RELNOTES: Updated.
* configure, txr.1: Bumped version and date.
* share/txr/stdlib/ver.tl: Bumped.
* txr.vim, tl.vim: Regenerated.
* protsym.c: Likewise.
We cannot assume that a d register has a non-nil value.
This is because d registers are exploited in the
implementation of load-time: the result of a load-time form is
stored by mutating a d register, and the value could be nil.
Since we still want to be able to assume that other d
registers are non-nil, what we can do is avoid that assumption
just for those d registers that are used for load-time values.
* share/txr/stdlib/compiler.tl (struct compiler): When
constructing basic-blocks, pass a new constructor argument:
the list of load-time d-regs. This is easily obtained by
mapping the load-time frags to their oreg slots, which are
those d-regs.
* share/txr/stdlib/optimize.tl (struct basic-blocks): New
slot and BOA constructor argument, lt-dregs.
(basic-blocks thread-jumps-block): Add a require to the
pattern (if (d @reg) @jlabel), that the register must not
be one of the load-time d-regs.
* share/txr/stdlib/copy-file.tl (package copy-file): Removed.
I cannot remember why I used this, but it was probably because
this source file was developed in "user space". Then it likely
became an experiment in the use of merge-delete-package to
define material in a temporary package which is merged down to
the ultimate target package like sys. Why get rid of this?
It's causing a problem. The second and subsequent recompile of
copy-path-rec causes its internal block not to be optimized
away. I spotted this while checking whether a recompile of the
library using the fully compiled library is different from
the initial compile: copy-path-rec's block is not optimized away.
* share/txr/stdlib/compiler.tl (compiler comp-setqf): When an
assignment to a function is compiled, we must register the
occurrence of a free function, not a free variable.
* share/txr/stdlib/optimize.tl (basic-blocks elim-next-jump):
New method, which detects that the basic block ends with a
(jmp label), where label is the next block in emit order.
This jmp instruction can be eliminated.
(basic-blocks elim-dead-code): Walk the reachable labels,
and call elim-next-jmp to remove useless jumps.
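The idea, as a small Python sketch; the block dictionaries with label and insns fields are invented for the example:

  def elim_next_jump(block, next_block):
      # a trailing "jmp L" is useless if L labels the very next block
      # in emit order; control falls through to it anyway
      last = block['insns'][-1] if block['insns'] else None
      if last and last[0] == 'jmp' and next_block and last[1] == next_block['label']:
          block['insns'].pop()

  blocks = [{'label': 'L0', 'insns': [('mov', 't1', 'd0'), ('jmp', 'L1')]},
            {'label': 'L1', 'insns': [('end', 't1')]}]
  elim_next_jump(blocks[0], blocks[1])
  print(blocks[0]['insns'])           # [('mov', 't1', 'd0')]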
* share/txr/stdlib/optimize.tl (basic-blocks
thread-jumps-block): This is a complementary optimization to
the one which matches (if (d @reg) @jlabel). An if
instruction conditional on the nil register (t 0) is always
taken, and can be rewritten to a jmp. This can promote
elimination of dead code.
* share/txr/stdlib/compiler.tl (compiler optimize): Call new
elim-dead-code method on basic-blocks object.
* share/txr/stdlib/optimize.tl (basic-blocks elim-dead-code):
New method. We reset the links information for each basic
block and re-build the graph. Then we traverse it to determine
what blocks are reachable, and cull the original blocks list
of those that are not.
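In outline, the reachability pass looks like this Python sketch; the label/links block layout is invented, and the real method first rebuilds the links with link-graph:

  def elim_dead_code(blocks, root):
      reachable = set()
      stack = [root]
      while stack:
          blk = stack.pop()
          if blk['label'] in reachable:
              continue
          reachable.add(blk['label'])
          stack.extend(blk['links'])            # successor blocks
      # cull blocks that were never reached from the root
      blocks[:] = [b for b in blocks if b['label'] in reachable]

  b2 = {'label': 'L2', 'links': []}             # unreachable block
  b1 = {'label': 'L1', 'links': []}
  b0 = {'label': 'L0', 'links': [b1]}
  blocks = [b0, b1, b2]
  elim_dead_code(blocks, b0)
  print([b['label'] for b in blocks])           # ['L0', 'L1']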
The register removal optimization now works for registers
initialized from variables. For instance
mov t13 v00000
mov t12 v00001
gcall t7 3 t13 d0
gcall t8 4 t12 t19
can, under the right conditions, be replaced with
gcall t7 3 v00000 d0
gcall t8 4 v00001 t19
where the useless copies of the v0 and v1 registers to
t13 and t12 are gone, and the references to those registers
are renamed to those v registers.
* share/txr/stdlib/optimize.tl (basic-blocks local-liveness):
We now use the def field of a live-info structure differently.
The def field holds a register, instead of a t register
number. For any instruction that defines a register, the
register is stored, whether it is a v or t register.
(basic-blocks peephole-block): The rule is simplified and
extended to cover t-regs that receive a v-reg. A special
additional condition applies in this case: v-registers are
tied to a frame, and must not be accessed if their frame has ended.
Our optimization carries the risk that the variable was copied
into a register precisely because the v register is going out
of scope and the value is needed outside of the frame. We
don't have frame-level information in the basic
block, so to be conservative, we guess like this: if any end
instructions occur (potentially ending a frame), and if the
register is used after an end instruction, then we don't do
the replacement.
* share/txr/stdlib/compiler.tl (compiler comp-catch): Remove
inappropriate jend. Control definitely flows past this
instruction, because we jump here in the case when no
exception has taken place to continue the program. This bug
has resulted in some blocks being declared unreachable by the
control flow graph construction, and thus not included in
register liveness calculations, resulting in having a nil
in the live slot. This was exposed by the new
optimization.
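The conservative guard described above can be pictured with this Python sketch; the instruction tuples are invented for the example, and only the "used after an end instruction" test is shown, not the substitution itself:

  def safe_to_propagate(insns, start, treg):
      # a copy "mov tN vM" at position start may be propagated only if
      # tN is never used after an "end" instruction, since "end" may
      # close the frame that vM belongs to
      seen_end = False
      for insn in insns[start + 1:]:
          if insn[0] == 'end':
              seen_end = True
          elif seen_end and treg in insn[1:]:
              return False
      return True

  insns = [('mov', 't12', 'v00001'), ('gcall', 't8', '4', 't12', 't19'), ('end', 't8')]
  print(safe_to_propagate(insns, 0, 't12'))     # True: no use of t12 after an end
  insns2 = [('mov', 't12', 'v00001'), ('end', 't7'), ('gcall', 't8', '4', 't12', 't19')]
  print(safe_to_propagate(insns2, 0, 't12'))    # False: t12 is used after an end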
The list-builder methods, other than del, del* and get,
now return the object instead of nil.
* share/txr/stdlib/build.tl (list-builder (add, add*, pend,
pend*, ncon, ncon*)): Return the object, self.
(list-builder-flets): Do not return the object out of the
local functions which invoke the above methods.
* txr.1: Documented.
Using liveness information, if we are very careful about the
circumstances, we can eliminate instructions of the form
mov tN src
and replace every subsequent occurrence of tN in the basic
block by src. For instance, in a simple case, if a function
ends with
mov t13 d5
end t13
that can be rewritten as
end d5
The most important condition is that t13 is not live on exit
from that basic block. There are other conditions. For now,
one of the conditions is that src cannot be a v register.
* share/txr/stdlib/optimize.tl (struct live-info): New slot,
def. This indicates which t register is being clobbered, if
any, by the instruction to which this info is attached.
(basic-blocks local-liveness): Adjust the propagation of the
defined info. If an instruction both consumes a register and
overwrites it, we track that as both a use and a definition.
We set up the def fields of live-info. We do that by mutation,
so we must be careful to copy the structure. The def field
pertains to just one instruction, but the same info can be
attached to multiple instructions.
(subst-preserve): New function.
(basic-blocks peephole-block): New optimization added.
Now takes a basic-block argument, bl.
(basic-blocks peephole): Pass bl to peephole-block.
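A simplified Python sketch of the substitution, not the subst-preserve code itself; the real rule checks more conditions, such as src not being clobbered later and src not being a v register:

  def eliminate_move(insns, i, treg, src, live_out):
      # "mov treg src" at position i can go away if treg is not live on
      # exit; later uses of treg in the block are renamed to src
      if treg in live_out:
          return insns
      head, tail = insns[:i], insns[i + 1:]
      tail = [tuple(src if op == treg else op for op in insn) for insn in tail]
      return head + tail

  insns = [('mov', 't13', 'd5'), ('end', 't13')]
  print(eliminate_move(insns, 0, 't13', 'd5', live_out=set()))   # [('end', 'd5')]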
The optimizer now calculates liveness information for the t
registers. In every basic block, it now knows which t regs
are live on exit, and which are used in the block, at every
instruction.
One small optimization is based on this so far: the removal
of a move instruction targeting a dead register. This appears
stable.
* share/txr/stdlib/compiler.tl (compiler comp-unwind-protect):
The protected code of a uwprot must terminate with a regular
end instruction, rather than the jend pseudo-instruction.
This is because the clean-up block is executed after the
protected block and references values generated in it: t
registers are live between the pfrag and the cfrag. Without
this, the compile-file-conditionally function was wrongly
optimized, causing it to return false due to the setting of
the success flag (that having been moved into a t register)
having been optimized away.
(compiler optimize): Add the call to the basic-blocks method
that calculates liveness.
* share/txr/stdlib/optimize.tl (struct live-info, struct
basic-block): New structure types. The basic-block
structure type now represents basic blocks instead of raw
lists.
(struct basic-blocks): New slots, root, li-hash.
(basic-blocks jump-ops): We add a few instructions that
reference labels, just to be safe.
(basic-blocks :postinit): Refactor division into basic blocks
so that it generates basic-block objects instead of just lists
of instructions. Also, the new method link-graph is called
which analyzes the tail instructions of all the blocks to
determine connectivity and sets the next and links fields
of the objects to build a graph.
(basic-blocks (get-insns, cut-blocks)): Refactor for struct
representation of basic blocks.
(basic-blocks (link-graph, local-liveness, calc-liveness): New
methods.
(basic-blocks thread-jumps-block): Refactor for struct
representation of basic blocks.
(basic-blocks peephole-block): Likewise, and new pattern for
removing moves into dead t-registers, assisted by liveness
information.
(basic-blocks (peephole, thread-jumps)): Refactor for
basic-blocks representation.
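The block-level part of the calculation is the classic backward data-flow iteration; a Python sketch with invented field names:

  def calc_liveness(blocks):
      # live-out of a block is the union of live-in of its successors;
      # live-in is use U (live-out minus def); iterate to a fixed point
      changed = True
      while changed:
          changed = False
          for blk in reversed(blocks):
              live_out = set()
              for succ in blk['links']:
                  live_out |= succ['live_in']
              live_in = blk['use'] | (live_out - blk['def'])
              if live_in != blk['live_in'] or live_out != blk['live_out']:
                  blk['live_in'], blk['live_out'] = live_in, live_out
                  changed = True

  b1 = {'links': [], 'use': {'t1'}, 'def': set(), 'live_in': set(), 'live_out': set()}
  b0 = {'links': [b1], 'use': set(), 'def': {'t1'}, 'live_in': set(), 'live_out': set()}
  calc_liveness([b0, b1])
  print(b0['live_out'])    # {'t1'}: t1 must survive to the end of b0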
Compiling a form like
(caseq op ((a b c d e f g h i j k) 42))
results in a run-time error in the compiler, similar to:
list-vec: (#:l0048) is not of type vec
* share/txr/stdlib/compiler.tl (compiler comp-switch): Make
sure cases is also still a vector in the complex case when
it's not just a copy of cases-vec.
* mpi/mpi.c (mp_or, mp_xor): The main loop must only iterate
up to the minimum number of digits between the two source
operands, not the maximum number of digits, because otherwise
it will access past the end of the shorter bignum's
digit array.
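The shape of the fix, sketched in Python over plain digit lists; sign handling and normalization are ignored, and this is not the mpi code itself:

  def digits_or(a, b):
      n = min(len(a), len(b))                 # never index past the shorter array
      out = [a[i] | b[i] for i in range(n)]
      out += a[n:] or b[n:]                   # remaining digits of the longer operand
      return out

  print(digits_or([0x1, 0x2, 0x4], [0x8]))    # [9, 2, 4]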
* share/txr/stdlib/optimize.tl (thread-jumps-block): Add
missing argument to the close instruction pattern. The
omission caused us to miss a threading opportunity, due to the
new ntregs parameter being mistaken for a label, which is then
not found.
This fixes the bug of not allowing @(next :list nil).
It also fixes the bug of @(next :string) or @(next :string nil)
reporting a nonsensical error message, instead of correctly
complaining that nil isn't a string.
* match.c (v_next_impl): Capture the conses returned
by assoc, so we know whether a keyword is present. Then use
those conses in tests which detect whether the keyword is
present, rather than the values, which could be nil.
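The same present-versus-nil distinction, sketched in Python; the argument list format is invented for the example:

  def get_keyword(args, key):
      # find the entry itself (like the cons returned by assoc), so that
      # "absent" and "present with a nil value" can be told apart
      entry = next((a for a in args if a[0] == key), None)
      present = entry is not None
      value = entry[1] if present else None
      return present, value

  print(get_keyword([(':list', None)], ':list'))   # (True, None): present, value is nil
  print(get_keyword([], ':list'))                  # (False, None): keyword absent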
* match.c (match_files): Fix bungled wording.
The @(call) directive is buggy in the following ways, which
cause an indirect call to behave differently from a direct call.
It creates a new context, and so if the opening of a data
source is deferred into the indirectly called function, that
data source is lost when the indirect call terminates.
Furthermore, if a data source is already established, there
is no progress through the data: two consecutive @(call ...)
directives operate on the same data.
It also fails to implement vertical to horizontal fallback; if
a function is not vertically defined, the directive fails.
* match.c (v_call): Rewrite the core logic in the following
way: we rewrite the indirect @(call) syntax into direct call
syntax, substitute that into c->spec, and then just
call v_fun.
* tests/008/call-2.expected: New file.
* tests/008/call-2.txr: New file. Test fails before this commit
because both calls are matching against the same "A" element
of the list.
This patch causes calls to vertical functions, as well as the
@(call) directive, to be considered non-matching directives,
so that opening the data source is deferred.
This allows included .txr files to call the functions that
they define, without the side effect of standard input being
read.
* match.c (open_data_source): Function refactored to reduce
duplication. c->data is checked first, and if it is not t,
nothing is done, making the function cheaper in the frequent
case. The non_matching_dir condition changes. We now check
that the first element of the first spec is a non-nil symbol.
If it has a function binding as a vertical function, then
that is considered non_matching.
(dir_tables_init): Treat @(call) as a non-matching directive.
* Makefile (tst/tests/008/no-stdin-hang.ok): Add -n argument
for non-interactive, which will cause stdin to be read in
that test case if there is a regression in this change. If
make tests is run in a terminal, this will hang make tests.
* tests/no-stdin-hang.txr: New file.
* tests/no-stdin-hang.expected: New file.
* configure (cmdline): We are missing the simple case when the
argument contains spaces or tabs, and no quotes or special
characters. This causes a badly formatted command line in
reconfigure in common situations like --opt-flags="-O3 -W".
* share/txr/stdlib/compiler.tl (compiler comp-arith-form):
Pass env to reduce-constant.
(compiler comp-fun-form): Likewise, and don't bother checking
%const-foldable% because reduce-constant does that again.
(compiler comp-apply-call): Pass env to reduce-constant.
(reduce-constant): Take an env argument. If the function is
constant-foldable, check that there is no lexical function
binding shadowing it. If there is, it's not the function we
think it is, and we must not constant-fold it.
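The shadowing check amounts to something like this Python sketch; the foldable table and the lexical_funs set merely stand in for %const-foldable% and the compile-time environment:

  CONST_FOLDABLE = {'+': lambda *a: sum(a)}

  def reduce_constant(form, lexical_funs):
      op, *args = form
      # fold only if op is known foldable, is not shadowed lexically,
      # and all arguments are already constants
      if op in CONST_FOLDABLE and op not in lexical_funs \
         and all(isinstance(a, int) for a in args):
          return CONST_FOLDABLE[op](*args)
      return form

  print(reduce_constant(('+', 1, 2), lexical_funs=set()))   # 3
  print(reduce_constant(('+', 1, 2), lexical_funs={'+'}))   # unchanged: '+' is shadowed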
* share/txr/stdlib/compiler.tl (compiler comp-apply-call):
Constant-fold the arguments. Check for special cases involving
call and route to regular function call.
(compiler comp-dwim): Don't wrap all arguments with
sys:lisp1-value, only those that are bindable symbols. This
way constant expressions, including keywords, t and nil, are
not wrapped, and detectable by constantp.
* share/txr/stdlib/compiler.tl (%const-foldable-funs%): Add
numerous eligible functions that are registered in eval.c. We
avoid anything with functional arguments, environmental
dependencies or anything that may be relied upon to produce a
fresh object.
* share/txr/stdlib/compiler.tl (%const-foldable-funs%): Add
all of the cadr, caddr, and other functions. Take out first
and second; these will be later added together with other
things that are being registered in eval.c.
* share/txr/stdlib/compiler.tl (%const-foldable-funs%): Add
most functions from arith module.
(%const-foldable%): New variable, hash built from list.
(compiler comp-fun-form, reduce-constant): Refer to
%const-foldable% hash instead of %const-foldable-funs% list.
* share/txr/stdlib/compiler.tl (%const-foldable-funs%): Add
pred, succ and their sisters.
* share/txr/stdlib/vm-param.tl (%max-lev-idx%, %max-v-lev%,
%max-sm-lev-idx%): Get rid of the macro-time wrapping in the
calculation, which is there for manual constant folding.
* share/txr/stdlib/asm.tl (with-lev-idx): Remove the macro-time
wrapping that provides manual constant folding.
Introducing folding of certain expressions that can be
evaluated at compile time, with some special handling for
common arithmetic functions, in which we can collapse
consecutive arguments that are constant integer expressions.
* share/txr/stdlib/compiler.tl (%const-foldable-funs%): New
global variable.
(compiler compile): Send multiplication and division through
new methods that specially treat integer arguments.
(compiler comp-arith-form, compiler comp-neg-arith-form): New
methods.
(comp-fun-form): Apply constant folding to a proper function
call whose operator is listed in %const-foldable-funs%.
(reduce-constant): New function.
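The collapsing of consecutive constant arguments can be pictured like this Python sketch, written for a commutative operator such as +; negated forms like - need the separate comp-neg-arith-form treatment:

  def collapse_constants(op, args, fold):
      out, run = [], []
      for a in args + [None]:                  # the sentinel flushes the last run
          if isinstance(a, int):
              run.append(a)
          else:
              if len(run) >= 2:
                  out.append(fold(run))        # fold a run of constants into one
              else:
                  out.extend(run)
              run = []
              if a is not None:
                  out.append(a)
      return (op, *out)

  print(collapse_constants('+', ['x', 1, 2, 'y'], sum))   # ('+', 'x', 3, 'y')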
* share/txr/stdlib/compiler.tl (compiler comp-if): Recognize
the pattern (if (not (eq ...)) ...), and convert it to
(if (neq ...) ...), likewise for eql and equal. This is fed
back into comp-if, whereby it may be further reduced.
* share/txr/stdlib/compiler.tl (fixed-point): New macro.
(reduce-lisp): Hide irrelevant iteration details by using
fixed-point macro.
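The idea behind fixed-point, expressed as a Python sketch rather than the TXR Lisp macro:

  def fixed_point(f, x):
      # apply f repeatedly until the result stops changing
      while True:
          y = f(x)
          if y == x:
              return x
          x = y

  print(fixed_point(lambda n: n // 2 if n > 1 else n, 40))   # 1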
* share/txr/stdlib/compiler.tl (compiler comp-fun-form):
Reduce negated eq, eql, equal to neq, neql, nequal.
* share/txr/stdlib/compiler.tl (compiler comp-if): Support
reduction of nequal in the same way as equal.
* share/txr/stdlib/compiler.tl (reduce-lisp): Add one more
reduction case. There is a "hit" for this somewhere, because
even though this adds code, overall 200 bytes are saved over
the entire library.
* eval.c (dotted_form_error): Use eval_error instead of
throwing directly, so the error message has location info and
consistent formatting.
(expand_forms, expand_forms_ss, expand_forms_lisp1): Check for
the dotting error one level above, at the cons cell. The cons
cell will have line number information attached to it for a
better error message.
The raw size of the library's compiled files shrinks by over 2%
from this optimization, not to mention that some list
construction code is faster.
* share/txr/stdlib/compiler.tl (compiler comp-fun-form):
Reduce common list construction primitives via reduce-lisp
function which algebraically transforms to a form with fewer
function calls.
(reduce-lisp): New function.
* share/txr/stdlib/compiler.tl (compiler comp-if): Remove the
pointless cases which check for test being nil, since that is
subsumed under constantp. Move all the constantp cases up,
making them match-case clauses. The handling of %test-funs%
in several places becomes a single pattern case. The remaining
cases don't have any more sub-cases to test, so the cond
forms are gone.
Here, we look for (equal x y) expressions that can be reduced
to (eql x y) or (eq x y) and compiled that way. Also, we
look for (if (equal x y) ...) expressions that can be turned
into (if (eql x y) ...) or (if (eq x y) ...) which then
compile into ifq or ifql instructions.
* share/txr/stdlib/compiler.tl (compiler comp-if): Convert
tree-case into match case, and then handle the
(if (equal ...)) pattern.
(comp-fun-form): Add recognition for (equal x y) expressions,
and reduce their strength, if possible.
(eq-comparable, eql-comparable): New functions.
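A Python sketch of the strength-reduction decision; the type tests here are only rough stand-ins for eq-comparable and eql-comparable, not TXR's exact rules:

  def reduce_equality(x, y):
      def eq_ok(v):  return v is None or isinstance(v, bool)    # stand-in for eq-comparable
      def eql_ok(v): return eq_ok(v) or isinstance(v, int)      # stand-in for eql-comparable
      if eq_ok(x) or eq_ok(y):
          return 'eq'
      if eql_ok(x) or eql_ok(y):
          return 'eql'
      return 'equal'

  print(reduce_equality('foo', 3))     # 'eql': an integer literal permits eql
  print(reduce_equality([1], 'bar'))   # 'equal': the comparison cannot be weakened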
* share/txr/stdlib/optimize.tl (basic-blocks
thread-jumps-block): We want a set here, not a pset, otherwise
we are processing the old instruction again rather than
iterating. This breaks jump threading where multiple
iterations are required to get to the ultimate target. It
showed up as a difference in the compiled image of the
sys:compile-match function.
* share/txr/stdlib/compiler.tl (compiler comp-fun-form):
Rewritten more compactly and extensibly using match-case.
* txr.1: The documentation wrongly refers to @(call @'foo)
which hasn't worked that way since version 144. The first
argument is a Lisp expression.
* match.c (v_call): This function must propagate the
next_spec_k return value, rather than return a
short-circuiting match object, which causes the parent
to immediately succeed.
* tests/008/call.txr: New test case, from Frank Schwidom.
* tests/008/call.expected: Likewise.
* txr.1: Improved description of quote operator.
Jump threading just needs to look at the last instruction in
a basic block now; it's a waste of cycles to be pattern
matching on jump instruction patterns while peephole scanning.
* share/txr/stdlib/compiler.tl (compiler optimize): Invoke
new thread-jumps after peephole.
* share/txr/stdlib/optimize.tl (basic-blocks
thread-jumps-block): New method.
(basic-blocks peephole-block): Remove jump-threading cases;
they are now in thread-jumps-block.
(basic-blocks thread-jumps): New method.
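A Python sketch of threading at the block level; label_index maps labels to blocks, and the layout is invented for the example:

  def thread_jump(block, label_index):
      # follow a chain of blocks that contain nothing but a jmp and
      # retarget this block's final jump at the ultimate destination
      last = block['insns'][-1]
      if last[0] != 'jmp':
          return
      seen, target = set(), last[1]
      while True:
          dest = label_index[target]['insns']
          if len(dest) == 1 and dest[0][0] == 'jmp' and target not in seen:
              seen.add(target)                 # guard against jmp cycles
              target = dest[0][1]
          else:
              break
      block['insns'][-1] = ('jmp', target)

  label_index = {'L1': {'insns': [('jmp', 'L2')]}, 'L2': {'insns': [('end', 't0')]}}
  b0 = {'insns': [('mov', 't1', 'd0'), ('jmp', 'L1')]}
  thread_jump(b0, label_index)
  print(b0['insns'][-1])                       # ('jmp', 'L2')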