This adds an option for an optimization that was previously applied unconditionally: swapping the `if` and `else` branches if the condition is shorter when negated.
Enabled by default.
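A minimal Python sketch of the transformation being toggled (the function and its signature are illustrative, not the optimizer's actual internals):

    # Illustrative sketch only; operates on source text for clarity.
    def maybe_swap_branches(cond, negated_cond, then_branch, else_branch):
        # Negate the condition and swap the branches when the negated
        # condition is textually shorter.
        if len(negated_cond) < len(cond):
            return negated_cond, else_branch, then_branch
        return cond, then_branch, else_branch

    # e.g. 'if (!(a > b)) X; else Y;'  ->  'if (a > b) Y; else X;'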
Thanks to @Blues-Today for reporting that --postarg was not working.
This needed a regression test, and the best way to distinguish --prearg from --postarg is to display the full preprocessor command line; therefore, a new option was added for that purpose: --preproc-show-cmdline.
Fixes #18.
The limits were too low to be reasonable with more modern versions of Python.
Double exceptions were possible. Make use of soft/hard limits, raising the soft limit in case of exception; that should make a double exception less likely.
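A rough sketch of the soft/hard limit idea, assuming the limit in question is Python's recursion limit (the names and values below are illustrative, not the project's actual code):

    import sys

    SOFT_LIMIT = 20000  # limit normally in effect (hypothetical value)
    HARD_LIMIT = 25000  # extra headroom for error handling (hypothetical value)

    def run_with_headroom(func, *args):
        sys.setrecursionlimit(SOFT_LIMIT)
        try:
            return func(*args)
        except RecursionError:
            # Raise the limit before doing anything else, so that handling
            # and reporting the error doesn't overflow the stack again
            # (the "double exception" mentioned above).
            sys.setrecursionlimit(HARD_LIMIT)
            raise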
Per bug report by @LeonaMorro (thanks!)
Fixes GitHub issue report #9 (fix #9).
Reverts f958b1cdf9 and adds the possibility of passing parameters after the system-inserted ones, via a new command-line parameter, -A/--postarg.
This enables, for example, both using a command interpreter that requires the file to execute before any other parameters, and overriding the default definitions.
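Conceptually, the preprocessor command line ends up assembled in this order (a hedged sketch; the function and variable names are illustrative, not the program's own):

    def build_preproc_cmdline(precmd, preargs, system_args, postargs):
        # --prearg values go before the system-inserted options,
        # -A/--postarg values go after them.
        return [precmd] + preargs + system_args + postargs

    # e.g. build_preproc_cmdline('python', ['mypreproc.py'],
    #                            system_inserted_options, ['-DMYMACRO=1'])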
--prenodef was causing the preprocessor-specific options (like -std=c99 for gcpp or -V199901L for mcpp) to be omitted. That makes no sense; '-p ext' should be used when custom options are desired (also modify the text that mentions it to make that clearer). Only the macros should be omitted.
The comment that warned about side effects with the order of --prenodef with respect to -p no longer applies, so remove it.
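In other words, with --prenodef the preprocessor-mode options survive and only the automatic macro definitions are dropped. A sketch under that reading (the macro shown is a hypothetical example, not one the program actually defines):

    # Illustrative only: which system-inserted options survive --prenodef.
    MODE_OPTIONS = {'gcpp': ['-std=c99'], 'mcpp': ['-V199901L']}  # always kept
    AUTO_MACROS = ['-DSOME_AUTO_MACRO=1']  # hypothetical; dropped with --prenodef

    def system_preproc_args(preproc, prenodef):
        args = list(MODE_OPTIONS.get(preproc, []))
        if not prenodef:
            args += AUTO_MACROS
        return args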
This allows using e.g. --precmd=python --prearg=mypreproc.py while still benefiting from the automatic defines, but on the negative side (or not?), it doesn't allow user options to override mcpp/gcpp internal options; the user is forced to use -p ext in that case.
Since our syntax extensions transform the source at parse time, they are all disabled. The optimizations are disabled too, as it doesn't make sense to prettify and optimize at the same time (the optimizer would remove the constants that we're trying to keep).
Addresses #4 in a more user-friendly way.
This solves a long-standing issue where we needed more data about LSL functions than just whether they are side-effect-free.
There's still some debug code, which is kept for historical reference.
- Separate library loading code into a new module. parser.__init__() no longer loads the library; it accepts (but does not depend on) a library as a parameter.
- Add an optional library argument to parse(). It's no longer mandatory to create a new parser to switch to a different builtins or seftable file (see the usage sketch after this list).
- Move warning() and types from lslparse to lslcommon.
- Add .copy() to uses of base_keywords, to not rely on it being a frozen set.
- Adjust the test suite.
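A usage sketch of the refactored API; the loader module and function names are assumptions (the commit only says that library loading moved to a new module), and the exact signatures may differ:

    from lslopt import lslparse       # package/module paths assumed
    from lslopt import lslloadlib     # assumed name of the new loader module

    lib = lslloadlib.LoadLibrary()    # load builtins/seftable once (assumed name)
    p = lslparse.parser(lib)          # __init__ no longer loads the library itself

    src = 'default { state_entry() { llOwnerSay("hi"); } }'
    tree = p.parse(src)               # uses the library given at construction
    # tree = p.parse(src, other_lib)  # or pass a different library per call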
Also simplify and fix the matching expression for #line (gcc inserts numeric flags at the end).
It still has many problems: it's O(n^2); it's calculated at every EParse, and EParse can be triggered and ignored while scanning vectors or globals; and UniConvScript doesn't read #line at all, thus failing to report a meaningful input line. But at least it's a start.
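For reference, an illustrative pattern that accepts both forms (not necessarily the exact expression used in the code):

    import re

    # Matches '#line N "file"' as well as gcc-style '# N "file" flags...'
    line_directive = re.compile(
        r'^#\s*(?:line\s+)?(\d+)\s+"((?:[^"\\]|\\.)*)"(?:\s+\d+)*\s*$')

    m = line_directive.match('# 12 "script.lsl" 2')
    if m:
        lineno, fname = int(m.group(1)), m.group(2)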
ReportError() needed to account for terminal encodings that don't support the characters being printed. It was also reporting an inaccurate column number and marker position, because the count was in bytes rather than characters; that has been fixed.
Now EParse.__init__() calls a new function GetErrLineCol() that calculates the line and column corresponding to an error position.
The algorithm for finding the start of the line has also been changed in both ReportError() and EParse.__init__(); as a result, function fieldpos() has been removed.
The exception's lno and cno fields have been changed to be 1-based, rather than 0-based.
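A simplified sketch of what GetErrLineCol() computes (the signature and body are illustrative, not the actual implementation):

    def GetErrLineCol(text, errorpos):
        # Count in characters, not bytes (text is a unicode string),
        # and return 1-based values.
        lno = text.count('\n', 0, errorpos) + 1
        linestart = text.rfind('\n', 0, errorpos) + 1
        cno = errorpos - linestart + 1
        return lno, cno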
Thanks to @Jomik for the report. Fixes #5.