The NetBSD Project

CVS log for pkgsrc/devel/ply/Makefile



Default branch: MAIN

Revision 1.24, Mon Aug 14 05:24:14 2023 UTC (6 months, 3 weeks ago) by wiz
Branch: MAIN
CVS Tags: pkgsrc-2023Q4-base, pkgsrc-2023Q4, pkgsrc-2023Q3-base, pkgsrc-2023Q3, HEAD
Changes since 1.23: +2 -2 lines

*: recursive bump for Python 3.11 as new default

Revision 1.23, Thu Jun 30 11:18:17 2022 UTC (20 months ago) by nia
Branch: MAIN
CVS Tags: pkgsrc-2023Q2-base, pkgsrc-2023Q2, pkgsrc-2023Q1-base, pkgsrc-2023Q1, pkgsrc-2022Q4-base, pkgsrc-2022Q4, pkgsrc-2022Q3-base, pkgsrc-2022Q3
Changes since 1.22: +2 -2 lines

*: Revbump packages that use Python at runtime without a PKGNAME prefix

Revision 1.22, Tue Jan 4 20:52:46 2022 UTC (2 years, 1 month ago) by wiz
Branch: MAIN
CVS Tags: pkgsrc-2022Q2-base, pkgsrc-2022Q2, pkgsrc-2022Q1-base, pkgsrc-2022Q1
Changes since 1.21: +2 -2 lines

*: bump PKGREVISION for users

They now have a tool dependency on py-setuptools instead of a DEPENDS

Revision 1.21, Fri Dec 4 20:45:12 2020 UTC (3 years, 3 months ago) by nia
Branch: MAIN
CVS Tags: pkgsrc-2021Q4-base, pkgsrc-2021Q4, pkgsrc-2021Q3-base, pkgsrc-2021Q3, pkgsrc-2021Q2-base, pkgsrc-2021Q2, pkgsrc-2021Q1-base, pkgsrc-2021Q1, pkgsrc-2020Q4-base, pkgsrc-2020Q4
Changes since 1.20: +2 -2 lines

Revbump packages with a runtime Python dep but no version prefix.

For the Python 3.8 default switch.

Revision 1.20, Thu Apr 25 07:32:49 2019 UTC (4 years, 10 months ago) by maya
Branch: MAIN
CVS Tags: pkgsrc-2020Q3-base, pkgsrc-2020Q3, pkgsrc-2020Q2-base, pkgsrc-2020Q2, pkgsrc-2020Q1-base, pkgsrc-2020Q1, pkgsrc-2019Q4-base, pkgsrc-2019Q4, pkgsrc-2019Q3-base, pkgsrc-2019Q3, pkgsrc-2019Q2-base, pkgsrc-2019Q2
Changes since 1.19: +2 -1 lines

PKGREVISION bump for anything using python without a PYPKGPREFIX.

This is a semi-manual PKGREVISION bump.

Revision 1.19, Sat Sep 17 13:51:19 2016 UTC (7 years, 5 months ago) by mef
Branch: MAIN
CVS Tags: pkgsrc-2019Q1-base, pkgsrc-2019Q1, pkgsrc-2018Q4-base, pkgsrc-2018Q4, pkgsrc-2018Q3-base, pkgsrc-2018Q3, pkgsrc-2018Q2-base, pkgsrc-2018Q2, pkgsrc-2018Q1-base, pkgsrc-2018Q1, pkgsrc-2017Q4-base, pkgsrc-2017Q4, pkgsrc-2017Q3-base, pkgsrc-2017Q3, pkgsrc-2017Q2-base, pkgsrc-2017Q2, pkgsrc-2017Q1-base, pkgsrc-2017Q1, pkgsrc-2016Q4-base, pkgsrc-2016Q4, pkgsrc-2016Q3-base, pkgsrc-2016Q3
Changes since 1.18: +3 -4 lines

Updated devel/ply 3.4 to 3.9

Revision 1.18, Sat May 17 16:10:43 2014 UTC (9 years, 9 months ago) by wiz
Branch: MAIN
CVS Tags: pkgsrc-2016Q2-base, pkgsrc-2016Q2, pkgsrc-2016Q1-base, pkgsrc-2016Q1, pkgsrc-2015Q4-base, pkgsrc-2015Q4, pkgsrc-2015Q3-base, pkgsrc-2015Q3, pkgsrc-2015Q2-base, pkgsrc-2015Q2, pkgsrc-2015Q1-base, pkgsrc-2015Q1, pkgsrc-2014Q4-base, pkgsrc-2014Q4, pkgsrc-2014Q3-base, pkgsrc-2014Q3, pkgsrc-2014Q2-base, pkgsrc-2014Q2
Changes since 1.17: +2 -1 lines

Bump applications PKGREVISIONs for python users that might be using
python3, since the default changed from python33 to python34.

I probably bumped too many. I hope I got them all.

Revision 1.17, Thu Sep 5 20:29:48 2013 UTC (10 years, 6 months ago) by wiz
Branch: MAIN
CVS Tags: pkgsrc-2014Q1-base, pkgsrc-2014Q1, pkgsrc-2013Q4-base, pkgsrc-2013Q4, pkgsrc-2013Q3-base, pkgsrc-2013Q3
Changes since 1.16: +4 -3 lines

Update to 3.4, based on PR 48186 by @kiaderouiche

Version 3.4
02/17/11: beazley
          Minor patch to make compatible with Python 3.  Note: This
          is an experimental file not currently used by the rest of PLY.

02/17/11: beazley
          Fixed trove classifiers to properly list PLY as
          Python 3 compatible.

01/02/11: beazley
          Migration of repository to github.

Revision 1.16, Sat Apr 6 13:09:24 2013 UTC (10 years, 11 months ago) by rodent
Branch: MAIN
CVS Tags: pkgsrc-2013Q2-base, pkgsrc-2013Q2
Changes since 1.15: +2 -2 lines

"Please write instead of"

Revision 1.15, Wed Oct 31 11:19:25 2012 UTC (11 years, 4 months ago) by asau
Branch: MAIN
CVS Tags: pkgsrc-2013Q1-base, pkgsrc-2013Q1, pkgsrc-2012Q4-base, pkgsrc-2012Q4
Changes since 1.14: +1 -3 lines

Drop superfluous PKG_DESTDIR_SUPPORT, "user-destdir" is default these days.

Revision 1.14, Thu Mar 15 11:53:25 2012 UTC (11 years, 11 months ago) by obache
Branch: MAIN
CVS Tags: pkgsrc-2012Q3-base, pkgsrc-2012Q3, pkgsrc-2012Q2-base, pkgsrc-2012Q2, pkgsrc-2012Q1-base, pkgsrc-2012Q1
Changes since 1.13: +2 -1 lines

Bump PKGREVISION from default python to 2.7.

Revision 1.13, Sun Aug 29 11:00:31 2010 UTC (13 years, 6 months ago) by nonaka
Branch: MAIN
CVS Tags: pkgsrc-2011Q4-base, pkgsrc-2011Q4, pkgsrc-2011Q3-base, pkgsrc-2011Q3, pkgsrc-2011Q2-base, pkgsrc-2011Q2, pkgsrc-2011Q1-base, pkgsrc-2011Q1, pkgsrc-2010Q4-base, pkgsrc-2010Q4, pkgsrc-2010Q3-base, pkgsrc-2010Q3
Changes since 1.12: +11 -9 lines

Update ply to version 3.3.

Version 3.3
08/25/09: beazley
          Fixed issue 15 related to the set_lineno() method in yacc.  Reported by

08/25/09: beazley
          Fixed a bug related to regular expression compilation flags not being
          properly stored in files created by the lexer when running
          in optimize mode.  Reported by Bruce Frederiksen.

Version 3.2
03/24/09: beazley
          Added an extra check to not print duplicated warning messages
          about reduce/reduce conflicts.

03/24/09: beazley
          Switched PLY over to a BSD-license.

03/23/09: beazley
          Performance optimization.  Discovered a few places to make
          speedups in LR table generation.

03/23/09: beazley
          New warning message.  PLY now warns about rules never
          reduced due to reduce/reduce conflicts.  Suggested by
          Bruce Frederiksen.

03/23/09: beazley
          Some clean-up of warning messages related to reduce/reduce errors.

03/23/09: beazley
          Added a new picklefile option to yacc() to write the parsing
          tables to a filename using the pickle module.   Here is how
          it works:


          This option can be used if the normal file is
          extremely large.  For example, on jython, it is impossible
          to read the parsing tables if the file exceeds a certain size.

          The filename supplied to the picklefile option is opened
          relative to the current working directory of the Python
          interpreter.  If you need to refer to the file elsewhere,
          you will need to supply an absolute or relative path.

          For maximum portability, the pickle file is written
          using protocol 0.
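          The portability note above can be checked with the stdlib alone.  This
          is a minimal sketch, not PLY's internals: the table dict here is a
          hypothetical stand-in for real LR parsing tables.

          ```python
          import pickle

          # Hypothetical stand-in for LR parsing tables; PLY's real tables
          # are much larger dicts of a broadly similar shape.
          tables = {"action": {(0, "NUM"): 3}, "goto": {(0, "expr"): 1}}

          # Protocol 0 is ASCII-only and readable by every Python version,
          # which is why it maximizes portability.
          blob = pickle.dumps(tables, protocol=0)
          restored = pickle.loads(blob)
          assert restored == tables
          ```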

03/13/09: beazley
          Fixed a bug in parser.out generation where the rule numbers
          were off by one.

03/13/09: beazley
          Fixed a string formatting bug with one of the error messages.
          Reported by Richard Reitmeyer

Version 3.1
02/28/09: beazley
          Fixed broken start argument to yacc().  PLY-3.0 broke this
          feature by accident.

02/28/09: beazley
          Fixed debugging output. yacc() no longer reports shift/reduce
          or reduce/reduce conflicts if debugging is turned off.  This
          restores similar behavior in PLY-2.5.   Reported by Andrew Waters.

Version 3.0
02/03/09: beazley
          Fixed missing lexer attribute on certain tokens when
          invoking the parser p_error() function.  Reported by
          Bart Whiteley.

02/02/09: beazley
          The lex() command now does all error-reporting and diagnostics
          using the logging module interface.   Pass in a Logger object
          using the errorlog parameter to specify a different logger.

02/02/09: beazley
          Refactored ply.lex to use a more object-oriented and organized
          approach to collecting lexer information.

02/01/09: beazley
          Removed the nowarn option from lex().  All output is controlled
          by passing in a logger object.   Just pass in a logger with a high
          level setting to suppress output.   This argument was never
          documented to begin with so hopefully no one was relying upon it.

02/01/09: beazley
          Discovered and removed a dead if-statement in the lexer.  This
          resulted in a 6-7% speedup in lexing when I tested it.

01/13/09: beazley
          Minor change to the procedure for signalling a syntax error in a
          production rule.  A normal SyntaxError exception should be raised
          instead of yacc.SyntaxError.

01/13/09: beazley
          Added a new method p.set_lineno(n,lineno) that can be used to set the
          line number of symbol n in grammar rules.   This simplifies manual
          tracking of line numbers.

01/11/09: beazley
          Vastly improved debugging support for yacc.parse().   Instead of passing
          debug as an integer, you can supply a Logging object (see the logging
          module). Messages will be generated at the ERROR, INFO, and DEBUG
          logging levels, each level providing progressively more information.
          The debugging trace also shows states, grammar rule, values passed
          into grammar rules, and the result of each reduction.

01/09/09: beazley
          The yacc() command now does all error-reporting and diagnostics using
          the interface of the logging module.  Use the errorlog parameter to
          specify a logging object for error messages.  Use the debuglog parameter
          to specify a logging object for the 'parser.out' output.

01/09/09: beazley
          *HUGE* refactoring of the ply.yacc() implementation.   The high-level
          user interface is backwards compatible, but the internals are completely
          reorganized into classes.  No more global variables.    The internals
          are also more extensible.  For example, you can use the classes to
          construct a LALR(1) parser in an entirely different manner than
          what is currently the case.  Documentation is forthcoming.

01/07/09: beazley
          Various cleanup and refactoring of yacc internals.

01/06/09: beazley
          Fixed a bug with precedence assignment.  yacc was assigning the precedence
          of each rule based on the left-most token, when in fact, it should have been
          using the right-most token.  Reported by Bruce Frederiksen.

11/27/08: beazley
          Numerous changes to support Python 3.0 including removal of deprecated
          statements (e.g., has_key) and the addition of compatibility code
          to emulate features from Python 2 that have been removed, but which
          are needed.   Fixed the unit testing suite to work with Python 3.0.
          The code should be backwards compatible with Python 2.

11/26/08: beazley
          Loosened the rules on what kind of objects can be passed in as the
          "module" parameter to lex() and yacc().  Previously, you could only use
          a module or an instance.  Now, PLY just uses dir() to get a list of
          symbols on whatever the object is without regard for its type.

11/26/08: beazley
          Changed all except: statements to be compatible with Python2.x/3.x syntax.

11/26/08: beazley
          Changed all raise Exception, value statements to raise Exception(value) for
          forward compatibility.

11/26/08: beazley
          Removed all print statements from lex and yacc, using sys.stdout and sys.stderr
          directly.  Preparation for Python 3.0 support.

11/04/08: beazley
          Fixed a bug with referring to symbols on the parsing stack using negative
          indices.

05/29/08: beazley
          Completely revamped the testing system to use the unittest module for everything.
          Added additional tests to cover new errors/warnings.

Version 2.5
05/28/08: beazley
          Fixed a bug with writing lex-tables in optimized mode and start states.
          Reported by Kevin Henry.

Version 2.4
05/04/08: beazley
          A version number is now embedded in the table file signature so that
          yacc can more gracefully accommodate changes to the output format
          in the future.

05/04/08: beazley
          Removed undocumented .pushback() method on grammar productions.  I'm
          not sure this ever worked and can't recall ever using it.  Might have
          been an abandoned idea that never really got fleshed out.  This
          feature was never described or tested so removing it is hopefully

05/04/08: beazley
          Added extra error checking to yacc() to detect precedence rules defined
          for undefined terminal symbols.   This allows yacc() to detect a potential
          problem that can be really tricky to debug if no warning message or error
          message is generated about it.

05/04/08: beazley
          lex() now has an outputdir that can specify the output directory for
          tables when running in optimize mode.  For example:

             lexer = lex.lex(optimize=True, lextab="ltab", outputdir="foo/bar")

          The behavior of specifying a table module and output directory are
          more aligned with the behavior of yacc().

05/04/08: beazley
          [Issue 9]
          Fixed filename bug in when specifying the modulename in lex() and yacc().
          If you specified options such as the following:

             parser = yacc.yacc(tabmodule="",outputdir="foo/bar")

          yacc would create a file "" in the given directory.
          Now, it simply generates a file "" in that directory.
          Bug reported by cptbinho.

05/04/08: beazley
          Slight modification to lex() and yacc() to allow their table files
          to be loaded from a previously loaded module.   This might make
          it easier to load the parsing tables from a complicated package
          structure.  For example:

	       import as parsetab
               parser = yacc.yacc(tabmodule=parsetab)

          Note:  lex and yacc will never regenerate the table file if used
          in the form---you will get a warning message instead.
          This idea suggested by Brian Clapper.

04/28/08: beazley
          Fixed a bug with p_error() functions being picked up correctly
          when running in yacc(optimize=1) mode.  Patch contributed by
          Bart Whiteley.

02/28/08: beazley
          Fixed a bug with 'nonassoc' precedence rules.   Basically the
          non-precedence was being ignored and not producing the correct
          run-time behavior in the parser.

02/16/08: beazley
          Slight relaxation of what the input() method to a lexer will
          accept as a string.   Instead of testing the input to see
          if the input is a string or unicode string, it checks to see
          if the input object looks like it contains string data.
          This change makes it possible to pass string-like objects
          in as input.  For example, the object returned by mmap.

              import mmap, os
              data = mmap.mmap(,os.O_RDONLY),
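          Since the mmap call quoted above is truncated, here is a complete,
          hedged sketch of the idea with the stdlib alone; the file path and
          contents are illustrative, and on Python 3 the mapped object yields
          bytes rather than str:

          ```python
          import mmap
          import os
          import tempfile

          # Write sample input to a temporary file so there is something to map.
          fd, path = tempfile.mkstemp()
          with os.fdopen(fd, "wb") as f:
              f.write(b"x = 1 + 2")

          # Map the file read-only.  The mmap object supports indexing and
          # slicing, which is the "string-like" behavior the relaxed input()
          # check described above accepts.
          with open(path, "rb") as f:
              data = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
              first_char = data[0:1]
              length = len(data)
              data.close()
          os.unlink(path)
          assert first_char == b"x"
          assert length == 9
          ```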

11/29/07: beazley
          Modification of ply.lex to allow token functions to be aliased.
          This is subtle, but it makes it easier to create libraries and
          to reuse token specifications.  For example, suppose you defined
          a function like this:

               def number(t):
                    t.value = int(t.value)
                    return t

          This change would allow you to define a token rule as follows:

              t_NUMBER = number

          In this case, the token type will be set to 'NUMBER' and use
          the associated number() function to process tokens.

11/28/07: beazley
          Slight modification to lex and yacc to grab symbols from both
          the local and global dictionaries of the caller.   This
          modification allows lexers and parsers to be defined using
          inner functions and closures.

11/28/07: beazley
          Performance optimization:  The lexer.lexmatch and t.lexer
          attributes are no longer set for lexer tokens that are not
          defined by functions.   The only normal use of these attributes
          would be in lexer rules that need to perform some kind of
          special processing.  Thus, it doesn't make any sense to set
          them on every token.

          *** POTENTIAL INCOMPATIBILITY ***  This might break code
          that is mucking around with internal lexer state in some
          sort of magical way.

11/27/07: beazley
          Added the ability to put the parser into error-handling mode
          from within a normal production.   To do this, simply raise
          a yacc.SyntaxError exception like this:

          def p_some_production(p):
              'some_production : prod1 prod2'
              raise yacc.SyntaxError      # Signal an error

          A number of things happen after this occurs:

          - The last symbol shifted onto the symbol stack is discarded
            and parser state backed up to what it was before the
            rule reduction.

          - The current lookahead symbol is saved and replaced by
            the 'error' symbol.

          - The parser enters error recovery mode where it tries
            to either reduce the 'error' rule or it starts
            discarding items off of the stack until the parser

          When an error is manually set, the parser does *not* call
          the p_error() function (if any is defined).
          *** NEW FEATURE *** Suggested on the mailing list

11/27/07: beazley
          Fixed structure bug in examples/ansic.  Reported by Dion Blazakis.

11/27/07: beazley
          Fixed a bug in the lexer related to start conditions and ignored
          token rules.  If a rule was defined that changed state, but
          returned no token, the lexer could be left in an inconsistent
          state.  Reported by

11/27/07: beazley
          Modified to support Python Eggs.   Patch contributed by
          Simon Cross.

11/09/07: beazley
          Fixed a bug in error handling in yacc.  If a syntax error occurred and the
          parser rolled the entire parse stack back, the parser would be left in an
          inconsistent state that would cause it to trigger incorrect actions on
          subsequent input.  Reported by Ton Biegstraaten, Justin King, and others.

11/09/07: beazley
          Fixed a bug when passing empty input strings to yacc.parse().   This
          would result in an error message about "No input given".  Reported
          by Andrew Dalke.

Version 2.3
02/20/07: beazley
          Fixed a bug with character literals if the literal '.' appeared as the
          last symbol of a grammar rule.  Reported by Ales Smrcka.

02/19/07: beazley
          Warning messages are now redirected to stderr instead of being printed
          to standard output.

02/19/07: beazley
          Added a warning message to lex if it detects a literal backslash
          character inside the t_ignore declaration.  This is to help catch
          problems that might occur if someone accidentally defines t_ignore
          as a Python raw string.  For example:

              t_ignore = r' \t'

          The idea for this is from an email I received from David Cimimi who
          reported bizarre behavior in lexing as a result of defining t_ignore
          as a raw string by accident.

02/18/07: beazley
          Performance improvements.  Made some changes to the internal
          table organization and LR parser to improve parsing performance.

02/18/07: beazley
          Automatic tracking of line number and position information must now be
          enabled by a special flag to parse().  For example:


          In many applications, it's just not that important to have the
          parser automatically track all line numbers.  By making this an
          optional feature, it allows the parser to run significantly faster
          (more than a 20% speed increase in many cases).    Note: positional
          information is always available for raw tokens---this change only
          applies to positional information associated with nonterminal
          grammar symbols.

02/18/07: beazley
          Yacc no longer supports extended slices of grammar productions.
          However, it does support regular slices.  For example:

          def p_foo(p):
              '''foo: a b c d e'''
              p[0] = p[1:3]

          This change is a performance improvement to the parser--it streamlines
          normal access to the grammar values since slices are now handled in
          a __getslice__() method as opposed to __getitem__().
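          The slicing behavior can be illustrated without PLY.  This is a toy
          stand-in class, not PLY's YaccProduction; note that on modern Python
          the __getslice__ hook is gone and __getitem__ receives slice objects
          directly:

          ```python
          # Minimal stand-in for a production object: grammar symbol values
          # are kept in a list and exposed through __getitem__, which handles
          # both integer indices and slices.
          class Production:
              def __init__(self, values):
                  self._values = values

              def __getitem__(self, index):
                  # index may be an int or a slice object
                  return self._values[index]

          p = Production(["foo", "a", "b", "c", "d", "e"])  # p[0] is the result slot
          assert p[1] == "a"
          assert p[1:3] == ["a", "b"]
          ```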

02/12/07: beazley
          Fixed a bug in the handling of token names when combined with
          start conditions.   Bug reported by Todd O'Bryan.

Version 2.2
11/01/06: beazley
          Added lexpos() and lexspan() methods to grammar symbols.  These
          mirror the same functionality of lineno() and linespan().  For
          example:
          def p_expr(p):
              'expr : expr PLUS expr'
               p.lexpos(1)     # Lexing position of left-hand-expression
               p.lexpos(2)     # Lexing position of PLUS
               start,end = p.lexspan(3)  # Lexing range of right hand expression

11/01/06: beazley
          Minor change to error handling.  The recommended way to skip characters
          in the input is to use t.lexer.skip() as shown here:

             def t_error(t):
                 print "Illegal character '%s'" % t.value[0]
                 t.lexer.skip(1)

          The old approach of just using t.skip(1) will still work, but won't
          be documented.

10/31/06: beazley
          Discarded tokens can now be specified as simple strings instead of
          functions.  To do this, simply include the text "ignore_" in the
          token declaration.  For example:

              t_ignore_cppcomment = r'//.*'

          Previously, this had to be done with a function.  For example:

              def t_ignore_cppcomment(t):
                  r'//.*'

          If start conditions/states are being used, state names should appear
          before the "ignore_" text.

10/19/06: beazley
          The Lex module now provides support for flex-style start conditions
          as described at
          Please refer to this document to understand this change note.  Refer to
          the PLY documentation for PLY-specific explanation of how this works.

          To use start conditions, you first need to declare a set of states in
          your lexer file:

          states = (
             ('foo','exclusive'),
             ('bar','inclusive'),
          )

          This serves the same role as the %s and %x specifiers in flex.

          Once a state has been declared, tokens for that state can be
          declared by defining rules of the form t_state_TOK.  For example:

            t_PLUS = r'\+'          # Rule defined in INITIAL state
            t_foo_NUM = r'\d+'      # Rule defined in foo state
            t_bar_NUM = r'\d+'      # Rule defined in bar state

            t_foo_bar_NUM = r'\d+'  # Rule defined in both foo and bar
            t_ANY_NUM = r'\d+'      # Rule defined in all states

          In addition to defining tokens for each state, the t_ignore and t_error
          specifications can be customized for specific states.  For example:

            t_foo_ignore = " "     # Ignored characters for foo state
            def t_bar_error(t):
                # Handle errors in bar state

          With token rules, the following methods can be used to change states:

            def t_TOKNAME(t):
                t.lexer.begin('foo')        # Begin state 'foo'
                t.lexer.push_state('foo')   # Begin state 'foo', push old state
                                            # onto a stack
                t.lexer.pop_state()         # Restore previous state
                t.lexer.current_state()     # Returns name of current state

          These methods mirror the BEGIN(), yy_push_state(), yy_pop_state(), and
          yy_top_state() functions in flex.

          The use of start states can be used as one way to write sub-lexers.
          For example, the lexer or parser might instruct the lexer to start
          generating a different set of tokens depending on the context.

          example/yply/ shows the use of start states to grab C/C++
          code fragments out of traditional yacc specification files.

          *** NEW FEATURE *** Suggested by Daniel Larraz with whom I also
          discussed various aspects of the design.

10/19/06: beazley
          Minor change to the way in which yacc was reporting shift/reduce
          conflicts.  Although the underlying LALR(1) algorithm was correct,
          PLY was under-reporting the number of conflicts compared to yacc/bison
          when precedence rules were in effect.  This change should make PLY
          report the same number of conflicts as yacc.

10/19/06: beazley
          Modified yacc so that grammar rules could also include the '-'
          character.  For example:

            def p_expr_list(p):
                'expression-list : expression-list expression'

          Suggested by Oldrich Jedlicka.

10/18/06: beazley
          Attribute lexer.lexmatch added so that token rules can access the re
          match object that was generated.  For example:

          def t_FOO(t):
              r'some regex'
              m = t.lexer.lexmatch
              # Do something with m

          This may be useful if you want to access named groups specified within
          the regex for a specific token. Suggested by Oldrich Jedlicka.
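          Since lexer.lexmatch is just the re match object, the named-group
          access the entry describes works exactly as it does with the stdlib
          re module.  The pattern and group names below are illustrative:

          ```python
          import re

          # A token-style pattern with named groups; lexmatch would expose a
          # match object like 'm' for the rule that fired.
          pattern = re.compile(r"(?P<key>\w+)=(?P<value>\w+)")
          m = pattern.match("answer=42")
          assert m.group("key") == "answer"
          assert m.group("value") == "42"
          ```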

10/16/06: beazley
          Changed the error message that results if an illegal character
          is encountered and no default error function is defined in lex.
          The exception is now more informative about the actual cause of
          the error.

Version 2.1
10/02/06: beazley
          The last Lexer object built by lex() can be found in lex.lexer.
          The last Parser object built  by yacc() can be found in yacc.parser.

10/02/06: beazley
          New example added:  examples/yply

          This example uses PLY to convert Unix-yacc specification files to
          PLY programs with the same grammar.   This may be useful if you
          want to convert a grammar from bison/yacc to use with PLY.

10/02/06: beazley
          Added support for a start symbol to be specified in the yacc
          input file itself.  Just do this:

               start = 'name'

          where 'name' matches some grammar rule.  For example:

               def p_name(p):
                   'name : A B C'

          This mirrors the functionality of the yacc %start specifier.

09/30/06: beazley
          Some new examples added:

          examples/GardenSnake : A simple indentation based language similar
                                 to Python.  Shows how you might handle
                                 whitespace.  Contributed by Andrew Dalke.

          examples/BASIC       : An implementation of 1964 Dartmouth BASIC.
                                 Contributed by Dave against his better

09/28/06: beazley
          Minor patch to allow named groups to be used in lex regular
          expression rules.  For example:

              t_QSTRING = r'''(?P<quote>['"]).*?(?P=quote)'''

          Patch submitted by Adam Ring.
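          The t_QSTRING pattern above can be exercised directly with the
          stdlib re module; the (?P=quote) backreference insists the string
          closes with the same quote character that opened it:

          ```python
          import re

          # Named group 'quote' captures the opening quote; the backreference
          # (?P=quote) requires the same character to close the string.
          t_QSTRING = r'''(?P<quote>['"]).*?(?P=quote)'''

          assert re.match(t_QSTRING, '"hello"').group(0) == '"hello"'
          assert re.match(t_QSTRING, "'hi'").group(0) == "'hi'"
          assert re.match(t_QSTRING, "'mismatched\"") is None
          ```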

09/28/06: beazley
          LALR(1) is now the default parsing method.   To use SLR, use
          yacc.yacc(method="SLR").  Note: there is no performance impact
          on parsing when using LALR(1) instead of SLR. However, constructing
          the parsing tables will take a little longer.

09/26/06: beazley
          Change to line number tracking.  To modify line numbers, modify
          the line number of the lexer itself.  For example:

          def t_NEWLINE(t):
              r'\n+'
              t.lexer.lineno += 1

          This modification is both cleanup and a performance optimization.
          In past versions, lex was monitoring every token for changes in
          the line number.  This extra processing is unnecessary for a vast
          majority of tokens. Thus, this new approach cleans it up a bit.

          You will need to change code in your lexer that updates the line
          number. For example, "t.lineno += 1" becomes "t.lexer.lineno += 1"

09/26/06: beazley
          Added the lexing position to tokens as an attribute lexpos. This
          is the raw index into the input text at which a token appears.
          This information can be used to compute column numbers and other
          details (e.g., scan backwards from lexpos to the first newline
          to get a column position).
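          The column computation suggested in parentheses can be sketched as a
          small helper; 'data' and 'lexpos' here are plain illustrative values,
          not PLY objects:

          ```python
          # Scan backwards from lexpos to the previous newline to recover a
          # 1-based column number for a token.
          def find_column(data, lexpos):
              line_start = data.rfind("\n", 0, lexpos) + 1
              return (lexpos - line_start) + 1

          data = "a = 1\nbb = 22\n"
          assert find_column(data, 0) == 1    # 'a' starts line 1, column 1
          assert find_column(data, 6) == 1    # 'bb' starts line 2, column 1
          assert find_column(data, 11) == 6   # '22' sits at column 6
          ```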

09/25/06: beazley
          Changed the name of the __copy__() method on the Lexer class
          to clone().  This is used to clone a Lexer object (e.g., if
          you're running different lexers at the same time).

09/21/06: beazley
          Limitations related to the use of the re module have been eliminated.
          Several users reported problems with regular expressions exceeding
          more than 100 named groups. To solve this, the lexer is now capable
          of automatically splitting its master regular expression into
          smaller expressions as needed.   This should, in theory, make it
          possible to specify an arbitrarily large number of tokens.

09/21/06: beazley
          Improved error checking in the lexer.  Rules that match the empty string
          are now rejected (otherwise they cause the lexer to enter an infinite
          loop).  An extra check for rules containing '#' has also been added.
          Since lex compiles regular expressions in verbose mode, '#' is interpreted
          as a regex comment, so it is critical to use '\#' instead.
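          The '#' pitfall is easy to demonstrate with re.VERBOSE directly;
          the patterns below are illustrative:

          ```python
          import re

          # In verbose mode an unescaped '#' starts a comment, so everything
          # after it in the pattern is ignored; r'\#' keeps a literal hash.
          commented = re.compile(r"a# the rest of this pattern is a comment", re.VERBOSE)
          escaped = re.compile(r"a\#", re.VERBOSE)

          assert commented.match("a#").group(0) == "a"   # '#' was never in the pattern
          assert escaped.match("a#").group(0) == "a#"    # literal '#' matched
          ```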

09/18/06: beazley
          Added a @TOKEN decorator function to that can be used to
          define token rules where the documentation string might be computed
          in some way.

          digit            = r'([0-9])'
          nondigit         = r'([_A-Za-z])'
          identifier       = r'(' + nondigit + r'(' + digit + r'|' + nondigit + r')*)'

          from ply.lex import TOKEN

          @TOKEN(identifier)
          def t_ID(t):
               # Do whatever

          The @TOKEN decorator merely sets the documentation string of the
          associated token function as needed for lex to work.

          Note: An alternative solution is the following:

          def t_ID(t):
              # Do whatever

          t_ID.__doc__ = identifier

          Note: Decorators require the use of Python 2.4 or later.  If compatibility
          with old versions is needed, use the latter solution.

          The need for this feature was suggested by Cem Karan.
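          The composed identifier pattern from the entry above can be checked
          with the stdlib re module on its own, independent of lex:

          ```python
          import re

          # The same building blocks as in the entry above.
          digit      = r'([0-9])'
          nondigit   = r'([_A-Za-z])'
          identifier = r'(' + nondigit + r'(' + digit + r'|' + nondigit + r')*)'

          assert re.fullmatch(identifier, "_foo42") is not None
          assert re.fullmatch(identifier, "9lives") is None  # may not start with a digit
          ```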

09/14/06: beazley
          Support for single-character literal tokens has been added to yacc.
          These literals must be enclosed in quotes.  For example:

          def p_expr(p):
               "expr : expr '+' expr"

          def p_expr(p):
               'expr : expr "-" expr'

          In addition to this, it is necessary to tell the lexer module about
          literal characters.   This is done by defining the variable 'literals'
          as a list of characters.  This should  be defined in the module that
          invokes the lex.lex() function.  For example:

             literals = ['+','-','*','/','(',')','=']

          or simply

             literals = '+=*/()='

          It is important to note that literals can only be a single character.
          When the lexer fails to match a token using its normal regular expression
          rules, it will check the current character against the literal list.
          If found, it will be returned with a token type set to match the literal
          character.  Otherwise, an illegal character will be signalled.
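
          The fallback described above can be sketched outside PLY with plain
          re (hypothetical names and a toy rule table; the real lexer's
          internals differ):

          ```python
          import re

          literals = '+-*/()='
          rules = [('NUMBER', re.compile(r'\d+')),
                   ('ID', re.compile(r'[A-Za-z_]\w*'))]

          def tokenize(text):
              pos, out = 0, []
              while pos < len(text):
                  ch = text[pos]
                  if ch.isspace():
                      pos += 1
                      continue
                  for name, pat in rules:
                      m = pat.match(text, pos)
                      if m:
                          out.append((name, m.group()))
                          pos = m.end()
                          break
                  else:
                      if ch in literals:
                          # A literal comes back with the character itself
                          # as the token type.
                          out.append((ch, ch))
                          pos += 1
                      else:
                          raise SyntaxError('illegal character %r' % ch)
              return out

          print(tokenize('x = 3 + 4'))
          # -> [('ID', 'x'), ('=', '='), ('NUMBER', '3'), ('+', '+'), ('NUMBER', '4')]
          ```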

09/14/06: beazley
          Modified PLY to install itself as a proper Python package called 'ply'.
          This will make it a little more friendly to other modules.  This
          changes the usage of PLY only slightly.  Just do this to import the modules:

                import ply.lex as lex
                import ply.yacc as yacc

          Alternatively, you can do this:

                from ply import *

          which imports both the lex and yacc modules.
          Change suggested by Lee June.

09/13/06: beazley
          Changed the handling of negative indices when used in production rules.
          A negative production index now accesses already parsed symbols on the
          parsing stack.  For example,

              def p_foo(p):
                   "foo: A B C D"
                   print p[1]       # Value of 'A' symbol
                   print p[2]       # Value of 'B' symbol
                   print p[-1]      # Value of whatever symbol appears before A
                                    # on the parsing stack.

                    p[0] = some_val  # Sets the value of the 'foo' grammar symbol

          This behavior makes it easier to work with embedded actions within the
          parsing rules. For example, in C-yacc, it is possible to write code like

               bar:   A { printf("seen an A = %d\n", $1); } B { do_stuff; }

          In this example, the printf() code executes immediately after A has been
          parsed.  Within the embedded action code, $1 refers to the A symbol on
          the stack.

          To perform this equivalent action in PLY, you need to write a pair
          of rules like this:

               def p_bar(p):
                     "bar : A seen_A B"

               def p_seen_A(p):
                     "seen_A :"
                     print "seen an A =", p[-1]

          The second rule "seen_A" is merely an empty production which should be
          reduced as soon as A is parsed in the "bar" rule above.  The
          negative index p[-1] is used to access whatever symbol appeared
          before the seen_A symbol.

          This feature also makes it possible to support inherited attributes.
          For example:

               def p_decl(p):
                     "decl : scope name"

               def p_scope(p):
                     """scope : GLOBAL
                              | LOCAL"""
                      p[0] = p[1]

               def p_name(p):
                     "name : ID"
                     if p[-1] == "GLOBAL":
                          # ...
                      elif p[-1] == "LOCAL":
                           # ...

          In this case, the name rule is inheriting an attribute from the
          scope declaration that precedes it.

          If you are currently using negative indices within existing grammar rules,
          your code will break.  This should be extremely rare, if not non-existent,
          in most cases.  The argument to various grammar rules is not usually
          processed in the same way as a list of items.
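
          The index arithmetic can be illustrated with a small stand-in for
          the production object (assumed names; PLY's real production class
          differs in detail):

          ```python
          class Prod:
              """Toy stand-in mapping rule indices onto a parser's value stack."""
              def __init__(self, stack, nrhs):
                  self.stack = stack   # values of all symbols parsed so far
                  self.nrhs = nrhs     # number of right-hand-side symbols in the rule

              def __getitem__(self, n):
                  if n >= 1:
                      # p[1]..p[nrhs] are the top nrhs entries of the stack
                      return self.stack[len(self.stack) - self.nrhs + n - 1]
                  # Negative indices reach below the current rule's symbols.
                  return self.stack[len(self.stack) - self.nrhs + n]

          # Reducing the empty rule "seen_A :" right after A was shifted:
          p = Prod(['value_of_A'], 0)
          assert p[-1] == 'value_of_A'

          # Reducing "x : B C" with an earlier symbol still on the stack:
          p = Prod(['valA', 'valB', 'valC'], 2)
          assert p[1] == 'valB' and p[2] == 'valC' and p[-1] == 'valA'
          ```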

Version 2.0
09/07/06: beazley
          Major cleanup and refactoring of the LR table generation code.  Both SLR
          and LALR(1) table generation is now performed by the same code base with
          only minor extensions for extra LALR(1) processing.

09/07/06: beazley
          Completely reimplemented the entire LALR(1) parsing engine to use the
          DeRemer and Pennello algorithm for calculating lookahead sets.  This
          significantly improves the performance of generating LALR(1) tables
          and has the added feature of actually working correctly!  If you
          experienced weird behavior with LALR(1) in prior releases, this should
          hopefully resolve all of those problems.  Many thanks to
          Andrew Waters and Markus Schoepflin for submitting bug reports
          and helping me test out the revised LALR(1) support.

Version 1.8
08/02/06: beazley
          Fixed a problem related to the handling of default actions in LALR(1)
          parsing.  If you experienced subtle and/or bizarre behavior when trying
          to use the LALR(1) engine, this may correct those problems.  Patch
          contributed by Russ Cox.  Note: This patch has been superseded by
          revisions for LALR(1) parsing in Ply-2.0.

08/02/06: beazley
          Added support for slicing of productions in yacc.
          Patch contributed by Patrick Mezard.

Version 1.7
03/02/06: beazley
          Fixed an infinite recursion problem in the ReduceToTerminals() function that
          would sometimes come up in LALR(1) table generation.  Reported by
          Markus Schoepflin.

03/01/06: beazley
          Added "reflags" argument to lex().  For example:


          This can be used to specify optional flags to the re.compile() function
          used inside the lexer.   This may be necessary for special situations such
          as processing Unicode (e.g., if you want escapes like \w and \b to consult
          the Unicode character property database).   The need for this was
          suggested by Andreas Jung.
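
          The effect of passing re.UNICODE through to re.compile() can be seen
          with the standard library directly (under Python 3 this behavior is
          already the default for str patterns):

          ```python
          import re

          # With re.UNICODE, \w consults the Unicode character property
          # database, so accented characters count as word characters.
          assert re.match(r'\w+', 'héllo', re.UNICODE).group() == 'héllo'
          ```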

03/01/06: beazley
          Fixed a bug with an uninitialized variable on repeated instantiations of parser
          objects when the write_tables=0 argument was used.   Reported by Michael Brown.

03/01/06: beazley
          Modified to accept Unicode strings both as the regular expressions for
          tokens and as input. Hopefully this is the only change needed for Unicode support.
          Patch contributed by Johan Dahl.

03/01/06: beazley
          Modified the class-based interface to work with new-style or old-style classes.
          Patch contributed by Michael Brown (although I tweaked it slightly so it would work
          with older versions of Python).

Version 1.6
05/27/05: beazley
          Incorporated patch contributed by Christopher Stawarz to fix an extremely
          devious bug in LALR(1) parser generation.   This patch should fix problems
          numerous people reported with LALR parsing.

05/27/05: beazley
          Fixed problem with copy constructor.  Reported by Dave Aitel, Aaron Lav,
          and Thad Austin.

05/27/05: beazley
          Added outputdir option to yacc()  to control output directory. Contributed
          by Christopher Stawarz.

05/27/05: beazley
          Added test script to run tests using the Python unittest module.
          Contributed by Miki Tebeka.

Version 1.5
05/26/04: beazley
          Major enhancement. LALR(1) parsing support is now working.
          This feature was implemented by Elias Ioup and optimized by
          David Beazley.  To use LALR(1) parsing, do the following:

               yacc.yacc(method="LALR")
          Computing LALR(1) parsing tables takes about twice as long as
          the default SLR method.  However, LALR(1) allows you to handle
          more complex grammars.  For example, the ANSI C grammar
          (in example/ansic) has 13 shift-reduce conflicts with SLR, but
          only has 1 shift-reduce conflict with LALR(1).

05/20/04: beazley
          Added a __len__ method to parser production lists.  Can
          be used in parser rules like this:

             def p_somerule(p):
                 """a : B C D
                      | E F"
                 if (len(p) == 3):
                     # Must have been first rule
                 elif (len(p) == 2):
                     # Must be second rule

          Suggested by Joshua Gerth and others.

Revision 1.12 / (download) - annotate - [select for diffs], Wed Feb 10 19:17:36 2010 UTC (14 years ago) by joerg
Branch: MAIN
CVS Tags: pkgsrc-2010Q2-base, pkgsrc-2010Q2, pkgsrc-2010Q1-base, pkgsrc-2010Q1
Changes since 1.11: +2 -2 lines
Diff to previous 1.11 (colored)

Bump revision for PYTHON_VERSION_DEFAULT change.

Revision 1.11 / (download) - annotate - [select for diffs], Fri Jun 12 21:47:32 2009 UTC (14 years, 8 months ago) by zafer
Branch: MAIN
CVS Tags: pkgsrc-2009Q4-base, pkgsrc-2009Q4, pkgsrc-2009Q3-base, pkgsrc-2009Q3, pkgsrc-2009Q2-base, pkgsrc-2009Q2
Changes since 1.10: +3 -3 lines
Diff to previous 1.10 (colored)

update homepage and master sites.

Revision 1.10 / (download) - annotate - [select for diffs], Mon Feb 9 22:56:23 2009 UTC (15 years ago) by joerg
Branch: MAIN
CVS Tags: pkgsrc-2009Q1-base, pkgsrc-2009Q1
Changes since 1.9: +2 -2 lines
Diff to previous 1.9 (colored)

Switch to Python 2.5 as default. Bump revision of all packages that have
changed runtime dependencies now.

Revision 1.9 / (download) - annotate - [select for diffs], Thu Jun 12 02:14:28 2008 UTC (15 years, 8 months ago) by joerg
Branch: MAIN
CVS Tags: pkgsrc-2008Q4-base, pkgsrc-2008Q4, pkgsrc-2008Q3-base, pkgsrc-2008Q3, pkgsrc-2008Q2-base, pkgsrc-2008Q2, cwrapper, cube-native-xorg-base, cube-native-xorg
Changes since 1.8: +7 -6 lines
Diff to previous 1.8 (colored)

Add DESTDIR support.

Revision 1.8 / (download) - annotate - [select for diffs], Mon May 26 02:13:17 2008 UTC (15 years, 9 months ago) by joerg
Branch: MAIN
Changes since 1.7: +4 -2 lines
Diff to previous 1.7 (colored)

Second round of explicit pax dependencies. As reminded by tnn@,
many packages used to use ${PAX}. Use the common way of directly calling
pax, it is created as tool after all.

Revision 1.7 / (download) - annotate - [select for diffs], Sat Jun 3 00:13:07 2006 UTC (17 years, 9 months ago) by joerg
Branch: MAIN
CVS Tags: pkgsrc-2008Q1-base, pkgsrc-2008Q1, pkgsrc-2007Q4-base, pkgsrc-2007Q4, pkgsrc-2007Q3-base, pkgsrc-2007Q3, pkgsrc-2007Q2-base, pkgsrc-2007Q2, pkgsrc-2007Q1-base, pkgsrc-2007Q1, pkgsrc-2006Q4-base, pkgsrc-2006Q4, pkgsrc-2006Q3-base, pkgsrc-2006Q3, pkgsrc-2006Q2-base, pkgsrc-2006Q2
Changes since 1.6: +1 -3 lines
Diff to previous 1.6 (colored)

No need to mark 1.5 as incompatible, it isn't active by default anyway.

Revision 1.6 / (download) - annotate - [select for diffs], Sun Feb 5 23:08:50 2006 UTC (18 years, 1 month ago) by joerg
Branch: MAIN
CVS Tags: pkgsrc-2006Q1-base, pkgsrc-2006Q1
Changes since 1.5: +2 -1 lines
Diff to previous 1.5 (colored)

Recursive revision bump / recommended bump for gettext ABI change.

Revision 1.5 / (download) - annotate - [select for diffs], Thu Dec 1 03:22:16 2005 UTC (18 years, 3 months ago) by rillig
Branch: MAIN
CVS Tags: pkgsrc-2005Q4-base, pkgsrc-2005Q4
Changes since 1.4: +2 -2 lines
Diff to previous 1.4 (colored)

Fixed a pkglint warning:
- WARN: devel/ply/Makefile:22: Found absolute pathname: /${EGDIR}

As ${EGDIR} is already an absolute pathname, there's no need to prefix it
with a slash.

Revision 1.4 / (download) - annotate - [select for diffs], Mon Apr 11 21:45:36 2005 UTC (18 years, 10 months ago) by tv
Branch: MAIN
CVS Tags: pkgsrc-2005Q3-base, pkgsrc-2005Q3, pkgsrc-2005Q2-base, pkgsrc-2005Q2
Changes since 1.3: +1 -2 lines
Diff to previous 1.3 (colored)

Remove USE_BUILDLINK3 and NO_BUILDLINK; these are no longer used.

Revision 1.3 / (download) - annotate - [select for diffs], Thu Mar 24 21:12:53 2005 UTC (18 years, 11 months ago) by wiz
Branch: MAIN
Changes since 1.2: +1 -2 lines
Diff to previous 1.2 (colored)

Remove FreeBSD RCS Ids. pkgsrc has diverged too much for syncing to be useful.

Revision 1.2 / (download) - annotate - [select for diffs], Mon Jul 19 19:57:42 2004 UTC (19 years, 7 months ago) by recht
Branch: MAIN
CVS Tags: pkgsrc-2005Q1-base, pkgsrc-2005Q1, pkgsrc-2004Q4-base, pkgsrc-2004Q4, pkgsrc-2004Q3-base, pkgsrc-2004Q3
Changes since 1.1: +2 -2 lines
Diff to previous 1.1 (colored)

Set package creator as maintainer. Requested in private mail.

Revision / (download) - annotate - [select for diffs] (vendor branch), Fri Jul 16 15:36:50 2004 UTC (19 years, 7 months ago) by recht
Branch: TNF
CVS Tags: pkgsrc-base
Changes since 1.1: +0 -0 lines
Diff to previous 1.1 (colored)

initial import of ply-1.5

The 1.4 version of the package was provided by NONAKA Kimihiro
in PR 26344.  (And seems to be based upon the FreeBSD port.)

Cleaned up and updated to 1.5 by me.

PLY is a Python-only implementation of the popular compiler construction
tools lex and yacc. The implementation borrows ideas from a number of
previous efforts; most notably John Aycock's SPARK toolkit. However, the
overall flavor of the implementation is more closely modeled after the C
version of lex and yacc. The other significant feature of PLY is that it
provides extensive input validation and error reporting--much more so than
other Python parsing tools.

Revision 1.1 / (download) - annotate - [select for diffs], Fri Jul 16 15:36:50 2004 UTC (19 years, 7 months ago) by recht
Branch: MAIN

Initial revision
