| author | robot-piglet <robot-piglet@yandex-team.com> | 2023-12-04 15:32:14 +0300 |
| --- | --- | --- |
| committer | robot-piglet <robot-piglet@yandex-team.com> | 2023-12-05 01:22:50 +0300 |
| commit | c21ed9eedf73010bc81342518177dfdfb0d56bd7 (patch) | |
| tree | 72f8fde4463080cfe5a38eb0babc051cfe32c51e /contrib/python/coverage/py2 | |
| parent | ec1311bf2e8cc231723b8b5e484ca576663a1309 (diff) | |
| download | ydb-c21ed9eedf73010bc81342518177dfdfb0d56bd7.tar.gz | |
Intermediate changes
Diffstat (limited to 'contrib/python/coverage/py2')
66 files changed, 0 insertions, 16619 deletions
diff --git a/contrib/python/coverage/py2/.dist-info/METADATA b/contrib/python/coverage/py2/.dist-info/METADATA deleted file mode 100644 index 25a6049c45d..00000000000 --- a/contrib/python/coverage/py2/.dist-info/METADATA +++ /dev/null @@ -1,190 +0,0 @@ -Metadata-Version: 2.1 -Name: coverage -Version: 5.5 -Summary: Code coverage measurement for Python -Home-page: https://github.com/nedbat/coveragepy -Author: Ned Batchelder and 142 others -Author-email: ned@nedbatchelder.com -License: Apache 2.0 -Project-URL: Documentation, https://coverage.readthedocs.io -Project-URL: Funding, https://tidelift.com/subscription/pkg/pypi-coverage?utm_source=pypi-coverage&utm_medium=referral&utm_campaign=pypi -Project-URL: Issues, https://github.com/nedbat/coveragepy/issues -Keywords: code coverage testing -Platform: UNKNOWN -Classifier: Environment :: Console -Classifier: Intended Audience :: Developers -Classifier: License :: OSI Approved :: Apache Software License -Classifier: Operating System :: OS Independent -Classifier: Programming Language :: Python -Classifier: Programming Language :: Python :: 2 -Classifier: Programming Language :: Python :: 2.7 -Classifier: Programming Language :: Python :: 3 -Classifier: Programming Language :: Python :: 3.5 -Classifier: Programming Language :: Python :: 3.6 -Classifier: Programming Language :: Python :: 3.7 -Classifier: Programming Language :: Python :: 3.8 -Classifier: Programming Language :: Python :: 3.9 -Classifier: Programming Language :: Python :: 3.10 -Classifier: Programming Language :: Python :: Implementation :: CPython -Classifier: Programming Language :: Python :: Implementation :: PyPy -Classifier: Topic :: Software Development :: Quality Assurance -Classifier: Topic :: Software Development :: Testing -Classifier: Development Status :: 5 - Production/Stable -Requires-Python: >=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, <4 -Description-Content-Type: text/x-rst -Provides-Extra: toml -Requires-Dist: toml ; extra == 'toml' 
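The `Requires-Python` specifier in the deleted metadata above (`>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, <4`) amounts to "2.7, or 3.5 and later, but not 4". As a sketch, that constraint could be checked like this; `supported` is a hypothetical helper, not part of coverage.py:

```python
import sys

def supported(version=sys.version_info):
    """Hypothetical check mirroring the Requires-Python specifier above:
    >=2.7, excluding 3.0.* through 3.4.*, <4 -- i.e. 2.7 or 3.5+."""
    major, minor = version[0], version[1]
    if major == 2:
        return minor >= 7
    if major == 3:
        return minor >= 5
    return False  # 1.x and 4.x are out of range
```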
- -.. Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 -.. For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt - -=========== -Coverage.py -=========== - -Code coverage testing for Python. - -| |license| |versions| |status| -| |test-status| |quality-status| |docs| |codecov| -| |kit| |format| |repos| |downloads| -| |stars| |forks| |contributors| -| |tidelift| |twitter-coveragepy| |twitter-nedbat| - -Coverage.py measures code coverage, typically during test execution. It uses -the code analysis tools and tracing hooks provided in the Python standard -library to determine which lines are executable, and which have been executed. - -Coverage.py runs on many versions of Python: - -* CPython 2.7. -* CPython 3.5 through 3.10 alpha. -* PyPy2 7.3.3 and PyPy3 7.3.3. - -Documentation is on `Read the Docs`_. Code repository and issue tracker are on -`GitHub`_. - -.. _Read the Docs: https://coverage.readthedocs.io/ -.. _GitHub: https://github.com/nedbat/coveragepy - - -**New in 5.x:** SQLite data storage, JSON report, contexts, relative filenames, -dropped support for Python 2.6, 3.3 and 3.4. - - -For Enterprise --------------- - -.. |tideliftlogo| image:: https://nedbatchelder.com/pix/Tidelift_Logo_small.png - :alt: Tidelift - :target: https://tidelift.com/subscription/pkg/pypi-coverage?utm_source=pypi-coverage&utm_medium=referral&utm_campaign=readme - -.. list-table:: - :widths: 10 100 - - * - |tideliftlogo| - - `Available as part of the Tidelift Subscription. <https://tidelift.com/subscription/pkg/pypi-coverage?utm_source=pypi-coverage&utm_medium=referral&utm_campaign=readme>`_ - Coverage and thousands of other packages are working with - Tidelift to deliver one enterprise subscription that covers all of the open - source you use. If you want the flexibility of open source and the confidence - of commercial-grade software, this is for you. - `Learn more. 
<https://tidelift.com/subscription/pkg/pypi-coverage?utm_source=pypi-coverage&utm_medium=referral&utm_campaign=readme>`_ - - -Getting Started ---------------- - -See the `Quick Start section`_ of the docs. - -.. _Quick Start section: https://coverage.readthedocs.io/#quick-start - - -Change history --------------- - -The complete history of changes is on the `change history page`_. - -.. _change history page: https://coverage.readthedocs.io/en/latest/changes.html - - -Contributing ------------- - -See the `Contributing section`_ of the docs. - -.. _Contributing section: https://coverage.readthedocs.io/en/latest/contributing.html - - -Security --------- - -To report a security vulnerability, please use the `Tidelift security -contact`_. Tidelift will coordinate the fix and disclosure. - -.. _Tidelift security contact: https://tidelift.com/security - - -License -------- - -Licensed under the `Apache 2.0 License`_. For details, see `NOTICE.txt`_. - -.. _Apache 2.0 License: http://www.apache.org/licenses/LICENSE-2.0 -.. _NOTICE.txt: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt - - -.. |test-status| image:: https://github.com/nedbat/coveragepy/actions/workflows/testsuite.yml/badge.svg?branch=master&event=push - :target: https://github.com/nedbat/coveragepy/actions/workflows/testsuite.yml - :alt: Test suite status -.. |quality-status| image:: https://github.com/nedbat/coveragepy/actions/workflows/quality.yml/badge.svg?branch=master&event=push - :target: https://github.com/nedbat/coveragepy/actions/workflows/quality.yml - :alt: Quality check status -.. |docs| image:: https://readthedocs.org/projects/coverage/badge/?version=latest&style=flat - :target: https://coverage.readthedocs.io/ - :alt: Documentation -.. |reqs| image:: https://requires.io/github/nedbat/coveragepy/requirements.svg?branch=master - :target: https://requires.io/github/nedbat/coveragepy/requirements/?branch=master - :alt: Requirements status -.. 
|kit| image:: https://badge.fury.io/py/coverage.svg - :target: https://pypi.org/project/coverage/ - :alt: PyPI status -.. |format| image:: https://img.shields.io/pypi/format/coverage.svg - :target: https://pypi.org/project/coverage/ - :alt: Kit format -.. |downloads| image:: https://img.shields.io/pypi/dw/coverage.svg - :target: https://pypi.org/project/coverage/ - :alt: Weekly PyPI downloads -.. |versions| image:: https://img.shields.io/pypi/pyversions/coverage.svg?logo=python&logoColor=FBE072 - :target: https://pypi.org/project/coverage/ - :alt: Python versions supported -.. |status| image:: https://img.shields.io/pypi/status/coverage.svg - :target: https://pypi.org/project/coverage/ - :alt: Package stability -.. |license| image:: https://img.shields.io/pypi/l/coverage.svg - :target: https://pypi.org/project/coverage/ - :alt: License -.. |codecov| image:: https://codecov.io/github/nedbat/coveragepy/coverage.svg?branch=master&precision=2 - :target: https://codecov.io/github/nedbat/coveragepy?branch=master - :alt: Coverage! -.. |repos| image:: https://repology.org/badge/tiny-repos/python:coverage.svg - :target: https://repology.org/metapackage/python:coverage/versions - :alt: Packaging status -.. |tidelift| image:: https://tidelift.com/badges/package/pypi/coverage - :target: https://tidelift.com/subscription/pkg/pypi-coverage?utm_source=pypi-coverage&utm_medium=referral&utm_campaign=readme - :alt: Tidelift -.. |stars| image:: https://img.shields.io/github/stars/nedbat/coveragepy.svg?logo=github - :target: https://github.com/nedbat/coveragepy/stargazers - :alt: Github stars -.. |forks| image:: https://img.shields.io/github/forks/nedbat/coveragepy.svg?logo=github - :target: https://github.com/nedbat/coveragepy/network/members - :alt: Github forks -.. |contributors| image:: https://img.shields.io/github/contributors/nedbat/coveragepy.svg?logo=github - :target: https://github.com/nedbat/coveragepy/graphs/contributors - :alt: Contributors -.. 
|twitter-coveragepy| image:: https://img.shields.io/twitter/follow/coveragepy.svg?label=coveragepy&style=flat&logo=twitter&logoColor=4FADFF - :target: https://twitter.com/coveragepy - :alt: coverage.py on Twitter -.. |twitter-nedbat| image:: https://img.shields.io/twitter/follow/nedbat.svg?label=nedbat&style=flat&logo=twitter&logoColor=4FADFF - :target: https://twitter.com/nedbat - :alt: nedbat on Twitter - - diff --git a/contrib/python/coverage/py2/.dist-info/entry_points.txt b/contrib/python/coverage/py2/.dist-info/entry_points.txt deleted file mode 100644 index cd083fc1ff6..00000000000 --- a/contrib/python/coverage/py2/.dist-info/entry_points.txt +++ /dev/null @@ -1,5 +0,0 @@ -[console_scripts] -coverage = coverage.cmdline:main -coverage-3.9 = coverage.cmdline:main -coverage3 = coverage.cmdline:main - diff --git a/contrib/python/coverage/py2/.dist-info/top_level.txt b/contrib/python/coverage/py2/.dist-info/top_level.txt deleted file mode 100644 index 4ebc8aea50e..00000000000 --- a/contrib/python/coverage/py2/.dist-info/top_level.txt +++ /dev/null @@ -1 +0,0 @@ -coverage diff --git a/contrib/python/coverage/py2/LICENSE.txt b/contrib/python/coverage/py2/LICENSE.txt deleted file mode 100644 index f433b1a53f5..00000000000 --- a/contrib/python/coverage/py2/LICENSE.txt +++ /dev/null @@ -1,177 +0,0 @@ - - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. 
For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. 
For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. 
If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents - of the NOTICE file are for informational purposes only and - do not modify the License. 
You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. 
Limitation of Liability. In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. - - END OF TERMS AND CONDITIONS diff --git a/contrib/python/coverage/py2/NOTICE.txt b/contrib/python/coverage/py2/NOTICE.txt deleted file mode 100644 index 37ded535bf3..00000000000 --- a/contrib/python/coverage/py2/NOTICE.txt +++ /dev/null @@ -1,14 +0,0 @@ -Copyright 2001 Gareth Rees. All rights reserved. -Copyright 2004-2021 Ned Batchelder. All rights reserved. - -Except where noted otherwise, this software is licensed under the Apache -License, Version 2.0 (the "License"); you may not use this work except in -compliance with the License. 
You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. diff --git a/contrib/python/coverage/py2/README.rst b/contrib/python/coverage/py2/README.rst deleted file mode 100644 index 072f30ffeb3..00000000000 --- a/contrib/python/coverage/py2/README.rst +++ /dev/null @@ -1,151 +0,0 @@ -.. Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 -.. For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt - -=========== -Coverage.py -=========== - -Code coverage testing for Python. - -| |license| |versions| |status| -| |test-status| |quality-status| |docs| |codecov| -| |kit| |format| |repos| |downloads| -| |stars| |forks| |contributors| -| |tidelift| |twitter-coveragepy| |twitter-nedbat| - -Coverage.py measures code coverage, typically during test execution. It uses -the code analysis tools and tracing hooks provided in the Python standard -library to determine which lines are executable, and which have been executed. - -Coverage.py runs on many versions of Python: - -* CPython 2.7. -* CPython 3.5 through 3.10 alpha. -* PyPy2 7.3.3 and PyPy3 7.3.3. - -Documentation is on `Read the Docs`_. Code repository and issue tracker are on -`GitHub`_. - -.. _Read the Docs: https://coverage.readthedocs.io/ -.. _GitHub: https://github.com/nedbat/coveragepy - - -**New in 5.x:** SQLite data storage, JSON report, contexts, relative filenames, -dropped support for Python 2.6, 3.3 and 3.4. - - -For Enterprise --------------- - -.. 
|tideliftlogo| image:: https://nedbatchelder.com/pix/Tidelift_Logo_small.png - :alt: Tidelift - :target: https://tidelift.com/subscription/pkg/pypi-coverage?utm_source=pypi-coverage&utm_medium=referral&utm_campaign=readme - -.. list-table:: - :widths: 10 100 - - * - |tideliftlogo| - - `Available as part of the Tidelift Subscription. <https://tidelift.com/subscription/pkg/pypi-coverage?utm_source=pypi-coverage&utm_medium=referral&utm_campaign=readme>`_ - Coverage and thousands of other packages are working with - Tidelift to deliver one enterprise subscription that covers all of the open - source you use. If you want the flexibility of open source and the confidence - of commercial-grade software, this is for you. - `Learn more. <https://tidelift.com/subscription/pkg/pypi-coverage?utm_source=pypi-coverage&utm_medium=referral&utm_campaign=readme>`_ - - -Getting Started ---------------- - -See the `Quick Start section`_ of the docs. - -.. _Quick Start section: https://coverage.readthedocs.io/#quick-start - - -Change history --------------- - -The complete history of changes is on the `change history page`_. - -.. _change history page: https://coverage.readthedocs.io/en/latest/changes.html - - -Contributing ------------- - -See the `Contributing section`_ of the docs. - -.. _Contributing section: https://coverage.readthedocs.io/en/latest/contributing.html - - -Security --------- - -To report a security vulnerability, please use the `Tidelift security -contact`_. Tidelift will coordinate the fix and disclosure. - -.. _Tidelift security contact: https://tidelift.com/security - - -License -------- - -Licensed under the `Apache 2.0 License`_. For details, see `NOTICE.txt`_. - -.. _Apache 2.0 License: http://www.apache.org/licenses/LICENSE-2.0 -.. _NOTICE.txt: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt - - -.. 
|test-status| image:: https://github.com/nedbat/coveragepy/actions/workflows/testsuite.yml/badge.svg?branch=master&event=push - :target: https://github.com/nedbat/coveragepy/actions/workflows/testsuite.yml - :alt: Test suite status -.. |quality-status| image:: https://github.com/nedbat/coveragepy/actions/workflows/quality.yml/badge.svg?branch=master&event=push - :target: https://github.com/nedbat/coveragepy/actions/workflows/quality.yml - :alt: Quality check status -.. |docs| image:: https://readthedocs.org/projects/coverage/badge/?version=latest&style=flat - :target: https://coverage.readthedocs.io/ - :alt: Documentation -.. |reqs| image:: https://requires.io/github/nedbat/coveragepy/requirements.svg?branch=master - :target: https://requires.io/github/nedbat/coveragepy/requirements/?branch=master - :alt: Requirements status -.. |kit| image:: https://badge.fury.io/py/coverage.svg - :target: https://pypi.org/project/coverage/ - :alt: PyPI status -.. |format| image:: https://img.shields.io/pypi/format/coverage.svg - :target: https://pypi.org/project/coverage/ - :alt: Kit format -.. |downloads| image:: https://img.shields.io/pypi/dw/coverage.svg - :target: https://pypi.org/project/coverage/ - :alt: Weekly PyPI downloads -.. |versions| image:: https://img.shields.io/pypi/pyversions/coverage.svg?logo=python&logoColor=FBE072 - :target: https://pypi.org/project/coverage/ - :alt: Python versions supported -.. |status| image:: https://img.shields.io/pypi/status/coverage.svg - :target: https://pypi.org/project/coverage/ - :alt: Package stability -.. |license| image:: https://img.shields.io/pypi/l/coverage.svg - :target: https://pypi.org/project/coverage/ - :alt: License -.. |codecov| image:: https://codecov.io/github/nedbat/coveragepy/coverage.svg?branch=master&precision=2 - :target: https://codecov.io/github/nedbat/coveragepy?branch=master - :alt: Coverage! -.. 
|repos| image:: https://repology.org/badge/tiny-repos/python:coverage.svg - :target: https://repology.org/metapackage/python:coverage/versions - :alt: Packaging status -.. |tidelift| image:: https://tidelift.com/badges/package/pypi/coverage - :target: https://tidelift.com/subscription/pkg/pypi-coverage?utm_source=pypi-coverage&utm_medium=referral&utm_campaign=readme - :alt: Tidelift -.. |stars| image:: https://img.shields.io/github/stars/nedbat/coveragepy.svg?logo=github - :target: https://github.com/nedbat/coveragepy/stargazers - :alt: Github stars -.. |forks| image:: https://img.shields.io/github/forks/nedbat/coveragepy.svg?logo=github - :target: https://github.com/nedbat/coveragepy/network/members - :alt: Github forks -.. |contributors| image:: https://img.shields.io/github/contributors/nedbat/coveragepy.svg?logo=github - :target: https://github.com/nedbat/coveragepy/graphs/contributors - :alt: Contributors -.. |twitter-coveragepy| image:: https://img.shields.io/twitter/follow/coveragepy.svg?label=coveragepy&style=flat&logo=twitter&logoColor=4FADFF - :target: https://twitter.com/coveragepy - :alt: coverage.py on Twitter -.. |twitter-nedbat| image:: https://img.shields.io/twitter/follow/nedbat.svg?label=nedbat&style=flat&logo=twitter&logoColor=4FADFF - :target: https://twitter.com/nedbat - :alt: nedbat on Twitter diff --git a/contrib/python/coverage/py2/coverage/__init__.py b/contrib/python/coverage/py2/coverage/__init__.py deleted file mode 100644 index 331b304b683..00000000000 --- a/contrib/python/coverage/py2/coverage/__init__.py +++ /dev/null @@ -1,36 +0,0 @@ -# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 -# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt - -"""Code coverage measurement for Python. 
- -Ned Batchelder -https://nedbatchelder.com/code/coverage - -""" - -import sys - -from coverage.version import __version__, __url__, version_info - -from coverage.control import Coverage, process_startup -from coverage.data import CoverageData -from coverage.misc import CoverageException -from coverage.plugin import CoveragePlugin, FileTracer, FileReporter -from coverage.pytracer import PyTracer - -# Backward compatibility. -coverage = Coverage - -# On Windows, we encode and decode deep enough that something goes wrong and -# the encodings.utf_8 module is loaded and then unloaded, I don't know why. -# Adding a reference here prevents it from being unloaded. Yuk. -import encodings.utf_8 # pylint: disable=wrong-import-position, wrong-import-order - -# Because of the "from coverage.control import fooey" lines at the top of the -# file, there's an entry for coverage.coverage in sys.modules, mapped to None. -# This makes some inspection tools (like pydoc) unable to find the class -# coverage.coverage. So remove that entry. 
-try: - del sys.modules['coverage.coverage'] -except KeyError: - pass diff --git a/contrib/python/coverage/py2/coverage/__main__.py b/contrib/python/coverage/py2/coverage/__main__.py deleted file mode 100644 index 79aa4e2b35d..00000000000 --- a/contrib/python/coverage/py2/coverage/__main__.py +++ /dev/null @@ -1,8 +0,0 @@ -# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 -# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt - -"""Coverage.py's main entry point.""" - -import sys -from coverage.cmdline import main -sys.exit(main()) diff --git a/contrib/python/coverage/py2/coverage/annotate.py b/contrib/python/coverage/py2/coverage/annotate.py deleted file mode 100644 index 999ab6e557d..00000000000 --- a/contrib/python/coverage/py2/coverage/annotate.py +++ /dev/null @@ -1,108 +0,0 @@ -# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 -# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt - -"""Source file annotation for coverage.py.""" - -import io -import os -import re - -from coverage.files import flat_rootname -from coverage.misc import ensure_dir, isolate_module -from coverage.report import get_analysis_to_report - -os = isolate_module(os) - - -class AnnotateReporter(object): - """Generate annotated source files showing line coverage. - - This reporter creates annotated copies of the measured source files. Each - .py file is copied as a .py,cover file, with a left-hand margin annotating - each line:: - - > def h(x): - - if 0: #pragma: no cover - - pass - > if x == 1: - ! a = 1 - > else: - > a = 2 - - > h(2) - - Executed lines use '>', lines not executed use '!', lines excluded from - consideration use '-'. 
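The marker scheme described in the `AnnotateReporter` docstring above can be condensed into a small stand-alone helper. This is a hypothetical simplification, not the shipped code: the real `annotate_file` streams through `statements` and `missing` in sorted order and special-cases bare `else:` lines.

```python
def margin_char(lineno, statements, missing, excluded):
    """Return the left-margin annotation for one source line:
    '-' excluded from consideration, '!' executable but not run,
    '>' executed, ' ' for lines that are not statements at all."""
    if lineno in excluded:
        return '-'
    if lineno in missing:
        return '!'
    if lineno in statements:
        return '>'
    return ' '
```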
- - """ - - def __init__(self, coverage): - self.coverage = coverage - self.config = self.coverage.config - self.directory = None - - blank_re = re.compile(r"\s*(#|$)") - else_re = re.compile(r"\s*else\s*:\s*(#|$)") - - def report(self, morfs, directory=None): - """Run the report. - - See `coverage.report()` for arguments. - - """ - self.directory = directory - self.coverage.get_data() - for fr, analysis in get_analysis_to_report(self.coverage, morfs): - self.annotate_file(fr, analysis) - - def annotate_file(self, fr, analysis): - """Annotate a single file. - - `fr` is the FileReporter for the file to annotate. - - """ - statements = sorted(analysis.statements) - missing = sorted(analysis.missing) - excluded = sorted(analysis.excluded) - - if self.directory: - ensure_dir(self.directory) - dest_file = os.path.join(self.directory, flat_rootname(fr.relative_filename())) - if dest_file.endswith("_py"): - dest_file = dest_file[:-3] + ".py" - dest_file += ",cover" - else: - dest_file = fr.filename + ",cover" - - with io.open(dest_file, 'w', encoding='utf8') as dest: - i = 0 - j = 0 - covered = True - source = fr.source() - for lineno, line in enumerate(source.splitlines(True), start=1): - while i < len(statements) and statements[i] < lineno: - i += 1 - while j < len(missing) and missing[j] < lineno: - j += 1 - if i < len(statements) and statements[i] == lineno: - covered = j >= len(missing) or missing[j] > lineno - if self.blank_re.match(line): - dest.write(u' ') - elif self.else_re.match(line): - # Special logic for lines containing only 'else:'. - if i >= len(statements) and j >= len(missing): - dest.write(u'! ') - elif i >= len(statements) or j >= len(missing): - dest.write(u'> ') - elif statements[i] == missing[j]: - dest.write(u'! ') - else: - dest.write(u'> ') - elif lineno in excluded: - dest.write(u'- ') - elif covered: - dest.write(u'> ') - else: - dest.write(u'! 
') - - dest.write(line) diff --git a/contrib/python/coverage/py2/coverage/backward.py b/contrib/python/coverage/py2/coverage/backward.py deleted file mode 100644 index ac781ab96a8..00000000000 --- a/contrib/python/coverage/py2/coverage/backward.py +++ /dev/null @@ -1,267 +0,0 @@ -# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 -# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt - -"""Add things to old Pythons so I can pretend they are newer.""" - -# This file's purpose is to provide modules to be imported from here. -# pylint: disable=unused-import - -import os -import sys - -from datetime import datetime - -from coverage import env - - -# Pythons 2 and 3 differ on where to get StringIO. -try: - from cStringIO import StringIO -except ImportError: - from io import StringIO - -# In py3, ConfigParser was renamed to the more-standard configparser. -# But there's a py3 backport that installs "configparser" in py2, and I don't -# want it because it has annoying deprecation warnings. So try the real py2 -# import first. -try: - import ConfigParser as configparser -except ImportError: - import configparser - -# What's a string called? -try: - string_class = basestring -except NameError: - string_class = str - -# What's a Unicode string called? -try: - unicode_class = unicode -except NameError: - unicode_class = str - -# range or xrange? -try: - range = xrange # pylint: disable=redefined-builtin -except NameError: - range = range - -try: - from itertools import zip_longest -except ImportError: - from itertools import izip_longest as zip_longest - -# Where do we get the thread id from? 
-try: - from thread import get_ident as get_thread_id -except ImportError: - from threading import get_ident as get_thread_id - -try: - os.PathLike -except AttributeError: - # This is Python 2 and 3 - path_types = (bytes, string_class, unicode_class) -else: - # 3.6+ - path_types = (bytes, str, os.PathLike) - -# shlex.quote is new, but there's an undocumented implementation in "pipes", -# who knew!? -try: - from shlex import quote as shlex_quote -except ImportError: - # Useful function, available under a different (undocumented) name - # in Python versions earlier than 3.3. - from pipes import quote as shlex_quote - -try: - import reprlib -except ImportError: # pragma: not covered - # We need this on Python 2, but in testing environments, a backport is - # installed, so this import isn't used. - import repr as reprlib - -# A function to iterate listlessly over a dict's items, and one to get the -# items as a list. -try: - {}.iteritems -except AttributeError: - # Python 3 - def iitems(d): - """Produce the items from dict `d`.""" - return d.items() - - def litems(d): - """Return a list of items from dict `d`.""" - return list(d.items()) -else: - # Python 2 - def iitems(d): - """Produce the items from dict `d`.""" - return d.iteritems() - - def litems(d): - """Return a list of items from dict `d`.""" - return d.items() - -# Getting the `next` function from an iterator is different in 2 and 3. 
-try: - iter([]).next -except AttributeError: - def iternext(seq): - """Get the `next` function for iterating over `seq`.""" - return iter(seq).__next__ -else: - def iternext(seq): - """Get the `next` function for iterating over `seq`.""" - return iter(seq).next - -# Python 3.x is picky about bytes and strings, so provide methods to -# get them right, and make them no-ops in 2.x -if env.PY3: - def to_bytes(s): - """Convert string `s` to bytes.""" - return s.encode('utf8') - - def to_string(b): - """Convert bytes `b` to string.""" - return b.decode('utf8') - - def binary_bytes(byte_values): - """Produce a byte string with the ints from `byte_values`.""" - return bytes(byte_values) - - def byte_to_int(byte): - """Turn a byte indexed from a bytes object into an int.""" - return byte - - def bytes_to_ints(bytes_value): - """Turn a bytes object into a sequence of ints.""" - # In Python 3, iterating bytes gives ints. - return bytes_value - -else: - def to_bytes(s): - """Convert string `s` to bytes (no-op in 2.x).""" - return s - - def to_string(b): - """Convert bytes `b` to string.""" - return b - - def binary_bytes(byte_values): - """Produce a byte string with the ints from `byte_values`.""" - return "".join(chr(b) for b in byte_values) - - def byte_to_int(byte): - """Turn a byte indexed from a bytes object into an int.""" - return ord(byte) - - def bytes_to_ints(bytes_value): - """Turn a bytes object into a sequence of ints.""" - for byte in bytes_value: - yield ord(byte) - - -try: - # In Python 2.x, the builtins were in __builtin__ - BUILTINS = sys.modules['__builtin__'] -except KeyError: - # In Python 3.x, they're in builtins - BUILTINS = sys.modules['builtins'] - - -# imp was deprecated in Python 3.3 -try: - import importlib - import importlib.util - imp = None -except ImportError: - importlib = None - -# We only want to use importlib if it has everything we need. 
-try: - importlib_util_find_spec = importlib.util.find_spec -except Exception: - import imp - importlib_util_find_spec = None - -# What is the .pyc magic number for this version of Python? -try: - PYC_MAGIC_NUMBER = importlib.util.MAGIC_NUMBER -except AttributeError: - PYC_MAGIC_NUMBER = imp.get_magic() - - -def code_object(fn): - """Get the code object from a function.""" - try: - return fn.func_code - except AttributeError: - return fn.__code__ - - -try: - from types import SimpleNamespace -except ImportError: - # The code from https://docs.python.org/3/library/types.html#types.SimpleNamespace - class SimpleNamespace: - """Python implementation of SimpleNamespace, for Python 2.""" - def __init__(self, **kwargs): - self.__dict__.update(kwargs) - - def __repr__(self): - keys = sorted(self.__dict__) - items = ("{}={!r}".format(k, self.__dict__[k]) for k in keys) - return "{}({})".format(type(self).__name__, ", ".join(items)) - - -def format_local_datetime(dt): - """Return a string with local timezone representing the date. - If python version is lower than 3.6, the time zone is not included. - """ - try: - return dt.astimezone().strftime('%Y-%m-%d %H:%M %z') - except (TypeError, ValueError): - # Datetime.astimezone in Python 3.5 can not handle naive datetime - return dt.strftime('%Y-%m-%d %H:%M') - - -def invalidate_import_caches(): - """Invalidate any import caches that may or may not exist.""" - if importlib and hasattr(importlib, "invalidate_caches"): - importlib.invalidate_caches() - - -def import_local_file(modname, modfile=None): - """Import a local file as a module. - - Opens a file in the current directory named `modname`.py, imports it - as `modname`, and returns the module object. `modfile` is the file to - import if it isn't in the current directory. 
- - """ - try: - import importlib.util as importlib_util - except ImportError: - importlib_util = None - - if modfile is None: - modfile = modname + '.py' - if importlib_util: - spec = importlib_util.spec_from_file_location(modname, modfile) - mod = importlib_util.module_from_spec(spec) - sys.modules[modname] = mod - spec.loader.exec_module(mod) - else: - for suff in imp.get_suffixes(): # pragma: part covered - if suff[0] == '.py': - break - - with open(modfile, 'r') as f: - # pylint: disable=undefined-loop-variable - mod = imp.load_module(modname, f, modfile, suff) - - return mod diff --git a/contrib/python/coverage/py2/coverage/bytecode.py b/contrib/python/coverage/py2/coverage/bytecode.py deleted file mode 100644 index ceb18cf3740..00000000000 --- a/contrib/python/coverage/py2/coverage/bytecode.py +++ /dev/null @@ -1,19 +0,0 @@ -# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 -# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt - -"""Bytecode manipulation for coverage.py""" - -import types - - -def code_objects(code): - """Iterate over all the code objects in `code`.""" - stack = [code] - while stack: - # We're going to return the code object on the stack, but first - # push its children for later returning. 
- code = stack.pop() - for c in code.co_consts: - if isinstance(c, types.CodeType): - stack.append(c) - yield code diff --git a/contrib/python/coverage/py2/coverage/cmdline.py b/contrib/python/coverage/py2/coverage/cmdline.py deleted file mode 100644 index 0be0cca19f1..00000000000 --- a/contrib/python/coverage/py2/coverage/cmdline.py +++ /dev/null @@ -1,910 +0,0 @@ -# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 -# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt - -"""Command-line support for coverage.py.""" - -from __future__ import print_function - -import glob -import optparse -import os.path -import shlex -import sys -import textwrap -import traceback - -import coverage -from coverage import Coverage -from coverage import env -from coverage.collector import CTracer -from coverage.data import line_counts -from coverage.debug import info_formatter, info_header, short_stack -from coverage.execfile import PyRunner -from coverage.misc import BaseCoverageException, ExceptionDuringRun, NoSource, output_encoding -from coverage.results import should_fail_under - - -class Opts(object): - """A namespace class for individual options we'll build parsers from.""" - - append = optparse.make_option( - '-a', '--append', action='store_true', - help="Append coverage data to .coverage, otherwise it starts clean each time.", - ) - keep = optparse.make_option( - '', '--keep', action='store_true', - help="Keep original coverage files, otherwise they are deleted.", - ) - branch = optparse.make_option( - '', '--branch', action='store_true', - help="Measure branch coverage in addition to statement coverage.", - ) - CONCURRENCY_CHOICES = [ - "thread", "gevent", "greenlet", "eventlet", "multiprocessing", - ] - concurrency = optparse.make_option( - '', '--concurrency', action='store', metavar="LIB", - choices=CONCURRENCY_CHOICES, - help=( - "Properly measure code using a concurrency library. " - "Valid values are: %s." 
- ) % ", ".join(CONCURRENCY_CHOICES), - ) - context = optparse.make_option( - '', '--context', action='store', metavar="LABEL", - help="The context label to record for this coverage run.", - ) - debug = optparse.make_option( - '', '--debug', action='store', metavar="OPTS", - help="Debug options, separated by commas. [env: COVERAGE_DEBUG]", - ) - directory = optparse.make_option( - '-d', '--directory', action='store', metavar="DIR", - help="Write the output files to DIR.", - ) - fail_under = optparse.make_option( - '', '--fail-under', action='store', metavar="MIN", type="float", - help="Exit with a status of 2 if the total coverage is less than MIN.", - ) - help = optparse.make_option( - '-h', '--help', action='store_true', - help="Get help on this command.", - ) - ignore_errors = optparse.make_option( - '-i', '--ignore-errors', action='store_true', - help="Ignore errors while reading source files.", - ) - include = optparse.make_option( - '', '--include', action='store', - metavar="PAT1,PAT2,...", - help=( - "Include only files whose paths match one of these patterns. " - "Accepts shell-style wildcards, which must be quoted." - ), - ) - pylib = optparse.make_option( - '-L', '--pylib', action='store_true', - help=( - "Measure coverage even inside the Python installed library, " - "which isn't done by default." - ), - ) - sort = optparse.make_option( - '--sort', action='store', metavar='COLUMN', - help="Sort the report by the named column: name, stmts, miss, branch, brpart, or cover. " - "Default is name." 
- ) - show_missing = optparse.make_option( - '-m', '--show-missing', action='store_true', - help="Show line numbers of statements in each module that weren't executed.", - ) - skip_covered = optparse.make_option( - '--skip-covered', action='store_true', - help="Skip files with 100% coverage.", - ) - no_skip_covered = optparse.make_option( - '--no-skip-covered', action='store_false', dest='skip_covered', - help="Disable --skip-covered.", - ) - skip_empty = optparse.make_option( - '--skip-empty', action='store_true', - help="Skip files with no code.", - ) - show_contexts = optparse.make_option( - '--show-contexts', action='store_true', - help="Show contexts for covered lines.", - ) - omit = optparse.make_option( - '', '--omit', action='store', - metavar="PAT1,PAT2,...", - help=( - "Omit files whose paths match one of these patterns. " - "Accepts shell-style wildcards, which must be quoted." - ), - ) - contexts = optparse.make_option( - '', '--contexts', action='store', - metavar="REGEX1,REGEX2,...", - help=( - "Only display data from lines covered in the given contexts. " - "Accepts Python regexes, which must be quoted." - ), - ) - output_xml = optparse.make_option( - '-o', '', action='store', dest="outfile", - metavar="OUTFILE", - help="Write the XML report to this file. Defaults to 'coverage.xml'", - ) - output_json = optparse.make_option( - '-o', '', action='store', dest="outfile", - metavar="OUTFILE", - help="Write the JSON report to this file. Defaults to 'coverage.json'", - ) - json_pretty_print = optparse.make_option( - '', '--pretty-print', action='store_true', - help="Format the JSON for human readers.", - ) - parallel_mode = optparse.make_option( - '-p', '--parallel-mode', action='store_true', - help=( - "Append the machine name, process id and random number to the " - ".coverage data file name to simplify collecting data from " - "many processes." 
- ), - ) - module = optparse.make_option( - '-m', '--module', action='store_true', - help=( - "<pyfile> is an importable Python module, not a script path, " - "to be run as 'python -m' would run it." - ), - ) - precision = optparse.make_option( - '', '--precision', action='store', metavar='N', type=int, - help=( - "Number of digits after the decimal point to display for " - "reported coverage percentages." - ), - ) - rcfile = optparse.make_option( - '', '--rcfile', action='store', - help=( - "Specify configuration file. " - "By default '.coveragerc', 'setup.cfg', 'tox.ini', and " - "'pyproject.toml' are tried. [env: COVERAGE_RCFILE]" - ), - ) - source = optparse.make_option( - '', '--source', action='store', metavar="SRC1,SRC2,...", - help="A list of packages or directories of code to be measured.", - ) - timid = optparse.make_option( - '', '--timid', action='store_true', - help=( - "Use a simpler but slower trace method. Try this if you get " - "seemingly impossible results!" - ), - ) - title = optparse.make_option( - '', '--title', action='store', metavar="TITLE", - help="A text string to use as the title on the HTML.", - ) - version = optparse.make_option( - '', '--version', action='store_true', - help="Display version information and exit.", - ) - - -class CoverageOptionParser(optparse.OptionParser, object): - """Base OptionParser for coverage.py. - - Problems don't exit the program. - Defaults are initialized for all options. 
- - """ - - def __init__(self, *args, **kwargs): - super(CoverageOptionParser, self).__init__( - add_help_option=False, *args, **kwargs - ) - self.set_defaults( - action=None, - append=None, - branch=None, - concurrency=None, - context=None, - debug=None, - directory=None, - fail_under=None, - help=None, - ignore_errors=None, - include=None, - keep=None, - module=None, - omit=None, - contexts=None, - parallel_mode=None, - precision=None, - pylib=None, - rcfile=True, - show_missing=None, - skip_covered=None, - skip_empty=None, - show_contexts=None, - sort=None, - source=None, - timid=None, - title=None, - version=None, - ) - - self.disable_interspersed_args() - - class OptionParserError(Exception): - """Used to stop the optparse error handler ending the process.""" - pass - - def parse_args_ok(self, args=None, options=None): - """Call optparse.parse_args, but return a triple: - - (ok, options, args) - - """ - try: - options, args = super(CoverageOptionParser, self).parse_args(args, options) - except self.OptionParserError: - return False, None, None - return True, options, args - - def error(self, msg): - """Override optparse.error so sys.exit doesn't get called.""" - show_help(msg) - raise self.OptionParserError - - -class GlobalOptionParser(CoverageOptionParser): - """Command-line parser for coverage.py global option arguments.""" - - def __init__(self): - super(GlobalOptionParser, self).__init__() - - self.add_options([ - Opts.help, - Opts.version, - ]) - - -class CmdOptionParser(CoverageOptionParser): - """Parse one of the new-style commands for coverage.py.""" - - def __init__(self, action, options, defaults=None, usage=None, description=None): - """Create an OptionParser for a coverage.py command. - - `action` is the slug to put into `options.action`. - `options` is a list of Option's for the command. - `defaults` is a dict of default value for options. - `usage` is the usage string to display in help. 
- `description` is the description of the command, for the help text. - - """ - if usage: - usage = "%prog " + usage - super(CmdOptionParser, self).__init__( - usage=usage, - description=description, - ) - self.set_defaults(action=action, **(defaults or {})) - self.add_options(options) - self.cmd = action - - def __eq__(self, other): - # A convenience equality, so that I can put strings in unit test - # results, and they will compare equal to objects. - return (other == "<CmdOptionParser:%s>" % self.cmd) - - __hash__ = None # This object doesn't need to be hashed. - - def get_prog_name(self): - """Override of an undocumented function in optparse.OptionParser.""" - program_name = super(CmdOptionParser, self).get_prog_name() - - # Include the sub-command for this parser as part of the command. - return "{command} {subcommand}".format(command=program_name, subcommand=self.cmd) - - -GLOBAL_ARGS = [ - Opts.debug, - Opts.help, - Opts.rcfile, - ] - -CMDS = { - 'annotate': CmdOptionParser( - "annotate", - [ - Opts.directory, - Opts.ignore_errors, - Opts.include, - Opts.omit, - ] + GLOBAL_ARGS, - usage="[options] [modules]", - description=( - "Make annotated copies of the given files, marking statements that are executed " - "with > and statements that are missed with !." - ), - ), - - 'combine': CmdOptionParser( - "combine", - [ - Opts.append, - Opts.keep, - ] + GLOBAL_ARGS, - usage="[options] <path1> <path2> ... <pathN>", - description=( - "Combine data from multiple coverage files collected " - "with 'run -p'. The combined results are written to a single " - "file representing the union of the data. The positional " - "arguments are data files or directories containing data files. " - "If no paths are provided, data files in the default data file's " - "directory are combined." - ), - ), - - 'debug': CmdOptionParser( - "debug", GLOBAL_ARGS, - usage="<topic>", - description=( - "Display information about the internals of coverage.py, " - "for diagnosing problems. 
" - "Topics are: " - "'data' to show a summary of the collected data; " - "'sys' to show installation information; " - "'config' to show the configuration; " - "'premain' to show what is calling coverage." - ), - ), - - 'erase': CmdOptionParser( - "erase", GLOBAL_ARGS, - description="Erase previously collected coverage data.", - ), - - 'help': CmdOptionParser( - "help", GLOBAL_ARGS, - usage="[command]", - description="Describe how to use coverage.py", - ), - - 'html': CmdOptionParser( - "html", - [ - Opts.contexts, - Opts.directory, - Opts.fail_under, - Opts.ignore_errors, - Opts.include, - Opts.omit, - Opts.precision, - Opts.show_contexts, - Opts.skip_covered, - Opts.no_skip_covered, - Opts.skip_empty, - Opts.title, - ] + GLOBAL_ARGS, - usage="[options] [modules]", - description=( - "Create an HTML report of the coverage of the files. " - "Each file gets its own page, with the source decorated to show " - "executed, excluded, and missed lines." - ), - ), - - 'json': CmdOptionParser( - "json", - [ - Opts.contexts, - Opts.fail_under, - Opts.ignore_errors, - Opts.include, - Opts.omit, - Opts.output_json, - Opts.json_pretty_print, - Opts.show_contexts, - ] + GLOBAL_ARGS, - usage="[options] [modules]", - description="Generate a JSON report of coverage results." - ), - - 'report': CmdOptionParser( - "report", - [ - Opts.contexts, - Opts.fail_under, - Opts.ignore_errors, - Opts.include, - Opts.omit, - Opts.precision, - Opts.sort, - Opts.show_missing, - Opts.skip_covered, - Opts.no_skip_covered, - Opts.skip_empty, - ] + GLOBAL_ARGS, - usage="[options] [modules]", - description="Report coverage statistics on modules." - ), - - 'run': CmdOptionParser( - "run", - [ - Opts.append, - Opts.branch, - Opts.concurrency, - Opts.context, - Opts.include, - Opts.module, - Opts.omit, - Opts.pylib, - Opts.parallel_mode, - Opts.source, - Opts.timid, - ] + GLOBAL_ARGS, - usage="[options] <pyfile> [program options]", - description="Run a Python program, measuring code execution." 
- ), - - 'xml': CmdOptionParser( - "xml", - [ - Opts.fail_under, - Opts.ignore_errors, - Opts.include, - Opts.omit, - Opts.output_xml, - Opts.skip_empty, - ] + GLOBAL_ARGS, - usage="[options] [modules]", - description="Generate an XML report of coverage results." - ), -} - - -def show_help(error=None, topic=None, parser=None): - """Display an error message, or the named topic.""" - assert error or topic or parser - - program_path = sys.argv[0] - if program_path.endswith(os.path.sep + '__main__.py'): - # The path is the main module of a package; get that path instead. - program_path = os.path.dirname(program_path) - program_name = os.path.basename(program_path) - if env.WINDOWS: - # entry_points={'console_scripts':...} on Windows makes files - # called coverage.exe, coverage3.exe, and coverage-3.5.exe. These - # invoke coverage-script.py, coverage3-script.py, and - # coverage-3.5-script.py. argv[0] is the .py file, but we want to - # get back to the original form. - auto_suffix = "-script.py" - if program_name.endswith(auto_suffix): - program_name = program_name[:-len(auto_suffix)] - - help_params = dict(coverage.__dict__) - help_params['program_name'] = program_name - if CTracer is not None: - help_params['extension_modifier'] = 'with C extension' - else: - help_params['extension_modifier'] = 'without C extension' - - if error: - print(error, file=sys.stderr) - print("Use '%s help' for help." 
% (program_name,), file=sys.stderr) - elif parser: - print(parser.format_help().strip()) - print() - else: - help_msg = textwrap.dedent(HELP_TOPICS.get(topic, '')).strip() - if help_msg: - print(help_msg.format(**help_params)) - else: - print("Don't know topic %r" % topic) - print("Full documentation is at {__url__}".format(**help_params)) - - -OK, ERR, FAIL_UNDER = 0, 1, 2 - - -class CoverageScript(object): - """The command-line interface to coverage.py.""" - - def __init__(self): - self.global_option = False - self.coverage = None - - def command_line(self, argv): - """The bulk of the command line interface to coverage.py. - - `argv` is the argument list to process. - - Returns 0 if all is well, 1 if something went wrong. - - """ - # Collect the command-line options. - if not argv: - show_help(topic='minimum_help') - return OK - - # The command syntax we parse depends on the first argument. Global - # switch syntax always starts with an option. - self.global_option = argv[0].startswith('-') - if self.global_option: - parser = GlobalOptionParser() - else: - parser = CMDS.get(argv[0]) - if not parser: - show_help("Unknown command: '%s'" % argv[0]) - return ERR - argv = argv[1:] - - ok, options, args = parser.parse_args_ok(argv) - if not ok: - return ERR - - # Handle help and version. - if self.do_help(options, args, parser): - return OK - - # Listify the list options. - source = unshell_list(options.source) - omit = unshell_list(options.omit) - include = unshell_list(options.include) - debug = unshell_list(options.debug) - contexts = unshell_list(options.contexts) - - # Do something. 
- self.coverage = Coverage(
- data_suffix=options.parallel_mode,
- cover_pylib=options.pylib,
- timid=options.timid,
- branch=options.branch,
- config_file=options.rcfile,
- source=source,
- omit=omit,
- include=include,
- debug=debug,
- concurrency=options.concurrency,
- check_preimported=True,
- context=options.context,
- )
-
- if options.action == "debug":
- return self.do_debug(args)
-
- elif options.action == "erase":
- self.coverage.erase()
- return OK
-
- elif options.action == "run":
- return self.do_run(options, args)
-
- elif options.action == "combine":
- if options.append:
- self.coverage.load()
- data_dirs = args or None
- self.coverage.combine(data_dirs, strict=True, keep=bool(options.keep))
- self.coverage.save()
- return OK
-
- # Remaining actions are reporting, with some common options.
- report_args = dict(
- morfs=unglob_args(args),
- ignore_errors=options.ignore_errors,
- omit=omit,
- include=include,
- contexts=contexts,
- )
-
- # We need to be able to import from the current directory, because
- # plugins may try, for example, to read Django settings.
- sys.path.insert(0, '') - - self.coverage.load() - - total = None - if options.action == "report": - total = self.coverage.report( - show_missing=options.show_missing, - skip_covered=options.skip_covered, - skip_empty=options.skip_empty, - precision=options.precision, - sort=options.sort, - **report_args - ) - elif options.action == "annotate": - self.coverage.annotate(directory=options.directory, **report_args) - elif options.action == "html": - total = self.coverage.html_report( - directory=options.directory, - title=options.title, - skip_covered=options.skip_covered, - skip_empty=options.skip_empty, - show_contexts=options.show_contexts, - precision=options.precision, - **report_args - ) - elif options.action == "xml": - outfile = options.outfile - total = self.coverage.xml_report( - outfile=outfile, skip_empty=options.skip_empty, - **report_args - ) - elif options.action == "json": - outfile = options.outfile - total = self.coverage.json_report( - outfile=outfile, - pretty_print=options.pretty_print, - show_contexts=options.show_contexts, - **report_args - ) - - if total is not None: - # Apply the command line fail-under options, and then use the config - # value, so we can get fail_under from the config file. - if options.fail_under is not None: - self.coverage.set_option("report:fail_under", options.fail_under) - - fail_under = self.coverage.get_option("report:fail_under") - precision = self.coverage.get_option("report:precision") - if should_fail_under(total, fail_under, precision): - msg = "total of {total:.{p}f} is less than fail-under={fail_under:.{p}f}".format( - total=total, fail_under=fail_under, p=precision, - ) - print("Coverage failure:", msg) - return FAIL_UNDER - - return OK - - def do_help(self, options, args, parser): - """Deal with help requests. - - Return True if it handled the request, False if not. - - """ - # Handle help. 
- if options.help: - if self.global_option: - show_help(topic='help') - else: - show_help(parser=parser) - return True - - if options.action == "help": - if args: - for a in args: - parser = CMDS.get(a) - if parser: - show_help(parser=parser) - else: - show_help(topic=a) - else: - show_help(topic='help') - return True - - # Handle version. - if options.version: - show_help(topic='version') - return True - - return False - - def do_run(self, options, args): - """Implementation of 'coverage run'.""" - - if not args: - if options.module: - # Specified -m with nothing else. - show_help("No module specified for -m") - return ERR - command_line = self.coverage.get_option("run:command_line") - if command_line is not None: - args = shlex.split(command_line) - if args and args[0] == "-m": - options.module = True - args = args[1:] - if not args: - show_help("Nothing to do.") - return ERR - - if options.append and self.coverage.get_option("run:parallel"): - show_help("Can't append to data files in parallel mode.") - return ERR - - if options.concurrency == "multiprocessing": - # Can't set other run-affecting command line options with - # multiprocessing. - for opt_name in ['branch', 'include', 'omit', 'pylib', 'source', 'timid']: - # As it happens, all of these options have no default, meaning - # they will be None if they have not been specified. - if getattr(options, opt_name) is not None: - show_help( - "Options affecting multiprocessing must only be specified " - "in a configuration file.\n" - "Remove --{} from the command line.".format(opt_name) - ) - return ERR - - runner = PyRunner(args, as_module=bool(options.module)) - runner.prepare() - - if options.append: - self.coverage.load() - - # Run the script. 
- self.coverage.start() - code_ran = True - try: - runner.run() - except NoSource: - code_ran = False - raise - finally: - self.coverage.stop() - if code_ran: - self.coverage.save() - - return OK - - def do_debug(self, args): - """Implementation of 'coverage debug'.""" - - if not args: - show_help("What information would you like: config, data, sys, premain?") - return ERR - - for info in args: - if info == 'sys': - sys_info = self.coverage.sys_info() - print(info_header("sys")) - for line in info_formatter(sys_info): - print(" %s" % line) - elif info == 'data': - self.coverage.load() - data = self.coverage.get_data() - print(info_header("data")) - print("path: %s" % data.data_filename()) - if data: - print("has_arcs: %r" % data.has_arcs()) - summary = line_counts(data, fullpath=True) - filenames = sorted(summary.keys()) - print("\n%d files:" % len(filenames)) - for f in filenames: - line = "%s: %d lines" % (f, summary[f]) - plugin = data.file_tracer(f) - if plugin: - line += " [%s]" % plugin - print(line) - else: - print("No data collected") - elif info == 'config': - print(info_header("config")) - config_info = self.coverage.config.__dict__.items() - for line in info_formatter(config_info): - print(" %s" % line) - elif info == "premain": - print(info_header("premain")) - print(short_stack()) - else: - show_help("Don't know what you mean by %r" % info) - return ERR - - return OK - - -def unshell_list(s): - """Turn a command-line argument into a list.""" - if not s: - return None - if env.WINDOWS: - # When running coverage.py as coverage.exe, some of the behavior - # of the shell is emulated: wildcards are expanded into a list of - # file names. So you have to single-quote patterns on the command - # line, but (not) helpfully, the single quotes are included in the - # argument, so we have to strip them off here. 
- s = s.strip("'") - return s.split(',') - - -def unglob_args(args): - """Interpret shell wildcards for platforms that need it.""" - if env.WINDOWS: - globbed = [] - for arg in args: - if '?' in arg or '*' in arg: - globbed.extend(glob.glob(arg)) - else: - globbed.append(arg) - args = globbed - return args - - -HELP_TOPICS = { - 'help': """\ - Coverage.py, version {__version__} {extension_modifier} - Measure, collect, and report on code coverage in Python programs. - - usage: {program_name} <command> [options] [args] - - Commands: - annotate Annotate source files with execution information. - combine Combine a number of data files. - debug Display information about the internals of coverage.py - erase Erase previously collected coverage data. - help Get help on using coverage.py. - html Create an HTML report. - json Create a JSON report of coverage results. - report Report coverage stats on modules. - run Run a Python program and measure code execution. - xml Create an XML report of coverage results. - - Use "{program_name} help <command>" for detailed help on any command. - """, - - 'minimum_help': """\ - Code coverage for Python, version {__version__} {extension_modifier}. Use '{program_name} help' for help. - """, - - 'version': """\ - Coverage.py, version {__version__} {extension_modifier} - """, -} - - -def main(argv=None): - """The main entry point to coverage.py. - - This is installed as the script entry point. - - """ - if argv is None: - argv = sys.argv[1:] - try: - status = CoverageScript().command_line(argv) - except ExceptionDuringRun as err: - # An exception was caught while running the product code. The - # sys.exc_info() return tuple is packed into an ExceptionDuringRun - # exception. - traceback.print_exception(*err.args) # pylint: disable=no-value-for-parameter - status = ERR - except BaseCoverageException as err: - # A controlled error inside coverage.py: print the message to the user. 
- msg = err.args[0] - if env.PY2: - msg = msg.encode(output_encoding()) - print(msg) - status = ERR - except SystemExit as err: - # The user called `sys.exit()`. Exit with their argument, if any. - if err.args: - status = err.args[0] - else: - status = None - return status - -# Profiling using ox_profile. Install it from GitHub: -# pip install git+https://github.com/emin63/ox_profile.git -# -# $set_env.py: COVERAGE_PROFILE - Set to use ox_profile. -_profile = os.environ.get("COVERAGE_PROFILE", "") -if _profile: # pragma: debugging - from ox_profile.core.launchers import SimpleLauncher # pylint: disable=import-error - original_main = main - - def main(argv=None): # pylint: disable=function-redefined - """A wrapper around main that profiles.""" - profiler = SimpleLauncher.launch() - try: - return original_main(argv) - finally: - data, _ = profiler.query(re_filter='coverage', max_records=100) - print(profiler.show(query=data, limit=100, sep='', col='')) - profiler.cancel() diff --git a/contrib/python/coverage/py2/coverage/collector.py b/contrib/python/coverage/py2/coverage/collector.py deleted file mode 100644 index c42d29feec9..00000000000 --- a/contrib/python/coverage/py2/coverage/collector.py +++ /dev/null @@ -1,455 +0,0 @@ -# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 -# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt - -"""Raw data collector for coverage.py.""" - -import os -import sys - -from coverage import env -from coverage.backward import litems, range # pylint: disable=redefined-builtin -from coverage.debug import short_stack -from coverage.disposition import FileDisposition -from coverage.misc import CoverageException, isolate_module -from coverage.pytracer import PyTracer - -os = isolate_module(os) - - -try: - # Use the C extension code when we can, for speed. - from coverage.tracer import CTracer, CFileDisposition -except ImportError: - # Couldn't import the C extension, maybe it isn't built. 
- if os.getenv('COVERAGE_TEST_TRACER') == 'c': - # During testing, we use the COVERAGE_TEST_TRACER environment variable - # to indicate that we've fiddled with the environment to test this - # fallback code. If we thought we had a C tracer, but couldn't import - # it, then exit quickly and clearly instead of dribbling confusing - # errors. I'm using sys.exit here instead of an exception because an - # exception here causes all sorts of other noise in unittest. - sys.stderr.write("*** COVERAGE_TEST_TRACER is 'c' but can't import CTracer!\n") - sys.exit(1) - CTracer = None - - -class Collector(object): - """Collects trace data. - - Creates a Tracer object for each thread, since they track stack - information. Each Tracer points to the same shared data, contributing - traced data points. - - When the Collector is started, it creates a Tracer for the current thread, - and installs a function to create Tracers for each new thread started. - When the Collector is stopped, all active Tracers are stopped. - - Threads started while the Collector is stopped will never have Tracers - associated with them. - - """ - - # The stack of active Collectors. Collectors are added here when started, - # and popped when stopped. Collectors on the stack are paused when not - # the top, and resumed when they become the top again. - _collectors = [] - - # The concurrency settings we support here. - SUPPORTED_CONCURRENCIES = {"greenlet", "eventlet", "gevent", "thread"} - - def __init__( - self, should_trace, check_include, should_start_context, file_mapper, - timid, branch, warn, concurrency, - ): - """Create a collector. - - `should_trace` is a function, taking a file name and a frame, and - returning a `coverage.FileDisposition object`. - - `check_include` is a function taking a file name and a frame. It returns - a boolean: True if the file should be traced, False if not. - - `should_start_context` is a function taking a frame, and returning a - string. 
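The `COVERAGE_TEST_TRACER` guard above is an instance of the "optional accelerator" import pattern: fall back silently in production, but fail fast when a test harness promised the fast path. A hedged sketch (the function name and `env` parameter are invented here for testability):

```python
import os
import sys

def pick_tracer(c_tracer, env=os.environ):
    """Choose the accelerator class, with the test-mode guard shown above.

    `c_tracer` is the imported C tracer class, or None when the import
    failed. If COVERAGE_TEST_TRACER promised a C tracer that cannot be
    imported, exit loudly instead of silently falling back.
    """
    if c_tracer is None and env.get("COVERAGE_TEST_TRACER") == "c":
        sys.stderr.write("*** expected the C tracer but couldn't import it\n")
        sys.exit(1)
    return c_tracer  # may be None; callers then use the pure-Python tracer
```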
If the frame should be the start of a new context, the string - is the new context. If the frame should not be the start of a new - context, return None. - - `file_mapper` is a function taking a filename, and returning a Unicode - filename. The result is the name that will be recorded in the data - file. - - If `timid` is true, then a slower simpler trace function will be - used. This is important for some environments where manipulation of - tracing functions make the faster more sophisticated trace function not - operate properly. - - If `branch` is true, then branches will be measured. This involves - collecting data on which statements followed each other (arcs). Use - `get_arc_data` to get the arc data. - - `warn` is a warning function, taking a single string message argument - and an optional slug argument which will be a string or None, to be - used if a warning needs to be issued. - - `concurrency` is a list of strings indicating the concurrency libraries - in use. Valid values are "greenlet", "eventlet", "gevent", or "thread" - (the default). Of these four values, only one can be supplied. Other - values are ignored. - - """ - self.should_trace = should_trace - self.check_include = check_include - self.should_start_context = should_start_context - self.file_mapper = file_mapper - self.warn = warn - self.branch = branch - self.threading = None - self.covdata = None - - self.static_context = None - - self.origin = short_stack() - - self.concur_id_func = None - self.mapped_file_cache = {} - - # We can handle a few concurrency options here, but only one at a time. 
- these_concurrencies = self.SUPPORTED_CONCURRENCIES.intersection(concurrency) - if len(these_concurrencies) > 1: - raise CoverageException("Conflicting concurrency settings: %s" % concurrency) - self.concurrency = these_concurrencies.pop() if these_concurrencies else '' - - try: - if self.concurrency == "greenlet": - import greenlet - self.concur_id_func = greenlet.getcurrent - elif self.concurrency == "eventlet": - import eventlet.greenthread # pylint: disable=import-error,useless-suppression - self.concur_id_func = eventlet.greenthread.getcurrent - elif self.concurrency == "gevent": - import gevent # pylint: disable=import-error,useless-suppression - self.concur_id_func = gevent.getcurrent - elif self.concurrency == "thread" or not self.concurrency: - # It's important to import threading only if we need it. If - # it's imported early, and the program being measured uses - # gevent, then gevent's monkey-patching won't work properly. - import threading - self.threading = threading - else: - raise CoverageException("Don't understand concurrency=%s" % concurrency) - except ImportError: - raise CoverageException( - "Couldn't trace with concurrency=%s, the module isn't installed." % ( - self.concurrency, - ) - ) - - self.reset() - - if timid: - # Being timid: use the simple Python trace function. - self._trace_class = PyTracer - else: - # Being fast: use the C Tracer if it is available, else the Python - # trace function. 
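The concurrency validation above uses a set intersection to ignore unknown values while rejecting conflicting ones. Extracted into a standalone sketch (names are illustrative, not coverage.py's API):

```python
SUPPORTED_CONCURRENCIES = {"greenlet", "eventlet", "gevent", "thread"}

def pick_concurrency(requested):
    """Reduce a requested concurrency list to one value, as Collector does.

    Unsupported names are ignored; more than one supported name is a
    conflict; an empty selection means the default (plain threads).
    """
    chosen = SUPPORTED_CONCURRENCIES.intersection(requested)
    if len(chosen) > 1:
        raise ValueError("Conflicting concurrency settings: %s" % sorted(chosen))
    return chosen.pop() if chosen else ""
```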
- self._trace_class = CTracer or PyTracer - - if self._trace_class is CTracer: - self.file_disposition_class = CFileDisposition - self.supports_plugins = True - else: - self.file_disposition_class = FileDisposition - self.supports_plugins = False - - def __repr__(self): - return "<Collector at 0x%x: %s>" % (id(self), self.tracer_name()) - - def use_data(self, covdata, context): - """Use `covdata` for recording data.""" - self.covdata = covdata - self.static_context = context - self.covdata.set_context(self.static_context) - - def tracer_name(self): - """Return the class name of the tracer we're using.""" - return self._trace_class.__name__ - - def _clear_data(self): - """Clear out existing data, but stay ready for more collection.""" - # We used to used self.data.clear(), but that would remove filename - # keys and data values that were still in use higher up the stack - # when we are called as part of switch_context. - for d in self.data.values(): - d.clear() - - for tracer in self.tracers: - tracer.reset_activity() - - def reset(self): - """Clear collected data, and prepare to collect more.""" - # A dictionary mapping file names to dicts with line number keys (if not - # branch coverage), or mapping file names to dicts with line number - # pairs as keys (if branch coverage). - self.data = {} - - # A dictionary mapping file names to file tracer plugin names that will - # handle them. - self.file_tracers = {} - - self.disabled_plugins = set() - - # The .should_trace_cache attribute is a cache from file names to - # coverage.FileDisposition objects, or None. When a file is first - # considered for tracing, a FileDisposition is obtained from - # Coverage.should_trace. Its .trace attribute indicates whether the - # file should be traced or not. 
If it should be, a plugin with dynamic - # file names can decide not to trace it based on the dynamic file name - # being excluded by the inclusion rules, in which case the - # FileDisposition will be replaced by None in the cache. - if env.PYPY: - import __pypy__ # pylint: disable=import-error - # Alex Gaynor said: - # should_trace_cache is a strictly growing key: once a key is in - # it, it never changes. Further, the keys used to access it are - # generally constant, given sufficient context. That is to say, at - # any given point _trace() is called, pypy is able to know the key. - # This is because the key is determined by the physical source code - # line, and that's invariant with the call site. - # - # This property of a dict with immutable keys, combined with - # call-site-constant keys is a match for PyPy's module dict, - # which is optimized for such workloads. - # - # This gives a 20% benefit on the workload described at - # https://bitbucket.org/pypy/pypy/issue/1871/10x-slower-than-cpython-under-coverage - self.should_trace_cache = __pypy__.newdict("module") - else: - self.should_trace_cache = {} - - # Our active Tracers. 
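The `should_trace_cache` described above is a memoization of per-file tracing decisions: the expensive `should_trace` call happens once per filename, and every later trace event hits the dict. A minimal sketch of that caching shape (the `calls` list is instrumentation added here, not part of coverage.py):

```python
def make_should_trace(decide):
    """Cache tracing decisions per filename, like should_trace_cache above.

    `decide(filename)` is called once per file; later lookups hit the dict.
    """
    cache = {}
    calls = []  # records which filenames triggered a real decision

    def should_trace(filename):
        if filename not in cache:
            calls.append(filename)
            cache[filename] = decide(filename)
        return cache[filename]

    return should_trace, calls
```

On PyPy, as the comment above explains, swapping the plain dict for `__pypy__.newdict("module")` exploits the fact that the keys are call-site constants.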
- self.tracers = [] - - self._clear_data() - - def _start_tracer(self): - """Start a new Tracer object, and store it in self.tracers.""" - tracer = self._trace_class() - tracer.data = self.data - tracer.trace_arcs = self.branch - tracer.should_trace = self.should_trace - tracer.should_trace_cache = self.should_trace_cache - tracer.warn = self.warn - - if hasattr(tracer, 'concur_id_func'): - tracer.concur_id_func = self.concur_id_func - elif self.concur_id_func: - raise CoverageException( - "Can't support concurrency=%s with %s, only threads are supported" % ( - self.concurrency, self.tracer_name(), - ) - ) - - if hasattr(tracer, 'file_tracers'): - tracer.file_tracers = self.file_tracers - if hasattr(tracer, 'threading'): - tracer.threading = self.threading - if hasattr(tracer, 'check_include'): - tracer.check_include = self.check_include - if hasattr(tracer, 'should_start_context'): - tracer.should_start_context = self.should_start_context - tracer.switch_context = self.switch_context - if hasattr(tracer, 'disable_plugin'): - tracer.disable_plugin = self.disable_plugin - - fn = tracer.start() - self.tracers.append(tracer) - - return fn - - # The trace function has to be set individually on each thread before - # execution begins. Ironically, the only support the threading module has - # for running code before the thread main is the tracing function. So we - # install this as a trace function, and the first time it's called, it does - # the real trace installation. - - def _installation_trace(self, frame, event, arg): - """Called on new threads, installs the real tracer.""" - # Remove ourselves as the trace function. - sys.settrace(None) - # Install the real tracer. - fn = self._start_tracer() - # Invoke the real trace function with the current event, to be sure - # not to lose an event. - if fn: - fn = fn(frame, event, arg) - # Return the new trace function to continue tracing in this scope. 
- return fn - - def start(self): - """Start collecting trace information.""" - if self._collectors: - self._collectors[-1].pause() - - self.tracers = [] - - # Check to see whether we had a fullcoverage tracer installed. If so, - # get the stack frames it stashed away for us. - traces0 = [] - fn0 = sys.gettrace() - if fn0: - tracer0 = getattr(fn0, '__self__', None) - if tracer0: - traces0 = getattr(tracer0, 'traces', []) - - try: - # Install the tracer on this thread. - fn = self._start_tracer() - except: - if self._collectors: - self._collectors[-1].resume() - raise - - # If _start_tracer succeeded, then we add ourselves to the global - # stack of collectors. - self._collectors.append(self) - - # Replay all the events from fullcoverage into the new trace function. - for args in traces0: - (frame, event, arg), lineno = args - try: - fn(frame, event, arg, lineno=lineno) - except TypeError: - raise Exception("fullcoverage must be run with the C trace function.") - - # Install our installation tracer in threading, to jump-start other - # threads. - if self.threading: - self.threading.settrace(self._installation_trace) - - def stop(self): - """Stop collecting trace information.""" - assert self._collectors - if self._collectors[-1] is not self: - print("self._collectors:") - for c in self._collectors: - print(" {!r}\n{}".format(c, c.origin)) - assert self._collectors[-1] is self, ( - "Expected current collector to be %r, but it's %r" % (self, self._collectors[-1]) - ) - - self.pause() - - # Remove this Collector from the stack, and resume the one underneath - # (if any). 
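The `_installation_trace` bootstrap above works because `threading.settrace` is the only hook that runs before a new thread's main function: the stub removes itself, builds the real tracer, and replays the event that triggered it. A runnable sketch of the same trick (assuming only the documented `sys.settrace`/`threading.settrace` semantics):

```python
import sys
import threading

def install_per_thread_tracer(make_tracefn):
    """Mirror Collector._installation_trace: threading.settrace runs our
    stub once at thread start, and the stub swaps in the real tracer."""
    def installation_trace(frame, event, arg):
        sys.settrace(None)              # remove ourselves as the trace function
        fn = make_tracefn()             # build the real tracer for this thread
        if fn:
            fn = fn(frame, event, arg)  # replay the current event so it isn't lost
        return fn                       # continue tracing in this scope
    threading.settrace(installation_trace)
```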
- self._collectors.pop() - if self._collectors: - self._collectors[-1].resume() - - def pause(self): - """Pause tracing, but be prepared to `resume`.""" - for tracer in self.tracers: - tracer.stop() - stats = tracer.get_stats() - if stats: - print("\nCoverage.py tracer stats:") - for k in sorted(stats.keys()): - print("%20s: %s" % (k, stats[k])) - if self.threading: - self.threading.settrace(None) - - def resume(self): - """Resume tracing after a `pause`.""" - for tracer in self.tracers: - tracer.start() - if self.threading: - self.threading.settrace(self._installation_trace) - else: - self._start_tracer() - - def _activity(self): - """Has any activity been traced? - - Returns a boolean, True if any trace function was invoked. - - """ - return any(tracer.activity() for tracer in self.tracers) - - def switch_context(self, new_context): - """Switch to a new dynamic context.""" - self.flush_data() - if self.static_context: - context = self.static_context - if new_context: - context += "|" + new_context - else: - context = new_context - self.covdata.set_context(context) - - def disable_plugin(self, disposition): - """Disable the plugin mentioned in `disposition`.""" - file_tracer = disposition.file_tracer - plugin = file_tracer._coverage_plugin - plugin_name = plugin._coverage_plugin_name - self.warn("Disabling plug-in {!r} due to previous exception".format(plugin_name)) - plugin._coverage_enabled = False - disposition.trace = False - - def cached_mapped_file(self, filename): - """A locally cached version of file names mapped through file_mapper.""" - key = (type(filename), filename) - try: - return self.mapped_file_cache[key] - except KeyError: - return self.mapped_file_cache.setdefault(key, self.file_mapper(filename)) - - def mapped_file_dict(self, d): - """Return a dict like d, but with keys modified by file_mapper.""" - # The call to litems() ensures that the GIL protects the dictionary - # iterator against concurrent modifications by tracers running - # in other 
threads. We try three times in case of concurrent - # access, hoping to get a clean copy. - runtime_err = None - for _ in range(3): - try: - items = litems(d) - except RuntimeError as ex: - runtime_err = ex - else: - break - else: - raise runtime_err - - if getattr(sys, 'is_standalone_binary', False): - # filenames should stay relative to the arcadia root, because files may not exist - return dict((k, v) for k, v in items if v) - - return dict((self.cached_mapped_file(k), v) for k, v in items if v) - - def plugin_was_disabled(self, plugin): - """Record that `plugin` was disabled during the run.""" - self.disabled_plugins.add(plugin._coverage_plugin_name) - - def flush_data(self): - """Save the collected data to our associated `CoverageData`. - - Data may have also been saved along the way. This forces the - last of the data to be saved. - - Returns True if there was data to save, False if not. - """ - if not self._activity(): - return False - - if self.branch: - self.covdata.add_arcs(self.mapped_file_dict(self.data)) - else: - self.covdata.add_lines(self.mapped_file_dict(self.data)) - - file_tracers = { - k: v for k, v in self.file_tracers.items() - if v not in self.disabled_plugins - } - self.covdata.add_file_tracers(self.mapped_file_dict(file_tracers)) - - self._clear_data() - return True diff --git a/contrib/python/coverage/py2/coverage/config.py b/contrib/python/coverage/py2/coverage/config.py deleted file mode 100644 index ceb7201b65f..00000000000 --- a/contrib/python/coverage/py2/coverage/config.py +++ /dev/null @@ -1,605 +0,0 @@ -# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 -# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt - -"""Config file for coverage.py""" - -import collections -import copy -import os -import os.path -import re -import sys - -from coverage import env -from coverage.backward import configparser, iitems, string_class -from coverage.misc import contract, CoverageException, 
isolate_module -from coverage.misc import substitute_variables - -from coverage.tomlconfig import TomlConfigParser, TomlDecodeError - -os = isolate_module(os) - - -class HandyConfigParser(configparser.RawConfigParser): - """Our specialization of ConfigParser.""" - - def __init__(self, our_file): - """Create the HandyConfigParser. - - `our_file` is True if this config file is specifically for coverage, - False if we are examining another config file (tox.ini, setup.cfg) - for possible settings. - """ - - configparser.RawConfigParser.__init__(self) - self.section_prefixes = ["coverage:"] - if our_file: - self.section_prefixes.append("") - - def read(self, filenames, encoding=None): - """Read a file name as UTF-8 configuration data.""" - kwargs = {} - if env.PYVERSION >= (3, 2): - kwargs['encoding'] = encoding or "utf-8" - return configparser.RawConfigParser.read(self, filenames, **kwargs) - - def has_option(self, section, option): - for section_prefix in self.section_prefixes: - real_section = section_prefix + section - has = configparser.RawConfigParser.has_option(self, real_section, option) - if has: - return has - return False - - def has_section(self, section): - for section_prefix in self.section_prefixes: - real_section = section_prefix + section - has = configparser.RawConfigParser.has_section(self, real_section) - if has: - return real_section - return False - - def options(self, section): - for section_prefix in self.section_prefixes: - real_section = section_prefix + section - if configparser.RawConfigParser.has_section(self, real_section): - return configparser.RawConfigParser.options(self, real_section) - raise configparser.NoSectionError(section) - - def get_section(self, section): - """Get the contents of a section, as a dictionary.""" - d = {} - for opt in self.options(section): - d[opt] = self.get(section, opt) - return d - - def get(self, section, option, *args, **kwargs): - """Get a value, replacing environment variables also. 
- - The arguments are the same as `RawConfigParser.get`, but in the found - value, ``$WORD`` or ``${WORD}`` are replaced by the value of the - environment variable ``WORD``. - - Returns the finished value. - - """ - for section_prefix in self.section_prefixes: - real_section = section_prefix + section - if configparser.RawConfigParser.has_option(self, real_section, option): - break - else: - raise configparser.NoOptionError(option, section) - - v = configparser.RawConfigParser.get(self, real_section, option, *args, **kwargs) - v = substitute_variables(v, os.environ) - return v - - def getlist(self, section, option): - """Read a list of strings. - - The value of `section` and `option` is treated as a comma- and newline- - separated list of strings. Each value is stripped of whitespace. - - Returns the list of strings. - - """ - value_list = self.get(section, option) - values = [] - for value_line in value_list.split('\n'): - for value in value_line.split(','): - value = value.strip() - if value: - values.append(value) - return values - - def getregexlist(self, section, option): - """Read a list of full-line regexes. - - The value of `section` and `option` is treated as a newline-separated - list of regexes. Each value is stripped of whitespace. - - Returns the list of strings. - - """ - line_list = self.get(section, option) - value_list = [] - for value in line_list.splitlines(): - value = value.strip() - try: - re.compile(value) - except re.error as e: - raise CoverageException( - "Invalid [%s].%s value %r: %s" % (section, option, value, e) - ) - if value: - value_list.append(value) - return value_list - - -# The default line exclusion regexes. -DEFAULT_EXCLUDE = [ - r'#\s*(pragma|PRAGMA)[:\s]?\s*(no|NO)\s*(cover|COVER)', -] - -# The default partial branch regexes, to be modified by the user. -DEFAULT_PARTIAL = [ - r'#\s*(pragma|PRAGMA)[:\s]?\s*(no|NO)\s*(branch|BRANCH)', -] - -# The default partial branch regexes, based on Python semantics. 
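The `getlist` behavior documented above (comma- and newline-separated, whitespace-stripped, blanks dropped) can be isolated into a small function:

```python
def parse_list(value):
    """Split a config value the way HandyConfigParser.getlist does:
    comma- and newline-separated items, stripped, with blanks dropped."""
    values = []
    for line in value.split('\n'):
        for item in line.split(','):
            item = item.strip()
            if item:
                values.append(item)
    return values
```

This is why an `omit` setting can be written either inline (`a.py, b.py`) or as an indented multi-line block in the rc file.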
-# These are any Python branching constructs that can't actually execute all -# their branches. -DEFAULT_PARTIAL_ALWAYS = [ - 'while (True|1|False|0):', - 'if (True|1|False|0):', -] - - -class CoverageConfig(object): - """Coverage.py configuration. - - The attributes of this class are the various settings that control the - operation of coverage.py. - - """ - # pylint: disable=too-many-instance-attributes - - def __init__(self): - """Initialize the configuration attributes to their defaults.""" - # Metadata about the config. - # We tried to read these config files. - self.attempted_config_files = [] - # We did read these config files, but maybe didn't find any content for us. - self.config_files_read = [] - # The file that gave us our configuration. - self.config_file = None - self._config_contents = None - - # Defaults for [run] and [report] - self._include = None - self._omit = None - - # Defaults for [run] - self.branch = False - self.command_line = None - self.concurrency = None - self.context = None - self.cover_pylib = False - self.data_file = ".coverage" - self.debug = [] - self.disable_warnings = [] - self.dynamic_context = None - self.note = None - self.parallel = False - self.plugins = [] - self.relative_files = False - self.run_include = None - self.run_omit = None - self.source = None - self.source_pkgs = [] - self.timid = False - self._crash = None - - # Defaults for [report] - self.exclude_list = DEFAULT_EXCLUDE[:] - self.fail_under = 0.0 - self.ignore_errors = False - self.report_include = None - self.report_omit = None - self.partial_always_list = DEFAULT_PARTIAL_ALWAYS[:] - self.partial_list = DEFAULT_PARTIAL[:] - self.precision = 0 - self.report_contexts = None - self.show_missing = False - self.skip_covered = False - self.skip_empty = False - self.sort = None - - # Defaults for [html] - self.extra_css = None - self.html_dir = "htmlcov" - self.html_skip_covered = None - self.html_skip_empty = None - self.html_title = "Coverage report" - 
self.show_contexts = False - - # Defaults for [xml] - self.xml_output = "coverage.xml" - self.xml_package_depth = 99 - - # Defaults for [json] - self.json_output = "coverage.json" - self.json_pretty_print = False - self.json_show_contexts = False - - # Defaults for [paths] - self.paths = collections.OrderedDict() - - # Options for plugins - self.plugin_options = {} - self.suppress_plugin_errors = True - - MUST_BE_LIST = [ - "debug", "concurrency", "plugins", - "report_omit", "report_include", - "run_omit", "run_include", - ] - - def from_args(self, **kwargs): - """Read config values from `kwargs`.""" - for k, v in iitems(kwargs): - if v is not None: - if k in self.MUST_BE_LIST and isinstance(v, string_class): - v = [v] - setattr(self, k, v) - - def from_resource(self, resource_name): - assert getattr(sys, 'is_standalone_binary', False), 'You have used method by mistake in script, not binary' - cp, self._config_contents = _load_config_from_resource(resource_name) - return self._parse_config(cp, resource_name, True) - - @contract(filename=str) - def from_file(self, filename, our_file): - """Read configuration from a .rc file. - - `filename` is a file name to read. - - `our_file` is True if this config file is specifically for coverage, - False if we are examining another config file (tox.ini, setup.cfg) - for possible settings. - - Returns True or False, whether the file could be read, and it had some - coverage.py settings in it. 
- - """ - _, ext = os.path.splitext(filename) - if ext == '.toml': - cp = TomlConfigParser(our_file) - else: - cp = HandyConfigParser(our_file) - - self.attempted_config_files.append(filename) - - try: - files_read = cp.read(filename) - except (configparser.Error, TomlDecodeError) as err: - raise CoverageException("Couldn't read config file %s: %s" % (filename, err)) - if not files_read: - return False - - self.config_files_read.extend(map(os.path.abspath, files_read)) - - return self._parse_config(cp, filename, our_file) - - def _parse_config(self, cp, filename, our_file): - any_set = False - try: - for option_spec in self.CONFIG_FILE_OPTIONS: - was_set = self._set_attr_from_config_option(cp, *option_spec) - if was_set: - any_set = True - except ValueError as err: - raise CoverageException("Couldn't read config file %s: %s" % (filename, err)) - - # Check that there are no unrecognized options. - all_options = collections.defaultdict(set) - for option_spec in self.CONFIG_FILE_OPTIONS: - section, option = option_spec[1].split(":") - all_options[section].add(option) - - for section, options in iitems(all_options): - real_section = cp.has_section(section) - if real_section: - for unknown in set(cp.options(section)) - options: - raise CoverageException( - "Unrecognized option '[%s] %s=' in config file %s" % ( - real_section, unknown, filename - ) - ) - - # [paths] is special - if cp.has_section('paths'): - for option in cp.options('paths'): - self.paths[option] = cp.getlist('paths', option) - any_set = True - - # plugins can have options - for plugin in self.plugins: - if cp.has_section(plugin): - self.plugin_options[plugin] = cp.get_section(plugin) - any_set = True - - # Was this file used as a config file? If it's specifically our file, - # then it was used. If we're piggybacking on someone else's file, - # then it was only used if we found some settings in it. 
- if our_file: - used = True - else: - used = any_set - - if used: - self.config_file = os.path.abspath(filename) - if not getattr(sys, 'is_standalone_binary', False): - with open(filename, "rb") as f: - self._config_contents = f.read() - - return used - - def copy(self): - """Return a copy of the configuration.""" - return copy.deepcopy(self) - - CONFIG_FILE_OPTIONS = [ - # These are *args for _set_attr_from_config_option: - # (attr, where, type_="") - # - # attr is the attribute to set on the CoverageConfig object. - # where is the section:name to read from the configuration file. - # type_ is the optional type to apply, by using .getTYPE to read the - # configuration value from the file. - - # [run] - ('branch', 'run:branch', 'boolean'), - ('command_line', 'run:command_line'), - ('concurrency', 'run:concurrency', 'list'), - ('context', 'run:context'), - ('cover_pylib', 'run:cover_pylib', 'boolean'), - ('data_file', 'run:data_file'), - ('debug', 'run:debug', 'list'), - ('disable_warnings', 'run:disable_warnings', 'list'), - ('dynamic_context', 'run:dynamic_context'), - ('note', 'run:note'), - ('parallel', 'run:parallel', 'boolean'), - ('plugins', 'run:plugins', 'list'), - ('relative_files', 'run:relative_files', 'boolean'), - ('run_include', 'run:include', 'list'), - ('run_omit', 'run:omit', 'list'), - ('source', 'run:source', 'list'), - ('source_pkgs', 'run:source_pkgs', 'list'), - ('timid', 'run:timid', 'boolean'), - ('_crash', 'run:_crash'), - ('suppress_plugin_errors', 'run:suppress_plugin_errors', 'boolean'), - - # [report] - ('exclude_list', 'report:exclude_lines', 'regexlist'), - ('fail_under', 'report:fail_under', 'float'), - ('ignore_errors', 'report:ignore_errors', 'boolean'), - ('partial_always_list', 'report:partial_branches_always', 'regexlist'), - ('partial_list', 'report:partial_branches', 'regexlist'), - ('precision', 'report:precision', 'int'), - ('report_contexts', 'report:contexts', 'list'), - ('report_include', 'report:include', 'list'), - 
('report_omit', 'report:omit', 'list'), - ('show_missing', 'report:show_missing', 'boolean'), - ('skip_covered', 'report:skip_covered', 'boolean'), - ('skip_empty', 'report:skip_empty', 'boolean'), - ('sort', 'report:sort'), - - # [html] - ('extra_css', 'html:extra_css'), - ('html_dir', 'html:directory'), - ('html_skip_covered', 'html:skip_covered', 'boolean'), - ('html_skip_empty', 'html:skip_empty', 'boolean'), - ('html_title', 'html:title'), - ('show_contexts', 'html:show_contexts', 'boolean'), - - # [xml] - ('xml_output', 'xml:output'), - ('xml_package_depth', 'xml:package_depth', 'int'), - - # [json] - ('json_output', 'json:output'), - ('json_pretty_print', 'json:pretty_print', 'boolean'), - ('json_show_contexts', 'json:show_contexts', 'boolean'), - ] - - def _set_attr_from_config_option(self, cp, attr, where, type_=''): - """Set an attribute on self if it exists in the ConfigParser. - - Returns True if the attribute was set. - - """ - section, option = where.split(":") - if cp.has_option(section, option): - method = getattr(cp, 'get' + type_) - setattr(self, attr, method(section, option)) - return True - return False - - def get_plugin_options(self, plugin): - """Get a dictionary of options for the plugin named `plugin`.""" - return self.plugin_options.get(plugin, {}) - - def set_option(self, option_name, value): - """Set an option in the configuration. - - `option_name` is a colon-separated string indicating the section and - option name. For example, the ``branch`` option in the ``[run]`` - section of the config file would be indicated with `"run:branch"`. - - `value` is the new value for the option. - - """ - # Special-cased options. - if option_name == "paths": - self.paths = value - return - - # Check all the hard-coded options. - for option_spec in self.CONFIG_FILE_OPTIONS: - attr, where = option_spec[:2] - if where == option_name: - setattr(self, attr, value) - return - - # See if it's a plugin option. 
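`set_option` and `get_option` below dispatch on colon-separated `"section:option"` names against the `CONFIG_FILE_OPTIONS` table. A condensed sketch of that dispatch, with a two-entry table standing in for the full list:

```python
class OptionsSketch:
    """Illustrative miniature of CoverageConfig's option dispatch."""

    SPECS = [("branch", "run:branch"), ("fail_under", "report:fail_under")]

    def __init__(self):
        self.branch = False
        self.fail_under = 0.0

    def set_option(self, name, value):
        for attr, where in self.SPECS:
            if where == name:
                setattr(self, attr, value)
                return
        raise KeyError("No such option: %r" % name)

    def get_option(self, name):
        for attr, where in self.SPECS:
            if where == name:
                return getattr(self, attr)
        raise KeyError("No such option: %r" % name)
```

Driving everything through one table is what lets the same option be set from a file, an environment variable, or a constructor argument without duplicated parsing code.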
- plugin_name, _, key = option_name.partition(":") - if key and plugin_name in self.plugins: - self.plugin_options.setdefault(plugin_name, {})[key] = value - return - - # If we get here, we didn't find the option. - raise CoverageException("No such option: %r" % option_name) - - def get_option(self, option_name): - """Get an option from the configuration. - - `option_name` is a colon-separated string indicating the section and - option name. For example, the ``branch`` option in the ``[run]`` - section of the config file would be indicated with `"run:branch"`. - - Returns the value of the option. - - """ - # Special-cased options. - if option_name == "paths": - return self.paths - - # Check all the hard-coded options. - for option_spec in self.CONFIG_FILE_OPTIONS: - attr, where = option_spec[:2] - if where == option_name: - return getattr(self, attr) - - # See if it's a plugin option. - plugin_name, _, key = option_name.partition(":") - if key and plugin_name in self.plugins: - return self.plugin_options.get(plugin_name, {}).get(key) - - # If we get here, we didn't find the option. - raise CoverageException("No such option: %r" % option_name) - - def post_process_file(self, path): - """Make final adjustments to a file path to make it usable.""" - return os.path.expanduser(path) - - def post_process(self): - """Make final adjustments to settings to make them usable.""" - self.data_file = self.post_process_file(self.data_file) - self.html_dir = self.post_process_file(self.html_dir) - self.xml_output = self.post_process_file(self.xml_output) - self.paths = collections.OrderedDict( - (k, [self.post_process_file(f) for f in v]) - for k, v in self.paths.items() - ) - - -def config_files_to_try(config_file): - """What config files should we try to read? - - Returns a list of tuples: - (filename, is_our_file, was_file_specified) - """ - - # Some API users were specifying ".coveragerc" to mean the same as - # True, so make it so. 
- if config_file == ".coveragerc": - config_file = True - specified_file = (config_file is not True) - if not specified_file: - # No file was specified. Check COVERAGE_RCFILE. - config_file = os.environ.get('COVERAGE_RCFILE') - if config_file: - specified_file = True - if not specified_file: - # Still no file specified. Default to .coveragerc - config_file = ".coveragerc" - files_to_try = [ - (config_file, True, specified_file), - ("setup.cfg", False, False), - ("tox.ini", False, False), - ("pyproject.toml", False, False), - ] - return files_to_try - - -def read_coverage_config(config_file, **kwargs): - """Read the coverage.py configuration. - - Arguments: - config_file: a boolean or string, see the `Coverage` class for the - tricky details. - all others: keyword arguments from the `Coverage` class, used for - setting values in the configuration. - - Returns: - config: - config is a CoverageConfig object read from the appropriate - configuration file. - - """ - # Build the configuration from a number of sources: - # 1) defaults: - config = CoverageConfig() - - # 1.1 built-in config - if getattr(sys, 'is_standalone_binary', False): - config.from_resource("/coverage_plugins/coveragerc.txt") - - # 2) from a file: - if config_file: - files_to_try = config_files_to_try(config_file) - - for fname, our_file, specified_file in files_to_try: - if getattr(sys, 'is_standalone_binary', False) and fname == "/coverage_plugins/coveragerc.txt": - continue - config_read = config.from_file(fname, our_file=our_file) - if config_read: - break - if specified_file: - raise CoverageException("Couldn't read '%s' as a config file" % fname) - - # $set_env.py: COVERAGE_DEBUG - Options for --debug. 
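The search order established by `config_files_to_try` above can be condensed into a standalone function to show the precedence: an explicit file (or `COVERAGE_RCFILE`) first, then `.coveragerc`, then the shared files that may carry `[coverage:*]` sections. The `environ` parameter is added here for testability:

```python
import os

def files_to_try(config_file, environ=os.environ):
    """Reproduce the search order of config_files_to_try above."""
    if config_file == ".coveragerc":
        config_file = True                      # ".coveragerc" means the default
    specified = config_file is not True
    if not specified:
        config_file = environ.get("COVERAGE_RCFILE")
        specified = bool(config_file)
    if not specified:
        config_file = ".coveragerc"
    return [
        (config_file, True, specified),
        ("setup.cfg", False, False),
        ("tox.ini", False, False),
        ("pyproject.toml", False, False),
    ]
```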
- # 3) from environment variables: - env_data_file = os.environ.get('COVERAGE_FILE') - if env_data_file: - config.data_file = env_data_file - debugs = os.environ.get('COVERAGE_DEBUG') - if debugs: - config.debug.extend(d.strip() for d in debugs.split(",")) - - # 4) from constructor arguments: - config.from_args(**kwargs) - - # Once all the config has been collected, there's a little post-processing - # to do. - config.post_process() - - return config - - -def _load_config_from_resource(resource_name): - from io import StringIO - from library.python import resource - - config_data = resource.find(resource_name) - if config_data is None: - raise IOError("No such resource: " + resource_name) - - config_data = config_data.decode('utf-8') - cp = HandyConfigParser(True) - try: - cp.readfp(StringIO(config_data)) - except configparser.Error as err: - raise CoverageException("Couldn't read config %s: %s" % (resource_name, err)) - return cp, config_data diff --git a/contrib/python/coverage/py2/coverage/context.py b/contrib/python/coverage/py2/coverage/context.py deleted file mode 100644 index ea13da21edd..00000000000 --- a/contrib/python/coverage/py2/coverage/context.py +++ /dev/null @@ -1,91 +0,0 @@ -# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 -# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt - -"""Determine contexts for coverage.py""" - - -def combine_context_switchers(context_switchers): - """Create a single context switcher from multiple switchers. - - `context_switchers` is a list of functions that take a frame as an - argument and return a string to use as the new context label. - - Returns a function that composites `context_switchers` functions, or None - if `context_switchers` is an empty list. - - When invoked, the combined switcher calls `context_switchers` one-by-one - until a string is returned. The combined switcher returns None if all - `context_switchers` return None. 
- """ - if not context_switchers: - return None - - if len(context_switchers) == 1: - return context_switchers[0] - - def should_start_context(frame): - """The combiner for multiple context switchers.""" - for switcher in context_switchers: - new_context = switcher(frame) - if new_context is not None: - return new_context - return None - - return should_start_context - - -def should_start_context_test_function(frame): - """Is this frame calling a test_* function?""" - co_name = frame.f_code.co_name - if co_name.startswith("test") or co_name == "runTest": - return qualname_from_frame(frame) - return None - - -def qualname_from_frame(frame): - """Get a qualified name for the code running in `frame`.""" - co = frame.f_code - fname = co.co_name - method = None - if co.co_argcount and co.co_varnames[0] == "self": - self = frame.f_locals["self"] - method = getattr(self, fname, None) - - if method is None: - func = frame.f_globals.get(fname) - if func is None: - return None - return func.__module__ + '.' + fname - - func = getattr(method, '__func__', None) - if func is None: - cls = self.__class__ - return cls.__module__ + '.' + cls.__name__ + "." + fname - - if hasattr(func, '__qualname__'): - qname = func.__module__ + '.' + func.__qualname__ - else: - for cls in getattr(self.__class__, '__mro__', ()): - f = cls.__dict__.get(fname, None) - if f is None: - continue - if f is func: - qname = cls.__module__ + '.' + cls.__name__ + "." + fname - break - else: - # Support for old-style classes. - def mro(bases): - for base in bases: - f = base.__dict__.get(fname, None) - if f is func: - return base.__module__ + '.' + base.__name__ + "." + fname - for base in bases: - qname = mro(base.__bases__) - if qname is not None: - return qname - return None - qname = mro([self.__class__]) - if qname is None: - qname = func.__module__ + '.' 
+ fname - - return qname diff --git a/contrib/python/coverage/py2/coverage/control.py b/contrib/python/coverage/py2/coverage/control.py deleted file mode 100644 index 605b50c26b4..00000000000 --- a/contrib/python/coverage/py2/coverage/control.py +++ /dev/null @@ -1,1162 +0,0 @@ -# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 -# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt - -"""Core control stuff for coverage.py.""" - -import atexit -import collections -import contextlib -import os -import os.path -import platform -import sys -import time -import json - -from coverage import env -from coverage.annotate import AnnotateReporter -from coverage.backward import string_class, iitems -from coverage.collector import Collector, CTracer -from coverage.config import read_coverage_config -from coverage.context import should_start_context_test_function, combine_context_switchers -from coverage.data import CoverageData, combine_parallel_data -from coverage.debug import DebugControl, short_stack, write_formatted_info -from coverage.disposition import disposition_debug_msg -from coverage.files import PathAliases, abs_file, canonical_filename, relative_filename, set_relative_directory -from coverage.html import HtmlReporter -from coverage.inorout import InOrOut -from coverage.jsonreport import JsonReporter -from coverage.misc import CoverageException, bool_or_none, join_regex -from coverage.misc import DefaultValue, ensure_dir_for_file, isolate_module -from coverage.plugin import FileReporter -from coverage.plugin_support import Plugins -from coverage.python import PythonFileReporter -from coverage.report import render_report -from coverage.results import Analysis, Numbers -from coverage.summary import SummaryReporter -from coverage.xmlreport import XmlReporter - -try: - from coverage.multiproc import patch_multiprocessing -except ImportError: # pragma: only jython - # Jython has no multiprocessing module. 
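The context-switcher combiner defined in context.py above composes switcher callables on a first-non-None basis; a condensed, self-contained restatement of that logic:

```python
def combine_context_switchers(context_switchers):
    """Return one switcher that tries each of `context_switchers` in turn."""
    if not context_switchers:
        return None
    if len(context_switchers) == 1:
        return context_switchers[0]

    def should_start_context(frame):
        # The first switcher to return a non-None label wins.
        for switcher in context_switchers:
            new_context = switcher(frame)
            if new_context is not None:
                return new_context
        return None

    return should_start_context
```

With no switchers the combiner is None, a single switcher is returned unchanged (no wrapper overhead), and otherwise the switchers are consulted in registration order.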
- patch_multiprocessing = None - -os = isolate_module(os) - -@contextlib.contextmanager -def override_config(cov, **kwargs): - """Temporarily tweak the configuration of `cov`. - - The arguments are applied to `cov.config` with the `from_args` method. - At the end of the with-statement, the old configuration is restored. - """ - original_config = cov.config - cov.config = cov.config.copy() - try: - cov.config.from_args(**kwargs) - yield - finally: - cov.config = original_config - - -_DEFAULT_DATAFILE = DefaultValue("MISSING") - -class Coverage(object): - """Programmatic access to coverage.py. - - To use:: - - from coverage import Coverage - - cov = Coverage() - cov.start() - #.. call your code .. - cov.stop() - cov.html_report(directory='covhtml') - - Note: in keeping with Python custom, names starting with underscore are - not part of the public API. They might stop working at any point. Please - limit yourself to documented methods to avoid problems. - - """ - - # The stack of started Coverage instances. - _instances = [] - - @classmethod - def current(cls): - """Get the latest started `Coverage` instance, if any. - - Returns: a `Coverage` instance, or None. - - .. versionadded:: 5.0 - - """ - if cls._instances: - return cls._instances[-1] - else: - return None - - def __init__( - self, data_file=_DEFAULT_DATAFILE, data_suffix=None, cover_pylib=None, - auto_data=False, timid=None, branch=None, config_file=True, - source=None, source_pkgs=None, omit=None, include=None, debug=None, - concurrency=None, check_preimported=False, context=None, - ): # pylint: disable=too-many-arguments - """ - Many of these arguments duplicate and override values that can be - provided in a configuration file. Parameters that are missing here - will use values from the config file. - - `data_file` is the base name of the data file to use. The config value - defaults to ".coverage". None can be provided to prevent writing a data - file. 
`data_suffix` is appended (with a dot) to `data_file` to create - the final file name. If `data_suffix` is simply True, then a suffix is - created with the machine and process identity included. - - `cover_pylib` is a boolean determining whether Python code installed - with the Python interpreter is measured. This includes the Python - standard library and any packages installed with the interpreter. - - If `auto_data` is true, then any existing data file will be read when - coverage measurement starts, and data will be saved automatically when - measurement stops. - - If `timid` is true, then a slower and simpler trace function will be - used. This is important for some environments where manipulation of - tracing functions breaks the faster trace function. - - If `branch` is true, then branch coverage will be measured in addition - to the usual statement coverage. - - `config_file` determines what configuration file to read: - - * If it is ".coveragerc", it is interpreted as if it were True, - for backward compatibility. - - * If it is a string, it is the name of the file to read. If the - file can't be read, it is an error. - - * If it is True, then a few standard files names are tried - (".coveragerc", "setup.cfg", "tox.ini"). It is not an error for - these files to not be found. - - * If it is False, then no configuration file is read. - - `source` is a list of file paths or package names. Only code located - in the trees indicated by the file paths or package names will be - measured. - - `source_pkgs` is a list of package names. It works the same as - `source`, but can be used to name packages where the name can also be - interpreted as a file path. - - `include` and `omit` are lists of file name patterns. Files that match - `include` will be measured, files that match `omit` will not. Each - will also accept a single string argument. - - `debug` is a list of strings indicating what debugging information is - desired. 
- - `concurrency` is a string indicating the concurrency library being used - in the measured code. Without this, coverage.py will get incorrect - results if these libraries are in use. Valid strings are "greenlet", - "eventlet", "gevent", "multiprocessing", or "thread" (the default). - This can also be a list of these strings. - - If `check_preimported` is true, then when coverage is started, the - already-imported files will be checked to see if they should be - measured by coverage. Importing measured files before coverage is - started can mean that code is missed. - - `context` is a string to use as the :ref:`static context - <static_contexts>` label for collected data. - - .. versionadded:: 4.0 - The `concurrency` parameter. - - .. versionadded:: 4.2 - The `concurrency` parameter can now be a list of strings. - - .. versionadded:: 5.0 - The `check_preimported` and `context` parameters. - - .. versionadded:: 5.3 - The `source_pkgs` parameter. - - """ - # data_file=None means no disk file at all. data_file missing means - # use the value from the config file. - self._no_disk = data_file is None - if data_file is _DEFAULT_DATAFILE: - data_file = None - - # Build our configuration from a number of sources. - self.config = read_coverage_config( - config_file=config_file, - data_file=data_file, cover_pylib=cover_pylib, timid=timid, - branch=branch, parallel=bool_or_none(data_suffix), - source=source, source_pkgs=source_pkgs, run_omit=omit, run_include=include, debug=debug, - report_omit=omit, report_include=include, - concurrency=concurrency, context=context, - ) - - # This is injectable by tests. - self._debug_file = None - - self._auto_load = self._auto_save = auto_data - self._data_suffix_specified = data_suffix - - # Is it ok for no data to be collected? - self._warn_no_data = True - self._warn_unimported_source = True - self._warn_preimported_source = check_preimported - self._no_warn_slugs = None - - # A record of all the warnings that have been issued. 
- self._warnings = [] - - # Other instance attributes, set later. - self._data = self._collector = None - self._plugins = None - self._inorout = None - self._data_suffix = self._run_suffix = None - self._exclude_re = None - self._debug = None - self._file_mapper = None - - # State machine variables: - # Have we initialized everything? - self._inited = False - self._inited_for_start = False - # Have we started collecting and not stopped it? - self._started = False - # Should we write the debug output? - self._should_write_debug = True - - # If we have sub-process measurement happening automatically, then we - # want any explicit creation of a Coverage object to mean, this process - # is already coverage-aware, so don't auto-measure it. By now, the - # auto-creation of a Coverage object has already happened. But we can - # find it and tell it not to save its data. - if not env.METACOV: - _prevent_sub_process_measurement() - - # Store constructor args to reproduce Coverage object in a subprocess created via multiprocessing.Process - self._dumped_args = json.dumps(dict( - data_file=data_file, data_suffix=data_suffix, cover_pylib=cover_pylib, - auto_data=auto_data, timid=timid, branch=branch, config_file=config_file, - source=source, omit=omit, include=include, debug=debug, - concurrency=concurrency - )) - - def _init(self): - """Set all the initial state. - - This is called by the public methods to initialize state. This lets us - construct a :class:`Coverage` object, then tweak its state before this - function is called. - - """ - if self._inited: - return - - self._inited = True - - # Create and configure the debugging controller. COVERAGE_DEBUG_FILE - # is an environment variable, the name of a file to append debug logs - # to. - self._debug = DebugControl(self.config.debug, self._debug_file) - - if "multiprocessing" in (self.config.concurrency or ()): - # Multi-processing uses parallel for the subprocesses, so also use - # it for the main process. 
- self.config.parallel = True - - # _exclude_re is a dict that maps exclusion list names to compiled regexes. - self._exclude_re = {} - - set_relative_directory() - - if getattr(sys, 'is_standalone_binary', False): - self._file_mapper = canonical_filename - else: - self._file_mapper = relative_filename if self.config.relative_files else abs_file - - # Load plugins - self._plugins = Plugins.load_plugins(self.config.plugins, self.config, self._debug) - - # Run configuring plugins. - for plugin in self._plugins.configurers: - # We need an object with set_option and get_option. Either self or - # self.config will do. Choosing randomly stops people from doing - # other things with those objects, against the public API. Yes, - # this is a bit childish. :) - plugin.configure([self, self.config][int(time.time()) % 2]) - - def _post_init(self): - """Stuff to do after everything is initialized.""" - if self._should_write_debug: - self._should_write_debug = False - self._write_startup_debug() - - # '[run] _crash' will raise an exception if the value is close by in - # the call stack, for testing error handling. 
- if self.config._crash and self.config._crash in short_stack(limit=4): - raise Exception("Crashing because called by {}".format(self.config._crash)) - - def _write_startup_debug(self): - """Write out debug info at startup if needed.""" - wrote_any = False - with self._debug.without_callers(): - if self._debug.should('config'): - config_info = sorted(self.config.__dict__.items()) - config_info = [(k, v) for k, v in config_info if not k.startswith('_')] - write_formatted_info(self._debug, "config", config_info) - wrote_any = True - - if self._debug.should('sys'): - write_formatted_info(self._debug, "sys", self.sys_info()) - for plugin in self._plugins: - header = "sys: " + plugin._coverage_plugin_name - info = plugin.sys_info() - write_formatted_info(self._debug, header, info) - wrote_any = True - - if wrote_any: - write_formatted_info(self._debug, "end", ()) - - def _should_trace(self, filename, frame): - """Decide whether to trace execution in `filename`. - - Calls `_should_trace_internal`, and returns the FileDisposition. - - """ - disp = self._inorout.should_trace(filename, frame) - if self._debug.should('trace'): - self._debug.write(disposition_debug_msg(disp)) - return disp - - def _check_include_omit_etc(self, filename, frame): - """Check a file name against the include/omit/etc, rules, verbosely. - - Returns a boolean: True if the file should be traced, False if not. - - """ - reason = self._inorout.check_include_omit_etc(filename, frame) - if self._debug.should('trace'): - if not reason: - msg = "Including %r" % (filename,) - else: - msg = "Not including %r: %s" % (filename, reason) - self._debug.write(msg) - - return not reason - - def _warn(self, msg, slug=None, once=False): - """Use `msg` as a warning. - - For warning suppression, use `slug` as the shorthand. - - If `once` is true, only show this warning once (determined by the - slug.) 
- - """ - if self._no_warn_slugs is None: - self._no_warn_slugs = list(self.config.disable_warnings) - - if slug in self._no_warn_slugs: - # Don't issue the warning - return - - self._warnings.append(msg) - if slug: - msg = "%s (%s)" % (msg, slug) - if self._debug.should('pid'): - msg = "[%d] %s" % (os.getpid(), msg) - sys.stderr.write("Coverage.py warning: %s\n" % msg) - - if once: - self._no_warn_slugs.append(slug) - - def get_option(self, option_name): - """Get an option from the configuration. - - `option_name` is a colon-separated string indicating the section and - option name. For example, the ``branch`` option in the ``[run]`` - section of the config file would be indicated with `"run:branch"`. - - Returns the value of the option. The type depends on the option - selected. - - As a special case, an `option_name` of ``"paths"`` will return an - OrderedDict with the entire ``[paths]`` section value. - - .. versionadded:: 4.0 - - """ - return self.config.get_option(option_name) - - def set_option(self, option_name, value): - """Set an option in the configuration. - - `option_name` is a colon-separated string indicating the section and - option name. For example, the ``branch`` option in the ``[run]`` - section of the config file would be indicated with ``"run:branch"``. - - `value` is the new value for the option. This should be an - appropriate Python value. For example, use True for booleans, not the - string ``"True"``. - - As an example, calling:: - - cov.set_option("run:branch", True) - - has the same effect as this configuration file:: - - [run] - branch = True - - As a special case, an `option_name` of ``"paths"`` will replace the - entire ``[paths]`` section. The value should be an OrderedDict. - - .. 
versionadded:: 4.0 - - """ - self.config.set_option(option_name, value) - - def load(self): - """Load previously-collected coverage data from the data file.""" - self._init() - if self._collector: - self._collector.reset() - should_skip = self.config.parallel and not os.path.exists(self.config.data_file) - if not should_skip: - self._init_data(suffix=None) - self._post_init() - if not should_skip: - self._data.read() - - def _init_for_start(self): - """Initialization for start()""" - # Construct the collector. - concurrency = self.config.concurrency or () - if "multiprocessing" in concurrency: - if not patch_multiprocessing: - raise CoverageException( # pragma: only jython - "multiprocessing is not supported on this Python" - ) - patch_multiprocessing(rcfile=self.config.config_file, coverage_args=self._dumped_args) - - dycon = self.config.dynamic_context - if not dycon or dycon == "none": - context_switchers = [] - elif dycon == "test_function": - context_switchers = [should_start_context_test_function] - else: - raise CoverageException( - "Don't understand dynamic_context setting: {!r}".format(dycon) - ) - - context_switchers.extend( - plugin.dynamic_context for plugin in self._plugins.context_switchers - ) - - should_start_context = combine_context_switchers(context_switchers) - - self._collector = Collector( - should_trace=self._should_trace, - check_include=self._check_include_omit_etc, - should_start_context=should_start_context, - file_mapper=self._file_mapper, - timid=self.config.timid, - branch=self.config.branch, - warn=self._warn, - concurrency=concurrency, - ) - - suffix = self._data_suffix_specified - if suffix or self.config.parallel: - if not isinstance(suffix, string_class): - # if data_suffix=True, use .machinename.pid.random - suffix = True - else: - suffix = None - - self._init_data(suffix) - - self._collector.use_data(self._data, self.config.context) - - # Early warning if we aren't going to be able to support plugins. 
- if self._plugins.file_tracers and not self._collector.supports_plugins: - self._warn( - "Plugin file tracers (%s) aren't supported with %s" % ( - ", ".join( - plugin._coverage_plugin_name - for plugin in self._plugins.file_tracers - ), - self._collector.tracer_name(), - ) - ) - for plugin in self._plugins.file_tracers: - plugin._coverage_enabled = False - - # Create the file classifying substructure. - self._inorout = InOrOut( - warn=self._warn, - debug=(self._debug if self._debug.should('trace') else None), - ) - self._inorout.configure(self.config) - self._inorout.plugins = self._plugins - self._inorout.disp_class = self._collector.file_disposition_class - - # It's useful to write debug info after initing for start. - self._should_write_debug = True - - atexit.register(self._atexit) - - def _init_data(self, suffix): - """Create a data file if we don't have one yet.""" - if self._data is None: - # Create the data file. We do this at construction time so that the - # data file will be written into the directory where the process - # started rather than wherever the process eventually chdir'd to. - ensure_dir_for_file(self.config.data_file) - self._data = CoverageData( - basename=self.config.data_file, - suffix=suffix, - warn=self._warn, - debug=self._debug, - no_disk=self._no_disk, - ) - - def start(self): - """Start measuring code coverage. - - Coverage measurement only occurs in functions called after - :meth:`start` is invoked. Statements in the same scope as - :meth:`start` won't be measured. - - Once you invoke :meth:`start`, you must also call :meth:`stop` - eventually, or your process might not shut down cleanly. - - """ - self._init() - if not self._inited_for_start: - self._inited_for_start = True - self._init_for_start() - self._post_init() - - # Issue warnings for possible problems. - self._inorout.warn_conflicting_settings() - - # See if we think some code that would eventually be measured has - # already been imported. 
- if self._warn_preimported_source: - self._inorout.warn_already_imported_files() - - if self._auto_load: - self.load() - - self._collector.start() - self._started = True - self._instances.append(self) - - def stop(self): - """Stop measuring code coverage.""" - if self._instances: - if self._instances[-1] is self: - self._instances.pop() - if self._started: - self._collector.stop() - self._started = False - - def _atexit(self): - """Clean up on process shutdown.""" - if self._debug.should("process"): - self._debug.write("atexit: pid: {}, instance: {!r}".format(os.getpid(), self)) - if self._started: - self.stop() - if self._auto_save: - self.save() - - def erase(self): - """Erase previously collected coverage data. - - This removes the in-memory data collected in this session as well as - discarding the data file. - - """ - self._init() - self._post_init() - if self._collector: - self._collector.reset() - self._init_data(suffix=None) - self._data.erase(parallel=self.config.parallel) - self._data = None - self._inited_for_start = False - - def switch_context(self, new_context): - """Switch to a new dynamic context. - - `new_context` is a string to use as the :ref:`dynamic context - <dynamic_contexts>` label for collected data. If a :ref:`static - context <static_contexts>` is in use, the static and dynamic context - labels will be joined together with a pipe character. - - Coverage collection must be started already. - - .. 
versionadded:: 5.0 - - """ - if not self._started: # pragma: part started - raise CoverageException( - "Cannot switch context, coverage is not started" - ) - - if self._collector.should_start_context: - self._warn("Conflicting dynamic contexts", slug="dynamic-conflict", once=True) - - self._collector.switch_context(new_context) - - def clear_exclude(self, which='exclude'): - """Clear the exclude list.""" - self._init() - setattr(self.config, which + "_list", []) - self._exclude_regex_stale() - - def exclude(self, regex, which='exclude'): - """Exclude source lines from execution consideration. - - A number of lists of regular expressions are maintained. Each list - selects lines that are treated differently during reporting. - - `which` determines which list is modified. The "exclude" list selects - lines that are not considered executable at all. The "partial" list - indicates lines with branches that are not taken. - - `regex` is a regular expression. The regex is added to the specified - list. If any of the regexes in the list is found in a line, the line - is marked for special treatment during reporting. - - """ - self._init() - excl_list = getattr(self.config, which + "_list") - excl_list.append(regex) - self._exclude_regex_stale() - - def _exclude_regex_stale(self): - """Drop all the compiled exclusion regexes, a list was modified.""" - self._exclude_re.clear() - - def _exclude_regex(self, which): - """Return a compiled regex for the given exclusion list.""" - if which not in self._exclude_re: - excl_list = getattr(self.config, which + "_list") - self._exclude_re[which] = join_regex(excl_list) - return self._exclude_re[which] - - def get_exclude_list(self, which='exclude'): - """Return a list of excluded regex patterns. - - `which` indicates which list is desired. See :meth:`exclude` for the - lists that are available, and their meaning. 
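`_exclude_regex` above delegates to `coverage.misc.join_regex` to fold an exclusion list into a single pattern. A plausible sketch of that helper (OR-ing each regex inside a non-capturing group — an assumption about the deleted `misc` module, hedged accordingly):

```python
import re

def join_regex(regexes):
    """Combine several regexes into one by OR-ing non-capturing groups."""
    return "|".join("(?:%s)" % r for r in regexes)

# Two typical exclusion patterns, folded into one compiled-ready regex.
exclude = join_regex([r"#\s*pragma: no cover", r"raise NotImplementedError"])
```

Any source line matching either pattern is then marked for special treatment during reporting, as the `exclude` docstring above describes.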
- - """ - self._init() - return getattr(self.config, which + "_list") - - def save(self): - """Save the collected coverage data to the data file.""" - data = self.get_data() - data.write() - - def combine(self, data_paths=None, strict=False, keep=False): - """Combine together a number of similarly-named coverage data files. - - All coverage data files whose name starts with `data_file` (from the - coverage() constructor) will be read, and combined together into the - current measurements. - - `data_paths` is a list of files or directories from which data should - be combined. If no list is passed, then the data files from the - directory indicated by the current data file (probably the current - directory) will be combined. - - If `strict` is true, then it is an error to attempt to combine when - there are no data files to combine. - - If `keep` is true, then original input data files won't be deleted. - - .. versionadded:: 4.0 - The `data_paths` parameter. - - .. versionadded:: 4.3 - The `strict` parameter. - - .. versionadded: 5.5 - The `keep` parameter. - """ - self._init() - self._init_data(suffix=None) - self._post_init() - self.get_data() - - aliases = None - if self.config.paths: - aliases = PathAliases() - for paths in self.config.paths.values(): - result = paths[0] - for pattern in paths[1:]: - aliases.add(pattern, result) - - combine_parallel_data( - self._data, - aliases=aliases, - data_paths=data_paths, - strict=strict, - keep=keep, - ) - - def get_data(self): - """Get the collected data. - - Also warn about various problems collecting data. - - Returns a :class:`coverage.CoverageData`, the collected coverage data. - - .. 
versionadded:: 4.0 - - """ - self._init() - self._init_data(suffix=None) - self._post_init() - - for plugin in self._plugins: - if not plugin._coverage_enabled: - self._collector.plugin_was_disabled(plugin) - - if self._collector and self._collector.flush_data(): - self._post_save_work() - - return self._data - - def _post_save_work(self): - """After saving data, look for warnings, post-work, etc. - - Warn about things that should have happened but didn't. - Look for unexecuted files. - - """ - # If there are still entries in the source_pkgs_unmatched list, - # then we never encountered those packages. - if self._warn_unimported_source: - self._inorout.warn_unimported_source() - - # Find out if we got any data. - if not self._data and self._warn_no_data: - self._warn("No data was collected.", slug="no-data-collected") - - # Touch all the files that could have executed, so that we can - # mark completely unexecuted files as 0% covered. - if self._data is not None: - file_paths = collections.defaultdict(list) - for file_path, plugin_name in self._inorout.find_possibly_unexecuted_files(): - file_path = self._file_mapper(file_path) - file_paths[plugin_name].append(file_path) - for plugin_name, paths in file_paths.items(): - self._data.touch_files(paths, plugin_name) - - if self.config.note: - self._warn("The '[run] note' setting is no longer supported.") - - # Backward compatibility with version 1. - def analysis(self, morf): - """Like `analysis2` but doesn't return excluded line numbers.""" - f, s, _, m, mf = self.analysis2(morf) - return f, s, m, mf - - def analysis2(self, morf): - """Analyze a module. - - `morf` is a module or a file name. It will be analyzed to determine - its coverage statistics. The return value is a 5-tuple: - - * The file name for the module. - * A list of line numbers of executable statements. - * A list of line numbers of excluded statements. - * A list of line numbers of statements not run (missing from - execution). 
- * A readable formatted string of the missing line numbers. - - The analysis uses the source file itself and the current measured - coverage data. - - """ - analysis = self._analyze(morf) - return ( - analysis.filename, - sorted(analysis.statements), - sorted(analysis.excluded), - sorted(analysis.missing), - analysis.missing_formatted(), - ) - - def _analyze(self, it): - """Analyze a single morf or code unit. - - Returns an `Analysis` object. - - """ - # All reporting comes through here, so do reporting initialization. - self._init() - Numbers.set_precision(self.config.precision) - self._post_init() - - data = self.get_data() - if not isinstance(it, FileReporter): - it = self._get_file_reporter(it) - - return Analysis(data, it, self._file_mapper) - - def _get_file_reporter(self, morf): - """Get a FileReporter for a module or file name.""" - plugin = None - file_reporter = "python" - - if isinstance(morf, string_class): - if getattr(sys, 'is_standalone_binary', False): - # Leave morf in canonical format - relative to the arcadia root - mapped_morf = morf - else: - mapped_morf = self._file_mapper(morf) - plugin_name = self._data.file_tracer(mapped_morf) - if plugin_name: - plugin = self._plugins.get(plugin_name) - - if plugin: - file_reporter = plugin.file_reporter(mapped_morf) - if file_reporter is None: - raise CoverageException( - "Plugin %r did not provide a file reporter for %r." % ( - plugin._coverage_plugin_name, morf - ) - ) - - if file_reporter == "python": - file_reporter = PythonFileReporter(morf, self) - - return file_reporter - - def _get_file_reporters(self, morfs=None): - """Get a list of FileReporters for a list of modules or file names. - - For each module or file name in `morfs`, find a FileReporter. Return - the list of FileReporters. - - If `morfs` is a single module or file name, this returns a list of one - FileReporter. If `morfs` is empty or None, then the list of all files - measured is used to find the FileReporters. 
- - """ - if not morfs: - morfs = self._data.measured_files() - - # Be sure we have a collection. - if not isinstance(morfs, (list, tuple, set)): - morfs = [morfs] - - file_reporters = [self._get_file_reporter(morf) for morf in morfs] - return file_reporters - - def report( - self, morfs=None, show_missing=None, ignore_errors=None, - file=None, omit=None, include=None, skip_covered=None, - contexts=None, skip_empty=None, precision=None, sort=None - ): - """Write a textual summary report to `file`. - - Each module in `morfs` is listed, with counts of statements, executed - statements, missing statements, and a list of lines missed. - - If `show_missing` is true, then details of which lines or branches are - missing will be included in the report. If `ignore_errors` is true, - then a failure while reporting a single file will not stop the entire - report. - - `file` is a file-like object, suitable for writing. - - `include` is a list of file name patterns. Files that match will be - included in the report. Files matching `omit` will not be included in - the report. - - If `skip_covered` is true, don't report on files with 100% coverage. - - If `skip_empty` is true, don't report on empty files (those that have - no statements). - - `contexts` is a list of regular expressions. Only data from - :ref:`dynamic contexts <dynamic_contexts>` that match one of those - expressions (using :func:`re.search <python:re.search>`) will be - included in the report. - - `precision` is the number of digits to display after the decimal - point for percentages. - - All of the arguments default to the settings read from the - :ref:`configuration file <config>`. - - Returns a float, the total percentage covered. - - .. versionadded:: 4.0 - The `skip_covered` parameter. - - .. versionadded:: 5.0 - The `contexts` and `skip_empty` parameters. - - .. versionadded:: 5.2 - The `precision` parameter. 
- - """ - with override_config( - self, - ignore_errors=ignore_errors, report_omit=omit, report_include=include, - show_missing=show_missing, skip_covered=skip_covered, - report_contexts=contexts, skip_empty=skip_empty, precision=precision, - sort=sort - ): - reporter = SummaryReporter(self) - return reporter.report(morfs, outfile=file) - - def annotate( - self, morfs=None, directory=None, ignore_errors=None, - omit=None, include=None, contexts=None, - ): - """Annotate a list of modules. - - Each module in `morfs` is annotated. The source is written to a new - file, named with a ",cover" suffix, with each line prefixed with a - marker to indicate the coverage of the line. Covered lines have ">", - excluded lines have "-", and missing lines have "!". - - See :meth:`report` for other arguments. - - """ - with override_config(self, - ignore_errors=ignore_errors, report_omit=omit, - report_include=include, report_contexts=contexts, - ): - reporter = AnnotateReporter(self) - reporter.report(morfs, directory=directory) - - def html_report( - self, morfs=None, directory=None, ignore_errors=None, - omit=None, include=None, extra_css=None, title=None, - skip_covered=None, show_contexts=None, contexts=None, - skip_empty=None, precision=None, - ): - """Generate an HTML report. - - The HTML is written to `directory`. The file "index.html" is the - overview starting point, with links to more detailed pages for - individual modules. - - `extra_css` is a path to a file of other CSS to apply on the page. - It will be copied into the HTML directory. - - `title` is a text string (not HTML) to use as the title of the HTML - report. - - See :meth:`report` for other arguments. - - Returns a float, the total percentage covered. - - .. note:: - The HTML report files are generated incrementally based on the - source files and coverage results. If you modify the report files, - the changes will not be considered. You should be careful about - changing the files in the report folder. 
- - """ - with override_config(self, - ignore_errors=ignore_errors, report_omit=omit, report_include=include, - html_dir=directory, extra_css=extra_css, html_title=title, - html_skip_covered=skip_covered, show_contexts=show_contexts, report_contexts=contexts, - html_skip_empty=skip_empty, precision=precision, - ): - reporter = HtmlReporter(self) - return reporter.report(morfs) - - def xml_report( - self, morfs=None, outfile=None, ignore_errors=None, - omit=None, include=None, contexts=None, skip_empty=None, - ): - """Generate an XML report of coverage results. - - The report is compatible with Cobertura reports. - - Each module in `morfs` is included in the report. `outfile` is the - path to write the file to, "-" will write to stdout. - - See :meth:`report` for other arguments. - - Returns a float, the total percentage covered. - - """ - with override_config(self, - ignore_errors=ignore_errors, report_omit=omit, report_include=include, - xml_output=outfile, report_contexts=contexts, skip_empty=skip_empty, - ): - return render_report(self.config.xml_output, XmlReporter(self), morfs) - - def json_report( - self, morfs=None, outfile=None, ignore_errors=None, - omit=None, include=None, contexts=None, pretty_print=None, - show_contexts=None - ): - """Generate a JSON report of coverage results. - - Each module in `morfs` is included in the report. `outfile` is the - path to write the file to, "-" will write to stdout. - - See :meth:`report` for other arguments. - - Returns a float, the total percentage covered. - - .. 
versionadded:: 5.0 - - """ - with override_config(self, - ignore_errors=ignore_errors, report_omit=omit, report_include=include, - json_output=outfile, report_contexts=contexts, json_pretty_print=pretty_print, - json_show_contexts=show_contexts - ): - return render_report(self.config.json_output, JsonReporter(self), morfs) - - def sys_info(self): - """Return a list of (key, value) pairs showing internal information.""" - - import coverage as covmod - - self._init() - self._post_init() - - def plugin_info(plugins): - """Make an entry for the sys_info from a list of plug-ins.""" - entries = [] - for plugin in plugins: - entry = plugin._coverage_plugin_name - if not plugin._coverage_enabled: - entry += " (disabled)" - entries.append(entry) - return entries - - info = [ - ('version', covmod.__version__), - ('coverage', covmod.__file__), - ('tracer', self._collector.tracer_name() if self._collector else "-none-"), - ('CTracer', 'available' if CTracer else "unavailable"), - ('plugins.file_tracers', plugin_info(self._plugins.file_tracers)), - ('plugins.configurers', plugin_info(self._plugins.configurers)), - ('plugins.context_switchers', plugin_info(self._plugins.context_switchers)), - ('configs_attempted', self.config.attempted_config_files), - ('configs_read', self.config.config_files_read), - ('config_file', self.config.config_file), - ('config_contents', - repr(self.config._config_contents) - if self.config._config_contents - else '-none-' - ), - ('data_file', self._data.data_filename() if self._data is not None else "-none-"), - ('python', sys.version.replace('\n', '')), - ('platform', platform.platform()), - ('implementation', platform.python_implementation()), - ('executable', sys.executable), - ('def_encoding', sys.getdefaultencoding()), - ('fs_encoding', sys.getfilesystemencoding()), - ('pid', os.getpid()), - ('cwd', os.getcwd()), - ('path', sys.path), - ('environment', sorted( - ("%s = %s" % (k, v)) - for k, v in iitems(os.environ) - if any(slug in k for slug in 
("COV", "PY")) - )), - ('command_line', " ".join(getattr(sys, 'argv', ['-none-']))), - ] - - if self._inorout: - info.extend(self._inorout.sys_info()) - - info.extend(CoverageData.sys_info()) - - return info - - -# Mega debugging... -# $set_env.py: COVERAGE_DEBUG_CALLS - Lots and lots of output about calls to Coverage. -if int(os.environ.get("COVERAGE_DEBUG_CALLS", 0)): # pragma: debugging - from coverage.debug import decorate_methods, show_calls - - Coverage = decorate_methods(show_calls(show_args=True), butnot=['get_data'])(Coverage) - - -def process_startup(): - """Call this at Python start-up to perhaps measure coverage. - - If the environment variable COVERAGE_PROCESS_START is defined, coverage - measurement is started. The value of the variable is the config file - to use. - - There are two ways to configure your Python installation to invoke this - function when Python starts: - - #. Create or append to sitecustomize.py to add these lines:: - - import coverage - coverage.process_startup() - - #. Create a .pth file in your Python installation containing:: - - import coverage; coverage.process_startup() - - Returns the :class:`Coverage` instance that was started, or None if it was - not started by this call. - - """ - cps = os.environ.get("COVERAGE_PROCESS_START") - if not cps: - # No request for coverage, nothing to do. - return None - - # This function can be called more than once in a process. This happens - # because some virtualenv configurations make the same directory visible - # twice in sys.path. This means that the .pth file will be found twice, - # and executed twice, executing this function twice. We set a global - # flag (an attribute on this function) to indicate that coverage.py has - # already been started, so we can avoid doing it twice. - # - # https://github.com/nedbat/coveragepy/issues/340 has more details. 
- - if hasattr(process_startup, "coverage"): - # We've annotated this function before, so we must have already - # started coverage.py in this process. Nothing to do. - return None - - cov = Coverage(config_file=cps) - process_startup.coverage = cov - cov._warn_no_data = False - cov._warn_unimported_source = False - cov._warn_preimported_source = False - cov._auto_save = True - cov.start() - - return cov - - -def _prevent_sub_process_measurement(): - """Stop any subprocess auto-measurement from writing data.""" - auto_created_coverage = getattr(process_startup, "coverage", None) - if auto_created_coverage is not None: - auto_created_coverage._auto_save = False diff --git a/contrib/python/coverage/py2/coverage/ctracer/datastack.c b/contrib/python/coverage/py2/coverage/ctracer/datastack.c deleted file mode 100644 index a9cfcc2cf2c..00000000000 --- a/contrib/python/coverage/py2/coverage/ctracer/datastack.c +++ /dev/null @@ -1,50 +0,0 @@ -/* Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 */ -/* For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt */ - -#include "util.h" -#include "datastack.h" - -#define STACK_DELTA 20 - -int -DataStack_init(Stats *pstats, DataStack *pdata_stack) -{ - pdata_stack->depth = -1; - pdata_stack->stack = NULL; - pdata_stack->alloc = 0; - return RET_OK; -} - -void -DataStack_dealloc(Stats *pstats, DataStack *pdata_stack) -{ - int i; - - for (i = 0; i < pdata_stack->alloc; i++) { - Py_XDECREF(pdata_stack->stack[i].file_data); - } - PyMem_Free(pdata_stack->stack); -} - -int -DataStack_grow(Stats *pstats, DataStack *pdata_stack) -{ - pdata_stack->depth++; - if (pdata_stack->depth >= pdata_stack->alloc) { - /* We've outgrown our data_stack array: make it bigger. 
*/ - int bigger = pdata_stack->alloc + STACK_DELTA; - DataStackEntry * bigger_data_stack = PyMem_Realloc(pdata_stack->stack, bigger * sizeof(DataStackEntry)); - STATS( pstats->stack_reallocs++; ) - if (bigger_data_stack == NULL) { - PyErr_NoMemory(); - pdata_stack->depth--; - return RET_ERROR; - } - /* Zero the new entries. */ - memset(bigger_data_stack + pdata_stack->alloc, 0, STACK_DELTA * sizeof(DataStackEntry)); - - pdata_stack->stack = bigger_data_stack; - pdata_stack->alloc = bigger; - } - return RET_OK; -} diff --git a/contrib/python/coverage/py2/coverage/ctracer/datastack.h b/contrib/python/coverage/py2/coverage/ctracer/datastack.h deleted file mode 100644 index 3b3078ba27a..00000000000 --- a/contrib/python/coverage/py2/coverage/ctracer/datastack.h +++ /dev/null @@ -1,45 +0,0 @@ -/* Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 */ -/* For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt */ - -#ifndef _COVERAGE_DATASTACK_H -#define _COVERAGE_DATASTACK_H - -#include "util.h" -#include "stats.h" - -/* An entry on the data stack. For each call frame, we need to record all - * the information needed for CTracer_handle_line to operate as quickly as - * possible. - */ -typedef struct DataStackEntry { - /* The current file_data dictionary. Owned. */ - PyObject * file_data; - - /* The disposition object for this frame. A borrowed instance of CFileDisposition. */ - PyObject * disposition; - - /* The FileTracer handling this frame, or None if it's Python. Borrowed. */ - PyObject * file_tracer; - - /* The line number of the last line recorded, for tracing arcs. - -1 means there was no previous line, as when entering a code object. - */ - int last_line; - - BOOL started_context; -} DataStackEntry; - -/* A data stack is a dynamically allocated vector of DataStackEntry's. */ -typedef struct DataStack { - int depth; /* The index of the last-used entry in stack. */ - int alloc; /* number of entries allocated at stack. 
*/ - /* The file data at each level, or NULL if not recording. */ - DataStackEntry * stack; -} DataStack; - - -int DataStack_init(Stats * pstats, DataStack *pdata_stack); -void DataStack_dealloc(Stats * pstats, DataStack *pdata_stack); -int DataStack_grow(Stats * pstats, DataStack *pdata_stack); - -#endif /* _COVERAGE_DATASTACK_H */ diff --git a/contrib/python/coverage/py2/coverage/ctracer/filedisp.c b/contrib/python/coverage/py2/coverage/ctracer/filedisp.c deleted file mode 100644 index 47782ae0900..00000000000 --- a/contrib/python/coverage/py2/coverage/ctracer/filedisp.c +++ /dev/null @@ -1,85 +0,0 @@ -/* Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 */ -/* For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt */ - -#include "util.h" -#include "filedisp.h" - -void -CFileDisposition_dealloc(CFileDisposition *self) -{ - Py_XDECREF(self->original_filename); - Py_XDECREF(self->canonical_filename); - Py_XDECREF(self->source_filename); - Py_XDECREF(self->trace); - Py_XDECREF(self->reason); - Py_XDECREF(self->file_tracer); - Py_XDECREF(self->has_dynamic_filename); -} - -static PyMemberDef -CFileDisposition_members[] = { - { "original_filename", T_OBJECT, offsetof(CFileDisposition, original_filename), 0, - PyDoc_STR("") }, - - { "canonical_filename", T_OBJECT, offsetof(CFileDisposition, canonical_filename), 0, - PyDoc_STR("") }, - - { "source_filename", T_OBJECT, offsetof(CFileDisposition, source_filename), 0, - PyDoc_STR("") }, - - { "trace", T_OBJECT, offsetof(CFileDisposition, trace), 0, - PyDoc_STR("") }, - - { "reason", T_OBJECT, offsetof(CFileDisposition, reason), 0, - PyDoc_STR("") }, - - { "file_tracer", T_OBJECT, offsetof(CFileDisposition, file_tracer), 0, - PyDoc_STR("") }, - - { "has_dynamic_filename", T_OBJECT, offsetof(CFileDisposition, has_dynamic_filename), 0, - PyDoc_STR("") }, - - { NULL } -}; - -PyTypeObject -CFileDispositionType = { - MyType_HEAD_INIT - "coverage.CFileDispositionType", /*tp_name*/ - 
sizeof(CFileDisposition), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - (destructor)CFileDisposition_dealloc, /*tp_dealloc*/ - 0, /*tp_print*/ - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - 0, /*tp_compare*/ - 0, /*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash */ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE, /*tp_flags*/ - "CFileDisposition objects", /* tp_doc */ - 0, /* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - 0, /* tp_iter */ - 0, /* tp_iternext */ - 0, /* tp_methods */ - CFileDisposition_members, /* tp_members */ - 0, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 0, /* tp_init */ - 0, /* tp_alloc */ - 0, /* tp_new */ -}; diff --git a/contrib/python/coverage/py2/coverage/ctracer/filedisp.h b/contrib/python/coverage/py2/coverage/ctracer/filedisp.h deleted file mode 100644 index 860f9a50b1f..00000000000 --- a/contrib/python/coverage/py2/coverage/ctracer/filedisp.h +++ /dev/null @@ -1,26 +0,0 @@ -/* Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 */ -/* For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt */ - -#ifndef _COVERAGE_FILEDISP_H -#define _COVERAGE_FILEDISP_H - -#include "util.h" -#include "structmember.h" - -typedef struct CFileDisposition { - PyObject_HEAD - - PyObject * original_filename; - PyObject * canonical_filename; - PyObject * source_filename; - PyObject * trace; - PyObject * reason; - PyObject * file_tracer; - PyObject * has_dynamic_filename; -} CFileDisposition; - -void CFileDisposition_dealloc(CFileDisposition *self); - -extern PyTypeObject CFileDispositionType; - -#endif /* _COVERAGE_FILEDISP_H */ diff --git a/contrib/python/coverage/py2/coverage/ctracer/module.c b/contrib/python/coverage/py2/coverage/ctracer/module.c deleted file 
mode 100644 index f308902b693..00000000000 --- a/contrib/python/coverage/py2/coverage/ctracer/module.c +++ /dev/null @@ -1,108 +0,0 @@ -/* Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 */ -/* For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt */ - -#include "util.h" -#include "tracer.h" -#include "filedisp.h" - -/* Module definition */ - -#define MODULE_DOC PyDoc_STR("Fast coverage tracer.") - -#if PY_MAJOR_VERSION >= 3 - -static PyModuleDef -moduledef = { - PyModuleDef_HEAD_INIT, - "coverage.tracer", - MODULE_DOC, - -1, - NULL, /* methods */ - NULL, - NULL, /* traverse */ - NULL, /* clear */ - NULL -}; - - -PyObject * -PyInit_tracer(void) -{ - PyObject * mod = PyModule_Create(&moduledef); - if (mod == NULL) { - return NULL; - } - - if (CTracer_intern_strings() < 0) { - return NULL; - } - - /* Initialize CTracer */ - CTracerType.tp_new = PyType_GenericNew; - if (PyType_Ready(&CTracerType) < 0) { - Py_DECREF(mod); - return NULL; - } - - Py_INCREF(&CTracerType); - if (PyModule_AddObject(mod, "CTracer", (PyObject *)&CTracerType) < 0) { - Py_DECREF(mod); - Py_DECREF(&CTracerType); - return NULL; - } - - /* Initialize CFileDisposition */ - CFileDispositionType.tp_new = PyType_GenericNew; - if (PyType_Ready(&CFileDispositionType) < 0) { - Py_DECREF(mod); - Py_DECREF(&CTracerType); - return NULL; - } - - Py_INCREF(&CFileDispositionType); - if (PyModule_AddObject(mod, "CFileDisposition", (PyObject *)&CFileDispositionType) < 0) { - Py_DECREF(mod); - Py_DECREF(&CTracerType); - Py_DECREF(&CFileDispositionType); - return NULL; - } - - return mod; -} - -#else - -void -inittracer(void) -{ - PyObject * mod; - - mod = Py_InitModule3("coverage.tracer", NULL, MODULE_DOC); - if (mod == NULL) { - return; - } - - if (CTracer_intern_strings() < 0) { - return; - } - - /* Initialize CTracer */ - CTracerType.tp_new = PyType_GenericNew; - if (PyType_Ready(&CTracerType) < 0) { - return; - } - - Py_INCREF(&CTracerType); - 
PyModule_AddObject(mod, "CTracer", (PyObject *)&CTracerType); - - /* Initialize CFileDisposition */ - CFileDispositionType.tp_new = PyType_GenericNew; - if (PyType_Ready(&CFileDispositionType) < 0) { - return; - } - - Py_INCREF(&CFileDispositionType); - PyModule_AddObject(mod, "CFileDisposition", (PyObject *)&CFileDispositionType); -} - -#endif /* Py3k */ diff --git a/contrib/python/coverage/py2/coverage/ctracer/stats.h b/contrib/python/coverage/py2/coverage/ctracer/stats.h deleted file mode 100644 index 05173369f78..00000000000 --- a/contrib/python/coverage/py2/coverage/ctracer/stats.h +++ /dev/null @@ -1,31 +0,0 @@ -/* Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 */ -/* For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt */ - -#ifndef _COVERAGE_STATS_H -#define _COVERAGE_STATS_H - -#include "util.h" - -#if COLLECT_STATS -#define STATS(x) x -#else -#define STATS(x) -#endif - -typedef struct Stats { - unsigned int calls; /* Need at least one member, but the rest only if needed. */ -#if COLLECT_STATS - unsigned int lines; - unsigned int returns; - unsigned int exceptions; - unsigned int others; - unsigned int files; - unsigned int missed_returns; - unsigned int stack_reallocs; - unsigned int errors; - unsigned int pycalls; - unsigned int start_context_calls; -#endif -} Stats; - -#endif /* _COVERAGE_STATS_H */ diff --git a/contrib/python/coverage/py2/coverage/ctracer/tracer.c b/contrib/python/coverage/py2/coverage/ctracer/tracer.c deleted file mode 100644 index 00e4218d8ec..00000000000 --- a/contrib/python/coverage/py2/coverage/ctracer/tracer.c +++ /dev/null @@ -1,1149 +0,0 @@ -/* Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 */ -/* For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt */ - -/* C-based Tracer for coverage.py. */ - -#include "util.h" -#include "datastack.h" -#include "filedisp.h" -#include "tracer.h" - -/* Python C API helpers. 
*/ - -static int -pyint_as_int(PyObject * pyint, int *pint) -{ - int the_int = MyInt_AsInt(pyint); - if (the_int == -1 && PyErr_Occurred()) { - return RET_ERROR; - } - - *pint = the_int; - return RET_OK; -} - - -/* Interned strings to speed GetAttr etc. */ - -static PyObject *str_trace; -static PyObject *str_file_tracer; -static PyObject *str__coverage_enabled; -static PyObject *str__coverage_plugin; -static PyObject *str__coverage_plugin_name; -static PyObject *str_dynamic_source_filename; -static PyObject *str_line_number_range; - -int -CTracer_intern_strings(void) -{ - int ret = RET_ERROR; - -#define INTERN_STRING(v, s) \ - v = MyText_InternFromString(s); \ - if (v == NULL) { \ - goto error; \ - } - - INTERN_STRING(str_trace, "trace") - INTERN_STRING(str_file_tracer, "file_tracer") - INTERN_STRING(str__coverage_enabled, "_coverage_enabled") - INTERN_STRING(str__coverage_plugin, "_coverage_plugin") - INTERN_STRING(str__coverage_plugin_name, "_coverage_plugin_name") - INTERN_STRING(str_dynamic_source_filename, "dynamic_source_filename") - INTERN_STRING(str_line_number_range, "line_number_range") - - ret = RET_OK; - -error: - return ret; -} - -static void CTracer_disable_plugin(CTracer *self, PyObject * disposition); - -static int -CTracer_init(CTracer *self, PyObject *args_unused, PyObject *kwds_unused) -{ - int ret = RET_ERROR; - - if (DataStack_init(&self->stats, &self->data_stack) < 0) { - goto error; - } - - self->pdata_stack = &self->data_stack; - - self->context = Py_None; - Py_INCREF(self->context); - - ret = RET_OK; - goto ok; - -error: - STATS( self->stats.errors++; ) - -ok: - return ret; -} - -static void -CTracer_dealloc(CTracer *self) -{ - int i; - - if (self->started) { - PyEval_SetTrace(NULL, NULL); - } - - Py_XDECREF(self->should_trace); - Py_XDECREF(self->check_include); - Py_XDECREF(self->warn); - Py_XDECREF(self->concur_id_func); - Py_XDECREF(self->data); - Py_XDECREF(self->file_tracers); - Py_XDECREF(self->should_trace_cache); - 
Py_XDECREF(self->should_start_context); - Py_XDECREF(self->switch_context); - Py_XDECREF(self->context); - Py_XDECREF(self->disable_plugin); - - DataStack_dealloc(&self->stats, &self->data_stack); - if (self->data_stacks) { - for (i = 0; i < self->data_stacks_used; i++) { - DataStack_dealloc(&self->stats, self->data_stacks + i); - } - PyMem_Free(self->data_stacks); - } - - Py_XDECREF(self->data_stack_index); - - Py_TYPE(self)->tp_free((PyObject*)self); -} - -#if TRACE_LOG -static const char * -indent(int n) -{ - static const char * spaces = - " " - " " - " " - " " - ; - return spaces + strlen(spaces) - n*2; -} - -static BOOL logging = FALSE; -/* Set these constants to be a file substring and line number to start logging. */ -static const char * start_file = "tests/views"; -static int start_line = 27; - -static void -showlog(int depth, int lineno, PyObject * filename, const char * msg) -{ - if (logging) { - printf("%s%3d ", indent(depth), depth); - if (lineno) { - printf("%4d", lineno); - } - else { - printf(" "); - } - if (filename) { - PyObject *ascii = MyText_AS_BYTES(filename); - printf(" %s", MyBytes_AS_STRING(ascii)); - Py_DECREF(ascii); - } - if (msg) { - printf(" %s", msg); - } - printf("\n"); - } -} - -#define SHOWLOG(a,b,c,d) showlog(a,b,c,d) -#else -#define SHOWLOG(a,b,c,d) -#endif /* TRACE_LOG */ - -#if WHAT_LOG -static const char * what_sym[] = {"CALL", "EXC ", "LINE", "RET "}; -#endif - -/* Record a pair of integers in self->pcur_entry->file_data. */ -static int -CTracer_record_pair(CTracer *self, int l1, int l2) -{ - int ret = RET_ERROR; - - PyObject * t = NULL; - - t = Py_BuildValue("(ii)", l1, l2); - if (t == NULL) { - goto error; - } - - if (PyDict_SetItem(self->pcur_entry->file_data, t, Py_None) < 0) { - goto error; - } - - ret = RET_OK; - -error: - Py_XDECREF(t); - - return ret; -} - -/* Set self->pdata_stack to the proper data_stack to use. 
*/ -static int -CTracer_set_pdata_stack(CTracer *self) -{ - int ret = RET_ERROR; - PyObject * co_obj = NULL; - PyObject * stack_index = NULL; - - if (self->concur_id_func != Py_None) { - int the_index = 0; - - if (self->data_stack_index == NULL) { - PyObject * weakref = NULL; - - weakref = PyImport_ImportModule("weakref"); - if (weakref == NULL) { - goto error; - } - STATS( self->stats.pycalls++; ) - self->data_stack_index = PyObject_CallMethod(weakref, "WeakKeyDictionary", NULL); - Py_XDECREF(weakref); - - if (self->data_stack_index == NULL) { - goto error; - } - } - - STATS( self->stats.pycalls++; ) - co_obj = PyObject_CallObject(self->concur_id_func, NULL); - if (co_obj == NULL) { - goto error; - } - stack_index = PyObject_GetItem(self->data_stack_index, co_obj); - if (stack_index == NULL) { - /* PyObject_GetItem sets an exception if it didn't find the thing. */ - PyErr_Clear(); - - /* A new concurrency object. Make a new data stack. */ - the_index = self->data_stacks_used; - stack_index = MyInt_FromInt(the_index); - if (stack_index == NULL) { - goto error; - } - if (PyObject_SetItem(self->data_stack_index, co_obj, stack_index) < 0) { - goto error; - } - self->data_stacks_used++; - if (self->data_stacks_used >= self->data_stacks_alloc) { - int bigger = self->data_stacks_alloc + 10; - DataStack * bigger_stacks = PyMem_Realloc(self->data_stacks, bigger * sizeof(DataStack)); - if (bigger_stacks == NULL) { - PyErr_NoMemory(); - goto error; - } - self->data_stacks = bigger_stacks; - self->data_stacks_alloc = bigger; - } - DataStack_init(&self->stats, &self->data_stacks[the_index]); - } - else { - if (pyint_as_int(stack_index, &the_index) < 0) { - goto error; - } - } - - self->pdata_stack = &self->data_stacks[the_index]; - } - else { - self->pdata_stack = &self->data_stack; - } - - ret = RET_OK; - -error: - - Py_XDECREF(co_obj); - Py_XDECREF(stack_index); - - return ret; -} - -/* - * Parts of the trace function. 
- */ - -static int -CTracer_check_missing_return(CTracer *self, PyFrameObject *frame) -{ - int ret = RET_ERROR; - - if (self->last_exc_back) { - if (frame == self->last_exc_back) { - /* Looks like someone forgot to send a return event. We'll clear - the exception state and do the RETURN code here. Notice that the - frame we have in hand here is not the correct frame for the RETURN, - that frame is gone. Our handling for RETURN doesn't need the - actual frame, but we do log it, so that will look a little off if - you're looking at the detailed log. - - If someday we need to examine the frame when doing RETURN, then - we'll need to keep more of the missed frame's state. - */ - STATS( self->stats.missed_returns++; ) - if (CTracer_set_pdata_stack(self) < 0) { - goto error; - } - if (self->pdata_stack->depth >= 0) { - if (self->tracing_arcs && self->pcur_entry->file_data) { - if (CTracer_record_pair(self, self->pcur_entry->last_line, -self->last_exc_firstlineno) < 0) { - goto error; - } - } - SHOWLOG(self->pdata_stack->depth, PyFrame_GetLineNumber(frame), frame->f_code->co_filename, "missedreturn"); - self->pdata_stack->depth--; - self->pcur_entry = &self->pdata_stack->stack[self->pdata_stack->depth]; - } - } - self->last_exc_back = NULL; - } - - ret = RET_OK; - -error: - - return ret; -} - -static int -CTracer_handle_call(CTracer *self, PyFrameObject *frame) -{ - int ret = RET_ERROR; - int ret2; - - /* Owned references that we clean up at the very end of the function. */ - PyObject * disposition = NULL; - PyObject * plugin = NULL; - PyObject * plugin_name = NULL; - PyObject * next_tracename = NULL; - - /* Borrowed references. */ - PyObject * filename = NULL; - PyObject * disp_trace = NULL; - PyObject * tracename = NULL; - PyObject * file_tracer = NULL; - PyObject * has_dynamic_filename = NULL; - - CFileDisposition * pdisp = NULL; - - STATS( self->stats.calls++; ) - - /* Grow the stack. 
*/ - if (CTracer_set_pdata_stack(self) < 0) { - goto error; - } - if (DataStack_grow(&self->stats, self->pdata_stack) < 0) { - goto error; - } - self->pcur_entry = &self->pdata_stack->stack[self->pdata_stack->depth]; - - /* See if this frame begins a new context. */ - if (self->should_start_context != Py_None && self->context == Py_None) { - PyObject * context; - /* We're looking for our context, ask should_start_context if this is the start. */ - STATS( self->stats.start_context_calls++; ) - STATS( self->stats.pycalls++; ) - context = PyObject_CallFunctionObjArgs(self->should_start_context, frame, NULL); - if (context == NULL) { - goto error; - } - if (context != Py_None) { - PyObject * val; - Py_DECREF(self->context); - self->context = context; - self->pcur_entry->started_context = TRUE; - STATS( self->stats.pycalls++; ) - val = PyObject_CallFunctionObjArgs(self->switch_context, context, NULL); - if (val == NULL) { - goto error; - } - Py_DECREF(val); - } - else { - Py_DECREF(context); - self->pcur_entry->started_context = FALSE; - } - } - else { - self->pcur_entry->started_context = FALSE; - } - - /* Check if we should trace this line. */ - filename = frame->f_code->co_filename; - disposition = PyDict_GetItem(self->should_trace_cache, filename); - if (disposition == NULL) { - if (PyErr_Occurred()) { - goto error; - } - STATS( self->stats.files++; ) - - /* We've never considered this file before. */ - /* Ask should_trace about it. */ - STATS( self->stats.pycalls++; ) - disposition = PyObject_CallFunctionObjArgs(self->should_trace, filename, frame, NULL); - if (disposition == NULL) { - /* An error occurred inside should_trace. */ - goto error; - } - if (PyDict_SetItem(self->should_trace_cache, filename, disposition) < 0) { - goto error; - } - } - else { - Py_INCREF(disposition); - } - - if (disposition == Py_None) { - /* A later check_include returned false, so don't trace it. 
*/ - disp_trace = Py_False; - } - else { - /* The object we got is a CFileDisposition, use it efficiently. */ - pdisp = (CFileDisposition *) disposition; - disp_trace = pdisp->trace; - if (disp_trace == NULL) { - goto error; - } - } - - if (disp_trace == Py_True) { - /* If tracename is a string, then we're supposed to trace. */ - tracename = pdisp->source_filename; - if (tracename == NULL) { - goto error; - } - file_tracer = pdisp->file_tracer; - if (file_tracer == NULL) { - goto error; - } - if (file_tracer != Py_None) { - plugin = PyObject_GetAttr(file_tracer, str__coverage_plugin); - if (plugin == NULL) { - goto error; - } - plugin_name = PyObject_GetAttr(plugin, str__coverage_plugin_name); - if (plugin_name == NULL) { - goto error; - } - } - has_dynamic_filename = pdisp->has_dynamic_filename; - if (has_dynamic_filename == NULL) { - goto error; - } - if (has_dynamic_filename == Py_True) { - STATS( self->stats.pycalls++; ) - next_tracename = PyObject_CallMethodObjArgs( - file_tracer, str_dynamic_source_filename, - tracename, frame, NULL - ); - if (next_tracename == NULL) { - /* An exception from the function. Alert the user with a - * warning and a traceback. - */ - CTracer_disable_plugin(self, disposition); - /* Because we handled the error, goto ok. */ - goto ok; - } - tracename = next_tracename; - - if (tracename != Py_None) { - /* Check the dynamic source filename against the include rules. 
*/ - PyObject * included = NULL; - int should_include; - included = PyDict_GetItem(self->should_trace_cache, tracename); - if (included == NULL) { - PyObject * should_include_bool; - if (PyErr_Occurred()) { - goto error; - } - STATS( self->stats.files++; ) - STATS( self->stats.pycalls++; ) - should_include_bool = PyObject_CallFunctionObjArgs(self->check_include, tracename, frame, NULL); - if (should_include_bool == NULL) { - goto error; - } - should_include = (should_include_bool == Py_True); - Py_DECREF(should_include_bool); - if (PyDict_SetItem(self->should_trace_cache, tracename, should_include ? disposition : Py_None) < 0) { - goto error; - } - } - else { - should_include = (included != Py_None); - } - if (!should_include) { - tracename = Py_None; - } - } - } - } - else { - tracename = Py_None; - } - - if (tracename != Py_None) { - PyObject * file_data = PyDict_GetItem(self->data, tracename); - - if (file_data == NULL) { - if (PyErr_Occurred()) { - goto error; - } - file_data = PyDict_New(); - if (file_data == NULL) { - goto error; - } - ret2 = PyDict_SetItem(self->data, tracename, file_data); - if (ret2 < 0) { - goto error; - } - - /* If the disposition mentions a plugin, record that. */ - if (file_tracer != Py_None) { - ret2 = PyDict_SetItem(self->file_tracers, tracename, plugin_name); - if (ret2 < 0) { - goto error; - } - } - } - else { - /* PyDict_GetItem gives a borrowed reference. Own it. 
*/ - Py_INCREF(file_data); - } - - Py_XDECREF(self->pcur_entry->file_data); - self->pcur_entry->file_data = file_data; - self->pcur_entry->file_tracer = file_tracer; - - SHOWLOG(self->pdata_stack->depth, PyFrame_GetLineNumber(frame), filename, "traced"); - } - else { - Py_XDECREF(self->pcur_entry->file_data); - self->pcur_entry->file_data = NULL; - self->pcur_entry->file_tracer = Py_None; - SHOWLOG(self->pdata_stack->depth, PyFrame_GetLineNumber(frame), filename, "skipped"); - } - - self->pcur_entry->disposition = disposition; - - /* Make the frame right in case settrace(gettrace()) happens. */ - Py_INCREF(self); - My_XSETREF(frame->f_trace, (PyObject*)self); - - /* A call event is really a "start frame" event, and can happen for - * re-entering a generator also. f_lasti is -1 for a true call, and a - * real byte offset for a generator re-entry. - */ - if (frame->f_lasti < 0) { - self->pcur_entry->last_line = -frame->f_code->co_firstlineno; - } - else { - self->pcur_entry->last_line = PyFrame_GetLineNumber(frame); - } - -ok: - ret = RET_OK; - -error: - Py_XDECREF(next_tracename); - Py_XDECREF(disposition); - Py_XDECREF(plugin); - Py_XDECREF(plugin_name); - - return ret; -} - - -static void -CTracer_disable_plugin(CTracer *self, PyObject * disposition) -{ - PyObject * ret; - PyErr_Print(); - - STATS( self->stats.pycalls++; ) - ret = PyObject_CallFunctionObjArgs(self->disable_plugin, disposition, NULL); - if (ret == NULL) { - goto error; - } - Py_DECREF(ret); - - return; - -error: - /* This function doesn't return a status, so if an error happens, print it, - * but don't interrupt the flow. */ - /* PySys_WriteStderr is nicer, but is not in the public API. 
*/ - fprintf(stderr, "Error occurred while disabling plug-in:\n"); - PyErr_Print(); -} - - -static int -CTracer_unpack_pair(CTracer *self, PyObject *pair, int *p_one, int *p_two) -{ - int ret = RET_ERROR; - int the_int; - PyObject * pyint = NULL; - int index; - - if (!PyTuple_Check(pair) || PyTuple_Size(pair) != 2) { - PyErr_SetString( - PyExc_TypeError, - "line_number_range must return 2-tuple" - ); - goto error; - } - - for (index = 0; index < 2; index++) { - pyint = PyTuple_GetItem(pair, index); - if (pyint == NULL) { - goto error; - } - if (pyint_as_int(pyint, &the_int) < 0) { - goto error; - } - *(index == 0 ? p_one : p_two) = the_int; - } - - ret = RET_OK; - -error: - return ret; -} - -static int -CTracer_handle_line(CTracer *self, PyFrameObject *frame) -{ - int ret = RET_ERROR; - int ret2; - - STATS( self->stats.lines++; ) - if (self->pdata_stack->depth >= 0) { - SHOWLOG(self->pdata_stack->depth, PyFrame_GetLineNumber(frame), frame->f_code->co_filename, "line"); - if (self->pcur_entry->file_data) { - int lineno_from = -1; - int lineno_to = -1; - - /* We're tracing in this frame: record something. */ - if (self->pcur_entry->file_tracer != Py_None) { - PyObject * from_to = NULL; - STATS( self->stats.pycalls++; ) - from_to = PyObject_CallMethodObjArgs(self->pcur_entry->file_tracer, str_line_number_range, frame, NULL); - if (from_to == NULL) { - CTracer_disable_plugin(self, self->pcur_entry->disposition); - goto ok; - } - ret2 = CTracer_unpack_pair(self, from_to, &lineno_from, &lineno_to); - Py_DECREF(from_to); - if (ret2 < 0) { - CTracer_disable_plugin(self, self->pcur_entry->disposition); - goto ok; - } - } - else { - lineno_from = lineno_to = PyFrame_GetLineNumber(frame); - } - - if (lineno_from != -1) { - for (; lineno_from <= lineno_to; lineno_from++) { - if (self->tracing_arcs) { - /* Tracing arcs: key is (last_line,this_line). 
*/ - if (CTracer_record_pair(self, self->pcur_entry->last_line, lineno_from) < 0) { - goto error; - } - } - else { - /* Tracing lines: key is simply this_line. */ - PyObject * this_line = MyInt_FromInt(lineno_from); - if (this_line == NULL) { - goto error; - } - - ret2 = PyDict_SetItem(self->pcur_entry->file_data, this_line, Py_None); - Py_DECREF(this_line); - if (ret2 < 0) { - goto error; - } - } - - self->pcur_entry->last_line = lineno_from; - } - } - } - } - -ok: - ret = RET_OK; - -error: - - return ret; -} - -static int -CTracer_handle_return(CTracer *self, PyFrameObject *frame) -{ - int ret = RET_ERROR; - - STATS( self->stats.returns++; ) - /* A near-copy of this code is above in the missing-return handler. */ - if (CTracer_set_pdata_stack(self) < 0) { - goto error; - } - self->pcur_entry = &self->pdata_stack->stack[self->pdata_stack->depth]; - - if (self->pdata_stack->depth >= 0) { - if (self->tracing_arcs && self->pcur_entry->file_data) { - /* Need to distinguish between RETURN_VALUE and YIELD_VALUE. Read - * the current bytecode to see what it is. In unusual circumstances - * (Cython code), co_code can be the empty string, so range-check - * f_lasti before reading the byte. - */ - int bytecode = RETURN_VALUE; - PyObject * pCode = frame->f_code->co_code; - int lasti = frame->f_lasti; - - if (lasti < MyBytes_GET_SIZE(pCode)) { - bytecode = MyBytes_AS_STRING(pCode)[lasti]; - } - if (bytecode != YIELD_VALUE) { - int first = frame->f_code->co_firstlineno; - if (CTracer_record_pair(self, self->pcur_entry->last_line, -first) < 0) { - goto error; - } - } - } - - /* If this frame started a context, then returning from it ends the context. 
*/ - if (self->pcur_entry->started_context) { - PyObject * val; - Py_DECREF(self->context); - self->context = Py_None; - Py_INCREF(self->context); - STATS( self->stats.pycalls++; ) - - val = PyObject_CallFunctionObjArgs(self->switch_context, self->context, NULL); - if (val == NULL) { - goto error; - } - Py_DECREF(val); - } - - /* Pop the stack. */ - SHOWLOG(self->pdata_stack->depth, PyFrame_GetLineNumber(frame), frame->f_code->co_filename, "return"); - self->pdata_stack->depth--; - self->pcur_entry = &self->pdata_stack->stack[self->pdata_stack->depth]; - } - - ret = RET_OK; - -error: - - return ret; -} - -static int -CTracer_handle_exception(CTracer *self, PyFrameObject *frame) -{ - /* Some code (Python 2.3, and pyexpat anywhere) fires an exception event - without a return event. To detect that, we'll keep a copy of the - parent frame for an exception event. If the next event is in that - frame, then we must have returned without a return event. We can - synthesize the missing event then. - - Python itself fixed this problem in 2.4. Pyexpat still has the bug. - I've reported the problem with pyexpat as http://bugs.python.org/issue6359 . - If it gets fixed, this code should still work properly. Maybe some day - the bug will be fixed everywhere coverage.py is supported, and we can - remove this missing-return detection. - - More about this fix: https://nedbatchelder.com/blog/200907/a_nasty_little_bug.html - */ - STATS( self->stats.exceptions++; ) - self->last_exc_back = frame->f_back; - self->last_exc_firstlineno = frame->f_code->co_firstlineno; - - return RET_OK; -} - -/* - * The Trace Function - */ -static int -CTracer_trace(CTracer *self, PyFrameObject *frame, int what, PyObject *arg_unused) -{ - int ret = RET_ERROR; - - #if DO_NOTHING - return RET_OK; - #endif - - if (!self->started) { - /* If CTracer.stop() has been called from another thread, the tracer - is still active in the current thread. Let's deactivate ourselves - now. 
*/ - PyEval_SetTrace(NULL, NULL); - return RET_OK; - } - - #if WHAT_LOG || TRACE_LOG - PyObject * ascii = NULL; - #endif - - #if WHAT_LOG - if (what <= (int)(sizeof(what_sym)/sizeof(const char *))) { - ascii = MyText_AS_BYTES(frame->f_code->co_filename); - printf("trace: %s @ %s %d\n", what_sym[what], MyBytes_AS_STRING(ascii), PyFrame_GetLineNumber(frame)); - Py_DECREF(ascii); - } - #endif - - #if TRACE_LOG - ascii = MyText_AS_BYTES(frame->f_code->co_filename); - if (strstr(MyBytes_AS_STRING(ascii), start_file) && PyFrame_GetLineNumber(frame) == start_line) { - logging = TRUE; - } - Py_DECREF(ascii); - #endif - - /* See below for details on missing-return detection. */ - if (CTracer_check_missing_return(self, frame) < 0) { - goto error; - } - - self->activity = TRUE; - - switch (what) { - case PyTrace_CALL: - if (CTracer_handle_call(self, frame) < 0) { - goto error; - } - break; - - case PyTrace_RETURN: - if (CTracer_handle_return(self, frame) < 0) { - goto error; - } - break; - - case PyTrace_LINE: - if (CTracer_handle_line(self, frame) < 0) { - goto error; - } - break; - - case PyTrace_EXCEPTION: - if (CTracer_handle_exception(self, frame) < 0) { - goto error; - } - break; - - default: - STATS( self->stats.others++; ) - break; - } - - ret = RET_OK; - goto cleanup; - -error: - STATS( self->stats.errors++; ) - -cleanup: - return ret; -} - - -/* - * Python has two ways to set the trace function: sys.settrace(fn), which - * takes a Python callable, and PyEval_SetTrace(func, obj), which takes - * a C function and a Python object. The way these work together is that - * sys.settrace(pyfn) calls PyEval_SetTrace(builtin_func, pyfn), using the - * Python callable as the object in PyEval_SetTrace. So sys.gettrace() - * simply returns the Python object used as the second argument to - * PyEval_SetTrace. So sys.gettrace() will return our self parameter, which - * means it must be callable to be used in sys.settrace(). 
- * - * So we make ourself callable, equivalent to invoking our trace function. - * - * To help with the process of replaying stored frames, this function has an - * optional keyword argument: - * - * def CTracer_call(frame, event, arg, lineno=0) - * - * If provided, the lineno argument is used as the line number, and the - * frame's f_lineno member is ignored. - */ -static PyObject * -CTracer_call(CTracer *self, PyObject *args, PyObject *kwds) -{ - PyFrameObject *frame; - PyObject *what_str; - PyObject *arg; - int lineno = 0; - int what; - int orig_lineno; - PyObject *ret = NULL; - PyObject * ascii = NULL; - - #if DO_NOTHING - CRASH - #endif - - static char *what_names[] = { - "call", "exception", "line", "return", - "c_call", "c_exception", "c_return", - NULL - }; - - static char *kwlist[] = {"frame", "event", "arg", "lineno", NULL}; - - if (!PyArg_ParseTupleAndKeywords(args, kwds, "O!O!O|i:Tracer_call", kwlist, - &PyFrame_Type, &frame, &MyText_Type, &what_str, &arg, &lineno)) { - goto done; - } - - /* In Python, the what argument is a string, we need to find an int - for the C function. */ - for (what = 0; what_names[what]; what++) { - int should_break; - ascii = MyText_AS_BYTES(what_str); - should_break = !strcmp(MyBytes_AS_STRING(ascii), what_names[what]); - Py_DECREF(ascii); - if (should_break) { - break; - } - } - - #if WHAT_LOG - ascii = MyText_AS_BYTES(frame->f_code->co_filename); - printf("pytrace: %s @ %s %d\n", what_sym[what], MyBytes_AS_STRING(ascii), PyFrame_GetLineNumber(frame)); - Py_DECREF(ascii); - #endif - - /* Save off the frame's lineno, and use the forced one, if provided. */ - orig_lineno = frame->f_lineno; - if (lineno > 0) { - frame->f_lineno = lineno; - } - - /* Invoke the C function, and return ourselves. */ - if (CTracer_trace(self, frame, what, arg) == RET_OK) { - Py_INCREF(self); - ret = (PyObject *)self; - } - - /* Clean up. 
*/ - frame->f_lineno = orig_lineno; - - /* For better speed, install ourselves the C way so that future calls go - directly to CTracer_trace, without this intermediate function. - - Only do this if this is a CALL event, since new trace functions only - take effect then. If we don't condition it on CALL, then we'll clobber - the new trace function before it has a chance to get called. To - understand why, there are three internal values to track: frame.f_trace, - c_tracefunc, and c_traceobj. They are explained here: - https://nedbatchelder.com/text/trace-function.html - - Without the conditional on PyTrace_CALL, this is what happens: - - def func(): # f_trace c_tracefunc c_traceobj - # -------------- -------------- -------------- - # CTracer CTracer.trace CTracer - sys.settrace(my_func) - # CTracer trampoline my_func - # Now Python calls trampoline(CTracer), which calls this function - # which calls PyEval_SetTrace below, setting us as the tracer again: - # CTracer CTracer.trace CTracer - # and it's as if the settrace never happened. - */ - if (what == PyTrace_CALL) { - PyEval_SetTrace((Py_tracefunc)CTracer_trace, (PyObject*)self); - } - -done: - return ret; -} - -static PyObject * -CTracer_start(CTracer *self, PyObject *args_unused) -{ - PyEval_SetTrace((Py_tracefunc)CTracer_trace, (PyObject*)self); - self->started = TRUE; - self->tracing_arcs = self->trace_arcs && PyObject_IsTrue(self->trace_arcs); - - /* start() returns a trace function usable with sys.settrace() */ - Py_INCREF(self); - return (PyObject *)self; -} - -static PyObject * -CTracer_stop(CTracer *self, PyObject *args_unused) -{ - if (self->started) { - /* Set the started flag only. The actual call to - PyEval_SetTrace(NULL, NULL) is delegated to the callback - itself to ensure that it called from the right thread. 
- */ - self->started = FALSE; - } - - Py_RETURN_NONE; -} - -static PyObject * -CTracer_activity(CTracer *self, PyObject *args_unused) -{ - if (self->activity) { - Py_RETURN_TRUE; - } - else { - Py_RETURN_FALSE; - } -} - -static PyObject * -CTracer_reset_activity(CTracer *self, PyObject *args_unused) -{ - self->activity = FALSE; - Py_RETURN_NONE; -} - -static PyObject * -CTracer_get_stats(CTracer *self, PyObject *args_unused) -{ -#if COLLECT_STATS - return Py_BuildValue( - "{sI,sI,sI,sI,sI,sI,sI,sI,si,sI,sI,sI}", - "calls", self->stats.calls, - "lines", self->stats.lines, - "returns", self->stats.returns, - "exceptions", self->stats.exceptions, - "others", self->stats.others, - "files", self->stats.files, - "missed_returns", self->stats.missed_returns, - "stack_reallocs", self->stats.stack_reallocs, - "stack_alloc", self->pdata_stack->alloc, - "errors", self->stats.errors, - "pycalls", self->stats.pycalls, - "start_context_calls", self->stats.start_context_calls - ); -#else - Py_RETURN_NONE; -#endif /* COLLECT_STATS */ -} - -static PyMemberDef -CTracer_members[] = { - { "should_trace", T_OBJECT, offsetof(CTracer, should_trace), 0, - PyDoc_STR("Function indicating whether to trace a file.") }, - - { "check_include", T_OBJECT, offsetof(CTracer, check_include), 0, - PyDoc_STR("Function indicating whether to include a file.") }, - - { "warn", T_OBJECT, offsetof(CTracer, warn), 0, - PyDoc_STR("Function for issuing warnings.") }, - - { "concur_id_func", T_OBJECT, offsetof(CTracer, concur_id_func), 0, - PyDoc_STR("Function for determining concurrency context") }, - - { "data", T_OBJECT, offsetof(CTracer, data), 0, - PyDoc_STR("The raw dictionary of trace data.") }, - - { "file_tracers", T_OBJECT, offsetof(CTracer, file_tracers), 0, - PyDoc_STR("Mapping from file name to plugin name.") }, - - { "should_trace_cache", T_OBJECT, offsetof(CTracer, should_trace_cache), 0, - PyDoc_STR("Dictionary caching should_trace results.") }, - - { "trace_arcs", T_OBJECT, offsetof(CTracer, 
trace_arcs), 0, - PyDoc_STR("Should we trace arcs, or just lines?") }, - - { "should_start_context", T_OBJECT, offsetof(CTracer, should_start_context), 0, - PyDoc_STR("Function for starting contexts.") }, - - { "switch_context", T_OBJECT, offsetof(CTracer, switch_context), 0, - PyDoc_STR("Function for switching to a new context.") }, - - { "disable_plugin", T_OBJECT, offsetof(CTracer, disable_plugin), 0, - PyDoc_STR("Function for disabling a plugin.") }, - - { NULL } -}; - -static PyMethodDef -CTracer_methods[] = { - { "start", (PyCFunction) CTracer_start, METH_VARARGS, - PyDoc_STR("Start the tracer") }, - - { "stop", (PyCFunction) CTracer_stop, METH_VARARGS, - PyDoc_STR("Stop the tracer") }, - - { "get_stats", (PyCFunction) CTracer_get_stats, METH_VARARGS, - PyDoc_STR("Get statistics about the tracing") }, - - { "activity", (PyCFunction) CTracer_activity, METH_VARARGS, - PyDoc_STR("Has there been any activity?") }, - - { "reset_activity", (PyCFunction) CTracer_reset_activity, METH_VARARGS, - PyDoc_STR("Reset the activity flag") }, - - { NULL } -}; - -PyTypeObject -CTracerType = { - MyType_HEAD_INIT - "coverage.CTracer", /*tp_name*/ - sizeof(CTracer), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - (destructor)CTracer_dealloc, /*tp_dealloc*/ - 0, /*tp_print*/ - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - 0, /*tp_compare*/ - 0, /*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash */ - (ternaryfunc)CTracer_call, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE, /*tp_flags*/ - "CTracer objects", /* tp_doc */ - 0, /* tp_traverse */ - 0, /* tp_clear */ - 0, /* tp_richcompare */ - 0, /* tp_weaklistoffset */ - 0, /* tp_iter */ - 0, /* tp_iternext */ - CTracer_methods, /* tp_methods */ - CTracer_members, /* tp_members */ - 0, /* tp_getset */ - 0, /* tp_base */ - 0, /* tp_dict */ - 0, /* tp_descr_get */ - 0, /* tp_descr_set */ - 0, /* tp_dictoffset */ - 
(initproc)CTracer_init, /* tp_init */ - 0, /* tp_alloc */ - 0, /* tp_new */ -}; diff --git a/contrib/python/coverage/py2/coverage/ctracer/tracer.h b/contrib/python/coverage/py2/coverage/ctracer/tracer.h deleted file mode 100644 index 8994a9e3d60..00000000000 --- a/contrib/python/coverage/py2/coverage/ctracer/tracer.h +++ /dev/null @@ -1,75 +0,0 @@ -/* Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 */ -/* For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt */ - -#ifndef _COVERAGE_TRACER_H -#define _COVERAGE_TRACER_H - -#include "util.h" -#include "structmember.h" -#include "frameobject.h" -#include "opcode.h" - -#include "datastack.h" - -/* The CTracer type. */ - -typedef struct CTracer { - PyObject_HEAD - - /* Python objects manipulated directly by the Collector class. */ - PyObject * should_trace; - PyObject * check_include; - PyObject * warn; - PyObject * concur_id_func; - PyObject * data; - PyObject * file_tracers; - PyObject * should_trace_cache; - PyObject * trace_arcs; - PyObject * should_start_context; - PyObject * switch_context; - PyObject * disable_plugin; - - /* Has the tracer been started? */ - BOOL started; - /* Are we tracing arcs, or just lines? */ - BOOL tracing_arcs; - /* Have we had any activity? */ - BOOL activity; - /* The current dynamic context. */ - PyObject * context; - - /* - The data stack is a stack of dictionaries. Each dictionary collects - data for a single source file. The data stack parallels the call stack: - each call pushes the new frame's file data onto the data stack, and each - return pops file data off. - - The file data is a dictionary whose form depends on the tracing options. - If tracing arcs, the keys are line number pairs. If not tracing arcs, - the keys are line numbers. In both cases, the value is irrelevant - (None). - */ - - DataStack data_stack; /* Used if we aren't doing concurrency. */ - - PyObject * data_stack_index; /* Used if we are doing concurrency. 
*/ - DataStack * data_stacks; - int data_stacks_alloc; - int data_stacks_used; - DataStack * pdata_stack; - - /* The current file's data stack entry. */ - DataStackEntry * pcur_entry; - - /* The parent frame for the last exception event, to fix missing returns. */ - PyFrameObject * last_exc_back; - int last_exc_firstlineno; - - Stats stats; -} CTracer; - -int CTracer_intern_strings(void); - -extern PyTypeObject CTracerType; - -#endif /* _COVERAGE_TRACER_H */ diff --git a/contrib/python/coverage/py2/coverage/ctracer/util.h b/contrib/python/coverage/py2/coverage/ctracer/util.h deleted file mode 100644 index 5cba9b3096b..00000000000 --- a/contrib/python/coverage/py2/coverage/ctracer/util.h +++ /dev/null @@ -1,67 +0,0 @@ -/* Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 */ -/* For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt */ - -#ifndef _COVERAGE_UTIL_H -#define _COVERAGE_UTIL_H - -#include <Python.h> - -/* Compile-time debugging helpers */ -#undef WHAT_LOG /* Define to log the WHAT params in the trace function. */ -#undef TRACE_LOG /* Define to log our bookkeeping. */ -#undef COLLECT_STATS /* Collect counters: stats are printed when tracer is stopped. */ -#undef DO_NOTHING /* Define this to make the tracer do nothing. 
*/ - -/* Py 2.x and 3.x compatibility */ - -#if PY_MAJOR_VERSION >= 3 - -#define MyText_Type PyUnicode_Type -#define MyText_AS_BYTES(o) PyUnicode_AsASCIIString(o) -#define MyBytes_GET_SIZE(o) PyBytes_GET_SIZE(o) -#define MyBytes_AS_STRING(o) PyBytes_AS_STRING(o) -#define MyText_AsString(o) PyUnicode_AsUTF8(o) -#define MyText_FromFormat PyUnicode_FromFormat -#define MyInt_FromInt(i) PyLong_FromLong((long)i) -#define MyInt_AsInt(o) (int)PyLong_AsLong(o) -#define MyText_InternFromString(s) PyUnicode_InternFromString(s) - -#define MyType_HEAD_INIT PyVarObject_HEAD_INIT(NULL, 0) - -#else - -#define MyText_Type PyString_Type -#define MyText_AS_BYTES(o) (Py_INCREF(o), o) -#define MyBytes_GET_SIZE(o) PyString_GET_SIZE(o) -#define MyBytes_AS_STRING(o) PyString_AS_STRING(o) -#define MyText_AsString(o) PyString_AsString(o) -#define MyText_FromFormat PyUnicode_FromFormat -#define MyInt_FromInt(i) PyInt_FromLong((long)i) -#define MyInt_AsInt(o) (int)PyInt_AsLong(o) -#define MyText_InternFromString(s) PyString_InternFromString(s) - -#define MyType_HEAD_INIT PyObject_HEAD_INIT(NULL) 0, - -#endif /* Py3k */ - -// Undocumented, and not in all 2.7.x, so our own copy of it. -#define My_XSETREF(op, op2) \ - do { \ - PyObject *_py_tmp = (PyObject *)(op); \ - (op) = (op2); \ - Py_XDECREF(_py_tmp); \ - } while (0) - -/* The values returned to indicate ok or error. */ -#define RET_OK 0 -#define RET_ERROR -1 - -/* Nicer booleans */ -typedef int BOOL; -#define FALSE 0 -#define TRUE 1 - -/* Only for extreme machete-mode debugging! */ -#define CRASH { printf("*** CRASH! 
***\n"); *((int*)1) = 1; } - -#endif /* _COVERAGE_UTIL_H */ diff --git a/contrib/python/coverage/py2/coverage/data.py b/contrib/python/coverage/py2/coverage/data.py deleted file mode 100644 index 5dd1dfe3f04..00000000000 --- a/contrib/python/coverage/py2/coverage/data.py +++ /dev/null @@ -1,125 +0,0 @@ -# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 -# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt - -"""Coverage data for coverage.py. - -This file had the 4.x JSON data support, which is now gone. This file still -has storage-agnostic helpers, and is kept to avoid changing too many imports. -CoverageData is now defined in sqldata.py, and imported here to keep the -imports working. - -""" - -import glob -import os.path - -from coverage.misc import CoverageException, file_be_gone -from coverage.sqldata import CoverageData - - -def line_counts(data, fullpath=False): - """Return a dict summarizing the line coverage data. - - Keys are based on the file names, and values are the number of executed - lines. If `fullpath` is true, then the keys are the full pathnames of - the files, otherwise they are the basenames of the files. - - Returns a dict mapping file names to counts of lines. - - """ - summ = {} - if fullpath: - filename_fn = lambda f: f - else: - filename_fn = os.path.basename - for filename in data.measured_files(): - summ[filename_fn(filename)] = len(data.lines(filename)) - return summ - - -def add_data_to_hash(data, filename, hasher): - """Contribute `filename`'s data to the `hasher`. - - `hasher` is a `coverage.misc.Hasher` instance to be updated with - the file's data. It should only get the results data, not the run - data. 
- - """ - if data.has_arcs(): - hasher.update(sorted(data.arcs(filename) or [])) - else: - hasher.update(sorted(data.lines(filename) or [])) - hasher.update(data.file_tracer(filename)) - - -def combine_parallel_data(data, aliases=None, data_paths=None, strict=False, keep=False): - """Combine a number of data files together. - - Treat `data.filename` as a file prefix, and combine the data from all - of the data files starting with that prefix plus a dot. - - If `aliases` is provided, it's a `PathAliases` object that is used to - re-map paths to match the local machine's. - - If `data_paths` is provided, it is a list of directories or files to - combine. Directories are searched for files that start with - `data.filename` plus dot as a prefix, and those files are combined. - - If `data_paths` is not provided, then the directory portion of - `data.filename` is used as the directory to search for data files. - - Unless `keep` is True every data file found and combined is then deleted from disk. If a file - cannot be read, a warning will be issued, and the file will not be - deleted. - - If `strict` is true, and no files are found to combine, an error is - raised. - - """ - # Because of the os.path.abspath in the constructor, data_dir will - # never be an empty string. - data_dir, local = os.path.split(data.base_filename()) - localdot = local + '.*' - - data_paths = data_paths or [data_dir] - files_to_combine = [] - for p in data_paths: - if os.path.isfile(p): - files_to_combine.append(os.path.abspath(p)) - elif os.path.isdir(p): - pattern = os.path.join(os.path.abspath(p), localdot) - files_to_combine.extend(glob.glob(pattern)) - else: - raise CoverageException("Couldn't combine from non-existent path '%s'" % (p,)) - - if strict and not files_to_combine: - raise CoverageException("No data to combine") - - files_combined = 0 - for f in files_to_combine: - if f == data.data_filename(): - # Sometimes we are combining into a file which is one of the - # parallel files. 
Skip that file. - if data._debug.should('dataio'): - data._debug.write("Skipping combining ourself: %r" % (f,)) - continue - if data._debug.should('dataio'): - data._debug.write("Combining data file %r" % (f,)) - try: - new_data = CoverageData(f, debug=data._debug) - new_data.read() - except CoverageException as exc: - if data._warn: - # The CoverageException has the file name in it, so just - # use the message as the warning. - data._warn(str(exc)) - else: - data.update(new_data, aliases=aliases) - files_combined += 1 - if not keep: - if data._debug.should('dataio'): - data._debug.write("Deleting combined data file %r" % (f,)) - file_be_gone(f) - - if strict and not files_combined: - raise CoverageException("No usable data files") diff --git a/contrib/python/coverage/py2/coverage/debug.py b/contrib/python/coverage/py2/coverage/debug.py deleted file mode 100644 index 194f16f50db..00000000000 --- a/contrib/python/coverage/py2/coverage/debug.py +++ /dev/null @@ -1,406 +0,0 @@ -# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 -# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt - -"""Control of and utilities for debugging.""" - -import contextlib -import functools -import inspect -import itertools -import os -import pprint -import sys -try: - import _thread -except ImportError: - import thread as _thread - -from coverage.backward import reprlib, StringIO -from coverage.misc import isolate_module - -os = isolate_module(os) - - -# When debugging, it can be helpful to force some options, especially when -# debugging the configuration mechanisms you usually use to control debugging! -# This is a list of forced debugging options. 
-FORCED_DEBUG = [] -FORCED_DEBUG_FILE = None - - -class DebugControl(object): - """Control and output for debugging.""" - - show_repr_attr = False # For SimpleReprMixin - - def __init__(self, options, output): - """Configure the options and output file for debugging.""" - self.options = list(options) + FORCED_DEBUG - self.suppress_callers = False - - filters = [] - if self.should('pid'): - filters.append(add_pid_and_tid) - self.output = DebugOutputFile.get_one( - output, - show_process=self.should('process'), - filters=filters, - ) - self.raw_output = self.output.outfile - - def __repr__(self): - return "<DebugControl options=%r raw_output=%r>" % (self.options, self.raw_output) - - def should(self, option): - """Decide whether to output debug information in category `option`.""" - if option == "callers" and self.suppress_callers: - return False - return (option in self.options) - - @contextlib.contextmanager - def without_callers(self): - """A context manager to prevent call stacks from being logged.""" - old = self.suppress_callers - self.suppress_callers = True - try: - yield - finally: - self.suppress_callers = old - - def write(self, msg): - """Write a line of debug output. - - `msg` is the line to write. A newline will be appended. 
- - """ - self.output.write(msg+"\n") - if self.should('self'): - caller_self = inspect.stack()[1][0].f_locals.get('self') - if caller_self is not None: - self.output.write("self: {!r}\n".format(caller_self)) - if self.should('callers'): - dump_stack_frames(out=self.output, skip=1) - self.output.flush() - - -class DebugControlString(DebugControl): - """A `DebugControl` that writes to a StringIO, for testing.""" - def __init__(self, options): - super(DebugControlString, self).__init__(options, StringIO()) - - def get_output(self): - """Get the output text from the `DebugControl`.""" - return self.raw_output.getvalue() - - -class NoDebugging(object): - """A replacement for DebugControl that will never try to do anything.""" - def should(self, option): # pylint: disable=unused-argument - """Should we write debug messages? Never.""" - return False - - -def info_header(label): - """Make a nice header string.""" - return "--{:-<60s}".format(" "+label+" ") - - -def info_formatter(info): - """Produce a sequence of formatted lines from info. - - `info` is a sequence of pairs (label, data). The produced lines are - nicely formatted, ready to print. - - """ - info = list(info) - if not info: - return - label_len = 30 - assert all(len(l) < label_len for l, _ in info) - for label, data in info: - if data == []: - data = "-none-" - if isinstance(data, (list, set, tuple)): - prefix = "%*s:" % (label_len, label) - for e in data: - yield "%*s %s" % (label_len+1, prefix, e) - prefix = "" - else: - yield "%*s: %s" % (label_len, label, data) - - -def write_formatted_info(writer, header, info): - """Write a sequence of (label,data) pairs nicely.""" - writer.write(info_header(header)) - for line in info_formatter(info): - writer.write(" %s" % line) - - -def short_stack(limit=None, skip=0): - """Return a string summarizing the call stack. - - The string is multi-line, with one line per stack frame. Each line shows - the function name, the file name, and the line number: - - ... 
- start_import_stop : /Users/ned/coverage/trunk/tests/coveragetest.py @95 - import_local_file : /Users/ned/coverage/trunk/tests/coveragetest.py @81 - import_local_file : /Users/ned/coverage/trunk/coverage/backward.py @159 - ... - - `limit` is the number of frames to include, defaulting to all of them. - - `skip` is the number of frames to skip, so that debugging functions can - call this and not be included in the result. - - """ - stack = inspect.stack()[limit:skip:-1] - return "\n".join("%30s : %s:%d" % (t[3], t[1], t[2]) for t in stack) - - -def dump_stack_frames(limit=None, out=None, skip=0): - """Print a summary of the stack to stdout, or someplace else.""" - out = out or sys.stdout - out.write(short_stack(limit=limit, skip=skip+1)) - out.write("\n") - - -def clipped_repr(text, numchars=50): - """`repr(text)`, but limited to `numchars`.""" - r = reprlib.Repr() - r.maxstring = numchars - return r.repr(text) - - -def short_id(id64): - """Given a 64-bit id, make a shorter 16-bit one.""" - id16 = 0 - for offset in range(0, 64, 16): - id16 ^= id64 >> offset - return id16 & 0xFFFF - - -def add_pid_and_tid(text): - """A filter to add pid and tid to debug messages.""" - # Thread ids are useful, but too long. Make a shorter one. 
- tid = "{:04x}".format(short_id(_thread.get_ident())) - text = "{:5d}.{}: {}".format(os.getpid(), tid, text) - return text - - -class SimpleReprMixin(object): - """A mixin implementing a simple __repr__.""" - simple_repr_ignore = ['simple_repr_ignore', '$coverage.object_id'] - - def __repr__(self): - show_attrs = ( - (k, v) for k, v in self.__dict__.items() - if getattr(v, "show_repr_attr", True) - and not callable(v) - and k not in self.simple_repr_ignore - ) - return "<{klass} @0x{id:x} {attrs}>".format( - klass=self.__class__.__name__, - id=id(self), - attrs=" ".join("{}={!r}".format(k, v) for k, v in show_attrs), - ) - - -def simplify(v): # pragma: debugging - """Turn things which are nearly dict/list/etc into dict/list/etc.""" - if isinstance(v, dict): - return {k:simplify(vv) for k, vv in v.items()} - elif isinstance(v, (list, tuple)): - return type(v)(simplify(vv) for vv in v) - elif hasattr(v, "__dict__"): - return simplify({'.'+k: v for k, v in v.__dict__.items()}) - else: - return v - - -def pp(v): # pragma: debugging - """Debug helper to pretty-print data, including SimpleNamespace objects.""" - # Might not be needed in 3.9+ - pprint.pprint(simplify(v)) - - -def filter_text(text, filters): - """Run `text` through a series of filters. - - `filters` is a list of functions. Each takes a string and returns a - string. Each is run in turn. - - Returns: the final string that results after all of the filters have - run. 
- - """ - clean_text = text.rstrip() - ending = text[len(clean_text):] - text = clean_text - for fn in filters: - lines = [] - for line in text.splitlines(): - lines.extend(fn(line).splitlines()) - text = "\n".join(lines) - return text + ending - - -class CwdTracker(object): # pragma: debugging - """A class to add cwd info to debug messages.""" - def __init__(self): - self.cwd = None - - def filter(self, text): - """Add a cwd message for each new cwd.""" - cwd = os.getcwd() - if cwd != self.cwd: - text = "cwd is now {!r}\n".format(cwd) + text - self.cwd = cwd - return text - - -class DebugOutputFile(object): # pragma: debugging - """A file-like object that includes pid and cwd information.""" - def __init__(self, outfile, show_process, filters): - self.outfile = outfile - self.show_process = show_process - self.filters = list(filters) - - if self.show_process: - self.filters.insert(0, CwdTracker().filter) - self.write("New process: executable: %r\n" % (sys.executable,)) - self.write("New process: cmd: %r\n" % (getattr(sys, 'argv', None),)) - if hasattr(os, 'getppid'): - self.write("New process: pid: %r, parent pid: %r\n" % (os.getpid(), os.getppid())) - - SYS_MOD_NAME = '$coverage.debug.DebugOutputFile.the_one' - - @classmethod - def get_one(cls, fileobj=None, show_process=True, filters=(), interim=False): - """Get a DebugOutputFile. - - If `fileobj` is provided, then a new DebugOutputFile is made with it. - - If `fileobj` isn't provided, then a file is chosen - (COVERAGE_DEBUG_FILE, or stderr), and a process-wide singleton - DebugOutputFile is made. - - `show_process` controls whether the debug file adds process-level - information, and filters is a list of other message filters to apply. - - `filters` are the text filters to apply to the stream to annotate with - pids, etc. - - If `interim` is true, then a future `get_one` can replace this one. - - """ - if fileobj is not None: - # Make DebugOutputFile around the fileobj passed. 
- return cls(fileobj, show_process, filters) - - # Because of the way igor.py deletes and re-imports modules, - # this class can be defined more than once. But we really want - # a process-wide singleton. So stash it in sys.modules instead of - # on a class attribute. Yes, this is aggressively gross. - the_one, is_interim = sys.modules.get(cls.SYS_MOD_NAME, (None, True)) - if the_one is None or is_interim: - if fileobj is None: - debug_file_name = os.environ.get("COVERAGE_DEBUG_FILE", FORCED_DEBUG_FILE) - if debug_file_name: - fileobj = open(debug_file_name, "a") - else: - fileobj = sys.stderr - the_one = cls(fileobj, show_process, filters) - sys.modules[cls.SYS_MOD_NAME] = (the_one, interim) - return the_one - - def write(self, text): - """Just like file.write, but filter through all our filters.""" - self.outfile.write(filter_text(text, self.filters)) - self.outfile.flush() - - def flush(self): - """Flush our file.""" - self.outfile.flush() - - -def log(msg, stack=False): # pragma: debugging - """Write a log message as forcefully as possible.""" - out = DebugOutputFile.get_one(interim=True) - out.write(msg+"\n") - if stack: - dump_stack_frames(out=out, skip=1) - - -def decorate_methods(decorator, butnot=(), private=False): # pragma: debugging - """A class decorator to apply a decorator to methods.""" - def _decorator(cls): - for name, meth in inspect.getmembers(cls, inspect.isroutine): - if name not in cls.__dict__: - continue - if name != "__init__": - if not private and name.startswith("_"): - continue - if name in butnot: - continue - setattr(cls, name, decorator(meth)) - return cls - return _decorator - - -def break_in_pudb(func): # pragma: debugging - """A function decorator to stop in the debugger for each call.""" - @functools.wraps(func) - def _wrapper(*args, **kwargs): - import pudb - sys.stdout = sys.__stdout__ - pudb.set_trace() - return func(*args, **kwargs) - return _wrapper - - -OBJ_IDS = itertools.count() -CALLS = itertools.count() -OBJ_ID_ATTR = 
"$coverage.object_id" - -def show_calls(show_args=True, show_stack=False, show_return=False): # pragma: debugging - """A method decorator to debug-log each call to the function.""" - def _decorator(func): - @functools.wraps(func) - def _wrapper(self, *args, **kwargs): - oid = getattr(self, OBJ_ID_ATTR, None) - if oid is None: - oid = "{:08d} {:04d}".format(os.getpid(), next(OBJ_IDS)) - setattr(self, OBJ_ID_ATTR, oid) - extra = "" - if show_args: - eargs = ", ".join(map(repr, args)) - ekwargs = ", ".join("{}={!r}".format(*item) for item in kwargs.items()) - extra += "(" - extra += eargs - if eargs and ekwargs: - extra += ", " - extra += ekwargs - extra += ")" - if show_stack: - extra += " @ " - extra += "; ".join(_clean_stack_line(l) for l in short_stack().splitlines()) - callid = next(CALLS) - msg = "{} {:04d} {}{}\n".format(oid, callid, func.__name__, extra) - DebugOutputFile.get_one(interim=True).write(msg) - ret = func(self, *args, **kwargs) - if show_return: - msg = "{} {:04d} {} return {!r}\n".format(oid, callid, func.__name__, ret) - DebugOutputFile.get_one(interim=True).write(msg) - return ret - return _wrapper - return _decorator - - -def _clean_stack_line(s): # pragma: debugging - """Simplify some paths in a stack trace, for compactness.""" - s = s.strip() - s = s.replace(os.path.dirname(__file__) + '/', '') - s = s.replace(os.path.dirname(os.__file__) + '/', '') - s = s.replace(sys.prefix + '/', '') - return s diff --git a/contrib/python/coverage/py2/coverage/disposition.py b/contrib/python/coverage/py2/coverage/disposition.py deleted file mode 100644 index 9b9a997d8ae..00000000000 --- a/contrib/python/coverage/py2/coverage/disposition.py +++ /dev/null @@ -1,37 +0,0 @@ -# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 -# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt - -"""Simple value objects for tracking what to do with files.""" - - -class FileDisposition(object): - """A simple value type for 
recording what to do with a file.""" - pass - - -# FileDisposition "methods": FileDisposition is a pure value object, so it can -# be implemented in either C or Python. Acting on them is done with these -# functions. - -def disposition_init(cls, original_filename): - """Construct and initialize a new FileDisposition object.""" - disp = cls() - disp.original_filename = original_filename - disp.canonical_filename = original_filename - disp.source_filename = None - disp.trace = False - disp.reason = "" - disp.file_tracer = None - disp.has_dynamic_filename = False - return disp - - -def disposition_debug_msg(disp): - """Make a nice debug message of what the FileDisposition is doing.""" - if disp.trace: - msg = "Tracing %r" % (disp.original_filename,) - if disp.file_tracer: - msg += ": will be traced by %r" % disp.file_tracer - else: - msg = "Not tracing %r: %s" % (disp.original_filename, disp.reason) - return msg diff --git a/contrib/python/coverage/py2/coverage/env.py b/contrib/python/coverage/py2/coverage/env.py deleted file mode 100644 index ea78a5be89b..00000000000 --- a/contrib/python/coverage/py2/coverage/env.py +++ /dev/null @@ -1,130 +0,0 @@ -# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 -# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt - -"""Determine facts about the environment.""" - -import os -import platform -import sys - -# Operating systems. -WINDOWS = sys.platform == "win32" -LINUX = sys.platform.startswith("linux") - -# Python implementations. -CPYTHON = (platform.python_implementation() == "CPython") -PYPY = (platform.python_implementation() == "PyPy") -JYTHON = (platform.python_implementation() == "Jython") -IRONPYTHON = (platform.python_implementation() == "IronPython") - -# Python versions. We amend version_info with one more value, a zero if an -# official version, or 1 if built from source beyond an official version. 
-PYVERSION = sys.version_info + (int(platform.python_version()[-1] == "+"),) -PY2 = PYVERSION < (3, 0) -PY3 = PYVERSION >= (3, 0) - -if PYPY: - PYPYVERSION = sys.pypy_version_info - -PYPY2 = PYPY and PY2 -PYPY3 = PYPY and PY3 - -# Python behavior. -class PYBEHAVIOR(object): - """Flags indicating this Python's behavior.""" - - pep626 = CPYTHON and (PYVERSION > (3, 10, 0, 'alpha', 4)) - - # Is "if __debug__" optimized away? - if PYPY3: - optimize_if_debug = True - elif PYPY2: - optimize_if_debug = False - else: - optimize_if_debug = not pep626 - - # Is "if not __debug__" optimized away? - optimize_if_not_debug = (not PYPY) and (PYVERSION >= (3, 7, 0, 'alpha', 4)) - if pep626: - optimize_if_not_debug = False - if PYPY3: - optimize_if_not_debug = True - - # Is "if not __debug__" optimized away even better? - optimize_if_not_debug2 = (not PYPY) and (PYVERSION >= (3, 8, 0, 'beta', 1)) - if pep626: - optimize_if_not_debug2 = False - - # Do we have yield-from? - yield_from = (PYVERSION >= (3, 3)) - - # Do we have PEP 420 namespace packages? - namespaces_pep420 = (PYVERSION >= (3, 3)) - - # Do .pyc files have the source file size recorded in them? - size_in_pyc = (PYVERSION >= (3, 3)) - - # Do we have async and await syntax? - async_syntax = (PYVERSION >= (3, 5)) - - # PEP 448 defined additional unpacking generalizations - unpackings_pep448 = (PYVERSION >= (3, 5)) - - # Can co_lnotab have negative deltas? - negative_lnotab = (PYVERSION >= (3, 6)) and not (PYPY and PYPYVERSION < (7, 2)) - - # Do .pyc files conform to PEP 552? Hash-based pyc's. - hashed_pyc_pep552 = (PYVERSION >= (3, 7, 0, 'alpha', 4)) - - # Python 3.7.0b3 changed the behavior of the sys.path[0] entry for -m. It - # used to be an empty string (meaning the current directory). It changed - # to be the actual path to the current directory, so that os.chdir wouldn't - # affect the outcome. 
- actual_syspath0_dash_m = CPYTHON and (PYVERSION >= (3, 7, 0, 'beta', 3)) - - # 3.7 changed how functions with only docstrings are numbered. - docstring_only_function = (not PYPY) and ((3, 7, 0, 'beta', 5) <= PYVERSION <= (3, 10)) - - # When a break/continue/return statement in a try block jumps to a finally - # block, does the finally block do the break/continue/return (pre-3.8), or - # does the finally jump back to the break/continue/return (3.8) to do the - # work? - finally_jumps_back = ((3, 8) <= PYVERSION < (3, 10)) - - # When a function is decorated, does the trace function get called for the - # @-line and also the def-line (new behavior in 3.8)? Or just the @-line - # (old behavior)? - trace_decorated_def = (PYVERSION >= (3, 8)) - - # Are while-true loops optimized into absolute jumps with no loop setup? - nix_while_true = (PYVERSION >= (3, 8)) - - # Python 3.9a1 made sys.argv[0] and other reported files absolute paths. - report_absolute_files = (PYVERSION >= (3, 9)) - - # Lines after break/continue/return/raise are no longer compiled into the - # bytecode. They used to be marked as missing, now they aren't executable. - omit_after_jump = pep626 - - # PyPy has always omitted statements after return. - omit_after_return = omit_after_jump or PYPY - - # Modules used to have firstlineno equal to the line number of the first - # real line of code. Now they always start at 1. - module_firstline_1 = pep626 - - # Are "if 0:" lines (and similar) kept in the compiled code? - keep_constant_test = pep626 - -# Coverage.py specifics. - -# Are we using the C-implemented trace function? -C_TRACER = os.getenv('COVERAGE_TEST_TRACER', 'c') == 'c' - -# Are we coverage-measuring ourselves? -METACOV = os.getenv('COVERAGE_COVERAGE', '') != '' - -# Are we running our test suite? -# Even when running tests, you can use COVERAGE_TESTING=0 to disable the -# test-specific behavior like contracts. 
-TESTING = os.getenv('COVERAGE_TESTING', '') == 'True' diff --git a/contrib/python/coverage/py2/coverage/execfile.py b/contrib/python/coverage/py2/coverage/execfile.py deleted file mode 100644 index 29409d517a2..00000000000 --- a/contrib/python/coverage/py2/coverage/execfile.py +++ /dev/null @@ -1,362 +0,0 @@ -# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 -# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt - -"""Execute files of Python code.""" - -import inspect -import marshal -import os -import struct -import sys -import types - -from coverage import env -from coverage.backward import BUILTINS -from coverage.backward import PYC_MAGIC_NUMBER, imp, importlib_util_find_spec -from coverage.files import canonical_filename, python_reported_file -from coverage.misc import CoverageException, ExceptionDuringRun, NoCode, NoSource, isolate_module -from coverage.phystokens import compile_unicode -from coverage.python import get_python_source - -os = isolate_module(os) - - -class DummyLoader(object): - """A shim for the pep302 __loader__, emulating pkgutil.ImpLoader. - - Currently only implements the .fullname attribute - """ - def __init__(self, fullname, *_args): - self.fullname = fullname - - -if importlib_util_find_spec: - def find_module(modulename): - """Find the module named `modulename`. - - Returns the file path of the module, the name of the enclosing - package, and the spec. 
- """ - try: - spec = importlib_util_find_spec(modulename) - except ImportError as err: - raise NoSource(str(err)) - if not spec: - raise NoSource("No module named %r" % (modulename,)) - pathname = spec.origin - packagename = spec.name - if spec.submodule_search_locations: - mod_main = modulename + ".__main__" - spec = importlib_util_find_spec(mod_main) - if not spec: - raise NoSource( - "No module named %s; " - "%r is a package and cannot be directly executed" - % (mod_main, modulename) - ) - pathname = spec.origin - packagename = spec.name - packagename = packagename.rpartition(".")[0] - return pathname, packagename, spec -else: - def find_module(modulename): - """Find the module named `modulename`. - - Returns the file path of the module, the name of the enclosing - package, and None (where a spec would have been). - """ - openfile = None - glo, loc = globals(), locals() - try: - # Search for the module - inside its parent package, if any - using - # standard import mechanics. - if '.' in modulename: - packagename, name = modulename.rsplit('.', 1) - package = __import__(packagename, glo, loc, ['__path__']) - searchpath = package.__path__ - else: - packagename, name = None, modulename - searchpath = None # "top-level search" in imp.find_module() - openfile, pathname, _ = imp.find_module(name, searchpath) - - # Complain if this is a magic non-file module. - if openfile is None and pathname is None: - raise NoSource( - "module does not live in a file: %r" % modulename - ) - - # If `modulename` is actually a package, not a mere module, then we - # pretend to be Python 2.7 and try running its __main__.py script. 
- if openfile is None: - packagename = modulename - name = '__main__' - package = __import__(packagename, glo, loc, ['__path__']) - searchpath = package.__path__ - openfile, pathname, _ = imp.find_module(name, searchpath) - except ImportError as err: - raise NoSource(str(err)) - finally: - if openfile: - openfile.close() - - return pathname, packagename, None - - -class PyRunner(object): - """Multi-stage execution of Python code. - - This is meant to emulate real Python execution as closely as possible. - - """ - def __init__(self, args, as_module=False): - self.args = args - self.as_module = as_module - - self.arg0 = args[0] - self.package = self.modulename = self.pathname = self.loader = self.spec = None - - def prepare(self): - """Set sys.path properly. - - This needs to happen before any importing, and without importing anything. - """ - if self.as_module: - if env.PYBEHAVIOR.actual_syspath0_dash_m: - path0 = os.getcwd() - else: - path0 = "" - elif os.path.isdir(self.arg0): - # Running a directory means running the __main__.py file in that - # directory. - path0 = self.arg0 - else: - path0 = os.path.abspath(os.path.dirname(self.arg0)) - - if os.path.isdir(sys.path[0]): - # sys.path fakery. If we are being run as a command, then sys.path[0] - # is the directory of the "coverage" script. If this is so, replace - # sys.path[0] with the directory of the file we're running, or the - # current directory when running modules. If it isn't so, then we - # don't know what's going on, and just leave it alone. - top_file = inspect.stack()[-1][0].f_code.co_filename - sys_path_0_abs = os.path.abspath(sys.path[0]) - top_file_dir_abs = os.path.abspath(os.path.dirname(top_file)) - sys_path_0_abs = canonical_filename(sys_path_0_abs) - top_file_dir_abs = canonical_filename(top_file_dir_abs) - if sys_path_0_abs != top_file_dir_abs: - path0 = None - - else: - # sys.path[0] is a file. Is the next entry the directory containing - # that file? 
- if sys.path[1] == os.path.dirname(sys.path[0]): - # Can it be right to always remove that? - del sys.path[1] - - if path0 is not None: - sys.path[0] = python_reported_file(path0) - - def _prepare2(self): - """Do more preparation to run Python code. - - Includes finding the module to run and adjusting sys.argv[0]. - This method is allowed to import code. - - """ - if self.as_module: - self.modulename = self.arg0 - pathname, self.package, self.spec = find_module(self.modulename) - if self.spec is not None: - self.modulename = self.spec.name - self.loader = DummyLoader(self.modulename) - self.pathname = os.path.abspath(pathname) - self.args[0] = self.arg0 = self.pathname - elif os.path.isdir(self.arg0): - # Running a directory means running the __main__.py file in that - # directory. - for ext in [".py", ".pyc", ".pyo"]: - try_filename = os.path.join(self.arg0, "__main__" + ext) - if os.path.exists(try_filename): - self.arg0 = try_filename - break - else: - raise NoSource("Can't find '__main__' module in '%s'" % self.arg0) - - if env.PY2: - self.arg0 = os.path.abspath(self.arg0) - - # Make a spec. I don't know if this is the right way to do it. 
-            try:
-                import importlib.machinery
-            except ImportError:
-                pass
-            else:
-                try_filename = python_reported_file(try_filename)
-                self.spec = importlib.machinery.ModuleSpec("__main__", None, origin=try_filename)
-                self.spec.has_location = True
-            self.package = ""
-            self.loader = DummyLoader("__main__")
-        else:
-            if env.PY3:
-                self.loader = DummyLoader("__main__")
-
-        self.arg0 = python_reported_file(self.arg0)
-
-    def run(self):
-        """Run the Python code!"""
-
-        self._prepare2()
-
-        # Create a module to serve as __main__
-        main_mod = types.ModuleType('__main__')
-
-        from_pyc = self.arg0.endswith((".pyc", ".pyo"))
-        main_mod.__file__ = self.arg0
-        if from_pyc:
-            main_mod.__file__ = main_mod.__file__[:-1]
-        if self.package is not None:
-            main_mod.__package__ = self.package
-        main_mod.__loader__ = self.loader
-        if self.spec is not None:
-            main_mod.__spec__ = self.spec
-
-        main_mod.__builtins__ = BUILTINS
-
-        sys.modules['__main__'] = main_mod
-
-        # Set sys.argv properly.
-        sys.argv = self.args
-
-        try:
-            # Make a code object somehow.
-            if from_pyc:
-                code = make_code_from_pyc(self.arg0)
-            else:
-                code = make_code_from_py(self.arg0)
-        except CoverageException:
-            raise
-        except Exception as exc:
-            msg = "Couldn't run '{filename}' as Python code: {exc.__class__.__name__}: {exc}"
-            raise CoverageException(msg.format(filename=self.arg0, exc=exc))
-
-        # Execute the code object.
-        # Return to the original directory in case the test code exits in
-        # a non-existent directory.
-        cwd = os.getcwd()
-        try:
-            exec(code, main_mod.__dict__)
-        except SystemExit:  # pylint: disable=try-except-raise
-            # The user called sys.exit(). Just pass it along to the upper
-            # layers, where it will be handled.
-            raise
-        except Exception:
-            # Something went wrong while executing the user code.
-            # Get the exc_info, and pack them into an exception that we can
-            # throw up to the outer loop.
We peel one layer off the traceback - # so that the coverage.py code doesn't appear in the final printed - # traceback. - typ, err, tb = sys.exc_info() - - # PyPy3 weirdness. If I don't access __context__, then somehow it - # is non-None when the exception is reported at the upper layer, - # and a nested exception is shown to the user. This getattr fixes - # it somehow? https://bitbucket.org/pypy/pypy/issue/1903 - getattr(err, '__context__', None) - - # Call the excepthook. - try: - if hasattr(err, "__traceback__"): - err.__traceback__ = err.__traceback__.tb_next - sys.excepthook(typ, err, tb.tb_next) - except SystemExit: # pylint: disable=try-except-raise - raise - except Exception: - # Getting the output right in the case of excepthook - # shenanigans is kind of involved. - sys.stderr.write("Error in sys.excepthook:\n") - typ2, err2, tb2 = sys.exc_info() - err2.__suppress_context__ = True - if hasattr(err2, "__traceback__"): - err2.__traceback__ = err2.__traceback__.tb_next - sys.__excepthook__(typ2, err2, tb2.tb_next) - sys.stderr.write("\nOriginal exception was:\n") - raise ExceptionDuringRun(typ, err, tb.tb_next) - else: - sys.exit(1) - finally: - os.chdir(cwd) - - -def run_python_module(args): - """Run a Python module, as though with ``python -m name args...``. - - `args` is the argument array to present as sys.argv, including the first - element naming the module being executed. - - This is a helper for tests, to encapsulate how to use PyRunner. - - """ - runner = PyRunner(args, as_module=True) - runner.prepare() - runner.run() - - -def run_python_file(args): - """Run a Python file as if it were the main program on the command line. - - `args` is the argument array to present as sys.argv, including the first - element naming the file being executed. `package` is the name of the - enclosing package, if any. - - This is a helper for tests, to encapsulate how to use PyRunner. 
- - """ - runner = PyRunner(args, as_module=False) - runner.prepare() - runner.run() - - -def make_code_from_py(filename): - """Get source from `filename` and make a code object of it.""" - # Open the source file. - try: - source = get_python_source(filename) - except (IOError, NoSource): - raise NoSource("No file to run: '%s'" % filename) - - code = compile_unicode(source, filename, "exec") - return code - - -def make_code_from_pyc(filename): - """Get a code object from a .pyc file.""" - try: - fpyc = open(filename, "rb") - except IOError: - raise NoCode("No file to run: '%s'" % filename) - - with fpyc: - # First four bytes are a version-specific magic number. It has to - # match or we won't run the file. - magic = fpyc.read(4) - if magic != PYC_MAGIC_NUMBER: - raise NoCode("Bad magic number in .pyc file: {} != {}".format(magic, PYC_MAGIC_NUMBER)) - - date_based = True - if env.PYBEHAVIOR.hashed_pyc_pep552: - flags = struct.unpack('<L', fpyc.read(4))[0] - hash_based = flags & 0x01 - if hash_based: - fpyc.read(8) # Skip the hash. - date_based = False - if date_based: - # Skip the junk in the header that we don't need. - fpyc.read(4) # Skip the moddate. - if env.PYBEHAVIOR.size_in_pyc: - # 3.3 added another long to the header (size), skip it. - fpyc.read(4) - - # The rest of the file is the code object we want. 
- code = marshal.load(fpyc) - - return code diff --git a/contrib/python/coverage/py2/coverage/files.py b/contrib/python/coverage/py2/coverage/files.py deleted file mode 100644 index 5133ad07f36..00000000000 --- a/contrib/python/coverage/py2/coverage/files.py +++ /dev/null @@ -1,441 +0,0 @@ -# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 -# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt - -"""File wrangling.""" - -import hashlib -import fnmatch -import ntpath -import os -import os.path -import posixpath -import re -import sys - -from coverage import env -from coverage.backward import unicode_class -from coverage.misc import contract, CoverageException, join_regex, isolate_module - - -os = isolate_module(os) - - -def set_relative_directory(): - """Set the directory that `relative_filename` will be relative to.""" - global RELATIVE_DIR, CANONICAL_FILENAME_CACHE - - # The absolute path to our current directory. - RELATIVE_DIR = os.path.normcase(abs_file(os.curdir) + os.sep) - - # Cache of results of calling the canonical_filename() method, to - # avoid duplicating work. - CANONICAL_FILENAME_CACHE = {} - - -def relative_directory(): - """Return the directory that `relative_filename` is relative to.""" - return RELATIVE_DIR - - -@contract(returns='unicode') -def relative_filename(filename): - """Return the relative form of `filename`. - - The file name will be relative to the current directory when the - `set_relative_directory` was called. - - """ - fnorm = os.path.normcase(filename) - if fnorm.startswith(RELATIVE_DIR): - filename = filename[len(RELATIVE_DIR):] - return unicode_filename(filename) - - -@contract(returns='unicode') -def canonical_filename(filename): - """Return a canonical file name for `filename`. - - An absolute path with no redundant components and normalized case. 
- - """ - if filename not in CANONICAL_FILENAME_CACHE: - cf = filename - if not os.path.isabs(filename): - for path in [os.curdir] + sys.path: - if path is None: - continue - f = os.path.join(path, filename) - try: - exists = os.path.exists(f) - except UnicodeError: - exists = False - if exists: - cf = f - break - cf = abs_file(cf) - CANONICAL_FILENAME_CACHE[filename] = cf - return CANONICAL_FILENAME_CACHE[filename] - -if getattr(sys, 'is_standalone_binary', False): - # filename for py files in binary is already canonical, - # it's relative to the arcadia root - def canonical_filename(filename): - # next assert does not needed in case when we load coverage from not arcadia source in arcadia binary - # assert not filename.startswith("/"), filename - return filename - -MAX_FLAT = 200 - -@contract(filename='unicode', returns='unicode') -def flat_rootname(filename): - """A base for a flat file name to correspond to this file. - - Useful for writing files about the code where you want all the files in - the same directory, but need to differentiate same-named files from - different directories. - - For example, the file a/b/c.py will return 'a_b_c_py' - - """ - name = ntpath.splitdrive(filename)[1] - name = re.sub(r"[\\/.:]", "_", name) - if len(name) > MAX_FLAT: - h = hashlib.sha1(name.encode('UTF-8')).hexdigest() - name = name[-(MAX_FLAT-len(h)-1):] + '_' + h - return name - - -if env.WINDOWS: - - _ACTUAL_PATH_CACHE = {} - _ACTUAL_PATH_LIST_CACHE = {} - - def actual_path(path): - """Get the actual path of `path`, including the correct case.""" - if env.PY2 and isinstance(path, unicode_class): - path = path.encode(sys.getfilesystemencoding()) - if path in _ACTUAL_PATH_CACHE: - return _ACTUAL_PATH_CACHE[path] - - head, tail = os.path.split(path) - if not tail: - # This means head is the drive spec: normalize it. 
- actpath = head.upper() - elif not head: - actpath = tail - else: - head = actual_path(head) - if head in _ACTUAL_PATH_LIST_CACHE: - files = _ACTUAL_PATH_LIST_CACHE[head] - else: - try: - files = os.listdir(head) - except Exception: - # This will raise OSError, or this bizarre TypeError: - # https://bugs.python.org/issue1776160 - files = [] - _ACTUAL_PATH_LIST_CACHE[head] = files - normtail = os.path.normcase(tail) - for f in files: - if os.path.normcase(f) == normtail: - tail = f - break - actpath = os.path.join(head, tail) - _ACTUAL_PATH_CACHE[path] = actpath - return actpath - -else: - def actual_path(filename): - """The actual path for non-Windows platforms.""" - return filename - - -if env.PY2: - @contract(returns='unicode') - def unicode_filename(filename): - """Return a Unicode version of `filename`.""" - if isinstance(filename, str): - encoding = sys.getfilesystemencoding() or sys.getdefaultencoding() - filename = filename.decode(encoding, "replace") - return filename -else: - @contract(filename='unicode', returns='unicode') - def unicode_filename(filename): - """Return a Unicode version of `filename`.""" - return filename - - -@contract(returns='unicode') -def abs_file(path): - """Return the absolute normalized form of `path`.""" - try: - path = os.path.realpath(path) - except UnicodeError: - pass - path = os.path.abspath(path) - path = actual_path(path) - path = unicode_filename(path) - return path - - -def python_reported_file(filename): - """Return the string as Python would describe this file name.""" - if env.PYBEHAVIOR.report_absolute_files: - filename = os.path.abspath(filename) - return filename - - -RELATIVE_DIR = None -CANONICAL_FILENAME_CACHE = None -set_relative_directory() - - -def isabs_anywhere(filename): - """Is `filename` an absolute path on any OS?""" - return ntpath.isabs(filename) or posixpath.isabs(filename) - - -def prep_patterns(patterns): - """Prepare the file patterns for use in a `FnmatchMatcher`. 
- - If a pattern starts with a wildcard, it is used as a pattern - as-is. If it does not start with a wildcard, then it is made - absolute with the current directory. - - If `patterns` is None, an empty list is returned. - - """ - prepped = [] - for p in patterns or []: - if p.startswith(("*", "?")): - prepped.append(p) - else: - prepped.append(abs_file(p)) - return prepped - - -class TreeMatcher(object): - """A matcher for files in a tree. - - Construct with a list of paths, either files or directories. Paths match - with the `match` method if they are one of the files, or if they are - somewhere in a subtree rooted at one of the directories. - - """ - def __init__(self, paths): - self.paths = list(paths) - - def __repr__(self): - return "<TreeMatcher %r>" % self.paths - - def info(self): - """A list of strings for displaying when dumping state.""" - return self.paths - - def match(self, fpath): - """Does `fpath` indicate a file in one of our trees?""" - for p in self.paths: - if fpath.startswith(p): - if fpath == p: - # This is the same file! 
- return True - if fpath[len(p)] == os.sep: - # This is a file in the directory - return True - return False - - -class ModuleMatcher(object): - """A matcher for modules in a tree.""" - def __init__(self, module_names): - self.modules = list(module_names) - - def __repr__(self): - return "<ModuleMatcher %r>" % (self.modules) - - def info(self): - """A list of strings for displaying when dumping state.""" - return self.modules - - def match(self, module_name): - """Does `module_name` indicate a module in one of our packages?""" - if not module_name: - return False - - for m in self.modules: - if module_name.startswith(m): - if module_name == m: - return True - if module_name[len(m)] == '.': - # This is a module in the package - return True - - return False - - -class FnmatchMatcher(object): - """A matcher for files by file name pattern.""" - def __init__(self, pats): - self.pats = list(pats) - self.re = fnmatches_to_regex(self.pats, case_insensitive=env.WINDOWS) - - def __repr__(self): - return "<FnmatchMatcher %r>" % self.pats - - def info(self): - """A list of strings for displaying when dumping state.""" - return self.pats - - def match(self, fpath): - """Does `fpath` match one of our file name patterns?""" - return self.re.match(fpath) is not None - - -def sep(s): - """Find the path separator used in this string, or os.sep if none.""" - sep_match = re.search(r"[\\/]", s) - if sep_match: - the_sep = sep_match.group(0) - else: - the_sep = os.sep - return the_sep - - -def fnmatches_to_regex(patterns, case_insensitive=False, partial=False): - """Convert fnmatch patterns to a compiled regex that matches any of them. - - Slashes are always converted to match either slash or backslash, for - Windows support, even when running elsewhere. - - If `partial` is true, then the pattern will match if the target string - starts with the pattern. Otherwise, it must match the entire string. - - Returns: a compiled regex object. Use the .match method to compare target - strings. 
- - """ - regexes = (fnmatch.translate(pattern) for pattern in patterns) - # Python3.7 fnmatch translates "/" as "/". Before that, it translates as "\/", - # so we have to deal with maybe a backslash. - regexes = (re.sub(r"\\?/", r"[\\\\/]", regex) for regex in regexes) - - if partial: - # fnmatch always adds a \Z to match the whole string, which we don't - # want, so we remove the \Z. While removing it, we only replace \Z if - # followed by paren (introducing flags), or at end, to keep from - # destroying a literal \Z in the pattern. - regexes = (re.sub(r'\\Z(\(\?|$)', r'\1', regex) for regex in regexes) - - flags = 0 - if case_insensitive: - flags |= re.IGNORECASE - compiled = re.compile(join_regex(regexes), flags=flags) - - return compiled - - -class PathAliases(object): - """A collection of aliases for paths. - - When combining data files from remote machines, often the paths to source - code are different, for example, due to OS differences, or because of - serialized checkouts on continuous integration machines. - - A `PathAliases` object tracks a list of pattern/result pairs, and can - map a path through those aliases to produce a unified path. - - """ - def __init__(self): - self.aliases = [] - - def pprint(self): # pragma: debugging - """Dump the important parts of the PathAliases, for debugging.""" - for regex, result in self.aliases: - print("{!r} --> {!r}".format(regex.pattern, result)) - - def add(self, pattern, result): - """Add the `pattern`/`result` pair to the list of aliases. - - `pattern` is an `fnmatch`-style pattern. `result` is a simple - string. When mapping paths, if a path starts with a match against - `pattern`, then that match is replaced with `result`. This models - isomorphic source trees being rooted at different places on two - different machines. - - `pattern` can't end with a wildcard component, since that would - match an entire tree, and not just its root. 
- - """ - pattern_sep = sep(pattern) - - if len(pattern) > 1: - pattern = pattern.rstrip(r"\/") - - # The pattern can't end with a wildcard component. - if pattern.endswith("*"): - raise CoverageException("Pattern must not end with wildcards.") - - # The pattern is meant to match a filepath. Let's make it absolute - # unless it already is, or is meant to match any prefix. - if not pattern.startswith('*') and not isabs_anywhere(pattern + - pattern_sep): - pattern = abs_file(pattern) - if not pattern.endswith(pattern_sep): - pattern += pattern_sep - - # Make a regex from the pattern. - regex = fnmatches_to_regex([pattern], case_insensitive=True, partial=True) - - # Normalize the result: it must end with a path separator. - result_sep = sep(result) - result = result.rstrip(r"\/") + result_sep - self.aliases.append((regex, result)) - - def map(self, path): - """Map `path` through the aliases. - - `path` is checked against all of the patterns. The first pattern to - match is used to replace the root of the path with the result root. - Only one pattern is ever used. If no patterns match, `path` is - returned unchanged. - - The separator style in the result is made to match that of the result - in the alias. - - Returns the mapped path. If a mapping has happened, this is a - canonical path. If no mapping has happened, it is the original value - of `path` unchanged. - - """ - for regex, result in self.aliases: - m = regex.match(path) - if m: - new = path.replace(m.group(0), result) - new = new.replace(sep(path), sep(result)) - new = canonical_filename(new) - return new - return path - - -def find_python_files(dirname): - """Yield all of the importable Python files in `dirname`, recursively. - - To be importable, the files have to be in a directory with a __init__.py, - except for `dirname` itself, which isn't required to have one. 
The - assumption is that `dirname` was specified directly, so the user knows - best, but sub-directories are checked for a __init__.py to be sure we only - find the importable files. - - """ - for i, (dirpath, dirnames, filenames) in enumerate(os.walk(dirname)): - if i > 0 and '__init__.py' not in filenames: - # If a directory doesn't have __init__.py, then it isn't - # importable and neither are its files - del dirnames[:] - continue - for filename in filenames: - # We're only interested in files that look like reasonable Python - # files: Must end with .py or .pyw, and must not have certain funny - # characters that probably mean they are editor junk. - if re.match(r"^[^.#~!$@%^&*()+=,]+\.pyw?$", filename): - yield os.path.join(dirpath, filename) diff --git a/contrib/python/coverage/py2/coverage/fullcoverage/encodings.py b/contrib/python/coverage/py2/coverage/fullcoverage/encodings.py deleted file mode 100644 index aeb416e4060..00000000000 --- a/contrib/python/coverage/py2/coverage/fullcoverage/encodings.py +++ /dev/null @@ -1,60 +0,0 @@ -# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 -# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt - -"""Imposter encodings module that installs a coverage-style tracer. - -This is NOT the encodings module; it is an imposter that sets up tracing -instrumentation and then replaces itself with the real encodings module. - -If the directory that holds this file is placed first in the PYTHONPATH when -using "coverage" to run Python's tests, then this file will become the very -first module imported by the internals of Python 3. It installs a -coverage.py-compatible trace function that can watch Standard Library modules -execute from the very earliest stages of Python's own boot process. This fixes -a problem with coverage.py - that it starts too late to trace the coverage of -many of the most fundamental modules in the Standard Library. 
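The pruning walk in `find_python_files` above can be exercised in isolation. A minimal sketch under the same rules (the `walk_importable` name is mine):

```python
import os
import re

def walk_importable(dirname):
    """Yield Python files under dirname, pruning dirs without __init__.py."""
    for i, (dirpath, dirnames, filenames) in enumerate(os.walk(dirname)):
        if i > 0 and "__init__.py" not in filenames:
            del dirnames[:]  # not a package: stop descending, skip its files
            continue
        for filename in filenames:
            # Same "reasonable Python file" filter as the original.
            if re.match(r"^[^.#~!$@%^&*()+=,]+\.pyw?$", filename):
                yield os.path.join(dirpath, filename)
```

Note that `dirname` itself is exempt from the `__init__.py` requirement (the `i > 0` check), exactly as the docstring above describes.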
- -""" - -import sys - -class FullCoverageTracer(object): - def __init__(self): - # `traces` is a list of trace events. Frames are tricky: the same - # frame object is used for a whole scope, with new line numbers - # written into it. So in one scope, all the frame objects are the - # same object, and will eventually all will point to the last line - # executed. So we keep the line numbers alongside the frames. - # The list looks like: - # - # traces = [ - # ((frame, event, arg), lineno), ... - # ] - # - self.traces = [] - - def fullcoverage_trace(self, *args): - frame, event, arg = args - self.traces.append((args, frame.f_lineno)) - return self.fullcoverage_trace - -sys.settrace(FullCoverageTracer().fullcoverage_trace) - -# In coverage/files.py is actual_filename(), which uses glob.glob. I don't -# understand why, but that use of glob borks everything if fullcoverage is in -# effect. So here we make an ugly hail-mary pass to switch off glob.glob over -# there. This means when using fullcoverage, Windows path names will not be -# their actual case. - -#sys.fullcoverage = True - -# Finally, remove our own directory from sys.path; remove ourselves from -# sys.modules; and re-import "encodings", which will be the real package -# this time. Note that the delete from sys.modules dictionary has to -# happen last, since all of the symbols in this module will become None -# at that exact moment, including "sys". 
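The `sys.settrace` mechanism the imposter module installs can be shown in miniature. A hedged sketch (not the coverage.py tracer; it records line numbers relative to the traced function's `def` line so the result is stable):

```python
import sys

def trace_lines(func, *args):
    """Run func(*args) under a trace function; return executed line offsets."""
    lines = []

    def tracer(frame, event, arg):
        if event == "line":
            # Offset from the function's def line.
            lines.append(frame.f_lineno - frame.f_code.co_firstlineno)
        return tracer  # keep receiving events for this frame

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return lines
```

As the comment in `FullCoverageTracer.__init__` warns, a real tracer must not hold bare frames and read `f_lineno` later: the frame object is reused per scope, so the line number has to be captured at event time, as done here.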
- -parentdir = max(filter(__file__.startswith, sys.path), key=len) -sys.path.remove(parentdir) -del sys.modules['encodings'] -import encodings diff --git a/contrib/python/coverage/py2/coverage/html.py b/contrib/python/coverage/py2/coverage/html.py deleted file mode 100644 index 9d8e342716e..00000000000 --- a/contrib/python/coverage/py2/coverage/html.py +++ /dev/null @@ -1,539 +0,0 @@ -# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 -# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt - -"""HTML reporting for coverage.py.""" - -import datetime -import json -import os -import re -import shutil -import sys - -import coverage -from coverage import env -from coverage.backward import iitems, SimpleNamespace, format_local_datetime -from coverage.data import add_data_to_hash -from coverage.files import flat_rootname -from coverage.misc import CoverageException, ensure_dir, file_be_gone, Hasher, isolate_module -from coverage.report import get_analysis_to_report -from coverage.results import Numbers -from coverage.templite import Templite - -os = isolate_module(os) - - -# Static files are looked for in a list of places. -STATIC_PATH = [ - # The place Debian puts system Javascript libraries. - "/usr/share/javascript", - - # Our htmlfiles directory. - os.path.join(os.path.dirname(__file__), "htmlfiles"), -] - - -def data_filename(fname, pkgdir=""): - """Return the path to a data file of ours. - - The file is searched for on `STATIC_PATH`, and the first place it's found, - is returned. - - Each directory in `STATIC_PATH` is searched as-is, and also, if `pkgdir` - is provided, at that sub-directory. 
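The search order this docstring describes (each `STATIC_PATH` entry as-is, then with `pkgdir` appended) reduces to a small first-match scan. A sketch; `find_on_path` is my name, and it returns `None` where the real `data_filename` raises `CoverageException`:

```python
import os

def find_on_path(fname, dirs, subdir=""):
    """Return the first dirs[i]/fname (or dirs[i]/subdir/fname) that exists."""
    for d in dirs:
        candidates = [os.path.join(d, fname)]
        if subdir:
            candidates.append(os.path.join(d, subdir, fname))
        for candidate in candidates:
            if os.path.exists(candidate):
                return candidate
    return None
```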
- - """ - tried = [] - for static_dir in STATIC_PATH: - static_filename = os.path.join(static_dir, fname) - if os.path.exists(static_filename): - return static_filename - else: - tried.append(static_filename) - if pkgdir: - static_filename = os.path.join(static_dir, pkgdir, fname) - if os.path.exists(static_filename): - return static_filename - else: - tried.append(static_filename) - raise CoverageException( - "Couldn't find static file %r from %r, tried: %r" % (fname, os.getcwd(), tried) - ) - - -def get_htmlfiles_resource(name): - import pkgutil - return pkgutil.get_data(__package__, 'htmlfiles/' + name) - - -def read_data(fname): - """Return the contents of a data file of ours.""" - if getattr(sys, 'is_standalone_binary', False): - res_buf = get_htmlfiles_resource(fname).decode() - if res_buf is not None: - return res_buf - - with open(data_filename(fname)) as data_file: - return data_file.read() - - -def write_html(fname, html): - """Write `html` to `fname`, properly encoded.""" - html = re.sub(r"(\A\s+)|(\s+$)", "", html, flags=re.MULTILINE) + "\n" - with open(fname, "wb") as fout: - fout.write(html.encode('ascii', 'xmlcharrefreplace')) - - -class HtmlDataGeneration(object): - """Generate structured data to be turned into HTML reports.""" - - EMPTY = "(empty)" - - def __init__(self, cov): - self.coverage = cov - self.config = self.coverage.config - data = self.coverage.get_data() - self.has_arcs = data.has_arcs() - if self.config.show_contexts: - if data.measured_contexts() == {""}: - self.coverage._warn("No contexts were measured") - data.set_query_contexts(self.config.report_contexts) - - def data_for_file(self, fr, analysis): - """Produce the data needed for one file's report.""" - if self.has_arcs: - missing_branch_arcs = analysis.missing_branch_arcs() - arcs_executed = analysis.arcs_executed() - - if self.config.show_contexts: - contexts_by_lineno = analysis.data.contexts_by_lineno(analysis.filename) - - lines = [] - - for lineno, tokens in 
enumerate(fr.source_token_lines(), start=1): - # Figure out how to mark this line. - category = None - short_annotations = [] - long_annotations = [] - - if lineno in analysis.excluded: - category = 'exc' - elif lineno in analysis.missing: - category = 'mis' - elif self.has_arcs and lineno in missing_branch_arcs: - category = 'par' - for b in missing_branch_arcs[lineno]: - if b < 0: - short_annotations.append("exit") - else: - short_annotations.append(b) - long_annotations.append(fr.missing_arc_description(lineno, b, arcs_executed)) - elif lineno in analysis.statements: - category = 'run' - - contexts = contexts_label = None - context_list = None - if category and self.config.show_contexts: - contexts = sorted(c or self.EMPTY for c in contexts_by_lineno[lineno]) - if contexts == [self.EMPTY]: - contexts_label = self.EMPTY - else: - contexts_label = "{} ctx".format(len(contexts)) - context_list = contexts - - lines.append(SimpleNamespace( - tokens=tokens, - number=lineno, - category=category, - statement=(lineno in analysis.statements), - contexts=contexts, - contexts_label=contexts_label, - context_list=context_list, - short_annotations=short_annotations, - long_annotations=long_annotations, - )) - - file_data = SimpleNamespace( - relative_filename=fr.relative_filename(), - nums=analysis.numbers, - lines=lines, - ) - - return file_data - - -class HtmlReporter(object): - """HTML reporting.""" - - # These files will be copied from the htmlfiles directory to the output - # directory. 
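The category decision in `data_for_file` above follows a strict precedence. A simplified sketch of just that ladder (it leaves out the `has_arcs` guard and the annotation building):

```python
def classify_line(lineno, excluded, missing, missing_branch_arcs, statements):
    """First matching category wins: excluded > missing > partial > run."""
    if lineno in excluded:
        return "exc"
    if lineno in missing:
        return "mis"
    if lineno in missing_branch_arcs:
        return "par"
    if lineno in statements:
        return "run"
    return None  # not a statement at all (blank line, comment, ...)
```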
- STATIC_FILES = [ - ("style.css", ""), - ("jquery.min.js", "jquery"), - ("jquery.ba-throttle-debounce.min.js", "jquery-throttle-debounce"), - ("jquery.hotkeys.js", "jquery-hotkeys"), - ("jquery.isonscreen.js", "jquery-isonscreen"), - ("jquery.tablesorter.min.js", "jquery-tablesorter"), - ("coverage_html.js", ""), - ("keybd_closed.png", ""), - ("keybd_open.png", ""), - ("favicon_32.png", ""), - ] - - def __init__(self, cov): - self.coverage = cov - self.config = self.coverage.config - self.directory = self.config.html_dir - - self.skip_covered = self.config.html_skip_covered - if self.skip_covered is None: - self.skip_covered = self.config.skip_covered - self.skip_empty = self.config.html_skip_empty - if self.skip_empty is None: - self.skip_empty= self.config.skip_empty - - title = self.config.html_title - if env.PY2: - title = title.decode("utf8") - - if self.config.extra_css: - self.extra_css = os.path.basename(self.config.extra_css) - else: - self.extra_css = None - - self.data = self.coverage.get_data() - self.has_arcs = self.data.has_arcs() - - self.file_summaries = [] - self.all_files_nums = [] - self.incr = IncrementalChecker(self.directory) - self.datagen = HtmlDataGeneration(self.coverage) - self.totals = Numbers() - - self.template_globals = { - # Functions available in the templates. - 'escape': escape, - 'pair': pair, - 'len': len, - - # Constants for this report. - '__url__': coverage.__url__, - '__version__': coverage.__version__, - 'title': title, - 'time_stamp': format_local_datetime(datetime.datetime.now()), - 'extra_css': self.extra_css, - 'has_arcs': self.has_arcs, - 'show_contexts': self.config.show_contexts, - - # Constants for all reports. - # These css classes determine which lines are highlighted by default. 
- 'category': { - 'exc': 'exc show_exc', - 'mis': 'mis show_mis', - 'par': 'par run show_par', - 'run': 'run', - } - } - self.pyfile_html_source = read_data("pyfile.html") - self.source_tmpl = Templite(self.pyfile_html_source, self.template_globals) - - def report(self, morfs): - """Generate an HTML report for `morfs`. - - `morfs` is a list of modules or file names. - - """ - # Read the status data and check that this run used the same - # global data as the last run. - self.incr.read() - self.incr.check_global_data(self.config, self.pyfile_html_source) - - # Process all the files. - for fr, analysis in get_analysis_to_report(self.coverage, morfs): - self.html_file(fr, analysis) - - if not self.all_files_nums: - raise CoverageException("No data to report.") - - self.totals = sum(self.all_files_nums) - - # Write the index file. - self.index_file() - - self.make_local_static_report_files() - return self.totals.n_statements and self.totals.pc_covered - - def make_local_static_report_files(self): - """Make local instances of static files for HTML report.""" - # The files we provide must always be copied. - for static, pkgdir in self.STATIC_FILES: - if getattr(sys, 'is_standalone_binary', False): - data = get_htmlfiles_resource(static) - if data is None: - raise IOError("No such resource: " + static) - - with open(os.path.join(self.directory, static), "wb") as afile: - afile.write(data) - else: - shutil.copyfile( - data_filename(static, pkgdir), - os.path.join(self.directory, static) - ) - - # The user may have extra CSS they want copied. - if self.extra_css: - shutil.copyfile( - self.config.extra_css, - os.path.join(self.directory, self.extra_css) - ) - - def html_file(self, fr, analysis): - """Generate an HTML file for one source file.""" - rootname = flat_rootname(fr.relative_filename()) - html_filename = rootname + ".html" - ensure_dir(self.directory) - html_path = os.path.join(self.directory, html_filename) - - # Get the numbers for this file. 
- nums = analysis.numbers - self.all_files_nums.append(nums) - - if self.skip_covered: - # Don't report on 100% files. - no_missing_lines = (nums.n_missing == 0) - no_missing_branches = (nums.n_partial_branches == 0) - if no_missing_lines and no_missing_branches: - # If there's an existing file, remove it. - file_be_gone(html_path) - return - - if self.skip_empty: - # Don't report on empty files. - if nums.n_statements == 0: - file_be_gone(html_path) - return - - # Find out if the file on disk is already correct. - if self.incr.can_skip_file(self.data, fr, rootname): - self.file_summaries.append(self.incr.index_info(rootname)) - return - - # Write the HTML page for this file. - file_data = self.datagen.data_for_file(fr, analysis) - for ldata in file_data.lines: - # Build the HTML for the line. - html = [] - for tok_type, tok_text in ldata.tokens: - if tok_type == "ws": - html.append(escape(tok_text)) - else: - tok_html = escape(tok_text) or ' ' - html.append( - u'<span class="{}">{}</span>'.format(tok_type, tok_html) - ) - ldata.html = ''.join(html) - - if ldata.short_annotations: - # 202F is NARROW NO-BREAK SPACE. - # 219B is RIGHTWARDS ARROW WITH STROKE. - ldata.annotate = u", ".join( - u"{} ↛ {}".format(ldata.number, d) - for d in ldata.short_annotations - ) - else: - ldata.annotate = None - - if ldata.long_annotations: - longs = ldata.long_annotations - if len(longs) == 1: - ldata.annotate_long = longs[0] - else: - ldata.annotate_long = u"{:d} missed branches: {}".format( - len(longs), - u", ".join( - u"{:d}) {}".format(num, ann_long) - for num, ann_long in enumerate(longs, start=1) - ), - ) - else: - ldata.annotate_long = None - - css_classes = [] - if ldata.category: - css_classes.append(self.template_globals['category'][ldata.category]) - ldata.css_class = ' '.join(css_classes) or "pln" - - html = self.source_tmpl.render(file_data.__dict__) - write_html(html_path, html) - - # Save this file's information for the index file. 
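Two pieces of the page-writing machinery above are easy to demonstrate standalone: wrapping source tokens in `<span>`s, and the ASCII-with-character-references encoding used by `write_html`. This sketch borrows the stdlib `html.escape` as a stand-in for the module's own `escape()`:

```python
import re
from html import escape  # stand-in for coverage's own escape()

def line_html(tokens):
    """Wrap each non-whitespace token in a <span> carrying its token class."""
    parts = []
    for tok_type, tok_text in tokens:
        if tok_type == "ws":
            parts.append(escape(tok_text))
        else:
            parts.append('<span class="%s">%s</span>' % (tok_type, escape(tok_text) or "&nbsp;"))
    return "".join(parts)

def html_bytes(html):
    """Trim leading/trailing whitespace per line, then force ASCII output."""
    html = re.sub(r"(\A\s+)|(\s+$)", "", html, flags=re.MULTILINE) + "\n"
    # Non-ASCII characters survive as numeric character references.
    return html.encode("ascii", "xmlcharrefreplace")
```

The `xmlcharrefreplace` error handler is what lets the report be written as pure ASCII while still displaying any Unicode in the source being reported.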
- index_info = { - 'nums': nums, - 'html_filename': html_filename, - 'relative_filename': fr.relative_filename(), - } - self.file_summaries.append(index_info) - self.incr.set_index_info(rootname, index_info) - - def index_file(self): - """Write the index.html file for this report.""" - index_tmpl = Templite(read_data("index.html"), self.template_globals) - - html = index_tmpl.render({ - 'files': self.file_summaries, - 'totals': self.totals, - }) - - write_html(os.path.join(self.directory, "index.html"), html) - - # Write the latest hashes for next time. - self.incr.write() - - -class IncrementalChecker(object): - """Logic and data to support incremental reporting.""" - - STATUS_FILE = "status.json" - STATUS_FORMAT = 2 - - # pylint: disable=wrong-spelling-in-comment,useless-suppression - # The data looks like: - # - # { - # "format": 2, - # "globals": "540ee119c15d52a68a53fe6f0897346d", - # "version": "4.0a1", - # "files": { - # "cogapp___init__": { - # "hash": "e45581a5b48f879f301c0f30bf77a50c", - # "index": { - # "html_filename": "cogapp___init__.html", - # "relative_filename": "cogapp/__init__", - # "nums": [ 1, 14, 0, 0, 0, 0, 0 ] - # } - # }, - # ... - # "cogapp_whiteutils": { - # "hash": "8504bb427fc488c4176809ded0277d51", - # "index": { - # "html_filename": "cogapp_whiteutils.html", - # "relative_filename": "cogapp/whiteutils", - # "nums": [ 1, 59, 0, 1, 28, 2, 2 ] - # } - # } - # } - # } - - def __init__(self, directory): - self.directory = directory - self.reset() - - def reset(self): - """Initialize to empty. 
Causes all files to be reported.""" - self.globals = '' - self.files = {} - - def read(self): - """Read the information we stored last time.""" - usable = False - try: - status_file = os.path.join(self.directory, self.STATUS_FILE) - with open(status_file) as fstatus: - status = json.load(fstatus) - except (IOError, ValueError): - usable = False - else: - usable = True - if status['format'] != self.STATUS_FORMAT: - usable = False - elif status['version'] != coverage.__version__: - usable = False - - if usable: - self.files = {} - for filename, fileinfo in iitems(status['files']): - fileinfo['index']['nums'] = Numbers(*fileinfo['index']['nums']) - self.files[filename] = fileinfo - self.globals = status['globals'] - else: - self.reset() - - def write(self): - """Write the current status.""" - status_file = os.path.join(self.directory, self.STATUS_FILE) - files = {} - for filename, fileinfo in iitems(self.files): - fileinfo['index']['nums'] = fileinfo['index']['nums'].init_args() - files[filename] = fileinfo - - status = { - 'format': self.STATUS_FORMAT, - 'version': coverage.__version__, - 'globals': self.globals, - 'files': files, - } - with open(status_file, "w") as fout: - json.dump(status, fout, separators=(',', ':')) - - def check_global_data(self, *data): - """Check the global data that can affect incremental reporting.""" - m = Hasher() - for d in data: - m.update(d) - these_globals = m.hexdigest() - if self.globals != these_globals: - self.reset() - self.globals = these_globals - - def can_skip_file(self, data, fr, rootname): - """Can we skip reporting this file? - - `data` is a CoverageData object, `fr` is a `FileReporter`, and - `rootname` is the name being used for the file. - """ - m = Hasher() - m.update(fr.source().encode('utf-8')) - add_data_to_hash(data, fr.filename, m) - this_hash = m.hexdigest() - - that_hash = self.file_hash(rootname) - - if this_hash == that_hash: - # Nothing has changed to require the file to be reported again. 
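The hash-and-compare skip in `can_skip_file` reduces to a small pattern. A sketch using stdlib `hashlib.sha256` as a stand-in for coverage.py's `Hasher` (which is MD5-based and also folds in the measured coverage data, not just the source text):

```python
import hashlib

def should_skip(stored_hashes, fname, source):
    """Return True if fname's content hash is unchanged; otherwise record it."""
    digest = hashlib.sha256(source.encode("utf-8")).hexdigest()
    if stored_hashes.get(fname) == digest:
        return True  # nothing changed; the old HTML page is still valid
    stored_hashes[fname] = digest
    return False
```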
- return True - else: - self.set_file_hash(rootname, this_hash) - return False - - def file_hash(self, fname): - """Get the hash of `fname`'s contents.""" - return self.files.get(fname, {}).get('hash', '') - - def set_file_hash(self, fname, val): - """Set the hash of `fname`'s contents.""" - self.files.setdefault(fname, {})['hash'] = val - - def index_info(self, fname): - """Get the information for index.html for `fname`.""" - return self.files.get(fname, {}).get('index', {}) - - def set_index_info(self, fname, info): - """Set the information for index.html for `fname`.""" - self.files.setdefault(fname, {})['index'] = info - - -# Helpers for templates and generating HTML - -def escape(t): - """HTML-escape the text in `t`. - - This is only suitable for HTML text, not attributes. - - """ - # Convert HTML special chars into HTML entities. - return t.replace("&", "&amp;").replace("<", "&lt;") - - -def pair(ratio): - """Format a pair of numbers so JavaScript can read them in an attribute.""" - return "%s %s" % ratio diff --git a/contrib/python/coverage/py2/coverage/htmlfiles/coverage_html.js b/contrib/python/coverage/py2/coverage/htmlfiles/coverage_html.js deleted file mode 100644 index 27b49b36f96..00000000000 --- a/contrib/python/coverage/py2/coverage/htmlfiles/coverage_html.js +++ /dev/null @@ -1,616 +0,0 @@ -// Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 -// For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt - -// Coverage.py HTML report browser code. -/*jslint browser: true, sloppy: true, vars: true, plusplus: true, maxerr: 50, indent: 4 */ -/*global coverage: true, document, window, $ */ - -coverage = {}; - -// Find all the elements with shortkey_* class, and use them to assign a shortcut key.
-coverage.assign_shortkeys = function () { - $("*[class*='shortkey_']").each(function (i, e) { - $.each($(e).attr("class").split(" "), function (i, c) { - if (/^shortkey_/.test(c)) { - $(document).bind('keydown', c.substr(9), function () { - $(e).click(); - }); - } - }); - }); -}; - -// Create the events for the help panel. -coverage.wire_up_help_panel = function () { - $("#keyboard_icon").click(function () { - // Show the help panel, and position it so the keyboard icon in the - // panel is in the same place as the keyboard icon in the header. - $(".help_panel").show(); - var koff = $("#keyboard_icon").offset(); - var poff = $("#panel_icon").position(); - $(".help_panel").offset({ - top: koff.top-poff.top, - left: koff.left-poff.left - }); - }); - $("#panel_icon").click(function () { - $(".help_panel").hide(); - }); -}; - -// Create the events for the filter box. -coverage.wire_up_filter = function () { - // Cache elements. - var table = $("table.index"); - var table_rows = table.find("tbody tr"); - var table_row_names = table_rows.find("td.name a"); - var no_rows = $("#no_rows"); - - // Create a duplicate table footer that we can modify with dynamic summed values. - var table_footer = $("table.index tfoot tr"); - var table_dynamic_footer = table_footer.clone(); - table_dynamic_footer.attr('class', 'total_dynamic hidden'); - table_footer.after(table_dynamic_footer); - - // Observe filter keyevents. - $("#filter").on("keyup change", $.debounce(150, function (event) { - var filter_value = $(this).val(); - - if (filter_value === "") { - // Filter box is empty, remove all filtering. - table_rows.removeClass("hidden"); - - // Show standard footer, hide dynamic footer. - table_footer.removeClass("hidden"); - table_dynamic_footer.addClass("hidden"); - - // Hide placeholder, show table. - if (no_rows.length > 0) { - no_rows.hide(); - } - table.show(); - - } - else { - // Filter table items by value. - var hidden = 0; - var shown = 0; - - // Hide / show elements. 
- $.each(table_row_names, function () { - var element = $(this).parents("tr"); - - if ($(this).text().indexOf(filter_value) === -1) { - // hide - element.addClass("hidden"); - hidden++; - } - else { - // show - element.removeClass("hidden"); - shown++; - } - }); - - // Show placeholder if no rows will be displayed. - if (no_rows.length > 0) { - if (shown === 0) { - // Show placeholder, hide table. - no_rows.show(); - table.hide(); - } - else { - // Hide placeholder, show table. - no_rows.hide(); - table.show(); - } - } - - // Manage dynamic header: - if (hidden > 0) { - // Calculate new dynamic sum values based on visible rows. - for (var column = 2; column < 20; column++) { - // Calculate summed value. - var cells = table_rows.find('td:nth-child(' + column + ')'); - if (!cells.length) { - // No more columns...! - break; - } - - var sum = 0, numer = 0, denom = 0; - $.each(cells.filter(':visible'), function () { - var ratio = $(this).data("ratio"); - if (ratio) { - var splitted = ratio.split(" "); - numer += parseInt(splitted[0], 10); - denom += parseInt(splitted[1], 10); - } - else { - sum += parseInt(this.innerHTML, 10); - } - }); - - // Get footer cell element. - var footer_cell = table_dynamic_footer.find('td:nth-child(' + column + ')'); - - // Set value into dynamic footer cell element. - if (cells[0].innerHTML.indexOf('%') > -1) { - // Percentage columns use the numerator and denominator, - // and adapt to the number of decimal places. - var match = /\.([0-9]+)/.exec(cells[0].innerHTML); - var places = 0; - if (match) { - places = match[1].length; - } - var pct = numer * 100 / denom; - footer_cell.text(pct.toFixed(places) + '%'); - } - else { - footer_cell.text(sum); - } - } - - // Hide standard footer, show dynamic footer. - table_footer.addClass("hidden"); - table_dynamic_footer.removeClass("hidden"); - } - else { - // Show standard footer, hide dynamic footer. 
- table_footer.removeClass("hidden"); - table_dynamic_footer.addClass("hidden"); - } - } - })); - - // Trigger change event on setup, to force filter on page refresh - // (filter value may still be present). - $("#filter").trigger("change"); -}; - -// Loaded on index.html -coverage.index_ready = function ($) { - // Look for a localStorage item containing previous sort settings: - var sort_list = []; - var storage_name = "COVERAGE_INDEX_SORT"; - var stored_list = undefined; - try { - stored_list = localStorage.getItem(storage_name); - } catch(err) {} - - if (stored_list) { - sort_list = JSON.parse('[[' + stored_list + ']]'); - } - - // Create a new widget which exists only to save and restore - // the sort order: - $.tablesorter.addWidget({ - id: "persistentSort", - - // Format is called by the widget before displaying: - format: function (table) { - if (table.config.sortList.length === 0 && sort_list.length > 0) { - // This table hasn't been sorted before - we'll use - // our stored settings: - $(table).trigger('sorton', [sort_list]); - } - else { - // This is not the first load - something has - // already defined sorting so we'll just update - // our stored value to match: - sort_list = table.config.sortList; - } - } - }); - - // Configure our tablesorter to handle the variable number of - // columns produced depending on report options: - var headers = []; - var col_count = $("table.index > thead > tr > th").length; - - headers[0] = { sorter: 'text' }; - for (i = 1; i < col_count-1; i++) { - headers[i] = { sorter: 'digit' }; - } - headers[col_count-1] = { sorter: 'percent' }; - - // Enable the table sorter: - $("table.index").tablesorter({ - widgets: ['persistentSort'], - headers: headers - }); - - coverage.assign_shortkeys(); - coverage.wire_up_help_panel(); - coverage.wire_up_filter(); - - // Watch for page unload events so we can save the final sort settings: - $(window).on("unload", function () { - try { - localStorage.setItem(storage_name, 
sort_list.toString()) - } catch(err) {} - }); -}; - -// -- pyfile stuff -- - -coverage.LINE_FILTERS_STORAGE = "COVERAGE_LINE_FILTERS"; - -coverage.pyfile_ready = function ($) { - // If we're directed to a particular line number, highlight the line. - var frag = location.hash; - if (frag.length > 2 && frag[1] === 't') { - $(frag).addClass('highlight'); - coverage.set_sel(parseInt(frag.substr(2), 10)); - } - else { - coverage.set_sel(0); - } - - $(document) - .bind('keydown', 'j', coverage.to_next_chunk_nicely) - .bind('keydown', 'k', coverage.to_prev_chunk_nicely) - .bind('keydown', '0', coverage.to_top) - .bind('keydown', '1', coverage.to_first_chunk) - ; - - $(".button_toggle_run").click(function (evt) {coverage.toggle_lines(evt.target, "run");}); - $(".button_toggle_exc").click(function (evt) {coverage.toggle_lines(evt.target, "exc");}); - $(".button_toggle_mis").click(function (evt) {coverage.toggle_lines(evt.target, "mis");}); - $(".button_toggle_par").click(function (evt) {coverage.toggle_lines(evt.target, "par");}); - - coverage.filters = undefined; - try { - coverage.filters = localStorage.getItem(coverage.LINE_FILTERS_STORAGE); - } catch(err) {} - - if (coverage.filters) { - coverage.filters = JSON.parse(coverage.filters); - } - else { - coverage.filters = {run: false, exc: true, mis: true, par: true}; - } - - for (cls in coverage.filters) { - coverage.set_line_visibilty(cls, coverage.filters[cls]); - } - - coverage.assign_shortkeys(); - coverage.wire_up_help_panel(); - - coverage.init_scroll_markers(); - - // Rebuild scroll markers when the window height changes. 
- $(window).resize(coverage.build_scroll_markers); -}; - -coverage.toggle_lines = function (btn, cls) { - var onoff = !$(btn).hasClass("show_" + cls); - coverage.set_line_visibilty(cls, onoff); - coverage.build_scroll_markers(); - coverage.filters[cls] = onoff; - try { - localStorage.setItem(coverage.LINE_FILTERS_STORAGE, JSON.stringify(coverage.filters)); - } catch(err) {} -}; - -coverage.set_line_visibilty = function (cls, onoff) { - var show = "show_" + cls; - var btn = $(".button_toggle_" + cls); - if (onoff) { - $("#source ." + cls).addClass(show); - btn.addClass(show); - } - else { - $("#source ." + cls).removeClass(show); - btn.removeClass(show); - } -}; - -// Return the nth line div. -coverage.line_elt = function (n) { - return $("#t" + n); -}; - -// Return the nth line number div. -coverage.num_elt = function (n) { - return $("#n" + n); -}; - -// Set the selection. b and e are line numbers. -coverage.set_sel = function (b, e) { - // The first line selected. - coverage.sel_begin = b; - // The next line not selected. - coverage.sel_end = (e === undefined) ? b+1 : e; -}; - -coverage.to_top = function () { - coverage.set_sel(0, 1); - coverage.scroll_window(0); -}; - -coverage.to_first_chunk = function () { - coverage.set_sel(0, 1); - coverage.to_next_chunk(); -}; - -// Return a string indicating what kind of chunk this line belongs to, -// or null if not a chunk. -coverage.chunk_indicator = function (line_elt) { - var klass = line_elt.attr('class'); - if (klass) { - var m = klass.match(/\bshow_\w+\b/); - if (m) { - return m[0]; - } - } - return null; -}; - -coverage.to_next_chunk = function () { - var c = coverage; - - // Find the start of the next colored chunk. 
- var probe = c.sel_end; - var chunk_indicator, probe_line; - while (true) { - probe_line = c.line_elt(probe); - if (probe_line.length === 0) { - return; - } - chunk_indicator = c.chunk_indicator(probe_line); - if (chunk_indicator) { - break; - } - probe++; - } - - // There's a next chunk, `probe` points to it. - var begin = probe; - - // Find the end of this chunk. - var next_indicator = chunk_indicator; - while (next_indicator === chunk_indicator) { - probe++; - probe_line = c.line_elt(probe); - next_indicator = c.chunk_indicator(probe_line); - } - c.set_sel(begin, probe); - c.show_selection(); -}; - -coverage.to_prev_chunk = function () { - var c = coverage; - - // Find the end of the prev colored chunk. - var probe = c.sel_begin-1; - var probe_line = c.line_elt(probe); - if (probe_line.length === 0) { - return; - } - var chunk_indicator = c.chunk_indicator(probe_line); - while (probe > 0 && !chunk_indicator) { - probe--; - probe_line = c.line_elt(probe); - if (probe_line.length === 0) { - return; - } - chunk_indicator = c.chunk_indicator(probe_line); - } - - // There's a prev chunk, `probe` points to its last line. - var end = probe+1; - - // Find the beginning of this chunk. - var prev_indicator = chunk_indicator; - while (prev_indicator === chunk_indicator) { - probe--; - probe_line = c.line_elt(probe); - prev_indicator = c.chunk_indicator(probe_line); - } - c.set_sel(probe+1, end); - c.show_selection(); -}; - -// Return the line number of the line nearest pixel position pos -coverage.line_at_pos = function (pos) { - var l1 = coverage.line_elt(1), - l2 = coverage.line_elt(2), - result; - if (l1.length && l2.length) { - var l1_top = l1.offset().top, - line_height = l2.offset().top - l1_top, - nlines = (pos - l1_top) / line_height; - if (nlines < 1) { - result = 1; - } - else { - result = Math.ceil(nlines); - } - } - else { - result = 1; - } - return result; -}; - -// Returns 0, 1, or 2: how many of the two ends of the selection are on -// the screen right now? 
-coverage.selection_ends_on_screen = function () { - if (coverage.sel_begin === 0) { - return 0; - } - - var top = coverage.line_elt(coverage.sel_begin); - var next = coverage.line_elt(coverage.sel_end-1); - - return ( - (top.isOnScreen() ? 1 : 0) + - (next.isOnScreen() ? 1 : 0) - ); -}; - -coverage.to_next_chunk_nicely = function () { - coverage.finish_scrolling(); - if (coverage.selection_ends_on_screen() === 0) { - // The selection is entirely off the screen: select the top line on - // the screen. - var win = $(window); - coverage.select_line_or_chunk(coverage.line_at_pos(win.scrollTop())); - } - coverage.to_next_chunk(); -}; - -coverage.to_prev_chunk_nicely = function () { - coverage.finish_scrolling(); - if (coverage.selection_ends_on_screen() === 0) { - var win = $(window); - coverage.select_line_or_chunk(coverage.line_at_pos(win.scrollTop() + win.height())); - } - coverage.to_prev_chunk(); -}; - -// Select line number lineno, or if it is in a colored chunk, select the -// entire chunk -coverage.select_line_or_chunk = function (lineno) { - var c = coverage; - var probe_line = c.line_elt(lineno); - if (probe_line.length === 0) { - return; - } - var the_indicator = c.chunk_indicator(probe_line); - if (the_indicator) { - // The line is in a highlighted chunk. - // Search backward for the first line. - var probe = lineno; - var indicator = the_indicator; - while (probe > 0 && indicator === the_indicator) { - probe--; - probe_line = c.line_elt(probe); - if (probe_line.length === 0) { - break; - } - indicator = c.chunk_indicator(probe_line); - } - var begin = probe + 1; - - // Search forward for the last line. 
- probe = lineno;
- indicator = the_indicator;
- while (indicator === the_indicator) {
- probe++;
- probe_line = c.line_elt(probe);
- indicator = c.chunk_indicator(probe_line);
- }
-
- coverage.set_sel(begin, probe);
- }
- else {
- coverage.set_sel(lineno);
- }
-};
-
-coverage.show_selection = function () {
- var c = coverage;
-
- // Highlight the lines in the chunk
- $(".linenos .highlight").removeClass("highlight");
- for (var probe = c.sel_begin; probe > 0 && probe < c.sel_end; probe++) {
- c.num_elt(probe).addClass("highlight");
- }
-
- c.scroll_to_selection();
-};
-
-coverage.scroll_to_selection = function () {
- // Scroll the page if the chunk isn't fully visible.
- if (coverage.selection_ends_on_screen() < 2) {
- // Need to move the page. The html,body trick makes it scroll in all
- // browsers, got it from http://stackoverflow.com/questions/3042651
- var top = coverage.line_elt(coverage.sel_begin);
- var top_pos = parseInt(top.offset().top, 10);
- coverage.scroll_window(top_pos - 30);
- }
-};
-
-coverage.scroll_window = function (to_pos) {
- $("html,body").animate({scrollTop: to_pos}, 200);
-};
-
-coverage.finish_scrolling = function () {
- $("html,body").stop(true, true);
-};
-
-coverage.init_scroll_markers = function () {
- var c = coverage;
- // Init some variables
- c.lines_len = $('#source p').length;
- c.body_h = $('body').height();
- c.header_h = $('div#header').height();
-
- // Build html
- c.build_scroll_markers();
-};
-
-coverage.build_scroll_markers = function () {
- var c = coverage,
- min_line_height = 3,
- max_line_height = 10,
- visible_window_h = $(window).height();
-
- c.lines_to_mark = $('#source').find('p.show_run, p.show_mis, p.show_exc, p.show_par');
- $('#scroll_marker').remove();
- // Don't build markers if the window has no scroll bar.
- if (c.body_h <= visible_window_h) {
- return;
- }
-
- $("body").append("<div id='scroll_marker'> </div>");
- var scroll_marker = $('#scroll_marker'),
- marker_scale = scroll_marker.height() / c.body_h,
- line_height = scroll_marker.height() / c.lines_len;
-
- // Line height must be between the extremes.
- if (line_height > min_line_height) {
- if (line_height > max_line_height) {
- line_height = max_line_height;
- }
- }
- else {
- line_height = min_line_height;
- }
-
- var previous_line = -99,
- last_mark,
- last_top,
- offsets = {};
-
- // Calculate line offsets outside loop to prevent relayouts
- c.lines_to_mark.each(function() {
- offsets[this.id] = $(this).offset().top;
- });
- c.lines_to_mark.each(function () {
- var id_name = $(this).attr('id'),
- line_top = Math.round(offsets[id_name] * marker_scale),
- line_number = parseInt(id_name.substring(1, id_name.length));
-
- if (line_number === previous_line + 1) {
- // If this line continues a solid missed block, just make the previous mark taller.
- last_mark.css({
- 'height': line_top + line_height - last_top
- });
- }
- else {
- // Add colored line in scroll_marker block.
- scroll_marker.append('<div id="m' + line_number + '" class="marker"></div>');
- last_mark = $('#m' + line_number);
- last_mark.css({
- 'height': line_height,
- 'top': line_top
- });
- last_top = line_top;
- }
-
- previous_line = line_number;
- });
-};
diff --git a/contrib/python/coverage/py2/coverage/htmlfiles/favicon_32.png b/contrib/python/coverage/py2/coverage/htmlfiles/favicon_32.png
Binary files differ
deleted file mode 100644
index 8649f0475d8..00000000000
--- a/contrib/python/coverage/py2/coverage/htmlfiles/favicon_32.png
+++ /dev/null
diff --git a/contrib/python/coverage/py2/coverage/htmlfiles/index.html b/contrib/python/coverage/py2/coverage/htmlfiles/index.html
deleted file mode 100644
index 983db06125e..00000000000
--- a/contrib/python/coverage/py2/coverage/htmlfiles/index.html
+++ /dev/null
@@ -1,119 +0,0 @@
-{# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 #}
-{# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt #}
-
-<!DOCTYPE html>
-<html>
-<head>
- <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
- <title>{{ title|escape }}</title>
- <link rel="icon" sizes="32x32" href="favicon_32.png">
- <link rel="stylesheet" href="style.css" type="text/css">
- {% if extra_css %}
- <link rel="stylesheet" href="{{ extra_css }}" type="text/css">
- {% endif %}
- <script type="text/javascript" src="jquery.min.js"></script>
- <script type="text/javascript" src="jquery.ba-throttle-debounce.min.js"></script>
- <script type="text/javascript" src="jquery.tablesorter.min.js"></script>
- <script type="text/javascript" src="jquery.hotkeys.js"></script>
- <script type="text/javascript" src="coverage_html.js"></script>
- <script type="text/javascript">
- jQuery(document).ready(coverage.index_ready);
- </script>
-</head>
-<body class="indexfile">
-
-<div id="header">
- <div class="content">
- <h1>{{ title|escape }}:
- <span class="pc_cov">{{totals.pc_covered_str}}%</span>
- </h1>
-
- <img
id="keyboard_icon" src="keybd_closed.png" alt="Show keyboard shortcuts" />
-
- <form id="filter_container">
- <input id="filter" type="text" value="" placeholder="filter..." />
- </form>
- </div>
-</div>
-
-<div class="help_panel">
- <img id="panel_icon" src="keybd_open.png" alt="Hide keyboard shortcuts" />
- <p class="legend">Hot-keys on this page</p>
- <div>
- <p class="keyhelp">
- <span class="key">n</span>
- <span class="key">s</span>
- <span class="key">m</span>
- <span class="key">x</span>
- {% if has_arcs %}
- <span class="key">b</span>
- <span class="key">p</span>
- {% endif %}
- <span class="key">c</span> change column sorting
- </p>
- </div>
-</div>

-<div id="index">
- <table class="index">
- <thead>
- {# The title="" attr doesn't work in Safari. #}
- <tr class="tablehead" title="Click to sort">
- <th class="name left headerSortDown shortkey_n">Module</th>
- <th class="shortkey_s">statements</th>
- <th class="shortkey_m">missing</th>
- <th class="shortkey_x">excluded</th>
- {% if has_arcs %}
- <th class="shortkey_b">branches</th>
- <th class="shortkey_p">partial</th>
- {% endif %}
- <th class="right shortkey_c">coverage</th>
- </tr>
- </thead>
- {# HTML syntax requires thead, tfoot, tbody #}
- <tfoot>
- <tr class="total">
- <td class="name left">Total</td>
- <td>{{totals.n_statements}}</td>
- <td>{{totals.n_missing}}</td>
- <td>{{totals.n_excluded}}</td>
- {% if has_arcs %}
- <td>{{totals.n_branches}}</td>
- <td>{{totals.n_partial_branches}}</td>
- {% endif %}
- <td class="right" data-ratio="{{totals.ratio_covered|pair}}">{{totals.pc_covered_str}}%</td>
- </tr>
- </tfoot>
- <tbody>
- {% for file in files %}
- <tr class="file">
- <td class="name left"><a href="{{file.html_filename}}">{{file.relative_filename}}</a></td>
- <td>{{file.nums.n_statements}}</td>
- <td>{{file.nums.n_missing}}</td>
- <td>{{file.nums.n_excluded}}</td>
- {% if has_arcs %}
- <td>{{file.nums.n_branches}}</td>
- <td>{{file.nums.n_partial_branches}}</td>
- {% endif %}
- <td
class="right" data-ratio="{{file.nums.ratio_covered|pair}}">{{file.nums.pc_covered_str}}%</td> - </tr> - {% endfor %} - </tbody> - </table> - - <p id="no_rows"> - No items found using the specified filter. - </p> -</div> - -<div id="footer"> - <div class="content"> - <p> - <a class="nav" href="{{__url__}}">coverage.py v{{__version__}}</a>, - created at {{ time_stamp }} - </p> - </div> -</div> - -</body> -</html> diff --git a/contrib/python/coverage/py2/coverage/htmlfiles/jquery.ba-throttle-debounce.min.js b/contrib/python/coverage/py2/coverage/htmlfiles/jquery.ba-throttle-debounce.min.js deleted file mode 100644 index 648fe5d3c22..00000000000 --- a/contrib/python/coverage/py2/coverage/htmlfiles/jquery.ba-throttle-debounce.min.js +++ /dev/null @@ -1,9 +0,0 @@ -/* - * jQuery throttle / debounce - v1.1 - 3/7/2010 - * http://benalman.com/projects/jquery-throttle-debounce-plugin/ - * - * Copyright (c) 2010 "Cowboy" Ben Alman - * Dual licensed under the MIT and GPL licenses. - * http://benalman.com/about/license/ - */ -(function(b,c){var $=b.jQuery||b.Cowboy||(b.Cowboy={}),a;$.throttle=a=function(e,f,j,i){var h,d=0;if(typeof f!=="boolean"){i=j;j=f;f=c}function g(){var o=this,m=+new Date()-d,n=arguments;function l(){d=+new Date();j.apply(o,n)}function k(){h=c}if(i&&!h){l()}h&&clearTimeout(h);if(i===c&&m>e){l()}else{if(f!==true){h=setTimeout(i?k:l,i===c?e-m:e)}}}if($.guid){g.guid=j.guid=j.guid||$.guid++}return g};$.debounce=function(d,e,f){return f===c?a(d,e,false):a(d,f,e!==false)}})(this); diff --git a/contrib/python/coverage/py2/coverage/htmlfiles/jquery.hotkeys.js b/contrib/python/coverage/py2/coverage/htmlfiles/jquery.hotkeys.js deleted file mode 100644 index 09b21e03c7f..00000000000 --- a/contrib/python/coverage/py2/coverage/htmlfiles/jquery.hotkeys.js +++ /dev/null @@ -1,99 +0,0 @@ -/* - * jQuery Hotkeys Plugin - * Copyright 2010, John Resig - * Dual licensed under the MIT or GPL Version 2 licenses. 
- * - * Based upon the plugin by Tzury Bar Yochay: - * http://github.com/tzuryby/hotkeys - * - * Original idea by: - * Binny V A, http://www.openjs.com/scripts/events/keyboard_shortcuts/ -*/ - -(function(jQuery){ - - jQuery.hotkeys = { - version: "0.8", - - specialKeys: { - 8: "backspace", 9: "tab", 13: "return", 16: "shift", 17: "ctrl", 18: "alt", 19: "pause", - 20: "capslock", 27: "esc", 32: "space", 33: "pageup", 34: "pagedown", 35: "end", 36: "home", - 37: "left", 38: "up", 39: "right", 40: "down", 45: "insert", 46: "del", - 96: "0", 97: "1", 98: "2", 99: "3", 100: "4", 101: "5", 102: "6", 103: "7", - 104: "8", 105: "9", 106: "*", 107: "+", 109: "-", 110: ".", 111 : "/", - 112: "f1", 113: "f2", 114: "f3", 115: "f4", 116: "f5", 117: "f6", 118: "f7", 119: "f8", - 120: "f9", 121: "f10", 122: "f11", 123: "f12", 144: "numlock", 145: "scroll", 191: "/", 224: "meta" - }, - - shiftNums: { - "`": "~", "1": "!", "2": "@", "3": "#", "4": "$", "5": "%", "6": "^", "7": "&", - "8": "*", "9": "(", "0": ")", "-": "_", "=": "+", ";": ": ", "'": "\"", ",": "<", - ".": ">", "/": "?", "\\": "|" - } - }; - - function keyHandler( handleObj ) { - // Only care when a possible input has been specified - if ( typeof handleObj.data !== "string" ) { - return; - } - - var origHandler = handleObj.handler, - keys = handleObj.data.toLowerCase().split(" "); - - handleObj.handler = function( event ) { - // Don't fire in text-accepting inputs that we didn't directly bind to - if ( this !== event.target && (/textarea|select/i.test( event.target.nodeName ) || - event.target.type === "text") ) { - return; - } - - // Keypress represents characters, not special keys - var special = event.type !== "keypress" && jQuery.hotkeys.specialKeys[ event.which ], - character = String.fromCharCode( event.which ).toLowerCase(), - key, modif = "", possible = {}; - - // check combinations (alt|ctrl|shift+anything) - if ( event.altKey && special !== "alt" ) { - modif += "alt+"; - } - - if ( event.ctrlKey && special 
!== "ctrl" ) { - modif += "ctrl+"; - } - - // TODO: Need to make sure this works consistently across platforms - if ( event.metaKey && !event.ctrlKey && special !== "meta" ) { - modif += "meta+"; - } - - if ( event.shiftKey && special !== "shift" ) { - modif += "shift+"; - } - - if ( special ) { - possible[ modif + special ] = true; - - } else { - possible[ modif + character ] = true; - possible[ modif + jQuery.hotkeys.shiftNums[ character ] ] = true; - - // "$" can be triggered as "Shift+4" or "Shift+$" or just "$" - if ( modif === "shift+" ) { - possible[ jQuery.hotkeys.shiftNums[ character ] ] = true; - } - } - - for ( var i = 0, l = keys.length; i < l; i++ ) { - if ( possible[ keys[i] ] ) { - return origHandler.apply( this, arguments ); - } - } - }; - } - - jQuery.each([ "keydown", "keyup", "keypress" ], function() { - jQuery.event.special[ this ] = { add: keyHandler }; - }); - -})( jQuery ); diff --git a/contrib/python/coverage/py2/coverage/htmlfiles/jquery.isonscreen.js b/contrib/python/coverage/py2/coverage/htmlfiles/jquery.isonscreen.js deleted file mode 100644 index 0182ebd2137..00000000000 --- a/contrib/python/coverage/py2/coverage/htmlfiles/jquery.isonscreen.js +++ /dev/null @@ -1,53 +0,0 @@ -/* Copyright (c) 2010 - * @author Laurence Wheway - * Dual licensed under the MIT (http://www.opensource.org/licenses/mit-license.php) - * and GPL (http://www.opensource.org/licenses/gpl-license.php) licenses. 
- *
- * @version 1.2.0
- */
-(function($) {
- jQuery.extend({
- isOnScreen: function(box, container) {
- // ensure numbers come in as integers (not strings) and remove 'px' if it's there
- for(var i in box){box[i] = parseFloat(box[i])};
- for(var i in container){container[i] = parseFloat(container[i])};
-
- if(!container){
- container = {
- left: $(window).scrollLeft(),
- top: $(window).scrollTop(),
- width: $(window).width(),
- height: $(window).height()
- }
- }
-
- if( box.left+box.width-container.left > 0 &&
- box.left < container.width+container.left &&
- box.top+box.height-container.top > 0 &&
- box.top < container.height+container.top
- ) return true;
- return false;
- }
- })
-
-
- jQuery.fn.isOnScreen = function (container) {
- for(var i in container){container[i] = parseFloat(container[i])};
-
- if(!container){
- container = {
- left: $(window).scrollLeft(),
- top: $(window).scrollTop(),
- width: $(window).width(),
- height: $(window).height()
- }
- }
-
- if( $(this).offset().left+$(this).width()-container.left > 0 &&
- $(this).offset().left < container.width+container.left &&
- $(this).offset().top+$(this).height()-container.top > 0 &&
- $(this).offset().top < container.height+container.top
- ) return true;
- return false;
- }
-})(jQuery);
diff --git a/contrib/python/coverage/py2/coverage/htmlfiles/jquery.min.js b/contrib/python/coverage/py2/coverage/htmlfiles/jquery.min.js
deleted file mode 100644
index d1608e37ffa..00000000000
--- a/contrib/python/coverage/py2/coverage/htmlfiles/jquery.min.js
+++ /dev/null
@@ -1,4 +0,0 @@
-/*! jQuery v1.11.1 | (c) 2005, 2014 jQuery Foundation, Inc.
| jquery.org/license */ -!function(a,b){"object"==typeof module&&"object"==typeof module.exports?module.exports=a.document?b(a,!0):function(a){if(!a.document)throw new Error("jQuery requires a window with a document");return b(a)}:b(a)}("undefined"!=typeof window?window:this,function(a,b){var c=[],d=c.slice,e=c.concat,f=c.push,g=c.indexOf,h={},i=h.toString,j=h.hasOwnProperty,k={},l="1.11.1",m=function(a,b){return new m.fn.init(a,b)},n=/^[\s\uFEFF\xA0]+|[\s\uFEFF\xA0]+$/g,o=/^-ms-/,p=/-([\da-z])/gi,q=function(a,b){return b.toUpperCase()};m.fn=m.prototype={jquery:l,constructor:m,selector:"",length:0,toArray:function(){return d.call(this)},get:function(a){return null!=a?0>a?this[a+this.length]:this[a]:d.call(this)},pushStack:function(a){var b=m.merge(this.constructor(),a);return b.prevObject=this,b.context=this.context,b},each:function(a,b){return m.each(this,a,b)},map:function(a){return this.pushStack(m.map(this,function(b,c){return a.call(b,c,b)}))},slice:function(){return this.pushStack(d.apply(this,arguments))},first:function(){return this.eq(0)},last:function(){return this.eq(-1)},eq:function(a){var b=this.length,c=+a+(0>a?b:0);return this.pushStack(c>=0&&b>c?[this[c]]:[])},end:function(){return this.prevObject||this.constructor(null)},push:f,sort:c.sort,splice:c.splice},m.extend=m.fn.extend=function(){var a,b,c,d,e,f,g=arguments[0]||{},h=1,i=arguments.length,j=!1;for("boolean"==typeof g&&(j=g,g=arguments[h]||{},h++),"object"==typeof g||m.isFunction(g)||(g={}),h===i&&(g=this,h--);i>h;h++)if(null!=(e=arguments[h]))for(d in e)a=g[d],c=e[d],g!==c&&(j&&c&&(m.isPlainObject(c)||(b=m.isArray(c)))?(b?(b=!1,f=a&&m.isArray(a)?a:[]):f=a&&m.isPlainObject(a)?a:{},g[d]=m.extend(j,f,c)):void 0!==c&&(g[d]=c));return g},m.extend({expando:"jQuery"+(l+Math.random()).replace(/\D/g,""),isReady:!0,error:function(a){throw new 
Error(a)},noop:function(){},isFunction:function(a){return"function"===m.type(a)},isArray:Array.isArray||function(a){return"array"===m.type(a)},isWindow:function(a){return null!=a&&a==a.window},isNumeric:function(a){return!m.isArray(a)&&a-parseFloat(a)>=0},isEmptyObject:function(a){var b;for(b in a)return!1;return!0},isPlainObject:function(a){var b;if(!a||"object"!==m.type(a)||a.nodeType||m.isWindow(a))return!1;try{if(a.constructor&&!j.call(a,"constructor")&&!j.call(a.constructor.prototype,"isPrototypeOf"))return!1}catch(c){return!1}if(k.ownLast)for(b in a)return j.call(a,b);for(b in a);return void 0===b||j.call(a,b)},type:function(a){return null==a?a+"":"object"==typeof a||"function"==typeof a?h[i.call(a)]||"object":typeof a},globalEval:function(b){b&&m.trim(b)&&(a.execScript||function(b){a.eval.call(a,b)})(b)},camelCase:function(a){return a.replace(o,"ms-").replace(p,q)},nodeName:function(a,b){return a.nodeName&&a.nodeName.toLowerCase()===b.toLowerCase()},each:function(a,b,c){var d,e=0,f=a.length,g=r(a);if(c){if(g){for(;f>e;e++)if(d=b.apply(a[e],c),d===!1)break}else for(e in a)if(d=b.apply(a[e],c),d===!1)break}else if(g){for(;f>e;e++)if(d=b.call(a[e],e,a[e]),d===!1)break}else for(e in a)if(d=b.call(a[e],e,a[e]),d===!1)break;return a},trim:function(a){return null==a?"":(a+"").replace(n,"")},makeArray:function(a,b){var c=b||[];return null!=a&&(r(Object(a))?m.merge(c,"string"==typeof a?[a]:a):f.call(c,a)),c},inArray:function(a,b,c){var d;if(b){if(g)return g.call(b,a,c);for(d=b.length,c=c?0>c?Math.max(0,d+c):c:0;d>c;c++)if(c in b&&b[c]===a)return c}return-1},merge:function(a,b){var c=+b.length,d=0,e=a.length;while(c>d)a[e++]=b[d++];if(c!==c)while(void 0!==b[d])a[e++]=b[d++];return a.length=e,a},grep:function(a,b,c){for(var d,e=[],f=0,g=a.length,h=!c;g>f;f++)d=!b(a[f],f),d!==h&&e.push(a[f]);return e},map:function(a,b,c){var d,f=0,g=a.length,h=r(a),i=[];if(h)for(;g>f;f++)d=b(a[f],f,c),null!=d&&i.push(d);else for(f in a)d=b(a[f],f,c),null!=d&&i.push(d);return 
e.apply([],i)},guid:1,proxy:function(a,b){var c,e,f;return"string"==typeof b&&(f=a[b],b=a,a=f),m.isFunction(a)?(c=d.call(arguments,2),e=function(){return a.apply(b||this,c.concat(d.call(arguments)))},e.guid=a.guid=a.guid||m.guid++,e):void 0},now:function(){return+new Date},support:k}),m.each("Boolean Number String Function Array Date RegExp Object Error".split(" "),function(a,b){h["[object "+b+"]"]=b.toLowerCase()});function r(a){var b=a.length,c=m.type(a);return"function"===c||m.isWindow(a)?!1:1===a.nodeType&&b?!0:"array"===c||0===b||"number"==typeof b&&b>0&&b-1 in a}var s=function(a){var b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u="sizzle"+-new Date,v=a.document,w=0,x=0,y=gb(),z=gb(),A=gb(),B=function(a,b){return a===b&&(l=!0),0},C="undefined",D=1<<31,E={}.hasOwnProperty,F=[],G=F.pop,H=F.push,I=F.push,J=F.slice,K=F.indexOf||function(a){for(var b=0,c=this.length;c>b;b++)if(this[b]===a)return b;return-1},L="checked|selected|async|autofocus|autoplay|controls|defer|disabled|hidden|ismap|loop|multiple|open|readonly|required|scoped",M="[\\x20\\t\\r\\n\\f]",N="(?:\\\\.|[\\w-]|[^\\x00-\\xa0])+",O=N.replace("w","w#"),P="\\["+M+"*("+N+")(?:"+M+"*([*^$|!~]?=)"+M+"*(?:'((?:\\\\.|[^\\\\'])*)'|\"((?:\\\\.|[^\\\\\"])*)\"|("+O+"))|)"+M+"*\\]",Q=":("+N+")(?:\\((('((?:\\\\.|[^\\\\'])*)'|\"((?:\\\\.|[^\\\\\"])*)\")|((?:\\\\.|[^\\\\()[\\]]|"+P+")*)|.*)\\)|)",R=new RegExp("^"+M+"+|((?:^|[^\\\\])(?:\\\\.)*)"+M+"+$","g"),S=new RegExp("^"+M+"*,"+M+"*"),T=new RegExp("^"+M+"*([>+~]|"+M+")"+M+"*"),U=new RegExp("="+M+"*([^\\]'\"]*?)"+M+"*\\]","g"),V=new RegExp(Q),W=new RegExp("^"+O+"$"),X={ID:new RegExp("^#("+N+")"),CLASS:new RegExp("^\\.("+N+")"),TAG:new RegExp("^("+N.replace("w","w*")+")"),ATTR:new RegExp("^"+P),PSEUDO:new RegExp("^"+Q),CHILD:new RegExp("^:(only|first|last|nth|nth-last)-(child|of-type)(?:\\("+M+"*(even|odd|(([+-]|)(\\d*)n|)"+M+"*(?:([+-]|)"+M+"*(\\d+)|))"+M+"*\\)|)","i"),bool:new RegExp("^(?:"+L+")$","i"),needsContext:new 
RegExp("^"+M+"*[>+~]|:(even|odd|eq|gt|lt|nth|first|last)(?:\\("+M+"*((?:-\\d)?\\d*)"+M+"*\\)|)(?=[^-]|$)","i")},Y=/^(?:input|select|textarea|button)$/i,Z=/^h\d$/i,$=/^[^{]+\{\s*\[native \w/,_=/^(?:#([\w-]+)|(\w+)|\.([\w-]+))$/,ab=/[+~]/,bb=/'|\\/g,cb=new RegExp("\\\\([\\da-f]{1,6}"+M+"?|("+M+")|.)","ig"),db=function(a,b,c){var d="0x"+b-65536;return d!==d||c?b:0>d?String.fromCharCode(d+65536):String.fromCharCode(d>>10|55296,1023&d|56320)};try{I.apply(F=J.call(v.childNodes),v.childNodes),F[v.childNodes.length].nodeType}catch(eb){I={apply:F.length?function(a,b){H.apply(a,J.call(b))}:function(a,b){var c=a.length,d=0;while(a[c++]=b[d++]);a.length=c-1}}}function fb(a,b,d,e){var f,h,j,k,l,o,r,s,w,x;if((b?b.ownerDocument||b:v)!==n&&m(b),b=b||n,d=d||[],!a||"string"!=typeof a)return d;if(1!==(k=b.nodeType)&&9!==k)return[];if(p&&!e){if(f=_.exec(a))if(j=f[1]){if(9===k){if(h=b.getElementById(j),!h||!h.parentNode)return d;if(h.id===j)return d.push(h),d}else if(b.ownerDocument&&(h=b.ownerDocument.getElementById(j))&&t(b,h)&&h.id===j)return d.push(h),d}else{if(f[2])return I.apply(d,b.getElementsByTagName(a)),d;if((j=f[3])&&c.getElementsByClassName&&b.getElementsByClassName)return I.apply(d,b.getElementsByClassName(j)),d}if(c.qsa&&(!q||!q.test(a))){if(s=r=u,w=b,x=9===k&&a,1===k&&"object"!==b.nodeName.toLowerCase()){o=g(a),(r=b.getAttribute("id"))?s=r.replace(bb,"\\$&"):b.setAttribute("id",s),s="[id='"+s+"'] ",l=o.length;while(l--)o[l]=s+qb(o[l]);w=ab.test(a)&&ob(b.parentNode)||b,x=o.join(",")}if(x)try{return I.apply(d,w.querySelectorAll(x)),d}catch(y){}finally{r||b.removeAttribute("id")}}}return i(a.replace(R,"$1"),b,d,e)}function gb(){var a=[];function b(c,e){return a.push(c+" ")>d.cacheLength&&delete b[a.shift()],b[c+" "]=e}return b}function hb(a){return a[u]=!0,a}function ib(a){var b=n.createElement("div");try{return!!a(b)}catch(c){return!1}finally{b.parentNode&&b.parentNode.removeChild(b),b=null}}function jb(a,b){var 
c=a.split("|"),e=a.length;while(e--)d.attrHandle[c[e]]=b}function kb(a,b){var c=b&&a,d=c&&1===a.nodeType&&1===b.nodeType&&(~b.sourceIndex||D)-(~a.sourceIndex||D);if(d)return d;if(c)while(c=c.nextSibling)if(c===b)return-1;return a?1:-1}function lb(a){return function(b){var c=b.nodeName.toLowerCase();return"input"===c&&b.type===a}}function mb(a){return function(b){var c=b.nodeName.toLowerCase();return("input"===c||"button"===c)&&b.type===a}}function nb(a){return hb(function(b){return b=+b,hb(function(c,d){var e,f=a([],c.length,b),g=f.length;while(g--)c[e=f[g]]&&(c[e]=!(d[e]=c[e]))})})}function ob(a){return a&&typeof a.getElementsByTagName!==C&&a}c=fb.support={},f=fb.isXML=function(a){var b=a&&(a.ownerDocument||a).documentElement;return b?"HTML"!==b.nodeName:!1},m=fb.setDocument=function(a){var b,e=a?a.ownerDocument||a:v,g=e.defaultView;return e!==n&&9===e.nodeType&&e.documentElement?(n=e,o=e.documentElement,p=!f(e),g&&g!==g.top&&(g.addEventListener?g.addEventListener("unload",function(){m()},!1):g.attachEvent&&g.attachEvent("onunload",function(){m()})),c.attributes=ib(function(a){return a.className="i",!a.getAttribute("className")}),c.getElementsByTagName=ib(function(a){return a.appendChild(e.createComment("")),!a.getElementsByTagName("*").length}),c.getElementsByClassName=$.test(e.getElementsByClassName)&&ib(function(a){return a.innerHTML="<div class='a'></div><div class='a i'></div>",a.firstChild.className="i",2===a.getElementsByClassName("i").length}),c.getById=ib(function(a){return o.appendChild(a).id=u,!e.getElementsByName||!e.getElementsByName(u).length}),c.getById?(d.find.ID=function(a,b){if(typeof b.getElementById!==C&&p){var c=b.getElementById(a);return c&&c.parentNode?[c]:[]}},d.filter.ID=function(a){var b=a.replace(cb,db);return function(a){return a.getAttribute("id")===b}}):(delete d.find.ID,d.filter.ID=function(a){var b=a.replace(cb,db);return function(a){var c=typeof a.getAttributeNode!==C&&a.getAttributeNode("id");return 
c&&c.value===b}}),d.find.TAG=c.getElementsByTagName?function(a,b){return typeof b.getElementsByTagName!==C?b.getElementsByTagName(a):void 0}:function(a,b){var c,d=[],e=0,f=b.getElementsByTagName(a);if("*"===a){while(c=f[e++])1===c.nodeType&&d.push(c);return d}return f},d.find.CLASS=c.getElementsByClassName&&function(a,b){return typeof b.getElementsByClassName!==C&&p?b.getElementsByClassName(a):void 0},r=[],q=[],(c.qsa=$.test(e.querySelectorAll))&&(ib(function(a){a.innerHTML="<select msallowclip=''><option selected=''></option></select>",a.querySelectorAll("[msallowclip^='']").length&&q.push("[*^$]="+M+"*(?:''|\"\")"),a.querySelectorAll("[selected]").length||q.push("\\["+M+"*(?:value|"+L+")"),a.querySelectorAll(":checked").length||q.push(":checked")}),ib(function(a){var b=e.createElement("input");b.setAttribute("type","hidden"),a.appendChild(b).setAttribute("name","D"),a.querySelectorAll("[name=d]").length&&q.push("name"+M+"*[*^$|!~]?="),a.querySelectorAll(":enabled").length||q.push(":enabled",":disabled"),a.querySelectorAll("*,:x"),q.push(",.*:")})),(c.matchesSelector=$.test(s=o.matches||o.webkitMatchesSelector||o.mozMatchesSelector||o.oMatchesSelector||o.msMatchesSelector))&&ib(function(a){c.disconnectedMatch=s.call(a,"div"),s.call(a,"[s!='']:x"),r.push("!=",Q)}),q=q.length&&new RegExp(q.join("|")),r=r.length&&new RegExp(r.join("|")),b=$.test(o.compareDocumentPosition),t=b||$.test(o.contains)?function(a,b){var c=9===a.nodeType?a.documentElement:a,d=b&&b.parentNode;return a===d||!(!d||1!==d.nodeType||!(c.contains?c.contains(d):a.compareDocumentPosition&&16&a.compareDocumentPosition(d)))}:function(a,b){if(b)while(b=b.parentNode)if(b===a)return!0;return!1},B=b?function(a,b){if(a===b)return l=!0,0;var d=!a.compareDocumentPosition-!b.compareDocumentPosition;return 
d?d:(d=(a.ownerDocument||a)===(b.ownerDocument||b)?a.compareDocumentPosition(b):1,1&d||!c.sortDetached&&b.compareDocumentPosition(a)===d?a===e||a.ownerDocument===v&&t(v,a)?-1:b===e||b.ownerDocument===v&&t(v,b)?1:k?K.call(k,a)-K.call(k,b):0:4&d?-1:1)}:function(a,b){if(a===b)return l=!0,0;var c,d=0,f=a.parentNode,g=b.parentNode,h=[a],i=[b];if(!f||!g)return a===e?-1:b===e?1:f?-1:g?1:k?K.call(k,a)-K.call(k,b):0;if(f===g)return kb(a,b);c=a;while(c=c.parentNode)h.unshift(c);c=b;while(c=c.parentNode)i.unshift(c);while(h[d]===i[d])d++;return d?kb(h[d],i[d]):h[d]===v?-1:i[d]===v?1:0},e):n},fb.matches=function(a,b){return fb(a,null,null,b)},fb.matchesSelector=function(a,b){if((a.ownerDocument||a)!==n&&m(a),b=b.replace(U,"='$1']"),!(!c.matchesSelector||!p||r&&r.test(b)||q&&q.test(b)))try{var d=s.call(a,b);if(d||c.disconnectedMatch||a.document&&11!==a.document.nodeType)return d}catch(e){}return fb(b,n,null,[a]).length>0},fb.contains=function(a,b){return(a.ownerDocument||a)!==n&&m(a),t(a,b)},fb.attr=function(a,b){(a.ownerDocument||a)!==n&&m(a);var e=d.attrHandle[b.toLowerCase()],f=e&&E.call(d.attrHandle,b.toLowerCase())?e(a,b,!p):void 0;return void 0!==f?f:c.attributes||!p?a.getAttribute(b):(f=a.getAttributeNode(b))&&f.specified?f.value:null},fb.error=function(a){throw new Error("Syntax error, unrecognized expression: "+a)},fb.uniqueSort=function(a){var b,d=[],e=0,f=0;if(l=!c.detectDuplicates,k=!c.sortStable&&a.slice(0),a.sort(B),l){while(b=a[f++])b===a[f]&&(e=d.push(f));while(e--)a.splice(d[e],1)}return k=null,a},e=fb.getText=function(a){var b,c="",d=0,f=a.nodeType;if(f){if(1===f||9===f||11===f){if("string"==typeof a.textContent)return a.textContent;for(a=a.firstChild;a;a=a.nextSibling)c+=e(a)}else if(3===f||4===f)return a.nodeValue}else while(b=a[d++])c+=e(b);return c},d=fb.selectors={cacheLength:50,createPseudo:hb,match:X,attrHandle:{},find:{},relative:{">":{dir:"parentNode",first:!0}," 
":{dir:"parentNode"},"+":{dir:"previousSibling",first:!0},"~":{dir:"previousSibling"}},preFilter:{ATTR:function(a){return a[1]=a[1].replace(cb,db),a[3]=(a[3]||a[4]||a[5]||"").replace(cb,db),"~="===a[2]&&(a[3]=" "+a[3]+" "),a.slice(0,4)},CHILD:function(a){return a[1]=a[1].toLowerCase(),"nth"===a[1].slice(0,3)?(a[3]||fb.error(a[0]),a[4]=+(a[4]?a[5]+(a[6]||1):2*("even"===a[3]||"odd"===a[3])),a[5]=+(a[7]+a[8]||"odd"===a[3])):a[3]&&fb.error(a[0]),a},PSEUDO:function(a){var b,c=!a[6]&&a[2];return X.CHILD.test(a[0])?null:(a[3]?a[2]=a[4]||a[5]||"":c&&V.test(c)&&(b=g(c,!0))&&(b=c.indexOf(")",c.length-b)-c.length)&&(a[0]=a[0].slice(0,b),a[2]=c.slice(0,b)),a.slice(0,3))}},filter:{TAG:function(a){var b=a.replace(cb,db).toLowerCase();return"*"===a?function(){return!0}:function(a){return a.nodeName&&a.nodeName.toLowerCase()===b}},CLASS:function(a){var b=y[a+" "];return b||(b=new RegExp("(^|"+M+")"+a+"("+M+"|$)"))&&y(a,function(a){return b.test("string"==typeof a.className&&a.className||typeof a.getAttribute!==C&&a.getAttribute("class")||"")})},ATTR:function(a,b,c){return function(d){var e=fb.attr(d,a);return null==e?"!="===b:b?(e+="","="===b?e===c:"!="===b?e!==c:"^="===b?c&&0===e.indexOf(c):"*="===b?c&&e.indexOf(c)>-1:"$="===b?c&&e.slice(-c.length)===c:"~="===b?(" "+e+" ").indexOf(c)>-1:"|="===b?e===c||e.slice(0,c.length+1)===c+"-":!1):!0}},CHILD:function(a,b,c,d,e){var f="nth"!==a.slice(0,3),g="last"!==a.slice(-4),h="of-type"===b;return 1===d&&0===e?function(a){return!!a.parentNode}:function(b,c,i){var j,k,l,m,n,o,p=f!==g?"nextSibling":"previousSibling",q=b.parentNode,r=h&&b.nodeName.toLowerCase(),s=!i&&!h;if(q){if(f){while(p){l=b;while(l=l[p])if(h?l.nodeName.toLowerCase()===r:1===l.nodeType)return!1;o=p="only"===a&&!o&&"nextSibling"}return!0}if(o=[g?q.firstChild:q.lastChild],g&&s){k=q[u]||(q[u]={}),j=k[a]||[],n=j[0]===w&&j[1],m=j[0]===w&&j[2],l=n&&q.childNodes[n];while(l=++n&&l&&l[p]||(m=n=0)||o.pop())if(1===l.nodeType&&++m&&l===b){k[a]=[w,n,m];break}}else 
if(s&&(j=(b[u]||(b[u]={}))[a])&&j[0]===w)m=j[1];else while(l=++n&&l&&l[p]||(m=n=0)||o.pop())if((h?l.nodeName.toLowerCase()===r:1===l.nodeType)&&++m&&(s&&((l[u]||(l[u]={}))[a]=[w,m]),l===b))break;return m-=e,m===d||m%d===0&&m/d>=0}}},PSEUDO:function(a,b){var c,e=d.pseudos[a]||d.setFilters[a.toLowerCase()]||fb.error("unsupported pseudo: "+a);return e[u]?e(b):e.length>1?(c=[a,a,"",b],d.setFilters.hasOwnProperty(a.toLowerCase())?hb(function(a,c){var d,f=e(a,b),g=f.length;while(g--)d=K.call(a,f[g]),a[d]=!(c[d]=f[g])}):function(a){return e(a,0,c)}):e}},pseudos:{not:hb(function(a){var b=[],c=[],d=h(a.replace(R,"$1"));return d[u]?hb(function(a,b,c,e){var f,g=d(a,null,e,[]),h=a.length;while(h--)(f=g[h])&&(a[h]=!(b[h]=f))}):function(a,e,f){return b[0]=a,d(b,null,f,c),!c.pop()}}),has:hb(function(a){return function(b){return fb(a,b).length>0}}),contains:hb(function(a){return function(b){return(b.textContent||b.innerText||e(b)).indexOf(a)>-1}}),lang:hb(function(a){return W.test(a||"")||fb.error("unsupported lang: "+a),a=a.replace(cb,db).toLowerCase(),function(b){var c;do if(c=p?b.lang:b.getAttribute("xml:lang")||b.getAttribute("lang"))return c=c.toLowerCase(),c===a||0===c.indexOf(a+"-");while((b=b.parentNode)&&1===b.nodeType);return!1}}),target:function(b){var c=a.location&&a.location.hash;return c&&c.slice(1)===b.id},root:function(a){return a===o},focus:function(a){return a===n.activeElement&&(!n.hasFocus||n.hasFocus())&&!!(a.type||a.href||~a.tabIndex)},enabled:function(a){return a.disabled===!1},disabled:function(a){return a.disabled===!0},checked:function(a){var b=a.nodeName.toLowerCase();return"input"===b&&!!a.checked||"option"===b&&!!a.selected},selected:function(a){return a.parentNode&&a.parentNode.selectedIndex,a.selected===!0},empty:function(a){for(a=a.firstChild;a;a=a.nextSibling)if(a.nodeType<6)return!1;return!0},parent:function(a){return!d.pseudos.empty(a)},header:function(a){return Z.test(a.nodeName)},input:function(a){return 
Y.test(a.nodeName)},button:function(a){var b=a.nodeName.toLowerCase();return"input"===b&&"button"===a.type||"button"===b},text:function(a){var b;return"input"===a.nodeName.toLowerCase()&&"text"===a.type&&(null==(b=a.getAttribute("type"))||"text"===b.toLowerCase())},first:nb(function(){return[0]}),last:nb(function(a,b){return[b-1]}),eq:nb(function(a,b,c){return[0>c?c+b:c]}),even:nb(function(a,b){for(var c=0;b>c;c+=2)a.push(c);return a}),odd:nb(function(a,b){for(var c=1;b>c;c+=2)a.push(c);return a}),lt:nb(function(a,b,c){for(var d=0>c?c+b:c;--d>=0;)a.push(d);return a}),gt:nb(function(a,b,c){for(var d=0>c?c+b:c;++d<b;)a.push(d);return a})}},d.pseudos.nth=d.pseudos.eq;for(b in{radio:!0,checkbox:!0,file:!0,password:!0,image:!0})d.pseudos[b]=lb(b);for(b in{submit:!0,reset:!0})d.pseudos[b]=mb(b);function pb(){}pb.prototype=d.filters=d.pseudos,d.setFilters=new pb,g=fb.tokenize=function(a,b){var c,e,f,g,h,i,j,k=z[a+" "];if(k)return b?0:k.slice(0);h=a,i=[],j=d.preFilter;while(h){(!c||(e=S.exec(h)))&&(e&&(h=h.slice(e[0].length)||h),i.push(f=[])),c=!1,(e=T.exec(h))&&(c=e.shift(),f.push({value:c,type:e[0].replace(R," ")}),h=h.slice(c.length));for(g in d.filter)!(e=X[g].exec(h))||j[g]&&!(e=j[g](e))||(c=e.shift(),f.push({value:c,type:g,matches:e}),h=h.slice(c.length));if(!c)break}return b?h.length:h?fb.error(a):z(a,i).slice(0)};function qb(a){for(var b=0,c=a.length,d="";c>b;b++)d+=a[b].value;return d}function rb(a,b,c){var d=b.dir,e=c&&"parentNode"===d,f=x++;return b.first?function(b,c,f){while(b=b[d])if(1===b.nodeType||e)return a(b,c,f)}:function(b,c,g){var h,i,j=[w,f];if(g){while(b=b[d])if((1===b.nodeType||e)&&a(b,c,g))return!0}else while(b=b[d])if(1===b.nodeType||e){if(i=b[u]||(b[u]={}),(h=i[d])&&h[0]===w&&h[1]===f)return j[2]=h[2];if(i[d]=j,j[2]=a(b,c,g))return!0}}}function sb(a){return a.length>1?function(b,c,d){var e=a.length;while(e--)if(!a[e](b,c,d))return!1;return!0}:a[0]}function tb(a,b,c){for(var d=0,e=b.length;e>d;d++)fb(a,b[d],c);return c}function 
ub(a,b,c,d,e){for(var f,g=[],h=0,i=a.length,j=null!=b;i>h;h++)(f=a[h])&&(!c||c(f,d,e))&&(g.push(f),j&&b.push(h));return g}function vb(a,b,c,d,e,f){return d&&!d[u]&&(d=vb(d)),e&&!e[u]&&(e=vb(e,f)),hb(function(f,g,h,i){var j,k,l,m=[],n=[],o=g.length,p=f||tb(b||"*",h.nodeType?[h]:h,[]),q=!a||!f&&b?p:ub(p,m,a,h,i),r=c?e||(f?a:o||d)?[]:g:q;if(c&&c(q,r,h,i),d){j=ub(r,n),d(j,[],h,i),k=j.length;while(k--)(l=j[k])&&(r[n[k]]=!(q[n[k]]=l))}if(f){if(e||a){if(e){j=[],k=r.length;while(k--)(l=r[k])&&j.push(q[k]=l);e(null,r=[],j,i)}k=r.length;while(k--)(l=r[k])&&(j=e?K.call(f,l):m[k])>-1&&(f[j]=!(g[j]=l))}}else r=ub(r===g?r.splice(o,r.length):r),e?e(null,g,r,i):I.apply(g,r)})}function wb(a){for(var b,c,e,f=a.length,g=d.relative[a[0].type],h=g||d.relative[" "],i=g?1:0,k=rb(function(a){return a===b},h,!0),l=rb(function(a){return K.call(b,a)>-1},h,!0),m=[function(a,c,d){return!g&&(d||c!==j)||((b=c).nodeType?k(a,c,d):l(a,c,d))}];f>i;i++)if(c=d.relative[a[i].type])m=[rb(sb(m),c)];else{if(c=d.filter[a[i].type].apply(null,a[i].matches),c[u]){for(e=++i;f>e;e++)if(d.relative[a[e].type])break;return vb(i>1&&sb(m),i>1&&qb(a.slice(0,i-1).concat({value:" "===a[i-2].type?"*":""})).replace(R,"$1"),c,e>i&&wb(a.slice(i,e)),f>e&&wb(a=a.slice(e)),f>e&&qb(a))}m.push(c)}return sb(m)}function xb(a,b){var c=b.length>0,e=a.length>0,f=function(f,g,h,i,k){var l,m,o,p=0,q="0",r=f&&[],s=[],t=j,u=f||e&&d.find.TAG("*",k),v=w+=null==t?1:Math.random()||.1,x=u.length;for(k&&(j=g!==n&&g);q!==x&&null!=(l=u[q]);q++){if(e&&l){m=0;while(o=a[m++])if(o(l,g,h)){i.push(l);break}k&&(w=v)}c&&((l=!o&&l)&&p--,f&&r.push(l))}if(p+=q,c&&q!==p){m=0;while(o=b[m++])o(r,s,g,h);if(f){if(p>0)while(q--)r[q]||s[q]||(s[q]=G.call(i));s=ub(s)}I.apply(i,s),k&&!f&&s.length>0&&p+b.length>1&&fb.uniqueSort(i)}return k&&(w=v,j=t),r};return c?hb(f):f}return h=fb.compile=function(a,b){var c,d=[],e=[],f=A[a+" "];if(!f){b||(b=g(a)),c=b.length;while(c--)f=wb(b[c]),f[u]?d.push(f):e.push(f);f=A(a,xb(e,d)),f.selector=a}return 
f},i=fb.select=function(a,b,e,f){var i,j,k,l,m,n="function"==typeof a&&a,o=!f&&g(a=n.selector||a);if(e=e||[],1===o.length){if(j=o[0]=o[0].slice(0),j.length>2&&"ID"===(k=j[0]).type&&c.getById&&9===b.nodeType&&p&&d.relative[j[1].type]){if(b=(d.find.ID(k.matches[0].replace(cb,db),b)||[])[0],!b)return e;n&&(b=b.parentNode),a=a.slice(j.shift().value.length)}i=X.needsContext.test(a)?0:j.length;while(i--){if(k=j[i],d.relative[l=k.type])break;if((m=d.find[l])&&(f=m(k.matches[0].replace(cb,db),ab.test(j[0].type)&&ob(b.parentNode)||b))){if(j.splice(i,1),a=f.length&&qb(j),!a)return I.apply(e,f),e;break}}}return(n||h(a,o))(f,b,!p,e,ab.test(a)&&ob(b.parentNode)||b),e},c.sortStable=u.split("").sort(B).join("")===u,c.detectDuplicates=!!l,m(),c.sortDetached=ib(function(a){return 1&a.compareDocumentPosition(n.createElement("div"))}),ib(function(a){return a.innerHTML="<a href='#'></a>","#"===a.firstChild.getAttribute("href")})||jb("type|href|height|width",function(a,b,c){return c?void 0:a.getAttribute(b,"type"===b.toLowerCase()?1:2)}),c.attributes&&ib(function(a){return a.innerHTML="<input/>",a.firstChild.setAttribute("value",""),""===a.firstChild.getAttribute("value")})||jb("value",function(a,b,c){return c||"input"!==a.nodeName.toLowerCase()?void 0:a.defaultValue}),ib(function(a){return null==a.getAttribute("disabled")})||jb(L,function(a,b,c){var d;return c?void 0:a[b]===!0?b.toLowerCase():(d=a.getAttributeNode(b))&&d.specified?d.value:null}),fb}(a);m.find=s,m.expr=s.selectors,m.expr[":"]=m.expr.pseudos,m.unique=s.uniqueSort,m.text=s.getText,m.isXMLDoc=s.isXML,m.contains=s.contains;var t=m.expr.match.needsContext,u=/^<(\w+)\s*\/?>(?:<\/\1>|)$/,v=/^.[^:#\[\.,]*$/;function w(a,b,c){if(m.isFunction(b))return m.grep(a,function(a,d){return!!b.call(a,d,a)!==c});if(b.nodeType)return m.grep(a,function(a){return a===b!==c});if("string"==typeof b){if(v.test(b))return m.filter(b,a,c);b=m.filter(b,a)}return m.grep(a,function(a){return m.inArray(a,b)>=0!==c})}m.filter=function(a,b,c){var 
d=b[0];return c&&(a=":not("+a+")"),1===b.length&&1===d.nodeType?m.find.matchesSelector(d,a)?[d]:[]:m.find.matches(a,m.grep(b,function(a){return 1===a.nodeType}))},m.fn.extend({find:function(a){var b,c=[],d=this,e=d.length;if("string"!=typeof a)return this.pushStack(m(a).filter(function(){for(b=0;e>b;b++)if(m.contains(d[b],this))return!0}));for(b=0;e>b;b++)m.find(a,d[b],c);return c=this.pushStack(e>1?m.unique(c):c),c.selector=this.selector?this.selector+" "+a:a,c},filter:function(a){return this.pushStack(w(this,a||[],!1))},not:function(a){return this.pushStack(w(this,a||[],!0))},is:function(a){return!!w(this,"string"==typeof a&&t.test(a)?m(a):a||[],!1).length}});var x,y=a.document,z=/^(?:\s*(<[\w\W]+>)[^>]*|#([\w-]*))$/,A=m.fn.init=function(a,b){var c,d;if(!a)return this;if("string"==typeof a){if(c="<"===a.charAt(0)&&">"===a.charAt(a.length-1)&&a.length>=3?[null,a,null]:z.exec(a),!c||!c[1]&&b)return!b||b.jquery?(b||x).find(a):this.constructor(b).find(a);if(c[1]){if(b=b instanceof m?b[0]:b,m.merge(this,m.parseHTML(c[1],b&&b.nodeType?b.ownerDocument||b:y,!0)),u.test(c[1])&&m.isPlainObject(b))for(c in b)m.isFunction(this[c])?this[c](b[c]):this.attr(c,b[c]);return this}if(d=y.getElementById(c[2]),d&&d.parentNode){if(d.id!==c[2])return x.find(a);this.length=1,this[0]=d}return this.context=y,this.selector=a,this}return a.nodeType?(this.context=this[0]=a,this.length=1,this):m.isFunction(a)?"undefined"!=typeof x.ready?x.ready(a):a(m):(void 0!==a.selector&&(this.selector=a.selector,this.context=a.context),m.makeArray(a,this))};A.prototype=m.fn,x=m(y);var B=/^(?:parents|prev(?:Until|All))/,C={children:!0,contents:!0,next:!0,prev:!0};m.extend({dir:function(a,b,c){var d=[],e=a[b];while(e&&9!==e.nodeType&&(void 0===c||1!==e.nodeType||!m(e).is(c)))1===e.nodeType&&d.push(e),e=e[b];return d},sibling:function(a,b){for(var c=[];a;a=a.nextSibling)1===a.nodeType&&a!==b&&c.push(a);return c}}),m.fn.extend({has:function(a){var b,c=m(a,this),d=c.length;return 
this.filter(function(){for(b=0;d>b;b++)if(m.contains(this,c[b]))return!0})},closest:function(a,b){for(var c,d=0,e=this.length,f=[],g=t.test(a)||"string"!=typeof a?m(a,b||this.context):0;e>d;d++)for(c=this[d];c&&c!==b;c=c.parentNode)if(c.nodeType<11&&(g?g.index(c)>-1:1===c.nodeType&&m.find.matchesSelector(c,a))){f.push(c);break}return this.pushStack(f.length>1?m.unique(f):f)},index:function(a){return a?"string"==typeof a?m.inArray(this[0],m(a)):m.inArray(a.jquery?a[0]:a,this):this[0]&&this[0].parentNode?this.first().prevAll().length:-1},add:function(a,b){return this.pushStack(m.unique(m.merge(this.get(),m(a,b))))},addBack:function(a){return this.add(null==a?this.prevObject:this.prevObject.filter(a))}});function D(a,b){do a=a[b];while(a&&1!==a.nodeType);return a}m.each({parent:function(a){var b=a.parentNode;return b&&11!==b.nodeType?b:null},parents:function(a){return m.dir(a,"parentNode")},parentsUntil:function(a,b,c){return m.dir(a,"parentNode",c)},next:function(a){return D(a,"nextSibling")},prev:function(a){return D(a,"previousSibling")},nextAll:function(a){return m.dir(a,"nextSibling")},prevAll:function(a){return m.dir(a,"previousSibling")},nextUntil:function(a,b,c){return m.dir(a,"nextSibling",c)},prevUntil:function(a,b,c){return m.dir(a,"previousSibling",c)},siblings:function(a){return m.sibling((a.parentNode||{}).firstChild,a)},children:function(a){return m.sibling(a.firstChild)},contents:function(a){return m.nodeName(a,"iframe")?a.contentDocument||a.contentWindow.document:m.merge([],a.childNodes)}},function(a,b){m.fn[a]=function(c,d){var e=m.map(this,b,c);return"Until"!==a.slice(-5)&&(d=c),d&&"string"==typeof d&&(e=m.filter(d,e)),this.length>1&&(C[a]||(e=m.unique(e)),B.test(a)&&(e=e.reverse())),this.pushStack(e)}});var E=/\S+/g,F={};function G(a){var b=F[a]={};return m.each(a.match(E)||[],function(a,c){b[c]=!0}),b}m.Callbacks=function(a){a="string"==typeof a?F[a]||G(a):m.extend({},a);var 
b,c,d,e,f,g,h=[],i=!a.once&&[],j=function(l){for(c=a.memory&&l,d=!0,f=g||0,g=0,e=h.length,b=!0;h&&e>f;f++)if(h[f].apply(l[0],l[1])===!1&&a.stopOnFalse){c=!1;break}b=!1,h&&(i?i.length&&j(i.shift()):c?h=[]:k.disable())},k={add:function(){if(h){var d=h.length;!function f(b){m.each(b,function(b,c){var d=m.type(c);"function"===d?a.unique&&k.has(c)||h.push(c):c&&c.length&&"string"!==d&&f(c)})}(arguments),b?e=h.length:c&&(g=d,j(c))}return this},remove:function(){return h&&m.each(arguments,function(a,c){var d;while((d=m.inArray(c,h,d))>-1)h.splice(d,1),b&&(e>=d&&e--,f>=d&&f--)}),this},has:function(a){return a?m.inArray(a,h)>-1:!(!h||!h.length)},empty:function(){return h=[],e=0,this},disable:function(){return h=i=c=void 0,this},disabled:function(){return!h},lock:function(){return i=void 0,c||k.disable(),this},locked:function(){return!i},fireWith:function(a,c){return!h||d&&!i||(c=c||[],c=[a,c.slice?c.slice():c],b?i.push(c):j(c)),this},fire:function(){return k.fireWith(this,arguments),this},fired:function(){return!!d}};return k},m.extend({Deferred:function(a){var b=[["resolve","done",m.Callbacks("once memory"),"resolved"],["reject","fail",m.Callbacks("once memory"),"rejected"],["notify","progress",m.Callbacks("memory")]],c="pending",d={state:function(){return c},always:function(){return e.done(arguments).fail(arguments),this},then:function(){var a=arguments;return m.Deferred(function(c){m.each(b,function(b,f){var g=m.isFunction(a[b])&&a[b];e[f[1]](function(){var a=g&&g.apply(this,arguments);a&&m.isFunction(a.promise)?a.promise().done(c.resolve).fail(c.reject).progress(c.notify):c[f[0]+"With"](this===d?c.promise():this,g?[a]:arguments)})}),a=null}).promise()},promise:function(a){return null!=a?m.extend(a,d):d}},e={};return d.pipe=d.then,m.each(b,function(a,f){var g=f[2],h=f[3];d[f[1]]=g.add,h&&g.add(function(){c=h},b[1^a][2].disable,b[2][2].lock),e[f[0]]=function(){return 
e[f[0]+"With"](this===e?d:this,arguments),this},e[f[0]+"With"]=g.fireWith}),d.promise(e),a&&a.call(e,e),e},when:function(a){var b=0,c=d.call(arguments),e=c.length,f=1!==e||a&&m.isFunction(a.promise)?e:0,g=1===f?a:m.Deferred(),h=function(a,b,c){return function(e){b[a]=this,c[a]=arguments.length>1?d.call(arguments):e,c===i?g.notifyWith(b,c):--f||g.resolveWith(b,c)}},i,j,k;if(e>1)for(i=new Array(e),j=new Array(e),k=new Array(e);e>b;b++)c[b]&&m.isFunction(c[b].promise)?c[b].promise().done(h(b,k,c)).fail(g.reject).progress(h(b,j,i)):--f;return f||g.resolveWith(k,c),g.promise()}});var H;m.fn.ready=function(a){return m.ready.promise().done(a),this},m.extend({isReady:!1,readyWait:1,holdReady:function(a){a?m.readyWait++:m.ready(!0)},ready:function(a){if(a===!0?!--m.readyWait:!m.isReady){if(!y.body)return setTimeout(m.ready);m.isReady=!0,a!==!0&&--m.readyWait>0||(H.resolveWith(y,[m]),m.fn.triggerHandler&&(m(y).triggerHandler("ready"),m(y).off("ready")))}}});function I(){y.addEventListener?(y.removeEventListener("DOMContentLoaded",J,!1),a.removeEventListener("load",J,!1)):(y.detachEvent("onreadystatechange",J),a.detachEvent("onload",J))}function J(){(y.addEventListener||"load"===event.type||"complete"===y.readyState)&&(I(),m.ready())}m.ready.promise=function(b){if(!H)if(H=m.Deferred(),"complete"===y.readyState)setTimeout(m.ready);else if(y.addEventListener)y.addEventListener("DOMContentLoaded",J,!1),a.addEventListener("load",J,!1);else{y.attachEvent("onreadystatechange",J),a.attachEvent("onload",J);var c=!1;try{c=null==a.frameElement&&y.documentElement}catch(d){}c&&c.doScroll&&!function e(){if(!m.isReady){try{c.doScroll("left")}catch(a){return setTimeout(e,50)}I(),m.ready()}}()}return H.promise(b)};var K="undefined",L;for(L in m(k))break;k.ownLast="0"!==L,k.inlineBlockNeedsLayout=!1,m(function(){var 
a,b,c,d;c=y.getElementsByTagName("body")[0],c&&c.style&&(b=y.createElement("div"),d=y.createElement("div"),d.style.cssText="position:absolute;border:0;width:0;height:0;top:0;left:-9999px",c.appendChild(d).appendChild(b),typeof b.style.zoom!==K&&(b.style.cssText="display:inline;margin:0;border:0;padding:1px;width:1px;zoom:1",k.inlineBlockNeedsLayout=a=3===b.offsetWidth,a&&(c.style.zoom=1)),c.removeChild(d))}),function(){var a=y.createElement("div");if(null==k.deleteExpando){k.deleteExpando=!0;try{delete a.test}catch(b){k.deleteExpando=!1}}a=null}(),m.acceptData=function(a){var b=m.noData[(a.nodeName+" ").toLowerCase()],c=+a.nodeType||1;return 1!==c&&9!==c?!1:!b||b!==!0&&a.getAttribute("classid")===b};var M=/^(?:\{[\w\W]*\}|\[[\w\W]*\])$/,N=/([A-Z])/g;function O(a,b,c){if(void 0===c&&1===a.nodeType){var d="data-"+b.replace(N,"-$1").toLowerCase();if(c=a.getAttribute(d),"string"==typeof c){try{c="true"===c?!0:"false"===c?!1:"null"===c?null:+c+""===c?+c:M.test(c)?m.parseJSON(c):c}catch(e){}m.data(a,b,c)}else c=void 0}return c}function P(a){var b;for(b in a)if(("data"!==b||!m.isEmptyObject(a[b]))&&"toJSON"!==b)return!1;return!0}function Q(a,b,d,e){if(m.acceptData(a)){var f,g,h=m.expando,i=a.nodeType,j=i?m.cache:a,k=i?a[h]:a[h]&&h; -if(k&&j[k]&&(e||j[k].data)||void 0!==d||"string"!=typeof b)return k||(k=i?a[h]=c.pop()||m.guid++:h),j[k]||(j[k]=i?{}:{toJSON:m.noop}),("object"==typeof b||"function"==typeof b)&&(e?j[k]=m.extend(j[k],b):j[k].data=m.extend(j[k].data,b)),g=j[k],e||(g.data||(g.data={}),g=g.data),void 0!==d&&(g[m.camelCase(b)]=d),"string"==typeof b?(f=g[b],null==f&&(f=g[m.camelCase(b)])):f=g,f}}function R(a,b,c){if(m.acceptData(a)){var d,e,f=a.nodeType,g=f?m.cache:a,h=f?a[m.expando]:m.expando;if(g[h]){if(b&&(d=c?g[h]:g[h].data)){m.isArray(b)?b=b.concat(m.map(b,m.camelCase)):b in d?b=[b]:(b=m.camelCase(b),b=b in d?[b]:b.split(" ")),e=b.length;while(e--)delete d[b[e]];if(c?!P(d):!m.isEmptyObject(d))return}(c||(delete 
g[h].data,P(g[h])))&&(f?m.cleanData([a],!0):k.deleteExpando||g!=g.window?delete g[h]:g[h]=null)}}}m.extend({cache:{},noData:{"applet ":!0,"embed ":!0,"object ":"clsid:D27CDB6E-AE6D-11cf-96B8-444553540000"},hasData:function(a){return a=a.nodeType?m.cache[a[m.expando]]:a[m.expando],!!a&&!P(a)},data:function(a,b,c){return Q(a,b,c)},removeData:function(a,b){return R(a,b)},_data:function(a,b,c){return Q(a,b,c,!0)},_removeData:function(a,b){return R(a,b,!0)}}),m.fn.extend({data:function(a,b){var c,d,e,f=this[0],g=f&&f.attributes;if(void 0===a){if(this.length&&(e=m.data(f),1===f.nodeType&&!m._data(f,"parsedAttrs"))){c=g.length;while(c--)g[c]&&(d=g[c].name,0===d.indexOf("data-")&&(d=m.camelCase(d.slice(5)),O(f,d,e[d])));m._data(f,"parsedAttrs",!0)}return e}return"object"==typeof a?this.each(function(){m.data(this,a)}):arguments.length>1?this.each(function(){m.data(this,a,b)}):f?O(f,a,m.data(f,a)):void 0},removeData:function(a){return this.each(function(){m.removeData(this,a)})}}),m.extend({queue:function(a,b,c){var d;return a?(b=(b||"fx")+"queue",d=m._data(a,b),c&&(!d||m.isArray(c)?d=m._data(a,b,m.makeArray(c)):d.push(c)),d||[]):void 0},dequeue:function(a,b){b=b||"fx";var c=m.queue(a,b),d=c.length,e=c.shift(),f=m._queueHooks(a,b),g=function(){m.dequeue(a,b)};"inprogress"===e&&(e=c.shift(),d--),e&&("fx"===b&&c.unshift("inprogress"),delete f.stop,e.call(a,g,f)),!d&&f&&f.empty.fire()},_queueHooks:function(a,b){var c=b+"queueHooks";return m._data(a,c)||m._data(a,c,{empty:m.Callbacks("once memory").add(function(){m._removeData(a,b+"queue"),m._removeData(a,c)})})}}),m.fn.extend({queue:function(a,b){var c=2;return"string"!=typeof a&&(b=a,a="fx",c--),arguments.length<c?m.queue(this[0],a):void 0===b?this:this.each(function(){var c=m.queue(this,a,b);m._queueHooks(this,a),"fx"===a&&"inprogress"!==c[0]&&m.dequeue(this,a)})},dequeue:function(a){return this.each(function(){m.dequeue(this,a)})},clearQueue:function(a){return this.queue(a||"fx",[])},promise:function(a,b){var 
c,d=1,e=m.Deferred(),f=this,g=this.length,h=function(){--d||e.resolveWith(f,[f])};"string"!=typeof a&&(b=a,a=void 0),a=a||"fx";while(g--)c=m._data(f[g],a+"queueHooks"),c&&c.empty&&(d++,c.empty.add(h));return h(),e.promise(b)}});var S=/[+-]?(?:\d*\.|)\d+(?:[eE][+-]?\d+|)/.source,T=["Top","Right","Bottom","Left"],U=function(a,b){return a=b||a,"none"===m.css(a,"display")||!m.contains(a.ownerDocument,a)},V=m.access=function(a,b,c,d,e,f,g){var h=0,i=a.length,j=null==c;if("object"===m.type(c)){e=!0;for(h in c)m.access(a,b,h,c[h],!0,f,g)}else if(void 0!==d&&(e=!0,m.isFunction(d)||(g=!0),j&&(g?(b.call(a,d),b=null):(j=b,b=function(a,b,c){return j.call(m(a),c)})),b))for(;i>h;h++)b(a[h],c,g?d:d.call(a[h],h,b(a[h],c)));return e?a:j?b.call(a):i?b(a[0],c):f},W=/^(?:checkbox|radio)$/i;!function(){var a=y.createElement("input"),b=y.createElement("div"),c=y.createDocumentFragment();if(b.innerHTML=" <link/><table></table><a href='/a'>a</a><input type='checkbox'/>",k.leadingWhitespace=3===b.firstChild.nodeType,k.tbody=!b.getElementsByTagName("tbody").length,k.htmlSerialize=!!b.getElementsByTagName("link").length,k.html5Clone="<:nav></:nav>"!==y.createElement("nav").cloneNode(!0).outerHTML,a.type="checkbox",a.checked=!0,c.appendChild(a),k.appendChecked=a.checked,b.innerHTML="<textarea>x</textarea>",k.noCloneChecked=!!b.cloneNode(!0).lastChild.defaultValue,c.appendChild(b),b.innerHTML="<input type='radio' checked='checked' name='t'/>",k.checkClone=b.cloneNode(!0).cloneNode(!0).lastChild.checked,k.noCloneEvent=!0,b.attachEvent&&(b.attachEvent("onclick",function(){k.noCloneEvent=!1}),b.cloneNode(!0).click()),null==k.deleteExpando){k.deleteExpando=!0;try{delete b.test}catch(d){k.deleteExpando=!1}}}(),function(){var b,c,d=y.createElement("div");for(b in{submit:!0,change:!0,focusin:!0})c="on"+b,(k[b+"Bubbles"]=c in a)||(d.setAttribute(c,"t"),k[b+"Bubbles"]=d.attributes[c].expando===!1);d=null}();var 
X=/^(?:input|select|textarea)$/i,Y=/^key/,Z=/^(?:mouse|pointer|contextmenu)|click/,$=/^(?:focusinfocus|focusoutblur)$/,_=/^([^.]*)(?:\.(.+)|)$/;function ab(){return!0}function bb(){return!1}function cb(){try{return y.activeElement}catch(a){}}m.event={global:{},add:function(a,b,c,d,e){var f,g,h,i,j,k,l,n,o,p,q,r=m._data(a);if(r){c.handler&&(i=c,c=i.handler,e=i.selector),c.guid||(c.guid=m.guid++),(g=r.events)||(g=r.events={}),(k=r.handle)||(k=r.handle=function(a){return typeof m===K||a&&m.event.triggered===a.type?void 0:m.event.dispatch.apply(k.elem,arguments)},k.elem=a),b=(b||"").match(E)||[""],h=b.length;while(h--)f=_.exec(b[h])||[],o=q=f[1],p=(f[2]||"").split(".").sort(),o&&(j=m.event.special[o]||{},o=(e?j.delegateType:j.bindType)||o,j=m.event.special[o]||{},l=m.extend({type:o,origType:q,data:d,handler:c,guid:c.guid,selector:e,needsContext:e&&m.expr.match.needsContext.test(e),namespace:p.join(".")},i),(n=g[o])||(n=g[o]=[],n.delegateCount=0,j.setup&&j.setup.call(a,d,p,k)!==!1||(a.addEventListener?a.addEventListener(o,k,!1):a.attachEvent&&a.attachEvent("on"+o,k))),j.add&&(j.add.call(a,l),l.handler.guid||(l.handler.guid=c.guid)),e?n.splice(n.delegateCount++,0,l):n.push(l),m.event.global[o]=!0);a=null}},remove:function(a,b,c,d,e){var f,g,h,i,j,k,l,n,o,p,q,r=m.hasData(a)&&m._data(a);if(r&&(k=r.events)){b=(b||"").match(E)||[""],j=b.length;while(j--)if(h=_.exec(b[j])||[],o=q=h[1],p=(h[2]||"").split(".").sort(),o){l=m.event.special[o]||{},o=(d?l.delegateType:l.bindType)||o,n=k[o]||[],h=h[2]&&new RegExp("(^|\\.)"+p.join("\\.(?:.*\\.|)")+"(\\.|$)"),i=f=n.length;while(f--)g=n[f],!e&&q!==g.origType||c&&c.guid!==g.guid||h&&!h.test(g.namespace)||d&&d!==g.selector&&("**"!==d||!g.selector)||(n.splice(f,1),g.selector&&n.delegateCount--,l.remove&&l.remove.call(a,g));i&&!n.length&&(l.teardown&&l.teardown.call(a,p,r.handle)!==!1||m.removeEvent(a,o,r.handle),delete k[o])}else for(o in k)m.event.remove(a,o+b[j],c,d,!0);m.isEmptyObject(k)&&(delete 
r.handle,m._removeData(a,"events"))}},trigger:function(b,c,d,e){var f,g,h,i,k,l,n,o=[d||y],p=j.call(b,"type")?b.type:b,q=j.call(b,"namespace")?b.namespace.split("."):[];if(h=l=d=d||y,3!==d.nodeType&&8!==d.nodeType&&!$.test(p+m.event.triggered)&&(p.indexOf(".")>=0&&(q=p.split("."),p=q.shift(),q.sort()),g=p.indexOf(":")<0&&"on"+p,b=b[m.expando]?b:new m.Event(p,"object"==typeof b&&b),b.isTrigger=e?2:3,b.namespace=q.join("."),b.namespace_re=b.namespace?new RegExp("(^|\\.)"+q.join("\\.(?:.*\\.|)")+"(\\.|$)"):null,b.result=void 0,b.target||(b.target=d),c=null==c?[b]:m.makeArray(c,[b]),k=m.event.special[p]||{},e||!k.trigger||k.trigger.apply(d,c)!==!1)){if(!e&&!k.noBubble&&!m.isWindow(d)){for(i=k.delegateType||p,$.test(i+p)||(h=h.parentNode);h;h=h.parentNode)o.push(h),l=h;l===(d.ownerDocument||y)&&o.push(l.defaultView||l.parentWindow||a)}n=0;while((h=o[n++])&&!b.isPropagationStopped())b.type=n>1?i:k.bindType||p,f=(m._data(h,"events")||{})[b.type]&&m._data(h,"handle"),f&&f.apply(h,c),f=g&&h[g],f&&f.apply&&m.acceptData(h)&&(b.result=f.apply(h,c),b.result===!1&&b.preventDefault());if(b.type=p,!e&&!b.isDefaultPrevented()&&(!k._default||k._default.apply(o.pop(),c)===!1)&&m.acceptData(d)&&g&&d[p]&&!m.isWindow(d)){l=d[g],l&&(d[g]=null),m.event.triggered=p;try{d[p]()}catch(r){}m.event.triggered=void 0,l&&(d[g]=l)}return b.result}},dispatch:function(a){a=m.event.fix(a);var b,c,e,f,g,h=[],i=d.call(arguments),j=(m._data(this,"events")||{})[a.type]||[],k=m.event.special[a.type]||{};if(i[0]=a,a.delegateTarget=this,!k.preDispatch||k.preDispatch.call(this,a)!==!1){h=m.event.handlers.call(this,a,j),b=0;while((f=h[b++])&&!a.isPropagationStopped()){a.currentTarget=f.elem,g=0;while((e=f.handlers[g++])&&!a.isImmediatePropagationStopped())(!a.namespace_re||a.namespace_re.test(e.namespace))&&(a.handleObj=e,a.data=e.data,c=((m.event.special[e.origType]||{}).handle||e.handler).apply(f.elem,i),void 0!==c&&(a.result=c)===!1&&(a.preventDefault(),a.stopPropagation()))}return 
k.postDispatch&&k.postDispatch.call(this,a),a.result}},handlers:function(a,b){var c,d,e,f,g=[],h=b.delegateCount,i=a.target;if(h&&i.nodeType&&(!a.button||"click"!==a.type))for(;i!=this;i=i.parentNode||this)if(1===i.nodeType&&(i.disabled!==!0||"click"!==a.type)){for(e=[],f=0;h>f;f++)d=b[f],c=d.selector+" ",void 0===e[c]&&(e[c]=d.needsContext?m(c,this).index(i)>=0:m.find(c,this,null,[i]).length),e[c]&&e.push(d);e.length&&g.push({elem:i,handlers:e})}return h<b.length&&g.push({elem:this,handlers:b.slice(h)}),g},fix:function(a){if(a[m.expando])return a;var b,c,d,e=a.type,f=a,g=this.fixHooks[e];g||(this.fixHooks[e]=g=Z.test(e)?this.mouseHooks:Y.test(e)?this.keyHooks:{}),d=g.props?this.props.concat(g.props):this.props,a=new m.Event(f),b=d.length;while(b--)c=d[b],a[c]=f[c];return a.target||(a.target=f.srcElement||y),3===a.target.nodeType&&(a.target=a.target.parentNode),a.metaKey=!!a.metaKey,g.filter?g.filter(a,f):a},props:"altKey bubbles cancelable ctrlKey currentTarget eventPhase metaKey relatedTarget shiftKey target timeStamp view which".split(" "),fixHooks:{},keyHooks:{props:"char charCode key keyCode".split(" "),filter:function(a,b){return null==a.which&&(a.which=null!=b.charCode?b.charCode:b.keyCode),a}},mouseHooks:{props:"button buttons clientX clientY fromElement offsetX offsetY pageX pageY screenX screenY toElement".split(" "),filter:function(a,b){var c,d,e,f=b.button,g=b.fromElement;return null==a.pageX&&null!=b.clientX&&(d=a.target.ownerDocument||y,e=d.documentElement,c=d.body,a.pageX=b.clientX+(e&&e.scrollLeft||c&&c.scrollLeft||0)-(e&&e.clientLeft||c&&c.clientLeft||0),a.pageY=b.clientY+(e&&e.scrollTop||c&&c.scrollTop||0)-(e&&e.clientTop||c&&c.clientTop||0)),!a.relatedTarget&&g&&(a.relatedTarget=g===a.target?b.toElement:g),a.which||void 0===f||(a.which=1&f?1:2&f?3:4&f?2:0),a}},special:{load:{noBubble:!0},focus:{trigger:function(){if(this!==cb()&&this.focus)try{return this.focus(),!1}catch(a){}},delegateType:"focusin"},blur:{trigger:function(){return 
this===cb()&&this.blur?(this.blur(),!1):void 0},delegateType:"focusout"},click:{trigger:function(){return m.nodeName(this,"input")&&"checkbox"===this.type&&this.click?(this.click(),!1):void 0},_default:function(a){return m.nodeName(a.target,"a")}},beforeunload:{postDispatch:function(a){void 0!==a.result&&a.originalEvent&&(a.originalEvent.returnValue=a.result)}}},simulate:function(a,b,c,d){var e=m.extend(new m.Event,c,{type:a,isSimulated:!0,originalEvent:{}});d?m.event.trigger(e,null,b):m.event.dispatch.call(b,e),e.isDefaultPrevented()&&c.preventDefault()}},m.removeEvent=y.removeEventListener?function(a,b,c){a.removeEventListener&&a.removeEventListener(b,c,!1)}:function(a,b,c){var d="on"+b;a.detachEvent&&(typeof a[d]===K&&(a[d]=null),a.detachEvent(d,c))},m.Event=function(a,b){return this instanceof m.Event?(a&&a.type?(this.originalEvent=a,this.type=a.type,this.isDefaultPrevented=a.defaultPrevented||void 0===a.defaultPrevented&&a.returnValue===!1?ab:bb):this.type=a,b&&m.extend(this,b),this.timeStamp=a&&a.timeStamp||m.now(),void(this[m.expando]=!0)):new m.Event(a,b)},m.Event.prototype={isDefaultPrevented:bb,isPropagationStopped:bb,isImmediatePropagationStopped:bb,preventDefault:function(){var a=this.originalEvent;this.isDefaultPrevented=ab,a&&(a.preventDefault?a.preventDefault():a.returnValue=!1)},stopPropagation:function(){var a=this.originalEvent;this.isPropagationStopped=ab,a&&(a.stopPropagation&&a.stopPropagation(),a.cancelBubble=!0)},stopImmediatePropagation:function(){var a=this.originalEvent;this.isImmediatePropagationStopped=ab,a&&a.stopImmediatePropagation&&a.stopImmediatePropagation(),this.stopPropagation()}},m.each({mouseenter:"mouseover",mouseleave:"mouseout",pointerenter:"pointerover",pointerleave:"pointerout"},function(a,b){m.event.special[a]={delegateType:b,bindType:b,handle:function(a){var 
c,d=this,e=a.relatedTarget,f=a.handleObj;return(!e||e!==d&&!m.contains(d,e))&&(a.type=f.origType,c=f.handler.apply(this,arguments),a.type=b),c}}}),k.submitBubbles||(m.event.special.submit={setup:function(){return m.nodeName(this,"form")?!1:void m.event.add(this,"click._submit keypress._submit",function(a){var b=a.target,c=m.nodeName(b,"input")||m.nodeName(b,"button")?b.form:void 0;c&&!m._data(c,"submitBubbles")&&(m.event.add(c,"submit._submit",function(a){a._submit_bubble=!0}),m._data(c,"submitBubbles",!0))})},postDispatch:function(a){a._submit_bubble&&(delete a._submit_bubble,this.parentNode&&!a.isTrigger&&m.event.simulate("submit",this.parentNode,a,!0))},teardown:function(){return m.nodeName(this,"form")?!1:void m.event.remove(this,"._submit")}}),k.changeBubbles||(m.event.special.change={setup:function(){return X.test(this.nodeName)?(("checkbox"===this.type||"radio"===this.type)&&(m.event.add(this,"propertychange._change",function(a){"checked"===a.originalEvent.propertyName&&(this._just_changed=!0)}),m.event.add(this,"click._change",function(a){this._just_changed&&!a.isTrigger&&(this._just_changed=!1),m.event.simulate("change",this,a,!0)})),!1):void m.event.add(this,"beforeactivate._change",function(a){var b=a.target;X.test(b.nodeName)&&!m._data(b,"changeBubbles")&&(m.event.add(b,"change._change",function(a){!this.parentNode||a.isSimulated||a.isTrigger||m.event.simulate("change",this.parentNode,a,!0)}),m._data(b,"changeBubbles",!0))})},handle:function(a){var b=a.target;return this!==b||a.isSimulated||a.isTrigger||"radio"!==b.type&&"checkbox"!==b.type?a.handleObj.handler.apply(this,arguments):void 0},teardown:function(){return m.event.remove(this,"._change"),!X.test(this.nodeName)}}),k.focusinBubbles||m.each({focus:"focusin",blur:"focusout"},function(a,b){var c=function(a){m.event.simulate(b,a.target,m.event.fix(a),!0)};m.event.special[b]={setup:function(){var 
d=this.ownerDocument||this,e=m._data(d,b);e||d.addEventListener(a,c,!0),m._data(d,b,(e||0)+1)},teardown:function(){var d=this.ownerDocument||this,e=m._data(d,b)-1;e?m._data(d,b,e):(d.removeEventListener(a,c,!0),m._removeData(d,b))}}}),m.fn.extend({on:function(a,b,c,d,e){var f,g;if("object"==typeof a){"string"!=typeof b&&(c=c||b,b=void 0);for(f in a)this.on(f,b,c,a[f],e);return this}if(null==c&&null==d?(d=b,c=b=void 0):null==d&&("string"==typeof b?(d=c,c=void 0):(d=c,c=b,b=void 0)),d===!1)d=bb;else if(!d)return this;return 1===e&&(g=d,d=function(a){return m().off(a),g.apply(this,arguments)},d.guid=g.guid||(g.guid=m.guid++)),this.each(function(){m.event.add(this,a,d,c,b)})},one:function(a,b,c,d){return this.on(a,b,c,d,1)},off:function(a,b,c){var d,e;if(a&&a.preventDefault&&a.handleObj)return d=a.handleObj,m(a.delegateTarget).off(d.namespace?d.origType+"."+d.namespace:d.origType,d.selector,d.handler),this;if("object"==typeof a){for(e in a)this.off(e,b,a[e]);return this}return(b===!1||"function"==typeof b)&&(c=b,b=void 0),c===!1&&(c=bb),this.each(function(){m.event.remove(this,a,c,b)})},trigger:function(a,b){return this.each(function(){m.event.trigger(a,b,this)})},triggerHandler:function(a,b){var c=this[0];return c?m.event.trigger(a,b,c,!0):void 0}});function db(a){var b=eb.split("|"),c=a.createDocumentFragment();if(c.createElement)while(b.length)c.createElement(b.pop());return c}var eb="abbr|article|aside|audio|bdi|canvas|data|datalist|details|figcaption|figure|footer|header|hgroup|mark|meter|nav|output|progress|section|summary|time|video",fb=/ jQuery\d+="(?:null|\d+)"/g,gb=new RegExp("<(?:"+eb+")[\\s/>]","i"),hb=/^\s+/,ib=/<(?!area|br|col|embed|hr|img|input|link|meta|param)(([\w:]+)[^>]*)\/>/gi,jb=/<([\w:]+)/,kb=/<tbody/i,lb=/<|&#?\w+;/,mb=/<(?:script|style|link)/i,nb=/checked\s*(?:[^=]|=\s*.checked.)/i,ob=/^$|\/(?:java|ecma)script/i,pb=/^true\/(.*)/,qb=/^\s*<!(?:\[CDATA\[|--)|(?:\]\]|--)>\s*$/g,rb={option:[1,"<select 
multiple='multiple'>","</select>"],legend:[1,"<fieldset>","</fieldset>"],area:[1,"<map>","</map>"],param:[1,"<object>","</object>"],thead:[1,"<table>","</table>"],tr:[2,"<table><tbody>","</tbody></table>"],col:[2,"<table><tbody></tbody><colgroup>","</colgroup></table>"],td:[3,"<table><tbody><tr>","</tr></tbody></table>"],_default:k.htmlSerialize?[0,"",""]:[1,"X<div>","</div>"]},sb=db(y),tb=sb.appendChild(y.createElement("div"));rb.optgroup=rb.option,rb.tbody=rb.tfoot=rb.colgroup=rb.caption=rb.thead,rb.th=rb.td;function ub(a,b){var c,d,e=0,f=typeof a.getElementsByTagName!==K?a.getElementsByTagName(b||"*"):typeof a.querySelectorAll!==K?a.querySelectorAll(b||"*"):void 0;if(!f)for(f=[],c=a.childNodes||a;null!=(d=c[e]);e++)!b||m.nodeName(d,b)?f.push(d):m.merge(f,ub(d,b));return void 0===b||b&&m.nodeName(a,b)?m.merge([a],f):f}function vb(a){W.test(a.type)&&(a.defaultChecked=a.checked)}function wb(a,b){return m.nodeName(a,"table")&&m.nodeName(11!==b.nodeType?b:b.firstChild,"tr")?a.getElementsByTagName("tbody")[0]||a.appendChild(a.ownerDocument.createElement("tbody")):a}function xb(a){return a.type=(null!==m.find.attr(a,"type"))+"/"+a.type,a}function yb(a){var b=pb.exec(a.type);return b?a.type=b[1]:a.removeAttribute("type"),a}function zb(a,b){for(var c,d=0;null!=(c=a[d]);d++)m._data(c,"globalEval",!b||m._data(b[d],"globalEval"))}function Ab(a,b){if(1===b.nodeType&&m.hasData(a)){var c,d,e,f=m._data(a),g=m._data(b,f),h=f.events;if(h){delete g.handle,g.events={};for(c in h)for(d=0,e=h[c].length;e>d;d++)m.event.add(b,c,h[c][d])}g.data&&(g.data=m.extend({},g.data))}}function Bb(a,b){var c,d,e;if(1===b.nodeType){if(c=b.nodeName.toLowerCase(),!k.noCloneEvent&&b[m.expando]){e=m._data(b);for(d in 
e.events)m.removeEvent(b,d,e.handle);b.removeAttribute(m.expando)}"script"===c&&b.text!==a.text?(xb(b).text=a.text,yb(b)):"object"===c?(b.parentNode&&(b.outerHTML=a.outerHTML),k.html5Clone&&a.innerHTML&&!m.trim(b.innerHTML)&&(b.innerHTML=a.innerHTML)):"input"===c&&W.test(a.type)?(b.defaultChecked=b.checked=a.checked,b.value!==a.value&&(b.value=a.value)):"option"===c?b.defaultSelected=b.selected=a.defaultSelected:("input"===c||"textarea"===c)&&(b.defaultValue=a.defaultValue)}}m.extend({clone:function(a,b,c){var d,e,f,g,h,i=m.contains(a.ownerDocument,a);if(k.html5Clone||m.isXMLDoc(a)||!gb.test("<"+a.nodeName+">")?f=a.cloneNode(!0):(tb.innerHTML=a.outerHTML,tb.removeChild(f=tb.firstChild)),!(k.noCloneEvent&&k.noCloneChecked||1!==a.nodeType&&11!==a.nodeType||m.isXMLDoc(a)))for(d=ub(f),h=ub(a),g=0;null!=(e=h[g]);++g)d[g]&&Bb(e,d[g]);if(b)if(c)for(h=h||ub(a),d=d||ub(f),g=0;null!=(e=h[g]);g++)Ab(e,d[g]);else Ab(a,f);return d=ub(f,"script"),d.length>0&&zb(d,!i&&ub(a,"script")),d=h=e=null,f},buildFragment:function(a,b,c,d){for(var e,f,g,h,i,j,l,n=a.length,o=db(b),p=[],q=0;n>q;q++)if(f=a[q],f||0===f)if("object"===m.type(f))m.merge(p,f.nodeType?[f]:f);else if(lb.test(f)){h=h||o.appendChild(b.createElement("div")),i=(jb.exec(f)||["",""])[1].toLowerCase(),l=rb[i]||rb._default,h.innerHTML=l[1]+f.replace(ib,"<$1></$2>")+l[2],e=l[0];while(e--)h=h.lastChild;if(!k.leadingWhitespace&&hb.test(f)&&p.push(b.createTextNode(hb.exec(f)[0])),!k.tbody){f="table"!==i||kb.test(f)?"<table>"!==l[1]||kb.test(f)?0:h:h.firstChild,e=f&&f.childNodes.length;while(e--)m.nodeName(j=f.childNodes[e],"tbody")&&!j.childNodes.length&&f.removeChild(j)}m.merge(p,h.childNodes),h.textContent="";while(h.firstChild)h.removeChild(h.firstChild);h=o.lastChild}else 
p.push(b.createTextNode(f));h&&o.removeChild(h),k.appendChecked||m.grep(ub(p,"input"),vb),q=0;while(f=p[q++])if((!d||-1===m.inArray(f,d))&&(g=m.contains(f.ownerDocument,f),h=ub(o.appendChild(f),"script"),g&&zb(h),c)){e=0;while(f=h[e++])ob.test(f.type||"")&&c.push(f)}return h=null,o},cleanData:function(a,b){for(var d,e,f,g,h=0,i=m.expando,j=m.cache,l=k.deleteExpando,n=m.event.special;null!=(d=a[h]);h++)if((b||m.acceptData(d))&&(f=d[i],g=f&&j[f])){if(g.events)for(e in g.events)n[e]?m.event.remove(d,e):m.removeEvent(d,e,g.handle);j[f]&&(delete j[f],l?delete d[i]:typeof d.removeAttribute!==K?d.removeAttribute(i):d[i]=null,c.push(f))}}}),m.fn.extend({text:function(a){return V(this,function(a){return void 0===a?m.text(this):this.empty().append((this[0]&&this[0].ownerDocument||y).createTextNode(a))},null,a,arguments.length)},append:function(){return this.domManip(arguments,function(a){if(1===this.nodeType||11===this.nodeType||9===this.nodeType){var b=wb(this,a);b.appendChild(a)}})},prepend:function(){return this.domManip(arguments,function(a){if(1===this.nodeType||11===this.nodeType||9===this.nodeType){var b=wb(this,a);b.insertBefore(a,b.firstChild)}})},before:function(){return this.domManip(arguments,function(a){this.parentNode&&this.parentNode.insertBefore(a,this)})},after:function(){return this.domManip(arguments,function(a){this.parentNode&&this.parentNode.insertBefore(a,this.nextSibling)})},remove:function(a,b){for(var c,d=a?m.filter(a,this):this,e=0;null!=(c=d[e]);e++)b||1!==c.nodeType||m.cleanData(ub(c)),c.parentNode&&(b&&m.contains(c.ownerDocument,c)&&zb(ub(c,"script")),c.parentNode.removeChild(c));return this},empty:function(){for(var a,b=0;null!=(a=this[b]);b++){1===a.nodeType&&m.cleanData(ub(a,!1));while(a.firstChild)a.removeChild(a.firstChild);a.options&&m.nodeName(a,"select")&&(a.options.length=0)}return this},clone:function(a,b){return a=null==a?!1:a,b=null==b?a:b,this.map(function(){return m.clone(this,a,b)})},html:function(a){return V(this,function(a){var 
b=this[0]||{},c=0,d=this.length;if(void 0===a)return 1===b.nodeType?b.innerHTML.replace(fb,""):void 0;if(!("string"!=typeof a||mb.test(a)||!k.htmlSerialize&&gb.test(a)||!k.leadingWhitespace&&hb.test(a)||rb[(jb.exec(a)||["",""])[1].toLowerCase()])){a=a.replace(ib,"<$1></$2>");try{for(;d>c;c++)b=this[c]||{},1===b.nodeType&&(m.cleanData(ub(b,!1)),b.innerHTML=a);b=0}catch(e){}}b&&this.empty().append(a)},null,a,arguments.length)},replaceWith:function(){var a=arguments[0];return this.domManip(arguments,function(b){a=this.parentNode,m.cleanData(ub(this)),a&&a.replaceChild(b,this)}),a&&(a.length||a.nodeType)?this:this.remove()},detach:function(a){return this.remove(a,!0)},domManip:function(a,b){a=e.apply([],a);var c,d,f,g,h,i,j=0,l=this.length,n=this,o=l-1,p=a[0],q=m.isFunction(p);if(q||l>1&&"string"==typeof p&&!k.checkClone&&nb.test(p))return this.each(function(c){var d=n.eq(c);q&&(a[0]=p.call(this,c,d.html())),d.domManip(a,b)});if(l&&(i=m.buildFragment(a,this[0].ownerDocument,!1,this),c=i.firstChild,1===i.childNodes.length&&(i=c),c)){for(g=m.map(ub(i,"script"),xb),f=g.length;l>j;j++)d=i,j!==o&&(d=m.clone(d,!0,!0),f&&m.merge(g,ub(d,"script"))),b.call(this[j],d,j);if(f)for(h=g[g.length-1].ownerDocument,m.map(g,yb),j=0;f>j;j++)d=g[j],ob.test(d.type||"")&&!m._data(d,"globalEval")&&m.contains(h,d)&&(d.src?m._evalUrl&&m._evalUrl(d.src):m.globalEval((d.text||d.textContent||d.innerHTML||"").replace(qb,"")));i=c=null}return this}}),m.each({appendTo:"append",prependTo:"prepend",insertBefore:"before",insertAfter:"after",replaceAll:"replaceWith"},function(a,b){m.fn[a]=function(a){for(var c,d=0,e=[],g=m(a),h=g.length-1;h>=d;d++)c=d===h?this:this.clone(!0),m(g[d])[b](c),f.apply(e,c.get());return this.pushStack(e)}});var Cb,Db={};function Eb(b,c){var d,e=m(c.createElement(b)).appendTo(c.body),f=a.getDefaultComputedStyle&&(d=a.getDefaultComputedStyle(e[0]))?d.display:m.css(e[0],"display");return e.detach(),f}function Fb(a){var b=y,c=Db[a];return 
c||(c=Eb(a,b),"none"!==c&&c||(Cb=(Cb||m("<iframe frameborder='0' width='0' height='0'/>")).appendTo(b.documentElement),b=(Cb[0].contentWindow||Cb[0].contentDocument).document,b.write(),b.close(),c=Eb(a,b),Cb.detach()),Db[a]=c),c}!function(){var a;k.shrinkWrapBlocks=function(){if(null!=a)return a;a=!1;var b,c,d;return c=y.getElementsByTagName("body")[0],c&&c.style?(b=y.createElement("div"),d=y.createElement("div"),d.style.cssText="position:absolute;border:0;width:0;height:0;top:0;left:-9999px",c.appendChild(d).appendChild(b),typeof b.style.zoom!==K&&(b.style.cssText="-webkit-box-sizing:content-box;-moz-box-sizing:content-box;box-sizing:content-box;display:block;margin:0;border:0;padding:1px;width:1px;zoom:1",b.appendChild(y.createElement("div")).style.width="5px",a=3!==b.offsetWidth),c.removeChild(d),a):void 0}}();var Gb=/^margin/,Hb=new RegExp("^("+S+")(?!px)[a-z%]+$","i"),Ib,Jb,Kb=/^(top|right|bottom|left)$/;a.getComputedStyle?(Ib=function(a){return a.ownerDocument.defaultView.getComputedStyle(a,null)},Jb=function(a,b,c){var d,e,f,g,h=a.style;return c=c||Ib(a),g=c?c.getPropertyValue(b)||c[b]:void 0,c&&(""!==g||m.contains(a.ownerDocument,a)||(g=m.style(a,b)),Hb.test(g)&&Gb.test(b)&&(d=h.width,e=h.minWidth,f=h.maxWidth,h.minWidth=h.maxWidth=h.width=g,g=c.width,h.width=d,h.minWidth=e,h.maxWidth=f)),void 0===g?g:g+""}):y.documentElement.currentStyle&&(Ib=function(a){return a.currentStyle},Jb=function(a,b,c){var d,e,f,g,h=a.style;return c=c||Ib(a),g=c?c[b]:void 0,null==g&&h&&h[b]&&(g=h[b]),Hb.test(g)&&!Kb.test(b)&&(d=h.left,e=a.runtimeStyle,f=e&&e.left,f&&(e.left=a.currentStyle.left),h.left="fontSize"===b?"1em":g,g=h.pixelLeft+"px",h.left=d,f&&(e.left=f)),void 0===g?g:g+""||"auto"});function Lb(a,b){return{get:function(){var c=a();if(null!=c)return c?void delete this.get:(this.get=b).apply(this,arguments)}}}!function(){var b,c,d,e,f,g,h;if(b=y.createElement("div"),b.innerHTML=" <link/><table></table><a href='/a'>a</a><input 
type='checkbox'/>",d=b.getElementsByTagName("a")[0],c=d&&d.style){c.cssText="float:left;opacity:.5",k.opacity="0.5"===c.opacity,k.cssFloat=!!c.cssFloat,b.style.backgroundClip="content-box",b.cloneNode(!0).style.backgroundClip="",k.clearCloneStyle="content-box"===b.style.backgroundClip,k.boxSizing=""===c.boxSizing||""===c.MozBoxSizing||""===c.WebkitBoxSizing,m.extend(k,{reliableHiddenOffsets:function(){return null==g&&i(),g},boxSizingReliable:function(){return null==f&&i(),f},pixelPosition:function(){return null==e&&i(),e},reliableMarginRight:function(){return null==h&&i(),h}});function i(){var b,c,d,i;c=y.getElementsByTagName("body")[0],c&&c.style&&(b=y.createElement("div"),d=y.createElement("div"),d.style.cssText="position:absolute;border:0;width:0;height:0;top:0;left:-9999px",c.appendChild(d).appendChild(b),b.style.cssText="-webkit-box-sizing:border-box;-moz-box-sizing:border-box;box-sizing:border-box;display:block;margin-top:1%;top:1%;border:1px;padding:1px;width:4px;position:absolute",e=f=!1,h=!0,a.getComputedStyle&&(e="1%"!==(a.getComputedStyle(b,null)||{}).top,f="4px"===(a.getComputedStyle(b,null)||{width:"4px"}).width,i=b.appendChild(y.createElement("div")),i.style.cssText=b.style.cssText="-webkit-box-sizing:content-box;-moz-box-sizing:content-box;box-sizing:content-box;display:block;margin:0;border:0;padding:0",i.style.marginRight=i.style.width="0",b.style.width="1px",h=!parseFloat((a.getComputedStyle(i,null)||{}).marginRight)),b.innerHTML="<table><tr><td></td><td>t</td></tr></table>",i=b.getElementsByTagName("td"),i[0].style.cssText="margin:0;border:0;padding:0;display:none",g=0===i[0].offsetHeight,g&&(i[0].style.display="",i[1].style.display="none",g=0===i[0].offsetHeight),c.removeChild(d))}}}(),m.swap=function(a,b,c,d){var e,f,g={};for(f in b)g[f]=a.style[f],a.style[f]=b[f];e=c.apply(a,d||[]);for(f in b)a.style[f]=g[f];return e};var Mb=/alpha\([^)]*\)/i,Nb=/opacity\s*=\s*([^)]*)/,Ob=/^(none|table(?!-c[ea]).+)/,Pb=new RegExp("^("+S+")(.*)$","i"),Qb=new 
RegExp("^([+-])=("+S+")","i"),Rb={position:"absolute",visibility:"hidden",display:"block"},Sb={letterSpacing:"0",fontWeight:"400"},Tb=["Webkit","O","Moz","ms"];function Ub(a,b){if(b in a)return b;var c=b.charAt(0).toUpperCase()+b.slice(1),d=b,e=Tb.length;while(e--)if(b=Tb[e]+c,b in a)return b;return d}function Vb(a,b){for(var c,d,e,f=[],g=0,h=a.length;h>g;g++)d=a[g],d.style&&(f[g]=m._data(d,"olddisplay"),c=d.style.display,b?(f[g]||"none"!==c||(d.style.display=""),""===d.style.display&&U(d)&&(f[g]=m._data(d,"olddisplay",Fb(d.nodeName)))):(e=U(d),(c&&"none"!==c||!e)&&m._data(d,"olddisplay",e?c:m.css(d,"display"))));for(g=0;h>g;g++)d=a[g],d.style&&(b&&"none"!==d.style.display&&""!==d.style.display||(d.style.display=b?f[g]||"":"none"));return a}function Wb(a,b,c){var d=Pb.exec(b);return d?Math.max(0,d[1]-(c||0))+(d[2]||"px"):b}function Xb(a,b,c,d,e){for(var f=c===(d?"border":"content")?4:"width"===b?1:0,g=0;4>f;f+=2)"margin"===c&&(g+=m.css(a,c+T[f],!0,e)),d?("content"===c&&(g-=m.css(a,"padding"+T[f],!0,e)),"margin"!==c&&(g-=m.css(a,"border"+T[f]+"Width",!0,e))):(g+=m.css(a,"padding"+T[f],!0,e),"padding"!==c&&(g+=m.css(a,"border"+T[f]+"Width",!0,e)));return g}function Yb(a,b,c){var d=!0,e="width"===b?a.offsetWidth:a.offsetHeight,f=Ib(a),g=k.boxSizing&&"border-box"===m.css(a,"boxSizing",!1,f);if(0>=e||null==e){if(e=Jb(a,b,f),(0>e||null==e)&&(e=a.style[b]),Hb.test(e))return e;d=g&&(k.boxSizingReliable()||e===a.style[b]),e=parseFloat(e)||0}return e+Xb(a,b,c||(g?"border":"content"),d,f)+"px"}m.extend({cssHooks:{opacity:{get:function(a,b){if(b){var c=Jb(a,"opacity");return""===c?"1":c}}}},cssNumber:{columnCount:!0,fillOpacity:!0,flexGrow:!0,flexShrink:!0,fontWeight:!0,lineHeight:!0,opacity:!0,order:!0,orphans:!0,widows:!0,zIndex:!0,zoom:!0},cssProps:{"float":k.cssFloat?"cssFloat":"styleFloat"},style:function(a,b,c,d){if(a&&3!==a.nodeType&&8!==a.nodeType&&a.style){var 
e,f,g,h=m.camelCase(b),i=a.style;if(b=m.cssProps[h]||(m.cssProps[h]=Ub(i,h)),g=m.cssHooks[b]||m.cssHooks[h],void 0===c)return g&&"get"in g&&void 0!==(e=g.get(a,!1,d))?e:i[b];if(f=typeof c,"string"===f&&(e=Qb.exec(c))&&(c=(e[1]+1)*e[2]+parseFloat(m.css(a,b)),f="number"),null!=c&&c===c&&("number"!==f||m.cssNumber[h]||(c+="px"),k.clearCloneStyle||""!==c||0!==b.indexOf("background")||(i[b]="inherit"),!(g&&"set"in g&&void 0===(c=g.set(a,c,d)))))try{i[b]=c}catch(j){}}},css:function(a,b,c,d){var e,f,g,h=m.camelCase(b);return b=m.cssProps[h]||(m.cssProps[h]=Ub(a.style,h)),g=m.cssHooks[b]||m.cssHooks[h],g&&"get"in g&&(f=g.get(a,!0,c)),void 0===f&&(f=Jb(a,b,d)),"normal"===f&&b in Sb&&(f=Sb[b]),""===c||c?(e=parseFloat(f),c===!0||m.isNumeric(e)?e||0:f):f}}),m.each(["height","width"],function(a,b){m.cssHooks[b]={get:function(a,c,d){return c?Ob.test(m.css(a,"display"))&&0===a.offsetWidth?m.swap(a,Rb,function(){return Yb(a,b,d)}):Yb(a,b,d):void 0},set:function(a,c,d){var e=d&&Ib(a);return Wb(a,c,d?Xb(a,b,d,k.boxSizing&&"border-box"===m.css(a,"boxSizing",!1,e),e):0)}}}),k.opacity||(m.cssHooks.opacity={get:function(a,b){return Nb.test((b&&a.currentStyle?a.currentStyle.filter:a.style.filter)||"")?.01*parseFloat(RegExp.$1)+"":b?"1":""},set:function(a,b){var c=a.style,d=a.currentStyle,e=m.isNumeric(b)?"alpha(opacity="+100*b+")":"",f=d&&d.filter||c.filter||"";c.zoom=1,(b>=1||""===b)&&""===m.trim(f.replace(Mb,""))&&c.removeAttribute&&(c.removeAttribute("filter"),""===b||d&&!d.filter)||(c.filter=Mb.test(f)?f.replace(Mb,e):f+" "+e)}}),m.cssHooks.marginRight=Lb(k.reliableMarginRight,function(a,b){return b?m.swap(a,{display:"inline-block"},Jb,[a,"marginRight"]):void 0}),m.each({margin:"",padding:"",border:"Width"},function(a,b){m.cssHooks[a+b]={expand:function(c){for(var d=0,e={},f="string"==typeof c?c.split(" "):[c];4>d;d++)e[a+T[d]+b]=f[d]||f[d-2]||f[0];return e}},Gb.test(a)||(m.cssHooks[a+b].set=Wb)}),m.fn.extend({css:function(a,b){return V(this,function(a,b,c){var 
d,e,f={},g=0;if(m.isArray(b)){for(d=Ib(a),e=b.length;e>g;g++)f[b[g]]=m.css(a,b[g],!1,d);return f}return void 0!==c?m.style(a,b,c):m.css(a,b)},a,b,arguments.length>1)},show:function(){return Vb(this,!0)},hide:function(){return Vb(this)},toggle:function(a){return"boolean"==typeof a?a?this.show():this.hide():this.each(function(){U(this)?m(this).show():m(this).hide()})}});function Zb(a,b,c,d,e){return new Zb.prototype.init(a,b,c,d,e)}m.Tween=Zb,Zb.prototype={constructor:Zb,init:function(a,b,c,d,e,f){this.elem=a,this.prop=c,this.easing=e||"swing",this.options=b,this.start=this.now=this.cur(),this.end=d,this.unit=f||(m.cssNumber[c]?"":"px") -},cur:function(){var a=Zb.propHooks[this.prop];return a&&a.get?a.get(this):Zb.propHooks._default.get(this)},run:function(a){var b,c=Zb.propHooks[this.prop];return this.pos=b=this.options.duration?m.easing[this.easing](a,this.options.duration*a,0,1,this.options.duration):a,this.now=(this.end-this.start)*b+this.start,this.options.step&&this.options.step.call(this.elem,this.now,this),c&&c.set?c.set(this):Zb.propHooks._default.set(this),this}},Zb.prototype.init.prototype=Zb.prototype,Zb.propHooks={_default:{get:function(a){var b;return null==a.elem[a.prop]||a.elem.style&&null!=a.elem.style[a.prop]?(b=m.css(a.elem,a.prop,""),b&&"auto"!==b?b:0):a.elem[a.prop]},set:function(a){m.fx.step[a.prop]?m.fx.step[a.prop](a):a.elem.style&&(null!=a.elem.style[m.cssProps[a.prop]]||m.cssHooks[a.prop])?m.style(a.elem,a.prop,a.now+a.unit):a.elem[a.prop]=a.now}}},Zb.propHooks.scrollTop=Zb.propHooks.scrollLeft={set:function(a){a.elem.nodeType&&a.elem.parentNode&&(a.elem[a.prop]=a.now)}},m.easing={linear:function(a){return a},swing:function(a){return.5-Math.cos(a*Math.PI)/2}},m.fx=Zb.prototype.init,m.fx.step={};var $b,_b,ac=/^(?:toggle|show|hide)$/,bc=new RegExp("^(?:([+-])=|)("+S+")([a-z%]*)$","i"),cc=/queueHooks$/,dc=[ic],ec={"*":[function(a,b){var 
c=this.createTween(a,b),d=c.cur(),e=bc.exec(b),f=e&&e[3]||(m.cssNumber[a]?"":"px"),g=(m.cssNumber[a]||"px"!==f&&+d)&&bc.exec(m.css(c.elem,a)),h=1,i=20;if(g&&g[3]!==f){f=f||g[3],e=e||[],g=+d||1;do h=h||".5",g/=h,m.style(c.elem,a,g+f);while(h!==(h=c.cur()/d)&&1!==h&&--i)}return e&&(g=c.start=+g||+d||0,c.unit=f,c.end=e[1]?g+(e[1]+1)*e[2]:+e[2]),c}]};function fc(){return setTimeout(function(){$b=void 0}),$b=m.now()}function gc(a,b){var c,d={height:a},e=0;for(b=b?1:0;4>e;e+=2-b)c=T[e],d["margin"+c]=d["padding"+c]=a;return b&&(d.opacity=d.width=a),d}function hc(a,b,c){for(var d,e=(ec[b]||[]).concat(ec["*"]),f=0,g=e.length;g>f;f++)if(d=e[f].call(c,b,a))return d}function ic(a,b,c){var d,e,f,g,h,i,j,l,n=this,o={},p=a.style,q=a.nodeType&&U(a),r=m._data(a,"fxshow");c.queue||(h=m._queueHooks(a,"fx"),null==h.unqueued&&(h.unqueued=0,i=h.empty.fire,h.empty.fire=function(){h.unqueued||i()}),h.unqueued++,n.always(function(){n.always(function(){h.unqueued--,m.queue(a,"fx").length||h.empty.fire()})})),1===a.nodeType&&("height"in b||"width"in b)&&(c.overflow=[p.overflow,p.overflowX,p.overflowY],j=m.css(a,"display"),l="none"===j?m._data(a,"olddisplay")||Fb(a.nodeName):j,"inline"===l&&"none"===m.css(a,"float")&&(k.inlineBlockNeedsLayout&&"inline"!==Fb(a.nodeName)?p.zoom=1:p.display="inline-block")),c.overflow&&(p.overflow="hidden",k.shrinkWrapBlocks()||n.always(function(){p.overflow=c.overflow[0],p.overflowX=c.overflow[1],p.overflowY=c.overflow[2]}));for(d in b)if(e=b[d],ac.exec(e)){if(delete b[d],f=f||"toggle"===e,e===(q?"hide":"show")){if("show"!==e||!r||void 0===r[d])continue;q=!0}o[d]=r&&r[d]||m.style(a,d)}else j=void 0;if(m.isEmptyObject(o))"inline"===("none"===j?Fb(a.nodeName):j)&&(p.display=j);else{r?"hidden"in r&&(q=r.hidden):r=m._data(a,"fxshow",{}),f&&(r.hidden=!q),q?m(a).show():n.done(function(){m(a).hide()}),n.done(function(){var b;m._removeData(a,"fxshow");for(b in o)m.style(a,b,o[b])});for(d in o)g=hc(q?r[d]:0,d,n),d in 
r||(r[d]=g.start,q&&(g.end=g.start,g.start="width"===d||"height"===d?1:0))}}function jc(a,b){var c,d,e,f,g;for(c in a)if(d=m.camelCase(c),e=b[d],f=a[c],m.isArray(f)&&(e=f[1],f=a[c]=f[0]),c!==d&&(a[d]=f,delete a[c]),g=m.cssHooks[d],g&&"expand"in g){f=g.expand(f),delete a[d];for(c in f)c in a||(a[c]=f[c],b[c]=e)}else b[d]=e}function kc(a,b,c){var d,e,f=0,g=dc.length,h=m.Deferred().always(function(){delete i.elem}),i=function(){if(e)return!1;for(var b=$b||fc(),c=Math.max(0,j.startTime+j.duration-b),d=c/j.duration||0,f=1-d,g=0,i=j.tweens.length;i>g;g++)j.tweens[g].run(f);return h.notifyWith(a,[j,f,c]),1>f&&i?c:(h.resolveWith(a,[j]),!1)},j=h.promise({elem:a,props:m.extend({},b),opts:m.extend(!0,{specialEasing:{}},c),originalProperties:b,originalOptions:c,startTime:$b||fc(),duration:c.duration,tweens:[],createTween:function(b,c){var d=m.Tween(a,j.opts,b,c,j.opts.specialEasing[b]||j.opts.easing);return j.tweens.push(d),d},stop:function(b){var c=0,d=b?j.tweens.length:0;if(e)return this;for(e=!0;d>c;c++)j.tweens[c].run(1);return b?h.resolveWith(a,[j,b]):h.rejectWith(a,[j,b]),this}}),k=j.props;for(jc(k,j.opts.specialEasing);g>f;f++)if(d=dc[f].call(j,a,k,j.opts))return d;return m.map(k,hc,j),m.isFunction(j.opts.start)&&j.opts.start.call(a,j),m.fx.timer(m.extend(i,{elem:a,anim:j,queue:j.opts.queue})),j.progress(j.opts.progress).done(j.opts.done,j.opts.complete).fail(j.opts.fail).always(j.opts.always)}m.Animation=m.extend(kc,{tweener:function(a,b){m.isFunction(a)?(b=a,a=["*"]):a=a.split(" ");for(var c,d=0,e=a.length;e>d;d++)c=a[d],ec[c]=ec[c]||[],ec[c].unshift(b)},prefilter:function(a,b){b?dc.unshift(a):dc.push(a)}}),m.speed=function(a,b,c){var d=a&&"object"==typeof a?m.extend({},a):{complete:c||!c&&b||m.isFunction(a)&&a,duration:a,easing:c&&b||b&&!m.isFunction(b)&&b};return d.duration=m.fx.off?0:"number"==typeof d.duration?d.duration:d.duration in 
m.fx.speeds?m.fx.speeds[d.duration]:m.fx.speeds._default,(null==d.queue||d.queue===!0)&&(d.queue="fx"),d.old=d.complete,d.complete=function(){m.isFunction(d.old)&&d.old.call(this),d.queue&&m.dequeue(this,d.queue)},d},m.fn.extend({fadeTo:function(a,b,c,d){return this.filter(U).css("opacity",0).show().end().animate({opacity:b},a,c,d)},animate:function(a,b,c,d){var e=m.isEmptyObject(a),f=m.speed(b,c,d),g=function(){var b=kc(this,m.extend({},a),f);(e||m._data(this,"finish"))&&b.stop(!0)};return g.finish=g,e||f.queue===!1?this.each(g):this.queue(f.queue,g)},stop:function(a,b,c){var d=function(a){var b=a.stop;delete a.stop,b(c)};return"string"!=typeof a&&(c=b,b=a,a=void 0),b&&a!==!1&&this.queue(a||"fx",[]),this.each(function(){var b=!0,e=null!=a&&a+"queueHooks",f=m.timers,g=m._data(this);if(e)g[e]&&g[e].stop&&d(g[e]);else for(e in g)g[e]&&g[e].stop&&cc.test(e)&&d(g[e]);for(e=f.length;e--;)f[e].elem!==this||null!=a&&f[e].queue!==a||(f[e].anim.stop(c),b=!1,f.splice(e,1));(b||!c)&&m.dequeue(this,a)})},finish:function(a){return a!==!1&&(a=a||"fx"),this.each(function(){var b,c=m._data(this),d=c[a+"queue"],e=c[a+"queueHooks"],f=m.timers,g=d?d.length:0;for(c.finish=!0,m.queue(this,a,[]),e&&e.stop&&e.stop.call(this,!0),b=f.length;b--;)f[b].elem===this&&f[b].queue===a&&(f[b].anim.stop(!0),f.splice(b,1));for(b=0;g>b;b++)d[b]&&d[b].finish&&d[b].finish.call(this);delete c.finish})}}),m.each(["toggle","show","hide"],function(a,b){var c=m.fn[b];m.fn[b]=function(a,d,e){return null==a||"boolean"==typeof a?c.apply(this,arguments):this.animate(gc(b,!0),a,d,e)}}),m.each({slideDown:gc("show"),slideUp:gc("hide"),slideToggle:gc("toggle"),fadeIn:{opacity:"show"},fadeOut:{opacity:"hide"},fadeToggle:{opacity:"toggle"}},function(a,b){m.fn[a]=function(a,c,d){return this.animate(b,a,c,d)}}),m.timers=[],m.fx.tick=function(){var a,b=m.timers,c=0;for($b=m.now();c<b.length;c++)a=b[c],a()||b[c]!==a||b.splice(c--,1);b.length||m.fx.stop(),$b=void 
0},m.fx.timer=function(a){m.timers.push(a),a()?m.fx.start():m.timers.pop()},m.fx.interval=13,m.fx.start=function(){_b||(_b=setInterval(m.fx.tick,m.fx.interval))},m.fx.stop=function(){clearInterval(_b),_b=null},m.fx.speeds={slow:600,fast:200,_default:400},m.fn.delay=function(a,b){return a=m.fx?m.fx.speeds[a]||a:a,b=b||"fx",this.queue(b,function(b,c){var d=setTimeout(b,a);c.stop=function(){clearTimeout(d)}})},function(){var a,b,c,d,e;b=y.createElement("div"),b.setAttribute("className","t"),b.innerHTML=" <link/><table></table><a href='/a'>a</a><input type='checkbox'/>",d=b.getElementsByTagName("a")[0],c=y.createElement("select"),e=c.appendChild(y.createElement("option")),a=b.getElementsByTagName("input")[0],d.style.cssText="top:1px",k.getSetAttribute="t"!==b.className,k.style=/top/.test(d.getAttribute("style")),k.hrefNormalized="/a"===d.getAttribute("href"),k.checkOn=!!a.value,k.optSelected=e.selected,k.enctype=!!y.createElement("form").enctype,c.disabled=!0,k.optDisabled=!e.disabled,a=y.createElement("input"),a.setAttribute("value",""),k.input=""===a.getAttribute("value"),a.value="t",a.setAttribute("type","radio"),k.radioValue="t"===a.value}();var lc=/\r/g;m.fn.extend({val:function(a){var b,c,d,e=this[0];{if(arguments.length)return d=m.isFunction(a),this.each(function(c){var e;1===this.nodeType&&(e=d?a.call(this,c,m(this).val()):a,null==e?e="":"number"==typeof e?e+="":m.isArray(e)&&(e=m.map(e,function(a){return null==a?"":a+""})),b=m.valHooks[this.type]||m.valHooks[this.nodeName.toLowerCase()],b&&"set"in b&&void 0!==b.set(this,e,"value")||(this.value=e))});if(e)return b=m.valHooks[e.type]||m.valHooks[e.nodeName.toLowerCase()],b&&"get"in b&&void 0!==(c=b.get(e,"value"))?c:(c=e.value,"string"==typeof c?c.replace(lc,""):null==c?"":c)}}}),m.extend({valHooks:{option:{get:function(a){var b=m.find.attr(a,"value");return null!=b?b:m.trim(m.text(a))}},select:{get:function(a){for(var 
b,c,d=a.options,e=a.selectedIndex,f="select-one"===a.type||0>e,g=f?null:[],h=f?e+1:d.length,i=0>e?h:f?e:0;h>i;i++)if(c=d[i],!(!c.selected&&i!==e||(k.optDisabled?c.disabled:null!==c.getAttribute("disabled"))||c.parentNode.disabled&&m.nodeName(c.parentNode,"optgroup"))){if(b=m(c).val(),f)return b;g.push(b)}return g},set:function(a,b){var c,d,e=a.options,f=m.makeArray(b),g=e.length;while(g--)if(d=e[g],m.inArray(m.valHooks.option.get(d),f)>=0)try{d.selected=c=!0}catch(h){d.scrollHeight}else d.selected=!1;return c||(a.selectedIndex=-1),e}}}}),m.each(["radio","checkbox"],function(){m.valHooks[this]={set:function(a,b){return m.isArray(b)?a.checked=m.inArray(m(a).val(),b)>=0:void 0}},k.checkOn||(m.valHooks[this].get=function(a){return null===a.getAttribute("value")?"on":a.value})});var mc,nc,oc=m.expr.attrHandle,pc=/^(?:checked|selected)$/i,qc=k.getSetAttribute,rc=k.input;m.fn.extend({attr:function(a,b){return V(this,m.attr,a,b,arguments.length>1)},removeAttr:function(a){return this.each(function(){m.removeAttr(this,a)})}}),m.extend({attr:function(a,b,c){var d,e,f=a.nodeType;if(a&&3!==f&&8!==f&&2!==f)return typeof a.getAttribute===K?m.prop(a,b,c):(1===f&&m.isXMLDoc(a)||(b=b.toLowerCase(),d=m.attrHooks[b]||(m.expr.match.bool.test(b)?nc:mc)),void 0===c?d&&"get"in d&&null!==(e=d.get(a,b))?e:(e=m.find.attr(a,b),null==e?void 0:e):null!==c?d&&"set"in d&&void 0!==(e=d.set(a,c,b))?e:(a.setAttribute(b,c+""),c):void m.removeAttr(a,b))},removeAttr:function(a,b){var c,d,e=0,f=b&&b.match(E);if(f&&1===a.nodeType)while(c=f[e++])d=m.propFix[c]||c,m.expr.match.bool.test(c)?rc&&qc||!pc.test(c)?a[d]=!1:a[m.camelCase("default-"+c)]=a[d]=!1:m.attr(a,c,""),a.removeAttribute(qc?c:d)},attrHooks:{type:{set:function(a,b){if(!k.radioValue&&"radio"===b&&m.nodeName(a,"input")){var c=a.value;return a.setAttribute("type",b),c&&(a.value=c),b}}}}}),nc={set:function(a,b,c){return 
b===!1?m.removeAttr(a,c):rc&&qc||!pc.test(c)?a.setAttribute(!qc&&m.propFix[c]||c,c):a[m.camelCase("default-"+c)]=a[c]=!0,c}},m.each(m.expr.match.bool.source.match(/\w+/g),function(a,b){var c=oc[b]||m.find.attr;oc[b]=rc&&qc||!pc.test(b)?function(a,b,d){var e,f;return d||(f=oc[b],oc[b]=e,e=null!=c(a,b,d)?b.toLowerCase():null,oc[b]=f),e}:function(a,b,c){return c?void 0:a[m.camelCase("default-"+b)]?b.toLowerCase():null}}),rc&&qc||(m.attrHooks.value={set:function(a,b,c){return m.nodeName(a,"input")?void(a.defaultValue=b):mc&&mc.set(a,b,c)}}),qc||(mc={set:function(a,b,c){var d=a.getAttributeNode(c);return d||a.setAttributeNode(d=a.ownerDocument.createAttribute(c)),d.value=b+="","value"===c||b===a.getAttribute(c)?b:void 0}},oc.id=oc.name=oc.coords=function(a,b,c){var d;return c?void 0:(d=a.getAttributeNode(b))&&""!==d.value?d.value:null},m.valHooks.button={get:function(a,b){var c=a.getAttributeNode(b);return c&&c.specified?c.value:void 0},set:mc.set},m.attrHooks.contenteditable={set:function(a,b,c){mc.set(a,""===b?!1:b,c)}},m.each(["width","height"],function(a,b){m.attrHooks[b]={set:function(a,c){return""===c?(a.setAttribute(b,"auto"),c):void 0}}})),k.style||(m.attrHooks.style={get:function(a){return a.style.cssText||void 0},set:function(a,b){return a.style.cssText=b+""}});var sc=/^(?:input|select|textarea|button|object)$/i,tc=/^(?:a|area)$/i;m.fn.extend({prop:function(a,b){return V(this,m.prop,a,b,arguments.length>1)},removeProp:function(a){return a=m.propFix[a]||a,this.each(function(){try{this[a]=void 0,delete this[a]}catch(b){}})}}),m.extend({propFix:{"for":"htmlFor","class":"className"},prop:function(a,b,c){var d,e,f,g=a.nodeType;if(a&&3!==g&&8!==g&&2!==g)return f=1!==g||!m.isXMLDoc(a),f&&(b=m.propFix[b]||b,e=m.propHooks[b]),void 0!==c?e&&"set"in e&&void 0!==(d=e.set(a,c,b))?d:a[b]=c:e&&"get"in e&&null!==(d=e.get(a,b))?d:a[b]},propHooks:{tabIndex:{get:function(a){var b=m.find.attr(a,"tabindex");return 
b?parseInt(b,10):sc.test(a.nodeName)||tc.test(a.nodeName)&&a.href?0:-1}}}}),k.hrefNormalized||m.each(["href","src"],function(a,b){m.propHooks[b]={get:function(a){return a.getAttribute(b,4)}}}),k.optSelected||(m.propHooks.selected={get:function(a){var b=a.parentNode;return b&&(b.selectedIndex,b.parentNode&&b.parentNode.selectedIndex),null}}),m.each(["tabIndex","readOnly","maxLength","cellSpacing","cellPadding","rowSpan","colSpan","useMap","frameBorder","contentEditable"],function(){m.propFix[this.toLowerCase()]=this}),k.enctype||(m.propFix.enctype="encoding");var uc=/[\t\r\n\f]/g;m.fn.extend({addClass:function(a){var b,c,d,e,f,g,h=0,i=this.length,j="string"==typeof a&&a;if(m.isFunction(a))return this.each(function(b){m(this).addClass(a.call(this,b,this.className))});if(j)for(b=(a||"").match(E)||[];i>h;h++)if(c=this[h],d=1===c.nodeType&&(c.className?(" "+c.className+" ").replace(uc," "):" ")){f=0;while(e=b[f++])d.indexOf(" "+e+" ")<0&&(d+=e+" ");g=m.trim(d),c.className!==g&&(c.className=g)}return this},removeClass:function(a){var b,c,d,e,f,g,h=0,i=this.length,j=0===arguments.length||"string"==typeof a&&a;if(m.isFunction(a))return this.each(function(b){m(this).removeClass(a.call(this,b,this.className))});if(j)for(b=(a||"").match(E)||[];i>h;h++)if(c=this[h],d=1===c.nodeType&&(c.className?(" "+c.className+" ").replace(uc," "):"")){f=0;while(e=b[f++])while(d.indexOf(" "+e+" ")>=0)d=d.replace(" "+e+" "," ");g=a?m.trim(d):"",c.className!==g&&(c.className=g)}return this},toggleClass:function(a,b){var c=typeof a;return"boolean"==typeof b&&"string"===c?b?this.addClass(a):this.removeClass(a):this.each(m.isFunction(a)?function(c){m(this).toggleClass(a.call(this,c,this.className,b),b)}:function(){if("string"===c){var 
b,d=0,e=m(this),f=a.match(E)||[];while(b=f[d++])e.hasClass(b)?e.removeClass(b):e.addClass(b)}else(c===K||"boolean"===c)&&(this.className&&m._data(this,"__className__",this.className),this.className=this.className||a===!1?"":m._data(this,"__className__")||"")})},hasClass:function(a){for(var b=" "+a+" ",c=0,d=this.length;d>c;c++)if(1===this[c].nodeType&&(" "+this[c].className+" ").replace(uc," ").indexOf(b)>=0)return!0;return!1}}),m.each("blur focus focusin focusout load resize scroll unload click dblclick mousedown mouseup mousemove mouseover mouseout mouseenter mouseleave change select submit keydown keypress keyup error contextmenu".split(" "),function(a,b){m.fn[b]=function(a,c){return arguments.length>0?this.on(b,null,a,c):this.trigger(b)}}),m.fn.extend({hover:function(a,b){return this.mouseenter(a).mouseleave(b||a)},bind:function(a,b,c){return this.on(a,null,b,c)},unbind:function(a,b){return this.off(a,null,b)},delegate:function(a,b,c,d){return this.on(b,a,c,d)},undelegate:function(a,b,c){return 1===arguments.length?this.off(a,"**"):this.off(b,a||"**",c)}});var vc=m.now(),wc=/\?/,xc=/(,)|(\[|{)|(}|])|"(?:[^"\\\r\n]|\\["\\\/bfnrt]|\\u[\da-fA-F]{4})*"\s*:?|true|false|null|-?(?!0\d)\d+(?:\.\d+|)(?:[eE][+-]?\d+|)/g;m.parseJSON=function(b){if(a.JSON&&a.JSON.parse)return a.JSON.parse(b+"");var c,d=null,e=m.trim(b+"");return e&&!m.trim(e.replace(xc,function(a,b,e,f){return c&&b&&(d=0),0===d?a:(c=e||b,d+=!f-!e,"")}))?Function("return "+e)():m.error("Invalid JSON: "+b)},m.parseXML=function(b){var c,d;if(!b||"string"!=typeof b)return null;try{a.DOMParser?(d=new DOMParser,c=d.parseFromString(b,"text/xml")):(c=new ActiveXObject("Microsoft.XMLDOM"),c.async="false",c.loadXML(b))}catch(e){c=void 0}return c&&c.documentElement&&!c.getElementsByTagName("parsererror").length||m.error("Invalid XML: "+b),c};var yc,zc,Ac=/#.*$/,Bc=/([?&])_=[^&]*/,Cc=/^(.*?):[ 
\t]*([^\r\n]*)\r?$/gm,Dc=/^(?:about|app|app-storage|.+-extension|file|res|widget):$/,Ec=/^(?:GET|HEAD)$/,Fc=/^\/\//,Gc=/^([\w.+-]+:)(?:\/\/(?:[^\/?#]*@|)([^\/?#:]*)(?::(\d+)|)|)/,Hc={},Ic={},Jc="*/".concat("*");try{zc=location.href}catch(Kc){zc=y.createElement("a"),zc.href="",zc=zc.href}yc=Gc.exec(zc.toLowerCase())||[];function Lc(a){return function(b,c){"string"!=typeof b&&(c=b,b="*");var d,e=0,f=b.toLowerCase().match(E)||[];if(m.isFunction(c))while(d=f[e++])"+"===d.charAt(0)?(d=d.slice(1)||"*",(a[d]=a[d]||[]).unshift(c)):(a[d]=a[d]||[]).push(c)}}function Mc(a,b,c,d){var e={},f=a===Ic;function g(h){var i;return e[h]=!0,m.each(a[h]||[],function(a,h){var j=h(b,c,d);return"string"!=typeof j||f||e[j]?f?!(i=j):void 0:(b.dataTypes.unshift(j),g(j),!1)}),i}return g(b.dataTypes[0])||!e["*"]&&g("*")}function Nc(a,b){var c,d,e=m.ajaxSettings.flatOptions||{};for(d in b)void 0!==b[d]&&((e[d]?a:c||(c={}))[d]=b[d]);return c&&m.extend(!0,a,c),a}function Oc(a,b,c){var d,e,f,g,h=a.contents,i=a.dataTypes;while("*"===i[0])i.shift(),void 0===e&&(e=a.mimeType||b.getResponseHeader("Content-Type"));if(e)for(g in h)if(h[g]&&h[g].test(e)){i.unshift(g);break}if(i[0]in c)f=i[0];else{for(g in c){if(!i[0]||a.converters[g+" "+i[0]]){f=g;break}d||(d=g)}f=f||d}return f?(f!==i[0]&&i.unshift(f),c[f]):void 0}function Pc(a,b,c,d){var e,f,g,h,i,j={},k=a.dataTypes.slice();if(k[1])for(g in a.converters)j[g.toLowerCase()]=a.converters[g];f=k.shift();while(f)if(a.responseFields[f]&&(c[a.responseFields[f]]=b),!i&&d&&a.dataFilter&&(b=a.dataFilter(b,a.dataType)),i=f,f=k.shift())if("*"===f)f=i;else if("*"!==i&&i!==f){if(g=j[i+" "+f]||j["* "+f],!g)for(e in j)if(h=e.split(" "),h[1]===f&&(g=j[i+" "+h[0]]||j["* "+h[0]])){g===!0?g=j[e]:j[e]!==!0&&(f=h[0],k.unshift(h[1]));break}if(g!==!0)if(g&&a["throws"])b=g(b);else try{b=g(b)}catch(l){return{state:"parsererror",error:g?l:"No conversion from "+i+" to 
"+f}}}return{state:"success",data:b}}m.extend({active:0,lastModified:{},etag:{},ajaxSettings:{url:zc,type:"GET",isLocal:Dc.test(yc[1]),global:!0,processData:!0,async:!0,contentType:"application/x-www-form-urlencoded; charset=UTF-8",accepts:{"*":Jc,text:"text/plain",html:"text/html",xml:"application/xml, text/xml",json:"application/json, text/javascript"},contents:{xml:/xml/,html:/html/,json:/json/},responseFields:{xml:"responseXML",text:"responseText",json:"responseJSON"},converters:{"* text":String,"text html":!0,"text json":m.parseJSON,"text xml":m.parseXML},flatOptions:{url:!0,context:!0}},ajaxSetup:function(a,b){return b?Nc(Nc(a,m.ajaxSettings),b):Nc(m.ajaxSettings,a)},ajaxPrefilter:Lc(Hc),ajaxTransport:Lc(Ic),ajax:function(a,b){"object"==typeof a&&(b=a,a=void 0),b=b||{};var c,d,e,f,g,h,i,j,k=m.ajaxSetup({},b),l=k.context||k,n=k.context&&(l.nodeType||l.jquery)?m(l):m.event,o=m.Deferred(),p=m.Callbacks("once memory"),q=k.statusCode||{},r={},s={},t=0,u="canceled",v={readyState:0,getResponseHeader:function(a){var b;if(2===t){if(!j){j={};while(b=Cc.exec(f))j[b[1].toLowerCase()]=b[2]}b=j[a.toLowerCase()]}return null==b?null:b},getAllResponseHeaders:function(){return 2===t?f:null},setRequestHeader:function(a,b){var c=a.toLowerCase();return t||(a=s[c]=s[c]||a,r[a]=b),this},overrideMimeType:function(a){return t||(k.mimeType=a),this},statusCode:function(a){var b;if(a)if(2>t)for(b in a)q[b]=[q[b],a[b]];else v.always(a[v.status]);return this},abort:function(a){var b=a||u;return i&&i.abort(b),x(0,b),this}};if(o.promise(v).complete=p.add,v.success=v.done,v.error=v.fail,k.url=((a||k.url||zc)+"").replace(Ac,"").replace(Fc,yc[1]+"//"),k.type=b.method||b.type||k.method||k.type,k.dataTypes=m.trim(k.dataType||"*").toLowerCase().match(E)||[""],null==k.crossDomain&&(c=Gc.exec(k.url.toLowerCase()),k.crossDomain=!(!c||c[1]===yc[1]&&c[2]===yc[2]&&(c[3]||("http:"===c[1]?"80":"443"))===(yc[3]||("http:"===yc[1]?"80":"443")))),k.data&&k.processData&&"string"!=typeof 
k.data&&(k.data=m.param(k.data,k.traditional)),Mc(Hc,k,b,v),2===t)return v;h=k.global,h&&0===m.active++&&m.event.trigger("ajaxStart"),k.type=k.type.toUpperCase(),k.hasContent=!Ec.test(k.type),e=k.url,k.hasContent||(k.data&&(e=k.url+=(wc.test(e)?"&":"?")+k.data,delete k.data),k.cache===!1&&(k.url=Bc.test(e)?e.replace(Bc,"$1_="+vc++):e+(wc.test(e)?"&":"?")+"_="+vc++)),k.ifModified&&(m.lastModified[e]&&v.setRequestHeader("If-Modified-Since",m.lastModified[e]),m.etag[e]&&v.setRequestHeader("If-None-Match",m.etag[e])),(k.data&&k.hasContent&&k.contentType!==!1||b.contentType)&&v.setRequestHeader("Content-Type",k.contentType),v.setRequestHeader("Accept",k.dataTypes[0]&&k.accepts[k.dataTypes[0]]?k.accepts[k.dataTypes[0]]+("*"!==k.dataTypes[0]?", "+Jc+"; q=0.01":""):k.accepts["*"]);for(d in k.headers)v.setRequestHeader(d,k.headers[d]);if(k.beforeSend&&(k.beforeSend.call(l,v,k)===!1||2===t))return v.abort();u="abort";for(d in{success:1,error:1,complete:1})v[d](k[d]);if(i=Mc(Ic,k,b,v)){v.readyState=1,h&&n.trigger("ajaxSend",[v,k]),k.async&&k.timeout>0&&(g=setTimeout(function(){v.abort("timeout")},k.timeout));try{t=1,i.send(r,x)}catch(w){if(!(2>t))throw w;x(-1,w)}}else x(-1,"No Transport");function x(a,b,c,d){var j,r,s,u,w,x=b;2!==t&&(t=2,g&&clearTimeout(g),i=void 0,f=d||"",v.readyState=a>0?4:0,j=a>=200&&300>a||304===a,c&&(u=Oc(k,v,c)),u=Pc(k,u,v,j),j?(k.ifModified&&(w=v.getResponseHeader("Last-Modified"),w&&(m.lastModified[e]=w),w=v.getResponseHeader("etag"),w&&(m.etag[e]=w)),204===a||"HEAD"===k.type?x="nocontent":304===a?x="notmodified":(x=u.state,r=u.data,s=u.error,j=!s)):(s=x,(a||!x)&&(x="error",0>a&&(a=0))),v.status=a,v.statusText=(b||x)+"",j?o.resolveWith(l,[r,x,v]):o.rejectWith(l,[v,x,s]),v.statusCode(q),q=void 0,h&&n.trigger(j?"ajaxSuccess":"ajaxError",[v,k,j?r:s]),p.fireWith(l,[v,x]),h&&(n.trigger("ajaxComplete",[v,k]),--m.active||m.event.trigger("ajaxStop")))}return v},getJSON:function(a,b,c){return m.get(a,b,c,"json")},getScript:function(a,b){return m.get(a,void 
0,b,"script")}}),m.each(["get","post"],function(a,b){m[b]=function(a,c,d,e){return m.isFunction(c)&&(e=e||d,d=c,c=void 0),m.ajax({url:a,type:b,dataType:e,data:c,success:d})}}),m.each(["ajaxStart","ajaxStop","ajaxComplete","ajaxError","ajaxSuccess","ajaxSend"],function(a,b){m.fn[b]=function(a){return this.on(b,a)}}),m._evalUrl=function(a){return m.ajax({url:a,type:"GET",dataType:"script",async:!1,global:!1,"throws":!0})},m.fn.extend({wrapAll:function(a){if(m.isFunction(a))return this.each(function(b){m(this).wrapAll(a.call(this,b))});if(this[0]){var b=m(a,this[0].ownerDocument).eq(0).clone(!0);this[0].parentNode&&b.insertBefore(this[0]),b.map(function(){var a=this;while(a.firstChild&&1===a.firstChild.nodeType)a=a.firstChild;return a}).append(this)}return this},wrapInner:function(a){return this.each(m.isFunction(a)?function(b){m(this).wrapInner(a.call(this,b))}:function(){var b=m(this),c=b.contents();c.length?c.wrapAll(a):b.append(a)})},wrap:function(a){var b=m.isFunction(a);return this.each(function(c){m(this).wrapAll(b?a.call(this,c):a)})},unwrap:function(){return this.parent().each(function(){m.nodeName(this,"body")||m(this).replaceWith(this.childNodes)}).end()}}),m.expr.filters.hidden=function(a){return a.offsetWidth<=0&&a.offsetHeight<=0||!k.reliableHiddenOffsets()&&"none"===(a.style&&a.style.display||m.css(a,"display"))},m.expr.filters.visible=function(a){return!m.expr.filters.hidden(a)};var Qc=/%20/g,Rc=/\[\]$/,Sc=/\r?\n/g,Tc=/^(?:submit|button|image|reset|file)$/i,Uc=/^(?:input|select|textarea|keygen)/i;function Vc(a,b,c,d){var e;if(m.isArray(b))m.each(b,function(b,e){c||Rc.test(a)?d(a,e):Vc(a+"["+("object"==typeof e?b:"")+"]",e,c,d)});else if(c||"object"!==m.type(b))d(a,b);else for(e in b)Vc(a+"["+e+"]",b[e],c,d)}m.param=function(a,b){var c,d=[],e=function(a,b){b=m.isFunction(b)?b():null==b?"":b,d[d.length]=encodeURIComponent(a)+"="+encodeURIComponent(b)};if(void 
0===b&&(b=m.ajaxSettings&&m.ajaxSettings.traditional),m.isArray(a)||a.jquery&&!m.isPlainObject(a))m.each(a,function(){e(this.name,this.value)});else for(c in a)Vc(c,a[c],b,e);return d.join("&").replace(Qc,"+")},m.fn.extend({serialize:function(){return m.param(this.serializeArray())},serializeArray:function(){return this.map(function(){var a=m.prop(this,"elements");return a?m.makeArray(a):this}).filter(function(){var a=this.type;return this.name&&!m(this).is(":disabled")&&Uc.test(this.nodeName)&&!Tc.test(a)&&(this.checked||!W.test(a))}).map(function(a,b){var c=m(this).val();return null==c?null:m.isArray(c)?m.map(c,function(a){return{name:b.name,value:a.replace(Sc,"\r\n")}}):{name:b.name,value:c.replace(Sc,"\r\n")}}).get()}}),m.ajaxSettings.xhr=void 0!==a.ActiveXObject?function(){return!this.isLocal&&/^(get|post|head|put|delete|options)$/i.test(this.type)&&Zc()||$c()}:Zc;var Wc=0,Xc={},Yc=m.ajaxSettings.xhr();a.ActiveXObject&&m(a).on("unload",function(){for(var a in Xc)Xc[a](void 0,!0)}),k.cors=!!Yc&&"withCredentials"in Yc,Yc=k.ajax=!!Yc,Yc&&m.ajaxTransport(function(a){if(!a.crossDomain||k.cors){var b;return{send:function(c,d){var e,f=a.xhr(),g=++Wc;if(f.open(a.type,a.url,a.async,a.username,a.password),a.xhrFields)for(e in a.xhrFields)f[e]=a.xhrFields[e];a.mimeType&&f.overrideMimeType&&f.overrideMimeType(a.mimeType),a.crossDomain||c["X-Requested-With"]||(c["X-Requested-With"]="XMLHttpRequest");for(e in c)void 0!==c[e]&&f.setRequestHeader(e,c[e]+"");f.send(a.hasContent&&a.data||null),b=function(c,e){var h,i,j;if(b&&(e||4===f.readyState))if(delete Xc[g],b=void 0,f.onreadystatechange=m.noop,e)4!==f.readyState&&f.abort();else{j={},h=f.status,"string"==typeof f.responseText&&(j.text=f.responseText);try{i=f.statusText}catch(k){i=""}h||!a.isLocal||a.crossDomain?1223===h&&(h=204):h=j.text?200:404}j&&d(h,i,j,f.getAllResponseHeaders())},a.async?4===f.readyState?setTimeout(b):f.onreadystatechange=Xc[g]=b:b()},abort:function(){b&&b(void 0,!0)}}}});function Zc(){try{return new 
a.XMLHttpRequest}catch(b){}}function $c(){try{return new a.ActiveXObject("Microsoft.XMLHTTP")}catch(b){}}m.ajaxSetup({accepts:{script:"text/javascript, application/javascript, application/ecmascript, application/x-ecmascript"},contents:{script:/(?:java|ecma)script/},converters:{"text script":function(a){return m.globalEval(a),a}}}),m.ajaxPrefilter("script",function(a){void 0===a.cache&&(a.cache=!1),a.crossDomain&&(a.type="GET",a.global=!1)}),m.ajaxTransport("script",function(a){if(a.crossDomain){var b,c=y.head||m("head")[0]||y.documentElement;return{send:function(d,e){b=y.createElement("script"),b.async=!0,a.scriptCharset&&(b.charset=a.scriptCharset),b.src=a.url,b.onload=b.onreadystatechange=function(a,c){(c||!b.readyState||/loaded|complete/.test(b.readyState))&&(b.onload=b.onreadystatechange=null,b.parentNode&&b.parentNode.removeChild(b),b=null,c||e(200,"success"))},c.insertBefore(b,c.firstChild)},abort:function(){b&&b.onload(void 0,!0)}}}});var _c=[],ad=/(=)\?(?=&|$)|\?\?/;m.ajaxSetup({jsonp:"callback",jsonpCallback:function(){var a=_c.pop()||m.expando+"_"+vc++;return this[a]=!0,a}}),m.ajaxPrefilter("json jsonp",function(b,c,d){var e,f,g,h=b.jsonp!==!1&&(ad.test(b.url)?"url":"string"==typeof b.data&&!(b.contentType||"").indexOf("application/x-www-form-urlencoded")&&ad.test(b.data)&&"data");return h||"jsonp"===b.dataTypes[0]?(e=b.jsonpCallback=m.isFunction(b.jsonpCallback)?b.jsonpCallback():b.jsonpCallback,h?b[h]=b[h].replace(ad,"$1"+e):b.jsonp!==!1&&(b.url+=(wc.test(b.url)?"&":"?")+b.jsonp+"="+e),b.converters["script json"]=function(){return g||m.error(e+" was not called"),g[0]},b.dataTypes[0]="json",f=a[e],a[e]=function(){g=arguments},d.always(function(){a[e]=f,b[e]&&(b.jsonpCallback=c.jsonpCallback,_c.push(e)),g&&m.isFunction(f)&&f(g[0]),g=f=void 0}),"script"):void 0}),m.parseHTML=function(a,b,c){if(!a||"string"!=typeof a)return null;"boolean"==typeof b&&(c=b,b=!1),b=b||y;var d=u.exec(a),e=!c&&[];return 
d?[b.createElement(d[1])]:(d=m.buildFragment([a],b,e),e&&e.length&&m(e).remove(),m.merge([],d.childNodes))};var bd=m.fn.load;m.fn.load=function(a,b,c){if("string"!=typeof a&&bd)return bd.apply(this,arguments);var d,e,f,g=this,h=a.indexOf(" ");return h>=0&&(d=m.trim(a.slice(h,a.length)),a=a.slice(0,h)),m.isFunction(b)?(c=b,b=void 0):b&&"object"==typeof b&&(f="POST"),g.length>0&&m.ajax({url:a,type:f,dataType:"html",data:b}).done(function(a){e=arguments,g.html(d?m("<div>").append(m.parseHTML(a)).find(d):a)}).complete(c&&function(a,b){g.each(c,e||[a.responseText,b,a])}),this},m.expr.filters.animated=function(a){return m.grep(m.timers,function(b){return a===b.elem}).length};var cd=a.document.documentElement;function dd(a){return m.isWindow(a)?a:9===a.nodeType?a.defaultView||a.parentWindow:!1}m.offset={setOffset:function(a,b,c){var d,e,f,g,h,i,j,k=m.css(a,"position"),l=m(a),n={};"static"===k&&(a.style.position="relative"),h=l.offset(),f=m.css(a,"top"),i=m.css(a,"left"),j=("absolute"===k||"fixed"===k)&&m.inArray("auto",[f,i])>-1,j?(d=l.position(),g=d.top,e=d.left):(g=parseFloat(f)||0,e=parseFloat(i)||0),m.isFunction(b)&&(b=b.call(a,c,h)),null!=b.top&&(n.top=b.top-h.top+g),null!=b.left&&(n.left=b.left-h.left+e),"using"in b?b.using.call(a,n):l.css(n)}},m.fn.extend({offset:function(a){if(arguments.length)return void 0===a?this:this.each(function(b){m.offset.setOffset(this,a,b)});var b,c,d={top:0,left:0},e=this[0],f=e&&e.ownerDocument;if(f)return b=f.documentElement,m.contains(b,e)?(typeof e.getBoundingClientRect!==K&&(d=e.getBoundingClientRect()),c=dd(f),{top:d.top+(c.pageYOffset||b.scrollTop)-(b.clientTop||0),left:d.left+(c.pageXOffset||b.scrollLeft)-(b.clientLeft||0)}):d},position:function(){if(this[0]){var 
a,b,c={top:0,left:0},d=this[0];return"fixed"===m.css(d,"position")?b=d.getBoundingClientRect():(a=this.offsetParent(),b=this.offset(),m.nodeName(a[0],"html")||(c=a.offset()),c.top+=m.css(a[0],"borderTopWidth",!0),c.left+=m.css(a[0],"borderLeftWidth",!0)),{top:b.top-c.top-m.css(d,"marginTop",!0),left:b.left-c.left-m.css(d,"marginLeft",!0)}}},offsetParent:function(){return this.map(function(){var a=this.offsetParent||cd;while(a&&!m.nodeName(a,"html")&&"static"===m.css(a,"position"))a=a.offsetParent;return a||cd})}}),m.each({scrollLeft:"pageXOffset",scrollTop:"pageYOffset"},function(a,b){var c=/Y/.test(b);m.fn[a]=function(d){return V(this,function(a,d,e){var f=dd(a);return void 0===e?f?b in f?f[b]:f.document.documentElement[d]:a[d]:void(f?f.scrollTo(c?m(f).scrollLeft():e,c?e:m(f).scrollTop()):a[d]=e)},a,d,arguments.length,null)}}),m.each(["top","left"],function(a,b){m.cssHooks[b]=Lb(k.pixelPosition,function(a,c){return c?(c=Jb(a,b),Hb.test(c)?m(a).position()[b]+"px":c):void 0})}),m.each({Height:"height",Width:"width"},function(a,b){m.each({padding:"inner"+a,content:b,"":"outer"+a},function(c,d){m.fn[d]=function(d,e){var f=arguments.length&&(c||"boolean"!=typeof d),g=c||(d===!0||e===!0?"margin":"border");return V(this,function(b,c,d){var e;return m.isWindow(b)?b.document.documentElement["client"+a]:9===b.nodeType?(e=b.documentElement,Math.max(b.body["scroll"+a],e["scroll"+a],b.body["offset"+a],e["offset"+a],e["client"+a])):void 0===d?m.css(b,c,g):m.style(b,c,d,g)},b,f?d:void 0,f,null)}})}),m.fn.size=function(){return this.length},m.fn.andSelf=m.fn.addBack,"function"==typeof define&&define.amd&&define("jquery",[],function(){return m});var ed=a.jQuery,fd=a.$;return m.noConflict=function(b){return a.$===m&&(a.$=fd),b&&a.jQuery===m&&(a.jQuery=ed),m},typeof b===K&&(a.jQuery=a.$=m),m});
\ No newline at end of file
diff --git a/contrib/python/coverage/py2/coverage/htmlfiles/jquery.tablesorter.min.js b/contrib/python/coverage/py2/coverage/htmlfiles/jquery.tablesorter.min.js
deleted file mode 100644
index 64c70071291..00000000000
--- a/contrib/python/coverage/py2/coverage/htmlfiles/jquery.tablesorter.min.js
+++ /dev/null
@@ -1,2 +0,0 @@
-
-(function($){$.extend({tablesorter:new function(){var parsers=[],widgets=[];this.defaults={cssHeader:"header",cssAsc:"headerSortUp",cssDesc:"headerSortDown",sortInitialOrder:"asc",sortMultiSortKey:"shiftKey",sortForce:null,sortAppend:null,textExtraction:"simple",parsers:{},widgets:[],widgetZebra:{css:["even","odd"]},headers:{},widthFixed:false,cancelSelection:true,sortList:[],headerList:[],dateFormat:"us",decimal:'.',debug:false};function benchmark(s,d){log(s+","+(new Date().getTime()-d.getTime())+"ms");}this.benchmark=benchmark;function log(s){if(typeof console!="undefined"&&typeof console.debug!="undefined"){console.log(s);}else{alert(s);}}function buildParserCache(table,$headers){if(table.config.debug){var parsersDebug="";}var rows=table.tBodies[0].rows;if(table.tBodies[0].rows[0]){var list=[],cells=rows[0].cells,l=cells.length;for(var i=0;i<l;i++){var p=false;if($.metadata&&($($headers[i]).metadata()&&$($headers[i]).metadata().sorter)){p=getParserById($($headers[i]).metadata().sorter);}else if((table.config.headers[i]&&table.config.headers[i].sorter)){p=getParserById(table.config.headers[i].sorter);}if(!p){p=detectParserForColumn(table,cells[i]);}if(table.config.debug){parsersDebug+="column:"+i+" parser:"+p.id+"\n";}list.push(p);}}if(table.config.debug){log(parsersDebug);}return list;};function detectParserForColumn(table,node){var l=parsers.length;for(var i=1;i<l;i++){if(parsers[i].is($.trim(getElementText(table.config,node)),table,node)){return parsers[i];}}return parsers[0];}function getParserById(name){var l=parsers.length;for(var i=0;i<l;i++){if(parsers[i].id.toLowerCase()==name.toLowerCase()){return
parsers[i];}}return false;}function buildCache(table){if(table.config.debug){var cacheTime=new Date();}var totalRows=(table.tBodies[0]&&table.tBodies[0].rows.length)||0,totalCells=(table.tBodies[0].rows[0]&&table.tBodies[0].rows[0].cells.length)||0,parsers=table.config.parsers,cache={row:[],normalized:[]};for(var i=0;i<totalRows;++i){var c=table.tBodies[0].rows[i],cols=[];cache.row.push($(c));for(var j=0;j<totalCells;++j){cols.push(parsers[j].format(getElementText(table.config,c.cells[j]),table,c.cells[j]));}cols.push(i);cache.normalized.push(cols);cols=null;};if(table.config.debug){benchmark("Building cache for "+totalRows+" rows:",cacheTime);}return cache;};function getElementText(config,node){if(!node)return"";var t="";if(config.textExtraction=="simple"){if(node.childNodes[0]&&node.childNodes[0].hasChildNodes()){t=node.childNodes[0].innerHTML;}else{t=node.innerHTML;}}else{if(typeof(config.textExtraction)=="function"){t=config.textExtraction(node);}else{t=$(node).text();}}return t;}function appendToTable(table,cache){if(table.config.debug){var appendTime=new Date()}var c=cache,r=c.row,n=c.normalized,totalRows=n.length,checkCell=(n[0].length-1),tableBody=$(table.tBodies[0]),rows=[];for(var i=0;i<totalRows;i++){rows.push(r[n[i][checkCell]]);if(!table.config.appender){var o=r[n[i][checkCell]];var l=o.length;for(var j=0;j<l;j++){tableBody[0].appendChild(o[j]);}}}if(table.config.appender){table.config.appender(table,rows);}rows=null;if(table.config.debug){benchmark("Rebuilt table:",appendTime);}applyWidget(table);setTimeout(function(){$(table).trigger("sortEnd");},0);};function buildHeaders(table){if(table.config.debug){var time=new Date();}var meta=($.metadata)?true:false,tableHeadersRows=[];for(var i=0;i<table.tHead.rows.length;i++){tableHeadersRows[i]=0;};$tableHeaders=$("thead 
th",table);$tableHeaders.each(function(index){this.count=0;this.column=index;this.order=formatSortingOrder(table.config.sortInitialOrder);if(checkHeaderMetadata(this)||checkHeaderOptions(table,index))this.sortDisabled=true;if(!this.sortDisabled){$(this).addClass(table.config.cssHeader);}table.config.headerList[index]=this;});if(table.config.debug){benchmark("Built headers:",time);log($tableHeaders);}return $tableHeaders;};function checkCellColSpan(table,rows,row){var arr=[],r=table.tHead.rows,c=r[row].cells;for(var i=0;i<c.length;i++){var cell=c[i];if(cell.colSpan>1){arr=arr.concat(checkCellColSpan(table,headerArr,row++));}else{if(table.tHead.length==1||(cell.rowSpan>1||!r[row+1])){arr.push(cell);}}}return arr;};function checkHeaderMetadata(cell){if(($.metadata)&&($(cell).metadata().sorter===false)){return true;};return false;}function checkHeaderOptions(table,i){if((table.config.headers[i])&&(table.config.headers[i].sorter===false)){return true;};return false;}function applyWidget(table){var c=table.config.widgets;var l=c.length;for(var i=0;i<l;i++){getWidgetById(c[i]).format(table);}}function getWidgetById(name){var l=widgets.length;for(var i=0;i<l;i++){if(widgets[i].id.toLowerCase()==name.toLowerCase()){return widgets[i];}}};function formatSortingOrder(v){if(typeof(v)!="Number"){i=(v.toLowerCase()=="desc")?1:0;}else{i=(v==(0||1))?v:0;}return i;}function isValueInArray(v,a){var l=a.length;for(var i=0;i<l;i++){if(a[i][0]==v){return true;}}return false;}function setHeadersCss(table,$headers,list,css){$headers.removeClass(css[0]).removeClass(css[1]);var h=[];$headers.each(function(offset){if(!this.sortDisabled){h[this.column]=$(this);}});var l=list.length;for(var i=0;i<l;i++){h[list[i][0]].addClass(css[list[i][1]]);}}function fixColumnWidth(table,$headers){var c=table.config;if(c.widthFixed){var colgroup=$('<colgroup>');$("tr:first td",table.tBodies[0]).each(function(){colgroup.append($('<col>').css('width',$(this).width()));});$(table).prepend(colgroup);};}function 
updateHeaderSortCount(table,sortList){var c=table.config,l=sortList.length;for(var i=0;i<l;i++){var s=sortList[i],o=c.headerList[s[0]];o.count=s[1];o.count++;}}function multisort(table,sortList,cache){if(table.config.debug){var sortTime=new Date();}var dynamicExp="var sortWrapper = function(a,b) {",l=sortList.length;for(var i=0;i<l;i++){var c=sortList[i][0];var order=sortList[i][1];var s=(getCachedSortType(table.config.parsers,c)=="text")?((order==0)?"sortText":"sortTextDesc"):((order==0)?"sortNumeric":"sortNumericDesc");var e="e"+i;dynamicExp+="var "+e+" = "+s+"(a["+c+"],b["+c+"]); ";dynamicExp+="if("+e+") { return "+e+"; } ";dynamicExp+="else { ";}var orgOrderCol=cache.normalized[0].length-1;dynamicExp+="return a["+orgOrderCol+"]-b["+orgOrderCol+"];";for(var i=0;i<l;i++){dynamicExp+="}; ";}dynamicExp+="return 0; ";dynamicExp+="}; ";eval(dynamicExp);cache.normalized.sort(sortWrapper);if(table.config.debug){benchmark("Sorting on "+sortList.toString()+" and dir "+order+" time:",sortTime);}return cache;};function sortText(a,b){return((a<b)?-1:((a>b)?1:0));};function sortTextDesc(a,b){return((b<a)?-1:((b>a)?1:0));};function sortNumeric(a,b){return a-b;};function sortNumericDesc(a,b){return b-a;};function getCachedSortType(parsers,i){return parsers[i].type;};this.construct=function(settings){return this.each(function(){if(!this.tHead||!this.tBodies)return;var $this,$document,$headers,cache,config,shiftDown=0,sortOrder;this.config={};config=$.extend(this.config,$.tablesorter.defaults,settings);$this=$(this);$headers=buildHeaders(this);this.config.parsers=buildParserCache(this,$headers);cache=buildCache(this);var sortCSS=[config.cssDesc,config.cssAsc];fixColumnWidth(this);$headers.click(function(e){$this.trigger("sortStart");var totalRows=($this[0].tBodies[0]&&$this[0].tBodies[0].rows.length)||0;if(!this.sortDisabled&&totalRows>0){var $cell=$(this);var 
i=this.column;this.order=this.count++%2;if(!e[config.sortMultiSortKey]){config.sortList=[];if(config.sortForce!=null){var a=config.sortForce;for(var j=0;j<a.length;j++){if(a[j][0]!=i){config.sortList.push(a[j]);}}}config.sortList.push([i,this.order]);}else{if(isValueInArray(i,config.sortList)){for(var j=0;j<config.sortList.length;j++){var s=config.sortList[j],o=config.headerList[s[0]];if(s[0]==i){o.count=s[1];o.count++;s[1]=o.count%2;}}}else{config.sortList.push([i,this.order]);}};setTimeout(function(){setHeadersCss($this[0],$headers,config.sortList,sortCSS);appendToTable($this[0],multisort($this[0],config.sortList,cache));},1);return false;}}).mousedown(function(){if(config.cancelSelection){this.onselectstart=function(){return false};return false;}});$this.bind("update",function(){this.config.parsers=buildParserCache(this,$headers);cache=buildCache(this);}).bind("sorton",function(e,list){$(this).trigger("sortStart");config.sortList=list;var sortList=config.sortList;updateHeaderSortCount(this,sortList);setHeadersCss(this,$headers,sortList,sortCSS);appendToTable(this,multisort(this,sortList,cache));}).bind("appendCache",function(){appendToTable(this,cache);}).bind("applyWidgetId",function(e,id){getWidgetById(id).format(this);}).bind("applyWidgets",function(){applyWidget(this);});if($.metadata&&($(this).metadata()&&$(this).metadata().sortlist)){config.sortList=$(this).metadata().sortlist;}if(config.sortList.length>0){$this.trigger("sorton",[config.sortList]);}applyWidget(this);});};this.addParser=function(parser){var l=parsers.length,a=true;for(var i=0;i<l;i++){if(parsers[i].id.toLowerCase()==parser.id.toLowerCase()){a=false;}}if(a){parsers.push(parser);};};this.addWidget=function(widget){widgets.push(widget);};this.formatFloat=function(s){var i=parseFloat(s);return(isNaN(i))?0:i;};this.formatInt=function(s){var i=parseInt(s);return(isNaN(i))?0:i;};this.isDigit=function(s,config){var DECIMAL='\\'+config.decimal;var 
exp='/(^[+]?0('+DECIMAL+'0+)?$)|(^([-+]?[1-9][0-9]*)$)|(^([-+]?((0?|[1-9][0-9]*)'+DECIMAL+'(0*[1-9][0-9]*)))$)|(^[-+]?[1-9]+[0-9]*'+DECIMAL+'0+$)/';return RegExp(exp).test($.trim(s));};this.clearTableBody=function(table){if($.browser.msie){function empty(){while(this.firstChild)this.removeChild(this.firstChild);}empty.apply(table.tBodies[0]);}else{table.tBodies[0].innerHTML="";}};}});$.fn.extend({tablesorter:$.tablesorter.construct});var ts=$.tablesorter;ts.addParser({id:"text",is:function(s){return true;},format:function(s){return $.trim(s.toLowerCase());},type:"text"});ts.addParser({id:"digit",is:function(s,table){var c=table.config;return $.tablesorter.isDigit(s,c);},format:function(s){return $.tablesorter.formatFloat(s);},type:"numeric"});ts.addParser({id:"currency",is:function(s){return/^[£$€?.]/.test(s);},format:function(s){return $.tablesorter.formatFloat(s.replace(new RegExp(/[^0-9.]/g),""));},type:"numeric"});ts.addParser({id:"ipAddress",is:function(s){return/^\d{2,3}[\.]\d{2,3}[\.]\d{2,3}[\.]\d{2,3}$/.test(s);},format:function(s){var a=s.split("."),r="",l=a.length;for(var i=0;i<l;i++){var item=a[i];if(item.length==2){r+="0"+item;}else{r+=item;}}return $.tablesorter.formatFloat(r);},type:"numeric"});ts.addParser({id:"url",is:function(s){return/^(https?|ftp|file):\/\/$/.test(s);},format:function(s){return jQuery.trim(s.replace(new RegExp(/(https?|ftp|file):\/\//),''));},type:"text"});ts.addParser({id:"isoDate",is:function(s){return/^\d{4}[\/-]\d{1,2}[\/-]\d{1,2}$/.test(s);},format:function(s){return $.tablesorter.formatFloat((s!="")?new Date(s.replace(new RegExp(/-/g),"/")).getTime():"0");},type:"numeric"});ts.addParser({id:"percent",is:function(s){return/\%$/.test($.trim(s));},format:function(s){return $.tablesorter.formatFloat(s.replace(new RegExp(/%/g),""));},type:"numeric"});ts.addParser({id:"usLongDate",is:function(s){return s.match(new RegExp(/^[A-Za-z]{3,10}\.? 
[0-9]{1,2}, ([0-9]{4}|'?[0-9]{2}) (([0-2]?[0-9]:[0-5][0-9])|([0-1]?[0-9]:[0-5][0-9]\s(AM|PM)))$/));},format:function(s){return $.tablesorter.formatFloat(new Date(s).getTime());},type:"numeric"});ts.addParser({id:"shortDate",is:function(s){return/\d{1,2}[\/\-]\d{1,2}[\/\-]\d{2,4}/.test(s);},format:function(s,table){var c=table.config;s=s.replace(/\-/g,"/");if(c.dateFormat=="us"){s=s.replace(/(\d{1,2})[\/\-](\d{1,2})[\/\-](\d{4})/,"$3/$1/$2");}else if(c.dateFormat=="uk"){s=s.replace(/(\d{1,2})[\/\-](\d{1,2})[\/\-](\d{4})/,"$3/$2/$1");}else if(c.dateFormat=="dd/mm/yy"||c.dateFormat=="dd-mm-yy"){s=s.replace(/(\d{1,2})[\/\-](\d{1,2})[\/\-](\d{2})/,"$1/$2/$3");}return $.tablesorter.formatFloat(new Date(s).getTime());},type:"numeric"});ts.addParser({id:"time",is:function(s){return/^(([0-2]?[0-9]:[0-5][0-9])|([0-1]?[0-9]:[0-5][0-9]\s(am|pm)))$/.test(s);},format:function(s){return $.tablesorter.formatFloat(new Date("2000/01/01 "+s).getTime());},type:"numeric"});ts.addParser({id:"metadata",is:function(s){return false;},format:function(s,table,cell){var c=table.config,p=(!c.parserMetadataName)?'sortValue':c.parserMetadataName;return $(cell).metadata()[p];},type:"numeric"});ts.addWidget({id:"zebra",format:function(table){if(table.config.debug){var time=new Date();}$("tr:visible",table.tBodies[0]).filter(':even').removeClass(table.config.widgetZebra.css[1]).addClass(table.config.widgetZebra.css[0]).end().filter(':odd').removeClass(table.config.widgetZebra.css[0]).addClass(table.config.widgetZebra.css[1]);if(table.config.debug){$.tablesorter.benchmark("Applying Zebra widget",time);}}});})(jQuery);
\ No newline at end of file
diff --git a/contrib/python/coverage/py2/coverage/htmlfiles/keybd_closed.png b/contrib/python/coverage/py2/coverage/htmlfiles/keybd_closed.png
Binary files differ
deleted file mode 100644
index db114023f09..00000000000
--- a/contrib/python/coverage/py2/coverage/htmlfiles/keybd_closed.png
+++ /dev/null
diff --git a/contrib/python/coverage/py2/coverage/htmlfiles/keybd_open.png b/contrib/python/coverage/py2/coverage/htmlfiles/keybd_open.png
Binary files differ
deleted file mode 100644
index db114023f09..00000000000
--- a/contrib/python/coverage/py2/coverage/htmlfiles/keybd_open.png
+++ /dev/null
diff --git a/contrib/python/coverage/py2/coverage/htmlfiles/pyfile.html b/contrib/python/coverage/py2/coverage/htmlfiles/pyfile.html
deleted file mode 100644
index e15be066fba..00000000000
--- a/contrib/python/coverage/py2/coverage/htmlfiles/pyfile.html
+++ /dev/null
@@ -1,113 +0,0 @@
-{# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 #}
-{# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt #}
-
-<!DOCTYPE html>
-<html>
-<head>
-    <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
-    {# IE8 rounds line-height incorrectly, and adding this emulateIE7 line makes it right! #}
-    {# http://social.msdn.microsoft.com/Forums/en-US/iewebdevelopment/thread/7684445e-f080-4d8f-8529-132763348e21 #}
-    <meta http-equiv="X-UA-Compatible" content="IE=emulateIE7" />
-    <title>Coverage for {{relative_filename|escape}}: {{nums.pc_covered_str}}%</title>
-    <link rel="icon" sizes="32x32" href="favicon_32.png">
-    <link rel="stylesheet" href="style.css" type="text/css">
-    {% if extra_css %}
-        <link rel="stylesheet" href="{{ extra_css }}" type="text/css">
-    {% endif %}
-    <script type="text/javascript" src="jquery.min.js"></script>
-    <script type="text/javascript" src="jquery.hotkeys.js"></script>
-    <script type="text/javascript" src="jquery.isonscreen.js"></script>
-    <script type="text/javascript" src="coverage_html.js"></script>
-    <script type="text/javascript">
-        jQuery(document).ready(coverage.pyfile_ready);
-    </script>
-</head>
-<body class="pyfile">
-
-<div id="header">
-    <div class="content">
-        <h1>Coverage for <b>{{relative_filename|escape}}</b> :
-            <span class="pc_cov">{{nums.pc_covered_str}}%</span>
-        </h1>
-
-        <img id="keyboard_icon" src="keybd_closed.png" alt="Show keyboard shortcuts" />
-
-        <h2 class="stats">
-            {{nums.n_statements}} statements
-            <button type="button" class="{{category.run}} shortkey_r button_toggle_run" title="Toggle lines run">{{nums.n_executed}} run</button>
-            <button type="button" class="{{category.mis}} shortkey_m button_toggle_mis" title="Toggle lines missing">{{nums.n_missing}} missing</button>
-            <button type="button" class="{{category.exc}} shortkey_x button_toggle_exc" title="Toggle lines excluded">{{nums.n_excluded}} excluded</button>
-
-            {% if has_arcs %}
-            <button type="button" class="{{category.par}} shortkey_p button_toggle_par" title="Toggle lines partially run">{{nums.n_partial_branches}} partial</button>
-            {% endif %}
-        </h2>
-    </div>
-</div>
-
-<div class="help_panel">
-    <img id="panel_icon" src="keybd_open.png" alt="Hide keyboard shortcuts" />
-    <p class="legend">Hot-keys on this page</p>
-    <div>
-        <p class="keyhelp">
-            <span class="key">r</span>
-            <span class="key">m</span>
-            <span class="key">x</span>
-            <span class="key">p</span> toggle line displays
-        </p>
-        <p class="keyhelp">
-            <span class="key">j</span>
-            <span class="key">k</span> next/prev highlighted chunk
-        </p>
-        <p class="keyhelp">
-            <span class="key">0</span> (zero) top of page
-        </p>
-        <p class="keyhelp">
-            <span class="key">1</span> (one) first highlighted chunk
-        </p>
-    </div>
-</div>
-
-<div id="source">
-    {% for line in lines -%}
-        {% joined %}
-        <p id="t{{line.number}}" class="{{line.css_class}}">
-            <span class="n"><a href="#t{{line.number}}">{{line.number}}</a></span>
-            <span class="t">{{line.html}} </span>
-            {% if line.context_list %}
-                <input type="checkbox" id="ctxs{{line.number}}" />
-            {% endif %}
-            {# Things that should float right in the line. #}
-            <span class="r">
-                {% if line.annotate %}
-                    <span class="annotate short">{{line.annotate}}</span>
-                    <span class="annotate long">{{line.annotate_long}}</span>
-                {% endif %}
-                {% if line.contexts %}
-                    <label for="ctxs{{line.number}}" class="ctx">{{ line.contexts_label }}</label>
-                {% endif %}
-            </span>
-            {# Things that should appear below the line. #}
-            {% if line.context_list %}
-                <span class="ctxs">
-                    {% for context in line.context_list %}
-                        <span>{{context}}</span>
-                    {% endfor %}
-                </span>
-            {% endif %}
-        </p>
-        {% endjoined %}
-    {% endfor %}
-</div>
-
-<div id="footer">
-    <div class="content">
-        <p>
-            <a class="nav" href="index.html">« index</a> <a class="nav" href="{{__url__}}">coverage.py v{{__version__}}</a>,
-            created at {{ time_stamp }}
-        </p>
-    </div>
-</div>
-
-</body>
-</html>
diff --git a/contrib/python/coverage/py2/coverage/htmlfiles/style.css b/contrib/python/coverage/py2/coverage/htmlfiles/style.css
deleted file mode 100644
index 36ee2a6e65f..00000000000
--- a/contrib/python/coverage/py2/coverage/htmlfiles/style.css
+++ /dev/null
@@ -1,291 +0,0 @@
-@charset "UTF-8";
-/* Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 */
-/* For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt */
-/* Don't edit this .css file. Edit the .scss file instead! */
-html, body, h1, h2, h3, p, table, td, th { margin: 0; padding: 0; border: 0; font-weight: inherit; font-style: inherit; font-size: 100%; font-family: inherit; vertical-align: baseline; }
-
-body { font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, Ubuntu, Cantarell, "Helvetica Neue", sans-serif; font-size: 1em; background: #fff; color: #000; }
-
-@media (prefers-color-scheme: dark) { body { background: #1e1e1e; } }
-
-@media (prefers-color-scheme: dark) { body { color: #eee; } }
-
-html > body { font-size: 16px; }
-
-a:active, a:focus { outline: 2px dashed #007acc; }
-
-p { font-size: .875em; line-height: 1.4em; }
-
-table { border-collapse: collapse; }
-
-td { vertical-align: top; }
-
-table tr.hidden { display: none !important; }
-
-p#no_rows { display: none; font-size: 1.2em; }
-
-a.nav { text-decoration: none; color: inherit; }
-
-a.nav:hover { text-decoration: underline; color: inherit; }
-
-#header { background: #f8f8f8; width: 100%; border-bottom: 1px solid #eee; }
-
-@media (prefers-color-scheme: dark) { #header { background: black; } }
-
-@media (prefers-color-scheme: dark) { #header { border-color: #333; } }
-
-.indexfile #footer { margin: 1rem 3.5rem; }
-
-.pyfile #footer { margin: 1rem 1rem; }
-
-#footer .content { padding: 0; color: #666; font-style: italic; }
-
-@media (prefers-color-scheme: dark) { #footer .content { color: #aaa; } }
-
-#index { margin: 1rem 0 0 3.5rem; }
-
-#header .content { padding: 1rem 3.5rem; }
-
-h1 { font-size: 1.25em; display: inline-block; }
-
-#filter_container { float: right; margin: 0 2em 0 0; }
-
-#filter_container input { width: 10em; padding: 0.2em 0.5em; border: 2px solid #ccc; background: #fff; color: #000; }
-
-@media (prefers-color-scheme: dark) { #filter_container input { border-color: #444; } }
-
-@media (prefers-color-scheme: dark) { #filter_container input { background: #1e1e1e; } }
-
-@media (prefers-color-scheme: dark) { #filter_container input { color: #eee; } }
-
-#filter_container input:focus { border-color: #007acc; }
-
-h2.stats { margin-top: .5em; font-size: 1em; }
-
-.stats button { font-family: inherit; font-size: inherit; border: 1px solid; border-radius: .2em; color: inherit; padding: .1em .5em; margin: 1px calc(.1em + 1px); cursor: pointer; border-color: #ccc; }
-
-@media (prefers-color-scheme: dark) { .stats button { border-color: #444; } }
-
-.stats button:active, .stats button:focus { outline: 2px dashed #007acc; }
-
-.stats button:active, .stats button:focus { outline: 2px dashed #007acc; }
-
-.stats button.run { background: #eeffee; }
-
-@media (prefers-color-scheme: dark) { .stats button.run { background: #373d29; } }
-
-.stats button.run.show_run { background: #dfd; border: 2px solid #00dd00; margin: 0 .1em; }
-
-@media (prefers-color-scheme: dark) { .stats button.run.show_run { background: #373d29; } }
-
-.stats button.mis { background: #ffeeee; }
-
-@media (prefers-color-scheme: dark) { .stats button.mis { background: #4b1818; } }
-
-.stats button.mis.show_mis { background: #fdd; border: 2px solid #ff0000; margin: 0 .1em; }
-
-@media (prefers-color-scheme: dark) { .stats button.mis.show_mis { background: #4b1818; } }
-
-.stats button.exc { background: #f7f7f7; }
-
-@media (prefers-color-scheme: dark) { .stats button.exc { background: #333; } }
-
-.stats button.exc.show_exc { background: #eee; border: 2px solid #808080; margin: 0 .1em; }
-
-@media (prefers-color-scheme: dark) { .stats button.exc.show_exc { background: #333; } }
-
-.stats button.par { background: #ffffd5; }
-
-@media (prefers-color-scheme: dark) { .stats button.par { background: #650; } }
-
-.stats button.par.show_par { background: #ffa; border: 2px solid #dddd00; margin: 0 .1em; }
-
-@media (prefers-color-scheme: dark) { .stats button.par.show_par { background: #650; } }
-
-.help_panel, #source p .annotate.long { display: none; position: absolute; z-index: 999; background: #ffffcc; border: 1px solid #888; border-radius: .2em; color: #333; padding: .25em .5em; }
-
-#source p .annotate.long { white-space: normal; float: right; top: 1.75em; right: 1em; height: auto; }
-
-#keyboard_icon { float: right; margin: 5px; cursor: pointer; }
-
-.help_panel { padding: .5em; border: 1px solid #883; }
-
-.help_panel .legend { font-style: italic; margin-bottom: 1em; }
-
-.indexfile .help_panel { width: 20em; min-height: 4em; }
-
-.pyfile .help_panel { width: 16em; min-height: 8em; }
-
-#panel_icon { float: right; cursor: pointer; }
-
-.keyhelp { margin: .75em; }
-
-.keyhelp .key { border: 1px solid black; border-color: #888 #333 #333 #888; padding: .1em .35em; font-family: SFMono-Regular, Menlo, Monaco, Consolas, monospace; font-weight: bold; background: #eee; }
-
-#source { padding: 1em 0 1em 3.5rem; font-family: SFMono-Regular, Menlo, Monaco, Consolas, monospace; }
-
-#source p { position: relative; white-space: pre; }
-
-#source p * { box-sizing: border-box; }
-
-#source p .n { float: left; text-align: right; width: 3.5rem; box-sizing: border-box; margin-left: -3.5rem;
padding-right: 1em; color: #999; } - -@media (prefers-color-scheme: dark) { #source p .n { color: #777; } } - -#source p .n a { text-decoration: none; color: #999; } - -@media (prefers-color-scheme: dark) { #source p .n a { color: #777; } } - -#source p .n a:hover { text-decoration: underline; color: #999; } - -@media (prefers-color-scheme: dark) { #source p .n a:hover { color: #777; } } - -#source p.highlight .n { background: #ffdd00; } - -#source p .t { display: inline-block; width: 100%; box-sizing: border-box; margin-left: -.5em; padding-left: 0.3em; border-left: 0.2em solid #fff; } - -@media (prefers-color-scheme: dark) { #source p .t { border-color: #1e1e1e; } } - -#source p .t:hover { background: #f2f2f2; } - -@media (prefers-color-scheme: dark) { #source p .t:hover { background: #282828; } } - -#source p .t:hover ~ .r .annotate.long { display: block; } - -#source p .t .com { color: #008000; font-style: italic; line-height: 1px; } - -@media (prefers-color-scheme: dark) { #source p .t .com { color: #6A9955; } } - -#source p .t .key { font-weight: bold; line-height: 1px; } - -#source p .t .str { color: #0451A5; } - -@media (prefers-color-scheme: dark) { #source p .t .str { color: #9CDCFE; } } - -#source p.mis .t { border-left: 0.2em solid #ff0000; } - -#source p.mis.show_mis .t { background: #fdd; } - -@media (prefers-color-scheme: dark) { #source p.mis.show_mis .t { background: #4b1818; } } - -#source p.mis.show_mis .t:hover { background: #f2d2d2; } - -@media (prefers-color-scheme: dark) { #source p.mis.show_mis .t:hover { background: #532323; } } - -#source p.run .t { border-left: 0.2em solid #00dd00; } - -#source p.run.show_run .t { background: #dfd; } - -@media (prefers-color-scheme: dark) { #source p.run.show_run .t { background: #373d29; } } - -#source p.run.show_run .t:hover { background: #d2f2d2; } - -@media (prefers-color-scheme: dark) { #source p.run.show_run .t:hover { background: #404633; } } - -#source p.exc .t { border-left: 0.2em solid #808080; 
#808080; } - -#source p.exc.show_exc .t { background: #eee; } - -@media (prefers-color-scheme: dark) { #source p.exc.show_exc .t { background: #333; } } - -#source p.exc.show_exc .t:hover { background: #e2e2e2; } - -@media (prefers-color-scheme: dark) { #source p.exc.show_exc .t:hover { background: #3c3c3c; } } - -#source p.par .t { border-left: 0.2em solid #dddd00; } - -#source p.par.show_par .t { background: #ffa; } - -@media (prefers-color-scheme: dark) { #source p.par.show_par .t { background: #650; } } - -#source p.par.show_par .t:hover { background: #f2f2a2; } - -@media (prefers-color-scheme: dark) { #source p.par.show_par .t:hover { background: #6d5d0c; } } - -#source p .r { position: absolute; top: 0; right: 2.5em; font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, Ubuntu, Cantarell, "Helvetica Neue", sans-serif; } - -#source p .annotate { font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, Ubuntu, Cantarell, "Helvetica Neue", sans-serif; color: #666; padding-right: .5em; } - -@media (prefers-color-scheme: dark) { #source p .annotate { color: #ddd; } } - -#source p .annotate.short:hover ~ .long { display: block; } - -#source p .annotate.long { width: 30em; right: 2.5em; } - -#source p input { display: none; } - -#source p input ~ .r label.ctx { cursor: pointer; border-radius: .25em; } - -#source p input ~ .r label.ctx::before { content: "▶ "; } - -#source p input ~ .r label.ctx:hover { background: #d5f7ff; color: #666; } - -@media (prefers-color-scheme: dark) { #source p input ~ .r label.ctx:hover { background: #0f3a42; } } - -@media (prefers-color-scheme: dark) { #source p input ~ .r label.ctx:hover { color: #aaa; } } - -#source p input:checked ~ .r label.ctx { background: #aef; color: #666; border-radius: .75em .75em 0 0; padding: 0 .5em; margin: -.25em 0; } - -@media (prefers-color-scheme: dark) { #source p input:checked ~ .r label.ctx { background: #056; } } - -@media (prefers-color-scheme: dark) { #source p input:checked ~ .r 
label.ctx { color: #aaa; } } - -#source p input:checked ~ .r label.ctx::before { content: "▼ "; } - -#source p input:checked ~ .ctxs { padding: .25em .5em; overflow-y: scroll; max-height: 10.5em; } - -#source p label.ctx { color: #999; display: inline-block; padding: 0 .5em; font-size: .8333em; } - -@media (prefers-color-scheme: dark) { #source p label.ctx { color: #777; } } - -#source p .ctxs { display: block; max-height: 0; overflow-y: hidden; transition: all .2s; padding: 0 .5em; font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, Ubuntu, Cantarell, "Helvetica Neue", sans-serif; white-space: nowrap; background: #aef; border-radius: .25em; margin-right: 1.75em; } - -@media (prefers-color-scheme: dark) { #source p .ctxs { background: #056; } } - -#source p .ctxs span { display: block; text-align: right; } - -#index { font-family: SFMono-Regular, Menlo, Monaco, Consolas, monospace; font-size: 0.875em; } - -#index table.index { margin-left: -.5em; } - -#index td, #index th { text-align: right; width: 5em; padding: .25em .5em; border-bottom: 1px solid #eee; } - -@media (prefers-color-scheme: dark) { #index td, #index th { border-color: #333; } } - -#index td.name, #index th.name { text-align: left; width: auto; } - -#index th { font-style: italic; color: #333; cursor: pointer; } - -@media (prefers-color-scheme: dark) { #index th { color: #ddd; } } - -#index th:hover { background: #eee; } - -@media (prefers-color-scheme: dark) { #index th:hover { background: #333; } } - -#index th.headerSortDown, #index th.headerSortUp { white-space: nowrap; background: #eee; } - -@media (prefers-color-scheme: dark) { #index th.headerSortDown, #index th.headerSortUp { background: #333; } } - -#index th.headerSortDown:after { content: " ↑"; } - -#index th.headerSortUp:after { content: " ↓"; } - -#index td.name a { text-decoration: none; color: inherit; } - -#index tr.total td, #index tr.total_dynamic td { font-weight: bold; border-top: 1px solid #ccc; border-bottom: 
none; } - -#index tr.file:hover { background: #eee; } - -@media (prefers-color-scheme: dark) { #index tr.file:hover { background: #333; } } - -#index tr.file:hover td.name { text-decoration: underline; color: inherit; } - -#scroll_marker { position: fixed; right: 0; top: 0; width: 16px; height: 100%; background: #fff; border-left: 1px solid #eee; will-change: transform; } - -@media (prefers-color-scheme: dark) { #scroll_marker { background: #1e1e1e; } } - -@media (prefers-color-scheme: dark) { #scroll_marker { border-color: #333; } } - -#scroll_marker .marker { background: #ccc; position: absolute; min-height: 3px; width: 100%; } - -@media (prefers-color-scheme: dark) { #scroll_marker .marker { background: #444; } } diff --git a/contrib/python/coverage/py2/coverage/htmlfiles/style.scss b/contrib/python/coverage/py2/coverage/htmlfiles/style.scss deleted file mode 100644 index 158d1fb4933..00000000000 --- a/contrib/python/coverage/py2/coverage/htmlfiles/style.scss +++ /dev/null @@ -1,660 +0,0 @@ -/* Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 */ -/* For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt */ - -// CSS styles for coverage.py HTML reports. - -// When you edit this file, you need to run "make css" to get the CSS file -// generated, and then check in both the .scss and the .css files. - -// When working on the file, this command is useful: -// sass --watch --style=compact --sourcemap=none --no-cache coverage/htmlfiles/style.scss:htmlcov/style.css -// -// OR you can process sass purely in python with `pip install pysass`, then: -// pysassc --style=compact coverage/htmlfiles/style.scss coverage/htmlfiles/style.css - -// Ignore this comment, it's for the CSS output file: -/* Don't edit this .css file. Edit the .scss file instead! 
*/ - -// Dimensions -$left-gutter: 3.5rem; - - -// -// Declare colors and variables -// - -$font-normal: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, Ubuntu, Cantarell, "Helvetica Neue", sans-serif; -$font-code: SFMono-Regular, Menlo, Monaco, Consolas, monospace; - -$off-button-lighten: 50%; -$hover-dark-amt: 95%; - -$focus-color: #007acc; - -$mis-color: #ff0000; -$run-color: #00dd00; -$exc-color: #808080; -$par-color: #dddd00; - -$light-bg: #fff; -$light-fg: #000; -$light-gray1: #f8f8f8; -$light-gray2: #eee; -$light-gray3: #ccc; -$light-gray4: #999; -$light-gray5: #666; -$light-gray6: #333; -$light-pln-bg: $light-bg; -$light-mis-bg: #fdd; -$light-run-bg: #dfd; -$light-exc-bg: $light-gray2; -$light-par-bg: #ffa; -$light-token-com: #008000; -$light-token-str: #0451A5; -$light-context-bg-color: #aef; - -$dark-bg: #1e1e1e; -$dark-fg: #eee; -$dark-gray1: #222; -$dark-gray2: #333; -$dark-gray3: #444; -$dark-gray4: #777; -$dark-gray5: #aaa; -$dark-gray6: #ddd; -$dark-pln-bg: $dark-bg; -$dark-mis-bg: #4b1818; -$dark-run-bg: #373d29; -$dark-exc-bg: $dark-gray2; -$dark-par-bg: #650; -$dark-token-com: #6A9955; -$dark-token-str: #9CDCFE; -$dark-context-bg-color: #056; - -// -// Mixins and utilities -// -@mixin background-dark($color) { - @media (prefers-color-scheme: dark) { - background: $color; - } -} -@mixin color-dark($color) { - @media (prefers-color-scheme: dark) { - color: $color; - } -} -@mixin border-color-dark($color) { - @media (prefers-color-scheme: dark) { - border-color: $color; - } -} - -// Add visual outline to navigable elements on focus improve accessibility. -@mixin focus-border { - &:active, &:focus { - outline: 2px dashed $focus-color; - } -} - -// Page-wide styles -html, body, h1, h2, h3, p, table, td, th { - margin: 0; - padding: 0; - border: 0; - font-weight: inherit; - font-style: inherit; - font-size: 100%; - font-family: inherit; - vertical-align: baseline; -} - -// Set baseline grid to 16 pt. 
-body { - font-family: $font-normal; - font-size: 1em; - background: $light-bg; - color: $light-fg; - @include background-dark($dark-bg); - @include color-dark($dark-fg); -} - -html>body { - font-size: 16px; -} - -a { - @include focus-border; -} - -p { - font-size: .875em; - line-height: 1.4em; -} - -table { - border-collapse: collapse; -} -td { - vertical-align: top; -} -table tr.hidden { - display: none !important; -} - -p#no_rows { - display: none; - font-size: 1.2em; -} - -a.nav { - text-decoration: none; - color: inherit; - - &:hover { - text-decoration: underline; - color: inherit; - } -} - -// Page structure -#header { - background: $light-gray1; - @include background-dark(black); - width: 100%; - border-bottom: 1px solid $light-gray2; - @include border-color-dark($dark-gray2); -} - -.indexfile #footer { - margin: 1rem $left-gutter; -} - -.pyfile #footer { - margin: 1rem 1rem; -} - -#footer .content { - padding: 0; - color: $light-gray5; - @include color-dark($dark-gray5); - font-style: italic; -} - -#index { - margin: 1rem 0 0 $left-gutter; -} - -// Header styles -#header .content { - padding: 1rem $left-gutter; -} - -h1 { - font-size: 1.25em; - display: inline-block; -} - -#filter_container { - float: right; - margin: 0 2em 0 0; - - input { - width: 10em; - padding: 0.2em 0.5em; - border: 2px solid $light-gray3; - background: $light-bg; - color: $light-fg; - @include border-color-dark($dark-gray3); - @include background-dark($dark-bg); - @include color-dark($dark-fg); - &:focus { - border-color: $focus-color; - } - } -} - -h2.stats { - margin-top: .5em; - font-size: 1em; -} -.stats button { - font-family: inherit; - font-size: inherit; - border: 1px solid; - border-radius: .2em; - color: inherit; - padding: .1em .5em; - margin: 1px calc(.1em + 1px); - cursor: pointer; - border-color: $light-gray3; - @include border-color-dark($dark-gray3); - @include focus-border; - - @include focus-border; - - &.run { - background: mix($light-run-bg, $light-bg, 
$off-button-lighten); - @include background-dark($dark-run-bg); - &.show_run { - background: $light-run-bg; - @include background-dark($dark-run-bg); - border: 2px solid $run-color; - margin: 0 .1em; - } - } - &.mis { - background: mix($light-mis-bg, $light-bg, $off-button-lighten); - @include background-dark($dark-mis-bg); - &.show_mis { - background: $light-mis-bg; - @include background-dark($dark-mis-bg); - border: 2px solid $mis-color; - margin: 0 .1em; - } - } - &.exc { - background: mix($light-exc-bg, $light-bg, $off-button-lighten); - @include background-dark($dark-exc-bg); - &.show_exc { - background: $light-exc-bg; - @include background-dark($dark-exc-bg); - border: 2px solid $exc-color; - margin: 0 .1em; - } - } - &.par { - background: mix($light-par-bg, $light-bg, $off-button-lighten); - @include background-dark($dark-par-bg); - &.show_par { - background: $light-par-bg; - @include background-dark($dark-par-bg); - border: 2px solid $par-color; - margin: 0 .1em; - } - } -} - -// Yellow post-it things. -%popup { - display: none; - position: absolute; - z-index: 999; - background: #ffffcc; - border: 1px solid #888; - border-radius: .2em; - color: #333; - padding: .25em .5em; -} - -// Yellow post-it's in the text listings. 
-%in-text-popup { - @extend %popup; - white-space: normal; - float: right; - top: 1.75em; - right: 1em; - height: auto; -} - -// Help panel -#keyboard_icon { - float: right; - margin: 5px; - cursor: pointer; -} - -.help_panel { - @extend %popup; - padding: .5em; - border: 1px solid #883; - - .legend { - font-style: italic; - margin-bottom: 1em; - } - - .indexfile & { - width: 20em; - min-height: 4em; - } - - .pyfile & { - width: 16em; - min-height: 8em; - } -} - -#panel_icon { - float: right; - cursor: pointer; -} - -.keyhelp { - margin: .75em; - - .key { - border: 1px solid black; - border-color: #888 #333 #333 #888; - padding: .1em .35em; - font-family: $font-code; - font-weight: bold; - background: #eee; - } -} - -// Source file styles - -// The slim bar at the left edge of the source lines, colored by coverage. -$border-indicator-width: .2em; - -#source { - padding: 1em 0 1em $left-gutter; - font-family: $font-code; - - p { - // position relative makes position:absolute pop-ups appear in the right place. 
- position: relative; - white-space: pre; - - * { - box-sizing: border-box; - } - - .n { - float: left; - text-align: right; - width: $left-gutter; - box-sizing: border-box; - margin-left: -$left-gutter; - padding-right: 1em; - color: $light-gray4; - @include color-dark($dark-gray4); - - a { - text-decoration: none; - color: $light-gray4; - @include color-dark($dark-gray4); - &:hover { - text-decoration: underline; - color: $light-gray4; - @include color-dark($dark-gray4); - } - } - } - - &.highlight .n { - background: #ffdd00; - } - - .t { - display: inline-block; - width: 100%; - box-sizing: border-box; - margin-left: -.5em; - padding-left: .5em - $border-indicator-width; - border-left: $border-indicator-width solid $light-bg; - @include border-color-dark($dark-bg); - - &:hover { - background: mix($light-pln-bg, $light-fg, $hover-dark-amt); - @include background-dark(mix($dark-pln-bg, $dark-fg, $hover-dark-amt)); - - & ~ .r .annotate.long { - display: block; - } - } - - // Syntax coloring - .com { - color: $light-token-com; - @include color-dark($dark-token-com); - font-style: italic; - line-height: 1px; - } - .key { - font-weight: bold; - line-height: 1px; - } - .str { - color: $light-token-str; - @include color-dark($dark-token-str); - } - } - - &.mis { - .t { - border-left: $border-indicator-width solid $mis-color; - } - - &.show_mis .t { - background: $light-mis-bg; - @include background-dark($dark-mis-bg); - - &:hover { - background: mix($light-mis-bg, $light-fg, $hover-dark-amt); - @include background-dark(mix($dark-mis-bg, $dark-fg, $hover-dark-amt)); - } - } - } - - &.run { - .t { - border-left: $border-indicator-width solid $run-color; - } - - &.show_run .t { - background: $light-run-bg; - @include background-dark($dark-run-bg); - - &:hover { - background: mix($light-run-bg, $light-fg, $hover-dark-amt); - @include background-dark(mix($dark-run-bg, $dark-fg, $hover-dark-amt)); - } - } - } - - &.exc { - .t { - border-left: $border-indicator-width solid 
$exc-color; - } - - &.show_exc .t { - background: $light-exc-bg; - @include background-dark($dark-exc-bg); - - &:hover { - background: mix($light-exc-bg, $light-fg, $hover-dark-amt); - @include background-dark(mix($dark-exc-bg, $dark-fg, $hover-dark-amt)); - } - } - } - - &.par { - .t { - border-left: $border-indicator-width solid $par-color; - } - - &.show_par .t { - background: $light-par-bg; - @include background-dark($dark-par-bg); - - &:hover { - background: mix($light-par-bg, $light-fg, $hover-dark-amt); - @include background-dark(mix($dark-par-bg, $dark-fg, $hover-dark-amt)); - } - } - - } - - .r { - position: absolute; - top: 0; - right: 2.5em; - font-family: $font-normal; - } - - .annotate { - font-family: $font-normal; - color: $light-gray5; - @include color-dark($dark-gray6); - padding-right: .5em; - - &.short:hover ~ .long { - display: block; - } - - &.long { - @extend %in-text-popup; - width: 30em; - right: 2.5em; - } - } - - input { - display: none; - - & ~ .r label.ctx { - cursor: pointer; - border-radius: .25em; - &::before { - content: "▶ "; - } - &:hover { - background: mix($light-context-bg-color, $light-bg, $off-button-lighten); - @include background-dark(mix($dark-context-bg-color, $dark-bg, $off-button-lighten)); - color: $light-gray5; - @include color-dark($dark-gray5); - } - } - - &:checked ~ .r label.ctx { - background: $light-context-bg-color; - @include background-dark($dark-context-bg-color); - color: $light-gray5; - @include color-dark($dark-gray5); - border-radius: .75em .75em 0 0; - padding: 0 .5em; - margin: -.25em 0; - &::before { - content: "▼ "; - } - } - - &:checked ~ .ctxs { - padding: .25em .5em; - overflow-y: scroll; - max-height: 10.5em; - } - } - - label.ctx { - color: $light-gray4; - @include color-dark($dark-gray4); - display: inline-block; - padding: 0 .5em; - font-size: .8333em; // 10/12 - } - - .ctxs { - display: block; - max-height: 0; - overflow-y: hidden; - transition: all .2s; - padding: 0 .5em; - font-family: 
$font-normal; - white-space: nowrap; - background: $light-context-bg-color; - @include background-dark($dark-context-bg-color); - border-radius: .25em; - margin-right: 1.75em; - span { - display: block; - text-align: right; - } - } - } -} - - -// index styles -#index { - font-family: $font-code; - font-size: 0.875em; - - table.index { - margin-left: -.5em; - } - td, th { - text-align: right; - width: 5em; - padding: .25em .5em; - border-bottom: 1px solid $light-gray2; - @include border-color-dark($dark-gray2); - &.name { - text-align: left; - width: auto; - } - } - th { - font-style: italic; - color: $light-gray6; - @include color-dark($dark-gray6); - cursor: pointer; - &:hover { - background: $light-gray2; - @include background-dark($dark-gray2); - } - &.headerSortDown, &.headerSortUp { - white-space: nowrap; - background: $light-gray2; - @include background-dark($dark-gray2); - } - &.headerSortDown:after { - content: " ↑"; - } - &.headerSortUp:after { - content: " ↓"; - } - } - td.name a { - text-decoration: none; - color: inherit; - } - - tr.total td, - tr.total_dynamic td { - font-weight: bold; - border-top: 1px solid #ccc; - border-bottom: none; - } - tr.file:hover { - background: $light-gray2; - @include background-dark($dark-gray2); - td.name { - text-decoration: underline; - color: inherit; - } - } -} - -// scroll marker styles -#scroll_marker { - position: fixed; - right: 0; - top: 0; - width: 16px; - height: 100%; - background: $light-bg; - border-left: 1px solid $light-gray2; - @include background-dark($dark-bg); - @include border-color-dark($dark-gray2); - will-change: transform; // for faster scrolling of fixed element in Chrome - - .marker { - background: $light-gray3; - @include background-dark($dark-gray3); - position: absolute; - min-height: 3px; - width: 100%; - } -} diff --git a/contrib/python/coverage/py2/coverage/inorout.py b/contrib/python/coverage/py2/coverage/inorout.py deleted file mode 100644 index cbc80e8fb5f..00000000000 --- 
a/contrib/python/coverage/py2/coverage/inorout.py +++ /dev/null @@ -1,513 +0,0 @@ -# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 -# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt - -"""Determining whether files are being measured/reported or not.""" - -# For finding the stdlib -import atexit -import inspect -import itertools -import os -import platform -import re -import sys -import traceback - -from coverage import env -from coverage.backward import code_object -from coverage.disposition import FileDisposition, disposition_init -from coverage.files import TreeMatcher, FnmatchMatcher, ModuleMatcher -from coverage.files import prep_patterns, find_python_files, canonical_filename -from coverage.misc import CoverageException -from coverage.python import source_for_file, source_for_morf - - -# Pypy has some unusual stuff in the "stdlib". Consider those locations -# when deciding where the stdlib is. These modules are not used for anything, -# they are modules importable from the pypy lib directories, so that we can -# find those directories. -_structseq = _pypy_irc_topic = None -if env.PYPY: - try: - import _structseq - except ImportError: - pass - - try: - import _pypy_irc_topic - except ImportError: - pass - - -def canonical_path(morf, directory=False): - """Return the canonical path of the module or file `morf`. - - If the module is a package, then return its directory. If it is a - module, then return its file, unless `directory` is True, in which - case return its enclosing directory. - - """ - morf_path = canonical_filename(source_for_morf(morf)) - if morf_path.endswith("__init__.py") or directory: - morf_path = os.path.split(morf_path)[0] - return morf_path - - -def name_for_module(filename, frame): - """Get the name of the module for a filename and frame. - - For configurability's sake, we allow __main__ modules to be matched by - their importable name. 
- - If loaded via runpy (aka -m), we can usually recover the "original" - full dotted module name, otherwise, we resort to interpreting the - file name to get the module's name. In the case that the module name - can't be determined, None is returned. - - """ - module_globals = frame.f_globals if frame is not None else {} - if module_globals is None: # pragma: only ironpython - # IronPython doesn't provide globals: https://github.com/IronLanguages/main/issues/1296 - module_globals = {} - - dunder_name = module_globals.get('__name__', None) - - if isinstance(dunder_name, str) and dunder_name != '__main__': - # This is the usual case: an imported module. - return dunder_name - - loader = module_globals.get('__loader__', None) - for attrname in ('fullname', 'name'): # attribute renamed in py3.2 - if hasattr(loader, attrname): - fullname = getattr(loader, attrname) - else: - continue - - if isinstance(fullname, str) and fullname != '__main__': - # Module loaded via: runpy -m - return fullname - - # Script as first argument to Python command line. - inspectedname = inspect.getmodulename(filename) - if inspectedname is not None: - return inspectedname - else: - return dunder_name - - -def module_is_namespace(mod): - """Is the module object `mod` a PEP420 namespace module?""" - return hasattr(mod, '__path__') and getattr(mod, '__file__', None) is None - - -def module_has_file(mod): - """Does the module object `mod` have an existing __file__ ?""" - mod__file__ = getattr(mod, '__file__', None) - if mod__file__ is None: - return False - return os.path.exists(mod__file__) - - -class InOrOut(object): - """Machinery for determining what files to measure.""" - - def __init__(self, warn, debug): - self.warn = warn - self.debug = debug - - # The matchers for should_trace. 
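[editor's note, not part of the deleted file] The last-resort branch of `name_for_module` above hands the file name to `inspect.getmodulename`. A minimal standalone sketch of that fallback (paths are hypothetical):

```python
import inspect

# The last resort in name_for_module: when neither __name__ nor the
# loader settles it, derive the module name from the file name itself.
print(inspect.getmodulename("/tmp/pkg/helpers.py"))  # 'helpers'
print(inspect.getmodulename("/tmp/notes.txt"))       # None: not a module suffix
```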
- self.source_match = None - self.source_pkgs_match = None - self.pylib_paths = self.cover_paths = None - self.pylib_match = self.cover_match = None - self.include_match = self.omit_match = None - self.plugins = [] - self.disp_class = FileDisposition - - # The source argument can be directories or package names. - self.source = [] - self.source_pkgs = [] - self.source_pkgs_unmatched = [] - self.omit = self.include = None - - def configure(self, config): - """Apply the configuration to get ready for decision-time.""" - self.config = config - self.source_pkgs.extend(config.source_pkgs) - for src in config.source or []: - if os.path.isdir(src): - self.source.append(canonical_filename(src)) - else: - self.source_pkgs.append(src) - self.source_pkgs_unmatched = self.source_pkgs[:] - - self.omit = prep_patterns(config.run_omit) - if getattr(sys, 'is_standalone_binary', False): - # don't trace contrib - self.omit.append('contrib/python/*') - self.omit.append('contrib/libs/protobuf/*') - self.omit.append('library/python/pytest/*') - self.include = prep_patterns(config.run_include) - - # The directories for files considered "installed with the interpreter". - self.pylib_paths = set() - if getattr(sys, 'is_standalone_binary', False): - self.pylib_paths.add('contrib/tools/python') - self.pylib_paths.add('contrib/tools/python3') - if not self.pylib_paths and not config.cover_pylib: - # Look at where some standard modules are located. That's the - # indication for "installed with the interpreter". In some - # environments (virtualenv, for example), these modules may be - # spread across a few locations. Look at all the candidate modules - # we've imported, and take all the different ones. 
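[editor's note, not part of the deleted file] The comment above describes finding "installed with the interpreter" directories by asking where a few standard modules live. A simplified sketch of that heuristic, outside the `InOrOut` machinery:

```python
import inspect
import os
import traceback

# Collect the directories holding some standard modules; in a plain
# install these collapse to a path or two, in a virtualenv to several.
stdlib_dirs = set()
for mod in (inspect, os, traceback):
    if hasattr(mod, "__file__") and mod.__file__:
        stdlib_dirs.add(os.path.dirname(os.path.abspath(mod.__file__)))

print(sorted(stdlib_dirs))
```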
- for m in (atexit, inspect, os, platform, _pypy_irc_topic, re, _structseq, traceback): - if m is not None and hasattr(m, "__file__"): - self.pylib_paths.add(canonical_path(m, directory=True)) - - if _structseq and not hasattr(_structseq, '__file__'): - # PyPy 2.4 has no __file__ in the builtin modules, but the code - # objects still have the file names. So dig into one to find - # the path to exclude. The "filename" might be synthetic, - # don't be fooled by those. - structseq_file = code_object(_structseq.structseq_new).co_filename - if not structseq_file.startswith("<"): - self.pylib_paths.add(canonical_path(structseq_file)) - - # To avoid tracing the coverage.py code itself, we skip anything - # located where we are. - if getattr(sys, 'is_standalone_binary', False): - self.cover_paths = ["contrib/python/coverage"] - else: - self.cover_paths = [canonical_path(__file__, directory=True)] - if env.TESTING: - # Don't include our own test code. - self.cover_paths.append(os.path.join(self.cover_paths[0], "tests")) - - # When testing, we use PyContracts, which should be considered - # part of coverage.py, and it uses six. Exclude those directories - # just as we exclude ourselves. 
- import contracts - import six - for mod in [contracts, six]: - self.cover_paths.append(canonical_path(mod)) - - def debug(msg): - if self.debug: - self.debug.write(msg) - - # Create the matchers we need for should_trace - if self.source or self.source_pkgs: - against = [] - if self.source: - self.source_match = TreeMatcher(self.source) - against.append("trees {!r}".format(self.source_match)) - if self.source_pkgs: - self.source_pkgs_match = ModuleMatcher(self.source_pkgs) - against.append("modules {!r}".format(self.source_pkgs_match)) - debug("Source matching against " + " and ".join(against)) - else: - if self.cover_paths: - self.cover_match = TreeMatcher(self.cover_paths) - debug("Coverage code matching: {!r}".format(self.cover_match)) - if self.pylib_paths: - self.pylib_match = TreeMatcher(self.pylib_paths) - debug("Python stdlib matching: {!r}".format(self.pylib_match)) - if self.include: - self.include_match = FnmatchMatcher(self.include) - debug("Include matching: {!r}".format(self.include_match)) - if self.omit: - self.omit_match = FnmatchMatcher(self.omit) - debug("Omit matching: {!r}".format(self.omit_match)) - - def should_trace(self, filename, frame=None): - """Decide whether to trace execution in `filename`, with a reason. - - This function is called from the trace function. As each new file name - is encountered, this function determines whether it is traced or not. - - Returns a FileDisposition object. - - """ - original_filename = filename - disp = disposition_init(self.disp_class, filename) - - def nope(disp, reason): - """Simple helper to make it easy to return NO.""" - disp.trace = False - disp.reason = reason - return disp - - if frame is not None: - # Compiled Python files have two file names: frame.f_code.co_filename is - # the file name at the time the .pyc was compiled. The second name is - # __file__, which is where the .pyc was actually loaded from. 
Since - # .pyc files can be moved after compilation (for example, by being - # installed), we look for __file__ in the frame and prefer it to the - # co_filename value. - dunder_file = frame.f_globals and frame.f_globals.get('__file__') - if dunder_file: - filename = source_for_file(dunder_file) - if original_filename and not original_filename.startswith('<'): - orig = os.path.basename(original_filename) - if orig != os.path.basename(filename): - # Files shouldn't be renamed when moved. This happens when - # exec'ing code. If it seems like something is wrong with - # the frame's file name, then just use the original. - filename = original_filename - - if not filename: - # Empty string is pretty useless. - return nope(disp, "empty string isn't a file name") - - if filename.startswith('memory:'): - return nope(disp, "memory isn't traceable") - - if filename.startswith('<'): - # Lots of non-file execution is represented with artificial - # file names like "<string>", "<doctest readme.txt[0]>", or - # "<exec_function>". Don't ever trace these executions, since we - # can't do anything with the data later anyway. - return nope(disp, "not a real file name") - - # pyexpat does a dumb thing, calling the trace function explicitly from - # C code with a C file name. - if re.search(r"[/\\]Modules[/\\]pyexpat.c", filename): - return nope(disp, "pyexpat lies about itself") - - # Jython reports the .class file to the tracer, use the source file. - if filename.endswith("$py.class"): - filename = filename[:-9] + ".py" - - # XXX maybe we need to support both at the same time? - # Don't trace modules imported from environment in standalone mode - if getattr(sys, 'is_standalone_binary', False) and filename.startswith("/"): - return nope(disp, "skip modules from environment") - - canonical = canonical_filename(filename) - disp.canonical_filename = canonical - - # Try the plugins, see if they have an opinion about the file. 
- plugin = None - for plugin in self.plugins.file_tracers: - if not plugin._coverage_enabled: - continue - - try: - file_tracer = plugin.file_tracer(canonical) - if file_tracer is not None: - file_tracer._coverage_plugin = plugin - disp.trace = True - disp.file_tracer = file_tracer - if file_tracer.has_dynamic_source_filename(): - disp.has_dynamic_filename = True - else: - disp.source_filename = canonical_filename( - file_tracer.source_filename() - ) - break - except Exception: - if not self.config.suppress_plugin_errors: - raise - self.warn( - "Disabling plug-in %r due to an exception:" % (plugin._coverage_plugin_name) - ) - traceback.print_exc() - plugin._coverage_enabled = False - continue - else: - # No plugin wanted it: it's Python. - disp.trace = True - disp.source_filename = canonical - - if not disp.has_dynamic_filename: - if not disp.source_filename: - raise CoverageException( - "Plugin %r didn't set source_filename for %r" % - (plugin, disp.original_filename) - ) - reason = self.check_include_omit_etc(disp.source_filename, frame) - if reason: - nope(disp, reason) - - return disp - - def check_include_omit_etc(self, filename, frame): - """Check a file name against the include, omit, etc, rules. - - Returns a string or None. String means, don't trace, and is the reason - why. None means no reason found to not trace. - - """ - modulename = name_for_module(filename, frame) - - # If the user specified source or include, then that's authoritative - # about the outer bound of what to measure and we don't have to apply - # any canned exclusions. If they didn't, then we have to exclude the - # stdlib and coverage.py directories. 
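The precedence spelled out in the comment above is worth seeing end to end: `--source`/`--include` set the authoritative outer bound, otherwise the stdlib and coverage.py's own code are excluded, and `--omit` applies in every case. A simplified stand-in for `check_include_omit_etc` (the name `why_skipped`, the prefix-based source matching, and the hard-coded stdlib path are illustrative assumptions; the deleted implementation uses `TreeMatcher`/`ModuleMatcher`/`FnmatchMatcher` objects built in `_init_for_start` above):

```python
import fnmatch

def why_skipped(filename, source_dirs=(), include=(), omit=(),
                stdlib_dir="/usr/lib/python2.7"):
    """Return a reason string for not tracing `filename`, or None to trace it."""
    if source_dirs:
        # --source is authoritative about the outer bound of what to measure.
        if not any(filename.startswith(d.rstrip("/") + "/") for d in source_dirs):
            return "falls outside the --source spec"
    elif include:
        if not any(fnmatch.fnmatch(filename, pat) for pat in include):
            return "falls outside the --include trees"
    else:
        # No source/include given: fall back to the canned stdlib exclusion.
        if filename.startswith(stdlib_dir + "/"):
            return "is in the stdlib"
    # --omit applies no matter which branch was taken above.
    if any(fnmatch.fnmatch(filename, pat) for pat in omit):
        return "is inside an --omit pattern"
    return None
```

Returning `None` means "trace it", mirroring the real method's contract of reason-string-or-None.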
- if self.source_match or self.source_pkgs_match: - extra = "" - ok = False - if self.source_pkgs_match: - if self.source_pkgs_match.match(modulename): - ok = True - if modulename in self.source_pkgs_unmatched: - self.source_pkgs_unmatched.remove(modulename) - else: - extra = "module {!r} ".format(modulename) - if not ok and self.source_match: - if self.source_match.match(filename): - ok = True - if not ok: - return extra + "falls outside the --source spec" - elif self.include_match: - if not self.include_match.match(filename): - return "falls outside the --include trees" - else: - # If we aren't supposed to trace installed code, then check if this - # is near the Python standard library and skip it if so. - if self.pylib_match and self.pylib_match.match(filename): - return "is in the stdlib" - - # We exclude the coverage.py code itself, since a little of it - # will be measured otherwise. - if self.cover_match and self.cover_match.match(filename): - return "is part of coverage.py" - - # Check the file against the omit pattern. - if self.omit_match and self.omit_match.match(filename): - return "is inside an --omit pattern" - - # No point tracing a file we can't later write to SQLite. - try: - filename.encode("utf8") - except UnicodeEncodeError: - return "non-encodable filename" - - # No reason found to skip this file. 
- return None - - def warn_conflicting_settings(self): - """Warn if there are settings that conflict.""" - if self.include: - if self.source or self.source_pkgs: - self.warn("--include is ignored because --source is set", slug="include-ignored") - - def warn_already_imported_files(self): - """Warn if files have already been imported that we will be measuring.""" - if self.include or self.source or self.source_pkgs: - warned = set() - for mod in list(sys.modules.values()): - filename = getattr(mod, "__file__", None) - if filename is None: - continue - if filename in warned: - continue - - disp = self.should_trace(filename) - if disp.trace: - msg = "Already imported a file that will be measured: {}".format(filename) - self.warn(msg, slug="already-imported") - warned.add(filename) - - def warn_unimported_source(self): - """Warn about source packages that were of interest, but never traced.""" - for pkg in self.source_pkgs_unmatched: - self._warn_about_unmeasured_code(pkg) - - def _warn_about_unmeasured_code(self, pkg): - """Warn about a package or module that we never traced. - - `pkg` is a string, the name of the package or module. - - """ - mod = sys.modules.get(pkg) - if mod is None: - self.warn("Module %s was never imported." % pkg, slug="module-not-imported") - return - - if module_is_namespace(mod): - # A namespace package. It's OK for this not to have been traced, - # since there is no code directly in it. - return - - if not module_has_file(mod): - self.warn("Module %s has no Python source." % pkg, slug="module-not-python") - return - - # The module was in sys.modules, and seems like a module with code, but - # we never measured it. I guess that means it was imported before - # coverage even started. - self.warn( - "Module %s was previously imported, but not measured" % pkg, - slug="module-not-measured", - ) - - def find_possibly_unexecuted_files(self): - """Find files in the areas of interest that might be untraced. 
- - Yields pairs: file path, and responsible plug-in name. - """ - for pkg in self.source_pkgs: - if (not pkg in sys.modules or - not module_has_file(sys.modules[pkg])): - continue - pkg_file = source_for_file(sys.modules[pkg].__file__) - for ret in self._find_executable_files(canonical_path(pkg_file)): - yield ret - - for src in self.source: - for ret in self._find_executable_files(src): - yield ret - - def _find_plugin_files(self, src_dir): - """Get executable files from the plugins.""" - for plugin in self.plugins.file_tracers: - for x_file in plugin.find_executable_files(src_dir): - yield x_file, plugin._coverage_plugin_name - - def _find_executable_files(self, src_dir): - """Find executable files in `src_dir`. - - Search for files in `src_dir` that can be executed because they - are probably importable. Don't include ones that have been omitted - by the configuration. - - Yield the file path, and the plugin name that handles the file. - - """ - py_files = ((py_file, None) for py_file in find_python_files(src_dir)) - plugin_files = self._find_plugin_files(src_dir) - - for file_path, plugin_name in itertools.chain(py_files, plugin_files): - file_path = canonical_filename(file_path) - if self.omit_match and self.omit_match.match(file_path): - # Turns out this file was omitted, so don't pull it back - # in as unexecuted. - continue - yield file_path, plugin_name - - def sys_info(self): - """Our information for Coverage.sys_info. - - Returns a list of (key, value) pairs. 
- """ - info = [ - ('cover_paths', self.cover_paths), - ('pylib_paths', self.pylib_paths), - ] - - matcher_names = [ - 'source_match', 'source_pkgs_match', - 'include_match', 'omit_match', - 'cover_match', 'pylib_match', - ] - - for matcher_name in matcher_names: - matcher = getattr(self, matcher_name) - if matcher: - matcher_info = matcher.info() - else: - matcher_info = '-none-' - info.append((matcher_name, matcher_info)) - - return info diff --git a/contrib/python/coverage/py2/coverage/jsonreport.py b/contrib/python/coverage/py2/coverage/jsonreport.py deleted file mode 100644 index 4287bc79a35..00000000000 --- a/contrib/python/coverage/py2/coverage/jsonreport.py +++ /dev/null @@ -1,103 +0,0 @@ -# coding: utf-8 -# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 -# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt - -"""Json reporting for coverage.py""" -import datetime -import json -import sys - -from coverage import __version__ -from coverage.report import get_analysis_to_report -from coverage.results import Numbers - - -class JsonReporter(object): - """A reporter for writing JSON coverage results.""" - - def __init__(self, coverage): - self.coverage = coverage - self.config = self.coverage.config - self.total = Numbers() - self.report_data = {} - - def report(self, morfs, outfile=None): - """Generate a json report for `morfs`. - - `morfs` is a list of modules or file names. 
- - `outfile` is a file object to write the json to - - """ - outfile = outfile or sys.stdout - coverage_data = self.coverage.get_data() - coverage_data.set_query_contexts(self.config.report_contexts) - self.report_data["meta"] = { - "version": __version__, - "timestamp": datetime.datetime.now().isoformat(), - "branch_coverage": coverage_data.has_arcs(), - "show_contexts": self.config.json_show_contexts, - } - - measured_files = {} - for file_reporter, analysis in get_analysis_to_report(self.coverage, morfs): - measured_files[file_reporter.relative_filename()] = self.report_one_file( - coverage_data, - analysis - ) - - self.report_data["files"] = measured_files - - self.report_data["totals"] = { - 'covered_lines': self.total.n_executed, - 'num_statements': self.total.n_statements, - 'percent_covered': self.total.pc_covered, - 'missing_lines': self.total.n_missing, - 'excluded_lines': self.total.n_excluded, - } - - if coverage_data.has_arcs(): - self.report_data["totals"].update({ - 'num_branches': self.total.n_branches, - 'num_partial_branches': self.total.n_partial_branches, - 'covered_branches': self.total.n_executed_branches, - 'missing_branches': self.total.n_missing_branches, - }) - - json.dump( - self.report_data, - outfile, - indent=4 if self.config.json_pretty_print else None - ) - - return self.total.n_statements and self.total.pc_covered - - def report_one_file(self, coverage_data, analysis): - """Extract the relevant report data for a single file""" - nums = analysis.numbers - self.total += nums - summary = { - 'covered_lines': nums.n_executed, - 'num_statements': nums.n_statements, - 'percent_covered': nums.pc_covered, - 'missing_lines': nums.n_missing, - 'excluded_lines': nums.n_excluded, - } - reported_file = { - 'executed_lines': sorted(analysis.executed), - 'summary': summary, - 'missing_lines': sorted(analysis.missing), - 'excluded_lines': sorted(analysis.excluded) - } - if self.config.json_show_contexts: - reported_file['contexts'] = 
analysis.data.contexts_by_lineno( - analysis.filename, - ) - if coverage_data.has_arcs(): - reported_file['summary'].update({ - 'num_branches': nums.n_branches, - 'num_partial_branches': nums.n_partial_branches, - 'covered_branches': nums.n_executed_branches, - 'missing_branches': nums.n_missing_branches, - }) - return reported_file diff --git a/contrib/python/coverage/py2/coverage/misc.py b/contrib/python/coverage/py2/coverage/misc.py deleted file mode 100644 index 034e288eb9a..00000000000 --- a/contrib/python/coverage/py2/coverage/misc.py +++ /dev/null @@ -1,361 +0,0 @@ -# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 -# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt - -"""Miscellaneous stuff for coverage.py.""" - -import errno -import hashlib -import inspect -import locale -import os -import os.path -import random -import re -import socket -import sys -import types - -from coverage import env -from coverage.backward import to_bytes, unicode_class - -ISOLATED_MODULES = {} - - -def isolate_module(mod): - """Copy a module so that we are isolated from aggressive mocking. - - If a test suite mocks os.path.exists (for example), and then we need to use - it during the test, everything will get tangled up if we use their mock. - Making a copy of the module when we import it will isolate coverage.py from - those complications. 
- """ - if mod not in ISOLATED_MODULES: - new_mod = types.ModuleType(mod.__name__) - ISOLATED_MODULES[mod] = new_mod - for name in dir(mod): - value = getattr(mod, name) - if isinstance(value, types.ModuleType): - value = isolate_module(value) - setattr(new_mod, name, value) - return ISOLATED_MODULES[mod] - -os = isolate_module(os) - - -def dummy_decorator_with_args(*args_unused, **kwargs_unused): - """Dummy no-op implementation of a decorator with arguments.""" - def _decorator(func): - return func - return _decorator - - -# Environment COVERAGE_NO_CONTRACTS=1 can turn off contracts while debugging -# tests to remove noise from stack traces. -# $set_env.py: COVERAGE_NO_CONTRACTS - Disable PyContracts to simplify stack traces. -USE_CONTRACTS = env.TESTING and not bool(int(os.environ.get("COVERAGE_NO_CONTRACTS", 0))) - -# Use PyContracts for assertion testing on parameters and returns, but only if -# we are running our own test suite. -if USE_CONTRACTS: - from contracts import contract # pylint: disable=unused-import - from contracts import new_contract as raw_new_contract - - def new_contract(*args, **kwargs): - """A proxy for contracts.new_contract that doesn't mind happening twice.""" - try: - raw_new_contract(*args, **kwargs) - except ValueError: - # During meta-coverage, this module is imported twice, and - # PyContracts doesn't like redefining contracts. It's OK. - pass - - # Define contract words that PyContract doesn't have. 
- new_contract('bytes', lambda v: isinstance(v, bytes)) - if env.PY3: - new_contract('unicode', lambda v: isinstance(v, unicode_class)) - - def one_of(argnames): - """Ensure that only one of the argnames is non-None.""" - def _decorator(func): - argnameset = {name.strip() for name in argnames.split(",")} - def _wrapper(*args, **kwargs): - vals = [kwargs.get(name) for name in argnameset] - assert sum(val is not None for val in vals) == 1 - return func(*args, **kwargs) - return _wrapper - return _decorator -else: # pragma: not testing - # We aren't using real PyContracts, so just define our decorators as - # stunt-double no-ops. - contract = dummy_decorator_with_args - one_of = dummy_decorator_with_args - - def new_contract(*args_unused, **kwargs_unused): - """Dummy no-op implementation of `new_contract`.""" - pass - - -def nice_pair(pair): - """Make a nice string representation of a pair of numbers. - - If the numbers are equal, just return the number, otherwise return the pair - with a dash between them, indicating the range. - - """ - start, end = pair - if start == end: - return "%d" % start - else: - return "%d-%d" % (start, end) - - -def expensive(fn): - """A decorator to indicate that a method shouldn't be called more than once. - - Normally, this does nothing. During testing, this raises an exception if - called more than once. 
- - """ - if env.TESTING: - attr = "_once_" + fn.__name__ - - def _wrapper(self): - if hasattr(self, attr): - raise AssertionError("Shouldn't have called %s more than once" % fn.__name__) - setattr(self, attr, True) - return fn(self) - return _wrapper - else: - return fn # pragma: not testing - - -def bool_or_none(b): - """Return bool(b), but preserve None.""" - if b is None: - return None - else: - return bool(b) - - -def join_regex(regexes): - """Combine a list of regexes into one that matches any of them.""" - return "|".join("(?:%s)" % r for r in regexes) - - -def file_be_gone(path): - """Remove a file, and don't get annoyed if it doesn't exist.""" - try: - os.remove(path) - except OSError as e: - if e.errno != errno.ENOENT: - raise - - -def ensure_dir(directory): - """Make sure the directory exists. - - If `directory` is None or empty, do nothing. - """ - if directory and not os.path.isdir(directory): - os.makedirs(directory) - - -def ensure_dir_for_file(path): - """Make sure the directory for the path exists.""" - ensure_dir(os.path.dirname(path)) - - -def output_encoding(outfile=None): - """Determine the encoding to use for output written to `outfile` or stdout.""" - if outfile is None: - outfile = sys.stdout - encoding = ( - getattr(outfile, "encoding", None) or - getattr(sys.__stdout__, "encoding", None) or - locale.getpreferredencoding() - ) - return encoding - - -def filename_suffix(suffix): - """Compute a filename suffix for a data file. - - If `suffix` is a string or None, simply return it. If `suffix` is True, - then build a suffix incorporating the hostname, process id, and a random - number. - - Returns a string or None. - - """ - if suffix is True: - # If data_suffix was a simple true value, then make a suffix with - # plenty of distinguishing information. We do this here in - # `save()` at the last minute so that the pid will be correct even - # if the process forks. 
- dice = random.Random(os.urandom(8)).randint(0, 999999) - suffix = "%s.%s.%06d" % (socket.gethostname(), os.getpid(), dice) - return suffix - - -class Hasher(object): - """Hashes Python data into md5.""" - def __init__(self): - self.md5 = hashlib.md5() - - def update(self, v): - """Add `v` to the hash, recursively if needed.""" - self.md5.update(to_bytes(str(type(v)))) - if isinstance(v, unicode_class): - self.md5.update(v.encode('utf8')) - elif isinstance(v, bytes): - self.md5.update(v) - elif v is None: - pass - elif isinstance(v, (int, float)): - self.md5.update(to_bytes(str(v))) - elif isinstance(v, (tuple, list)): - for e in v: - self.update(e) - elif isinstance(v, dict): - keys = v.keys() - for k in sorted(keys): - self.update(k) - self.update(v[k]) - else: - for k in dir(v): - if k.startswith('__'): - continue - a = getattr(v, k) - if inspect.isroutine(a): - continue - self.update(k) - self.update(a) - self.md5.update(b'.') - - def hexdigest(self): - """Retrieve the hex digest of the hash.""" - return self.md5.hexdigest() - - -def _needs_to_implement(that, func_name): - """Helper to raise NotImplementedError in interface stubs.""" - if hasattr(that, "_coverage_plugin_name"): - thing = "Plugin" - name = that._coverage_plugin_name - else: - thing = "Class" - klass = that.__class__ - name = "{klass.__module__}.{klass.__name__}".format(klass=klass) - - raise NotImplementedError( - "{thing} {name!r} needs to implement {func_name}()".format( - thing=thing, name=name, func_name=func_name - ) - ) - - -class DefaultValue(object): - """A sentinel object to use for unusual default-value needs. - - Construct with a string that will be used as the repr, for display in help - and Sphinx output. - - """ - def __init__(self, display_as): - self.display_as = display_as - - def __repr__(self): - return self.display_as - - -def substitute_variables(text, variables): - """Substitute ``${VAR}`` variables in `text` with their values. 
- - Variables in the text can take a number of shell-inspired forms:: - - $VAR - ${VAR} - ${VAR?} strict: an error if VAR isn't defined. - ${VAR-missing} defaulted: "missing" if VAR isn't defined. - $$ just a dollar sign. - - `variables` is a dictionary of variable values. - - Returns the resulting text with values substituted. - - """ - dollar_pattern = r"""(?x) # Use extended regex syntax - \$ # A dollar sign, - (?: # then - (?P<dollar>\$) | # a dollar sign, or - (?P<word1>\w+) | # a plain word, or - { # a {-wrapped - (?P<word2>\w+) # word, - (?: - (?P<strict>\?) | # with a strict marker - -(?P<defval>[^}]*) # or a default value - )? # maybe. - } - ) - """ - - def dollar_replace(match): - """Called for each $replacement.""" - # Only one of the groups will have matched, just get its text. - word = next(g for g in match.group('dollar', 'word1', 'word2') if g) - if word == "$": - return "$" - elif word in variables: - return variables[word] - elif match.group('strict'): - msg = "Variable {} is undefined: {!r}".format(word, text) - raise CoverageException(msg) - else: - return match.group('defval') - - text = re.sub(dollar_pattern, dollar_replace, text) - return text - - -class BaseCoverageException(Exception): - """The base of all Coverage exceptions.""" - pass - - -class CoverageException(BaseCoverageException): - """An exception raised by a coverage.py function.""" - pass - - -class NoSource(CoverageException): - """We couldn't find the source for a module.""" - pass - - -class NoCode(NoSource): - """We couldn't find any code at all.""" - pass - - -class NotPython(CoverageException): - """A source file turned out not to be parsable Python.""" - pass - - -class ExceptionDuringRun(CoverageException): - """An exception happened while running customer code. - - Construct it with three arguments, the values from `sys.exc_info`. - - """ - pass - - -class StopEverything(BaseCoverageException): - """An exception that means everything should stop. 
- - The CoverageTest class converts these to SkipTest, so that when running - tests, raising this exception will automatically skip the test. - - """ - pass diff --git a/contrib/python/coverage/py2/coverage/multiproc.py b/contrib/python/coverage/py2/coverage/multiproc.py deleted file mode 100644 index 21ed2e2c956..00000000000 --- a/contrib/python/coverage/py2/coverage/multiproc.py +++ /dev/null @@ -1,117 +0,0 @@ -# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 -# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt - -"""Monkey-patching to add multiprocessing support for coverage.py""" - -import multiprocessing -import multiprocessing.process -import os -import os.path -import sys -import traceback - -from coverage import env -from coverage.misc import contract - -# An attribute that will be set on the module to indicate that it has been -# monkey-patched. -PATCHED_MARKER = "_coverage$patched" - -COVERAGE_CONFIGURATION_ENV = "_COVERAGE_CONFIGURATION_ENV" - - -if env.PYVERSION >= (3, 4): - OriginalProcess = multiprocessing.process.BaseProcess -else: - OriginalProcess = multiprocessing.Process - -original_bootstrap = OriginalProcess._bootstrap - -class ProcessWithCoverage(OriginalProcess): # pylint: disable=abstract-method - """A replacement for multiprocess.Process that starts coverage.""" - - def _bootstrap(self, *args, **kwargs): - """Wrapper around _bootstrap to start coverage.""" - try: - from coverage import Coverage # avoid circular import - import json - kwconf = json.loads(os.environ[COVERAGE_CONFIGURATION_ENV]) - cov = Coverage(**kwconf) - cov._warn_preimported_source = False - cov.start() - debug = cov._debug - if debug.should("multiproc"): - debug.write("Calling multiprocessing bootstrap") - except Exception: - print("Exception during multiprocessing bootstrap init:") - traceback.print_exc(file=sys.stdout) - sys.stdout.flush() - raise - try: - return original_bootstrap(self, *args, **kwargs) - finally: - 
if debug.should("multiproc"): - debug.write("Finished multiprocessing bootstrap") - cov.stop() - cov.save() - if debug.should("multiproc"): - debug.write("Saved multiprocessing data") - -class Stowaway(object): - """An object to pickle, so when it is unpickled, it can apply the monkey-patch.""" - def __init__(self, rcfile): - self.rcfile = rcfile - - def __getstate__(self): - return {'rcfile': self.rcfile} - - def __setstate__(self, state): - patch_multiprocessing(state['rcfile']) - - -@contract(rcfile=str) -def patch_multiprocessing(rcfile, coverage_args): - """Monkey-patch the multiprocessing module. - - This enables coverage measurement of processes started by multiprocessing. - This involves aggressive monkey-patching. - - `rcfile` is the path to the rcfile being used. - - """ - - if hasattr(multiprocessing, PATCHED_MARKER): - return - - if env.PYVERSION >= (3, 4): - OriginalProcess._bootstrap = ProcessWithCoverage._bootstrap - else: - multiprocessing.Process = ProcessWithCoverage - - # Set the value in ProcessWithCoverage that will be pickled into the child - # process. - os.environ["COVERAGE_RCFILE"] = os.path.abspath(rcfile) - - os.environ[COVERAGE_CONFIGURATION_ENV] = coverage_args - - # When spawning processes rather than forking them, we have no state in the - # new process. We sneak in there with a Stowaway: we stuff one of our own - # objects into the data that gets pickled and sent to the sub-process. When - # the Stowaway is unpickled, it's __setstate__ method is called, which - # re-applies the monkey-patch. - # Windows only spawns, so this is needed to keep Windows working. 
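The comment above points at the heart of the trick: a spawned process starts a fresh interpreter with no monkey-patch, so a `Stowaway` is smuggled into the preparation data and pickle's `__setstate__` hook re-applies the patch on arrival. A minimal, single-process sketch of the pattern (`apply_patch` and the `PATCHED` dict here are stand-ins for `patch_multiprocessing` and its marker attribute, not the deleted code):

```python
import pickle

PATCHED = {"applied": False}

def apply_patch(rcfile):
    """Stand-in for patch_multiprocessing: record that the patch ran."""
    PATCHED["applied"] = True
    PATCHED["rcfile"] = rcfile

class Stowaway(object):
    """On unpickling, re-apply the patch in the receiving process."""
    def __init__(self, rcfile):
        self.rcfile = rcfile

    def __getstate__(self):
        return {"rcfile": self.rcfile}

    def __setstate__(self, state):
        # Runs wherever the pickle is loaded -- e.g. in a spawned child.
        apply_patch(state["rcfile"])

blob = pickle.dumps(Stowaway(".coveragerc"))   # parent: stuff it into the prep data
pickle.loads(blob)                             # "child": unpickling applies the patch
```

In the real module the round trip crosses a process boundary via `spawn.get_preparation_data`; the sketch runs it in one process only to show that loading the pickle is what triggers the side effect.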
- try: - from multiprocessing import spawn - original_get_preparation_data = spawn.get_preparation_data - except (ImportError, AttributeError): - pass - else: - def get_preparation_data_with_stowaway(name): - """Get the original preparation data, and also insert our stowaway.""" - d = original_get_preparation_data(name) - d['stowaway'] = Stowaway(rcfile) - return d - - spawn.get_preparation_data = get_preparation_data_with_stowaway - - setattr(multiprocessing, PATCHED_MARKER, True) diff --git a/contrib/python/coverage/py2/coverage/numbits.py b/contrib/python/coverage/py2/coverage/numbits.py deleted file mode 100644 index 6ca96fbcf72..00000000000 --- a/contrib/python/coverage/py2/coverage/numbits.py +++ /dev/null @@ -1,163 +0,0 @@ -# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 -# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt - -""" -Functions to manipulate packed binary representations of number sets. - -To save space, coverage stores sets of line numbers in SQLite using a packed -binary representation called a numbits. A numbits is a set of positive -integers. - -A numbits is stored as a blob in the database. The exact meaning of the bytes -in the blobs should be considered an implementation detail that might change in -the future. Use these functions to work with those binary blobs of data. 
- -""" -import json - -from coverage import env -from coverage.backward import byte_to_int, bytes_to_ints, binary_bytes, zip_longest -from coverage.misc import contract, new_contract - -if env.PY3: - def _to_blob(b): - """Convert a bytestring into a type SQLite will accept for a blob.""" - return b - - new_contract('blob', lambda v: isinstance(v, bytes)) -else: - def _to_blob(b): - """Convert a bytestring into a type SQLite will accept for a blob.""" - return buffer(b) # pylint: disable=undefined-variable - - new_contract('blob', lambda v: isinstance(v, buffer)) # pylint: disable=undefined-variable - - -@contract(nums='Iterable', returns='blob') -def nums_to_numbits(nums): - """Convert `nums` into a numbits. - - Arguments: - nums: a reusable iterable of integers, the line numbers to store. - - Returns: - A binary blob. - """ - try: - nbytes = max(nums) // 8 + 1 - except ValueError: - # nums was empty. - return _to_blob(b'') - b = bytearray(nbytes) - for num in nums: - b[num//8] |= 1 << num % 8 - return _to_blob(bytes(b)) - - -@contract(numbits='blob', returns='list[int]') -def numbits_to_nums(numbits): - """Convert a numbits into a list of numbers. - - Arguments: - numbits: a binary blob, the packed number set. - - Returns: - A list of ints. - - When registered as a SQLite function by :func:`register_sqlite_functions`, - this returns a string, a JSON-encoded list of ints. - - """ - nums = [] - for byte_i, byte in enumerate(bytes_to_ints(numbits)): - for bit_i in range(8): - if (byte & (1 << bit_i)): - nums.append(byte_i * 8 + bit_i) - return nums - - -@contract(numbits1='blob', numbits2='blob', returns='blob') -def numbits_union(numbits1, numbits2): - """Compute the union of two numbits. - - Returns: - A new numbits, the union of `numbits1` and `numbits2`. 
- """ - byte_pairs = zip_longest(bytes_to_ints(numbits1), bytes_to_ints(numbits2), fillvalue=0) - return _to_blob(binary_bytes(b1 | b2 for b1, b2 in byte_pairs)) - - -@contract(numbits1='blob', numbits2='blob', returns='blob') -def numbits_intersection(numbits1, numbits2): - """Compute the intersection of two numbits. - - Returns: - A new numbits, the intersection `numbits1` and `numbits2`. - """ - byte_pairs = zip_longest(bytes_to_ints(numbits1), bytes_to_ints(numbits2), fillvalue=0) - intersection_bytes = binary_bytes(b1 & b2 for b1, b2 in byte_pairs) - return _to_blob(intersection_bytes.rstrip(b'\0')) - - -@contract(numbits1='blob', numbits2='blob', returns='bool') -def numbits_any_intersection(numbits1, numbits2): - """Is there any number that appears in both numbits? - - Determine whether two number sets have a non-empty intersection. This is - faster than computing the intersection. - - Returns: - A bool, True if there is any number in both `numbits1` and `numbits2`. - """ - byte_pairs = zip_longest(bytes_to_ints(numbits1), bytes_to_ints(numbits2), fillvalue=0) - return any(b1 & b2 for b1, b2 in byte_pairs) - - -@contract(num='int', numbits='blob', returns='bool') -def num_in_numbits(num, numbits): - """Does the integer `num` appear in `numbits`? - - Returns: - A bool, True if `num` is a member of `numbits`. - """ - nbyte, nbit = divmod(num, 8) - if nbyte >= len(numbits): - return False - return bool(byte_to_int(numbits[nbyte]) & (1 << nbit)) - - -def register_sqlite_functions(connection): - """ - Define numbits functions in a SQLite connection. - - This defines these functions for use in SQLite statements: - - * :func:`numbits_union` - * :func:`numbits_intersection` - * :func:`numbits_any_intersection` - * :func:`num_in_numbits` - * :func:`numbits_to_nums` - - `connection` is a :class:`sqlite3.Connection <python:sqlite3.Connection>` - object. After creating the connection, pass it to this function to - register the numbits functions. 
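The packing scheme all of these functions share is compact enough to re-derive: integer `n` lives in bit `n % 8` of byte `n // 8`. A Python 3-only sketch of the round trip (the deleted module additionally handles Python 2 `buffer` blobs and PyContracts checking, which this omits):

```python
def nums_to_numbits(nums):
    """Pack positive ints into a blob: number n -> bit n % 8 of byte n // 8."""
    if not nums:
        return b""
    b = bytearray(max(nums) // 8 + 1)
    for num in nums:
        b[num // 8] |= 1 << (num % 8)
    return bytes(b)

def numbits_to_nums(numbits):
    """Unpack a blob back into a sorted list of ints."""
    return [byte_i * 8 + bit_i
            for byte_i, byte in enumerate(bytearray(numbits))
            for bit_i in range(8)
            if byte & (1 << bit_i)]
```

So `{1, 3, 47}` packs into six bytes (47 // 8 == 5), with bits 1 and 3 set in byte 0 and bit 7 set in byte 5.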
Then you can use numbits functions in your - queries:: - - import sqlite3 - from coverage.numbits import register_sqlite_functions - - conn = sqlite3.connect('example.db') - register_sqlite_functions(conn) - c = conn.cursor() - # Kind of a nonsense query: find all the files and contexts that - # executed line 47 in any file: - c.execute( - "select file_id, context_id from line_bits where num_in_numbits(?, numbits)", - (47,) - ) - """ - connection.create_function("numbits_union", 2, numbits_union) - connection.create_function("numbits_intersection", 2, numbits_intersection) - connection.create_function("numbits_any_intersection", 2, numbits_any_intersection) - connection.create_function("num_in_numbits", 2, num_in_numbits) - connection.create_function("numbits_to_nums", 1, lambda b: json.dumps(numbits_to_nums(b))) diff --git a/contrib/python/coverage/py2/coverage/parser.py b/contrib/python/coverage/py2/coverage/parser.py deleted file mode 100644 index 258f9560394..00000000000 --- a/contrib/python/coverage/py2/coverage/parser.py +++ /dev/null @@ -1,1276 +0,0 @@ -# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 -# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt - -"""Code parsing for coverage.py.""" - -import ast -import collections -import os -import re -import token -import tokenize - -from coverage import env -from coverage.backward import range # pylint: disable=redefined-builtin -from coverage.backward import bytes_to_ints, string_class -from coverage.bytecode import code_objects -from coverage.debug import short_stack -from coverage.misc import contract, join_regex, new_contract, nice_pair, one_of -from coverage.misc import NoSource, NotPython, StopEverything -from coverage.phystokens import compile_unicode, generate_tokens, neuter_encoding_declaration - - -class PythonParser(object): - """Parse code to find executable lines, excluded lines, etc. 
- - This information is all based on static analysis: no code execution is - involved. - - """ - @contract(text='unicode|None') - def __init__(self, text=None, filename=None, exclude=None): - """ - Source can be provided as `text`, the text itself, or `filename`, from - which the text will be read. Excluded lines are those that match - `exclude`, a regex. - - """ - assert text or filename, "PythonParser needs either text or filename" - self.filename = filename or "<code>" - self.text = text - if not self.text: - from coverage.python import get_python_source - try: - self.text = get_python_source(self.filename) - except IOError as err: - raise NoSource( - "No source for code: '%s': %s" % (self.filename, err) - ) - - self.exclude = exclude - - # The text lines of the parsed code. - self.lines = self.text.split('\n') - - # The normalized line numbers of the statements in the code. Exclusions - # are taken into account, and statements are adjusted to their first - # lines. - self.statements = set() - - # The normalized line numbers of the excluded lines in the code, - # adjusted to their first lines. - self.excluded = set() - - # The raw_* attributes are only used in this class, and in - # lab/parser.py to show how this class is working. - - # The line numbers that start statements, as reported by the line - # number table in the bytecode. - self.raw_statements = set() - - # The raw line numbers of excluded lines of code, as marked by pragmas. - self.raw_excluded = set() - - # The line numbers of class definitions. - self.raw_classdefs = set() - - # Function definitions (start, end, name) - self._raw_funcdefs = set() - - # The line numbers of docstring lines. - self.raw_docstrings = set() - - # Internal detail, used by lab/parser.py. - self.show_tokens = False - - # A dict mapping line numbers to lexical statement starts for - # multi-line statements. - self._multiline = {} - - # Lazily-created ByteParser, arc data, and missing arc descriptions. 
- self._byte_parser = None - self._all_arcs = None - self._missing_arc_fragments = None - - @property - def byte_parser(self): - """Create a ByteParser on demand.""" - if not self._byte_parser: - self._byte_parser = ByteParser(self.text, filename=self.filename) - return self._byte_parser - - @property - def raw_funcdefs(self): - return self._raw_funcdefs - - def lines_matching(self, *regexes): - """Find the lines matching one of a list of regexes. - - Returns a set of line numbers, the lines that contain a match for one - of the regexes in `regexes`. The entire line needn't match, just a - part of it. - - """ - combined = join_regex(regexes) - if env.PY2: - combined = combined.decode("utf8") - regex_c = re.compile(combined) - matches = set() - for i, ltext in enumerate(self.lines, start=1): - if regex_c.search(ltext): - matches.add(i) - return matches - - def _raw_parse(self): - """Parse the source to find the interesting facts about its lines. - - A handful of attributes are updated. - - """ - # Find lines which match an exclusion pattern. - if self.exclude: - self.raw_excluded = self.lines_matching(self.exclude) - - # Tokenize, to find excluded suites, to find docstrings, and to find - # multi-line statements. - indent = 0 - exclude_indent = 0 - excluding = False - excluding_decorators = False - prev_toktype = token.INDENT - first_line = None - empty = True - first_on_line = True - - tokgen = generate_tokens(self.text) - for toktype, ttext, (slineno, _), (elineno, _), ltext in tokgen: - if self.show_tokens: # pragma: debugging - print("%10s %5s %-20r %r" % ( - tokenize.tok_name.get(toktype, toktype), - nice_pair((slineno, elineno)), ttext, ltext - )) - if toktype == token.INDENT: - indent += 1 - elif toktype == token.DEDENT: - indent -= 1 - elif toktype == token.NAME: - if ttext == 'class': - # Class definitions look like branches in the bytecode, so - # we need to exclude them. The simplest way is to note the - # lines with the 'class' keyword. 
- self.raw_classdefs.add(slineno) - elif toktype == token.OP: - if ttext == ':': - should_exclude = (elineno in self.raw_excluded) or excluding_decorators - if not excluding and should_exclude: - # Start excluding a suite. We trigger off of the colon - # token so that the #pragma comment will be recognized on - # the same line as the colon. - self.raw_excluded.add(elineno) - exclude_indent = indent - excluding = True - excluding_decorators = False - elif ttext == '@' and first_on_line: - # A decorator. - if elineno in self.raw_excluded: - excluding_decorators = True - if excluding_decorators: - self.raw_excluded.add(elineno) - elif toktype == token.STRING and prev_toktype == token.INDENT: - # Strings that are first on an indented line are docstrings. - # (a trick from trace.py in the stdlib.) This works for - # 99.9999% of cases. For the rest (!) see: - # http://stackoverflow.com/questions/1769332/x/1769794#1769794 - self.raw_docstrings.update(range(slineno, elineno+1)) - elif toktype == token.NEWLINE: - if first_line is not None and elineno != first_line: - # We're at the end of a line, and we've ended on a - # different line than the first line of the statement, - # so record a multi-line range. - for l in range(first_line, elineno+1): - self._multiline[l] = first_line - first_line = None - first_on_line = True - - if ttext.strip() and toktype != tokenize.COMMENT: - # A non-whitespace token. - empty = False - if first_line is None: - # The token is not whitespace, and is the first in a - # statement. - first_line = slineno - # Check whether to end an excluded suite. - if excluding and indent <= exclude_indent: - excluding = False - if excluding: - self.raw_excluded.add(elineno) - first_on_line = False - - prev_toktype = toktype - - # Find the starts of the executable statements. - if not empty: - self.raw_statements.update(self.byte_parser._find_statements()) - - # The first line of modules can lie and say 1 always, even if the first - # line of code is later. 
If so, map 1 to the actual first line of the - # module. - if env.PYBEHAVIOR.module_firstline_1 and self._multiline: - self._multiline[1] = min(self.raw_statements) - - def first_line(self, line): - """Return the first line number of the statement including `line`.""" - if line < 0: - line = -self._multiline.get(-line, -line) - else: - line = self._multiline.get(line, line) - return line - - def first_lines(self, lines): - """Map the line numbers in `lines` to the correct first line of the - statement. - - Returns a set of the first lines. - - """ - return {self.first_line(l) for l in lines} - - def translate_lines(self, lines): - """Implement `FileReporter.translate_lines`.""" - return self.first_lines(lines) - - def translate_arcs(self, arcs): - """Implement `FileReporter.translate_arcs`.""" - return [(self.first_line(a), self.first_line(b)) for (a, b) in arcs] - - def parse_source(self): - """Parse source text to find executable lines, excluded lines, etc. - - Sets the .excluded and .statements attributes, normalized to the first - line of multi-line statements. - - """ - try: - self._raw_parse() - except (tokenize.TokenError, IndentationError) as err: - if hasattr(err, "lineno"): - lineno = err.lineno # IndentationError - else: - lineno = err.args[1][0] # TokenError - raise NotPython( - u"Couldn't parse '%s' as Python source: '%s' at line %d" % ( - self.filename, err.args[0], lineno - ) - ) - - self.excluded = self.first_lines(self.raw_excluded) - - ignore = self.excluded | self.raw_docstrings - starts = self.raw_statements - ignore - self.statements = self.first_lines(starts) - ignore - - def arcs(self): - """Get information about the arcs available in the code. - - Returns a set of line number pairs. Line numbers have been normalized - to the first line of multi-line statements. - - """ - if self._all_arcs is None: - self._analyze_ast() - return self._all_arcs - - def _analyze_ast(self): - """Run the AstArcAnalyzer and save its results. 
-
- `_all_arcs` is the set of arcs in the code.
-
- """
- aaa = AstArcAnalyzer(self.text, self.raw_statements, self._multiline)
- aaa.analyze()
-
- self._all_arcs = set()
- for l1, l2 in aaa.arcs:
- fl1 = self.first_line(l1)
- fl2 = self.first_line(l2)
- if fl1 != fl2:
- self._all_arcs.add((fl1, fl2))
-
- self._missing_arc_fragments = aaa.missing_arc_fragments
- self._raw_funcdefs = aaa.funcdefs
-
- def exit_counts(self):
- """Get a count of exits from each line.
-
- Excluded lines are not counted.
-
- """
- exit_counts = collections.defaultdict(int)
- for l1, l2 in self.arcs():
- if l1 < 0:
- # Don't ever report -1 as a line number
- continue
- if l1 in self.excluded:
- # Don't report excluded lines as line numbers.
- continue
- if l2 in self.excluded:
- # Arcs to excluded lines shouldn't count.
- continue
- exit_counts[l1] += 1
-
- # Class definitions have one extra exit, so remove one for each:
- for l in self.raw_classdefs:
- # Ensure key is there: class definitions can include excluded lines.
- if l in exit_counts:
- exit_counts[l] -= 1
-
- return exit_counts
-
- def missing_arc_description(self, start, end, executed_arcs=None):
- """Provide an English sentence describing a missing arc."""
- if self._missing_arc_fragments is None:
- self._analyze_ast()
-
- actual_start = start
-
- if (
- executed_arcs and
- end < 0 and end == -start and
- (end, start) not in executed_arcs and
- (end, start) in self._missing_arc_fragments
- ):
- # It's a one-line callable, and we never even started it,
- # and we have a message about not starting it.
- start, end = end, start
-
- fragment_pairs = self._missing_arc_fragments.get((start, end), [(None, None)])
-
- msgs = []
- for smsg, emsg in fragment_pairs:
- if emsg is None:
- if end < 0:
- # Hmm, maybe we have a one-line callable, let's check.
- if (-end, end) in self._missing_arc_fragments: - return self.missing_arc_description(-end, end) - emsg = "didn't jump to the function exit" - else: - emsg = "didn't jump to line {lineno}" - emsg = emsg.format(lineno=end) - - msg = "line {start} {emsg}".format(start=actual_start, emsg=emsg) - if smsg is not None: - msg += ", because {smsg}".format(smsg=smsg.format(lineno=actual_start)) - - msgs.append(msg) - - return " or ".join(msgs) - - -class ByteParser(object): - """Parse bytecode to understand the structure of code.""" - - @contract(text='unicode') - def __init__(self, text, code=None, filename=None): - self.text = text - if code: - self.code = code - else: - try: - self.code = compile_unicode(text, filename, "exec") - except SyntaxError as synerr: - raise NotPython( - u"Couldn't parse '%s' as Python source: '%s' at line %d" % ( - filename, synerr.msg, synerr.lineno - ) - ) - - # Alternative Python implementations don't always provide all the - # attributes on code objects that we need to do the analysis. - for attr in ['co_lnotab', 'co_firstlineno']: - if not hasattr(self.code, attr): - raise StopEverything( # pragma: only jython - "This implementation of Python doesn't support code analysis.\n" - "Run coverage.py under another Python for this command." - ) - - def child_parsers(self): - """Iterate over all the code objects nested within this one. - - The iteration includes `self` as its first value. - - """ - return (ByteParser(self.text, code=c) for c in code_objects(self.code)) - - def _line_numbers(self): - """Yield the line numbers possible in this code object. - - Uses co_lnotab described in Python/compile.c to find the - line numbers. Produces a sequence: l0, l1, ... - """ - if hasattr(self.code, "co_lines"): - for _, _, line in self.code.co_lines(): - if line is not None: - yield line - else: - # Adapted from dis.py in the standard library. 
- byte_increments = bytes_to_ints(self.code.co_lnotab[0::2]) - line_increments = bytes_to_ints(self.code.co_lnotab[1::2]) - - last_line_num = None - line_num = self.code.co_firstlineno - byte_num = 0 - for byte_incr, line_incr in zip(byte_increments, line_increments): - if byte_incr: - if line_num != last_line_num: - yield line_num - last_line_num = line_num - byte_num += byte_incr - if env.PYBEHAVIOR.negative_lnotab and line_incr >= 0x80: - line_incr -= 0x100 - line_num += line_incr - if line_num != last_line_num: - yield line_num - - def _find_statements(self): - """Find the statements in `self.code`. - - Produce a sequence of line numbers that start statements. Recurses - into all code objects reachable from `self.code`. - - """ - for bp in self.child_parsers(): - # Get all of the lineno information from this code. - for l in bp._line_numbers(): - yield l - - -# -# AST analysis -# - -class LoopBlock(object): - """A block on the block stack representing a `for` or `while` loop.""" - @contract(start=int) - def __init__(self, start): - # The line number where the loop starts. - self.start = start - # A set of ArcStarts, the arcs from break statements exiting this loop. - self.break_exits = set() - - -class FunctionBlock(object): - """A block on the block stack representing a function definition.""" - @contract(start=int, name=str) - def __init__(self, start, name): - # The line number where the function starts. - self.start = start - # The name of the function. - self.name = name - - -class TryBlock(object): - """A block on the block stack representing a `try` block.""" - @contract(handler_start='int|None', final_start='int|None') - def __init__(self, handler_start, final_start): - # The line number of the first "except" handler, if any. - self.handler_start = handler_start - # The line number of the "finally:" clause, if any. 
- self.final_start = final_start - - # The ArcStarts for breaks/continues/returns/raises inside the "try:" - # that need to route through the "finally:" clause. - self.break_from = set() - self.continue_from = set() - self.return_from = set() - self.raise_from = set() - - -class ArcStart(collections.namedtuple("Arc", "lineno, cause")): - """The information needed to start an arc. - - `lineno` is the line number the arc starts from. - - `cause` is an English text fragment used as the `startmsg` for - AstArcAnalyzer.missing_arc_fragments. It will be used to describe why an - arc wasn't executed, so should fit well into a sentence of the form, - "Line 17 didn't run because {cause}." The fragment can include "{lineno}" - to have `lineno` interpolated into it. - - """ - def __new__(cls, lineno, cause=None): - return super(ArcStart, cls).__new__(cls, lineno, cause) - - -# Define contract words that PyContract doesn't have. -# ArcStarts is for a list or set of ArcStart's. -new_contract('ArcStarts', lambda seq: all(isinstance(x, ArcStart) for x in seq)) - - -# Turn on AST dumps with an environment variable. -# $set_env.py: COVERAGE_AST_DUMP - Dump the AST nodes when parsing code. -AST_DUMP = bool(int(os.environ.get("COVERAGE_AST_DUMP", 0))) - -class NodeList(object): - """A synthetic fictitious node, containing a sequence of nodes. - - This is used when collapsing optimized if-statements, to represent the - unconditional execution of one of the clauses. - - """ - def __init__(self, body): - self.body = body - self.lineno = body[0].lineno - - -# TODO: some add_arcs methods here don't add arcs, they return them. Rename them. -# TODO: the cause messages have too many commas. -# TODO: Shouldn't the cause messages join with "and" instead of "or"? 
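The `co_lnotab` decoding that `ByteParser._line_numbers` performs above (the fallback for CPython versions without `co_lines()`) can be exercised in isolation. The following is a sketch, not part of coverage.py: the `lnotab_lines` helper and its sample byte strings are invented for illustration, and it assumes Python 3, where indexing `bytes` yields ints (so the `bytes_to_ints` shim from `coverage.backward` is not needed).

```python
def lnotab_lines(co_lnotab, firstlineno, negative_lnotab=True):
    """Decode a co_lnotab byte string into the line numbers it mentions.

    Mirrors ByteParser._line_numbers: co_lnotab is pairs of
    (byte_incr, line_incr), and line increments >= 0x80 are treated as
    negative deltas when the interpreter uses signed lnotab (CPython 3.6+,
    env.PYBEHAVIOR.negative_lnotab in the code above).
    """
    byte_increments = co_lnotab[0::2]   # even offsets: bytecode deltas
    line_increments = co_lnotab[1::2]   # odd offsets: line-number deltas

    last_line_num = None
    line_num = firstlineno
    lines = []
    for byte_incr, line_incr in zip(byte_increments, line_increments):
        if byte_incr:
            # A new bytecode range starts: emit the line we were on,
            # unless it was already emitted.
            if line_num != last_line_num:
                lines.append(line_num)
                last_line_num = line_num
        if negative_lnotab and line_incr >= 0x80:
            line_incr -= 0x100          # signed byte: step backward
        line_num += line_incr
    if line_num != last_line_num:
        lines.append(line_num)          # the final range's line
    return lines
```

For example, `lnotab_lines(bytes([6, 1, 6, 1]), 1)` returns `[1, 2, 3]`, and a `0xFF` delta (as in `lnotab_lines(bytes([4, 0xFF]), 5)`, which returns `[5, 4]`) steps the line number backward, which is exactly the case the `negative_lnotab` handling above exists for.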
- -class AstArcAnalyzer(object): - """Analyze source text with an AST to find executable code paths.""" - - @contract(text='unicode', statements=set) - def __init__(self, text, statements, multiline): - self.root_node = ast.parse(neuter_encoding_declaration(text)) - # TODO: I think this is happening in too many places. - self.statements = {multiline.get(l, l) for l in statements} - self.multiline = multiline - - if AST_DUMP: # pragma: debugging - # Dump the AST so that failing tests have helpful output. - print("Statements: {}".format(self.statements)) - print("Multiline map: {}".format(self.multiline)) - ast_dump(self.root_node) - - self.arcs = set() - - # A map from arc pairs to a list of pairs of sentence fragments: - # { (start, end): [(startmsg, endmsg), ...], } - # - # For an arc from line 17, they should be usable like: - # "Line 17 {endmsg}, because {startmsg}" - self.missing_arc_fragments = collections.defaultdict(list) - self.block_stack = [] - self.funcdefs = set() - - # $set_env.py: COVERAGE_TRACK_ARCS - Trace every arc added while parsing code. - self.debug = bool(int(os.environ.get("COVERAGE_TRACK_ARCS", 0))) - - def analyze(self): - """Examine the AST tree from `root_node` to determine possible arcs. - - This sets the `arcs` attribute to be a set of (from, to) line number - pairs. 
- - """ - for node in ast.walk(self.root_node): - node_name = node.__class__.__name__ - code_object_handler = getattr(self, "_code_object__" + node_name, None) - if code_object_handler is not None: - code_object_handler(node) - - @contract(start=int, end=int) - def add_arc(self, start, end, smsg=None, emsg=None): - """Add an arc, including message fragments to use if it is missing.""" - if self.debug: # pragma: debugging - print("\nAdding arc: ({}, {}): {!r}, {!r}".format(start, end, smsg, emsg)) - print(short_stack(limit=6)) - self.arcs.add((start, end)) - - if smsg is not None or emsg is not None: - self.missing_arc_fragments[(start, end)].append((smsg, emsg)) - - def nearest_blocks(self): - """Yield the blocks in nearest-to-farthest order.""" - return reversed(self.block_stack) - - @contract(returns=int) - def line_for_node(self, node): - """What is the right line number to use for this node? - - This dispatches to _line__Node functions where needed. - - """ - node_name = node.__class__.__name__ - handler = getattr(self, "_line__" + node_name, None) - if handler is not None: - return handler(node) - else: - return node.lineno - - def _line_decorated(self, node): - """Compute first line number for things that can be decorated (classes and functions).""" - lineno = node.lineno - if env.PYBEHAVIOR.trace_decorated_def: - if node.decorator_list: - lineno = node.decorator_list[0].lineno - return lineno - - def _line__Assign(self, node): - return self.line_for_node(node.value) - - _line__ClassDef = _line_decorated - - def _line__Dict(self, node): - # Python 3.5 changed how dict literals are made. - if env.PYVERSION >= (3, 5) and node.keys: - if node.keys[0] is not None: - return node.keys[0].lineno - else: - # Unpacked dict literals `{**{'a':1}}` have None as the key, - # use the value in that case. 
- return node.values[0].lineno - else: - return node.lineno - - _line__FunctionDef = _line_decorated - _line__AsyncFunctionDef = _line_decorated - - def _line__List(self, node): - if node.elts: - return self.line_for_node(node.elts[0]) - else: - return node.lineno - - def _line__Module(self, node): - if env.PYBEHAVIOR.module_firstline_1: - return 1 - elif node.body: - return self.line_for_node(node.body[0]) - else: - # Empty modules have no line number, they always start at 1. - return 1 - - # The node types that just flow to the next node with no complications. - OK_TO_DEFAULT = { - "Assign", "Assert", "AugAssign", "Delete", "Exec", "Expr", "Global", - "Import", "ImportFrom", "Nonlocal", "Pass", "Print", - } - - @contract(returns='ArcStarts') - def add_arcs(self, node): - """Add the arcs for `node`. - - Return a set of ArcStarts, exits from this node to the next. Because a - node represents an entire sub-tree (including its children), the exits - from a node can be arbitrarily complex:: - - if something(1): - if other(2): - doit(3) - else: - doit(5) - - There are two exits from line 1: they start at line 3 and line 5. - - """ - node_name = node.__class__.__name__ - handler = getattr(self, "_handle__" + node_name, None) - if handler is not None: - return handler(node) - else: - # No handler: either it's something that's ok to default (a simple - # statement), or it's something we overlooked. Change this 0 to 1 - # to see if it's overlooked. - if 0: - if node_name not in self.OK_TO_DEFAULT: - print("*** Unhandled: {}".format(node)) - - # Default for simple statements: one exit from this node. - return {ArcStart(self.line_for_node(node))} - - @one_of("from_start, prev_starts") - @contract(returns='ArcStarts') - def add_body_arcs(self, body, from_start=None, prev_starts=None): - """Add arcs for the body of a compound statement. - - `body` is the body node. `from_start` is a single `ArcStart` that can - be the previous line in flow before this body. 
`prev_starts` is a set - of ArcStarts that can be the previous line. Only one of them should be - given. - - Returns a set of ArcStarts, the exits from this body. - - """ - if prev_starts is None: - prev_starts = {from_start} - for body_node in body: - lineno = self.line_for_node(body_node) - first_line = self.multiline.get(lineno, lineno) - if first_line not in self.statements: - body_node = self.find_non_missing_node(body_node) - if body_node is None: - continue - lineno = self.line_for_node(body_node) - for prev_start in prev_starts: - self.add_arc(prev_start.lineno, lineno, prev_start.cause) - prev_starts = self.add_arcs(body_node) - return prev_starts - - def find_non_missing_node(self, node): - """Search `node` looking for a child that has not been optimized away. - - This might return the node you started with, or it will work recursively - to find a child node in self.statements. - - Returns a node, or None if none of the node remains. - - """ - # This repeats work just done in add_body_arcs, but this duplication - # means we can avoid a function call in the 99.9999% case of not - # optimizing away statements. - lineno = self.line_for_node(node) - first_line = self.multiline.get(lineno, lineno) - if first_line in self.statements: - return node - - missing_fn = getattr(self, "_missing__" + node.__class__.__name__, None) - if missing_fn: - node = missing_fn(node) - else: - node = None - return node - - # Missing nodes: _missing__* - # - # Entire statements can be optimized away by Python. They will appear in - # the AST, but not the bytecode. These functions are called (by - # find_non_missing_node) to find a node to use instead of the missing - # node. They can return None if the node should truly be gone. - - def _missing__If(self, node): - # If the if-node is missing, then one of its children might still be - # here, but not both. So return the first of the two that isn't missing. - # Use a NodeList to hold the clauses as a single node. 
- non_missing = self.find_non_missing_node(NodeList(node.body)) - if non_missing: - return non_missing - if node.orelse: - return self.find_non_missing_node(NodeList(node.orelse)) - return None - - def _missing__NodeList(self, node): - # A NodeList might be a mixture of missing and present nodes. Find the - # ones that are present. - non_missing_children = [] - for child in node.body: - child = self.find_non_missing_node(child) - if child is not None: - non_missing_children.append(child) - - # Return the simplest representation of the present children. - if not non_missing_children: - return None - if len(non_missing_children) == 1: - return non_missing_children[0] - return NodeList(non_missing_children) - - def _missing__While(self, node): - body_nodes = self.find_non_missing_node(NodeList(node.body)) - if not body_nodes: - return None - # Make a synthetic While-true node. - new_while = ast.While() - new_while.lineno = body_nodes.lineno - new_while.test = ast.Name() - new_while.test.lineno = body_nodes.lineno - new_while.test.id = "True" - new_while.body = body_nodes.body - new_while.orelse = None - return new_while - - def is_constant_expr(self, node): - """Is this a compile-time constant?""" - node_name = node.__class__.__name__ - if node_name in ["Constant", "NameConstant", "Num"]: - return "Num" - elif node_name == "Name": - if node.id in ["True", "False", "None", "__debug__"]: - return "Name" - return None - - # In the fullness of time, these might be good tests to write: - # while EXPR: - # while False: - # listcomps hidden deep in other expressions - # listcomps hidden in lists: x = [[i for i in range(10)]] - # nested function definitions - - - # Exit processing: process_*_exits - # - # These functions process the four kinds of jump exits: break, continue, - # raise, and return. To figure out where an exit goes, we have to look at - # the block stack context. 
For example, a break will jump to the nearest - # enclosing loop block, or the nearest enclosing finally block, whichever - # is nearer. - - @contract(exits='ArcStarts') - def process_break_exits(self, exits): - """Add arcs due to jumps from `exits` being breaks.""" - for block in self.nearest_blocks(): - if isinstance(block, LoopBlock): - block.break_exits.update(exits) - break - elif isinstance(block, TryBlock) and block.final_start is not None: - block.break_from.update(exits) - break - - @contract(exits='ArcStarts') - def process_continue_exits(self, exits): - """Add arcs due to jumps from `exits` being continues.""" - for block in self.nearest_blocks(): - if isinstance(block, LoopBlock): - for xit in exits: - self.add_arc(xit.lineno, block.start, xit.cause) - break - elif isinstance(block, TryBlock) and block.final_start is not None: - block.continue_from.update(exits) - break - - @contract(exits='ArcStarts') - def process_raise_exits(self, exits): - """Add arcs due to jumps from `exits` being raises.""" - for block in self.nearest_blocks(): - if isinstance(block, TryBlock): - if block.handler_start is not None: - for xit in exits: - self.add_arc(xit.lineno, block.handler_start, xit.cause) - break - elif block.final_start is not None: - block.raise_from.update(exits) - break - elif isinstance(block, FunctionBlock): - for xit in exits: - self.add_arc( - xit.lineno, -block.start, xit.cause, - "didn't except from function {!r}".format(block.name), - ) - break - - @contract(exits='ArcStarts') - def process_return_exits(self, exits): - """Add arcs due to jumps from `exits` being returns.""" - for block in self.nearest_blocks(): - if isinstance(block, TryBlock) and block.final_start is not None: - block.return_from.update(exits) - break - elif isinstance(block, FunctionBlock): - for xit in exits: - self.add_arc( - xit.lineno, -block.start, xit.cause, - "didn't return from function {!r}".format(block.name), - ) - break - - - # Handlers: _handle__* - # - # Each 
handler deals with a specific AST node type, dispatched from - # add_arcs. Handlers return the set of exits from that node, and can - # also call self.add_arc to record arcs they find. These functions mirror - # the Python semantics of each syntactic construct. See the docstring - # for add_arcs to understand the concept of exits from a node. - - @contract(returns='ArcStarts') - def _handle__Break(self, node): - here = self.line_for_node(node) - break_start = ArcStart(here, cause="the break on line {lineno} wasn't executed") - self.process_break_exits([break_start]) - return set() - - @contract(returns='ArcStarts') - def _handle_decorated(self, node): - """Add arcs for things that can be decorated (classes and functions).""" - main_line = last = node.lineno - if node.decorator_list: - if env.PYBEHAVIOR.trace_decorated_def: - last = None - for dec_node in node.decorator_list: - dec_start = self.line_for_node(dec_node) - if last is not None and dec_start != last: - self.add_arc(last, dec_start) - last = dec_start - if env.PYBEHAVIOR.trace_decorated_def: - self.add_arc(last, main_line) - last = main_line - # The definition line may have been missed, but we should have it - # in `self.statements`. For some constructs, `line_for_node` is - # not what we'd think of as the first line in the statement, so map - # it to the first one. - if node.body: - body_start = self.line_for_node(node.body[0]) - body_start = self.multiline.get(body_start, body_start) - for lineno in range(last+1, body_start): - if lineno in self.statements: - self.add_arc(last, lineno) - last = lineno - # The body is handled in collect_arcs. 
- return {ArcStart(last)} - - _handle__ClassDef = _handle_decorated - - @contract(returns='ArcStarts') - def _handle__Continue(self, node): - here = self.line_for_node(node) - continue_start = ArcStart(here, cause="the continue on line {lineno} wasn't executed") - self.process_continue_exits([continue_start]) - return set() - - @contract(returns='ArcStarts') - def _handle__For(self, node): - start = self.line_for_node(node.iter) - self.block_stack.append(LoopBlock(start=start)) - from_start = ArcStart(start, cause="the loop on line {lineno} never started") - exits = self.add_body_arcs(node.body, from_start=from_start) - # Any exit from the body will go back to the top of the loop. - for xit in exits: - self.add_arc(xit.lineno, start, xit.cause) - my_block = self.block_stack.pop() - exits = my_block.break_exits - from_start = ArcStart(start, cause="the loop on line {lineno} didn't complete") - if node.orelse: - else_exits = self.add_body_arcs(node.orelse, from_start=from_start) - exits |= else_exits - else: - # No else clause: exit from the for line. 
- exits.add(from_start) - return exits - - _handle__AsyncFor = _handle__For - - _handle__FunctionDef = _handle_decorated - _handle__AsyncFunctionDef = _handle_decorated - - @contract(returns='ArcStarts') - def _handle__If(self, node): - start = self.line_for_node(node.test) - from_start = ArcStart(start, cause="the condition on line {lineno} was never true") - exits = self.add_body_arcs(node.body, from_start=from_start) - from_start = ArcStart(start, cause="the condition on line {lineno} was never false") - exits |= self.add_body_arcs(node.orelse, from_start=from_start) - return exits - - @contract(returns='ArcStarts') - def _handle__NodeList(self, node): - start = self.line_for_node(node) - exits = self.add_body_arcs(node.body, from_start=ArcStart(start)) - return exits - - @contract(returns='ArcStarts') - def _handle__Raise(self, node): - here = self.line_for_node(node) - raise_start = ArcStart(here, cause="the raise on line {lineno} wasn't executed") - self.process_raise_exits([raise_start]) - # `raise` statement jumps away, no exits from here. - return set() - - @contract(returns='ArcStarts') - def _handle__Return(self, node): - here = self.line_for_node(node) - return_start = ArcStart(here, cause="the return on line {lineno} wasn't executed") - self.process_return_exits([return_start]) - # `return` statement jumps away, no exits from here. - return set() - - @contract(returns='ArcStarts') - def _handle__Try(self, node): - if node.handlers: - handler_start = self.line_for_node(node.handlers[0]) - else: - handler_start = None - - if node.finalbody: - final_start = self.line_for_node(node.finalbody[0]) - else: - final_start = None - - try_block = TryBlock(handler_start, final_start) - self.block_stack.append(try_block) - - start = self.line_for_node(node) - exits = self.add_body_arcs(node.body, from_start=ArcStart(start)) - - # We're done with the `try` body, so this block no longer handles - # exceptions. 
We keep the block so the `finally` clause can pick up - # flows from the handlers and `else` clause. - if node.finalbody: - try_block.handler_start = None - if node.handlers: - # If there are `except` clauses, then raises in the try body - # will already jump to them. Start this set over for raises in - # `except` and `else`. - try_block.raise_from = set() - else: - self.block_stack.pop() - - handler_exits = set() - - if node.handlers: - last_handler_start = None - for handler_node in node.handlers: - handler_start = self.line_for_node(handler_node) - if last_handler_start is not None: - self.add_arc(last_handler_start, handler_start) - last_handler_start = handler_start - from_cause = "the exception caught by line {lineno} didn't happen" - from_start = ArcStart(handler_start, cause=from_cause) - handler_exits |= self.add_body_arcs(handler_node.body, from_start=from_start) - - if node.orelse: - exits = self.add_body_arcs(node.orelse, prev_starts=exits) - - exits |= handler_exits - - if node.finalbody: - self.block_stack.pop() - final_from = ( # You can get to the `finally` clause from: - exits | # the exits of the body or `else` clause, - try_block.break_from | # or a `break`, - try_block.continue_from | # or a `continue`, - try_block.raise_from | # or a `raise`, - try_block.return_from # or a `return`. 
- ) - - final_exits = self.add_body_arcs(node.finalbody, prev_starts=final_from) - - if try_block.break_from: - if env.PYBEHAVIOR.finally_jumps_back: - for break_line in try_block.break_from: - lineno = break_line.lineno - cause = break_line.cause.format(lineno=lineno) - for final_exit in final_exits: - self.add_arc(final_exit.lineno, lineno, cause) - breaks = try_block.break_from - else: - breaks = self._combine_finally_starts(try_block.break_from, final_exits) - self.process_break_exits(breaks) - - if try_block.continue_from: - if env.PYBEHAVIOR.finally_jumps_back: - for continue_line in try_block.continue_from: - lineno = continue_line.lineno - cause = continue_line.cause.format(lineno=lineno) - for final_exit in final_exits: - self.add_arc(final_exit.lineno, lineno, cause) - continues = try_block.continue_from - else: - continues = self._combine_finally_starts(try_block.continue_from, final_exits) - self.process_continue_exits(continues) - - if try_block.raise_from: - self.process_raise_exits( - self._combine_finally_starts(try_block.raise_from, final_exits) - ) - - if try_block.return_from: - if env.PYBEHAVIOR.finally_jumps_back: - for return_line in try_block.return_from: - lineno = return_line.lineno - cause = return_line.cause.format(lineno=lineno) - for final_exit in final_exits: - self.add_arc(final_exit.lineno, lineno, cause) - returns = try_block.return_from - else: - returns = self._combine_finally_starts(try_block.return_from, final_exits) - self.process_return_exits(returns) - - if exits: - # The finally clause's exits are only exits for the try block - # as a whole if the try block had some exits to begin with. - exits = final_exits - - return exits - - @contract(starts='ArcStarts', exits='ArcStarts', returns='ArcStarts') - def _combine_finally_starts(self, starts, exits): - """Helper for building the cause of `finally` branches. 
- - "finally" clauses might not execute their exits, and the causes could - be due to a failure to execute any of the exits in the try block. So - we use the causes from `starts` as the causes for `exits`. - """ - causes = [] - for start in sorted(starts): - if start.cause is not None: - causes.append(start.cause.format(lineno=start.lineno)) - cause = " or ".join(causes) - exits = {ArcStart(xit.lineno, cause) for xit in exits} - return exits - - @contract(returns='ArcStarts') - def _handle__TryExcept(self, node): - # Python 2.7 uses separate TryExcept and TryFinally nodes. If we get - # TryExcept, it means there was no finally, so fake it, and treat as - # a general Try node. - node.finalbody = [] - return self._handle__Try(node) - - @contract(returns='ArcStarts') - def _handle__TryFinally(self, node): - # Python 2.7 uses separate TryExcept and TryFinally nodes. If we get - # TryFinally, see if there's a TryExcept nested inside. If so, merge - # them. Otherwise, fake fields to complete a Try node. 
- node.handlers = [] - node.orelse = [] - - first = node.body[0] - if first.__class__.__name__ == "TryExcept" and node.lineno == first.lineno: - assert len(node.body) == 1 - node.body = first.body - node.handlers = first.handlers - node.orelse = first.orelse - - return self._handle__Try(node) - - @contract(returns='ArcStarts') - def _handle__While(self, node): - start = to_top = self.line_for_node(node.test) - constant_test = self.is_constant_expr(node.test) - top_is_body0 = False - if constant_test and (env.PY3 or constant_test == "Num"): - top_is_body0 = True - if env.PYBEHAVIOR.keep_constant_test: - top_is_body0 = False - if top_is_body0: - to_top = self.line_for_node(node.body[0]) - self.block_stack.append(LoopBlock(start=to_top)) - from_start = ArcStart(start, cause="the condition on line {lineno} was never true") - exits = self.add_body_arcs(node.body, from_start=from_start) - for xit in exits: - self.add_arc(xit.lineno, to_top, xit.cause) - exits = set() - my_block = self.block_stack.pop() - exits.update(my_block.break_exits) - from_start = ArcStart(start, cause="the condition on line {lineno} was never false") - if node.orelse: - else_exits = self.add_body_arcs(node.orelse, from_start=from_start) - exits |= else_exits - else: - # No `else` clause: you can exit from the start. - if not constant_test: - exits.add(from_start) - return exits - - @contract(returns='ArcStarts') - def _handle__With(self, node): - start = self.line_for_node(node) - exits = self.add_body_arcs(node.body, from_start=ArcStart(start)) - return exits - - _handle__AsyncWith = _handle__With - - def _code_object__Module(self, node): - start = self.line_for_node(node) - if node.body: - exits = self.add_body_arcs(node.body, from_start=ArcStart(-start)) - for xit in exits: - self.add_arc(xit.lineno, -start, xit.cause, "didn't exit the module") - else: - # Empty module. 
- self.add_arc(-start, start) - self.add_arc(start, -start) - - def _process_function_def(self, start, node): - self.funcdefs.add((start, node.body[-1].lineno, node.name)) - - def _code_object__FunctionDef(self, node): - start = self.line_for_node(node) - self.block_stack.append(FunctionBlock(start=start, name=node.name)) - exits = self.add_body_arcs(node.body, from_start=ArcStart(-start)) - self.process_return_exits(exits) - self._process_function_def(start, node) - self.block_stack.pop() - - _code_object__AsyncFunctionDef = _code_object__FunctionDef - - def _code_object__ClassDef(self, node): - start = self.line_for_node(node) - self.add_arc(-start, start) - exits = self.add_body_arcs(node.body, from_start=ArcStart(start)) - for xit in exits: - self.add_arc( - xit.lineno, -start, xit.cause, - "didn't exit the body of class {!r}".format(node.name), - ) - - def _make_oneline_code_method(noun): # pylint: disable=no-self-argument - """A function to make methods for online callable _code_object__ methods.""" - def _code_object__oneline_callable(self, node): - start = self.line_for_node(node) - self.add_arc(-start, start, None, "didn't run the {} on line {}".format(noun, start)) - self.add_arc( - start, -start, None, - "didn't finish the {} on line {}".format(noun, start), - ) - return _code_object__oneline_callable - - _code_object__Lambda = _make_oneline_code_method("lambda") - _code_object__GeneratorExp = _make_oneline_code_method("generator expression") - _code_object__DictComp = _make_oneline_code_method("dictionary comprehension") - _code_object__SetComp = _make_oneline_code_method("set comprehension") - if env.PY3: - _code_object__ListComp = _make_oneline_code_method("list comprehension") - - -if AST_DUMP: # pragma: debugging - # Code only used when dumping the AST for debugging. 
- - SKIP_DUMP_FIELDS = ["ctx"] - - def _is_simple_value(value): - """Is `value` simple enough to be displayed on a single line?""" - return ( - value in [None, [], (), {}, set()] or - isinstance(value, (string_class, int, float)) - ) - - def ast_dump(node, depth=0): - """Dump the AST for `node`. - - This recursively walks the AST, printing a readable version. - - """ - indent = " " * depth - if not isinstance(node, ast.AST): - print("{}<{} {!r}>".format(indent, node.__class__.__name__, node)) - return - - lineno = getattr(node, "lineno", None) - if lineno is not None: - linemark = " @ {}".format(node.lineno) - else: - linemark = "" - head = "{}<{}{}".format(indent, node.__class__.__name__, linemark) - - named_fields = [ - (name, value) - for name, value in ast.iter_fields(node) - if name not in SKIP_DUMP_FIELDS - ] - if not named_fields: - print("{}>".format(head)) - elif len(named_fields) == 1 and _is_simple_value(named_fields[0][1]): - field_name, value = named_fields[0] - print("{} {}: {!r}>".format(head, field_name, value)) - else: - print(head) - if 0: - print("{}# mro: {}".format( - indent, ", ".join(c.__name__ for c in node.__class__.__mro__[1:]), - )) - next_indent = indent + " " - for field_name, value in named_fields: - prefix = "{}{}:".format(next_indent, field_name) - if _is_simple_value(value): - print("{} {!r}".format(prefix, value)) - elif isinstance(value, list): - print("{} [".format(prefix)) - for n in value: - ast_dump(n, depth + 8) - print("{}]".format(next_indent)) - else: - print(prefix) - ast_dump(value, depth + 8) - - print("{}>".format(indent)) diff --git a/contrib/python/coverage/py2/coverage/phystokens.py b/contrib/python/coverage/py2/coverage/phystokens.py deleted file mode 100644 index 54378b3bc8c..00000000000 --- a/contrib/python/coverage/py2/coverage/phystokens.py +++ /dev/null @@ -1,297 +0,0 @@ -# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 -# For details: 
https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt - -"""Better tokenizing for coverage.py.""" - -import codecs -import keyword -import re -import sys -import token -import tokenize - -from coverage import env -from coverage.backward import iternext, unicode_class -from coverage.misc import contract - - -def phys_tokens(toks): - """Return all physical tokens, even line continuations. - - tokenize.generate_tokens() doesn't return a token for the backslash that - continues lines. This wrapper provides those tokens so that we can - re-create a faithful representation of the original source. - - Returns the same values as generate_tokens() - - """ - last_line = None - last_lineno = -1 - last_ttext = None - for ttype, ttext, (slineno, scol), (elineno, ecol), ltext in toks: - if last_lineno != elineno: - if last_line and last_line.endswith("\\\n"): - # We are at the beginning of a new line, and the last line - # ended with a backslash. We probably have to inject a - # backslash token into the stream. Unfortunately, there's more - # to figure out. This code:: - # - # usage = """\ - # HEY THERE - # """ - # - # triggers this condition, but the token text is:: - # - # '"""\\\nHEY THERE\n"""' - # - # so we need to figure out if the backslash is already in the - # string token or not. - inject_backslash = True - if last_ttext.endswith("\\"): - inject_backslash = False - elif ttype == token.STRING: - if "\n" in ttext and ttext.split('\n', 1)[0][-1] == '\\': - # It's a multi-line string and the first line ends with - # a backslash, so we don't need to inject another. - inject_backslash = False - if inject_backslash: - # Figure out what column the backslash is in. - ccol = len(last_line.split("\n")[-2]) - 1 - # Yield the token, with a fake token type. 
- yield ( - 99999, "\\\n", - (slineno, ccol), (slineno, ccol+2), - last_line - ) - last_line = ltext - if ttype not in (tokenize.NEWLINE, tokenize.NL): - last_ttext = ttext - yield ttype, ttext, (slineno, scol), (elineno, ecol), ltext - last_lineno = elineno - - -@contract(source='unicode') -def source_token_lines(source): - """Generate a series of lines, one for each line in `source`. - - Each line is a list of pairs, each pair is a token:: - - [('key', 'def'), ('ws', ' '), ('nam', 'hello'), ('op', '('), ... ] - - Each pair has a token class, and the token text. - - If you concatenate all the token texts, and then join them with newlines, - you should have your original `source` back, with two differences: - trailing whitespace is not preserved, and a final line with no newline - is indistinguishable from a final line with a newline. - - """ - - ws_tokens = {token.INDENT, token.DEDENT, token.NEWLINE, tokenize.NL} - line = [] - col = 0 - - source = source.expandtabs(8).replace('\r\n', '\n') - tokgen = generate_tokens(source) - - for ttype, ttext, (_, scol), (_, ecol), _ in phys_tokens(tokgen): - mark_start = True - for part in re.split('(\n)', ttext): - if part == '\n': - yield line - line = [] - col = 0 - mark_end = False - elif part == '': - mark_end = False - elif ttype in ws_tokens: - mark_end = False - else: - if mark_start and scol > col: - line.append(("ws", u" " * (scol - col))) - mark_start = False - tok_class = tokenize.tok_name.get(ttype, 'xx').lower()[:3] - if ttype == token.NAME and keyword.iskeyword(ttext): - tok_class = "key" - line.append((tok_class, part)) - mark_end = True - scol = 0 - if mark_end: - col = ecol - - if line: - yield line - - -class CachedTokenizer(object): - """A one-element cache around tokenize.generate_tokens. - - When reporting, coverage.py tokenizes files twice, once to find the - structure of the file, and once to syntax-color it. Tokenizing is - expensive, and easily cached. 
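The `phys_tokens` wrapper above exists because the standard tokenizer swallows line-continuation backslashes; that gap is easy to observe directly:

```python
import io
import tokenize

# tokenize never emits the backslash that continues a physical line,
# which is exactly the gap phys_tokens() fills when reconstructing source.
SRC = "x = 1 + \\\n    2\n"
tokens = list(tokenize.generate_tokens(io.StringIO(SRC).readline))
token_texts = [tok.string for tok in tokens]
print(any("\\" in text for text in token_texts))  # False
```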
- - This is a one-element cache so that our twice-in-a-row tokenizing doesn't - actually tokenize twice. - - """ - def __init__(self): - self.last_text = None - self.last_tokens = None - - @contract(text='unicode') - def generate_tokens(self, text): - """A stand-in for `tokenize.generate_tokens`.""" - if text != self.last_text: - self.last_text = text - readline = iternext(text.splitlines(True)) - self.last_tokens = list(tokenize.generate_tokens(readline)) - return self.last_tokens - -# Create our generate_tokens cache as a callable replacement function. -generate_tokens = CachedTokenizer().generate_tokens - - -COOKIE_RE = re.compile(r"^[ \t]*#.*coding[:=][ \t]*([-\w.]+)", flags=re.MULTILINE) - -@contract(source='bytes') -def _source_encoding_py2(source): - """Determine the encoding for `source`, according to PEP 263. - - `source` is a byte string, the text of the program. - - Returns a string, the name of the encoding. - - """ - assert isinstance(source, bytes) - - # Do this so the detect_encode code we copied will work. - readline = iternext(source.splitlines(True)) - - # This is mostly code adapted from Py3.2's tokenize module. - - def _get_normal_name(orig_enc): - """Imitates get_normal_name in tokenizer.c.""" - # Only care about the first 12 characters. - enc = orig_enc[:12].lower().replace("_", "-") - if re.match(r"^utf-8($|-)", enc): - return "utf-8" - if re.match(r"^(latin-1|iso-8859-1|iso-latin-1)($|-)", enc): - return "iso-8859-1" - return orig_enc - - # From detect_encode(): - # It detects the encoding from the presence of a UTF-8 BOM or an encoding - # cookie as specified in PEP-0263. If both a BOM and a cookie are present, - # but disagree, a SyntaxError will be raised. If the encoding cookie is an - # invalid charset, raise a SyntaxError. Note that if a UTF-8 BOM is found, - # 'utf-8-sig' is returned. - - # If no encoding is specified, then the default will be returned. 
- default = 'ascii' - - bom_found = False - encoding = None - - def read_or_stop(): - """Get the next source line, or ''.""" - try: - return readline() - except StopIteration: - return '' - - def find_cookie(line): - """Find an encoding cookie in `line`.""" - try: - line_string = line.decode('ascii') - except UnicodeDecodeError: - return None - - matches = COOKIE_RE.findall(line_string) - if not matches: - return None - encoding = _get_normal_name(matches[0]) - try: - codec = codecs.lookup(encoding) - except LookupError: - # This behavior mimics the Python interpreter - raise SyntaxError("unknown encoding: " + encoding) - - if bom_found: - # codecs in 2.3 were raw tuples of functions, assume the best. - codec_name = getattr(codec, 'name', encoding) - if codec_name != 'utf-8': - # This behavior mimics the Python interpreter - raise SyntaxError('encoding problem: utf-8') - encoding += '-sig' - return encoding - - first = read_or_stop() - if first.startswith(codecs.BOM_UTF8): - bom_found = True - first = first[3:] - default = 'utf-8-sig' - if not first: - return default - - encoding = find_cookie(first) - if encoding: - return encoding - - second = read_or_stop() - if not second: - return default - - encoding = find_cookie(second) - if encoding: - return encoding - - return default - - -@contract(source='bytes') -def _source_encoding_py3(source): - """Determine the encoding for `source`, according to PEP 263. - - `source` is a byte string: the text of the program. - - Returns a string, the name of the encoding. - - """ - readline = iternext(source.splitlines(True)) - return tokenize.detect_encoding(readline)[0] - - -if env.PY3: - source_encoding = _source_encoding_py3 -else: - source_encoding = _source_encoding_py2 - - -@contract(source='unicode') -def compile_unicode(source, filename, mode): - """Just like the `compile` builtin, but works on any Unicode string. 
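The hand-rolled scan above reimplements, for Python 2, what `tokenize.detect_encoding` provides on Python 3 (and what `_source_encoding_py3` below delegates to); the same PEP 263 rules can be exercised directly:

```python
import io
import tokenize

# An encoding cookie on the first line is honored...
cookie_src = b"# -*- coding: iso-8859-1 -*-\nx = 1\n"
enc_cookie, _ = tokenize.detect_encoding(io.BytesIO(cookie_src).readline)

# ...and a UTF-8 BOM yields the 'utf-8-sig' pseudo-encoding.
bom_src = b"\xef\xbb\xbfx = 1\n"
enc_bom, _ = tokenize.detect_encoding(io.BytesIO(bom_src).readline)

print(enc_cookie, enc_bom)  # iso-8859-1 utf-8-sig
```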
- - Python 2's compile() builtin has a stupid restriction: if the source string - is Unicode, then it may not have a encoding declaration in it. Why not? - Who knows! It also decodes to utf8, and then tries to interpret those utf8 - bytes according to the encoding declaration. Why? Who knows! - - This function neuters the coding declaration, and compiles it. - - """ - source = neuter_encoding_declaration(source) - if env.PY2 and isinstance(filename, unicode_class): - filename = filename.encode(sys.getfilesystemencoding(), "replace") - code = compile(source, filename, mode) - return code - - -@contract(source='unicode', returns='unicode') -def neuter_encoding_declaration(source): - """Return `source`, with any encoding declaration neutered.""" - if COOKIE_RE.search(source): - source_lines = source.splitlines(True) - for lineno in range(min(2, len(source_lines))): - source_lines[lineno] = COOKIE_RE.sub("# (deleted declaration)", source_lines[lineno]) - source = "".join(source_lines) - return source diff --git a/contrib/python/coverage/py2/coverage/plugin.py b/contrib/python/coverage/py2/coverage/plugin.py deleted file mode 100644 index 6997b489bb6..00000000000 --- a/contrib/python/coverage/py2/coverage/plugin.py +++ /dev/null @@ -1,533 +0,0 @@ -# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 -# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt - -""" -.. versionadded:: 4.0 - -Plug-in interfaces for coverage.py. - -Coverage.py supports a few different kinds of plug-ins that change its -behavior: - -* File tracers implement tracing of non-Python file types. - -* Configurers add custom configuration, using Python code to change the - configuration. - -* Dynamic context switchers decide when the dynamic context has changed, for - example, to record what test function produced the coverage. - -To write a coverage.py plug-in, create a module with a subclass of -:class:`~coverage.CoveragePlugin`. 
You will override methods in your class to -participate in various aspects of coverage.py's processing. -Different types of plug-ins have to override different methods. - -Any plug-in can optionally implement :meth:`~coverage.CoveragePlugin.sys_info` -to provide debugging information about their operation. - -Your module must also contain a ``coverage_init`` function that registers an -instance of your plug-in class:: - - import coverage - - class MyPlugin(coverage.CoveragePlugin): - ... - - def coverage_init(reg, options): - reg.add_file_tracer(MyPlugin()) - -You use the `reg` parameter passed to your ``coverage_init`` function to -register your plug-in object. The registration method you call depends on -what kind of plug-in it is. - -If your plug-in takes options, the `options` parameter is a dictionary of your -plug-in's options from the coverage.py configuration file. Use them however -you want to configure your object before registering it. - -Coverage.py will store its own information on your plug-in object, using -attributes whose names start with ``_coverage_``. Don't be startled. - -.. warning:: - Plug-ins are imported by coverage.py before it begins measuring code. - If you write a plugin in your own project, it might import your product - code before coverage.py can start measuring. This can result in your - own code being reported as missing. - - One solution is to put your plugins in your project tree, but not in - your importable Python package. - - -.. _file_tracer_plugins: - -File Tracers -============ - -File tracers implement measurement support for non-Python files. File tracers -implement the :meth:`~coverage.CoveragePlugin.file_tracer` method to claim -files and the :meth:`~coverage.CoveragePlugin.file_reporter` method to report -on those files. - -In your ``coverage_init`` function, use the ``add_file_tracer`` method to -register your file tracer. - - -.. _configurer_plugins: - -Configurers -=========== - -.. 
versionadded:: 4.5 - -Configurers modify the configuration of coverage.py during start-up. -Configurers implement the :meth:`~coverage.CoveragePlugin.configure` method to -change the configuration. - -In your ``coverage_init`` function, use the ``add_configurer`` method to -register your configurer. - - -.. _dynamic_context_plugins: - -Dynamic Context Switchers -========================= - -.. versionadded:: 5.0 - -Dynamic context switcher plugins implement the -:meth:`~coverage.CoveragePlugin.dynamic_context` method to dynamically compute -the context label for each measured frame. - -Computed context labels are useful when you want to group measured data without -modifying the source code. - -For example, you could write a plugin that checks `frame.f_code` to inspect -the currently executed method, and set the context label to a fully qualified -method name if it's an instance method of `unittest.TestCase` and the method -name starts with 'test'. Such a plugin would provide basic coverage grouping -by test and could be used with test runners that have no built-in coveragepy -support. - -In your ``coverage_init`` function, use the ``add_dynamic_context`` method to -register your dynamic context switcher. - -""" - -from coverage import files -from coverage.misc import contract, _needs_to_implement - - -class CoveragePlugin(object): - """Base class for coverage.py plug-ins.""" - - def file_tracer(self, filename): # pylint: disable=unused-argument - """Get a :class:`FileTracer` object for a file. - - Plug-in type: file tracer. - - Every Python source file is offered to your plug-in to give it a chance - to take responsibility for tracing the file. If your plug-in can - handle the file, it should return a :class:`FileTracer` object. - Otherwise return None. - - There is no way to register your plug-in for particular files. - Instead, this method is invoked for all files as they are executed, - and the plug-in decides whether it can trace the file or not. 
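The `unittest.TestCase` heuristic sketched in the prose above can be written out; `dynamic_context_label` is a hypothetical helper showing only the frame inspection, not a registered plug-in:

```python
import sys
import unittest

def dynamic_context_label(frame):
    # Hypothetical helper: label frames that are test* methods on a
    # unittest.TestCase instance, as the prose above describes.
    code = frame.f_code
    if not code.co_name.startswith("test"):
        return None
    self_obj = frame.f_locals.get("self")
    if isinstance(self_obj, unittest.TestCase):
        return "%s.%s" % (type(self_obj).__name__, code.co_name)
    return None

class DemoTest(unittest.TestCase):
    def test_example(self):
        # A dynamic-context plug-in would be handed this frame.
        return dynamic_context_label(sys._getframe())

label = DemoTest("test_example").test_example()
print(label)  # DemoTest.test_example
```

A real plug-in would return such a label from `CoveragePlugin.dynamic_context(frame)` and register itself with `reg.add_dynamic_context(...)`.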
- Be prepared for `filename` to refer to all kinds of files that have - nothing to do with your plug-in. - - The file name will be a Python file being executed. There are two - broad categories of behavior for a plug-in, depending on the kind of - files your plug-in supports: - - * Static file names: each of your original source files has been - converted into a distinct Python file. Your plug-in is invoked with - the Python file name, and it maps it back to its original source - file. - - * Dynamic file names: all of your source files are executed by the same - Python file. In this case, your plug-in implements - :meth:`FileTracer.dynamic_source_filename` to provide the actual - source file for each execution frame. - - `filename` is a string, the path to the file being considered. This is - the absolute real path to the file. If you are comparing to other - paths, be sure to take this into account. - - Returns a :class:`FileTracer` object to use to trace `filename`, or - None if this plug-in cannot trace this file. - - """ - return None - - def file_reporter(self, filename): # pylint: disable=unused-argument - """Get the :class:`FileReporter` class to use for a file. - - Plug-in type: file tracer. - - This will only be invoked if `filename` returns non-None from - :meth:`file_tracer`. It's an error to return None from this method. - - Returns a :class:`FileReporter` object to use to report on `filename`, - or the string `"python"` to have coverage.py treat the file as Python. - - """ - _needs_to_implement(self, "file_reporter") - - def dynamic_context(self, frame): # pylint: disable=unused-argument - """Get the dynamically computed context label for `frame`. - - Plug-in type: dynamic context. - - This method is invoked for each frame when outside of a dynamic - context, to see if a new dynamic context should be started. If it - returns a string, a new context label is set for this and deeper - frames. The dynamic context ends when this frame returns. 
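For the "static file names" case above, a `FileTracer` mostly just maps the executed Python file back to its original source; a hypothetical tracer for generated template modules (all names illustrative):

```python
import os.path

class TemplateTracer:
    """Hypothetical tracer: page_tmpl.py was generated from page.tmpl."""

    def __init__(self, py_filename):
        self.py_filename = py_filename

    def source_filename(self):
        # Map the generated module name back to the template it came from.
        base = os.path.basename(self.py_filename)
        return base.replace("_tmpl.py", ".tmpl")

tracer = TemplateTracer("/srv/app/page_tmpl.py")
print(tracer.source_filename())  # page.tmpl
```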
- - Returns a string to start a new dynamic context, or None if no new - context should be started. - - """ - return None - - def find_executable_files(self, src_dir): # pylint: disable=unused-argument - """Yield all of the executable files in `src_dir`, recursively. - - Plug-in type: file tracer. - - Executability is a plug-in-specific property, but generally means files - which would have been considered for coverage analysis, had they been - included automatically. - - Returns or yields a sequence of strings, the paths to files that could - have been executed, including files that had been executed. - - """ - return [] - - def configure(self, config): - """Modify the configuration of coverage.py. - - Plug-in type: configurer. - - This method is called during coverage.py start-up, to give your plug-in - a chance to change the configuration. The `config` parameter is an - object with :meth:`~coverage.Coverage.get_option` and - :meth:`~coverage.Coverage.set_option` methods. Do not call any other - methods on the `config` object. - - """ - pass - - def sys_info(self): - """Get a list of information useful for debugging. - - Plug-in type: any. - - This method will be invoked for ``--debug=sys``. Your - plug-in can return any information it wants to be displayed. - - Returns a list of pairs: `[(name, value), ...]`. - - """ - return [] - - -class FileTracer(object): - """Support needed for files during the execution phase. - - File tracer plug-ins implement subclasses of FileTracer to return from - their :meth:`~CoveragePlugin.file_tracer` method. - - You may construct this object from :meth:`CoveragePlugin.file_tracer` any - way you like. A natural choice would be to pass the file name given to - `file_tracer`. - - `FileTracer` objects should only be created in the - :meth:`CoveragePlugin.file_tracer` method. - - See :ref:`howitworks` for details of the different coverage.py phases. - - """ - - def source_filename(self): - """The source file name for this file. 
- - This may be any file name you like. A key responsibility of a plug-in - is to own the mapping from Python execution back to whatever source - file name was originally the source of the code. - - See :meth:`CoveragePlugin.file_tracer` for details about static and - dynamic file names. - - Returns the file name to credit with this execution. - - """ - _needs_to_implement(self, "source_filename") - - def has_dynamic_source_filename(self): - """Does this FileTracer have dynamic source file names? - - FileTracers can provide dynamically determined file names by - implementing :meth:`dynamic_source_filename`. Invoking that function - is expensive. To determine whether to invoke it, coverage.py uses the - result of this function to know if it needs to bother invoking - :meth:`dynamic_source_filename`. - - See :meth:`CoveragePlugin.file_tracer` for details about static and - dynamic file names. - - Returns True if :meth:`dynamic_source_filename` should be called to get - dynamic source file names. - - """ - return False - - def dynamic_source_filename(self, filename, frame): # pylint: disable=unused-argument - """Get a dynamically computed source file name. - - Some plug-ins need to compute the source file name dynamically for each - frame. - - This function will not be invoked if - :meth:`has_dynamic_source_filename` returns False. - - Returns the source file name for this frame, or None if this frame - shouldn't be measured. - - """ - return None - - def line_number_range(self, frame): - """Get the range of source line numbers for a given a call frame. - - The call frame is examined, and the source line number in the original - file is returned. The return value is a pair of numbers, the starting - line number and the ending line number, both inclusive. For example, - returning (5, 7) means that lines 5, 6, and 7 should be considered - executed. - - This function might decide that the frame doesn't indicate any lines - from the source file were executed. 
Return (-1, -1) in this case to - tell coverage.py that no lines should be recorded for this frame. - - """ - lineno = frame.f_lineno - return lineno, lineno - - -class FileReporter(object): - """Support needed for files during the analysis and reporting phases. - - File tracer plug-ins implement a subclass of `FileReporter`, and return - instances from their :meth:`CoveragePlugin.file_reporter` method. - - There are many methods here, but only :meth:`lines` is required, to provide - the set of executable lines in the file. - - See :ref:`howitworks` for details of the different coverage.py phases. - - """ - - def __init__(self, filename): - """Simple initialization of a `FileReporter`. - - The `filename` argument is the path to the file being reported. This - will be available as the `.filename` attribute on the object. Other - method implementations on this base class rely on this attribute. - - """ - self.filename = filename - - def __repr__(self): - return "<{0.__class__.__name__} filename={0.filename!r}>".format(self) - - def relative_filename(self): - """Get the relative file name for this file. - - This file path will be displayed in reports. The default - implementation will supply the actual project-relative file path. You - only need to supply this method if you have an unusual syntax for file - paths. - - """ - return files.relative_filename(self.filename) - - @contract(returns='unicode') - def source(self): - """Get the source for the file. - - Returns a Unicode string. - - The base implementation simply reads the `self.filename` file and - decodes it as UTF8. Override this method if your file isn't readable - as a text file, or if you need other encoding support. - - """ - with open(self.filename, "rb") as f: - return f.read().decode("utf8") - - def lines(self): - """Get the executable lines in this file. - - Your plug-in must determine which lines in the file were possibly - executable. This method returns a set of those line numbers. 
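What counts as `lines()` is entirely up to the plug-in; for a config-like format, one plausible (hypothetical) rule is "every non-blank, non-comment line":

```python
def executable_lines(source_text):
    # Hypothetical lines() logic for a config-like file format.
    lines = set()
    for lineno, line in enumerate(source_text.splitlines(), start=1):
        stripped = line.strip()
        if stripped and not stripped.startswith("#"):
            lines.add(lineno)
    return lines

SAMPLE = "# a comment\nkey = 1\n\nother = 2\n"
print(sorted(executable_lines(SAMPLE)))  # [2, 4]
```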
- - Returns a set of line numbers. - - """ - _needs_to_implement(self, "lines") - - def excluded_lines(self): - """Get the excluded executable lines in this file. - - Your plug-in can use any method it likes to allow the user to exclude - executable lines from consideration. - - Returns a set of line numbers. - - The base implementation returns the empty set. - - """ - return set() - - def translate_lines(self, lines): - """Translate recorded lines into reported lines. - - Some file formats will want to report lines slightly differently than - they are recorded. For example, Python records the last line of a - multi-line statement, but reports are nicer if they mention the first - line. - - Your plug-in can optionally define this method to perform these kinds - of adjustment. - - `lines` is a sequence of integers, the recorded line numbers. - - Returns a set of integers, the adjusted line numbers. - - The base implementation returns the numbers unchanged. - - """ - return set(lines) - - def arcs(self): - """Get the executable arcs in this file. - - To support branch coverage, your plug-in needs to be able to indicate - possible execution paths, as a set of line number pairs. Each pair is - a `(prev, next)` pair indicating that execution can transition from the - `prev` line number to the `next` line number. - - Returns a set of pairs of line numbers. The default implementation - returns an empty set. - - """ - return set() - - def no_branch_lines(self): - """Get the lines excused from branch coverage in this file. - - Your plug-in can use any method it likes to allow the user to exclude - lines from consideration of branch coverage. - - Returns a set of line numbers. - - The base implementation returns the empty set. - - """ - return set() - - def translate_arcs(self, arcs): - """Translate recorded arcs into reported arcs. - - Similar to :meth:`translate_lines`, but for arcs. `arcs` is a set of - line number pairs. - - Returns a set of line number pairs. 
- - The default implementation returns `arcs` unchanged. - - """ - return arcs - - def exit_counts(self): - """Get a count of exits from that each line. - - To determine which lines are branches, coverage.py looks for lines that - have more than one exit. This function creates a dict mapping each - executable line number to a count of how many exits it has. - - To be honest, this feels wrong, and should be refactored. Let me know - if you attempt to implement this method in your plug-in... - - """ - return {} - - def missing_arc_description(self, start, end, executed_arcs=None): # pylint: disable=unused-argument - """Provide an English sentence describing a missing arc. - - The `start` and `end` arguments are the line numbers of the missing - arc. Negative numbers indicate entering or exiting code objects. - - The `executed_arcs` argument is a set of line number pairs, the arcs - that were executed in this file. - - By default, this simply returns the string "Line {start} didn't jump - to {end}". - - """ - return "Line {start} didn't jump to line {end}".format(start=start, end=end) - - def source_token_lines(self): - """Generate a series of tokenized lines, one for each line in `source`. - - These tokens are used for syntax-colored reports. - - Each line is a list of pairs, each pair is a token:: - - [('key', 'def'), ('ws', ' '), ('nam', 'hello'), ('op', '('), ... ] - - Each pair has a token class, and the token text. The token classes - are: - - * ``'com'``: a comment - * ``'key'``: a keyword - * ``'nam'``: a name, or identifier - * ``'num'``: a number - * ``'op'``: an operator - * ``'str'``: a string literal - * ``'ws'``: some white space - * ``'txt'``: some other kind of text - - If you concatenate all the token texts, and then join them with - newlines, you should have your original source back. - - The default implementation simply returns each line tagged as - ``'txt'``. 
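The default `source_token_lines` behavior described above (every line as one `'txt'` token) is small enough to sketch standalone:

```python
def default_source_token_lines(source):
    # Mirrors the documented default: each line becomes one ('txt', line) pair.
    for line in source.splitlines():
        yield [("txt", line)]

tokenized = list(default_source_token_lines("a = 1\nb = 2"))
print(tokenized)  # [[('txt', 'a = 1')], [('txt', 'b = 2')]]
```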
- - """ - for line in self.source().splitlines(): - yield [('txt', line)] - - # Annoying comparison operators. Py3k wants __lt__ etc, and Py2k needs all - # of them defined. - - def __eq__(self, other): - return isinstance(other, FileReporter) and self.filename == other.filename - - def __ne__(self, other): - return not (self == other) - - def __lt__(self, other): - return self.filename < other.filename - - def __le__(self, other): - return self.filename <= other.filename - - def __gt__(self, other): - return self.filename > other.filename - - def __ge__(self, other): - return self.filename >= other.filename - - __hash__ = None # This object doesn't need to be hashed. diff --git a/contrib/python/coverage/py2/coverage/plugin_support.py b/contrib/python/coverage/py2/coverage/plugin_support.py deleted file mode 100644 index 89c1c7658f0..00000000000 --- a/contrib/python/coverage/py2/coverage/plugin_support.py +++ /dev/null @@ -1,281 +0,0 @@ -# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 -# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt - -"""Support for plugins.""" - -import os -import os.path -import sys - -from coverage.misc import CoverageException, isolate_module -from coverage.plugin import CoveragePlugin, FileTracer, FileReporter - -os = isolate_module(os) - - -class Plugins(object): - """The currently loaded collection of coverage.py plugins.""" - - def __init__(self): - self.order = [] - self.names = {} - self.file_tracers = [] - self.configurers = [] - self.context_switchers = [] - - self.current_module = None - self.debug = None - - @classmethod - def load_plugins(cls, modules, config, debug=None): - """Load plugins from `modules`. - - Returns a Plugins object with the loaded and configured plugins. 
- - """ - plugins = cls() - plugins.debug = debug - - for module in modules: - plugins.current_module = module - __import__(module) - mod = sys.modules[module] - - coverage_init = getattr(mod, "coverage_init", None) - if not coverage_init: - raise CoverageException( - "Plugin module %r didn't define a coverage_init function" % module - ) - - options = config.get_plugin_options(module) - coverage_init(plugins, options) - - plugins.current_module = None - return plugins - - def add_file_tracer(self, plugin): - """Add a file tracer plugin. - - `plugin` is an instance of a third-party plugin class. It must - implement the :meth:`CoveragePlugin.file_tracer` method. - - """ - self._add_plugin(plugin, self.file_tracers) - - def add_configurer(self, plugin): - """Add a configuring plugin. - - `plugin` is an instance of a third-party plugin class. It must - implement the :meth:`CoveragePlugin.configure` method. - - """ - self._add_plugin(plugin, self.configurers) - - def add_dynamic_context(self, plugin): - """Add a dynamic context plugin. - - `plugin` is an instance of a third-party plugin class. It must - implement the :meth:`CoveragePlugin.dynamic_context` method. - - """ - self._add_plugin(plugin, self.context_switchers) - - def add_noop(self, plugin): - """Add a plugin that does nothing. - - This is only useful for testing the plugin support. - - """ - self._add_plugin(plugin, None) - - def _add_plugin(self, plugin, specialized): - """Add a plugin object. - - `plugin` is a :class:`CoveragePlugin` instance to add. `specialized` - is a list to append the plugin to. 
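`load_plugins` imports each named module and calls its `coverage_init` hook, raising if the hook is missing. A minimal sketch of that protocol, using a synthetic in-memory module instead of a real installed plugin (the registry list is a stand-in for the `Plugins` object):

```python
import types

# A stand-in plugin module; a real plugin lives in an importable package.
mod = types.ModuleType("my_plugin")

def coverage_init(reg, options):
    # A real plugin would call reg.add_file_tracer(...) or
    # reg.add_configurer(...) here.
    reg.append(("configured", options))

mod.coverage_init = coverage_init

registry = []
init = getattr(mod, "coverage_init", None)
if not init:
    # Mirrors the CoverageException raised by load_plugins.
    raise RuntimeError(
        "Plugin module %r didn't define a coverage_init function" % mod.__name__
    )
init(registry, {"option": "value"})
```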
- - """ - plugin_name = "%s.%s" % (self.current_module, plugin.__class__.__name__) - if self.debug and self.debug.should('plugin'): - self.debug.write("Loaded plugin %r: %r" % (self.current_module, plugin)) - labelled = LabelledDebug("plugin %r" % (self.current_module,), self.debug) - plugin = DebugPluginWrapper(plugin, labelled) - - # pylint: disable=attribute-defined-outside-init - plugin._coverage_plugin_name = plugin_name - plugin._coverage_enabled = True - self.order.append(plugin) - self.names[plugin_name] = plugin - if specialized is not None: - specialized.append(plugin) - - def __nonzero__(self): - return bool(self.order) - - __bool__ = __nonzero__ - - def __iter__(self): - return iter(self.order) - - def get(self, plugin_name): - """Return a plugin by name.""" - return self.names[plugin_name] - - -class LabelledDebug(object): - """A Debug writer, but with labels for prepending to the messages.""" - - def __init__(self, label, debug, prev_labels=()): - self.labels = list(prev_labels) + [label] - self.debug = debug - - def add_label(self, label): - """Add a label to the writer, and return a new `LabelledDebug`.""" - return LabelledDebug(label, self.debug, self.labels) - - def message_prefix(self): - """The prefix to use on messages, combining the labels.""" - prefixes = self.labels + [''] - return ":\n".join(" "*i+label for i, label in enumerate(prefixes)) - - def write(self, message): - """Write `message`, but with the labels prepended.""" - self.debug.write("%s%s" % (self.message_prefix(), message)) - - -class DebugPluginWrapper(CoveragePlugin): - """Wrap a plugin, and use debug to report on what it's doing.""" - - def __init__(self, plugin, debug): - super(DebugPluginWrapper, self).__init__() - self.plugin = plugin - self.debug = debug - - def file_tracer(self, filename): - tracer = self.plugin.file_tracer(filename) - self.debug.write("file_tracer(%r) --> %r" % (filename, tracer)) - if tracer: - debug = self.debug.add_label("file %r" % (filename,)) - 
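`LabelledDebug.message_prefix` indents each accumulated label one space deeper than the last; the joining expression can be run standalone with hypothetical labels:

```python
# The labels a LabelledDebug might accumulate, plus the trailing ''
# that message_prefix appends so the message itself gets indented.
labels = ["plugin 'my_plugin'", "file 'mod.py'"]
prefixes = labels + ['']

# Same expression as message_prefix: each label indented by its depth.
prefix = ":\n".join(" " * i + label for i, label in enumerate(prefixes))
print(prefix + "message")
```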
tracer = DebugFileTracerWrapper(tracer, debug) - return tracer - - def file_reporter(self, filename): - reporter = self.plugin.file_reporter(filename) - self.debug.write("file_reporter(%r) --> %r" % (filename, reporter)) - if reporter: - debug = self.debug.add_label("file %r" % (filename,)) - reporter = DebugFileReporterWrapper(filename, reporter, debug) - return reporter - - def dynamic_context(self, frame): - context = self.plugin.dynamic_context(frame) - self.debug.write("dynamic_context(%r) --> %r" % (frame, context)) - return context - - def find_executable_files(self, src_dir): - executable_files = self.plugin.find_executable_files(src_dir) - self.debug.write("find_executable_files(%r) --> %r" % (src_dir, executable_files)) - return executable_files - - def configure(self, config): - self.debug.write("configure(%r)" % (config,)) - self.plugin.configure(config) - - def sys_info(self): - return self.plugin.sys_info() - - -class DebugFileTracerWrapper(FileTracer): - """A debugging `FileTracer`.""" - - def __init__(self, tracer, debug): - self.tracer = tracer - self.debug = debug - - def _show_frame(self, frame): - """A short string identifying a frame, for debug messages.""" - return "%s@%d" % ( - os.path.basename(frame.f_code.co_filename), - frame.f_lineno, - ) - - def source_filename(self): - sfilename = self.tracer.source_filename() - self.debug.write("source_filename() --> %r" % (sfilename,)) - return sfilename - - def has_dynamic_source_filename(self): - has = self.tracer.has_dynamic_source_filename() - self.debug.write("has_dynamic_source_filename() --> %r" % (has,)) - return has - - def dynamic_source_filename(self, filename, frame): - dyn = self.tracer.dynamic_source_filename(filename, frame) - self.debug.write("dynamic_source_filename(%r, %s) --> %r" % ( - filename, self._show_frame(frame), dyn, - )) - return dyn - - def line_number_range(self, frame): - pair = self.tracer.line_number_range(frame) - self.debug.write("line_number_range(%s) --> %r" % 
(self._show_frame(frame), pair)) - return pair - - -class DebugFileReporterWrapper(FileReporter): - """A debugging `FileReporter`.""" - - def __init__(self, filename, reporter, debug): - super(DebugFileReporterWrapper, self).__init__(filename) - self.reporter = reporter - self.debug = debug - - def relative_filename(self): - ret = self.reporter.relative_filename() - self.debug.write("relative_filename() --> %r" % (ret,)) - return ret - - def lines(self): - ret = self.reporter.lines() - self.debug.write("lines() --> %r" % (ret,)) - return ret - - def excluded_lines(self): - ret = self.reporter.excluded_lines() - self.debug.write("excluded_lines() --> %r" % (ret,)) - return ret - - def translate_lines(self, lines): - ret = self.reporter.translate_lines(lines) - self.debug.write("translate_lines(%r) --> %r" % (lines, ret)) - return ret - - def translate_arcs(self, arcs): - ret = self.reporter.translate_arcs(arcs) - self.debug.write("translate_arcs(%r) --> %r" % (arcs, ret)) - return ret - - def no_branch_lines(self): - ret = self.reporter.no_branch_lines() - self.debug.write("no_branch_lines() --> %r" % (ret,)) - return ret - - def exit_counts(self): - ret = self.reporter.exit_counts() - self.debug.write("exit_counts() --> %r" % (ret,)) - return ret - - def arcs(self): - ret = self.reporter.arcs() - self.debug.write("arcs() --> %r" % (ret,)) - return ret - - def source(self): - ret = self.reporter.source() - self.debug.write("source() --> %d chars" % (len(ret),)) - return ret - - def source_token_lines(self): - ret = list(self.reporter.source_token_lines()) - self.debug.write("source_token_lines() --> %d tokens" % (len(ret),)) - return ret diff --git a/contrib/python/coverage/py2/coverage/python.py b/contrib/python/coverage/py2/coverage/python.py deleted file mode 100644 index 6ff19c34fe4..00000000000 --- a/contrib/python/coverage/py2/coverage/python.py +++ /dev/null @@ -1,261 +0,0 @@ -# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 -# 
For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt - -"""Python source expertise for coverage.py""" - -import sys -import os.path -import types -import zipimport - -from coverage import env, files -from coverage.misc import contract, expensive, isolate_module, join_regex -from coverage.misc import CoverageException, NoSource -from coverage.parser import PythonParser -from coverage.phystokens import source_token_lines, source_encoding -from coverage.plugin import FileReporter - -os = isolate_module(os) - - -@contract(returns='bytes') -def read_python_source(filename): - """Read the Python source text from `filename`. - - Returns bytes. - - """ - with open(filename, "rb") as f: - source = f.read() - - if env.IRONPYTHON: - # IronPython reads Unicode strings even for "rb" files. - source = bytes(source) - - return source.replace(b"\r\n", b"\n").replace(b"\r", b"\n") - - -@contract(returns='unicode') -def get_python_source(filename, force_fs=False): - """Return the source code, as unicode.""" - if getattr(sys, 'is_standalone_binary', False) and not force_fs: - import __res - - modname = __res.importer.file_source(filename) - if modname: - source = __res.find(modname) - source = source.replace(b"\r\n", b"\n").replace(b"\r", b"\n") - return source.decode('utf-8') - else: - # it's fake generated package - return u'' - base, ext = os.path.splitext(filename) - if ext == ".py" and env.WINDOWS: - exts = [".py", ".pyw"] - else: - exts = [ext] - - for ext in exts: - try_filename = base + ext - if os.path.exists(try_filename): - # A regular text file: open it. - source = read_python_source(try_filename) - break - - # Maybe it's in a zip file? - source = get_zip_bytes(try_filename) - if source is not None: - break - else: - # Couldn't find source. - exc_msg = "No source for code: '%s'.\n" % (filename,) - exc_msg += "Aborting report output, consider using -i." 
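`read_python_source` normalizes Windows (`\r\n`) and old-Mac (`\r`) line endings to `\n` before any decoding; the order of the two replacements matters, since replacing `\r\n` first avoids producing doubled newlines:

```python
# Bytes with all three line-ending conventions mixed together.
raw = b"line1\r\nline2\rline3\n"

# Same replacement chain as read_python_source.
normalized = raw.replace(b"\r\n", b"\n").replace(b"\r", b"\n")
```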
- raise NoSource(exc_msg) - - # Replace \f because of http://bugs.python.org/issue19035 - source = source.replace(b'\f', b' ') - source = source.decode(source_encoding(source), "replace") - - # Python code should always end with a line with a newline. - if source and source[-1] != '\n': - source += '\n' - - return source - - -@contract(returns='bytes|None') -def get_zip_bytes(filename): - """Get data from `filename` if it is a zip file path. - - Returns the bytestring data read from the zip file, or None if no zip file - could be found or `filename` isn't in it. The data returned will be - an empty string if the file is empty. - - """ - markers = ['.zip'+os.sep, '.egg'+os.sep, '.pex'+os.sep] - for marker in markers: - if marker in filename: - parts = filename.split(marker) - try: - zi = zipimport.zipimporter(parts[0]+marker[:-1]) - except zipimport.ZipImportError: - continue - try: - data = zi.get_data(parts[1]) - except IOError: - continue - return data - return None - - -def source_for_file(filename): - """Return the source filename for `filename`. - - Given a file name being traced, return the best guess as to the source - file to attribute it to. - - """ - if filename.endswith(".py"): - # .py files are themselves source files. - return filename - - elif filename.endswith((".pyc", ".pyo")): - # Bytecode files probably have source files near them. - py_filename = filename[:-1] - if os.path.exists(py_filename): - # Found a .py file, use that. - return py_filename - if env.WINDOWS: - # On Windows, it could be a .pyw file. - pyw_filename = py_filename + "w" - if os.path.exists(pyw_filename): - return pyw_filename - # Didn't find source, but it's probably the .py file we want. - return py_filename - - elif filename.endswith("$py.class"): - # Jython is easy to guess. - return filename[:-9] + ".py" - - # No idea, just use the file name as-is. 
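`source_for_file` maps a traced file name back to its likely source file. A simplified reimplementation of the suffix rules, ignoring the on-disk existence checks and the Windows `.pyw` fallback:

```python
def source_for_file_sketch(filename):
    """Guess the source file for a traced file name (suffix rules only)."""
    if filename.endswith(".py"):
        return filename                    # already a source file
    if filename.endswith((".pyc", ".pyo")):
        return filename[:-1]               # bytecode: drop the trailing c/o
    if filename.endswith("$py.class"):
        return filename[:-9] + ".py"       # Jython compiled class
    return filename                        # no idea, use it as-is
```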
- return filename - - -def source_for_morf(morf): - """Get the source filename for the module-or-file `morf`.""" - if hasattr(morf, '__file__') and morf.__file__: - filename = morf.__file__ - elif isinstance(morf, types.ModuleType): - # A module should have had .__file__, otherwise we can't use it. - # This could be a PEP-420 namespace package. - raise CoverageException("Module {} has no file".format(morf)) - else: - filename = morf - - filename = source_for_file(files.unicode_filename(filename)) - return filename - - -class PythonFileReporter(FileReporter): - """Report support for a Python file.""" - - def __init__(self, morf, coverage=None): - self.coverage = coverage - - filename = source_for_morf(morf) - - super(PythonFileReporter, self).__init__(files.canonical_filename(filename)) - - if hasattr(morf, '__name__'): - name = morf.__name__.replace(".", os.sep) - if os.path.basename(filename).startswith('__init__.'): - name += os.sep + "__init__" - name += ".py" - name = files.unicode_filename(name) - else: - name = files.relative_filename(filename) - self.relname = name - - self._source = None - self._parser = None - self._excluded = None - - def __repr__(self): - return "<PythonFileReporter {!r}>".format(self.filename) - - @contract(returns='unicode') - def relative_filename(self): - return self.relname - - @property - def parser(self): - """Lazily create a :class:`PythonParser`.""" - if self._parser is None: - self._parser = PythonParser( - filename=self.filename, - exclude=self.coverage._exclude_regex('exclude'), - ) - self._parser.parse_source() - return self._parser - - def lines(self): - """Return the line numbers of statements in the file.""" - return self.parser.statements - - def excluded_lines(self): - """Return the line numbers of statements in the file.""" - return self.parser.excluded - - def translate_lines(self, lines): - return self.parser.translate_lines(lines) - - def translate_arcs(self, arcs): - return self.parser.translate_arcs(arcs) - - 
@expensive - def no_branch_lines(self): - no_branch = self.parser.lines_matching( - join_regex(self.coverage.config.partial_list), - join_regex(self.coverage.config.partial_always_list) - ) - return no_branch - - @expensive - def arcs(self): - return self.parser.arcs() - - @expensive - def exit_counts(self): - return self.parser.exit_counts() - - def missing_arc_description(self, start, end, executed_arcs=None): - return self.parser.missing_arc_description(start, end, executed_arcs) - - @contract(returns='unicode') - def source(self): - if self._source is None: - self._source = get_python_source(self.filename) - return self._source - - def should_be_python(self): - """Does it seem like this file should contain Python? - - This is used to decide if a file reported as part of the execution of - a program was really likely to have contained Python in the first - place. - - """ - # Get the file extension. - _, ext = os.path.splitext(self.filename) - - # Anything named *.py* should be Python. - if ext.startswith('.py'): - return True - # A file with no extension should be Python. - if not ext: - return True - # Everything else is probably not Python. - return False - - def source_token_lines(self): - return source_token_lines(self.source()) diff --git a/contrib/python/coverage/py2/coverage/pytracer.py b/contrib/python/coverage/py2/coverage/pytracer.py deleted file mode 100644 index 7ab4d3ef922..00000000000 --- a/contrib/python/coverage/py2/coverage/pytracer.py +++ /dev/null @@ -1,274 +0,0 @@ -# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 -# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt - -"""Raw data collector for coverage.py.""" - -import atexit -import dis -import sys - -from coverage import env - -# We need the YIELD_VALUE opcode below, in a comparison-friendly form. 
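`should_be_python` reduces to an extension test; a standalone sketch of the same heuristic:

```python
import os.path

def should_be_python_sketch(filename):
    """Does this file name look like it should contain Python?"""
    _, ext = os.path.splitext(filename)
    if ext.startswith('.py'):   # .py, .pyw, etc.
        return True
    if not ext:                 # extension-less scripts are often Python
        return True
    return False                # everything else is probably not Python
```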
-YIELD_VALUE = dis.opmap['YIELD_VALUE'] -if env.PY2: - YIELD_VALUE = chr(YIELD_VALUE) - -# When running meta-coverage, this file can try to trace itself, which confuses -# everything. Don't trace ourselves. - -THIS_FILE = __file__.rstrip("co") - - -class PyTracer(object): - """Python implementation of the raw data tracer.""" - - # Because of poor implementations of trace-function-manipulating tools, - # the Python trace function must be kept very simple. In particular, there - # must be only one function ever set as the trace function, both through - # sys.settrace, and as the return value from the trace function. Put - # another way, the trace function must always return itself. It cannot - # swap in other functions, or return None to avoid tracing a particular - # frame. - # - # The trace manipulator that introduced this restriction is DecoratorTools, - # which sets a trace function, and then later restores the pre-existing one - # by calling sys.settrace with a function it found in the current frame. - # - # Systems that use DecoratorTools (or similar trace manipulations) must use - # PyTracer to get accurate results. The command-line --timid argument is - # used to force the use of this tracer. - - def __init__(self): - # Attributes set from the collector: - self.data = None - self.trace_arcs = False - self.should_trace = None - self.should_trace_cache = None - self.should_start_context = None - self.warn = None - # The threading module to use, if any. - self.threading = None - - self.cur_file_dict = None - self.last_line = 0 # int, but uninitialized. 
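The core constraint described in the comment above is that the trace function must always return itself, never swap in another function or return None. A minimal line tracer honoring that rule (a sketch of the general `sys.settrace` pattern, not PyTracer itself):

```python
import sys

def make_tracer(executed):
    def tracer(frame, event, arg):
        if event == "line":
            executed.add(frame.f_lineno)
        return tracer  # always return the same function, per the constraint
    return tracer

def sample():
    a = 1
    b = 2
    return a + b

executed = set()
sys.settrace(make_tracer(executed))
result = sample()          # the three lines of sample() are recorded
sys.settrace(None)
```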
- self.cur_file_name = None - self.context = None - self.started_context = False - - self.data_stack = [] - self.last_exc_back = None - self.last_exc_firstlineno = 0 - self.thread = None - self.stopped = False - self._activity = False - - self.in_atexit = False - # On exit, self.in_atexit = True - atexit.register(setattr, self, 'in_atexit', True) - - def __repr__(self): - return "<PyTracer at {}: {} lines in {} files>".format( - id(self), - sum(len(v) for v in self.data.values()), - len(self.data), - ) - - def log(self, marker, *args): - """For hard-core logging of what this tracer is doing.""" - with open("/tmp/debug_trace.txt", "a") as f: - f.write("{} {}[{}]".format( - marker, - id(self), - len(self.data_stack), - )) - if 0: - f.write(".{:x}.{:x}".format( - self.thread.ident, - self.threading.currentThread().ident, - )) - f.write(" {}".format(" ".join(map(str, args)))) - if 0: - f.write(" | ") - stack = " / ".join( - (fname or "???").rpartition("/")[-1] - for _, fname, _, _ in self.data_stack - ) - f.write(stack) - f.write("\n") - - def _trace(self, frame, event, arg_unused): - """The trace function passed to sys.settrace.""" - - if THIS_FILE in frame.f_code.co_filename: - return None - - #self.log(":", frame.f_code.co_filename, frame.f_lineno, frame.f_code.co_name + "()", event) - - if (self.stopped and sys.gettrace() == self._trace): # pylint: disable=comparison-with-callable - # The PyTrace.stop() method has been called, possibly by another - # thread, let's deactivate ourselves now. - if 0: - self.log("---\nX", frame.f_code.co_filename, frame.f_lineno) - f = frame - while f: - self.log(">", f.f_code.co_filename, f.f_lineno, f.f_code.co_name, f.f_trace) - f = f.f_back - sys.settrace(None) - self.cur_file_dict, self.cur_file_name, self.last_line, self.started_context = ( - self.data_stack.pop() - ) - return None - - if self.last_exc_back: - if frame == self.last_exc_back: - # Someone forgot a return event. 
- if self.trace_arcs and self.cur_file_dict: - pair = (self.last_line, -self.last_exc_firstlineno) - self.cur_file_dict[pair] = None - self.cur_file_dict, self.cur_file_name, self.last_line, self.started_context = ( - self.data_stack.pop() - ) - self.last_exc_back = None - - # if event != 'call' and frame.f_code.co_filename != self.cur_file_name: - # self.log("---\n*", frame.f_code.co_filename, self.cur_file_name, frame.f_lineno) - - if event == 'call': - # Should we start a new context? - if self.should_start_context and self.context is None: - context_maybe = self.should_start_context(frame) - if context_maybe is not None: - self.context = context_maybe - self.started_context = True - self.switch_context(self.context) - else: - self.started_context = False - else: - self.started_context = False - - # Entering a new frame. Decide if we should trace - # in this file. - self._activity = True - self.data_stack.append( - ( - self.cur_file_dict, - self.cur_file_name, - self.last_line, - self.started_context, - ) - ) - filename = frame.f_code.co_filename - self.cur_file_name = filename - disp = self.should_trace_cache.get(filename) - if disp is None: - disp = self.should_trace(filename, frame) - self.should_trace_cache[filename] = disp - - self.cur_file_dict = None - if disp.trace: - tracename = disp.source_filename - if tracename not in self.data: - self.data[tracename] = {} - self.cur_file_dict = self.data[tracename] - # The call event is really a "start frame" event, and happens for - # function calls and re-entering generators. The f_lasti field is - # -1 for calls, and a real offset for generators. Use <0 as the - # line number for calls, and the real line number for generators. - if getattr(frame, 'f_lasti', -1) < 0: - self.last_line = -frame.f_code.co_firstlineno - else: - self.last_line = frame.f_lineno - elif event == 'line': - # Record an executed line. 
- if self.cur_file_dict is not None: - lineno = frame.f_lineno - - if self.trace_arcs: - self.cur_file_dict[(self.last_line, lineno)] = None - else: - self.cur_file_dict[lineno] = None - self.last_line = lineno - elif event == 'return': - if self.trace_arcs and self.cur_file_dict: - # Record an arc leaving the function, but beware that a - # "return" event might just mean yielding from a generator. - # Jython seems to have an empty co_code, so just assume return. - code = frame.f_code.co_code - if (not code) or code[frame.f_lasti] != YIELD_VALUE: - first = frame.f_code.co_firstlineno - self.cur_file_dict[(self.last_line, -first)] = None - # Leaving this function, pop the filename stack. - self.cur_file_dict, self.cur_file_name, self.last_line, self.started_context = ( - self.data_stack.pop() - ) - # Leaving a context? - if self.started_context: - self.context = None - self.switch_context(None) - elif event == 'exception': - self.last_exc_back = frame.f_back - self.last_exc_firstlineno = frame.f_code.co_firstlineno - return self._trace - - def start(self): - """Start this Tracer. - - Return a Python function suitable for use with sys.settrace(). - - """ - self.stopped = False - if self.threading: - if self.thread is None: - self.thread = self.threading.currentThread() - else: - if self.thread.ident != self.threading.currentThread().ident: - # Re-starting from a different thread!? Don't set the trace - # function, but we are marked as running again, so maybe it - # will be ok? - #self.log("~", "starting on different threads") - return self._trace - - sys.settrace(self._trace) - return self._trace - - def stop(self): - """Stop this Tracer.""" - # Get the active tracer callback before setting the stop flag to be - # able to detect if the tracer was changed prior to stopping it. - tf = sys.gettrace() - - # Set the stop flag. The actual call to sys.settrace(None) will happen - # in the self._trace callback itself to make sure to call it from the - # right thread. 
- self.stopped = True - - if self.threading and self.thread.ident != self.threading.currentThread().ident: - # Called on a different thread than started us: we can't unhook - # ourselves, but we've set the flag that we should stop, so we - # won't do any more tracing. - #self.log("~", "stopping on different threads") - return - - if self.warn: - # PyPy clears the trace function before running atexit functions, - # so don't warn if we are in atexit on PyPy and the trace function - # has changed to None. - dont_warn = (env.PYPY and env.PYPYVERSION >= (5, 4) and self.in_atexit and tf is None) - if (not dont_warn) and tf != self._trace: # pylint: disable=comparison-with-callable - self.warn( - "Trace function changed, measurement is likely wrong: %r" % (tf,), - slug="trace-changed", - ) - - def activity(self): - """Has there been any activity?""" - return self._activity - - def reset_activity(self): - """Reset the activity() flag.""" - self._activity = False - - def get_stats(self): - """Return a dictionary of statistics, or None.""" - return None diff --git a/contrib/python/coverage/py2/coverage/report.py b/contrib/python/coverage/py2/coverage/report.py deleted file mode 100644 index 64678ff95df..00000000000 --- a/contrib/python/coverage/py2/coverage/report.py +++ /dev/null @@ -1,86 +0,0 @@ -# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 -# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt - -"""Reporter foundation for coverage.py.""" -import sys - -from coverage import env -from coverage.files import prep_patterns, FnmatchMatcher -from coverage.misc import CoverageException, NoSource, NotPython, ensure_dir_for_file, file_be_gone - - -def render_report(output_path, reporter, morfs): - """Run the provided reporter ensuring any required setup and cleanup is done - - At a high level this method ensures the output file is ready to be written to. Then writes the - report to it. 
Then closes the file and deletes any garbage created if necessary. - """ - file_to_close = None - delete_file = False - if output_path: - if output_path == '-': - outfile = sys.stdout - else: - # Ensure that the output directory is created; done here - # because this report pre-opens the output file. - # HTMLReport does this using the Report plumbing because - # its task is more complex, being multiple files. - ensure_dir_for_file(output_path) - open_kwargs = {} - if env.PY3: - open_kwargs['encoding'] = 'utf8' - outfile = open(output_path, "w", **open_kwargs) - file_to_close = outfile - try: - return reporter.report(morfs, outfile=outfile) - except CoverageException: - delete_file = True - raise - finally: - if file_to_close: - file_to_close.close() - if delete_file: - file_be_gone(output_path) - - -def get_analysis_to_report(coverage, morfs): - """Get the files to report on. - - For each morf in `morfs`, if it should be reported on (based on the omit - and include configuration options), yield a pair, the `FileReporter` and - `Analysis` for the morf. - - """ - file_reporters = coverage._get_file_reporters(morfs) - config = coverage.config - - if config.report_include: - matcher = FnmatchMatcher(prep_patterns(config.report_include)) - file_reporters = [fr for fr in file_reporters if matcher.match(fr.filename)] - - if config.report_omit: - matcher = FnmatchMatcher(prep_patterns(config.report_omit)) - file_reporters = [fr for fr in file_reporters if not matcher.match(fr.filename)] - - if not file_reporters: - raise CoverageException("No data to report.") - - for fr in sorted(file_reporters): - try: - analysis = coverage._analyze(fr) - except NoSource: - if not config.ignore_errors: - raise - except NotPython: - # Only report errors for .py files, and only if we didn't - # explicitly suppress those errors. - # NotPython is only raised by PythonFileReporter, which has a - # should_be_python() method. 
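`get_analysis_to_report` filters file reporters with fnmatch-style include/omit patterns. The selection logic can be sketched directly with the stdlib (`FnmatchMatcher` and `prep_patterns` add path canonicalization not shown here, and the file names are made up):

```python
import fnmatch

filenames = ["src/a.py", "src/b.py", "tests/test_a.py"]
include = ["src/*"]
omit = ["*/b.py"]

# Keep a file only if it matches some include pattern and no omit pattern.
kept = [
    f for f in filenames
    if any(fnmatch.fnmatch(f, p) for p in include)
    and not any(fnmatch.fnmatch(f, p) for p in omit)
]
```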
- if fr.should_be_python(): - if config.ignore_errors: - msg = "Couldn't parse Python file '{}'".format(fr.filename) - coverage._warn(msg, slug="couldnt-parse") - else: - raise - else: - yield (fr, analysis) diff --git a/contrib/python/coverage/py2/coverage/results.py b/contrib/python/coverage/py2/coverage/results.py deleted file mode 100644 index 4916864df34..00000000000 --- a/contrib/python/coverage/py2/coverage/results.py +++ /dev/null @@ -1,343 +0,0 @@ -# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 -# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt - -"""Results of coverage measurement.""" - -import collections - -from coverage.backward import iitems -from coverage.debug import SimpleReprMixin -from coverage.misc import contract, CoverageException, nice_pair - - -class Analysis(object): - """The results of analyzing a FileReporter.""" - - def __init__(self, data, file_reporter, file_mapper): - self.data = data - self.file_reporter = file_reporter - self.filename = file_mapper(self.file_reporter.filename) - self.statements = self.file_reporter.lines() - self.excluded = self.file_reporter.excluded_lines() - - # Identify missing statements. 
- executed = self.data.lines(self.filename) or [] - executed = self.file_reporter.translate_lines(executed) - self.executed = executed - self.missing = self.statements - self.executed - - if self.data.has_arcs(): - self._arc_possibilities = sorted(self.file_reporter.arcs()) - self.exit_counts = self.file_reporter.exit_counts() - self.no_branch = self.file_reporter.no_branch_lines() - n_branches = self._total_branches() - mba = self.missing_branch_arcs() - n_partial_branches = sum(len(v) for k,v in iitems(mba) if k not in self.missing) - n_missing_branches = sum(len(v) for k,v in iitems(mba)) - else: - self._arc_possibilities = [] - self.exit_counts = {} - self.no_branch = set() - n_branches = n_partial_branches = n_missing_branches = 0 - - self.numbers = Numbers( - n_files=1, - n_statements=len(self.statements), - n_excluded=len(self.excluded), - n_missing=len(self.missing), - n_branches=n_branches, - n_partial_branches=n_partial_branches, - n_missing_branches=n_missing_branches, - ) - - def missing_formatted(self, branches=False): - """The missing line numbers, formatted nicely. - - Returns a string like "1-2, 5-11, 13-14". - - If `branches` is true, includes the missing branch arcs also. 
- - """ - if branches and self.has_arcs(): - arcs = iitems(self.missing_branch_arcs()) - else: - arcs = None - - return format_lines(self.statements, self.missing, arcs=arcs) - - def has_arcs(self): - """Were arcs measured in this result?""" - return self.data.has_arcs() - - @contract(returns='list(tuple(int, int))') - def arc_possibilities(self): - """Returns a sorted list of the arcs in the code.""" - return self._arc_possibilities - - @contract(returns='list(tuple(int, int))') - def arcs_executed(self): - """Returns a sorted list of the arcs actually executed in the code.""" - executed = self.data.arcs(self.filename) or [] - executed = self.file_reporter.translate_arcs(executed) - return sorted(executed) - - @contract(returns='list(tuple(int, int))') - def arcs_missing(self): - """Returns a sorted list of the arcs in the code not executed.""" - possible = self.arc_possibilities() - executed = self.arcs_executed() - missing = ( - p for p in possible - if p not in executed - and p[0] not in self.no_branch - ) - return sorted(missing) - - @contract(returns='list(tuple(int, int))') - def arcs_unpredicted(self): - """Returns a sorted list of the executed arcs missing from the code.""" - possible = self.arc_possibilities() - executed = self.arcs_executed() - # Exclude arcs here which connect a line to itself. They can occur - # in executed data in some cases. This is where they can cause - # trouble, and here is where it's the least burden to remove them. - # Also, generators can somehow cause arcs from "enter" to "exit", so - # make sure we have at least one positive value. 
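`arcs_missing` is a filtered difference: possible arcs not executed, excluding arcs that start on a no-branch line. Running the same expression on hypothetical arc data:

```python
possible = [(1, 2), (2, 3), (2, 5), (5, -1)]   # arcs the parser found
executed = [(1, 2), (2, 3), (5, -1)]           # arcs actually taken
no_branch = {5}                                # lines excluded from branching

# Same filter as arcs_missing: not executed, and not from a no-branch line.
missing = sorted(
    p for p in possible
    if p not in executed and p[0] not in no_branch
)
```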
- unpredicted = ( - e for e in executed - if e not in possible - and e[0] != e[1] - and (e[0] > 0 or e[1] > 0) - ) - return sorted(unpredicted) - - def _branch_lines(self): - """Returns a list of line numbers that have more than one exit.""" - return [l1 for l1,count in iitems(self.exit_counts) if count > 1] - - def _total_branches(self): - """How many total branches are there?""" - return sum(count for count in self.exit_counts.values() if count > 1) - - @contract(returns='dict(int: list(int))') - def missing_branch_arcs(self): - """Return arcs that weren't executed from branch lines. - - Returns {l1:[l2a,l2b,...], ...} - - """ - missing = self.arcs_missing() - branch_lines = set(self._branch_lines()) - mba = collections.defaultdict(list) - for l1, l2 in missing: - if l1 in branch_lines: - mba[l1].append(l2) - return mba - - @contract(returns='dict(int: tuple(int, int))') - def branch_stats(self): - """Get stats about branches. - - Returns a dict mapping line numbers to a tuple: - (total_exits, taken_exits). - """ - - missing_arcs = self.missing_branch_arcs() - stats = {} - for lnum in self._branch_lines(): - exits = self.exit_counts[lnum] - missing = len(missing_arcs[lnum]) - stats[lnum] = (exits, exits - missing) - return stats - - -class Numbers(SimpleReprMixin): - """The numerical results of measuring coverage. - - This holds the basic statistics from `Analysis`, and is used to roll - up statistics across files. - - """ - # A global to determine the precision on coverage percentages, the number - # of decimal places. - _precision = 0 - _near0 = 1.0 # These will change when _precision is changed. 
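`missing_branch_arcs` and `branch_stats` combine exit counts with the missing arcs; a compact sketch with made-up line numbers:

```python
import collections

exit_counts = {2: 2, 7: 3, 9: 1}           # line -> number of exits
missing_arcs = [(2, 5), (7, 8), (7, 12)]   # arcs never taken

# Branch lines are those with more than one exit.
branch_lines = {line for line, count in exit_counts.items() if count > 1}

# missing_branch_arcs: {branch_line: [untaken destinations]}
mba = collections.defaultdict(list)
for l1, l2 in missing_arcs:
    if l1 in branch_lines:
        mba[l1].append(l2)

# branch_stats: {branch_line: (total_exits, taken_exits)}
stats = {
    line: (exit_counts[line], exit_counts[line] - len(mba[line]))
    for line in branch_lines
}
```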
- _near100 = 99.0 - - def __init__(self, n_files=0, n_statements=0, n_excluded=0, n_missing=0, - n_branches=0, n_partial_branches=0, n_missing_branches=0 - ): - self.n_files = n_files - self.n_statements = n_statements - self.n_excluded = n_excluded - self.n_missing = n_missing - self.n_branches = n_branches - self.n_partial_branches = n_partial_branches - self.n_missing_branches = n_missing_branches - - def init_args(self): - """Return a list for __init__(*args) to recreate this object.""" - return [ - self.n_files, self.n_statements, self.n_excluded, self.n_missing, - self.n_branches, self.n_partial_branches, self.n_missing_branches, - ] - - @classmethod - def set_precision(cls, precision): - """Set the number of decimal places used to report percentages.""" - assert 0 <= precision < 10 - cls._precision = precision - cls._near0 = 1.0 / 10**precision - cls._near100 = 100.0 - cls._near0 - - @property - def n_executed(self): - """Returns the number of executed statements.""" - return self.n_statements - self.n_missing - - @property - def n_executed_branches(self): - """Returns the number of executed branches.""" - return self.n_branches - self.n_missing_branches - - @property - def pc_covered(self): - """Returns a single percentage value for coverage.""" - if self.n_statements > 0: - numerator, denominator = self.ratio_covered - pc_cov = (100.0 * numerator) / denominator - else: - pc_cov = 100.0 - return pc_cov - - @property - def pc_covered_str(self): - """Returns the percent covered, as a string, without a percent sign. - - Note that "0" is only returned when the value is truly zero, and "100" - is only returned when the value is truly 100. Rounding can never - result in either "0" or "100". 
- - """ - pc = self.pc_covered - if 0 < pc < self._near0: - pc = self._near0 - elif self._near100 < pc < 100: - pc = self._near100 - else: - pc = round(pc, self._precision) - return "%.*f" % (self._precision, pc) - - @classmethod - def pc_str_width(cls): - """How many characters wide can pc_covered_str be?""" - width = 3 # "100" - if cls._precision > 0: - width += 1 + cls._precision - return width - - @property - def ratio_covered(self): - """Return a numerator and denominator for the coverage ratio.""" - numerator = self.n_executed + self.n_executed_branches - denominator = self.n_statements + self.n_branches - return numerator, denominator - - def __add__(self, other): - nums = Numbers() - nums.n_files = self.n_files + other.n_files - nums.n_statements = self.n_statements + other.n_statements - nums.n_excluded = self.n_excluded + other.n_excluded - nums.n_missing = self.n_missing + other.n_missing - nums.n_branches = self.n_branches + other.n_branches - nums.n_partial_branches = ( - self.n_partial_branches + other.n_partial_branches - ) - nums.n_missing_branches = ( - self.n_missing_branches + other.n_missing_branches - ) - return nums - - def __radd__(self, other): - # Implementing 0+Numbers allows us to sum() a list of Numbers. - if other == 0: - return self - return NotImplemented # pragma: not covered (we never call it this way) - - -def _line_ranges(statements, lines): - """Produce a list of ranges for `format_lines`.""" - statements = sorted(statements) - lines = sorted(lines) - - pairs = [] - start = None - lidx = 0 - for stmt in statements: - if lidx >= len(lines): - break - if stmt == lines[lidx]: - lidx += 1 - if not start: - start = stmt - end = stmt - elif start: - pairs.append((start, end)) - start = None - if start: - pairs.append((start, end)) - return pairs - - -def format_lines(statements, lines, arcs=None): - """Nicely format a list of line numbers. 
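The `_line_ranges` helper above can be exercised on its own. A near-verbatim standalone sketch to show the coalescing behavior, using the same sample data as the `format_lines` docstring:

```python
def line_ranges(statements, lines):
    """Coalesce `lines` into (start, end) pairs, bridging gaps in
    `statements` -- a standalone sketch of _line_ranges above."""
    statements, lines = sorted(statements), sorted(lines)
    pairs, start, lidx = [], None, 0
    for stmt in statements:
        if lidx >= len(lines):
            break
        if stmt == lines[lidx]:
            lidx += 1
            if not start:
                start = stmt
            end = stmt
        elif start:
            pairs.append((start, end))
            start = None
    if start:
        pairs.append((start, end))
    return pairs

# Statements 3-4 and 12 sit between chosen lines, so 5..11 coalesces
# into one range even though not every statement in it is in `lines`.
print(line_ranges([1, 2, 3, 4, 5, 10, 11, 12, 13, 14],
                  [1, 2, 5, 10, 11, 13, 14]))
# [(1, 2), (5, 11), (13, 14)]  -> formatted as "1-2, 5-11, 13-14"
```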
- - Format a list of line numbers for printing by coalescing groups of lines as - long as the lines represent consecutive statements. This will coalesce - even if there are gaps between statements. - - For example, if `statements` is [1,2,3,4,5,10,11,12,13,14] and - `lines` is [1,2,5,10,11,13,14] then the result will be "1-2, 5-11, 13-14". - - Both `lines` and `statements` can be any iterable. All of the elements of - `lines` must be in `statements`, and all of the values must be positive - integers. - - If `arcs` is provided, they are (start,[end,end,end]) pairs that will be - included in the output as long as start isn't in `lines`. - - """ - line_items = [(pair[0], nice_pair(pair)) for pair in _line_ranges(statements, lines)] - if arcs: - line_exits = sorted(arcs) - for line, exits in line_exits: - for ex in sorted(exits): - if line not in lines and ex not in lines: - dest = (ex if ex > 0 else "exit") - line_items.append((line, "%d->%s" % (line, dest))) - - ret = ', '.join(t[-1] for t in sorted(line_items)) - return ret - - -@contract(total='number', fail_under='number', precision=int, returns=bool) -def should_fail_under(total, fail_under, precision): - """Determine if a total should fail due to fail-under. - - `total` is a float, the coverage measurement total. `fail_under` is the - fail_under setting to compare with. `precision` is the number of digits - to consider after the decimal point. - - Returns True if the total should fail. - - """ - # We can never achieve higher than 100% coverage, or less than zero. - if not (0 <= fail_under <= 100.0): - msg = "fail_under={} is invalid. Must be between 0 and 100.".format(fail_under) - raise CoverageException(msg) - - # Special case for fail_under=100, it must really be 100. 
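As a standalone sketch, the whole fail-under check reads roughly as follows, mirroring the range validation and the fail_under=100 special case described above (`ValueError` stands in for `CoverageException` to keep the sketch self-contained):

```python
def should_fail_under(total, fail_under, precision):
    """True if `total` coverage should fail the fail-under check.

    fail_under=100 is strict; otherwise `total` is rounded to
    `precision` digits before comparing.
    """
    if not (0 <= fail_under <= 100.0):
        raise ValueError("fail_under=%r is invalid. Must be between 0 and 100." % (fail_under,))
    if fail_under == 100.0 and total != 100.0:
        return True
    return round(total, precision) < fail_under

# 89.9 rounds up to 90 at precision 0, so it passes fail_under=90 ...
print(should_fail_under(89.9, 90, 0))    # False
# ... but at precision 1 it stays 89.9 and fails.
print(should_fail_under(89.9, 90, 1))    # True
# fail_under=100 must really be 100: 99.999 is not good enough.
print(should_fail_under(99.999, 100, 2)) # True
```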
- if fail_under == 100.0 and total != 100.0: - return True - - return round(total, precision) < fail_under diff --git a/contrib/python/coverage/py2/coverage/sqldata.py b/contrib/python/coverage/py2/coverage/sqldata.py deleted file mode 100644 index a150fdfd0ff..00000000000 --- a/contrib/python/coverage/py2/coverage/sqldata.py +++ /dev/null @@ -1,1123 +0,0 @@ -# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 -# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt - -"""Sqlite coverage data.""" - -# TODO: factor out dataop debugging to a wrapper class? -# TODO: make sure all dataop debugging is in place somehow - -import collections -import datetime -import glob -import itertools -import os -import re -import sqlite3 -import sys -import zlib - -from coverage import env -from coverage.backward import get_thread_id, iitems, to_bytes, to_string -from coverage.debug import NoDebugging, SimpleReprMixin, clipped_repr -from coverage.files import PathAliases -from coverage.misc import CoverageException, contract, file_be_gone, filename_suffix, isolate_module -from coverage.numbits import numbits_to_nums, numbits_union, nums_to_numbits -from coverage.version import __version__ - -os = isolate_module(os) - -# If you change the schema, increment the SCHEMA_VERSION, and update the -# docs in docs/dbschema.rst also. - -SCHEMA_VERSION = 7 - -# Schema versions: -# 1: Released in 5.0a2 -# 2: Added contexts in 5.0a3. -# 3: Replaced line table with line_map table. -# 4: Changed line_map.bitmap to line_map.numbits. -# 5: Added foreign key declarations. -# 6: Key-value in meta. -# 7: line_map -> line_bits - -SCHEMA = """\ -CREATE TABLE coverage_schema ( - -- One row, to record the version of the schema in this db. - version integer -); - -CREATE TABLE meta ( - -- Key-value pairs, to record metadata about the data - key text, - value text, - unique (key) - -- Keys: - -- 'has_arcs' boolean -- Is this data recording branches? 
- -- 'sys_argv' text -- The coverage command line that recorded the data. - -- 'version' text -- The version of coverage.py that made the file. - -- 'when' text -- Datetime when the file was created. -); - -CREATE TABLE file ( - -- A row per file measured. - id integer primary key, - path text, - unique (path) -); - -CREATE TABLE context ( - -- A row per context measured. - id integer primary key, - context text, - unique (context) -); - -CREATE TABLE line_bits ( - -- If recording lines, a row per context per file executed. - -- All of the line numbers for that file/context are in one numbits. - file_id integer, -- foreign key to `file`. - context_id integer, -- foreign key to `context`. - numbits blob, -- see the numbits functions in coverage.numbits - foreign key (file_id) references file (id), - foreign key (context_id) references context (id), - unique (file_id, context_id) -); - -CREATE TABLE arc ( - -- If recording branches, a row per context per from/to line transition executed. - file_id integer, -- foreign key to `file`. - context_id integer, -- foreign key to `context`. - fromno integer, -- line number jumped from. - tono integer, -- line number jumped to. - foreign key (file_id) references file (id), - foreign key (context_id) references context (id), - unique (file_id, context_id, fromno, tono) -); - -CREATE TABLE tracer ( - -- A row per file indicating the tracer used for that file. - file_id integer primary key, - tracer text, - foreign key (file_id) references file (id) -); -""" - -class CoverageData(SimpleReprMixin): - """Manages collected coverage data, including file storage. - - This class is the public supported API to the data that coverage.py - collects during program execution. It includes information about what code - was executed. It does not include information from the analysis phase, to - determine what lines could have been executed, or what lines were not - executed. - - .. 
note:: - - The data file is currently a SQLite database file, with a - :ref:`documented schema <dbschema>`. The schema is subject to change - though, so be careful about querying it directly. Use this API if you - can to isolate yourself from changes. - - There are a number of kinds of data that can be collected: - - * **lines**: the line numbers of source lines that were executed. - These are always available. - - * **arcs**: pairs of source and destination line numbers for transitions - between source lines. These are only available if branch coverage was - used. - - * **file tracer names**: the module names of the file tracer plugins that - handled each file in the data. - - Lines, arcs, and file tracer names are stored for each source file. File - names in this API are case-sensitive, even on platforms with - case-insensitive file systems. - - A data file either stores lines, or arcs, but not both. - - A data file is associated with the data when the :class:`CoverageData` - is created, using the parameters `basename`, `suffix`, and `no_disk`. The - base name can be queried with :meth:`base_filename`, and the actual file - name being used is available from :meth:`data_filename`. - - To read an existing coverage.py data file, use :meth:`read`. You can then - access the line, arc, or file tracer data with :meth:`lines`, :meth:`arcs`, - or :meth:`file_tracer`. - - The :meth:`has_arcs` method indicates whether arc data is available. You - can get a set of the files in the data with :meth:`measured_files`. As - with most Python containers, you can determine if there is any data at all - by using this object as a boolean value. - - The contexts for each line in a file can be read with - :meth:`contexts_by_lineno`. - - To limit querying to certain contexts, use :meth:`set_query_context` or - :meth:`set_query_contexts`. These will narrow the focus of subsequent - :meth:`lines`, :meth:`arcs`, and :meth:`contexts_by_lineno` calls. 
The set - of all measured context names can be retrieved with - :meth:`measured_contexts`. - - Most data files will be created by coverage.py itself, but you can use - methods here to create data files if you like. The :meth:`add_lines`, - :meth:`add_arcs`, and :meth:`add_file_tracers` methods add data, in ways - that are convenient for coverage.py. - - To record data for contexts, use :meth:`set_context` to set a context to - be used for subsequent :meth:`add_lines` and :meth:`add_arcs` calls. - - To add a source file without any measured data, use :meth:`touch_file`, - or :meth:`touch_files` for a list of such files. - - Write the data to its file with :meth:`write`. - - You can clear the data in memory with :meth:`erase`. Two data collections - can be combined by using :meth:`update` on one :class:`CoverageData`, - passing it the other. - - Data in a :class:`CoverageData` can be serialized and deserialized with - :meth:`dumps` and :meth:`loads`. - - """ - - def __init__(self, basename=None, suffix=None, no_disk=False, warn=None, debug=None): - """Create a :class:`CoverageData` object to hold coverage-measured data. - - Arguments: - basename (str): the base name of the data file, defaulting to - ".coverage". - suffix (str or bool): has the same meaning as the `data_suffix` - argument to :class:`coverage.Coverage`. - no_disk (bool): if True, keep all data in memory, and don't - write any disk file. - warn: a warning callback function, accepting a warning message - argument. - debug: a `DebugControl` object (optional) - - """ - self._no_disk = no_disk - self._basename = os.path.abspath(basename or ".coverage") - self._suffix = suffix - self._warn = warn - self._debug = debug or NoDebugging() - - self._choose_filename() - self._file_map = {} - # Maps thread ids to SqliteDb objects. - self._dbs = {} - self._pid = os.getpid() - - # Are we in sync with the data file? 
- self._have_used = False - - self._has_lines = False - self._has_arcs = False - - self._current_context = None - self._current_context_id = None - self._query_context_ids = None - - def _choose_filename(self): - """Set self._filename based on inited attributes.""" - if self._no_disk: - self._filename = ":memory:" - else: - self._filename = self._basename - suffix = filename_suffix(self._suffix) - if suffix: - self._filename += "." + suffix - - def _reset(self): - """Reset our attributes.""" - if self._dbs: - for db in self._dbs.values(): - db.close() - self._dbs = {} - self._file_map = {} - self._have_used = False - self._current_context_id = None - - def _create_db(self): - """Create a db file that doesn't exist yet. - - Initializes the schema and certain metadata. - """ - if self._debug.should('dataio'): - self._debug.write("Creating data file {!r}".format(self._filename)) - self._dbs[get_thread_id()] = db = SqliteDb(self._filename, self._debug) - with db: - db.executescript(SCHEMA) - db.execute("insert into coverage_schema (version) values (?)", (SCHEMA_VERSION,)) - db.executemany( - "insert into meta (key, value) values (?, ?)", - [ - ('sys_argv', str(getattr(sys, 'argv', None))), - ('version', __version__), - ('when', datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')), - ] - ) - - def _open_db(self): - """Open an existing db file, and read its metadata.""" - if self._debug.should('dataio'): - self._debug.write("Opening data file {!r}".format(self._filename)) - self._dbs[get_thread_id()] = SqliteDb(self._filename, self._debug) - self._read_db() - - def _read_db(self): - """Read the metadata from a database so that we are ready to use it.""" - with self._dbs[get_thread_id()] as db: - try: - schema_version, = db.execute_one("select version from coverage_schema") - except Exception as exc: - raise CoverageException( - "Data file {!r} doesn't seem to be a coverage data file: {}".format( - self._filename, exc - ) - ) - else: - if schema_version != 
SCHEMA_VERSION: - raise CoverageException( - "Couldn't use data file {!r}: wrong schema: {} instead of {}".format( - self._filename, schema_version, SCHEMA_VERSION - ) - ) - - for row in db.execute("select value from meta where key = 'has_arcs'"): - self._has_arcs = bool(int(row[0])) - self._has_lines = not self._has_arcs - - for path, file_id in db.execute("select path, id from file"): - self._file_map[path] = file_id - - def _connect(self): - """Get the SqliteDb object to use.""" - if get_thread_id() not in self._dbs: - if os.path.exists(self._filename): - self._open_db() - else: - self._create_db() - return self._dbs[get_thread_id()] - - def __nonzero__(self): - if (get_thread_id() not in self._dbs and not os.path.exists(self._filename)): - return False - try: - with self._connect() as con: - rows = con.execute("select * from file limit 1") - return bool(list(rows)) - except CoverageException: - return False - - __bool__ = __nonzero__ - - @contract(returns='bytes') - def dumps(self): - """Serialize the current data to a byte string. - - The format of the serialized data is not documented. It is only - suitable for use with :meth:`loads` in the same version of - coverage.py. - - Returns: - A byte string of serialized data. - - .. versionadded:: 5.0 - - """ - if self._debug.should('dataio'): - self._debug.write("Dumping data from data file {!r}".format(self._filename)) - with self._connect() as con: - return b'z' + zlib.compress(to_bytes(con.dump())) - - @contract(data='bytes') - def loads(self, data): - """Deserialize data from :meth:`dumps` - - Use with a newly-created empty :class:`CoverageData` object. It's - undefined what happens if the object already has data in it. - - Arguments: - data: A byte string of serialized data produced by :meth:`dumps`. - - .. 
versionadded:: 5.0 - - """ - if self._debug.should('dataio'): - self._debug.write("Loading data into data file {!r}".format(self._filename)) - if data[:1] != b'z': - raise CoverageException( - "Unrecognized serialization: {!r} (head of {} bytes)".format(data[:40], len(data)) - ) - script = to_string(zlib.decompress(data[1:])) - self._dbs[get_thread_id()] = db = SqliteDb(self._filename, self._debug) - with db: - db.executescript(script) - self._read_db() - self._have_used = True - - def _file_id(self, filename, add=False): - """Get the file id for `filename`. - - If filename is not in the database yet, add it if `add` is True. - If `add` is not True, return None. - """ - if filename not in self._file_map: - if add: - with self._connect() as con: - cur = con.execute("insert or replace into file (path) values (?)", (filename,)) - self._file_map[filename] = cur.lastrowid - return self._file_map.get(filename) - - def _context_id(self, context): - """Get the id for a context.""" - assert context is not None - self._start_using() - with self._connect() as con: - row = con.execute_one("select id from context where context = ?", (context,)) - if row is not None: - return row[0] - else: - return None - - def set_context(self, context): - """Set the current context for future :meth:`add_lines` etc. - - `context` is a str, the name of the context to use for the next data - additions. The context persists until the next :meth:`set_context`. - - .. 
versionadded:: 5.0 - - """ - if self._debug.should('dataop'): - self._debug.write("Setting context: %r" % (context,)) - self._current_context = context - self._current_context_id = None - - def _set_context_id(self): - """Use the _current_context to set _current_context_id.""" - context = self._current_context or "" - context_id = self._context_id(context) - if context_id is not None: - self._current_context_id = context_id - else: - with self._connect() as con: - cur = con.execute("insert into context (context) values (?)", (context,)) - self._current_context_id = cur.lastrowid - - def base_filename(self): - """The base filename for storing data. - - .. versionadded:: 5.0 - - """ - return self._basename - - def data_filename(self): - """Where is the data stored? - - .. versionadded:: 5.0 - - """ - return self._filename - - def add_lines(self, line_data): - """Add measured line data. - - `line_data` is a dictionary mapping file names to dictionaries:: - - { filename: { lineno: None, ... }, ...} - - """ - if self._debug.should('dataop'): - self._debug.write("Adding lines: %d files, %d lines total" % ( - len(line_data), sum(len(lines) for lines in line_data.values()) - )) - self._start_using() - self._choose_lines_or_arcs(lines=True) - if not line_data: - return - with self._connect() as con: - self._set_context_id() - for filename, linenos in iitems(line_data): - linemap = nums_to_numbits(linenos) - file_id = self._file_id(filename, add=True) - query = "select numbits from line_bits where file_id = ? and context_id = ?" - existing = list(con.execute(query, (file_id, self._current_context_id))) - if existing: - linemap = numbits_union(linemap, existing[0][0]) - - con.execute( - "insert or replace into line_bits " - " (file_id, context_id, numbits) values (?, ?, ?)", - (file_id, self._current_context_id, linemap), - ) - - def add_arcs(self, arc_data): - """Add measured arc data. 
- - `arc_data` is a dictionary mapping file names to dictionaries:: - - { filename: { (l1,l2): None, ... }, ...} - - """ - if self._debug.should('dataop'): - self._debug.write("Adding arcs: %d files, %d arcs total" % ( - len(arc_data), sum(len(arcs) for arcs in arc_data.values()) - )) - self._start_using() - self._choose_lines_or_arcs(arcs=True) - if not arc_data: - return - with self._connect() as con: - self._set_context_id() - for filename, arcs in iitems(arc_data): - file_id = self._file_id(filename, add=True) - data = [(file_id, self._current_context_id, fromno, tono) for fromno, tono in arcs] - con.executemany( - "insert or ignore into arc " - "(file_id, context_id, fromno, tono) values (?, ?, ?, ?)", - data, - ) - - def _choose_lines_or_arcs(self, lines=False, arcs=False): - """Force the data file to choose between lines and arcs.""" - assert lines or arcs - assert not (lines and arcs) - if lines and self._has_arcs: - raise CoverageException("Can't add line measurements to existing branch data") - if arcs and self._has_lines: - raise CoverageException("Can't add branch measurements to existing line data") - if not self._has_arcs and not self._has_lines: - self._has_lines = lines - self._has_arcs = arcs - with self._connect() as con: - con.execute( - "insert into meta (key, value) values (?, ?)", - ('has_arcs', str(int(arcs))) - ) - - def add_file_tracers(self, file_tracers): - """Add per-file plugin information. - - `file_tracers` is { filename: plugin_name, ... 
} - - """ - if self._debug.should('dataop'): - self._debug.write("Adding file tracers: %d files" % (len(file_tracers),)) - if not file_tracers: - return - self._start_using() - with self._connect() as con: - for filename, plugin_name in iitems(file_tracers): - file_id = self._file_id(filename) - if file_id is None: - raise CoverageException( - "Can't add file tracer data for unmeasured file '%s'" % (filename,) - ) - - existing_plugin = self.file_tracer(filename) - if existing_plugin: - if existing_plugin != plugin_name: - raise CoverageException( - "Conflicting file tracer name for '%s': %r vs %r" % ( - filename, existing_plugin, plugin_name, - ) - ) - elif plugin_name: - con.execute( - "insert into tracer (file_id, tracer) values (?, ?)", - (file_id, plugin_name) - ) - - def touch_file(self, filename, plugin_name=""): - """Ensure that `filename` appears in the data, empty if needed. - - `plugin_name` is the name of the plugin responsible for this file. It is used - to associate the right filereporter, etc. - """ - self.touch_files([filename], plugin_name) - - def touch_files(self, filenames, plugin_name=""): - """Ensure that `filenames` appear in the data, empty if needed. - - `plugin_name` is the name of the plugin responsible for these files. It is used - to associate the right filereporter, etc. - """ - if self._debug.should('dataop'): - self._debug.write("Touching %r" % (filenames,)) - self._start_using() - with self._connect(): # Use this to get one transaction. - if not self._has_arcs and not self._has_lines: - raise CoverageException("Can't touch files in an empty CoverageData") - - for filename in filenames: - self._file_id(filename, add=True) - if plugin_name: - # Set the tracer for this file - self.add_file_tracers({filename: plugin_name}) - - def update(self, other_data, aliases=None): - """Update this data with data from several other :class:`CoverageData` instances. 
- - If `aliases` is provided, it's a `PathAliases` object that is used to - re-map paths to match the local machine's. - """ - if self._debug.should('dataop'): - self._debug.write("Updating with data from %r" % ( - getattr(other_data, '_filename', '???'), - )) - if self._has_lines and other_data._has_arcs: - raise CoverageException("Can't combine arc data with line data") - if self._has_arcs and other_data._has_lines: - raise CoverageException("Can't combine line data with arc data") - - aliases = aliases or PathAliases() - - # Force the database we're writing to to exist before we start nesting - # contexts. - self._start_using() - - # Collector for all arcs, lines and tracers - other_data.read() - with other_data._connect() as conn: - # Get files data. - cur = conn.execute('select path from file') - files = {path: aliases.map(path) for (path,) in cur} - cur.close() - - # Get contexts data. - cur = conn.execute('select context from context') - contexts = [context for (context,) in cur] - cur.close() - - # Get arc data. - cur = conn.execute( - 'select file.path, context.context, arc.fromno, arc.tono ' - 'from arc ' - 'inner join file on file.id = arc.file_id ' - 'inner join context on context.id = arc.context_id' - ) - arcs = [(files[path], context, fromno, tono) for (path, context, fromno, tono) in cur] - cur.close() - - # Get line data. - cur = conn.execute( - 'select file.path, context.context, line_bits.numbits ' - 'from line_bits ' - 'inner join file on file.id = line_bits.file_id ' - 'inner join context on context.id = line_bits.context_id' - ) - lines = { - (files[path], context): numbits - for (path, context, numbits) in cur - } - cur.close() - - # Get tracer data. 
- cur = conn.execute( - 'select file.path, tracer ' - 'from tracer ' - 'inner join file on file.id = tracer.file_id' - ) - tracers = {files[path]: tracer for (path, tracer) in cur} - cur.close() - - with self._connect() as conn: - conn.con.isolation_level = 'IMMEDIATE' - - # Get all tracers in the DB. Files not in the tracers are assumed - # to have an empty string tracer. Since Sqlite does not support - # full outer joins, we have to make two queries to fill the - # dictionary. - this_tracers = {path: '' for path, in conn.execute('select path from file')} - this_tracers.update({ - aliases.map(path): tracer - for path, tracer in conn.execute( - 'select file.path, tracer from tracer ' - 'inner join file on file.id = tracer.file_id' - ) - }) - - # Create all file and context rows in the DB. - conn.executemany( - 'insert or ignore into file (path) values (?)', - ((file,) for file in files.values()) - ) - file_ids = { - path: id - for id, path in conn.execute('select id, path from file') - } - conn.executemany( - 'insert or ignore into context (context) values (?)', - ((context,) for context in contexts) - ) - context_ids = { - context: id - for id, context in conn.execute('select id, context from context') - } - - # Prepare tracers and fail, if a conflict is found. - # tracer_paths is used to ensure consistency over the tracer data - # and tracer_map tracks the tracers to be inserted. - tracer_map = {} - for path in files.values(): - this_tracer = this_tracers.get(path) - other_tracer = tracers.get(path, '') - # If there is no tracer, there is always the None tracer. - if this_tracer is not None and this_tracer != other_tracer: - raise CoverageException( - "Conflicting file tracer name for '%s': %r vs %r" % ( - path, this_tracer, other_tracer - ) - ) - tracer_map[path] = other_tracer - - # Prepare arc and line rows to be inserted by converting the file - # and context strings with integer ids. Then use the efficient - # `executemany()` to insert all rows at once. 
- arc_rows = ( - (file_ids[file], context_ids[context], fromno, tono) - for file, context, fromno, tono in arcs - ) - - # Get line data. - cur = conn.execute( - 'select file.path, context.context, line_bits.numbits ' - 'from line_bits ' - 'inner join file on file.id = line_bits.file_id ' - 'inner join context on context.id = line_bits.context_id' - ) - for path, context, numbits in cur: - key = (aliases.map(path), context) - if key in lines: - numbits = numbits_union(lines[key], numbits) - lines[key] = numbits - cur.close() - - if arcs: - self._choose_lines_or_arcs(arcs=True) - - # Write the combined data. - conn.executemany( - 'insert or ignore into arc ' - '(file_id, context_id, fromno, tono) values (?, ?, ?, ?)', - arc_rows - ) - - if lines: - self._choose_lines_or_arcs(lines=True) - conn.execute("delete from line_bits") - conn.executemany( - "insert into line_bits " - "(file_id, context_id, numbits) values (?, ?, ?)", - [ - (file_ids[file], context_ids[context], numbits) - for (file, context), numbits in lines.items() - ] - ) - conn.executemany( - 'insert or ignore into tracer (file_id, tracer) values (?, ?)', - ((file_ids[filename], tracer) for filename, tracer in tracer_map.items()) - ) - - # Update all internal cache data. - self._reset() - self.read() - - def erase(self, parallel=False): - """Erase the data in this object. - - If `parallel` is true, then also deletes data files created from the - basename by parallel-mode. 
- - """ - self._reset() - if self._no_disk: - return - if self._debug.should('dataio'): - self._debug.write("Erasing data file {!r}".format(self._filename)) - file_be_gone(self._filename) - if parallel: - data_dir, local = os.path.split(self._filename) - localdot = local + '.*' - pattern = os.path.join(os.path.abspath(data_dir), localdot) - for filename in glob.glob(pattern): - if self._debug.should('dataio'): - self._debug.write("Erasing parallel data file {!r}".format(filename)) - file_be_gone(filename) - - def read(self): - """Start using an existing data file.""" - with self._connect(): # TODO: doesn't look right - self._have_used = True - - def write(self): - """Ensure the data is written to the data file.""" - pass - - def _start_using(self): - """Call this before using the database at all.""" - if self._pid != os.getpid(): - # Looks like we forked! Have to start a new data file. - self._reset() - self._choose_filename() - self._pid = os.getpid() - if not self._have_used: - self.erase() - self._have_used = True - - def has_arcs(self): - """Does the database have arcs (True) or lines (False).""" - return bool(self._has_arcs) - - def measured_files(self): - """A set of all files that had been measured.""" - return set(self._file_map) - - def measured_contexts(self): - """A set of all contexts that have been measured. - - .. versionadded:: 5.0 - - """ - self._start_using() - with self._connect() as con: - contexts = {row[0] for row in con.execute("select distinct(context) from context")} - return contexts - - def file_tracer(self, filename): - """Get the plugin name of the file tracer for a file. - - Returns the name of the plugin that handles this file. If the file was - measured, but didn't use a plugin, then "" is returned. If the file - was not measured, then None is returned. 
- - """ - self._start_using() - with self._connect() as con: - file_id = self._file_id(filename) - if file_id is None: - return None - row = con.execute_one("select tracer from tracer where file_id = ?", (file_id,)) - if row is not None: - return row[0] or "" - return "" # File was measured, but no tracer associated. - - def set_query_context(self, context): - """Set a context for subsequent querying. - - The next :meth:`lines`, :meth:`arcs`, or :meth:`contexts_by_lineno` - calls will be limited to only one context. `context` is a string which - must match a context exactly. If it does not, no exception is raised, - but queries will return no data. - - .. versionadded:: 5.0 - - """ - self._start_using() - with self._connect() as con: - cur = con.execute("select id from context where context = ?", (context,)) - self._query_context_ids = [row[0] for row in cur.fetchall()] - - def set_query_contexts(self, contexts): - """Set a number of contexts for subsequent querying. - - The next :meth:`lines`, :meth:`arcs`, or :meth:`contexts_by_lineno` - calls will be limited to the specified contexts. `contexts` is a list - of Python regular expressions. Contexts will be matched using - :func:`re.search <python:re.search>`. Data will be included in query - results if they are part of any of the contexts matched. - - .. versionadded:: 5.0 - - """ - self._start_using() - if contexts: - with self._connect() as con: - context_clause = ' or '.join(['context regexp ?'] * len(contexts)) - cur = con.execute("select id from context where " + context_clause, contexts) - self._query_context_ids = [row[0] for row in cur.fetchall()] - else: - self._query_context_ids = None - - def lines(self, filename): - """Get the list of lines executed for a file. - - If the file was not measured, returns None. A file might be measured, - and have no lines executed, in which case an empty list is returned. - - If the file was executed, returns a list of integers, the line numbers - executed in the file. 
The list is in no particular order. - - """ - self._start_using() - if self.has_arcs(): - arcs = self.arcs(filename) - if arcs is not None: - all_lines = itertools.chain.from_iterable(arcs) - return list({l for l in all_lines if l > 0}) - - with self._connect() as con: - file_id = self._file_id(filename) - if file_id is None: - return None - else: - query = "select numbits from line_bits where file_id = ?" - data = [file_id] - if self._query_context_ids is not None: - ids_array = ', '.join('?' * len(self._query_context_ids)) - query += " and context_id in (" + ids_array + ")" - data += self._query_context_ids - bitmaps = list(con.execute(query, data)) - nums = set() - for row in bitmaps: - nums.update(numbits_to_nums(row[0])) - return list(nums) - - def arcs(self, filename): - """Get the list of arcs executed for a file. - - If the file was not measured, returns None. A file might be measured, - and have no arcs executed, in which case an empty list is returned. - - If the file was executed, returns a list of 2-tuples of integers. Each - pair is a starting line number and an ending line number for a - transition from one line to another. The list is in no particular - order. - - Negative numbers have special meaning. If the starting line number is - -N, it represents an entry to the code object that starts at line N. - If the ending line number is -N, it's an exit from the code object that - starts at line N. - - """ - self._start_using() - with self._connect() as con: - file_id = self._file_id(filename) - if file_id is None: - return None - else: - query = "select distinct fromno, tono from arc where file_id = ?" - data = [file_id] - if self._query_context_ids is not None: - ids_array = ', '.join('?' * len(self._query_context_ids)) - query += " and context_id in (" + ids_array + ")" - data += self._query_context_ids - arcs = con.execute(query, data) - return list(arcs) - - def contexts_by_lineno(self, filename): - """Get the contexts for each line in a file.
- - Returns: - A dict mapping line numbers to a list of context names. - - .. versionadded:: 5.0 - - """ - lineno_contexts_map = collections.defaultdict(list) - self._start_using() - with self._connect() as con: - file_id = self._file_id(filename) - if file_id is None: - return lineno_contexts_map - if self.has_arcs(): - query = ( - "select arc.fromno, arc.tono, context.context " - "from arc, context " - "where arc.file_id = ? and arc.context_id = context.id" - ) - data = [file_id] - if self._query_context_ids is not None: - ids_array = ', '.join('?' * len(self._query_context_ids)) - query += " and arc.context_id in (" + ids_array + ")" - data += self._query_context_ids - for fromno, tono, context in con.execute(query, data): - if context not in lineno_contexts_map[fromno]: - lineno_contexts_map[fromno].append(context) - if context not in lineno_contexts_map[tono]: - lineno_contexts_map[tono].append(context) - else: - query = ( - "select l.numbits, c.context from line_bits l, context c " - "where l.context_id = c.id " - "and file_id = ?" - ) - data = [file_id] - if self._query_context_ids is not None: - ids_array = ', '.join('?' * len(self._query_context_ids)) - query += " and l.context_id in (" + ids_array + ")" - data += self._query_context_ids - for numbits, context in con.execute(query, data): - for lineno in numbits_to_nums(numbits): - lineno_contexts_map[lineno].append(context) - return lineno_contexts_map - - @classmethod - def sys_info(cls): - """Our information for `Coverage.sys_info`. - - Returns a list of (key, value) pairs. 
- - """ - with SqliteDb(":memory:", debug=NoDebugging()) as db: - temp_store = [row[0] for row in db.execute("pragma temp_store")] - compile_options = [row[0] for row in db.execute("pragma compile_options")] - - return [ - ('sqlite3_version', sqlite3.version), - ('sqlite3_sqlite_version', sqlite3.sqlite_version), - ('sqlite3_temp_store', temp_store), - ('sqlite3_compile_options', compile_options), - ] - - -class SqliteDb(SimpleReprMixin): - """A simple abstraction over a SQLite database. - - Use as a context manager, then you can use it like a - :class:`python:sqlite3.Connection` object:: - - with SqliteDb(filename, debug_control) as db: - db.execute("insert into schema (version) values (?)", (SCHEMA_VERSION,)) - - """ - def __init__(self, filename, debug): - self.debug = debug if debug.should('sql') else None - self.filename = filename - self.nest = 0 - self.con = None - - def _connect(self): - """Connect to the db and do universal initialization.""" - if self.con is not None: - return - - # SQLite on Windows on py2 won't open a file if the filename argument - # has non-ascii characters in it. Opening a relative file name avoids - # a problem if the current directory has non-ascii. - filename = self.filename - if env.WINDOWS and env.PY2: - try: - filename = os.path.relpath(self.filename) - except ValueError: - # ValueError can be raised under Windows when os.getcwd() returns a - # folder from a different drive than the drive of self.filename in - # which case we keep the original value of self.filename unchanged, - # hoping that we won't face the non-ascii directory problem. - pass - - # It can happen that Python switches threads while the tracer writes - # data. The second thread will also try to write to the data, - # effectively causing a nested context. However, given the idempotent - # nature of the tracer operations, sharing a connection among threads - # is not a problem. 
- if self.debug: - self.debug.write("Connecting to {!r}".format(self.filename)) - self.con = sqlite3.connect(filename, check_same_thread=False) - self.con.create_function('REGEXP', 2, _regexp) - - # This pragma makes writing faster. It disables rollbacks, but we never need them. - # PyPy needs the .close() calls here, or sqlite gets twisted up: - # https://bitbucket.org/pypy/pypy/issues/2872/default-isolation-mode-is-different-on - self.execute("pragma journal_mode=off").close() - # This pragma makes writing faster. - self.execute("pragma synchronous=off").close() - - def close(self): - """If needed, close the connection.""" - if self.con is not None and self.filename != ":memory:": - self.con.close() - self.con = None - - def __enter__(self): - if self.nest == 0: - self._connect() - self.con.__enter__() - self.nest += 1 - return self - - def __exit__(self, exc_type, exc_value, traceback): - self.nest -= 1 - if self.nest == 0: - try: - self.con.__exit__(exc_type, exc_value, traceback) - self.close() - except Exception as exc: - if self.debug: - self.debug.write("EXCEPTION from __exit__: {}".format(exc)) - raise - - def execute(self, sql, parameters=()): - """Same as :meth:`python:sqlite3.Connection.execute`.""" - if self.debug: - tail = " with {!r}".format(parameters) if parameters else "" - self.debug.write("Executing {!r}{}".format(sql, tail)) - try: - try: - return self.con.execute(sql, parameters) - except Exception: - # In some cases, an error might happen that isn't really an - # error. Try again immediately. - # https://github.com/nedbat/coveragepy/issues/1010 - return self.con.execute(sql, parameters) - except sqlite3.Error as exc: - msg = str(exc) - try: - # `execute` is the first thing we do with the database, so try - # hard to provide useful hints if something goes wrong now. 
- with open(self.filename, "rb") as bad_file: - cov4_sig = b"!coverage.py: This is a private format" - if bad_file.read(len(cov4_sig)) == cov4_sig: - msg = ( - "Looks like a coverage 4.x data file. " - "Are you mixing versions of coverage?" - ) - except Exception: - pass - if self.debug: - self.debug.write("EXCEPTION from execute: {}".format(msg)) - raise CoverageException("Couldn't use data file {!r}: {}".format(self.filename, msg)) - - def execute_one(self, sql, parameters=()): - """Execute a statement and return the one row that results. - - This is like execute(sql, parameters).fetchone(), except it is - correct in reading the entire result set. This will raise an - exception if more than one row results. - - Returns a row, or None if there were no rows. - """ - rows = list(self.execute(sql, parameters)) - if len(rows) == 0: - return None - elif len(rows) == 1: - return rows[0] - else: - raise CoverageException("Sql {!r} shouldn't return {} rows".format(sql, len(rows))) - - def executemany(self, sql, data): - """Same as :meth:`python:sqlite3.Connection.executemany`.""" - if self.debug: - data = list(data) - self.debug.write("Executing many {!r} with {} rows".format(sql, len(data))) - return self.con.executemany(sql, data) - - def executescript(self, script): - """Same as :meth:`python:sqlite3.Connection.executescript`.""" - if self.debug: - self.debug.write("Executing script with {} chars: {}".format( - len(script), clipped_repr(script, 100), - )) - self.con.executescript(script) - - def dump(self): - """Return a multi-line string, the SQL dump of the database.""" - return "\n".join(self.con.iterdump()) - - -def _regexp(text, pattern): - """A regexp function for SQLite.""" - return re.search(text, pattern) is not None diff --git a/contrib/python/coverage/py2/coverage/summary.py b/contrib/python/coverage/py2/coverage/summary.py deleted file mode 100644 index 65f80470061..00000000000 --- a/contrib/python/coverage/py2/coverage/summary.py +++ /dev/null @@ -1,152 
+0,0 @@ -# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 -# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt - -"""Summary reporting""" - -import sys - -from coverage import env -from coverage.report import get_analysis_to_report -from coverage.results import Numbers -from coverage.misc import CoverageException, output_encoding - - -class SummaryReporter(object): - """A reporter for writing the summary report.""" - - def __init__(self, coverage): - self.coverage = coverage - self.config = self.coverage.config - self.branches = coverage.get_data().has_arcs() - self.outfile = None - self.fr_analysis = [] - self.skipped_count = 0 - self.empty_count = 0 - self.total = Numbers() - self.fmt_err = u"%s %s: %s" - - def writeout(self, line): - """Write a line to the output, adding a newline.""" - if env.PY2: - line = line.encode(output_encoding()) - self.outfile.write(line.rstrip()) - self.outfile.write("\n") - - def report(self, morfs, outfile=None): - """Writes a report summarizing coverage statistics per module. - - `outfile` is a file object to write the summary to. It must be opened - for native strings (bytes on Python 2, Unicode on Python 3). - - """ - self.outfile = outfile or sys.stdout - - self.coverage.get_data().set_query_contexts(self.config.report_contexts) - for fr, analysis in get_analysis_to_report(self.coverage, morfs): - self.report_one_file(fr, analysis) - - # Prepare the formatting strings, header, and column sorting. - max_name = max([len(fr.relative_filename()) for (fr, analysis) in self.fr_analysis] + [5]) - fmt_name = u"%%- %ds " % max_name - fmt_skip_covered = u"\n%s file%s skipped due to complete coverage." - fmt_skip_empty = u"\n%s empty file%s skipped." 
- - header = (fmt_name % "Name") + u" Stmts Miss" - fmt_coverage = fmt_name + u"%6d %6d" - if self.branches: - header += u" Branch BrPart" - fmt_coverage += u" %6d %6d" - width100 = Numbers.pc_str_width() - header += u"%*s" % (width100+4, "Cover") - fmt_coverage += u"%%%ds%%%%" % (width100+3,) - if self.config.show_missing: - header += u" Missing" - fmt_coverage += u" %s" - rule = u"-" * len(header) - - column_order = dict(name=0, stmts=1, miss=2, cover=-1) - if self.branches: - column_order.update(dict(branch=3, brpart=4)) - - # Write the header - self.writeout(header) - self.writeout(rule) - - # `lines` is a list of pairs, (line text, line values). The line text - # is a string that will be printed, and line values is a tuple of - # sortable values. - lines = [] - - for (fr, analysis) in self.fr_analysis: - nums = analysis.numbers - - args = (fr.relative_filename(), nums.n_statements, nums.n_missing) - if self.branches: - args += (nums.n_branches, nums.n_partial_branches) - args += (nums.pc_covered_str,) - if self.config.show_missing: - args += (analysis.missing_formatted(branches=True),) - text = fmt_coverage % args - # Add numeric percent coverage so that sorting makes sense. - args += (nums.pc_covered,) - lines.append((text, args)) - - # Sort the lines and write them out. - if getattr(self.config, 'sort', None): - sort_option = self.config.sort.lower() - reverse = False - if sort_option[0] == '-': - reverse = True - sort_option = sort_option[1:] - elif sort_option[0] == '+': - sort_option = sort_option[1:] - - position = column_order.get(sort_option) - if position is None: - raise CoverageException("Invalid sorting option: {!r}".format(self.config.sort)) - lines.sort(key=lambda l: (l[1][position], l[0]), reverse=reverse) - - for line in lines: - self.writeout(line[0]) - - # Write a TOTAL line if we had at least one file. 
- if self.total.n_files > 0: - self.writeout(rule) - args = ("TOTAL", self.total.n_statements, self.total.n_missing) - if self.branches: - args += (self.total.n_branches, self.total.n_partial_branches) - args += (self.total.pc_covered_str,) - if self.config.show_missing: - args += ("",) - self.writeout(fmt_coverage % args) - - # Write other final lines. - if not self.total.n_files and not self.skipped_count: - raise CoverageException("No data to report.") - - if self.config.skip_covered and self.skipped_count: - self.writeout( - fmt_skip_covered % (self.skipped_count, 's' if self.skipped_count > 1 else '') - ) - if self.config.skip_empty and self.empty_count: - self.writeout( - fmt_skip_empty % (self.empty_count, 's' if self.empty_count > 1 else '') - ) - - return self.total.n_statements and self.total.pc_covered - - def report_one_file(self, fr, analysis): - """Report on just one file, the callback from report().""" - nums = analysis.numbers - self.total += nums - - no_missing_lines = (nums.n_missing == 0) - no_missing_branches = (nums.n_partial_branches == 0) - if self.config.skip_covered and no_missing_lines and no_missing_branches: - # Don't report on 100% files. - self.skipped_count += 1 - elif self.config.skip_empty and nums.n_statements == 0: - # Don't report on empty files. - self.empty_count += 1 - else: - self.fr_analysis.append((fr, analysis)) diff --git a/contrib/python/coverage/py2/coverage/templite.py b/contrib/python/coverage/py2/coverage/templite.py deleted file mode 100644 index 7d4024e0afa..00000000000 --- a/contrib/python/coverage/py2/coverage/templite.py +++ /dev/null @@ -1,302 +0,0 @@ -# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 -# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt - -"""A simple Python template renderer, for a nano-subset of Django syntax. 
- -For a detailed discussion of this code, see this chapter from 500 Lines: -http://aosabook.org/en/500L/a-template-engine.html - -""" - -# Coincidentally named the same as http://code.activestate.com/recipes/496702/ - -import re - -from coverage import env - - -class TempliteSyntaxError(ValueError): - """Raised when a template has a syntax error.""" - pass - - -class TempliteValueError(ValueError): - """Raised when an expression won't evaluate in a template.""" - pass - - -class CodeBuilder(object): - """Build source code conveniently.""" - - def __init__(self, indent=0): - self.code = [] - self.indent_level = indent - - def __str__(self): - return "".join(str(c) for c in self.code) - - def add_line(self, line): - """Add a line of source to the code. - - Indentation and newline will be added for you, don't provide them. - - """ - self.code.extend([" " * self.indent_level, line, "\n"]) - - def add_section(self): - """Add a section, a sub-CodeBuilder.""" - section = CodeBuilder(self.indent_level) - self.code.append(section) - return section - - INDENT_STEP = 4 # PEP8 says so! - - def indent(self): - """Increase the current indent for following lines.""" - self.indent_level += self.INDENT_STEP - - def dedent(self): - """Decrease the current indent for following lines.""" - self.indent_level -= self.INDENT_STEP - - def get_globals(self): - """Execute the code, and return a dict of globals it defines.""" - # A check that the caller really finished all the blocks they started. - assert self.indent_level == 0 - # Get the Python source as a single string. - python_source = str(self) - # Execute the source, defining globals, and return them. - global_namespace = {} - exec(python_source, global_namespace) - return global_namespace - - -class Templite(object): - """A simple template renderer, for a nano-subset of Django syntax. 
- - Supported constructs are extended variable access:: - - {{var.modifier.modifier|filter|filter}} - - loops:: - - {% for var in list %}...{% endfor %} - - and ifs:: - - {% if var %}...{% endif %} - - Comments are within curly-hash markers:: - - {# This will be ignored #} - - Lines between `{% joined %}` and `{% endjoined %}` will have lines stripped - and joined. Be careful, this could join words together! - - Any of these constructs can have a hyphen at the end (`-}}`, `-%}`, `-#}`), - which will collapse the whitespace following the tag. - - Construct a Templite with the template text, then use `render` against a - dictionary context to create a finished string:: - - templite = Templite(''' - <h1>Hello {{name|upper}}!</h1> - {% for topic in topics %} - <p>You are interested in {{topic}}.</p> - {% endfor %} - ''', - {'upper': str.upper}, - ) - text = templite.render({ - 'name': "Ned", - 'topics': ['Python', 'Geometry', 'Juggling'], - }) - - """ - def __init__(self, text, *contexts): - """Construct a Templite with the given `text`. - - `contexts` are dictionaries of values to use for future renderings. - These are good for filters and global values. - - """ - self.context = {} - for context in contexts: - self.context.update(context) - - self.all_vars = set() - self.loop_vars = set() - - # We construct a function in source form, then compile it and hold onto - # it, and execute it to render the template.
- code = CodeBuilder() - - code.add_line("def render_function(context, do_dots):") - code.indent() - vars_code = code.add_section() - code.add_line("result = []") - code.add_line("append_result = result.append") - code.add_line("extend_result = result.extend") - if env.PY2: - code.add_line("to_str = unicode") - else: - code.add_line("to_str = str") - - buffered = [] - - def flush_output(): - """Force `buffered` to the code builder.""" - if len(buffered) == 1: - code.add_line("append_result(%s)" % buffered[0]) - elif len(buffered) > 1: - code.add_line("extend_result([%s])" % ", ".join(buffered)) - del buffered[:] - - ops_stack = [] - - # Split the text to form a list of tokens. - tokens = re.split(r"(?s)({{.*?}}|{%.*?%}|{#.*?#})", text) - - squash = in_joined = False - - for token in tokens: - if token.startswith('{'): - start, end = 2, -2 - squash = (token[-3] == '-') - if squash: - end = -3 - - if token.startswith('{#'): - # Comment: ignore it and move on. - continue - elif token.startswith('{{'): - # An expression to evaluate. - expr = self._expr_code(token[start:end].strip()) - buffered.append("to_str(%s)" % expr) - else: - # token.startswith('{%') - # Action tag: split into words and parse further. - flush_output() - - words = token[start:end].strip().split() - if words[0] == 'if': - # An if statement: evaluate the expression to determine if. - if len(words) != 2: - self._syntax_error("Don't understand if", token) - ops_stack.append('if') - code.add_line("if %s:" % self._expr_code(words[1])) - code.indent() - elif words[0] == 'for': - # A loop: iterate over expression result. 
- if len(words) != 4 or words[2] != 'in': - self._syntax_error("Don't understand for", token) - ops_stack.append('for') - self._variable(words[1], self.loop_vars) - code.add_line( - "for c_%s in %s:" % ( - words[1], - self._expr_code(words[3]) - ) - ) - code.indent() - elif words[0] == 'joined': - ops_stack.append('joined') - in_joined = True - elif words[0].startswith('end'): - # Endsomething. Pop the ops stack. - if len(words) != 1: - self._syntax_error("Don't understand end", token) - end_what = words[0][3:] - if not ops_stack: - self._syntax_error("Too many ends", token) - start_what = ops_stack.pop() - if start_what != end_what: - self._syntax_error("Mismatched end tag", end_what) - if end_what == 'joined': - in_joined = False - else: - code.dedent() - else: - self._syntax_error("Don't understand tag", words[0]) - else: - # Literal content. If it isn't empty, output it. - if in_joined: - token = re.sub(r"\s*\n\s*", "", token.strip()) - elif squash: - token = token.lstrip() - if token: - buffered.append(repr(token)) - - if ops_stack: - self._syntax_error("Unmatched action tag", ops_stack[-1]) - - flush_output() - - for var_name in self.all_vars - self.loop_vars: - vars_code.add_line("c_%s = context[%r]" % (var_name, var_name)) - - code.add_line('return "".join(result)') - code.dedent() - self._render_function = code.get_globals()['render_function'] - - def _expr_code(self, expr): - """Generate a Python expression for `expr`.""" - if "|" in expr: - pipes = expr.split("|") - code = self._expr_code(pipes[0]) - for func in pipes[1:]: - self._variable(func, self.all_vars) - code = "c_%s(%s)" % (func, code) - elif "." 
in expr: - dots = expr.split(".") - code = self._expr_code(dots[0]) - args = ", ".join(repr(d) for d in dots[1:]) - code = "do_dots(%s, %s)" % (code, args) - else: - self._variable(expr, self.all_vars) - code = "c_%s" % expr - return code - - def _syntax_error(self, msg, thing): - """Raise a syntax error using `msg`, and showing `thing`.""" - raise TempliteSyntaxError("%s: %r" % (msg, thing)) - - def _variable(self, name, vars_set): - """Track that `name` is used as a variable. - - Adds the name to `vars_set`, a set of variable names. - - Raises a syntax error if `name` is not a valid name. - - """ - if not re.match(r"[_a-zA-Z][_a-zA-Z0-9]*$", name): - self._syntax_error("Not a valid name", name) - vars_set.add(name) - - def render(self, context=None): - """Render this template by applying it to `context`. - - `context` is a dictionary of values to use in this rendering. - - """ - # Make the complete context we'll use. - render_context = dict(self.context) - if context: - render_context.update(context) - return self._render_function(render_context, self._do_dots) - - def _do_dots(self, value, *dots): - """Evaluate dotted expressions at run-time.""" - for dot in dots: - try: - value = getattr(value, dot) - except AttributeError: - try: - value = value[dot] - except (TypeError, KeyError): - raise TempliteValueError( - "Couldn't evaluate %r.%s" % (value, dot) - ) - if callable(value): - value = value() - return value diff --git a/contrib/python/coverage/py2/coverage/tomlconfig.py b/contrib/python/coverage/py2/coverage/tomlconfig.py deleted file mode 100644 index 3ad581571cf..00000000000 --- a/contrib/python/coverage/py2/coverage/tomlconfig.py +++ /dev/null @@ -1,168 +0,0 @@ -# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 -# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt - -"""TOML configuration support for coverage.py""" - -import io -import os -import re - -from coverage import env - -from coverage.backward
import configparser, path_types -from coverage.misc import CoverageException, substitute_variables - -# TOML support is an install-time extra option. -try: - import toml -except ImportError: # pragma: not covered - toml = None - - -class TomlDecodeError(Exception): - """An exception class that exists even when toml isn't installed.""" - pass - - -class TomlConfigParser: - """TOML file reading with the interface of HandyConfigParser.""" - - # This class has the same interface as config.HandyConfigParser, no - # need for docstrings. - # pylint: disable=missing-function-docstring - - def __init__(self, our_file): - self.our_file = our_file - self.data = None - - def read(self, filenames): - # RawConfigParser takes a filename or list of filenames, but we only - # ever call this with a single filename. - assert isinstance(filenames, path_types) - filename = filenames - if env.PYVERSION >= (3, 6): - filename = os.fspath(filename) - - try: - with io.open(filename, encoding='utf-8') as fp: - toml_text = fp.read() - except IOError: - return [] - if toml: - toml_text = substitute_variables(toml_text, os.environ) - try: - self.data = toml.loads(toml_text) - except toml.TomlDecodeError as err: - raise TomlDecodeError(*err.args) - return [filename] - else: - has_toml = re.search(r"^\[tool\.coverage\.", toml_text, flags=re.MULTILINE) - if self.our_file or has_toml: - # Looks like they meant to read TOML, but we can't read it. - msg = "Can't read {!r} without TOML support. Install with [toml] extra" - raise CoverageException(msg.format(filename)) - return [] - - def _get_section(self, section): - """Get a section from the data. - - Arguments: - section (str): A section name, which can be dotted. - - Returns: - name (str): the actual name of the section that was found, if any, - or None. - data (str): the dict of data in the section, or None if not found. 
- - """ - prefixes = ["tool.coverage."] - if self.our_file: - prefixes.append("") - for prefix in prefixes: - real_section = prefix + section - parts = real_section.split(".") - try: - data = self.data[parts[0]] - for part in parts[1:]: - data = data[part] - except KeyError: - continue - break - else: - return None, None - return real_section, data - - def _get(self, section, option): - """Like .get, but returns the real section name and the value.""" - name, data = self._get_section(section) - if data is None: - raise configparser.NoSectionError(section) - try: - return name, data[option] - except KeyError: - raise configparser.NoOptionError(option, name) - - def has_option(self, section, option): - _, data = self._get_section(section) - if data is None: - return False - return option in data - - def has_section(self, section): - name, _ = self._get_section(section) - return name - - def options(self, section): - _, data = self._get_section(section) - if data is None: - raise configparser.NoSectionError(section) - return list(data.keys()) - - def get_section(self, section): - _, data = self._get_section(section) - return data - - def get(self, section, option): - _, value = self._get(section, option) - return value - - def _check_type(self, section, option, value, type_, type_desc): - if not isinstance(value, type_): - raise ValueError( - 'Option {!r} in section {!r} is not {}: {!r}' - .format(option, section, type_desc, value) - ) - - def getboolean(self, section, option): - name, value = self._get(section, option) - self._check_type(name, option, value, bool, "a boolean") - return value - - def getlist(self, section, option): - name, values = self._get(section, option) - self._check_type(name, option, values, list, "a list") - return values - - def getregexlist(self, section, option): - name, values = self._get(section, option) - self._check_type(name, option, values, list, "a list") - for value in values: - value = value.strip() - try: - re.compile(value) - 
except re.error as e: - raise CoverageException( - "Invalid [%s].%s value %r: %s" % (name, option, value, e) - ) - return values - - def getint(self, section, option): - name, value = self._get(section, option) - self._check_type(name, option, value, int, "an integer") - return value - - def getfloat(self, section, option): - name, value = self._get(section, option) - if isinstance(value, int): - value = float(value) - self._check_type(name, option, value, float, "a float") - return value diff --git a/contrib/python/coverage/py2/coverage/version.py b/contrib/python/coverage/py2/coverage/version.py deleted file mode 100644 index d141a11da3a..00000000000 --- a/contrib/python/coverage/py2/coverage/version.py +++ /dev/null @@ -1,33 +0,0 @@ -# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 -# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt - -"""The version and URL for coverage.py""" -# This file is exec'ed in setup.py, don't import anything! - -# Same semantics as sys.version_info. -version_info = (5, 5, 0, "final", 0) - - -def _make_version(major, minor, micro, releaselevel, serial): - """Create a readable version string from version_info tuple components.""" - assert releaselevel in ['alpha', 'beta', 'candidate', 'final'] - version = "%d.%d" % (major, minor) - if micro: - version += ".%d" % (micro,) - if releaselevel != 'final': - short = {'alpha': 'a', 'beta': 'b', 'candidate': 'rc'}[releaselevel] - version += "%s%d" % (short, serial) - return version - - -def _make_url(major, minor, micro, releaselevel, serial): - """Make the URL people should start at for this version of coverage.py.""" - url = "https://coverage.readthedocs.io" - if releaselevel != 'final': - # For pre-releases, use a version-specific URL. 
- url += "/en/coverage-" + _make_version(major, minor, micro, releaselevel, serial) - return url - - -__version__ = _make_version(*version_info) -__url__ = _make_url(*version_info) diff --git a/contrib/python/coverage/py2/coverage/xmlreport.py b/contrib/python/coverage/py2/coverage/xmlreport.py deleted file mode 100644 index 6d012ee692e..00000000000 --- a/contrib/python/coverage/py2/coverage/xmlreport.py +++ /dev/null @@ -1,234 +0,0 @@ -# coding: utf-8 -# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 -# For details: https://github.com/nedbat/coveragepy/blob/master/NOTICE.txt - -"""XML reporting for coverage.py""" - -import os -import os.path -import sys -import time -import xml.dom.minidom - -from coverage import env -from coverage import __url__, __version__, files -from coverage.backward import iitems -from coverage.misc import isolate_module -from coverage.report import get_analysis_to_report - -os = isolate_module(os) - - -DTD_URL = 'https://raw.githubusercontent.com/cobertura/web/master/htdocs/xml/coverage-04.dtd' - - -def rate(hit, num): - """Return the fraction of `hit`/`num`, as a string.""" - if num == 0: - return "1" - else: - return "%.4g" % (float(hit) / num) - - -class XmlReporter(object): - """A reporter for writing Cobertura-style XML coverage results.""" - - def __init__(self, coverage): - self.coverage = coverage - self.config = self.coverage.config - - self.source_paths = set() - if self.config.source: - for src in self.config.source: - if os.path.exists(src): - if not self.config.relative_files: - src = files.canonical_filename(src) - self.source_paths.add(src) - self.packages = {} - self.xml_out = None - - def report(self, morfs, outfile=None): - """Generate a Cobertura-compatible XML report for `morfs`. - - `morfs` is a list of modules or file names. - - `outfile` is a file object to write the XML to. - - """ - # Initial setup. 
- outfile = outfile or sys.stdout - has_arcs = self.coverage.get_data().has_arcs() - - # Create the DOM that will store the data. - impl = xml.dom.minidom.getDOMImplementation() - self.xml_out = impl.createDocument(None, "coverage", None) - - # Write header stuff. - xcoverage = self.xml_out.documentElement - xcoverage.setAttribute("version", __version__) - xcoverage.setAttribute("timestamp", str(int(time.time()*1000))) - xcoverage.appendChild(self.xml_out.createComment( - " Generated by coverage.py: %s " % __url__ - )) - xcoverage.appendChild(self.xml_out.createComment(" Based on %s " % DTD_URL)) - - # Call xml_file for each file in the data. - for fr, analysis in get_analysis_to_report(self.coverage, morfs): - self.xml_file(fr, analysis, has_arcs) - - xsources = self.xml_out.createElement("sources") - xcoverage.appendChild(xsources) - - # Populate the XML DOM with the source info. - for path in sorted(self.source_paths): - xsource = self.xml_out.createElement("source") - xsources.appendChild(xsource) - txt = self.xml_out.createTextNode(path) - xsource.appendChild(txt) - - lnum_tot, lhits_tot = 0, 0 - bnum_tot, bhits_tot = 0, 0 - - xpackages = self.xml_out.createElement("packages") - xcoverage.appendChild(xpackages) - - # Populate the XML DOM with the package info. 
- for pkg_name, pkg_data in sorted(iitems(self.packages)): - class_elts, lhits, lnum, bhits, bnum = pkg_data - xpackage = self.xml_out.createElement("package") - xpackages.appendChild(xpackage) - xclasses = self.xml_out.createElement("classes") - xpackage.appendChild(xclasses) - for _, class_elt in sorted(iitems(class_elts)): - xclasses.appendChild(class_elt) - xpackage.setAttribute("name", pkg_name.replace(os.sep, '.')) - xpackage.setAttribute("line-rate", rate(lhits, lnum)) - if has_arcs: - branch_rate = rate(bhits, bnum) - else: - branch_rate = "0" - xpackage.setAttribute("branch-rate", branch_rate) - xpackage.setAttribute("complexity", "0") - - lnum_tot += lnum - lhits_tot += lhits - bnum_tot += bnum - bhits_tot += bhits - - xcoverage.setAttribute("lines-valid", str(lnum_tot)) - xcoverage.setAttribute("lines-covered", str(lhits_tot)) - xcoverage.setAttribute("line-rate", rate(lhits_tot, lnum_tot)) - if has_arcs: - xcoverage.setAttribute("branches-valid", str(bnum_tot)) - xcoverage.setAttribute("branches-covered", str(bhits_tot)) - xcoverage.setAttribute("branch-rate", rate(bhits_tot, bnum_tot)) - else: - xcoverage.setAttribute("branches-covered", "0") - xcoverage.setAttribute("branches-valid", "0") - xcoverage.setAttribute("branch-rate", "0") - xcoverage.setAttribute("complexity", "0") - - # Write the output file. - outfile.write(serialize_xml(self.xml_out)) - - # Return the total percentage. - denom = lnum_tot + bnum_tot - if denom == 0: - pct = 0.0 - else: - pct = 100.0 * (lhits_tot + bhits_tot) / denom - return pct - - def xml_file(self, fr, analysis, has_arcs): - """Add to the XML report for a single file.""" - - if self.config.skip_empty: - if analysis.numbers.n_statements == 0: - return - - # Create the 'lines' and 'package' XML elements, which - # are populated later. Note that a package == a directory. 
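Editor's note (not part of the diff): the return value of `report()` folds line and branch totals into a single percentage. A hypothetical standalone helper (`total_pct` is not a coverage.py API) mirroring that arithmetic:

```python
def total_pct(lhits, lnum, bhits, bnum):
    """Combined line+branch coverage percent, as computed at the end of report()."""
    denom = lnum + bnum
    if denom == 0:
        return 0.0
    return 100.0 * (lhits + bhits) / denom

print(total_pct(80, 100, 10, 20))  # 75.0: (80+10) hits out of (100+20) units
print(total_pct(0, 0, 0, 0))       # 0.0: an empty report scores zero
```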
- filename = fr.filename.replace("\\", "/") - for source_path in self.source_paths: - source_path = files.canonical_filename(source_path) - if filename.startswith(source_path.replace("\\", "/") + "/"): - rel_name = filename[len(source_path)+1:] - break - else: - rel_name = fr.relative_filename() - self.source_paths.add(fr.filename[:-len(rel_name)].rstrip(r"\/")) - - dirname = os.path.dirname(rel_name) or u"." - dirname = "/".join(dirname.split("/")[:self.config.xml_package_depth]) - package_name = dirname.replace("/", ".") - - package = self.packages.setdefault(package_name, [{}, 0, 0, 0, 0]) - - xclass = self.xml_out.createElement("class") - - xclass.appendChild(self.xml_out.createElement("methods")) - - xlines = self.xml_out.createElement("lines") - xclass.appendChild(xlines) - - xclass.setAttribute("name", os.path.relpath(rel_name, dirname)) - xclass.setAttribute("filename", rel_name.replace("\\", "/")) - xclass.setAttribute("complexity", "0") - - branch_stats = analysis.branch_stats() - missing_branch_arcs = analysis.missing_branch_arcs() - - # For each statement, create an XML 'line' element. - for line in sorted(analysis.statements): - xline = self.xml_out.createElement("line") - xline.setAttribute("number", str(line)) - - # Q: can we get info about the number of times a statement is - # executed? If so, that should be recorded here. 
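Editor's note (not part of the diff): `xml_file()` maps a file's directory to a Cobertura package name, truncated by the `xml_package_depth` setting. A hypothetical helper (`package_for` is not a real coverage.py function) isolating just that logic:

```python
import os

def package_for(rel_name, depth):
    """Dot-joined package name for rel_name, truncated to `depth` path components."""
    dirname = os.path.dirname(rel_name) or "."
    dirname = "/".join(dirname.split("/")[:depth])
    return dirname.replace("/", ".")

print(package_for("pkg/sub/mod.py", 99))  # pkg.sub
print(package_for("pkg/sub/mod.py", 1))   # pkg
print(package_for("mod.py", 99))          # .  (top-level files land in package ".")
```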
- xline.setAttribute("hits", str(int(line not in analysis.missing))) - - if has_arcs: - if line in branch_stats: - total, taken = branch_stats[line] - xline.setAttribute("branch", "true") - xline.setAttribute( - "condition-coverage", - "%d%% (%d/%d)" % (100*taken//total, taken, total) - ) - if line in missing_branch_arcs: - annlines = ["exit" if b < 0 else str(b) for b in missing_branch_arcs[line]] - xline.setAttribute("missing-branches", ",".join(annlines)) - xlines.appendChild(xline) - - class_lines = len(analysis.statements) - class_hits = class_lines - len(analysis.missing) - - if has_arcs: - class_branches = sum(t for t, k in branch_stats.values()) - missing_branches = sum(t - k for t, k in branch_stats.values()) - class_br_hits = class_branches - missing_branches - else: - class_branches = 0.0 - class_br_hits = 0.0 - - # Finalize the statistics that are collected in the XML DOM. - xclass.setAttribute("line-rate", rate(class_hits, class_lines)) - if has_arcs: - branch_rate = rate(class_br_hits, class_branches) - else: - branch_rate = "0" - xclass.setAttribute("branch-rate", branch_rate) - - package[0][rel_name] = xclass - package[1] += class_hits - package[2] += class_lines - package[3] += class_br_hits - package[4] += class_branches - - -def serialize_xml(dom): - """Serialize a minidom node to XML.""" - out = dom.toprettyxml() - if env.PY2: - out = out.encode("utf8") - return out diff --git a/contrib/python/coverage/py2/ya.make b/contrib/python/coverage/py2/ya.make deleted file mode 100644 index 7bff35eea3a..00000000000 --- a/contrib/python/coverage/py2/ya.make +++ /dev/null @@ -1,98 +0,0 @@ -# Generated by devtools/yamaker (pypi). 
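Editor's note (not part of the diff): the `condition-coverage` attribute written in the chunk above uses integer percent formatting with floor division. A small re-implementation of just that format, for illustration:

```python
def condition_coverage(taken, total):
    """Cobertura condition-coverage string for a line with branch data."""
    return "%d%% (%d/%d)" % (100 * taken // total, taken, total)

print(condition_coverage(1, 2))  # 50% (1/2)
print(condition_coverage(2, 3))  # 66% (2/3), floor division truncates
print(condition_coverage(3, 3))  # 100% (3/3)
```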
-
-PY2_LIBRARY()
-
-VERSION(5.5)
-
-LICENSE(Apache-2.0)
-
-PEERDIR(
-    contrib/python/coverage/plugins
-    library/python/resource
-)
-
-NO_COMPILER_WARNINGS()
-
-NO_LINT()
-
-NO_CHECK_IMPORTS(
-    coverage.fullcoverage.encodings
-)
-
-SRCS(
-    coverage/ctracer/datastack.c
-    coverage/ctracer/filedisp.c
-    coverage/ctracer/module.c
-    coverage/ctracer/tracer.c
-)
-
-PY_REGISTER(
-    coverage.tracer
-)
-
-PY_SRCS(
-    TOP_LEVEL
-    coverage/__init__.py
-    coverage/__main__.py
-    coverage/annotate.py
-    coverage/backward.py
-    coverage/bytecode.py
-    coverage/cmdline.py
-    coverage/collector.py
-    coverage/config.py
-    coverage/context.py
-    coverage/control.py
-    coverage/data.py
-    coverage/debug.py
-    coverage/disposition.py
-    coverage/env.py
-    coverage/execfile.py
-    coverage/files.py
-    coverage/fullcoverage/encodings.py
-    coverage/html.py
-    coverage/inorout.py
-    coverage/jsonreport.py
-    coverage/misc.py
-    coverage/multiproc.py
-    coverage/numbits.py
-    coverage/parser.py
-    coverage/phystokens.py
-    coverage/plugin.py
-    coverage/plugin_support.py
-    coverage/python.py
-    coverage/pytracer.py
-    coverage/report.py
-    coverage/results.py
-    coverage/sqldata.py
-    coverage/summary.py
-    coverage/templite.py
-    coverage/tomlconfig.py
-    coverage/version.py
-    coverage/xmlreport.py
-)
-
-RESOURCE_FILES(
-    PREFIX contrib/python/coverage/py2/
-    .dist-info/METADATA
-    .dist-info/entry_points.txt
-    .dist-info/top_level.txt
-    coverage/htmlfiles/coverage_html.js
-    coverage/htmlfiles/favicon_32.png
-    coverage/htmlfiles/index.html
-    coverage/htmlfiles/jquery.ba-throttle-debounce.min.js
-    coverage/htmlfiles/jquery.hotkeys.js
-    coverage/htmlfiles/jquery.isonscreen.js
-    coverage/htmlfiles/jquery.min.js
-    coverage/htmlfiles/jquery.tablesorter.min.js
-    coverage/htmlfiles/keybd_closed.png
-    coverage/htmlfiles/keybd_open.png
-    coverage/htmlfiles/pyfile.html
-    coverage/htmlfiles/style.css
-    coverage/htmlfiles/style.scss
-)
-
-END()
-
-RECURSE(
-    bin
-)