| field | value | date |
|---|---|---|
| author | unril <unril@yandex-team.ru> | 2022-02-10 16:46:05 +0300 |
| committer | Daniil Cherednik <dcherednik@yandex-team.ru> | 2022-02-10 16:46:05 +0300 |
| commit | 11ae9eca250d0188b7962459cbc6706719e7dca9 (patch) | |
| tree | 4b7d6755091980d33210de19b2eb35a401a761ea /contrib/restricted/aws/aws-c-common | |
| parent | 9c914f41ba5e9f9365f404e892197553ac23809e (diff) | |
| download | ydb-11ae9eca250d0188b7962459cbc6706719e7dca9.tar.gz | |
Restoring authorship annotation for <unril@yandex-team.ru>. Commit 1 of 2.
Diffstat (limited to 'contrib/restricted/aws/aws-c-common')
63 files changed, 8872 insertions, 8872 deletions
diff --git a/contrib/restricted/aws/aws-c-common/LICENSE b/contrib/restricted/aws/aws-c-common/LICENSE
index d645695673..c0fd617439 100644
--- a/contrib/restricted/aws/aws-c-common/LICENSE
+++ b/contrib/restricted/aws/aws-c-common/LICENSE
@@ -1,202 +1,202 @@

(This hunk removes and re-adds the full Apache License, Version 2.0 text verbatim. The commit only restores line authorship, so the file content is unchanged.)
diff --git a/contrib/restricted/aws/aws-c-common/README.md b/contrib/restricted/aws/aws-c-common/README.md
index 054c918735..2b1b7d6fad 100644
--- a/contrib/restricted/aws/aws-c-common/README.md
+++ b/contrib/restricted/aws/aws-c-common/README.md
@@ -1,118 +1,118 @@

(As with the LICENSE hunk, every line is removed and re-added verbatim to restore authorship; the README content appears once below.)

## AWS C Common

[![GitHub](https://img.shields.io/github/license/awslabs/aws-c-common.svg)](https://github.com/awslabs/aws-c-common/blob/main/LICENSE)
[![Language grade: C/C++](https://img.shields.io/lgtm/grade/cpp/g/awslabs/aws-c-common.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/awslabs/aws-c-common/context:cpp)
[![Total alerts](https://img.shields.io/lgtm/alerts/g/awslabs/aws-c-common.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/awslabs/aws-c-common/alerts/)

Core C99 package for the AWS SDK for C. Includes cross-platform primitives, configuration, data structures, and error handling.

## License

This library is licensed under the Apache 2.0 License.

## Usage

### Building

aws-c-common uses CMake for setting up build environments. This library has no non-kernel dependencies, so the build is quite simple.

For example:

    git clone git@github.com:awslabs/aws-c-common.git aws-c-common
    mkdir aws-c-common-build
    cd aws-c-common-build
    cmake ../aws-c-common
    make -j 12
    make test
    sudo make install

Keep in mind that CMake supports multiple build systems, so on each platform you can pass your own build system as the `-G` option. For example:

    cmake -GNinja ../aws-c-common
    ninja build
    ninja test
    sudo ninja install

Or on Windows:

    cmake -G "Visual Studio 14 2015 Win64" ../aws-c-common
    msbuild.exe ALL_BUILD.vcproj

### CMake Options

* `-DCMAKE_CLANG_TIDY=/path/to/clang-tidy` (or just `clang-tidy` or `clang-tidy-7.0` if it is in your PATH) - Runs clang-tidy as part of your build.
* `-DENABLE_SANITIZERS=ON` - Enables gcc/clang sanitizers; by default this adds `-fsanitize=address,undefined` to the compile flags for projects that call `aws_add_sanitizers`.
* `-DENABLE_FUZZ_TESTS=ON` - Includes fuzz tests in the unit test suite. Off by default, because fuzz tests can take a long time. Set `-DFUZZ_TESTS_MAX_TIME=N` to control how long each fuzz test runs (default 60s).
* `-DCMAKE_INSTALL_PREFIX=/path/to/install` - Standard way of installing to a user-defined path. If specified when configuring aws-c-common, ensure the same prefix is specified when configuring other aws-c-* SDKs.
* `-DSTATIC_CRT=ON` - On MSVC, use `/MT(d)` to link MSVCRT.
### API style and conventions

Every API has a specific set of styles and conventions. We'll outline them here. These conventions are followed in every library in the AWS C SDK ecosystem.

#### Error handling

Every function that returns an `int` type returns `AWS_OP_SUCCESS` (0) on success or `AWS_OP_ERR` (-1) on failure. To retrieve the error code, use the function `aws_last_error()`. Each error code also has a corresponding error string that can be accessed via the `aws_error_str()` function.

In addition, you can install both a global and a thread-local error handler by using the `aws_set_global_error_handler_fn()` and `aws_set_thread_local_error_handler_fn()` functions.

All error functions are in the `include/aws/common/error.h` header file.

#### Naming

Any function that allocates and initializes an object will be suffixed with `new` (e.g. `aws_myobj_new()`). Similarly, these objects will always have a corresponding function with a `destroy` suffix. The `new` functions return the allocated object on success and `NULL` on failure. To respond to the error, call `aws_last_error()`. If several `new` or `destroy` functions are available, the variants should be named like `new_x` or `destroy_x` (e.g. `aws_myobj_new_copy()` or `aws_myobj_destroy_secure()`).
In these cases, you are responsible for making the decisions for how your object is -allocated. The `init` functions return `AWS_OP_SUCCESS` ( 0 ) or `AWS_OP_ERR` (-1) on failure. If several `init` or +`clean_up` function if necessary. In these cases, you are responsible for making the decisions for how your object is +allocated. The `init` functions return `AWS_OP_SUCCESS` ( 0 ) or `AWS_OP_ERR` (-1) on failure. If several `init` or `clean_up` functions are available, they should be named like `init_x` or `clean_up_x` (e.g. `aws_myobj_init_static()` or `aws_myobj_clean_up_secure()`). - -## Contributing - -If you are contributing to this code-base, first off, THANK YOU!. There are a few things to keep in mind to minimize the -pull request turn around time. - -### Coding "guidelines" -These "guidelines" are followed in every library in the AWS C SDK ecosystem. - -#### Memory Management -* All APIs that need to be able to allocate memory, must take an instance of `aws_allocator` and use that. No `malloc()` or -`free()` calls should be made directly. -* If an API does not allocate the memory, it does not free it. All allocations and deallocations should take place at the same level. -For example, if a user allocates memory, the user is responsible for freeing it. There will inevitably be a few exceptions to this -rule, but they will need significant justification to make it through the code-review. -* All functions that allocate memory must raise an `AWS_ERROR_OOM` error code upon allocation failures. If it is a `new()` function -it should return NULL. If it is an `init()` function, it should return `AWS_OP_ERR`. - -#### Threading -* Occasionally a thread is necessary. In those cases, prefer for memory not to be shared between threads. If memory must cross -a thread barrier it should be a complete ownership hand-off. Bias towards, "if I need a mutex, I'm doing it wrong". -* Do not sleep or block .... ever .... under any circumstances, in non-test-code. 
-* Do not expose blocking APIs. - -### Error Handling -* For APIs returning an `int` error code. The only acceptable return types are `AWS_OP_SUCCESS` and `AWS_OP_ERR`. Before -returning control to the caller, if you have an error to raise, use the `aws_raise_error()` function. -* For APIs returning an allocated instance of an object, return the memory on success, and `NULL` on failure. Before -returning control to the caller, if you have an error to raise, use the `aws_raise_error()` function. - -#### Log Subjects & Error Codes -The logging & error handling infrastructure is designed to support multiple libraries. For this to work, AWS maintained libraries + +## Contributing + +If you are contributing to this code-base, first off, THANK YOU!. There are a few things to keep in mind to minimize the +pull request turn around time. + +### Coding "guidelines" +These "guidelines" are followed in every library in the AWS C SDK ecosystem. + +#### Memory Management +* All APIs that need to be able to allocate memory, must take an instance of `aws_allocator` and use that. No `malloc()` or +`free()` calls should be made directly. +* If an API does not allocate the memory, it does not free it. All allocations and deallocations should take place at the same level. +For example, if a user allocates memory, the user is responsible for freeing it. There will inevitably be a few exceptions to this +rule, but they will need significant justification to make it through the code-review. +* All functions that allocate memory must raise an `AWS_ERROR_OOM` error code upon allocation failures. If it is a `new()` function +it should return NULL. If it is an `init()` function, it should return `AWS_OP_ERR`. + +#### Threading +* Occasionally a thread is necessary. In those cases, prefer for memory not to be shared between threads. If memory must cross +a thread barrier it should be a complete ownership hand-off. Bias towards, "if I need a mutex, I'm doing it wrong". 
+* Do not sleep or block .... ever .... under any circumstances, in non-test-code. +* Do not expose blocking APIs. + +### Error Handling +* For APIs returning an `int` error code. The only acceptable return types are `AWS_OP_SUCCESS` and `AWS_OP_ERR`. Before +returning control to the caller, if you have an error to raise, use the `aws_raise_error()` function. +* For APIs returning an allocated instance of an object, return the memory on success, and `NULL` on failure. Before +returning control to the caller, if you have an error to raise, use the `aws_raise_error()` function. + +#### Log Subjects & Error Codes +The logging & error handling infrastructure is designed to support multiple libraries. For this to work, AWS maintained libraries have pre-slotted log subjects & error codes for each library. The currently allocated ranges are: - -| Range | Library Name | -| --- | --- | -| [0x0000, 0x0400) | aws-c-common | -| [0x0400, 0x0800) | aws-c-io | -| [0x0800, 0x0C00) | aws-c-http | -| [0x0C00, 0x1000) | aws-c-compression | -| [0x1000, 0x1400) | aws-c-eventstream | -| [0x1400, 0x1800) | aws-c-mqtt | + +| Range | Library Name | +| --- | --- | +| [0x0000, 0x0400) | aws-c-common | +| [0x0400, 0x0800) | aws-c-io | +| [0x0800, 0x0C00) | aws-c-http | +| [0x0C00, 0x1000) | aws-c-compression | +| [0x1000, 0x1400) | aws-c-eventstream | +| [0x1400, 0x1800) | aws-c-mqtt | | [0x1800, 0x1C00) | aws-c-auth | | [0x1C00, 0x2000) | aws-c-cal | | [0x2000, 0x2400) | aws-crt-cpp | @@ -121,127 +121,127 @@ have pre-slotted log subjects & error codes for each library. 
The currently allo | [0x2C00, 0x3000) | aws-crt-nodejs | | [0x3000, 0x3400) | aws-crt-dotnet | | [0x3400, 0x3800) | aws-c-iot | -| [0x3800, 0x3C00) | (reserved for future project) | -| [0x3C00, 0x4000) | (reserved for future project) | +| [0x3800, 0x3C00) | (reserved for future project) | +| [0x3C00, 0x4000) | (reserved for future project) | | [0x4000, 0x4400) | (reserved for future project) | | [0x4400, 0x4800) | (reserved for future project) | - + Each library should begin its error and log subject values at the beginning of its range and follow in sequence (don't skip codes). Upon adding an AWS maintained library, a new enum range must be approved and added to the above table. - -### Testing -We have a high bar for test coverage, and PRs fixing bugs or introducing new functionality need to have tests before -they will be accepted. A couple of tips: - -#### Aws Test Harness -We provide a test harness for writing unit tests. This includes an allocator that will fail your test if you have any -memory leaks, as well as some `ASSERT` macros. To write a test: - -* Create a *.c test file in the tests directory of the project. -* Implement one or more tests with the signature `int test_case_name(struct aws_allocator *, void *ctx)` -* Use the `AWS_TEST_CASE` macro to declare the test. -* Include your test in the `tests/main.c` file. + +### Testing +We have a high bar for test coverage, and PRs fixing bugs or introducing new functionality need to have tests before +they will be accepted. A couple of tips: + +#### Aws Test Harness +We provide a test harness for writing unit tests. This includes an allocator that will fail your test if you have any +memory leaks, as well as some `ASSERT` macros. To write a test: + +* Create a *.c test file in the tests directory of the project. +* Implement one or more tests with the signature `int test_case_name(struct aws_allocator *, void *ctx)` +* Use the `AWS_TEST_CASE` macro to declare the test. 
+* Include your test in the `tests/main.c` file. * Include your test in the `tests/CMakeLists.txt` file. - -### Coding Style -* No Tabs. -* Indent is 4 spaces. -* K & R style for braces. -* Space after if, before the `(`. -* `else` and `else if` stay on the same line as the closing brace. - -Example: - - if (condition) { - do_something(); - } else { - do_something_else(); - } -* Avoid C99 features in header files. For some types such as bool, uint32_t etc..., these are defined if not available for the language -standard being used in `aws/common/common.h`, so feel free to use them. -* For C++ compatibility, don't put const members in structs. -* Avoid C++ style comments e.g. `//`. -* All public API functions need C++ guards and Windows dll semantics. -* Use Unix line endings. -* Where implementation hiding is desired for either ABI or runtime polymorphism reasons, use the `void *impl` pattern. v-tables - should be the last member in the struct. -* For #ifdef, put a # as the first character on the line and then indent the compilation branches. - -Example: - - - #ifdef FOO - do_something(); - - # ifdef BAR - do_something_else(); - # endif - #endif - - -* For all error code names with the exception of aws-c-common, use `AWS_ERROR_<lib name>_<error name>`. -* All error strings should be written using correct English grammar. -* SNAKE_UPPER_CASE constants, macros, and enum members. -* snake_lower_case everything else. -* `static` (local file scope) variables that are not `const` are prefixed by `s_` and lower snake case. -* Global variables not prefixed as `const` are prefixed by `g_` and lower snake case. -* Thread local variables are prefixed as `tl_` and lower snake case. -* Macros and `const` variables are upper snake case. -* For constants, prefer anonymous enums. -* Don't typedef structs. It breaks forward declaration ability. -* Don't typedef enums. It breaks forward declaration ability. 
-* typedef function definitions for use as function pointers as values and suffixed with _fn. - -Example: - - typedef int(fn_name_fn)(void *); - -Not: - - typedef int(*fn_name_fn)(void *); - -* Every source and header file must have a copyright header (The standard AWS one for apache 2). -* Use standard include guards (e.g. #IFNDEF HEADER_NAME #define HEADER_NAME etc...). -* Include order should be: - the header for the translation unit for the .c file - newline - header files in a directory in alphabetical order - newline - header files not in a directory (system and stdlib headers) -* Platform specifics should be handled in c files and partitioned by directory. -* Do not use `extern inline`. It's too unpredictable between compiler versions and language standards. -* Namespace all definitions in header files with `aws_<libname>?_<api>_<what it does>`. Lib name is -not always required if a conflict is not likely and it provides better ergonomics. -* `init`, `clean_up`, `new`, `destroy` are suffixed to the function names for their object. - -Example: - - AWS_COMMON_API - int aws_module_init(aws_module_t *module); - AWS_COMMON_API - void aws_module_clean_up(aws_module_t *module); - AWS_COMMON_API - aws_module_t *aws_module_new(aws_allocator_t *allocator); - AWS_COMMON_API - void aws_module_destroy(aws_module_t *module); - -* Avoid c-strings, and don't write code that depends on `NULL` terminators. Expose `struct aws_byte_buf` APIs -and let the user figure it out. -* There is only one valid character encoding-- UTF-8. Try not to ever need to care about character encodings, but -where you do, the working assumption should always be UTF-8 unless it's something we don't get a choice in (e.g. a protocol -explicitly mandates a character set). -* If you are adding/using a compiler specific keyword, macro, or intrinsic, hide it behind a platform independent macro -definition. This mainly applies to header files. 
Obviously, if you are writing a file that will only be built on a certain -platform, you have more liberty on this. + +### Coding Style +* No Tabs. +* Indent is 4 spaces. +* K & R style for braces. +* Space after if, before the `(`. +* `else` and `else if` stay on the same line as the closing brace. + +Example: + + if (condition) { + do_something(); + } else { + do_something_else(); + } +* Avoid C99 features in header files. For some types such as bool, uint32_t etc..., these are defined if not available for the language +standard being used in `aws/common/common.h`, so feel free to use them. +* For C++ compatibility, don't put const members in structs. +* Avoid C++ style comments e.g. `//`. +* All public API functions need C++ guards and Windows dll semantics. +* Use Unix line endings. +* Where implementation hiding is desired for either ABI or runtime polymorphism reasons, use the `void *impl` pattern. v-tables + should be the last member in the struct. +* For #ifdef, put a # as the first character on the line and then indent the compilation branches. + +Example: + + + #ifdef FOO + do_something(); + + # ifdef BAR + do_something_else(); + # endif + #endif + + +* For all error code names with the exception of aws-c-common, use `AWS_ERROR_<lib name>_<error name>`. +* All error strings should be written using correct English grammar. +* SNAKE_UPPER_CASE constants, macros, and enum members. +* snake_lower_case everything else. +* `static` (local file scope) variables that are not `const` are prefixed by `s_` and lower snake case. +* Global variables not prefixed as `const` are prefixed by `g_` and lower snake case. +* Thread local variables are prefixed as `tl_` and lower snake case. +* Macros and `const` variables are upper snake case. +* For constants, prefer anonymous enums. +* Don't typedef structs. It breaks forward declaration ability. +* Don't typedef enums. It breaks forward declaration ability. 
+* typedef function definitions for use as function pointers as values and suffixed with _fn. + +Example: + + typedef int(fn_name_fn)(void *); + +Not: + + typedef int(*fn_name_fn)(void *); + +* Every source and header file must have a copyright header (The standard AWS one for apache 2). +* Use standard include guards (e.g. #IFNDEF HEADER_NAME #define HEADER_NAME etc...). +* Include order should be: + the header for the translation unit for the .c file + newline + header files in a directory in alphabetical order + newline + header files not in a directory (system and stdlib headers) +* Platform specifics should be handled in c files and partitioned by directory. +* Do not use `extern inline`. It's too unpredictable between compiler versions and language standards. +* Namespace all definitions in header files with `aws_<libname>?_<api>_<what it does>`. Lib name is +not always required if a conflict is not likely and it provides better ergonomics. +* `init`, `clean_up`, `new`, `destroy` are suffixed to the function names for their object. + +Example: + + AWS_COMMON_API + int aws_module_init(aws_module_t *module); + AWS_COMMON_API + void aws_module_clean_up(aws_module_t *module); + AWS_COMMON_API + aws_module_t *aws_module_new(aws_allocator_t *allocator); + AWS_COMMON_API + void aws_module_destroy(aws_module_t *module); + +* Avoid c-strings, and don't write code that depends on `NULL` terminators. Expose `struct aws_byte_buf` APIs +and let the user figure it out. +* There is only one valid character encoding-- UTF-8. Try not to ever need to care about character encodings, but +where you do, the working assumption should always be UTF-8 unless it's something we don't get a choice in (e.g. a protocol +explicitly mandates a character set). +* If you are adding/using a compiler specific keyword, macro, or intrinsic, hide it behind a platform independent macro +definition. This mainly applies to header files. 
Obviously, if you are writing a file that will only be built on a certain +platform, you have more liberty on this. * When checking more than one error condition, check and log each condition separately with a unique message. - + Example: - + if (options->callback == NULL) { AWS_LOGF_ERROR(AWS_LS_SOME_SUBJECT, "Invalid options - callback is null"); return aws_raise_error(AWS_ERROR_INVALID_ARGUMENT); } - + if (options->allocator == NULL) { AWS_LOGF_ERROR(AWS_LS_SOME_SUBJECT, "Invalid options - allocator is null"); return aws_raise_error(AWS_ERROR_INVALID_ARGUMENT); diff --git a/contrib/restricted/aws/aws-c-common/generated/include/aws/common/config.h b/contrib/restricted/aws/aws-c-common/generated/include/aws/common/config.h index decbdf88f0..d2a9049b30 100644 --- a/contrib/restricted/aws/aws-c-common/generated/include/aws/common/config.h +++ b/contrib/restricted/aws/aws-c-common/generated/include/aws/common/config.h @@ -1,20 +1,20 @@ -#ifndef AWS_COMMON_CONFIG_H -#define AWS_COMMON_CONFIG_H - +#ifndef AWS_COMMON_CONFIG_H +#define AWS_COMMON_CONFIG_H + /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. - */ - -/* - * This header exposes compiler feature test results determined during cmake - * configure time to inline function implementations. The macros defined here - * should be considered to be an implementation detail, and can change at any - * time. - */ -#define AWS_HAVE_GCC_OVERFLOW_MATH_EXTENSIONS -#define AWS_HAVE_GCC_INLINE_ASM -/* #undef AWS_HAVE_MSVC_MULX */ -/* #undef AWS_HAVE_EXECINFO */ - -#endif + */ + +/* + * This header exposes compiler feature test results determined during cmake + * configure time to inline function implementations. The macros defined here + * should be considered to be an implementation detail, and can change at any + * time. 
+ */ +#define AWS_HAVE_GCC_OVERFLOW_MATH_EXTENSIONS +#define AWS_HAVE_GCC_INLINE_ASM +/* #undef AWS_HAVE_MSVC_MULX */ +/* #undef AWS_HAVE_EXECINFO */ + +#endif diff --git a/contrib/restricted/aws/aws-c-common/include/aws/common/array_list.h b/contrib/restricted/aws/aws-c-common/include/aws/common/array_list.h index 1eb7f773cf..895362863b 100644 --- a/contrib/restricted/aws/aws-c-common/include/aws/common/array_list.h +++ b/contrib/restricted/aws/aws-c-common/include/aws/common/array_list.h @@ -1,110 +1,110 @@ -#ifndef AWS_COMMON_ARRAY_LIST_H -#define AWS_COMMON_ARRAY_LIST_H - +#ifndef AWS_COMMON_ARRAY_LIST_H +#define AWS_COMMON_ARRAY_LIST_H + /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. - */ -#include <aws/common/common.h> -#include <aws/common/math.h> - -#include <stdlib.h> - -#define AWS_ARRAY_LIST_DEBUG_FILL 0xDD - -struct aws_array_list { - struct aws_allocator *alloc; - size_t current_size; - size_t length; - size_t item_size; - void *data; -}; - -/** - * Prototype for a comparator function for sorting elements. - * - * a and b should be cast to pointers to the element type held in the list - * before being dereferenced. The function should compare the elements and - * return a positive number if a > b, zero if a = b, and a negative number - * if a < b. - */ -typedef int(aws_array_list_comparator_fn)(const void *a, const void *b); - -AWS_EXTERN_C_BEGIN - -/** - * Initializes an array list with an array of size initial_item_allocation * item_size. In this mode, the array size - * will grow by a factor of 2 upon insertion if space is not available. initial_item_allocation is the number of - * elements you want space allocated for. item_size is the size of each element in bytes. Mixing items types is not - * supported by this API. 
- */ -AWS_STATIC_IMPL -int aws_array_list_init_dynamic( - struct aws_array_list *AWS_RESTRICT list, - struct aws_allocator *alloc, - size_t initial_item_allocation, - size_t item_size); - -/** - * Initializes an array list with a preallocated array of void *. item_count is the number of elements in the array, - * and item_size is the size in bytes of each element. Mixing items types is not supported - * by this API. Once this list is full, new items will be rejected. - */ -AWS_STATIC_IMPL -void aws_array_list_init_static( - struct aws_array_list *AWS_RESTRICT list, - void *raw_array, - size_t item_count, - size_t item_size); - -/** - * Set of properties of a valid aws_array_list. - */ -AWS_STATIC_IMPL -bool aws_array_list_is_valid(const struct aws_array_list *AWS_RESTRICT list); - -/** - * Deallocates any memory that was allocated for this list, and resets list for reuse or deletion. - */ -AWS_STATIC_IMPL -void aws_array_list_clean_up(struct aws_array_list *AWS_RESTRICT list); - -/** + */ +#include <aws/common/common.h> +#include <aws/common/math.h> + +#include <stdlib.h> + +#define AWS_ARRAY_LIST_DEBUG_FILL 0xDD + +struct aws_array_list { + struct aws_allocator *alloc; + size_t current_size; + size_t length; + size_t item_size; + void *data; +}; + +/** + * Prototype for a comparator function for sorting elements. + * + * a and b should be cast to pointers to the element type held in the list + * before being dereferenced. The function should compare the elements and + * return a positive number if a > b, zero if a = b, and a negative number + * if a < b. + */ +typedef int(aws_array_list_comparator_fn)(const void *a, const void *b); + +AWS_EXTERN_C_BEGIN + +/** + * Initializes an array list with an array of size initial_item_allocation * item_size. In this mode, the array size + * will grow by a factor of 2 upon insertion if space is not available. initial_item_allocation is the number of + * elements you want space allocated for. 
item_size is the size of each element in bytes. Mixing items types is not + * supported by this API. + */ +AWS_STATIC_IMPL +int aws_array_list_init_dynamic( + struct aws_array_list *AWS_RESTRICT list, + struct aws_allocator *alloc, + size_t initial_item_allocation, + size_t item_size); + +/** + * Initializes an array list with a preallocated array of void *. item_count is the number of elements in the array, + * and item_size is the size in bytes of each element. Mixing items types is not supported + * by this API. Once this list is full, new items will be rejected. + */ +AWS_STATIC_IMPL +void aws_array_list_init_static( + struct aws_array_list *AWS_RESTRICT list, + void *raw_array, + size_t item_count, + size_t item_size); + +/** + * Set of properties of a valid aws_array_list. + */ +AWS_STATIC_IMPL +bool aws_array_list_is_valid(const struct aws_array_list *AWS_RESTRICT list); + +/** + * Deallocates any memory that was allocated for this list, and resets list for reuse or deletion. + */ +AWS_STATIC_IMPL +void aws_array_list_clean_up(struct aws_array_list *AWS_RESTRICT list); + +/** * Erases and then deallocates any memory that was allocated for this list, and resets list for reuse or deletion. */ AWS_STATIC_IMPL void aws_array_list_clean_up_secure(struct aws_array_list *AWS_RESTRICT list); /** - * Pushes the memory pointed to by val onto the end of internal list - */ -AWS_STATIC_IMPL -int aws_array_list_push_back(struct aws_array_list *AWS_RESTRICT list, const void *val); - -/** - * Copies the element at the front of the list if it exists. If list is empty, AWS_ERROR_LIST_EMPTY will be raised - */ -AWS_STATIC_IMPL -int aws_array_list_front(const struct aws_array_list *AWS_RESTRICT list, void *val); - -/** - * Deletes the element at the front of the list if it exists. If list is empty, AWS_ERROR_LIST_EMPTY will be raised. - * This call results in shifting all of the elements at the end of the array to the front. Avoid this call unless that - * is intended behavior. 
- */ -AWS_STATIC_IMPL -int aws_array_list_pop_front(struct aws_array_list *AWS_RESTRICT list); - -/** - * Delete N elements from the front of the list. - * Remaining elements are shifted to the front of the list. - * If the list has less than N elements, the list is cleared. - * This call is more efficient than calling aws_array_list_pop_front() N times. - */ -AWS_STATIC_IMPL -void aws_array_list_pop_front_n(struct aws_array_list *AWS_RESTRICT list, size_t n); - -/** + * Pushes the memory pointed to by val onto the end of internal list + */ +AWS_STATIC_IMPL +int aws_array_list_push_back(struct aws_array_list *AWS_RESTRICT list, const void *val); + +/** + * Copies the element at the front of the list if it exists. If list is empty, AWS_ERROR_LIST_EMPTY will be raised + */ +AWS_STATIC_IMPL +int aws_array_list_front(const struct aws_array_list *AWS_RESTRICT list, void *val); + +/** + * Deletes the element at the front of the list if it exists. If list is empty, AWS_ERROR_LIST_EMPTY will be raised. + * This call results in shifting all of the elements at the end of the array to the front. Avoid this call unless that + * is intended behavior. + */ +AWS_STATIC_IMPL +int aws_array_list_pop_front(struct aws_array_list *AWS_RESTRICT list); + +/** + * Delete N elements from the front of the list. + * Remaining elements are shifted to the front of the list. + * If the list has less than N elements, the list is cleared. + * This call is more efficient than calling aws_array_list_pop_front() N times. + */ +AWS_STATIC_IMPL +void aws_array_list_pop_front_n(struct aws_array_list *AWS_RESTRICT list, size_t n); + +/** * Deletes the element this index in the list if it exists. * If element does not exist, AWS_ERROR_INVALID_INDEX will be raised. * This call results in shifting all remaining elements towards the front. 
@@ -114,102 +114,102 @@ AWS_STATIC_IMPL int aws_array_list_erase(struct aws_array_list *AWS_RESTRICT list, size_t index); /** - * Copies the element at the end of the list if it exists. If list is empty, AWS_ERROR_LIST_EMPTY will be raised. - */ -AWS_STATIC_IMPL -int aws_array_list_back(const struct aws_array_list *AWS_RESTRICT list, void *val); - -/** - * Deletes the element at the end of the list if it exists. If list is empty, AWS_ERROR_LIST_EMPTY will be raised. - */ -AWS_STATIC_IMPL -int aws_array_list_pop_back(struct aws_array_list *AWS_RESTRICT list); - -/** - * Clears all elements in the array and resets length to zero. Size does not change in this operation. - */ -AWS_STATIC_IMPL -void aws_array_list_clear(struct aws_array_list *AWS_RESTRICT list); - -/** - * If in dynamic mode, shrinks the allocated array size to the minimum amount necessary to store its elements. - */ -AWS_COMMON_API -int aws_array_list_shrink_to_fit(struct aws_array_list *AWS_RESTRICT list); - -/** - * Copies the elements from from to to. If to is in static mode, it must at least be the same length as from. Any data - * in to will be overwritten in this copy. - */ -AWS_COMMON_API -int aws_array_list_copy(const struct aws_array_list *AWS_RESTRICT from, struct aws_array_list *AWS_RESTRICT to); - -/** - * Swap contents between two dynamic lists. Both lists must use the same allocator. - */ -AWS_STATIC_IMPL -void aws_array_list_swap_contents( - struct aws_array_list *AWS_RESTRICT list_a, - struct aws_array_list *AWS_RESTRICT list_b); - -/** - * Returns the number of elements that can fit in the internal array. If list is initialized in dynamic mode, - * the capacity changes over time. - */ -AWS_STATIC_IMPL -size_t aws_array_list_capacity(const struct aws_array_list *AWS_RESTRICT list); - -/** - * Returns the number of elements in the internal array. 
- */ -AWS_STATIC_IMPL -size_t aws_array_list_length(const struct aws_array_list *AWS_RESTRICT list); - -/** - * Copies the memory at index to val. If element does not exist, AWS_ERROR_INVALID_INDEX will be raised. - */ -AWS_STATIC_IMPL -int aws_array_list_get_at(const struct aws_array_list *AWS_RESTRICT list, void *val, size_t index); - -/** - * Copies the memory address of the element at index to *val. If element does not exist, AWS_ERROR_INVALID_INDEX will be - * raised. - */ -AWS_STATIC_IMPL -int aws_array_list_get_at_ptr(const struct aws_array_list *AWS_RESTRICT list, void **val, size_t index); - -/** - * Ensures that the array list has enough capacity to store a value at the specified index. If there is not already - * enough capacity, and the list is in dynamic mode, this function will attempt to allocate more memory, expanding the - * list. In static mode, if 'index' is beyond the maximum index, AWS_ERROR_INVALID_INDEX will be raised. - */ -AWS_COMMON_API -int aws_array_list_ensure_capacity(struct aws_array_list *AWS_RESTRICT list, size_t index); - -/** - * Copies the the memory pointed to by val into the array at index. If in dynamic mode, the size will grow by a factor - * of two when the array is full. In static mode, AWS_ERROR_INVALID_INDEX will be raised if the index is past the bounds - * of the array. - */ -AWS_STATIC_IMPL -int aws_array_list_set_at(struct aws_array_list *AWS_RESTRICT list, const void *val, size_t index); - -/** - * Swap elements at the specified indices, which must be within the bounds of the array. - */ -AWS_COMMON_API -void aws_array_list_swap(struct aws_array_list *AWS_RESTRICT list, size_t a, size_t b); - -/** - * Sort elements in the list in-place according to the comparator function. - */ -AWS_STATIC_IMPL -void aws_array_list_sort(struct aws_array_list *AWS_RESTRICT list, aws_array_list_comparator_fn *compare_fn); - + * Copies the element at the end of the list if it exists. 
If list is empty, AWS_ERROR_LIST_EMPTY will be raised. + */ +AWS_STATIC_IMPL +int aws_array_list_back(const struct aws_array_list *AWS_RESTRICT list, void *val); + +/** + * Deletes the element at the end of the list if it exists. If list is empty, AWS_ERROR_LIST_EMPTY will be raised. + */ +AWS_STATIC_IMPL +int aws_array_list_pop_back(struct aws_array_list *AWS_RESTRICT list); + +/** + * Clears all elements in the array and resets length to zero. Size does not change in this operation. + */ +AWS_STATIC_IMPL +void aws_array_list_clear(struct aws_array_list *AWS_RESTRICT list); + +/** + * If in dynamic mode, shrinks the allocated array size to the minimum amount necessary to store its elements. + */ +AWS_COMMON_API +int aws_array_list_shrink_to_fit(struct aws_array_list *AWS_RESTRICT list); + +/** + * Copies the elements from from to to. If to is in static mode, it must at least be the same length as from. Any data + * in to will be overwritten in this copy. + */ +AWS_COMMON_API +int aws_array_list_copy(const struct aws_array_list *AWS_RESTRICT from, struct aws_array_list *AWS_RESTRICT to); + +/** + * Swap contents between two dynamic lists. Both lists must use the same allocator. + */ +AWS_STATIC_IMPL +void aws_array_list_swap_contents( + struct aws_array_list *AWS_RESTRICT list_a, + struct aws_array_list *AWS_RESTRICT list_b); + +/** + * Returns the number of elements that can fit in the internal array. If list is initialized in dynamic mode, + * the capacity changes over time. + */ +AWS_STATIC_IMPL +size_t aws_array_list_capacity(const struct aws_array_list *AWS_RESTRICT list); + +/** + * Returns the number of elements in the internal array. + */ +AWS_STATIC_IMPL +size_t aws_array_list_length(const struct aws_array_list *AWS_RESTRICT list); + +/** + * Copies the memory at index to val. If element does not exist, AWS_ERROR_INVALID_INDEX will be raised. 
+ */ +AWS_STATIC_IMPL +int aws_array_list_get_at(const struct aws_array_list *AWS_RESTRICT list, void *val, size_t index); + +/** + * Copies the memory address of the element at index to *val. If element does not exist, AWS_ERROR_INVALID_INDEX will be + * raised. + */ +AWS_STATIC_IMPL +int aws_array_list_get_at_ptr(const struct aws_array_list *AWS_RESTRICT list, void **val, size_t index); + +/** + * Ensures that the array list has enough capacity to store a value at the specified index. If there is not already + * enough capacity, and the list is in dynamic mode, this function will attempt to allocate more memory, expanding the + * list. In static mode, if 'index' is beyond the maximum index, AWS_ERROR_INVALID_INDEX will be raised. + */ +AWS_COMMON_API +int aws_array_list_ensure_capacity(struct aws_array_list *AWS_RESTRICT list, size_t index); + +/** + * Copies the the memory pointed to by val into the array at index. If in dynamic mode, the size will grow by a factor + * of two when the array is full. In static mode, AWS_ERROR_INVALID_INDEX will be raised if the index is past the bounds + * of the array. + */ +AWS_STATIC_IMPL +int aws_array_list_set_at(struct aws_array_list *AWS_RESTRICT list, const void *val, size_t index); + +/** + * Swap elements at the specified indices, which must be within the bounds of the array. + */ +AWS_COMMON_API +void aws_array_list_swap(struct aws_array_list *AWS_RESTRICT list, size_t a, size_t b); + +/** + * Sort elements in the list in-place according to the comparator function. 
+ */ +AWS_STATIC_IMPL +void aws_array_list_sort(struct aws_array_list *AWS_RESTRICT list, aws_array_list_comparator_fn *compare_fn); + #ifndef AWS_NO_STATIC_IMPL # include <aws/common/array_list.inl> #endif /* AWS_NO_STATIC_IMPL */ - -AWS_EXTERN_C_END - -#endif /* AWS_COMMON_ARRAY_LIST_H */ + +AWS_EXTERN_C_END + +#endif /* AWS_COMMON_ARRAY_LIST_H */ diff --git a/contrib/restricted/aws/aws-c-common/include/aws/common/array_list.inl b/contrib/restricted/aws/aws-c-common/include/aws/common/array_list.inl index d3ca30ecda..d50028c528 100644 --- a/contrib/restricted/aws/aws-c-common/include/aws/common/array_list.inl +++ b/contrib/restricted/aws/aws-c-common/include/aws/common/array_list.inl @@ -4,96 +4,96 @@ /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. - */ - -/* This is implicitly included, but helps with editor highlighting */ -#include <aws/common/array_list.h> -/* - * Do not add system headers here; add them to array_list.h. This file is included under extern "C" guards, - * which might break system headers. - */ + */ + +/* This is implicitly included, but helps with editor highlighting */ +#include <aws/common/array_list.h> +/* + * Do not add system headers here; add them to array_list.h. This file is included under extern "C" guards, + * which might break system headers. 
+ */ AWS_EXTERN_C_BEGIN - -AWS_STATIC_IMPL -int aws_array_list_init_dynamic( - struct aws_array_list *AWS_RESTRICT list, - struct aws_allocator *alloc, - size_t initial_item_allocation, - size_t item_size) { - + +AWS_STATIC_IMPL +int aws_array_list_init_dynamic( + struct aws_array_list *AWS_RESTRICT list, + struct aws_allocator *alloc, + size_t initial_item_allocation, + size_t item_size) { + AWS_FATAL_PRECONDITION(list != NULL); AWS_FATAL_PRECONDITION(alloc != NULL); AWS_FATAL_PRECONDITION(item_size > 0); AWS_ZERO_STRUCT(*list); - size_t allocation_size; - if (aws_mul_size_checked(initial_item_allocation, item_size, &allocation_size)) { + size_t allocation_size; + if (aws_mul_size_checked(initial_item_allocation, item_size, &allocation_size)) { goto error; - } - - if (allocation_size > 0) { + } + + if (allocation_size > 0) { list->data = aws_mem_acquire(alloc, allocation_size); - if (!list->data) { + if (!list->data) { goto error; - } -#ifdef DEBUG_BUILD - memset(list->data, AWS_ARRAY_LIST_DEBUG_FILL, allocation_size); + } +#ifdef DEBUG_BUILD + memset(list->data, AWS_ARRAY_LIST_DEBUG_FILL, allocation_size); -#endif - list->current_size = allocation_size; - } +#endif + list->current_size = allocation_size; + } list->item_size = item_size; list->alloc = alloc; - + AWS_FATAL_POSTCONDITION(list->current_size == 0 || list->data); - AWS_POSTCONDITION(aws_array_list_is_valid(list)); - return AWS_OP_SUCCESS; + AWS_POSTCONDITION(aws_array_list_is_valid(list)); + return AWS_OP_SUCCESS; error: AWS_POSTCONDITION(AWS_IS_ZEROED(*list)); return AWS_OP_ERR; -} - -AWS_STATIC_IMPL -void aws_array_list_init_static( - struct aws_array_list *AWS_RESTRICT list, - void *raw_array, - size_t item_count, - size_t item_size) { - +} + +AWS_STATIC_IMPL +void aws_array_list_init_static( + struct aws_array_list *AWS_RESTRICT list, + void *raw_array, + size_t item_count, + size_t item_size) { + AWS_FATAL_PRECONDITION(list != NULL); AWS_FATAL_PRECONDITION(raw_array != NULL); 
AWS_FATAL_PRECONDITION(item_count > 0); AWS_FATAL_PRECONDITION(item_size > 0); - list->alloc = NULL; - - int no_overflow = !aws_mul_size_checked(item_count, item_size, &list->current_size); + list->alloc = NULL; + + int no_overflow = !aws_mul_size_checked(item_count, item_size, &list->current_size); AWS_FATAL_PRECONDITION(no_overflow); - - list->item_size = item_size; - list->length = 0; - list->data = raw_array; - AWS_POSTCONDITION(aws_array_list_is_valid(list)); -} - -AWS_STATIC_IMPL -bool aws_array_list_is_valid(const struct aws_array_list *AWS_RESTRICT list) { + + list->item_size = item_size; + list->length = 0; + list->data = raw_array; + AWS_POSTCONDITION(aws_array_list_is_valid(list)); +} + +AWS_STATIC_IMPL +bool aws_array_list_is_valid(const struct aws_array_list *AWS_RESTRICT list) { if (!list) { - return false; - } - size_t required_size = 0; + return false; + } + size_t required_size = 0; bool required_size_is_valid = (aws_mul_size_checked(list->length, list->item_size, &required_size) == AWS_OP_SUCCESS); - bool current_size_is_valid = (list->current_size >= required_size); + bool current_size_is_valid = (list->current_size >= required_size); bool data_is_valid = AWS_IMPLIES(list->current_size == 0, list->data == NULL) && AWS_IMPLIES(list->current_size != 0, AWS_MEM_IS_WRITABLE(list->data, list->current_size)); bool item_size_is_valid = (list->item_size != 0); return required_size_is_valid && current_size_is_valid && data_is_valid && item_size_is_valid; -} - -AWS_STATIC_IMPL +} + +AWS_STATIC_IMPL void aws_array_list_debug_print(const struct aws_array_list *list) { printf( "arraylist %p. Alloc %p. current_size %zu. length %zu. item_size %zu. 
data %p\n", @@ -106,16 +106,16 @@ void aws_array_list_debug_print(const struct aws_array_list *list) { } AWS_STATIC_IMPL -void aws_array_list_clean_up(struct aws_array_list *AWS_RESTRICT list) { +void aws_array_list_clean_up(struct aws_array_list *AWS_RESTRICT list) { AWS_PRECONDITION(AWS_IS_ZEROED(*list) || aws_array_list_is_valid(list)); - if (list->alloc && list->data) { - aws_mem_release(list->alloc, list->data); - } - + if (list->alloc && list->data) { + aws_mem_release(list->alloc, list->data); + } + AWS_ZERO_STRUCT(*list); -} - -AWS_STATIC_IMPL +} + +AWS_STATIC_IMPL void aws_array_list_clean_up_secure(struct aws_array_list *AWS_RESTRICT list) { AWS_PRECONDITION(AWS_IS_ZEROED(*list) || aws_array_list_is_valid(list)); if (list->alloc && list->data) { @@ -127,75 +127,75 @@ void aws_array_list_clean_up_secure(struct aws_array_list *AWS_RESTRICT list) { } AWS_STATIC_IMPL -int aws_array_list_push_back(struct aws_array_list *AWS_RESTRICT list, const void *val) { - AWS_PRECONDITION(aws_array_list_is_valid(list)); +int aws_array_list_push_back(struct aws_array_list *AWS_RESTRICT list, const void *val) { + AWS_PRECONDITION(aws_array_list_is_valid(list)); AWS_PRECONDITION( val && AWS_MEM_IS_READABLE(val, list->item_size), "Input pointer [val] must point writable memory of [list->item_size] bytes."); - int err_code = aws_array_list_set_at(list, val, aws_array_list_length(list)); - - if (err_code && aws_last_error() == AWS_ERROR_INVALID_INDEX && !list->alloc) { - AWS_POSTCONDITION(aws_array_list_is_valid(list)); - return aws_raise_error(AWS_ERROR_LIST_EXCEEDS_MAX_SIZE); - } - - AWS_POSTCONDITION(aws_array_list_is_valid(list)); - return err_code; -} - -AWS_STATIC_IMPL -int aws_array_list_front(const struct aws_array_list *AWS_RESTRICT list, void *val) { - AWS_PRECONDITION(aws_array_list_is_valid(list)); + int err_code = aws_array_list_set_at(list, val, aws_array_list_length(list)); + + if (err_code && aws_last_error() == AWS_ERROR_INVALID_INDEX && !list->alloc) { + 
AWS_POSTCONDITION(aws_array_list_is_valid(list)); + return aws_raise_error(AWS_ERROR_LIST_EXCEEDS_MAX_SIZE); + } + + AWS_POSTCONDITION(aws_array_list_is_valid(list)); + return err_code; +} + +AWS_STATIC_IMPL +int aws_array_list_front(const struct aws_array_list *AWS_RESTRICT list, void *val) { + AWS_PRECONDITION(aws_array_list_is_valid(list)); AWS_PRECONDITION( val && AWS_MEM_IS_WRITABLE(val, list->item_size), "Input pointer [val] must point writable memory of [list->item_size] bytes."); - if (aws_array_list_length(list) > 0) { - memcpy(val, list->data, list->item_size); + if (aws_array_list_length(list) > 0) { + memcpy(val, list->data, list->item_size); AWS_POSTCONDITION(AWS_BYTES_EQ(val, list->data, list->item_size)); - AWS_POSTCONDITION(aws_array_list_is_valid(list)); - return AWS_OP_SUCCESS; - } - - AWS_POSTCONDITION(aws_array_list_is_valid(list)); - return aws_raise_error(AWS_ERROR_LIST_EMPTY); -} - -AWS_STATIC_IMPL -int aws_array_list_pop_front(struct aws_array_list *AWS_RESTRICT list) { - AWS_PRECONDITION(aws_array_list_is_valid(list)); - if (aws_array_list_length(list) > 0) { - aws_array_list_pop_front_n(list, 1); - AWS_POSTCONDITION(aws_array_list_is_valid(list)); - return AWS_OP_SUCCESS; - } - - AWS_POSTCONDITION(aws_array_list_is_valid(list)); - return aws_raise_error(AWS_ERROR_LIST_EMPTY); -} - -AWS_STATIC_IMPL -void aws_array_list_pop_front_n(struct aws_array_list *AWS_RESTRICT list, size_t n) { - AWS_PRECONDITION(aws_array_list_is_valid(list)); - if (n >= aws_array_list_length(list)) { - aws_array_list_clear(list); - AWS_POSTCONDITION(aws_array_list_is_valid(list)); - return; - } - - if (n > 0) { - size_t popping_bytes = list->item_size * n; - size_t remaining_items = aws_array_list_length(list) - n; - size_t remaining_bytes = remaining_items * list->item_size; - memmove(list->data, (uint8_t *)list->data + popping_bytes, remaining_bytes); - list->length = remaining_items; -#ifdef DEBUG_BUILD - memset((uint8_t *)list->data + remaining_bytes, 
AWS_ARRAY_LIST_DEBUG_FILL, popping_bytes); -#endif - } - AWS_POSTCONDITION(aws_array_list_is_valid(list)); -} - + AWS_POSTCONDITION(aws_array_list_is_valid(list)); + return AWS_OP_SUCCESS; + } + + AWS_POSTCONDITION(aws_array_list_is_valid(list)); + return aws_raise_error(AWS_ERROR_LIST_EMPTY); +} + +AWS_STATIC_IMPL +int aws_array_list_pop_front(struct aws_array_list *AWS_RESTRICT list) { + AWS_PRECONDITION(aws_array_list_is_valid(list)); + if (aws_array_list_length(list) > 0) { + aws_array_list_pop_front_n(list, 1); + AWS_POSTCONDITION(aws_array_list_is_valid(list)); + return AWS_OP_SUCCESS; + } + + AWS_POSTCONDITION(aws_array_list_is_valid(list)); + return aws_raise_error(AWS_ERROR_LIST_EMPTY); +} + +AWS_STATIC_IMPL +void aws_array_list_pop_front_n(struct aws_array_list *AWS_RESTRICT list, size_t n) { + AWS_PRECONDITION(aws_array_list_is_valid(list)); + if (n >= aws_array_list_length(list)) { + aws_array_list_clear(list); + AWS_POSTCONDITION(aws_array_list_is_valid(list)); + return; + } + + if (n > 0) { + size_t popping_bytes = list->item_size * n; + size_t remaining_items = aws_array_list_length(list) - n; + size_t remaining_bytes = remaining_items * list->item_size; + memmove(list->data, (uint8_t *)list->data + popping_bytes, remaining_bytes); + list->length = remaining_items; +#ifdef DEBUG_BUILD + memset((uint8_t *)list->data + remaining_bytes, AWS_ARRAY_LIST_DEBUG_FILL, popping_bytes); +#endif + } + AWS_POSTCONDITION(aws_array_list_is_valid(list)); +} + int aws_array_list_erase(struct aws_array_list *AWS_RESTRICT list, size_t index) { AWS_PRECONDITION(aws_array_list_is_valid(list)); @@ -227,162 +227,162 @@ int aws_array_list_erase(struct aws_array_list *AWS_RESTRICT list, size_t index) return AWS_OP_SUCCESS; } -AWS_STATIC_IMPL -int aws_array_list_back(const struct aws_array_list *AWS_RESTRICT list, void *val) { - AWS_PRECONDITION(aws_array_list_is_valid(list)); +AWS_STATIC_IMPL +int aws_array_list_back(const struct aws_array_list *AWS_RESTRICT list, void *val) 
{ + AWS_PRECONDITION(aws_array_list_is_valid(list)); AWS_PRECONDITION( val && AWS_MEM_IS_WRITABLE(val, list->item_size), "Input pointer [val] must point writable memory of [list->item_size] bytes."); - if (aws_array_list_length(list) > 0) { - size_t last_item_offset = list->item_size * (aws_array_list_length(list) - 1); - - memcpy(val, (void *)((uint8_t *)list->data + last_item_offset), list->item_size); - AWS_POSTCONDITION(aws_array_list_is_valid(list)); - return AWS_OP_SUCCESS; - } - - AWS_POSTCONDITION(aws_array_list_is_valid(list)); - return aws_raise_error(AWS_ERROR_LIST_EMPTY); -} - -AWS_STATIC_IMPL -int aws_array_list_pop_back(struct aws_array_list *AWS_RESTRICT list) { - AWS_PRECONDITION(aws_array_list_is_valid(list)); - if (aws_array_list_length(list) > 0) { - + if (aws_array_list_length(list) > 0) { + size_t last_item_offset = list->item_size * (aws_array_list_length(list) - 1); + + memcpy(val, (void *)((uint8_t *)list->data + last_item_offset), list->item_size); + AWS_POSTCONDITION(aws_array_list_is_valid(list)); + return AWS_OP_SUCCESS; + } + + AWS_POSTCONDITION(aws_array_list_is_valid(list)); + return aws_raise_error(AWS_ERROR_LIST_EMPTY); +} + +AWS_STATIC_IMPL +int aws_array_list_pop_back(struct aws_array_list *AWS_RESTRICT list) { + AWS_PRECONDITION(aws_array_list_is_valid(list)); + if (aws_array_list_length(list) > 0) { + AWS_FATAL_PRECONDITION(list->data); - - size_t last_item_offset = list->item_size * (aws_array_list_length(list) - 1); - - memset((void *)((uint8_t *)list->data + last_item_offset), 0, list->item_size); - list->length--; - AWS_POSTCONDITION(aws_array_list_is_valid(list)); - return AWS_OP_SUCCESS; - } - - AWS_POSTCONDITION(aws_array_list_is_valid(list)); - return aws_raise_error(AWS_ERROR_LIST_EMPTY); -} - -AWS_STATIC_IMPL -void aws_array_list_clear(struct aws_array_list *AWS_RESTRICT list) { - AWS_PRECONDITION(aws_array_list_is_valid(list)); - if (list->data) { -#ifdef DEBUG_BUILD - memset(list->data, AWS_ARRAY_LIST_DEBUG_FILL, 
list->current_size); -#endif - list->length = 0; - } - AWS_POSTCONDITION(aws_array_list_is_valid(list)); -} - -AWS_STATIC_IMPL -void aws_array_list_swap_contents( - struct aws_array_list *AWS_RESTRICT list_a, - struct aws_array_list *AWS_RESTRICT list_b) { + + size_t last_item_offset = list->item_size * (aws_array_list_length(list) - 1); + + memset((void *)((uint8_t *)list->data + last_item_offset), 0, list->item_size); + list->length--; + AWS_POSTCONDITION(aws_array_list_is_valid(list)); + return AWS_OP_SUCCESS; + } + + AWS_POSTCONDITION(aws_array_list_is_valid(list)); + return aws_raise_error(AWS_ERROR_LIST_EMPTY); +} + +AWS_STATIC_IMPL +void aws_array_list_clear(struct aws_array_list *AWS_RESTRICT list) { + AWS_PRECONDITION(aws_array_list_is_valid(list)); + if (list->data) { +#ifdef DEBUG_BUILD + memset(list->data, AWS_ARRAY_LIST_DEBUG_FILL, list->current_size); +#endif + list->length = 0; + } + AWS_POSTCONDITION(aws_array_list_is_valid(list)); +} + +AWS_STATIC_IMPL +void aws_array_list_swap_contents( + struct aws_array_list *AWS_RESTRICT list_a, + struct aws_array_list *AWS_RESTRICT list_b) { AWS_FATAL_PRECONDITION(list_a->alloc); AWS_FATAL_PRECONDITION(list_a->alloc == list_b->alloc); AWS_FATAL_PRECONDITION(list_a->item_size == list_b->item_size); AWS_FATAL_PRECONDITION(list_a != list_b); - AWS_PRECONDITION(aws_array_list_is_valid(list_a)); - AWS_PRECONDITION(aws_array_list_is_valid(list_b)); - - struct aws_array_list tmp = *list_a; - *list_a = *list_b; - *list_b = tmp; - AWS_POSTCONDITION(aws_array_list_is_valid(list_a)); - AWS_POSTCONDITION(aws_array_list_is_valid(list_b)); -} - -AWS_STATIC_IMPL -size_t aws_array_list_capacity(const struct aws_array_list *AWS_RESTRICT list) { + AWS_PRECONDITION(aws_array_list_is_valid(list_a)); + AWS_PRECONDITION(aws_array_list_is_valid(list_b)); + + struct aws_array_list tmp = *list_a; + *list_a = *list_b; + *list_b = tmp; + AWS_POSTCONDITION(aws_array_list_is_valid(list_a)); + 
AWS_POSTCONDITION(aws_array_list_is_valid(list_b)); +} + +AWS_STATIC_IMPL +size_t aws_array_list_capacity(const struct aws_array_list *AWS_RESTRICT list) { AWS_FATAL_PRECONDITION(list->item_size); - AWS_PRECONDITION(aws_array_list_is_valid(list)); - size_t capacity = list->current_size / list->item_size; - AWS_POSTCONDITION(aws_array_list_is_valid(list)); - return capacity; -} - -AWS_STATIC_IMPL -size_t aws_array_list_length(const struct aws_array_list *AWS_RESTRICT list) { - /* - * This assert teaches clang-tidy and friends that list->data cannot be null in a non-empty - * list. - */ + AWS_PRECONDITION(aws_array_list_is_valid(list)); + size_t capacity = list->current_size / list->item_size; + AWS_POSTCONDITION(aws_array_list_is_valid(list)); + return capacity; +} + +AWS_STATIC_IMPL +size_t aws_array_list_length(const struct aws_array_list *AWS_RESTRICT list) { + /* + * This assert teaches clang-tidy and friends that list->data cannot be null in a non-empty + * list. + */ AWS_FATAL_PRECONDITION(!list->length || list->data); - AWS_PRECONDITION(aws_array_list_is_valid(list)); - size_t len = list->length; - AWS_POSTCONDITION(aws_array_list_is_valid(list)); - return len; -} - -AWS_STATIC_IMPL -int aws_array_list_get_at(const struct aws_array_list *AWS_RESTRICT list, void *val, size_t index) { - AWS_PRECONDITION(aws_array_list_is_valid(list)); + AWS_PRECONDITION(aws_array_list_is_valid(list)); + size_t len = list->length; + AWS_POSTCONDITION(aws_array_list_is_valid(list)); + return len; +} + +AWS_STATIC_IMPL +int aws_array_list_get_at(const struct aws_array_list *AWS_RESTRICT list, void *val, size_t index) { + AWS_PRECONDITION(aws_array_list_is_valid(list)); AWS_PRECONDITION( val && AWS_MEM_IS_WRITABLE(val, list->item_size), "Input pointer [val] must point writable memory of [list->item_size] bytes."); - if (aws_array_list_length(list) > index) { - memcpy(val, (void *)((uint8_t *)list->data + (list->item_size * index)), list->item_size); - 
AWS_POSTCONDITION(aws_array_list_is_valid(list)); - return AWS_OP_SUCCESS; - } - AWS_POSTCONDITION(aws_array_list_is_valid(list)); - return aws_raise_error(AWS_ERROR_INVALID_INDEX); -} - -AWS_STATIC_IMPL -int aws_array_list_get_at_ptr(const struct aws_array_list *AWS_RESTRICT list, void **val, size_t index) { - AWS_PRECONDITION(aws_array_list_is_valid(list)); + if (aws_array_list_length(list) > index) { + memcpy(val, (void *)((uint8_t *)list->data + (list->item_size * index)), list->item_size); + AWS_POSTCONDITION(aws_array_list_is_valid(list)); + return AWS_OP_SUCCESS; + } + AWS_POSTCONDITION(aws_array_list_is_valid(list)); + return aws_raise_error(AWS_ERROR_INVALID_INDEX); +} + +AWS_STATIC_IMPL +int aws_array_list_get_at_ptr(const struct aws_array_list *AWS_RESTRICT list, void **val, size_t index) { + AWS_PRECONDITION(aws_array_list_is_valid(list)); AWS_PRECONDITION(val != NULL); - if (aws_array_list_length(list) > index) { - *val = (void *)((uint8_t *)list->data + (list->item_size * index)); - AWS_POSTCONDITION(aws_array_list_is_valid(list)); - return AWS_OP_SUCCESS; - } - AWS_POSTCONDITION(aws_array_list_is_valid(list)); - return aws_raise_error(AWS_ERROR_INVALID_INDEX); -} - -AWS_STATIC_IMPL + if (aws_array_list_length(list) > index) { + *val = (void *)((uint8_t *)list->data + (list->item_size * index)); + AWS_POSTCONDITION(aws_array_list_is_valid(list)); + return AWS_OP_SUCCESS; + } + AWS_POSTCONDITION(aws_array_list_is_valid(list)); + return aws_raise_error(AWS_ERROR_INVALID_INDEX); +} + +AWS_STATIC_IMPL int aws_array_list_set_at(struct aws_array_list *AWS_RESTRICT list, const void *val, size_t index) { - AWS_PRECONDITION(aws_array_list_is_valid(list)); + AWS_PRECONDITION(aws_array_list_is_valid(list)); AWS_PRECONDITION( val && AWS_MEM_IS_READABLE(val, list->item_size), "Input pointer [val] must point readable memory of [list->item_size] bytes."); - + if (aws_array_list_ensure_capacity(list, index)) { - AWS_POSTCONDITION(aws_array_list_is_valid(list)); - 
return AWS_OP_ERR; - } - + AWS_POSTCONDITION(aws_array_list_is_valid(list)); + return AWS_OP_ERR; + } + AWS_FATAL_PRECONDITION(list->data); - - memcpy((void *)((uint8_t *)list->data + (list->item_size * index)), val, list->item_size); - - /* - * This isn't perfect, but its the best I can come up with for detecting - * length changes. - */ - if (index >= aws_array_list_length(list)) { - if (aws_add_size_checked(index, 1, &list->length)) { - AWS_POSTCONDITION(aws_array_list_is_valid(list)); - return AWS_OP_ERR; - } - } - - AWS_POSTCONDITION(aws_array_list_is_valid(list)); - return AWS_OP_SUCCESS; -} - -AWS_STATIC_IMPL -void aws_array_list_sort(struct aws_array_list *AWS_RESTRICT list, aws_array_list_comparator_fn *compare_fn) { - AWS_PRECONDITION(aws_array_list_is_valid(list)); - if (list->data) { - qsort(list->data, aws_array_list_length(list), list->item_size, compare_fn); - } - AWS_POSTCONDITION(aws_array_list_is_valid(list)); -} + + memcpy((void *)((uint8_t *)list->data + (list->item_size * index)), val, list->item_size); + + /* + * This isn't perfect, but its the best I can come up with for detecting + * length changes. 
+ */ + if (index >= aws_array_list_length(list)) { + if (aws_add_size_checked(index, 1, &list->length)) { + AWS_POSTCONDITION(aws_array_list_is_valid(list)); + return AWS_OP_ERR; + } + } + + AWS_POSTCONDITION(aws_array_list_is_valid(list)); + return AWS_OP_SUCCESS; +} + +AWS_STATIC_IMPL +void aws_array_list_sort(struct aws_array_list *AWS_RESTRICT list, aws_array_list_comparator_fn *compare_fn) { + AWS_PRECONDITION(aws_array_list_is_valid(list)); + if (list->data) { + qsort(list->data, aws_array_list_length(list), list->item_size, compare_fn); + } + AWS_POSTCONDITION(aws_array_list_is_valid(list)); +} AWS_EXTERN_C_END diff --git a/contrib/restricted/aws/aws-c-common/include/aws/common/atomics.h b/contrib/restricted/aws/aws-c-common/include/aws/common/atomics.h index e2ee8df95a..fd204764ed 100644 --- a/contrib/restricted/aws/aws-c-common/include/aws/common/atomics.h +++ b/contrib/restricted/aws/aws-c-common/include/aws/common/atomics.h @@ -1,327 +1,327 @@ -#ifndef AWS_COMMON_ATOMICS_H -#define AWS_COMMON_ATOMICS_H - -#include <aws/common/common.h> - +#ifndef AWS_COMMON_ATOMICS_H +#define AWS_COMMON_ATOMICS_H + +#include <aws/common/common.h> + /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. - */ - -/** - * struct aws_atomic_var represents an atomic variable - a value which can hold an integer or pointer - * that can be manipulated atomically. struct aws_atomic_vars should normally only be manipulated - * with atomics methods defined in this header. - */ -struct aws_atomic_var { - void *value; -}; -/* Helpers for extracting the integer and pointer values from aws_atomic_var. */ + */ + +/** + * struct aws_atomic_var represents an atomic variable - a value which can hold an integer or pointer + * that can be manipulated atomically. struct aws_atomic_vars should normally only be manipulated + * with atomics methods defined in this header. 
+ */ +struct aws_atomic_var { + void *value; +}; +/* Helpers for extracting the integer and pointer values from aws_atomic_var. */ #define AWS_ATOMIC_VAR_PTRVAL(var) ((var)->value) #define AWS_ATOMIC_VAR_INTVAL(var) (*(aws_atomic_impl_int_t *)(var)) - -/* - * This enumeration specifies the memory ordering properties requested for a particular - * atomic operation. The atomic operation may provide stricter ordering than requested. - * Note that, within a single thread, all operations are still sequenced (that is, a thread - * sees its own atomic writes and reads happening in program order, but other threads may - * disagree on this ordering). - * - * The behavior of these memory orderings are the same as in the C11 atomics API; however, - * we only implement a subset that can be portably implemented on the compilers we target. - */ - -enum aws_memory_order { - /** - * No particular ordering constraints are guaranteed relative to other - * operations at all; we merely ensure that the operation itself is atomic. - */ - aws_memory_order_relaxed = 0, - /* aws_memory_order_consume - not currently implemented */ - - /** - * Specifies acquire ordering. No reads or writes on the current thread can be - * reordered to happen before this operation. This is typically paired with a release - * ordering; any writes that happened on the releasing operation will be visible - * after the paired acquire operation. - * - * Acquire ordering is only meaningful on load or load-store operations. - */ - aws_memory_order_acquire = 2, /* leave a spot for consume if we ever add it */ - - /** - * Specifies release order. No reads or writes can be reordered to come after this - * operation. Typically paired with an acquire operation. - * - * Release ordering is only meaningful on store or load-store operations. 
- */ - aws_memory_order_release, - - /** - * Specifies acquire-release order; if this operation acts as a load, it acts as an - * acquire operation; if it acts as a store, it acts as a release operation; if it's - * a load-store, it does both. - */ - aws_memory_order_acq_rel, - - /* - * Specifies sequentially consistent order. This behaves as acq_rel, but in addition, - * all seq_cst operations appear to occur in some globally consistent order. - * - * TODO: Figure out how to correctly implement this in MSVC. It appears that interlocked - * functions provide only acq_rel ordering. - */ - aws_memory_order_seq_cst -}; - -/** - * Statically initializes an aws_atomic_var to a given size_t value. - */ -#define AWS_ATOMIC_INIT_INT(x) \ - { .value = (void *)(uintptr_t)(x) } - -/** - * Statically initializes an aws_atomic_var to a given void * value. - */ -#define AWS_ATOMIC_INIT_PTR(x) \ - { .value = (void *)(x) } - + +/* + * This enumeration specifies the memory ordering properties requested for a particular + * atomic operation. The atomic operation may provide stricter ordering than requested. + * Note that, within a single thread, all operations are still sequenced (that is, a thread + * sees its own atomic writes and reads happening in program order, but other threads may + * disagree on this ordering). + * + * The behavior of these memory orderings are the same as in the C11 atomics API; however, + * we only implement a subset that can be portably implemented on the compilers we target. + */ + +enum aws_memory_order { + /** + * No particular ordering constraints are guaranteed relative to other + * operations at all; we merely ensure that the operation itself is atomic. + */ + aws_memory_order_relaxed = 0, + /* aws_memory_order_consume - not currently implemented */ + + /** + * Specifies acquire ordering. No reads or writes on the current thread can be + * reordered to happen before this operation. 
This is typically paired with a release + * ordering; any writes that happened on the releasing operation will be visible + * after the paired acquire operation. + * + * Acquire ordering is only meaningful on load or load-store operations. + */ + aws_memory_order_acquire = 2, /* leave a spot for consume if we ever add it */ + + /** + * Specifies release order. No reads or writes can be reordered to come after this + * operation. Typically paired with an acquire operation. + * + * Release ordering is only meaningful on store or load-store operations. + */ + aws_memory_order_release, + + /** + * Specifies acquire-release order; if this operation acts as a load, it acts as an + * acquire operation; if it acts as a store, it acts as a release operation; if it's + * a load-store, it does both. + */ + aws_memory_order_acq_rel, + + /* + * Specifies sequentially consistent order. This behaves as acq_rel, but in addition, + * all seq_cst operations appear to occur in some globally consistent order. + * + * TODO: Figure out how to correctly implement this in MSVC. It appears that interlocked + * functions provide only acq_rel ordering. + */ + aws_memory_order_seq_cst +}; + +/** + * Statically initializes an aws_atomic_var to a given size_t value. + */ +#define AWS_ATOMIC_INIT_INT(x) \ + { .value = (void *)(uintptr_t)(x) } + +/** + * Statically initializes an aws_atomic_var to a given void * value. + */ +#define AWS_ATOMIC_INIT_PTR(x) \ + { .value = (void *)(x) } + AWS_EXTERN_C_BEGIN -/* - * Note: We do not use the C11 atomics API; this is because we want to make sure the representation - * (and behavior) of atomic values is consistent, regardless of what --std= flag you pass to your compiler. - * Since C11 atomics can silently introduce locks, we run the risk of creating such ABI inconsistencies - * if we decide based on compiler features which atomics API to use, and in practice we expect to have - * either the GNU or MSVC atomics anyway. 
- * - * As future work, we could test to see if the C11 atomics API on this platform behaves consistently - * with the other APIs and use it if it does. - */ - -/** - * Initializes an atomic variable with an integer value. This operation should be done before any - * other operations on this atomic variable, and must be done before attempting any parallel operations. - * - * This operation does not imply a barrier. Ensure that you use an acquire-release barrier (or stronger) - * when communicating the fact that initialization is complete to the other thread. Launching the thread - * implies a sufficiently strong barrier. - */ -AWS_STATIC_IMPL -void aws_atomic_init_int(volatile struct aws_atomic_var *var, size_t n); - -/** - * Initializes an atomic variable with a pointer value. This operation should be done before any - * other operations on this atomic variable, and must be done before attempting any parallel operations. - * - * This operation does not imply a barrier. Ensure that you use an acquire-release barrier (or stronger) - * when communicating the fact that initialization is complete to the other thread. Launching the thread - * implies a sufficiently strong barrier. - */ -AWS_STATIC_IMPL -void aws_atomic_init_ptr(volatile struct aws_atomic_var *var, void *p); - -/** - * Reads an atomic var as an integer, using the specified ordering, and returns the result. - */ -AWS_STATIC_IMPL -size_t aws_atomic_load_int_explicit(volatile const struct aws_atomic_var *var, enum aws_memory_order memory_order); - -/** - * Reads an atomic var as an integer, using sequentially consistent ordering, and returns the result. - */ -AWS_STATIC_IMPL +/* + * Note: We do not use the C11 atomics API; this is because we want to make sure the representation + * (and behavior) of atomic values is consistent, regardless of what --std= flag you pass to your compiler. 
+ * Since C11 atomics can silently introduce locks, we run the risk of creating such ABI inconsistencies + * if we decide based on compiler features which atomics API to use, and in practice we expect to have + * either the GNU or MSVC atomics anyway. + * + * As future work, we could test to see if the C11 atomics API on this platform behaves consistently + * with the other APIs and use it if it does. + */ + +/** + * Initializes an atomic variable with an integer value. This operation should be done before any + * other operations on this atomic variable, and must be done before attempting any parallel operations. + * + * This operation does not imply a barrier. Ensure that you use an acquire-release barrier (or stronger) + * when communicating the fact that initialization is complete to the other thread. Launching the thread + * implies a sufficiently strong barrier. + */ +AWS_STATIC_IMPL +void aws_atomic_init_int(volatile struct aws_atomic_var *var, size_t n); + +/** + * Initializes an atomic variable with a pointer value. This operation should be done before any + * other operations on this atomic variable, and must be done before attempting any parallel operations. + * + * This operation does not imply a barrier. Ensure that you use an acquire-release barrier (or stronger) + * when communicating the fact that initialization is complete to the other thread. Launching the thread + * implies a sufficiently strong barrier. + */ +AWS_STATIC_IMPL +void aws_atomic_init_ptr(volatile struct aws_atomic_var *var, void *p); + +/** + * Reads an atomic var as an integer, using the specified ordering, and returns the result. + */ +AWS_STATIC_IMPL +size_t aws_atomic_load_int_explicit(volatile const struct aws_atomic_var *var, enum aws_memory_order memory_order); + +/** + * Reads an atomic var as an integer, using sequentially consistent ordering, and returns the result. 
+ */ +AWS_STATIC_IMPL size_t aws_atomic_load_int(volatile const struct aws_atomic_var *var); -/** - * Reads an atomic var as a pointer, using the specified ordering, and returns the result. - */ -AWS_STATIC_IMPL -void *aws_atomic_load_ptr_explicit(volatile const struct aws_atomic_var *var, enum aws_memory_order memory_order); - -/** - * Reads an atomic var as a pointer, using sequentially consistent ordering, and returns the result. - */ -AWS_STATIC_IMPL +/** + * Reads an atomic var as a pointer, using the specified ordering, and returns the result. + */ +AWS_STATIC_IMPL +void *aws_atomic_load_ptr_explicit(volatile const struct aws_atomic_var *var, enum aws_memory_order memory_order); + +/** + * Reads an atomic var as a pointer, using sequentially consistent ordering, and returns the result. + */ +AWS_STATIC_IMPL void *aws_atomic_load_ptr(volatile const struct aws_atomic_var *var); - -/** - * Stores an integer into an atomic var, using the specified ordering. - */ -AWS_STATIC_IMPL -void aws_atomic_store_int_explicit(volatile struct aws_atomic_var *var, size_t n, enum aws_memory_order memory_order); - -/** - * Stores an integer into an atomic var, using sequentially consistent ordering. - */ -AWS_STATIC_IMPL + +/** + * Stores an integer into an atomic var, using the specified ordering. + */ +AWS_STATIC_IMPL +void aws_atomic_store_int_explicit(volatile struct aws_atomic_var *var, size_t n, enum aws_memory_order memory_order); + +/** + * Stores an integer into an atomic var, using sequentially consistent ordering. + */ +AWS_STATIC_IMPL void aws_atomic_store_int(volatile struct aws_atomic_var *var, size_t n); - -/** - * Stores a pointer into an atomic var, using the specified ordering. - */ -AWS_STATIC_IMPL -void aws_atomic_store_ptr_explicit(volatile struct aws_atomic_var *var, void *p, enum aws_memory_order memory_order); - -/** - * Stores a pointer into an atomic var, using sequentially consistent ordering. 
- */ -AWS_STATIC_IMPL + +/** + * Stores a pointer into an atomic var, using the specified ordering. + */ +AWS_STATIC_IMPL +void aws_atomic_store_ptr_explicit(volatile struct aws_atomic_var *var, void *p, enum aws_memory_order memory_order); + +/** + * Stores a pointer into an atomic var, using sequentially consistent ordering. + */ +AWS_STATIC_IMPL void aws_atomic_store_ptr(volatile struct aws_atomic_var *var, void *p); - -/** - * Exchanges an integer with the value in an atomic_var, using the specified ordering. - * Returns the value that was previously in the atomic_var. - */ -AWS_STATIC_IMPL -size_t aws_atomic_exchange_int_explicit( - volatile struct aws_atomic_var *var, - size_t n, - enum aws_memory_order memory_order); - -/** - * Exchanges an integer with the value in an atomic_var, using sequentially consistent ordering. - * Returns the value that was previously in the atomic_var. - */ -AWS_STATIC_IMPL + +/** + * Exchanges an integer with the value in an atomic_var, using the specified ordering. + * Returns the value that was previously in the atomic_var. + */ +AWS_STATIC_IMPL +size_t aws_atomic_exchange_int_explicit( + volatile struct aws_atomic_var *var, + size_t n, + enum aws_memory_order memory_order); + +/** + * Exchanges an integer with the value in an atomic_var, using sequentially consistent ordering. + * Returns the value that was previously in the atomic_var. + */ +AWS_STATIC_IMPL size_t aws_atomic_exchange_int(volatile struct aws_atomic_var *var, size_t n); - -/** - * Exchanges a pointer with the value in an atomic_var, using the specified ordering. - * Returns the value that was previously in the atomic_var. - */ -AWS_STATIC_IMPL -void *aws_atomic_exchange_ptr_explicit( - volatile struct aws_atomic_var *var, - void *p, - enum aws_memory_order memory_order); - -/** - * Exchanges an integer with the value in an atomic_var, using sequentially consistent ordering. - * Returns the value that was previously in the atomic_var. 
- */ -AWS_STATIC_IMPL + +/** + * Exchanges a pointer with the value in an atomic_var, using the specified ordering. + * Returns the value that was previously in the atomic_var. + */ +AWS_STATIC_IMPL +void *aws_atomic_exchange_ptr_explicit( + volatile struct aws_atomic_var *var, + void *p, + enum aws_memory_order memory_order); + +/** + * Exchanges an integer with the value in an atomic_var, using sequentially consistent ordering. + * Returns the value that was previously in the atomic_var. + */ +AWS_STATIC_IMPL void *aws_atomic_exchange_ptr(volatile struct aws_atomic_var *var, void *p); - -/** - * Atomically compares *var to *expected; if they are equal, atomically sets *var = desired. Otherwise, *expected is set - * to the value in *var. On success, the memory ordering used was order_success; otherwise, it was order_failure. - * order_failure must be no stronger than order_success, and must not be release or acq_rel. - * Returns true if the compare was successful and the variable updated to desired. - */ -AWS_STATIC_IMPL -bool aws_atomic_compare_exchange_int_explicit( - volatile struct aws_atomic_var *var, - size_t *expected, - size_t desired, - enum aws_memory_order order_success, - enum aws_memory_order order_failure); - -/** - * Atomically compares *var to *expected; if they are equal, atomically sets *var = desired. Otherwise, *expected is set - * to the value in *var. Uses sequentially consistent memory ordering, regardless of success or failure. - * Returns true if the compare was successful and the variable updated to desired. - */ -AWS_STATIC_IMPL + +/** + * Atomically compares *var to *expected; if they are equal, atomically sets *var = desired. Otherwise, *expected is set + * to the value in *var. On success, the memory ordering used was order_success; otherwise, it was order_failure. + * order_failure must be no stronger than order_success, and must not be release or acq_rel. 
+ * Returns true if the compare was successful and the variable updated to desired. + */ +AWS_STATIC_IMPL +bool aws_atomic_compare_exchange_int_explicit( + volatile struct aws_atomic_var *var, + size_t *expected, + size_t desired, + enum aws_memory_order order_success, + enum aws_memory_order order_failure); + +/** + * Atomically compares *var to *expected; if they are equal, atomically sets *var = desired. Otherwise, *expected is set + * to the value in *var. Uses sequentially consistent memory ordering, regardless of success or failure. + * Returns true if the compare was successful and the variable updated to desired. + */ +AWS_STATIC_IMPL bool aws_atomic_compare_exchange_int(volatile struct aws_atomic_var *var, size_t *expected, size_t desired); - -/** - * Atomically compares *var to *expected; if they are equal, atomically sets *var = desired. Otherwise, *expected is set - * to the value in *var. On success, the memory ordering used was order_success; otherwise, it was order_failure. - * order_failure must be no stronger than order_success, and must not be release or acq_rel. - * Returns true if the compare was successful and the variable updated to desired. - */ -AWS_STATIC_IMPL -bool aws_atomic_compare_exchange_ptr_explicit( - volatile struct aws_atomic_var *var, - void **expected, - void *desired, - enum aws_memory_order order_success, - enum aws_memory_order order_failure); - -/** - * Atomically compares *var to *expected; if they are equal, atomically sets *var = desired. Otherwise, *expected is set - * to the value in *var. Uses sequentially consistent memory ordering, regardless of success or failure. - * Returns true if the compare was successful and the variable updated to desired. - */ -AWS_STATIC_IMPL + +/** + * Atomically compares *var to *expected; if they are equal, atomically sets *var = desired. Otherwise, *expected is set + * to the value in *var. On success, the memory ordering used was order_success; otherwise, it was order_failure. 
+ * order_failure must be no stronger than order_success, and must not be release or acq_rel. + * Returns true if the compare was successful and the variable updated to desired. + */ +AWS_STATIC_IMPL +bool aws_atomic_compare_exchange_ptr_explicit( + volatile struct aws_atomic_var *var, + void **expected, + void *desired, + enum aws_memory_order order_success, + enum aws_memory_order order_failure); + +/** + * Atomically compares *var to *expected; if they are equal, atomically sets *var = desired. Otherwise, *expected is set + * to the value in *var. Uses sequentially consistent memory ordering, regardless of success or failure. + * Returns true if the compare was successful and the variable updated to desired. + */ +AWS_STATIC_IMPL bool aws_atomic_compare_exchange_ptr(volatile struct aws_atomic_var *var, void **expected, void *desired); - -/** - * Atomically adds n to *var, and returns the previous value of *var. - */ -AWS_STATIC_IMPL -size_t aws_atomic_fetch_add_explicit(volatile struct aws_atomic_var *var, size_t n, enum aws_memory_order order); - -/** - * Atomically subtracts n from *var, and returns the previous value of *var. - */ -AWS_STATIC_IMPL -size_t aws_atomic_fetch_sub_explicit(volatile struct aws_atomic_var *var, size_t n, enum aws_memory_order order); - -/** - * Atomically ORs n with *var, and returns the previous value of *var. - */ -AWS_STATIC_IMPL -size_t aws_atomic_fetch_or_explicit(volatile struct aws_atomic_var *var, size_t n, enum aws_memory_order order); - -/** - * Atomically ANDs n with *var, and returns the previous value of *var. - */ -AWS_STATIC_IMPL -size_t aws_atomic_fetch_and_explicit(volatile struct aws_atomic_var *var, size_t n, enum aws_memory_order order); - -/** - * Atomically XORs n with *var, and returns the previous value of *var. 
- */ -AWS_STATIC_IMPL -size_t aws_atomic_fetch_xor_explicit(volatile struct aws_atomic_var *var, size_t n, enum aws_memory_order order); - -/** - * Atomically adds n to *var, and returns the previous value of *var. - * Uses sequentially consistent ordering. - */ -AWS_STATIC_IMPL + +/** + * Atomically adds n to *var, and returns the previous value of *var. + */ +AWS_STATIC_IMPL +size_t aws_atomic_fetch_add_explicit(volatile struct aws_atomic_var *var, size_t n, enum aws_memory_order order); + +/** + * Atomically subtracts n from *var, and returns the previous value of *var. + */ +AWS_STATIC_IMPL +size_t aws_atomic_fetch_sub_explicit(volatile struct aws_atomic_var *var, size_t n, enum aws_memory_order order); + +/** + * Atomically ORs n with *var, and returns the previous value of *var. + */ +AWS_STATIC_IMPL +size_t aws_atomic_fetch_or_explicit(volatile struct aws_atomic_var *var, size_t n, enum aws_memory_order order); + +/** + * Atomically ANDs n with *var, and returns the previous value of *var. + */ +AWS_STATIC_IMPL +size_t aws_atomic_fetch_and_explicit(volatile struct aws_atomic_var *var, size_t n, enum aws_memory_order order); + +/** + * Atomically XORs n with *var, and returns the previous value of *var. + */ +AWS_STATIC_IMPL +size_t aws_atomic_fetch_xor_explicit(volatile struct aws_atomic_var *var, size_t n, enum aws_memory_order order); + +/** + * Atomically adds n to *var, and returns the previous value of *var. + * Uses sequentially consistent ordering. + */ +AWS_STATIC_IMPL size_t aws_atomic_fetch_add(volatile struct aws_atomic_var *var, size_t n); - -/** - * Atomically subtracts n from *var, and returns the previous value of *var. - * Uses sequentially consistent ordering. - */ -AWS_STATIC_IMPL + +/** + * Atomically subtracts n from *var, and returns the previous value of *var. + * Uses sequentially consistent ordering. 
+ */ +AWS_STATIC_IMPL size_t aws_atomic_fetch_sub(volatile struct aws_atomic_var *var, size_t n); - -/** - * Atomically ands n into *var, and returns the previous value of *var. - * Uses sequentially consistent ordering. - */ -AWS_STATIC_IMPL + +/** + * Atomically ands n into *var, and returns the previous value of *var. + * Uses sequentially consistent ordering. + */ +AWS_STATIC_IMPL size_t aws_atomic_fetch_and(volatile struct aws_atomic_var *var, size_t n); - -/** - * Atomically ors n into *var, and returns the previous value of *var. - * Uses sequentially consistent ordering. - */ -AWS_STATIC_IMPL + +/** + * Atomically ors n into *var, and returns the previous value of *var. + * Uses sequentially consistent ordering. + */ +AWS_STATIC_IMPL size_t aws_atomic_fetch_or(volatile struct aws_atomic_var *var, size_t n); - -/** - * Atomically xors n into *var, and returns the previous value of *var. - * Uses sequentially consistent ordering. - */ -AWS_STATIC_IMPL + +/** + * Atomically xors n into *var, and returns the previous value of *var. + * Uses sequentially consistent ordering. + */ +AWS_STATIC_IMPL size_t aws_atomic_fetch_xor(volatile struct aws_atomic_var *var, size_t n); - -/** - * Provides the same reordering guarantees as an atomic operation with the specified memory order, without - * needing to actually perform an atomic operation. - */ -AWS_STATIC_IMPL -void aws_atomic_thread_fence(enum aws_memory_order order); - + +/** + * Provides the same reordering guarantees as an atomic operation with the specified memory order, without + * needing to actually perform an atomic operation. 
+ */ +AWS_STATIC_IMPL +void aws_atomic_thread_fence(enum aws_memory_order order); + #ifndef AWS_NO_STATIC_IMPL # include <aws/common/atomics.inl> #endif /* AWS_NO_STATIC_IMPL */ - + AWS_EXTERN_C_END -#endif +#endif diff --git a/contrib/restricted/aws/aws-c-common/include/aws/common/atomics_fallback.inl b/contrib/restricted/aws/aws-c-common/include/aws/common/atomics_fallback.inl index e0c52d80cc..e51252b4bc 100644 --- a/contrib/restricted/aws/aws-c-common/include/aws/common/atomics_fallback.inl +++ b/contrib/restricted/aws/aws-c-common/include/aws/common/atomics_fallback.inl @@ -4,20 +4,20 @@ /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. - */ - + */ + AWS_EXTERN_C_BEGIN -#ifndef AWS_ATOMICS_HAVE_THREAD_FENCE - -void aws_atomic_thread_fence(enum aws_memory_order order) { - struct aws_atomic_var var; - aws_atomic_int_t expected = 0; - - aws_atomic_store_int(&var, expected, aws_memory_order_relaxed); - aws_atomic_compare_exchange_int(&var, &expected, 1, order, aws_memory_order_relaxed); -} - +#ifndef AWS_ATOMICS_HAVE_THREAD_FENCE + +void aws_atomic_thread_fence(enum aws_memory_order order) { + struct aws_atomic_var var; + aws_atomic_int_t expected = 0; + + aws_atomic_store_int(&var, expected, aws_memory_order_relaxed); + aws_atomic_compare_exchange_int(&var, &expected, 1, order, aws_memory_order_relaxed); +} + #endif /* AWS_ATOMICS_HAVE_THREAD_FENCE */ AWS_EXTERN_C_END diff --git a/contrib/restricted/aws/aws-c-common/include/aws/common/atomics_gnu.inl b/contrib/restricted/aws/aws-c-common/include/aws/common/atomics_gnu.inl index 711b7795d6..dc72543762 100644 --- a/contrib/restricted/aws/aws-c-common/include/aws/common/atomics_gnu.inl +++ b/contrib/restricted/aws/aws-c-common/include/aws/common/atomics_gnu.inl @@ -4,215 +4,215 @@ /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. 
- */ - -/* These are implicitly included, but help with editor highlighting */ -#include <aws/common/atomics.h> -#include <aws/common/common.h> - -#include <stdint.h> -#include <stdlib.h> - + */ + +/* These are implicitly included, but help with editor highlighting */ +#include <aws/common/atomics.h> +#include <aws/common/common.h> + +#include <stdint.h> +#include <stdlib.h> + AWS_EXTERN_C_BEGIN -#ifdef __clang__ -# pragma clang diagnostic push -# pragma clang diagnostic ignored "-Wc11-extensions" -#else -# pragma GCC diagnostic push -# pragma GCC diagnostic ignored "-Wpedantic" -#endif - -typedef size_t aws_atomic_impl_int_t; - -static inline int aws_atomic_priv_xlate_order(enum aws_memory_order order) { - switch (order) { - case aws_memory_order_relaxed: - return __ATOMIC_RELAXED; - case aws_memory_order_acquire: - return __ATOMIC_ACQUIRE; - case aws_memory_order_release: - return __ATOMIC_RELEASE; - case aws_memory_order_acq_rel: - return __ATOMIC_ACQ_REL; - case aws_memory_order_seq_cst: - return __ATOMIC_SEQ_CST; - default: /* Unknown memory order */ - abort(); - } -} - -/** - * Initializes an atomic variable with an integer value. This operation should be done before any - * other operations on this atomic variable, and must be done before attempting any parallel operations. - */ -AWS_STATIC_IMPL -void aws_atomic_init_int(volatile struct aws_atomic_var *var, size_t n) { - AWS_ATOMIC_VAR_INTVAL(var) = n; -} - -/** - * Initializes an atomic variable with a pointer value. This operation should be done before any - * other operations on this atomic variable, and must be done before attempting any parallel operations. - */ -AWS_STATIC_IMPL -void aws_atomic_init_ptr(volatile struct aws_atomic_var *var, void *p) { - AWS_ATOMIC_VAR_PTRVAL(var) = p; -} - -/** - * Reads an atomic var as an integer, using the specified ordering, and returns the result. 
- */ -AWS_STATIC_IMPL -size_t aws_atomic_load_int_explicit(volatile const struct aws_atomic_var *var, enum aws_memory_order memory_order) { - return __atomic_load_n(&AWS_ATOMIC_VAR_INTVAL(var), aws_atomic_priv_xlate_order(memory_order)); -} - -/** +#ifdef __clang__ +# pragma clang diagnostic push +# pragma clang diagnostic ignored "-Wc11-extensions" +#else +# pragma GCC diagnostic push +# pragma GCC diagnostic ignored "-Wpedantic" +#endif + +typedef size_t aws_atomic_impl_int_t; + +static inline int aws_atomic_priv_xlate_order(enum aws_memory_order order) { + switch (order) { + case aws_memory_order_relaxed: + return __ATOMIC_RELAXED; + case aws_memory_order_acquire: + return __ATOMIC_ACQUIRE; + case aws_memory_order_release: + return __ATOMIC_RELEASE; + case aws_memory_order_acq_rel: + return __ATOMIC_ACQ_REL; + case aws_memory_order_seq_cst: + return __ATOMIC_SEQ_CST; + default: /* Unknown memory order */ + abort(); + } +} + +/** + * Initializes an atomic variable with an integer value. This operation should be done before any + * other operations on this atomic variable, and must be done before attempting any parallel operations. + */ +AWS_STATIC_IMPL +void aws_atomic_init_int(volatile struct aws_atomic_var *var, size_t n) { + AWS_ATOMIC_VAR_INTVAL(var) = n; +} + +/** + * Initializes an atomic variable with a pointer value. This operation should be done before any + * other operations on this atomic variable, and must be done before attempting any parallel operations. + */ +AWS_STATIC_IMPL +void aws_atomic_init_ptr(volatile struct aws_atomic_var *var, void *p) { + AWS_ATOMIC_VAR_PTRVAL(var) = p; +} + +/** + * Reads an atomic var as an integer, using the specified ordering, and returns the result. 
+ */ +AWS_STATIC_IMPL +size_t aws_atomic_load_int_explicit(volatile const struct aws_atomic_var *var, enum aws_memory_order memory_order) { + return __atomic_load_n(&AWS_ATOMIC_VAR_INTVAL(var), aws_atomic_priv_xlate_order(memory_order)); +} + +/** * Reads an atomic var as a pointer, using the specified ordering, and returns the result. - */ -AWS_STATIC_IMPL -void *aws_atomic_load_ptr_explicit(volatile const struct aws_atomic_var *var, enum aws_memory_order memory_order) { - return __atomic_load_n(&AWS_ATOMIC_VAR_PTRVAL(var), aws_atomic_priv_xlate_order(memory_order)); -} - -/** - * Stores an integer into an atomic var, using the specified ordering. - */ -AWS_STATIC_IMPL -void aws_atomic_store_int_explicit(volatile struct aws_atomic_var *var, size_t n, enum aws_memory_order memory_order) { - __atomic_store_n(&AWS_ATOMIC_VAR_INTVAL(var), n, aws_atomic_priv_xlate_order(memory_order)); -} - -/** - * Stores an pointer into an atomic var, using the specified ordering. - */ -AWS_STATIC_IMPL -void aws_atomic_store_ptr_explicit(volatile struct aws_atomic_var *var, void *p, enum aws_memory_order memory_order) { - __atomic_store_n(&AWS_ATOMIC_VAR_PTRVAL(var), p, aws_atomic_priv_xlate_order(memory_order)); -} - -/** - * Exchanges an integer with the value in an atomic_var, using the specified ordering. - * Returns the value that was previously in the atomic_var. - */ -AWS_STATIC_IMPL -size_t aws_atomic_exchange_int_explicit( - volatile struct aws_atomic_var *var, - size_t n, - enum aws_memory_order memory_order) { - return __atomic_exchange_n(&AWS_ATOMIC_VAR_INTVAL(var), n, aws_atomic_priv_xlate_order(memory_order)); -} - -/** - * Exchanges a pointer with the value in an atomic_var, using the specified ordering. - * Returns the value that was previously in the atomic_var. 
- */ -AWS_STATIC_IMPL -void *aws_atomic_exchange_ptr_explicit( - volatile struct aws_atomic_var *var, - void *p, - enum aws_memory_order memory_order) { - return __atomic_exchange_n(&AWS_ATOMIC_VAR_PTRVAL(var), p, aws_atomic_priv_xlate_order(memory_order)); -} - -/** - * Atomically compares *var to *expected; if they are equal, atomically sets *var = desired. Otherwise, *expected is set - * to the value in *var. On success, the memory ordering used was order_success; otherwise, it was order_failure. - * order_failure must be no stronger than order_success, and must not be release or acq_rel. - */ -AWS_STATIC_IMPL -bool aws_atomic_compare_exchange_int_explicit( - volatile struct aws_atomic_var *var, - size_t *expected, - size_t desired, - enum aws_memory_order order_success, - enum aws_memory_order order_failure) { - return __atomic_compare_exchange_n( - &AWS_ATOMIC_VAR_INTVAL(var), - expected, - desired, - false, - aws_atomic_priv_xlate_order(order_success), - aws_atomic_priv_xlate_order(order_failure)); -} - -/** - * Atomically compares *var to *expected; if they are equal, atomically sets *var = desired. Otherwise, *expected is set - * to the value in *var. On success, the memory ordering used was order_success; otherwise, it was order_failure. - * order_failure must be no stronger than order_success, and must not be release or acq_rel. - */ -AWS_STATIC_IMPL -bool aws_atomic_compare_exchange_ptr_explicit( - volatile struct aws_atomic_var *var, - void **expected, - void *desired, - enum aws_memory_order order_success, - enum aws_memory_order order_failure) { - return __atomic_compare_exchange_n( - &AWS_ATOMIC_VAR_PTRVAL(var), - expected, - desired, - false, - aws_atomic_priv_xlate_order(order_success), - aws_atomic_priv_xlate_order(order_failure)); -} - -/** - * Atomically adds n to *var, and returns the previous value of *var. 
- */ -AWS_STATIC_IMPL -size_t aws_atomic_fetch_add_explicit(volatile struct aws_atomic_var *var, size_t n, enum aws_memory_order order) { - return __atomic_fetch_add(&AWS_ATOMIC_VAR_INTVAL(var), n, aws_atomic_priv_xlate_order(order)); -} - -/** - * Atomically subtracts n from *var, and returns the previous value of *var. - */ -AWS_STATIC_IMPL -size_t aws_atomic_fetch_sub_explicit(volatile struct aws_atomic_var *var, size_t n, enum aws_memory_order order) { - return __atomic_fetch_sub(&AWS_ATOMIC_VAR_INTVAL(var), n, aws_atomic_priv_xlate_order(order)); -} - -/** - * Atomically ORs n with *var, and returns the previous value of *var. - */ -AWS_STATIC_IMPL -size_t aws_atomic_fetch_or_explicit(volatile struct aws_atomic_var *var, size_t n, enum aws_memory_order order) { - return __atomic_fetch_or(&AWS_ATOMIC_VAR_INTVAL(var), n, aws_atomic_priv_xlate_order(order)); -} - -/** - * Atomically ANDs n with *var, and returns the previous value of *var. - */ -AWS_STATIC_IMPL -size_t aws_atomic_fetch_and_explicit(volatile struct aws_atomic_var *var, size_t n, enum aws_memory_order order) { - return __atomic_fetch_and(&AWS_ATOMIC_VAR_INTVAL(var), n, aws_atomic_priv_xlate_order(order)); -} - -/** - * Atomically XORs n with *var, and returns the previous value of *var. - */ -AWS_STATIC_IMPL -size_t aws_atomic_fetch_xor_explicit(volatile struct aws_atomic_var *var, size_t n, enum aws_memory_order order) { - return __atomic_fetch_xor(&AWS_ATOMIC_VAR_INTVAL(var), n, aws_atomic_priv_xlate_order(order)); -} - -/** - * Provides the same reordering guarantees as an atomic operation with the specified memory order, without - * needing to actually perform an atomic operation. 
- */ -AWS_STATIC_IMPL -void aws_atomic_thread_fence(enum aws_memory_order order) { - __atomic_thread_fence(order); -} - -#ifdef __clang__ -# pragma clang diagnostic pop -#else -# pragma GCC diagnostic pop -#endif - -#define AWS_ATOMICS_HAVE_THREAD_FENCE + */ +AWS_STATIC_IMPL +void *aws_atomic_load_ptr_explicit(volatile const struct aws_atomic_var *var, enum aws_memory_order memory_order) { + return __atomic_load_n(&AWS_ATOMIC_VAR_PTRVAL(var), aws_atomic_priv_xlate_order(memory_order)); +} + +/** + * Stores an integer into an atomic var, using the specified ordering. + */ +AWS_STATIC_IMPL +void aws_atomic_store_int_explicit(volatile struct aws_atomic_var *var, size_t n, enum aws_memory_order memory_order) { + __atomic_store_n(&AWS_ATOMIC_VAR_INTVAL(var), n, aws_atomic_priv_xlate_order(memory_order)); +} + +/** + * Stores an pointer into an atomic var, using the specified ordering. + */ +AWS_STATIC_IMPL +void aws_atomic_store_ptr_explicit(volatile struct aws_atomic_var *var, void *p, enum aws_memory_order memory_order) { + __atomic_store_n(&AWS_ATOMIC_VAR_PTRVAL(var), p, aws_atomic_priv_xlate_order(memory_order)); +} + +/** + * Exchanges an integer with the value in an atomic_var, using the specified ordering. + * Returns the value that was previously in the atomic_var. + */ +AWS_STATIC_IMPL +size_t aws_atomic_exchange_int_explicit( + volatile struct aws_atomic_var *var, + size_t n, + enum aws_memory_order memory_order) { + return __atomic_exchange_n(&AWS_ATOMIC_VAR_INTVAL(var), n, aws_atomic_priv_xlate_order(memory_order)); +} + +/** + * Exchanges a pointer with the value in an atomic_var, using the specified ordering. + * Returns the value that was previously in the atomic_var. 
+ */ +AWS_STATIC_IMPL +void *aws_atomic_exchange_ptr_explicit( + volatile struct aws_atomic_var *var, + void *p, + enum aws_memory_order memory_order) { + return __atomic_exchange_n(&AWS_ATOMIC_VAR_PTRVAL(var), p, aws_atomic_priv_xlate_order(memory_order)); +} + +/** + * Atomically compares *var to *expected; if they are equal, atomically sets *var = desired. Otherwise, *expected is set + * to the value in *var. On success, the memory ordering used was order_success; otherwise, it was order_failure. + * order_failure must be no stronger than order_success, and must not be release or acq_rel. + */ +AWS_STATIC_IMPL +bool aws_atomic_compare_exchange_int_explicit( + volatile struct aws_atomic_var *var, + size_t *expected, + size_t desired, + enum aws_memory_order order_success, + enum aws_memory_order order_failure) { + return __atomic_compare_exchange_n( + &AWS_ATOMIC_VAR_INTVAL(var), + expected, + desired, + false, + aws_atomic_priv_xlate_order(order_success), + aws_atomic_priv_xlate_order(order_failure)); +} + +/** + * Atomically compares *var to *expected; if they are equal, atomically sets *var = desired. Otherwise, *expected is set + * to the value in *var. On success, the memory ordering used was order_success; otherwise, it was order_failure. + * order_failure must be no stronger than order_success, and must not be release or acq_rel. + */ +AWS_STATIC_IMPL +bool aws_atomic_compare_exchange_ptr_explicit( + volatile struct aws_atomic_var *var, + void **expected, + void *desired, + enum aws_memory_order order_success, + enum aws_memory_order order_failure) { + return __atomic_compare_exchange_n( + &AWS_ATOMIC_VAR_PTRVAL(var), + expected, + desired, + false, + aws_atomic_priv_xlate_order(order_success), + aws_atomic_priv_xlate_order(order_failure)); +} + +/** + * Atomically adds n to *var, and returns the previous value of *var. 
+ */ +AWS_STATIC_IMPL +size_t aws_atomic_fetch_add_explicit(volatile struct aws_atomic_var *var, size_t n, enum aws_memory_order order) { + return __atomic_fetch_add(&AWS_ATOMIC_VAR_INTVAL(var), n, aws_atomic_priv_xlate_order(order)); +} + +/** + * Atomically subtracts n from *var, and returns the previous value of *var. + */ +AWS_STATIC_IMPL +size_t aws_atomic_fetch_sub_explicit(volatile struct aws_atomic_var *var, size_t n, enum aws_memory_order order) { + return __atomic_fetch_sub(&AWS_ATOMIC_VAR_INTVAL(var), n, aws_atomic_priv_xlate_order(order)); +} + +/** + * Atomically ORs n with *var, and returns the previous value of *var. + */ +AWS_STATIC_IMPL +size_t aws_atomic_fetch_or_explicit(volatile struct aws_atomic_var *var, size_t n, enum aws_memory_order order) { + return __atomic_fetch_or(&AWS_ATOMIC_VAR_INTVAL(var), n, aws_atomic_priv_xlate_order(order)); +} + +/** + * Atomically ANDs n with *var, and returns the previous value of *var. + */ +AWS_STATIC_IMPL +size_t aws_atomic_fetch_and_explicit(volatile struct aws_atomic_var *var, size_t n, enum aws_memory_order order) { + return __atomic_fetch_and(&AWS_ATOMIC_VAR_INTVAL(var), n, aws_atomic_priv_xlate_order(order)); +} + +/** + * Atomically XORs n with *var, and returns the previous value of *var. + */ +AWS_STATIC_IMPL +size_t aws_atomic_fetch_xor_explicit(volatile struct aws_atomic_var *var, size_t n, enum aws_memory_order order) { + return __atomic_fetch_xor(&AWS_ATOMIC_VAR_INTVAL(var), n, aws_atomic_priv_xlate_order(order)); +} + +/** + * Provides the same reordering guarantees as an atomic operation with the specified memory order, without + * needing to actually perform an atomic operation. 
+ */ +AWS_STATIC_IMPL +void aws_atomic_thread_fence(enum aws_memory_order order) { + __atomic_thread_fence(order); +} + +#ifdef __clang__ +# pragma clang diagnostic pop +#else +# pragma GCC diagnostic pop +#endif + +#define AWS_ATOMICS_HAVE_THREAD_FENCE AWS_EXTERN_C_END #endif /* AWS_COMMON_ATOMICS_GNU_INL */ diff --git a/contrib/restricted/aws/aws-c-common/include/aws/common/byte_buf.h b/contrib/restricted/aws/aws-c-common/include/aws/common/byte_buf.h index 8e79a93b27..12915b829d 100644 --- a/contrib/restricted/aws/aws-c-common/include/aws/common/byte_buf.h +++ b/contrib/restricted/aws/aws-c-common/include/aws/common/byte_buf.h @@ -1,157 +1,157 @@ -#ifndef AWS_COMMON_BYTE_BUF_H -#define AWS_COMMON_BYTE_BUF_H +#ifndef AWS_COMMON_BYTE_BUF_H +#define AWS_COMMON_BYTE_BUF_H /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. - */ - -#include <aws/common/array_list.h> -#include <aws/common/byte_order.h> -#include <aws/common/common.h> - -#include <string.h> - -/** - * Represents a length-delimited binary string or buffer. If byte buffer points - * to constant memory or memory that should otherwise not be freed by this - * struct, set allocator to NULL and free function will be a no-op. - * - * This structure used to define the output for all functions that write to a buffer. - * - * Note that this structure allocates memory at the buffer pointer only. The - * struct itself does not get dynamically allocated and must be either - * maintained or copied to avoid losing access to the memory. - */ -struct aws_byte_buf { - /* do not reorder this, this struct lines up nicely with windows buffer structures--saving us allocations.*/ - size_t len; - uint8_t *buffer; - size_t capacity; - struct aws_allocator *allocator; -}; - -/** - * Represents a movable pointer within a larger binary string or buffer. - * - * This structure is used to define buffers for reading. 
- */ -struct aws_byte_cursor { - /* do not reorder this, this struct lines up nicely with windows buffer structures--saving us allocations */ - size_t len; - uint8_t *ptr; -}; - -/** - * Helper macro for passing aws_byte_cursor to the printf family of functions. - * Intended for use with the PRInSTR format macro. - * Ex: printf(PRInSTR "\n", AWS_BYTE_CURSOR_PRI(my_cursor)); - */ -#define AWS_BYTE_CURSOR_PRI(C) ((int)(C).len < 0 ? 0 : (int)(C).len), (const char *)(C).ptr - -/** - * Helper macro for passing aws_byte_buf to the printf family of functions. - * Intended for use with the PRInSTR format macro. - * Ex: printf(PRInSTR "\n", AWS_BYTE_BUF_PRI(my_buf)); - */ -#define AWS_BYTE_BUF_PRI(B) ((int)(B).len < 0 ? 0 : (int)(B).len), (const char *)(B).buffer - -/** + */ + +#include <aws/common/array_list.h> +#include <aws/common/byte_order.h> +#include <aws/common/common.h> + +#include <string.h> + +/** + * Represents a length-delimited binary string or buffer. If byte buffer points + * to constant memory or memory that should otherwise not be freed by this + * struct, set allocator to NULL and free function will be a no-op. + * + * This structure used to define the output for all functions that write to a buffer. + * + * Note that this structure allocates memory at the buffer pointer only. The + * struct itself does not get dynamically allocated and must be either + * maintained or copied to avoid losing access to the memory. + */ +struct aws_byte_buf { + /* do not reorder this, this struct lines up nicely with windows buffer structures--saving us allocations.*/ + size_t len; + uint8_t *buffer; + size_t capacity; + struct aws_allocator *allocator; +}; + +/** + * Represents a movable pointer within a larger binary string or buffer. + * + * This structure is used to define buffers for reading. 
+ */ +struct aws_byte_cursor { + /* do not reorder this, this struct lines up nicely with windows buffer structures--saving us allocations */ + size_t len; + uint8_t *ptr; +}; + +/** + * Helper macro for passing aws_byte_cursor to the printf family of functions. + * Intended for use with the PRInSTR format macro. + * Ex: printf(PRInSTR "\n", AWS_BYTE_CURSOR_PRI(my_cursor)); + */ +#define AWS_BYTE_CURSOR_PRI(C) ((int)(C).len < 0 ? 0 : (int)(C).len), (const char *)(C).ptr + +/** + * Helper macro for passing aws_byte_buf to the printf family of functions. + * Intended for use with the PRInSTR format macro. + * Ex: printf(PRInSTR "\n", AWS_BYTE_BUF_PRI(my_buf)); + */ +#define AWS_BYTE_BUF_PRI(B) ((int)(B).len < 0 ? 0 : (int)(B).len), (const char *)(B).buffer + +/** * Helper Macro for inititilizing a byte cursor from a string literal */ #define AWS_BYTE_CUR_INIT_FROM_STRING_LITERAL(literal) \ { .ptr = (uint8_t *)(const char *)(literal), .len = sizeof(literal) - 1 } /** - * Signature for function argument to trim APIs - */ -typedef bool(aws_byte_predicate_fn)(uint8_t value); - -AWS_EXTERN_C_BEGIN - -/** - * Compare two arrays. - * Return whether their contents are equivalent. - * NULL may be passed as the array pointer if its length is declared to be 0. - */ -AWS_COMMON_API + * Signature for function argument to trim APIs + */ +typedef bool(aws_byte_predicate_fn)(uint8_t value); + +AWS_EXTERN_C_BEGIN + +/** + * Compare two arrays. + * Return whether their contents are equivalent. + * NULL may be passed as the array pointer if its length is declared to be 0. + */ +AWS_COMMON_API bool aws_array_eq(const void *const array_a, const size_t len_a, const void *array_b, const size_t len_b); - -/** - * Perform a case-insensitive string comparison of two arrays. - * Return whether their contents are equivalent. - * NULL may be passed as the array pointer if its length is declared to be 0. - * The "C" locale is used for comparing upper and lowercase letters. 
- * Data is assumed to be ASCII text, UTF-8 will work fine too. - */ -AWS_COMMON_API + +/** + * Perform a case-insensitive string comparison of two arrays. + * Return whether their contents are equivalent. + * NULL may be passed as the array pointer if its length is declared to be 0. + * The "C" locale is used for comparing upper and lowercase letters. + * Data is assumed to be ASCII text, UTF-8 will work fine too. + */ +AWS_COMMON_API bool aws_array_eq_ignore_case( const void *const array_a, const size_t len_a, const void *const array_b, const size_t len_b); - -/** - * Compare an array and a null-terminated string. - * Returns true if their contents are equivalent. - * The array should NOT contain a null-terminator, or the comparison will always return false. - * NULL may be passed as the array pointer if its length is declared to be 0. - */ -AWS_COMMON_API + +/** + * Compare an array and a null-terminated string. + * Returns true if their contents are equivalent. + * The array should NOT contain a null-terminator, or the comparison will always return false. + * NULL may be passed as the array pointer if its length is declared to be 0. + */ +AWS_COMMON_API bool aws_array_eq_c_str(const void *const array, const size_t array_len, const char *const c_str); - -/** - * Perform a case-insensitive string comparison of an array and a null-terminated string. - * Return whether their contents are equivalent. - * The array should NOT contain a null-terminator, or the comparison will always return false. - * NULL may be passed as the array pointer if its length is declared to be 0. - * The "C" locale is used for comparing upper and lowercase letters. - * Data is assumed to be ASCII text, UTF-8 will work fine too. - */ -AWS_COMMON_API + +/** + * Perform a case-insensitive string comparison of an array and a null-terminated string. + * Return whether their contents are equivalent. + * The array should NOT contain a null-terminator, or the comparison will always return false. 
+ * NULL may be passed as the array pointer if its length is declared to be 0. + * The "C" locale is used for comparing upper and lowercase letters. + * Data is assumed to be ASCII text, UTF-8 will work fine too. + */ +AWS_COMMON_API bool aws_array_eq_c_str_ignore_case(const void *const array, const size_t array_len, const char *const c_str); - -AWS_COMMON_API -int aws_byte_buf_init(struct aws_byte_buf *buf, struct aws_allocator *allocator, size_t capacity); - -/** - * Initializes an aws_byte_buf structure base on another valid one. - * Requires: *src and *allocator are valid objects. - * Ensures: *dest is a valid aws_byte_buf with a new backing array dest->buffer - * which is a copy of the elements from src->buffer. - */ -AWS_COMMON_API int aws_byte_buf_init_copy( - struct aws_byte_buf *dest, - struct aws_allocator *allocator, - const struct aws_byte_buf *src); - -/** - * Evaluates the set of properties that define the shape of all valid aws_byte_buf structures. - * It is also a cheap check, in the sense it run in constant time (i.e., no loops or recursion). - */ -AWS_COMMON_API -bool aws_byte_buf_is_valid(const struct aws_byte_buf *const buf); - -/** - * Evaluates the set of properties that define the shape of all valid aws_byte_cursor structures. - * It is also a cheap check, in the sense it runs in constant time (i.e., no loops or recursion). - */ -AWS_COMMON_API -bool aws_byte_cursor_is_valid(const struct aws_byte_cursor *cursor); - -/** - * Copies src buffer into dest and sets the correct len and capacity. - * A new memory zone is allocated for dest->buffer. When dest is no longer needed it will have to be cleaned-up using - * aws_byte_buf_clean_up(dest). - * Dest capacity and len will be equal to the src len. Allocator of the dest will be identical with parameter allocator. - * If src buffer is null the dest will have a null buffer with a len and a capacity of 0 - * Returns AWS_OP_SUCCESS in case of success or AWS_OP_ERR when memory can't be allocated. 
- */ -AWS_COMMON_API -int aws_byte_buf_init_copy_from_cursor( - struct aws_byte_buf *dest, - struct aws_allocator *allocator, - struct aws_byte_cursor src); - + +AWS_COMMON_API +int aws_byte_buf_init(struct aws_byte_buf *buf, struct aws_allocator *allocator, size_t capacity); + +/** + * Initializes an aws_byte_buf structure base on another valid one. + * Requires: *src and *allocator are valid objects. + * Ensures: *dest is a valid aws_byte_buf with a new backing array dest->buffer + * which is a copy of the elements from src->buffer. + */ +AWS_COMMON_API int aws_byte_buf_init_copy( + struct aws_byte_buf *dest, + struct aws_allocator *allocator, + const struct aws_byte_buf *src); + +/** + * Evaluates the set of properties that define the shape of all valid aws_byte_buf structures. + * It is also a cheap check, in the sense it run in constant time (i.e., no loops or recursion). + */ +AWS_COMMON_API +bool aws_byte_buf_is_valid(const struct aws_byte_buf *const buf); + +/** + * Evaluates the set of properties that define the shape of all valid aws_byte_cursor structures. + * It is also a cheap check, in the sense it runs in constant time (i.e., no loops or recursion). + */ +AWS_COMMON_API +bool aws_byte_cursor_is_valid(const struct aws_byte_cursor *cursor); + +/** + * Copies src buffer into dest and sets the correct len and capacity. + * A new memory zone is allocated for dest->buffer. When dest is no longer needed it will have to be cleaned-up using + * aws_byte_buf_clean_up(dest). + * Dest capacity and len will be equal to the src len. Allocator of the dest will be identical with parameter allocator. + * If src buffer is null the dest will have a null buffer with a len and a capacity of 0 + * Returns AWS_OP_SUCCESS in case of success or AWS_OP_ERR when memory can't be allocated. 
+ */ +AWS_COMMON_API +int aws_byte_buf_init_copy_from_cursor( + struct aws_byte_buf *dest, + struct aws_allocator *allocator, + struct aws_byte_cursor src); + /** * Init buffer with contents of multiple cursors, and update cursors to reference the memory stored in the buffer. * Each cursor arg must be an `struct aws_byte_cursor *`. NULL must be passed as the final arg. @@ -159,20 +159,20 @@ int aws_byte_buf_init_copy_from_cursor( * Returns AWS_OP_SUCCESS in case of success. * AWS_OP_ERR is returned if memory can't be allocated or the total cursor length exceeds SIZE_MAX. */ -AWS_COMMON_API +AWS_COMMON_API int aws_byte_buf_init_cache_and_update_cursors(struct aws_byte_buf *dest, struct aws_allocator *allocator, ...); AWS_COMMON_API -void aws_byte_buf_clean_up(struct aws_byte_buf *buf); - -/** - * Equivalent to calling aws_byte_buf_secure_zero and then aws_byte_buf_clean_up - * on the buffer. - */ -AWS_COMMON_API -void aws_byte_buf_clean_up_secure(struct aws_byte_buf *buf); - -/** +void aws_byte_buf_clean_up(struct aws_byte_buf *buf); + +/** + * Equivalent to calling aws_byte_buf_secure_zero and then aws_byte_buf_clean_up + * on the buffer. + */ +AWS_COMMON_API +void aws_byte_buf_clean_up_secure(struct aws_byte_buf *buf); + +/** * Resets the len of the buffer to 0, but does not free the memory. The buffer can then be reused. * Optionally zeroes the contents, if the "zero_contents" flag is true. */ @@ -180,119 +180,119 @@ AWS_COMMON_API void aws_byte_buf_reset(struct aws_byte_buf *buf, bool zero_contents); /** - * Sets all bytes of buffer to zero and resets len to zero. - */ -AWS_COMMON_API -void aws_byte_buf_secure_zero(struct aws_byte_buf *buf); - -/** - * Compare two aws_byte_buf structures. - * Return whether their contents are equivalent. - */ -AWS_COMMON_API + * Sets all bytes of buffer to zero and resets len to zero. + */ +AWS_COMMON_API +void aws_byte_buf_secure_zero(struct aws_byte_buf *buf); + +/** + * Compare two aws_byte_buf structures. 
+ * Return whether their contents are equivalent. + */ +AWS_COMMON_API bool aws_byte_buf_eq(const struct aws_byte_buf *const a, const struct aws_byte_buf *const b); - -/** - * Perform a case-insensitive string comparison of two aws_byte_buf structures. - * Return whether their contents are equivalent. - * The "C" locale is used for comparing upper and lowercase letters. - * Data is assumed to be ASCII text, UTF-8 will work fine too. - */ -AWS_COMMON_API + +/** + * Perform a case-insensitive string comparison of two aws_byte_buf structures. + * Return whether their contents are equivalent. + * The "C" locale is used for comparing upper and lowercase letters. + * Data is assumed to be ASCII text, UTF-8 will work fine too. + */ +AWS_COMMON_API bool aws_byte_buf_eq_ignore_case(const struct aws_byte_buf *const a, const struct aws_byte_buf *const b); - -/** - * Compare an aws_byte_buf and a null-terminated string. - * Returns true if their contents are equivalent. - * The buffer should NOT contain a null-terminator, or the comparison will always return false. - */ -AWS_COMMON_API + +/** + * Compare an aws_byte_buf and a null-terminated string. + * Returns true if their contents are equivalent. + * The buffer should NOT contain a null-terminator, or the comparison will always return false. + */ +AWS_COMMON_API bool aws_byte_buf_eq_c_str(const struct aws_byte_buf *const buf, const char *const c_str); - -/** - * Perform a case-insensitive string comparison of an aws_byte_buf and a null-terminated string. - * Return whether their contents are equivalent. - * The buffer should NOT contain a null-terminator, or the comparison will always return false. - * The "C" locale is used for comparing upper and lowercase letters. - * Data is assumed to be ASCII text, UTF-8 will work fine too. - */ -AWS_COMMON_API + +/** + * Perform a case-insensitive string comparison of an aws_byte_buf and a null-terminated string. + * Return whether their contents are equivalent. 
+ * The buffer should NOT contain a null-terminator, or the comparison will always return false. + * The "C" locale is used for comparing upper and lowercase letters. + * Data is assumed to be ASCII text, UTF-8 will work fine too. + */ +AWS_COMMON_API bool aws_byte_buf_eq_c_str_ignore_case(const struct aws_byte_buf *const buf, const char *const c_str); - -/** - * No copies, no buffer allocations. Iterates over input_str, and returns the next substring between split_on instances. - * - * Edge case rules are as follows: + +/** + * No copies, no buffer allocations. Iterates over input_str, and returns the next substring between split_on instances. + * + * Edge case rules are as follows: * If the input is an empty string, an empty cursor will be the one entry returned. - * If the input begins with split_on, an empty cursor will be the first entry returned. - * If the input has two adjacent split_on tokens, an empty cursor will be returned. - * If the input ends with split_on, an empty cursor will be returned last. - * + * If the input begins with split_on, an empty cursor will be the first entry returned. + * If the input has two adjacent split_on tokens, an empty cursor will be returned. + * If the input ends with split_on, an empty cursor will be returned last. + * * It is the user's responsibility zero-initialize substr before the first call. - * - * It is the user's responsibility to make sure the input buffer stays in memory - * long enough to use the results. - */ -AWS_COMMON_API -bool aws_byte_cursor_next_split( - const struct aws_byte_cursor *AWS_RESTRICT input_str, - char split_on, - struct aws_byte_cursor *AWS_RESTRICT substr); - -/** - * No copies, no buffer allocations. Fills in output with a list of - * aws_byte_cursor instances where buffer is an offset into the input_str and - * len is the length of that string in the original buffer. 
- * - * Edge case rules are as follows: - * if the input begins with split_on, an empty cursor will be the first entry in - * output. if the input has two adjacent split_on tokens, an empty cursor will - * be inserted into the output. if the input ends with split_on, an empty cursor - * will be appended to the output. - * - * It is the user's responsibility to properly initialize output. Recommended number of preallocated elements from - * output is your most likely guess for the upper bound of the number of elements resulting from the split. - * - * The type that will be stored in output is struct aws_byte_cursor (you'll need - * this for the item size param). - * - * It is the user's responsibility to make sure the input buffer stays in memory - * long enough to use the results. - */ -AWS_COMMON_API -int aws_byte_cursor_split_on_char( - const struct aws_byte_cursor *AWS_RESTRICT input_str, - char split_on, - struct aws_array_list *AWS_RESTRICT output); - -/** - * No copies, no buffer allocations. Fills in output with a list of aws_byte_cursor instances where buffer is - * an offset into the input_str and len is the length of that string in the original buffer. N is the max number of - * splits, if this value is zero, it will add all splits to the output. - * - * Edge case rules are as follows: - * if the input begins with split_on, an empty cursor will be the first entry in output - * if the input has two adjacent split_on tokens, an empty cursor will be inserted into the output. - * if the input ends with split_on, an empty cursor will be appended to the output. - * - * It is the user's responsibility to properly initialize output. Recommended number of preallocated elements from - * output is your most likely guess for the upper bound of the number of elements resulting from the split. - * - * If the output array is not large enough, input_str will be updated to point to the first character after the last - * processed split_on instance. 
- * - * The type that will be stored in output is struct aws_byte_cursor (you'll need this for the item size param). - * - * It is the user's responsibility to make sure the input buffer stays in memory long enough to use the results. - */ -AWS_COMMON_API -int aws_byte_cursor_split_on_char_n( - const struct aws_byte_cursor *AWS_RESTRICT input_str, - char split_on, - size_t n, - struct aws_array_list *AWS_RESTRICT output); - -/** + * + * It is the user's responsibility to make sure the input buffer stays in memory + * long enough to use the results. + */ +AWS_COMMON_API +bool aws_byte_cursor_next_split( + const struct aws_byte_cursor *AWS_RESTRICT input_str, + char split_on, + struct aws_byte_cursor *AWS_RESTRICT substr); + +/** + * No copies, no buffer allocations. Fills in output with a list of + * aws_byte_cursor instances where buffer is an offset into the input_str and + * len is the length of that string in the original buffer. + * + * Edge case rules are as follows: + * if the input begins with split_on, an empty cursor will be the first entry in + * output. if the input has two adjacent split_on tokens, an empty cursor will + * be inserted into the output. if the input ends with split_on, an empty cursor + * will be appended to the output. + * + * It is the user's responsibility to properly initialize output. Recommended number of preallocated elements from + * output is your most likely guess for the upper bound of the number of elements resulting from the split. + * + * The type that will be stored in output is struct aws_byte_cursor (you'll need + * this for the item size param). + * + * It is the user's responsibility to make sure the input buffer stays in memory + * long enough to use the results. + */ +AWS_COMMON_API +int aws_byte_cursor_split_on_char( + const struct aws_byte_cursor *AWS_RESTRICT input_str, + char split_on, + struct aws_array_list *AWS_RESTRICT output); + +/** + * No copies, no buffer allocations. 
Fills in output with a list of aws_byte_cursor instances where buffer is + * an offset into the input_str and len is the length of that string in the original buffer. N is the max number of + * splits, if this value is zero, it will add all splits to the output. + * + * Edge case rules are as follows: + * if the input begins with split_on, an empty cursor will be the first entry in output + * if the input has two adjacent split_on tokens, an empty cursor will be inserted into the output. + * if the input ends with split_on, an empty cursor will be appended to the output. + * + * It is the user's responsibility to properly initialize output. Recommended number of preallocated elements from + * output is your most likely guess for the upper bound of the number of elements resulting from the split. + * + * If the output array is not large enough, input_str will be updated to point to the first character after the last + * processed split_on instance. + * + * The type that will be stored in output is struct aws_byte_cursor (you'll need this for the item size param). + * + * It is the user's responsibility to make sure the input buffer stays in memory long enough to use the results. + */ +AWS_COMMON_API +int aws_byte_cursor_split_on_char_n( + const struct aws_byte_cursor *AWS_RESTRICT input_str, + char split_on, + size_t n, + struct aws_array_list *AWS_RESTRICT output); + +/** * Search for an exact byte match inside a cursor. The first match will be returned. Returns AWS_OP_SUCCESS * on successful match and first_find will be set to the offset in input_str, and length will be the remaining length * from input_str past the returned offset. 
If the match was not found, AWS_OP_ERR will be returned and @@ -305,71 +305,71 @@ int aws_byte_cursor_find_exact( struct aws_byte_cursor *first_find); /** - * - * Shrinks a byte cursor from the right for as long as the supplied predicate is true - */ -AWS_COMMON_API -struct aws_byte_cursor aws_byte_cursor_right_trim_pred( - const struct aws_byte_cursor *source, - aws_byte_predicate_fn *predicate); - -/** - * Shrinks a byte cursor from the left for as long as the supplied predicate is true - */ -AWS_COMMON_API -struct aws_byte_cursor aws_byte_cursor_left_trim_pred( - const struct aws_byte_cursor *source, - aws_byte_predicate_fn *predicate); - -/** - * Shrinks a byte cursor from both sides for as long as the supplied predicate is true - */ -AWS_COMMON_API -struct aws_byte_cursor aws_byte_cursor_trim_pred( - const struct aws_byte_cursor *source, - aws_byte_predicate_fn *predicate); - -/** - * Returns true if the byte cursor's range of bytes all satisfy the predicate - */ -AWS_COMMON_API -bool aws_byte_cursor_satisfies_pred(const struct aws_byte_cursor *source, aws_byte_predicate_fn *predicate); - -/** - * Copies from to to. If to is too small, AWS_ERROR_DEST_COPY_TOO_SMALL will be - * returned. dest->len will contain the amount of data actually copied to dest. - * - * from and to may be the same buffer, permitting copying a buffer into itself. - */ -AWS_COMMON_API -int aws_byte_buf_append(struct aws_byte_buf *to, const struct aws_byte_cursor *from); - -/** - * Copies from to to while converting bytes via the passed in lookup table. - * If to is too small, AWS_ERROR_DEST_COPY_TOO_SMALL will be - * returned. to->len will contain its original size plus the amount of data actually copied to to. 
- * - * from and to should not be the same buffer (overlap is not handled) - * lookup_table must be at least 256 bytes - */ -AWS_COMMON_API -int aws_byte_buf_append_with_lookup( - struct aws_byte_buf *AWS_RESTRICT to, - const struct aws_byte_cursor *AWS_RESTRICT from, - const uint8_t *lookup_table); - -/** - * Copies from to to. If to is too small, the buffer will be grown appropriately and - * the old contents copied to, before the new contents are appended. - * - * If the grow fails (overflow or OOM), then an error will be returned. - * - * from and to may be the same buffer, permitting copying a buffer into itself. - */ -AWS_COMMON_API -int aws_byte_buf_append_dynamic(struct aws_byte_buf *to, const struct aws_byte_cursor *from); - -/** + * + * Shrinks a byte cursor from the right for as long as the supplied predicate is true + */ +AWS_COMMON_API +struct aws_byte_cursor aws_byte_cursor_right_trim_pred( + const struct aws_byte_cursor *source, + aws_byte_predicate_fn *predicate); + +/** + * Shrinks a byte cursor from the left for as long as the supplied predicate is true + */ +AWS_COMMON_API +struct aws_byte_cursor aws_byte_cursor_left_trim_pred( + const struct aws_byte_cursor *source, + aws_byte_predicate_fn *predicate); + +/** + * Shrinks a byte cursor from both sides for as long as the supplied predicate is true + */ +AWS_COMMON_API +struct aws_byte_cursor aws_byte_cursor_trim_pred( + const struct aws_byte_cursor *source, + aws_byte_predicate_fn *predicate); + +/** + * Returns true if the byte cursor's range of bytes all satisfy the predicate + */ +AWS_COMMON_API +bool aws_byte_cursor_satisfies_pred(const struct aws_byte_cursor *source, aws_byte_predicate_fn *predicate); + +/** + * Copies from to to. If to is too small, AWS_ERROR_DEST_COPY_TOO_SMALL will be + * returned. dest->len will contain the amount of data actually copied to dest. + * + * from and to may be the same buffer, permitting copying a buffer into itself. 
+ */ +AWS_COMMON_API +int aws_byte_buf_append(struct aws_byte_buf *to, const struct aws_byte_cursor *from); + +/** + * Copies from to to while converting bytes via the passed in lookup table. + * If to is too small, AWS_ERROR_DEST_COPY_TOO_SMALL will be + * returned. to->len will contain its original size plus the amount of data actually copied to to. + * + * from and to should not be the same buffer (overlap is not handled) + * lookup_table must be at least 256 bytes + */ +AWS_COMMON_API +int aws_byte_buf_append_with_lookup( + struct aws_byte_buf *AWS_RESTRICT to, + const struct aws_byte_cursor *AWS_RESTRICT from, + const uint8_t *lookup_table); + +/** + * Copies from to to. If to is too small, the buffer will be grown appropriately and + * the old contents copied to, before the new contents are appended. + * + * If the grow fails (overflow or OOM), then an error will be returned. + * + * from and to may be the same buffer, permitting copying a buffer into itself. + */ +AWS_COMMON_API +int aws_byte_buf_append_dynamic(struct aws_byte_buf *to, const struct aws_byte_cursor *from); + +/** * Copies `from` to `to`. If `to` is too small, the buffer will be grown appropriately and * the old contents copied over, before the new contents are appended. * @@ -418,104 +418,104 @@ AWS_COMMON_API int aws_byte_buf_append_null_terminator(struct aws_byte_buf *buf); /** - * Attempts to increase the capacity of a buffer to the requested capacity - * - * If the the buffer's capacity is currently larger than the request capacity, the - * function does nothing (no shrink is performed). - */ -AWS_COMMON_API -int aws_byte_buf_reserve(struct aws_byte_buf *buffer, size_t requested_capacity); - -/** - * Convenience function that attempts to increase the capacity of a buffer relative to the current - * length. 
- * - * aws_byte_buf_reserve_relative(buf, x) ~~ aws_byte_buf_reserve(buf, buf->len + x) - * - */ -AWS_COMMON_API -int aws_byte_buf_reserve_relative(struct aws_byte_buf *buffer, size_t additional_length); - -/** - * Concatenates a variable number of struct aws_byte_buf * into destination. - * Number of args must be greater than 1. If dest is too small, - * AWS_ERROR_DEST_COPY_TOO_SMALL will be returned. dest->len will contain the - * amount of data actually copied to dest. - */ -AWS_COMMON_API -int aws_byte_buf_cat(struct aws_byte_buf *dest, size_t number_of_args, ...); - -/** - * Compare two aws_byte_cursor structures. - * Return whether their contents are equivalent. - */ -AWS_COMMON_API -bool aws_byte_cursor_eq(const struct aws_byte_cursor *a, const struct aws_byte_cursor *b); - -/** - * Perform a case-insensitive string comparison of two aws_byte_cursor structures. - * Return whether their contents are equivalent. - * The "C" locale is used for comparing upper and lowercase letters. - * Data is assumed to be ASCII text, UTF-8 will work fine too. - */ -AWS_COMMON_API -bool aws_byte_cursor_eq_ignore_case(const struct aws_byte_cursor *a, const struct aws_byte_cursor *b); - -/** - * Compare an aws_byte_cursor and an aws_byte_buf. - * Return whether their contents are equivalent. - */ -AWS_COMMON_API + * Attempts to increase the capacity of a buffer to the requested capacity + * + * If the the buffer's capacity is currently larger than the request capacity, the + * function does nothing (no shrink is performed). + */ +AWS_COMMON_API +int aws_byte_buf_reserve(struct aws_byte_buf *buffer, size_t requested_capacity); + +/** + * Convenience function that attempts to increase the capacity of a buffer relative to the current + * length. 
+ * + * aws_byte_buf_reserve_relative(buf, x) ~~ aws_byte_buf_reserve(buf, buf->len + x) + * + */ +AWS_COMMON_API +int aws_byte_buf_reserve_relative(struct aws_byte_buf *buffer, size_t additional_length); + +/** + * Concatenates a variable number of struct aws_byte_buf * into destination. + * Number of args must be greater than 1. If dest is too small, + * AWS_ERROR_DEST_COPY_TOO_SMALL will be returned. dest->len will contain the + * amount of data actually copied to dest. + */ +AWS_COMMON_API +int aws_byte_buf_cat(struct aws_byte_buf *dest, size_t number_of_args, ...); + +/** + * Compare two aws_byte_cursor structures. + * Return whether their contents are equivalent. + */ +AWS_COMMON_API +bool aws_byte_cursor_eq(const struct aws_byte_cursor *a, const struct aws_byte_cursor *b); + +/** + * Perform a case-insensitive string comparison of two aws_byte_cursor structures. + * Return whether their contents are equivalent. + * The "C" locale is used for comparing upper and lowercase letters. + * Data is assumed to be ASCII text, UTF-8 will work fine too. + */ +AWS_COMMON_API +bool aws_byte_cursor_eq_ignore_case(const struct aws_byte_cursor *a, const struct aws_byte_cursor *b); + +/** + * Compare an aws_byte_cursor and an aws_byte_buf. + * Return whether their contents are equivalent. + */ +AWS_COMMON_API bool aws_byte_cursor_eq_byte_buf(const struct aws_byte_cursor *const a, const struct aws_byte_buf *const b); - -/** - * Perform a case-insensitive string comparison of an aws_byte_cursor and an aws_byte_buf. - * Return whether their contents are equivalent. - * The "C" locale is used for comparing upper and lowercase letters. - * Data is assumed to be ASCII text, UTF-8 will work fine too. - */ -AWS_COMMON_API + +/** + * Perform a case-insensitive string comparison of an aws_byte_cursor and an aws_byte_buf. + * Return whether their contents are equivalent. + * The "C" locale is used for comparing upper and lowercase letters. 
+ * Data is assumed to be ASCII text, UTF-8 will work fine too. + */ +AWS_COMMON_API bool aws_byte_cursor_eq_byte_buf_ignore_case(const struct aws_byte_cursor *const a, const struct aws_byte_buf *const b); - -/** - * Compare an aws_byte_cursor and a null-terminated string. - * Returns true if their contents are equivalent. - * The cursor should NOT contain a null-terminator, or the comparison will always return false. - */ -AWS_COMMON_API + +/** + * Compare an aws_byte_cursor and a null-terminated string. + * Returns true if their contents are equivalent. + * The cursor should NOT contain a null-terminator, or the comparison will always return false. + */ +AWS_COMMON_API bool aws_byte_cursor_eq_c_str(const struct aws_byte_cursor *const cursor, const char *const c_str); - -/** - * Perform a case-insensitive string comparison of an aws_byte_cursor and a null-terminated string. - * Return whether their contents are equivalent. - * The cursor should NOT contain a null-terminator, or the comparison will always return false. - * The "C" locale is used for comparing upper and lowercase letters. - * Data is assumed to be ASCII text, UTF-8 will work fine too. - */ -AWS_COMMON_API + +/** + * Perform a case-insensitive string comparison of an aws_byte_cursor and a null-terminated string. + * Return whether their contents are equivalent. + * The cursor should NOT contain a null-terminator, or the comparison will always return false. + * The "C" locale is used for comparing upper and lowercase letters. + * Data is assumed to be ASCII text, UTF-8 will work fine too. + */ +AWS_COMMON_API bool aws_byte_cursor_eq_c_str_ignore_case(const struct aws_byte_cursor *const cursor, const char *const c_str); - -/** - * Case-insensitive hash function for array containing ASCII or UTF-8 text. - */ -AWS_COMMON_API + +/** + * Case-insensitive hash function for array containing ASCII or UTF-8 text. 
+ */ +AWS_COMMON_API uint64_t aws_hash_array_ignore_case(const void *array, const size_t len); - -/** - * Case-insensitive hash function for aws_byte_cursors stored in an aws_hash_table. - * For case-sensitive hashing, use aws_hash_byte_cursor_ptr(). - */ -AWS_COMMON_API -uint64_t aws_hash_byte_cursor_ptr_ignore_case(const void *item); - -/** - * Returns a lookup table for bytes that is the identity transformation with the exception - * of uppercase ascii characters getting replaced with lowercase characters. Used in - * caseless comparisons. - */ -AWS_COMMON_API -const uint8_t *aws_lookup_table_to_lower_get(void); - + +/** + * Case-insensitive hash function for aws_byte_cursors stored in an aws_hash_table. + * For case-sensitive hashing, use aws_hash_byte_cursor_ptr(). + */ +AWS_COMMON_API +uint64_t aws_hash_byte_cursor_ptr_ignore_case(const void *item); + +/** + * Returns a lookup table for bytes that is the identity transformation with the exception + * of uppercase ascii characters getting replaced with lowercase characters. Used in + * caseless comparisons. + */ +AWS_COMMON_API +const uint8_t *aws_lookup_table_to_lower_get(void); + /** * Returns lookup table to go from ASCII/UTF-8 hex character to a number (0-15). * Non-hex characters map to 255. @@ -546,88 +546,88 @@ int aws_byte_cursor_compare_lookup( const struct aws_byte_cursor *rhs, const uint8_t *lookup_table); -/** - * For creating a byte buffer from a null-terminated string literal. - */ +/** + * For creating a byte buffer from a null-terminated string literal. 
+ */ AWS_COMMON_API struct aws_byte_buf aws_byte_buf_from_c_str(const char *c_str); - + AWS_COMMON_API struct aws_byte_buf aws_byte_buf_from_array(const void *bytes, size_t len); - + AWS_COMMON_API struct aws_byte_buf aws_byte_buf_from_empty_array(const void *bytes, size_t capacity); - + AWS_COMMON_API struct aws_byte_cursor aws_byte_cursor_from_buf(const struct aws_byte_buf *const buf); - + AWS_COMMON_API struct aws_byte_cursor aws_byte_cursor_from_c_str(const char *c_str); - + AWS_COMMON_API struct aws_byte_cursor aws_byte_cursor_from_array(const void *const bytes, const size_t len); - -/** - * Tests if the given aws_byte_cursor has at least len bytes remaining. If so, - * *buf is advanced by len bytes (incrementing ->ptr and decrementing ->len), - * and an aws_byte_cursor referring to the first len bytes of the original *buf - * is returned. Otherwise, an aws_byte_cursor with ->ptr = NULL, ->len = 0 is - * returned. - * - * Note that if len is above (SIZE_MAX / 2), this function will also treat it as - * a buffer overflow, and return NULL without changing *buf. - */ + +/** + * Tests if the given aws_byte_cursor has at least len bytes remaining. If so, + * *buf is advanced by len bytes (incrementing ->ptr and decrementing ->len), + * and an aws_byte_cursor referring to the first len bytes of the original *buf + * is returned. Otherwise, an aws_byte_cursor with ->ptr = NULL, ->len = 0 is + * returned. + * + * Note that if len is above (SIZE_MAX / 2), this function will also treat it as + * a buffer overflow, and return NULL without changing *buf. + */ AWS_COMMON_API struct aws_byte_cursor aws_byte_cursor_advance(struct aws_byte_cursor *const cursor, const size_t len); - -/** - * Behaves identically to aws_byte_cursor_advance, but avoids speculative - * execution potentially reading out-of-bounds pointers (by returning an - * empty ptr in such speculated paths). 
- * - * This should generally be done when using an untrusted or - * data-dependent value for 'len', to avoid speculating into a path where - * cursor->ptr points outside the true ptr length. - */ - + +/** + * Behaves identically to aws_byte_cursor_advance, but avoids speculative + * execution potentially reading out-of-bounds pointers (by returning an + * empty ptr in such speculated paths). + * + * This should generally be done when using an untrusted or + * data-dependent value for 'len', to avoid speculating into a path where + * cursor->ptr points outside the true ptr length. + */ + AWS_COMMON_API struct aws_byte_cursor aws_byte_cursor_advance_nospec(struct aws_byte_cursor *const cursor, size_t len); - -/** - * Reads specified length of data from byte cursor and copies it to the - * destination array. - * - * On success, returns true and updates the cursor pointer/length accordingly. - * If there is insufficient space in the cursor, returns false, leaving the - * cursor unchanged. - */ + +/** + * Reads specified length of data from byte cursor and copies it to the + * destination array. + * + * On success, returns true and updates the cursor pointer/length accordingly. + * If there is insufficient space in the cursor, returns false, leaving the + * cursor unchanged. + */ AWS_COMMON_API bool aws_byte_cursor_read( - struct aws_byte_cursor *AWS_RESTRICT cur, - void *AWS_RESTRICT dest, + struct aws_byte_cursor *AWS_RESTRICT cur, + void *AWS_RESTRICT dest, const size_t len); - -/** - * Reads as many bytes from cursor as size of buffer, and copies them to buffer. - * - * On success, returns true and updates the cursor pointer/length accordingly. - * If there is insufficient space in the cursor, returns false, leaving the - * cursor unchanged. - */ + +/** + * Reads as many bytes from cursor as size of buffer, and copies them to buffer. + * + * On success, returns true and updates the cursor pointer/length accordingly. 
+ * If there is insufficient space in the cursor, returns false, leaving the + * cursor unchanged. + */ AWS_COMMON_API bool aws_byte_cursor_read_and_fill_buffer( - struct aws_byte_cursor *AWS_RESTRICT cur, + struct aws_byte_cursor *AWS_RESTRICT cur, struct aws_byte_buf *AWS_RESTRICT dest); - -/** - * Reads a single byte from cursor, placing it in *var. - * - * On success, returns true and updates the cursor pointer/length accordingly. - * If there is insufficient space in the cursor, returns false, leaving the - * cursor unchanged. - */ + +/** + * Reads a single byte from cursor, placing it in *var. + * + * On success, returns true and updates the cursor pointer/length accordingly. + * If there is insufficient space in the cursor, returns false, leaving the + * cursor unchanged. + */ AWS_COMMON_API bool aws_byte_cursor_read_u8(struct aws_byte_cursor *AWS_RESTRICT cur, uint8_t *AWS_RESTRICT var); - -/** - * Reads a 16-bit value in network byte order from cur, and places it in host - * byte order into var. - * - * On success, returns true and updates the cursor pointer/length accordingly. - * If there is insufficient space in the cursor, returns false, leaving the - * cursor unchanged. - */ + +/** + * Reads a 16-bit value in network byte order from cur, and places it in host + * byte order into var. + * + * On success, returns true and updates the cursor pointer/length accordingly. + * If there is insufficient space in the cursor, returns false, leaving the + * cursor unchanged. + */ AWS_COMMON_API bool aws_byte_cursor_read_be16(struct aws_byte_cursor *cur, uint16_t *var); - + /** * Reads an unsigned 24-bit value (3 bytes) in network byte order from cur, * and places it in host byte order into 32-bit var. @@ -638,17 +638,17 @@ AWS_COMMON_API bool aws_byte_cursor_read_be16(struct aws_byte_cursor *cur, uint1 * cursor unchanged. 
*/ AWS_COMMON_API bool aws_byte_cursor_read_be24(struct aws_byte_cursor *cur, uint32_t *var); - -/** - * Reads a 32-bit value in network byte order from cur, and places it in host - * byte order into var. - * - * On success, returns true and updates the cursor pointer/length accordingly. - * If there is insufficient space in the cursor, returns false, leaving the - * cursor unchanged. - */ + +/** + * Reads a 32-bit value in network byte order from cur, and places it in host + * byte order into var. + * + * On success, returns true and updates the cursor pointer/length accordingly. + * If there is insufficient space in the cursor, returns false, leaving the + * cursor unchanged. + */ AWS_COMMON_API bool aws_byte_cursor_read_be32(struct aws_byte_cursor *cur, uint32_t *var); - + /** * Reads a 64-bit value in network byte order from cur, and places it in host * byte order into var. @@ -658,7 +658,7 @@ AWS_COMMON_API bool aws_byte_cursor_read_be32(struct aws_byte_cursor *cur, uint3 * cursor unchanged. */ AWS_COMMON_API bool aws_byte_cursor_read_be64(struct aws_byte_cursor *cur, uint64_t *var); - + /** * Reads a 32-bit value in network byte order from cur, and places it in host * byte order into var. @@ -668,17 +668,17 @@ AWS_COMMON_API bool aws_byte_cursor_read_be64(struct aws_byte_cursor *cur, uint6 * cursor unchanged. */ AWS_COMMON_API bool aws_byte_cursor_read_float_be32(struct aws_byte_cursor *cur, float *var); - -/** - * Reads a 64-bit value in network byte order from cur, and places it in host - * byte order into var. - * - * On success, returns true and updates the cursor pointer/length accordingly. - * If there is insufficient space in the cursor, returns false, leaving the - * cursor unchanged. - */ + +/** + * Reads a 64-bit value in network byte order from cur, and places it in host + * byte order into var. + * + * On success, returns true and updates the cursor pointer/length accordingly. 
+ * If there is insufficient space in the cursor, returns false, leaving the + * cursor unchanged. + */ AWS_COMMON_API bool aws_byte_cursor_read_float_be64(struct aws_byte_cursor *cur, double *var); - + /** * Reads 2 hex characters from ASCII/UTF-8 text to produce an 8-bit number. * Accepts both lowercase 'a'-'f' and uppercase 'A'-'F'. @@ -689,58 +689,58 @@ AWS_COMMON_API bool aws_byte_cursor_read_float_be64(struct aws_byte_cursor *cur, * is encountered, returns false, leaving the cursor unchanged. */ AWS_COMMON_API bool aws_byte_cursor_read_hex_u8(struct aws_byte_cursor *cur, uint8_t *var); - -/** - * Appends a sub-buffer to the specified buffer. - * - * If the buffer has at least `len' bytes remaining (buffer->capacity - buffer->len >= len), - * then buffer->len is incremented by len, and an aws_byte_buf is assigned to *output corresponding - * to the last len bytes of the input buffer. The aws_byte_buf at *output will have a null - * allocator, a zero initial length, and a capacity of 'len'. The function then returns true. - * - * If there is insufficient space, then this function nulls all fields in *output and returns - * false. - */ + +/** + * Appends a sub-buffer to the specified buffer. + * + * If the buffer has at least `len' bytes remaining (buffer->capacity - buffer->len >= len), + * then buffer->len is incremented by len, and an aws_byte_buf is assigned to *output corresponding + * to the last len bytes of the input buffer. The aws_byte_buf at *output will have a null + * allocator, a zero initial length, and a capacity of 'len'. The function then returns true. + * + * If there is insufficient space, then this function nulls all fields in *output and returns + * false. + */ AWS_COMMON_API bool aws_byte_buf_advance( struct aws_byte_buf *const AWS_RESTRICT buffer, struct aws_byte_buf *const AWS_RESTRICT output, const size_t len); - -/** - * Write specified number of bytes from array to byte buffer. 
- * - * On success, returns true and updates the buffer length accordingly. - * If there is insufficient space in the buffer, returns false, leaving the - * buffer unchanged. - */ + +/** + * Write specified number of bytes from array to byte buffer. + * + * On success, returns true and updates the buffer length accordingly. + * If there is insufficient space in the buffer, returns false, leaving the + * buffer unchanged. + */ AWS_COMMON_API bool aws_byte_buf_write( - struct aws_byte_buf *AWS_RESTRICT buf, - const uint8_t *AWS_RESTRICT src, + struct aws_byte_buf *AWS_RESTRICT buf, + const uint8_t *AWS_RESTRICT src, size_t len); - -/** - * Copies all bytes from buffer to buffer. - * - * On success, returns true and updates the buffer /length accordingly. - * If there is insufficient space in the buffer, returns false, leaving the - * buffer unchanged. - */ + +/** + * Copies all bytes from buffer to buffer. + * + * On success, returns true and updates the buffer /length accordingly. + * If there is insufficient space in the buffer, returns false, leaving the + * buffer unchanged. + */ AWS_COMMON_API bool aws_byte_buf_write_from_whole_buffer( - struct aws_byte_buf *AWS_RESTRICT buf, + struct aws_byte_buf *AWS_RESTRICT buf, struct aws_byte_buf src); - -/** - * Copies all bytes from buffer to buffer. - * - * On success, returns true and updates the buffer /length accordingly. - * If there is insufficient space in the buffer, returns false, leaving the - * buffer unchanged. - */ + +/** + * Copies all bytes from buffer to buffer. + * + * On success, returns true and updates the buffer /length accordingly. + * If there is insufficient space in the buffer, returns false, leaving the + * buffer unchanged. 
+ */ AWS_COMMON_API bool aws_byte_buf_write_from_whole_cursor( - struct aws_byte_buf *AWS_RESTRICT buf, + struct aws_byte_buf *AWS_RESTRICT buf, struct aws_byte_cursor src); - -/** + +/** * Without increasing buf's capacity, write as much as possible from advancing_cursor into buf. * * buf's len is updated accordingly. @@ -760,34 +760,34 @@ AWS_COMMON_API struct aws_byte_cursor aws_byte_buf_write_to_capacity( struct aws_byte_cursor *advancing_cursor); /** - * Copies one byte to buffer. - * - * On success, returns true and updates the cursor /length - accordingly. + * Copies one byte to buffer. + * + * On success, returns true and updates the cursor /length + accordingly. * * If there is insufficient space in the buffer, returns false, leaving the * buffer unchanged. */ AWS_COMMON_API bool aws_byte_buf_write_u8(struct aws_byte_buf *AWS_RESTRICT buf, uint8_t c); - + /** * Writes one byte repeatedly to buffer (like memset) * * If there is insufficient space in the buffer, returns false, leaving the * buffer unchanged. - */ + */ AWS_COMMON_API bool aws_byte_buf_write_u8_n(struct aws_byte_buf *buf, uint8_t c, size_t count); - -/** - * Writes a 16-bit integer in network byte order (big endian) to buffer. - * + +/** + * Writes a 16-bit integer in network byte order (big endian) to buffer. + * * On success, returns true and updates the buffer /length accordingly. * If there is insufficient space in the buffer, returns false, leaving the * buffer unchanged. - */ + */ AWS_COMMON_API bool aws_byte_buf_write_be16(struct aws_byte_buf *buf, uint16_t x); - -/** + +/** * Writes low 24-bits (3 bytes) of an unsigned integer in network byte order (big endian) to buffer. * Ex: If x is 0x00AABBCC then {0xAA, 0xBB, 0xCC} is written to buffer. 
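The be24 write described above (low 3 bytes of x, most significant first, with the 0x00AABBCC example) can be sketched without the library. `byte_buf` and `buf_write_be24` here are hypothetical stand-ins for `aws_byte_buf` and `aws_byte_buf_write_be24`, following the documented capacity check `buffer->capacity - buffer->len >= len`:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-in for aws_byte_buf: storage, bytes used, total capacity. */
struct byte_buf {
    uint8_t *buffer;
    size_t len;
    size_t capacity;
};

/* Mirrors aws_byte_buf_write_be24 as documented: write the low 3 bytes of x
 * in network byte order, or return false and leave the buffer unchanged. */
static bool buf_write_be24(struct byte_buf *buf, uint32_t x) {
    if (buf->capacity - buf->len < 3) {
        return false;
    }
    buf->buffer[buf->len++] = (uint8_t)(x >> 16);
    buf->buffer[buf->len++] = (uint8_t)(x >> 8);
    buf->buffer[buf->len++] = (uint8_t)x;
    return true;
}
```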
* @@ -798,15 +798,15 @@ AWS_COMMON_API bool aws_byte_buf_write_be16(struct aws_byte_buf *buf, uint16_t x AWS_COMMON_API bool aws_byte_buf_write_be24(struct aws_byte_buf *buf, uint32_t x); /** - * Writes a 32-bit integer in network byte order (big endian) to buffer. - * + * Writes a 32-bit integer in network byte order (big endian) to buffer. + * * On success, returns true and updates the buffer /length accordingly. * If there is insufficient space in the buffer, returns false, leaving the * buffer unchanged. - */ + */ AWS_COMMON_API bool aws_byte_buf_write_be32(struct aws_byte_buf *buf, uint32_t x); - -/** + +/** * Writes a 32-bit float in network byte order (big endian) to buffer. * * On success, returns true and updates the buffer /length accordingly. @@ -816,14 +816,14 @@ AWS_COMMON_API bool aws_byte_buf_write_be32(struct aws_byte_buf *buf, uint32_t x AWS_COMMON_API bool aws_byte_buf_write_float_be32(struct aws_byte_buf *buf, float x); /** - * Writes a 64-bit integer in network byte order (big endian) to buffer. - * + * Writes a 64-bit integer in network byte order (big endian) to buffer. + * * On success, returns true and updates the buffer /length accordingly. * If there is insufficient space in the buffer, returns false, leaving the * buffer unchanged. - */ + */ AWS_COMMON_API bool aws_byte_buf_write_be64(struct aws_byte_buf *buf, uint64_t x); - + /** * Writes a 64-bit float in network byte order (big endian) to buffer. 
* @@ -875,4 +875,4 @@ AWS_COMMON_API bool aws_isspace(uint8_t ch); AWS_EXTERN_C_END -#endif /* AWS_COMMON_BYTE_BUF_H */ +#endif /* AWS_COMMON_BYTE_BUF_H */ diff --git a/contrib/restricted/aws/aws-c-common/include/aws/common/byte_order.h b/contrib/restricted/aws/aws-c-common/include/aws/common/byte_order.h index efd59d60be..efd16b1915 100644 --- a/contrib/restricted/aws/aws-c-common/include/aws/common/byte_order.h +++ b/contrib/restricted/aws/aws-c-common/include/aws/common/byte_order.h @@ -1,37 +1,37 @@ -#ifndef AWS_COMMON_BYTE_ORDER_H -#define AWS_COMMON_BYTE_ORDER_H - +#ifndef AWS_COMMON_BYTE_ORDER_H +#define AWS_COMMON_BYTE_ORDER_H + /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. - */ - -#include <aws/common/common.h> - + */ + +#include <aws/common/common.h> + AWS_EXTERN_C_BEGIN - -/** - * Returns 1 if machine is big endian, 0 if little endian. - * If you compile with even -O1 optimization, this check is completely optimized - * out at compile time and code which calls "if (aws_is_big_endian())" will do - * the right thing without branching. - */ + +/** + * Returns 1 if machine is big endian, 0 if little endian. + * If you compile with even -O1 optimization, this check is completely optimized + * out at compile time and code which calls "if (aws_is_big_endian())" will do + * the right thing without branching. + */ AWS_STATIC_IMPL int aws_is_big_endian(void); -/** - * Convert 64 bit integer from host to network byte order. - */ +/** + * Convert 64 bit integer from host to network byte order. + */ AWS_STATIC_IMPL uint64_t aws_hton64(uint64_t x); -/** - * Convert 64 bit integer from network to host byte order. - */ +/** + * Convert 64 bit integer from network to host byte order. + */ AWS_STATIC_IMPL uint64_t aws_ntoh64(uint64_t x); - -/** - * Convert 32 bit integer from host to network byte order. - */ + +/** + * Convert 32 bit integer from host to network byte order. 
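The byte-order helpers declared in byte_order.h can be expressed portably with shifts alone. `my_hton64` and `my_ntoh64` below are hypothetical equivalents of `aws_hton64`/`aws_ntoh64` (not the library code), built on the fact that emitting a value byte-by-byte from the high end yields network (big-endian) order on any host:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical equivalent of aws_hton64: serialize to big-endian bytes,
 * then reinterpret those bytes as the host's uint64_t representation. */
static uint64_t my_hton64(uint64_t x) {
    uint8_t b[8];
    for (int i = 0; i < 8; ++i) {
        b[i] = (uint8_t)(x >> (56 - 8 * i)); /* most significant byte first */
    }
    uint64_t out;
    memcpy(&out, b, sizeof(out));
    return out;
}

/* Hypothetical equivalent of aws_ntoh64: the inverse of the above. */
static uint64_t my_ntoh64(uint64_t x) {
    uint8_t b[8];
    memcpy(b, &x, sizeof(b));
    uint64_t out = 0;
    for (int i = 0; i < 8; ++i) {
        out = (out << 8) | b[i]; /* consume most significant byte first */
    }
    return out;
}
```

On a big-endian host both functions compile down to a copy; on a little-endian host they amount to a byte swap, which is why the library can make `aws_is_big_endian()` a compile-time constant.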
+ */ AWS_STATIC_IMPL uint32_t aws_hton32(uint32_t x); - -/** + +/** * Convert 32 bit float from host to network byte order. */ AWS_STATIC_IMPL float aws_htonf32(float x); @@ -42,11 +42,11 @@ AWS_STATIC_IMPL float aws_htonf32(float x); AWS_STATIC_IMPL double aws_htonf64(double x); /** - * Convert 32 bit integer from network to host byte order. - */ + * Convert 32 bit integer from network to host byte order. + */ AWS_STATIC_IMPL uint32_t aws_ntoh32(uint32_t x); - -/** + +/** * Convert 32 bit float from network to host byte order. */ AWS_STATIC_IMPL float aws_ntohf32(float x); @@ -56,19 +56,19 @@ AWS_STATIC_IMPL float aws_ntohf32(float x); AWS_STATIC_IMPL double aws_ntohf64(double x); /** - * Convert 16 bit integer from host to network byte order. - */ + * Convert 16 bit integer from host to network byte order. + */ AWS_STATIC_IMPL uint16_t aws_hton16(uint16_t x); - -/** - * Convert 16 bit integer from network to host byte order. - */ + +/** + * Convert 16 bit integer from network to host byte order. + */ AWS_STATIC_IMPL uint16_t aws_ntoh16(uint16_t x); - + #ifndef AWS_NO_STATIC_IMPL # include <aws/common/byte_order.inl> #endif /* AWS_NO_STATIC_IMPL */ AWS_EXTERN_C_END -#endif /* AWS_COMMON_BYTE_ORDER_H */ +#endif /* AWS_COMMON_BYTE_ORDER_H */ diff --git a/contrib/restricted/aws/aws-c-common/include/aws/common/clock.h b/contrib/restricted/aws/aws-c-common/include/aws/common/clock.h index 489a5f19a1..1a90a1f7b7 100644 --- a/contrib/restricted/aws/aws-c-common/include/aws/common/clock.h +++ b/contrib/restricted/aws/aws-c-common/include/aws/common/clock.h @@ -1,55 +1,55 @@ -#ifndef AWS_COMMON_CLOCK_H -#define AWS_COMMON_CLOCK_H - +#ifndef AWS_COMMON_CLOCK_H +#define AWS_COMMON_CLOCK_H + /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. 
- */ - -#include <aws/common/common.h> -#include <aws/common/math.h> - -enum aws_timestamp_unit { - AWS_TIMESTAMP_SECS = 1, - AWS_TIMESTAMP_MILLIS = 1000, - AWS_TIMESTAMP_MICROS = 1000000, - AWS_TIMESTAMP_NANOS = 1000000000, -}; - + */ + +#include <aws/common/common.h> +#include <aws/common/math.h> + +enum aws_timestamp_unit { + AWS_TIMESTAMP_SECS = 1, + AWS_TIMESTAMP_MILLIS = 1000, + AWS_TIMESTAMP_MICROS = 1000000, + AWS_TIMESTAMP_NANOS = 1000000000, +}; + AWS_EXTERN_C_BEGIN -/** - * Converts 'timestamp' from unit 'convert_from' to unit 'convert_to', if the units are the same then 'timestamp' is - * returned. If 'remainder' is NOT NULL, it will be set to the remainder if convert_from is a more precise unit than - * convert_to. To avoid unnecessary branching, 'remainder' is not zero initialized in this function, be sure to set it - * to 0 first if you care about that kind of thing. If conversion would lead to integer overflow, the timestamp - * returned will be the highest possible time that is representable, i.e. UINT64_MAX. - */ -AWS_STATIC_IMPL uint64_t aws_timestamp_convert( - uint64_t timestamp, - enum aws_timestamp_unit convert_from, - enum aws_timestamp_unit convert_to, +/** + * Converts 'timestamp' from unit 'convert_from' to unit 'convert_to', if the units are the same then 'timestamp' is + * returned. If 'remainder' is NOT NULL, it will be set to the remainder if convert_from is a more precise unit than + * convert_to. To avoid unnecessary branching, 'remainder' is not zero initialized in this function, be sure to set it + * to 0 first if you care about that kind of thing. If conversion would lead to integer overflow, the timestamp + * returned will be the highest possible time that is representable, i.e. UINT64_MAX. 
+ */ +AWS_STATIC_IMPL uint64_t aws_timestamp_convert( + uint64_t timestamp, + enum aws_timestamp_unit convert_from, + enum aws_timestamp_unit convert_to, uint64_t *remainder); - -/** - * Get ticks in nanoseconds (usually 100 nanosecond precision) on the high resolution clock (most-likely TSC). This - * clock has no bearing on the actual system time. On success, timestamp will be set. - */ -AWS_COMMON_API -int aws_high_res_clock_get_ticks(uint64_t *timestamp); - -/** - * Get ticks in nanoseconds (usually 100 nanosecond precision) on the system clock. Reflects actual system time via - * nanoseconds since unix epoch. Use with care since an inaccurately set clock will probably cause bugs. On success, - * timestamp will be set. - */ -AWS_COMMON_API -int aws_sys_clock_get_ticks(uint64_t *timestamp); - + +/** + * Get ticks in nanoseconds (usually 100 nanosecond precision) on the high resolution clock (most-likely TSC). This + * clock has no bearing on the actual system time. On success, timestamp will be set. + */ +AWS_COMMON_API +int aws_high_res_clock_get_ticks(uint64_t *timestamp); + +/** + * Get ticks in nanoseconds (usually 100 nanosecond precision) on the system clock. Reflects actual system time via + * nanoseconds since unix epoch. Use with care since an inaccurately set clock will probably cause bugs. On success, + * timestamp will be set. 
+ */ +AWS_COMMON_API +int aws_sys_clock_get_ticks(uint64_t *timestamp); + #ifndef AWS_NO_STATIC_IMPL # include <aws/common/clock.inl> #endif /* AWS_NO_STATIC_IMPL */ -AWS_EXTERN_C_END - -#endif /* AWS_COMMON_CLOCK_H */ +AWS_EXTERN_C_END + +#endif /* AWS_COMMON_CLOCK_H */ diff --git a/contrib/restricted/aws/aws-c-common/include/aws/common/command_line_parser.h b/contrib/restricted/aws/aws-c-common/include/aws/common/command_line_parser.h index 8b31ae98ef..266e15abe9 100644 --- a/contrib/restricted/aws/aws-c-common/include/aws/common/command_line_parser.h +++ b/contrib/restricted/aws/aws-c-common/include/aws/common/command_line_parser.h @@ -3,56 +3,56 @@ /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. - */ -#include <aws/common/common.h> - -enum aws_cli_options_has_arg { - AWS_CLI_OPTIONS_NO_ARGUMENT = 0, - AWS_CLI_OPTIONS_REQUIRED_ARGUMENT = 1, - AWS_CLI_OPTIONS_OPTIONAL_ARGUMENT = 2, -}; - + */ +#include <aws/common/common.h> + +enum aws_cli_options_has_arg { + AWS_CLI_OPTIONS_NO_ARGUMENT = 0, + AWS_CLI_OPTIONS_REQUIRED_ARGUMENT = 1, + AWS_CLI_OPTIONS_OPTIONAL_ARGUMENT = 2, +}; + /* Ignoring padding since we're trying to maintain getopt.h compatibility */ /* NOLINTNEXTLINE(clang-analyzer-optin.performance.Padding) */ -struct aws_cli_option { - const char *name; - enum aws_cli_options_has_arg has_arg; - int *flag; - int val; -}; - +struct aws_cli_option { + const char *name; + enum aws_cli_options_has_arg has_arg; + int *flag; + int val; +}; + AWS_EXTERN_C_BEGIN -/** - * Initialized to 1 (for where the first argument would be). As arguments are parsed, this number is the index - * of the next argument to parse. Reset this to 1 to parse another set of arguments, or to rerun the parser. - */ -AWS_COMMON_API extern int aws_cli_optind; - -/** - * If an option has an argument, when the option is encountered, this will be set to the argument portion. 
- */ -AWS_COMMON_API extern const char *aws_cli_optarg; - -/** - * A mostly compliant implementation of posix getopt_long(). Parses command-line arguments. argc is the number of - * command line arguments passed in argv. optstring contains the legitimate option characters. The option characters - * correspond to aws_cli_option::val. If the character is followed by a :, the option requires an argument. If it is - * followed by '::', the argument is optional (not implemented yet). - * - * longopts, is an array of struct aws_cli_option. These are the allowed options for the program. - * The last member of the array must be zero initialized. - * - * If longindex is non-null, it will be set to the index in longopts, for the found option. - * - * Returns option val if it was found, '?' if an option was encountered that was not specified in the option string, - * returns -1 when all arguments that can be parsed have been parsed. - */ -AWS_COMMON_API int aws_cli_getopt_long( - int argc, - char *const argv[], - const char *optstring, - const struct aws_cli_option *longopts, - int *longindex); -AWS_EXTERN_C_END - +/** + * Initialized to 1 (for where the first argument would be). As arguments are parsed, this number is the index + * of the next argument to parse. Reset this to 1 to parse another set of arguments, or to rerun the parser. + */ +AWS_COMMON_API extern int aws_cli_optind; + +/** + * If an option has an argument, when the option is encountered, this will be set to the argument portion. + */ +AWS_COMMON_API extern const char *aws_cli_optarg; + +/** + * A mostly compliant implementation of posix getopt_long(). Parses command-line arguments. argc is the number of + * command line arguments passed in argv. optstring contains the legitimate option characters. The option characters + * correspond to aws_cli_option::val. If the character is followed by a :, the option requires an argument. If it is + * followed by '::', the argument is optional (not implemented yet).
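The return convention documented for `aws_cli_getopt_long` (the option's `val` on a match, `'?'` for an unknown option, `-1` when nothing parsable remains, with `optind`/`optarg` advanced as a side effect) can be illustrated with a deliberately simplified miniature. Everything below is hypothetical and omits real getopt features such as `--opt=value`, optional arguments, and the `--` terminator:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

enum { OPT_NO_ARG = 0, OPT_REQUIRED_ARG = 1 };

/* Shaped like struct aws_cli_option; the array is terminated by a
 * zero-initialized entry, as the header documents. */
struct mini_option { const char *name; int has_arg; int *flag; int val; };

static int mini_optind = 1;     /* index of the next argv element to parse */
static const char *mini_optarg; /* argument portion of the last option     */

static int mini_getopt_long(int argc, char *const argv[], const char *optstring,
                            const struct mini_option *longopts, int *longindex) {
    mini_optarg = NULL;
    if (mini_optind >= argc || argv[mini_optind][0] != '-') {
        return -1; /* nothing left that can be parsed */
    }
    const char *arg = argv[mini_optind++];
    if (arg[1] == '-') { /* long option: match by name against longopts */
        for (int i = 0; longopts[i].name != NULL; ++i) {
            if (strcmp(longopts[i].name, arg + 2) == 0) {
                if (longopts[i].has_arg == OPT_REQUIRED_ARG && mini_optind < argc) {
                    mini_optarg = argv[mini_optind++];
                }
                if (longindex != NULL) {
                    *longindex = i;
                }
                return longopts[i].val;
            }
        }
        return '?'; /* option not specified by the caller */
    }
    if (arg[1] == '\0') {
        return '?';
    }
    const char *p = strchr(optstring, arg[1]); /* short option */
    if (p == NULL) {
        return '?';
    }
    if (p[1] == ':' && mini_optind < argc) { /* ':' marks a required argument */
        mini_optarg = argv[mini_optind++];
    }
    return arg[1];
}
```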
+ * + * longopts, is an array of struct aws_cli_option. These are the allowed options for the program. + * The last member of the array must be zero initialized. + * + * If longindex is non-null, it will be set to the index in longopts, for the found option. + * + * Returns option val if it was found, '?' if an option was encountered that was not specified in the option string, + * returns -1 when all arguments that can be parsed have been parsed. + */ +AWS_COMMON_API int aws_cli_getopt_long( + int argc, + char *const argv[], + const char *optstring, + const struct aws_cli_option *longopts, + int *longindex); +AWS_EXTERN_C_END + #endif /* AWS_COMMON_COMMAND_LINE_PARSER_H */ diff --git a/contrib/restricted/aws/aws-c-common/include/aws/common/common.h b/contrib/restricted/aws/aws-c-common/include/aws/common/common.h index 7968a5e009..e9bbf536da 100644 --- a/contrib/restricted/aws/aws-c-common/include/aws/common/common.h +++ b/contrib/restricted/aws/aws-c-common/include/aws/common/common.h @@ -1,14 +1,14 @@ -#ifndef AWS_COMMON_COMMON_H -#define AWS_COMMON_COMMON_H - +#ifndef AWS_COMMON_COMMON_H +#define AWS_COMMON_COMMON_H + /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. - */ - + */ + #include <aws/common/config.h> -#include <aws/common/exports.h> - +#include <aws/common/exports.h> + #include <aws/common/allocator.h> #include <aws/common/assert.h> #include <aws/common/error.h> @@ -18,29 +18,29 @@ #include <aws/common/stdbool.h> #include <aws/common/stdint.h> #include <aws/common/zero.h> -#include <stddef.h> -#include <stdio.h> +#include <stddef.h> +#include <stdio.h> #include <stdlib.h> /* for abort() */ -#include <string.h> - -AWS_EXTERN_C_BEGIN - -/** +#include <string.h> + +AWS_EXTERN_C_BEGIN + +/** * Initializes internal datastructures used by aws-c-common. * Must be called before using any functionality in aws-c-common. 
- */ -AWS_COMMON_API + */ +AWS_COMMON_API void aws_common_library_init(struct aws_allocator *allocator); - -/** + +/** * Shuts down the internal datastructures used by aws-c-common. - */ -AWS_COMMON_API + */ +AWS_COMMON_API void aws_common_library_clean_up(void); - -AWS_COMMON_API + +AWS_COMMON_API void aws_common_fatal_assert_library_initialized(void); - -AWS_EXTERN_C_END - -#endif /* AWS_COMMON_COMMON_H */ + +AWS_EXTERN_C_END + +#endif /* AWS_COMMON_COMMON_H */ diff --git a/contrib/restricted/aws/aws-c-common/include/aws/common/condition_variable.h b/contrib/restricted/aws/aws-c-common/include/aws/common/condition_variable.h index e78ceea160..485bcc9368 100644 --- a/contrib/restricted/aws/aws-c-common/include/aws/common/condition_variable.h +++ b/contrib/restricted/aws/aws-c-common/include/aws/common/condition_variable.h @@ -1,111 +1,111 @@ -#ifndef AWS_COMMON_CONDITION_VARIABLE_H -#define AWS_COMMON_CONDITION_VARIABLE_H - +#ifndef AWS_COMMON_CONDITION_VARIABLE_H +#define AWS_COMMON_CONDITION_VARIABLE_H + /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. - */ - -#include <aws/common/common.h> -#ifndef _WIN32 -# include <pthread.h> -#endif - -struct aws_mutex; - -struct aws_condition_variable; - -typedef bool(aws_condition_predicate_fn)(void *); - -struct aws_condition_variable { -#ifdef _WIN32 - void *condition_handle; -#else - pthread_cond_t condition_handle; -#endif + */ + +#include <aws/common/common.h> +#ifndef _WIN32 +# include <pthread.h> +#endif + +struct aws_mutex; + +struct aws_condition_variable; + +typedef bool(aws_condition_predicate_fn)(void *); + +struct aws_condition_variable { +#ifdef _WIN32 + void *condition_handle; +#else + pthread_cond_t condition_handle; +#endif bool initialized; -}; - -/** - * Static initializer for condition variable. 
- * You can do something like struct aws_condition_variable var = - * AWS_CONDITION_VARIABLE_INIT; - * - * If on Windows and you get an error about AWS_CONDITION_VARIABLE_INIT being undefined, please include Windows.h to get - * CONDITION_VARIABLE_INIT. - */ -#ifdef _WIN32 -# define AWS_CONDITION_VARIABLE_INIT \ +}; + +/** + * Static initializer for condition variable. + * You can do something like struct aws_condition_variable var = + * AWS_CONDITION_VARIABLE_INIT; + * + * If on Windows and you get an error about AWS_CONDITION_VARIABLE_INIT being undefined, please include Windows.h to get + * CONDITION_VARIABLE_INIT. + */ +#ifdef _WIN32 +# define AWS_CONDITION_VARIABLE_INIT \ { .condition_handle = NULL, .initialized = true } -#else -# define AWS_CONDITION_VARIABLE_INIT \ +#else +# define AWS_CONDITION_VARIABLE_INIT \ { .condition_handle = PTHREAD_COND_INITIALIZER, .initialized = true } -#endif - -AWS_EXTERN_C_BEGIN - -/** - * Initializes a condition variable. - */ -AWS_COMMON_API -int aws_condition_variable_init(struct aws_condition_variable *condition_variable); - -/** - * Cleans up a condition variable. - */ -AWS_COMMON_API -void aws_condition_variable_clean_up(struct aws_condition_variable *condition_variable); - -/** - * Notifies/Wakes one waiting thread - */ -AWS_COMMON_API -int aws_condition_variable_notify_one(struct aws_condition_variable *condition_variable); - -/** - * Notifies/Wakes all waiting threads. - */ -AWS_COMMON_API -int aws_condition_variable_notify_all(struct aws_condition_variable *condition_variable); - -/** - * Waits the calling thread on a notification from another thread. - */ -AWS_COMMON_API -int aws_condition_variable_wait(struct aws_condition_variable *condition_variable, struct aws_mutex *mutex); - -/** - * Waits the calling thread on a notification from another thread. If predicate returns false, the wait is reentered, - * otherwise control returns to the caller. 
- */ -AWS_COMMON_API -int aws_condition_variable_wait_pred( - struct aws_condition_variable *condition_variable, - struct aws_mutex *mutex, - aws_condition_predicate_fn *pred, - void *pred_ctx); - -/** - * Waits the calling thread on a notification from another thread. Times out after time_to_wait. time_to_wait is in - * nanoseconds. - */ -AWS_COMMON_API -int aws_condition_variable_wait_for( - struct aws_condition_variable *condition_variable, - struct aws_mutex *mutex, - int64_t time_to_wait); - -/** - * Waits the calling thread on a notification from another thread. Times out after time_to_wait. time_to_wait is in - * nanoseconds. If predicate returns false, the wait is reentered, otherwise control returns to the caller. - */ -AWS_COMMON_API -int aws_condition_variable_wait_for_pred( - struct aws_condition_variable *condition_variable, - struct aws_mutex *mutex, - int64_t time_to_wait, - aws_condition_predicate_fn *pred, - void *pred_ctx); - -AWS_EXTERN_C_END -#endif /* AWS_COMMON_CONDITION_VARIABLE_H */ +#endif + +AWS_EXTERN_C_BEGIN + +/** + * Initializes a condition variable. + */ +AWS_COMMON_API +int aws_condition_variable_init(struct aws_condition_variable *condition_variable); + +/** + * Cleans up a condition variable. + */ +AWS_COMMON_API +void aws_condition_variable_clean_up(struct aws_condition_variable *condition_variable); + +/** + * Notifies/Wakes one waiting thread + */ +AWS_COMMON_API +int aws_condition_variable_notify_one(struct aws_condition_variable *condition_variable); + +/** + * Notifies/Wakes all waiting threads. + */ +AWS_COMMON_API +int aws_condition_variable_notify_all(struct aws_condition_variable *condition_variable); + +/** + * Waits the calling thread on a notification from another thread. + */ +AWS_COMMON_API +int aws_condition_variable_wait(struct aws_condition_variable *condition_variable, struct aws_mutex *mutex); + +/** + * Waits the calling thread on a notification from another thread. 
If predicate returns false, the wait is reentered, + * otherwise control returns to the caller. + */ +AWS_COMMON_API +int aws_condition_variable_wait_pred( + struct aws_condition_variable *condition_variable, + struct aws_mutex *mutex, + aws_condition_predicate_fn *pred, + void *pred_ctx); + +/** + * Waits the calling thread on a notification from another thread. Times out after time_to_wait. time_to_wait is in + * nanoseconds. + */ +AWS_COMMON_API +int aws_condition_variable_wait_for( + struct aws_condition_variable *condition_variable, + struct aws_mutex *mutex, + int64_t time_to_wait); + +/** + * Waits the calling thread on a notification from another thread. Times out after time_to_wait. time_to_wait is in + * nanoseconds. If predicate returns false, the wait is reentered, otherwise control returns to the caller. + */ +AWS_COMMON_API +int aws_condition_variable_wait_for_pred( + struct aws_condition_variable *condition_variable, + struct aws_mutex *mutex, + int64_t time_to_wait, + aws_condition_predicate_fn *pred, + void *pred_ctx); + +AWS_EXTERN_C_END +#endif /* AWS_COMMON_CONDITION_VARIABLE_H */ diff --git a/contrib/restricted/aws/aws-c-common/include/aws/common/date_time.h b/contrib/restricted/aws/aws-c-common/include/aws/common/date_time.h index 5522c4fae5..14860c5d21 100644 --- a/contrib/restricted/aws/aws-c-common/include/aws/common/date_time.h +++ b/contrib/restricted/aws/aws-c-common/include/aws/common/date_time.h @@ -1,158 +1,158 @@ -#ifndef AWS_COMMON_DATE_TIME_H -#define AWS_COMMON_DATE_TIME_H +#ifndef AWS_COMMON_DATE_TIME_H +#define AWS_COMMON_DATE_TIME_H /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. 
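The predicate form of the wait exists to guard against spurious wakeups: as the comment says, the wait is simply re-entered until the predicate holds. A hypothetical sketch of the same pattern using raw pthreads, where `wait_pred` plays the role of `aws_condition_variable_wait_pred` (this is not the library implementation, which also abstracts over the Windows backend):

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* Shared state guarded by the mutex; `ready` is what the predicate tests. */
struct signal_state {
    pthread_mutex_t mutex;
    pthread_cond_t cond;
    bool ready;
};

static bool is_ready(void *arg) {
    return ((struct signal_state *)arg)->ready;
}

/* Hypothetical equivalent of aws_condition_variable_wait_pred built on raw
 * pthreads: the wait is re-entered while the predicate is false, which is
 * exactly what makes the pattern immune to spurious wakeups. */
static void wait_pred(struct signal_state *s, bool (*pred)(void *), void *ctx) {
    pthread_mutex_lock(&s->mutex);
    while (!pred(ctx)) {
        pthread_cond_wait(&s->cond, &s->mutex);
    }
    pthread_mutex_unlock(&s->mutex);
}

/* Producer side: change the state under the mutex, then notify one waiter. */
static void *producer(void *arg) {
    struct signal_state *s = arg;
    pthread_mutex_lock(&s->mutex);
    s->ready = true;
    pthread_cond_signal(&s->cond);
    pthread_mutex_unlock(&s->mutex);
    return NULL;
}
```

Because the predicate is checked under the mutex before ever sleeping, the pattern is also safe when the producer signals before the consumer starts waiting.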
- */ -#include <aws/common/common.h> - -#include <time.h> - -#define AWS_DATE_TIME_STR_MAX_LEN 100 -#define AWS_DATE_TIME_STR_MAX_BASIC_LEN 20 - -struct aws_byte_buf; -struct aws_byte_cursor; - -enum aws_date_format { - AWS_DATE_FORMAT_RFC822, - AWS_DATE_FORMAT_ISO_8601, - AWS_DATE_FORMAT_ISO_8601_BASIC, - AWS_DATE_FORMAT_AUTO_DETECT, -}; - -enum aws_date_month { - AWS_DATE_MONTH_JANUARY = 0, - AWS_DATE_MONTH_FEBRUARY, - AWS_DATE_MONTH_MARCH, - AWS_DATE_MONTH_APRIL, - AWS_DATE_MONTH_MAY, - AWS_DATE_MONTH_JUNE, - AWS_DATE_MONTH_JULY, - AWS_DATE_MONTH_AUGUST, - AWS_DATE_MONTH_SEPTEMBER, - AWS_DATE_MONTH_OCTOBER, - AWS_DATE_MONTH_NOVEMBER, - AWS_DATE_MONTH_DECEMBER, -}; - -enum aws_date_day_of_week { - AWS_DATE_DAY_OF_WEEK_SUNDAY = 0, - AWS_DATE_DAY_OF_WEEK_MONDAY, - AWS_DATE_DAY_OF_WEEK_TUESDAY, - AWS_DATE_DAY_OF_WEEK_WEDNESDAY, - AWS_DATE_DAY_OF_WEEK_THURSDAY, - AWS_DATE_DAY_OF_WEEK_FRIDAY, - AWS_DATE_DAY_OF_WEEK_SATURDAY, -}; - -struct aws_date_time { - time_t timestamp; - char tz[6]; - struct tm gmt_time; - struct tm local_time; - bool utc_assumed; -}; - -AWS_EXTERN_C_BEGIN - -/** - * Initializes dt to be the current system time. - */ -AWS_COMMON_API void aws_date_time_init_now(struct aws_date_time *dt); - -/** - * Initializes dt to be the time represented in milliseconds since unix epoch. - */ -AWS_COMMON_API void aws_date_time_init_epoch_millis(struct aws_date_time *dt, uint64_t ms_since_epoch); - -/** - * Initializes dt to be the time represented in seconds.millis since unix epoch. - */ -AWS_COMMON_API void aws_date_time_init_epoch_secs(struct aws_date_time *dt, double sec_ms); - -/** - * Initializes dt to be the time represented by date_str in format 'fmt'. Returns AWS_OP_SUCCESS if the - * string was successfully parsed, returns AWS_OP_ERR if parsing failed. - * - * Notes for AWS_DATE_FORMAT_RFC822: - * If no time zone information is provided, it is assumed to be local time (please don't do this). 
- * - * If the time zone is something other than something indicating Universal Time (e.g. Z, UT, UTC, or GMT) or an offset - * from UTC (e.g. +0100, -0700), parsing will fail. - * - * Really, it's just better if you always use Universal Time. - */ -AWS_COMMON_API int aws_date_time_init_from_str( - struct aws_date_time *dt, - const struct aws_byte_buf *date_str, - enum aws_date_format fmt); - -/** - * aws_date_time_init variant that takes a byte_cursor rather than a byte_buf - */ -AWS_COMMON_API int aws_date_time_init_from_str_cursor( - struct aws_date_time *dt, - const struct aws_byte_cursor *date_str_cursor, - enum aws_date_format fmt); - -/** - * Copies the current time as a formatted date string in local time into output_buf. If buffer is too small, it will - * return AWS_OP_ERR. A good size suggestion is AWS_DATE_TIME_STR_MAX_LEN bytes. AWS_DATE_FORMAT_AUTO_DETECT is not - * allowed. - */ -AWS_COMMON_API int aws_date_time_to_local_time_str( - const struct aws_date_time *dt, - enum aws_date_format fmt, - struct aws_byte_buf *output_buf); - -/** - * Copies the current time as a formatted date string in utc time into output_buf. If buffer is too small, it will - * return AWS_OP_ERR. A good size suggestion is AWS_DATE_TIME_STR_MAX_LEN bytes. AWS_DATE_FORMAT_AUTO_DETECT is not - * allowed. - */ -AWS_COMMON_API int aws_date_time_to_utc_time_str( - const struct aws_date_time *dt, - enum aws_date_format fmt, - struct aws_byte_buf *output_buf); - -/** - * Copies the current time as a formatted short date string in local time into output_buf. If buffer is too small, it - * will return AWS_OP_ERR. A good size suggestion is AWS_DATE_TIME_STR_MAX_LEN bytes. AWS_DATE_FORMAT_AUTO_DETECT is not - * allowed. - */ -AWS_COMMON_API int aws_date_time_to_local_time_short_str( - const struct aws_date_time *dt, - enum aws_date_format fmt, - struct aws_byte_buf *output_buf); - -/** - * Copies the current time as a formatted short date string in utc time into output_buf. 
If buffer is too small, it will - * return AWS_OP_ERR. A good size suggestion is AWS_DATE_TIME_STR_MAX_LEN bytes. AWS_DATE_FORMAT_AUTO_DETECT is not - * allowed. - */ -AWS_COMMON_API int aws_date_time_to_utc_time_short_str( - const struct aws_date_time *dt, - enum aws_date_format fmt, - struct aws_byte_buf *output_buf); - -AWS_COMMON_API double aws_date_time_as_epoch_secs(const struct aws_date_time *dt); -AWS_COMMON_API uint64_t aws_date_time_as_nanos(const struct aws_date_time *dt); -AWS_COMMON_API uint64_t aws_date_time_as_millis(const struct aws_date_time *dt); -AWS_COMMON_API uint16_t aws_date_time_year(const struct aws_date_time *dt, bool local_time); -AWS_COMMON_API enum aws_date_month aws_date_time_month(const struct aws_date_time *dt, bool local_time); -AWS_COMMON_API uint8_t aws_date_time_month_day(const struct aws_date_time *dt, bool local_time); -AWS_COMMON_API enum aws_date_day_of_week aws_date_time_day_of_week(const struct aws_date_time *dt, bool local_time); -AWS_COMMON_API uint8_t aws_date_time_hour(const struct aws_date_time *dt, bool local_time); -AWS_COMMON_API uint8_t aws_date_time_minute(const struct aws_date_time *dt, bool local_time); -AWS_COMMON_API uint8_t aws_date_time_second(const struct aws_date_time *dt, bool local_time); -AWS_COMMON_API bool aws_date_time_dst(const struct aws_date_time *dt, bool local_time); - -/** - * returns the difference of a and b (a - b) in seconds. 
- */ -AWS_COMMON_API time_t aws_date_time_diff(const struct aws_date_time *a, const struct aws_date_time *b); - -AWS_EXTERN_C_END - -#endif /* AWS_COMMON_DATE_TIME_H */ + */ +#include <aws/common/common.h> + +#include <time.h> + +#define AWS_DATE_TIME_STR_MAX_LEN 100 +#define AWS_DATE_TIME_STR_MAX_BASIC_LEN 20 + +struct aws_byte_buf; +struct aws_byte_cursor; + +enum aws_date_format { + AWS_DATE_FORMAT_RFC822, + AWS_DATE_FORMAT_ISO_8601, + AWS_DATE_FORMAT_ISO_8601_BASIC, + AWS_DATE_FORMAT_AUTO_DETECT, +}; + +enum aws_date_month { + AWS_DATE_MONTH_JANUARY = 0, + AWS_DATE_MONTH_FEBRUARY, + AWS_DATE_MONTH_MARCH, + AWS_DATE_MONTH_APRIL, + AWS_DATE_MONTH_MAY, + AWS_DATE_MONTH_JUNE, + AWS_DATE_MONTH_JULY, + AWS_DATE_MONTH_AUGUST, + AWS_DATE_MONTH_SEPTEMBER, + AWS_DATE_MONTH_OCTOBER, + AWS_DATE_MONTH_NOVEMBER, + AWS_DATE_MONTH_DECEMBER, +}; + +enum aws_date_day_of_week { + AWS_DATE_DAY_OF_WEEK_SUNDAY = 0, + AWS_DATE_DAY_OF_WEEK_MONDAY, + AWS_DATE_DAY_OF_WEEK_TUESDAY, + AWS_DATE_DAY_OF_WEEK_WEDNESDAY, + AWS_DATE_DAY_OF_WEEK_THURSDAY, + AWS_DATE_DAY_OF_WEEK_FRIDAY, + AWS_DATE_DAY_OF_WEEK_SATURDAY, +}; + +struct aws_date_time { + time_t timestamp; + char tz[6]; + struct tm gmt_time; + struct tm local_time; + bool utc_assumed; +}; + +AWS_EXTERN_C_BEGIN + +/** + * Initializes dt to be the current system time. + */ +AWS_COMMON_API void aws_date_time_init_now(struct aws_date_time *dt); + +/** + * Initializes dt to be the time represented in milliseconds since unix epoch. + */ +AWS_COMMON_API void aws_date_time_init_epoch_millis(struct aws_date_time *dt, uint64_t ms_since_epoch); + +/** + * Initializes dt to be the time represented in seconds.millis since unix epoch. + */ +AWS_COMMON_API void aws_date_time_init_epoch_secs(struct aws_date_time *dt, double sec_ms); + +/** + * Initializes dt to be the time represented by date_str in format 'fmt'. Returns AWS_OP_SUCCESS if the + * string was successfully parsed, returns AWS_OP_ERR if parsing failed. 
+ * + * Notes for AWS_DATE_FORMAT_RFC822: + * If no time zone information is provided, it is assumed to be local time (please don't do this). + * + * If the time zone is something other than something indicating Universal Time (e.g. Z, UT, UTC, or GMT) or an offset + * from UTC (e.g. +0100, -0700), parsing will fail. + * + * Really, it's just better if you always use Universal Time. + */ +AWS_COMMON_API int aws_date_time_init_from_str( + struct aws_date_time *dt, + const struct aws_byte_buf *date_str, + enum aws_date_format fmt); + +/** + * aws_date_time_init variant that takes a byte_cursor rather than a byte_buf + */ +AWS_COMMON_API int aws_date_time_init_from_str_cursor( + struct aws_date_time *dt, + const struct aws_byte_cursor *date_str_cursor, + enum aws_date_format fmt); + +/** + * Copies the current time as a formatted date string in local time into output_buf. If buffer is too small, it will + * return AWS_OP_ERR. A good size suggestion is AWS_DATE_TIME_STR_MAX_LEN bytes. AWS_DATE_FORMAT_AUTO_DETECT is not + * allowed. + */ +AWS_COMMON_API int aws_date_time_to_local_time_str( + const struct aws_date_time *dt, + enum aws_date_format fmt, + struct aws_byte_buf *output_buf); + +/** + * Copies the current time as a formatted date string in utc time into output_buf. If buffer is too small, it will + * return AWS_OP_ERR. A good size suggestion is AWS_DATE_TIME_STR_MAX_LEN bytes. AWS_DATE_FORMAT_AUTO_DETECT is not + * allowed. + */ +AWS_COMMON_API int aws_date_time_to_utc_time_str( + const struct aws_date_time *dt, + enum aws_date_format fmt, + struct aws_byte_buf *output_buf); + +/** + * Copies the current time as a formatted short date string in local time into output_buf. If buffer is too small, it + * will return AWS_OP_ERR. A good size suggestion is AWS_DATE_TIME_STR_MAX_LEN bytes. AWS_DATE_FORMAT_AUTO_DETECT is not + * allowed. 
+ */ +AWS_COMMON_API int aws_date_time_to_local_time_short_str( + const struct aws_date_time *dt, + enum aws_date_format fmt, + struct aws_byte_buf *output_buf); + +/** + * Copies the current time as a formatted short date string in utc time into output_buf. If buffer is too small, it will + * return AWS_OP_ERR. A good size suggestion is AWS_DATE_TIME_STR_MAX_LEN bytes. AWS_DATE_FORMAT_AUTO_DETECT is not + * allowed. + */ +AWS_COMMON_API int aws_date_time_to_utc_time_short_str( + const struct aws_date_time *dt, + enum aws_date_format fmt, + struct aws_byte_buf *output_buf); + +AWS_COMMON_API double aws_date_time_as_epoch_secs(const struct aws_date_time *dt); +AWS_COMMON_API uint64_t aws_date_time_as_nanos(const struct aws_date_time *dt); +AWS_COMMON_API uint64_t aws_date_time_as_millis(const struct aws_date_time *dt); +AWS_COMMON_API uint16_t aws_date_time_year(const struct aws_date_time *dt, bool local_time); +AWS_COMMON_API enum aws_date_month aws_date_time_month(const struct aws_date_time *dt, bool local_time); +AWS_COMMON_API uint8_t aws_date_time_month_day(const struct aws_date_time *dt, bool local_time); +AWS_COMMON_API enum aws_date_day_of_week aws_date_time_day_of_week(const struct aws_date_time *dt, bool local_time); +AWS_COMMON_API uint8_t aws_date_time_hour(const struct aws_date_time *dt, bool local_time); +AWS_COMMON_API uint8_t aws_date_time_minute(const struct aws_date_time *dt, bool local_time); +AWS_COMMON_API uint8_t aws_date_time_second(const struct aws_date_time *dt, bool local_time); +AWS_COMMON_API bool aws_date_time_dst(const struct aws_date_time *dt, bool local_time); + +/** + * returns the difference of a and b (a - b) in seconds. 
+ */ +AWS_COMMON_API time_t aws_date_time_diff(const struct aws_date_time *a, const struct aws_date_time *b); + +AWS_EXTERN_C_END + +#endif /* AWS_COMMON_DATE_TIME_H */ diff --git a/contrib/restricted/aws/aws-c-common/include/aws/common/device_random.h b/contrib/restricted/aws/aws-c-common/include/aws/common/device_random.h index ae79f7578d..c1fc2405c9 100644 --- a/contrib/restricted/aws/aws-c-common/include/aws/common/device_random.h +++ b/contrib/restricted/aws/aws-c-common/include/aws/common/device_random.h @@ -1,40 +1,40 @@ -#ifndef AWS_COMMON_DEVICE_RANDOM_H -#define AWS_COMMON_DEVICE_RANDOM_H +#ifndef AWS_COMMON_DEVICE_RANDOM_H +#define AWS_COMMON_DEVICE_RANDOM_H /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. - */ -#include <aws/common/common.h> - -struct aws_byte_buf; - -AWS_EXTERN_C_BEGIN - -/** - * Get an unpredictably random 64bit number, suitable for cryptographic use. - */ -AWS_COMMON_API int aws_device_random_u64(uint64_t *output); - -/** - * Get an unpredictably random 32bit number, suitable for cryptographic use. - */ -AWS_COMMON_API int aws_device_random_u32(uint32_t *output); - -/** - * Get an unpredictably random 16bit number, suitable for cryptographic use. - */ -AWS_COMMON_API int aws_device_random_u16(uint16_t *output); - -/** - * Get an unpredictably random 8bit number, suitable for cryptographic use. - */ -AWS_COMMON_API int aws_device_random_u8(uint8_t *output); - -/** - * Fill a buffer with unpredictably random bytes, suitable for cryptographic use. - */ -AWS_COMMON_API int aws_device_random_buffer(struct aws_byte_buf *output); - -AWS_EXTERN_C_END - -#endif /* AWS_COMMON_DEVICE_RANDOM_H */ + */ +#include <aws/common/common.h> + +struct aws_byte_buf; + +AWS_EXTERN_C_BEGIN + +/** + * Get an unpredictably random 64bit number, suitable for cryptographic use. 
+ */ +AWS_COMMON_API int aws_device_random_u64(uint64_t *output); + +/** + * Get an unpredictably random 32bit number, suitable for cryptographic use. + */ +AWS_COMMON_API int aws_device_random_u32(uint32_t *output); + +/** + * Get an unpredictably random 16bit number, suitable for cryptographic use. + */ +AWS_COMMON_API int aws_device_random_u16(uint16_t *output); + +/** + * Get an unpredictably random 8bit number, suitable for cryptographic use. + */ +AWS_COMMON_API int aws_device_random_u8(uint8_t *output); + +/** + * Fill a buffer with unpredictably random bytes, suitable for cryptographic use. + */ +AWS_COMMON_API int aws_device_random_buffer(struct aws_byte_buf *output); + +AWS_EXTERN_C_END + +#endif /* AWS_COMMON_DEVICE_RANDOM_H */ diff --git a/contrib/restricted/aws/aws-c-common/include/aws/common/encoding.h b/contrib/restricted/aws/aws-c-common/include/aws/common/encoding.h index a90b72ef0b..3739ac60b0 100644 --- a/contrib/restricted/aws/aws-c-common/include/aws/common/encoding.h +++ b/contrib/restricted/aws/aws-c-common/include/aws/common/encoding.h @@ -1,135 +1,135 @@ -#ifndef AWS_COMMON_ENCODING_H -#define AWS_COMMON_ENCODING_H - +#ifndef AWS_COMMON_ENCODING_H +#define AWS_COMMON_ENCODING_H + /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. - */ - -#include <aws/common/byte_buf.h> -#include <aws/common/byte_order.h> -#include <aws/common/common.h> - -#include <memory.h> - -AWS_EXTERN_C_BEGIN - -/* - * computes the length necessary to store the result of aws_hex_encode(). - * returns -1 on failure, and 0 on success. encoded_length will be set on - * success. - */ -AWS_COMMON_API -int aws_hex_compute_encoded_len(size_t to_encode_len, size_t *encoded_length); - -/* - * Base 16 (hex) encodes the contents of to_encode and stores the result in - * output. 0 terminates the result. Assumes the buffer is empty and does not resize on - * insufficient capacity. 
- */ -AWS_COMMON_API -int aws_hex_encode(const struct aws_byte_cursor *AWS_RESTRICT to_encode, struct aws_byte_buf *AWS_RESTRICT output); - -/* - * Base 16 (hex) encodes the contents of to_encode and appends the result in - * output. Does not 0-terminate. Grows the destination buffer dynamically if necessary. - */ -AWS_COMMON_API -int aws_hex_encode_append_dynamic( - const struct aws_byte_cursor *AWS_RESTRICT to_encode, - struct aws_byte_buf *AWS_RESTRICT output); - -/* - * computes the length necessary to store the result of aws_hex_decode(). - * returns -1 on failure, and 0 on success. decoded_len will be set on success. - */ -AWS_COMMON_API -int aws_hex_compute_decoded_len(size_t to_decode_len, size_t *decoded_len); - -/* - * Base 16 (hex) decodes the contents of to_decode and stores the result in - * output. If output is NULL, output_size will be set to what the output_size - * should be. - */ -AWS_COMMON_API -int aws_hex_decode(const struct aws_byte_cursor *AWS_RESTRICT to_decode, struct aws_byte_buf *AWS_RESTRICT output); - -/* - * Computes the length necessary to store the output of aws_base64_encode call. - * returns -1 on failure, and 0 on success. encoded_length will be set on - * success. - */ -AWS_COMMON_API -int aws_base64_compute_encoded_len(size_t to_encode_len, size_t *encoded_len); - -/* - * Base 64 encodes the contents of to_encode and stores the result in output. - */ -AWS_COMMON_API -int aws_base64_encode(const struct aws_byte_cursor *AWS_RESTRICT to_encode, struct aws_byte_buf *AWS_RESTRICT output); - -/* - * Computes the length necessary to store the output of aws_base64_decode call. - * returns -1 on failure, and 0 on success. decoded_len will be set on success. - */ -AWS_COMMON_API -int aws_base64_compute_decoded_len(const struct aws_byte_cursor *AWS_RESTRICT to_decode, size_t *decoded_len); - -/* - * Base 64 decodes the contents of to_decode and stores the result in output. 
- */ -AWS_COMMON_API -int aws_base64_decode(const struct aws_byte_cursor *AWS_RESTRICT to_decode, struct aws_byte_buf *AWS_RESTRICT output); - -/* Add a 64 bit unsigned integer to the buffer, ensuring network - byte order - * Assumes the buffer size is at least 8 bytes. - */ + */ + +#include <aws/common/byte_buf.h> +#include <aws/common/byte_order.h> +#include <aws/common/common.h> + +#include <memory.h> + +AWS_EXTERN_C_BEGIN + +/* + * computes the length necessary to store the result of aws_hex_encode(). + * returns -1 on failure, and 0 on success. encoded_length will be set on + * success. + */ +AWS_COMMON_API +int aws_hex_compute_encoded_len(size_t to_encode_len, size_t *encoded_length); + +/* + * Base 16 (hex) encodes the contents of to_encode and stores the result in + * output. 0 terminates the result. Assumes the buffer is empty and does not resize on + * insufficient capacity. + */ +AWS_COMMON_API +int aws_hex_encode(const struct aws_byte_cursor *AWS_RESTRICT to_encode, struct aws_byte_buf *AWS_RESTRICT output); + +/* + * Base 16 (hex) encodes the contents of to_encode and appends the result in + * output. Does not 0-terminate. Grows the destination buffer dynamically if necessary. + */ +AWS_COMMON_API +int aws_hex_encode_append_dynamic( + const struct aws_byte_cursor *AWS_RESTRICT to_encode, + struct aws_byte_buf *AWS_RESTRICT output); + +/* + * computes the length necessary to store the result of aws_hex_decode(). + * returns -1 on failure, and 0 on success. decoded_len will be set on success. + */ +AWS_COMMON_API +int aws_hex_compute_decoded_len(size_t to_decode_len, size_t *decoded_len); + +/* + * Base 16 (hex) decodes the contents of to_decode and stores the result in + * output. If output is NULL, output_size will be set to what the output_size + * should be. 
+ */ +AWS_COMMON_API +int aws_hex_decode(const struct aws_byte_cursor *AWS_RESTRICT to_decode, struct aws_byte_buf *AWS_RESTRICT output); + +/* + * Computes the length necessary to store the output of aws_base64_encode call. + * returns -1 on failure, and 0 on success. encoded_length will be set on + * success. + */ +AWS_COMMON_API +int aws_base64_compute_encoded_len(size_t to_encode_len, size_t *encoded_len); + +/* + * Base 64 encodes the contents of to_encode and stores the result in output. + */ +AWS_COMMON_API +int aws_base64_encode(const struct aws_byte_cursor *AWS_RESTRICT to_encode, struct aws_byte_buf *AWS_RESTRICT output); + +/* + * Computes the length necessary to store the output of aws_base64_decode call. + * returns -1 on failure, and 0 on success. decoded_len will be set on success. + */ +AWS_COMMON_API +int aws_base64_compute_decoded_len(const struct aws_byte_cursor *AWS_RESTRICT to_decode, size_t *decoded_len); + +/* + * Base 64 decodes the contents of to_decode and stores the result in output. + */ +AWS_COMMON_API +int aws_base64_decode(const struct aws_byte_cursor *AWS_RESTRICT to_decode, struct aws_byte_buf *AWS_RESTRICT output); + +/* Add a 64 bit unsigned integer to the buffer, ensuring network - byte order + * Assumes the buffer size is at least 8 bytes. + */ AWS_STATIC_IMPL void aws_write_u64(uint64_t value, uint8_t *buffer); - -/* - * Extracts a 64 bit unsigned integer from buffer. Ensures conversion from - * network byte order to host byte order. Assumes buffer size is at least 8 - * bytes. - */ + +/* + * Extracts a 64 bit unsigned integer from buffer. Ensures conversion from + * network byte order to host byte order. Assumes buffer size is at least 8 + * bytes. + */ AWS_STATIC_IMPL uint64_t aws_read_u64(const uint8_t *buffer); - -/* Add a 32 bit unsigned integer to the buffer, ensuring network - byte order - * Assumes the buffer size is at least 4 bytes. 
- */ + +/* Add a 32 bit unsigned integer to the buffer, ensuring network - byte order + * Assumes the buffer size is at least 4 bytes. + */ AWS_STATIC_IMPL void aws_write_u32(uint32_t value, uint8_t *buffer); - -/* - * Extracts a 32 bit unsigned integer from buffer. Ensures conversion from - * network byte order to host byte order. Assumes the buffer size is at least 4 - * bytes. - */ + +/* + * Extracts a 32 bit unsigned integer from buffer. Ensures conversion from + * network byte order to host byte order. Assumes the buffer size is at least 4 + * bytes. + */ AWS_STATIC_IMPL uint32_t aws_read_u32(const uint8_t *buffer); - -/* Add a 24 bit unsigned integer to the buffer, ensuring network - byte order - * return the new position in the buffer for the next operation. - * Note, since this uses uint32_t for storage, the 3 least significant bytes - * will be used. Assumes buffer is at least 3 bytes long. - */ + +/* Add a 24 bit unsigned integer to the buffer, ensuring network - byte order + * return the new position in the buffer for the next operation. + * Note, since this uses uint32_t for storage, the 3 least significant bytes + * will be used. Assumes buffer is at least 3 bytes long. + */ AWS_STATIC_IMPL void aws_write_u24(uint32_t value, uint8_t *buffer); -/* - * Extracts a 24 bit unsigned integer from buffer. Ensures conversion from - * network byte order to host byte order. Assumes buffer is at least 3 bytes - * long. - */ +/* + * Extracts a 24 bit unsigned integer from buffer. Ensures conversion from + * network byte order to host byte order. Assumes buffer is at least 3 bytes + * long. + */ AWS_STATIC_IMPL uint32_t aws_read_u24(const uint8_t *buffer); - -/* Add a 16 bit unsigned integer to the buffer, ensuring network-byte order - * return the new position in the buffer for the next operation. - * Assumes buffer is at least 2 bytes long. 
- */ + +/* Add a 16 bit unsigned integer to the buffer, ensuring network-byte order + * return the new position in the buffer for the next operation. + * Assumes buffer is at least 2 bytes long. + */ AWS_STATIC_IMPL void aws_write_u16(uint16_t value, uint8_t *buffer); -/* - * Extracts a 16 bit unsigned integer from buffer. Ensures conversion from - * network byte order to host byte order. Assumes buffer is at least 2 bytes - * long. - */ +/* + * Extracts a 16 bit unsigned integer from buffer. Ensures conversion from + * network byte order to host byte order. Assumes buffer is at least 2 bytes + * long. + */ AWS_STATIC_IMPL uint16_t aws_read_u16(const uint8_t *buffer); - + enum aws_text_encoding { AWS_TEXT_UNKNOWN, AWS_TEXT_UTF8, @@ -137,7 +137,7 @@ enum aws_text_encoding { AWS_TEXT_UTF32, AWS_TEXT_ASCII, }; - + /* Checks the BOM in the buffer to see if encoding can be determined. If there is no BOM or * it is unrecognizable, then AWS_TEXT_UNKNOWN will be returned. */ @@ -154,4 +154,4 @@ AWS_STATIC_IMPL bool aws_text_is_utf8(const uint8_t *bytes, size_t size); AWS_EXTERN_C_END -#endif /* AWS_COMMON_ENCODING_H */ +#endif /* AWS_COMMON_ENCODING_H */ diff --git a/contrib/restricted/aws/aws-c-common/include/aws/common/environment.h b/contrib/restricted/aws/aws-c-common/include/aws/common/environment.h index c11bac3ad0..154b9faa71 100644 --- a/contrib/restricted/aws/aws-c-common/include/aws/common/environment.h +++ b/contrib/restricted/aws/aws-c-common/include/aws/common/environment.h @@ -1,46 +1,46 @@ -#ifndef AWS_COMMON_ENVIRONMENT_H -#define AWS_COMMON_ENVIRONMENT_H - +#ifndef AWS_COMMON_ENVIRONMENT_H +#define AWS_COMMON_ENVIRONMENT_H + /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. - */ - -#include <aws/common/common.h> - -struct aws_string; - -/* - * Simple shims to the appropriate platform calls for environment variable manipulation. - * - * Not thread safe to use set/unset unsynced with get. 
Set/unset only used in unit tests. - */ -AWS_EXTERN_C_BEGIN - -/* - * Get the value of an environment variable. If the variable is not set, the output string will be set to NULL. - * Not thread-safe - */ -AWS_COMMON_API -int aws_get_environment_value( - struct aws_allocator *allocator, - const struct aws_string *variable_name, - struct aws_string **value_out); - -/* - * Set the value of an environment variable. On Windows, setting a variable to the empty string will actually unset it. - * Not thread-safe - */ -AWS_COMMON_API -int aws_set_environment_value(const struct aws_string *variable_name, const struct aws_string *value); - -/* - * Unset an environment variable. - * Not thread-safe - */ -AWS_COMMON_API -int aws_unset_environment_value(const struct aws_string *variable_name); - -AWS_EXTERN_C_END - -#endif /* AWS_COMMON_ENVIRONMENT_H */ + */ + +#include <aws/common/common.h> + +struct aws_string; + +/* + * Simple shims to the appropriate platform calls for environment variable manipulation. + * + * Not thread safe to use set/unset unsynced with get. Set/unset only used in unit tests. + */ +AWS_EXTERN_C_BEGIN + +/* + * Get the value of an environment variable. If the variable is not set, the output string will be set to NULL. + * Not thread-safe + */ +AWS_COMMON_API +int aws_get_environment_value( + struct aws_allocator *allocator, + const struct aws_string *variable_name, + struct aws_string **value_out); + +/* + * Set the value of an environment variable. On Windows, setting a variable to the empty string will actually unset it. + * Not thread-safe + */ +AWS_COMMON_API +int aws_set_environment_value(const struct aws_string *variable_name, const struct aws_string *value); + +/* + * Unset an environment variable. 
+ * Not thread-safe + */ +AWS_COMMON_API +int aws_unset_environment_value(const struct aws_string *variable_name); + +AWS_EXTERN_C_END + +#endif /* AWS_COMMON_ENVIRONMENT_H */ diff --git a/contrib/restricted/aws/aws-c-common/include/aws/common/error.h b/contrib/restricted/aws/aws-c-common/include/aws/common/error.h index 200de33146..7be3e616e9 100644 --- a/contrib/restricted/aws/aws-c-common/include/aws/common/error.h +++ b/contrib/restricted/aws/aws-c-common/include/aws/common/error.h @@ -1,17 +1,17 @@ -#ifndef AWS_COMMON_ERROR_H -#define AWS_COMMON_ERROR_H - +#ifndef AWS_COMMON_ERROR_H +#define AWS_COMMON_ERROR_H + /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. - */ - + */ + #include <aws/common/assert.h> -#include <aws/common/exports.h> +#include <aws/common/exports.h> #include <aws/common/macros.h> #include <aws/common/package.h> #include <aws/common/stdint.h> - + #define AWS_OP_SUCCESS (0) #define AWS_OP_ERR (-1) @@ -21,110 +21,110 @@ #define AWS_ERROR_ENUM_BEGIN_RANGE(x) ((x)*AWS_ERROR_ENUM_STRIDE) #define AWS_ERROR_ENUM_END_RANGE(x) (((x) + 1) * AWS_ERROR_ENUM_STRIDE - 1) -struct aws_error_info { - int error_code; - const char *literal_name; - const char *error_str; - const char *lib_name; - const char *formatted_name; -}; - -struct aws_error_info_list { - const struct aws_error_info *error_list; - uint16_t count; -}; - -#define AWS_DEFINE_ERROR_INFO(C, ES, LN) \ - { \ - .literal_name = #C, .error_code = (C), .error_str = (ES), .lib_name = (LN), \ - .formatted_name = LN ": " #C ", " ES, \ - } - -typedef void(aws_error_handler_fn)(int err, void *ctx); - -AWS_EXTERN_C_BEGIN - -/* - * Returns the latest error code on the current thread, or 0 if none have - * occurred. - */ -AWS_COMMON_API -int aws_last_error(void); - -/* - * Returns the error str corresponding to `err`. - */ -AWS_COMMON_API -const char *aws_error_str(int err); - -/* - * Returns the enum name corresponding to `err`. 
- */ -AWS_COMMON_API -const char *aws_error_name(int err); - -/* - * Returns the error lib name corresponding to `err`. - */ -AWS_COMMON_API -const char *aws_error_lib_name(int err); - -/* - * Returns libname concatenated with error string. - */ -AWS_COMMON_API -const char *aws_error_debug_str(int err); - -/* - * Internal implementation detail. - */ -AWS_COMMON_API -void aws_raise_error_private(int err); - -/* - * Raises `err` to the installed callbacks, and sets the thread's error. - */ -AWS_STATIC_IMPL +struct aws_error_info { + int error_code; + const char *literal_name; + const char *error_str; + const char *lib_name; + const char *formatted_name; +}; + +struct aws_error_info_list { + const struct aws_error_info *error_list; + uint16_t count; +}; + +#define AWS_DEFINE_ERROR_INFO(C, ES, LN) \ + { \ + .literal_name = #C, .error_code = (C), .error_str = (ES), .lib_name = (LN), \ + .formatted_name = LN ": " #C ", " ES, \ + } + +typedef void(aws_error_handler_fn)(int err, void *ctx); + +AWS_EXTERN_C_BEGIN + +/* + * Returns the latest error code on the current thread, or 0 if none have + * occurred. + */ +AWS_COMMON_API +int aws_last_error(void); + +/* + * Returns the error str corresponding to `err`. + */ +AWS_COMMON_API +const char *aws_error_str(int err); + +/* + * Returns the enum name corresponding to `err`. + */ +AWS_COMMON_API +const char *aws_error_name(int err); + +/* + * Returns the error lib name corresponding to `err`. + */ +AWS_COMMON_API +const char *aws_error_lib_name(int err); + +/* + * Returns libname concatenated with error string. + */ +AWS_COMMON_API +const char *aws_error_debug_str(int err); + +/* + * Internal implementation detail. + */ +AWS_COMMON_API +void aws_raise_error_private(int err); + +/* + * Raises `err` to the installed callbacks, and sets the thread's error. 
+ */ +AWS_STATIC_IMPL int aws_raise_error(int err); - -/* - * Resets the `err` back to defaults - */ -AWS_COMMON_API -void aws_reset_error(void); -/* - * Sets `err` to the latest error. Does not invoke callbacks. - */ -AWS_COMMON_API -void aws_restore_error(int err); - -/* - * Sets an application wide error handler function. This will be overridden by - * the thread local handler. The previous handler is returned, this can be used - * for restoring an error handler if it needs to be overridden temporarily. - * Setting this to NULL will turn off this error callback after it has been - * enabled. - */ -AWS_COMMON_API -aws_error_handler_fn *aws_set_global_error_handler_fn(aws_error_handler_fn *handler, void *ctx); - -/* - * Sets a thread-local error handler function. This will override the global - * handler. The previous handler is returned, this can be used for restoring an - * error handler if it needs to be overridden temporarily. Setting this to NULL - * will turn off this error callback after it has been enabled. - */ -AWS_COMMON_API -aws_error_handler_fn *aws_set_thread_local_error_handler_fn(aws_error_handler_fn *handler, void *ctx); - -/** TODO: this needs to be a private function (wait till we have the cmake story - * better before moving it though). It should be external for the purpose of - * other libs we own, but customers should not be able to hit it without going - * out of their way to do so. - */ -AWS_COMMON_API -void aws_register_error_info(const struct aws_error_info_list *error_info); - + +/* + * Resets the `err` back to defaults + */ +AWS_COMMON_API +void aws_reset_error(void); +/* + * Sets `err` to the latest error. Does not invoke callbacks. + */ +AWS_COMMON_API +void aws_restore_error(int err); + +/* + * Sets an application wide error handler function. This will be overridden by + * the thread local handler. The previous handler is returned, this can be used + * for restoring an error handler if it needs to be overridden temporarily. 
+ * Setting this to NULL will turn off this error callback after it has been + * enabled. + */ +AWS_COMMON_API +aws_error_handler_fn *aws_set_global_error_handler_fn(aws_error_handler_fn *handler, void *ctx); + +/* + * Sets a thread-local error handler function. This will override the global + * handler. The previous handler is returned, this can be used for restoring an + * error handler if it needs to be overridden temporarily. Setting this to NULL + * will turn off this error callback after it has been enabled. + */ +AWS_COMMON_API +aws_error_handler_fn *aws_set_thread_local_error_handler_fn(aws_error_handler_fn *handler, void *ctx); + +/** TODO: this needs to be a private function (wait till we have the cmake story + * better before moving it though). It should be external for the purpose of + * other libs we own, but customers should not be able to hit it without going + * out of their way to do so. + */ +AWS_COMMON_API +void aws_register_error_info(const struct aws_error_info_list *error_info); + AWS_COMMON_API void aws_unregister_error_info(const struct aws_error_info_list *error_info); @@ -138,8 +138,8 @@ int aws_translate_and_raise_io_error(int error_no); # include <aws/common/error.inl> #endif /* AWS_NO_STATIC_IMPL */ -AWS_EXTERN_C_END - +AWS_EXTERN_C_END + enum aws_common_error { AWS_ERROR_SUCCESS = AWS_ERROR_ENUM_BEGIN_RANGE(AWS_C_COMMON_PACKAGE_ID), AWS_ERROR_OOM, @@ -194,4 +194,4 @@ enum aws_common_error { AWS_ERROR_END_COMMON_RANGE = AWS_ERROR_ENUM_END_RANGE(AWS_C_COMMON_PACKAGE_ID) }; -#endif /* AWS_COMMON_ERROR_H */ +#endif /* AWS_COMMON_ERROR_H */ diff --git a/contrib/restricted/aws/aws-c-common/include/aws/common/exports.h b/contrib/restricted/aws/aws-c-common/include/aws/common/exports.h index 017a5f04ef..ba07e743ce 100644 --- a/contrib/restricted/aws/aws-c-common/include/aws/common/exports.h +++ b/contrib/restricted/aws/aws-c-common/include/aws/common/exports.h @@ -1,30 +1,30 @@ -#ifndef AWS_COMMON_EXPORTS_H -#define AWS_COMMON_EXPORTS_H 
+#ifndef AWS_COMMON_EXPORTS_H +#define AWS_COMMON_EXPORTS_H /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. - */ + */ #if defined(AWS_C_RT_USE_WINDOWS_DLL_SEMANTICS) || defined(_WIN32) -# ifdef AWS_COMMON_USE_IMPORT_EXPORT -# ifdef AWS_COMMON_EXPORTS -# define AWS_COMMON_API __declspec(dllexport) -# else -# define AWS_COMMON_API __declspec(dllimport) -# endif /* AWS_COMMON_EXPORTS */ -# else -# define AWS_COMMON_API -# endif /* AWS_COMMON_USE_IMPORT_EXPORT */ - +# ifdef AWS_COMMON_USE_IMPORT_EXPORT +# ifdef AWS_COMMON_EXPORTS +# define AWS_COMMON_API __declspec(dllexport) +# else +# define AWS_COMMON_API __declspec(dllimport) +# endif /* AWS_COMMON_EXPORTS */ +# else +# define AWS_COMMON_API +# endif /* AWS_COMMON_USE_IMPORT_EXPORT */ + #else /* defined (AWS_C_RT_USE_WINDOWS_DLL_SEMANTICS) || defined (_WIN32) */ - -# if ((__GNUC__ >= 4) || defined(__clang__)) && defined(AWS_COMMON_USE_IMPORT_EXPORT) && defined(AWS_COMMON_EXPORTS) -# define AWS_COMMON_API __attribute__((visibility("default"))) -# else -# define AWS_COMMON_API -# endif /* __GNUC__ >= 4 || defined(__clang__) */ - + +# if ((__GNUC__ >= 4) || defined(__clang__)) && defined(AWS_COMMON_USE_IMPORT_EXPORT) && defined(AWS_COMMON_EXPORTS) +# define AWS_COMMON_API __attribute__((visibility("default"))) +# else +# define AWS_COMMON_API +# endif /* __GNUC__ >= 4 || defined(__clang__) */ + #endif /* defined (AWS_C_RT_USE_WINDOWS_DLL_SEMANTICS) || defined (_WIN32) */ - + #ifdef AWS_NO_STATIC_IMPL # define AWS_STATIC_IMPL AWS_COMMON_API #endif @@ -37,4 +37,4 @@ # define AWS_STATIC_IMPL static inline #endif -#endif /* AWS_COMMON_EXPORTS_H */ +#endif /* AWS_COMMON_EXPORTS_H */ diff --git a/contrib/restricted/aws/aws-c-common/include/aws/common/hash_table.h b/contrib/restricted/aws/aws-c-common/include/aws/common/hash_table.h index c4ac55cb64..648266266b 100644 --- a/contrib/restricted/aws/aws-c-common/include/aws/common/hash_table.h +++ 
b/contrib/restricted/aws/aws-c-common/include/aws/common/hash_table.h @@ -1,298 +1,298 @@ -#ifndef AWS_COMMON_HASH_TABLE_H -#define AWS_COMMON_HASH_TABLE_H - +#ifndef AWS_COMMON_HASH_TABLE_H +#define AWS_COMMON_HASH_TABLE_H + /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. - */ - -#include <aws/common/common.h> - -#include <stddef.h> - -#define AWS_COMMON_HASH_TABLE_ITER_CONTINUE (1 << 0) -#define AWS_COMMON_HASH_TABLE_ITER_DELETE (1 << 1) - -/** - * Hash table data structure. This module provides an automatically resizing - * hash table implementation for general purpose use. The hash table stores a - * mapping between void * keys and values; it is expected that in most cases, - * these will point to a structure elsewhere in the heap, instead of inlining a - * key or value into the hash table element itself. - * - * Currently, this hash table implements a variant of robin hood hashing, but - * we do not guarantee that this won't change in the future. - * - * Associated with each hash function are four callbacks: - * - * hash_fn - A hash function from the keys to a uint64_t. It is critical that - * the hash function for a key does not change while the key is in the hash - * table; violating this results in undefined behavior. Collisions are - * tolerated, though naturally with reduced performance. - * - * equals_fn - An equality comparison function. This function must be - * reflexive and consistent with hash_fn. - * - * destroy_key_fn, destroy_value_fn - Optional callbacks invoked when the - * table is cleared or cleaned up and at the caller's option when an element - * is removed from the table. Either or both may be set to NULL, which - * has the same effect as a no-op destroy function. - * - * This datastructure can be safely moved between threads, subject to the - * requirements of the underlying allocator. 
It is also safe to invoke - * non-mutating operations on the hash table from multiple threads. A suitable - * memory barrier must be used when transitioning from single-threaded mutating - * usage to multithreaded usage. - */ -struct hash_table_state; /* Opaque pointer */ -struct aws_hash_table { - struct hash_table_state *p_impl; -}; - -/** - * Represents an element in the hash table. Various operations on the hash - * table may provide pointers to elements stored within the hash table; - * generally, calling code may alter value, but must not alter key (or any - * information used to compute key's hash code). - * - * Pointers to elements within the hash are invalidated whenever an operation - * which may change the number of elements in the hash is invoked (i.e. put, - * delete, clear, and clean_up), regardless of whether the number of elements - * actually changes. - */ -struct aws_hash_element { - const void *key; - void *value; -}; - -enum aws_hash_iter_status { - AWS_HASH_ITER_STATUS_DONE, - AWS_HASH_ITER_STATUS_DELETE_CALLED, - AWS_HASH_ITER_STATUS_READY_FOR_USE, -}; - -struct aws_hash_iter { - const struct aws_hash_table *map; - struct aws_hash_element element; - size_t slot; - size_t limit; - enum aws_hash_iter_status status; - /* - * Reserving extra fields for binary compatibility with future expansion of - * iterator in case hash table implementation changes. - */ - int unused_0; - void *unused_1; - void *unused_2; -}; - -/** - * Prototype for a key hashing function pointer. - */ -typedef uint64_t(aws_hash_fn)(const void *key); - -/** - * Prototype for a hash table equality check function pointer. - * - * This type is usually used for a function that compares two hash table - * keys, but note that the same type is used for a function that compares - * two hash table values in aws_hash_table_eq. - * - * Equality functions used in a hash table must be reflexive (i.e., a == b if - * and only if b == a), and must be consistent with the hash function in use. 
- */ -typedef bool(aws_hash_callback_eq_fn)(const void *a, const void *b); - -/** - * Prototype for a hash table key or value destructor function pointer. - * - * This function is used to destroy elements in the hash table when the - * table is cleared or cleaned up. - * - * Note that functions which remove individual elements from the hash - * table provide options of whether or not to invoke the destructors - * on the key and value of a removed element. - */ -typedef void(aws_hash_callback_destroy_fn)(void *key_or_value); - -AWS_EXTERN_C_BEGIN - -/** - * Initializes a hash map with initial capacity for 'size' elements - * without resizing. Uses hash_fn to compute the hash of each element. - * equals_fn to compute equality of two keys. Whenever an element is - * removed without being returned, destroy_key_fn is run on the pointer - * to the key and destroy_value_fn is run on the pointer to the value. - * Either or both may be NULL if a callback is not desired in this case. - */ -AWS_COMMON_API -int aws_hash_table_init( - struct aws_hash_table *map, - struct aws_allocator *alloc, - size_t size, - aws_hash_fn *hash_fn, - aws_hash_callback_eq_fn *equals_fn, - aws_hash_callback_destroy_fn *destroy_key_fn, - aws_hash_callback_destroy_fn *destroy_value_fn); - -/** - * Deletes every element from map and frees all associated memory. - * destroy_fn will be called for each element. aws_hash_table_init - * must be called before reusing the hash table. - * - * This method is idempotent. - */ -AWS_COMMON_API -void aws_hash_table_clean_up(struct aws_hash_table *map); - -/** - * Safely swaps two hash tables. Note that we swap the entirety of the hash - * table, including which allocator is associated. - * - * Neither hash table is required to be initialized; if one or both is - * uninitialized, then the uninitialized state is also swapped. 
- */ -AWS_COMMON_API -void aws_hash_table_swap(struct aws_hash_table *AWS_RESTRICT a, struct aws_hash_table *AWS_RESTRICT b); - -/** - * Moves the hash table in 'from' to 'to'. After this move, 'from' will - * be identical to the state of the original 'to' hash table, and 'to' - * will be in the same state as if it had been passed to aws_hash_table_clean_up - * (that is, it will have no memory allocated, and it will be safe to - * either discard it or call aws_hash_table_clean_up again). - * - * Note that 'to' will not be cleaned up. You should make sure that 'to' - * is either uninitialized or cleaned up before moving a hashtable into - * it. - */ -AWS_COMMON_API -void aws_hash_table_move(struct aws_hash_table *AWS_RESTRICT to, struct aws_hash_table *AWS_RESTRICT from); - -/** - * Returns the current number of entries in the table. - */ -AWS_COMMON_API -size_t aws_hash_table_get_entry_count(const struct aws_hash_table *map); - -/** - * Returns an iterator to be used for iterating through a hash table. - * Iterator will already point to the first element of the table it finds, - * which can be accessed as iter.element. - * - * This function cannot fail, but if there are no elements in the table, - * the returned iterator will return true for aws_hash_iter_done(&iter). - */ -AWS_COMMON_API -struct aws_hash_iter aws_hash_iter_begin(const struct aws_hash_table *map); - -/** - * Returns true if iterator is done iterating through table, false otherwise. - * If this is true, the iterator will not include an element of the table. - */ -AWS_COMMON_API -bool aws_hash_iter_done(const struct aws_hash_iter *iter); - -/** - * Updates iterator so that it points to next element of hash table. 
- * - * This and the two previous functions are designed to be used together with - * the following idiom: - * - * for (struct aws_hash_iter iter = aws_hash_iter_begin(&map); - * !aws_hash_iter_done(&iter); aws_hash_iter_next(&iter)) { - * const key_type key = *(const key_type *)iter.element.key; - * value_type value = *(value_type *)iter.element.value; - * // etc. - * } - * - * Note that calling this on an iter which is "done" is idempotent: - * i.e. it will return another iter which is "done". - */ -AWS_COMMON_API -void aws_hash_iter_next(struct aws_hash_iter *iter); - -/** - * Deletes the element currently pointed-to by the hash iterator. - * After calling this method, the element member of the iterator - * should not be accessed until the next call to aws_hash_iter_next. - * - * @param destroy_contents If true, the destructors for the key and value - * will be called. - */ -AWS_COMMON_API -void aws_hash_iter_delete(struct aws_hash_iter *iter, bool destroy_contents); - -/** - * Attempts to locate an element at key. If the element is found, a - * pointer to the value is placed in *p_elem; if it is not found, - * *pElem is set to NULL. Either way, AWS_OP_SUCCESS is returned. - * - * This method does not change the state of the hash table. Therefore, it - * is safe to call _find from multiple threads on the same hash table, - * provided no mutating operations happen in parallel. - * - * Calling code may update the value in the hash table by modifying **pElem - * after a successful find. However, this pointer is not guaranteed to - * remain usable after a subsequent call to _put, _delete, _clear, or - * _clean_up. - */ - -AWS_COMMON_API -int aws_hash_table_find(const struct aws_hash_table *map, const void *key, struct aws_hash_element **p_elem); - -/** - * Attempts to locate an element at key. If no such element was found, - * creates a new element, with value initialized to NULL. In either case, a - * pointer to the element is placed in *p_elem. 
- * - * If was_created is non-NULL, *was_created is set to 0 if an existing - * element was found, or 1 is a new element was created. - * - * Returns AWS_OP_SUCCESS if an item was found or created. - * Raises AWS_ERROR_OOM if hash table expansion was required and memory - * allocation failed. - */ -AWS_COMMON_API -int aws_hash_table_create( - struct aws_hash_table *map, - const void *key, - struct aws_hash_element **p_elem, - int *was_created); - -/** - * Inserts a new element at key, with the given value. If another element - * exists at that key, the old element will be overwritten; both old key and - * value objects will be destroyed. - * - * If was_created is non-NULL, *was_created is set to 0 if an existing - * element was found, or 1 is a new element was created. - * - * Returns AWS_OP_SUCCESS if an item was found or created. - * Raises AWS_ERROR_OOM if hash table expansion was required and memory - */ -AWS_COMMON_API -int aws_hash_table_put(struct aws_hash_table *map, const void *key, void *value, int *was_created); - -/** - * Removes element at key. Always returns AWS_OP_SUCCESS. - * - * If pValue is non-NULL, the existing value (if any) is moved into - * (*value) before removing from the table, and destroy_fn is _not_ - * invoked. If pValue is NULL, then (if the element existed) destroy_fn - * will be invoked on the element being removed. - * - * If was_present is non-NULL, it is set to 0 if the element was - * not present, or 1 if it was present (and is now removed). - */ -AWS_COMMON_API -int aws_hash_table_remove( - struct aws_hash_table *map, - const void *key, - struct aws_hash_element *p_value, - int *was_present); - -/** + */ + +#include <aws/common/common.h> + +#include <stddef.h> + +#define AWS_COMMON_HASH_TABLE_ITER_CONTINUE (1 << 0) +#define AWS_COMMON_HASH_TABLE_ITER_DELETE (1 << 1) + +/** + * Hash table data structure. This module provides an automatically resizing + * hash table implementation for general purpose use. 
The hash table stores a + * mapping between void * keys and values; it is expected that in most cases, + * these will point to a structure elsewhere in the heap, instead of inlining a + * key or value into the hash table element itself. + * + * Currently, this hash table implements a variant of robin hood hashing, but + * we do not guarantee that this won't change in the future. + * + * Associated with each hash function are four callbacks: + * + * hash_fn - A hash function from the keys to a uint64_t. It is critical that + * the hash function for a key does not change while the key is in the hash + * table; violating this results in undefined behavior. Collisions are + * tolerated, though naturally with reduced performance. + * + * equals_fn - An equality comparison function. This function must be + * reflexive and consistent with hash_fn. + * + * destroy_key_fn, destroy_value_fn - Optional callbacks invoked when the + * table is cleared or cleaned up and at the caller's option when an element + * is removed from the table. Either or both may be set to NULL, which + * has the same effect as a no-op destroy function. + * + * This datastructure can be safely moved between threads, subject to the + * requirements of the underlying allocator. It is also safe to invoke + * non-mutating operations on the hash table from multiple threads. A suitable + * memory barrier must be used when transitioning from single-threaded mutating + * usage to multithreaded usage. + */ +struct hash_table_state; /* Opaque pointer */ +struct aws_hash_table { + struct hash_table_state *p_impl; +}; + +/** + * Represents an element in the hash table. Various operations on the hash + * table may provide pointers to elements stored within the hash table; + * generally, calling code may alter value, but must not alter key (or any + * information used to compute key's hash code). 
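The hash_fn / equals_fn contract documented above — the hash of a key must never change while the key is in the table, and equal keys must hash equal — can be illustrated with a pair of stand-in callbacks. This is a minimal sketch: `demo_hash_c_str` (FNV-1a) and `demo_eq_c_str` are hypothetical names, not aws-c-common symbols (the library ships `aws_hash_c_string` / `aws_hash_callback_c_str_eq` for this purpose).

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-in for an aws_hash_fn: FNV-1a over a
 * NUL-terminated C string. Any hash works, provided keys that the
 * eq callback considers equal always produce the same hash. */
static uint64_t demo_hash_c_str(const void *key) {
    uint64_t h = 0xcbf29ce484222325ULL;
    for (const unsigned char *p = key; *p; ++p) {
        h = (h ^ *p) * 0x100000001b3ULL;
    }
    return h;
}

/* Matching aws_hash_callback_eq_fn stand-in: consistent with the
 * hash above because it compares exactly the bytes the hash read. */
static bool demo_eq_c_str(const void *a, const void *b) {
    return strcmp(a, b) == 0;
}
```

Because the eq callback is byte-wise, two distinct buffers holding the same string compare equal and hash identically, which is what the table relies on during lookup.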
+ * + * Pointers to elements within the hash are invalidated whenever an operation + * which may change the number of elements in the hash is invoked (i.e. put, + * delete, clear, and clean_up), regardless of whether the number of elements + * actually changes. + */ +struct aws_hash_element { + const void *key; + void *value; +}; + +enum aws_hash_iter_status { + AWS_HASH_ITER_STATUS_DONE, + AWS_HASH_ITER_STATUS_DELETE_CALLED, + AWS_HASH_ITER_STATUS_READY_FOR_USE, +}; + +struct aws_hash_iter { + const struct aws_hash_table *map; + struct aws_hash_element element; + size_t slot; + size_t limit; + enum aws_hash_iter_status status; + /* + * Reserving extra fields for binary compatibility with future expansion of + * iterator in case hash table implementation changes. + */ + int unused_0; + void *unused_1; + void *unused_2; +}; + +/** + * Prototype for a key hashing function pointer. + */ +typedef uint64_t(aws_hash_fn)(const void *key); + +/** + * Prototype for a hash table equality check function pointer. + * + * This type is usually used for a function that compares two hash table + * keys, but note that the same type is used for a function that compares + * two hash table values in aws_hash_table_eq. + * + * Equality functions used in a hash table must be reflexive (i.e., a == b if + * and only if b == a), and must be consistent with the hash function in use. + */ +typedef bool(aws_hash_callback_eq_fn)(const void *a, const void *b); + +/** + * Prototype for a hash table key or value destructor function pointer. + * + * This function is used to destroy elements in the hash table when the + * table is cleared or cleaned up. + * + * Note that functions which remove individual elements from the hash + * table provide options of whether or not to invoke the destructors + * on the key and value of a removed element. 
+ */ +typedef void(aws_hash_callback_destroy_fn)(void *key_or_value); + +AWS_EXTERN_C_BEGIN + +/** + * Initializes a hash map with initial capacity for 'size' elements + * without resizing. Uses hash_fn to compute the hash of each element. + * equals_fn to compute equality of two keys. Whenever an element is + * removed without being returned, destroy_key_fn is run on the pointer + * to the key and destroy_value_fn is run on the pointer to the value. + * Either or both may be NULL if a callback is not desired in this case. + */ +AWS_COMMON_API +int aws_hash_table_init( + struct aws_hash_table *map, + struct aws_allocator *alloc, + size_t size, + aws_hash_fn *hash_fn, + aws_hash_callback_eq_fn *equals_fn, + aws_hash_callback_destroy_fn *destroy_key_fn, + aws_hash_callback_destroy_fn *destroy_value_fn); + +/** + * Deletes every element from map and frees all associated memory. + * destroy_fn will be called for each element. aws_hash_table_init + * must be called before reusing the hash table. + * + * This method is idempotent. + */ +AWS_COMMON_API +void aws_hash_table_clean_up(struct aws_hash_table *map); + +/** + * Safely swaps two hash tables. Note that we swap the entirety of the hash + * table, including which allocator is associated. + * + * Neither hash table is required to be initialized; if one or both is + * uninitialized, then the uninitialized state is also swapped. + */ +AWS_COMMON_API +void aws_hash_table_swap(struct aws_hash_table *AWS_RESTRICT a, struct aws_hash_table *AWS_RESTRICT b); + +/** + * Moves the hash table in 'from' to 'to'. After this move, 'from' will + * be identical to the state of the original 'to' hash table, and 'to' + * will be in the same state as if it had been passed to aws_hash_table_clean_up + * (that is, it will have no memory allocated, and it will be safe to + * either discard it or call aws_hash_table_clean_up again). + * + * Note that 'to' will not be cleaned up. 
You should make sure that 'to' + * is either uninitialized or cleaned up before moving a hashtable into + * it. + */ +AWS_COMMON_API +void aws_hash_table_move(struct aws_hash_table *AWS_RESTRICT to, struct aws_hash_table *AWS_RESTRICT from); + +/** + * Returns the current number of entries in the table. + */ +AWS_COMMON_API +size_t aws_hash_table_get_entry_count(const struct aws_hash_table *map); + +/** + * Returns an iterator to be used for iterating through a hash table. + * Iterator will already point to the first element of the table it finds, + * which can be accessed as iter.element. + * + * This function cannot fail, but if there are no elements in the table, + * the returned iterator will return true for aws_hash_iter_done(&iter). + */ +AWS_COMMON_API +struct aws_hash_iter aws_hash_iter_begin(const struct aws_hash_table *map); + +/** + * Returns true if iterator is done iterating through table, false otherwise. + * If this is true, the iterator will not include an element of the table. + */ +AWS_COMMON_API +bool aws_hash_iter_done(const struct aws_hash_iter *iter); + +/** + * Updates iterator so that it points to next element of hash table. + * + * This and the two previous functions are designed to be used together with + * the following idiom: + * + * for (struct aws_hash_iter iter = aws_hash_iter_begin(&map); + * !aws_hash_iter_done(&iter); aws_hash_iter_next(&iter)) { + * const key_type key = *(const key_type *)iter.element.key; + * value_type value = *(value_type *)iter.element.value; + * // etc. + * } + * + * Note that calling this on an iter which is "done" is idempotent: + * i.e. it will return another iter which is "done". + */ +AWS_COMMON_API +void aws_hash_iter_next(struct aws_hash_iter *iter); + +/** + * Deletes the element currently pointed-to by the hash iterator. + * After calling this method, the element member of the iterator + * should not be accessed until the next call to aws_hash_iter_next. 
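The begin/done/next idiom shown in the comment above can be sketched without the hash table itself. The following is an illustrative mock iterating a plain int array; `demo_iter` and friends are hypothetical names mirroring the documented shape (including the rule that advancing a "done" iterator stays "done"), not the real `aws_hash_iter` API.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical miniature iterator in the begin/done/next shape
 * documented above, walking an int array instead of hash slots. */
struct demo_iter {
    const int *items;
    size_t slot;
    size_t limit;
};

static struct demo_iter demo_iter_begin(const int *items, size_t count) {
    struct demo_iter it = {items, 0, count};
    return it;
}

static bool demo_iter_done(const struct demo_iter *it) {
    return it->slot >= it->limit;
}

/* Like aws_hash_iter_next, advancing a "done" iterator is a no-op. */
static void demo_iter_next(struct demo_iter *it) {
    if (it->slot < it->limit) {
        it->slot++;
    }
}

/* The documented for-loop idiom, applied to the mock iterator. */
static int demo_sum(const int *items, size_t count) {
    int sum = 0;
    for (struct demo_iter it = demo_iter_begin(items, count); !demo_iter_done(&it);
         demo_iter_next(&it)) {
        sum += items[it.slot];
    }
    return sum;
}
```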
+ * + * @param destroy_contents If true, the destructors for the key and value + * will be called. + */ +AWS_COMMON_API +void aws_hash_iter_delete(struct aws_hash_iter *iter, bool destroy_contents); + +/** + * Attempts to locate an element at key. If the element is found, a + * pointer to the value is placed in *p_elem; if it is not found, + * *pElem is set to NULL. Either way, AWS_OP_SUCCESS is returned. + * + * This method does not change the state of the hash table. Therefore, it + * is safe to call _find from multiple threads on the same hash table, + * provided no mutating operations happen in parallel. + * + * Calling code may update the value in the hash table by modifying **pElem + * after a successful find. However, this pointer is not guaranteed to + * remain usable after a subsequent call to _put, _delete, _clear, or + * _clean_up. + */ + +AWS_COMMON_API +int aws_hash_table_find(const struct aws_hash_table *map, const void *key, struct aws_hash_element **p_elem); + +/** + * Attempts to locate an element at key. If no such element was found, + * creates a new element, with value initialized to NULL. In either case, a + * pointer to the element is placed in *p_elem. + * + * If was_created is non-NULL, *was_created is set to 0 if an existing + * element was found, or 1 is a new element was created. + * + * Returns AWS_OP_SUCCESS if an item was found or created. + * Raises AWS_ERROR_OOM if hash table expansion was required and memory + * allocation failed. + */ +AWS_COMMON_API +int aws_hash_table_create( + struct aws_hash_table *map, + const void *key, + struct aws_hash_element **p_elem, + int *was_created); + +/** + * Inserts a new element at key, with the given value. If another element + * exists at that key, the old element will be overwritten; both old key and + * value objects will be destroyed. + * + * If was_created is non-NULL, *was_created is set to 0 if an existing + * element was found, or 1 is a new element was created. 
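The `was_created` convention documented for put/create (0 when an existing element was found, 1 when a new one was made, with the function returning success either way) can be sketched against a tiny fixed-size map. `demo_map_put` and `DEMO_OP_SUCCESS` are illustrative stand-ins, not aws-c-common symbols.

```c
#include <stddef.h>
#include <string.h>

/* Stand-ins for AWS_OP_SUCCESS and a bounded string->int map. */
#define DEMO_OP_SUCCESS 0
#define DEMO_MAX 8

struct demo_map {
    const char *keys[DEMO_MAX];
    int values[DEMO_MAX];
    size_t count;
};

/* Mirrors the documented contract: overwrite on an existing key
 * sets *was_created to 0, inserting a new key sets it to 1, and
 * success is reported in both cases. */
static int demo_map_put(struct demo_map *map, const char *key, int value, int *was_created) {
    for (size_t i = 0; i < map->count; ++i) {
        if (strcmp(map->keys[i], key) == 0) {
            map->values[i] = value; /* existing element overwritten */
            if (was_created) {
                *was_created = 0;
            }
            return DEMO_OP_SUCCESS;
        }
    }
    map->keys[map->count] = key;
    map->values[map->count] = value;
    map->count++;
    if (was_created) {
        *was_created = 1;
    }
    return DEMO_OP_SUCCESS;
}
```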
+ * + * Returns AWS_OP_SUCCESS if an item was found or created. + * Raises AWS_ERROR_OOM if hash table expansion was required and memory + */ +AWS_COMMON_API +int aws_hash_table_put(struct aws_hash_table *map, const void *key, void *value, int *was_created); + +/** + * Removes element at key. Always returns AWS_OP_SUCCESS. + * + * If pValue is non-NULL, the existing value (if any) is moved into + * (*value) before removing from the table, and destroy_fn is _not_ + * invoked. If pValue is NULL, then (if the element existed) destroy_fn + * will be invoked on the element being removed. + * + * If was_present is non-NULL, it is set to 0 if the element was + * not present, or 1 if it was present (and is now removed). + */ +AWS_COMMON_API +int aws_hash_table_remove( + struct aws_hash_table *map, + const void *key, + struct aws_hash_element *p_value, + int *was_present); + +/** * Removes element already known (typically by find()). * * p_value should point to a valid element returned by create() or find(). @@ -304,113 +304,113 @@ AWS_COMMON_API int aws_hash_table_remove_element(struct aws_hash_table *map, struct aws_hash_element *p_value); /** - * Iterates through every element in the map and invokes the callback on - * that item. Iteration is performed in an arbitrary, implementation-defined - * order, and is not guaranteed to be consistent across invocations. - * - * The callback may change the value associated with the key by overwriting - * the value pointed-to by value. In this case, the on_element_removed - * callback will not be invoked, unless the callback invokes - * AWS_COMMON_HASH_TABLE_ITER_DELETE (in which case the on_element_removed - * is given the updated value). 
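The CONTINUE/DELETE bitmask protocol that foreach callbacks must follow can be sketched over a plain array. `DEMO_ITER_*`, `demo_foreach`, and `demo_drop_odd` are local stand-ins for illustration; only the bitmask semantics (delete the current element, stop when CONTINUE is absent) are taken from the documentation above.

```c
#include <stddef.h>

/* Stand-ins for AWS_COMMON_HASH_TABLE_ITER_CONTINUE / _DELETE. */
#define DEMO_ITER_CONTINUE (1 << 0)
#define DEMO_ITER_DELETE (1 << 1)

/* Visits each element, compacts away elements whose callback sets
 * DEMO_ITER_DELETE, and stops early when CONTINUE is absent.
 * Returns the number of elements remaining. */
static size_t demo_foreach(int *items, size_t count, int (*callback)(void *ctx, int *item), void *ctx) {
    size_t kept = 0;
    for (size_t i = 0; i < count; ++i) {
        int rv = callback(ctx, &items[i]);
        if (!(rv & DEMO_ITER_DELETE)) {
            items[kept++] = items[i];
        }
        if (!(rv & DEMO_ITER_CONTINUE)) {
            for (++i; i < count; ++i) {
                items[kept++] = items[i]; /* keep the unvisited tail */
            }
            break;
        }
    }
    return kept;
}

/* Example callback: delete odd values and keep iterating. */
static int demo_drop_odd(void *ctx, int *item) {
    (void)ctx;
    return (*item % 2) ? (DEMO_ITER_CONTINUE | DEMO_ITER_DELETE) : DEMO_ITER_CONTINUE;
}
```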
- * - * The callback must return a bitmask of zero or more of the following values - * ORed together: - * - * # AWS_COMMON_HASH_TABLE_ITER_CONTINUE - Continues iteration to the next - * element (if not set, iteration stops) - * # AWS_COMMON_HASH_TABLE_ITER_DELETE - Deletes the current value and - * continues iteration. destroy_fn will NOT be invoked. - * - * Invoking any method which may change the contents of the hashtable - * during iteration results in undefined behavior. However, you may safely - * invoke non-mutating operations during an iteration. - * - * This operation is mutating only if AWS_COMMON_HASH_TABLE_ITER_DELETE - * is returned at some point during iteration. Otherwise, it is non-mutating - * and is safe to invoke in parallel with other non-mutating operations. - */ - -AWS_COMMON_API -int aws_hash_table_foreach( - struct aws_hash_table *map, - int (*callback)(void *context, struct aws_hash_element *p_element), - void *context); - -/** - * Compares two hash tables for equality. Both hash tables must have equivalent - * key comparators; values will be compared using the comparator passed into this - * function. The key hash function does not need to be equivalent between the - * two hash tables. - */ -AWS_COMMON_API -bool aws_hash_table_eq( - const struct aws_hash_table *a, - const struct aws_hash_table *b, - aws_hash_callback_eq_fn *value_eq); - -/** - * Removes every element from the hash map. destroy_fn will be called for - * each element. - */ -AWS_COMMON_API -void aws_hash_table_clear(struct aws_hash_table *map); - -/** - * Convenience hash function for NULL-terminated C-strings - */ -AWS_COMMON_API -uint64_t aws_hash_c_string(const void *item); - -/** - * Convenience hash function for struct aws_strings. - * Hash is same as used on the string bytes by aws_hash_c_string. - */ -AWS_COMMON_API -uint64_t aws_hash_string(const void *item); - -/** - * Convenience hash function for struct aws_byte_cursor. 
- * Hash is same as used on the string bytes by aws_hash_c_string. - */ -AWS_COMMON_API -uint64_t aws_hash_byte_cursor_ptr(const void *item); - -/** - * Convenience hash function which hashes the pointer value directly, - * without dereferencing. This can be used in cases where pointer identity - * is desired, or where a uintptr_t is encoded into a const void *. - */ -AWS_COMMON_API -uint64_t aws_hash_ptr(const void *item); - + * Iterates through every element in the map and invokes the callback on + * that item. Iteration is performed in an arbitrary, implementation-defined + * order, and is not guaranteed to be consistent across invocations. + * + * The callback may change the value associated with the key by overwriting + * the value pointed-to by value. In this case, the on_element_removed + * callback will not be invoked, unless the callback invokes + * AWS_COMMON_HASH_TABLE_ITER_DELETE (in which case the on_element_removed + * is given the updated value). + * + * The callback must return a bitmask of zero or more of the following values + * ORed together: + * + * # AWS_COMMON_HASH_TABLE_ITER_CONTINUE - Continues iteration to the next + * element (if not set, iteration stops) + * # AWS_COMMON_HASH_TABLE_ITER_DELETE - Deletes the current value and + * continues iteration. destroy_fn will NOT be invoked. + * + * Invoking any method which may change the contents of the hashtable + * during iteration results in undefined behavior. However, you may safely + * invoke non-mutating operations during an iteration. + * + * This operation is mutating only if AWS_COMMON_HASH_TABLE_ITER_DELETE + * is returned at some point during iteration. Otherwise, it is non-mutating + * and is safe to invoke in parallel with other non-mutating operations. + */ + +AWS_COMMON_API +int aws_hash_table_foreach( + struct aws_hash_table *map, + int (*callback)(void *context, struct aws_hash_element *p_element), + void *context); + +/** + * Compares two hash tables for equality. 
Both hash tables must have equivalent + * key comparators; values will be compared using the comparator passed into this + * function. The key hash function does not need to be equivalent between the + * two hash tables. + */ +AWS_COMMON_API +bool aws_hash_table_eq( + const struct aws_hash_table *a, + const struct aws_hash_table *b, + aws_hash_callback_eq_fn *value_eq); + +/** + * Removes every element from the hash map. destroy_fn will be called for + * each element. + */ +AWS_COMMON_API +void aws_hash_table_clear(struct aws_hash_table *map); + +/** + * Convenience hash function for NULL-terminated C-strings + */ +AWS_COMMON_API +uint64_t aws_hash_c_string(const void *item); + +/** + * Convenience hash function for struct aws_strings. + * Hash is same as used on the string bytes by aws_hash_c_string. + */ +AWS_COMMON_API +uint64_t aws_hash_string(const void *item); + +/** + * Convenience hash function for struct aws_byte_cursor. + * Hash is same as used on the string bytes by aws_hash_c_string. + */ +AWS_COMMON_API +uint64_t aws_hash_byte_cursor_ptr(const void *item); + +/** + * Convenience hash function which hashes the pointer value directly, + * without dereferencing. This can be used in cases where pointer identity + * is desired, or where a uintptr_t is encoded into a const void *. + */ +AWS_COMMON_API +uint64_t aws_hash_ptr(const void *item); + AWS_COMMON_API uint64_t aws_hash_combine(uint64_t item1, uint64_t item2); -/** - * Convenience eq callback for NULL-terminated C-strings - */ -AWS_COMMON_API -bool aws_hash_callback_c_str_eq(const void *a, const void *b); - -/** - * Convenience eq callback for AWS strings - */ -AWS_COMMON_API -bool aws_hash_callback_string_eq(const void *a, const void *b); - -/** - * Convenience destroy callback for AWS strings - */ -AWS_COMMON_API -void aws_hash_callback_string_destroy(void *a); - -/** - * Equality function which compares pointer equality. 
- */ -AWS_COMMON_API -bool aws_ptr_eq(const void *a, const void *b); - +/** + * Convenience eq callback for NULL-terminated C-strings + */ +AWS_COMMON_API +bool aws_hash_callback_c_str_eq(const void *a, const void *b); + +/** + * Convenience eq callback for AWS strings + */ +AWS_COMMON_API +bool aws_hash_callback_string_eq(const void *a, const void *b); + +/** + * Convenience destroy callback for AWS strings + */ +AWS_COMMON_API +void aws_hash_callback_string_destroy(void *a); + +/** + * Equality function which compares pointer equality. + */ +AWS_COMMON_API +bool aws_ptr_eq(const void *a, const void *b); + /** * Best-effort check of hash_table_state data-structure invariants */ @@ -423,6 +423,6 @@ bool aws_hash_table_is_valid(const struct aws_hash_table *map); AWS_COMMON_API bool aws_hash_iter_is_valid(const struct aws_hash_iter *iter); -AWS_EXTERN_C_END - -#endif /* AWS_COMMON_HASH_TABLE_H */ +AWS_EXTERN_C_END + +#endif /* AWS_COMMON_HASH_TABLE_H */ diff --git a/contrib/restricted/aws/aws-c-common/include/aws/common/linked_list.h b/contrib/restricted/aws/aws-c-common/include/aws/common/linked_list.h index e4c3be9637..5d578adbf0 100644 --- a/contrib/restricted/aws/aws-c-common/include/aws/common/linked_list.h +++ b/contrib/restricted/aws/aws-c-common/include/aws/common/linked_list.h @@ -1,28 +1,28 @@ -#ifndef AWS_COMMON_LINKED_LIST_H -#define AWS_COMMON_LINKED_LIST_H - +#ifndef AWS_COMMON_LINKED_LIST_H +#define AWS_COMMON_LINKED_LIST_H + /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. 
- */ - -#include <aws/common/common.h> - -#include <stddef.h> - -struct aws_linked_list_node { - struct aws_linked_list_node *next; - struct aws_linked_list_node *prev; -}; - -struct aws_linked_list { - struct aws_linked_list_node head; - struct aws_linked_list_node tail; -}; - + */ + +#include <aws/common/common.h> + +#include <stddef.h> + +struct aws_linked_list_node { + struct aws_linked_list_node *next; + struct aws_linked_list_node *prev; +}; + +struct aws_linked_list { + struct aws_linked_list_node head; + struct aws_linked_list_node tail; +}; + AWS_EXTERN_C_BEGIN -/** +/** * Set node's next and prev pointers to NULL. */ AWS_STATIC_IMPL void aws_linked_list_node_reset(struct aws_linked_list_node *node); @@ -69,99 +69,99 @@ AWS_STATIC_IMPL bool aws_linked_list_node_prev_is_valid(const struct aws_linked_ AWS_STATIC_IMPL bool aws_linked_list_is_valid_deep(const struct aws_linked_list *list); /** - * Initializes the list. List will be empty after this call. - */ + * Initializes the list. List will be empty after this call. + */ AWS_STATIC_IMPL void aws_linked_list_init(struct aws_linked_list *list); - -/** - * Returns an iteration pointer for the first element in the list. - */ + +/** + * Returns an iteration pointer for the first element in the list. + */ AWS_STATIC_IMPL struct aws_linked_list_node *aws_linked_list_begin(const struct aws_linked_list *list); - -/** - * Returns an iteration pointer for one past the last element in the list. - */ + +/** + * Returns an iteration pointer for one past the last element in the list. + */ AWS_STATIC_IMPL const struct aws_linked_list_node *aws_linked_list_end(const struct aws_linked_list *list); - -/** - * Returns a pointer for the last element in the list. - * Used to begin iterating the list in reverse. Ex: - * for (i = aws_linked_list_rbegin(list); i != aws_linked_list_rend(list); i = aws_linked_list_prev(i)) {...} - */ + +/** + * Returns a pointer for the last element in the list. 
+ * Used to begin iterating the list in reverse. Ex: + * for (i = aws_linked_list_rbegin(list); i != aws_linked_list_rend(list); i = aws_linked_list_prev(i)) {...} + */ AWS_STATIC_IMPL struct aws_linked_list_node *aws_linked_list_rbegin(const struct aws_linked_list *list); - -/** - * Returns the pointer to one before the first element in the list. - * Used to end iterating the list in reverse. - */ + +/** + * Returns the pointer to one before the first element in the list. + * Used to end iterating the list in reverse. + */ AWS_STATIC_IMPL const struct aws_linked_list_node *aws_linked_list_rend(const struct aws_linked_list *list); - -/** - * Returns the next element in the list. - */ + +/** + * Returns the next element in the list. + */ AWS_STATIC_IMPL struct aws_linked_list_node *aws_linked_list_next(const struct aws_linked_list_node *node); - -/** - * Returns the previous element in the list. - */ + +/** + * Returns the previous element in the list. + */ AWS_STATIC_IMPL struct aws_linked_list_node *aws_linked_list_prev(const struct aws_linked_list_node *node); - -/** - * Inserts to_add immediately after after. - */ -AWS_STATIC_IMPL void aws_linked_list_insert_after( - struct aws_linked_list_node *after, + +/** + * Inserts to_add immediately after after. + */ +AWS_STATIC_IMPL void aws_linked_list_insert_after( + struct aws_linked_list_node *after, struct aws_linked_list_node *to_add); /** * Swaps the order two nodes in the linked list. */ AWS_STATIC_IMPL void aws_linked_list_swap_nodes(struct aws_linked_list_node *a, struct aws_linked_list_node *b); - -/** - * Inserts to_add immediately before before. - */ -AWS_STATIC_IMPL void aws_linked_list_insert_before( - struct aws_linked_list_node *before, + +/** + * Inserts to_add immediately before before. 
+ */ +AWS_STATIC_IMPL void aws_linked_list_insert_before( + struct aws_linked_list_node *before, struct aws_linked_list_node *to_add); - -/** - * Removes the specified node from the list (prev/next point to each other) and - * returns the next node in the list. - */ + +/** + * Removes the specified node from the list (prev/next point to each other) and + * returns the next node in the list. + */ AWS_STATIC_IMPL void aws_linked_list_remove(struct aws_linked_list_node *node); - -/** - * Append new_node. - */ + +/** + * Append new_node. + */ AWS_STATIC_IMPL void aws_linked_list_push_back(struct aws_linked_list *list, struct aws_linked_list_node *node); - -/** - * Returns the element in the back of the list. - */ + +/** + * Returns the element in the back of the list. + */ AWS_STATIC_IMPL struct aws_linked_list_node *aws_linked_list_back(const struct aws_linked_list *list); - -/** - * Returns the element in the back of the list and removes it - */ + +/** + * Returns the element in the back of the list and removes it + */ AWS_STATIC_IMPL struct aws_linked_list_node *aws_linked_list_pop_back(struct aws_linked_list *list); - -/** - * Prepend new_node. - */ + +/** + * Prepend new_node. + */ AWS_STATIC_IMPL void aws_linked_list_push_front(struct aws_linked_list *list, struct aws_linked_list_node *node); -/** - * Returns the element in the front of the list. - */ +/** + * Returns the element in the front of the list. 
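The list above is sentinel-based: `begin()` is the node after the head sentinel and `end()` is the tail sentinel, so an empty list has `begin() == end()`. A minimal self-contained list in the same spirit (the `demo_*` types and names are illustrative, not the aws_linked_list implementation):

```c
#include <stddef.h>

/* Miniature sentinel-based doubly linked list mirroring the shape
 * of aws_linked_list: head/tail sentinels, intrusive nodes. */
struct demo_node {
    struct demo_node *next;
    struct demo_node *prev;
};

struct demo_list {
    struct demo_node head;
    struct demo_node tail;
};

static void demo_list_init(struct demo_list *list) {
    list->head.prev = NULL;
    list->head.next = &list->tail;
    list->tail.prev = &list->head;
    list->tail.next = NULL;
}

/* Splice to_add in immediately before 'before'. */
static void demo_list_insert_before(struct demo_node *before, struct demo_node *to_add) {
    to_add->prev = before->prev;
    to_add->next = before;
    before->prev->next = to_add;
    before->prev = to_add;
}

/* push_back is just insert_before(end). */
static void demo_list_push_back(struct demo_list *list, struct demo_node *node) {
    demo_list_insert_before(&list->tail, node);
}

static struct demo_node *demo_list_begin(struct demo_list *list) {
    return list->head.next;
}

static struct demo_node *demo_list_end(struct demo_list *list) {
    return &list->tail;
}

static size_t demo_list_count(struct demo_list *list) {
    size_t n = 0;
    for (struct demo_node *it = demo_list_begin(list); it != demo_list_end(list); it = it->next) {
        n++;
    }
    return n;
}
```

The sentinels remove every NULL check from insert and remove: interior nodes always have live prev/next neighbors.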
+ */ AWS_STATIC_IMPL struct aws_linked_list_node *aws_linked_list_front(const struct aws_linked_list *list); -/** - * Returns the element in the front of the list and removes it - */ +/** + * Returns the element in the front of the list and removes it + */ AWS_STATIC_IMPL struct aws_linked_list_node *aws_linked_list_pop_front(struct aws_linked_list *list); - + AWS_STATIC_IMPL void aws_linked_list_swap_contents( struct aws_linked_list *AWS_RESTRICT a, struct aws_linked_list *AWS_RESTRICT b); - + /** * Remove all nodes from one list, and add them to the back of another. * @@ -170,7 +170,7 @@ AWS_STATIC_IMPL void aws_linked_list_swap_contents( AWS_STATIC_IMPL void aws_linked_list_move_all_back( struct aws_linked_list *AWS_RESTRICT dst, struct aws_linked_list *AWS_RESTRICT src); - + /** * Remove all nodes from one list, and add them to the front of another. * @@ -179,10 +179,10 @@ AWS_STATIC_IMPL void aws_linked_list_move_all_back( AWS_STATIC_IMPL void aws_linked_list_move_all_front( struct aws_linked_list *AWS_RESTRICT dst, struct aws_linked_list *AWS_RESTRICT src); - + #ifndef AWS_NO_STATIC_IMPL # include <aws/common/linked_list.inl> #endif /* AWS_NO_STATIC_IMPL */ AWS_EXTERN_C_END -#endif /* AWS_COMMON_LINKED_LIST_H */ +#endif /* AWS_COMMON_LINKED_LIST_H */ diff --git a/contrib/restricted/aws/aws-c-common/include/aws/common/lru_cache.h b/contrib/restricted/aws/aws-c-common/include/aws/common/lru_cache.h index 37eff525f5..0aa7162ecf 100644 --- a/contrib/restricted/aws/aws-c-common/include/aws/common/lru_cache.h +++ b/contrib/restricted/aws/aws-c-common/include/aws/common/lru_cache.h @@ -1,42 +1,42 @@ -#ifndef AWS_COMMON_LRU_CACHE_H -#define AWS_COMMON_LRU_CACHE_H +#ifndef AWS_COMMON_LRU_CACHE_H +#define AWS_COMMON_LRU_CACHE_H /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. 
- */ - + */ + #include <aws/common/cache.h> - -AWS_EXTERN_C_BEGIN - -/** + +AWS_EXTERN_C_BEGIN + +/** * Initializes the Least-recently-used cache. Sets up the underlying linked hash table. * Once `max_items` elements have been added, the least recently used item will be removed. For the other parameters, * see aws/common/hash_table.h. Hash table semantics of these arguments are preserved.(Yes the one that was the answer * to that interview question that one time). - */ -AWS_COMMON_API + */ +AWS_COMMON_API struct aws_cache *aws_cache_new_lru( - struct aws_allocator *allocator, - aws_hash_fn *hash_fn, - aws_hash_callback_eq_fn *equals_fn, - aws_hash_callback_destroy_fn *destroy_key_fn, - aws_hash_callback_destroy_fn *destroy_value_fn, - size_t max_items); - -/** - * Accesses the least-recently-used element, sets it to most-recently-used - * element, and returns the value. - */ -AWS_COMMON_API + struct aws_allocator *allocator, + aws_hash_fn *hash_fn, + aws_hash_callback_eq_fn *equals_fn, + aws_hash_callback_destroy_fn *destroy_key_fn, + aws_hash_callback_destroy_fn *destroy_value_fn, + size_t max_items); + +/** + * Accesses the least-recently-used element, sets it to most-recently-used + * element, and returns the value. + */ +AWS_COMMON_API void *aws_lru_cache_use_lru_element(struct aws_cache *cache); - -/** - * Accesses the most-recently-used element and returns its value. - */ -AWS_COMMON_API + +/** + * Accesses the most-recently-used element and returns its value. 
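The eviction policy described above — once `max_items` entries exist, inserting a new key removes the least-recently-used one, and reads promote an entry to most-recently-used — can be illustrated with a toy fixed-size cache. This flat array is only a behavioural sketch (names ours); the real `aws_cache_new_lru` is built on a linked hash table:

```c
#include <assert.h>
#include <stddef.h>

#define MAX_ITEMS 2  /* plays the role of max_items */

/* Toy LRU over int keys: timestamps stand in for the linked-list ordering. */
struct entry { int key; int value; int in_use; unsigned long stamp; };
static struct entry cache[MAX_ITEMS];
static unsigned long clock_tick;

static struct entry *find(int key) {
    for (int i = 0; i < MAX_ITEMS; i++)
        if (cache[i].in_use && cache[i].key == key) return &cache[i];
    return NULL;
}

static int *lru_get(int key) {
    struct entry *e = find(key);
    if (!e) return NULL;
    e->stamp = ++clock_tick;  /* touching an entry makes it most-recently-used */
    return &e->value;
}

static void lru_put(int key, int value) {
    struct entry *e = find(key);
    if (!e) {
        /* pick a free slot, or evict the entry with the oldest stamp (LRU) */
        e = &cache[0];
        for (int i = 1; i < MAX_ITEMS; i++) {
            if (!cache[i].in_use) { e = &cache[i]; break; }
            if (!e->in_use) break;
            if (cache[i].stamp < e->stamp) e = &cache[i];
        }
    }
    e->key = key; e->value = value; e->in_use = 1; e->stamp = ++clock_tick;
}
```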
+ */ +AWS_COMMON_API void *aws_lru_cache_get_mru_element(const struct aws_cache *cache); - -AWS_EXTERN_C_END - -#endif /* AWS_COMMON_LRU_CACHE_H */ + +AWS_EXTERN_C_END + +#endif /* AWS_COMMON_LRU_CACHE_H */ diff --git a/contrib/restricted/aws/aws-c-common/include/aws/common/math.gcc_overflow.inl b/contrib/restricted/aws/aws-c-common/include/aws/common/math.gcc_overflow.inl index 24ce3f0e00..d0cfec872e 100644 --- a/contrib/restricted/aws/aws-c-common/include/aws/common/math.gcc_overflow.inl +++ b/contrib/restricted/aws/aws-c-common/include/aws/common/math.gcc_overflow.inl @@ -4,111 +4,111 @@ /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. - */ - -/* - * This header is already included, but include it again to make editor - * highlighting happier. - */ -#include <aws/common/common.h> + */ + +/* + * This header is already included, but include it again to make editor + * highlighting happier. + */ +#include <aws/common/common.h> #include <aws/common/math.h> - + AWS_EXTERN_C_BEGIN -/** - * Multiplies a * b. If the result overflows, returns 2^64 - 1. - */ -AWS_STATIC_IMPL uint64_t aws_mul_u64_saturating(uint64_t a, uint64_t b) { - uint64_t res; - - if (__builtin_mul_overflow(a, b, &res)) { - res = UINT64_MAX; - } - - return res; -} - -/** - * If a * b overflows, returns AWS_OP_ERR; otherwise multiplies - * a * b, returns the result in *r, and returns AWS_OP_SUCCESS. - */ -AWS_STATIC_IMPL int aws_mul_u64_checked(uint64_t a, uint64_t b, uint64_t *r) { - if (__builtin_mul_overflow(a, b, r)) { - return aws_raise_error(AWS_ERROR_OVERFLOW_DETECTED); - } - return AWS_OP_SUCCESS; -} - -/** - * Multiplies a * b. If the result overflows, returns 2^32 - 1. 
- */ -AWS_STATIC_IMPL uint32_t aws_mul_u32_saturating(uint32_t a, uint32_t b) { - uint32_t res; - - if (__builtin_mul_overflow(a, b, &res)) { - res = UINT32_MAX; - } - - return res; -} - -/** - * If a * b overflows, returns AWS_OP_ERR; otherwise multiplies - * a * b, returns the result in *r, and returns AWS_OP_SUCCESS. - */ -AWS_STATIC_IMPL int aws_mul_u32_checked(uint32_t a, uint32_t b, uint32_t *r) { - if (__builtin_mul_overflow(a, b, r)) { - return aws_raise_error(AWS_ERROR_OVERFLOW_DETECTED); - } - return AWS_OP_SUCCESS; -} - -/** - * If a + b overflows, returns AWS_OP_ERR; otherwise adds - * a + b, returns the result in *r, and returns AWS_OP_SUCCESS. - */ -AWS_STATIC_IMPL int aws_add_u64_checked(uint64_t a, uint64_t b, uint64_t *r) { - if (__builtin_add_overflow(a, b, r)) { - return aws_raise_error(AWS_ERROR_OVERFLOW_DETECTED); - } - return AWS_OP_SUCCESS; -} - -/** - * Adds a + b. If the result overflows, returns 2^64 - 1. - */ -AWS_STATIC_IMPL uint64_t aws_add_u64_saturating(uint64_t a, uint64_t b) { - uint64_t res; - - if (__builtin_add_overflow(a, b, &res)) { - res = UINT64_MAX; - } - - return res; -} - -/** - * If a + b overflows, returns AWS_OP_ERR; otherwise adds - * a + b, returns the result in *r, and returns AWS_OP_SUCCESS. - */ -AWS_STATIC_IMPL int aws_add_u32_checked(uint32_t a, uint32_t b, uint32_t *r) { - if (__builtin_add_overflow(a, b, r)) { - return aws_raise_error(AWS_ERROR_OVERFLOW_DETECTED); - } - return AWS_OP_SUCCESS; -} - -/** - * Adds a + b. If the result overflows, returns 2^32 - 1. - */ +/** + * Multiplies a * b. If the result overflows, returns 2^64 - 1. + */ +AWS_STATIC_IMPL uint64_t aws_mul_u64_saturating(uint64_t a, uint64_t b) { + uint64_t res; + + if (__builtin_mul_overflow(a, b, &res)) { + res = UINT64_MAX; + } + + return res; +} + +/** + * If a * b overflows, returns AWS_OP_ERR; otherwise multiplies + * a * b, returns the result in *r, and returns AWS_OP_SUCCESS. 
+ */ +AWS_STATIC_IMPL int aws_mul_u64_checked(uint64_t a, uint64_t b, uint64_t *r) { + if (__builtin_mul_overflow(a, b, r)) { + return aws_raise_error(AWS_ERROR_OVERFLOW_DETECTED); + } + return AWS_OP_SUCCESS; +} + +/** + * Multiplies a * b. If the result overflows, returns 2^32 - 1. + */ +AWS_STATIC_IMPL uint32_t aws_mul_u32_saturating(uint32_t a, uint32_t b) { + uint32_t res; + + if (__builtin_mul_overflow(a, b, &res)) { + res = UINT32_MAX; + } + + return res; +} + +/** + * If a * b overflows, returns AWS_OP_ERR; otherwise multiplies + * a * b, returns the result in *r, and returns AWS_OP_SUCCESS. + */ +AWS_STATIC_IMPL int aws_mul_u32_checked(uint32_t a, uint32_t b, uint32_t *r) { + if (__builtin_mul_overflow(a, b, r)) { + return aws_raise_error(AWS_ERROR_OVERFLOW_DETECTED); + } + return AWS_OP_SUCCESS; +} + +/** + * If a + b overflows, returns AWS_OP_ERR; otherwise adds + * a + b, returns the result in *r, and returns AWS_OP_SUCCESS. + */ +AWS_STATIC_IMPL int aws_add_u64_checked(uint64_t a, uint64_t b, uint64_t *r) { + if (__builtin_add_overflow(a, b, r)) { + return aws_raise_error(AWS_ERROR_OVERFLOW_DETECTED); + } + return AWS_OP_SUCCESS; +} + +/** + * Adds a + b. If the result overflows, returns 2^64 - 1. + */ +AWS_STATIC_IMPL uint64_t aws_add_u64_saturating(uint64_t a, uint64_t b) { + uint64_t res; + + if (__builtin_add_overflow(a, b, &res)) { + res = UINT64_MAX; + } + + return res; +} + +/** + * If a + b overflows, returns AWS_OP_ERR; otherwise adds + * a + b, returns the result in *r, and returns AWS_OP_SUCCESS. + */ +AWS_STATIC_IMPL int aws_add_u32_checked(uint32_t a, uint32_t b, uint32_t *r) { + if (__builtin_add_overflow(a, b, r)) { + return aws_raise_error(AWS_ERROR_OVERFLOW_DETECTED); + } + return AWS_OP_SUCCESS; +} + +/** + * Adds a + b. If the result overflows, returns 2^32 - 1. 
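The implementations above all follow the same pattern around GCC/Clang's checked-arithmetic builtins: `__builtin_mul_overflow`/`__builtin_add_overflow` return nonzero on overflow and write the (wrapped) result through the out-pointer. A standalone sketch of that pattern, with plain `0`/`-1` standing in for `AWS_OP_SUCCESS`/`AWS_OP_ERR` (requires GCC or Clang):

```c
#include <assert.h>
#include <stdint.h>

/* Saturating multiply: clamp to 2^32 - 1 on overflow, same shape as
 * aws_mul_u32_saturating above. */
static uint32_t mul_u32_saturating(uint32_t a, uint32_t b) {
    uint32_t res;
    if (__builtin_mul_overflow(a, b, &res)) {
        res = UINT32_MAX;
    }
    return res;
}

/* Checked add: report overflow to the caller instead of clamping,
 * same shape as aws_add_u32_checked above. */
static int add_u32_checked(uint32_t a, uint32_t b, uint32_t *r) {
    return __builtin_add_overflow(a, b, r) ? -1 /* error */ : 0 /* success */;
}
```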
+ */ AWS_STATIC_IMPL uint32_t aws_add_u32_saturating(uint32_t a, uint32_t b) { - uint32_t res; - - if (__builtin_add_overflow(a, b, &res)) { - res = UINT32_MAX; - } - - return res; -} + uint32_t res; + + if (__builtin_add_overflow(a, b, &res)) { + res = UINT32_MAX; + } + + return res; +} AWS_EXTERN_C_END diff --git a/contrib/restricted/aws/aws-c-common/include/aws/common/math.h b/contrib/restricted/aws/aws-c-common/include/aws/common/math.h index 108e983639..027d0ff502 100644 --- a/contrib/restricted/aws/aws-c-common/include/aws/common/math.h +++ b/contrib/restricted/aws/aws-c-common/include/aws/common/math.h @@ -1,101 +1,101 @@ -#ifndef AWS_COMMON_MATH_H -#define AWS_COMMON_MATH_H - +#ifndef AWS_COMMON_MATH_H +#define AWS_COMMON_MATH_H + /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. - */ - -#include <aws/common/common.h> -#include <aws/common/config.h> - -#include <limits.h> -#include <stdlib.h> - -/* The number of bits in a size_t variable */ -#if SIZE_MAX == UINT32_MAX -# define SIZE_BITS 32 -#elif SIZE_MAX == UINT64_MAX -# define SIZE_BITS 64 -#else -# error "Target not supported" -#endif - -/* The largest power of two that can be stored in a size_t */ -#define SIZE_MAX_POWER_OF_TWO (((size_t)1) << (SIZE_BITS - 1)) - + */ + +#include <aws/common/common.h> +#include <aws/common/config.h> + +#include <limits.h> +#include <stdlib.h> + +/* The number of bits in a size_t variable */ +#if SIZE_MAX == UINT32_MAX +# define SIZE_BITS 32 +#elif SIZE_MAX == UINT64_MAX +# define SIZE_BITS 64 +#else +# error "Target not supported" +#endif + +/* The largest power of two that can be stored in a size_t */ +#define SIZE_MAX_POWER_OF_TWO (((size_t)1) << (SIZE_BITS - 1)) + AWS_EXTERN_C_BEGIN - + #if defined(AWS_HAVE_GCC_OVERFLOW_MATH_EXTENSIONS) && (defined(__clang__) || !defined(__cplusplus)) || \ (defined(__x86_64__) || defined(__aarch64__)) && defined(AWS_HAVE_GCC_INLINE_ASM) || \ defined(AWS_HAVE_MSVC_MULX) || 
defined(CBMC) || !defined(AWS_HAVE_GCC_OVERFLOW_MATH_EXTENSIONS) /* In all these cases, we can use fast static inline versions of this code */ # define AWS_COMMON_MATH_API AWS_STATIC_IMPL -#else -/* - * We got here because we are building in C++ mode but we only support overflow extensions - * in C mode. Because the fallback is _slow_ (involving a division), we'd prefer to make a - * non-inline call to the fast C intrinsics. - */ +#else +/* + * We got here because we are building in C++ mode but we only support overflow extensions + * in C mode. Because the fallback is _slow_ (involving a division), we'd prefer to make a + * non-inline call to the fast C intrinsics. + */ # define AWS_COMMON_MATH_API AWS_COMMON_API #endif - -/** - * Multiplies a * b. If the result overflows, returns 2^64 - 1. - */ + +/** + * Multiplies a * b. If the result overflows, returns 2^64 - 1. + */ AWS_COMMON_MATH_API uint64_t aws_mul_u64_saturating(uint64_t a, uint64_t b); - -/** - * If a * b overflows, returns AWS_OP_ERR; otherwise multiplies - * a * b, returns the result in *r, and returns AWS_OP_SUCCESS. - */ + +/** + * If a * b overflows, returns AWS_OP_ERR; otherwise multiplies + * a * b, returns the result in *r, and returns AWS_OP_SUCCESS. + */ AWS_COMMON_MATH_API int aws_mul_u64_checked(uint64_t a, uint64_t b, uint64_t *r); - -/** - * Multiplies a * b. If the result overflows, returns 2^32 - 1. - */ + +/** + * Multiplies a * b. If the result overflows, returns 2^32 - 1. + */ AWS_COMMON_MATH_API uint32_t aws_mul_u32_saturating(uint32_t a, uint32_t b); - -/** - * If a * b overflows, returns AWS_OP_ERR; otherwise multiplies - * a * b, returns the result in *r, and returns AWS_OP_SUCCESS. - */ + +/** + * If a * b overflows, returns AWS_OP_ERR; otherwise multiplies + * a * b, returns the result in *r, and returns AWS_OP_SUCCESS. + */ AWS_COMMON_MATH_API int aws_mul_u32_checked(uint32_t a, uint32_t b, uint32_t *r); - -/** - * Adds a + b. If the result overflows returns 2^64 - 1. 
- */ + +/** + * Adds a + b. If the result overflows returns 2^64 - 1. + */ AWS_COMMON_MATH_API uint64_t aws_add_u64_saturating(uint64_t a, uint64_t b); - -/** - * If a + b overflows, returns AWS_OP_ERR; otherwise adds - * a + b, returns the result in *r, and returns AWS_OP_SUCCESS. - */ + +/** + * If a + b overflows, returns AWS_OP_ERR; otherwise adds + * a + b, returns the result in *r, and returns AWS_OP_SUCCESS. + */ AWS_COMMON_MATH_API int aws_add_u64_checked(uint64_t a, uint64_t b, uint64_t *r); - -/** - * Adds a + b. If the result overflows returns 2^32 - 1. - */ + +/** + * Adds a + b. If the result overflows returns 2^32 - 1. + */ AWS_COMMON_MATH_API uint32_t aws_add_u32_saturating(uint32_t a, uint32_t b); - -/** - * If a + b overflows, returns AWS_OP_ERR; otherwise adds - * a + b, returns the result in *r, and returns AWS_OP_SUCCESS. - */ + +/** + * If a + b overflows, returns AWS_OP_ERR; otherwise adds + * a + b, returns the result in *r, and returns AWS_OP_SUCCESS. + */ AWS_COMMON_MATH_API int aws_add_u32_checked(uint32_t a, uint32_t b, uint32_t *r); - + /** * Subtracts a - b. If the result overflows returns 0. */ AWS_STATIC_IMPL uint64_t aws_sub_u64_saturating(uint64_t a, uint64_t b); - + /** * If a - b overflows, returns AWS_OP_ERR; otherwise subtracts * a - b, returns the result in *r, and returns AWS_OP_SUCCESS. */ AWS_STATIC_IMPL int aws_sub_u64_checked(uint64_t a, uint64_t b, uint64_t *r); - -/** + +/** * Subtracts a - b. If the result overflows returns 0. */ AWS_STATIC_IMPL uint32_t aws_sub_u32_saturating(uint32_t a, uint32_t b); @@ -107,39 +107,39 @@ AWS_STATIC_IMPL uint32_t aws_sub_u32_saturating(uint32_t a, uint32_t b); AWS_STATIC_IMPL int aws_sub_u32_checked(uint32_t a, uint32_t b, uint32_t *r); /** - * Multiplies a * b. If the result overflows, returns SIZE_MAX. - */ + * Multiplies a * b. If the result overflows, returns SIZE_MAX. 
+ */ AWS_STATIC_IMPL size_t aws_mul_size_saturating(size_t a, size_t b); - -/** - * Multiplies a * b and returns the result in *r. If the result - * overflows, returns AWS_OP_ERR; otherwise returns AWS_OP_SUCCESS. - */ + +/** + * Multiplies a * b and returns the result in *r. If the result + * overflows, returns AWS_OP_ERR; otherwise returns AWS_OP_SUCCESS. + */ AWS_STATIC_IMPL int aws_mul_size_checked(size_t a, size_t b, size_t *r); - -/** - * Adds a + b. If the result overflows returns SIZE_MAX. - */ + +/** + * Adds a + b. If the result overflows returns SIZE_MAX. + */ AWS_STATIC_IMPL size_t aws_add_size_saturating(size_t a, size_t b); - -/** - * Adds a + b and returns the result in *r. If the result - * overflows, returns AWS_OP_ERR; otherwise returns AWS_OP_SUCCESS. - */ + +/** + * Adds a + b and returns the result in *r. If the result + * overflows, returns AWS_OP_ERR; otherwise returns AWS_OP_SUCCESS. + */ AWS_STATIC_IMPL int aws_add_size_checked(size_t a, size_t b, size_t *r); - + /** * Adds [num] arguments (expected to be of size_t), and returns the result in *r. * If the result overflows, returns AWS_OP_ERR; otherwise returns AWS_OP_SUCCESS. */ AWS_COMMON_API int aws_add_size_checked_varargs(size_t num, size_t *r, ...); - -/** + +/** * Subtracts a - b. If the result overflows returns 0. - */ + */ AWS_STATIC_IMPL size_t aws_sub_size_saturating(size_t a, size_t b); - -/** + +/** * If a - b overflows, returns AWS_OP_ERR; otherwise subtracts * a - b, returns the result in *r, and returns AWS_OP_SUCCESS. */ @@ -150,11 +150,11 @@ AWS_STATIC_IMPL int aws_sub_size_checked(size_t a, size_t b, size_t *r); */ AWS_STATIC_IMPL bool aws_is_power_of_two(const size_t x); /** - * Function to find the smallest result that is power of 2 >= n. Returns AWS_OP_ERR if this cannot - * be done without overflow - */ + * Function to find the smallest result that is power of 2 >= n. 
Returns AWS_OP_ERR if this cannot + * be done without overflow + */ AWS_STATIC_IMPL int aws_round_up_to_power_of_two(size_t n, size_t *result); - + /** * Counts the number of leading 0 bits in an integer */ @@ -163,7 +163,7 @@ AWS_STATIC_IMPL size_t aws_clz_i32(int32_t n); AWS_STATIC_IMPL size_t aws_clz_u64(uint64_t n); AWS_STATIC_IMPL size_t aws_clz_i64(int64_t n); AWS_STATIC_IMPL size_t aws_clz_size(size_t n); - + /** * Counts the number of trailing 0 bits in an integer */ @@ -172,7 +172,7 @@ AWS_STATIC_IMPL size_t aws_ctz_i32(int32_t n); AWS_STATIC_IMPL size_t aws_ctz_u64(uint64_t n); AWS_STATIC_IMPL size_t aws_ctz_i64(int64_t n); AWS_STATIC_IMPL size_t aws_ctz_size(size_t n); - + AWS_STATIC_IMPL uint8_t aws_min_u8(uint8_t a, uint8_t b); AWS_STATIC_IMPL uint8_t aws_max_u8(uint8_t a, uint8_t b); AWS_STATIC_IMPL int8_t aws_min_i8(int8_t a, int8_t b); @@ -197,11 +197,11 @@ AWS_STATIC_IMPL float aws_min_float(float a, float b); AWS_STATIC_IMPL float aws_max_float(float a, float b); AWS_STATIC_IMPL double aws_min_double(double a, double b); AWS_STATIC_IMPL double aws_max_double(double a, double b); - + #ifndef AWS_NO_STATIC_IMPL # include <aws/common/math.inl> #endif /* AWS_NO_STATIC_IMPL */ AWS_EXTERN_C_END -#endif /* AWS_COMMON_MATH_H */ +#endif /* AWS_COMMON_MATH_H */ diff --git a/contrib/restricted/aws/aws-c-common/include/aws/common/mutex.h b/contrib/restricted/aws/aws-c-common/include/aws/common/mutex.h index 73c2ecfa55..5a4bb635c6 100644 --- a/contrib/restricted/aws/aws-c-common/include/aws/common/mutex.h +++ b/contrib/restricted/aws/aws-c-common/include/aws/common/mutex.h @@ -1,72 +1,72 @@ -#ifndef AWS_COMMON_MUTEX_H -#define AWS_COMMON_MUTEX_H - +#ifndef AWS_COMMON_MUTEX_H +#define AWS_COMMON_MUTEX_H + /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. 
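The `aws_is_power_of_two` / `aws_round_up_to_power_of_two` contract documented above can be sketched in standalone C. This is our illustration of the contract, not the AWS implementation; in particular, mapping `n == 0` to `1` is this sketch's choice:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

static int is_power_of_two(size_t x) {
    /* a power of two has exactly one bit set */
    return x != 0 && (x & (x - 1)) == 0;
}

/* Round n up to the smallest power of two >= n; return -1 (standing in for
 * AWS_OP_ERR) when that power would not fit in a size_t. */
static int round_up_to_power_of_two(size_t n, size_t *result) {
    size_t p = 1;
    if (n == 0) { *result = 1; return 0; }  /* sketch's choice for n == 0 */
    while (p < n) {
        if (p > SIZE_MAX / 2) return -1;    /* next doubling would overflow */
        p <<= 1;
    }
    *result = p;
    return 0;
}
```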
- */ - -#include <aws/common/common.h> -#ifdef _WIN32 -/* NOTE: Do not use this macro before including Windows.h */ + */ + +#include <aws/common/common.h> +#ifdef _WIN32 +/* NOTE: Do not use this macro before including Windows.h */ # define AWSMUTEX_TO_WINDOWS(pMutex) (PSRWLOCK) & (pMutex)->mutex_handle -#else -# include <pthread.h> -#endif - -struct aws_mutex { -#ifdef _WIN32 - void *mutex_handle; -#else - pthread_mutex_t mutex_handle; -#endif +#else +# include <pthread.h> +#endif + +struct aws_mutex { +#ifdef _WIN32 + void *mutex_handle; +#else + pthread_mutex_t mutex_handle; +#endif bool initialized; -}; - -#ifdef _WIN32 -# define AWS_MUTEX_INIT \ +}; + +#ifdef _WIN32 +# define AWS_MUTEX_INIT \ { .mutex_handle = NULL, .initialized = true } -#else -# define AWS_MUTEX_INIT \ +#else +# define AWS_MUTEX_INIT \ { .mutex_handle = PTHREAD_MUTEX_INITIALIZER, .initialized = true } -#endif - -AWS_EXTERN_C_BEGIN - -/** - * Initializes a new platform instance of mutex. - */ -AWS_COMMON_API -int aws_mutex_init(struct aws_mutex *mutex); - -/** - * Cleans up internal resources. - */ -AWS_COMMON_API -void aws_mutex_clean_up(struct aws_mutex *mutex); - -/** - * Blocks until it acquires the lock. While on some platforms such as Windows, - * this may behave as a reentrant mutex, you should not treat it like one. On - * platforms it is possible for it to be non-reentrant, it will be. - */ -AWS_COMMON_API -int aws_mutex_lock(struct aws_mutex *mutex); - -/** - * Attempts to acquire the lock but returns immediately if it can not. - * While on some platforms such as Windows, this may behave as a reentrant mutex, - * you should not treat it like one. On platforms it is possible for it to be non-reentrant, it will be. - */ -AWS_COMMON_API -int aws_mutex_try_lock(struct aws_mutex *mutex); - -/** - * Releases the lock. 
- */ -AWS_COMMON_API -int aws_mutex_unlock(struct aws_mutex *mutex); - -AWS_EXTERN_C_END - -#endif /* AWS_COMMON_MUTEX_H */ +#endif + +AWS_EXTERN_C_BEGIN + +/** + * Initializes a new platform instance of mutex. + */ +AWS_COMMON_API +int aws_mutex_init(struct aws_mutex *mutex); + +/** + * Cleans up internal resources. + */ +AWS_COMMON_API +void aws_mutex_clean_up(struct aws_mutex *mutex); + +/** + * Blocks until it acquires the lock. While on some platforms such as Windows, + * this may behave as a reentrant mutex, you should not treat it like one. On + * platforms it is possible for it to be non-reentrant, it will be. + */ +AWS_COMMON_API +int aws_mutex_lock(struct aws_mutex *mutex); + +/** + * Attempts to acquire the lock but returns immediately if it can not. + * While on some platforms such as Windows, this may behave as a reentrant mutex, + * you should not treat it like one. On platforms it is possible for it to be non-reentrant, it will be. + */ +AWS_COMMON_API +int aws_mutex_try_lock(struct aws_mutex *mutex); + +/** + * Releases the lock. + */ +AWS_COMMON_API +int aws_mutex_unlock(struct aws_mutex *mutex); + +AWS_EXTERN_C_END + +#endif /* AWS_COMMON_MUTEX_H */ diff --git a/contrib/restricted/aws/aws-c-common/include/aws/common/posix/common.inl b/contrib/restricted/aws/aws-c-common/include/aws/common/posix/common.inl index ebf34efbcd..4a6e0a1b6a 100644 --- a/contrib/restricted/aws/aws-c-common/include/aws/common/posix/common.inl +++ b/contrib/restricted/aws/aws-c-common/include/aws/common/posix/common.inl @@ -1,36 +1,36 @@ /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. 
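On POSIX, the `aws_mutex` struct above wraps a `pthread_mutex_t` (and an SRWLOCK on Windows). A sketch of the equivalent POSIX lifecycle that `aws_mutex_init` / `lock` / `try_lock` / `unlock` / `clean_up` map onto (link with `-lpthread` on older toolchains):

```c
#include <assert.h>
#include <pthread.h>

static pthread_mutex_t handle;
static long counter;

static int mutex_demo(void) {
    if (pthread_mutex_init(&handle, NULL) != 0) return -1;  /* aws_mutex_init */
    if (pthread_mutex_lock(&handle) != 0) return -1;        /* aws_mutex_lock */
    counter++;                                              /* critical section */
    if (pthread_mutex_unlock(&handle) != 0) return -1;      /* aws_mutex_unlock */
    /* trylock succeeds here because the mutex is currently unlocked */
    if (pthread_mutex_trylock(&handle) != 0) return -1;     /* aws_mutex_try_lock */
    pthread_mutex_unlock(&handle);
    pthread_mutex_destroy(&handle);                         /* aws_mutex_clean_up */
    return 0;
}
```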
- */ - -#ifndef AWS_COMMON_POSIX_COMMON_INL -#define AWS_COMMON_POSIX_COMMON_INL - -#include <aws/common/common.h> - -#include <errno.h> - -AWS_EXTERN_C_BEGIN - -static inline int aws_private_convert_and_raise_error_code(int error_code) { - switch (error_code) { - case 0: - return AWS_OP_SUCCESS; - case EINVAL: - return aws_raise_error(AWS_ERROR_MUTEX_NOT_INIT); - case EBUSY: - return aws_raise_error(AWS_ERROR_MUTEX_TIMEOUT); - case EPERM: - return aws_raise_error(AWS_ERROR_MUTEX_CALLER_NOT_OWNER); - case ENOMEM: - return aws_raise_error(AWS_ERROR_OOM); - case EDEADLK: - return aws_raise_error(AWS_ERROR_THREAD_DEADLOCK_DETECTED); - default: - return aws_raise_error(AWS_ERROR_MUTEX_FAILED); - } -} - -AWS_EXTERN_C_END - -#endif /* AWS_COMMON_POSIX_COMMON_INL */ + */ + +#ifndef AWS_COMMON_POSIX_COMMON_INL +#define AWS_COMMON_POSIX_COMMON_INL + +#include <aws/common/common.h> + +#include <errno.h> + +AWS_EXTERN_C_BEGIN + +static inline int aws_private_convert_and_raise_error_code(int error_code) { + switch (error_code) { + case 0: + return AWS_OP_SUCCESS; + case EINVAL: + return aws_raise_error(AWS_ERROR_MUTEX_NOT_INIT); + case EBUSY: + return aws_raise_error(AWS_ERROR_MUTEX_TIMEOUT); + case EPERM: + return aws_raise_error(AWS_ERROR_MUTEX_CALLER_NOT_OWNER); + case ENOMEM: + return aws_raise_error(AWS_ERROR_OOM); + case EDEADLK: + return aws_raise_error(AWS_ERROR_THREAD_DEADLOCK_DETECTED); + default: + return aws_raise_error(AWS_ERROR_MUTEX_FAILED); + } +} + +AWS_EXTERN_C_END + +#endif /* AWS_COMMON_POSIX_COMMON_INL */ diff --git a/contrib/restricted/aws/aws-c-common/include/aws/common/priority_queue.h b/contrib/restricted/aws/aws-c-common/include/aws/common/priority_queue.h index 8859729346..392c934d8e 100644 --- a/contrib/restricted/aws/aws-c-common/include/aws/common/priority_queue.h +++ b/contrib/restricted/aws/aws-c-common/include/aws/common/priority_queue.h @@ -1,87 +1,87 @@ -#ifndef AWS_COMMON_PRIORITY_QUEUE_H -#define AWS_COMMON_PRIORITY_QUEUE_H +#ifndef 
AWS_COMMON_PRIORITY_QUEUE_H +#define AWS_COMMON_PRIORITY_QUEUE_H /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. - */ - -#include <aws/common/array_list.h> -#include <aws/common/common.h> - -/* The comparator should return a positive value if the second argument has a - * higher priority than the first; Otherwise, it should return a negative value - * or zero. NOTE: priority_queue pops its highest priority element first. For - * example: int cmp(const void *a, const void *b) { return a < b; } would result - * in a max heap, while: int cmp(const void *a, const void *b) { return a > b; } - * would result in a min heap. - */ -typedef int(aws_priority_queue_compare_fn)(const void *a, const void *b); - -struct aws_priority_queue { - /** - * predicate that determines the priority of the elements in the queue. - */ - aws_priority_queue_compare_fn *pred; - - /** - * The underlying container storing the queue elements. - */ - struct aws_array_list container; - - /** - * An array of pointers to backpointer elements. This array is initialized when - * the first call to aws_priority_queue_push_bp is made, and is subsequently maintained - * through any heap node manipulations. - * - * Each element is a struct aws_priority_queue_node *, pointing to a backpointer field - * owned by the calling code, or a NULL. The backpointer field is continually updated - * with information needed to locate and remove a specific node later on. - */ - struct aws_array_list backpointers; -}; - -struct aws_priority_queue_node { - /** The current index of the node in question, or SIZE_MAX if the node has been removed. */ - size_t current_index; -}; - -AWS_EXTERN_C_BEGIN - -/** - * Initializes a priority queue struct for use. This mode will grow memory automatically (exponential model) - * Default size is the initial size of the queue - * item_size is the size of each element in bytes. Mixing items types is not supported by this API. 
- * pred is the function that will be used to determine priority. - */ -AWS_COMMON_API -int aws_priority_queue_init_dynamic( - struct aws_priority_queue *queue, - struct aws_allocator *alloc, - size_t default_size, - size_t item_size, - aws_priority_queue_compare_fn *pred); - -/** - * Initializes a priority queue struct for use. This mode will not allocate any additional memory. When the heap fills - * new enqueue operations will fail with AWS_ERROR_PRIORITY_QUEUE_FULL. - * - * Heaps initialized using this call do not support the aws_priority_queue_push_ref call with a non-NULL backpointer - * parameter. - * - * heap is the raw memory allocated for this priority_queue - * item_count is the maximum number of elements the raw heap can contain - * item_size is the size of each element in bytes. Mixing items types is not supported by this API. - * pred is the function that will be used to determine priority. - */ -AWS_COMMON_API -void aws_priority_queue_init_static( - struct aws_priority_queue *queue, - void *heap, - size_t item_count, - size_t item_size, - aws_priority_queue_compare_fn *pred); - -/** + */ + +#include <aws/common/array_list.h> +#include <aws/common/common.h> + +/* The comparator should return a positive value if the second argument has a + * higher priority than the first; Otherwise, it should return a negative value + * or zero. NOTE: priority_queue pops its highest priority element first. For + * example: int cmp(const void *a, const void *b) { return a < b; } would result + * in a max heap, while: int cmp(const void *a, const void *b) { return a > b; } + * would result in a min heap. + */ +typedef int(aws_priority_queue_compare_fn)(const void *a, const void *b); + +struct aws_priority_queue { + /** + * predicate that determines the priority of the elements in the queue. + */ + aws_priority_queue_compare_fn *pred; + + /** + * The underlying container storing the queue elements. 
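The comparator contract above — `cmp(a, b)` returns a positive value when the second argument has higher priority, and the queue pops the highest-priority element first — can be checked with a standalone sketch. A linear scan stands in for the real binary heap (names ours), which is enough to see which element a given comparator would pop:

```c
#include <assert.h>

typedef int (*compare_fn)(const void *a, const void *b);

/* Max-heap comparator for ints: positive when *b > *a, i.e. larger values
 * have higher priority, matching the max-heap example in the comment above. */
static int cmp_max_heap(const void *a, const void *b) {
    int x = *(const int *)a, y = *(const int *)b;
    return (y > x) - (y < x);
}

/* Returns the element the queue would pop: the one no other element outranks.
 * Linear scan instead of the real O(log n) heap. */
static const int *pop_candidate(const int *items, int n, compare_fn cmp) {
    const int *best = &items[0];
    for (int i = 1; i < n; i++) {
        if (cmp(best, &items[i]) > 0) {  /* items[i] outranks current best */
            best = &items[i];
        }
    }
    return best;
}
```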
+ */ + struct aws_array_list container; + + /** + * An array of pointers to backpointer elements. This array is initialized when + * the first call to aws_priority_queue_push_bp is made, and is subsequently maintained + * through any heap node manipulations. + * + * Each element is a struct aws_priority_queue_node *, pointing to a backpointer field + * owned by the calling code, or a NULL. The backpointer field is continually updated + * with information needed to locate and remove a specific node later on. + */ + struct aws_array_list backpointers; +}; + +struct aws_priority_queue_node { + /** The current index of the node in question, or SIZE_MAX if the node has been removed. */ + size_t current_index; +}; + +AWS_EXTERN_C_BEGIN + +/** + * Initializes a priority queue struct for use. This mode will grow memory automatically (exponential model) + * Default size is the initial size of the queue + * item_size is the size of each element in bytes. Mixing items types is not supported by this API. + * pred is the function that will be used to determine priority. + */ +AWS_COMMON_API +int aws_priority_queue_init_dynamic( + struct aws_priority_queue *queue, + struct aws_allocator *alloc, + size_t default_size, + size_t item_size, + aws_priority_queue_compare_fn *pred); + +/** + * Initializes a priority queue struct for use. This mode will not allocate any additional memory. When the heap fills + * new enqueue operations will fail with AWS_ERROR_PRIORITY_QUEUE_FULL. + * + * Heaps initialized using this call do not support the aws_priority_queue_push_ref call with a non-NULL backpointer + * parameter. + * + * heap is the raw memory allocated for this priority_queue + * item_count is the maximum number of elements the raw heap can contain + * item_size is the size of each element in bytes. Mixing items types is not supported by this API. + * pred is the function that will be used to determine priority. 
+ */ +AWS_COMMON_API +void aws_priority_queue_init_static( + struct aws_priority_queue *queue, + void *heap, + size_t item_count, + size_t item_size, + aws_priority_queue_compare_fn *pred); + +/** * Checks that the backpointer at a specific index of the queue is * NULL or points to a correctly allocated aws_priority_queue_node. */ @@ -102,77 +102,77 @@ bool aws_priority_queue_backpointers_valid_deep(const struct aws_priority_queue bool aws_priority_queue_backpointers_valid(const struct aws_priority_queue *const queue); /** - * Set of properties of a valid aws_priority_queue. - */ -AWS_COMMON_API -bool aws_priority_queue_is_valid(const struct aws_priority_queue *const queue); - -/** - * Cleans up any internally allocated memory and resets the struct for reuse or deletion. - */ -AWS_COMMON_API -void aws_priority_queue_clean_up(struct aws_priority_queue *queue); - -/** - * Copies item into the queue and places it in the proper priority order. Complexity: O(log(n)). - */ -AWS_COMMON_API -int aws_priority_queue_push(struct aws_priority_queue *queue, void *item); - -/** - * Copies item into the queue and places it in the proper priority order. Complexity: O(log(n)). - * - * If the backpointer parameter is non-null, the heap will continually update the pointed-to field - * with information needed to remove the node later on. *backpointer must remain valid until the node - * is removed from the heap, and may be updated on any mutating operation on the priority queue. - * - * If the node is removed, the backpointer will be set to a sentinel value that indicates that the - * node has already been removed. It is safe (and a no-op) to call aws_priority_queue_remove with - * such a sentinel value. - */ -AWS_COMMON_API -int aws_priority_queue_push_ref( - struct aws_priority_queue *queue, - void *item, - struct aws_priority_queue_node *backpointer); - -/** - * Copies the element of the highest priority, and removes it from the queue.. Complexity: O(log(n)). 
- * If queue is empty, AWS_ERROR_PRIORITY_QUEUE_EMPTY will be raised. - */ -AWS_COMMON_API -int aws_priority_queue_pop(struct aws_priority_queue *queue, void *item); - -/** - * Removes a specific node from the priority queue. Complexity: O(log(n)) - * After removing a node (using either _remove or _pop), the backpointer set at push_ref time is set - * to a sentinel value. If this sentinel value is passed to aws_priority_queue_remove, - * AWS_ERROR_PRIORITY_QUEUE_BAD_NODE will be raised. Note, however, that passing uninitialized - * aws_priority_queue_nodes, or ones from different priority queues, results in undefined behavior. - */ -AWS_COMMON_API -int aws_priority_queue_remove(struct aws_priority_queue *queue, void *item, const struct aws_priority_queue_node *node); - -/** - * Obtains a pointer to the element of the highest priority. Complexity: constant time. - * If queue is empty, AWS_ERROR_PRIORITY_QUEUE_EMPTY will be raised. - */ -AWS_COMMON_API -int aws_priority_queue_top(const struct aws_priority_queue *queue, void **item); - -/** - * Current number of elements in the queue - */ -AWS_COMMON_API -size_t aws_priority_queue_size(const struct aws_priority_queue *queue); - -/** - * Current allocated capacity for the queue, in dynamic mode this grows over time, in static mode, this will never - * change. - */ -AWS_COMMON_API -size_t aws_priority_queue_capacity(const struct aws_priority_queue *queue); - -AWS_EXTERN_C_END - -#endif /* AWS_COMMON_PRIORITY_QUEUE_H */ + * Set of properties of a valid aws_priority_queue. + */ +AWS_COMMON_API +bool aws_priority_queue_is_valid(const struct aws_priority_queue *const queue); + +/** + * Cleans up any internally allocated memory and resets the struct for reuse or deletion. + */ +AWS_COMMON_API +void aws_priority_queue_clean_up(struct aws_priority_queue *queue); + +/** + * Copies item into the queue and places it in the proper priority order. Complexity: O(log(n)). 
+ */ +AWS_COMMON_API +int aws_priority_queue_push(struct aws_priority_queue *queue, void *item); + +/** + * Copies item into the queue and places it in the proper priority order. Complexity: O(log(n)). + * + * If the backpointer parameter is non-null, the heap will continually update the pointed-to field + * with information needed to remove the node later on. *backpointer must remain valid until the node + * is removed from the heap, and may be updated on any mutating operation on the priority queue. + * + * If the node is removed, the backpointer will be set to a sentinel value that indicates that the + * node has already been removed. It is safe (and a no-op) to call aws_priority_queue_remove with + * such a sentinel value. + */ +AWS_COMMON_API +int aws_priority_queue_push_ref( + struct aws_priority_queue *queue, + void *item, + struct aws_priority_queue_node *backpointer); + +/** + * Copies the element of the highest priority, and removes it from the queue.. Complexity: O(log(n)). + * If queue is empty, AWS_ERROR_PRIORITY_QUEUE_EMPTY will be raised. + */ +AWS_COMMON_API +int aws_priority_queue_pop(struct aws_priority_queue *queue, void *item); + +/** + * Removes a specific node from the priority queue. Complexity: O(log(n)) + * After removing a node (using either _remove or _pop), the backpointer set at push_ref time is set + * to a sentinel value. If this sentinel value is passed to aws_priority_queue_remove, + * AWS_ERROR_PRIORITY_QUEUE_BAD_NODE will be raised. Note, however, that passing uninitialized + * aws_priority_queue_nodes, or ones from different priority queues, results in undefined behavior. + */ +AWS_COMMON_API +int aws_priority_queue_remove(struct aws_priority_queue *queue, void *item, const struct aws_priority_queue_node *node); + +/** + * Obtains a pointer to the element of the highest priority. Complexity: constant time. + * If queue is empty, AWS_ERROR_PRIORITY_QUEUE_EMPTY will be raised. 
+ */ +AWS_COMMON_API +int aws_priority_queue_top(const struct aws_priority_queue *queue, void **item); + +/** + * Current number of elements in the queue + */ +AWS_COMMON_API +size_t aws_priority_queue_size(const struct aws_priority_queue *queue); + +/** + * Current allocated capacity for the queue, in dynamic mode this grows over time, in static mode, this will never + * change. + */ +AWS_COMMON_API +size_t aws_priority_queue_capacity(const struct aws_priority_queue *queue); + +AWS_EXTERN_C_END + +#endif /* AWS_COMMON_PRIORITY_QUEUE_H */ diff --git a/contrib/restricted/aws/aws-c-common/include/aws/common/private/hash_table_impl.h b/contrib/restricted/aws/aws-c-common/include/aws/common/private/hash_table_impl.h index 86ffb1401f..137a5c5466 100644 --- a/contrib/restricted/aws/aws-c-common/include/aws/common/private/hash_table_impl.h +++ b/contrib/restricted/aws/aws-c-common/include/aws/common/private/hash_table_impl.h @@ -1,62 +1,62 @@ -#ifndef AWS_COMMON_PRIVATE_HASH_TABLE_IMPL_H -#define AWS_COMMON_PRIVATE_HASH_TABLE_IMPL_H - +#ifndef AWS_COMMON_PRIVATE_HASH_TABLE_IMPL_H +#define AWS_COMMON_PRIVATE_HASH_TABLE_IMPL_H + /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. - */ - -#include <aws/common/common.h> -#include <aws/common/hash_table.h> -#include <aws/common/math.h> - -struct hash_table_entry { - struct aws_hash_element element; - uint64_t hash_code; /* hash code (0 signals empty) */ -}; - -/* Using a flexible array member is the C99 compliant way to have the hash_table_entries - * immediatly follow the struct. - * - * MSVC doesn't know this for some reason so we need to use a pragma to make - * it happy. 
- */ -#ifdef _MSC_VER -# pragma warning(push) -# pragma warning(disable : 4200) -#endif -struct hash_table_state { - aws_hash_fn *hash_fn; - aws_hash_callback_eq_fn *equals_fn; - aws_hash_callback_destroy_fn *destroy_key_fn; - aws_hash_callback_destroy_fn *destroy_value_fn; - struct aws_allocator *alloc; - - size_t size, entry_count; - size_t max_load; - /* We AND a hash value with mask to get the slot index */ - size_t mask; - double max_load_factor; - /* actually variable length */ - struct hash_table_entry slots[]; -}; -#ifdef _MSC_VER -# pragma warning(pop) -#endif - -/** + */ + +#include <aws/common/common.h> +#include <aws/common/hash_table.h> +#include <aws/common/math.h> + +struct hash_table_entry { + struct aws_hash_element element; + uint64_t hash_code; /* hash code (0 signals empty) */ +}; + +/* Using a flexible array member is the C99 compliant way to have the hash_table_entries + * immediatly follow the struct. + * + * MSVC doesn't know this for some reason so we need to use a pragma to make + * it happy. 
+ */ +#ifdef _MSC_VER +# pragma warning(push) +# pragma warning(disable : 4200) +#endif +struct hash_table_state { + aws_hash_fn *hash_fn; + aws_hash_callback_eq_fn *equals_fn; + aws_hash_callback_destroy_fn *destroy_key_fn; + aws_hash_callback_destroy_fn *destroy_value_fn; + struct aws_allocator *alloc; + + size_t size, entry_count; + size_t max_load; + /* We AND a hash value with mask to get the slot index */ + size_t mask; + double max_load_factor; + /* actually variable length */ + struct hash_table_entry slots[]; +}; +#ifdef _MSC_VER +# pragma warning(pop) +#endif + +/** * Best-effort check of hash_table_state data-structure invariants * Some invariants, such as that the number of entries is actually the - * same as the entry_count field, would require a loop to check - */ + * same as the entry_count field, would require a loop to check + */ bool hash_table_state_is_valid(const struct hash_table_state *map); - -/** - * Determine the total number of bytes needed for a hash-table with - * "size" slots. If the result would overflow a size_t, return - * AWS_OP_ERR; otherwise, return AWS_OP_SUCCESS with the result in - * "required_bytes". - */ + +/** + * Determine the total number of bytes needed for a hash-table with + * "size" slots. If the result would overflow a size_t, return + * AWS_OP_ERR; otherwise, return AWS_OP_SUCCESS with the result in + * "required_bytes". 
+ */ int hash_table_state_required_bytes(size_t size, size_t *required_bytes); - -#endif /* AWS_COMMON_PRIVATE_HASH_TABLE_IMPL_H */ + +#endif /* AWS_COMMON_PRIVATE_HASH_TABLE_IMPL_H */ diff --git a/contrib/restricted/aws/aws-c-common/include/aws/common/rw_lock.h b/contrib/restricted/aws/aws-c-common/include/aws/common/rw_lock.h index 64863d2c28..f3f551179e 100644 --- a/contrib/restricted/aws/aws-c-common/include/aws/common/rw_lock.h +++ b/contrib/restricted/aws/aws-c-common/include/aws/common/rw_lock.h @@ -1,69 +1,69 @@ -#ifndef AWS_COMMON_RW_LOCK_H -#define AWS_COMMON_RW_LOCK_H - +#ifndef AWS_COMMON_RW_LOCK_H +#define AWS_COMMON_RW_LOCK_H + /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. - */ - -#include <aws/common/common.h> -#ifdef _WIN32 -/* NOTE: Do not use this macro before including Windows.h */ -# define AWSSRW_TO_WINDOWS(pCV) (PSRWLOCK) pCV -#else -# include <pthread.h> -#endif - -struct aws_rw_lock { -#ifdef _WIN32 - void *lock_handle; -#else - pthread_rwlock_t lock_handle; -#endif -}; - -#ifdef _WIN32 -# define AWS_RW_LOCK_INIT \ - { .lock_handle = NULL } -#else -# define AWS_RW_LOCK_INIT \ - { .lock_handle = PTHREAD_RWLOCK_INITIALIZER } -#endif - -AWS_EXTERN_C_BEGIN - -/** - * Initializes a new platform instance of mutex. - */ -AWS_COMMON_API int aws_rw_lock_init(struct aws_rw_lock *lock); - -/** - * Cleans up internal resources. - */ -AWS_COMMON_API void aws_rw_lock_clean_up(struct aws_rw_lock *lock); - -/** - * Blocks until it acquires the lock. While on some platforms such as Windows, - * this may behave as a reentrant mutex, you should not treat it like one. On - * platforms it is possible for it to be non-reentrant, it will be. - */ -AWS_COMMON_API int aws_rw_lock_rlock(struct aws_rw_lock *lock); -AWS_COMMON_API int aws_rw_lock_wlock(struct aws_rw_lock *lock); - -/** - * Attempts to acquire the lock but returns immediately if it can not. 
- * While on some platforms such as Windows, this may behave as a reentrant mutex, - * you should not treat it like one. On platforms it is possible for it to be non-reentrant, it will be. - */ -AWS_COMMON_API int aws_rw_lock_try_rlock(struct aws_rw_lock *lock); -AWS_COMMON_API int aws_rw_lock_try_wlock(struct aws_rw_lock *lock); - -/** - * Releases the lock. - */ -AWS_COMMON_API int aws_rw_lock_runlock(struct aws_rw_lock *lock); -AWS_COMMON_API int aws_rw_lock_wunlock(struct aws_rw_lock *lock); - -AWS_EXTERN_C_END - -#endif /* AWS_COMMON_RW_LOCK_H */ + */ + +#include <aws/common/common.h> +#ifdef _WIN32 +/* NOTE: Do not use this macro before including Windows.h */ +# define AWSSRW_TO_WINDOWS(pCV) (PSRWLOCK) pCV +#else +# include <pthread.h> +#endif + +struct aws_rw_lock { +#ifdef _WIN32 + void *lock_handle; +#else + pthread_rwlock_t lock_handle; +#endif +}; + +#ifdef _WIN32 +# define AWS_RW_LOCK_INIT \ + { .lock_handle = NULL } +#else +# define AWS_RW_LOCK_INIT \ + { .lock_handle = PTHREAD_RWLOCK_INITIALIZER } +#endif + +AWS_EXTERN_C_BEGIN + +/** + * Initializes a new platform instance of mutex. + */ +AWS_COMMON_API int aws_rw_lock_init(struct aws_rw_lock *lock); + +/** + * Cleans up internal resources. + */ +AWS_COMMON_API void aws_rw_lock_clean_up(struct aws_rw_lock *lock); + +/** + * Blocks until it acquires the lock. While on some platforms such as Windows, + * this may behave as a reentrant mutex, you should not treat it like one. On + * platforms it is possible for it to be non-reentrant, it will be. + */ +AWS_COMMON_API int aws_rw_lock_rlock(struct aws_rw_lock *lock); +AWS_COMMON_API int aws_rw_lock_wlock(struct aws_rw_lock *lock); + +/** + * Attempts to acquire the lock but returns immediately if it can not. + * While on some platforms such as Windows, this may behave as a reentrant mutex, + * you should not treat it like one. On platforms it is possible for it to be non-reentrant, it will be. 
+ */ +AWS_COMMON_API int aws_rw_lock_try_rlock(struct aws_rw_lock *lock); +AWS_COMMON_API int aws_rw_lock_try_wlock(struct aws_rw_lock *lock); + +/** + * Releases the lock. + */ +AWS_COMMON_API int aws_rw_lock_runlock(struct aws_rw_lock *lock); +AWS_COMMON_API int aws_rw_lock_wunlock(struct aws_rw_lock *lock); + +AWS_EXTERN_C_END + +#endif /* AWS_COMMON_RW_LOCK_H */ diff --git a/contrib/restricted/aws/aws-c-common/include/aws/common/string.h b/contrib/restricted/aws/aws-c-common/include/aws/common/string.h index 58eba5baf7..9e1bb262e1 100644 --- a/contrib/restricted/aws/aws-c-common/include/aws/common/string.h +++ b/contrib/restricted/aws/aws-c-common/include/aws/common/string.h @@ -1,119 +1,119 @@ -#ifndef AWS_COMMON_STRING_H -#define AWS_COMMON_STRING_H +#ifndef AWS_COMMON_STRING_H +#define AWS_COMMON_STRING_H /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. - */ -#include <aws/common/byte_buf.h> -#include <aws/common/common.h> - -/** - * Represents an immutable string holding either text or binary data. If the - * string is in constant memory or memory that should otherwise not be freed by - * this struct, set allocator to NULL and destroy function will be a no-op. - * - * This is for use cases where the entire struct and the data bytes themselves - * need to be held in dynamic memory, such as when held by a struct - * aws_hash_table. The data bytes themselves are always held in contiguous - * memory immediately after the end of the struct aws_string, and the memory for - * both the header and the data bytes is allocated together. - * - * Use the aws_string_bytes function to access the data bytes. A null byte is - * always included immediately after the data but not counted in the length, so - * that the output of aws_string_bytes can be treated as a C-string in cases - * where none of the the data bytes are null. 
- * - * Note that the fields of this structure are const; this ensures not only that - * they cannot be modified, but also that you can't assign the structure using - * the = operator accidentally. - */ - -/* Using a flexible array member is the C99 compliant way to have the bytes of - * the string immediately follow the header. - * - * MSVC doesn't know this for some reason so we need to use a pragma to make - * it happy. - */ -#ifdef _MSC_VER -# pragma warning(push) -# pragma warning(disable : 4200) -#endif -struct aws_string { - struct aws_allocator *const allocator; - const size_t len; + */ +#include <aws/common/byte_buf.h> +#include <aws/common/common.h> + +/** + * Represents an immutable string holding either text or binary data. If the + * string is in constant memory or memory that should otherwise not be freed by + * this struct, set allocator to NULL and destroy function will be a no-op. + * + * This is for use cases where the entire struct and the data bytes themselves + * need to be held in dynamic memory, such as when held by a struct + * aws_hash_table. The data bytes themselves are always held in contiguous + * memory immediately after the end of the struct aws_string, and the memory for + * both the header and the data bytes is allocated together. + * + * Use the aws_string_bytes function to access the data bytes. A null byte is + * always included immediately after the data but not counted in the length, so + * that the output of aws_string_bytes can be treated as a C-string in cases + * where none of the the data bytes are null. + * + * Note that the fields of this structure are const; this ensures not only that + * they cannot be modified, but also that you can't assign the structure using + * the = operator accidentally. + */ + +/* Using a flexible array member is the C99 compliant way to have the bytes of + * the string immediately follow the header. + * + * MSVC doesn't know this for some reason so we need to use a pragma to make + * it happy. 
+ */ +#ifdef _MSC_VER +# pragma warning(push) +# pragma warning(disable : 4200) +#endif +struct aws_string { + struct aws_allocator *const allocator; + const size_t len; /* give this a storage specifier for C++ purposes. It will likely be larger after init. */ const uint8_t bytes[1]; -}; -#ifdef _MSC_VER -# pragma warning(pop) -#endif - +}; +#ifdef _MSC_VER +# pragma warning(pop) +#endif + AWS_EXTERN_C_BEGIN - -/** - * Returns true if bytes of string are the same, false otherwise. - */ + +/** + * Returns true if bytes of string are the same, false otherwise. + */ AWS_COMMON_API bool aws_string_eq(const struct aws_string *a, const struct aws_string *b); - -/** - * Returns true if bytes of string are equivalent, using a case-insensitive comparison. - */ + +/** + * Returns true if bytes of string are equivalent, using a case-insensitive comparison. + */ AWS_COMMON_API bool aws_string_eq_ignore_case(const struct aws_string *a, const struct aws_string *b); - -/** - * Returns true if bytes of string and cursor are the same, false otherwise. - */ + +/** + * Returns true if bytes of string and cursor are the same, false otherwise. + */ AWS_COMMON_API bool aws_string_eq_byte_cursor(const struct aws_string *str, const struct aws_byte_cursor *cur); - -/** - * Returns true if bytes of string and cursor are equivalent, using a case-insensitive comparison. - */ + +/** + * Returns true if bytes of string and cursor are equivalent, using a case-insensitive comparison. + */ AWS_COMMON_API bool aws_string_eq_byte_cursor_ignore_case(const struct aws_string *str, const struct aws_byte_cursor *cur); - -/** - * Returns true if bytes of string and buffer are the same, false otherwise. - */ + +/** + * Returns true if bytes of string and buffer are the same, false otherwise. + */ AWS_COMMON_API bool aws_string_eq_byte_buf(const struct aws_string *str, const struct aws_byte_buf *buf); - -/** - * Returns true if bytes of string and buffer are equivalent, using a case-insensitive comparison. 
- */ + +/** + * Returns true if bytes of string and buffer are equivalent, using a case-insensitive comparison. + */ AWS_COMMON_API bool aws_string_eq_byte_buf_ignore_case(const struct aws_string *str, const struct aws_byte_buf *buf); - + AWS_COMMON_API bool aws_string_eq_c_str(const struct aws_string *str, const char *c_str); - -/** - * Returns true if bytes of strings are equivalent, using a case-insensitive comparison. - */ + +/** + * Returns true if bytes of strings are equivalent, using a case-insensitive comparison. + */ AWS_COMMON_API bool aws_string_eq_c_str_ignore_case(const struct aws_string *str, const char *c_str); - -/** - * Constructor functions which copy data from null-terminated C-string or array of bytes. - */ -AWS_COMMON_API -struct aws_string *aws_string_new_from_c_str(struct aws_allocator *allocator, const char *c_str); + +/** + * Constructor functions which copy data from null-terminated C-string or array of bytes. + */ +AWS_COMMON_API +struct aws_string *aws_string_new_from_c_str(struct aws_allocator *allocator, const char *c_str); /** * Allocate a new string with the same contents as array. */ -AWS_COMMON_API -struct aws_string *aws_string_new_from_array(struct aws_allocator *allocator, const uint8_t *bytes, size_t len); - -/** +AWS_COMMON_API +struct aws_string *aws_string_new_from_array(struct aws_allocator *allocator, const uint8_t *bytes, size_t len); + +/** * Allocate a new string with the same contents as another string. - */ -AWS_COMMON_API -struct aws_string *aws_string_new_from_string(struct aws_allocator *allocator, const struct aws_string *str); - -/** + */ +AWS_COMMON_API +struct aws_string *aws_string_new_from_string(struct aws_allocator *allocator, const struct aws_string *str); + +/** * Allocate a new string with the same contents as cursor. 
*/ AWS_COMMON_API @@ -126,110 +126,110 @@ AWS_COMMON_API struct aws_string *aws_string_new_from_buf(struct aws_allocator *allocator, const struct aws_byte_buf *buf); /** - * Deallocate string. - */ -AWS_COMMON_API -void aws_string_destroy(struct aws_string *str); - -/** - * Zeroes out the data bytes of string and then deallocates the memory. - * Not safe to run on a string created with AWS_STATIC_STRING_FROM_LITERAL. - */ -AWS_COMMON_API -void aws_string_destroy_secure(struct aws_string *str); - -/** - * Compares lexicographical ordering of two strings. This is a binary - * byte-by-byte comparison, treating bytes as unsigned integers. It is suitable - * for either textual or binary data and is unaware of unicode or any other byte - * encoding. If both strings are identical in the bytes of the shorter string, - * then the longer string is lexicographically after the shorter. - * - * Returns a positive number if string a > string b. (i.e., string a is - * lexicographically after string b.) Returns zero if string a = string b. - * Returns negative number if string a < string b. - */ -AWS_COMMON_API -int aws_string_compare(const struct aws_string *a, const struct aws_string *b); - -/** - * A convenience function for sorting lists of (const struct aws_string *) elements. This can be used as a - * comparator for aws_array_list_sort. It is just a simple wrapper around aws_string_compare. - */ -AWS_COMMON_API -int aws_array_list_comparator_string(const void *a, const void *b); - -/** - * Defines a (static const struct aws_string *) with name specified in first - * argument that points to constant memory and has data bytes containing the - * string literal in the second argument. 
- * - * GCC allows direct initilization of structs with variable length final fields - * However, this might not be portable, so we can do this instead - * This will have to be updated whenever the aws_string structure changes - */ -#define AWS_STATIC_STRING_FROM_LITERAL(name, literal) \ - static const struct { \ - struct aws_allocator *const allocator; \ - const size_t len; \ - const uint8_t bytes[sizeof(literal)]; \ - } name##_s = {NULL, sizeof(literal) - 1, literal}; \ - static const struct aws_string *(name) = (struct aws_string *)(&name##_s) - -/* - * A related macro that declares the string pointer without static, allowing it to be externed as a global constant - */ -#define AWS_STRING_FROM_LITERAL(name, literal) \ - static const struct { \ - struct aws_allocator *const allocator; \ - const size_t len; \ - const uint8_t bytes[sizeof(literal)]; \ - } name##_s = {NULL, sizeof(literal) - 1, literal}; \ - const struct aws_string *(name) = (struct aws_string *)(&name##_s) - -/** - * Copies all bytes from string to buf. - * - * On success, returns true and updates the buf pointer/length - * accordingly. If there is insufficient space in the buf, returns - * false, leaving the buf unchanged. - */ + * Deallocate string. + */ +AWS_COMMON_API +void aws_string_destroy(struct aws_string *str); + +/** + * Zeroes out the data bytes of string and then deallocates the memory. + * Not safe to run on a string created with AWS_STATIC_STRING_FROM_LITERAL. + */ +AWS_COMMON_API +void aws_string_destroy_secure(struct aws_string *str); + +/** + * Compares lexicographical ordering of two strings. This is a binary + * byte-by-byte comparison, treating bytes as unsigned integers. It is suitable + * for either textual or binary data and is unaware of unicode or any other byte + * encoding. If both strings are identical in the bytes of the shorter string, + * then the longer string is lexicographically after the shorter. + * + * Returns a positive number if string a > string b. 
(i.e., string a is + * lexicographically after string b.) Returns zero if string a = string b. + * Returns negative number if string a < string b. + */ +AWS_COMMON_API +int aws_string_compare(const struct aws_string *a, const struct aws_string *b); + +/** + * A convenience function for sorting lists of (const struct aws_string *) elements. This can be used as a + * comparator for aws_array_list_sort. It is just a simple wrapper around aws_string_compare. + */ +AWS_COMMON_API +int aws_array_list_comparator_string(const void *a, const void *b); + +/** + * Defines a (static const struct aws_string *) with name specified in first + * argument that points to constant memory and has data bytes containing the + * string literal in the second argument. + * + * GCC allows direct initilization of structs with variable length final fields + * However, this might not be portable, so we can do this instead + * This will have to be updated whenever the aws_string structure changes + */ +#define AWS_STATIC_STRING_FROM_LITERAL(name, literal) \ + static const struct { \ + struct aws_allocator *const allocator; \ + const size_t len; \ + const uint8_t bytes[sizeof(literal)]; \ + } name##_s = {NULL, sizeof(literal) - 1, literal}; \ + static const struct aws_string *(name) = (struct aws_string *)(&name##_s) + +/* + * A related macro that declares the string pointer without static, allowing it to be externed as a global constant + */ +#define AWS_STRING_FROM_LITERAL(name, literal) \ + static const struct { \ + struct aws_allocator *const allocator; \ + const size_t len; \ + const uint8_t bytes[sizeof(literal)]; \ + } name##_s = {NULL, sizeof(literal) - 1, literal}; \ + const struct aws_string *(name) = (struct aws_string *)(&name##_s) + +/** + * Copies all bytes from string to buf. + * + * On success, returns true and updates the buf pointer/length + * accordingly. If there is insufficient space in the buf, returns + * false, leaving the buf unchanged. 
+ */ AWS_COMMON_API bool aws_byte_buf_write_from_whole_string( - struct aws_byte_buf *AWS_RESTRICT buf, + struct aws_byte_buf *AWS_RESTRICT buf, const struct aws_string *AWS_RESTRICT src); - -/** - * Creates an aws_byte_cursor from an existing string. - */ + +/** + * Creates an aws_byte_cursor from an existing string. + */ AWS_COMMON_API struct aws_byte_cursor aws_byte_cursor_from_string(const struct aws_string *src); - + /** * If the string was dynamically allocated, clones it. If the string was statically allocated (i.e. has no allocator), * returns the original string. */ AWS_COMMON_API struct aws_string *aws_string_clone_or_reuse(struct aws_allocator *allocator, const struct aws_string *str); - + /* Computes the length of a c string in bytes assuming the character set is either ASCII or UTF-8. If no NULL character * is found within max_read_len of str, AWS_ERROR_C_STRING_BUFFER_NOT_NULL_TERMINATED is raised. Otherwise, str_len * will contain the string length minus the NULL character, and AWS_OP_SUCCESS will be returned. */ AWS_COMMON_API int aws_secure_strlen(const char *str, size_t max_read_len, size_t *str_len); -/** +/** * Equivalent to str->bytes. - */ + */ AWS_STATIC_IMPL const uint8_t *aws_string_bytes(const struct aws_string *str); - -/** + +/** * Equivalent to `(const char *)str->bytes`. - */ + */ AWS_STATIC_IMPL const char *aws_string_c_str(const struct aws_string *str); - + /** * Evaluates the set of properties that define the shape of all valid aws_string structures. * It is also a cheap check, in the sense it run in constant time (i.e., no loops or recursion). 
@@ -255,4 +255,4 @@ bool aws_char_is_space(uint8_t c); AWS_EXTERN_C_END -#endif /* AWS_COMMON_STRING_H */ +#endif /* AWS_COMMON_STRING_H */ diff --git a/contrib/restricted/aws/aws-c-common/include/aws/common/system_info.h b/contrib/restricted/aws/aws-c-common/include/aws/common/system_info.h index 4143fed56b..7ac9be5cf3 100644 --- a/contrib/restricted/aws/aws-c-common/include/aws/common/system_info.h +++ b/contrib/restricted/aws/aws-c-common/include/aws/common/system_info.h @@ -1,29 +1,29 @@ -#ifndef AWS_COMMON_SYSTEM_INFO_H -#define AWS_COMMON_SYSTEM_INFO_H - +#ifndef AWS_COMMON_SYSTEM_INFO_H +#define AWS_COMMON_SYSTEM_INFO_H + /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. - */ - -#include <aws/common/common.h> - + */ + +#include <aws/common/common.h> + enum aws_platform_os { AWS_PLATFORM_OS_WINDOWS, AWS_PLATFORM_OS_MAC, AWS_PLATFORM_OS_UNIX, }; -AWS_EXTERN_C_BEGIN - +AWS_EXTERN_C_BEGIN + /* Returns the OS this was built under */ AWS_COMMON_API enum aws_platform_os aws_get_platform_build_os(void); -/* Returns the number of online processors available for usage. */ -AWS_COMMON_API -size_t aws_system_info_processor_count(void); - +/* Returns the number of online processors available for usage. */ +AWS_COMMON_API +size_t aws_system_info_processor_count(void); + /* Returns true if a debugger is currently attached to the process. 
*/ AWS_COMMON_API bool aws_is_debugger_present(void); @@ -76,6 +76,6 @@ void aws_backtrace_print(FILE *fp, void *call_site_data); AWS_COMMON_API void aws_backtrace_log(void); -AWS_EXTERN_C_END - -#endif /* AWS_COMMON_SYSTEM_INFO_H */ +AWS_EXTERN_C_END + +#endif /* AWS_COMMON_SYSTEM_INFO_H */ diff --git a/contrib/restricted/aws/aws-c-common/include/aws/common/task_scheduler.h b/contrib/restricted/aws/aws-c-common/include/aws/common/task_scheduler.h index 1c78fd3e51..60a9091209 100644 --- a/contrib/restricted/aws/aws-c-common/include/aws/common/task_scheduler.h +++ b/contrib/restricted/aws/aws-c-common/include/aws/common/task_scheduler.h @@ -1,51 +1,51 @@ -#ifndef AWS_COMMON_TASK_SCHEDULER_H -#define AWS_COMMON_TASK_SCHEDULER_H - +#ifndef AWS_COMMON_TASK_SCHEDULER_H +#define AWS_COMMON_TASK_SCHEDULER_H + /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. - */ - -#include <aws/common/common.h> -#include <aws/common/linked_list.h> -#include <aws/common/priority_queue.h> - -struct aws_task; - -typedef enum aws_task_status { - AWS_TASK_STATUS_RUN_READY, - AWS_TASK_STATUS_CANCELED, -} aws_task_status; - -/** - * A scheduled function. - */ -typedef void(aws_task_fn)(struct aws_task *task, void *arg, enum aws_task_status); - -/* - * A task object. - * Once added to the scheduler, a task must remain in memory until its function is executed. - */ -struct aws_task { - aws_task_fn *fn; - void *arg; - uint64_t timestamp; - struct aws_linked_list_node node; - struct aws_priority_queue_node priority_queue_node; + */ + +#include <aws/common/common.h> +#include <aws/common/linked_list.h> +#include <aws/common/priority_queue.h> + +struct aws_task; + +typedef enum aws_task_status { + AWS_TASK_STATUS_RUN_READY, + AWS_TASK_STATUS_CANCELED, +} aws_task_status; + +/** + * A scheduled function. + */ +typedef void(aws_task_fn)(struct aws_task *task, void *arg, enum aws_task_status); + +/* + * A task object. 
+ * Once added to the scheduler, a task must remain in memory until its function is executed. + */ +struct aws_task { + aws_task_fn *fn; + void *arg; + uint64_t timestamp; + struct aws_linked_list_node node; + struct aws_priority_queue_node priority_queue_node; const char *type_tag; - size_t reserved; -}; - -struct aws_task_scheduler { - struct aws_allocator *alloc; - struct aws_priority_queue timed_queue; /* Tasks scheduled to run at specific times */ - struct aws_linked_list timed_list; /* If timed_queue runs out of memory, further timed tests are stored here */ - struct aws_linked_list asap_list; /* Tasks scheduled to run as soon as possible */ -}; - -AWS_EXTERN_C_BEGIN - -/** + size_t reserved; +}; + +struct aws_task_scheduler { + struct aws_allocator *alloc; + struct aws_priority_queue timed_queue; /* Tasks scheduled to run at specific times */ + struct aws_linked_list timed_list; /* If timed_queue runs out of memory, further timed tests are stored here */ + struct aws_linked_list asap_list; /* Tasks scheduled to run as soon as possible */ +}; + +AWS_EXTERN_C_BEGIN + +/** * Init an aws_task */ AWS_COMMON_API @@ -58,67 +58,67 @@ AWS_COMMON_API void aws_task_run(struct aws_task *task, enum aws_task_status status); /** - * Initializes a task scheduler instance. - */ -AWS_COMMON_API -int aws_task_scheduler_init(struct aws_task_scheduler *scheduler, struct aws_allocator *alloc); - -/** - * Empties and executes all queued tasks, passing the AWS_TASK_STATUS_CANCELED status to the task function. - * Cleans up any memory allocated, and prepares the instance for reuse or deletion. - */ -AWS_COMMON_API -void aws_task_scheduler_clean_up(struct aws_task_scheduler *scheduler); - + * Initializes a task scheduler instance. + */ +AWS_COMMON_API +int aws_task_scheduler_init(struct aws_task_scheduler *scheduler, struct aws_allocator *alloc); + +/** + * Empties and executes all queued tasks, passing the AWS_TASK_STATUS_CANCELED status to the task function. 
+ * Cleans up any memory allocated, and prepares the instance for reuse or deletion. + */ +AWS_COMMON_API +void aws_task_scheduler_clean_up(struct aws_task_scheduler *scheduler); + AWS_COMMON_API bool aws_task_scheduler_is_valid(const struct aws_task_scheduler *scheduler); -/** - * Returns whether the scheduler has any scheduled tasks. - * next_task_time (optional) will be set to time of the next task, note that 0 will be set if tasks were - * added via aws_task_scheduler_schedule_now() and UINT64_MAX will be set if no tasks are scheduled at all. - */ -AWS_COMMON_API -bool aws_task_scheduler_has_tasks(const struct aws_task_scheduler *scheduler, uint64_t *next_task_time); - -/** - * Schedules a task to run immediately. - * The task should not be cleaned up or modified until its function is executed. - */ -AWS_COMMON_API -void aws_task_scheduler_schedule_now(struct aws_task_scheduler *scheduler, struct aws_task *task); - -/** - * Schedules a task to run at time_to_run. - * The task should not be cleaned up or modified until its function is executed. - */ -AWS_COMMON_API -void aws_task_scheduler_schedule_future( - struct aws_task_scheduler *scheduler, - struct aws_task *task, - uint64_t time_to_run); - -/** - * Removes task from the scheduler and invokes the task with the AWS_TASK_STATUS_CANCELED status. - */ -AWS_COMMON_API -void aws_task_scheduler_cancel_task(struct aws_task_scheduler *scheduler, struct aws_task *task); - -/** - * Sequentially execute all tasks scheduled to run at, or before current_time. - * AWS_TASK_STATUS_RUN_READY will be passed to the task function as the task status. - * - * If a task schedules another task, the new task will not be executed until the next call to this function. - */ -AWS_COMMON_API -void aws_task_scheduler_run_all(struct aws_task_scheduler *scheduler, uint64_t current_time); - +/** + * Returns whether the scheduler has any scheduled tasks. 
+ * next_task_time (optional) will be set to time of the next task, note that 0 will be set if tasks were + * added via aws_task_scheduler_schedule_now() and UINT64_MAX will be set if no tasks are scheduled at all. + */ +AWS_COMMON_API +bool aws_task_scheduler_has_tasks(const struct aws_task_scheduler *scheduler, uint64_t *next_task_time); + +/** + * Schedules a task to run immediately. + * The task should not be cleaned up or modified until its function is executed. + */ +AWS_COMMON_API +void aws_task_scheduler_schedule_now(struct aws_task_scheduler *scheduler, struct aws_task *task); + +/** + * Schedules a task to run at time_to_run. + * The task should not be cleaned up or modified until its function is executed. + */ +AWS_COMMON_API +void aws_task_scheduler_schedule_future( + struct aws_task_scheduler *scheduler, + struct aws_task *task, + uint64_t time_to_run); + +/** + * Removes task from the scheduler and invokes the task with the AWS_TASK_STATUS_CANCELED status. + */ +AWS_COMMON_API +void aws_task_scheduler_cancel_task(struct aws_task_scheduler *scheduler, struct aws_task *task); + +/** + * Sequentially execute all tasks scheduled to run at, or before current_time. + * AWS_TASK_STATUS_RUN_READY will be passed to the task function as the task status. + * + * If a task schedules another task, the new task will not be executed until the next call to this function. 
+ */ +AWS_COMMON_API +void aws_task_scheduler_run_all(struct aws_task_scheduler *scheduler, uint64_t current_time); + /** * Convert a status value to a c-string suitable for logging */ AWS_COMMON_API const char *aws_task_status_to_c_str(enum aws_task_status status); -AWS_EXTERN_C_END - -#endif /* AWS_COMMON_TASK_SCHEDULER_H */ +AWS_EXTERN_C_END + +#endif /* AWS_COMMON_TASK_SCHEDULER_H */ diff --git a/contrib/restricted/aws/aws-c-common/include/aws/common/thread.h b/contrib/restricted/aws/aws-c-common/include/aws/common/thread.h index e7abd79f7e..bbc965d37c 100644 --- a/contrib/restricted/aws/aws-c-common/include/aws/common/thread.h +++ b/contrib/restricted/aws/aws-c-common/include/aws/common/thread.h @@ -1,24 +1,24 @@ -#ifndef AWS_COMMON_THREAD_H -#define AWS_COMMON_THREAD_H - +#ifndef AWS_COMMON_THREAD_H +#define AWS_COMMON_THREAD_H + /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. - */ -#include <aws/common/common.h> - -#ifndef _WIN32 -# include <pthread.h> -#endif - -enum aws_thread_detach_state { - AWS_THREAD_NOT_CREATED = 1, - AWS_THREAD_JOINABLE, - AWS_THREAD_JOIN_COMPLETED, -}; - -struct aws_thread_options { - size_t stack_size; + */ +#include <aws/common/common.h> + +#ifndef _WIN32 +# include <pthread.h> +#endif + +enum aws_thread_detach_state { + AWS_THREAD_NOT_CREATED = 1, + AWS_THREAD_JOINABLE, + AWS_THREAD_JOIN_COMPLETED, +}; + +struct aws_thread_options { + size_t stack_size; /* default is -1. If you set this to anything >= 0, and the platform supports it, the thread will be pinned to * that cpu. Also, we assume you're doing this for memory throughput purposes. On unix systems, * If libnuma.so is available, upon the thread launching, the memory policy for that thread will be set to @@ -30,21 +30,21 @@ struct aws_thread_options { * On Apple and Android platforms, this setting doesn't do anything at all. 
*/ int32_t cpu_id; -}; - -#ifdef _WIN32 -typedef union { - void *ptr; -} aws_thread_once; -# define AWS_THREAD_ONCE_STATIC_INIT \ - { NULL } +}; + +#ifdef _WIN32 +typedef union { + void *ptr; +} aws_thread_once; +# define AWS_THREAD_ONCE_STATIC_INIT \ + { NULL } typedef unsigned long aws_thread_id_t; -#else -typedef pthread_once_t aws_thread_once; -# define AWS_THREAD_ONCE_STATIC_INIT PTHREAD_ONCE_INIT +#else +typedef pthread_once_t aws_thread_once; +# define AWS_THREAD_ONCE_STATIC_INIT PTHREAD_ONCE_INIT typedef pthread_t aws_thread_id_t; -#endif - +#endif + /* * Buffer size needed to represent aws_thread_id_t as a string (2 hex chars per byte * plus '\0' terminator). Needed for portable printing because pthread_t is @@ -52,89 +52,89 @@ typedef pthread_t aws_thread_id_t; */ #define AWS_THREAD_ID_T_REPR_BUFSZ (sizeof(aws_thread_id_t) * 2 + 1) -struct aws_thread { - struct aws_allocator *allocator; - enum aws_thread_detach_state detach_state; -#ifdef _WIN32 - void *thread_handle; -#endif +struct aws_thread { + struct aws_allocator *allocator; + enum aws_thread_detach_state detach_state; +#ifdef _WIN32 + void *thread_handle; +#endif aws_thread_id_t thread_id; -}; - -AWS_EXTERN_C_BEGIN - -/** - * Returns an instance of system default thread options. - */ -AWS_COMMON_API -const struct aws_thread_options *aws_default_thread_options(void); - +}; + +AWS_EXTERN_C_BEGIN + +/** + * Returns an instance of system default thread options. + */ +AWS_COMMON_API +const struct aws_thread_options *aws_default_thread_options(void); + AWS_COMMON_API void aws_thread_call_once(aws_thread_once *flag, void (*call_once)(void *), void *user_data); - -/** - * Initializes a new platform specific thread object struct (not the os-level - * thread itself). - */ -AWS_COMMON_API -int aws_thread_init(struct aws_thread *thread, struct aws_allocator *allocator); - -/** - * Creates an OS level thread and associates it with func. context will be passed to func when it is executed. 
- * options will be applied to the thread if they are applicable for the platform. - * You must either call join or detach after creating the thread and before calling clean_up. - */ -AWS_COMMON_API -int aws_thread_launch( - struct aws_thread *thread, - void (*func)(void *arg), - void *arg, - const struct aws_thread_options *options); - -/** - * Gets the id of thread - */ -AWS_COMMON_API + +/** + * Initializes a new platform specific thread object struct (not the os-level + * thread itself). + */ +AWS_COMMON_API +int aws_thread_init(struct aws_thread *thread, struct aws_allocator *allocator); + +/** + * Creates an OS level thread and associates it with func. context will be passed to func when it is executed. + * options will be applied to the thread if they are applicable for the platform. + * You must either call join or detach after creating the thread and before calling clean_up. + */ +AWS_COMMON_API +int aws_thread_launch( + struct aws_thread *thread, + void (*func)(void *arg), + void *arg, + const struct aws_thread_options *options); + +/** + * Gets the id of thread + */ +AWS_COMMON_API aws_thread_id_t aws_thread_get_id(struct aws_thread *thread); - -/** - * Gets the detach state of the thread. For example, is it safe to call join on - * this thread? Has it been detached()? - */ -AWS_COMMON_API -enum aws_thread_detach_state aws_thread_get_detach_state(struct aws_thread *thread); - -/** - * Joins the calling thread to a thread instance. Returns when thread is - * finished. - */ -AWS_COMMON_API -int aws_thread_join(struct aws_thread *thread); - -/** - * Cleans up the thread handle. Either detach or join must be called - * before calling this function. - */ -AWS_COMMON_API -void aws_thread_clean_up(struct aws_thread *thread); - -/** + +/** + * Gets the detach state of the thread. For example, is it safe to call join on + * this thread? Has it been detached()? 
+ */ +AWS_COMMON_API +enum aws_thread_detach_state aws_thread_get_detach_state(struct aws_thread *thread); + +/** + * Joins the calling thread to a thread instance. Returns when thread is + * finished. + */ +AWS_COMMON_API +int aws_thread_join(struct aws_thread *thread); + +/** + * Cleans up the thread handle. Either detach or join must be called + * before calling this function. + */ +AWS_COMMON_API +void aws_thread_clean_up(struct aws_thread *thread); + +/** * Returns the thread id of the calling thread. - */ -AWS_COMMON_API + */ +AWS_COMMON_API aws_thread_id_t aws_thread_current_thread_id(void); - -/** + +/** * Compare thread ids. */ AWS_COMMON_API bool aws_thread_thread_id_equal(aws_thread_id_t t1, aws_thread_id_t t2); /** - * Sleeps the current thread by nanos. - */ -AWS_COMMON_API -void aws_thread_current_sleep(uint64_t nanos); - + * Sleeps the current thread by nanos. + */ +AWS_COMMON_API +void aws_thread_current_sleep(uint64_t nanos); + typedef void(aws_thread_atexit_fn)(void *user_data); /** @@ -146,6 +146,6 @@ typedef void(aws_thread_atexit_fn)(void *user_data); AWS_COMMON_API int aws_thread_current_at_exit(aws_thread_atexit_fn *callback, void *user_data); -AWS_EXTERN_C_END - -#endif /* AWS_COMMON_THREAD_H */ +AWS_EXTERN_C_END + +#endif /* AWS_COMMON_THREAD_H */ diff --git a/contrib/restricted/aws/aws-c-common/include/aws/common/time.h b/contrib/restricted/aws/aws-c-common/include/aws/common/time.h index 6ea6c9c757..d008a2ce80 100644 --- a/contrib/restricted/aws/aws-c-common/include/aws/common/time.h +++ b/contrib/restricted/aws/aws-c-common/include/aws/common/time.h @@ -1,30 +1,30 @@ -#ifndef AWS_COMMON_TIME_H -#define AWS_COMMON_TIME_H +#ifndef AWS_COMMON_TIME_H +#define AWS_COMMON_TIME_H /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. 
- */ -#include <aws/common/common.h> - -#include <time.h> - -AWS_EXTERN_C_BEGIN - -/** - * Cross platform friendly version of timegm - */ -AWS_COMMON_API time_t aws_timegm(struct tm *const t); - -/** - * Cross platform friendly version of localtime_r - */ -AWS_COMMON_API void aws_localtime(time_t time, struct tm *t); - -/** - * Cross platform friendly version of gmtime_r - */ -AWS_COMMON_API void aws_gmtime(time_t time, struct tm *t); - -AWS_EXTERN_C_END - + */ +#include <aws/common/common.h> + +#include <time.h> + +AWS_EXTERN_C_BEGIN + +/** + * Cross platform friendly version of timegm + */ +AWS_COMMON_API time_t aws_timegm(struct tm *const t); + +/** + * Cross platform friendly version of localtime_r + */ +AWS_COMMON_API void aws_localtime(time_t time, struct tm *t); + +/** + * Cross platform friendly version of gmtime_r + */ +AWS_COMMON_API void aws_gmtime(time_t time, struct tm *t); + +AWS_EXTERN_C_END + #endif /* AWS_COMMON_TIME_H */ diff --git a/contrib/restricted/aws/aws-c-common/include/aws/common/uuid.h b/contrib/restricted/aws/aws-c-common/include/aws/common/uuid.h index a8677c5814..83a91457b3 100644 --- a/contrib/restricted/aws/aws-c-common/include/aws/common/uuid.h +++ b/contrib/restricted/aws/aws-c-common/include/aws/common/uuid.h @@ -1,29 +1,29 @@ -#ifndef AWS_COMMON_UUID_H -#define AWS_COMMON_UUID_H - +#ifndef AWS_COMMON_UUID_H +#define AWS_COMMON_UUID_H + /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. - */ -#include <aws/common/common.h> - -struct aws_byte_cursor; -struct aws_byte_buf; - -struct aws_uuid { - uint8_t uuid_data[16]; -}; - -/* 36 bytes for the UUID plus one more for the null terminator. 
*/ -#define AWS_UUID_STR_LEN 37 - -AWS_EXTERN_C_BEGIN - -AWS_COMMON_API int aws_uuid_init(struct aws_uuid *uuid); -AWS_COMMON_API int aws_uuid_init_from_str(struct aws_uuid *uuid, const struct aws_byte_cursor *uuid_str); -AWS_COMMON_API int aws_uuid_to_str(const struct aws_uuid *uuid, struct aws_byte_buf *output); -AWS_COMMON_API bool aws_uuid_equals(const struct aws_uuid *a, const struct aws_uuid *b); - -AWS_EXTERN_C_END - -#endif /* AWS_COMMON_UUID_H */ + */ +#include <aws/common/common.h> + +struct aws_byte_cursor; +struct aws_byte_buf; + +struct aws_uuid { + uint8_t uuid_data[16]; +}; + +/* 36 bytes for the UUID plus one more for the null terminator. */ +#define AWS_UUID_STR_LEN 37 + +AWS_EXTERN_C_BEGIN + +AWS_COMMON_API int aws_uuid_init(struct aws_uuid *uuid); +AWS_COMMON_API int aws_uuid_init_from_str(struct aws_uuid *uuid, const struct aws_byte_cursor *uuid_str); +AWS_COMMON_API int aws_uuid_to_str(const struct aws_uuid *uuid, struct aws_byte_buf *output); +AWS_COMMON_API bool aws_uuid_equals(const struct aws_uuid *a, const struct aws_uuid *b); + +AWS_EXTERN_C_END + +#endif /* AWS_COMMON_UUID_H */ diff --git a/contrib/restricted/aws/aws-c-common/source/array_list.c b/contrib/restricted/aws/aws-c-common/source/array_list.c index 7e05636a75..cfcd7d2db2 100644 --- a/contrib/restricted/aws/aws-c-common/source/array_list.c +++ b/contrib/restricted/aws/aws-c-common/source/array_list.c @@ -1,13 +1,13 @@ /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. 
- */ - -#include <aws/common/array_list.h> + */ + +#include <aws/common/array_list.h> #include <aws/common/private/array_list.h> - -#include <stdlib.h> /* qsort */ - + +#include <stdlib.h> /* qsort */ + int aws_array_list_calc_necessary_size(struct aws_array_list *AWS_RESTRICT list, size_t index, size_t *necessary_size) { AWS_PRECONDITION(aws_array_list_is_valid(list)); size_t index_inc; @@ -24,184 +24,184 @@ int aws_array_list_calc_necessary_size(struct aws_array_list *AWS_RESTRICT list, return AWS_OP_SUCCESS; } -int aws_array_list_shrink_to_fit(struct aws_array_list *AWS_RESTRICT list) { - AWS_PRECONDITION(aws_array_list_is_valid(list)); - if (list->alloc) { - size_t ideal_size; - if (aws_mul_size_checked(list->length, list->item_size, &ideal_size)) { - AWS_POSTCONDITION(aws_array_list_is_valid(list)); - return AWS_OP_ERR; - } - - if (ideal_size < list->current_size) { - void *raw_data = NULL; - - if (ideal_size > 0) { - raw_data = aws_mem_acquire(list->alloc, ideal_size); - if (!raw_data) { - AWS_POSTCONDITION(aws_array_list_is_valid(list)); - return AWS_OP_ERR; - } - - memcpy(raw_data, list->data, ideal_size); - aws_mem_release(list->alloc, list->data); - } - list->data = raw_data; - list->current_size = ideal_size; - } - AWS_POSTCONDITION(aws_array_list_is_valid(list)); - return AWS_OP_SUCCESS; - } - - AWS_POSTCONDITION(aws_array_list_is_valid(list)); - return aws_raise_error(AWS_ERROR_LIST_STATIC_MODE_CANT_SHRINK); -} - -int aws_array_list_copy(const struct aws_array_list *AWS_RESTRICT from, struct aws_array_list *AWS_RESTRICT to) { +int aws_array_list_shrink_to_fit(struct aws_array_list *AWS_RESTRICT list) { + AWS_PRECONDITION(aws_array_list_is_valid(list)); + if (list->alloc) { + size_t ideal_size; + if (aws_mul_size_checked(list->length, list->item_size, &ideal_size)) { + AWS_POSTCONDITION(aws_array_list_is_valid(list)); + return AWS_OP_ERR; + } + + if (ideal_size < list->current_size) { + void *raw_data = NULL; + + if (ideal_size > 0) { + raw_data = 
aws_mem_acquire(list->alloc, ideal_size); + if (!raw_data) { + AWS_POSTCONDITION(aws_array_list_is_valid(list)); + return AWS_OP_ERR; + } + + memcpy(raw_data, list->data, ideal_size); + aws_mem_release(list->alloc, list->data); + } + list->data = raw_data; + list->current_size = ideal_size; + } + AWS_POSTCONDITION(aws_array_list_is_valid(list)); + return AWS_OP_SUCCESS; + } + + AWS_POSTCONDITION(aws_array_list_is_valid(list)); + return aws_raise_error(AWS_ERROR_LIST_STATIC_MODE_CANT_SHRINK); +} + +int aws_array_list_copy(const struct aws_array_list *AWS_RESTRICT from, struct aws_array_list *AWS_RESTRICT to) { AWS_FATAL_PRECONDITION(from->item_size == to->item_size); AWS_FATAL_PRECONDITION(from->data); - AWS_PRECONDITION(aws_array_list_is_valid(from)); - AWS_PRECONDITION(aws_array_list_is_valid(to)); - - size_t copy_size; - if (aws_mul_size_checked(from->length, from->item_size, ©_size)) { - AWS_POSTCONDITION(aws_array_list_is_valid(from)); - AWS_POSTCONDITION(aws_array_list_is_valid(to)); - return AWS_OP_ERR; - } - - if (to->current_size >= copy_size) { - if (copy_size > 0) { - memcpy(to->data, from->data, copy_size); - } - to->length = from->length; - AWS_POSTCONDITION(aws_array_list_is_valid(from)); - AWS_POSTCONDITION(aws_array_list_is_valid(to)); - return AWS_OP_SUCCESS; - } - /* if to is in dynamic mode, we can just reallocate it and copy */ - if (to->alloc != NULL) { - void *tmp = aws_mem_acquire(to->alloc, copy_size); - - if (!tmp) { - AWS_POSTCONDITION(aws_array_list_is_valid(from)); - AWS_POSTCONDITION(aws_array_list_is_valid(to)); - return AWS_OP_ERR; - } - - memcpy(tmp, from->data, copy_size); - if (to->data) { - aws_mem_release(to->alloc, to->data); - } - - to->data = tmp; - to->length = from->length; - to->current_size = copy_size; - AWS_POSTCONDITION(aws_array_list_is_valid(from)); - AWS_POSTCONDITION(aws_array_list_is_valid(to)); - return AWS_OP_SUCCESS; - } - - return aws_raise_error(AWS_ERROR_DEST_COPY_TOO_SMALL); -} - -int 
aws_array_list_ensure_capacity(struct aws_array_list *AWS_RESTRICT list, size_t index) { - AWS_PRECONDITION(aws_array_list_is_valid(list)); - size_t necessary_size; - if (aws_array_list_calc_necessary_size(list, index, &necessary_size)) { - AWS_POSTCONDITION(aws_array_list_is_valid(list)); - return AWS_OP_ERR; - } - - if (list->current_size < necessary_size) { - if (!list->alloc) { - AWS_POSTCONDITION(aws_array_list_is_valid(list)); - return aws_raise_error(AWS_ERROR_INVALID_INDEX); - } - - /* this will double capacity if the index isn't bigger than what the - * next allocation would be, but allocates the exact requested size if - * it is. This is largely because we don't have a good way to predict - * the usage pattern to make a smart decision about it. However, if the - * user - * is doing this in an iterative fashion, necessary_size will never be - * used.*/ - size_t next_allocation_size = list->current_size << 1; - size_t new_size = next_allocation_size > necessary_size ? next_allocation_size : necessary_size; - - if (new_size < list->current_size) { - /* this means new_size overflowed. The only way this happens is on a - * 32-bit system where size_t is 32 bits, in which case we're out of - * addressable memory anyways, or we're on a 64 bit system and we're - * most certainly out of addressable memory. But since we're simply - * going to fail fast and say, sorry can't do it, we'll just tell - * the user they can't grow the list anymore. 
*/ - AWS_POSTCONDITION(aws_array_list_is_valid(list)); - return aws_raise_error(AWS_ERROR_LIST_EXCEEDS_MAX_SIZE); - } - - void *temp = aws_mem_acquire(list->alloc, new_size); - - if (!temp) { - AWS_POSTCONDITION(aws_array_list_is_valid(list)); - return AWS_OP_ERR; - } - - if (list->data) { - memcpy(temp, list->data, list->current_size); - -#ifdef DEBUG_BUILD - memset( - (void *)((uint8_t *)temp + list->current_size), - AWS_ARRAY_LIST_DEBUG_FILL, - new_size - list->current_size); -#endif - aws_mem_release(list->alloc, list->data); - } - list->data = temp; - list->current_size = new_size; - } - - AWS_POSTCONDITION(aws_array_list_is_valid(list)); - return AWS_OP_SUCCESS; -} - -static void aws_array_list_mem_swap(void *AWS_RESTRICT item1, void *AWS_RESTRICT item2, size_t item_size) { - enum { SLICE = 128 }; - + AWS_PRECONDITION(aws_array_list_is_valid(from)); + AWS_PRECONDITION(aws_array_list_is_valid(to)); + + size_t copy_size; + if (aws_mul_size_checked(from->length, from->item_size, ©_size)) { + AWS_POSTCONDITION(aws_array_list_is_valid(from)); + AWS_POSTCONDITION(aws_array_list_is_valid(to)); + return AWS_OP_ERR; + } + + if (to->current_size >= copy_size) { + if (copy_size > 0) { + memcpy(to->data, from->data, copy_size); + } + to->length = from->length; + AWS_POSTCONDITION(aws_array_list_is_valid(from)); + AWS_POSTCONDITION(aws_array_list_is_valid(to)); + return AWS_OP_SUCCESS; + } + /* if to is in dynamic mode, we can just reallocate it and copy */ + if (to->alloc != NULL) { + void *tmp = aws_mem_acquire(to->alloc, copy_size); + + if (!tmp) { + AWS_POSTCONDITION(aws_array_list_is_valid(from)); + AWS_POSTCONDITION(aws_array_list_is_valid(to)); + return AWS_OP_ERR; + } + + memcpy(tmp, from->data, copy_size); + if (to->data) { + aws_mem_release(to->alloc, to->data); + } + + to->data = tmp; + to->length = from->length; + to->current_size = copy_size; + AWS_POSTCONDITION(aws_array_list_is_valid(from)); + AWS_POSTCONDITION(aws_array_list_is_valid(to)); + return 
AWS_OP_SUCCESS; + } + + return aws_raise_error(AWS_ERROR_DEST_COPY_TOO_SMALL); +} + +int aws_array_list_ensure_capacity(struct aws_array_list *AWS_RESTRICT list, size_t index) { + AWS_PRECONDITION(aws_array_list_is_valid(list)); + size_t necessary_size; + if (aws_array_list_calc_necessary_size(list, index, &necessary_size)) { + AWS_POSTCONDITION(aws_array_list_is_valid(list)); + return AWS_OP_ERR; + } + + if (list->current_size < necessary_size) { + if (!list->alloc) { + AWS_POSTCONDITION(aws_array_list_is_valid(list)); + return aws_raise_error(AWS_ERROR_INVALID_INDEX); + } + + /* this will double capacity if the index isn't bigger than what the + * next allocation would be, but allocates the exact requested size if + * it is. This is largely because we don't have a good way to predict + * the usage pattern to make a smart decision about it. However, if the + * user + * is doing this in an iterative fashion, necessary_size will never be + * used.*/ + size_t next_allocation_size = list->current_size << 1; + size_t new_size = next_allocation_size > necessary_size ? next_allocation_size : necessary_size; + + if (new_size < list->current_size) { + /* this means new_size overflowed. The only way this happens is on a + * 32-bit system where size_t is 32 bits, in which case we're out of + * addressable memory anyways, or we're on a 64 bit system and we're + * most certainly out of addressable memory. But since we're simply + * going to fail fast and say, sorry can't do it, we'll just tell + * the user they can't grow the list anymore. 
*/ + AWS_POSTCONDITION(aws_array_list_is_valid(list)); + return aws_raise_error(AWS_ERROR_LIST_EXCEEDS_MAX_SIZE); + } + + void *temp = aws_mem_acquire(list->alloc, new_size); + + if (!temp) { + AWS_POSTCONDITION(aws_array_list_is_valid(list)); + return AWS_OP_ERR; + } + + if (list->data) { + memcpy(temp, list->data, list->current_size); + +#ifdef DEBUG_BUILD + memset( + (void *)((uint8_t *)temp + list->current_size), + AWS_ARRAY_LIST_DEBUG_FILL, + new_size - list->current_size); +#endif + aws_mem_release(list->alloc, list->data); + } + list->data = temp; + list->current_size = new_size; + } + + AWS_POSTCONDITION(aws_array_list_is_valid(list)); + return AWS_OP_SUCCESS; +} + +static void aws_array_list_mem_swap(void *AWS_RESTRICT item1, void *AWS_RESTRICT item2, size_t item_size) { + enum { SLICE = 128 }; + AWS_FATAL_PRECONDITION(item1); AWS_FATAL_PRECONDITION(item2); - - /* copy SLICE sized bytes at a time */ - size_t slice_count = item_size / SLICE; - uint8_t temp[SLICE]; - for (size_t i = 0; i < slice_count; i++) { - memcpy((void *)temp, (void *)item1, SLICE); - memcpy((void *)item1, (void *)item2, SLICE); - memcpy((void *)item2, (void *)temp, SLICE); - item1 = (uint8_t *)item1 + SLICE; - item2 = (uint8_t *)item2 + SLICE; - } - - size_t remainder = item_size & (SLICE - 1); /* item_size % SLICE */ - memcpy((void *)temp, (void *)item1, remainder); - memcpy((void *)item1, (void *)item2, remainder); - memcpy((void *)item2, (void *)temp, remainder); -} - -void aws_array_list_swap(struct aws_array_list *AWS_RESTRICT list, size_t a, size_t b) { + + /* copy SLICE sized bytes at a time */ + size_t slice_count = item_size / SLICE; + uint8_t temp[SLICE]; + for (size_t i = 0; i < slice_count; i++) { + memcpy((void *)temp, (void *)item1, SLICE); + memcpy((void *)item1, (void *)item2, SLICE); + memcpy((void *)item2, (void *)temp, SLICE); + item1 = (uint8_t *)item1 + SLICE; + item2 = (uint8_t *)item2 + SLICE; + } + + size_t remainder = item_size & (SLICE - 1); /* item_size % 
SLICE */ + memcpy((void *)temp, (void *)item1, remainder); + memcpy((void *)item1, (void *)item2, remainder); + memcpy((void *)item2, (void *)temp, remainder); +} + +void aws_array_list_swap(struct aws_array_list *AWS_RESTRICT list, size_t a, size_t b) { AWS_FATAL_PRECONDITION(a < list->length); AWS_FATAL_PRECONDITION(b < list->length); - AWS_PRECONDITION(aws_array_list_is_valid(list)); - - if (a == b) { - AWS_POSTCONDITION(aws_array_list_is_valid(list)); - return; - } - - void *item1 = NULL, *item2 = NULL; - aws_array_list_get_at_ptr(list, &item1, a); - aws_array_list_get_at_ptr(list, &item2, b); - aws_array_list_mem_swap(item1, item2, list->item_size); - AWS_POSTCONDITION(aws_array_list_is_valid(list)); -} + AWS_PRECONDITION(aws_array_list_is_valid(list)); + + if (a == b) { + AWS_POSTCONDITION(aws_array_list_is_valid(list)); + return; + } + + void *item1 = NULL, *item2 = NULL; + aws_array_list_get_at_ptr(list, &item1, a); + aws_array_list_get_at_ptr(list, &item2, b); + aws_array_list_mem_swap(item1, item2, list->item_size); + AWS_POSTCONDITION(aws_array_list_is_valid(list)); +} diff --git a/contrib/restricted/aws/aws-c-common/source/assert.c b/contrib/restricted/aws/aws-c-common/source/assert.c index 9aaae9a19e..adfc4c408a 100644 --- a/contrib/restricted/aws/aws-c-common/source/assert.c +++ b/contrib/restricted/aws/aws-c-common/source/assert.c @@ -1,18 +1,18 @@ /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. 
- */ - -#include <aws/common/common.h> - + */ + +#include <aws/common/common.h> + #include <aws/common/system_info.h> -#include <stdio.h> -#include <stdlib.h> - -void aws_fatal_assert(const char *cond_str, const char *file, int line) { - aws_debug_break(); - fprintf(stderr, "Fatal error condition occurred in %s:%d: %s\nExiting Application\n", file, line, cond_str); - aws_backtrace_print(stderr, NULL); - abort(); -} +#include <stdio.h> +#include <stdlib.h> + +void aws_fatal_assert(const char *cond_str, const char *file, int line) { + aws_debug_break(); + fprintf(stderr, "Fatal error condition occurred in %s:%d: %s\nExiting Application\n", file, line, cond_str); + aws_backtrace_print(stderr, NULL); + abort(); +} diff --git a/contrib/restricted/aws/aws-c-common/source/byte_buf.c b/contrib/restricted/aws/aws-c-common/source/byte_buf.c index ca18f4121b..4b2cf1d279 100644 --- a/contrib/restricted/aws/aws-c-common/source/byte_buf.c +++ b/contrib/restricted/aws/aws-c-common/source/byte_buf.c @@ -1,71 +1,71 @@ /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. - */ - -#include <aws/common/byte_buf.h> + */ + +#include <aws/common/byte_buf.h> #include <aws/common/private/byte_buf.h> - -#include <stdarg.h> - -#ifdef _MSC_VER -/* disables warning non const declared initializers for Microsoft compilers */ -# pragma warning(disable : 4204) -# pragma warning(disable : 4706) -#endif - -int aws_byte_buf_init(struct aws_byte_buf *buf, struct aws_allocator *allocator, size_t capacity) { + +#include <stdarg.h> + +#ifdef _MSC_VER +/* disables warning non const declared initializers for Microsoft compilers */ +# pragma warning(disable : 4204) +# pragma warning(disable : 4706) +#endif + +int aws_byte_buf_init(struct aws_byte_buf *buf, struct aws_allocator *allocator, size_t capacity) { AWS_PRECONDITION(buf); AWS_PRECONDITION(allocator); - - buf->buffer = (capacity == 0) ? 
NULL : aws_mem_acquire(allocator, capacity); - if (capacity != 0 && buf->buffer == NULL) { + + buf->buffer = (capacity == 0) ? NULL : aws_mem_acquire(allocator, capacity); + if (capacity != 0 && buf->buffer == NULL) { AWS_ZERO_STRUCT(*buf); - return AWS_OP_ERR; - } - - buf->len = 0; - buf->capacity = capacity; - buf->allocator = allocator; - AWS_POSTCONDITION(aws_byte_buf_is_valid(buf)); - return AWS_OP_SUCCESS; -} - -int aws_byte_buf_init_copy(struct aws_byte_buf *dest, struct aws_allocator *allocator, const struct aws_byte_buf *src) { + return AWS_OP_ERR; + } + + buf->len = 0; + buf->capacity = capacity; + buf->allocator = allocator; + AWS_POSTCONDITION(aws_byte_buf_is_valid(buf)); + return AWS_OP_SUCCESS; +} + +int aws_byte_buf_init_copy(struct aws_byte_buf *dest, struct aws_allocator *allocator, const struct aws_byte_buf *src) { AWS_PRECONDITION(allocator); AWS_PRECONDITION(dest); AWS_ERROR_PRECONDITION(aws_byte_buf_is_valid(src)); - - if (!src->buffer) { - AWS_ZERO_STRUCT(*dest); - dest->allocator = allocator; - AWS_POSTCONDITION(aws_byte_buf_is_valid(dest)); - return AWS_OP_SUCCESS; - } - - *dest = *src; - dest->allocator = allocator; - dest->buffer = (uint8_t *)aws_mem_acquire(allocator, src->capacity); - if (dest->buffer == NULL) { - AWS_ZERO_STRUCT(*dest); - return AWS_OP_ERR; - } - memcpy(dest->buffer, src->buffer, src->len); - AWS_POSTCONDITION(aws_byte_buf_is_valid(dest)); - return AWS_OP_SUCCESS; -} - -bool aws_byte_buf_is_valid(const struct aws_byte_buf *const buf) { + + if (!src->buffer) { + AWS_ZERO_STRUCT(*dest); + dest->allocator = allocator; + AWS_POSTCONDITION(aws_byte_buf_is_valid(dest)); + return AWS_OP_SUCCESS; + } + + *dest = *src; + dest->allocator = allocator; + dest->buffer = (uint8_t *)aws_mem_acquire(allocator, src->capacity); + if (dest->buffer == NULL) { + AWS_ZERO_STRUCT(*dest); + return AWS_OP_ERR; + } + memcpy(dest->buffer, src->buffer, src->len); + AWS_POSTCONDITION(aws_byte_buf_is_valid(dest)); + return AWS_OP_SUCCESS; +} + +bool 
aws_byte_buf_is_valid(const struct aws_byte_buf *const buf) { return buf != NULL && ((buf->capacity == 0 && buf->len == 0 && buf->buffer == NULL) || (buf->capacity > 0 && buf->len <= buf->capacity && AWS_MEM_IS_WRITABLE(buf->buffer, buf->capacity))); -} - -bool aws_byte_cursor_is_valid(const struct aws_byte_cursor *cursor) { +} + +bool aws_byte_cursor_is_valid(const struct aws_byte_cursor *cursor) { return cursor != NULL && ((cursor->len == 0) || (cursor->len > 0 && cursor->ptr && AWS_MEM_IS_READABLE(cursor->ptr, cursor->len))); -} - +} + void aws_byte_buf_reset(struct aws_byte_buf *buf, bool zero_contents) { if (zero_contents) { aws_byte_buf_secure_zero(buf); @@ -73,33 +73,33 @@ void aws_byte_buf_reset(struct aws_byte_buf *buf, bool zero_contents) { buf->len = 0; } -void aws_byte_buf_clean_up(struct aws_byte_buf *buf) { - AWS_PRECONDITION(aws_byte_buf_is_valid(buf)); - if (buf->allocator && buf->buffer) { - aws_mem_release(buf->allocator, (void *)buf->buffer); - } - buf->allocator = NULL; - buf->buffer = NULL; - buf->len = 0; - buf->capacity = 0; -} - -void aws_byte_buf_secure_zero(struct aws_byte_buf *buf) { +void aws_byte_buf_clean_up(struct aws_byte_buf *buf) { + AWS_PRECONDITION(aws_byte_buf_is_valid(buf)); + if (buf->allocator && buf->buffer) { + aws_mem_release(buf->allocator, (void *)buf->buffer); + } + buf->allocator = NULL; + buf->buffer = NULL; + buf->len = 0; + buf->capacity = 0; +} + +void aws_byte_buf_secure_zero(struct aws_byte_buf *buf) { + AWS_PRECONDITION(aws_byte_buf_is_valid(buf)); + if (buf->buffer) { + aws_secure_zero(buf->buffer, buf->capacity); + } + buf->len = 0; + AWS_POSTCONDITION(aws_byte_buf_is_valid(buf)); +} + +void aws_byte_buf_clean_up_secure(struct aws_byte_buf *buf) { AWS_PRECONDITION(aws_byte_buf_is_valid(buf)); - if (buf->buffer) { - aws_secure_zero(buf->buffer, buf->capacity); - } - buf->len = 0; + aws_byte_buf_secure_zero(buf); + aws_byte_buf_clean_up(buf); AWS_POSTCONDITION(aws_byte_buf_is_valid(buf)); -} - -void 
aws_byte_buf_clean_up_secure(struct aws_byte_buf *buf) { - AWS_PRECONDITION(aws_byte_buf_is_valid(buf)); - aws_byte_buf_secure_zero(buf); - aws_byte_buf_clean_up(buf); - AWS_POSTCONDITION(aws_byte_buf_is_valid(buf)); -} - +} + bool aws_byte_buf_eq(const struct aws_byte_buf *const a, const struct aws_byte_buf *const b) { AWS_PRECONDITION(aws_byte_buf_is_valid(a)); AWS_PRECONDITION(aws_byte_buf_is_valid(b)); @@ -107,8 +107,8 @@ bool aws_byte_buf_eq(const struct aws_byte_buf *const a, const struct aws_byte_b AWS_POSTCONDITION(aws_byte_buf_is_valid(a)); AWS_POSTCONDITION(aws_byte_buf_is_valid(b)); return rval; -} - +} + bool aws_byte_buf_eq_ignore_case(const struct aws_byte_buf *const a, const struct aws_byte_buf *const b) { AWS_PRECONDITION(aws_byte_buf_is_valid(a)); AWS_PRECONDITION(aws_byte_buf_is_valid(b)); @@ -116,49 +116,49 @@ bool aws_byte_buf_eq_ignore_case(const struct aws_byte_buf *const a, const struc AWS_POSTCONDITION(aws_byte_buf_is_valid(a)); AWS_POSTCONDITION(aws_byte_buf_is_valid(b)); return rval; -} - +} + bool aws_byte_buf_eq_c_str(const struct aws_byte_buf *const buf, const char *const c_str) { AWS_PRECONDITION(aws_byte_buf_is_valid(buf)); AWS_PRECONDITION(c_str != NULL); bool rval = aws_array_eq_c_str(buf->buffer, buf->len, c_str); AWS_POSTCONDITION(aws_byte_buf_is_valid(buf)); return rval; -} - +} + bool aws_byte_buf_eq_c_str_ignore_case(const struct aws_byte_buf *const buf, const char *const c_str) { AWS_PRECONDITION(aws_byte_buf_is_valid(buf)); AWS_PRECONDITION(c_str != NULL); bool rval = aws_array_eq_c_str_ignore_case(buf->buffer, buf->len, c_str); AWS_POSTCONDITION(aws_byte_buf_is_valid(buf)); return rval; -} - -int aws_byte_buf_init_copy_from_cursor( - struct aws_byte_buf *dest, - struct aws_allocator *allocator, - struct aws_byte_cursor src) { +} + +int aws_byte_buf_init_copy_from_cursor( + struct aws_byte_buf *dest, + struct aws_allocator *allocator, + struct aws_byte_cursor src) { AWS_PRECONDITION(allocator); AWS_PRECONDITION(dest); 
AWS_ERROR_PRECONDITION(aws_byte_cursor_is_valid(&src)); - - AWS_ZERO_STRUCT(*dest); - - dest->buffer = (src.len > 0) ? (uint8_t *)aws_mem_acquire(allocator, src.len) : NULL; - if (src.len != 0 && dest->buffer == NULL) { - return AWS_OP_ERR; - } - - dest->len = src.len; - dest->capacity = src.len; - dest->allocator = allocator; - if (src.len > 0) { - memcpy(dest->buffer, src.ptr, src.len); - } - AWS_POSTCONDITION(aws_byte_buf_is_valid(dest)); - return AWS_OP_SUCCESS; -} - + + AWS_ZERO_STRUCT(*dest); + + dest->buffer = (src.len > 0) ? (uint8_t *)aws_mem_acquire(allocator, src.len) : NULL; + if (src.len != 0 && dest->buffer == NULL) { + return AWS_OP_ERR; + } + + dest->len = src.len; + dest->capacity = src.len; + dest->allocator = allocator; + if (src.len > 0) { + memcpy(dest->buffer, src.ptr, src.len); + } + AWS_POSTCONDITION(aws_byte_buf_is_valid(dest)); + return AWS_OP_SUCCESS; +} + int aws_byte_buf_init_cache_and_update_cursors(struct aws_byte_buf *dest, struct aws_allocator *allocator, ...) { AWS_PRECONDITION(allocator); AWS_PRECONDITION(dest); @@ -193,16 +193,16 @@ int aws_byte_buf_init_cache_and_update_cursors(struct aws_byte_buf *dest, struct return AWS_OP_SUCCESS; } -bool aws_byte_cursor_next_split( - const struct aws_byte_cursor *AWS_RESTRICT input_str, - char split_on, - struct aws_byte_cursor *AWS_RESTRICT substr) { - +bool aws_byte_cursor_next_split( + const struct aws_byte_cursor *AWS_RESTRICT input_str, + char split_on, + struct aws_byte_cursor *AWS_RESTRICT substr) { + AWS_PRECONDITION(aws_byte_cursor_is_valid(input_str)); - + /* If substr is zeroed-out, then this is the first run. 
*/ const bool first_run = substr->ptr == NULL; - + /* It's legal for input_str to be zeroed out: {.ptr=NULL, .len=0} * Deal with this case separately */ if (AWS_UNLIKELY(input_str->ptr == NULL)) { @@ -212,14 +212,14 @@ bool aws_byte_cursor_next_split( substr->len = 0; return true; } - + /* done */ - AWS_ZERO_STRUCT(*substr); - return false; - } - + AWS_ZERO_STRUCT(*substr); + return false; + } + /* Rest of function deals with non-NULL input_str->ptr */ - + if (first_run) { *substr = *input_str; } else { @@ -235,64 +235,64 @@ bool aws_byte_cursor_next_split( /* done */ AWS_ZERO_STRUCT(*substr); return false; - } + } /* update len to be remainder of the string */ substr->len = input_str->len - (substr->ptr - input_str->ptr); - } - + } + /* substr is now remainder of string, search for next split */ - uint8_t *new_location = memchr(substr->ptr, split_on, substr->len); - if (new_location) { - - /* Character found, update string length. */ - substr->len = new_location - substr->ptr; - } - + uint8_t *new_location = memchr(substr->ptr, split_on, substr->len); + if (new_location) { + + /* Character found, update string length. */ + substr->len = new_location - substr->ptr; + } + AWS_POSTCONDITION(aws_byte_cursor_is_valid(substr)); - return true; -} - -int aws_byte_cursor_split_on_char_n( - const struct aws_byte_cursor *AWS_RESTRICT input_str, - char split_on, - size_t n, - struct aws_array_list *AWS_RESTRICT output) { + return true; +} + +int aws_byte_cursor_split_on_char_n( + const struct aws_byte_cursor *AWS_RESTRICT input_str, + char split_on, + size_t n, + struct aws_array_list *AWS_RESTRICT output) { AWS_ASSERT(aws_byte_cursor_is_valid(input_str)); - AWS_ASSERT(output); - AWS_ASSERT(output->item_size >= sizeof(struct aws_byte_cursor)); - - size_t max_splits = n > 0 ? 
n : SIZE_MAX; - size_t split_count = 0; - - struct aws_byte_cursor substr; - AWS_ZERO_STRUCT(substr); - - /* Until we run out of substrs or hit the max split count, keep iterating and pushing into the array list. */ - while (split_count <= max_splits && aws_byte_cursor_next_split(input_str, split_on, &substr)) { - - if (split_count == max_splits) { - /* If this is the last split, take the rest of the string. */ - substr.len = input_str->len - (substr.ptr - input_str->ptr); - } - - if (AWS_UNLIKELY(aws_array_list_push_back(output, (const void *)&substr))) { - return AWS_OP_ERR; - } - ++split_count; - } - - return AWS_OP_SUCCESS; -} - -int aws_byte_cursor_split_on_char( - const struct aws_byte_cursor *AWS_RESTRICT input_str, - char split_on, - struct aws_array_list *AWS_RESTRICT output) { - - return aws_byte_cursor_split_on_char_n(input_str, split_on, 0, output); -} - + AWS_ASSERT(output); + AWS_ASSERT(output->item_size >= sizeof(struct aws_byte_cursor)); + + size_t max_splits = n > 0 ? n : SIZE_MAX; + size_t split_count = 0; + + struct aws_byte_cursor substr; + AWS_ZERO_STRUCT(substr); + + /* Until we run out of substrs or hit the max split count, keep iterating and pushing into the array list. */ + while (split_count <= max_splits && aws_byte_cursor_next_split(input_str, split_on, &substr)) { + + if (split_count == max_splits) { + /* If this is the last split, take the rest of the string. 
*/ + substr.len = input_str->len - (substr.ptr - input_str->ptr); + } + + if (AWS_UNLIKELY(aws_array_list_push_back(output, (const void *)&substr))) { + return AWS_OP_ERR; + } + ++split_count; + } + + return AWS_OP_SUCCESS; +} + +int aws_byte_cursor_split_on_char( + const struct aws_byte_cursor *AWS_RESTRICT input_str, + char split_on, + struct aws_array_list *AWS_RESTRICT output) { + + return aws_byte_cursor_split_on_char_n(input_str, split_on, 0, output); +} + int aws_byte_cursor_find_exact( const struct aws_byte_cursor *AWS_RESTRICT input_str, const struct aws_byte_cursor *AWS_RESTRICT to_find, @@ -331,66 +331,66 @@ int aws_byte_cursor_find_exact( return aws_raise_error(AWS_ERROR_STRING_MATCH_NOT_FOUND); } -int aws_byte_buf_cat(struct aws_byte_buf *dest, size_t number_of_args, ...) { +int aws_byte_buf_cat(struct aws_byte_buf *dest, size_t number_of_args, ...) { AWS_PRECONDITION(aws_byte_buf_is_valid(dest)); - - va_list ap; - va_start(ap, number_of_args); - - for (size_t i = 0; i < number_of_args; ++i) { - struct aws_byte_buf *buffer = va_arg(ap, struct aws_byte_buf *); - struct aws_byte_cursor cursor = aws_byte_cursor_from_buf(buffer); - - if (aws_byte_buf_append(dest, &cursor)) { - va_end(ap); + + va_list ap; + va_start(ap, number_of_args); + + for (size_t i = 0; i < number_of_args; ++i) { + struct aws_byte_buf *buffer = va_arg(ap, struct aws_byte_buf *); + struct aws_byte_cursor cursor = aws_byte_cursor_from_buf(buffer); + + if (aws_byte_buf_append(dest, &cursor)) { + va_end(ap); AWS_POSTCONDITION(aws_byte_buf_is_valid(dest)); - return AWS_OP_ERR; - } - } - - va_end(ap); + return AWS_OP_ERR; + } + } + + va_end(ap); AWS_POSTCONDITION(aws_byte_buf_is_valid(dest)); - return AWS_OP_SUCCESS; -} - -bool aws_byte_cursor_eq(const struct aws_byte_cursor *a, const struct aws_byte_cursor *b) { + return AWS_OP_SUCCESS; +} + +bool aws_byte_cursor_eq(const struct aws_byte_cursor *a, const struct aws_byte_cursor *b) { AWS_PRECONDITION(aws_byte_cursor_is_valid(a)); 
AWS_PRECONDITION(aws_byte_cursor_is_valid(b)); bool rv = aws_array_eq(a->ptr, a->len, b->ptr, b->len); AWS_POSTCONDITION(aws_byte_cursor_is_valid(a)); AWS_POSTCONDITION(aws_byte_cursor_is_valid(b)); return rv; -} - -bool aws_byte_cursor_eq_ignore_case(const struct aws_byte_cursor *a, const struct aws_byte_cursor *b) { +} + +bool aws_byte_cursor_eq_ignore_case(const struct aws_byte_cursor *a, const struct aws_byte_cursor *b) { AWS_PRECONDITION(aws_byte_cursor_is_valid(a)); AWS_PRECONDITION(aws_byte_cursor_is_valid(b)); bool rv = aws_array_eq_ignore_case(a->ptr, a->len, b->ptr, b->len); AWS_POSTCONDITION(aws_byte_cursor_is_valid(a)); AWS_POSTCONDITION(aws_byte_cursor_is_valid(b)); return rv; -} - -/* Every possible uint8_t value, lowercased */ +} + +/* Every possible uint8_t value, lowercased */ static const uint8_t s_tolower_table[] = { - 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, - 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, - 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 'a', - 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', - 'x', 'y', 'z', 91, 92, 93, 94, 95, 96, 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', - 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', 123, 124, 125, 126, 127, 128, 129, 130, 131, - 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, - 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, - 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, - 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, - 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 
- 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255}; + 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, + 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, + 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 'a', + 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', + 'x', 'y', 'z', 91, 92, 93, 94, 95, 96, 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', + 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', 123, 124, 125, 126, 127, 128, 129, 130, 131, + 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, + 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, + 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, + 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, + 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, + 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255}; AWS_STATIC_ASSERT(AWS_ARRAY_SIZE(s_tolower_table) == 256); - -const uint8_t *aws_lookup_table_to_lower_get(void) { - return s_tolower_table; -} - + +const uint8_t *aws_lookup_table_to_lower_get(void) { + return s_tolower_table; +} + bool aws_array_eq_ignore_case( const void *const array_a, const size_t len_a, @@ -400,128 +400,128 @@ bool aws_array_eq_ignore_case( (len_a == 0) || AWS_MEM_IS_READABLE(array_a, len_a), "Input array [array_a] must be readable up to [len_a]."); AWS_PRECONDITION( (len_b == 0) || AWS_MEM_IS_READABLE(array_b, len_b), "Input array [array_b] must be readable up to [len_b]."); - - if (len_a != len_b) { - return false; - } - - const uint8_t *bytes_a = array_a; - const 
uint8_t *bytes_b = array_b; - for (size_t i = 0; i < len_a; ++i) { - if (s_tolower_table[bytes_a[i]] != s_tolower_table[bytes_b[i]]) { - return false; - } - } - - return true; -} - + + if (len_a != len_b) { + return false; + } + + const uint8_t *bytes_a = array_a; + const uint8_t *bytes_b = array_b; + for (size_t i = 0; i < len_a; ++i) { + if (s_tolower_table[bytes_a[i]] != s_tolower_table[bytes_b[i]]) { + return false; + } + } + + return true; +} + bool aws_array_eq(const void *const array_a, const size_t len_a, const void *const array_b, const size_t len_b) { AWS_PRECONDITION( (len_a == 0) || AWS_MEM_IS_READABLE(array_a, len_a), "Input array [array_a] must be readable up to [len_a]."); AWS_PRECONDITION( (len_b == 0) || AWS_MEM_IS_READABLE(array_b, len_b), "Input array [array_b] must be readable up to [len_b]."); - - if (len_a != len_b) { - return false; - } - - if (len_a == 0) { - return true; - } - - return !memcmp(array_a, array_b, len_a); -} - + + if (len_a != len_b) { + return false; + } + + if (len_a == 0) { + return true; + } + + return !memcmp(array_a, array_b, len_a); +} + bool aws_array_eq_c_str_ignore_case(const void *const array, const size_t array_len, const char *const c_str) { AWS_PRECONDITION( array || (array_len == 0), "Either input pointer [array_a] mustn't be NULL or input [array_len] mustn't be zero."); AWS_PRECONDITION(c_str != NULL); - - /* Simpler implementation could have been: - * return aws_array_eq_ignore_case(array, array_len, c_str, strlen(c_str)); - * but that would have traversed c_str twice. - * This implementation traverses c_str just once. 
*/ - - const uint8_t *array_bytes = array; - const uint8_t *str_bytes = (const uint8_t *)c_str; - - for (size_t i = 0; i < array_len; ++i) { - uint8_t s = str_bytes[i]; - if (s == '\0') { - return false; - } - - if (s_tolower_table[array_bytes[i]] != s_tolower_table[s]) { - return false; - } - } - - return str_bytes[array_len] == '\0'; -} - + + /* Simpler implementation could have been: + * return aws_array_eq_ignore_case(array, array_len, c_str, strlen(c_str)); + * but that would have traversed c_str twice. + * This implementation traverses c_str just once. */ + + const uint8_t *array_bytes = array; + const uint8_t *str_bytes = (const uint8_t *)c_str; + + for (size_t i = 0; i < array_len; ++i) { + uint8_t s = str_bytes[i]; + if (s == '\0') { + return false; + } + + if (s_tolower_table[array_bytes[i]] != s_tolower_table[s]) { + return false; + } + } + + return str_bytes[array_len] == '\0'; +} + bool aws_array_eq_c_str(const void *const array, const size_t array_len, const char *const c_str) { AWS_PRECONDITION( array || (array_len == 0), "Either input pointer [array_a] mustn't be NULL or input [array_len] mustn't be zero."); AWS_PRECONDITION(c_str != NULL); - - /* Simpler implementation could have been: - * return aws_array_eq(array, array_len, c_str, strlen(c_str)); - * but that would have traversed c_str twice. - * This implementation traverses c_str just once. */ - - const uint8_t *array_bytes = array; - const uint8_t *str_bytes = (const uint8_t *)c_str; - - for (size_t i = 0; i < array_len; ++i) { - uint8_t s = str_bytes[i]; - if (s == '\0') { - return false; - } - - if (array_bytes[i] != s) { - return false; - } - } - - return str_bytes[array_len] == '\0'; -} - + + /* Simpler implementation could have been: + * return aws_array_eq(array, array_len, c_str, strlen(c_str)); + * but that would have traversed c_str twice. + * This implementation traverses c_str just once. 
*/ + + const uint8_t *array_bytes = array; + const uint8_t *str_bytes = (const uint8_t *)c_str; + + for (size_t i = 0; i < array_len; ++i) { + uint8_t s = str_bytes[i]; + if (s == '\0') { + return false; + } + + if (array_bytes[i] != s) { + return false; + } + } + + return str_bytes[array_len] == '\0'; +} + uint64_t aws_hash_array_ignore_case(const void *array, const size_t len) { AWS_PRECONDITION(AWS_MEM_IS_READABLE(array, len)); - /* FNV-1a: https://en.wikipedia.org/wiki/Fowler%E2%80%93Noll%E2%80%93Vo_hash_function */ - const uint64_t fnv_offset_basis = 0xcbf29ce484222325ULL; - const uint64_t fnv_prime = 0x100000001b3ULL; - - const uint8_t *i = array; - const uint8_t *end = i + len; - - uint64_t hash = fnv_offset_basis; - while (i != end) { - const uint8_t lower = s_tolower_table[*i++]; - hash ^= lower; + /* FNV-1a: https://en.wikipedia.org/wiki/Fowler%E2%80%93Noll%E2%80%93Vo_hash_function */ + const uint64_t fnv_offset_basis = 0xcbf29ce484222325ULL; + const uint64_t fnv_prime = 0x100000001b3ULL; + + const uint8_t *i = array; + const uint8_t *end = i + len; + + uint64_t hash = fnv_offset_basis; + while (i != end) { + const uint8_t lower = s_tolower_table[*i++]; + hash ^= lower; #ifdef CBMC # pragma CPROVER check push # pragma CPROVER check disable "unsigned-overflow" #endif - hash *= fnv_prime; + hash *= fnv_prime; #ifdef CBMC # pragma CPROVER check pop #endif - } - return hash; -} - -uint64_t aws_hash_byte_cursor_ptr_ignore_case(const void *item) { + } + return hash; +} + +uint64_t aws_hash_byte_cursor_ptr_ignore_case(const void *item) { AWS_PRECONDITION(aws_byte_cursor_is_valid(item)); const struct aws_byte_cursor *const cursor = item; uint64_t rval = aws_hash_array_ignore_case(cursor->ptr, cursor->len); AWS_POSTCONDITION(aws_byte_cursor_is_valid(item)); return rval; -} - +} + bool aws_byte_cursor_eq_byte_buf(const struct aws_byte_cursor *const a, const struct aws_byte_buf *const b) { AWS_PRECONDITION(aws_byte_cursor_is_valid(a)); 
AWS_PRECONDITION(aws_byte_buf_is_valid(b)); @@ -529,8 +529,8 @@ bool aws_byte_cursor_eq_byte_buf(const struct aws_byte_cursor *const a, const st AWS_POSTCONDITION(aws_byte_cursor_is_valid(a)); AWS_POSTCONDITION(aws_byte_buf_is_valid(b)); return rv; -} - +} + bool aws_byte_cursor_eq_byte_buf_ignore_case( const struct aws_byte_cursor *const a, const struct aws_byte_buf *const b) { @@ -540,176 +540,176 @@ bool aws_byte_cursor_eq_byte_buf_ignore_case( AWS_POSTCONDITION(aws_byte_cursor_is_valid(a)); AWS_POSTCONDITION(aws_byte_buf_is_valid(b)); return rv; -} - +} + bool aws_byte_cursor_eq_c_str(const struct aws_byte_cursor *const cursor, const char *const c_str) { AWS_PRECONDITION(aws_byte_cursor_is_valid(cursor)); AWS_PRECONDITION(c_str != NULL); bool rv = aws_array_eq_c_str(cursor->ptr, cursor->len, c_str); AWS_POSTCONDITION(aws_byte_cursor_is_valid(cursor)); return rv; -} - +} + bool aws_byte_cursor_eq_c_str_ignore_case(const struct aws_byte_cursor *const cursor, const char *const c_str) { AWS_PRECONDITION(aws_byte_cursor_is_valid(cursor)); AWS_PRECONDITION(c_str != NULL); bool rv = aws_array_eq_c_str_ignore_case(cursor->ptr, cursor->len, c_str); AWS_POSTCONDITION(aws_byte_cursor_is_valid(cursor)); return rv; -} - -int aws_byte_buf_append(struct aws_byte_buf *to, const struct aws_byte_cursor *from) { - AWS_PRECONDITION(aws_byte_buf_is_valid(to)); - AWS_PRECONDITION(aws_byte_cursor_is_valid(from)); - - if (to->capacity - to->len < from->len) { - AWS_POSTCONDITION(aws_byte_buf_is_valid(to)); - AWS_POSTCONDITION(aws_byte_cursor_is_valid(from)); - return aws_raise_error(AWS_ERROR_DEST_COPY_TOO_SMALL); - } - - if (from->len > 0) { - /* This assert teaches clang-tidy that from->ptr and to->buffer cannot be null in a non-empty buffers */ - AWS_ASSERT(from->ptr); - AWS_ASSERT(to->buffer); - memcpy(to->buffer + to->len, from->ptr, from->len); - to->len += from->len; - } - - AWS_POSTCONDITION(aws_byte_buf_is_valid(to)); - AWS_POSTCONDITION(aws_byte_cursor_is_valid(from)); - 
return AWS_OP_SUCCESS; -} - -int aws_byte_buf_append_with_lookup( - struct aws_byte_buf *AWS_RESTRICT to, - const struct aws_byte_cursor *AWS_RESTRICT from, - const uint8_t *lookup_table) { - AWS_PRECONDITION(aws_byte_buf_is_valid(to)); - AWS_PRECONDITION(aws_byte_cursor_is_valid(from)); +} + +int aws_byte_buf_append(struct aws_byte_buf *to, const struct aws_byte_cursor *from) { + AWS_PRECONDITION(aws_byte_buf_is_valid(to)); + AWS_PRECONDITION(aws_byte_cursor_is_valid(from)); + + if (to->capacity - to->len < from->len) { + AWS_POSTCONDITION(aws_byte_buf_is_valid(to)); + AWS_POSTCONDITION(aws_byte_cursor_is_valid(from)); + return aws_raise_error(AWS_ERROR_DEST_COPY_TOO_SMALL); + } + + if (from->len > 0) { + /* This assert teaches clang-tidy that from->ptr and to->buffer cannot be null in a non-empty buffers */ + AWS_ASSERT(from->ptr); + AWS_ASSERT(to->buffer); + memcpy(to->buffer + to->len, from->ptr, from->len); + to->len += from->len; + } + + AWS_POSTCONDITION(aws_byte_buf_is_valid(to)); + AWS_POSTCONDITION(aws_byte_cursor_is_valid(from)); + return AWS_OP_SUCCESS; +} + +int aws_byte_buf_append_with_lookup( + struct aws_byte_buf *AWS_RESTRICT to, + const struct aws_byte_cursor *AWS_RESTRICT from, + const uint8_t *lookup_table) { + AWS_PRECONDITION(aws_byte_buf_is_valid(to)); + AWS_PRECONDITION(aws_byte_cursor_is_valid(from)); AWS_PRECONDITION( AWS_MEM_IS_READABLE(lookup_table, 256), "Input array [lookup_table] must be at least 256 bytes long."); - - if (to->capacity - to->len < from->len) { - AWS_POSTCONDITION(aws_byte_buf_is_valid(to)); - AWS_POSTCONDITION(aws_byte_cursor_is_valid(from)); - return aws_raise_error(AWS_ERROR_DEST_COPY_TOO_SMALL); - } - - for (size_t i = 0; i < from->len; ++i) { - to->buffer[to->len + i] = lookup_table[from->ptr[i]]; - } - - if (aws_add_size_checked(to->len, from->len, &to->len)) { - return AWS_OP_ERR; - } - - AWS_POSTCONDITION(aws_byte_buf_is_valid(to)); - AWS_POSTCONDITION(aws_byte_cursor_is_valid(from)); - return AWS_OP_SUCCESS; 
-} - + + if (to->capacity - to->len < from->len) { + AWS_POSTCONDITION(aws_byte_buf_is_valid(to)); + AWS_POSTCONDITION(aws_byte_cursor_is_valid(from)); + return aws_raise_error(AWS_ERROR_DEST_COPY_TOO_SMALL); + } + + for (size_t i = 0; i < from->len; ++i) { + to->buffer[to->len + i] = lookup_table[from->ptr[i]]; + } + + if (aws_add_size_checked(to->len, from->len, &to->len)) { + return AWS_OP_ERR; + } + + AWS_POSTCONDITION(aws_byte_buf_is_valid(to)); + AWS_POSTCONDITION(aws_byte_cursor_is_valid(from)); + return AWS_OP_SUCCESS; +} + static int s_aws_byte_buf_append_dynamic( struct aws_byte_buf *to, const struct aws_byte_cursor *from, bool clear_released_memory) { - AWS_PRECONDITION(aws_byte_buf_is_valid(to)); - AWS_PRECONDITION(aws_byte_cursor_is_valid(from)); + AWS_PRECONDITION(aws_byte_buf_is_valid(to)); + AWS_PRECONDITION(aws_byte_cursor_is_valid(from)); AWS_ERROR_PRECONDITION(to->allocator); - - if (to->capacity - to->len < from->len) { - /* - * NewCapacity = Max(OldCapacity * 2, OldCapacity + MissingCapacity) - */ - size_t missing_capacity = from->len - (to->capacity - to->len); - - size_t required_capacity = 0; - if (aws_add_size_checked(to->capacity, missing_capacity, &required_capacity)) { - AWS_POSTCONDITION(aws_byte_buf_is_valid(to)); - AWS_POSTCONDITION(aws_byte_cursor_is_valid(from)); - return AWS_OP_ERR; - } - - /* - * It's ok if this overflows, just clamp to max possible. - * In theory this lets us still grow a buffer that's larger than 1/2 size_t space - * at least enough to accommodate the append. - */ - size_t growth_capacity = aws_add_size_saturating(to->capacity, to->capacity); - - size_t new_capacity = required_capacity; - if (new_capacity < growth_capacity) { - new_capacity = growth_capacity; - } - - /* - * Attempt to resize - we intentionally do not use reserve() in order to preserve - * the (unlikely) use case of from and to being the same buffer range. 
- */ - - /* - * Try the max, but if that fails and the required is smaller, try it in fallback - */ - uint8_t *new_buffer = aws_mem_acquire(to->allocator, new_capacity); - if (new_buffer == NULL) { - if (new_capacity > required_capacity) { - new_capacity = required_capacity; - new_buffer = aws_mem_acquire(to->allocator, new_capacity); - if (new_buffer == NULL) { - AWS_POSTCONDITION(aws_byte_buf_is_valid(to)); - AWS_POSTCONDITION(aws_byte_cursor_is_valid(from)); - return AWS_OP_ERR; - } - } else { - AWS_POSTCONDITION(aws_byte_buf_is_valid(to)); - AWS_POSTCONDITION(aws_byte_cursor_is_valid(from)); - return AWS_OP_ERR; - } - } - - /* - * Copy old buffer -> new buffer - */ - if (to->len > 0) { - memcpy(new_buffer, to->buffer, to->len); - } - /* - * Copy what we actually wanted to append in the first place - */ - if (from->len > 0) { - memcpy(new_buffer + to->len, from->ptr, from->len); - } + + if (to->capacity - to->len < from->len) { + /* + * NewCapacity = Max(OldCapacity * 2, OldCapacity + MissingCapacity) + */ + size_t missing_capacity = from->len - (to->capacity - to->len); + + size_t required_capacity = 0; + if (aws_add_size_checked(to->capacity, missing_capacity, &required_capacity)) { + AWS_POSTCONDITION(aws_byte_buf_is_valid(to)); + AWS_POSTCONDITION(aws_byte_cursor_is_valid(from)); + return AWS_OP_ERR; + } + + /* + * It's ok if this overflows, just clamp to max possible. + * In theory this lets us still grow a buffer that's larger than 1/2 size_t space + * at least enough to accommodate the append. + */ + size_t growth_capacity = aws_add_size_saturating(to->capacity, to->capacity); + + size_t new_capacity = required_capacity; + if (new_capacity < growth_capacity) { + new_capacity = growth_capacity; + } + + /* + * Attempt to resize - we intentionally do not use reserve() in order to preserve + * the (unlikely) use case of from and to being the same buffer range. 
+ */ + + /* + * Try the max, but if that fails and the required is smaller, try it in fallback + */ + uint8_t *new_buffer = aws_mem_acquire(to->allocator, new_capacity); + if (new_buffer == NULL) { + if (new_capacity > required_capacity) { + new_capacity = required_capacity; + new_buffer = aws_mem_acquire(to->allocator, new_capacity); + if (new_buffer == NULL) { + AWS_POSTCONDITION(aws_byte_buf_is_valid(to)); + AWS_POSTCONDITION(aws_byte_cursor_is_valid(from)); + return AWS_OP_ERR; + } + } else { + AWS_POSTCONDITION(aws_byte_buf_is_valid(to)); + AWS_POSTCONDITION(aws_byte_cursor_is_valid(from)); + return AWS_OP_ERR; + } + } + + /* + * Copy old buffer -> new buffer + */ + if (to->len > 0) { + memcpy(new_buffer, to->buffer, to->len); + } + /* + * Copy what we actually wanted to append in the first place + */ + if (from->len > 0) { + memcpy(new_buffer + to->len, from->ptr, from->len); + } if (clear_released_memory) { aws_secure_zero(to->buffer, to->capacity); } - /* - * Get rid of the old buffer - */ - aws_mem_release(to->allocator, to->buffer); - - /* - * Switch to the new buffer - */ - to->buffer = new_buffer; - to->capacity = new_capacity; - } else { - if (from->len > 0) { - /* This assert teaches clang-tidy that from->ptr and to->buffer cannot be null in a non-empty buffers */ - AWS_ASSERT(from->ptr); - AWS_ASSERT(to->buffer); - memcpy(to->buffer + to->len, from->ptr, from->len); - } - } - - to->len += from->len; - - AWS_POSTCONDITION(aws_byte_buf_is_valid(to)); - AWS_POSTCONDITION(aws_byte_cursor_is_valid(from)); - return AWS_OP_SUCCESS; -} - + /* + * Get rid of the old buffer + */ + aws_mem_release(to->allocator, to->buffer); + + /* + * Switch to the new buffer + */ + to->buffer = new_buffer; + to->capacity = new_capacity; + } else { + if (from->len > 0) { + /* This assert teaches clang-tidy that from->ptr and to->buffer cannot be null in a non-empty buffers */ + AWS_ASSERT(from->ptr); + AWS_ASSERT(to->buffer); + memcpy(to->buffer + to->len, from->ptr, 
from->len); + } + } + + to->len += from->len; + + AWS_POSTCONDITION(aws_byte_buf_is_valid(to)); + AWS_POSTCONDITION(aws_byte_cursor_is_valid(from)); + return AWS_OP_SUCCESS; +} + int aws_byte_buf_append_dynamic(struct aws_byte_buf *to, const struct aws_byte_cursor *from) { return s_aws_byte_buf_append_dynamic(to, from, false); } @@ -742,87 +742,87 @@ int aws_byte_buf_append_byte_dynamic_secure(struct aws_byte_buf *buffer, uint8_t return s_aws_byte_buf_append_byte_dynamic(buffer, value, true); } -int aws_byte_buf_reserve(struct aws_byte_buf *buffer, size_t requested_capacity) { +int aws_byte_buf_reserve(struct aws_byte_buf *buffer, size_t requested_capacity) { AWS_ERROR_PRECONDITION(buffer->allocator); AWS_ERROR_PRECONDITION(aws_byte_buf_is_valid(buffer)); - - if (requested_capacity <= buffer->capacity) { - AWS_POSTCONDITION(aws_byte_buf_is_valid(buffer)); - return AWS_OP_SUCCESS; - } - - if (aws_mem_realloc(buffer->allocator, (void **)&buffer->buffer, buffer->capacity, requested_capacity)) { - return AWS_OP_ERR; - } - - buffer->capacity = requested_capacity; - - AWS_POSTCONDITION(aws_byte_buf_is_valid(buffer)); - return AWS_OP_SUCCESS; -} - -int aws_byte_buf_reserve_relative(struct aws_byte_buf *buffer, size_t additional_length) { + + if (requested_capacity <= buffer->capacity) { + AWS_POSTCONDITION(aws_byte_buf_is_valid(buffer)); + return AWS_OP_SUCCESS; + } + + if (aws_mem_realloc(buffer->allocator, (void **)&buffer->buffer, buffer->capacity, requested_capacity)) { + return AWS_OP_ERR; + } + + buffer->capacity = requested_capacity; + + AWS_POSTCONDITION(aws_byte_buf_is_valid(buffer)); + return AWS_OP_SUCCESS; +} + +int aws_byte_buf_reserve_relative(struct aws_byte_buf *buffer, size_t additional_length) { AWS_ERROR_PRECONDITION(buffer->allocator); AWS_ERROR_PRECONDITION(aws_byte_buf_is_valid(buffer)); - - size_t requested_capacity = 0; - if (AWS_UNLIKELY(aws_add_size_checked(buffer->len, additional_length, &requested_capacity))) { - 
AWS_POSTCONDITION(aws_byte_buf_is_valid(buffer)); - return AWS_OP_ERR; - } - - return aws_byte_buf_reserve(buffer, requested_capacity); -} - -struct aws_byte_cursor aws_byte_cursor_right_trim_pred( - const struct aws_byte_cursor *source, - aws_byte_predicate_fn *predicate) { + + size_t requested_capacity = 0; + if (AWS_UNLIKELY(aws_add_size_checked(buffer->len, additional_length, &requested_capacity))) { + AWS_POSTCONDITION(aws_byte_buf_is_valid(buffer)); + return AWS_OP_ERR; + } + + return aws_byte_buf_reserve(buffer, requested_capacity); +} + +struct aws_byte_cursor aws_byte_cursor_right_trim_pred( + const struct aws_byte_cursor *source, + aws_byte_predicate_fn *predicate) { AWS_PRECONDITION(aws_byte_cursor_is_valid(source)); AWS_PRECONDITION(predicate != NULL); - struct aws_byte_cursor trimmed = *source; - - while (trimmed.len > 0 && predicate(*(trimmed.ptr + trimmed.len - 1))) { - --trimmed.len; - } + struct aws_byte_cursor trimmed = *source; + + while (trimmed.len > 0 && predicate(*(trimmed.ptr + trimmed.len - 1))) { + --trimmed.len; + } AWS_POSTCONDITION(aws_byte_cursor_is_valid(source)); AWS_POSTCONDITION(aws_byte_cursor_is_valid(&trimmed)); - return trimmed; -} - -struct aws_byte_cursor aws_byte_cursor_left_trim_pred( - const struct aws_byte_cursor *source, - aws_byte_predicate_fn *predicate) { + return trimmed; +} + +struct aws_byte_cursor aws_byte_cursor_left_trim_pred( + const struct aws_byte_cursor *source, + aws_byte_predicate_fn *predicate) { AWS_PRECONDITION(aws_byte_cursor_is_valid(source)); AWS_PRECONDITION(predicate != NULL); - struct aws_byte_cursor trimmed = *source; - - while (trimmed.len > 0 && predicate(*(trimmed.ptr))) { - --trimmed.len; - ++trimmed.ptr; - } + struct aws_byte_cursor trimmed = *source; + + while (trimmed.len > 0 && predicate(*(trimmed.ptr))) { + --trimmed.len; + ++trimmed.ptr; + } AWS_POSTCONDITION(aws_byte_cursor_is_valid(source)); AWS_POSTCONDITION(aws_byte_cursor_is_valid(&trimmed)); - return trimmed; -} - -struct 
aws_byte_cursor aws_byte_cursor_trim_pred( - const struct aws_byte_cursor *source, - aws_byte_predicate_fn *predicate) { + return trimmed; +} + +struct aws_byte_cursor aws_byte_cursor_trim_pred( + const struct aws_byte_cursor *source, + aws_byte_predicate_fn *predicate) { AWS_PRECONDITION(aws_byte_cursor_is_valid(source)); AWS_PRECONDITION(predicate != NULL); - struct aws_byte_cursor left_trimmed = aws_byte_cursor_left_trim_pred(source, predicate); + struct aws_byte_cursor left_trimmed = aws_byte_cursor_left_trim_pred(source, predicate); struct aws_byte_cursor dest = aws_byte_cursor_right_trim_pred(&left_trimmed, predicate); AWS_POSTCONDITION(aws_byte_cursor_is_valid(source)); AWS_POSTCONDITION(aws_byte_cursor_is_valid(&dest)); return dest; -} - -bool aws_byte_cursor_satisfies_pred(const struct aws_byte_cursor *source, aws_byte_predicate_fn *predicate) { - struct aws_byte_cursor trimmed = aws_byte_cursor_left_trim_pred(source, predicate); +} + +bool aws_byte_cursor_satisfies_pred(const struct aws_byte_cursor *source, aws_byte_predicate_fn *predicate) { + struct aws_byte_cursor trimmed = aws_byte_cursor_left_trim_pred(source, predicate); bool rval = (trimmed.len == 0); AWS_POSTCONDITION(aws_byte_cursor_is_valid(source)); return rval; -} +} int aws_byte_cursor_compare_lexical(const struct aws_byte_cursor *lhs, const struct aws_byte_cursor *rhs) { AWS_PRECONDITION(aws_byte_cursor_is_valid(lhs)); diff --git a/contrib/restricted/aws/aws-c-common/source/codegen.c b/contrib/restricted/aws/aws-c-common/source/codegen.c index ea6e95d548..1469e63f37 100644 --- a/contrib/restricted/aws/aws-c-common/source/codegen.c +++ b/contrib/restricted/aws/aws-c-common/source/codegen.c @@ -1,14 +1,14 @@ /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. - */ - -/* - * This file generates exportable implementations for inlineable functions. 
- */ - -#define AWS_STATIC_IMPL AWS_COMMON_API - + */ + +/* + * This file generates exportable implementations for inlineable functions. + */ + +#define AWS_STATIC_IMPL AWS_COMMON_API + #include <aws/common/array_list.inl> #include <aws/common/atomics.inl> #include <aws/common/byte_order.inl> diff --git a/contrib/restricted/aws/aws-c-common/source/command_line_parser.c b/contrib/restricted/aws/aws-c-common/source/command_line_parser.c index ccbe6d1820..bfb3f9f1aa 100644 --- a/contrib/restricted/aws/aws-c-common/source/command_line_parser.c +++ b/contrib/restricted/aws/aws-c-common/source/command_line_parser.c @@ -1,109 +1,109 @@ /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. - */ -#include <aws/common/command_line_parser.h> - -int aws_cli_optind = 1; -int aws_cli_opterr = -1; -int aws_cli_optopt = 0; - -const char *aws_cli_optarg = NULL; - -static const struct aws_cli_option *s_find_option_from_char( - const struct aws_cli_option *longopts, - char search_for, - int *longindex) { - int index = 0; - const struct aws_cli_option *option = &longopts[index]; - - while (option->val != 0 || option->name) { - if (option->val == search_for) { - if (longindex) { - *longindex = index; - } - return option; - } - - option = &longopts[++index]; - } - - return NULL; -} - -static const struct aws_cli_option *s_find_option_from_c_str( - const struct aws_cli_option *longopts, - const char *search_for, - int *longindex) { - int index = 0; - const struct aws_cli_option *option = &longopts[index]; - - while (option->name || option->val != 0) { - if (option->name) { - if (option->name && !strcmp(search_for, option->name)) { - if (longindex) { - *longindex = index; - } - return option; - } - } - - option = &longopts[++index]; - } - - return NULL; -} - -int aws_cli_getopt_long( - int argc, - char *const argv[], - const char *optstring, - const struct aws_cli_option *longopts, - int *longindex) { - aws_cli_optarg = NULL; - - 
if (aws_cli_optind >= argc) { - return -1; - } - - char first_char = argv[aws_cli_optind][0]; - char second_char = argv[aws_cli_optind][1]; - char *option_start = NULL; - const struct aws_cli_option *option = NULL; - - if (first_char == '-' && second_char != '-') { - option_start = &argv[aws_cli_optind][1]; - option = s_find_option_from_char(longopts, *option_start, longindex); - } else if (first_char == '-' && second_char == '-') { - option_start = &argv[aws_cli_optind][2]; - option = s_find_option_from_c_str(longopts, option_start, longindex); - } else { - return -1; - } - - aws_cli_optind++; - if (option) { - bool has_arg = false; - + */ +#include <aws/common/command_line_parser.h> + +int aws_cli_optind = 1; +int aws_cli_opterr = -1; +int aws_cli_optopt = 0; + +const char *aws_cli_optarg = NULL; + +static const struct aws_cli_option *s_find_option_from_char( + const struct aws_cli_option *longopts, + char search_for, + int *longindex) { + int index = 0; + const struct aws_cli_option *option = &longopts[index]; + + while (option->val != 0 || option->name) { + if (option->val == search_for) { + if (longindex) { + *longindex = index; + } + return option; + } + + option = &longopts[++index]; + } + + return NULL; +} + +static const struct aws_cli_option *s_find_option_from_c_str( + const struct aws_cli_option *longopts, + const char *search_for, + int *longindex) { + int index = 0; + const struct aws_cli_option *option = &longopts[index]; + + while (option->name || option->val != 0) { + if (option->name) { + if (option->name && !strcmp(search_for, option->name)) { + if (longindex) { + *longindex = index; + } + return option; + } + } + + option = &longopts[++index]; + } + + return NULL; +} + +int aws_cli_getopt_long( + int argc, + char *const argv[], + const char *optstring, + const struct aws_cli_option *longopts, + int *longindex) { + aws_cli_optarg = NULL; + + if (aws_cli_optind >= argc) { + return -1; + } + + char first_char = argv[aws_cli_optind][0]; + char 
second_char = argv[aws_cli_optind][1]; + char *option_start = NULL; + const struct aws_cli_option *option = NULL; + + if (first_char == '-' && second_char != '-') { + option_start = &argv[aws_cli_optind][1]; + option = s_find_option_from_char(longopts, *option_start, longindex); + } else if (first_char == '-' && second_char == '-') { + option_start = &argv[aws_cli_optind][2]; + option = s_find_option_from_c_str(longopts, option_start, longindex); + } else { + return -1; + } + + aws_cli_optind++; + if (option) { + bool has_arg = false; + char *opt_value = memchr(optstring, option->val, strlen(optstring)); if (!opt_value) { return '?'; - } - + } + if (opt_value[1] == ':') { has_arg = true; } - if (has_arg) { + if (has_arg) { if (aws_cli_optind >= argc) { - return '?'; - } - - aws_cli_optarg = argv[aws_cli_optind++]; - } - - return option->val; - } - - return '?'; -} + return '?'; + } + + aws_cli_optarg = argv[aws_cli_optind++]; + } + + return option->val; + } + + return '?'; +} diff --git a/contrib/restricted/aws/aws-c-common/source/common.c b/contrib/restricted/aws/aws-c-common/source/common.c index 88c5d262c8..af5b90cd2a 100644 --- a/contrib/restricted/aws/aws-c-common/source/common.c +++ b/contrib/restricted/aws/aws-c-common/source/common.c @@ -1,35 +1,35 @@ /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. 
- */ - -#include <aws/common/common.h> + */ + +#include <aws/common/common.h> #include <aws/common/logging.h> -#include <aws/common/math.h> +#include <aws/common/math.h> #include <aws/common/private/dlloads.h> - -#include <stdarg.h> -#include <stdlib.h> - -#ifdef _WIN32 -# include <Windows.h> + +#include <stdarg.h> +#include <stdlib.h> + +#ifdef _WIN32 +# include <Windows.h> #else # include <dlfcn.h> -#endif - -#ifdef __MACH__ -# include <CoreFoundation/CoreFoundation.h> -#endif - -/* turn off unused named parameter warning on msvc.*/ -#ifdef _MSC_VER -# pragma warning(push) -# pragma warning(disable : 4100) -#endif - +#endif + +#ifdef __MACH__ +# include <CoreFoundation/CoreFoundation.h> +#endif + +/* turn off unused named parameter warning on msvc.*/ +#ifdef _MSC_VER +# pragma warning(push) +# pragma warning(disable : 4100) +#endif + long (*g_set_mempolicy_ptr)(int, const unsigned long *, unsigned long) = NULL; void *g_libnuma_handle = NULL; - + void aws_secure_zero(void *pBuf, size_t bufsize) { #if defined(_WIN32) SecureZeroMemory(pBuf, bufsize); @@ -39,7 +39,7 @@ void aws_secure_zero(void *pBuf, size_t bufsize) { * * We'll try to work around this by using inline asm on GCC-like compilers, * and by exposing the buffer pointer in a volatile local pointer elsewhere. 
- */ + */ # if defined(__GNUC__) || defined(__clang__) memset(pBuf, 0, bufsize); /* This inline asm serves to convince the compiler that the buffer is (somehow) still @@ -65,143 +65,143 @@ void aws_secure_zero(void *pBuf, size_t bufsize) { memset(pVolBuf, 0, bufsize); # endif // #else not GCC/clang #endif // #else not windows -} - +} + #define AWS_DEFINE_ERROR_INFO_COMMON(C, ES) [(C)-0x0000] = AWS_DEFINE_ERROR_INFO(C, ES, "aws-c-common") -/* clang-format off */ -static struct aws_error_info errors[] = { - AWS_DEFINE_ERROR_INFO_COMMON( - AWS_ERROR_SUCCESS, - "Success."), - AWS_DEFINE_ERROR_INFO_COMMON( - AWS_ERROR_OOM, - "Out of memory."), - AWS_DEFINE_ERROR_INFO_COMMON( - AWS_ERROR_UNKNOWN, - "Unknown error."), - AWS_DEFINE_ERROR_INFO_COMMON( - AWS_ERROR_SHORT_BUFFER, - "Buffer is not large enough to hold result."), - AWS_DEFINE_ERROR_INFO_COMMON( - AWS_ERROR_OVERFLOW_DETECTED, - "Fixed size value overflow was detected."), - AWS_DEFINE_ERROR_INFO_COMMON( - AWS_ERROR_UNSUPPORTED_OPERATION, - "Unsupported operation."), - AWS_DEFINE_ERROR_INFO_COMMON( - AWS_ERROR_INVALID_BUFFER_SIZE, - "Invalid buffer size."), - AWS_DEFINE_ERROR_INFO_COMMON( - AWS_ERROR_INVALID_HEX_STR, - "Invalid hex string."), - AWS_DEFINE_ERROR_INFO_COMMON( - AWS_ERROR_INVALID_BASE64_STR, - "Invalid base64 string."), - AWS_DEFINE_ERROR_INFO_COMMON( - AWS_ERROR_INVALID_INDEX, - "Invalid index for list access."), - AWS_DEFINE_ERROR_INFO_COMMON( - AWS_ERROR_THREAD_INVALID_SETTINGS, - "Invalid thread settings."), - AWS_DEFINE_ERROR_INFO_COMMON( - AWS_ERROR_THREAD_INSUFFICIENT_RESOURCE, - "Insufficent resources for thread."), - AWS_DEFINE_ERROR_INFO_COMMON( - AWS_ERROR_THREAD_NO_PERMISSIONS, - "Insufficient permissions for thread operation."), - AWS_DEFINE_ERROR_INFO_COMMON( - AWS_ERROR_THREAD_NOT_JOINABLE, - "Thread not joinable."), - AWS_DEFINE_ERROR_INFO_COMMON( - AWS_ERROR_THREAD_NO_SUCH_THREAD_ID, - "No such thread ID."), - AWS_DEFINE_ERROR_INFO_COMMON( - AWS_ERROR_THREAD_DEADLOCK_DETECTED, - 
"Deadlock detected in thread."), - AWS_DEFINE_ERROR_INFO_COMMON( - AWS_ERROR_MUTEX_NOT_INIT, - "Mutex not initialized."), - AWS_DEFINE_ERROR_INFO_COMMON( - AWS_ERROR_MUTEX_TIMEOUT, - "Mutex operation timed out."), - AWS_DEFINE_ERROR_INFO_COMMON( - AWS_ERROR_MUTEX_CALLER_NOT_OWNER, - "The caller of a mutex operation was not the owner."), - AWS_DEFINE_ERROR_INFO_COMMON( - AWS_ERROR_MUTEX_FAILED, - "Mutex operation failed."), - AWS_DEFINE_ERROR_INFO_COMMON( - AWS_ERROR_COND_VARIABLE_INIT_FAILED, - "Condition variable initialization failed."), - AWS_DEFINE_ERROR_INFO_COMMON( - AWS_ERROR_COND_VARIABLE_TIMED_OUT, - "Condition variable wait timed out."), - AWS_DEFINE_ERROR_INFO_COMMON( - AWS_ERROR_COND_VARIABLE_ERROR_UNKNOWN, - "Condition variable unknown error."), - AWS_DEFINE_ERROR_INFO_COMMON( - AWS_ERROR_CLOCK_FAILURE, - "Clock operation failed."), - AWS_DEFINE_ERROR_INFO_COMMON( - AWS_ERROR_LIST_EMPTY, - "Empty list."), - AWS_DEFINE_ERROR_INFO_COMMON( - AWS_ERROR_DEST_COPY_TOO_SMALL, - "Destination of copy is too small."), - AWS_DEFINE_ERROR_INFO_COMMON( - AWS_ERROR_LIST_EXCEEDS_MAX_SIZE, - "A requested operation on a list would exceed it's max size."), - AWS_DEFINE_ERROR_INFO_COMMON( - AWS_ERROR_LIST_STATIC_MODE_CANT_SHRINK, - "Attempt to shrink a list in static mode."), - AWS_DEFINE_ERROR_INFO_COMMON( - AWS_ERROR_PRIORITY_QUEUE_FULL, - "Attempt to add items to a full preallocated queue in static mode."), - AWS_DEFINE_ERROR_INFO_COMMON( - AWS_ERROR_PRIORITY_QUEUE_EMPTY, - "Attempt to pop an item from an empty queue."), - AWS_DEFINE_ERROR_INFO_COMMON( - AWS_ERROR_PRIORITY_QUEUE_BAD_NODE, - "Bad node handle passed to remove."), - AWS_DEFINE_ERROR_INFO_COMMON( - AWS_ERROR_HASHTBL_ITEM_NOT_FOUND, - "Item not found in hash table."), - AWS_DEFINE_ERROR_INFO_COMMON( - AWS_ERROR_INVALID_DATE_STR, - "Date string is invalid and cannot be parsed." - ), - AWS_DEFINE_ERROR_INFO_COMMON( - AWS_ERROR_INVALID_ARGUMENT, - "An invalid argument was passed to a function." 
- ), - AWS_DEFINE_ERROR_INFO_COMMON( - AWS_ERROR_RANDOM_GEN_FAILED, - "A call to the random number generator failed. Retry later." - ), - AWS_DEFINE_ERROR_INFO_COMMON( - AWS_ERROR_MALFORMED_INPUT_STRING, - "An input string was passed to a parser and the string was incorrectly formatted." - ), - AWS_DEFINE_ERROR_INFO_COMMON( - AWS_ERROR_UNIMPLEMENTED, - "A function was called, but is not implemented." - ), - AWS_DEFINE_ERROR_INFO_COMMON( - AWS_ERROR_INVALID_STATE, - "An invalid state was encountered." - ), - AWS_DEFINE_ERROR_INFO_COMMON( - AWS_ERROR_ENVIRONMENT_GET, - "System call failure when getting an environment variable." - ), - AWS_DEFINE_ERROR_INFO_COMMON( - AWS_ERROR_ENVIRONMENT_SET, - "System call failure when setting an environment variable." - ), - AWS_DEFINE_ERROR_INFO_COMMON( - AWS_ERROR_ENVIRONMENT_UNSET, - "System call failure when unsetting an environment variable." - ), +/* clang-format off */ +static struct aws_error_info errors[] = { + AWS_DEFINE_ERROR_INFO_COMMON( + AWS_ERROR_SUCCESS, + "Success."), + AWS_DEFINE_ERROR_INFO_COMMON( + AWS_ERROR_OOM, + "Out of memory."), + AWS_DEFINE_ERROR_INFO_COMMON( + AWS_ERROR_UNKNOWN, + "Unknown error."), + AWS_DEFINE_ERROR_INFO_COMMON( + AWS_ERROR_SHORT_BUFFER, + "Buffer is not large enough to hold result."), + AWS_DEFINE_ERROR_INFO_COMMON( + AWS_ERROR_OVERFLOW_DETECTED, + "Fixed size value overflow was detected."), + AWS_DEFINE_ERROR_INFO_COMMON( + AWS_ERROR_UNSUPPORTED_OPERATION, + "Unsupported operation."), + AWS_DEFINE_ERROR_INFO_COMMON( + AWS_ERROR_INVALID_BUFFER_SIZE, + "Invalid buffer size."), + AWS_DEFINE_ERROR_INFO_COMMON( + AWS_ERROR_INVALID_HEX_STR, + "Invalid hex string."), + AWS_DEFINE_ERROR_INFO_COMMON( + AWS_ERROR_INVALID_BASE64_STR, + "Invalid base64 string."), + AWS_DEFINE_ERROR_INFO_COMMON( + AWS_ERROR_INVALID_INDEX, + "Invalid index for list access."), + AWS_DEFINE_ERROR_INFO_COMMON( + AWS_ERROR_THREAD_INVALID_SETTINGS, + "Invalid thread settings."), + AWS_DEFINE_ERROR_INFO_COMMON( + 
AWS_ERROR_THREAD_INSUFFICIENT_RESOURCE, + "Insufficent resources for thread."), + AWS_DEFINE_ERROR_INFO_COMMON( + AWS_ERROR_THREAD_NO_PERMISSIONS, + "Insufficient permissions for thread operation."), + AWS_DEFINE_ERROR_INFO_COMMON( + AWS_ERROR_THREAD_NOT_JOINABLE, + "Thread not joinable."), + AWS_DEFINE_ERROR_INFO_COMMON( + AWS_ERROR_THREAD_NO_SUCH_THREAD_ID, + "No such thread ID."), + AWS_DEFINE_ERROR_INFO_COMMON( + AWS_ERROR_THREAD_DEADLOCK_DETECTED, + "Deadlock detected in thread."), + AWS_DEFINE_ERROR_INFO_COMMON( + AWS_ERROR_MUTEX_NOT_INIT, + "Mutex not initialized."), + AWS_DEFINE_ERROR_INFO_COMMON( + AWS_ERROR_MUTEX_TIMEOUT, + "Mutex operation timed out."), + AWS_DEFINE_ERROR_INFO_COMMON( + AWS_ERROR_MUTEX_CALLER_NOT_OWNER, + "The caller of a mutex operation was not the owner."), + AWS_DEFINE_ERROR_INFO_COMMON( + AWS_ERROR_MUTEX_FAILED, + "Mutex operation failed."), + AWS_DEFINE_ERROR_INFO_COMMON( + AWS_ERROR_COND_VARIABLE_INIT_FAILED, + "Condition variable initialization failed."), + AWS_DEFINE_ERROR_INFO_COMMON( + AWS_ERROR_COND_VARIABLE_TIMED_OUT, + "Condition variable wait timed out."), + AWS_DEFINE_ERROR_INFO_COMMON( + AWS_ERROR_COND_VARIABLE_ERROR_UNKNOWN, + "Condition variable unknown error."), + AWS_DEFINE_ERROR_INFO_COMMON( + AWS_ERROR_CLOCK_FAILURE, + "Clock operation failed."), + AWS_DEFINE_ERROR_INFO_COMMON( + AWS_ERROR_LIST_EMPTY, + "Empty list."), + AWS_DEFINE_ERROR_INFO_COMMON( + AWS_ERROR_DEST_COPY_TOO_SMALL, + "Destination of copy is too small."), + AWS_DEFINE_ERROR_INFO_COMMON( + AWS_ERROR_LIST_EXCEEDS_MAX_SIZE, + "A requested operation on a list would exceed it's max size."), + AWS_DEFINE_ERROR_INFO_COMMON( + AWS_ERROR_LIST_STATIC_MODE_CANT_SHRINK, + "Attempt to shrink a list in static mode."), + AWS_DEFINE_ERROR_INFO_COMMON( + AWS_ERROR_PRIORITY_QUEUE_FULL, + "Attempt to add items to a full preallocated queue in static mode."), + AWS_DEFINE_ERROR_INFO_COMMON( + AWS_ERROR_PRIORITY_QUEUE_EMPTY, + "Attempt to pop an item from an empty 
queue."), + AWS_DEFINE_ERROR_INFO_COMMON( + AWS_ERROR_PRIORITY_QUEUE_BAD_NODE, + "Bad node handle passed to remove."), + AWS_DEFINE_ERROR_INFO_COMMON( + AWS_ERROR_HASHTBL_ITEM_NOT_FOUND, + "Item not found in hash table."), + AWS_DEFINE_ERROR_INFO_COMMON( + AWS_ERROR_INVALID_DATE_STR, + "Date string is invalid and cannot be parsed." + ), + AWS_DEFINE_ERROR_INFO_COMMON( + AWS_ERROR_INVALID_ARGUMENT, + "An invalid argument was passed to a function." + ), + AWS_DEFINE_ERROR_INFO_COMMON( + AWS_ERROR_RANDOM_GEN_FAILED, + "A call to the random number generator failed. Retry later." + ), + AWS_DEFINE_ERROR_INFO_COMMON( + AWS_ERROR_MALFORMED_INPUT_STRING, + "An input string was passed to a parser and the string was incorrectly formatted." + ), + AWS_DEFINE_ERROR_INFO_COMMON( + AWS_ERROR_UNIMPLEMENTED, + "A function was called, but is not implemented." + ), + AWS_DEFINE_ERROR_INFO_COMMON( + AWS_ERROR_INVALID_STATE, + "An invalid state was encountered." + ), + AWS_DEFINE_ERROR_INFO_COMMON( + AWS_ERROR_ENVIRONMENT_GET, + "System call failure when getting an environment variable." + ), + AWS_DEFINE_ERROR_INFO_COMMON( + AWS_ERROR_ENVIRONMENT_SET, + "System call failure when setting an environment variable." + ), + AWS_DEFINE_ERROR_INFO_COMMON( + AWS_ERROR_ENVIRONMENT_UNSET, + "System call failure when unsetting an environment variable." 
+ ), AWS_DEFINE_ERROR_INFO_COMMON( AWS_ERROR_SYS_CALL_FAILURE, "System call failure"), @@ -226,14 +226,14 @@ static struct aws_error_info errors[] = { AWS_DEFINE_ERROR_INFO_COMMON( AWS_ERROR_DIVIDE_BY_ZERO, "Attempt to divide a number by zero."), -}; -/* clang-format on */ - -static struct aws_error_info_list s_list = { - .error_list = errors, - .count = AWS_ARRAY_SIZE(errors), -}; - +}; +/* clang-format on */ + +static struct aws_error_info_list s_list = { + .error_list = errors, + .count = AWS_ARRAY_SIZE(errors), +}; + static struct aws_log_subject_info s_common_log_subject_infos[] = { DEFINE_LOG_SUBJECT_INFO( AWS_LS_COMMON_GENERAL, @@ -260,7 +260,7 @@ void aws_common_library_init(struct aws_allocator *allocator) { if (!s_common_library_initialized) { s_common_library_initialized = true; - aws_register_error_info(&s_list); + aws_register_error_info(&s_list); aws_register_log_subject_info_list(&s_common_log_subject_list); /* NUMA is funky and we can't rely on libnuma.so being available. 
We also don't want to take a hard dependency on it, @@ -280,9 +280,9 @@ void aws_common_library_init(struct aws_allocator *allocator) { AWS_LOGF_INFO(AWS_LS_COMMON_GENERAL, "static: libnuma.so failed to load"); } #endif - } -} - + } +} + void aws_common_library_clean_up(void) { if (s_common_library_initialized) { s_common_library_initialized = false; @@ -294,8 +294,8 @@ void aws_common_library_clean_up(void) { } #endif } -} - +} + void aws_common_fatal_assert_library_initialized(void) { if (!s_common_library_initialized) { fprintf( @@ -305,6 +305,6 @@ void aws_common_fatal_assert_library_initialized(void) { } } -#ifdef _MSC_VER -# pragma warning(pop) -#endif +#ifdef _MSC_VER +# pragma warning(pop) +#endif diff --git a/contrib/restricted/aws/aws-c-common/source/condition_variable.c b/contrib/restricted/aws/aws-c-common/source/condition_variable.c index 6d67dbbeaa..88b6f501b6 100644 --- a/contrib/restricted/aws/aws-c-common/source/condition_variable.c +++ b/contrib/restricted/aws/aws-c-common/source/condition_variable.c @@ -1,35 +1,35 @@ /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. 
- */ - -#include <aws/common/condition_variable.h> - -int aws_condition_variable_wait_pred( - struct aws_condition_variable *condition_variable, - struct aws_mutex *mutex, - aws_condition_predicate_fn *pred, - void *pred_ctx) { - - int err_code = 0; - while (!err_code && !pred(pred_ctx)) { - err_code = aws_condition_variable_wait(condition_variable, mutex); - } - - return err_code; -} - -int aws_condition_variable_wait_for_pred( - struct aws_condition_variable *condition_variable, - struct aws_mutex *mutex, - int64_t time_to_wait, - aws_condition_predicate_fn *pred, - void *pred_ctx) { - - int err_code = 0; - while (!err_code && !pred(pred_ctx)) { - err_code = aws_condition_variable_wait_for(condition_variable, mutex, time_to_wait); - } - - return err_code; -} + */ + +#include <aws/common/condition_variable.h> + +int aws_condition_variable_wait_pred( + struct aws_condition_variable *condition_variable, + struct aws_mutex *mutex, + aws_condition_predicate_fn *pred, + void *pred_ctx) { + + int err_code = 0; + while (!err_code && !pred(pred_ctx)) { + err_code = aws_condition_variable_wait(condition_variable, mutex); + } + + return err_code; +} + +int aws_condition_variable_wait_for_pred( + struct aws_condition_variable *condition_variable, + struct aws_mutex *mutex, + int64_t time_to_wait, + aws_condition_predicate_fn *pred, + void *pred_ctx) { + + int err_code = 0; + while (!err_code && !pred(pred_ctx)) { + err_code = aws_condition_variable_wait_for(condition_variable, mutex, time_to_wait); + } + + return err_code; +} diff --git a/contrib/restricted/aws/aws-c-common/source/date_time.c b/contrib/restricted/aws/aws-c-common/source/date_time.c index 8d08e57ad8..40a224490e 100644 --- a/contrib/restricted/aws/aws-c-common/source/date_time.c +++ b/contrib/restricted/aws/aws-c-common/source/date_time.c @@ -1,807 +1,807 @@ /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. 
- */ -#include <aws/common/date_time.h> - -#include <aws/common/array_list.h> -#include <aws/common/byte_buf.h> -#include <aws/common/byte_order.h> -#include <aws/common/clock.h> -#include <aws/common/string.h> -#include <aws/common/time.h> - -#include <ctype.h> - -static const char *RFC822_DATE_FORMAT_STR_MINUS_Z = "%a, %d %b %Y %H:%M:%S GMT"; -static const char *RFC822_DATE_FORMAT_STR_WITH_Z = "%a, %d %b %Y %H:%M:%S %Z"; -static const char *RFC822_SHORT_DATE_FORMAT_STR = "%a, %d %b %Y"; -static const char *ISO_8601_LONG_DATE_FORMAT_STR = "%Y-%m-%dT%H:%M:%SZ"; -static const char *ISO_8601_SHORT_DATE_FORMAT_STR = "%Y-%m-%d"; -static const char *ISO_8601_LONG_BASIC_DATE_FORMAT_STR = "%Y%m%dT%H%M%SZ"; -static const char *ISO_8601_SHORT_BASIC_DATE_FORMAT_STR = "%Y%m%d"; - -#define STR_TRIPLET_TO_INDEX(str) \ - (((uint32_t)(uint8_t)tolower((str)[0]) << 0) | ((uint32_t)(uint8_t)tolower((str)[1]) << 8) | \ - ((uint32_t)(uint8_t)tolower((str)[2]) << 16)) - -static uint32_t s_jan = 0; -static uint32_t s_feb = 0; -static uint32_t s_mar = 0; -static uint32_t s_apr = 0; -static uint32_t s_may = 0; -static uint32_t s_jun = 0; -static uint32_t s_jul = 0; -static uint32_t s_aug = 0; -static uint32_t s_sep = 0; -static uint32_t s_oct = 0; -static uint32_t s_nov = 0; -static uint32_t s_dec = 0; - -static uint32_t s_utc = 0; -static uint32_t s_gmt = 0; - -static void s_check_init_str_to_int(void) { - if (!s_jan) { - s_jan = STR_TRIPLET_TO_INDEX("jan"); - s_feb = STR_TRIPLET_TO_INDEX("feb"); - s_mar = STR_TRIPLET_TO_INDEX("mar"); - s_apr = STR_TRIPLET_TO_INDEX("apr"); - s_may = STR_TRIPLET_TO_INDEX("may"); - s_jun = STR_TRIPLET_TO_INDEX("jun"); - s_jul = STR_TRIPLET_TO_INDEX("jul"); - s_aug = STR_TRIPLET_TO_INDEX("aug"); - s_sep = STR_TRIPLET_TO_INDEX("sep"); - s_oct = STR_TRIPLET_TO_INDEX("oct"); - s_nov = STR_TRIPLET_TO_INDEX("nov"); - s_dec = STR_TRIPLET_TO_INDEX("dec"); - s_utc = STR_TRIPLET_TO_INDEX("utc"); - s_gmt = STR_TRIPLET_TO_INDEX("gmt"); - } -} - -/* Get the 0-11 monthy 
number from a string representing Month. Case insensitive and will stop on abbreviation*/ -static int get_month_number_from_str(const char *time_string, size_t start_index, size_t stop_index) { - s_check_init_str_to_int(); - - if (stop_index - start_index < 3) { - return -1; - } - - /* This AND forces the string to lowercase (assuming ASCII) */ - uint32_t comp_val = STR_TRIPLET_TO_INDEX(time_string + start_index); - - /* this can't be a switch, because I can't make it a constant expression. */ - if (s_jan == comp_val) { - return 0; - } - - if (s_feb == comp_val) { - return 1; - } - - if (s_mar == comp_val) { - return 2; - } - - if (s_apr == comp_val) { - return 3; - } - - if (s_may == comp_val) { - return 4; - } - - if (s_jun == comp_val) { - return 5; - } - - if (s_jul == comp_val) { - return 6; - } - - if (s_aug == comp_val) { - return 7; - } - - if (s_sep == comp_val) { - return 8; - } - - if (s_oct == comp_val) { - return 9; - } - - if (s_nov == comp_val) { - return 10; - } - - if (s_dec == comp_val) { - return 11; - } - - return -1; -} - -/* Detects whether or not the passed in timezone string is a UTC zone. 
*/ -static bool is_utc_time_zone(const char *str) { - s_check_init_str_to_int(); - - size_t len = strlen(str); - - if (len > 0) { - if (str[0] == 'Z') { - return true; - } - - /* offsets count since their usable */ - if (len == 5 && (str[0] == '+' || str[0] == '-')) { - return true; - } - - if (len == 2) { - return tolower(str[0]) == 'u' && tolower(str[1]) == 't'; - } - - if (len < 3) { - return false; - } - - uint32_t comp_val = STR_TRIPLET_TO_INDEX(str); - - if (comp_val == s_utc || comp_val == s_gmt) { - return true; - } - } - - return false; -} - -struct tm s_get_time_struct(struct aws_date_time *dt, bool local_time) { - struct tm time; - AWS_ZERO_STRUCT(time); - if (local_time) { - aws_localtime(dt->timestamp, &time); - } else { - aws_gmtime(dt->timestamp, &time); - } - - return time; -} - -void aws_date_time_init_now(struct aws_date_time *dt) { - uint64_t current_time = 0; - aws_sys_clock_get_ticks(¤t_time); - dt->timestamp = (time_t)aws_timestamp_convert(current_time, AWS_TIMESTAMP_NANOS, AWS_TIMESTAMP_SECS, NULL); - dt->gmt_time = s_get_time_struct(dt, false); - dt->local_time = s_get_time_struct(dt, true); -} - -void aws_date_time_init_epoch_millis(struct aws_date_time *dt, uint64_t ms_since_epoch) { - dt->timestamp = (time_t)(ms_since_epoch / AWS_TIMESTAMP_MILLIS); - dt->gmt_time = s_get_time_struct(dt, false); - dt->local_time = s_get_time_struct(dt, true); -} - -void aws_date_time_init_epoch_secs(struct aws_date_time *dt, double sec_ms) { - dt->timestamp = (time_t)sec_ms; - dt->gmt_time = s_get_time_struct(dt, false); - dt->local_time = s_get_time_struct(dt, true); -} - -enum parser_state { - ON_WEEKDAY, - ON_SPACE_DELIM, - ON_YEAR, - ON_MONTH, - ON_MONTH_DAY, - ON_HOUR, - ON_MINUTE, - ON_SECOND, - ON_TZ, - FINISHED, -}; - -static int s_parse_iso_8601_basic(const struct aws_byte_cursor *date_str_cursor, struct tm *parsed_time) { - size_t index = 0; - size_t state_start_index = 0; - enum parser_state state = ON_YEAR; - bool error = false; - - 
AWS_ZERO_STRUCT(*parsed_time); - - while (state < FINISHED && !error && index < date_str_cursor->len) { - char c = date_str_cursor->ptr[index]; - size_t sub_index = index - state_start_index; - switch (state) { - case ON_YEAR: + */ +#include <aws/common/date_time.h> + +#include <aws/common/array_list.h> +#include <aws/common/byte_buf.h> +#include <aws/common/byte_order.h> +#include <aws/common/clock.h> +#include <aws/common/string.h> +#include <aws/common/time.h> + +#include <ctype.h> + +static const char *RFC822_DATE_FORMAT_STR_MINUS_Z = "%a, %d %b %Y %H:%M:%S GMT"; +static const char *RFC822_DATE_FORMAT_STR_WITH_Z = "%a, %d %b %Y %H:%M:%S %Z"; +static const char *RFC822_SHORT_DATE_FORMAT_STR = "%a, %d %b %Y"; +static const char *ISO_8601_LONG_DATE_FORMAT_STR = "%Y-%m-%dT%H:%M:%SZ"; +static const char *ISO_8601_SHORT_DATE_FORMAT_STR = "%Y-%m-%d"; +static const char *ISO_8601_LONG_BASIC_DATE_FORMAT_STR = "%Y%m%dT%H%M%SZ"; +static const char *ISO_8601_SHORT_BASIC_DATE_FORMAT_STR = "%Y%m%d"; + +#define STR_TRIPLET_TO_INDEX(str) \ + (((uint32_t)(uint8_t)tolower((str)[0]) << 0) | ((uint32_t)(uint8_t)tolower((str)[1]) << 8) | \ + ((uint32_t)(uint8_t)tolower((str)[2]) << 16)) + +static uint32_t s_jan = 0; +static uint32_t s_feb = 0; +static uint32_t s_mar = 0; +static uint32_t s_apr = 0; +static uint32_t s_may = 0; +static uint32_t s_jun = 0; +static uint32_t s_jul = 0; +static uint32_t s_aug = 0; +static uint32_t s_sep = 0; +static uint32_t s_oct = 0; +static uint32_t s_nov = 0; +static uint32_t s_dec = 0; + +static uint32_t s_utc = 0; +static uint32_t s_gmt = 0; + +static void s_check_init_str_to_int(void) { + if (!s_jan) { + s_jan = STR_TRIPLET_TO_INDEX("jan"); + s_feb = STR_TRIPLET_TO_INDEX("feb"); + s_mar = STR_TRIPLET_TO_INDEX("mar"); + s_apr = STR_TRIPLET_TO_INDEX("apr"); + s_may = STR_TRIPLET_TO_INDEX("may"); + s_jun = STR_TRIPLET_TO_INDEX("jun"); + s_jul = STR_TRIPLET_TO_INDEX("jul"); + s_aug = STR_TRIPLET_TO_INDEX("aug"); + s_sep = STR_TRIPLET_TO_INDEX("sep"); 
+ s_oct = STR_TRIPLET_TO_INDEX("oct"); + s_nov = STR_TRIPLET_TO_INDEX("nov"); + s_dec = STR_TRIPLET_TO_INDEX("dec"); + s_utc = STR_TRIPLET_TO_INDEX("utc"); + s_gmt = STR_TRIPLET_TO_INDEX("gmt"); + } +} + +/* Get the 0-11 monthy number from a string representing Month. Case insensitive and will stop on abbreviation*/ +static int get_month_number_from_str(const char *time_string, size_t start_index, size_t stop_index) { + s_check_init_str_to_int(); + + if (stop_index - start_index < 3) { + return -1; + } + + /* This AND forces the string to lowercase (assuming ASCII) */ + uint32_t comp_val = STR_TRIPLET_TO_INDEX(time_string + start_index); + + /* this can't be a switch, because I can't make it a constant expression. */ + if (s_jan == comp_val) { + return 0; + } + + if (s_feb == comp_val) { + return 1; + } + + if (s_mar == comp_val) { + return 2; + } + + if (s_apr == comp_val) { + return 3; + } + + if (s_may == comp_val) { + return 4; + } + + if (s_jun == comp_val) { + return 5; + } + + if (s_jul == comp_val) { + return 6; + } + + if (s_aug == comp_val) { + return 7; + } + + if (s_sep == comp_val) { + return 8; + } + + if (s_oct == comp_val) { + return 9; + } + + if (s_nov == comp_val) { + return 10; + } + + if (s_dec == comp_val) { + return 11; + } + + return -1; +} + +/* Detects whether or not the passed in timezone string is a UTC zone. 
*/ +static bool is_utc_time_zone(const char *str) { + s_check_init_str_to_int(); + + size_t len = strlen(str); + + if (len > 0) { + if (str[0] == 'Z') { + return true; + } + + /* offsets count since their usable */ + if (len == 5 && (str[0] == '+' || str[0] == '-')) { + return true; + } + + if (len == 2) { + return tolower(str[0]) == 'u' && tolower(str[1]) == 't'; + } + + if (len < 3) { + return false; + } + + uint32_t comp_val = STR_TRIPLET_TO_INDEX(str); + + if (comp_val == s_utc || comp_val == s_gmt) { + return true; + } + } + + return false; +} + +struct tm s_get_time_struct(struct aws_date_time *dt, bool local_time) { + struct tm time; + AWS_ZERO_STRUCT(time); + if (local_time) { + aws_localtime(dt->timestamp, &time); + } else { + aws_gmtime(dt->timestamp, &time); + } + + return time; +} + +void aws_date_time_init_now(struct aws_date_time *dt) { + uint64_t current_time = 0; + aws_sys_clock_get_ticks(¤t_time); + dt->timestamp = (time_t)aws_timestamp_convert(current_time, AWS_TIMESTAMP_NANOS, AWS_TIMESTAMP_SECS, NULL); + dt->gmt_time = s_get_time_struct(dt, false); + dt->local_time = s_get_time_struct(dt, true); +} + +void aws_date_time_init_epoch_millis(struct aws_date_time *dt, uint64_t ms_since_epoch) { + dt->timestamp = (time_t)(ms_since_epoch / AWS_TIMESTAMP_MILLIS); + dt->gmt_time = s_get_time_struct(dt, false); + dt->local_time = s_get_time_struct(dt, true); +} + +void aws_date_time_init_epoch_secs(struct aws_date_time *dt, double sec_ms) { + dt->timestamp = (time_t)sec_ms; + dt->gmt_time = s_get_time_struct(dt, false); + dt->local_time = s_get_time_struct(dt, true); +} + +enum parser_state { + ON_WEEKDAY, + ON_SPACE_DELIM, + ON_YEAR, + ON_MONTH, + ON_MONTH_DAY, + ON_HOUR, + ON_MINUTE, + ON_SECOND, + ON_TZ, + FINISHED, +}; + +static int s_parse_iso_8601_basic(const struct aws_byte_cursor *date_str_cursor, struct tm *parsed_time) { + size_t index = 0; + size_t state_start_index = 0; + enum parser_state state = ON_YEAR; + bool error = false; + + 
AWS_ZERO_STRUCT(*parsed_time); + + while (state < FINISHED && !error && index < date_str_cursor->len) { + char c = date_str_cursor->ptr[index]; + size_t sub_index = index - state_start_index; + switch (state) { + case ON_YEAR: if (aws_isdigit(c)) { - parsed_time->tm_year = parsed_time->tm_year * 10 + (c - '0'); - if (sub_index == 3) { - state = ON_MONTH; - state_start_index = index + 1; - parsed_time->tm_year -= 1900; - } - } else { - error = true; - } - break; - - case ON_MONTH: + parsed_time->tm_year = parsed_time->tm_year * 10 + (c - '0'); + if (sub_index == 3) { + state = ON_MONTH; + state_start_index = index + 1; + parsed_time->tm_year -= 1900; + } + } else { + error = true; + } + break; + + case ON_MONTH: if (aws_isdigit(c)) { - parsed_time->tm_mon = parsed_time->tm_mon * 10 + (c - '0'); - if (sub_index == 1) { - state = ON_MONTH_DAY; - state_start_index = index + 1; - parsed_time->tm_mon -= 1; - } - } else { - error = true; - } - break; - - case ON_MONTH_DAY: - if (c == 'T' && sub_index == 2) { - state = ON_HOUR; - state_start_index = index + 1; + parsed_time->tm_mon = parsed_time->tm_mon * 10 + (c - '0'); + if (sub_index == 1) { + state = ON_MONTH_DAY; + state_start_index = index + 1; + parsed_time->tm_mon -= 1; + } + } else { + error = true; + } + break; + + case ON_MONTH_DAY: + if (c == 'T' && sub_index == 2) { + state = ON_HOUR; + state_start_index = index + 1; } else if (aws_isdigit(c)) { - parsed_time->tm_mday = parsed_time->tm_mday * 10 + (c - '0'); - } else { - error = true; - } - break; - - case ON_HOUR: + parsed_time->tm_mday = parsed_time->tm_mday * 10 + (c - '0'); + } else { + error = true; + } + break; + + case ON_HOUR: if (aws_isdigit(c)) { - parsed_time->tm_hour = parsed_time->tm_hour * 10 + (c - '0'); - if (sub_index == 1) { - state = ON_MINUTE; - state_start_index = index + 1; - } - } else { - error = true; - } - break; - - case ON_MINUTE: + parsed_time->tm_hour = parsed_time->tm_hour * 10 + (c - '0'); + if (sub_index == 1) { + state = 
ON_MINUTE; + state_start_index = index + 1; + } + } else { + error = true; + } + break; + + case ON_MINUTE: if (aws_isdigit(c)) { - parsed_time->tm_min = parsed_time->tm_min * 10 + (c - '0'); - if (sub_index == 1) { - state = ON_SECOND; - state_start_index = index + 1; - } - } else { - error = true; - } - break; - - case ON_SECOND: + parsed_time->tm_min = parsed_time->tm_min * 10 + (c - '0'); + if (sub_index == 1) { + state = ON_SECOND; + state_start_index = index + 1; + } + } else { + error = true; + } + break; + + case ON_SECOND: if (aws_isdigit(c)) { - parsed_time->tm_sec = parsed_time->tm_sec * 10 + (c - '0'); - if (sub_index == 1) { - state = ON_TZ; - state_start_index = index + 1; - } - } else { - error = true; - } - break; - - case ON_TZ: - if (c == 'Z' && (sub_index == 0 || sub_index == 3)) { - state = FINISHED; + parsed_time->tm_sec = parsed_time->tm_sec * 10 + (c - '0'); + if (sub_index == 1) { + state = ON_TZ; + state_start_index = index + 1; + } + } else { + error = true; + } + break; + + case ON_TZ: + if (c == 'Z' && (sub_index == 0 || sub_index == 3)) { + state = FINISHED; } else if (!aws_isdigit(c) || sub_index > 3) { - error = true; - } - break; - - default: - error = true; - break; - } - - index++; - } - - /* ISO8601 supports date only with no time portion. state ==ON_MONTH_DAY catches this case. */ - return (state == FINISHED || state == ON_MONTH_DAY) && !error ? 
AWS_OP_SUCCESS : AWS_OP_ERR; -} - -static int s_parse_iso_8601(const struct aws_byte_cursor *date_str_cursor, struct tm *parsed_time) { - size_t index = 0; - size_t state_start_index = 0; - enum parser_state state = ON_YEAR; - bool error = false; - bool advance = true; - - AWS_ZERO_STRUCT(*parsed_time); - - while (state < FINISHED && !error && index < date_str_cursor->len) { - char c = date_str_cursor->ptr[index]; - switch (state) { - case ON_YEAR: - if (c == '-' && index - state_start_index == 4) { - state = ON_MONTH; - state_start_index = index + 1; - parsed_time->tm_year -= 1900; + error = true; + } + break; + + default: + error = true; + break; + } + + index++; + } + + /* ISO8601 supports date only with no time portion. state ==ON_MONTH_DAY catches this case. */ + return (state == FINISHED || state == ON_MONTH_DAY) && !error ? AWS_OP_SUCCESS : AWS_OP_ERR; +} + +static int s_parse_iso_8601(const struct aws_byte_cursor *date_str_cursor, struct tm *parsed_time) { + size_t index = 0; + size_t state_start_index = 0; + enum parser_state state = ON_YEAR; + bool error = false; + bool advance = true; + + AWS_ZERO_STRUCT(*parsed_time); + + while (state < FINISHED && !error && index < date_str_cursor->len) { + char c = date_str_cursor->ptr[index]; + switch (state) { + case ON_YEAR: + if (c == '-' && index - state_start_index == 4) { + state = ON_MONTH; + state_start_index = index + 1; + parsed_time->tm_year -= 1900; } else if (aws_isdigit(c)) { - parsed_time->tm_year = parsed_time->tm_year * 10 + (c - '0'); - } else { - error = true; - } - break; - case ON_MONTH: - if (c == '-' && index - state_start_index == 2) { - state = ON_MONTH_DAY; - state_start_index = index + 1; - parsed_time->tm_mon -= 1; + parsed_time->tm_year = parsed_time->tm_year * 10 + (c - '0'); + } else { + error = true; + } + break; + case ON_MONTH: + if (c == '-' && index - state_start_index == 2) { + state = ON_MONTH_DAY; + state_start_index = index + 1; + parsed_time->tm_mon -= 1; } else if 
(aws_isdigit(c)) { - parsed_time->tm_mon = parsed_time->tm_mon * 10 + (c - '0'); - } else { - error = true; - } - - break; - case ON_MONTH_DAY: - if (c == 'T' && index - state_start_index == 2) { - state = ON_HOUR; - state_start_index = index + 1; + parsed_time->tm_mon = parsed_time->tm_mon * 10 + (c - '0'); + } else { + error = true; + } + + break; + case ON_MONTH_DAY: + if (c == 'T' && index - state_start_index == 2) { + state = ON_HOUR; + state_start_index = index + 1; } else if (aws_isdigit(c)) { - parsed_time->tm_mday = parsed_time->tm_mday * 10 + (c - '0'); - } else { - error = true; - } - break; - /* note: no time portion is spec compliant. */ - case ON_HOUR: - /* time parts can be delimited by ':' or just concatenated together, but must always be 2 digits. */ - if (index - state_start_index == 2) { - state = ON_MINUTE; - state_start_index = index + 1; + parsed_time->tm_mday = parsed_time->tm_mday * 10 + (c - '0'); + } else { + error = true; + } + break; + /* note: no time portion is spec compliant. */ + case ON_HOUR: + /* time parts can be delimited by ':' or just concatenated together, but must always be 2 digits. */ + if (index - state_start_index == 2) { + state = ON_MINUTE; + state_start_index = index + 1; if (aws_isdigit(c)) { - state_start_index = index; - advance = false; - } else if (c != ':') { - error = true; - } + state_start_index = index; + advance = false; + } else if (c != ':') { + error = true; + } } else if (aws_isdigit(c)) { - parsed_time->tm_hour = parsed_time->tm_hour * 10 + (c - '0'); - } else { - error = true; - } - - break; - case ON_MINUTE: - /* time parts can be delimited by ':' or just concatenated together, but must always be 2 digits. 
*/ - if (index - state_start_index == 2) { - state = ON_SECOND; - state_start_index = index + 1; + parsed_time->tm_hour = parsed_time->tm_hour * 10 + (c - '0'); + } else { + error = true; + } + + break; + case ON_MINUTE: + /* time parts can be delimited by ':' or just concatenated together, but must always be 2 digits. */ + if (index - state_start_index == 2) { + state = ON_SECOND; + state_start_index = index + 1; if (aws_isdigit(c)) { - state_start_index = index; - advance = false; - } else if (c != ':') { - error = true; - } + state_start_index = index; + advance = false; + } else if (c != ':') { + error = true; + } } else if (aws_isdigit(c)) { - parsed_time->tm_min = parsed_time->tm_min * 10 + (c - '0'); - } else { - error = true; - } - - break; - case ON_SECOND: - if (c == 'Z' && index - state_start_index == 2) { - state = FINISHED; - state_start_index = index + 1; - } else if (c == '.' && index - state_start_index == 2) { - state = ON_TZ; - state_start_index = index + 1; + parsed_time->tm_min = parsed_time->tm_min * 10 + (c - '0'); + } else { + error = true; + } + + break; + case ON_SECOND: + if (c == 'Z' && index - state_start_index == 2) { + state = FINISHED; + state_start_index = index + 1; + } else if (c == '.' && index - state_start_index == 2) { + state = ON_TZ; + state_start_index = index + 1; } else if (aws_isdigit(c)) { - parsed_time->tm_sec = parsed_time->tm_sec * 10 + (c - '0'); - } else { - error = true; - } - - break; - case ON_TZ: - if (c == 'Z') { - state = FINISHED; - state_start_index = index + 1; + parsed_time->tm_sec = parsed_time->tm_sec * 10 + (c - '0'); + } else { + error = true; + } + + break; + case ON_TZ: + if (c == 'Z') { + state = FINISHED; + state_start_index = index + 1; } else if (!aws_isdigit(c)) { - error = true; - } - break; - default: - error = true; - break; - } - - if (advance) { - index++; - } else { - advance = true; - } - } - - /* ISO8601 supports date only with no time portion. state ==ON_MONTH_DAY catches this case. 
*/ - return (state == FINISHED || state == ON_MONTH_DAY) && !error ? AWS_OP_SUCCESS : AWS_OP_ERR; -} - -static int s_parse_rfc_822( - const struct aws_byte_cursor *date_str_cursor, - struct tm *parsed_time, - struct aws_date_time *dt) { - size_t len = date_str_cursor->len; - - size_t index = 0; - size_t state_start_index = 0; - int state = ON_WEEKDAY; - bool error = false; - - AWS_ZERO_STRUCT(*parsed_time); - - while (!error && index < len) { - char c = date_str_cursor->ptr[index]; - - switch (state) { - /* week day abbr is optional. */ - case ON_WEEKDAY: - if (c == ',') { - state = ON_SPACE_DELIM; - state_start_index = index + 1; + error = true; + } + break; + default: + error = true; + break; + } + + if (advance) { + index++; + } else { + advance = true; + } + } + + /* ISO8601 supports date only with no time portion. state ==ON_MONTH_DAY catches this case. */ + return (state == FINISHED || state == ON_MONTH_DAY) && !error ? AWS_OP_SUCCESS : AWS_OP_ERR; +} + +static int s_parse_rfc_822( + const struct aws_byte_cursor *date_str_cursor, + struct tm *parsed_time, + struct aws_date_time *dt) { + size_t len = date_str_cursor->len; + + size_t index = 0; + size_t state_start_index = 0; + int state = ON_WEEKDAY; + bool error = false; + + AWS_ZERO_STRUCT(*parsed_time); + + while (!error && index < len) { + char c = date_str_cursor->ptr[index]; + + switch (state) { + /* week day abbr is optional. 
*/ + case ON_WEEKDAY: + if (c == ',') { + state = ON_SPACE_DELIM; + state_start_index = index + 1; } else if (aws_isdigit(c)) { - state = ON_MONTH_DAY; + state = ON_MONTH_DAY; } else if (!aws_isalpha(c)) { - error = true; - } - break; - case ON_SPACE_DELIM: + error = true; + } + break; + case ON_SPACE_DELIM: if (aws_isspace(c)) { - state = ON_MONTH_DAY; - state_start_index = index + 1; - } else { - error = true; - } - break; - case ON_MONTH_DAY: + state = ON_MONTH_DAY; + state_start_index = index + 1; + } else { + error = true; + } + break; + case ON_MONTH_DAY: if (aws_isdigit(c)) { - parsed_time->tm_mday = parsed_time->tm_mday * 10 + (c - '0'); + parsed_time->tm_mday = parsed_time->tm_mday * 10 + (c - '0'); } else if (aws_isspace(c)) { - state = ON_MONTH; - state_start_index = index + 1; - } else { - error = true; - } - break; - case ON_MONTH: + state = ON_MONTH; + state_start_index = index + 1; + } else { + error = true; + } + break; + case ON_MONTH: if (aws_isspace(c)) { - int monthNumber = - get_month_number_from_str((const char *)date_str_cursor->ptr, state_start_index, index + 1); - - if (monthNumber > -1) { - state = ON_YEAR; - state_start_index = index + 1; - parsed_time->tm_mon = monthNumber; - } else { - error = true; - } + int monthNumber = + get_month_number_from_str((const char *)date_str_cursor->ptr, state_start_index, index + 1); + + if (monthNumber > -1) { + state = ON_YEAR; + state_start_index = index + 1; + parsed_time->tm_mon = monthNumber; + } else { + error = true; + } } else if (!aws_isalpha(c)) { - error = true; - } - break; - /* year can be 4 or 2 digits. */ - case ON_YEAR: + error = true; + } + break; + /* year can be 4 or 2 digits. 
*/ + case ON_YEAR: if (aws_isspace(c) && index - state_start_index == 4) { - state = ON_HOUR; - state_start_index = index + 1; - parsed_time->tm_year -= 1900; + state = ON_HOUR; + state_start_index = index + 1; + parsed_time->tm_year -= 1900; } else if (aws_isspace(c) && index - state_start_index == 2) { - state = 5; - state_start_index = index + 1; - parsed_time->tm_year += 2000 - 1900; + state = 5; + state_start_index = index + 1; + parsed_time->tm_year += 2000 - 1900; } else if (aws_isdigit(c)) { - parsed_time->tm_year = parsed_time->tm_year * 10 + (c - '0'); - } else { - error = true; - } - break; - case ON_HOUR: - if (c == ':' && index - state_start_index == 2) { - state = ON_MINUTE; - state_start_index = index + 1; + parsed_time->tm_year = parsed_time->tm_year * 10 + (c - '0'); + } else { + error = true; + } + break; + case ON_HOUR: + if (c == ':' && index - state_start_index == 2) { + state = ON_MINUTE; + state_start_index = index + 1; } else if (aws_isdigit(c)) { - parsed_time->tm_hour = parsed_time->tm_hour * 10 + (c - '0'); - } else { - error = true; - } - break; - case ON_MINUTE: - if (c == ':' && index - state_start_index == 2) { - state = ON_SECOND; - state_start_index = index + 1; + parsed_time->tm_hour = parsed_time->tm_hour * 10 + (c - '0'); + } else { + error = true; + } + break; + case ON_MINUTE: + if (c == ':' && index - state_start_index == 2) { + state = ON_SECOND; + state_start_index = index + 1; } else if (aws_isdigit(c)) { - parsed_time->tm_min = parsed_time->tm_min * 10 + (c - '0'); - } else { - error = true; - } - break; - case ON_SECOND: + parsed_time->tm_min = parsed_time->tm_min * 10 + (c - '0'); + } else { + error = true; + } + break; + case ON_SECOND: if (aws_isspace(c) && index - state_start_index == 2) { - state = ON_TZ; - state_start_index = index + 1; + state = ON_TZ; + state_start_index = index + 1; } else if (aws_isdigit(c)) { - parsed_time->tm_sec = parsed_time->tm_sec * 10 + (c - '0'); - } else { - error = true; - } - break; - 
case ON_TZ: + parsed_time->tm_sec = parsed_time->tm_sec * 10 + (c - '0'); + } else { + error = true; + } + break; + case ON_TZ: if ((aws_isalnum(c) || c == '-' || c == '+') && (index - state_start_index) < 5) { - dt->tz[index - state_start_index] = c; - } else { - error = true; - } - - break; - default: - error = true; - break; - } - - index++; - } - - if (dt->tz[0] != 0) { - if (is_utc_time_zone(dt->tz)) { - dt->utc_assumed = true; - } else { - error = true; - } - } - - return error || state != ON_TZ ? AWS_OP_ERR : AWS_OP_SUCCESS; -} - -int aws_date_time_init_from_str_cursor( - struct aws_date_time *dt, - const struct aws_byte_cursor *date_str_cursor, - enum aws_date_format fmt) { + dt->tz[index - state_start_index] = c; + } else { + error = true; + } + + break; + default: + error = true; + break; + } + + index++; + } + + if (dt->tz[0] != 0) { + if (is_utc_time_zone(dt->tz)) { + dt->utc_assumed = true; + } else { + error = true; + } + } + + return error || state != ON_TZ ? AWS_OP_ERR : AWS_OP_SUCCESS; +} + +int aws_date_time_init_from_str_cursor( + struct aws_date_time *dt, + const struct aws_byte_cursor *date_str_cursor, + enum aws_date_format fmt) { AWS_ERROR_PRECONDITION(date_str_cursor->len <= AWS_DATE_TIME_STR_MAX_LEN, AWS_ERROR_OVERFLOW_DETECTED); - - AWS_ZERO_STRUCT(*dt); - - struct tm parsed_time; - bool successfully_parsed = false; - - time_t seconds_offset = 0; - if (fmt == AWS_DATE_FORMAT_ISO_8601 || fmt == AWS_DATE_FORMAT_AUTO_DETECT) { - if (!s_parse_iso_8601(date_str_cursor, &parsed_time)) { - dt->utc_assumed = true; - successfully_parsed = true; - } - } - - if (fmt == AWS_DATE_FORMAT_ISO_8601_BASIC || (fmt == AWS_DATE_FORMAT_AUTO_DETECT && !successfully_parsed)) { - if (!s_parse_iso_8601_basic(date_str_cursor, &parsed_time)) { - dt->utc_assumed = true; - successfully_parsed = true; - } - } - - if (fmt == AWS_DATE_FORMAT_RFC822 || (fmt == AWS_DATE_FORMAT_AUTO_DETECT && !successfully_parsed)) { - if (!s_parse_rfc_822(date_str_cursor, &parsed_time, 
dt)) { - successfully_parsed = true; - - if (dt->utc_assumed) { - if (dt->tz[0] == '+' || dt->tz[0] == '-') { - /* in this format, the offset is in format +/-HHMM so convert that to seconds and we'll use - * the offset later. */ - char min_str[3] = {0}; - char hour_str[3] = {0}; - hour_str[0] = dt->tz[1]; - hour_str[1] = dt->tz[2]; - min_str[0] = dt->tz[3]; - min_str[1] = dt->tz[4]; - - long hour = strtol(hour_str, NULL, 10); - long min = strtol(min_str, NULL, 10); - seconds_offset = (time_t)(hour * 3600 + min * 60); - - if (dt->tz[0] == '-') { - seconds_offset = -seconds_offset; - } - } - } - } - } - - if (!successfully_parsed) { - return aws_raise_error(AWS_ERROR_INVALID_DATE_STR); - } - - if (dt->utc_assumed || seconds_offset) { - dt->timestamp = aws_timegm(&parsed_time); - } else { - dt->timestamp = mktime(&parsed_time); - } - - /* negative means we need to move west (increase the timestamp), positive means head east, so decrease the - * timestamp. */ - dt->timestamp -= seconds_offset; - - dt->gmt_time = s_get_time_struct(dt, false); - dt->local_time = s_get_time_struct(dt, true); - - return AWS_OP_SUCCESS; -} - -int aws_date_time_init_from_str( - struct aws_date_time *dt, - const struct aws_byte_buf *date_str, - enum aws_date_format fmt) { + + AWS_ZERO_STRUCT(*dt); + + struct tm parsed_time; + bool successfully_parsed = false; + + time_t seconds_offset = 0; + if (fmt == AWS_DATE_FORMAT_ISO_8601 || fmt == AWS_DATE_FORMAT_AUTO_DETECT) { + if (!s_parse_iso_8601(date_str_cursor, &parsed_time)) { + dt->utc_assumed = true; + successfully_parsed = true; + } + } + + if (fmt == AWS_DATE_FORMAT_ISO_8601_BASIC || (fmt == AWS_DATE_FORMAT_AUTO_DETECT && !successfully_parsed)) { + if (!s_parse_iso_8601_basic(date_str_cursor, &parsed_time)) { + dt->utc_assumed = true; + successfully_parsed = true; + } + } + + if (fmt == AWS_DATE_FORMAT_RFC822 || (fmt == AWS_DATE_FORMAT_AUTO_DETECT && !successfully_parsed)) { + if (!s_parse_rfc_822(date_str_cursor, &parsed_time, dt)) { + 
successfully_parsed = true; + + if (dt->utc_assumed) { + if (dt->tz[0] == '+' || dt->tz[0] == '-') { + /* in this format, the offset is in format +/-HHMM so convert that to seconds and we'll use + * the offset later. */ + char min_str[3] = {0}; + char hour_str[3] = {0}; + hour_str[0] = dt->tz[1]; + hour_str[1] = dt->tz[2]; + min_str[0] = dt->tz[3]; + min_str[1] = dt->tz[4]; + + long hour = strtol(hour_str, NULL, 10); + long min = strtol(min_str, NULL, 10); + seconds_offset = (time_t)(hour * 3600 + min * 60); + + if (dt->tz[0] == '-') { + seconds_offset = -seconds_offset; + } + } + } + } + } + + if (!successfully_parsed) { + return aws_raise_error(AWS_ERROR_INVALID_DATE_STR); + } + + if (dt->utc_assumed || seconds_offset) { + dt->timestamp = aws_timegm(&parsed_time); + } else { + dt->timestamp = mktime(&parsed_time); + } + + /* negative means we need to move west (increase the timestamp), positive means head east, so decrease the + * timestamp. */ + dt->timestamp -= seconds_offset; + + dt->gmt_time = s_get_time_struct(dt, false); + dt->local_time = s_get_time_struct(dt, true); + + return AWS_OP_SUCCESS; +} + +int aws_date_time_init_from_str( + struct aws_date_time *dt, + const struct aws_byte_buf *date_str, + enum aws_date_format fmt) { AWS_ERROR_PRECONDITION(date_str->len <= AWS_DATE_TIME_STR_MAX_LEN, AWS_ERROR_OVERFLOW_DETECTED); - - struct aws_byte_cursor date_cursor = aws_byte_cursor_from_buf(date_str); - return aws_date_time_init_from_str_cursor(dt, &date_cursor, fmt); -} - -static inline int s_date_to_str(const struct tm *tm, const char *format_str, struct aws_byte_buf *output_buf) { - size_t remaining_space = output_buf->capacity - output_buf->len; - size_t bytes_written = strftime((char *)output_buf->buffer + output_buf->len, remaining_space, format_str, tm); - - if (bytes_written == 0) { - return aws_raise_error(AWS_ERROR_SHORT_BUFFER); - } - - output_buf->len += bytes_written; - - return AWS_OP_SUCCESS; -} - -int aws_date_time_to_local_time_str( - const 
struct aws_date_time *dt, - enum aws_date_format fmt, - struct aws_byte_buf *output_buf) { - AWS_ASSERT(fmt != AWS_DATE_FORMAT_AUTO_DETECT); - - switch (fmt) { - case AWS_DATE_FORMAT_RFC822: - return s_date_to_str(&dt->local_time, RFC822_DATE_FORMAT_STR_WITH_Z, output_buf); - - case AWS_DATE_FORMAT_ISO_8601: - return s_date_to_str(&dt->local_time, ISO_8601_LONG_DATE_FORMAT_STR, output_buf); - - case AWS_DATE_FORMAT_ISO_8601_BASIC: - return s_date_to_str(&dt->local_time, ISO_8601_LONG_BASIC_DATE_FORMAT_STR, output_buf); - - default: - return aws_raise_error(AWS_ERROR_INVALID_ARGUMENT); - } -} - -int aws_date_time_to_utc_time_str( - const struct aws_date_time *dt, - enum aws_date_format fmt, - struct aws_byte_buf *output_buf) { - AWS_ASSERT(fmt != AWS_DATE_FORMAT_AUTO_DETECT); - - switch (fmt) { - case AWS_DATE_FORMAT_RFC822: - return s_date_to_str(&dt->gmt_time, RFC822_DATE_FORMAT_STR_MINUS_Z, output_buf); - - case AWS_DATE_FORMAT_ISO_8601: - return s_date_to_str(&dt->gmt_time, ISO_8601_LONG_DATE_FORMAT_STR, output_buf); - - case AWS_DATE_FORMAT_ISO_8601_BASIC: - return s_date_to_str(&dt->gmt_time, ISO_8601_LONG_BASIC_DATE_FORMAT_STR, output_buf); - - default: - return aws_raise_error(AWS_ERROR_INVALID_ARGUMENT); - } -} - -int aws_date_time_to_local_time_short_str( - const struct aws_date_time *dt, - enum aws_date_format fmt, - struct aws_byte_buf *output_buf) { - AWS_ASSERT(fmt != AWS_DATE_FORMAT_AUTO_DETECT); - - switch (fmt) { - case AWS_DATE_FORMAT_RFC822: - return s_date_to_str(&dt->local_time, RFC822_SHORT_DATE_FORMAT_STR, output_buf); - - case AWS_DATE_FORMAT_ISO_8601: - return s_date_to_str(&dt->local_time, ISO_8601_SHORT_DATE_FORMAT_STR, output_buf); - - case AWS_DATE_FORMAT_ISO_8601_BASIC: - return s_date_to_str(&dt->local_time, ISO_8601_SHORT_BASIC_DATE_FORMAT_STR, output_buf); - - default: - return aws_raise_error(AWS_ERROR_INVALID_ARGUMENT); - } -} - -int aws_date_time_to_utc_time_short_str( - const struct aws_date_time *dt, - enum aws_date_format fmt, 
- struct aws_byte_buf *output_buf) { - AWS_ASSERT(fmt != AWS_DATE_FORMAT_AUTO_DETECT); - - switch (fmt) { - case AWS_DATE_FORMAT_RFC822: - return s_date_to_str(&dt->gmt_time, RFC822_SHORT_DATE_FORMAT_STR, output_buf); - - case AWS_DATE_FORMAT_ISO_8601: - return s_date_to_str(&dt->gmt_time, ISO_8601_SHORT_DATE_FORMAT_STR, output_buf); - - case AWS_DATE_FORMAT_ISO_8601_BASIC: - return s_date_to_str(&dt->gmt_time, ISO_8601_SHORT_BASIC_DATE_FORMAT_STR, output_buf); - - default: - return aws_raise_error(AWS_ERROR_INVALID_ARGUMENT); - } -} - -double aws_date_time_as_epoch_secs(const struct aws_date_time *dt) { - return (double)dt->timestamp; -} - -uint64_t aws_date_time_as_nanos(const struct aws_date_time *dt) { - return (uint64_t)dt->timestamp * AWS_TIMESTAMP_NANOS; -} - -uint64_t aws_date_time_as_millis(const struct aws_date_time *dt) { - return (uint64_t)dt->timestamp * AWS_TIMESTAMP_MILLIS; -} - -uint16_t aws_date_time_year(const struct aws_date_time *dt, bool local_time) { - const struct tm *time = local_time ? &dt->local_time : &dt->gmt_time; - - return (uint16_t)(time->tm_year + 1900); -} - -enum aws_date_month aws_date_time_month(const struct aws_date_time *dt, bool local_time) { - const struct tm *time = local_time ? &dt->local_time : &dt->gmt_time; - - return time->tm_mon; -} - -uint8_t aws_date_time_month_day(const struct aws_date_time *dt, bool local_time) { - const struct tm *time = local_time ? &dt->local_time : &dt->gmt_time; - - return (uint8_t)time->tm_mday; -} - -enum aws_date_day_of_week aws_date_time_day_of_week(const struct aws_date_time *dt, bool local_time) { - const struct tm *time = local_time ? &dt->local_time : &dt->gmt_time; - - return time->tm_wday; -} - -uint8_t aws_date_time_hour(const struct aws_date_time *dt, bool local_time) { - const struct tm *time = local_time ? 
&dt->local_time : &dt->gmt_time; - - return (uint8_t)time->tm_hour; -} - -uint8_t aws_date_time_minute(const struct aws_date_time *dt, bool local_time) { - const struct tm *time = local_time ? &dt->local_time : &dt->gmt_time; - - return (uint8_t)time->tm_min; -} - -uint8_t aws_date_time_second(const struct aws_date_time *dt, bool local_time) { - const struct tm *time = local_time ? &dt->local_time : &dt->gmt_time; - - return (uint8_t)time->tm_sec; -} - -bool aws_date_time_dst(const struct aws_date_time *dt, bool local_time) { - const struct tm *time = local_time ? &dt->local_time : &dt->gmt_time; - - return (bool)time->tm_isdst; -} - -time_t aws_date_time_diff(const struct aws_date_time *a, const struct aws_date_time *b) { - return a->timestamp - b->timestamp; -} + + struct aws_byte_cursor date_cursor = aws_byte_cursor_from_buf(date_str); + return aws_date_time_init_from_str_cursor(dt, &date_cursor, fmt); +} + +static inline int s_date_to_str(const struct tm *tm, const char *format_str, struct aws_byte_buf *output_buf) { + size_t remaining_space = output_buf->capacity - output_buf->len; + size_t bytes_written = strftime((char *)output_buf->buffer + output_buf->len, remaining_space, format_str, tm); + + if (bytes_written == 0) { + return aws_raise_error(AWS_ERROR_SHORT_BUFFER); + } + + output_buf->len += bytes_written; + + return AWS_OP_SUCCESS; +} + +int aws_date_time_to_local_time_str( + const struct aws_date_time *dt, + enum aws_date_format fmt, + struct aws_byte_buf *output_buf) { + AWS_ASSERT(fmt != AWS_DATE_FORMAT_AUTO_DETECT); + + switch (fmt) { + case AWS_DATE_FORMAT_RFC822: + return s_date_to_str(&dt->local_time, RFC822_DATE_FORMAT_STR_WITH_Z, output_buf); + + case AWS_DATE_FORMAT_ISO_8601: + return s_date_to_str(&dt->local_time, ISO_8601_LONG_DATE_FORMAT_STR, output_buf); + + case AWS_DATE_FORMAT_ISO_8601_BASIC: + return s_date_to_str(&dt->local_time, ISO_8601_LONG_BASIC_DATE_FORMAT_STR, output_buf); + + default: + return 
aws_raise_error(AWS_ERROR_INVALID_ARGUMENT); + } +} + +int aws_date_time_to_utc_time_str( + const struct aws_date_time *dt, + enum aws_date_format fmt, + struct aws_byte_buf *output_buf) { + AWS_ASSERT(fmt != AWS_DATE_FORMAT_AUTO_DETECT); + + switch (fmt) { + case AWS_DATE_FORMAT_RFC822: + return s_date_to_str(&dt->gmt_time, RFC822_DATE_FORMAT_STR_MINUS_Z, output_buf); + + case AWS_DATE_FORMAT_ISO_8601: + return s_date_to_str(&dt->gmt_time, ISO_8601_LONG_DATE_FORMAT_STR, output_buf); + + case AWS_DATE_FORMAT_ISO_8601_BASIC: + return s_date_to_str(&dt->gmt_time, ISO_8601_LONG_BASIC_DATE_FORMAT_STR, output_buf); + + default: + return aws_raise_error(AWS_ERROR_INVALID_ARGUMENT); + } +} + +int aws_date_time_to_local_time_short_str( + const struct aws_date_time *dt, + enum aws_date_format fmt, + struct aws_byte_buf *output_buf) { + AWS_ASSERT(fmt != AWS_DATE_FORMAT_AUTO_DETECT); + + switch (fmt) { + case AWS_DATE_FORMAT_RFC822: + return s_date_to_str(&dt->local_time, RFC822_SHORT_DATE_FORMAT_STR, output_buf); + + case AWS_DATE_FORMAT_ISO_8601: + return s_date_to_str(&dt->local_time, ISO_8601_SHORT_DATE_FORMAT_STR, output_buf); + + case AWS_DATE_FORMAT_ISO_8601_BASIC: + return s_date_to_str(&dt->local_time, ISO_8601_SHORT_BASIC_DATE_FORMAT_STR, output_buf); + + default: + return aws_raise_error(AWS_ERROR_INVALID_ARGUMENT); + } +} + +int aws_date_time_to_utc_time_short_str( + const struct aws_date_time *dt, + enum aws_date_format fmt, + struct aws_byte_buf *output_buf) { + AWS_ASSERT(fmt != AWS_DATE_FORMAT_AUTO_DETECT); + + switch (fmt) { + case AWS_DATE_FORMAT_RFC822: + return s_date_to_str(&dt->gmt_time, RFC822_SHORT_DATE_FORMAT_STR, output_buf); + + case AWS_DATE_FORMAT_ISO_8601: + return s_date_to_str(&dt->gmt_time, ISO_8601_SHORT_DATE_FORMAT_STR, output_buf); + + case AWS_DATE_FORMAT_ISO_8601_BASIC: + return s_date_to_str(&dt->gmt_time, ISO_8601_SHORT_BASIC_DATE_FORMAT_STR, output_buf); + + default: + return aws_raise_error(AWS_ERROR_INVALID_ARGUMENT); + } +} + 
+double aws_date_time_as_epoch_secs(const struct aws_date_time *dt) { + return (double)dt->timestamp; +} + +uint64_t aws_date_time_as_nanos(const struct aws_date_time *dt) { + return (uint64_t)dt->timestamp * AWS_TIMESTAMP_NANOS; +} + +uint64_t aws_date_time_as_millis(const struct aws_date_time *dt) { + return (uint64_t)dt->timestamp * AWS_TIMESTAMP_MILLIS; +} + +uint16_t aws_date_time_year(const struct aws_date_time *dt, bool local_time) { + const struct tm *time = local_time ? &dt->local_time : &dt->gmt_time; + + return (uint16_t)(time->tm_year + 1900); +} + +enum aws_date_month aws_date_time_month(const struct aws_date_time *dt, bool local_time) { + const struct tm *time = local_time ? &dt->local_time : &dt->gmt_time; + + return time->tm_mon; +} + +uint8_t aws_date_time_month_day(const struct aws_date_time *dt, bool local_time) { + const struct tm *time = local_time ? &dt->local_time : &dt->gmt_time; + + return (uint8_t)time->tm_mday; +} + +enum aws_date_day_of_week aws_date_time_day_of_week(const struct aws_date_time *dt, bool local_time) { + const struct tm *time = local_time ? &dt->local_time : &dt->gmt_time; + + return time->tm_wday; +} + +uint8_t aws_date_time_hour(const struct aws_date_time *dt, bool local_time) { + const struct tm *time = local_time ? &dt->local_time : &dt->gmt_time; + + return (uint8_t)time->tm_hour; +} + +uint8_t aws_date_time_minute(const struct aws_date_time *dt, bool local_time) { + const struct tm *time = local_time ? &dt->local_time : &dt->gmt_time; + + return (uint8_t)time->tm_min; +} + +uint8_t aws_date_time_second(const struct aws_date_time *dt, bool local_time) { + const struct tm *time = local_time ? &dt->local_time : &dt->gmt_time; + + return (uint8_t)time->tm_sec; +} + +bool aws_date_time_dst(const struct aws_date_time *dt, bool local_time) { + const struct tm *time = local_time ? 
&dt->local_time : &dt->gmt_time; + + return (bool)time->tm_isdst; +} + +time_t aws_date_time_diff(const struct aws_date_time *a, const struct aws_date_time *b) { + return a->timestamp - b->timestamp; +} diff --git a/contrib/restricted/aws/aws-c-common/source/device_random.c b/contrib/restricted/aws/aws-c-common/source/device_random.c index 3df8a218e7..c1731bdb46 100644 --- a/contrib/restricted/aws/aws-c-common/source/device_random.c +++ b/contrib/restricted/aws/aws-c-common/source/device_random.c @@ -1,37 +1,37 @@ /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. - */ -#include <aws/common/device_random.h> - -#include <aws/common/byte_buf.h> - -#ifdef _MSC_VER -/* disables warning non const declared initializers for Microsoft compilers */ -# pragma warning(disable : 4204) -# pragma warning(disable : 4706) -#endif - -int aws_device_random_u64(uint64_t *output) { - struct aws_byte_buf buf = aws_byte_buf_from_empty_array((uint8_t *)output, sizeof(uint64_t)); - - return aws_device_random_buffer(&buf); -} - -int aws_device_random_u32(uint32_t *output) { - struct aws_byte_buf buf = aws_byte_buf_from_empty_array((uint8_t *)output, sizeof(uint32_t)); - - return aws_device_random_buffer(&buf); -} - -int aws_device_random_u16(uint16_t *output) { - struct aws_byte_buf buf = aws_byte_buf_from_empty_array((uint8_t *)output, sizeof(uint16_t)); - - return aws_device_random_buffer(&buf); -} - -int aws_device_random_u8(uint8_t *output) { - struct aws_byte_buf buf = aws_byte_buf_from_empty_array((uint8_t *)output, sizeof(uint8_t)); - - return aws_device_random_buffer(&buf); -} + */ +#include <aws/common/device_random.h> + +#include <aws/common/byte_buf.h> + +#ifdef _MSC_VER +/* disables warning non const declared initializers for Microsoft compilers */ +# pragma warning(disable : 4204) +# pragma warning(disable : 4706) +#endif + +int aws_device_random_u64(uint64_t *output) { + struct aws_byte_buf buf = 
aws_byte_buf_from_empty_array((uint8_t *)output, sizeof(uint64_t)); + + return aws_device_random_buffer(&buf); +} + +int aws_device_random_u32(uint32_t *output) { + struct aws_byte_buf buf = aws_byte_buf_from_empty_array((uint8_t *)output, sizeof(uint32_t)); + + return aws_device_random_buffer(&buf); +} + +int aws_device_random_u16(uint16_t *output) { + struct aws_byte_buf buf = aws_byte_buf_from_empty_array((uint8_t *)output, sizeof(uint16_t)); + + return aws_device_random_buffer(&buf); +} + +int aws_device_random_u8(uint8_t *output) { + struct aws_byte_buf buf = aws_byte_buf_from_empty_array((uint8_t *)output, sizeof(uint8_t)); + + return aws_device_random_buffer(&buf); +} diff --git a/contrib/restricted/aws/aws-c-common/source/encoding.c b/contrib/restricted/aws/aws-c-common/source/encoding.c index 26a41fa163..384780b46b 100644 --- a/contrib/restricted/aws/aws-c-common/source/encoding.c +++ b/contrib/restricted/aws/aws-c-common/source/encoding.c @@ -1,289 +1,289 @@ /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. - */ - -#include <aws/common/encoding.h> - -#include <ctype.h> -#include <stdlib.h> - -#ifdef USE_SIMD_ENCODING -size_t aws_common_private_base64_decode_sse41(const unsigned char *in, unsigned char *out, size_t len); -void aws_common_private_base64_encode_sse41(const unsigned char *in, unsigned char *out, size_t len); -bool aws_common_private_has_avx2(void); -#else -/* - * When AVX2 compilation is unavailable, we use these stubs to fall back to the pure-C decoder. - * Since we force aws_common_private_has_avx2 to return false, the encode and decode functions should - * not be called - but we must provide them anyway to avoid link errors. 
- */ -static inline size_t aws_common_private_base64_decode_sse41(const unsigned char *in, unsigned char *out, size_t len) { - (void)in; - (void)out; - (void)len; - AWS_ASSERT(false); - return (size_t)-1; /* unreachable */ -} -static inline void aws_common_private_base64_encode_sse41(const unsigned char *in, unsigned char *out, size_t len) { - (void)in; - (void)out; - (void)len; - AWS_ASSERT(false); -} -static inline bool aws_common_private_has_avx2(void) { - return false; -} -#endif - -static const uint8_t *HEX_CHARS = (const uint8_t *)"0123456789abcdef"; - -static const uint8_t BASE64_SENTIANAL_VALUE = 0xff; -static const uint8_t BASE64_ENCODING_TABLE[] = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"; - -/* in this table, 0xDD is an invalid decoded value, if you have to do byte counting for any reason, there's 16 bytes - * per row. Reformatting is turned off to make sure this stays as 16 bytes per line. */ -/* clang-format off */ -static const uint8_t BASE64_DECODING_TABLE[256] = { - 64, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, - 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, - 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 62, 0xDD, 0xDD, 0xDD, 63, - 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 0xDD, 0xDD, 0xDD, 255, 0xDD, 0xDD, - 0xDD, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, - 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, - 0xDD, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, - 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, - 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, - 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, - 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, - 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 
0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, - 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, - 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, - 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, - 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD}; -/* clang-format on */ - -int aws_hex_compute_encoded_len(size_t to_encode_len, size_t *encoded_length) { - AWS_ASSERT(encoded_length); - - size_t temp = (to_encode_len << 1) + 1; - - if (AWS_UNLIKELY(temp < to_encode_len)) { - return aws_raise_error(AWS_ERROR_OVERFLOW_DETECTED); - } - - *encoded_length = temp; - - return AWS_OP_SUCCESS; -} - -int aws_hex_encode(const struct aws_byte_cursor *AWS_RESTRICT to_encode, struct aws_byte_buf *AWS_RESTRICT output) { - AWS_PRECONDITION(aws_byte_cursor_is_valid(to_encode)); - AWS_PRECONDITION(aws_byte_buf_is_valid(output)); - - size_t encoded_len = 0; - - if (AWS_UNLIKELY(aws_hex_compute_encoded_len(to_encode->len, &encoded_len))) { - return AWS_OP_ERR; - } - - if (AWS_UNLIKELY(output->capacity < encoded_len)) { - return aws_raise_error(AWS_ERROR_SHORT_BUFFER); - } - - size_t written = 0; - for (size_t i = 0; i < to_encode->len; ++i) { - - output->buffer[written++] = HEX_CHARS[to_encode->ptr[i] >> 4 & 0x0f]; - output->buffer[written++] = HEX_CHARS[to_encode->ptr[i] & 0x0f]; - } - - output->buffer[written] = '\0'; - output->len = encoded_len; - - return AWS_OP_SUCCESS; -} - -int aws_hex_encode_append_dynamic( - const struct aws_byte_cursor *AWS_RESTRICT to_encode, - struct aws_byte_buf *AWS_RESTRICT output) { - AWS_ASSERT(to_encode->ptr); - AWS_ASSERT(aws_byte_buf_is_valid(output)); - - size_t encoded_len = 0; - if (AWS_UNLIKELY(aws_add_size_checked(to_encode->len, to_encode->len, &encoded_len))) { - return AWS_OP_ERR; - } - - if (AWS_UNLIKELY(aws_byte_buf_reserve_relative(output, 
encoded_len))) { - return AWS_OP_ERR; - } - - size_t written = output->len; - for (size_t i = 0; i < to_encode->len; ++i) { - - output->buffer[written++] = HEX_CHARS[to_encode->ptr[i] >> 4 & 0x0f]; - output->buffer[written++] = HEX_CHARS[to_encode->ptr[i] & 0x0f]; - } - - output->len += encoded_len; - - return AWS_OP_SUCCESS; -} - -static int s_hex_decode_char_to_int(char character, uint8_t *int_val) { - if (character >= 'a' && character <= 'f') { - *int_val = (uint8_t)(10 + (character - 'a')); - return 0; - } - - if (character >= 'A' && character <= 'F') { - *int_val = (uint8_t)(10 + (character - 'A')); - return 0; - } - - if (character >= '0' && character <= '9') { - *int_val = (uint8_t)(character - '0'); - return 0; - } - - return AWS_OP_ERR; -} - -int aws_hex_compute_decoded_len(size_t to_decode_len, size_t *decoded_len) { - AWS_ASSERT(decoded_len); - - size_t temp = (to_decode_len + 1); - - if (AWS_UNLIKELY(temp < to_decode_len)) { - return aws_raise_error(AWS_ERROR_OVERFLOW_DETECTED); - } - - *decoded_len = temp >> 1; - return AWS_OP_SUCCESS; -} - -int aws_hex_decode(const struct aws_byte_cursor *AWS_RESTRICT to_decode, struct aws_byte_buf *AWS_RESTRICT output) { - AWS_PRECONDITION(aws_byte_cursor_is_valid(to_decode)); - AWS_PRECONDITION(aws_byte_buf_is_valid(output)); - - size_t decoded_length = 0; - - if (AWS_UNLIKELY(aws_hex_compute_decoded_len(to_decode->len, &decoded_length))) { - return aws_raise_error(AWS_ERROR_OVERFLOW_DETECTED); - } - - if (AWS_UNLIKELY(output->capacity < decoded_length)) { - return aws_raise_error(AWS_ERROR_SHORT_BUFFER); - } - - size_t written = 0; - size_t i = 0; - uint8_t high_value = 0; - uint8_t low_value = 0; - - /* if the buffer isn't even, prepend a 0 to the buffer. 
*/ - if (AWS_UNLIKELY(to_decode->len & 0x01)) { - i = 1; - if (s_hex_decode_char_to_int(to_decode->ptr[0], &low_value)) { - return aws_raise_error(AWS_ERROR_INVALID_HEX_STR); - } - - output->buffer[written++] = low_value; - } - - for (; i < to_decode->len; i += 2) { - if (AWS_UNLIKELY( - s_hex_decode_char_to_int(to_decode->ptr[i], &high_value) || - s_hex_decode_char_to_int(to_decode->ptr[i + 1], &low_value))) { - return aws_raise_error(AWS_ERROR_INVALID_HEX_STR); - } - - uint8_t value = (uint8_t)(high_value << 4); - value |= low_value; - output->buffer[written++] = value; - } - - output->len = decoded_length; - - return AWS_OP_SUCCESS; -} - -int aws_base64_compute_encoded_len(size_t to_encode_len, size_t *encoded_len) { - AWS_ASSERT(encoded_len); - - size_t tmp = to_encode_len + 2; - - if (AWS_UNLIKELY(tmp < to_encode_len)) { - return aws_raise_error(AWS_ERROR_OVERFLOW_DETECTED); - } - - tmp /= 3; - size_t overflow_check = tmp; - tmp = 4 * tmp + 1; /* plus one for the NULL terminator */ - - if (AWS_UNLIKELY(tmp < overflow_check)) { - return aws_raise_error(AWS_ERROR_OVERFLOW_DETECTED); - } - - *encoded_len = tmp; - - return AWS_OP_SUCCESS; -} - -int aws_base64_compute_decoded_len(const struct aws_byte_cursor *AWS_RESTRICT to_decode, size_t *decoded_len) { - AWS_ASSERT(to_decode); - AWS_ASSERT(decoded_len); - - const size_t len = to_decode->len; - const uint8_t *input = to_decode->ptr; - - if (len == 0) { - *decoded_len = 0; - return AWS_OP_SUCCESS; - } - - if (AWS_UNLIKELY(len & 0x03)) { - return aws_raise_error(AWS_ERROR_INVALID_BASE64_STR); - } - - size_t tmp = len * 3; - - if (AWS_UNLIKELY(tmp < len)) { - return aws_raise_error(AWS_ERROR_OVERFLOW_DETECTED); - } - - size_t padding = 0; - - if (len >= 2 && input[len - 1] == '=' && input[len - 2] == '=') { /*last two chars are = */ - padding = 2; - } else if (input[len - 1] == '=') { /*last char is = */ - padding = 1; - } - - *decoded_len = (tmp / 4 - padding); - return AWS_OP_SUCCESS; -} - -int 
aws_base64_encode(const struct aws_byte_cursor *AWS_RESTRICT to_encode, struct aws_byte_buf *AWS_RESTRICT output) { - AWS_ASSERT(to_encode->ptr); - AWS_ASSERT(output->buffer); - + */ + +#include <aws/common/encoding.h> + +#include <ctype.h> +#include <stdlib.h> + +#ifdef USE_SIMD_ENCODING +size_t aws_common_private_base64_decode_sse41(const unsigned char *in, unsigned char *out, size_t len); +void aws_common_private_base64_encode_sse41(const unsigned char *in, unsigned char *out, size_t len); +bool aws_common_private_has_avx2(void); +#else +/* + * When AVX2 compilation is unavailable, we use these stubs to fall back to the pure-C decoder. + * Since we force aws_common_private_has_avx2 to return false, the encode and decode functions should + * not be called - but we must provide them anyway to avoid link errors. + */ +static inline size_t aws_common_private_base64_decode_sse41(const unsigned char *in, unsigned char *out, size_t len) { + (void)in; + (void)out; + (void)len; + AWS_ASSERT(false); + return (size_t)-1; /* unreachable */ +} +static inline void aws_common_private_base64_encode_sse41(const unsigned char *in, unsigned char *out, size_t len) { + (void)in; + (void)out; + (void)len; + AWS_ASSERT(false); +} +static inline bool aws_common_private_has_avx2(void) { + return false; +} +#endif + +static const uint8_t *HEX_CHARS = (const uint8_t *)"0123456789abcdef"; + +static const uint8_t BASE64_SENTIANAL_VALUE = 0xff; +static const uint8_t BASE64_ENCODING_TABLE[] = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"; + +/* in this table, 0xDD is an invalid decoded value, if you have to do byte counting for any reason, there's 16 bytes + * per row. Reformatting is turned off to make sure this stays as 16 bytes per line. 
*/ +/* clang-format off */ +static const uint8_t BASE64_DECODING_TABLE[256] = { + 64, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, + 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, + 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 62, 0xDD, 0xDD, 0xDD, 63, + 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 0xDD, 0xDD, 0xDD, 255, 0xDD, 0xDD, + 0xDD, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, + 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, + 0xDD, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, + 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, + 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, + 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, + 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, + 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, + 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, + 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, + 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, + 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD, 0xDD}; +/* clang-format on */ + +int aws_hex_compute_encoded_len(size_t to_encode_len, size_t *encoded_length) { + AWS_ASSERT(encoded_length); + + size_t temp = (to_encode_len << 1) + 1; + + if (AWS_UNLIKELY(temp < to_encode_len)) { + return aws_raise_error(AWS_ERROR_OVERFLOW_DETECTED); + } + + *encoded_length = temp; + + return AWS_OP_SUCCESS; +} + +int aws_hex_encode(const struct aws_byte_cursor *AWS_RESTRICT to_encode, struct aws_byte_buf *AWS_RESTRICT output) { + 
AWS_PRECONDITION(aws_byte_cursor_is_valid(to_encode)); + AWS_PRECONDITION(aws_byte_buf_is_valid(output)); + + size_t encoded_len = 0; + + if (AWS_UNLIKELY(aws_hex_compute_encoded_len(to_encode->len, &encoded_len))) { + return AWS_OP_ERR; + } + + if (AWS_UNLIKELY(output->capacity < encoded_len)) { + return aws_raise_error(AWS_ERROR_SHORT_BUFFER); + } + + size_t written = 0; + for (size_t i = 0; i < to_encode->len; ++i) { + + output->buffer[written++] = HEX_CHARS[to_encode->ptr[i] >> 4 & 0x0f]; + output->buffer[written++] = HEX_CHARS[to_encode->ptr[i] & 0x0f]; + } + + output->buffer[written] = '\0'; + output->len = encoded_len; + + return AWS_OP_SUCCESS; +} + +int aws_hex_encode_append_dynamic( + const struct aws_byte_cursor *AWS_RESTRICT to_encode, + struct aws_byte_buf *AWS_RESTRICT output) { + AWS_ASSERT(to_encode->ptr); + AWS_ASSERT(aws_byte_buf_is_valid(output)); + + size_t encoded_len = 0; + if (AWS_UNLIKELY(aws_add_size_checked(to_encode->len, to_encode->len, &encoded_len))) { + return AWS_OP_ERR; + } + + if (AWS_UNLIKELY(aws_byte_buf_reserve_relative(output, encoded_len))) { + return AWS_OP_ERR; + } + + size_t written = output->len; + for (size_t i = 0; i < to_encode->len; ++i) { + + output->buffer[written++] = HEX_CHARS[to_encode->ptr[i] >> 4 & 0x0f]; + output->buffer[written++] = HEX_CHARS[to_encode->ptr[i] & 0x0f]; + } + + output->len += encoded_len; + + return AWS_OP_SUCCESS; +} + +static int s_hex_decode_char_to_int(char character, uint8_t *int_val) { + if (character >= 'a' && character <= 'f') { + *int_val = (uint8_t)(10 + (character - 'a')); + return 0; + } + + if (character >= 'A' && character <= 'F') { + *int_val = (uint8_t)(10 + (character - 'A')); + return 0; + } + + if (character >= '0' && character <= '9') { + *int_val = (uint8_t)(character - '0'); + return 0; + } + + return AWS_OP_ERR; +} + +int aws_hex_compute_decoded_len(size_t to_decode_len, size_t *decoded_len) { + AWS_ASSERT(decoded_len); + + size_t temp = (to_decode_len + 1); + + if 
(AWS_UNLIKELY(temp < to_decode_len)) { + return aws_raise_error(AWS_ERROR_OVERFLOW_DETECTED); + } + + *decoded_len = temp >> 1; + return AWS_OP_SUCCESS; +} + +int aws_hex_decode(const struct aws_byte_cursor *AWS_RESTRICT to_decode, struct aws_byte_buf *AWS_RESTRICT output) { + AWS_PRECONDITION(aws_byte_cursor_is_valid(to_decode)); + AWS_PRECONDITION(aws_byte_buf_is_valid(output)); + + size_t decoded_length = 0; + + if (AWS_UNLIKELY(aws_hex_compute_decoded_len(to_decode->len, &decoded_length))) { + return aws_raise_error(AWS_ERROR_OVERFLOW_DETECTED); + } + + if (AWS_UNLIKELY(output->capacity < decoded_length)) { + return aws_raise_error(AWS_ERROR_SHORT_BUFFER); + } + + size_t written = 0; + size_t i = 0; + uint8_t high_value = 0; + uint8_t low_value = 0; + + /* if the buffer isn't even, prepend a 0 to the buffer. */ + if (AWS_UNLIKELY(to_decode->len & 0x01)) { + i = 1; + if (s_hex_decode_char_to_int(to_decode->ptr[0], &low_value)) { + return aws_raise_error(AWS_ERROR_INVALID_HEX_STR); + } + + output->buffer[written++] = low_value; + } + + for (; i < to_decode->len; i += 2) { + if (AWS_UNLIKELY( + s_hex_decode_char_to_int(to_decode->ptr[i], &high_value) || + s_hex_decode_char_to_int(to_decode->ptr[i + 1], &low_value))) { + return aws_raise_error(AWS_ERROR_INVALID_HEX_STR); + } + + uint8_t value = (uint8_t)(high_value << 4); + value |= low_value; + output->buffer[written++] = value; + } + + output->len = decoded_length; + + return AWS_OP_SUCCESS; +} + +int aws_base64_compute_encoded_len(size_t to_encode_len, size_t *encoded_len) { + AWS_ASSERT(encoded_len); + + size_t tmp = to_encode_len + 2; + + if (AWS_UNLIKELY(tmp < to_encode_len)) { + return aws_raise_error(AWS_ERROR_OVERFLOW_DETECTED); + } + + tmp /= 3; + size_t overflow_check = tmp; + tmp = 4 * tmp + 1; /* plus one for the NULL terminator */ + + if (AWS_UNLIKELY(tmp < overflow_check)) { + return aws_raise_error(AWS_ERROR_OVERFLOW_DETECTED); + } + + *encoded_len = tmp; + + return AWS_OP_SUCCESS; +} + +int 
aws_base64_compute_decoded_len(const struct aws_byte_cursor *AWS_RESTRICT to_decode, size_t *decoded_len) { + AWS_ASSERT(to_decode); + AWS_ASSERT(decoded_len); + + const size_t len = to_decode->len; + const uint8_t *input = to_decode->ptr; + + if (len == 0) { + *decoded_len = 0; + return AWS_OP_SUCCESS; + } + + if (AWS_UNLIKELY(len & 0x03)) { + return aws_raise_error(AWS_ERROR_INVALID_BASE64_STR); + } + + size_t tmp = len * 3; + + if (AWS_UNLIKELY(tmp < len)) { + return aws_raise_error(AWS_ERROR_OVERFLOW_DETECTED); + } + + size_t padding = 0; + + if (len >= 2 && input[len - 1] == '=' && input[len - 2] == '=') { /*last two chars are = */ + padding = 2; + } else if (input[len - 1] == '=') { /*last char is = */ + padding = 1; + } + + *decoded_len = (tmp / 4 - padding); + return AWS_OP_SUCCESS; +} + +int aws_base64_encode(const struct aws_byte_cursor *AWS_RESTRICT to_encode, struct aws_byte_buf *AWS_RESTRICT output) { + AWS_ASSERT(to_encode->ptr); + AWS_ASSERT(output->buffer); + size_t terminated_length = 0; - size_t encoded_length = 0; + size_t encoded_length = 0; if (AWS_UNLIKELY(aws_base64_compute_encoded_len(to_encode->len, &terminated_length))) { - return AWS_OP_ERR; - } - + return AWS_OP_ERR; + } + size_t needed_capacity = 0; if (AWS_UNLIKELY(aws_add_size_checked(output->len, terminated_length, &needed_capacity))) { return AWS_OP_ERR; } if (AWS_UNLIKELY(output->capacity < needed_capacity)) { - return aws_raise_error(AWS_ERROR_SHORT_BUFFER); - } - + return aws_raise_error(AWS_ERROR_SHORT_BUFFER); + } + /* * For convenience to standard C functions expecting a null-terminated * string, the output is terminated. 
As the encoding itself can be used in @@ -291,123 +291,123 @@ int aws_base64_encode(const struct aws_byte_cursor *AWS_RESTRICT to_encode, stru */ encoded_length = (terminated_length - 1); - if (aws_common_private_has_avx2()) { + if (aws_common_private_has_avx2()) { aws_common_private_base64_encode_sse41(to_encode->ptr, output->buffer + output->len, to_encode->len); output->buffer[output->len + encoded_length] = 0; output->len += encoded_length; - return AWS_OP_SUCCESS; - } - - size_t buffer_length = to_encode->len; - size_t block_count = (buffer_length + 2) / 3; - size_t remainder_count = (buffer_length % 3); + return AWS_OP_SUCCESS; + } + + size_t buffer_length = to_encode->len; + size_t block_count = (buffer_length + 2) / 3; + size_t remainder_count = (buffer_length % 3); size_t str_index = output->len; - - for (size_t i = 0; i < to_encode->len; i += 3) { - uint32_t block = to_encode->ptr[i]; - - block <<= 8; - if (AWS_LIKELY(i + 1 < buffer_length)) { - block = block | to_encode->ptr[i + 1]; - } - - block <<= 8; - if (AWS_LIKELY(i + 2 < to_encode->len)) { - block = block | to_encode->ptr[i + 2]; - } - - output->buffer[str_index++] = BASE64_ENCODING_TABLE[(block >> 18) & 0x3F]; - output->buffer[str_index++] = BASE64_ENCODING_TABLE[(block >> 12) & 0x3F]; - output->buffer[str_index++] = BASE64_ENCODING_TABLE[(block >> 6) & 0x3F]; - output->buffer[str_index++] = BASE64_ENCODING_TABLE[block & 0x3F]; - } - - if (remainder_count > 0) { + + for (size_t i = 0; i < to_encode->len; i += 3) { + uint32_t block = to_encode->ptr[i]; + + block <<= 8; + if (AWS_LIKELY(i + 1 < buffer_length)) { + block = block | to_encode->ptr[i + 1]; + } + + block <<= 8; + if (AWS_LIKELY(i + 2 < to_encode->len)) { + block = block | to_encode->ptr[i + 2]; + } + + output->buffer[str_index++] = BASE64_ENCODING_TABLE[(block >> 18) & 0x3F]; + output->buffer[str_index++] = BASE64_ENCODING_TABLE[(block >> 12) & 0x3F]; + output->buffer[str_index++] = BASE64_ENCODING_TABLE[(block >> 6) & 0x3F]; + 
output->buffer[str_index++] = BASE64_ENCODING_TABLE[block & 0x3F]; + } + + if (remainder_count > 0) { output->buffer[output->len + block_count * 4 - 1] = '='; - if (remainder_count == 1) { + if (remainder_count == 1) { output->buffer[output->len + block_count * 4 - 2] = '='; - } - } - - /* it's a string add the null terminator. */ + } + } + + /* it's a string add the null terminator. */ output->buffer[output->len + encoded_length] = 0; - + output->len += encoded_length; - return AWS_OP_SUCCESS; -} - -static inline int s_base64_get_decoded_value(unsigned char to_decode, uint8_t *value, int8_t allow_sentinal) { - - uint8_t decode_value = BASE64_DECODING_TABLE[(size_t)to_decode]; - if (decode_value != 0xDD && (decode_value != BASE64_SENTIANAL_VALUE || allow_sentinal)) { - *value = decode_value; - return AWS_OP_SUCCESS; - } - - return AWS_OP_ERR; -} - -int aws_base64_decode(const struct aws_byte_cursor *AWS_RESTRICT to_decode, struct aws_byte_buf *AWS_RESTRICT output) { - size_t decoded_length = 0; - - if (AWS_UNLIKELY(aws_base64_compute_decoded_len(to_decode, &decoded_length))) { - return AWS_OP_ERR; - } - - if (output->capacity < decoded_length) { - return aws_raise_error(AWS_ERROR_SHORT_BUFFER); - } - - if (aws_common_private_has_avx2()) { - size_t result = aws_common_private_base64_decode_sse41(to_decode->ptr, output->buffer, to_decode->len); - if (result == -1) { - return aws_raise_error(AWS_ERROR_INVALID_BASE64_STR); - } - - output->len = result; - return AWS_OP_SUCCESS; - } - - int64_t block_count = (int64_t)to_decode->len / 4; - size_t string_index = 0; - uint8_t value1 = 0, value2 = 0, value3 = 0, value4 = 0; - int64_t buffer_index = 0; - - for (int64_t i = 0; i < block_count - 1; ++i) { - if (AWS_UNLIKELY( - s_base64_get_decoded_value(to_decode->ptr[string_index++], &value1, 0) || - s_base64_get_decoded_value(to_decode->ptr[string_index++], &value2, 0) || - s_base64_get_decoded_value(to_decode->ptr[string_index++], &value3, 0) || - 
s_base64_get_decoded_value(to_decode->ptr[string_index++], &value4, 0))) { - return aws_raise_error(AWS_ERROR_INVALID_BASE64_STR); - } - - buffer_index = i * 3; - output->buffer[buffer_index++] = (uint8_t)((value1 << 2) | ((value2 >> 4) & 0x03)); - output->buffer[buffer_index++] = (uint8_t)(((value2 << 4) & 0xF0) | ((value3 >> 2) & 0x0F)); - output->buffer[buffer_index] = (uint8_t)((value3 & 0x03) << 6 | value4); - } - - buffer_index = (block_count - 1) * 3; - - if (buffer_index >= 0) { - if (s_base64_get_decoded_value(to_decode->ptr[string_index++], &value1, 0) || - s_base64_get_decoded_value(to_decode->ptr[string_index++], &value2, 0) || - s_base64_get_decoded_value(to_decode->ptr[string_index++], &value3, 1) || - s_base64_get_decoded_value(to_decode->ptr[string_index], &value4, 1)) { - return aws_raise_error(AWS_ERROR_INVALID_BASE64_STR); - } - - output->buffer[buffer_index++] = (uint8_t)((value1 << 2) | ((value2 >> 4) & 0x03)); - - if (value3 != BASE64_SENTIANAL_VALUE) { - output->buffer[buffer_index++] = (uint8_t)(((value2 << 4) & 0xF0) | ((value3 >> 2) & 0x0F)); - if (value4 != BASE64_SENTIANAL_VALUE) { - output->buffer[buffer_index] = (uint8_t)((value3 & 0x03) << 6 | value4); - } - } - } - output->len = decoded_length; - return AWS_OP_SUCCESS; -} + return AWS_OP_SUCCESS; +} + +static inline int s_base64_get_decoded_value(unsigned char to_decode, uint8_t *value, int8_t allow_sentinal) { + + uint8_t decode_value = BASE64_DECODING_TABLE[(size_t)to_decode]; + if (decode_value != 0xDD && (decode_value != BASE64_SENTIANAL_VALUE || allow_sentinal)) { + *value = decode_value; + return AWS_OP_SUCCESS; + } + + return AWS_OP_ERR; +} + +int aws_base64_decode(const struct aws_byte_cursor *AWS_RESTRICT to_decode, struct aws_byte_buf *AWS_RESTRICT output) { + size_t decoded_length = 0; + + if (AWS_UNLIKELY(aws_base64_compute_decoded_len(to_decode, &decoded_length))) { + return AWS_OP_ERR; + } + + if (output->capacity < decoded_length) { + return 
aws_raise_error(AWS_ERROR_SHORT_BUFFER); + } + + if (aws_common_private_has_avx2()) { + size_t result = aws_common_private_base64_decode_sse41(to_decode->ptr, output->buffer, to_decode->len); + if (result == -1) { + return aws_raise_error(AWS_ERROR_INVALID_BASE64_STR); + } + + output->len = result; + return AWS_OP_SUCCESS; + } + + int64_t block_count = (int64_t)to_decode->len / 4; + size_t string_index = 0; + uint8_t value1 = 0, value2 = 0, value3 = 0, value4 = 0; + int64_t buffer_index = 0; + + for (int64_t i = 0; i < block_count - 1; ++i) { + if (AWS_UNLIKELY( + s_base64_get_decoded_value(to_decode->ptr[string_index++], &value1, 0) || + s_base64_get_decoded_value(to_decode->ptr[string_index++], &value2, 0) || + s_base64_get_decoded_value(to_decode->ptr[string_index++], &value3, 0) || + s_base64_get_decoded_value(to_decode->ptr[string_index++], &value4, 0))) { + return aws_raise_error(AWS_ERROR_INVALID_BASE64_STR); + } + + buffer_index = i * 3; + output->buffer[buffer_index++] = (uint8_t)((value1 << 2) | ((value2 >> 4) & 0x03)); + output->buffer[buffer_index++] = (uint8_t)(((value2 << 4) & 0xF0) | ((value3 >> 2) & 0x0F)); + output->buffer[buffer_index] = (uint8_t)((value3 & 0x03) << 6 | value4); + } + + buffer_index = (block_count - 1) * 3; + + if (buffer_index >= 0) { + if (s_base64_get_decoded_value(to_decode->ptr[string_index++], &value1, 0) || + s_base64_get_decoded_value(to_decode->ptr[string_index++], &value2, 0) || + s_base64_get_decoded_value(to_decode->ptr[string_index++], &value3, 1) || + s_base64_get_decoded_value(to_decode->ptr[string_index], &value4, 1)) { + return aws_raise_error(AWS_ERROR_INVALID_BASE64_STR); + } + + output->buffer[buffer_index++] = (uint8_t)((value1 << 2) | ((value2 >> 4) & 0x03)); + + if (value3 != BASE64_SENTIANAL_VALUE) { + output->buffer[buffer_index++] = (uint8_t)(((value2 << 4) & 0xF0) | ((value3 >> 2) & 0x0F)); + if (value4 != BASE64_SENTIANAL_VALUE) { + output->buffer[buffer_index] = (uint8_t)((value3 & 0x03) << 6 | 
value4); + } + } + } + output->len = decoded_length; + return AWS_OP_SUCCESS; +} diff --git a/contrib/restricted/aws/aws-c-common/source/error.c b/contrib/restricted/aws/aws-c-common/source/error.c index 60e6c9e799..89fc55629f 100644 --- a/contrib/restricted/aws/aws-c-common/source/error.c +++ b/contrib/restricted/aws/aws-c-common/source/error.c @@ -1,149 +1,149 @@ /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. - */ - -#include <aws/common/error.h> - -#include <aws/common/common.h> - + */ + +#include <aws/common/error.h> + +#include <aws/common/common.h> + #include <errno.h> -#include <stdio.h> -#include <stdlib.h> - -static AWS_THREAD_LOCAL int tl_last_error = 0; - -static aws_error_handler_fn *s_global_handler = NULL; -static void *s_global_error_context = NULL; - -static AWS_THREAD_LOCAL aws_error_handler_fn *tl_thread_handler = NULL; -AWS_THREAD_LOCAL void *tl_thread_handler_context = NULL; - -/* Since slot size is 00000100 00000000, to divide, we need to shift right by 10 - * bits to find the slot, and to find the modulus, we use a binary and with +#include <stdio.h> +#include <stdlib.h> + +static AWS_THREAD_LOCAL int tl_last_error = 0; + +static aws_error_handler_fn *s_global_handler = NULL; +static void *s_global_error_context = NULL; + +static AWS_THREAD_LOCAL aws_error_handler_fn *tl_thread_handler = NULL; +AWS_THREAD_LOCAL void *tl_thread_handler_context = NULL; + +/* Since slot size is 00000100 00000000, to divide, we need to shift right by 10 + * bits to find the slot, and to find the modulus, we use a binary and with * 00000011 11111111 to find the index in that slot. 
*/ #define SLOT_MASK (AWS_ERROR_ENUM_STRIDE - 1) - + static const int MAX_ERROR_CODE = AWS_ERROR_ENUM_STRIDE * AWS_PACKAGE_SLOTS; - + static const struct aws_error_info_list *volatile ERROR_SLOTS[AWS_PACKAGE_SLOTS] = {0}; - -int aws_last_error(void) { - return tl_last_error; -} - -static const struct aws_error_info *get_error_by_code(int err) { - if (err >= MAX_ERROR_CODE || err < 0) { - return NULL; - } - + +int aws_last_error(void) { + return tl_last_error; +} + +static const struct aws_error_info *get_error_by_code(int err) { + if (err >= MAX_ERROR_CODE || err < 0) { + return NULL; + } + uint32_t slot_index = (uint32_t)err >> AWS_ERROR_ENUM_STRIDE_BITS; uint32_t error_index = (uint32_t)err & SLOT_MASK; - - const struct aws_error_info_list *error_slot = ERROR_SLOTS[slot_index]; - - if (!error_slot || error_index >= error_slot->count) { - return NULL; - } - - return &error_slot->error_list[error_index]; -} - -const char *aws_error_str(int err) { - const struct aws_error_info *error_info = get_error_by_code(err); - - if (error_info) { - return error_info->error_str; - } - - return "Unknown Error Code"; -} - -const char *aws_error_name(int err) { - const struct aws_error_info *error_info = get_error_by_code(err); - - if (error_info) { - return error_info->literal_name; - } - - return "Unknown Error Code"; -} - -const char *aws_error_lib_name(int err) { - const struct aws_error_info *error_info = get_error_by_code(err); - - if (error_info) { - return error_info->lib_name; - } - - return "Unknown Error Code"; -} - -const char *aws_error_debug_str(int err) { - const struct aws_error_info *error_info = get_error_by_code(err); - - if (error_info) { - return error_info->formatted_name; - } - - return "Unknown Error Code"; -} - -void aws_raise_error_private(int err) { - tl_last_error = err; - - if (tl_thread_handler) { - tl_thread_handler(tl_last_error, tl_thread_handler_context); - } else if (s_global_handler) { - s_global_handler(tl_last_error, s_global_error_context); - 
} -} - -void aws_reset_error(void) { - tl_last_error = 0; -} - -void aws_restore_error(int err) { - tl_last_error = err; -} - -aws_error_handler_fn *aws_set_global_error_handler_fn(aws_error_handler_fn *handler, void *ctx) { - aws_error_handler_fn *old_handler = s_global_handler; - s_global_handler = handler; - s_global_error_context = ctx; - - return old_handler; -} - -aws_error_handler_fn *aws_set_thread_local_error_handler_fn(aws_error_handler_fn *handler, void *ctx) { - aws_error_handler_fn *old_handler = tl_thread_handler; - tl_thread_handler = handler; - tl_thread_handler_context = ctx; - - return old_handler; -} - -void aws_register_error_info(const struct aws_error_info_list *error_info) { - /* - * We're not so worried about these asserts being removed in an NDEBUG build - * - we'll either segfault immediately (for the first two) or for the count - * assert, the registration will be ineffective. - */ + + const struct aws_error_info_list *error_slot = ERROR_SLOTS[slot_index]; + + if (!error_slot || error_index >= error_slot->count) { + return NULL; + } + + return &error_slot->error_list[error_index]; +} + +const char *aws_error_str(int err) { + const struct aws_error_info *error_info = get_error_by_code(err); + + if (error_info) { + return error_info->error_str; + } + + return "Unknown Error Code"; +} + +const char *aws_error_name(int err) { + const struct aws_error_info *error_info = get_error_by_code(err); + + if (error_info) { + return error_info->literal_name; + } + + return "Unknown Error Code"; +} + +const char *aws_error_lib_name(int err) { + const struct aws_error_info *error_info = get_error_by_code(err); + + if (error_info) { + return error_info->lib_name; + } + + return "Unknown Error Code"; +} + +const char *aws_error_debug_str(int err) { + const struct aws_error_info *error_info = get_error_by_code(err); + + if (error_info) { + return error_info->formatted_name; + } + + return "Unknown Error Code"; +} + +void aws_raise_error_private(int err) { + 
tl_last_error = err; + + if (tl_thread_handler) { + tl_thread_handler(tl_last_error, tl_thread_handler_context); + } else if (s_global_handler) { + s_global_handler(tl_last_error, s_global_error_context); + } +} + +void aws_reset_error(void) { + tl_last_error = 0; +} + +void aws_restore_error(int err) { + tl_last_error = err; +} + +aws_error_handler_fn *aws_set_global_error_handler_fn(aws_error_handler_fn *handler, void *ctx) { + aws_error_handler_fn *old_handler = s_global_handler; + s_global_handler = handler; + s_global_error_context = ctx; + + return old_handler; +} + +aws_error_handler_fn *aws_set_thread_local_error_handler_fn(aws_error_handler_fn *handler, void *ctx) { + aws_error_handler_fn *old_handler = tl_thread_handler; + tl_thread_handler = handler; + tl_thread_handler_context = ctx; + + return old_handler; +} + +void aws_register_error_info(const struct aws_error_info_list *error_info) { + /* + * We're not so worried about these asserts being removed in an NDEBUG build + * - we'll either segfault immediately (for the first two) or for the count + * assert, the registration will be ineffective. + */ AWS_FATAL_ASSERT(error_info); AWS_FATAL_ASSERT(error_info->error_list); AWS_FATAL_ASSERT(error_info->count); - + const int min_range = error_info->error_list[0].error_code; const int slot_index = min_range >> AWS_ERROR_ENUM_STRIDE_BITS; - + if (slot_index >= AWS_PACKAGE_SLOTS || slot_index < 0) { /* This is an NDEBUG build apparently. Kill the process rather than * corrupting heap. */ fprintf(stderr, "Bad error slot index %d\n", slot_index); AWS_FATAL_ASSERT(false); } - + #if DEBUG_BUILD /* Assert that error info entries are in the right order. 
*/ for (int i = 1; i < error_info->count; ++i) { @@ -159,7 +159,7 @@ void aws_register_error_info(const struct aws_error_info_list *error_info) { } } #endif /* DEBUG_BUILD */ - + ERROR_SLOTS[slot_index] = error_info; } @@ -172,14 +172,14 @@ void aws_unregister_error_info(const struct aws_error_info_list *error_info) { const int slot_index = min_range >> AWS_ERROR_ENUM_STRIDE_BITS; if (slot_index >= AWS_PACKAGE_SLOTS || slot_index < 0) { - /* This is an NDEBUG build apparently. Kill the process rather than - * corrupting heap. */ + /* This is an NDEBUG build apparently. Kill the process rather than + * corrupting heap. */ fprintf(stderr, "Bad error slot index %d\n", slot_index); AWS_FATAL_ASSERT(0); - } - + } + ERROR_SLOTS[slot_index] = NULL; -} +} int aws_translate_and_raise_io_error(int error_no) { switch (error_no) { diff --git a/contrib/restricted/aws/aws-c-common/source/hash_table.c b/contrib/restricted/aws/aws-c-common/source/hash_table.c index a8125a2df1..e59a30db18 100644 --- a/contrib/restricted/aws/aws-c-common/source/hash_table.c +++ b/contrib/restricted/aws/aws-c-common/source/hash_table.c @@ -1,57 +1,57 @@ /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. - */ - -/* For more information on how the RH hash works and in particular how we do - * deletions, see: - * http://codecapsule.com/2013/11/17/robin-hood-hashing-backward-shift-deletion/ - */ - -#include <aws/common/hash_table.h> -#include <aws/common/math.h> -#include <aws/common/private/hash_table_impl.h> -#include <aws/common/string.h> - -#include <limits.h> -#include <stdio.h> -#include <stdlib.h> - -/* Include lookup3.c so we can (potentially) inline it and make use of the mix() - * macro. 
*/ + */ + +/* For more information on how the RH hash works and in particular how we do + * deletions, see: + * http://codecapsule.com/2013/11/17/robin-hood-hashing-backward-shift-deletion/ + */ + +#include <aws/common/hash_table.h> +#include <aws/common/math.h> +#include <aws/common/private/hash_table_impl.h> +#include <aws/common/string.h> + +#include <limits.h> +#include <stdio.h> +#include <stdlib.h> + +/* Include lookup3.c so we can (potentially) inline it and make use of the mix() + * macro. */ #include <aws/common/private/lookup3.inl> - -static void s_suppress_unused_lookup3_func_warnings(void) { - /* We avoid making changes to lookup3 if we can avoid it, but since it has functions - * we're not using, reference them somewhere to suppress the unused function warning. - */ - (void)hashword; - (void)hashword2; - (void)hashlittle; - (void)hashbig; -} - + +static void s_suppress_unused_lookup3_func_warnings(void) { + /* We avoid making changes to lookup3 if we can avoid it, but since it has functions + * we're not using, reference them somewhere to suppress the unused function warning. + */ + (void)hashword; + (void)hashword2; + (void)hashlittle; + (void)hashbig; +} + /** * Calculate the hash for the given key. * Ensures a reasonable semantics for null keys. * Ensures that no object ever hashes to 0, which is the sentinal value for an empty hash element. 
*/ -static uint64_t s_hash_for(struct hash_table_state *state, const void *key) { +static uint64_t s_hash_for(struct hash_table_state *state, const void *key) { AWS_PRECONDITION(hash_table_state_is_valid(state)); - s_suppress_unused_lookup3_func_warnings(); - + s_suppress_unused_lookup3_func_warnings(); + if (key == NULL) { /* The best answer */ return 42; } - uint64_t hash_code = state->hash_fn(key); - if (!hash_code) { - hash_code = 1; - } + uint64_t hash_code = state->hash_fn(key); + if (!hash_code) { + hash_code = 1; + } AWS_RETURN_WITH_POSTCONDITION(hash_code, hash_code != 0); -} - +} + /** * Check equality of two objects, with a reasonable semantics for null. */ @@ -77,270 +77,270 @@ static bool s_hash_keys_eq(struct hash_table_state *state, const void *a, const AWS_RETURN_WITH_POSTCONDITION(rval, hash_table_state_is_valid(state)); } -static size_t s_index_for(struct hash_table_state *map, struct hash_table_entry *entry) { - AWS_PRECONDITION(hash_table_state_is_valid(map)); - size_t index = entry - map->slots; +static size_t s_index_for(struct hash_table_state *map, struct hash_table_entry *entry) { + AWS_PRECONDITION(hash_table_state_is_valid(map)); + size_t index = entry - map->slots; AWS_RETURN_WITH_POSTCONDITION(index, index < map->size && hash_table_state_is_valid(map)); -} - -#if 0 -/* Useful debugging code for anyone working on this in the future */ -static uint64_t s_distance(struct hash_table_state *state, int index) { - return (index - state->slots[index].hash_code) & state->mask; -} - -void hash_dump(struct aws_hash_table *tbl) { - struct hash_table_state *state = tbl->p_impl; - - printf("Dumping hash table contents:\n"); - - for (int i = 0; i < state->size; i++) { - printf("%7d: ", i); - struct hash_table_entry *e = &state->slots[i]; - if (!e->hash_code) { - printf("EMPTY\n"); - } else { - printf("k: %p v: %p hash_code: %lld displacement: %lld\n", - e->element.key, e->element.value, e->hash_code, - (i - e->hash_code) & state->mask); - } - } -} 
-#endif - -#if 0 -/* Not currently exposed as an API. Should we have something like this? Useful for benchmarks */ -AWS_COMMON_API -void aws_hash_table_print_stats(struct aws_hash_table *table) { - struct hash_table_state *state = table->p_impl; - uint64_t total_disp = 0; - uint64_t max_disp = 0; - - printf("\n=== Hash table statistics ===\n"); - printf("Table size: %zu/%zu (max load %zu, remaining %zu)\n", state->entry_count, state->size, state->max_load, state->max_load - state->entry_count); - printf("Load factor: %02.2lf%% (max %02.2lf%%)\n", - 100.0 * ((double)state->entry_count / (double)state->size), - state->max_load_factor); - - for (size_t i = 0; i < state->size; i++) { - if (state->slots[i].hash_code) { - int displacement = distance(state, i); - total_disp += displacement; - if (displacement > max_disp) { - max_disp = displacement; - } - } - } - - size_t *disp_counts = calloc(sizeof(*disp_counts), max_disp + 1); - - for (size_t i = 0; i < state->size; i++) { - if (state->slots[i].hash_code) { - disp_counts[distance(state, i)]++; - } - } - - uint64_t median = 0; - uint64_t passed = 0; - for (uint64_t i = 0; i <= max_disp && passed < total_disp / 2; i++) { - median = i; - passed += disp_counts[i]; - } - - printf("Displacement statistics: Avg %02.2lf max %llu median %llu\n", (double)total_disp / (double)state->entry_count, max_disp, median); - for (uint64_t i = 0; i <= max_disp; i++) { - printf("Displacement %2lld: %zu entries\n", i, disp_counts[i]); - } - free(disp_counts); - printf("\n"); -} -#endif - -size_t aws_hash_table_get_entry_count(const struct aws_hash_table *map) { - struct hash_table_state *state = map->p_impl; - return state->entry_count; -} - -/* Given a header template, allocates space for a hash table of the appropriate - * size, and copies the state header into this allocated memory, which is - * returned. 
- */ -static struct hash_table_state *s_alloc_state(const struct hash_table_state *template) { - size_t required_bytes; - if (hash_table_state_required_bytes(template->size, &required_bytes)) { - return NULL; - } - - /* An empty slot has hashcode 0. So this marks all slots as empty */ - struct hash_table_state *state = aws_mem_calloc(template->alloc, 1, required_bytes); - - if (state == NULL) { - return state; - } - - *state = *template; - return state; -} - -/* Computes the correct size and max_load based on a requested size. */ -static int s_update_template_size(struct hash_table_state *template, size_t expected_elements) { - size_t min_size = expected_elements; - - if (min_size < 2) { - min_size = 2; - } - - /* size is always a power of 2 */ - size_t size; - if (aws_round_up_to_power_of_two(min_size, &size)) { - return AWS_OP_ERR; - } - - /* Update the template once we've calculated everything successfully */ - template->size = size; - template->max_load = (size_t)(template->max_load_factor * (double)template->size); - /* Ensure that there is always at least one empty slot in the hash table */ - if (template->max_load >= size) { - template->max_load = size - 1; - } - - /* Since size is a power of 2: (index & (size - 1)) == (index % size) */ - template->mask = size - 1; - - return AWS_OP_SUCCESS; -} - -int aws_hash_table_init( - struct aws_hash_table *map, - struct aws_allocator *alloc, - size_t size, - aws_hash_fn *hash_fn, - aws_hash_callback_eq_fn *equals_fn, - aws_hash_callback_destroy_fn *destroy_key_fn, - aws_hash_callback_destroy_fn *destroy_value_fn) { +} + +#if 0 +/* Useful debugging code for anyone working on this in the future */ +static uint64_t s_distance(struct hash_table_state *state, int index) { + return (index - state->slots[index].hash_code) & state->mask; +} + +void hash_dump(struct aws_hash_table *tbl) { + struct hash_table_state *state = tbl->p_impl; + + printf("Dumping hash table contents:\n"); + + for (int i = 0; i < state->size; i++) { + 
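`s_update_template_size` above ties three values together: the slot count is rounded up to a power of two, `max_load` is derived from the load factor but clamped to leave at least one empty slot (the probe loops rely on it), and `mask = size - 1` turns modulo into a bitwise AND. A sketch of that arithmetic, with `round_up_pow2` as an assumed stand-in for `aws_round_up_to_power_of_two`:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-in for aws_round_up_to_power_of_two: 0 on success. */
static int round_up_pow2(size_t n, size_t *out) {
    size_t r = 1;
    while (r < n) {
        if (r > SIZE_MAX / 2) {
            return -1; /* next doubling would overflow */
        }
        r <<= 1;
    }
    *out = r;
    return 0;
}

/* Mirrors s_update_template_size's sizing math. */
static int compute_sizing(
    size_t requested, double load_factor, size_t *size, size_t *max_load, size_t *mask) {
    size_t min_size = requested < 2 ? 2 : requested;
    if (round_up_pow2(min_size, size)) {
        return -1;
    }
    *max_load = (size_t)(load_factor * (double)*size);
    if (*max_load >= *size) {
        *max_load = *size - 1; /* always keep one empty slot */
    }
    *mask = *size - 1; /* (i & mask) == (i % size) because size is a power of 2 */
    return 0;
}
```

The mask identity is the reason the table insists on power-of-two sizes: every probe computes `(hash + probe_idx) & mask` instead of a division.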
printf("%7d: ", i); + struct hash_table_entry *e = &state->slots[i]; + if (!e->hash_code) { + printf("EMPTY\n"); + } else { + printf("k: %p v: %p hash_code: %lld displacement: %lld\n", + e->element.key, e->element.value, e->hash_code, + (i - e->hash_code) & state->mask); + } + } +} +#endif + +#if 0 +/* Not currently exposed as an API. Should we have something like this? Useful for benchmarks */ +AWS_COMMON_API +void aws_hash_table_print_stats(struct aws_hash_table *table) { + struct hash_table_state *state = table->p_impl; + uint64_t total_disp = 0; + uint64_t max_disp = 0; + + printf("\n=== Hash table statistics ===\n"); + printf("Table size: %zu/%zu (max load %zu, remaining %zu)\n", state->entry_count, state->size, state->max_load, state->max_load - state->entry_count); + printf("Load factor: %02.2lf%% (max %02.2lf%%)\n", + 100.0 * ((double)state->entry_count / (double)state->size), + state->max_load_factor); + + for (size_t i = 0; i < state->size; i++) { + if (state->slots[i].hash_code) { + int displacement = distance(state, i); + total_disp += displacement; + if (displacement > max_disp) { + max_disp = displacement; + } + } + } + + size_t *disp_counts = calloc(sizeof(*disp_counts), max_disp + 1); + + for (size_t i = 0; i < state->size; i++) { + if (state->slots[i].hash_code) { + disp_counts[distance(state, i)]++; + } + } + + uint64_t median = 0; + uint64_t passed = 0; + for (uint64_t i = 0; i <= max_disp && passed < total_disp / 2; i++) { + median = i; + passed += disp_counts[i]; + } + + printf("Displacement statistics: Avg %02.2lf max %llu median %llu\n", (double)total_disp / (double)state->entry_count, max_disp, median); + for (uint64_t i = 0; i <= max_disp; i++) { + printf("Displacement %2lld: %zu entries\n", i, disp_counts[i]); + } + free(disp_counts); + printf("\n"); +} +#endif + +size_t aws_hash_table_get_entry_count(const struct aws_hash_table *map) { + struct hash_table_state *state = map->p_impl; + return state->entry_count; +} + +/* Given a header 
template, allocates space for a hash table of the appropriate + * size, and copies the state header into this allocated memory, which is + * returned. + */ +static struct hash_table_state *s_alloc_state(const struct hash_table_state *template) { + size_t required_bytes; + if (hash_table_state_required_bytes(template->size, &required_bytes)) { + return NULL; + } + + /* An empty slot has hashcode 0. So this marks all slots as empty */ + struct hash_table_state *state = aws_mem_calloc(template->alloc, 1, required_bytes); + + if (state == NULL) { + return state; + } + + *state = *template; + return state; +} + +/* Computes the correct size and max_load based on a requested size. */ +static int s_update_template_size(struct hash_table_state *template, size_t expected_elements) { + size_t min_size = expected_elements; + + if (min_size < 2) { + min_size = 2; + } + + /* size is always a power of 2 */ + size_t size; + if (aws_round_up_to_power_of_two(min_size, &size)) { + return AWS_OP_ERR; + } + + /* Update the template once we've calculated everything successfully */ + template->size = size; + template->max_load = (size_t)(template->max_load_factor * (double)template->size); + /* Ensure that there is always at least one empty slot in the hash table */ + if (template->max_load >= size) { + template->max_load = size - 1; + } + + /* Since size is a power of 2: (index & (size - 1)) == (index % size) */ + template->mask = size - 1; + + return AWS_OP_SUCCESS; +} + +int aws_hash_table_init( + struct aws_hash_table *map, + struct aws_allocator *alloc, + size_t size, + aws_hash_fn *hash_fn, + aws_hash_callback_eq_fn *equals_fn, + aws_hash_callback_destroy_fn *destroy_key_fn, + aws_hash_callback_destroy_fn *destroy_value_fn) { AWS_PRECONDITION(map != NULL); AWS_PRECONDITION(alloc != NULL); AWS_PRECONDITION(hash_fn != NULL); AWS_PRECONDITION(equals_fn != NULL); - - struct hash_table_state template; - template.hash_fn = hash_fn; - template.equals_fn = equals_fn; - 
template.destroy_key_fn = destroy_key_fn; - template.destroy_value_fn = destroy_value_fn; - template.alloc = alloc; - - template.entry_count = 0; - template.max_load_factor = 0.95; /* TODO - make configurable? */ - - if (s_update_template_size(&template, size)) { - return AWS_OP_ERR; - } - map->p_impl = s_alloc_state(&template); - - if (!map->p_impl) { - return AWS_OP_ERR; - } - + + struct hash_table_state template; + template.hash_fn = hash_fn; + template.equals_fn = equals_fn; + template.destroy_key_fn = destroy_key_fn; + template.destroy_value_fn = destroy_value_fn; + template.alloc = alloc; + + template.entry_count = 0; + template.max_load_factor = 0.95; /* TODO - make configurable? */ + + if (s_update_template_size(&template, size)) { + return AWS_OP_ERR; + } + map->p_impl = s_alloc_state(&template); + + if (!map->p_impl) { + return AWS_OP_ERR; + } + AWS_SUCCEED_WITH_POSTCONDITION(aws_hash_table_is_valid(map)); -} - -void aws_hash_table_clean_up(struct aws_hash_table *map) { +} + +void aws_hash_table_clean_up(struct aws_hash_table *map) { AWS_PRECONDITION(map != NULL); AWS_PRECONDITION( map->p_impl == NULL || aws_hash_table_is_valid(map), "Input aws_hash_table [map] must be valid or hash_table_state pointer [map->p_impl] must be NULL, in case " "aws_hash_table_clean_up was called twice."); - struct hash_table_state *state = map->p_impl; - - /* Ensure that we're idempotent */ - if (!state) { - return; - } - - aws_hash_table_clear(map); + struct hash_table_state *state = map->p_impl; + + /* Ensure that we're idempotent */ + if (!state) { + return; + } + + aws_hash_table_clear(map); aws_mem_release(map->p_impl->alloc, map->p_impl); - - map->p_impl = NULL; + + map->p_impl = NULL; AWS_POSTCONDITION(map->p_impl == NULL); -} - -void aws_hash_table_swap(struct aws_hash_table *AWS_RESTRICT a, struct aws_hash_table *AWS_RESTRICT b) { - AWS_PRECONDITION(a != b); - struct aws_hash_table tmp = *a; - *a = *b; - *b = tmp; -} - -void aws_hash_table_move(struct aws_hash_table 
*AWS_RESTRICT to, struct aws_hash_table *AWS_RESTRICT from) { +} + +void aws_hash_table_swap(struct aws_hash_table *AWS_RESTRICT a, struct aws_hash_table *AWS_RESTRICT b) { + AWS_PRECONDITION(a != b); + struct aws_hash_table tmp = *a; + *a = *b; + *b = tmp; +} + +void aws_hash_table_move(struct aws_hash_table *AWS_RESTRICT to, struct aws_hash_table *AWS_RESTRICT from) { AWS_PRECONDITION(to != NULL); AWS_PRECONDITION(from != NULL); AWS_PRECONDITION(to != from); AWS_PRECONDITION(aws_hash_table_is_valid(from)); - *to = *from; - AWS_ZERO_STRUCT(*from); - AWS_POSTCONDITION(aws_hash_table_is_valid(to)); -} - -/* Tries to find where the requested key is or where it should go if put. - * Returns AWS_ERROR_SUCCESS if the item existed (leaving it in *entry), - * or AWS_ERROR_HASHTBL_ITEM_NOT_FOUND if it did not (putting its destination - * in *entry). Note that this does not take care of displacing whatever was in - * that entry before. - * - * probe_idx is set to the probe index of the entry found. - */ - -static int s_find_entry1( - struct hash_table_state *state, - uint64_t hash_code, - const void *key, - struct hash_table_entry **p_entry, - size_t *p_probe_idx); - -/* Inlined fast path: Check the first slot, only. */ -/* TODO: Force inlining? */ -static int inline s_find_entry( - struct hash_table_state *state, - uint64_t hash_code, - const void *key, - struct hash_table_entry **p_entry, - size_t *p_probe_idx) { - struct hash_table_entry *entry = &state->slots[hash_code & state->mask]; - - if (entry->hash_code == 0) { - if (p_probe_idx) { - *p_probe_idx = 0; - } - *p_entry = entry; - return AWS_ERROR_HASHTBL_ITEM_NOT_FOUND; - } - + *to = *from; + AWS_ZERO_STRUCT(*from); + AWS_POSTCONDITION(aws_hash_table_is_valid(to)); +} + +/* Tries to find where the requested key is or where it should go if put. + * Returns AWS_ERROR_SUCCESS if the item existed (leaving it in *entry), + * or AWS_ERROR_HASHTBL_ITEM_NOT_FOUND if it did not (putting its destination + * in *entry). 
Note that this does not take care of displacing whatever was in + * that entry before. + * + * probe_idx is set to the probe index of the entry found. + */ + +static int s_find_entry1( + struct hash_table_state *state, + uint64_t hash_code, + const void *key, + struct hash_table_entry **p_entry, + size_t *p_probe_idx); + +/* Inlined fast path: Check the first slot, only. */ +/* TODO: Force inlining? */ +static int inline s_find_entry( + struct hash_table_state *state, + uint64_t hash_code, + const void *key, + struct hash_table_entry **p_entry, + size_t *p_probe_idx) { + struct hash_table_entry *entry = &state->slots[hash_code & state->mask]; + + if (entry->hash_code == 0) { + if (p_probe_idx) { + *p_probe_idx = 0; + } + *p_entry = entry; + return AWS_ERROR_HASHTBL_ITEM_NOT_FOUND; + } + if (entry->hash_code == hash_code && s_hash_keys_eq(state, key, entry->element.key)) { - if (p_probe_idx) { - *p_probe_idx = 0; - } - *p_entry = entry; - return AWS_OP_SUCCESS; - } - - return s_find_entry1(state, hash_code, key, p_entry, p_probe_idx); -} - -static int s_find_entry1( - struct hash_table_state *state, - uint64_t hash_code, - const void *key, - struct hash_table_entry **p_entry, - size_t *p_probe_idx) { - size_t probe_idx = 1; - /* If we find a deleted entry, we record that index and return it as our probe index (i.e. we'll keep searching to - * see if it already exists, but if not we'll overwrite the deleted entry). - */ - - int rv; - struct hash_table_entry *entry; + if (p_probe_idx) { + *p_probe_idx = 0; + } + *p_entry = entry; + return AWS_OP_SUCCESS; + } + + return s_find_entry1(state, hash_code, key, p_entry, p_probe_idx); +} + +static int s_find_entry1( + struct hash_table_state *state, + uint64_t hash_code, + const void *key, + struct hash_table_entry **p_entry, + size_t *p_probe_idx) { + size_t probe_idx = 1; + /* If we find a deleted entry, we record that index and return it as our probe index (i.e. 
we'll keep searching to + * see if it already exists, but if not we'll overwrite the deleted entry). + */ + + int rv; + struct hash_table_entry *entry; /* This loop is guaranteed to terminate because entry_probe is bounded above by state->mask (i.e. state->size - 1). * Since probe_idx increments every loop iteration, it will become larger than entry_probe after at most state->size * transitions and the loop will exit (if it hasn't already) @@ -350,82 +350,82 @@ static int s_find_entry1( # pragma CPROVER check push # pragma CPROVER check disable "unsigned-overflow" #endif - uint64_t index = (hash_code + probe_idx) & state->mask; + uint64_t index = (hash_code + probe_idx) & state->mask; #ifdef CBMC # pragma CPROVER check pop #endif - entry = &state->slots[index]; - if (!entry->hash_code) { - rv = AWS_ERROR_HASHTBL_ITEM_NOT_FOUND; - break; - } - + entry = &state->slots[index]; + if (!entry->hash_code) { + rv = AWS_ERROR_HASHTBL_ITEM_NOT_FOUND; + break; + } + if (entry->hash_code == hash_code && s_hash_keys_eq(state, key, entry->element.key)) { - rv = AWS_ERROR_SUCCESS; - break; - } - + rv = AWS_ERROR_SUCCESS; + break; + } + #ifdef CBMC # pragma CPROVER check push # pragma CPROVER check disable "unsigned-overflow" #endif - uint64_t entry_probe = (index - entry->hash_code) & state->mask; + uint64_t entry_probe = (index - entry->hash_code) & state->mask; #ifdef CBMC # pragma CPROVER check pop #endif - - if (entry_probe < probe_idx) { - /* We now know that our target entry cannot exist; if it did exist, - * it would be at the current location as it has a higher probe - * length than the entry we are examining and thus would have - * preempted that item - */ - rv = AWS_ERROR_HASHTBL_ITEM_NOT_FOUND; - break; - } - - probe_idx++; - } - - *p_entry = entry; - if (p_probe_idx) { - *p_probe_idx = probe_idx; + + if (entry_probe < probe_idx) { + /* We now know that our target entry cannot exist; if it did exist, + * it would be at the current location as it has a higher probe + * 
length than the entry we are examining and thus would have + * preempted that item + */ + rv = AWS_ERROR_HASHTBL_ITEM_NOT_FOUND; + break; + } + + probe_idx++; } - - return rv; -} - -int aws_hash_table_find(const struct aws_hash_table *map, const void *key, struct aws_hash_element **p_elem) { + + *p_entry = entry; + if (p_probe_idx) { + *p_probe_idx = probe_idx; + } + + return rv; +} + +int aws_hash_table_find(const struct aws_hash_table *map, const void *key, struct aws_hash_element **p_elem) { AWS_PRECONDITION(aws_hash_table_is_valid(map)); AWS_PRECONDITION(AWS_OBJECT_PTR_IS_WRITABLE(p_elem), "Input aws_hash_element pointer [p_elem] must be writable."); - struct hash_table_state *state = map->p_impl; - uint64_t hash_code = s_hash_for(state, key); - struct hash_table_entry *entry; - - int rv = s_find_entry(state, hash_code, key, &entry, NULL); - - if (rv == AWS_ERROR_SUCCESS) { - *p_elem = &entry->element; - } else { - *p_elem = NULL; - } + struct hash_table_state *state = map->p_impl; + uint64_t hash_code = s_hash_for(state, key); + struct hash_table_entry *entry; + + int rv = s_find_entry(state, hash_code, key, &entry, NULL); + + if (rv == AWS_ERROR_SUCCESS) { + *p_elem = &entry->element; + } else { + *p_elem = NULL; + } AWS_SUCCEED_WITH_POSTCONDITION(aws_hash_table_is_valid(map)); -} - +} + /** * Attempts to find a home for the given entry. * If the entry was empty (i.e. hash-code of 0), then the function does nothing and returns NULL * Otherwise, it emplaces the item, and returns a pointer to the newly emplaced entry. * This function is only called after the hash-table has been expanded to fit the new element, * so it should never fail. 
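The probe loop above exits early on the Robin Hood invariant: if the slot we reach holds an entry displaced *less* than our current probe distance, our key cannot be further along, because during insertion it would have evicted that "richer" occupant. A self-contained toy version keyed on hash codes alone (the `toy_*` names and the 8-slot table are illustrative, not the library's layout):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define TOY_SIZE 8u
#define TOY_MASK (TOY_SIZE - 1u)

/* A toy slot: hash_code 0 means empty, as in the real table. */
struct toy_slot { uint64_t hash_code; };

/* Mirrors the probe loop of s_find_entry1. Returns the slot index on a
 * match, or -1 once any stop condition fires: an empty slot, or an
 * occupant displaced less than our probe distance. Unsigned wraparound
 * in (index - hash_code) is well defined and masked down to the table. */
static int toy_find(const struct toy_slot *slots, uint64_t hash_code) {
    for (uint64_t probe_idx = 0; probe_idx < TOY_SIZE; probe_idx++) {
        uint64_t index = (hash_code + probe_idx) & TOY_MASK;
        const struct toy_slot *entry = &slots[index];
        if (!entry->hash_code) {
            return -1; /* empty slot: key is absent */
        }
        if (entry->hash_code == hash_code) {
            return (int)index; /* found */
        }
        uint64_t entry_probe = (index - entry->hash_code) & TOY_MASK;
        if (entry_probe < probe_idx) {
            return -1; /* occupant is richer than us: key is absent */
        }
    }
    return -1;
}
```

With hashes 9 and 17 (both homed at slot 1) and 2 (displaced to slot 3), a lookup for the absent hash 25 terminates at slot 3 without ever reaching an empty slot, which is exactly what keeps misses cheap in a clustered table.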
- */ -static struct hash_table_entry *s_emplace_item( - struct hash_table_state *state, - struct hash_table_entry entry, - size_t probe_idx) { + */ +static struct hash_table_entry *s_emplace_item( + struct hash_table_state *state, + struct hash_table_entry entry, + size_t probe_idx) { AWS_PRECONDITION(hash_table_state_is_valid(state)); - + if (entry.hash_code == 0) { AWS_RETURN_WITH_POSTCONDITION(NULL, hash_table_state_is_valid(state)); } @@ -439,263 +439,263 @@ static struct hash_table_entry *s_emplace_item( # pragma CPROVER check push # pragma CPROVER check disable "unsigned-overflow" #endif - size_t index = (size_t)(entry.hash_code + probe_idx) & state->mask; + size_t index = (size_t)(entry.hash_code + probe_idx) & state->mask; #ifdef CBMC # pragma CPROVER check pop #endif - struct hash_table_entry *victim = &state->slots[index]; - + struct hash_table_entry *victim = &state->slots[index]; + #ifdef CBMC # pragma CPROVER check push # pragma CPROVER check disable "unsigned-overflow" #endif - size_t victim_probe_idx = (size_t)(index - victim->hash_code) & state->mask; + size_t victim_probe_idx = (size_t)(index - victim->hash_code) & state->mask; #ifdef CBMC # pragma CPROVER check pop #endif - - if (!victim->hash_code || victim_probe_idx < probe_idx) { + + if (!victim->hash_code || victim_probe_idx < probe_idx) { /* The first thing we emplace is the entry itself. 
A pointer to its location becomes the rval */ if (!rval) { rval = victim; - } - - struct hash_table_entry tmp = *victim; - *victim = entry; - entry = tmp; - - probe_idx = victim_probe_idx + 1; - } else { - probe_idx++; - } - } - + } + + struct hash_table_entry tmp = *victim; + *victim = entry; + entry = tmp; + + probe_idx = victim_probe_idx + 1; + } else { + probe_idx++; + } + } + AWS_RETURN_WITH_POSTCONDITION( rval, hash_table_state_is_valid(state) && rval >= &state->slots[0] && rval < &state->slots[state->size], "Output hash_table_entry pointer [rval] must point in the slots of [state]."); -} - -static int s_expand_table(struct aws_hash_table *map) { - struct hash_table_state *old_state = map->p_impl; - struct hash_table_state template = *old_state; - +} + +static int s_expand_table(struct aws_hash_table *map) { + struct hash_table_state *old_state = map->p_impl; + struct hash_table_state template = *old_state; + size_t new_size; if (aws_mul_size_checked(template.size, 2, &new_size)) { return AWS_OP_ERR; } - + if (s_update_template_size(&template, new_size)) { return AWS_OP_ERR; } - struct hash_table_state *new_state = s_alloc_state(&template); - if (!new_state) { - return AWS_OP_ERR; - } - - for (size_t i = 0; i < old_state->size; i++) { - struct hash_table_entry entry = old_state->slots[i]; - if (entry.hash_code) { - /* We can directly emplace since we know we won't put the same item twice */ - s_emplace_item(new_state, entry, 0); - } - } - - map->p_impl = new_state; - aws_mem_release(new_state->alloc, old_state); - - return AWS_OP_SUCCESS; -} - -int aws_hash_table_create( - struct aws_hash_table *map, - const void *key, - struct aws_hash_element **p_elem, - int *was_created) { - - struct hash_table_state *state = map->p_impl; - uint64_t hash_code = s_hash_for(state, key); - struct hash_table_entry *entry; - size_t probe_idx; - int ignored; - if (!was_created) { - was_created = &ignored; - } - - int rv = s_find_entry(state, hash_code, key, &entry, &probe_idx); 
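`s_emplace_item` above places an entry by repeatedly comparing displacements: whenever the slot's occupant is richer (smaller displacement) than the incoming entry, the two swap and the evicted occupant continues probing from its own displacement. A toy sketch of that swap loop over hash codes only (same illustrative 8-slot layout as before; names are not the library's):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define TOY_SIZE 8u
#define TOY_MASK (TOY_SIZE - 1u)

struct toy_slot { uint64_t hash_code; }; /* 0 = empty */

/* Mirrors the displacement-swap core of s_emplace_item. Assumes the table
 * has at least one empty slot, as the real code guarantees via max_load. */
static void toy_emplace(struct toy_slot *slots, uint64_t hash_code) {
    struct toy_slot entry = {hash_code};
    uint64_t probe_idx = 0;
    while (entry.hash_code) {
        uint64_t index = (entry.hash_code + probe_idx) & TOY_MASK;
        struct toy_slot *victim = &slots[index];
        uint64_t victim_probe = (index - victim->hash_code) & TOY_MASK;
        if (!victim->hash_code || victim_probe < probe_idx) {
            /* Rob the rich: swap in, then re-home the displaced victim. */
            struct toy_slot tmp = *victim;
            *victim = entry;
            entry = tmp;
            probe_idx = victim_probe + 1;
        } else {
            probe_idx++;
        }
    }
}
```

Inserting hash 2 at its home slot 2, then 9 and 17 (both homed at slot 1), evicts 2 from slot 2 in favor of 17 and pushes it to slot 3, so no entry ends up poorer than the occupant ahead of it.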
- - if (rv == AWS_ERROR_SUCCESS) { - if (p_elem) { - *p_elem = &entry->element; - } - *was_created = 0; - return AWS_OP_SUCCESS; - } - - /* Okay, we need to add an entry. Check the load factor first. */ - size_t incr_entry_count; - if (aws_add_size_checked(state->entry_count, 1, &incr_entry_count)) { - return AWS_OP_ERR; - } - if (incr_entry_count > state->max_load) { - rv = s_expand_table(map); - if (rv != AWS_OP_SUCCESS) { - /* Any error was already raised in expand_table */ - return rv; - } - state = map->p_impl; - /* If we expanded the table, we need to discard the probe index returned from find_entry, - * as it's likely that we can find a more desirable slot. If we don't, then later gets will - * terminate before reaching our probe index. - - * n.b. currently we ignore this probe_idx subsequently, but leaving - this here so we don't - * forget when we optimize later. */ - probe_idx = 0; - } - - state->entry_count++; - struct hash_table_entry new_entry; - new_entry.element.key = key; - new_entry.element.value = NULL; - new_entry.hash_code = hash_code; - - entry = s_emplace_item(state, new_entry, probe_idx); - - if (p_elem) { - *p_elem = &entry->element; - } - - *was_created = 1; - - return AWS_OP_SUCCESS; -} - -AWS_COMMON_API -int aws_hash_table_put(struct aws_hash_table *map, const void *key, void *value, int *was_created) { - struct aws_hash_element *p_elem; - int was_created_fallback; - - if (!was_created) { - was_created = &was_created_fallback; - } - - if (aws_hash_table_create(map, key, &p_elem, was_created)) { - return AWS_OP_ERR; - } - - /* - * aws_hash_table_create might resize the table, which results in map->p_impl changing. - * It is therefore important to wait to read p_impl until after we return. 
- */ - struct hash_table_state *state = map->p_impl; - - if (!*was_created) { - if (p_elem->key != key && state->destroy_key_fn) { - state->destroy_key_fn((void *)p_elem->key); - } - - if (state->destroy_value_fn) { - state->destroy_value_fn((void *)p_elem->value); - } - } - - p_elem->key = key; - p_elem->value = value; - - return AWS_OP_SUCCESS; -} - -/* Clears an entry. Does _not_ invoke destructor callbacks. - * Returns the last slot touched (note that if we wrap, we'll report an index - * lower than the original entry's index) - */ -static size_t s_remove_entry(struct hash_table_state *state, struct hash_table_entry *entry) { - AWS_PRECONDITION(hash_table_state_is_valid(state)); - AWS_PRECONDITION(state->entry_count > 0); + struct hash_table_state *new_state = s_alloc_state(&template); + if (!new_state) { + return AWS_OP_ERR; + } + + for (size_t i = 0; i < old_state->size; i++) { + struct hash_table_entry entry = old_state->slots[i]; + if (entry.hash_code) { + /* We can directly emplace since we know we won't put the same item twice */ + s_emplace_item(new_state, entry, 0); + } + } + + map->p_impl = new_state; + aws_mem_release(new_state->alloc, old_state); + + return AWS_OP_SUCCESS; +} + +int aws_hash_table_create( + struct aws_hash_table *map, + const void *key, + struct aws_hash_element **p_elem, + int *was_created) { + + struct hash_table_state *state = map->p_impl; + uint64_t hash_code = s_hash_for(state, key); + struct hash_table_entry *entry; + size_t probe_idx; + int ignored; + if (!was_created) { + was_created = &ignored; + } + + int rv = s_find_entry(state, hash_code, key, &entry, &probe_idx); + + if (rv == AWS_ERROR_SUCCESS) { + if (p_elem) { + *p_elem = &entry->element; + } + *was_created = 0; + return AWS_OP_SUCCESS; + } + + /* Okay, we need to add an entry. Check the load factor first. 
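The overwrite branch of `aws_hash_table_put` above has a subtle ownership rule: the old key is destroyed only when the caller passed a *different* key pointer (destroying it when the pointers are equal would free the key being stored), while the old value is always destroyed. A sketch of just that branch, using counting destructors (`toy_*` names are illustrative):

```c
#include <assert.h>
#include <stddef.h>

static int keys_destroyed;
static int values_destroyed;

static void count_key_destroy(void *key) { (void)key; keys_destroyed++; }
static void count_value_destroy(void *val) { (void)val; values_destroyed++; }

struct toy_elem { const void *key; void *value; };

/* Mirrors the "element already existed" path of aws_hash_table_put. */
static void toy_put_overwrite(
    struct toy_elem *elem, const void *key, void *value,
    void (*destroy_key)(void *), void (*destroy_value)(void *)) {
    if (elem->key != key && destroy_key) {
        destroy_key((void *)elem->key); /* safe: not the key we are storing */
    }
    if (destroy_value) {
        destroy_value(elem->value);
    }
    elem->key = key;
    elem->value = value;
}
```

The pointer-identity check is also why the surrounding code re-reads `map->p_impl` only after `aws_hash_table_create` returns: a resize may have moved the state the destructor callbacks live in.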
*/ + size_t incr_entry_count; + if (aws_add_size_checked(state->entry_count, 1, &incr_entry_count)) { + return AWS_OP_ERR; + } + if (incr_entry_count > state->max_load) { + rv = s_expand_table(map); + if (rv != AWS_OP_SUCCESS) { + /* Any error was already raised in expand_table */ + return rv; + } + state = map->p_impl; + /* If we expanded the table, we need to discard the probe index returned from find_entry, + * as it's likely that we can find a more desirable slot. If we don't, then later gets will + * terminate before reaching our probe index. + + * n.b. currently we ignore this probe_idx subsequently, but leaving + this here so we don't + * forget when we optimize later. */ + probe_idx = 0; + } + + state->entry_count++; + struct hash_table_entry new_entry; + new_entry.element.key = key; + new_entry.element.value = NULL; + new_entry.hash_code = hash_code; + + entry = s_emplace_item(state, new_entry, probe_idx); + + if (p_elem) { + *p_elem = &entry->element; + } + + *was_created = 1; + + return AWS_OP_SUCCESS; +} + +AWS_COMMON_API +int aws_hash_table_put(struct aws_hash_table *map, const void *key, void *value, int *was_created) { + struct aws_hash_element *p_elem; + int was_created_fallback; + + if (!was_created) { + was_created = &was_created_fallback; + } + + if (aws_hash_table_create(map, key, &p_elem, was_created)) { + return AWS_OP_ERR; + } + + /* + * aws_hash_table_create might resize the table, which results in map->p_impl changing. + * It is therefore important to wait to read p_impl until after we return. + */ + struct hash_table_state *state = map->p_impl; + + if (!*was_created) { + if (p_elem->key != key && state->destroy_key_fn) { + state->destroy_key_fn((void *)p_elem->key); + } + + if (state->destroy_value_fn) { + state->destroy_value_fn((void *)p_elem->value); + } + } + + p_elem->key = key; + p_elem->value = value; + + return AWS_OP_SUCCESS; +} + +/* Clears an entry. Does _not_ invoke destructor callbacks. 
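The growth check in `aws_hash_table_create` above compares the incremented entry count against `max_load` and, if exceeded, doubles the table with an overflow-checked multiply. A compact sketch of that decision (`toy_next_size` is a hypothetical helper; the real code uses `aws_add_size_checked` and `aws_mul_size_checked`):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Returns 0 and the size to use for the next insert, or -1 if doubling
 * would overflow size_t. Assumes entry_count < SIZE_MAX, as the real
 * code guarantees via aws_add_size_checked. */
static int toy_next_size(
    size_t entry_count, size_t max_load, size_t size, size_t *new_size) {
    if (entry_count + 1 <= max_load) {
        *new_size = size; /* still under the load limit: no growth */
        return 0;
    }
    if (size > SIZE_MAX / 2) {
        return -1; /* doubling would overflow */
    }
    *new_size = size * 2;
    return 0;
}
```

After a real expansion the probe index from the earlier `s_find_entry` call is reset to 0, since slot positions in the rebuilt table no longer correspond to the old probe sequence.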
+ * Returns the last slot touched (note that if we wrap, we'll report an index + * lower than the original entry's index) + */ +static size_t s_remove_entry(struct hash_table_state *state, struct hash_table_entry *entry) { + AWS_PRECONDITION(hash_table_state_is_valid(state)); + AWS_PRECONDITION(state->entry_count > 0); AWS_PRECONDITION( entry >= &state->slots[0] && entry < &state->slots[state->size], "Input hash_table_entry [entry] pointer must point in the available slots."); - state->entry_count--; - - /* Shift subsequent entries back until we find an entry that belongs at its - * current position. This is important to ensure that subsequent searches - * don't terminate at the removed element. - */ - size_t index = s_index_for(state, entry); - /* There is always at least one empty slot in the hash table, so this loop always terminates */ - while (1) { - size_t next_index = (index + 1) & state->mask; - - /* If we hit an empty slot, stop */ - if (!state->slots[next_index].hash_code) { - break; - } - /* If the next slot is at the start of the probe sequence, stop. - * We know that nothing with an earlier home slot is after this; - * otherwise this index-zero entry would have been evicted from its - * home. - */ - if ((state->slots[next_index].hash_code & state->mask) == next_index) { - break; - } - - /* Okay, shift this one back */ - state->slots[index] = state->slots[next_index]; - index = next_index; - } - - /* Clear the entry we shifted out of */ - AWS_ZERO_STRUCT(state->slots[index]); + state->entry_count--; + + /* Shift subsequent entries back until we find an entry that belongs at its + * current position. This is important to ensure that subsequent searches + * don't terminate at the removed element. 
+ */ + size_t index = s_index_for(state, entry); + /* There is always at least one empty slot in the hash table, so this loop always terminates */ + while (1) { + size_t next_index = (index + 1) & state->mask; + + /* If we hit an empty slot, stop */ + if (!state->slots[next_index].hash_code) { + break; + } + /* If the next slot is at the start of the probe sequence, stop. + * We know that nothing with an earlier home slot is after this; + * otherwise this index-zero entry would have been evicted from its + * home. + */ + if ((state->slots[next_index].hash_code & state->mask) == next_index) { + break; + } + + /* Okay, shift this one back */ + state->slots[index] = state->slots[next_index]; + index = next_index; + } + + /* Clear the entry we shifted out of */ + AWS_ZERO_STRUCT(state->slots[index]); AWS_RETURN_WITH_POSTCONDITION(index, hash_table_state_is_valid(state) && index <= state->size); -} - -int aws_hash_table_remove( - struct aws_hash_table *map, - const void *key, - struct aws_hash_element *p_value, - int *was_present) { +} + +int aws_hash_table_remove( + struct aws_hash_table *map, + const void *key, + struct aws_hash_element *p_value, + int *was_present) { AWS_PRECONDITION(aws_hash_table_is_valid(map)); AWS_PRECONDITION( p_value == NULL || AWS_OBJECT_PTR_IS_WRITABLE(p_value), "Input pointer [p_value] must be NULL or writable."); AWS_PRECONDITION( was_present == NULL || AWS_OBJECT_PTR_IS_WRITABLE(was_present), "Input pointer [was_present] must be NULL or writable."); - - struct hash_table_state *state = map->p_impl; - uint64_t hash_code = s_hash_for(state, key); - struct hash_table_entry *entry; - int ignored; - - if (!was_present) { - was_present = &ignored; - } - - int rv = s_find_entry(state, hash_code, key, &entry, NULL); - - if (rv != AWS_ERROR_SUCCESS) { - *was_present = 0; + + struct hash_table_state *state = map->p_impl; + uint64_t hash_code = s_hash_for(state, key); + struct hash_table_entry *entry; + int ignored; + + if (!was_present) { + 
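`s_remove_entry` above implements backward-shift deletion: instead of tombstones, every entry after the removed one is shifted back one slot until the scan hits an empty slot or an entry already sitting in its home position. A toy version over hash codes (same illustrative 8-slot layout as the earlier sketches):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define TOY_SIZE 8u
#define TOY_MASK (TOY_SIZE - 1u)

struct toy_slot { uint64_t hash_code; }; /* 0 = empty */

/* Mirrors s_remove_entry's shift pass. Returns the slot finally cleared
 * (which may wrap below the original index in the real table). */
static size_t toy_remove_at(struct toy_slot *slots, size_t index) {
    while (1) {
        size_t next_index = (index + 1) & TOY_MASK;
        if (!slots[next_index].hash_code) {
            break; /* empty slot: the probe run ends here */
        }
        if ((slots[next_index].hash_code & TOY_MASK) == next_index) {
            break; /* next entry is in its home slot: must not move it */
        }
        slots[index] = slots[next_index]; /* shift one step back */
        index = next_index;
    }
    slots[index].hash_code = 0;
    return index;
}
```

Removing hash 9 from a run of 9, 17, 2 pulls 17 and 2 each one slot closer to home, so later lookups still terminate correctly without tombstone bookkeeping.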
was_present = &ignored; + } + + int rv = s_find_entry(state, hash_code, key, &entry, NULL); + + if (rv != AWS_ERROR_SUCCESS) { + *was_present = 0; AWS_SUCCEED_WITH_POSTCONDITION(aws_hash_table_is_valid(map)); - } - - *was_present = 1; - - if (p_value) { - *p_value = entry->element; - } else { - if (state->destroy_key_fn) { - state->destroy_key_fn((void *)entry->element.key); - } - if (state->destroy_value_fn) { - state->destroy_value_fn(entry->element.value); - } - } - s_remove_entry(state, entry); - + } + + *was_present = 1; + + if (p_value) { + *p_value = entry->element; + } else { + if (state->destroy_key_fn) { + state->destroy_key_fn((void *)entry->element.key); + } + if (state->destroy_value_fn) { + state->destroy_value_fn(entry->element.value); + } + } + s_remove_entry(state, entry); + AWS_SUCCEED_WITH_POSTCONDITION(aws_hash_table_is_valid(map)); -} - +} + int aws_hash_table_remove_element(struct aws_hash_table *map, struct aws_hash_element *p_value) { AWS_PRECONDITION(aws_hash_table_is_valid(map)); AWS_PRECONDITION(p_value != NULL); @@ -708,213 +708,213 @@ int aws_hash_table_remove_element(struct aws_hash_table *map, struct aws_hash_el AWS_SUCCEED_WITH_POSTCONDITION(aws_hash_table_is_valid(map)); } -int aws_hash_table_foreach( - struct aws_hash_table *map, - int (*callback)(void *context, struct aws_hash_element *pElement), - void *context) { - - for (struct aws_hash_iter iter = aws_hash_iter_begin(map); !aws_hash_iter_done(&iter); aws_hash_iter_next(&iter)) { - int rv = callback(context, &iter.element); - - if (rv & AWS_COMMON_HASH_TABLE_ITER_DELETE) { - aws_hash_iter_delete(&iter, false); - } - - if (!(rv & AWS_COMMON_HASH_TABLE_ITER_CONTINUE)) { - break; - } - } - - return AWS_OP_SUCCESS; -} - -bool aws_hash_table_eq( - const struct aws_hash_table *a, - const struct aws_hash_table *b, - aws_hash_callback_eq_fn *value_eq) { +int aws_hash_table_foreach( + struct aws_hash_table *map, + int (*callback)(void *context, struct aws_hash_element *pElement), + void 
*context) { + + for (struct aws_hash_iter iter = aws_hash_iter_begin(map); !aws_hash_iter_done(&iter); aws_hash_iter_next(&iter)) { + int rv = callback(context, &iter.element); + + if (rv & AWS_COMMON_HASH_TABLE_ITER_DELETE) { + aws_hash_iter_delete(&iter, false); + } + + if (!(rv & AWS_COMMON_HASH_TABLE_ITER_CONTINUE)) { + break; + } + } + + return AWS_OP_SUCCESS; +} + +bool aws_hash_table_eq( + const struct aws_hash_table *a, + const struct aws_hash_table *b, + aws_hash_callback_eq_fn *value_eq) { AWS_PRECONDITION(aws_hash_table_is_valid(a)); AWS_PRECONDITION(aws_hash_table_is_valid(b)); AWS_PRECONDITION(value_eq != NULL); - if (aws_hash_table_get_entry_count(a) != aws_hash_table_get_entry_count(b)) { + if (aws_hash_table_get_entry_count(a) != aws_hash_table_get_entry_count(b)) { AWS_RETURN_WITH_POSTCONDITION(false, aws_hash_table_is_valid(a) && aws_hash_table_is_valid(b)); - } - - /* - * Now that we have established that the two tables have the same number of - * entries, we can simply iterate one and compare against the same key in - * the other. - */ + } + + /* + * Now that we have established that the two tables have the same number of + * entries, we can simply iterate one and compare against the same key in + * the other. 
+ */ for (size_t i = 0; i < a->p_impl->size; ++i) { const struct hash_table_entry *const a_entry = &a->p_impl->slots[i]; if (a_entry->hash_code == 0) { continue; } - struct aws_hash_element *b_element = NULL; - + struct aws_hash_element *b_element = NULL; + aws_hash_table_find(b, a_entry->element.key, &b_element); - - if (!b_element) { - /* Key is present in A only */ + + if (!b_element) { + /* Key is present in A only */ AWS_RETURN_WITH_POSTCONDITION(false, aws_hash_table_is_valid(a) && aws_hash_table_is_valid(b)); - } - + } + if (!s_safe_eq_check(value_eq, a_entry->element.value, b_element->value)) { AWS_RETURN_WITH_POSTCONDITION(false, aws_hash_table_is_valid(a) && aws_hash_table_is_valid(b)); - } - } + } + } AWS_RETURN_WITH_POSTCONDITION(true, aws_hash_table_is_valid(a) && aws_hash_table_is_valid(b)); -} - -/** - * Given an iterator, and a start slot, find the next available filled slot if it exists - * Otherwise, return an iter that will return true for aws_hash_iter_done(). - * Note that aws_hash_iter_is_valid() need not hold on entry to the function, since - * it can be called on a partially constructed iter from aws_hash_iter_begin(). - * - * Note that calling this on an iterator which is "done" is idempotent: it will return another - * iterator which is "done". - */ -static inline void s_get_next_element(struct aws_hash_iter *iter, size_t start_slot) { - AWS_PRECONDITION(iter != NULL); +} + +/** + * Given an iterator, and a start slot, find the next available filled slot if it exists + * Otherwise, return an iter that will return true for aws_hash_iter_done(). + * Note that aws_hash_iter_is_valid() need not hold on entry to the function, since + * it can be called on a partially constructed iter from aws_hash_iter_begin(). + * + * Note that calling this on an iterator which is "done" is idempotent: it will return another + * iterator which is "done". 
+ */ +static inline void s_get_next_element(struct aws_hash_iter *iter, size_t start_slot) { + AWS_PRECONDITION(iter != NULL); AWS_PRECONDITION(aws_hash_table_is_valid(iter->map)); - struct hash_table_state *state = iter->map->p_impl; - size_t limit = iter->limit; - - for (size_t i = start_slot; i < limit; i++) { - struct hash_table_entry *entry = &state->slots[i]; - - if (entry->hash_code) { - iter->element = entry->element; - iter->slot = i; - iter->status = AWS_HASH_ITER_STATUS_READY_FOR_USE; - return; - } - } - iter->element.key = NULL; - iter->element.value = NULL; - iter->slot = iter->limit; - iter->status = AWS_HASH_ITER_STATUS_DONE; - AWS_POSTCONDITION(aws_hash_iter_is_valid(iter)); -} - -struct aws_hash_iter aws_hash_iter_begin(const struct aws_hash_table *map) { - AWS_PRECONDITION(aws_hash_table_is_valid(map)); - struct hash_table_state *state = map->p_impl; - struct aws_hash_iter iter; + struct hash_table_state *state = iter->map->p_impl; + size_t limit = iter->limit; + + for (size_t i = start_slot; i < limit; i++) { + struct hash_table_entry *entry = &state->slots[i]; + + if (entry->hash_code) { + iter->element = entry->element; + iter->slot = i; + iter->status = AWS_HASH_ITER_STATUS_READY_FOR_USE; + return; + } + } + iter->element.key = NULL; + iter->element.value = NULL; + iter->slot = iter->limit; + iter->status = AWS_HASH_ITER_STATUS_DONE; + AWS_POSTCONDITION(aws_hash_iter_is_valid(iter)); +} + +struct aws_hash_iter aws_hash_iter_begin(const struct aws_hash_table *map) { + AWS_PRECONDITION(aws_hash_table_is_valid(map)); + struct hash_table_state *state = map->p_impl; + struct aws_hash_iter iter; AWS_ZERO_STRUCT(iter); - iter.map = map; - iter.limit = state->size; - s_get_next_element(&iter, 0); + iter.map = map; + iter.limit = state->size; + s_get_next_element(&iter, 0); AWS_RETURN_WITH_POSTCONDITION( iter, aws_hash_iter_is_valid(&iter) && (iter.status == AWS_HASH_ITER_STATUS_DONE || iter.status == AWS_HASH_ITER_STATUS_READY_FOR_USE), "The status of 
output aws_hash_iter [iter] must either be DONE or READY_FOR_USE."); -} - -bool aws_hash_iter_done(const struct aws_hash_iter *iter) { - AWS_PRECONDITION(aws_hash_iter_is_valid(iter)); +} + +bool aws_hash_iter_done(const struct aws_hash_iter *iter) { + AWS_PRECONDITION(aws_hash_iter_is_valid(iter)); AWS_PRECONDITION( iter->status == AWS_HASH_ITER_STATUS_DONE || iter->status == AWS_HASH_ITER_STATUS_READY_FOR_USE, "Input aws_hash_iter [iter] must either be done, or ready to use."); - /* - * SIZE_MAX is a valid (non-terminal) value for iter->slot in the event that - * we delete slot 0. See comments in aws_hash_iter_delete. - * - * As such we must use == rather than >= here. - */ - bool rval = (iter->slot == iter->limit); + /* + * SIZE_MAX is a valid (non-terminal) value for iter->slot in the event that + * we delete slot 0. See comments in aws_hash_iter_delete. + * + * As such we must use == rather than >= here. + */ + bool rval = (iter->slot == iter->limit); AWS_POSTCONDITION( iter->status == AWS_HASH_ITER_STATUS_DONE || iter->status == AWS_HASH_ITER_STATUS_READY_FOR_USE, "The status of output aws_hash_iter [iter] must either be DONE or READY_FOR_USE."); AWS_POSTCONDITION( rval == (iter->status == AWS_HASH_ITER_STATUS_DONE), "Output bool [rval] must be true if and only if the status of [iter] is DONE."); - AWS_POSTCONDITION(aws_hash_iter_is_valid(iter)); - return rval; -} - -void aws_hash_iter_next(struct aws_hash_iter *iter) { - AWS_PRECONDITION(aws_hash_iter_is_valid(iter)); + AWS_POSTCONDITION(aws_hash_iter_is_valid(iter)); + return rval; +} + +void aws_hash_iter_next(struct aws_hash_iter *iter) { + AWS_PRECONDITION(aws_hash_iter_is_valid(iter)); #ifdef CBMC # pragma CPROVER check push # pragma CPROVER check disable "unsigned-overflow" #endif - s_get_next_element(iter, iter->slot + 1); + s_get_next_element(iter, iter->slot + 1); #ifdef CBMC # pragma CPROVER check pop #endif AWS_POSTCONDITION( iter->status == AWS_HASH_ITER_STATUS_DONE || iter->status == 
AWS_HASH_ITER_STATUS_READY_FOR_USE, "The status of output aws_hash_iter [iter] must either be DONE or READY_FOR_USE."); - AWS_POSTCONDITION(aws_hash_iter_is_valid(iter)); -} - -void aws_hash_iter_delete(struct aws_hash_iter *iter, bool destroy_contents) { + AWS_POSTCONDITION(aws_hash_iter_is_valid(iter)); +} + +void aws_hash_iter_delete(struct aws_hash_iter *iter, bool destroy_contents) { AWS_PRECONDITION( iter->status == AWS_HASH_ITER_STATUS_READY_FOR_USE, "Input aws_hash_iter [iter] must be ready for use."); - AWS_PRECONDITION(aws_hash_iter_is_valid(iter)); + AWS_PRECONDITION(aws_hash_iter_is_valid(iter)); AWS_PRECONDITION( iter->map->p_impl->entry_count > 0, "The hash_table_state pointed by input [iter] must contain at least one entry."); - - struct hash_table_state *state = iter->map->p_impl; - if (destroy_contents) { - if (state->destroy_key_fn) { - state->destroy_key_fn((void *)iter->element.key); - } - if (state->destroy_value_fn) { - state->destroy_value_fn(iter->element.value); - } - } - - size_t last_index = s_remove_entry(state, &state->slots[iter->slot]); - - /* If we shifted elements that are not part of the window we intend to iterate - * over, it means we shifted an element that we already visited into the - * iter->limit - 1 position. To avoid double iteration, we'll now reduce the - * limit to compensate. - * - * Note that last_index cannot equal iter->slot, because slots[iter->slot] - * is empty before we start walking the table. - */ - if (last_index < iter->slot || last_index >= iter->limit) { - iter->limit--; - } - - /* - * After removing this entry, the next entry might be in the same slot, or - * in some later slot, or we might have no further entries. - * - * We also expect that the caller will call aws_hash_iter_done and aws_hash_iter_next - * after this delete call. This gets a bit tricky if we just deleted the value - * in slot 0, and a new value has shifted in. 
- * - * To deal with this, we'll just step back one slot, and let _next start iteration - * at our current slot. Note that if we just deleted slot 0, this will result in - * underflowing to SIZE_MAX; we have to take care in aws_hash_iter_done to avoid - * treating this as an end-of-iteration condition. - */ + + struct hash_table_state *state = iter->map->p_impl; + if (destroy_contents) { + if (state->destroy_key_fn) { + state->destroy_key_fn((void *)iter->element.key); + } + if (state->destroy_value_fn) { + state->destroy_value_fn(iter->element.value); + } + } + + size_t last_index = s_remove_entry(state, &state->slots[iter->slot]); + + /* If we shifted elements that are not part of the window we intend to iterate + * over, it means we shifted an element that we already visited into the + * iter->limit - 1 position. To avoid double iteration, we'll now reduce the + * limit to compensate. + * + * Note that last_index cannot equal iter->slot, because slots[iter->slot] + * is empty before we start walking the table. + */ + if (last_index < iter->slot || last_index >= iter->limit) { + iter->limit--; + } + + /* + * After removing this entry, the next entry might be in the same slot, or + * in some later slot, or we might have no further entries. + * + * We also expect that the caller will call aws_hash_iter_done and aws_hash_iter_next + * after this delete call. This gets a bit tricky if we just deleted the value + * in slot 0, and a new value has shifted in. + * + * To deal with this, we'll just step back one slot, and let _next start iteration + * at our current slot. Note that if we just deleted slot 0, this will result in + * underflowing to SIZE_MAX; we have to take care in aws_hash_iter_done to avoid + * treating this as an end-of-iteration condition. 
+ */ #ifdef CBMC # pragma CPROVER check push # pragma CPROVER check disable "unsigned-overflow" #endif - iter->slot--; + iter->slot--; #ifdef CBMC # pragma CPROVER check pop #endif - iter->status = AWS_HASH_ITER_STATUS_DELETE_CALLED; + iter->status = AWS_HASH_ITER_STATUS_DELETE_CALLED; AWS_POSTCONDITION( iter->status == AWS_HASH_ITER_STATUS_DELETE_CALLED, "The status of output aws_hash_iter [iter] must be DELETE_CALLED."); - AWS_POSTCONDITION(aws_hash_iter_is_valid(iter)); -} - -void aws_hash_table_clear(struct aws_hash_table *map) { + AWS_POSTCONDITION(aws_hash_iter_is_valid(iter)); +} + +void aws_hash_table_clear(struct aws_hash_table *map) { AWS_PRECONDITION(aws_hash_table_is_valid(map)); - struct hash_table_state *state = map->p_impl; + struct hash_table_state *state = map->p_impl; /* Check that we have at least one destructor before iterating over the table */ if (state->destroy_key_fn || state->destroy_value_fn) { @@ -922,65 +922,65 @@ void aws_hash_table_clear(struct aws_hash_table *map) { struct hash_table_entry *entry = &state->slots[i]; if (!entry->hash_code) { continue; - } + } if (state->destroy_key_fn) { state->destroy_key_fn((void *)entry->element.key); - } + } if (state->destroy_value_fn) { - state->destroy_value_fn(entry->element.value); - } - } - } - /* Since hash code 0 represents an empty slot we can just zero out the - * entire table. */ - memset(state->slots, 0, sizeof(*state->slots) * state->size); - - state->entry_count = 0; + state->destroy_value_fn(entry->element.value); + } + } + } + /* Since hash code 0 represents an empty slot we can just zero out the + * entire table. 
*/ + memset(state->slots, 0, sizeof(*state->slots) * state->size); + + state->entry_count = 0; AWS_POSTCONDITION(aws_hash_table_is_valid(map)); -} - -uint64_t aws_hash_c_string(const void *item) { +} + +uint64_t aws_hash_c_string(const void *item) { AWS_PRECONDITION(aws_c_string_is_valid(item)); - const char *str = item; - - /* first digits of pi in hex */ - uint32_t b = 0x3243F6A8, c = 0x885A308D; - hashlittle2(str, strlen(str), &c, &b); - - return ((uint64_t)b << 32) | c; -} - -uint64_t aws_hash_string(const void *item) { + const char *str = item; + + /* first digits of pi in hex */ + uint32_t b = 0x3243F6A8, c = 0x885A308D; + hashlittle2(str, strlen(str), &c, &b); + + return ((uint64_t)b << 32) | c; +} + +uint64_t aws_hash_string(const void *item) { AWS_PRECONDITION(aws_string_is_valid(item)); - const struct aws_string *str = item; - - /* first digits of pi in hex */ - uint32_t b = 0x3243F6A8, c = 0x885A308D; - hashlittle2(aws_string_bytes(str), str->len, &c, &b); + const struct aws_string *str = item; + + /* first digits of pi in hex */ + uint32_t b = 0x3243F6A8, c = 0x885A308D; + hashlittle2(aws_string_bytes(str), str->len, &c, &b); AWS_RETURN_WITH_POSTCONDITION(((uint64_t)b << 32) | c, aws_string_is_valid(str)); -} - -uint64_t aws_hash_byte_cursor_ptr(const void *item) { +} + +uint64_t aws_hash_byte_cursor_ptr(const void *item) { AWS_PRECONDITION(aws_byte_cursor_is_valid(item)); - const struct aws_byte_cursor *cur = item; - - /* first digits of pi in hex */ - uint32_t b = 0x3243F6A8, c = 0x885A308D; - hashlittle2(cur->ptr, cur->len, &c, &b); + const struct aws_byte_cursor *cur = item; + + /* first digits of pi in hex */ + uint32_t b = 0x3243F6A8, c = 0x885A308D; + hashlittle2(cur->ptr, cur->len, &c, &b); AWS_RETURN_WITH_POSTCONDITION(((uint64_t)b << 32) | c, aws_byte_cursor_is_valid(cur)); -} - -uint64_t aws_hash_ptr(const void *item) { +} + +uint64_t aws_hash_ptr(const void *item) { /* Since the numeric value of the pointer is considered, not the memory 
behind it, 0 is an acceptable value */ - /* first digits of e in hex - * 2.b7e 1516 28ae d2a6 */ - uint32_t b = 0x2b7e1516, c = 0x28aed2a6; - - hashlittle2(&item, sizeof(item), &c, &b); - - return ((uint64_t)b << 32) | c; -} - + /* first digits of e in hex + * 2.b7e 1516 28ae d2a6 */ + uint32_t b = 0x2b7e1516, c = 0x28aed2a6; + + hashlittle2(&item, sizeof(item), &c, &b); + + return ((uint64_t)b << 32) | c; +} + uint64_t aws_hash_combine(uint64_t item1, uint64_t item2) { uint32_t b = item2 & 0xFFFFFFFF; /* LSB */ uint32_t c = item2 >> 32; /* MSB */ @@ -989,28 +989,28 @@ uint64_t aws_hash_combine(uint64_t item1, uint64_t item2) { return ((uint64_t)b << 32) | c; } -bool aws_hash_callback_c_str_eq(const void *a, const void *b) { - AWS_PRECONDITION(aws_c_string_is_valid(a)); - AWS_PRECONDITION(aws_c_string_is_valid(b)); - bool rval = !strcmp(a, b); +bool aws_hash_callback_c_str_eq(const void *a, const void *b) { + AWS_PRECONDITION(aws_c_string_is_valid(a)); + AWS_PRECONDITION(aws_c_string_is_valid(b)); + bool rval = !strcmp(a, b); AWS_RETURN_WITH_POSTCONDITION(rval, aws_c_string_is_valid(a) && aws_c_string_is_valid(b)); -} - -bool aws_hash_callback_string_eq(const void *a, const void *b) { - AWS_PRECONDITION(aws_string_is_valid(a)); - AWS_PRECONDITION(aws_string_is_valid(b)); - bool rval = aws_string_eq(a, b); +} + +bool aws_hash_callback_string_eq(const void *a, const void *b) { + AWS_PRECONDITION(aws_string_is_valid(a)); + AWS_PRECONDITION(aws_string_is_valid(b)); + bool rval = aws_string_eq(a, b); AWS_RETURN_WITH_POSTCONDITION(rval, aws_c_string_is_valid(a) && aws_c_string_is_valid(b)); -} - -void aws_hash_callback_string_destroy(void *a) { - AWS_PRECONDITION(aws_string_is_valid(a)); - aws_string_destroy(a); -} - -bool aws_ptr_eq(const void *a, const void *b) { - return a == b; -} +} + +void aws_hash_callback_string_destroy(void *a) { + AWS_PRECONDITION(aws_string_is_valid(a)); + aws_string_destroy(a); +} + +bool aws_ptr_eq(const void *a, const void *b) { + return a 
== b; +} /** * Best-effort check of hash_table_state data-structure invariants diff --git a/contrib/restricted/aws/aws-c-common/source/lru_cache.c b/contrib/restricted/aws/aws-c-common/source/lru_cache.c index 15de626b96..37724fd079 100644 --- a/contrib/restricted/aws/aws-c-common/source/lru_cache.c +++ b/contrib/restricted/aws/aws-c-common/source/lru_cache.c @@ -1,18 +1,18 @@ /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. - */ -#include <aws/common/lru_cache.h> + */ +#include <aws/common/lru_cache.h> static int s_lru_cache_put(struct aws_cache *cache, const void *key, void *p_value); static int s_lru_cache_find(struct aws_cache *cache, const void *key, void **p_value); static void *s_lru_cache_use_lru_element(struct aws_cache *cache); static void *s_lru_cache_get_mru_element(const struct aws_cache *cache); - + struct lru_cache_impl_vtable { void *(*use_lru_element)(struct aws_cache *cache); void *(*get_mru_element)(const struct aws_cache *cache); -}; - +}; + static struct aws_cache_vtable s_lru_cache_vtable = { .destroy = aws_cache_base_default_destroy, .find = s_lru_cache_find, @@ -21,23 +21,23 @@ static struct aws_cache_vtable s_lru_cache_vtable = { .clear = aws_cache_base_default_clear, .get_element_count = aws_cache_base_default_get_element_count, }; - + struct aws_cache *aws_cache_new_lru( - struct aws_allocator *allocator, - aws_hash_fn *hash_fn, - aws_hash_callback_eq_fn *equals_fn, - aws_hash_callback_destroy_fn *destroy_key_fn, - aws_hash_callback_destroy_fn *destroy_value_fn, - size_t max_items) { - AWS_ASSERT(allocator); - AWS_ASSERT(max_items); + struct aws_allocator *allocator, + aws_hash_fn *hash_fn, + aws_hash_callback_eq_fn *equals_fn, + aws_hash_callback_destroy_fn *destroy_key_fn, + aws_hash_callback_destroy_fn *destroy_value_fn, + size_t max_items) { + AWS_ASSERT(allocator); + AWS_ASSERT(max_items); struct aws_cache *lru_cache = NULL; struct lru_cache_impl_vtable *impl = NULL; - + 
if (!aws_mem_acquire_many( allocator, 2, &lru_cache, sizeof(struct aws_cache), &impl, sizeof(struct lru_cache_impl_vtable))) { return NULL; - } + } impl->use_lru_element = s_lru_cache_use_lru_element; impl->get_mru_element = s_lru_cache_get_mru_element; lru_cache->allocator = allocator; @@ -49,15 +49,15 @@ struct aws_cache *aws_cache_new_lru( return NULL; } return lru_cache; -} - +} + /* implementation for lru cache put */ static int s_lru_cache_put(struct aws_cache *cache, const void *key, void *p_value) { - + if (aws_linked_hash_table_put(&cache->table, key, p_value)) { - return AWS_OP_ERR; - } - + return AWS_OP_ERR; + } + /* Manage the space if we actually added a new element and the cache is full. */ if (aws_linked_hash_table_get_element_count(&cache->table) > cache->max_items) { /* we're over the cache size limit. Remove whatever is in the front of @@ -66,46 +66,46 @@ static int s_lru_cache_put(struct aws_cache *cache, const void *key, void *p_val struct aws_linked_list_node *node = aws_linked_list_front(list); struct aws_linked_hash_table_node *table_node = AWS_CONTAINER_OF(node, struct aws_linked_hash_table_node, node); return aws_linked_hash_table_remove(&cache->table, table_node->key); - } - - return AWS_OP_SUCCESS; -} + } + + return AWS_OP_SUCCESS; +} /* implementation for lru cache find */ static int s_lru_cache_find(struct aws_cache *cache, const void *key, void **p_value) { return (aws_linked_hash_table_find_and_move_to_back(&cache->table, key, p_value)); -} - +} + static void *s_lru_cache_use_lru_element(struct aws_cache *cache) { const struct aws_linked_list *list = aws_linked_hash_table_get_iteration_list(&cache->table); if (aws_linked_list_empty(list)) { - return NULL; - } + return NULL; + } struct aws_linked_list_node *node = aws_linked_list_front(list); struct aws_linked_hash_table_node *lru_node = AWS_CONTAINER_OF(node, struct aws_linked_hash_table_node, node); - + aws_linked_hash_table_move_node_to_end_of_list(&cache->table, lru_node); return 
lru_node->value; -} +} static void *s_lru_cache_get_mru_element(const struct aws_cache *cache) { const struct aws_linked_list *list = aws_linked_hash_table_get_iteration_list(&cache->table); if (aws_linked_list_empty(list)) { - return NULL; - } + return NULL; + } struct aws_linked_list_node *node = aws_linked_list_back(list); struct aws_linked_hash_table_node *mru_node = AWS_CONTAINER_OF(node, struct aws_linked_hash_table_node, node); return mru_node->value; } - + void *aws_lru_cache_use_lru_element(struct aws_cache *cache) { AWS_PRECONDITION(cache); AWS_PRECONDITION(cache->impl); struct lru_cache_impl_vtable *impl_vtable = cache->impl; return impl_vtable->use_lru_element(cache); -} - +} + void *aws_lru_cache_get_mru_element(const struct aws_cache *cache) { AWS_PRECONDITION(cache); AWS_PRECONDITION(cache->impl); struct lru_cache_impl_vtable *impl_vtable = cache->impl; return impl_vtable->get_mru_element(cache); -} +} diff --git a/contrib/restricted/aws/aws-c-common/source/posix/clock.c b/contrib/restricted/aws/aws-c-common/source/posix/clock.c index 90e213ea7c..a74515f5fe 100644 --- a/contrib/restricted/aws/aws-c-common/source/posix/clock.c +++ b/contrib/restricted/aws/aws-c-common/source/posix/clock.c @@ -1,136 +1,136 @@ /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. - */ - -#include <aws/common/clock.h> - -#include <time.h> - -static const uint64_t NS_PER_SEC = 1000000000; - -#if defined(CLOCK_MONOTONIC_RAW) -# define HIGH_RES_CLOCK CLOCK_MONOTONIC_RAW -#else -# define HIGH_RES_CLOCK CLOCK_MONOTONIC -#endif - -/* This entire compilation branch has two goals. First, prior to OSX Sierra, clock_gettime does not exist on OSX, so we - * already need to branch on that. Second, even if we compile on a newer OSX, which we will always do for bindings (e.g. - * python, dotnet, java etc...), we have to worry about the same lib being loaded on an older version, and thus, we'd - * get linker errors at runtime. 
To avoid this, we do a dynamic load - * to keep the function out of linker tables and only use the symbol if the current running process has access to the - * function. */ -#if defined(__MACH__) -# include <AvailabilityMacros.h> -# include <aws/common/thread.h> -# include <dlfcn.h> -# include <sys/time.h> - -static int s_legacy_get_time(uint64_t *timestamp) { - struct timeval tv; - int ret_val = gettimeofday(&tv, NULL); - - if (ret_val) { - return aws_raise_error(AWS_ERROR_CLOCK_FAILURE); - } - + */ + +#include <aws/common/clock.h> + +#include <time.h> + +static const uint64_t NS_PER_SEC = 1000000000; + +#if defined(CLOCK_MONOTONIC_RAW) +# define HIGH_RES_CLOCK CLOCK_MONOTONIC_RAW +#else +# define HIGH_RES_CLOCK CLOCK_MONOTONIC +#endif + +/* This entire compilation branch has two goals. First, prior to OSX Sierra, clock_gettime does not exist on OSX, so we + * already need to branch on that. Second, even if we compile on a newer OSX, which we will always do for bindings (e.g. + * python, dotnet, java etc...), we have to worry about the same lib being loaded on an older version, and thus, we'd + * get linker errors at runtime. To avoid this, we do a dynamic load + * to keep the function out of linker tables and only use the symbol if the current running process has access to the + * function. 
*/ +#if defined(__MACH__) +# include <AvailabilityMacros.h> +# include <aws/common/thread.h> +# include <dlfcn.h> +# include <sys/time.h> + +static int s_legacy_get_time(uint64_t *timestamp) { + struct timeval tv; + int ret_val = gettimeofday(&tv, NULL); + + if (ret_val) { + return aws_raise_error(AWS_ERROR_CLOCK_FAILURE); + } + uint64_t secs = (uint64_t)tv.tv_sec; uint64_t u_secs = (uint64_t)tv.tv_usec; *timestamp = (secs * NS_PER_SEC) + (u_secs * 1000); - return AWS_OP_SUCCESS; -} - -# if MAC_OS_X_VERSION_MAX_ALLOWED >= 101200 -static aws_thread_once s_thread_once_flag = AWS_THREAD_ONCE_STATIC_INIT; -static int (*s_gettime_fn)(clockid_t __clock_id, struct timespec *__tp) = NULL; - + return AWS_OP_SUCCESS; +} + +# if MAC_OS_X_VERSION_MAX_ALLOWED >= 101200 +static aws_thread_once s_thread_once_flag = AWS_THREAD_ONCE_STATIC_INIT; +static int (*s_gettime_fn)(clockid_t __clock_id, struct timespec *__tp) = NULL; + static void s_do_osx_loads(void *user_data) { (void)user_data; - s_gettime_fn = (int (*)(clockid_t __clock_id, struct timespec * __tp)) dlsym(RTLD_DEFAULT, "clock_gettime"); -} - -int aws_high_res_clock_get_ticks(uint64_t *timestamp) { + s_gettime_fn = (int (*)(clockid_t __clock_id, struct timespec * __tp)) dlsym(RTLD_DEFAULT, "clock_gettime"); +} + +int aws_high_res_clock_get_ticks(uint64_t *timestamp) { aws_thread_call_once(&s_thread_once_flag, s_do_osx_loads, NULL); - int ret_val = 0; - - if (s_gettime_fn) { - struct timespec ts; - ret_val = s_gettime_fn(HIGH_RES_CLOCK, &ts); - - if (ret_val) { - return aws_raise_error(AWS_ERROR_CLOCK_FAILURE); - } - + int ret_val = 0; + + if (s_gettime_fn) { + struct timespec ts; + ret_val = s_gettime_fn(HIGH_RES_CLOCK, &ts); + + if (ret_val) { + return aws_raise_error(AWS_ERROR_CLOCK_FAILURE); + } + uint64_t secs = (uint64_t)ts.tv_sec; uint64_t n_secs = (uint64_t)ts.tv_nsec; *timestamp = (secs * NS_PER_SEC) + n_secs; - return AWS_OP_SUCCESS; - } - - return s_legacy_get_time(timestamp); -} - -int 
aws_sys_clock_get_ticks(uint64_t *timestamp) { + return AWS_OP_SUCCESS; + } + + return s_legacy_get_time(timestamp); +} + +int aws_sys_clock_get_ticks(uint64_t *timestamp) { aws_thread_call_once(&s_thread_once_flag, s_do_osx_loads, NULL); - int ret_val = 0; - - if (s_gettime_fn) { - struct timespec ts; - ret_val = s_gettime_fn(CLOCK_REALTIME, &ts); - if (ret_val) { - return aws_raise_error(AWS_ERROR_CLOCK_FAILURE); - } - + int ret_val = 0; + + if (s_gettime_fn) { + struct timespec ts; + ret_val = s_gettime_fn(CLOCK_REALTIME, &ts); + if (ret_val) { + return aws_raise_error(AWS_ERROR_CLOCK_FAILURE); + } + uint64_t secs = (uint64_t)ts.tv_sec; uint64_t n_secs = (uint64_t)ts.tv_nsec; *timestamp = (secs * NS_PER_SEC) + n_secs; - return AWS_OP_SUCCESS; - } - return s_legacy_get_time(timestamp); -} -# else -int aws_high_res_clock_get_ticks(uint64_t *timestamp) { - return s_legacy_get_time(timestamp); -} - -int aws_sys_clock_get_ticks(uint64_t *timestamp) { - return s_legacy_get_time(timestamp); -} - -# endif /* MAC_OS_X_VERSION_MAX_ALLOWED >= 101200 */ -/* Everywhere else, just link clock_gettime in directly */ -#else -int aws_high_res_clock_get_ticks(uint64_t *timestamp) { - int ret_val = 0; - - struct timespec ts; - - ret_val = clock_gettime(HIGH_RES_CLOCK, &ts); - - if (ret_val) { - return aws_raise_error(AWS_ERROR_CLOCK_FAILURE); - } - + return AWS_OP_SUCCESS; + } + return s_legacy_get_time(timestamp); +} +# else +int aws_high_res_clock_get_ticks(uint64_t *timestamp) { + return s_legacy_get_time(timestamp); +} + +int aws_sys_clock_get_ticks(uint64_t *timestamp) { + return s_legacy_get_time(timestamp); +} + +# endif /* MAC_OS_X_VERSION_MAX_ALLOWED >= 101200 */ +/* Everywhere else, just link clock_gettime in directly */ +#else +int aws_high_res_clock_get_ticks(uint64_t *timestamp) { + int ret_val = 0; + + struct timespec ts; + + ret_val = clock_gettime(HIGH_RES_CLOCK, &ts); + + if (ret_val) { + return aws_raise_error(AWS_ERROR_CLOCK_FAILURE); + } + uint64_t secs = 
(uint64_t)ts.tv_sec; uint64_t n_secs = (uint64_t)ts.tv_nsec; *timestamp = (secs * NS_PER_SEC) + n_secs; - return AWS_OP_SUCCESS; -} - -int aws_sys_clock_get_ticks(uint64_t *timestamp) { - int ret_val = 0; - - struct timespec ts; - ret_val = clock_gettime(CLOCK_REALTIME, &ts); - if (ret_val) { - return aws_raise_error(AWS_ERROR_CLOCK_FAILURE); - } - + return AWS_OP_SUCCESS; +} + +int aws_sys_clock_get_ticks(uint64_t *timestamp) { + int ret_val = 0; + + struct timespec ts; + ret_val = clock_gettime(CLOCK_REALTIME, &ts); + if (ret_val) { + return aws_raise_error(AWS_ERROR_CLOCK_FAILURE); + } + uint64_t secs = (uint64_t)ts.tv_sec; uint64_t n_secs = (uint64_t)ts.tv_nsec; *timestamp = (secs * NS_PER_SEC) + n_secs; - return AWS_OP_SUCCESS; -} -#endif /* defined(__MACH__) */ + return AWS_OP_SUCCESS; +} +#endif /* defined(__MACH__) */ diff --git a/contrib/restricted/aws/aws-c-common/source/posix/condition_variable.c b/contrib/restricted/aws/aws-c-common/source/posix/condition_variable.c index ca321c6bfa..b4914e919b 100644 --- a/contrib/restricted/aws/aws-c-common/source/posix/condition_variable.c +++ b/contrib/restricted/aws/aws-c-common/source/posix/condition_variable.c @@ -1,39 +1,39 @@ /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. 
- */ - -#include <aws/common/condition_variable.h> - -#include <aws/common/clock.h> -#include <aws/common/mutex.h> - -#include <errno.h> - -static int process_error_code(int err) { - switch (err) { - case ENOMEM: - return aws_raise_error(AWS_ERROR_OOM); - case ETIMEDOUT: - return aws_raise_error(AWS_ERROR_COND_VARIABLE_TIMED_OUT); - default: - return aws_raise_error(AWS_ERROR_COND_VARIABLE_ERROR_UNKNOWN); - } -} - -int aws_condition_variable_init(struct aws_condition_variable *condition_variable) { + */ + +#include <aws/common/condition_variable.h> + +#include <aws/common/clock.h> +#include <aws/common/mutex.h> + +#include <errno.h> + +static int process_error_code(int err) { + switch (err) { + case ENOMEM: + return aws_raise_error(AWS_ERROR_OOM); + case ETIMEDOUT: + return aws_raise_error(AWS_ERROR_COND_VARIABLE_TIMED_OUT); + default: + return aws_raise_error(AWS_ERROR_COND_VARIABLE_ERROR_UNKNOWN); + } +} + +int aws_condition_variable_init(struct aws_condition_variable *condition_variable) { AWS_PRECONDITION(condition_variable); - if (pthread_cond_init(&condition_variable->condition_handle, NULL)) { + if (pthread_cond_init(&condition_variable->condition_handle, NULL)) { AWS_ZERO_STRUCT(*condition_variable); - return aws_raise_error(AWS_ERROR_COND_VARIABLE_INIT_FAILED); - } - + return aws_raise_error(AWS_ERROR_COND_VARIABLE_INIT_FAILED); + } + condition_variable->initialized = true; - return AWS_OP_SUCCESS; -} - -void aws_condition_variable_clean_up(struct aws_condition_variable *condition_variable) { + return AWS_OP_SUCCESS; +} + +void aws_condition_variable_clean_up(struct aws_condition_variable *condition_variable) { AWS_PRECONDITION(condition_variable); if (condition_variable->initialized) { @@ -41,71 +41,71 @@ void aws_condition_variable_clean_up(struct aws_condition_variable *condition_va } AWS_ZERO_STRUCT(*condition_variable); -} - -int aws_condition_variable_notify_one(struct aws_condition_variable *condition_variable) { +} + +int 
aws_condition_variable_notify_one(struct aws_condition_variable *condition_variable) { AWS_PRECONDITION(condition_variable && condition_variable->initialized); - int err_code = pthread_cond_signal(&condition_variable->condition_handle); - - if (err_code) { - return process_error_code(err_code); - } - - return AWS_OP_SUCCESS; -} - -int aws_condition_variable_notify_all(struct aws_condition_variable *condition_variable) { + int err_code = pthread_cond_signal(&condition_variable->condition_handle); + + if (err_code) { + return process_error_code(err_code); + } + + return AWS_OP_SUCCESS; +} + +int aws_condition_variable_notify_all(struct aws_condition_variable *condition_variable) { AWS_PRECONDITION(condition_variable && condition_variable->initialized); - int err_code = pthread_cond_broadcast(&condition_variable->condition_handle); - - if (err_code) { - return process_error_code(err_code); - } - - return AWS_OP_SUCCESS; -} - -int aws_condition_variable_wait(struct aws_condition_variable *condition_variable, struct aws_mutex *mutex) { + int err_code = pthread_cond_broadcast(&condition_variable->condition_handle); + + if (err_code) { + return process_error_code(err_code); + } + + return AWS_OP_SUCCESS; +} + +int aws_condition_variable_wait(struct aws_condition_variable *condition_variable, struct aws_mutex *mutex) { AWS_PRECONDITION(condition_variable && condition_variable->initialized); AWS_PRECONDITION(mutex && mutex->initialized); - int err_code = pthread_cond_wait(&condition_variable->condition_handle, &mutex->mutex_handle); - - if (err_code) { - return process_error_code(err_code); - } - - return AWS_OP_SUCCESS; -} - -int aws_condition_variable_wait_for( - struct aws_condition_variable *condition_variable, - struct aws_mutex *mutex, - int64_t time_to_wait) { - + int err_code = pthread_cond_wait(&condition_variable->condition_handle, &mutex->mutex_handle); + + if (err_code) { + return process_error_code(err_code); + } + + return AWS_OP_SUCCESS; +} + +int 
aws_condition_variable_wait_for( + struct aws_condition_variable *condition_variable, + struct aws_mutex *mutex, + int64_t time_to_wait) { + AWS_PRECONDITION(condition_variable && condition_variable->initialized); AWS_PRECONDITION(mutex && mutex->initialized); - uint64_t current_sys_time = 0; - if (aws_sys_clock_get_ticks(&current_sys_time)) { - return AWS_OP_ERR; - } - - time_to_wait += current_sys_time; - - struct timespec ts; - uint64_t remainder = 0; - ts.tv_sec = - (time_t)aws_timestamp_convert((uint64_t)time_to_wait, AWS_TIMESTAMP_NANOS, AWS_TIMESTAMP_SECS, &remainder); - ts.tv_nsec = (long)remainder; - - int err_code = pthread_cond_timedwait(&condition_variable->condition_handle, &mutex->mutex_handle, &ts); - - if (err_code) { - return process_error_code(err_code); - } - - return AWS_OP_SUCCESS; -} + uint64_t current_sys_time = 0; + if (aws_sys_clock_get_ticks(&current_sys_time)) { + return AWS_OP_ERR; + } + + time_to_wait += current_sys_time; + + struct timespec ts; + uint64_t remainder = 0; + ts.tv_sec = + (time_t)aws_timestamp_convert((uint64_t)time_to_wait, AWS_TIMESTAMP_NANOS, AWS_TIMESTAMP_SECS, &remainder); + ts.tv_nsec = (long)remainder; + + int err_code = pthread_cond_timedwait(&condition_variable->condition_handle, &mutex->mutex_handle, &ts); + + if (err_code) { + return process_error_code(err_code); + } + + return AWS_OP_SUCCESS; +} diff --git a/contrib/restricted/aws/aws-c-common/source/posix/device_random.c b/contrib/restricted/aws/aws-c-common/source/posix/device_random.c index f446002231..995bb79aaf 100644 --- a/contrib/restricted/aws/aws-c-common/source/posix/device_random.c +++ b/contrib/restricted/aws/aws-c-common/source/posix/device_random.c @@ -1,57 +1,57 @@ /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. 
- */ -#include <aws/common/device_random.h> - -#include <aws/common/byte_buf.h> -#include <aws/common/thread.h> - -#include <fcntl.h> -#include <unistd.h> - -static int s_rand_fd = -1; -static aws_thread_once s_rand_init = AWS_THREAD_ONCE_STATIC_INIT; - -#ifdef O_CLOEXEC -# define OPEN_FLAGS (O_RDONLY | O_CLOEXEC) -#else -# define OPEN_FLAGS (O_RDONLY) -#endif + */ +#include <aws/common/device_random.h> + +#include <aws/common/byte_buf.h> +#include <aws/common/thread.h> + +#include <fcntl.h> +#include <unistd.h> + +static int s_rand_fd = -1; +static aws_thread_once s_rand_init = AWS_THREAD_ONCE_STATIC_INIT; + +#ifdef O_CLOEXEC +# define OPEN_FLAGS (O_RDONLY | O_CLOEXEC) +#else +# define OPEN_FLAGS (O_RDONLY) +#endif static void s_init_rand(void *user_data) { (void)user_data; - s_rand_fd = open("/dev/urandom", OPEN_FLAGS); - - if (s_rand_fd == -1) { - s_rand_fd = open("/dev/urandom", O_RDONLY); - - if (s_rand_fd == -1) { - abort(); - } - } - - if (-1 == fcntl(s_rand_fd, F_SETFD, FD_CLOEXEC)) { - abort(); - } -} - -static int s_fallback_device_random_buffer(struct aws_byte_buf *output) { - + s_rand_fd = open("/dev/urandom", OPEN_FLAGS); + + if (s_rand_fd == -1) { + s_rand_fd = open("/dev/urandom", O_RDONLY); + + if (s_rand_fd == -1) { + abort(); + } + } + + if (-1 == fcntl(s_rand_fd, F_SETFD, FD_CLOEXEC)) { + abort(); + } +} + +static int s_fallback_device_random_buffer(struct aws_byte_buf *output) { + aws_thread_call_once(&s_rand_init, s_init_rand, NULL); - - size_t diff = output->capacity - output->len; - - ssize_t amount_read = read(s_rand_fd, output->buffer + output->len, diff); - - if (amount_read != diff) { - return aws_raise_error(AWS_ERROR_RANDOM_GEN_FAILED); - } - - output->len += diff; - - return AWS_OP_SUCCESS; -} - -int aws_device_random_buffer(struct aws_byte_buf *output) { - return s_fallback_device_random_buffer(output); -} + + size_t diff = output->capacity - output->len; + + ssize_t amount_read = read(s_rand_fd, output->buffer + output->len, diff); + 
+ if (amount_read != diff) { + return aws_raise_error(AWS_ERROR_RANDOM_GEN_FAILED); + } + + output->len += diff; + + return AWS_OP_SUCCESS; +} + +int aws_device_random_buffer(struct aws_byte_buf *output) { + return s_fallback_device_random_buffer(output); +} diff --git a/contrib/restricted/aws/aws-c-common/source/posix/environment.c b/contrib/restricted/aws/aws-c-common/source/posix/environment.c index f4b69caea2..5bc7679d6e 100644 --- a/contrib/restricted/aws/aws-c-common/source/posix/environment.c +++ b/contrib/restricted/aws/aws-c-common/source/posix/environment.c @@ -1,45 +1,45 @@ /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. - */ - -#include <aws/common/environment.h> - -#include <aws/common/string.h> -#include <stdlib.h> - -int aws_get_environment_value( - struct aws_allocator *allocator, - const struct aws_string *variable_name, - struct aws_string **value_out) { - + */ + +#include <aws/common/environment.h> + +#include <aws/common/string.h> +#include <stdlib.h> + +int aws_get_environment_value( + struct aws_allocator *allocator, + const struct aws_string *variable_name, + struct aws_string **value_out) { + const char *value = getenv(aws_string_c_str(variable_name)); - if (value == NULL) { - *value_out = NULL; - return AWS_OP_SUCCESS; - } - - *value_out = aws_string_new_from_c_str(allocator, value); - if (*value_out == NULL) { - return aws_raise_error(AWS_ERROR_ENVIRONMENT_GET); - } - - return AWS_OP_SUCCESS; -} - -int aws_set_environment_value(const struct aws_string *variable_name, const struct aws_string *value) { - + if (value == NULL) { + *value_out = NULL; + return AWS_OP_SUCCESS; + } + + *value_out = aws_string_new_from_c_str(allocator, value); + if (*value_out == NULL) { + return aws_raise_error(AWS_ERROR_ENVIRONMENT_GET); + } + + return AWS_OP_SUCCESS; +} + +int aws_set_environment_value(const struct aws_string *variable_name, const struct aws_string *value) { + if 
(setenv(aws_string_c_str(variable_name), aws_string_c_str(value), 1) != 0) { - return aws_raise_error(AWS_ERROR_ENVIRONMENT_SET); - } - - return AWS_OP_SUCCESS; -} - -int aws_unset_environment_value(const struct aws_string *variable_name) { + return aws_raise_error(AWS_ERROR_ENVIRONMENT_SET); + } + + return AWS_OP_SUCCESS; +} + +int aws_unset_environment_value(const struct aws_string *variable_name) { if (unsetenv(aws_string_c_str(variable_name)) != 0) { - return aws_raise_error(AWS_ERROR_ENVIRONMENT_UNSET); - } - - return AWS_OP_SUCCESS; -} + return aws_raise_error(AWS_ERROR_ENVIRONMENT_UNSET); + } + + return AWS_OP_SUCCESS; +} diff --git a/contrib/restricted/aws/aws-c-common/source/posix/mutex.c b/contrib/restricted/aws/aws-c-common/source/posix/mutex.c index 2cbf2db66c..adca71d8ff 100644 --- a/contrib/restricted/aws/aws-c-common/source/posix/mutex.c +++ b/contrib/restricted/aws/aws-c-common/source/posix/mutex.c @@ -1,53 +1,53 @@ /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. 
- */ - -#include <aws/common/mutex.h> -#include <aws/common/posix/common.inl> - -#include <errno.h> - -void aws_mutex_clean_up(struct aws_mutex *mutex) { + */ + +#include <aws/common/mutex.h> +#include <aws/common/posix/common.inl> + +#include <errno.h> + +void aws_mutex_clean_up(struct aws_mutex *mutex) { AWS_PRECONDITION(mutex); if (mutex->initialized) { pthread_mutex_destroy(&mutex->mutex_handle); } AWS_ZERO_STRUCT(*mutex); -} - -int aws_mutex_init(struct aws_mutex *mutex) { +} + +int aws_mutex_init(struct aws_mutex *mutex) { AWS_PRECONDITION(mutex); - pthread_mutexattr_t attr; - int err_code = pthread_mutexattr_init(&attr); - int return_code = AWS_OP_SUCCESS; - - if (!err_code) { - if ((err_code = pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_NORMAL)) || - (err_code = pthread_mutex_init(&mutex->mutex_handle, &attr))) { - - return_code = aws_private_convert_and_raise_error_code(err_code); - } - pthread_mutexattr_destroy(&attr); - } else { - return_code = aws_private_convert_and_raise_error_code(err_code); - } - + pthread_mutexattr_t attr; + int err_code = pthread_mutexattr_init(&attr); + int return_code = AWS_OP_SUCCESS; + + if (!err_code) { + if ((err_code = pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_NORMAL)) || + (err_code = pthread_mutex_init(&mutex->mutex_handle, &attr))) { + + return_code = aws_private_convert_and_raise_error_code(err_code); + } + pthread_mutexattr_destroy(&attr); + } else { + return_code = aws_private_convert_and_raise_error_code(err_code); + } + mutex->initialized = (return_code == AWS_OP_SUCCESS); - return return_code; -} - -int aws_mutex_lock(struct aws_mutex *mutex) { + return return_code; +} + +int aws_mutex_lock(struct aws_mutex *mutex) { AWS_PRECONDITION(mutex && mutex->initialized); - return aws_private_convert_and_raise_error_code(pthread_mutex_lock(&mutex->mutex_handle)); -} - -int aws_mutex_try_lock(struct aws_mutex *mutex) { + return aws_private_convert_and_raise_error_code(pthread_mutex_lock(&mutex->mutex_handle)); +} + 
+int aws_mutex_try_lock(struct aws_mutex *mutex) { AWS_PRECONDITION(mutex && mutex->initialized); - return aws_private_convert_and_raise_error_code(pthread_mutex_trylock(&mutex->mutex_handle)); -} - -int aws_mutex_unlock(struct aws_mutex *mutex) { + return aws_private_convert_and_raise_error_code(pthread_mutex_trylock(&mutex->mutex_handle)); +} + +int aws_mutex_unlock(struct aws_mutex *mutex) { AWS_PRECONDITION(mutex && mutex->initialized); - return aws_private_convert_and_raise_error_code(pthread_mutex_unlock(&mutex->mutex_handle)); -} + return aws_private_convert_and_raise_error_code(pthread_mutex_unlock(&mutex->mutex_handle)); +} diff --git a/contrib/restricted/aws/aws-c-common/source/posix/rw_lock.c b/contrib/restricted/aws/aws-c-common/source/posix/rw_lock.c index 824477d6cf..94ebe1fbf2 100644 --- a/contrib/restricted/aws/aws-c-common/source/posix/rw_lock.c +++ b/contrib/restricted/aws/aws-c-common/source/posix/rw_lock.c @@ -1,49 +1,49 @@ /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. 
- */ - -#include <aws/common/atomics.h> -#include <aws/common/rw_lock.h> - -#include <aws/common/posix/common.inl> - -int aws_rw_lock_init(struct aws_rw_lock *lock) { - - return aws_private_convert_and_raise_error_code(pthread_rwlock_init(&lock->lock_handle, NULL)); -} - -void aws_rw_lock_clean_up(struct aws_rw_lock *lock) { - - pthread_rwlock_destroy(&lock->lock_handle); -} - -int aws_rw_lock_rlock(struct aws_rw_lock *lock) { - - return aws_private_convert_and_raise_error_code(pthread_rwlock_rdlock(&lock->lock_handle)); -} - -int aws_rw_lock_wlock(struct aws_rw_lock *lock) { - - return aws_private_convert_and_raise_error_code(pthread_rwlock_wrlock(&lock->lock_handle)); -} - -int aws_rw_lock_try_rlock(struct aws_rw_lock *lock) { - - return aws_private_convert_and_raise_error_code(pthread_rwlock_tryrdlock(&lock->lock_handle)); -} - -int aws_rw_lock_try_wlock(struct aws_rw_lock *lock) { - - return aws_private_convert_and_raise_error_code(pthread_rwlock_trywrlock(&lock->lock_handle)); -} - -int aws_rw_lock_runlock(struct aws_rw_lock *lock) { - - return aws_private_convert_and_raise_error_code(pthread_rwlock_unlock(&lock->lock_handle)); -} - -int aws_rw_lock_wunlock(struct aws_rw_lock *lock) { - - return aws_private_convert_and_raise_error_code(pthread_rwlock_unlock(&lock->lock_handle)); -} + */ + +#include <aws/common/atomics.h> +#include <aws/common/rw_lock.h> + +#include <aws/common/posix/common.inl> + +int aws_rw_lock_init(struct aws_rw_lock *lock) { + + return aws_private_convert_and_raise_error_code(pthread_rwlock_init(&lock->lock_handle, NULL)); +} + +void aws_rw_lock_clean_up(struct aws_rw_lock *lock) { + + pthread_rwlock_destroy(&lock->lock_handle); +} + +int aws_rw_lock_rlock(struct aws_rw_lock *lock) { + + return aws_private_convert_and_raise_error_code(pthread_rwlock_rdlock(&lock->lock_handle)); +} + +int aws_rw_lock_wlock(struct aws_rw_lock *lock) { + + return aws_private_convert_and_raise_error_code(pthread_rwlock_wrlock(&lock->lock_handle)); +} + +int 
aws_rw_lock_try_rlock(struct aws_rw_lock *lock) { + + return aws_private_convert_and_raise_error_code(pthread_rwlock_tryrdlock(&lock->lock_handle)); +} + +int aws_rw_lock_try_wlock(struct aws_rw_lock *lock) { + + return aws_private_convert_and_raise_error_code(pthread_rwlock_trywrlock(&lock->lock_handle)); +} + +int aws_rw_lock_runlock(struct aws_rw_lock *lock) { + + return aws_private_convert_and_raise_error_code(pthread_rwlock_unlock(&lock->lock_handle)); +} + +int aws_rw_lock_wunlock(struct aws_rw_lock *lock) { + + return aws_private_convert_and_raise_error_code(pthread_rwlock_unlock(&lock->lock_handle)); +} diff --git a/contrib/restricted/aws/aws-c-common/source/posix/system_info.c b/contrib/restricted/aws/aws-c-common/source/posix/system_info.c index 1311be4096..5fae2812ad 100644 --- a/contrib/restricted/aws/aws-c-common/source/posix/system_info.c +++ b/contrib/restricted/aws/aws-c-common/source/posix/system_info.c @@ -1,41 +1,41 @@ /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. 
- */ - -#include <aws/common/system_info.h> - + */ + +#include <aws/common/system_info.h> + #include <aws/common/byte_buf.h> #include <aws/common/logging.h> #include <aws/common/platform.h> -#if defined(__FreeBSD__) || defined(__NetBSD__) -# define __BSD_VISIBLE 1 -#endif - -#include <unistd.h> - -#if defined(HAVE_SYSCONF) -size_t aws_system_info_processor_count(void) { - long nprocs = sysconf(_SC_NPROCESSORS_ONLN); - if (AWS_LIKELY(nprocs >= 0)) { - return (size_t)nprocs; - } - +#if defined(__FreeBSD__) || defined(__NetBSD__) +# define __BSD_VISIBLE 1 +#endif + +#include <unistd.h> + +#if defined(HAVE_SYSCONF) +size_t aws_system_info_processor_count(void) { + long nprocs = sysconf(_SC_NPROCESSORS_ONLN); + if (AWS_LIKELY(nprocs >= 0)) { + return (size_t)nprocs; + } + AWS_FATAL_POSTCONDITION(nprocs >= 0); - return 0; -} -#else -size_t aws_system_info_processor_count(void) { -# if defined(AWS_NUM_CPU_CORES) + return 0; +} +#else +size_t aws_system_info_processor_count(void) { +# if defined(AWS_NUM_CPU_CORES) AWS_FATAL_PRECONDITION(AWS_NUM_CPU_CORES > 0); - return AWS_NUM_CPU_CORES; -# else - return 1; -# endif -} -#endif - + return AWS_NUM_CPU_CORES; +# else + return 1; +# endif +} +#endif + #include <ctype.h> #include <fcntl.h> @@ -72,37 +72,37 @@ bool aws_is_debugger_present(void) { return false; } -#include <signal.h> - -#ifndef __has_builtin -# define __has_builtin(x) 0 -#endif - -void aws_debug_break(void) { -#ifdef DEBUG_BUILD +#include <signal.h> + +#ifndef __has_builtin +# define __has_builtin(x) 0 +#endif + +void aws_debug_break(void) { +#ifdef DEBUG_BUILD if (aws_is_debugger_present()) { -# if __has_builtin(__builtin_debugtrap) +# if __has_builtin(__builtin_debugtrap) __builtin_debugtrap(); -# else +# else raise(SIGTRAP); -# endif +# endif } -#endif /* DEBUG_BUILD */ -} - -#if defined(AWS_HAVE_EXECINFO) -# include <execinfo.h> -# include <limits.h> - -# define AWS_BACKTRACE_DEPTH 128 - -struct aws_stack_frame_info { - char exe[PATH_MAX]; - char addr[32]; - 
char base[32]; /* base addr for dylib/exe */ - char function[128]; -}; - +#endif /* DEBUG_BUILD */ +} + +#if defined(AWS_HAVE_EXECINFO) +# include <execinfo.h> +# include <limits.h> + +# define AWS_BACKTRACE_DEPTH 128 + +struct aws_stack_frame_info { + char exe[PATH_MAX]; + char addr[32]; + char base[32]; /* base addr for dylib/exe */ + char function[128]; +}; + /* Ensure only safe characters in a path buffer in case someone tries to rename the exe and trigger shell execution via the sub commands used to resolve symbols */ @@ -119,95 +119,95 @@ char *s_whitelist_chars(char *path) { return path; } -# if defined(__APPLE__) -# include <ctype.h> -# include <dlfcn.h> -# include <mach-o/dyld.h> -static char s_exe_path[PATH_MAX]; -const char *s_get_executable_path() { - static const char *s_exe = NULL; - if (AWS_LIKELY(s_exe)) { - return s_exe; - } - uint32_t len = sizeof(s_exe_path); - if (!_NSGetExecutablePath(s_exe_path, &len)) { - s_exe = s_exe_path; - } - return s_exe; -} -int s_parse_symbol(const char *symbol, void *addr, struct aws_stack_frame_info *frame) { - /* symbols look like: <frame_idx> <exe-or-shared-lib> <addr> <function> + <offset> - */ - const char *current_exe = s_get_executable_path(); - /* parse exe/shared lib */ - const char *exe_start = strstr(symbol, " "); +# if defined(__APPLE__) +# include <ctype.h> +# include <dlfcn.h> +# include <mach-o/dyld.h> +static char s_exe_path[PATH_MAX]; +const char *s_get_executable_path() { + static const char *s_exe = NULL; + if (AWS_LIKELY(s_exe)) { + return s_exe; + } + uint32_t len = sizeof(s_exe_path); + if (!_NSGetExecutablePath(s_exe_path, &len)) { + s_exe = s_exe_path; + } + return s_exe; +} +int s_parse_symbol(const char *symbol, void *addr, struct aws_stack_frame_info *frame) { + /* symbols look like: <frame_idx> <exe-or-shared-lib> <addr> <function> + <offset> + */ + const char *current_exe = s_get_executable_path(); + /* parse exe/shared lib */ + const char *exe_start = strstr(symbol, " "); while 
(aws_isspace(*exe_start)) { - ++exe_start; - } - const char *exe_end = strstr(exe_start, " "); - strncpy(frame->exe, exe_start, exe_end - exe_start); - /* executables get basename'd, so restore the path */ - if (strstr(current_exe, frame->exe)) { - strncpy(frame->exe, current_exe, strlen(current_exe)); + ++exe_start; } + const char *exe_end = strstr(exe_start, " "); + strncpy(frame->exe, exe_start, exe_end - exe_start); + /* executables get basename'd, so restore the path */ + if (strstr(current_exe, frame->exe)) { + strncpy(frame->exe, current_exe, strlen(current_exe)); + } s_whitelist_chars(frame->exe); - - /* parse addr */ - const char *addr_start = strstr(exe_end, "0x"); - const char *addr_end = strstr(addr_start, " "); - strncpy(frame->addr, addr_start, addr_end - addr_start); - - /* parse function */ - const char *function_start = strstr(addr_end, " ") + 1; - const char *function_end = strstr(function_start, " "); + + /* parse addr */ + const char *addr_start = strstr(exe_end, "0x"); + const char *addr_end = strstr(addr_start, " "); + strncpy(frame->addr, addr_start, addr_end - addr_start); + + /* parse function */ + const char *function_start = strstr(addr_end, " ") + 1; + const char *function_end = strstr(function_start, " "); /* truncate function name if needed */ size_t function_len = function_end - function_start; if (function_len >= (sizeof(frame->function) - 1)) { function_len = sizeof(frame->function) - 1; } - strncpy(frame->function, function_start, function_end - function_start); - - /* find base addr for library/exe */ - Dl_info addr_info; - dladdr(addr, &addr_info); - snprintf(frame->base, sizeof(frame->base), "0x%p", addr_info.dli_fbase); - - return AWS_OP_SUCCESS; -} - -void s_resolve_cmd(char *cmd, size_t len, struct aws_stack_frame_info *frame) { - snprintf(cmd, len, "atos -o %s -l %s %s", frame->exe, frame->base, frame->addr); -} -# else -int s_parse_symbol(const char *symbol, void *addr, struct aws_stack_frame_info *frame) { + 
strncpy(frame->function, function_start, function_end - function_start); + + /* find base addr for library/exe */ + Dl_info addr_info; + dladdr(addr, &addr_info); + snprintf(frame->base, sizeof(frame->base), "0x%p", addr_info.dli_fbase); + + return AWS_OP_SUCCESS; +} + +void s_resolve_cmd(char *cmd, size_t len, struct aws_stack_frame_info *frame) { + snprintf(cmd, len, "atos -o %s -l %s %s", frame->exe, frame->base, frame->addr); +} +# else +int s_parse_symbol(const char *symbol, void *addr, struct aws_stack_frame_info *frame) { /* symbols look like: <exe-or-shared-lib>(<function>+<addr>) [0x<addr>] - * or: <exe-or-shared-lib> [0x<addr>] + * or: <exe-or-shared-lib> [0x<addr>] * or: [0x<addr>] - */ - (void)addr; - const char *open_paren = strstr(symbol, "("); - const char *close_paren = strstr(symbol, ")"); - const char *exe_end = open_paren; - /* there may not be a function in parens, or parens at all */ - if (open_paren == NULL || close_paren == NULL) { + */ + (void)addr; + const char *open_paren = strstr(symbol, "("); + const char *close_paren = strstr(symbol, ")"); + const char *exe_end = open_paren; + /* there may not be a function in parens, or parens at all */ + if (open_paren == NULL || close_paren == NULL) { exe_end = strstr(symbol, "["); - if (!exe_end) { - return AWS_OP_ERR; - } + if (!exe_end) { + return AWS_OP_ERR; + } /* if exe_end == symbol, there's no exe */ if (exe_end != symbol) { exe_end -= 1; } - } - - ptrdiff_t exe_len = exe_end - symbol; + } + + ptrdiff_t exe_len = exe_end - symbol; if (exe_len > 0) { strncpy(frame->exe, symbol, exe_len); - } + } s_whitelist_chars(frame->exe); - - long function_len = (open_paren && close_paren) ? close_paren - open_paren - 1 : 0; - if (function_len > 0) { /* dynamic symbol was found */ + + long function_len = (open_paren && close_paren) ? 
close_paren - open_paren - 1 : 0; + if (function_len > 0) { /* dynamic symbol was found */ /* there might be (<function>+<addr>) or just (<function>) */ const char *function_start = open_paren + 1; const char *plus = strstr(function_start, "+"); @@ -219,7 +219,7 @@ int s_parse_symbol(const char *symbol, void *addr, struct aws_stack_frame_info * long addr_len = close_paren - plus - 1; strncpy(frame->addr, plus + 1, addr_len); } - } + } if (frame->addr[0] == 0) { /* use the address in []'s, since it's all we have */ const char *addr_start = strstr(exe_end, "[") + 1; @@ -229,14 +229,14 @@ int s_parse_symbol(const char *symbol, void *addr, struct aws_stack_frame_info * } strncpy(frame->addr, addr_start, addr_end - addr_start); } - - return AWS_OP_SUCCESS; -} -void s_resolve_cmd(char *cmd, size_t len, struct aws_stack_frame_info *frame) { - snprintf(cmd, len, "addr2line -afips -e %s %s", frame->exe, frame->addr); -} -# endif - + + return AWS_OP_SUCCESS; +} +void s_resolve_cmd(char *cmd, size_t len, struct aws_stack_frame_info *frame) { + snprintf(cmd, len, "addr2line -afips -e %s %s", frame->exe, frame->addr); +} +# endif + size_t aws_backtrace(void **stack_frames, size_t num_frames) { return backtrace(stack_frames, (int)aws_min_size(num_frames, INT_MAX)); } @@ -294,58 +294,58 @@ char **aws_backtrace_addr2line(void *const *stack_frames, size_t stack_depth) { return (char **)lines.buffer; /* caller is responsible for freeing */ } -void aws_backtrace_print(FILE *fp, void *call_site_data) { - siginfo_t *siginfo = call_site_data; - if (siginfo) { - fprintf(fp, "Signal received: %d, errno: %d\n", siginfo->si_signo, siginfo->si_errno); - if (siginfo->si_signo == SIGSEGV) { - fprintf(fp, " SIGSEGV @ 0x%p\n", siginfo->si_addr); - } - } - - void *stack_frames[AWS_BACKTRACE_DEPTH]; +void aws_backtrace_print(FILE *fp, void *call_site_data) { + siginfo_t *siginfo = call_site_data; + if (siginfo) { + fprintf(fp, "Signal received: %d, errno: %d\n", siginfo->si_signo, 
siginfo->si_errno); + if (siginfo->si_signo == SIGSEGV) { + fprintf(fp, " SIGSEGV @ 0x%p\n", siginfo->si_addr); + } + } + + void *stack_frames[AWS_BACKTRACE_DEPTH]; size_t stack_depth = aws_backtrace(stack_frames, AWS_BACKTRACE_DEPTH); char **symbols = aws_backtrace_symbols(stack_frames, stack_depth); - if (symbols == NULL) { - fprintf(fp, "Unable to decode backtrace via backtrace_symbols\n"); - return; - } - + if (symbols == NULL) { + fprintf(fp, "Unable to decode backtrace via backtrace_symbols\n"); + return; + } + fprintf(fp, "################################################################################\n"); fprintf(fp, "Resolved stacktrace:\n"); fprintf(fp, "################################################################################\n"); /* symbols look like: <exe-or-shared-lib>(<function>+<addr>) [0x<addr>] - * or: <exe-or-shared-lib> [0x<addr>] + * or: <exe-or-shared-lib> [0x<addr>] * or: [0x<addr>] - * start at 1 to skip the current frame (this function) */ + * start at 1 to skip the current frame (this function) */ for (size_t frame_idx = 1; frame_idx < stack_depth; ++frame_idx) { - struct aws_stack_frame_info frame; - AWS_ZERO_STRUCT(frame); - const char *symbol = symbols[frame_idx]; - if (s_parse_symbol(symbol, stack_frames[frame_idx], &frame)) { - goto parse_failed; - } - - /* TODO: Emulate libunwind */ - char cmd[sizeof(struct aws_stack_frame_info)] = {0}; - s_resolve_cmd(cmd, sizeof(cmd), &frame); - FILE *out = popen(cmd, "r"); - if (!out) { - goto parse_failed; - } - char output[1024]; - if (fgets(output, sizeof(output), out)) { - /* if addr2line or atos don't know what to do with an address, they just echo it */ - /* if there are spaces in the output, then they resolved something */ - if (strstr(output, " ")) { - symbol = output; - } - } - pclose(out); - - parse_failed: - fprintf(fp, "%s%s", symbol, (symbol == symbols[frame_idx]) ? 
"\n" : ""); - } + struct aws_stack_frame_info frame; + AWS_ZERO_STRUCT(frame); + const char *symbol = symbols[frame_idx]; + if (s_parse_symbol(symbol, stack_frames[frame_idx], &frame)) { + goto parse_failed; + } + + /* TODO: Emulate libunwind */ + char cmd[sizeof(struct aws_stack_frame_info)] = {0}; + s_resolve_cmd(cmd, sizeof(cmd), &frame); + FILE *out = popen(cmd, "r"); + if (!out) { + goto parse_failed; + } + char output[1024]; + if (fgets(output, sizeof(output), out)) { + /* if addr2line or atos don't know what to do with an address, they just echo it */ + /* if there are spaces in the output, then they resolved something */ + if (strstr(output, " ")) { + symbol = output; + } + } + pclose(out); + + parse_failed: + fprintf(fp, "%s%s", symbol, (symbol == symbols[frame_idx]) ? "\n" : ""); + } fprintf(fp, "################################################################################\n"); fprintf(fp, "Raw stacktrace:\n"); @@ -356,14 +356,14 @@ void aws_backtrace_print(FILE *fp, void *call_site_data) { } fflush(fp); - free(symbols); -} - -#else -void aws_backtrace_print(FILE *fp, void *call_site_data) { + free(symbols); +} + +#else +void aws_backtrace_print(FILE *fp, void *call_site_data) { (void)call_site_data; - fprintf(fp, "No call stack information available\n"); -} + fprintf(fp, "No call stack information available\n"); +} size_t aws_backtrace(void **stack_frames, size_t size) { (void)stack_frames; @@ -382,7 +382,7 @@ char **aws_backtrace_addr2line(void *const *stack_frames, size_t stack_depth) { (void)stack_depth; return NULL; } -#endif /* AWS_HAVE_EXECINFO */ +#endif /* AWS_HAVE_EXECINFO */ void aws_backtrace_log() { void *stack_frames[1024]; diff --git a/contrib/restricted/aws/aws-c-common/source/posix/thread.c b/contrib/restricted/aws/aws-c-common/source/posix/thread.c index 064d16882f..1ae2660f1a 100644 --- a/contrib/restricted/aws/aws-c-common/source/posix/thread.c +++ b/contrib/restricted/aws/aws-c-common/source/posix/thread.c @@ -1,57 +1,57 @@ /** * 
Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. - */ - + */ + #if !defined(__MACH__) # define _GNU_SOURCE #endif - -#include <aws/common/clock.h> + +#include <aws/common/clock.h> #include <aws/common/logging.h> #include <aws/common/private/dlloads.h> #include <aws/common/thread.h> - + #include <dlfcn.h> -#include <errno.h> +#include <errno.h> #include <inttypes.h> -#include <limits.h> +#include <limits.h> #include <sched.h> -#include <time.h> +#include <time.h> #include <unistd.h> - + #if defined(__FreeBSD__) || defined(__NETBSD__) # include <pthread_np.h> typedef cpuset_t cpu_set_t; #endif -static struct aws_thread_options s_default_options = { - /* this will make sure platform default stack size is used. */ +static struct aws_thread_options s_default_options = { + /* this will make sure platform default stack size is used. */ .stack_size = 0, .cpu_id = -1, }; - + struct thread_atexit_callback { aws_thread_atexit_fn *callback; void *user_data; struct thread_atexit_callback *next; }; -struct thread_wrapper { - struct aws_allocator *allocator; - void (*func)(void *arg); - void *arg; +struct thread_wrapper { + struct aws_allocator *allocator; + void (*func)(void *arg); + void *arg; struct thread_atexit_callback *atexit; void (*call_once)(void *); void *once_arg; struct aws_thread *thread; bool membind; -}; - +}; + static AWS_THREAD_LOCAL struct thread_wrapper *tl_wrapper = NULL; -static void *thread_fn(void *arg) { - struct thread_wrapper wrapper = *(struct thread_wrapper *)arg; +static void *thread_fn(void *arg) { + struct thread_wrapper wrapper = *(struct thread_wrapper *)arg; struct aws_allocator *allocator = wrapper.allocator; tl_wrapper = &wrapper; if (wrapper.membind && g_set_mempolicy_ptr) { @@ -73,7 +73,7 @@ static void *thread_fn(void *arg) { } } wrapper.func(wrapper.arg); - + struct thread_atexit_callback *exit_callback_data = wrapper.atexit; aws_mem_release(allocator, arg); @@ -89,23 +89,23 @@ 
static void *thread_fn(void *arg) { } tl_wrapper = NULL; - return NULL; -} - -const struct aws_thread_options *aws_default_thread_options(void) { - return &s_default_options; -} - -void aws_thread_clean_up(struct aws_thread *thread) { - if (thread->detach_state == AWS_THREAD_JOINABLE) { - pthread_detach(thread->thread_id); - } -} - + return NULL; +} + +const struct aws_thread_options *aws_default_thread_options(void) { + return &s_default_options; +} + +void aws_thread_clean_up(struct aws_thread *thread) { + if (thread->detach_state == AWS_THREAD_JOINABLE) { + pthread_detach(thread->thread_id); + } +} + static void s_call_once(void) { tl_wrapper->call_once(tl_wrapper->once_arg); -} - +} + void aws_thread_call_once(aws_thread_once *flag, void (*call_once)(void *), void *user_data) { // If this is a non-aws_thread, then gin up a temp thread wrapper struct thread_wrapper temp_wrapper; @@ -122,39 +122,39 @@ void aws_thread_call_once(aws_thread_once *flag, void (*call_once)(void *), void } } -int aws_thread_init(struct aws_thread *thread, struct aws_allocator *allocator) { +int aws_thread_init(struct aws_thread *thread, struct aws_allocator *allocator) { *thread = (struct aws_thread){.allocator = allocator, .detach_state = AWS_THREAD_NOT_CREATED}; - - return AWS_OP_SUCCESS; -} - -int aws_thread_launch( - struct aws_thread *thread, - void (*func)(void *arg), - void *arg, - const struct aws_thread_options *options) { - - pthread_attr_t attributes; - pthread_attr_t *attributes_ptr = NULL; - int attr_return = 0; - int allocation_failed = 0; - - if (options) { - attr_return = pthread_attr_init(&attributes); - - if (attr_return) { - goto cleanup; - } - - attributes_ptr = &attributes; - - if (options->stack_size > PTHREAD_STACK_MIN) { - attr_return = pthread_attr_setstacksize(attributes_ptr, options->stack_size); - - if (attr_return) { - goto cleanup; - } - } + + return AWS_OP_SUCCESS; +} + +int aws_thread_launch( + struct aws_thread *thread, + void (*func)(void *arg), + void 
*arg, + const struct aws_thread_options *options) { + + pthread_attr_t attributes; + pthread_attr_t *attributes_ptr = NULL; + int attr_return = 0; + int allocation_failed = 0; + + if (options) { + attr_return = pthread_attr_init(&attributes); + + if (attr_return) { + goto cleanup; + } + + attributes_ptr = &attributes; + + if (options->stack_size > PTHREAD_STACK_MIN) { + attr_return = pthread_attr_setstacksize(attributes_ptr, options->stack_size); + + if (attr_return) { + goto cleanup; + } + } /* AFAIK you can't set thread affinity on apple platforms, and it doesn't really matter since all memory * NUMA or not is setup in interleave mode. @@ -184,106 +184,106 @@ int aws_thread_launch( } } #endif /* !defined(__MACH__) && !defined(__ANDROID__) */ - } - - struct thread_wrapper *wrapper = + } + + struct thread_wrapper *wrapper = (struct thread_wrapper *)aws_mem_calloc(thread->allocator, 1, sizeof(struct thread_wrapper)); - - if (!wrapper) { - allocation_failed = 1; - goto cleanup; - } - + + if (!wrapper) { + allocation_failed = 1; + goto cleanup; + } + if (options && options->cpu_id >= 0) { wrapper->membind = true; } wrapper->thread = thread; - wrapper->allocator = thread->allocator; - wrapper->func = func; - wrapper->arg = arg; - attr_return = pthread_create(&thread->thread_id, attributes_ptr, thread_fn, (void *)wrapper); - - if (attr_return) { - goto cleanup; - } - - thread->detach_state = AWS_THREAD_JOINABLE; - -cleanup: - if (attributes_ptr) { - pthread_attr_destroy(attributes_ptr); - } - - if (attr_return == EINVAL) { - return aws_raise_error(AWS_ERROR_THREAD_INVALID_SETTINGS); - } - - if (attr_return == EAGAIN) { - return aws_raise_error(AWS_ERROR_THREAD_INSUFFICIENT_RESOURCE); - } - - if (attr_return == EPERM) { - return aws_raise_error(AWS_ERROR_THREAD_NO_PERMISSIONS); - } - - if (allocation_failed || attr_return == ENOMEM) { - return aws_raise_error(AWS_ERROR_OOM); - } - - return AWS_OP_SUCCESS; -} - + wrapper->allocator = thread->allocator; + wrapper->func = 
func; + wrapper->arg = arg; + attr_return = pthread_create(&thread->thread_id, attributes_ptr, thread_fn, (void *)wrapper); + + if (attr_return) { + goto cleanup; + } + + thread->detach_state = AWS_THREAD_JOINABLE; + +cleanup: + if (attributes_ptr) { + pthread_attr_destroy(attributes_ptr); + } + + if (attr_return == EINVAL) { + return aws_raise_error(AWS_ERROR_THREAD_INVALID_SETTINGS); + } + + if (attr_return == EAGAIN) { + return aws_raise_error(AWS_ERROR_THREAD_INSUFFICIENT_RESOURCE); + } + + if (attr_return == EPERM) { + return aws_raise_error(AWS_ERROR_THREAD_NO_PERMISSIONS); + } + + if (allocation_failed || attr_return == ENOMEM) { + return aws_raise_error(AWS_ERROR_OOM); + } + + return AWS_OP_SUCCESS; +} + aws_thread_id_t aws_thread_get_id(struct aws_thread *thread) { return thread->thread_id; -} - -enum aws_thread_detach_state aws_thread_get_detach_state(struct aws_thread *thread) { - return thread->detach_state; -} - -int aws_thread_join(struct aws_thread *thread) { - if (thread->detach_state == AWS_THREAD_JOINABLE) { - int err_no = pthread_join(thread->thread_id, 0); - - if (err_no) { - if (err_no == EINVAL) { - return aws_raise_error(AWS_ERROR_THREAD_NOT_JOINABLE); - } - if (err_no == ESRCH) { - return aws_raise_error(AWS_ERROR_THREAD_NO_SUCH_THREAD_ID); - } - if (err_no == EDEADLK) { - return aws_raise_error(AWS_ERROR_THREAD_DEADLOCK_DETECTED); - } - } - - thread->detach_state = AWS_THREAD_JOIN_COMPLETED; - } - - return AWS_OP_SUCCESS; -} - +} + +enum aws_thread_detach_state aws_thread_get_detach_state(struct aws_thread *thread) { + return thread->detach_state; +} + +int aws_thread_join(struct aws_thread *thread) { + if (thread->detach_state == AWS_THREAD_JOINABLE) { + int err_no = pthread_join(thread->thread_id, 0); + + if (err_no) { + if (err_no == EINVAL) { + return aws_raise_error(AWS_ERROR_THREAD_NOT_JOINABLE); + } + if (err_no == ESRCH) { + return aws_raise_error(AWS_ERROR_THREAD_NO_SUCH_THREAD_ID); + } + if (err_no == EDEADLK) { + return 
aws_raise_error(AWS_ERROR_THREAD_DEADLOCK_DETECTED); + } + } + + thread->detach_state = AWS_THREAD_JOIN_COMPLETED; + } + + return AWS_OP_SUCCESS; +} + aws_thread_id_t aws_thread_current_thread_id(void) { return pthread_self(); -} - +} + bool aws_thread_thread_id_equal(aws_thread_id_t t1, aws_thread_id_t t2) { return pthread_equal(t1, t2) != 0; } -void aws_thread_current_sleep(uint64_t nanos) { - uint64_t nano = 0; - time_t seconds = (time_t)aws_timestamp_convert(nanos, AWS_TIMESTAMP_NANOS, AWS_TIMESTAMP_SECS, &nano); - - struct timespec tm = { - .tv_sec = seconds, - .tv_nsec = (long)nano, - }; - struct timespec output; - - nanosleep(&tm, &output); -} +void aws_thread_current_sleep(uint64_t nanos) { + uint64_t nano = 0; + time_t seconds = (time_t)aws_timestamp_convert(nanos, AWS_TIMESTAMP_NANOS, AWS_TIMESTAMP_SECS, &nano); + + struct timespec tm = { + .tv_sec = seconds, + .tv_nsec = (long)nano, + }; + struct timespec output; + + nanosleep(&tm, &output); +} int aws_thread_current_at_exit(aws_thread_atexit_fn *callback, void *user_data) { if (!tl_wrapper) { diff --git a/contrib/restricted/aws/aws-c-common/source/posix/time.c b/contrib/restricted/aws/aws-c-common/source/posix/time.c index dd49d6b0b6..73d35945c9 100644 --- a/contrib/restricted/aws/aws-c-common/source/posix/time.c +++ b/contrib/restricted/aws/aws-c-common/source/posix/time.c @@ -1,79 +1,79 @@ /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. - */ -#include <aws/common/time.h> - -#if defined(__ANDROID__) && !defined(__LP64__) -/* - * This branch brought to you by the kind folks at google chromium. It's been modified a bit, but - * gotta give credit where it's due.... I'm not a lawyer so I'm just gonna drop their copyright - * notification here to avoid all of that. - */ - -/* - * Copyright 2014 The Chromium Authors. All rights reserved. 
- * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions are - * met: - * - * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * Redistributions in binary form must reproduce the above - * copyright notice, this list of conditions and the following disclaimer - * in the documentation and/or other materials provided with the - * distribution. - * Neither the name of Google Inc. nor the names of its - * contributors may be used to endorse or promote products derived from - * this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - - * From src/base/os_compat_android.cc: - */ -# include <time64.h> - -static const time_t s_time_max = ~(1L << ((sizeof(time_t) * __CHAR_BIT__ - 1))); -static const time_t s_time_min = (1L << ((sizeof(time_t)) * __CHAR_BIT__ - 1)); - -/* 32-bit Android has only timegm64() and not timegm(). 
*/ -time_t aws_timegm(struct tm *const t) { - - time64_t result = timegm64(t); - if (result < s_time_min || result > s_time_max) { - return -1; - } - return (time_t)result; -} - -#else - -# ifndef __APPLE__ -/* glibc.... you disappoint me.. */ -extern time_t timegm(struct tm *); -# endif - -time_t aws_timegm(struct tm *const t) { - return timegm(t); -} - -#endif /* defined(__ANDROID__) && !defined(__LP64__) */ - -void aws_localtime(time_t time, struct tm *t) { - localtime_r(&time, t); -} - -void aws_gmtime(time_t time, struct tm *t) { - gmtime_r(&time, t); -} + */ +#include <aws/common/time.h> + +#if defined(__ANDROID__) && !defined(__LP64__) +/* + * This branch brought to you by the kind folks at google chromium. It's been modified a bit, but + * gotta give credit where it's due.... I'm not a lawyer so I'm just gonna drop their copyright + * notification here to avoid all of that. + */ + +/* + * Copyright 2014 The Chromium Authors. All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions are + * met: + * + * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * Redistributions in binary form must reproduce the above + * copyright notice, this list of conditions and the following disclaimer + * in the documentation and/or other materials provided with the + * distribution. + * Neither the name of Google Inc. nor the names of its + * contributors may be used to endorse or promote products derived from + * this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + + * From src/base/os_compat_android.cc: + */ +# include <time64.h> + +static const time_t s_time_max = ~(1L << ((sizeof(time_t) * __CHAR_BIT__ - 1))); +static const time_t s_time_min = (1L << ((sizeof(time_t)) * __CHAR_BIT__ - 1)); + +/* 32-bit Android has only timegm64() and not timegm(). */ +time_t aws_timegm(struct tm *const t) { + + time64_t result = timegm64(t); + if (result < s_time_min || result > s_time_max) { + return -1; + } + return (time_t)result; +} + +#else + +# ifndef __APPLE__ +/* glibc.... you disappoint me.. */ +extern time_t timegm(struct tm *); +# endif + +time_t aws_timegm(struct tm *const t) { + return timegm(t); +} + +#endif /* defined(__ANDROID__) && !defined(__LP64__) */ + +void aws_localtime(time_t time, struct tm *t) { + localtime_r(&time, t); +} + +void aws_gmtime(time_t time, struct tm *t) { + gmtime_r(&time, t); +} diff --git a/contrib/restricted/aws/aws-c-common/source/priority_queue.c b/contrib/restricted/aws/aws-c-common/source/priority_queue.c index 14ff421d5f..a985a39252 100644 --- a/contrib/restricted/aws/aws-c-common/source/priority_queue.c +++ b/contrib/restricted/aws/aws-c-common/source/priority_queue.c @@ -1,161 +1,161 @@ /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. - */ - -#include <aws/common/priority_queue.h> - -#include <string.h> - -#define PARENT_OF(index) (((index)&1) ? (index) >> 1 : (index) > 1 ? 
((index)-2) >> 1 : 0) -#define LEFT_OF(index) (((index) << 1) + 1) -#define RIGHT_OF(index) (((index) << 1) + 2) - -static void s_swap(struct aws_priority_queue *queue, size_t a, size_t b) { + */ + +#include <aws/common/priority_queue.h> + +#include <string.h> + +#define PARENT_OF(index) (((index)&1) ? (index) >> 1 : (index) > 1 ? ((index)-2) >> 1 : 0) +#define LEFT_OF(index) (((index) << 1) + 1) +#define RIGHT_OF(index) (((index) << 1) + 2) + +static void s_swap(struct aws_priority_queue *queue, size_t a, size_t b) { AWS_PRECONDITION(aws_priority_queue_is_valid(queue)); AWS_PRECONDITION(a < queue->container.length); AWS_PRECONDITION(b < queue->container.length); AWS_PRECONDITION(aws_priority_queue_backpointer_index_valid(queue, a)); AWS_PRECONDITION(aws_priority_queue_backpointer_index_valid(queue, b)); - aws_array_list_swap(&queue->container, a, b); - - /* Invariant: If the backpointer array is initialized, we have enough room for all elements */ + aws_array_list_swap(&queue->container, a, b); + + /* Invariant: If the backpointer array is initialized, we have enough room for all elements */ if (!AWS_IS_ZEROED(queue->backpointers)) { - AWS_ASSERT(queue->backpointers.length > a); - AWS_ASSERT(queue->backpointers.length > b); - - struct aws_priority_queue_node **bp_a = &((struct aws_priority_queue_node **)queue->backpointers.data)[a]; - struct aws_priority_queue_node **bp_b = &((struct aws_priority_queue_node **)queue->backpointers.data)[b]; - - struct aws_priority_queue_node *tmp = *bp_a; - *bp_a = *bp_b; - *bp_b = tmp; - - if (*bp_a) { - (*bp_a)->current_index = a; - } - - if (*bp_b) { - (*bp_b)->current_index = b; - } - } + AWS_ASSERT(queue->backpointers.length > a); + AWS_ASSERT(queue->backpointers.length > b); + + struct aws_priority_queue_node **bp_a = &((struct aws_priority_queue_node **)queue->backpointers.data)[a]; + struct aws_priority_queue_node **bp_b = &((struct aws_priority_queue_node **)queue->backpointers.data)[b]; + + struct aws_priority_queue_node 
*tmp = *bp_a; + *bp_a = *bp_b; + *bp_b = tmp; + + if (*bp_a) { + (*bp_a)->current_index = a; + } + + if (*bp_b) { + (*bp_b)->current_index = b; + } + } AWS_POSTCONDITION(aws_priority_queue_is_valid(queue)); AWS_POSTCONDITION(aws_priority_queue_backpointer_index_valid(queue, a)); AWS_POSTCONDITION(aws_priority_queue_backpointer_index_valid(queue, b)); -} - -/* Precondition: with the exception of the given root element, the container must be - * in heap order */ -static bool s_sift_down(struct aws_priority_queue *queue, size_t root) { +} + +/* Precondition: with the exception of the given root element, the container must be + * in heap order */ +static bool s_sift_down(struct aws_priority_queue *queue, size_t root) { AWS_PRECONDITION(aws_priority_queue_is_valid(queue)); AWS_PRECONDITION(root < queue->container.length); - bool did_move = false; - - size_t len = aws_array_list_length(&queue->container); - - while (LEFT_OF(root) < len) { - size_t left = LEFT_OF(root); - size_t right = RIGHT_OF(root); - size_t first = root; - void *first_item = NULL, *other_item = NULL; - - aws_array_list_get_at_ptr(&queue->container, &first_item, root); - aws_array_list_get_at_ptr(&queue->container, &other_item, left); - - if (queue->pred(first_item, other_item) > 0) { - first = left; - first_item = other_item; - } - - if (right < len) { - aws_array_list_get_at_ptr(&queue->container, &other_item, right); - - /* choose the larger/smaller of the two in case of a max/min heap - * respectively */ - if (queue->pred(first_item, other_item) > 0) { - first = right; - first_item = other_item; - } - } - - if (first != root) { - s_swap(queue, first, root); - did_move = true; - root = first; - } else { - break; - } - } - + bool did_move = false; + + size_t len = aws_array_list_length(&queue->container); + + while (LEFT_OF(root) < len) { + size_t left = LEFT_OF(root); + size_t right = RIGHT_OF(root); + size_t first = root; + void *first_item = NULL, *other_item = NULL; + + 
aws_array_list_get_at_ptr(&queue->container, &first_item, root); + aws_array_list_get_at_ptr(&queue->container, &other_item, left); + + if (queue->pred(first_item, other_item) > 0) { + first = left; + first_item = other_item; + } + + if (right < len) { + aws_array_list_get_at_ptr(&queue->container, &other_item, right); + + /* choose the larger/smaller of the two in case of a max/min heap + * respectively */ + if (queue->pred(first_item, other_item) > 0) { + first = right; + first_item = other_item; + } + } + + if (first != root) { + s_swap(queue, first, root); + did_move = true; + root = first; + } else { + break; + } + } + AWS_POSTCONDITION(aws_priority_queue_is_valid(queue)); - return did_move; -} - -/* Precondition: Elements prior to the specified index must be in heap order. */ -static bool s_sift_up(struct aws_priority_queue *queue, size_t index) { + return did_move; +} + +/* Precondition: Elements prior to the specified index must be in heap order. */ +static bool s_sift_up(struct aws_priority_queue *queue, size_t index) { AWS_PRECONDITION(aws_priority_queue_is_valid(queue)); AWS_PRECONDITION(index < queue->container.length); - bool did_move = false; - - void *parent_item, *child_item; - size_t parent = PARENT_OF(index); - while (index) { - /* - * These get_ats are guaranteed to be successful; if they are not, we have - * serious state corruption, so just abort. - */ - - if (aws_array_list_get_at_ptr(&queue->container, &parent_item, parent) || - aws_array_list_get_at_ptr(&queue->container, &child_item, index)) { - abort(); - } - - if (queue->pred(parent_item, child_item) > 0) { - s_swap(queue, index, parent); - did_move = true; - index = parent; - parent = PARENT_OF(index); - } else { - break; - } - } - + bool did_move = false; + + void *parent_item, *child_item; + size_t parent = PARENT_OF(index); + while (index) { + /* + * These get_ats are guaranteed to be successful; if they are not, we have + * serious state corruption, so just abort. 
+ */ + + if (aws_array_list_get_at_ptr(&queue->container, &parent_item, parent) || + aws_array_list_get_at_ptr(&queue->container, &child_item, index)) { + abort(); + } + + if (queue->pred(parent_item, child_item) > 0) { + s_swap(queue, index, parent); + did_move = true; + index = parent; + parent = PARENT_OF(index); + } else { + break; + } + } + AWS_POSTCONDITION(aws_priority_queue_is_valid(queue)); - return did_move; -} - -/* - * Precondition: With the exception of the given index, the heap condition holds for all elements. - * In particular, the parent of the current index is a predecessor of all children of the current index. - */ -static void s_sift_either(struct aws_priority_queue *queue, size_t index) { + return did_move; +} + +/* + * Precondition: With the exception of the given index, the heap condition holds for all elements. + * In particular, the parent of the current index is a predecessor of all children of the current index. + */ +static void s_sift_either(struct aws_priority_queue *queue, size_t index) { AWS_PRECONDITION(aws_priority_queue_is_valid(queue)); AWS_PRECONDITION(index < queue->container.length); - if (!index || !s_sift_up(queue, index)) { - s_sift_down(queue, index); - } + if (!index || !s_sift_up(queue, index)) { + s_sift_down(queue, index); + } AWS_POSTCONDITION(aws_priority_queue_is_valid(queue)); -} - -int aws_priority_queue_init_dynamic( - struct aws_priority_queue *queue, - struct aws_allocator *alloc, - size_t default_size, - size_t item_size, - aws_priority_queue_compare_fn *pred) { - +} + +int aws_priority_queue_init_dynamic( + struct aws_priority_queue *queue, + struct aws_allocator *alloc, + size_t default_size, + size_t item_size, + aws_priority_queue_compare_fn *pred) { + AWS_FATAL_PRECONDITION(queue != NULL); AWS_FATAL_PRECONDITION(alloc != NULL); AWS_FATAL_PRECONDITION(item_size > 0); - queue->pred = pred; - AWS_ZERO_STRUCT(queue->backpointers); - + queue->pred = pred; + AWS_ZERO_STRUCT(queue->backpointers); + int ret = 
aws_array_list_init_dynamic(&queue->container, alloc, default_size, item_size); if (ret == AWS_OP_SUCCESS) { AWS_POSTCONDITION(aws_priority_queue_is_valid(queue)); @@ -164,28 +164,28 @@ int aws_priority_queue_init_dynamic( AWS_POSTCONDITION(AWS_IS_ZEROED(queue->backpointers)); } return ret; -} - -void aws_priority_queue_init_static( - struct aws_priority_queue *queue, - void *heap, - size_t item_count, - size_t item_size, - aws_priority_queue_compare_fn *pred) { - +} + +void aws_priority_queue_init_static( + struct aws_priority_queue *queue, + void *heap, + size_t item_count, + size_t item_size, + aws_priority_queue_compare_fn *pred) { + AWS_FATAL_PRECONDITION(queue != NULL); AWS_FATAL_PRECONDITION(heap != NULL); AWS_FATAL_PRECONDITION(item_count > 0); AWS_FATAL_PRECONDITION(item_size > 0); - queue->pred = pred; - AWS_ZERO_STRUCT(queue->backpointers); - - aws_array_list_init_static(&queue->container, heap, item_count, item_size); + queue->pred = pred; + AWS_ZERO_STRUCT(queue->backpointers); + + aws_array_list_init_static(&queue->container, heap, item_count, item_size); AWS_POSTCONDITION(aws_priority_queue_is_valid(queue)); -} - +} + bool aws_priority_queue_backpointer_index_valid(const struct aws_priority_queue *const queue, size_t index) { if (AWS_IS_ZEROED(queue->backpointers)) { return true; @@ -243,105 +243,105 @@ bool aws_priority_queue_backpointers_valid(const struct aws_priority_queue *cons return ((backpointer_list_is_valid && backpointer_struct_is_valid) || AWS_IS_ZEROED(queue->backpointers)); } -bool aws_priority_queue_is_valid(const struct aws_priority_queue *const queue) { +bool aws_priority_queue_is_valid(const struct aws_priority_queue *const queue) { /* Pointer validity checks */ - if (!queue) { - return false; - } - bool pred_is_valid = (queue->pred != NULL); - bool container_is_valid = aws_array_list_is_valid(&queue->container); + if (!queue) { + return false; + } + bool pred_is_valid = (queue->pred != NULL); + bool container_is_valid = 
aws_array_list_is_valid(&queue->container); bool backpointers_valid = aws_priority_queue_backpointers_valid(queue); return pred_is_valid && container_is_valid && backpointers_valid; -} - -void aws_priority_queue_clean_up(struct aws_priority_queue *queue) { - aws_array_list_clean_up(&queue->container); +} + +void aws_priority_queue_clean_up(struct aws_priority_queue *queue) { + aws_array_list_clean_up(&queue->container); if (!AWS_IS_ZEROED(queue->backpointers)) { aws_array_list_clean_up(&queue->backpointers); } -} - -int aws_priority_queue_push(struct aws_priority_queue *queue, void *item) { +} + +int aws_priority_queue_push(struct aws_priority_queue *queue, void *item) { AWS_PRECONDITION(aws_priority_queue_is_valid(queue)); AWS_PRECONDITION(item && AWS_MEM_IS_READABLE(item, queue->container.item_size)); int rval = aws_priority_queue_push_ref(queue, item, NULL); AWS_POSTCONDITION(aws_priority_queue_is_valid(queue)); return rval; -} - -int aws_priority_queue_push_ref( - struct aws_priority_queue *queue, - void *item, - struct aws_priority_queue_node *backpointer) { +} + +int aws_priority_queue_push_ref( + struct aws_priority_queue *queue, + void *item, + struct aws_priority_queue_node *backpointer) { AWS_PRECONDITION(aws_priority_queue_is_valid(queue)); AWS_PRECONDITION(item && AWS_MEM_IS_READABLE(item, queue->container.item_size)); - int err = aws_array_list_push_back(&queue->container, item); - if (err) { + int err = aws_array_list_push_back(&queue->container, item); + if (err) { AWS_POSTCONDITION(aws_priority_queue_is_valid(queue)); - return err; - } - size_t index = aws_array_list_length(&queue->container) - 1; - - if (backpointer && !queue->backpointers.alloc) { - if (!queue->container.alloc) { - aws_raise_error(AWS_ERROR_UNSUPPORTED_OPERATION); - goto backpointer_update_failed; - } - - if (aws_array_list_init_dynamic( - &queue->backpointers, queue->container.alloc, index + 1, sizeof(struct aws_priority_queue_node *))) { - goto backpointer_update_failed; - } - - 
/* When we initialize the backpointers array we need to zero out all existing entries */ - memset(queue->backpointers.data, 0, queue->backpointers.current_size); - } - - /* - * Once we have any backpointers, we want to make sure we always have room in the backpointers array - * for all elements; otherwise, sift_down gets complicated if it runs out of memory when sifting an - * element with a backpointer down in the array. - */ + return err; + } + size_t index = aws_array_list_length(&queue->container) - 1; + + if (backpointer && !queue->backpointers.alloc) { + if (!queue->container.alloc) { + aws_raise_error(AWS_ERROR_UNSUPPORTED_OPERATION); + goto backpointer_update_failed; + } + + if (aws_array_list_init_dynamic( + &queue->backpointers, queue->container.alloc, index + 1, sizeof(struct aws_priority_queue_node *))) { + goto backpointer_update_failed; + } + + /* When we initialize the backpointers array we need to zero out all existing entries */ + memset(queue->backpointers.data, 0, queue->backpointers.current_size); + } + + /* + * Once we have any backpointers, we want to make sure we always have room in the backpointers array + * for all elements; otherwise, sift_down gets complicated if it runs out of memory when sifting an + * element with a backpointer down in the array. 
+ */ if (!AWS_IS_ZEROED(queue->backpointers)) { - if (aws_array_list_set_at(&queue->backpointers, &backpointer, index)) { - goto backpointer_update_failed; - } - } - - if (backpointer) { - backpointer->current_index = index; - } - - s_sift_up(queue, aws_array_list_length(&queue->container) - 1); - + if (aws_array_list_set_at(&queue->backpointers, &backpointer, index)) { + goto backpointer_update_failed; + } + } + + if (backpointer) { + backpointer->current_index = index; + } + + s_sift_up(queue, aws_array_list_length(&queue->container) - 1); + AWS_POSTCONDITION(aws_priority_queue_is_valid(queue)); - return AWS_OP_SUCCESS; - -backpointer_update_failed: - /* Failed to initialize or grow the backpointer array, back out the node addition */ - aws_array_list_pop_back(&queue->container); + return AWS_OP_SUCCESS; + +backpointer_update_failed: + /* Failed to initialize or grow the backpointer array, back out the node addition */ + aws_array_list_pop_back(&queue->container); AWS_POSTCONDITION(aws_priority_queue_is_valid(queue)); - return AWS_OP_ERR; -} - -static int s_remove_node(struct aws_priority_queue *queue, void *item, size_t item_index) { + return AWS_OP_ERR; +} + +static int s_remove_node(struct aws_priority_queue *queue, void *item, size_t item_index) { AWS_PRECONDITION(aws_priority_queue_is_valid(queue)); AWS_PRECONDITION(item && AWS_MEM_IS_WRITABLE(item, queue->container.item_size)); - if (aws_array_list_get_at(&queue->container, item, item_index)) { - /* shouldn't happen, but if it does we've already raised an error... */ + if (aws_array_list_get_at(&queue->container, item, item_index)) { + /* shouldn't happen, but if it does we've already raised an error... 
*/ AWS_POSTCONDITION(aws_priority_queue_is_valid(queue)); - return AWS_OP_ERR; - } - - size_t swap_with = aws_array_list_length(&queue->container) - 1; - struct aws_priority_queue_node *backpointer = NULL; - - if (item_index != swap_with) { - s_swap(queue, item_index, swap_with); - } - + return AWS_OP_ERR; + } + + size_t swap_with = aws_array_list_length(&queue->container) - 1; + struct aws_priority_queue_node *backpointer = NULL; + + if (item_index != swap_with) { + s_swap(queue, item_index, swap_with); + } + aws_array_list_pop_back(&queue->container); if (!AWS_IS_ZEROED(queue->backpointers)) { @@ -350,51 +350,51 @@ static int s_remove_node(struct aws_priority_queue *queue, void *item, size_t it backpointer->current_index = SIZE_MAX; } aws_array_list_pop_back(&queue->backpointers); - } - - if (item_index != swap_with) { - s_sift_either(queue, item_index); - } - + } + + if (item_index != swap_with) { + s_sift_either(queue, item_index); + } + AWS_POSTCONDITION(aws_priority_queue_is_valid(queue)); - return AWS_OP_SUCCESS; -} - -int aws_priority_queue_remove( - struct aws_priority_queue *queue, - void *item, - const struct aws_priority_queue_node *node) { + return AWS_OP_SUCCESS; +} + +int aws_priority_queue_remove( + struct aws_priority_queue *queue, + void *item, + const struct aws_priority_queue_node *node) { AWS_PRECONDITION(aws_priority_queue_is_valid(queue)); AWS_PRECONDITION(item && AWS_MEM_IS_WRITABLE(item, queue->container.item_size)); AWS_PRECONDITION(node && AWS_MEM_IS_READABLE(node, sizeof(struct aws_priority_queue_node))); AWS_ERROR_PRECONDITION( node->current_index < aws_array_list_length(&queue->container), AWS_ERROR_PRIORITY_QUEUE_BAD_NODE); AWS_ERROR_PRECONDITION(queue->backpointers.data, AWS_ERROR_PRIORITY_QUEUE_BAD_NODE); - + int rval = s_remove_node(queue, item, node->current_index); AWS_POSTCONDITION(aws_priority_queue_is_valid(queue)); return rval; -} - -int aws_priority_queue_pop(struct aws_priority_queue *queue, void *item) { +} + +int 
aws_priority_queue_pop(struct aws_priority_queue *queue, void *item) { AWS_PRECONDITION(aws_priority_queue_is_valid(queue)); AWS_PRECONDITION(item && AWS_MEM_IS_WRITABLE(item, queue->container.item_size)); AWS_ERROR_PRECONDITION(aws_array_list_length(&queue->container) != 0, AWS_ERROR_PRIORITY_QUEUE_EMPTY); - + int rval = s_remove_node(queue, item, 0); AWS_POSTCONDITION(aws_priority_queue_is_valid(queue)); return rval; -} - -int aws_priority_queue_top(const struct aws_priority_queue *queue, void **item) { +} + +int aws_priority_queue_top(const struct aws_priority_queue *queue, void **item) { AWS_ERROR_PRECONDITION(aws_array_list_length(&queue->container) != 0, AWS_ERROR_PRIORITY_QUEUE_EMPTY); - return aws_array_list_get_at_ptr(&queue->container, item, 0); -} - -size_t aws_priority_queue_size(const struct aws_priority_queue *queue) { - return aws_array_list_length(&queue->container); -} - -size_t aws_priority_queue_capacity(const struct aws_priority_queue *queue) { - return aws_array_list_capacity(&queue->container); -} + return aws_array_list_get_at_ptr(&queue->container, item, 0); +} + +size_t aws_priority_queue_size(const struct aws_priority_queue *queue) { + return aws_array_list_length(&queue->container); +} + +size_t aws_priority_queue_capacity(const struct aws_priority_queue *queue) { + return aws_array_list_capacity(&queue->container); +} diff --git a/contrib/restricted/aws/aws-c-common/source/string.c b/contrib/restricted/aws/aws-c-common/source/string.c index d1abf0dbff..4bd67ca7b2 100644 --- a/contrib/restricted/aws/aws-c-common/source/string.c +++ b/contrib/restricted/aws/aws-c-common/source/string.c @@ -1,41 +1,41 @@ /** * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. * SPDX-License-Identifier: Apache-2.0. 
- */
-#include <aws/common/string.h>
-
-struct aws_string *aws_string_new_from_c_str(struct aws_allocator *allocator, const char *c_str) {
+ */
+#include <aws/common/string.h>
+
+struct aws_string *aws_string_new_from_c_str(struct aws_allocator *allocator, const char *c_str) {
    AWS_PRECONDITION(allocator && c_str);
-    return aws_string_new_from_array(allocator, (const uint8_t *)c_str, strlen(c_str));
-}
-
-struct aws_string *aws_string_new_from_array(struct aws_allocator *allocator, const uint8_t *bytes, size_t len) {
+    return aws_string_new_from_array(allocator, (const uint8_t *)c_str, strlen(c_str));
+}
+
+struct aws_string *aws_string_new_from_array(struct aws_allocator *allocator, const uint8_t *bytes, size_t len) {
    AWS_PRECONDITION(allocator);
    AWS_PRECONDITION(AWS_MEM_IS_READABLE(bytes, len));
-    size_t malloc_size;
-    if (aws_add_size_checked(sizeof(struct aws_string) + 1, len, &malloc_size)) {
-        return NULL;
-    }
-    struct aws_string *str = aws_mem_acquire(allocator, malloc_size);
-    if (!str) {
-        return NULL;
-    }
-
-    /* Fields are declared const, so we need to copy them in like this */
-    *(struct aws_allocator **)(&str->allocator) = allocator;
-    *(size_t *)(&str->len) = len;
+    size_t malloc_size;
+    if (aws_add_size_checked(sizeof(struct aws_string) + 1, len, &malloc_size)) {
+        return NULL;
+    }
+    struct aws_string *str = aws_mem_acquire(allocator, malloc_size);
+    if (!str) {
+        return NULL;
+    }
+
+    /* Fields are declared const, so we need to copy them in like this */
+    *(struct aws_allocator **)(&str->allocator) = allocator;
+    *(size_t *)(&str->len) = len;
    if (len > 0) {
        memcpy((void *)str->bytes, bytes, len);
    }
-    *(uint8_t *)&str->bytes[len] = '\0';
+    *(uint8_t *)&str->bytes[len] = '\0';
    AWS_RETURN_WITH_POSTCONDITION(str, aws_string_is_valid(str));
-}
-
-struct aws_string *aws_string_new_from_string(struct aws_allocator *allocator, const struct aws_string *str) {
+}
+
+struct aws_string *aws_string_new_from_string(struct aws_allocator *allocator, const struct aws_string *str) {
    AWS_PRECONDITION(allocator && aws_string_is_valid(str));
-    return aws_string_new_from_array(allocator, str->bytes, str->len);
-}
-
+    return aws_string_new_from_array(allocator, str->bytes, str->len);
+}
+
struct aws_string *aws_string_new_from_cursor(struct aws_allocator *allocator, const struct aws_byte_cursor *cursor) {
    AWS_PRECONDITION(allocator && aws_byte_cursor_is_valid(cursor));
    return aws_string_new_from_array(allocator, cursor->ptr, cursor->len);
@@ -46,24 +46,24 @@ struct aws_string *aws_string_new_from_buf(struct aws_allocator *allocator, cons
    return aws_string_new_from_array(allocator, buf->buffer, buf->len);
}

-void aws_string_destroy(struct aws_string *str) {
+void aws_string_destroy(struct aws_string *str) {
    AWS_PRECONDITION(!str || aws_string_is_valid(str));
-    if (str && str->allocator) {
-        aws_mem_release(str->allocator, str);
-    }
-}
-
-void aws_string_destroy_secure(struct aws_string *str) {
+    if (str && str->allocator) {
+        aws_mem_release(str->allocator, str);
+    }
+}
+
+void aws_string_destroy_secure(struct aws_string *str) {
    AWS_PRECONDITION(!str || aws_string_is_valid(str));
-    if (str) {
-        aws_secure_zero((void *)aws_string_bytes(str), str->len);
-        if (str->allocator) {
-            aws_mem_release(str->allocator, str);
-        }
-    }
-}
-
-int aws_string_compare(const struct aws_string *a, const struct aws_string *b) {
+    if (str) {
+        aws_secure_zero((void *)aws_string_bytes(str), str->len);
+        if (str->allocator) {
+            aws_mem_release(str->allocator, str);
+        }
+    }
+}
+
+int aws_string_compare(const struct aws_string *a, const struct aws_string *b) {
    AWS_PRECONDITION(!a || aws_string_is_valid(a));
    AWS_PRECONDITION(!b || aws_string_is_valid(b));
    if (a == b) {
@@ -76,26 +76,26 @@ int aws_string_compare(const struct aws_string *a, const struct aws_string *b) {
        return 1;
    }
-    size_t len_a = a->len;
-    size_t len_b = b->len;
-    size_t min_len = len_a < len_b ? len_a : len_b;
-
-    int ret = memcmp(aws_string_bytes(a), aws_string_bytes(b), min_len);
+    size_t len_a = a->len;
+    size_t len_b = b->len;
+    size_t min_len = len_a < len_b ? len_a : len_b;
+
+    int ret = memcmp(aws_string_bytes(a), aws_string_bytes(b), min_len);
    AWS_POSTCONDITION(aws_string_is_valid(a));
    AWS_POSTCONDITION(aws_string_is_valid(b));
-    if (ret) {
-        return ret; /* overlapping characters differ */
-    }
-    if (len_a == len_b) {
-        return 0; /* strings identical */
-    }
-    if (len_a > len_b) {
-        return 1; /* string b is first n characters of string a */
-    }
-    return -1; /* string a is first n characters of string b */
-}
-
-int aws_array_list_comparator_string(const void *a, const void *b) {
+    if (ret) {
+        return ret; /* overlapping characters differ */
+    }
+    if (len_a == len_b) {
+        return 0; /* strings identical */
+    }
+    if (len_a > len_b) {
+        return 1; /* string b is first n characters of string a */
+    }
+    return -1; /* string a is first n characters of string b */
+}
+
+int aws_array_list_comparator_string(const void *a, const void *b) {
    if (a == b) {
        return 0; /* strings identical */
    }
@@ -105,10 +105,10 @@ int aws_array_list_comparator_string(const void *a, const void *b) {
    if (b == NULL) {
        return 1;
    }
-    const struct aws_string *str_a = *(const struct aws_string **)a;
-    const struct aws_string *str_b = *(const struct aws_string **)b;
-    return aws_string_compare(str_a, str_b);
-}
+    const struct aws_string *str_a = *(const struct aws_string **)a;
+    const struct aws_string *str_b = *(const struct aws_string **)b;
+    return aws_string_compare(str_a, str_b);
+}

/**
 * Returns true if bytes of string are the same, false otherwise.
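The string.c hunk above re-adds `aws_string_new_from_array`, which packs the header and the character data into a single allocation and initializes const-qualified fields by writing through casts. A minimal standalone sketch of that pattern, using plain `malloc` instead of `aws_mem_acquire` and a hypothetical `my_string` type (the overflow-checked size computation of `aws_add_size_checked` is elided here):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical miniature of aws_string: length and bytes share one allocation,
 * and the const field is written exactly once, through a cast, right after the
 * allocation -- the same trick the diff uses for str->allocator and str->len. */
struct my_string {
    const size_t len;
    uint8_t bytes[]; /* flexible array member: len bytes of data plus a trailing NUL */
};

static struct my_string *my_string_new(const uint8_t *bytes, size_t len) {
    /* Real code must check this addition for overflow, as aws_add_size_checked does. */
    struct my_string *str = malloc(sizeof(struct my_string) + len + 1);
    if (!str) {
        return NULL;
    }
    *(size_t *)&str->len = len; /* cast away const to initialize the field */
    if (len > 0) {
        memcpy((void *)str->bytes, bytes, len);
    }
    str->bytes[len] = '\0'; /* always NUL-terminated, like aws_string */
    return str;
}
```

Because `bytes[]` is a flexible array member, the whole string is released with a single `free(str)`, which is what lets `aws_string_destroy` be one `aws_mem_release` call.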
diff --git a/contrib/restricted/aws/aws-c-common/source/task_scheduler.c b/contrib/restricted/aws/aws-c-common/source/task_scheduler.c
index 31ce7af1ab..66793d71bd 100644
--- a/contrib/restricted/aws/aws-c-common/source/task_scheduler.c
+++ b/contrib/restricted/aws/aws-c-common/source/task_scheduler.c
@@ -1,16 +1,16 @@
/**
 * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
 * SPDX-License-Identifier: Apache-2.0.
- */
-
-#include <aws/common/task_scheduler.h>
-
+ */
+
+#include <aws/common/task_scheduler.h>
+
#include <aws/common/logging.h>

#include <inttypes.h>

-static const size_t DEFAULT_QUEUE_SIZE = 7;
-
+static const size_t DEFAULT_QUEUE_SIZE = 7;
+
void aws_task_init(struct aws_task *task, aws_task_fn *fn, void *arg, const char *type_tag) {
    AWS_ZERO_STRUCT(*task);
    task->fn = fn;
@@ -43,17 +43,17 @@ void aws_task_run(struct aws_task *task, enum aws_task_status status) {
    task->fn(task, task->arg, status);
}

-static int s_compare_timestamps(const void *a, const void *b) {
-    uint64_t a_time = (*(struct aws_task **)a)->timestamp;
-    uint64_t b_time = (*(struct aws_task **)b)->timestamp;
-    return a_time > b_time; /* min-heap */
-}
-
-static void s_run_all(struct aws_task_scheduler *scheduler, uint64_t current_time, enum aws_task_status status);
-
-int aws_task_scheduler_init(struct aws_task_scheduler *scheduler, struct aws_allocator *alloc) {
-    AWS_ASSERT(alloc);
-
+static int s_compare_timestamps(const void *a, const void *b) {
+    uint64_t a_time = (*(struct aws_task **)a)->timestamp;
+    uint64_t b_time = (*(struct aws_task **)b)->timestamp;
+    return a_time > b_time; /* min-heap */
+}
+
+static void s_run_all(struct aws_task_scheduler *scheduler, uint64_t current_time, enum aws_task_status status);
+
+int aws_task_scheduler_init(struct aws_task_scheduler *scheduler, struct aws_allocator *alloc) {
+    AWS_ASSERT(alloc);
+
    AWS_ZERO_STRUCT(*scheduler);

    if (aws_priority_queue_init_dynamic(
@@ -61,95 +61,95 @@ int aws_task_scheduler_init(struct aws_task_scheduler *scheduler, struct aws_all
        return AWS_OP_ERR;
    };

-    scheduler->alloc = alloc;
-    aws_linked_list_init(&scheduler->timed_list);
-    aws_linked_list_init(&scheduler->asap_list);
+    scheduler->alloc = alloc;
+    aws_linked_list_init(&scheduler->timed_list);
+    aws_linked_list_init(&scheduler->asap_list);

    AWS_POSTCONDITION(aws_task_scheduler_is_valid(scheduler));
    return AWS_OP_SUCCESS;
-}
-
-void aws_task_scheduler_clean_up(struct aws_task_scheduler *scheduler) {
-    AWS_ASSERT(scheduler);
-
+}
+
+void aws_task_scheduler_clean_up(struct aws_task_scheduler *scheduler) {
+    AWS_ASSERT(scheduler);
+
    if (aws_task_scheduler_is_valid(scheduler)) {
        /* Execute all remaining tasks as CANCELED.
         * Do this in a loop so that tasks scheduled by other tasks are executed */
        while (aws_task_scheduler_has_tasks(scheduler, NULL)) {
            s_run_all(scheduler, UINT64_MAX, AWS_TASK_STATUS_CANCELED);
        }
-    }
-
-    aws_priority_queue_clean_up(&scheduler->timed_queue);
+    }
+
+    aws_priority_queue_clean_up(&scheduler->timed_queue);
    AWS_ZERO_STRUCT(*scheduler);
-}
-
+}
+
bool aws_task_scheduler_is_valid(const struct aws_task_scheduler *scheduler) {
    return scheduler && scheduler->alloc && aws_priority_queue_is_valid(&scheduler->timed_queue) &&
           aws_linked_list_is_valid(&scheduler->asap_list) && aws_linked_list_is_valid(&scheduler->timed_list);
}

-bool aws_task_scheduler_has_tasks(const struct aws_task_scheduler *scheduler, uint64_t *next_task_time) {
-    AWS_ASSERT(scheduler);
-
-    uint64_t timestamp = UINT64_MAX;
-    bool has_tasks = false;
-
-    if (!aws_linked_list_empty(&scheduler->asap_list)) {
-        timestamp = 0;
-        has_tasks = true;
-
-    } else {
-        /* Check whether timed_list or timed_queue has the earlier task */
-        if (AWS_UNLIKELY(!aws_linked_list_empty(&scheduler->timed_list))) {
-            struct aws_linked_list_node *node = aws_linked_list_front(&scheduler->timed_list);
-            struct aws_task *task = AWS_CONTAINER_OF(node, struct aws_task, node);
-            timestamp = task->timestamp;
-            has_tasks = true;
-        }
-
-        struct aws_task **task_ptrptr = NULL;
-        if (aws_priority_queue_top(&scheduler->timed_queue, (void **)&task_ptrptr) == AWS_OP_SUCCESS) {
-            if ((*task_ptrptr)->timestamp < timestamp) {
-                timestamp = (*task_ptrptr)->timestamp;
-            }
-            has_tasks = true;
-        }
-    }
-
-    if (next_task_time) {
-        *next_task_time = timestamp;
-    }
-    return has_tasks;
-}
-
-void aws_task_scheduler_schedule_now(struct aws_task_scheduler *scheduler, struct aws_task *task) {
-    AWS_ASSERT(scheduler);
-    AWS_ASSERT(task);
-    AWS_ASSERT(task->fn);
-
+bool aws_task_scheduler_has_tasks(const struct aws_task_scheduler *scheduler, uint64_t *next_task_time) {
+    AWS_ASSERT(scheduler);
+
+    uint64_t timestamp = UINT64_MAX;
+    bool has_tasks = false;
+
+    if (!aws_linked_list_empty(&scheduler->asap_list)) {
+        timestamp = 0;
+        has_tasks = true;
+
+    } else {
+        /* Check whether timed_list or timed_queue has the earlier task */
+        if (AWS_UNLIKELY(!aws_linked_list_empty(&scheduler->timed_list))) {
+            struct aws_linked_list_node *node = aws_linked_list_front(&scheduler->timed_list);
+            struct aws_task *task = AWS_CONTAINER_OF(node, struct aws_task, node);
+            timestamp = task->timestamp;
+            has_tasks = true;
+        }
+
+        struct aws_task **task_ptrptr = NULL;
+        if (aws_priority_queue_top(&scheduler->timed_queue, (void **)&task_ptrptr) == AWS_OP_SUCCESS) {
+            if ((*task_ptrptr)->timestamp < timestamp) {
+                timestamp = (*task_ptrptr)->timestamp;
+            }
+            has_tasks = true;
+        }
+    }
+
+    if (next_task_time) {
+        *next_task_time = timestamp;
+    }
+    return has_tasks;
+}
+
+void aws_task_scheduler_schedule_now(struct aws_task_scheduler *scheduler, struct aws_task *task) {
+    AWS_ASSERT(scheduler);
+    AWS_ASSERT(task);
+    AWS_ASSERT(task->fn);
+
    AWS_LOGF_DEBUG(
        AWS_LS_COMMON_TASK_SCHEDULER,
        "id=%p: Scheduling %s task for immediate execution",
        (void *)task,
        task->type_tag);

-    task->priority_queue_node.current_index = SIZE_MAX;
-    aws_linked_list_node_reset(&task->node);
-    task->timestamp = 0;
-
-    aws_linked_list_push_back(&scheduler->asap_list, &task->node);
-}
-
-void aws_task_scheduler_schedule_future(
-    struct aws_task_scheduler *scheduler,
-    struct aws_task *task,
-    uint64_t time_to_run) {
-
-    AWS_ASSERT(scheduler);
-    AWS_ASSERT(task);
-    AWS_ASSERT(task->fn);
-
+    task->priority_queue_node.current_index = SIZE_MAX;
+    aws_linked_list_node_reset(&task->node);
+    task->timestamp = 0;
+
+    aws_linked_list_push_back(&scheduler->asap_list, &task->node);
+}
+
+void aws_task_scheduler_schedule_future(
+    struct aws_task_scheduler *scheduler,
+    struct aws_task *task,
+    uint64_t time_to_run) {
+
+    AWS_ASSERT(scheduler);
+    AWS_ASSERT(task);
+    AWS_ASSERT(task->fn);
+
    AWS_LOGF_DEBUG(
        AWS_LS_COMMON_TASK_SCHEDULER,
        "id=%p: Scheduling %s task for future execution at time %" PRIu64,
@@ -157,108 +157,108 @@ void aws_task_scheduler_schedule_future(
        task->type_tag,
        time_to_run);

-    task->timestamp = time_to_run;
-
-    task->priority_queue_node.current_index = SIZE_MAX;
-    aws_linked_list_node_reset(&task->node);
-    int err = aws_priority_queue_push_ref(&scheduler->timed_queue, &task, &task->priority_queue_node);
-    if (AWS_UNLIKELY(err)) {
-        /* In the (very unlikely) case that we can't push into the timed_queue,
-         * perform a sorted insertion into timed_list. */
-        struct aws_linked_list_node *node_i;
-        for (node_i = aws_linked_list_begin(&scheduler->timed_list);
-             node_i != aws_linked_list_end(&scheduler->timed_list);
-             node_i = aws_linked_list_next(node_i)) {
-
-            struct aws_task *task_i = AWS_CONTAINER_OF(node_i, struct aws_task, node);
-            if (task_i->timestamp > time_to_run) {
-                break;
-            }
-        }
-        aws_linked_list_insert_before(node_i, &task->node);
-    }
-}
-
-void aws_task_scheduler_run_all(struct aws_task_scheduler *scheduler, uint64_t current_time) {
-    AWS_ASSERT(scheduler);
-
-    s_run_all(scheduler, current_time, AWS_TASK_STATUS_RUN_READY);
-}
-
-static void s_run_all(struct aws_task_scheduler *scheduler, uint64_t current_time, enum aws_task_status status) {
-
-    /* Move scheduled tasks to running_list before executing.
-     * This gives us the desired behavior that: if executing a task results in another task being scheduled,
-     * that new task is not executed until the next time run() is invoked. */
-    struct aws_linked_list running_list;
-    aws_linked_list_init(&running_list);
-
-    /* First move everything from asap_list */
-    aws_linked_list_swap_contents(&running_list, &scheduler->asap_list);
-
-    /* Next move tasks from timed_queue and timed_list, based on whichever's next-task is sooner.
-     * It's very unlikely that any tasks are in timed_list, so once it has no more valid tasks,
-     * break out of this complex loop in favor of a simpler one. */
-    while (AWS_UNLIKELY(!aws_linked_list_empty(&scheduler->timed_list))) {
-
-        struct aws_linked_list_node *timed_list_node = aws_linked_list_begin(&scheduler->timed_list);
-        struct aws_task *timed_list_task = AWS_CONTAINER_OF(timed_list_node, struct aws_task, node);
-        if (timed_list_task->timestamp > current_time) {
-            /* timed_list is out of valid tasks, break out of complex loop */
-            break;
-        }
-
-        /* Check if timed_queue has a task which is sooner */
-        struct aws_task **timed_queue_task_ptrptr = NULL;
-        if (aws_priority_queue_top(&scheduler->timed_queue, (void **)&timed_queue_task_ptrptr) == AWS_OP_SUCCESS) {
-            if ((*timed_queue_task_ptrptr)->timestamp <= current_time) {
-                if ((*timed_queue_task_ptrptr)->timestamp < timed_list_task->timestamp) {
-                    /* Take task from timed_queue */
-                    struct aws_task *timed_queue_task;
-                    aws_priority_queue_pop(&scheduler->timed_queue, &timed_queue_task);
-                    aws_linked_list_push_back(&running_list, &timed_queue_task->node);
-                    continue;
-                }
-            }
-        }
-
-        /* Take task from timed_list */
-        aws_linked_list_pop_front(&scheduler->timed_list);
-        aws_linked_list_push_back(&running_list, &timed_list_task->node);
-    }
-
-    /* Simpler loop that moves remaining valid tasks from timed_queue */
-    struct aws_task **timed_queue_task_ptrptr = NULL;
-    while (aws_priority_queue_top(&scheduler->timed_queue, (void **)&timed_queue_task_ptrptr) == AWS_OP_SUCCESS) {
-        if ((*timed_queue_task_ptrptr)->timestamp > current_time) {
-            break;
-        }
-
-        struct aws_task *next_timed_task;
-        aws_priority_queue_pop(&scheduler->timed_queue, &next_timed_task);
-        aws_linked_list_push_back(&running_list, &next_timed_task->node);
-    }
-
-    /* Run tasks */
-    while (!aws_linked_list_empty(&running_list)) {
-        struct aws_linked_list_node *task_node = aws_linked_list_pop_front(&running_list);
-        struct aws_task *task = AWS_CONTAINER_OF(task_node, struct aws_task, node);
-        aws_task_run(task, status);
-    }
-}
-
-void aws_task_scheduler_cancel_task(struct aws_task_scheduler *scheduler, struct aws_task *task) {
-    /* attempt the linked lists first since those will be faster access and more likely to occur
-     * anyways.
-     */
-    if (task->node.next) {
-        aws_linked_list_remove(&task->node);
-    } else {
-        aws_priority_queue_remove(&scheduler->timed_queue, &task, &task->priority_queue_node);
-    }
+    task->timestamp = time_to_run;
+
+    task->priority_queue_node.current_index = SIZE_MAX;
+    aws_linked_list_node_reset(&task->node);
+    int err = aws_priority_queue_push_ref(&scheduler->timed_queue, &task, &task->priority_queue_node);
+    if (AWS_UNLIKELY(err)) {
+        /* In the (very unlikely) case that we can't push into the timed_queue,
+         * perform a sorted insertion into timed_list. */
+        struct aws_linked_list_node *node_i;
+        for (node_i = aws_linked_list_begin(&scheduler->timed_list);
+             node_i != aws_linked_list_end(&scheduler->timed_list);
+             node_i = aws_linked_list_next(node_i)) {
+
+            struct aws_task *task_i = AWS_CONTAINER_OF(node_i, struct aws_task, node);
+            if (task_i->timestamp > time_to_run) {
+                break;
+            }
+        }
+        aws_linked_list_insert_before(node_i, &task->node);
+    }
+}
+
+void aws_task_scheduler_run_all(struct aws_task_scheduler *scheduler, uint64_t current_time) {
+    AWS_ASSERT(scheduler);
+
+    s_run_all(scheduler, current_time, AWS_TASK_STATUS_RUN_READY);
+}
+
+static void s_run_all(struct aws_task_scheduler *scheduler, uint64_t current_time, enum aws_task_status status) {
+
+    /* Move scheduled tasks to running_list before executing.
+     * This gives us the desired behavior that: if executing a task results in another task being scheduled,
+     * that new task is not executed until the next time run() is invoked. */
+    struct aws_linked_list running_list;
+    aws_linked_list_init(&running_list);
+
+    /* First move everything from asap_list */
+    aws_linked_list_swap_contents(&running_list, &scheduler->asap_list);
+
+    /* Next move tasks from timed_queue and timed_list, based on whichever's next-task is sooner.
+     * It's very unlikely that any tasks are in timed_list, so once it has no more valid tasks,
+     * break out of this complex loop in favor of a simpler one. */
+    while (AWS_UNLIKELY(!aws_linked_list_empty(&scheduler->timed_list))) {
+
+        struct aws_linked_list_node *timed_list_node = aws_linked_list_begin(&scheduler->timed_list);
+        struct aws_task *timed_list_task = AWS_CONTAINER_OF(timed_list_node, struct aws_task, node);
+        if (timed_list_task->timestamp > current_time) {
+            /* timed_list is out of valid tasks, break out of complex loop */
+            break;
+        }
+
+        /* Check if timed_queue has a task which is sooner */
+        struct aws_task **timed_queue_task_ptrptr = NULL;
+        if (aws_priority_queue_top(&scheduler->timed_queue, (void **)&timed_queue_task_ptrptr) == AWS_OP_SUCCESS) {
+            if ((*timed_queue_task_ptrptr)->timestamp <= current_time) {
+                if ((*timed_queue_task_ptrptr)->timestamp < timed_list_task->timestamp) {
+                    /* Take task from timed_queue */
+                    struct aws_task *timed_queue_task;
+                    aws_priority_queue_pop(&scheduler->timed_queue, &timed_queue_task);
+                    aws_linked_list_push_back(&running_list, &timed_queue_task->node);
+                    continue;
+                }
+            }
+        }
+
+        /* Take task from timed_list */
+        aws_linked_list_pop_front(&scheduler->timed_list);
+        aws_linked_list_push_back(&running_list, &timed_list_task->node);
+    }
+
+    /* Simpler loop that moves remaining valid tasks from timed_queue */
+    struct aws_task **timed_queue_task_ptrptr = NULL;
+    while (aws_priority_queue_top(&scheduler->timed_queue, (void **)&timed_queue_task_ptrptr) == AWS_OP_SUCCESS) {
+        if ((*timed_queue_task_ptrptr)->timestamp > current_time) {
+            break;
+        }
+
+        struct aws_task *next_timed_task;
+        aws_priority_queue_pop(&scheduler->timed_queue, &next_timed_task);
+        aws_linked_list_push_back(&running_list, &next_timed_task->node);
+    }
+
+    /* Run tasks */
+    while (!aws_linked_list_empty(&running_list)) {
+        struct aws_linked_list_node *task_node = aws_linked_list_pop_front(&running_list);
+        struct aws_task *task = AWS_CONTAINER_OF(task_node, struct aws_task, node);
+        aws_task_run(task, status);
+    }
+}
+
+void aws_task_scheduler_cancel_task(struct aws_task_scheduler *scheduler, struct aws_task *task) {
+    /* attempt the linked lists first since those will be faster access and more likely to occur
+     * anyways.
+     */
+    if (task->node.next) {
+        aws_linked_list_remove(&task->node);
+    } else {
+        aws_priority_queue_remove(&scheduler->timed_queue, &task, &task->priority_queue_node);
+    }

    /*
     * No need to log cancellation specially; it will get logged during the run call with the canceled status
     */
-    aws_task_run(task, AWS_TASK_STATUS_CANCELED);
-}
+    aws_task_run(task, AWS_TASK_STATUS_CANCELED);
+}
diff --git a/contrib/restricted/aws/aws-c-common/source/uuid.c b/contrib/restricted/aws/aws-c-common/source/uuid.c
index a962abd653..3cf681ed62 100644
--- a/contrib/restricted/aws/aws-c-common/source/uuid.c
+++ b/contrib/restricted/aws/aws-c-common/source/uuid.c
@@ -1,99 +1,99 @@
/**
 * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
 * SPDX-License-Identifier: Apache-2.0.
- */
-#include <aws/common/uuid.h>
-
-#include <aws/common/byte_buf.h>
-#include <aws/common/device_random.h>
-
-#include <inttypes.h>
-#include <stdio.h>
-
-#define HEX_CHAR_FMT "%02" SCNx8
-
-#define UUID_FORMAT \
-    HEX_CHAR_FMT HEX_CHAR_FMT HEX_CHAR_FMT HEX_CHAR_FMT \
-    "-" HEX_CHAR_FMT HEX_CHAR_FMT "-" HEX_CHAR_FMT HEX_CHAR_FMT "-" HEX_CHAR_FMT HEX_CHAR_FMT \
-    "-" HEX_CHAR_FMT HEX_CHAR_FMT HEX_CHAR_FMT HEX_CHAR_FMT HEX_CHAR_FMT HEX_CHAR_FMT
-
-#include <stdio.h>
-
-#ifdef _MSC_VER
-/* disables warning non const declared initializers for Microsoft compilers */
-#    pragma warning(disable : 4204)
-#    pragma warning(disable : 4706)
-/* sprintf warning (we already check the bounds in this case). */
-#    pragma warning(disable : 4996)
-#endif
-
-int aws_uuid_init(struct aws_uuid *uuid) {
-    struct aws_byte_buf buf = aws_byte_buf_from_empty_array(uuid->uuid_data, sizeof(uuid->uuid_data));
-
-    return aws_device_random_buffer(&buf);
-}
-
-int aws_uuid_init_from_str(struct aws_uuid *uuid, const struct aws_byte_cursor *uuid_str) {
+ */
+#include <aws/common/uuid.h>
+
+#include <aws/common/byte_buf.h>
+#include <aws/common/device_random.h>
+
+#include <inttypes.h>
+#include <stdio.h>
+
+#define HEX_CHAR_FMT "%02" SCNx8
+
+#define UUID_FORMAT \
+    HEX_CHAR_FMT HEX_CHAR_FMT HEX_CHAR_FMT HEX_CHAR_FMT \
+    "-" HEX_CHAR_FMT HEX_CHAR_FMT "-" HEX_CHAR_FMT HEX_CHAR_FMT "-" HEX_CHAR_FMT HEX_CHAR_FMT \
+    "-" HEX_CHAR_FMT HEX_CHAR_FMT HEX_CHAR_FMT HEX_CHAR_FMT HEX_CHAR_FMT HEX_CHAR_FMT
+
+#include <stdio.h>
+
+#ifdef _MSC_VER
+/* disables warning non const declared initializers for Microsoft compilers */
+#    pragma warning(disable : 4204)
+#    pragma warning(disable : 4706)
+/* sprintf warning (we already check the bounds in this case). */
+#    pragma warning(disable : 4996)
+#endif
+
+int aws_uuid_init(struct aws_uuid *uuid) {
+    struct aws_byte_buf buf = aws_byte_buf_from_empty_array(uuid->uuid_data, sizeof(uuid->uuid_data));
+
+    return aws_device_random_buffer(&buf);
+}
+
+int aws_uuid_init_from_str(struct aws_uuid *uuid, const struct aws_byte_cursor *uuid_str) {
    AWS_ERROR_PRECONDITION(uuid_str->len >= AWS_UUID_STR_LEN - 1, AWS_ERROR_INVALID_BUFFER_SIZE);
-
-    char cpy[AWS_UUID_STR_LEN] = {0};
-    memcpy(cpy, uuid_str->ptr, AWS_UUID_STR_LEN - 1);
-
-    AWS_ZERO_STRUCT(*uuid);
-
-    if (16 != sscanf(
-                  cpy,
-                  UUID_FORMAT,
-                  &uuid->uuid_data[0],
-                  &uuid->uuid_data[1],
-                  &uuid->uuid_data[2],
-                  &uuid->uuid_data[3],
-                  &uuid->uuid_data[4],
-                  &uuid->uuid_data[5],
-                  &uuid->uuid_data[6],
-                  &uuid->uuid_data[7],
-                  &uuid->uuid_data[8],
-                  &uuid->uuid_data[9],
-                  &uuid->uuid_data[10],
-                  &uuid->uuid_data[11],
-                  &uuid->uuid_data[12],
-                  &uuid->uuid_data[13],
-                  &uuid->uuid_data[14],
-                  &uuid->uuid_data[15])) {
-        return aws_raise_error(AWS_ERROR_MALFORMED_INPUT_STRING);
-    }
-
-    return AWS_OP_SUCCESS;
-}
-
-int aws_uuid_to_str(const struct aws_uuid *uuid, struct aws_byte_buf *output) {
+
+    char cpy[AWS_UUID_STR_LEN] = {0};
+    memcpy(cpy, uuid_str->ptr, AWS_UUID_STR_LEN - 1);
+
+    AWS_ZERO_STRUCT(*uuid);
+
+    if (16 != sscanf(
+                  cpy,
+                  UUID_FORMAT,
+                  &uuid->uuid_data[0],
+                  &uuid->uuid_data[1],
+                  &uuid->uuid_data[2],
+                  &uuid->uuid_data[3],
+                  &uuid->uuid_data[4],
+                  &uuid->uuid_data[5],
+                  &uuid->uuid_data[6],
+                  &uuid->uuid_data[7],
+                  &uuid->uuid_data[8],
+                  &uuid->uuid_data[9],
+                  &uuid->uuid_data[10],
+                  &uuid->uuid_data[11],
+                  &uuid->uuid_data[12],
+                  &uuid->uuid_data[13],
+                  &uuid->uuid_data[14],
+                  &uuid->uuid_data[15])) {
+        return aws_raise_error(AWS_ERROR_MALFORMED_INPUT_STRING);
+    }
+
+    return AWS_OP_SUCCESS;
+}
+
+int aws_uuid_to_str(const struct aws_uuid *uuid, struct aws_byte_buf *output) {
    AWS_ERROR_PRECONDITION(output->capacity - output->len >= AWS_UUID_STR_LEN, AWS_ERROR_SHORT_BUFFER);
-
-    sprintf(
-        (char *)(output->buffer + output->len),
-        UUID_FORMAT,
-        uuid->uuid_data[0],
-        uuid->uuid_data[1],
-        uuid->uuid_data[2],
-        uuid->uuid_data[3],
-        uuid->uuid_data[4],
-        uuid->uuid_data[5],
-        uuid->uuid_data[6],
-        uuid->uuid_data[7],
-        uuid->uuid_data[8],
-        uuid->uuid_data[9],
-        uuid->uuid_data[10],
-        uuid->uuid_data[11],
-        uuid->uuid_data[12],
-        uuid->uuid_data[13],
-        uuid->uuid_data[14],
-        uuid->uuid_data[15]);
-
-    output->len += AWS_UUID_STR_LEN - 1;
-
-    return AWS_OP_SUCCESS;
-}
-
-bool aws_uuid_equals(const struct aws_uuid *a, const struct aws_uuid *b) {
-    return 0 == memcmp(a->uuid_data, b->uuid_data, sizeof(a->uuid_data));
-}
+
+    sprintf(
+        (char *)(output->buffer + output->len),
+        UUID_FORMAT,
+        uuid->uuid_data[0],
+        uuid->uuid_data[1],
+        uuid->uuid_data[2],
+        uuid->uuid_data[3],
+        uuid->uuid_data[4],
+        uuid->uuid_data[5],
+        uuid->uuid_data[6],
+        uuid->uuid_data[7],
+        uuid->uuid_data[8],
+        uuid->uuid_data[9],
+        uuid->uuid_data[10],
+        uuid->uuid_data[11],
+        uuid->uuid_data[12],
+        uuid->uuid_data[13],
+        uuid->uuid_data[14],
+        uuid->uuid_data[15]);
+
+    output->len += AWS_UUID_STR_LEN - 1;
+
+    return AWS_OP_SUCCESS;
+}
+
+bool aws_uuid_equals(const struct aws_uuid *a, const struct aws_uuid *b) {
+    return 0 == memcmp(a->uuid_data, b->uuid_data, sizeof(a->uuid_data));
+}
diff --git a/contrib/restricted/aws/aws-c-common/ya.make b/contrib/restricted/aws/aws-c-common/ya.make
index e2f9e4113b..ecdd568d4b 100644
--- a/contrib/restricted/aws/aws-c-common/ya.make
+++ b/contrib/restricted/aws/aws-c-common/ya.make
@@ -1,13 +1,13 @@
# Generated by devtools/yamaker from nixpkgs 980c4c3c2f664ccc5002f7fd6e08059cf1f00e75.
-
-LIBRARY()
-
+
+LIBRARY()
+
OWNER(g:cpp-contrib)
-
+
VERSION(0.4.63)
-
+
ORIGINAL_SOURCE(https://github.com/awslabs/aws-c-common/archive/v0.4.63.tar.gz)
-
+
LICENSE(
    Apache-2.0 AND
    BSD-3-Clause AND
@@ -16,15 +16,15 @@ LICENSE(

LICENSE_TEXTS(.yandex_meta/licenses.list.txt)

-ADDINCL(
+ADDINCL(
    GLOBAL contrib/restricted/aws/aws-c-common/generated/include
    GLOBAL contrib/restricted/aws/aws-c-common/include
-)
-
-NO_COMPILER_WARNINGS()
-
+)
+
+NO_COMPILER_WARNINGS()
+
NO_RUNTIME()
-
+
IF (OS_DARWIN)
    LDFLAGS(
        -framework
@@ -32,54 +32,54 @@ IF (OS_DARWIN)
    )
ENDIF()

-SRCS(
+SRCS(
    source/allocator.c
    source/allocator_sba.c
-    source/array_list.c
-    source/assert.c
-    source/byte_buf.c
+    source/array_list.c
+    source/assert.c
+    source/byte_buf.c
    source/cache.c
-    source/codegen.c
-    source/command_line_parser.c
-    source/common.c
-    source/condition_variable.c
-    source/date_time.c
-    source/device_random.c
-    source/encoding.c
-    source/error.c
+    source/codegen.c
+    source/command_line_parser.c
+    source/common.c
+    source/condition_variable.c
+    source/date_time.c
+    source/device_random.c
+    source/encoding.c
+    source/error.c
    source/fifo_cache.c
-    source/hash_table.c
+    source/hash_table.c
    source/lifo_cache.c
    source/linked_hash_table.c
    source/log_channel.c
    source/log_formatter.c
    source/log_writer.c
    source/logging.c
-    source/lru_cache.c
+    source/lru_cache.c
    source/math.c
    source/memtrace.c
-    source/posix/clock.c
-    source/posix/condition_variable.c
-    source/posix/device_random.c
-    source/posix/environment.c
-    source/posix/mutex.c
+    source/posix/clock.c
+    source/posix/condition_variable.c
+    source/posix/device_random.c
+    source/posix/environment.c
+    source/posix/mutex.c
    source/posix/process.c
-    source/posix/rw_lock.c
-    source/posix/system_info.c
-    source/posix/thread.c
-    source/posix/time.c
-    source/priority_queue.c
+    source/posix/rw_lock.c
+    source/posix/system_info.c
+    source/posix/thread.c
+    source/posix/time.c
+    source/priority_queue.c
    source/process_common.c
    source/ref_count.c
    source/resource_name.c
    source/ring_buffer.c
    source/statistics.c
-    source/string.c
-    source/task_scheduler.c
-    source/uuid.c
+    source/string.c
+    source/task_scheduler.c
+    source/uuid.c
    source/xml_parser.c
-)
-
+)
+
IF (ARCH_ARM)
    SRCS(
        source/arch/arm/asm/cpuid.c
@@ -91,4 +91,4 @@ ELSEIF (ARCH_X86_64)
    )
ENDIF()

-END()
+END()
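The uuid.c hunk in this commit builds both its sscanf and sprintf format strings from a single `HEX_CHAR_FMT` macro (`"%02" SCNx8`, which expands to `"%02hhx"`), so the parse and format directions can never drift apart. A standalone sketch of that round-trip follows; `parse_uuid` and `format_uuid` are illustrative names, not part of the aws-c-common API:

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* One two-hex-digit conversion per byte. SCNx8 expands to "hhx", which works
 * for both scanf (reads a byte into uint8_t) and printf (prints a byte). */
#define HEX_CHAR_FMT "%02" SCNx8

/* 8-4-4-4-12 hex digit grouping, same shape as UUID_FORMAT in uuid.c */
#define UUID_FORMAT                                                            \
    HEX_CHAR_FMT HEX_CHAR_FMT HEX_CHAR_FMT HEX_CHAR_FMT                        \
    "-" HEX_CHAR_FMT HEX_CHAR_FMT "-" HEX_CHAR_FMT HEX_CHAR_FMT               \
    "-" HEX_CHAR_FMT HEX_CHAR_FMT                                              \
    "-" HEX_CHAR_FMT HEX_CHAR_FMT HEX_CHAR_FMT HEX_CHAR_FMT HEX_CHAR_FMT HEX_CHAR_FMT

/* Illustrative helper: returns 1 on success, 0 on malformed input. */
static int parse_uuid(const char *s, uint8_t out[16]) {
    return 16 == sscanf(
                     s,
                     UUID_FORMAT,
                     &out[0], &out[1], &out[2], &out[3],
                     &out[4], &out[5], &out[6], &out[7],
                     &out[8], &out[9], &out[10], &out[11],
                     &out[12], &out[13], &out[14], &out[15]);
}

/* Illustrative helper: writes 36 characters plus a NUL into out[37]. */
static void format_uuid(const uint8_t in[16], char out[37]) {
    sprintf(
        out,
        UUID_FORMAT,
        in[0], in[1], in[2], in[3],
        in[4], in[5], in[6], in[7],
        in[8], in[9], in[10], in[11],
        in[12], in[13], in[14], in[15]);
}
```

This mirrors `aws_uuid_init_from_str` and `aws_uuid_to_str`; the real functions additionally verify buffer sizes with `AWS_ERROR_PRECONDITION` before touching memory.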